transformer-circuits.pub

peterbloem.nl
[AI summary] The text provides an in-depth overview of the Transformer architecture, its evolution, and its applications. It begins by introducing the Transformer as a foundational model for sequence modeling, highlighting its ability to handle long-range dependencies through self-attention mechanisms. The text then explores various extensions and improvements, such as the introduction of positional encodings, the development of models like Transformer-XL and Sparse Transformers to address the quadratic complexity of attention, and the use of techniques like gradient checkpointing and half-precision training to scale up model size. It also discusses the generality of the Transformer, its potential in multi-modal learning, and its future implications across d...

matthewmcateer.me
Important mathematical prerequisites for getting into Machine Learning, Deep Learning, or any other related field.

haifengl.wordpress.com
Generative artificial intelligence (GenAI), especially ChatGPT, has captured everyone's attention. Transformer-based large language models (LLMs), trained on vast quantities of unlabeled data at scale, demonstrate the ability to generalize to many different tasks. To understand why LLMs are so powerful, we will dive deep into how they work in this post. LLM Evolutionary Tree...

tcode2k16.github.io
A random blog about cybersecurity and programming.