zserge.com

peterbloem.nl
An in-depth overview of the Transformer architecture, its evolution, and its applications. It begins by introducing the Transformer as a foundational model for sequence modeling, highlighting its ability to handle long-range dependencies through self-attention mechanisms. It then explores various extensions and improvements, such as positional encodings, models like Transformer-XL and Sparse Transformers that address the quadratic complexity of attention, and techniques like gradient checkpointing and half-precision training for scaling up model size. It also discusses the generality of the Transformer, its potential in multi-modal learning, and its future implications across d...

sebastianraschka.com
I'm Sebastian: a machine learning & AI researcher, programmer, and author. As Staff Research Engineer at Lightning AI, I focus on the intersection of AI research, software development, and large language models (LLMs).

jaykmody.com
Implementing a GPT model from scratch in NumPy.

sigmoidprime.com
An exploration of Transformer-XL, a modified Transformer optimized for longer context length.