sebastianraschka.com

www.paepper.com
Introduction: LoRA (Low-Rank Adaptation of LLMs) is a technique that updates only a small set of low-rank matrices instead of adjusting all the parameters of a deep neural network. This significantly reduces the computational cost of training. LoRA is particularly useful when working with large language models (LLMs), which have a huge number of parameters to fine-tune. The Core Concept: Reducing Complexity with Low-Rank Decomposition
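
The blurb above only sketches the idea in prose. As a rough illustration, not code from the linked post, here is a minimal PyTorch sketch of a LoRA-style linear layer; the class name LoRALinear and the hyperparameters r and alpha are illustrative choices:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen linear layer plus a trainable low-rank update (sketch)."""

    def __init__(self, d_in: int, d_out: int, r: int = 8, alpha: int = 16):
        super().__init__()
        # Pretrained weight W stays frozen; only A and B are trained.
        self.base = nn.Linear(d_in, d_out, bias=False)
        self.base.weight.requires_grad_(False)
        # Low-rank factors: the update B @ A has rank at most r.
        self.A = nn.Parameter(torch.randn(r, d_in) * 0.01)  # small random init
        self.B = nn.Parameter(torch.zeros(d_out, r))        # zero init: no change at start
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus scaled low-rank correction: x W^T + scale * x A^T B^T
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

layer = LoRALinear(d_in=768, d_out=768, r=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(trainable)  # 2 * 8 * 768 = 12,288 vs. 768 * 768 = 589,824 full parameters
```

Only A and B receive gradients, so the trainable-parameter count scales with the rank r rather than with the size of the full weight matrix.
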
vickiboykis.com
What are ML artifacts?

sigmoidprime.com
An exploration of Transformer-XL, a modified Transformer optimized for longer context lengths.

scorpil.com
In Part One of the "Understanding Generative AI" series, we delved into Tokenization: the process of dividing text into tokens, which serve as the fundamental units of information for neural networks. These tokens are crucial in shaping how AI interprets and processes language. Building upon this foundational knowledge, we are now ready to explore Neural Networks, the cornerstone technology underpinning Artificial Intelligence research. A Short Look into the History: Neural Networks, as a technology, have their roots in the 1940s and 1950s.
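
As a toy illustration of the tokenization step described above (not code from the series; real tokenizers use subword schemes such as BPE rather than the word-level split assumed here):

```python
text = "Neural networks operate on tokens, not on raw text"

# Hypothetical word-level vocabulary; production systems use subword
# schemes so unseen words can still be represented.
vocab = {word: idx for idx, word in enumerate(sorted(set(text.split())))}
token_ids = [vocab[word] for word in text.split()]

print(token_ids)  # the integer sequence a neural network actually consumes
```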