harvardnlp.github.io
teddykoker.com
This post is the first in a series of articles about natural language processing (NLP), a subfield of machine learning concerned with the interaction between computers and human language. This article focuses on attention, a mechanism that forms the backbone of many state-of-the-art language models, including Google's BERT (Devlin et al., 2018) and OpenAI's GPT-2 (Radford et al., 2019).
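The attention mechanism the blurb above refers to can be sketched in a few lines. This is a minimal, hedged illustration of scaled dot-product attention (the variant used in Transformer-based models like BERT and GPT-2), not code from the linked post; the array shapes and function names are assumptions for the example:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    # Similarity of each query to each key, scaled by sqrt(d_k)
    # to keep the logits in a reasonable range.
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    weights = softmax(scores, axis=-1)  # each row sums to 1
    # Output: a weighted average of the value vectors per query.
    return weights @ V

rng = np.random.default_rng(0)
Q = rng.standard_normal((3, 4))  # 3 queries, dimension 4 (illustrative sizes)
K = rng.standard_normal((5, 4))  # 5 keys
V = rng.standard_normal((5, 4))  # 5 values
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (3, 4): one attended vector per query
```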
sigmoidprime.com
An exploration of Transformer-XL, a modified Transformer optimized for longer context lengths.
blog.eleuther.ai
Rotary Positional Embedding (RoPE) is a new type of position encoding that unifies absolute and relative approaches. We put it to the test.
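The "unifies absolute and relative" claim above can be made concrete: RoPE rotates each pair of dimensions of a query or key by an angle proportional to its absolute position, and the dot product between two rotated vectors then depends only on their relative offset. A minimal sketch (function name and vector sizes are assumptions for the example, not from the linked post):

```python
import numpy as np

def rope(x, pos, base=10000.0):
    # x: 1-D vector of even length d; rotate each (even, odd) pair
    # of dimensions by pos * theta_i, with per-pair frequency theta_i.
    d = x.shape[-1]
    theta = base ** (-np.arange(0, d, 2) / d)  # (d/2,) frequencies
    ang = pos * theta
    cos, sin = np.cos(ang), np.sin(ang)
    x1, x2 = x[0::2], x[1::2]
    out = np.empty_like(x)
    out[0::2] = x1 * cos - x2 * sin
    out[1::2] = x1 * sin + x2 * cos
    return out

rng = np.random.default_rng(0)
q = rng.standard_normal(8)
k = rng.standard_normal(8)

# Relative-position property: the score <rope(q, m), rope(k, n)>
# depends only on the offset m - n, not on m and n themselves.
a = rope(q, 3) @ rope(k, 1)    # offset 2
b = rope(q, 10) @ rope(k, 8)   # offset 2
print(np.allclose(a, b))  # True
```

Because a 2-D rotation by angle m cancels against one by angle n up to their difference, absolute-position encodings yield relative-position attention scores for free.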
www.hamza.se
A walkthrough of implementing a neural network from scratch in Python, exploring what makes these seemingly complex systems actually quite straightforward.