swethatanamala.github.io

research.google

Posted by Jacob Devlin and Ming-Wei Chang, Research Scientists, Google AI Language. One of the biggest challenges in natural language processing (NLP)...

lilianweng.github.io

[Updated on 2019-02-14: add ULMFiT and GPT-2.] [Updated on 2020-02-29: add ALBERT.] [Updated on 2020-10-25: add RoBERTa.] [Updated on 2020-12-13: add T5.] [Updated on 2020-12-30: add GPT-3.] [Updated on 2021-11-13: add XLNet, BART and ELECTRA; also updated the Summary section.] I guess they are Elmo & Bert? (Image source: here) We have seen amazing progress in NLP in 2018. Large-scale pre-trained language models like OpenAI GPT and BERT have achieved great performance on a variety of language tasks using generic model architectures. The idea is similar to how ImageNet classification pre-training helps many vision tasks (*). Even better than vision classification pre-training, this simple and powerful approach in NLP does not require labeled data for pre-training...
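To make the "no labeled data" point concrete, here is a minimal sketch of how masked-language-model pre-training (the BERT-style objective) manufactures training pairs from raw text alone; the token ids, MASK_ID, and the 15% rate are illustrative assumptions, not any particular library's API:

```python
import random

MASK_ID = 0        # hypothetical id reserved for the [MASK] token
MASK_RATE = 0.15   # BERT masks roughly 15% of input tokens

def make_mlm_example(token_ids):
    """Randomly mask tokens; the targets are the original tokens."""
    inputs, targets = [], []
    for tok in token_ids:
        if random.random() < MASK_RATE:
            inputs.append(MASK_ID)  # the model sees [MASK] ...
            targets.append(tok)     # ... and must predict the original
        else:
            inputs.append(tok)
            targets.append(-1)      # -1 marks positions ignored by the loss
    return inputs, targets

# Any sentence from an unlabeled corpus becomes a training example.
sentence = [101, 2057, 2293, 17953, 2361, 102]  # made-up token ids
print(make_mlm_example(sentence))
```

Every sentence in a raw text corpus thus yields supervised-looking (input, target) pairs, which is why this style of pre-training scales without human annotation.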

ai.googleblog.com

[AI summary] The blog post discusses the Vision Transformer (ViT), a new image recognition model that leverages the Transformer architecture originally designed for text, demonstrating performance competitive with CNNs while requiring fewer computational resources.
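The key move in ViT is to treat an image as a sequence of patch "tokens." Below is a minimal sketch of that patch-embedding step, with illustrative sizes (224x224 input, 16x16 patches, 768-dim embeddings) and a random stand-in for the learned projection:

```python
import numpy as np

image = np.random.rand(224, 224, 3)   # H x W x C input image
P, D = 16, 768                        # patch size, embedding dimension

# Cut the image into (224/16)^2 = 196 non-overlapping 16x16x3 patches,
# then flatten each patch into a single vector.
patches = image.reshape(224 // P, P, 224 // P, P, 3)
patches = patches.transpose(0, 2, 1, 3, 4).reshape(-1, P * P * 3)  # (196, 768)

# Linearly project each flattened patch; in ViT this matrix is learned.
W_embed = np.random.randn(P * P * 3, D) * 0.02
tokens = patches @ W_embed            # (196, D): a "sentence" of patch tokens

print(tokens.shape)  # (196, 768) -- ready for a standard Transformer encoder
```

From here a standard Transformer encoder (plus position embeddings and a class token, omitted above) processes the patch sequence exactly as it would a sequence of word embeddings.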

tritonstation.com

I want to take another step back in perspective from the last post to say a few words about what the radial acceleration relation (RAR) means and what it doesn't mean. Here it is again: The Radial Acceleration Relation over many decades. The grey region is forbidden - there cannot be less acceleration than caused...
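For reference, the fitting function commonly quoted for the RAR (McGaugh, Lelli & Schombert 2016) relates the observed centripetal acceleration to the acceleration expected from the visible baryons alone; the truncated caption presumably refers to the region g_obs < g_bar, forbidden because additional matter can only add to the baryons' gravity. A sketch of that commonly quoted form, which may differ from the parameterization in the linked post:

```latex
% Commonly quoted RAR fitting function (McGaugh, Lelli & Schombert 2016)
g_{\mathrm{obs}} = \frac{g_{\mathrm{bar}}}{1 - e^{-\sqrt{g_{\mathrm{bar}}/g_{\dagger}}}},
\qquad g_{\dagger} \approx 1.2 \times 10^{-10}\ \mathrm{m\,s^{-2}}
```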