blog.jellesmeets.nl
www.halhigdon.com

Recovery and base training are critical aspects of the marathon, and it can be hard to know how to move forward after the big race day. Hal Higdon offers free, interactive training schedules for the post-marathon period.
trishagee.com

Trisha Gee is looking for a new role. In this post, she explores the three main options she's considering.
www.halhigdon.com

With the publication of Hal Higdon's Half Marathon Training, I added a new intermediate schedule. Previously, there was only one, titled "Intermediate." Now there are two: "Intermediate 1" and "Intermediate 2." The difference is that Intermediate 1 is an endurance-based program; Intermediate 2 is a speed-based program. These two intermediate schedules exist in a parallel universe, the same ...
lilianweng.github.io

[Updated on 2019-02-14: add ULMFiT and GPT-2.] [Updated on 2020-02-29: add ALBERT.] [Updated on 2020-10-25: add RoBERTa.] [Updated on 2020-12-13: add T5.] [Updated on 2020-12-30: add GPT-3.] [Updated on 2021-11-13: add XLNet, BART and ELECTRA; also updated the Summary section.] I guess they are Elmo & Bert? We have seen amazing progress in NLP in 2018. Large-scale pre-trained language models like OpenAI GPT and BERT have achieved great performance on a variety of language tasks using generic model architectures. The idea is similar to how ImageNet classification pre-training helps many vision tasks. Even better than vision classification pre-training, this simple and powerful approach in NLP does not require labeled data for pre-training, allowing us to experiment with increased training scale, up to our very limit.