blog.heim.xyz
research.google
Posted by Daniel Adiwardana, Senior Research Engineer, and Thang Luong, Senior Research Scientist, Google Research, Brain Team. Modern conversatio...
jax-ml.github.io
Training LLMs often feels like alchemy, but understanding and optimizing the performance of your models doesn't have to. This book aims to demystify the science of scaling language models: how TPUs (and GPUs) work and how they communicate with each other, how LLMs run on real hardware, and how to parallelize your models during training and inference so they run efficiently at massive scale. If you've ever wondered "how expensive should this LLM be to train" or "how much memory do I need to serve this model myself" or "what's an AllGather", we hope this will be useful to you.
www.lesswrong.com
If one believes that unaligned AGI is a significant problem (>10% chance of leading to catastrophe), speeding up public progress towards AGI is obvio...
www.techradar.com
We're going through changes