www.cold-takes.com (you are here)

www.alignmentforum.org
"Human feedback on diverse tasks" could lead to transformative AI, while requiring little innovation on current techniques. But it seems likely that...

joecarlsmith.com
Introduction to an essay series about paths to safe, useful superintelligence.

scottaaronson.blog
Two weeks ago, I gave a lecture setting out my current thoughts on AI safety, halfway through my year at OpenAI. I was asked to speak by UT Austin's Effective Altruist club. You can watch the lecture on YouTube here (I recommend 2x speed). The timing turned out to be weird, coming immediately after the...

jax-ml.github.io
Training LLMs often feels like alchemy, but understanding and optimizing the performance of your models doesn't have to. This book aims to demystify the science of scaling language models: how TPUs (and GPUs) work and how they communicate with each other, how LLMs run on real hardware, and how to parallelize your models during training and inference so they run efficiently at massive scale. If you've ever wondered "how expensive should this LLM be to train" or "how much memory do I need to serve this model myself" or "what's an AllGather", we hope this will be useful to you.
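Since the last card asks "what's an AllGather", here is a minimal sketch of the idea in JAX, not taken from the book itself: each device starts with its own shard of an array, and after the collective every device holds all shards. The axis name "i" and the variable names are just illustrative; it assumes a host with one or more JAX devices (on CPU you can force several with XLA_FLAGS="--xla_force_host_platform_device_count=4").

```python
import jax
import jax.numpy as jnp

n_dev = jax.device_count()

# Each device holds one 2-element shard of a larger array.
shards = jnp.arange(n_dev * 2, dtype=jnp.float32).reshape(n_dev, 2)

def f(local_shard):
    # AllGather: every device receives the concatenation of all devices' shards.
    return jax.lax.all_gather(local_shard, axis_name="i")

gathered = jax.pmap(f, axis_name="i")(shards)
print(gathered.shape)  # (n_dev, n_dev, 2): each device now holds the full array
```

The communication cost of collectives like this, and when to prefer them over alternatives such as reduce-scatter, is exactly the kind of question the book works through.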