research.google
blog.risingstack.com
  Artificial intelligence is a complex field. See how different AI development tools compare and find the best one for you.
www.anyscale.com
  ByteDance, the company behind TikTok, leverages multi-modal models to enable many applications, such as text-based image retrieval and object detection.
jax-ml.github.io
  Training LLMs often feels like alchemy, but understanding and optimizing the performance of your models doesn't have to. This book aims to demystify the science of scaling language models: how TPUs (and GPUs) work and how they communicate with each other, how LLMs run on real hardware, and how to parallelize your models during training and inference so they run efficiently at massive scale. If you've ever wondered "how expensive should this LLM be to train" or "how much memory do I need to serve this model myself" or "what's an AllGather", we hope this will be useful to you.
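For readers wondering about that last question: an AllGather is a collective operation where every device starts with one shard of an array and ends with the full array. The toy sketch below simulates the idea in plain numpy (shards as a Python list standing in for devices); real frameworks expose it as a hardware collective, e.g. `jax.lax.all_gather`.

```python
import numpy as np

def all_gather(shards):
    """Simulate an AllGather collective: each "device" starts with its own
    shard and ends with the full concatenated array."""
    full = np.concatenate(shards)          # the gathered result
    return [full.copy() for _ in shards]   # every device receives a full copy

# 4 simulated devices, each holding a 2-element shard of an 8-element vector
shards = [np.arange(i * 2, i * 2 + 2) for i in range(4)]
gathered = all_gather(shards)
# every device now holds the full vector 0..7
```

This is only the semantics; on real hardware the collective moves the shards between accelerators over interconnect, which is exactly the cost the book's communication analysis is about.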
windowsontheory.org
  [Yet another "philosophizing" post, but one with some actual numbers. See also this follow-up. --Boaz] Recently there have been many debates on "artificial general intelligence" (AGI) and whether or not we are close to achieving it by scaling up our current AI systems. In this post, I'd like to make this debate a bit...