simons.berkeley.edu
Given their complex behavior, diverse skills, and wide range of deployment scenarios, understanding large language models, and especially their failure modes, is important. With new models released every few months, often bringing brand-new capabilities, how can we achieve understanding that keeps pace with modern practice?

blog.pdebruin.org
The Retrieval Augmented Generation Hackathon starts on September 3; the repo with more info, the stream schedule, samples, and registration is at https://aka.ms/raghack. Large language models are powerful language generators, but they don't know everything about the world. RAG combines the power of large language models with the knowledge of a search engine, so you can ask questions of your own data and get answers that are relevant to the context of your question.

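The retrieve-then-generate loop that blurb describes is easy to sketch. Below is a minimal illustration in Python; the `search` retriever is a hypothetical stand-in for any search engine or vector index, and the OpenAI client and model name are assumptions for the example, not something the post specifies.

```python
# Minimal RAG sketch: retrieve context, then ask the LLM to answer from it.
# Assumptions: `search` is a placeholder for any search engine / vector
# store; the OpenAI client and model name are illustrative.
from openai import OpenAI

client = OpenAI()

def search(query: str, k: int = 3) -> list[str]:
    """Hypothetical retriever: return the k passages most relevant to the query."""
    raise NotImplementedError("plug in your search engine or vector store here")

def rag_answer(question: str) -> str:
    """Retrieve context for the question, then answer grounded in that context."""
    context = "\n\n".join(search(question))
    prompt = (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```
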
deepmind.google
We ask the question: "What is the optimal model size and number of training tokens for a given compute budget?" To answer this question, we train models of various sizes and with various numbers...

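For a back-of-envelope feel for the question the paper asks: training compute for a dense transformer is commonly approximated as C ≈ 6ND FLOPs for N parameters and D training tokens, and the paper's headline finding is that the compute-optimal token count is roughly D ≈ 20N. A small sketch under those two rules of thumb (both are approximations drawn from the paper's analysis, not exact constants):

```python
# Back-of-envelope compute-optimal sizing, assuming C ~= 6 * N * D FLOPs
# and the Chinchilla rule of thumb D ~= 20 * N (approximations, not
# exact constants).
import math

def chinchilla_optimal(compute_flops: float) -> tuple[float, float]:
    """Return (parameters N, training tokens D) for a given compute budget."""
    # Solve C = 6 * N * (20 * N)  =>  N = sqrt(C / 120)
    n_params = math.sqrt(compute_flops / 120.0)
    n_tokens = 20.0 * n_params
    return n_params, n_tokens

# Example: a ~5.8e23 FLOP budget gives roughly 70B parameters and
# 1.4T tokens, close to the published Chinchilla configuration.
n, d = chinchilla_optimal(5.8e23)
print(f"params ~= {n:.2e}, tokens ~= {d:.2e}")
```
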
qa.fastforwardlabs.com
A review of Information Retrieval and the role it plays in an IR QA system.
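As a concrete example of the retriever stage such a system reviews, here is a minimal TF-IDF ranker using scikit-learn; the toy corpus and the `retrieve` helper are illustrative assumptions, not taken from the linked review.

```python
# Sketch of the retriever stage of an IR QA system: rank passages by
# TF-IDF cosine similarity to the question. The corpus is a toy example.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

passages = [
    "The Chinchilla paper studies compute-optimal language model training.",
    "Retrieval Augmented Generation grounds LLM answers in retrieved documents.",
    "TF-IDF weighs terms by frequency within and across documents.",
]

vectorizer = TfidfVectorizer()
passage_vectors = vectorizer.fit_transform(passages)

def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the k passages most similar to the question."""
    q_vec = vectorizer.transform([question])
    scores = cosine_similarity(q_vec, passage_vectors).ravel()
    top = scores.argsort()[::-1][:k]
    return [passages[i] for i in top]

print(retrieve("How does RAG ground model answers?"))
```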