blog.pamelafox.org

isthisit.nz
August 2024 Update: Now a solved problem. Use Structured Outputs. Large language models (LLMs) return unstructured output: when we prompt them, they respond with one large string. This is fine for applications such as ChatGPT, but in others, where we want the LLM to return structured data such as lists or key-value pairs, a parseable response is needed. In Building A ChatGPT-enhanced Python REPL I used a technique to prompt the LLM to return output in a text format I could parse.
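
The "solved problem" the update refers to is Structured Outputs: instead of prompting for a parseable text format and scraping it, generation is constrained to a JSON Schema and the SDK hands back a validated object. A minimal sketch of that idea, assuming the openai Python SDK (1.40 or later) with Pydantic; the TodoList schema, prompts, and model name are illustrative, not taken from the linked post:

```python
# Minimal sketch of the Structured Outputs approach mentioned above.
# Assumes openai >= 1.40 and pydantic; the TodoList schema, prompts, and
# model name are illustrative examples, not taken from the linked post.
from openai import OpenAI
from pydantic import BaseModel


class TodoItem(BaseModel):
    task: str
    priority: int


class TodoList(BaseModel):
    items: list[TodoItem]


client = OpenAI()

# parse() constrains generation to the TodoList JSON Schema and returns
# an already-validated Pydantic object instead of one large string.
completion = client.beta.chat.completions.parse(
    model="gpt-4o-2024-08-06",
    messages=[
        {"role": "system", "content": "Extract a todo list from the user's note."},
        {"role": "user", "content": "Buy milk today, then file taxes before Friday."},
    ],
    response_format=TodoList,
)

todos = completion.choices[0].message.parsed  # a TodoList instance, not raw text
for item in todos.items:
    print(item.priority, item.task)
```

Because the schema is enforced at generation time, the hand-rolled parseable text format described in the excerpt is no longer needed.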

newvick.com
RAG is not all you need. This post covers some of the common problems encountered in a simple RAG system, and potential solutions for them.

simonwillison.net
Retrieval Augmented Generation (RAG) is a technique for adding extra "knowledge" to systems built on LLMs, allowing them to answer questions against custom information not included in their training data. ...
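
As a concrete illustration of that definition, here is a minimal RAG sketch, assuming the openai Python SDK; the documents, helper names, and model choices are illustrative placeholders, not from either linked post: embed a small corpus, retrieve the snippets most similar to the question, and pass them to the model as extra context.

```python
# Minimal RAG sketch: retrieve the snippets most relevant to a question,
# then answer with them pasted into the prompt as extra "knowledge".
# Assumes the openai Python SDK; the documents and model names are
# illustrative placeholders, not taken from the linked posts.
import numpy as np
from openai import OpenAI

client = OpenAI()

documents = [
    "Our office is closed on public holidays.",
    "Support tickets are answered within two business days.",
    "The parental leave policy was last updated in March 2024.",
]


def embed(texts: list[str]) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])


doc_vectors = embed(documents)


def answer(question: str, top_k: int = 2) -> str:
    # Rank the corpus by cosine similarity to the question embedding.
    q = embed([question])[0]
    scores = doc_vectors @ q / (np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q))
    context = "\n".join(documents[i] for i in np.argsort(scores)[::-1][:top_k])

    # The retrieved snippets become the extra "knowledge" in the prompt.
    completion = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": f"Answer using only this context:\n{context}"},
            {"role": "user", "content": question},
        ],
    )
    return completion.choices[0].message.content


print(answer("When was the parental leave policy last changed?"))
```

The newvick.com post above is about what goes wrong once a system grows beyond a sketch like this.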

www.onlandscape.co.uk
[AI summary] The article discusses privacy and cookie policies for the online magazine 'On Landscape', focusing on user data collection and website functionality.