ollama.com
isthisit.nz
August 2024 update: this is now a solved problem; use Structured Outputs. Large language models (LLMs) return unstructured output: when we prompt them, they respond with one large string. This is fine for applications such as ChatGPT, but where we want the LLM to return structured data, such as lists or key-value pairs, a parseable response is needed. In Building A ChatGPT-enhanced Python REPL I used a technique of prompting the LLM to return output in a text format I could parse.
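The technique described in the isthisit.nz post, prompting the model to emit a parseable text format and then parsing its reply, can be sketched roughly as below. This is a minimal illustration only: the JSON-only prompt wording, the `parse_people` helper, and the canned reply standing in for a real model response are all hypothetical, not taken from the post.

```python
import json

# Hypothetical prompt: instruct the model to reply with JSON only,
# in a shape we can parse into native Python structures.
PROMPT_TEMPLATE = (
    "Extract the people mentioned in the text below. "
    'Reply with JSON only, in the form {{"people": [{{"name": str, "age": int}}]}}.\n\n'
    "Text: {text}"
)

def parse_people(llm_reply: str) -> list[dict]:
    """Parse a JSON reply from the model, validating the expected shape."""
    data = json.loads(llm_reply)  # raises ValueError on malformed output
    people = data["people"]      # raises KeyError if the key is missing
    for person in people:
        if not isinstance(person.get("name"), str) or not isinstance(person.get("age"), int):
            raise ValueError(f"unexpected shape: {person!r}")
    return people

# A canned reply standing in for a real model response:
reply = '{"people": [{"name": "Ada", "age": 36}, {"name": "Alan", "age": 41}]}'
print(parse_people(reply))
```

The validation step matters because, without structured-output support, nothing guarantees the model honors the requested format; parsing failures need to be caught and handled (for example, by retrying the prompt).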
www.promptingguide.ai
A comprehensive overview of prompt engineering.
anyscale-staging.herokuapp.com
Try the new LLM APIs available on Ray Data and Ray Serve. It's now easier than ever to use Ray for offline LLM batch inference and online LLM inference.
www.singlelunch.com
This is the blog version of a talk of mine on embedding methods: the main slides and what I would say in the talk. Intended audience: anyone interested in embedding methods. I don'...