wandb.ai
isthisit.nz
August 2024 Update: Now a solved problem. Use Structured Outputs. Large language models (LLMs) return unstructured output. When we prompt them they respond with one large string. This is fine for applications such as ChatGPT, but in others where we want the LLM to return structured data, such as lists or key-value pairs, a parseable response is needed. In Building A ChatGPT-enhanced Python REPL I used a technique to prompt the LLM to return output in a text format I could parse.
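As a rough sketch of what the "solved problem" looks like with OpenAI's Structured Outputs: the schema, prompt, and model name below are illustrative, not taken from the linked post.

```python
# Minimal sketch of OpenAI Structured Outputs (openai Python SDK, v1.40+).
# Assumes OPENAI_API_KEY is set; the Task schema and prompts are made up
# for illustration.
from pydantic import BaseModel
from openai import OpenAI


class Task(BaseModel):
    title: str
    priority: int


class TaskList(BaseModel):
    tasks: list[Task]


client = OpenAI()

# The SDK constrains and validates the response against the Pydantic schema,
# so the result is parsed structured data rather than one large string.
completion = client.beta.chat.completions.parse(
    model="gpt-4o-2024-08-06",
    messages=[
        {"role": "system", "content": "Extract the tasks from the user's note."},
        {"role": "user", "content": "Ship the blog post, then fix the flaky test."},
    ],
    response_format=TaskList,
)

print(completion.choices[0].message.parsed)
```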
blog.finxter.com
cra.mr
Recently I've been spending a lot of time working on Peated. I've taken the approach of somewhat intentionally over-engineering it, both to learn new technologies and as a way to dogfood parts of Sentry. One of those technologies is ChatGPT. I thought it'd be interesting to effectively have ChatGPT fill in blanks in my database: details about bottles of whiskey, tasting notes, and even verifying some other data such as the type of spirit or the location of the distiller. The results are what they are ...
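A hedged sketch of that "fill in the blanks" pattern using the OpenAI Python SDK; the field names, prompt, and model are assumptions for illustration, not Peated's actual implementation.

```python
# Ask the model for missing bottle details as JSON and merge them into an
# existing record. Everything here (fields, prompt, model) is hypothetical.
import json
from openai import OpenAI

client = OpenAI()


def enrich_bottle(name: str) -> dict:
    """Ask the model for plausible metadata about a bottle of whiskey."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        response_format={"type": "json_object"},
        messages=[
            {
                "role": "system",
                "content": (
                    "Return JSON with keys: category, distiller_location, "
                    "tasting_notes. Use null when you are not sure."
                ),
            },
            {"role": "user", "content": f"Bottle: {name}"},
        ],
    )
    return json.loads(response.choices[0].message.content)


record = {"name": "Ardbeg 10", "category": None}
# Only fill fields that are still missing; the model's answers need human review.
record.update(
    {k: v for k, v in enrich_bottle(record["name"]).items() if record.get(k) is None}
)
print(record)
```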
neo4j.com
Explore its functionality and how it leverages the power of LLMs and Neo4j graph databases to improve data analysis and knowledge discovery.