lancecarlson.com
www.markhneedham.com

In this post, we'll learn how to use LLMs on the command line with Simon Willison's llm library.
mattmazur.com

One of the many new features announced at yesterday's OpenAI dev day is better support for generating valid JSON output. From the JSON mode docs: A common way to use Chat Completions is to instruct the model to always return JSON in some format that makes sense for your use case, by providing a system...
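To make the JSON-mode idea concrete, here is a minimal sketch. The payload shape (`response_format={"type": "json_object"}`) follows the Chat Completions API that the excerpt refers to, but the raw completion string below is a made-up example, not real API output; with the constraint on, the returned string is guaranteed to be syntactically valid JSON and can be loaded directly.

```python
import json

# Request payload shape for JSON mode (Chat Completions API).
# Note: the messages must also mention JSON, or the API rejects the request.
payload = {
    "model": "gpt-4-1106-preview",
    "response_format": {"type": "json_object"},
    "messages": [
        {"role": "system", "content": "Reply in JSON with keys 'title' and 'tags'."},
        {"role": "user", "content": "Summarise this post."},
    ],
}

# Hypothetical raw completion string returned in JSON mode -- since the
# output is constrained to valid JSON, json.loads should not raise.
raw = '{"title": "JSON mode", "tags": ["openai", "api"]}'
data = json.loads(raw)
print(data["title"])
```

The practical win is that the parsing step becomes a plain `json.loads` call instead of regex surgery on a chatty free-text reply.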
isthisit.nz

August 2024 Update: Now a solved problem. Use Structured Outputs. Large language models (LLMs) return unstructured output. When we prompt them, they respond with one large string. This is fine for applications such as ChatGPT, but in others, where we want the LLM to return structured data such as lists or key-value pairs, a parseable response is needed. In Building A ChatGPT-enhanced Python REPL I used a technique to prompt the LLM to return output in a text format I could parse.
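The pre-Structured-Outputs technique the excerpt alludes to, prompting for a parseable text format and parsing it yourself, can be sketched like this. The `key: value` line format and the `parse_kv` helper are illustrative assumptions for this sketch, not the format the linked post actually uses.

```python
def parse_kv(reply: str) -> dict[str, str]:
    """Parse 'key: value' lines out of a free-text LLM reply.

    Lines without a colon (chatty preamble, sign-offs, etc.) are
    skipped -- tolerating that noise is the usual chore this kind of
    ad-hoc parsing has to handle.
    """
    parsed = {}
    for line in reply.splitlines():
        if ":" in line:
            key, value = line.split(":", 1)
            parsed[key.strip()] = value.strip()
    return parsed

# Example reply where the model was prompted to answer in key: value lines.
reply = "Sure, here you go\nname: Ada Lovelace\nborn: 1815"
print(parse_kv(reply))  # {'name': 'Ada Lovelace', 'born': '1815'}
```

The fragility of this approach (the model can drift from the format at any time) is exactly what JSON mode and, later, Structured Outputs were introduced to eliminate.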
futurism.com

OpenAI's latest AI, GPT-4, was capable of saying some deeply racist things before being constrained by the company's "red team," Insider reports.