Explore: select a destination

You are here: sophiabits.com
blog.christianposta.com (15.8 parsecs away)

Organizations need to think about what data gets sent to any AI service, and they also need to consider that the LLM may respond with unexpected or risky results. This is where guardrails come in. There are a number of open-source projects for building guardrails directly into your application, as well as a number of vendor-specific content moderation services. What about building your own? From working with enterprises, I can say they have strong opinions about how content should be moderated in this AI/LLM world.
simonwillison.net (11.9 parsecs away)

OpenAI have offered structured outputs for a while now: you could specify `"response_format": {"type": "json_object"}` to request a valid JSON object, or you could use the [function calling](https://platform.openai.com/docs/guides/function-calling) mechanism to ...
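The JSON mode the excerpt mentions boils down to one extra field on the Chat Completions request. A minimal sketch of the request body, built as a plain dict with no network call (the model name and prompts here are placeholder assumptions, not from the excerpt):

```python
import json

# Sketch of a Chat Completions request body using JSON mode.
payload = {
    "model": "gpt-4o-mini",  # placeholder model name
    "messages": [
        # JSON mode requires the prompt to mention JSON somewhere.
        {"role": "system", "content": "Reply with a single JSON object."},
        {"role": "user", "content": "List three primary colors."},
    ],
    # Asks the API to guarantee the reply is one valid JSON object.
    "response_format": {"type": "json_object"},
}

body = json.dumps(payload)
```

The alternative route the excerpt names, function calling, instead describes a tool schema and lets the model emit arguments matching it.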
isthisit.nz (12.8 parsecs away)

August 2024 update: now a solved problem. Use Structured Outputs. Large language models (LLMs) return unstructured output: when we prompt them, they respond with one large string. This is fine for applications such as ChatGPT, but where we want the LLM to return structured data such as lists or key-value pairs, a parseable response is needed. In Building A ChatGPT-enhanced Python REPL I used a technique of prompting the LLM to return output in a text format I could parse.
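Before Structured Outputs, parsing that one large string was a do-it-yourself affair. A minimal sketch of such a parser (a hypothetical helper for illustration, not the post's actual code): ask the model for JSON, then tolerate the Markdown fences and prose that models often wrap it in.

```python
import json
import re

def parse_llm_json(raw: str) -> dict:
    """Pull a JSON object out of an LLM's free-text reply.

    Models sometimes wrap JSON in a code fence or surrounding prose,
    so fall back to extracting the first {...} span from the string.
    """
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        match = re.search(r"\{.*\}", raw, re.DOTALL)
        if match is None:
            raise ValueError("no JSON object found in response")
        return json.loads(match.group(0))

# Example reply wrapped in a fence plus commentary.
reply = 'Sure! ```json\n{"name": "Ada", "age": 36}\n```'
data = parse_llm_json(reply)
```

Structured Outputs makes this kind of defensive parsing unnecessary, which is why the post now calls it a solved problem.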
thegamesshed.wordpress.com (76.2 parsecs away)

Listen up! We now have a Facebook page: https://www.facebook.com/groups/1972111873002373/ This will become our new forum :)