isthisit.nz
August 2024 Update: Now a solved problem. Use Structured Outputs.

Large language models (LLMs) return unstructured output. When we prompt them, they respond with one large string. This is fine for applications such as ChatGPT, but in others, where we want the LLM to return structured data such as lists or key-value pairs, a parseable response is needed. In Building A ChatGPT-enhanced Python REPL I used a technique of prompting the LLM to return output in a text format I could parse.
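The technique can be sketched roughly like this: instruct the model to answer in a simple line-based `key: value` format, then parse the raw string back into a dict. The prompt wording and the `parse_kv_response` helper below are illustrative assumptions, not the post's actual code.

```python
# Hypothetical prompt asking the model for a machine-parseable format.
PROMPT_TEMPLATE = """Answer with one `key: value` pair per line, nothing else.
Question: {question}"""

def parse_kv_response(raw: str) -> dict[str, str]:
    """Split each line on the first colon into a key/value pair."""
    pairs = {}
    for line in raw.splitlines():
        if ":" not in line:
            continue  # skip any extra chatter the model added despite the prompt
        key, value = line.split(":", 1)
        pairs[key.strip()] = value.strip()
    return pairs

# Parsing a simulated model response:
response = "name: Python REPL\nlanguage: Python"
print(parse_kv_response(response))  # {'name': 'Python REPL', 'language': 'Python'}
```

This kind of ad-hoc parsing is brittle (models drift from the requested format), which is why the update above points to Structured Outputs, where the API itself constrains the response to a schema.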