ministryofjustice.github.io
jack-vanlightly.com
Over the past few months, I've seen a growing number of posts on social media promoting the idea of a "zero-copy" integration between Apache Kafka and Apache Iceberg. The idea is that Kafka topics could live directly as Iceberg tables. On the surface it sounds efficient: one copy of the data, unified access for both streaming and analytics. But from a systems point of view, I think this is the wrong direction for the Apache Kafka project. In this post, I'll explain why.
www.starburst.io
Trino is a SQL-based query engine built for very large datasets. It powers Starburst and delivers ad hoc and real-time analytics at speed.
www.confluent.io
Existing Confluent Cloud (CC) AWS users can now use Tableflow to easily represent Kafka topics as Iceberg tables and then leverage the AWS Glue Data Catalog to power real-time AI and analytics workloads.
lakefs.io
Discover what an Iceberg catalog is, its role, the different types, common challenges, and how to choose and configure the right catalog.