Over the past few months, I've seen a growing number of posts on social media promoting the idea of a "zero-copy" integration between Apache Kafka and Apache Iceberg. The idea is that Kafka topics could live directly as Iceberg tables. On the surface it sounds efficient: one copy of the data, unified access for both streaming and analytics. But from a systems point of view, I think this is the wrong direction for the Apache Kafka project. In this post, I'll explain why.