This is the second blog post in the Kafka 101 series. Building on the infrastructure described in the first post, we produce fake sales events to the Kafka cluster using the Java API, with Apache Avro for message serialization.
In this series of articles, we explore how Kafka and Flink can be used together to build real-time data streaming pipelines. We delve into the architecture and features of each technology and showcase how they combine to process and analyze data at scale.
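As a taste of what we will build in this post, here is a minimal sketch of a producer that sends an Avro-encoded sales event using the Kafka Java client and Confluent's Avro serializer. The topic name, schema fields, broker address, and Schema Registry URL are illustrative assumptions, not details from the series.

```java
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericRecord;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;

import java.util.Properties;

public class SalesEventProducer {

    // Hypothetical Avro schema for a fake sales event
    private static final String SCHEMA_JSON = """
        {
          "type": "record",
          "name": "SaleEvent",
          "fields": [
            {"name": "item",  "type": "string"},
            {"name": "price", "type": "double"}
          ]
        }""";

    public static void main(String[] args) {
        Properties props = new Properties();
        // Broker and Schema Registry addresses are assumptions for a local setup
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                  "org.apache.kafka.common.serialization.StringSerializer");
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                  "io.confluent.kafka.serializers.KafkaAvroSerializer");
        props.put("schema.registry.url", "http://localhost:8081");

        // Build a generic Avro record matching the schema above
        Schema schema = new Schema.Parser().parse(SCHEMA_JSON);
        GenericRecord event = new GenericData.Record(schema);
        event.put("item", "widget");
        event.put("price", 9.99);

        try (KafkaProducer<String, GenericRecord> producer = new KafkaProducer<>(props)) {
            // "sales" topic and "store-1" key are placeholders
            producer.send(new ProducerRecord<>("sales", "store-1", event));
            producer.flush();
        }
    }
}
```

Using `GenericRecord` avoids code generation, which keeps the example short; in a real project you would typically generate specific record classes from the Avro schema with the Avro Maven or Gradle plugin.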