Part 2: Fundamentals for Apache Kafka

Presented by

Tim Berglund, Senior Director, Developer Advocacy, Confluent.

About this talk

What is Apache Kafka® and how does it work? Apache Kafka® was built with the vision of becoming the central nervous system that makes real-time data available to all the applications that need it, with use cases ranging from stock trading and fraud detection to transportation, data integration, and real-time analytics. In this two-part series you will get an overview of what Kafka is, what it is used for, and the core concepts that enable it to power a highly scalable, available, and resilient real-time event streaming platform. The series begins with an introduction to the shift toward real-time data streaming and continues all the way through to best practices for developing applications with Apache Kafka® and integrating Kafka into your environment. Whether you're just getting started or have already built stream processing applications, you will find actionable insights in this series that will enable you to derive further business value from your data systems. This training covers the following topics:

1. Benefits of Stream Processing and Apache Kafka® Use Cases
2. Apache Kafka® Architecture & Fundamentals Explained
3. How Apache Kafka® Works
4. Integrating Apache Kafka® Into Your Environment
5. Confluent Cloud

Register now to learn Apache Kafka® from Confluent, the company founded by Kafka's original developers.
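To give a flavour of the kind of code the series builds toward, here is a minimal, illustrative sketch of publishing an event with Kafka's Java producer client. The broker address, topic name, and record contents are assumptions made for the example, not details from the talk.

// A minimal sketch of publishing one event to Kafka with the Java client.
// Broker address, topic name, key, and value are illustrative assumptions.
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class PaymentProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed broker address
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Each event is appended to a topic; any number of consumers can read it independently.
            producer.send(new ProducerRecord<>("payments", "order-42", "amount=99.95"));
        }
    }
}

A consumer subscribed to the same topic would receive this event in real time, which is the basic pattern underlying the use cases listed above.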

Confluent is building the foundational platform for data in motion. Our cloud-native offering is designed to be the intelligent connective tissue that enables real-time data from multiple sources to stream constantly across the organisation. With Confluent, organisations can create a central nervous system to innovate and win in a digital-first world.