5 Kafka Best Practices

Presented by

Alex Pierce

About this talk

Learn five proven ways to improve your Kafka operational readiness and platform performance. The influx of data from a wide variety of sources is already straining big data IT infrastructure, and that data must be ingested, processed, and made available in near real time to support business-critical use cases. Kafka data streaming is used today by 30% of Fortune 500 companies because it can feed real-time data into the predictive analytics engines behind these use cases. Kafka brings critical challenges and limitations of its own, however, and following the latest best practices makes it far easier to manage effectively. Join us for a webinar where we will discuss five specific ways to keep your Kafka deployment optimized and more easily managed.

Best practices covered:
- Monitoring key component states to understand Kafka cluster health
- Measuring crucial metrics to understand Kafka cluster performance
- Observing critical building blocks in the Kafka hardware stack
- Tracking important metrics for Kafka capacity planning
- Knowing what to alert on and what can be monitored passively
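To make the first and last practices concrete, here is a minimal Python sketch of turning a few key broker metrics into an alertable health state. The metric names mirror Kafka's JMX MBeans (`UnderReplicatedPartitions` from `ReplicaManager`, `OfflinePartitionsCount` and `ActiveControllerCount` from `KafkaController`), but the `cluster_health` helper and its thresholds are illustrative assumptions, not material from the talk; in practice the values would come from a JMX scraper or monitoring agent.

```python
def cluster_health(metrics: dict) -> str:
    """Classify cluster health as 'healthy', 'degraded', or 'critical'
    from a handful of broker metrics (illustrative thresholds)."""
    # Offline partitions mean data is unavailable to clients: page someone.
    if metrics.get("OfflinePartitionsCount", 0) > 0:
        return "critical"
    # Exactly one active controller is expected cluster-wide.
    if metrics.get("ActiveControllerCount", 0) != 1:
        return "critical"
    # Under-replicated partitions signal replication lag or a down broker;
    # worth watching, but often self-heals, so monitor rather than page.
    if metrics.get("UnderReplicatedPartitions", 0) > 0:
        return "degraded"
    return "healthy"

# Example: one broker down, replicas catching up elsewhere.
print(cluster_health({
    "ActiveControllerCount": 1,
    "OfflinePartitionsCount": 0,
    "UnderReplicatedPartitions": 12,
}))  # degraded
```

The split between "critical" (alert) and "degraded" (monitor passively) reflects the talk's distinction between what to alert on and what can simply be observed.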
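For the capacity-planning practice, a back-of-the-envelope disk estimate is a useful starting point: retained bytes scale with ingest rate, retention window, and replication factor. The function below is a hypothetical sketch, not a formula from the talk; real planning should also budget for traffic bursts, log segment and index overhead, and compaction behavior.

```python
def required_disk_gb(write_mb_per_sec: float,
                     retention_hours: float,
                     replication_factor: int,
                     headroom: float = 1.3) -> float:
    """Estimate total disk (GB) across the cluster to hold retained data.

    headroom is an assumed safety multiplier for bursts and overhead.
    """
    retained_mb = write_mb_per_sec * 3600 * retention_hours
    return retained_mb * replication_factor * headroom / 1024

# Example: 50 MB/s ingest, 72 h retention, replication factor 3.
print(round(required_disk_gb(50, 72, 3)), "GB across all brokers")
```

Tracking the actual ingest rate and per-broker disk usage over time shows how quickly reality diverges from the estimate, which is exactly the kind of capacity metric the talk recommends watching.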