Common Apache Kafka Mistakes to Avoid


https://cnfl.io/podcast-episode-221 | What are some of the common mistakes that you have seen with Apache Kafka® record production and consumption? Nikoleta Verbeck (Principal Solutions Architect at Professional Services, Confluent) has a role that specifically tasks her with performance tuning as well as troubleshooting Kafka installations of all kinds. Based on her field experience, she put together a comprehensive list of common issues with recommendations for building, maintaining, and improving Kafka systems that are applicable across use cases.

Kris and Nikoleta begin by discussing the fact that it is common for teams migrating to Kafka from other message brokers to create too many producers, rather than the recommended one per service. The Kafka producer is thread safe, and a single producer instance can write to multiple topics, unlike with traditional message brokers, where you tend to use a client per topic.
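As a rough illustration (not from the episode), here is a minimal sketch of a single shared producer serving an entire service; the topic names and broker address are assumptions:

```java
// A minimal sketch: one KafkaProducer shared by the whole service.
// The producer is thread safe, so it can be reused across threads and
// topics instead of creating a producer per topic or per request.
// "orders", "audit-log", and localhost:9092 are assumed, illustrative values.
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class SharedProducer {
    private static final KafkaProducer<String, String> PRODUCER = createProducer();

    private static KafkaProducer<String, String> createProducer() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed address
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());
        return new KafkaProducer<>(props);
    }

    // Called from any thread in the service; both topics go through the same
    // producer instance and share its batching, buffers, and connections.
    public static void sendOrderEvent(String key, String value) {
        PRODUCER.send(new ProducerRecord<>("orders", key, value));
    }

    public static void sendAuditEvent(String key, String value) {
        PRODUCER.send(new ProducerRecord<>("audit-log", key, value));
    }
}
```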

Monitoring is an unabashed good in any Kafka system. Nikoleta notes that it is better to monitor as thoroughly as possible from the start of your installation, even if you don't think you will ultimately need that much detail, because it pays off in the long run. A major advantage of monitoring is that it lets you predict resource growth in a more orderly fashion and helps you use your current resources more efficiently. Nikoleta mentions the many dashboards her team has built to support leading monitoring platforms such as Prometheus, Grafana, New Relic, Datadog, and Splunk.
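For illustration, a minimal sketch of reading a couple of standard client metrics directly from the producer, which is the same data those dashboards are typically built on; the two metric names picked out here are just examples:

```java
// A minimal sketch: Kafka clients expose their metrics through metrics()
// (and via JMX), which monitoring platforms scrape into dashboards.
// This prints two standard producer metrics as an example.
import java.util.Map;
import org.apache.kafka.common.Metric;
import org.apache.kafka.common.MetricName;

public class ProducerMetricsDump {
    public static void dump(org.apache.kafka.clients.producer.Producer<?, ?> producer) {
        Map<MetricName, ? extends Metric> metrics = producer.metrics();
        for (Map.Entry<MetricName, ? extends Metric> e : metrics.entrySet()) {
            String name = e.getKey().name();
            // batch-size-avg and record-queue-time-avg are early signals for
            // batching efficiency and producer backpressure.
            if (name.equals("batch-size-avg") || name.equals("record-queue-time-avg")) {
                System.out.printf("%s = %s%n", name, e.getValue().metricValue());
            }
        }
    }
}
```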

They also discuss a number of useful Kafka features that are optional, so people tend to be unaware of them. Compression is the first of these, and Nikoleta strongly recommends enabling it. Another is producer callbacks, which you can use to catch exceptions on send. A third is setting a `ConsumerRebalanceListener`, which notifies you about rebalance events so you can prepare for any issues that may result from them.
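A minimal sketch of those three optional features together; the topic name, group id, and broker address are assumptions:

```java
// A minimal sketch: producer compression, a send() callback that surfaces
// exceptions, and a ConsumerRebalanceListener that is notified when
// partitions are revoked or assigned. Values like "orders", "orders-service",
// and localhost:9092 are assumed for illustration.
import java.time.Duration;
import java.util.Collection;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;

public class OptionalFeatures {
    public static void main(String[] args) {
        // Producer with compression enabled.
        Properties pProps = new Properties();
        pProps.put("bootstrap.servers", "localhost:9092");
        pProps.put("key.serializer", StringSerializer.class.getName());
        pProps.put("value.serializer", StringSerializer.class.getName());
        pProps.put("compression.type", "lz4"); // zstd, gzip, and snappy are also valid
        KafkaProducer<String, String> producer = new KafkaProducer<>(pProps);

        // Producer callback: invoked when the broker acks the record or the send fails.
        producer.send(new ProducerRecord<>("orders", "key", "value"), (metadata, exception) -> {
            if (exception != null) {
                // Log, alert, or route elsewhere instead of silently dropping.
                exception.printStackTrace();
            }
        });
        producer.close();

        // Consumer with a rebalance listener so the app can react to rebalances.
        Properties cProps = new Properties();
        cProps.put("bootstrap.servers", "localhost:9092");
        cProps.put("group.id", "orders-service");
        cProps.put("key.deserializer", StringDeserializer.class.getName());
        cProps.put("value.deserializer", StringDeserializer.class.getName());
        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(cProps);
        consumer.subscribe(List.of("orders"), new ConsumerRebalanceListener() {
            @Override
            public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
                // Finish in-flight work and commit offsets for partitions being taken away.
                System.out.println("Revoked: " + partitions);
            }

            @Override
            public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
                // Rebuild any per-partition state before records start flowing.
                System.out.println("Assigned: " + partitions);
            }
        });
        consumer.poll(Duration.ofMillis(100));
        consumer.close();
    }
}
```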

Other topics covered in the episode include batching and the `linger.ms` producer setting, how to figure out your units of scale, and Trogdor, Kafka's test framework for fault injection and workload generation.
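A minimal sketch of the batching knobs, with illustrative values rather than recommendations:

```java
// A minimal sketch of the throughput/latency trade-off: linger.ms tells the
// producer how long to wait for more records before sending a batch, and
// batch.size caps the per-partition batch in bytes. Values here are examples.
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.common.serialization.StringSerializer;

public class BatchingConfig {
    public static KafkaProducer<String, String> create() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed address
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());
        props.put("linger.ms", "10");     // wait up to 10 ms for a fuller batch (adds latency, raises throughput)
        props.put("batch.size", "65536"); // 64 KB per-partition batch ceiling
        return new KafkaProducer<>(props);
    }
}
```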

EPISODE LINKS
► 5 Common Pitfalls When Using Apache Kafka: https://cnfl.io/5-common-pitfalls-whe...
► Kafka Internals course: https://cnfl.io/internals-101-episode...
► linger.ms producer config: https://cnfl.io/linger-ms-episode-221
► Fault Injection—Trogdor: https://cwiki.apache.org/confluence/d...
► From Apache Kafka to Performance in Confluent Cloud: https://cnfl.io/journey-from-apache-k...
► Kafka Compression: https://cwiki.apache.org/confluence/d...
► Interface ConsumerRebalanceListener: https://kafka.apache.org/24/javadoc/i...
► Nikoleta Verbeck’s Twitter: / nerdynikoleta
► Kris Jenkins’ Twitter: / krisajenkins
► Streaming Audio Playlist: • Streaming Audio Podcast | Apache Kafk...
► Join the Confluent Community: https://cnfl.io/confluent-community-e...
► Learn more with Kafka tutorials, resources, and guides at Confluent Developer: https://cnfl.io/confluent-developer-e...
► Use PODCAST100 to get $100 of free Confluent Cloud usage: https://cnfl.io/try-cloud-episode-221
► Promo code details: https://cnfl.io/podcast100-details-ep...

TIMESTAMPS
0:00 - Intro
1:17 - What is a Solutions Architect
2:20 - It's a problem to use multiple producers in a single service
6:19 - The trade-off between throughput and latency with batching
8:05 - What is linger.ms
15:00 - Enable compression
25:19 - Define Producer Callbacks
33:16 - One consumer per thread in a single service instance
41:45 - Trogdor
43:37 - Over Committing
55:48 - Provide a `ConsumerRebalanceListener`
1:00:16 - Undersized Kafka Consumer instances
1:07:28 - It's a wrap

ABOUT CONFLUENT
Confluent is pioneering a fundamentally new category of data infrastructure focused on data in motion. Confluent’s cloud-native offering is the foundational platform for data in motion – designed to be the intelligent connective tissue enabling real-time data, from multiple sources, to constantly stream across the organization. With Confluent, organizations can meet the new business imperative of delivering rich, digital front-end customer experiences and transitioning to sophisticated, real-time, software-driven backend operations. To learn more, please visit www.confluent.io.

#streamprocessing #apachekafka #kafka #confluent
