Configuring Apache Kafka for Performance and Resource Management
Apache Kafka is optimized for small messages. According to benchmarks, the best performance occurs with 1 KB messages. Larger messages (for example, 10 MB to 100 MB) can decrease throughput and significantly impact operations.
This topic describes options that can improve performance and reliability in your Kafka cluster:
Partitions and Memory Usage
For a quick video introduction to load balancing, see tl;dr: Balancing Apache Kafka Clusters.
Brokers allocate a buffer the size of replica.fetch.max.bytes for each partition they replicate. If replica.fetch.max.bytes is set to 1 MiB, and you have 1000 partitions, about 1 GiB of RAM is required. Ensure that the number of partitions multiplied by the size of the largest message does not exceed available memory.
The same consideration applies for the consumer fetch.message.max.bytes setting. Ensure that you have enough memory for the largest message for each partition the consumer reads. With larger messages, you might need to use fewer partitions or provide more RAM.
Partition Reassignment
At some point you will likely exceed configured resources on your system. If you add a Kafka broker to your cluster to handle increased demand, new partitions are allocated to it (the same as any other broker), but it does not automatically share the load of existing partitions on other brokers. To redistribute the existing load among brokers, you must manually reassign partitions. You can do so using the bin/kafka-reassign-partitions.sh script, as follows:
- Create a list of topics you want to move, and save it as a JSON file, for example topics-to-move.json:

{"topics": [{"topic": "foo1"},
            {"topic": "foo2"}],
 "version":1
}

- Use the --generate option in kafka-reassign-partitions.sh to list the distribution of partitions and replicas on your current brokers, followed by a list of suggested locations for partitions on your new broker:

bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 \
    --topics-to-move-json-file topics-to-move.json \
    --broker-list "4" \
    --generate

Current partition replica assignment

{"version":1,
 "partitions":[{"topic":"foo1","partition":2,"replicas":[1,2]},
               {"topic":"foo1","partition":0,"replicas":[3,1]},
               {"topic":"foo2","partition":2,"replicas":[1,2]},
               {"topic":"foo2","partition":0,"replicas":[3,2]},
               {"topic":"foo1","partition":1,"replicas":[2,3]},
               {"topic":"foo2","partition":1,"replicas":[2,3]}]
}

Proposed partition reassignment configuration

{"version":1,
 "partitions":[{"topic":"foo1","partition":3,"replicas":[4]},
               {"topic":"foo1","partition":1,"replicas":[4]},
               {"topic":"foo2","partition":2,"replicas":[4]}]
}
- Revise the suggested list if required, and then save it as a JSON file, for example expand-cluster-reassignment.json.
- Use the --execute option in kafka-reassign-partitions.sh to start the redistribution process, which can take several hours in some cases:

bin/kafka-reassign-partitions.sh \
    --zookeeper localhost:2181 \
    --reassignment-json-file expand-cluster-reassignment.json \
    --execute

- Use the --verify option in kafka-reassign-partitions.sh to check the status of your partitions, as shown in the example below.
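For example, reusing the reassignment file from the previous step:

bin/kafka-reassign-partitions.sh \
    --zookeeper localhost:2181 \
    --reassignment-json-file expand-cluster-reassignment.json \
    --verify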
Although reassigning partitions is labor-intensive, you should anticipate system growth and redistribute the load when your system is at 70% capacity. If you wait until you are forced to redistribute because you have reached the limit of your resources, the redistribution process can be extremely slow.
Garbage Collection
Large messages can cause longer garbage collection (GC) pauses as brokers allocate large chunks of heap memory. Monitor the GC log and the server log. If long GC pauses cause Kafka to abandon the ZooKeeper session, you may need to configure longer timeout values for zookeeper.session.timeout.ms.
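For example, a broker that suffers long GC pauses might be given a longer session timeout in its configuration; the value below is illustrative, not a recommendation:

zookeeper.session.timeout.ms=30000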
Handling Large Messages
Before configuring Kafka to handle large messages, first consider the following options to reduce message size:
- The Kafka producer can compress messages. For example, if the original message is a text-based format (such as XML), in most cases the compressed message will be sufficiently small. Use the compression.codec and compressed.topics producer configuration parameters to enable compression. Gzip and Snappy are supported. (See the configuration sketch following this list.)
- If shared storage (such as NAS, HDFS, or S3) is available, consider placing large files on the shared storage and using Kafka to send a message with the file location. In many cases, this can be much faster than using Kafka to send the large file itself.
- Split large messages into 1 KB segments with the producing client, using partition keys to ensure that all segments are sent to the same Kafka partition in the correct order. The consuming client can then reconstruct the original large message.
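As a sketch of the first option, using the older Scala producer configuration parameters named above (the topic names are illustrative):

compression.codec=snappy
compressed.topics=large_events,audit_log

If compressed.topics is empty, compression is applied to all topics. Note that the newer Java producer uses the compression.type parameter instead.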
If you still need to send large messages with Kafka, modify the following configuration parameters to match your requirements:
Broker Configuration
message.max.bytes
Maximum message size the broker will accept. Must be smaller than the consumer fetch.message.max.bytes, or the consumer cannot consume the message.
Default value: 1000000 (1 MB)
log.segment.bytes
Size of a Kafka data file. Must be larger than any single message.
Default value: 1073741824 (1 GiB)
replica.fetch.max.bytes
Maximum message size a broker can replicate. Must be larger than message.max.bytes, or a broker can accept messages it cannot replicate, potentially resulting in data loss.
Default value: 1048576 (1 MiB)
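For example, to accept messages up to 4 MiB, a broker configuration sketch might look like the following; the values are illustrative, and replica.fetch.max.bytes is kept larger than message.max.bytes as required:

message.max.bytes=4194304
replica.fetch.max.bytes=5242880
log.segment.bytes=1073741824

The log.segment.bytes value here is simply the 1 GiB default, which already comfortably exceeds any single message.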
Consumer Configuration
If a single message batch is larger than any of the default values below, the consumer will still be able to consume the batch, but the batch will be sent alone, which can cause performance degradation.
max.partition.fetch.bytes
The maximum amount of data per-partition the server will return.
Default value: 1048576 (1 MiB)
fetch.max.bytes
The maximum amount of data the server should return for a fetch request.
Default value: 52428800 (50 MiB)
fetch.message.max.bytes
Maximum message size a consumer can read. Must be at least as large as message.max.bytes.
Default value: 1048576 (1 MiB)
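A matching consumer sketch for the same 4 MiB broker limit might look like this (values are illustrative; fetch.message.max.bytes applies to the older consumer API, while max.partition.fetch.bytes and fetch.max.bytes apply to the newer Java consumer):

fetch.message.max.bytes=4194304
max.partition.fetch.bytes=4194304
fetch.max.bytes=52428800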
Tuning Kafka for Optimal Performance
For a quick video introduction to tuning Kafka, see tl;dr: Tuning Your Apache Kafka Cluster.
Performance tuning involves two important metrics: latency measures how long it takes to process one event, and throughput measures how many events arrive within a specific amount of time. Most systems are optimized for either latency or throughput. Kafka is balanced for both. A well-tuned Kafka system has just enough brokers to handle topic throughput, given the latency required to process information as it is received.
Tuning your producers, brokers, and consumers to send, process, and receive the largest possible batches within a manageable amount of time results in the best balance of latency and throughput for your Kafka cluster.
Tuning Kafka Producers
Kafka uses an asynchronous publish/subscribe model. When your producer calls send(), the result returned is a future. The future provides methods to let you check the status of the information in process. When the batch is ready, the producer sends it to the broker. The Kafka broker waits for an event, receives the result, and then responds that the transaction is complete.
If you block on each send, sending one record and waiting for the result before sending the next, latency is very low, but so is throughput. If each transaction takes 5 ms, throughput is 200 events per second, far slower than the expected 100,000 events per second.
When you use Producer.send(), you fill up buffers on the producer. When a buffer is full, the producer sends the buffer to the Kafka broker and begins to refill the buffer.
Two parameters are particularly important for latency and throughput: batch size and linger time.
Batch Size
batch.size measures batch size in total bytes instead of the number of messages. It controls how many bytes of data to collect before sending messages to the Kafka broker. Set this as high as possible, without exceeding available memory. The default value is 16384.
If you increase the size of your buffer, it might never get full. The producer sends the information eventually, based on other triggers, such as linger time in milliseconds. Setting the batch size too high can waste memory, but it does not increase latency.
If your producer is sending all the time, you are probably getting the best throughput possible. If the producer is often idle, you might not be writing enough data to warrant the current allocation of resources.
Linger Time
linger.ms sets the maximum time to buffer data in asynchronous mode. For example, a setting of 100 means that the producer batches up to 100 ms of messages and sends them at once. This improves throughput, but the buffering adds message delivery latency.
By default, the producer does not wait. It sends the buffer any time data is available.
Instead of sending immediately, you can set linger.ms to 5 and send more messages in one batch. This would reduce the number of requests sent, but would add up to 5 milliseconds of latency to records sent, even if the load on the system does not warrant the delay.
The farther away the broker is from the producer, the more overhead required to send messages. Increase linger.ms for higher latency and higher throughput in your producer.
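The following minimal Java producer sketch pulls these pieces together: it sets batch.size and linger.ms and uses the future returned by send(). The broker address, topic name, and parameter values are illustrative placeholders, not tuned recommendations.

import java.util.Properties;
import java.util.concurrent.Future;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

public class TunedProducer {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Placeholder broker address; replace with your cluster
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");
        // Collect up to 64 KB per batch (default is 16384 bytes)
        props.put(ProducerConfig.BATCH_SIZE_CONFIG, 65536);
        // Wait up to 5 ms for a batch to fill before sending
        props.put(ProducerConfig.LINGER_MS_CONFIG, 5);

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // send() is asynchronous and returns a future immediately;
            // the record is buffered and shipped when the batch is ready.
            Future<RecordMetadata> future =
                    producer.send(new ProducerRecord<>("test-topic", "key", "value"));
            // Blocking on the future per record gives the low-latency,
            // low-throughput pattern described above.
            RecordMetadata metadata = future.get();
            System.out.printf("partition=%d offset=%d%n",
                    metadata.partition(), metadata.offset());
        }
    }
}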
Tuning Kafka Brokers
Topics are divided into partitions. Each partition has a leader, and most partitions have multiple replicas; all writes go through the leader. When the leaders are not balanced properly, one broker might be overworked compared to others. For more information on load balancing, see Partitions and Memory Usage.
Depending on your system and how critical your data is, you want to be sure that you have sufficient replication sets to preserve your data. Cloudera recommends starting with one partition per physical storage disk and one consumer per partition.
Tuning Kafka Consumers
Consumers can create throughput issues on the other side of the pipeline. The maximum number of consumers for a topic is equal to the number of partitions. You need enough partitions to handle all the consumers needed to keep up with the producers.
Consumers in the same consumer group split the partitions among them. Adding more consumers to a group can enhance performance. Adding more consumer groups does not affect performance.
How you use the replica.high.watermark.checkpoint.interval.ms property can affect throughput. When reading from a partition, you can mark the last point where you read information. That way, if you have to go back and locate missing data, you have a checkpoint from which to move forward without having to reread prior data. If you set the checkpoint watermark for every event, you will never lose a message, but it significantly impacts performance. If, instead, you set it to check the offset every hundred messages, you have a margin of safety with much less impact on throughput.
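For example, to checkpoint less frequently and reduce the overhead described above, you might raise the interval in the broker configuration (the value is illustrative):

replica.high.watermark.checkpoint.interval.ms=10000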
Configuring JMX Ephemeral Ports
Kafka uses two high-numbered ephemeral ports for JMX. These ports are listed when you view netstat -anp information for the Kafka Broker process.
You can change the number for the first port by adding a command similar to -Dcom.sun.management.jmxremote.rmi.port=<port number> to the field Additional Broker Java Options (broker_java_opts) in Cloudera Manager. The JMX_PORT configuration maps to com.sun.management.jmxremote.port by default.
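For example, to pin the first JMX port to a fixed value, you might add the following to Additional Broker Java Options (the port number is illustrative):

-Dcom.sun.management.jmxremote.rmi.port=9394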
The second ephemeral port used for JMX communication is implemented for the JRMP protocol and cannot be changed.
Quotas
For a quick video introduction to quotas, see tl;dr: Quotas.
In CDK 2.0 and higher Powered By Apache Kafka, Kafka can enforce quotas on produce and fetch requests. Producers and consumers can use very high volumes of data. This can monopolize broker resources, cause network saturation, and generally deny service to other clients and the brokers themselves. Quotas protect against these issues and are important for large, multi-tenant clusters where a small set of clients using high volumes of data can degrade the user experience.
Quotas are byte-rate thresholds, defined per client ID. A client ID logically identifies an application making a request. A single client ID can span multiple producer and consumer instances. The quota is applied for all instances as a single entity: For example, if a client ID has a produce quota of 10 MB/s, that quota is shared across all instances with that same ID.
When running Kafka as a service, quotas can enforce API limits. By default, each unique client ID receives a fixed quota in bytes per second, as configured by the cluster (quota.producer.default, quota.consumer.default). This quota is defined on a per-broker basis. Each client can publish or fetch a maximum of X bytes per second per broker before it gets throttled.
The broker does not return an error when a client exceeds its quota, but instead attempts to slow the client down. The broker computes the amount of delay needed to bring a client under its quota and delays the response for that amount of time. This approach keeps the quota violation transparent to clients (outside of client-side metrics). This also prevents clients from having to implement special backoff and retry behavior.
Setting Quotas
You can override the default quota for client IDs that need a higher or lower quota. The mechanism is similar to per-topic log configuration overrides. Write your client ID overrides to ZooKeeper under /config/clients. All brokers read the overrides, which are effective immediately. You can change quotas without having to do a rolling restart of the entire cluster.
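For example, if your release includes the kafka-configs.sh utility, an override can be written with a command like the following sketch (the client ID and byte rates are illustrative):

bin/kafka-configs.sh --zookeeper localhost:2181 --alter \
    --add-config 'producer_byte_rate=20971520,consumer_byte_rate=20971520' \
    --entity-type clients --entity-name clientA

This stores the override under /config/clients/clientA in ZooKeeper, raising that client's produce and fetch quotas to 20 MB/s.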
By default, each client ID receives an unlimited quota. The following configuration sets the default quota per producer and consumer client ID to 10 MB/s.
quota.producer.default=10485760
quota.consumer.default=10485760
To set quotas using Cloudera Manager, open the Kafka Configuration page and search for Quota. Use the fields provided to set the Default Consumer Quota or Default Producer Quota. For more information, see Modifying Configuration Properties Using Cloudera Manager.
Setting User Limits for Kafka
Kafka brokers keep many files open at the same time. If a broker exceeds the operating system's limit on open file descriptors, it can fail with errors such as the following in its logs:

ERROR Error in acceptor (kafka.network.Acceptor)
java.io.IOException: Too many open files

Cloudera recommends setting the value to a relatively high starting point, such as 32,768.
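One common way to raise the limit on Linux is through /etc/security/limits.conf; the kafka user name below is illustrative and should match the account the broker runs as:

kafka    soft    nofile    32768
kafka    hard    nofile    32768

After the broker process is restarted under the new limit, you can confirm it with ulimit -n in the broker's environment.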
You can monitor the number of file descriptors in use from Cloudera Manager:
- Go to the Kafka service.
- Select a Kafka Broker.
- Open Charts Library > Process Resources and scroll down to the File Descriptors chart.
See http://www.cloudera.com/documentation/enterprise/latest/topics/cm_dg_view_charts.html.