TIPS FOR IMPROVING PERFORMANCE OF KAFKA PRODUCER
When we talk about the performance of the Kafka producer, we are really talking about two different things:
- latency: how much time passes from the moment KafkaProducer.send() is called until the message shows up on a Kafka broker.
- throughput: how many messages the producer can send to Kafka each second.
Many years ago, I was in a storage class taught by scalability expert James Morle. One of the students asked why we need to worry about both latency and throughput: after all, if processing a message takes 10ms (latency), then clearly throughput is limited to 100 messages per second. Looking at it this way, it may seem that lower latency == higher throughput. However, the relationship between latency and throughput is not that simple.
Let's start by agreeing that we are only talking about the new Kafka producer (the one in the org.apache.kafka.clients package). This keeps things simpler, and there's no reason to use the old producer at this point.
The Kafka producer can send messages in batches. Suppose that, due to network round-trip times, it takes 2ms to send a single Kafka message. Sending one message at a time gives us a latency of 2ms and a throughput of 500 messages per second. But suppose we are in no big hurry and are willing to wait a few milliseconds to send a larger batch. Say we wait 8ms and manage to accumulate 1000 messages: our latency is now 10ms, but our throughput is up to 100,000 messages per second! That's the main reason I love micro-batches so much. By adding a tiny delay (and 10ms is usually acceptable even for financial applications), our throughput is 200 times greater. This trade-off is not unique to Kafka, by the way; network and storage subsystems use this kind of micro-batching all the time.
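The arithmetic behind this trade-off can be sketched in a few lines. This is just back-of-the-envelope math using the hypothetical numbers from the text (2ms per round trip, an 8ms wait accumulating 1000 messages), not anything Kafka-specific:

```java
// Back-of-the-envelope math for the latency/throughput trade-off above.
public class BatchingMath {
    // Throughput in messages/second when each round trip takes
    // `roundTripMs` milliseconds and carries `batchSize` messages.
    static long throughputPerSecond(long batchSize, long roundTripMs) {
        return batchSize * 1000 / roundTripMs;
    }

    public static void main(String[] args) {
        // One message at a time: 2ms latency, 500 msg/s.
        System.out.println(throughputPerSecond(1, 2));
        // Wait 8ms to batch 1000 messages: 8 + 2 = 10ms latency, 100,000 msg/s.
        System.out.println(throughputPerSecond(1000, 8 + 2));
    }
}
```

The 200x improvement falls straight out of the division: 100,000 / 500 = 200.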
Sometimes latency and throughput interact in even funnier ways. One day Ted Malaska complained that with Flafka he could get 20ms latency when sending 100,000 messages per second, but a huge 1-3s latency when sending just 100 messages per second. This made no sense at all, until we remembered that, to save CPU, Flafka backs off and retries later when it doesn't find messages to read from Kafka. Backoff times started at 0.5s and steadily increased. Ted kindly improved Flume to avoid this issue in FLUME-2729.
Anyway, back to the Kafka producer. There are a few settings you can modify to improve its latency or throughput:
- batch.size – This is an upper limit on how much data the producer will attempt to batch before sending, specified in bytes (the default is 16K bytes, so 16 messages if each message is 1K in size). Kafka may send batches before this limit is reached (so latency doesn't change by modifying this parameter), but it will always send when the limit is reached. Therefore setting this limit too low will hurt throughput without improving latency. The main reason to set it low is lack of memory: Kafka will always allocate enough memory for the entire batch size, even if latency requirements cause it to send half-empty batches.
- linger.ms – How long the producer will wait before sending, in order to allow more messages to accumulate in the same batch. Normally the producer will not wait at all; it simply sends all the messages that accumulated while the previous send was in progress (2ms in the example above). But as we've discussed, sometimes we are willing to wait a bit longer to improve overall throughput at the expense of slightly higher latency, and in that case tuning linger.ms to a higher value makes sense. Note that if batch.size is low and the batch fills up before linger.ms time passes, the batch will be sent early, so it makes sense to tune batch.size and linger.ms together.
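Tuning the two together might look like the following sketch. The broker address and serializer classes are placeholders, and the specific values (a 1MB batch.size so that linger.ms decides when batches ship, plus the 10ms linger from the example above) are illustrative, not recommendations:

```java
import java.util.Properties;

// Sketch of a producer configuration that tunes batch.size and
// linger.ms together. Values are illustrative only.
public class ProducerConfigSketch {
    static Properties tunedConfig() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        // Make batch.size large enough (1MB here) that linger.ms, not the
        // size limit, decides when a batch is sent.
        props.put("batch.size", "1048576");
        // Wait up to 10ms to accumulate a larger batch.
        props.put("linger.ms", "10");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(tunedConfig().getProperty("linger.ms"));
    }
}
```

Remember from the batch.size discussion that the producer allocates memory for the full batch size per partition, so a large value like this trades memory for throughput.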
Other than tuning these parameters, you will want to avoid blocking on the future returned by the send method (i.e. waiting for the result from the Kafka brokers), and instead keep sending data to Kafka continuously. You can simply ignore the result (if successful delivery is not critical), but it's probably better to use a callback. You can find an example of how to do this in my github (look at the produceAsync method).
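The pattern can be illustrated without a live broker. In the sketch below, CompletableFuture stands in for the future that KafkaProducer.send() returns, and fakeSend() is a hypothetical stand-in for a real send; the point is the shape of the code, not the Kafka API itself:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of the "don't block on the send future" pattern.
// fakeSend() simulates a send that completes asynchronously.
public class AsyncSendPattern {
    static CompletableFuture<String> fakeSend(String message) {
        return CompletableFuture.supplyAsync(() -> "ack:" + message);
    }

    public static void main(String[] args) throws Exception {
        AtomicInteger acked = new AtomicInteger();

        // Anti-pattern: calling get() on each future serializes all sends.
        // Better: attach a callback and keep sending.
        CompletableFuture<?>[] inFlight = new CompletableFuture<?>[100];
        for (int i = 0; i < 100; i++) {
            inFlight[i] = fakeSend("msg-" + i)
                    .whenComplete((ack, err) -> {
                        if (err == null) acked.incrementAndGet();
                        // else: log or retry, as a real callback would
                    });
        }
        // Block only once, e.g. at shutdown, rather than per message.
        CompletableFuture.allOf(inFlight).join();
        System.out.println(acked.get());
    }
}
```

With the real producer the structure is the same: pass a Callback to send(), keep the loop hot, and only flush or close at the end.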
If sending is still slow and you are trying to understand what is going on, check whether the sender thread is fully utilized with jvisualvm (the thread is called kafka-producer-network-thread), or keep an eye on the average batch size metric. If you find that you can't fill the buffer fast enough and the sender is idle, you can try adding application threads that share the same producer, and increase throughput this way.
Another concern is that the producer will send all the batches that go to the same broker together whenever at least one of them is full; if you have one very busy topic and others that are less busy, you may see some skew in throughput this way.
Sometimes you will notice that the producer's performance doesn't scale as you add more partitions to a topic. This can happen because, as we mentioned, there is a send buffer for each partition. When you add more partitions, you have more send buffers, so the configuration you previously set to keep the buffers full (number of threads, linger.ms) may no longer be sufficient, and buffers will be sent half-empty (check the batch sizes). In that case you will need to add threads or increase linger.ms to improve utilization and scale your throughput.
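A quick calculation shows why the batches shrink. Assuming messages are spread evenly across partitions (and using illustrative numbers, not measurements), the average number of messages that accumulate per partition within one linger window is:

```java
// Rough math for why adding partitions can leave batches half-empty:
// each partition has its own buffer, so the per-partition fill rate
// drops as the partition count grows.
public class PartitionScalingMath {
    // Average messages accumulated per batch in one partition's buffer,
    // assuming an even spread across partitions.
    static double messagesPerBatch(double totalMsgsPerSec,
                                   int partitions, double lingerMs) {
        return (totalMsgsPerSec / partitions) * (lingerMs / 1000.0);
    }

    public static void main(String[] args) {
        // 100,000 msg/s with linger.ms=10: with 10 partitions each batch
        // averages 100 messages; with 100 partitions, only 10.
        System.out.println(messagesPerBatch(100_000, 10, 10));
        System.out.println(messagesPerBatch(100_000, 100, 10));
    }
}
```

If the result drops well below what batch.size allows, you are shipping mostly-empty batches, which is exactly the symptom the batch size metric will show.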
Got more tips on ingesting data into Kafka? Comments are welcome!