Kafka Tuning Recommendations
- Kafka Brokers per Server
- Recommend 1 Kafka broker per server. Kafka is not only disk-intensive but can also be network-intensive, so if you run multiple brokers on a single host, network I/O can become the bottleneck. Running a single broker per host and building the cluster across hosts also gives you better availability.
- Increase Disks allocated to Kafka Broker
- Kafka parallelism is largely driven by the number of disks and partitions per topic.
- From the Kafka documentation: “We recommend using multiple drives to get good throughput and not sharing the same drives used for Kafka data with application logs or other OS filesystem activity to ensure good latency. As of 0.8 you can format and mount each drive as its own directory. If you configure multiple data directories partitions will be assigned round-robin to data directories. Each partition will be entirely in one of the data directories. If data is not well balanced among partitions this can lead to load imbalance between disks.”
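- As an illustration of multiple data directories, a server.properties entry spreading data across several mount points might look like the sketch below (the mount paths are purely illustrative):

```properties
# server.properties — one Kafka data directory per physical disk (paths are examples)
log.dirs=/disk1/kafka-logs,/disk2/kafka-logs,/disk3/kafka-logs
```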
- Number of Threads
- Make sure you set num.io.threads to at least the number of disks you are going to use; the default is 8. It can be higher than the number of disks.
- Set num.network.threads higher based on the number of concurrent producers, consumers, and the replication factor (see the example below).
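- A minimal sketch of these two settings in server.properties, assuming a broker with 12 data disks (the values are illustrative starting points, not definitive recommendations):

```properties
# server.properties — thread pools sized for ~12 data disks (example values)
num.io.threads=12       # at least one I/O thread per data disk
num.network.threads=8   # raise with more concurrent producers/consumers and higher replication
```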
- Number of partitions
- Ideally you want to set the default number of partitions (num.partitions) to at least n-1, where n is the number of brokers. This breaks up the write workload and allows greater parallelism on the consumer side. Remember that Kafka does total ordering within a partition, not across partitions, so make sure you partition intelligently on the producer side to parcel up units of work that might span multiple messages/events.
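- For example, on a 4-broker cluster the broker default might be set as follows (the value is an assumption for the example; explicitly created topics can still override it per topic):

```properties
# server.properties — default partition count for auto-created topics (example for a 4-broker cluster)
num.partitions=3
```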
- Message Size
- Kafka is designed for small messages. I recommend you avoid using Kafka for larger messages. If that is not avoidable, there are several ways to go about sending larger messages (e.g., around 1 MB). If the original message is JSON, XML, or plain text, compression is the best option to reduce its size. Large messages will affect your performance and throughput. Check your topic partition count and replica.fetch.max.bytes to make sure the combination doesn't go over your physical RAM.
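- For text-based payloads, enabling compression on the producer is usually a one-line change. A minimal sketch, assuming the newer Java producer (the 0.8-era Scala producer uses compression.codec instead), with the codec chosen only as an example:

```properties
# producer.properties — compress text-based payloads before they reach the broker
compression.type=gzip   # snappy is a lower-CPU alternative; choose based on your CPU/size trade-off
```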
- Large Messages
- Another approach is to break the message into smaller chunks and use the same message key so every chunk is sent to the same partition. This way you are sending small messages, and they can be re-assembled on the consumer side.
- Broker side:
- message.max.bytes defaults to 1000000 (~1 MB). This is the maximum size of a message that a Kafka broker will accept.
- replica.fetch.max.bytes defaults to 1 MB. This has to be at least as big as message.max.bytes, otherwise brokers will not be able to replicate large messages.
- Consumer side:
- fetch.message.max.bytes defaults to 1 MB. This is the maximum size of a message that a consumer can fetch. It should be equal to or larger than message.max.bytes (see the combined example below).
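- Putting the three settings together, a sketch for allowing messages up to roughly 5 MB might look like this (the 5 MB figure is purely illustrative; the key point is keeping the three values consistent with each other):

```properties
# server.properties (broker) — accept and replicate messages up to ~5 MB
message.max.bytes=5242880
replica.fetch.max.bytes=5242880   # must be >= message.max.bytes or replication of large messages fails

# consumer configuration — must be able to fetch the largest accepted message
fetch.message.max.bytes=5242880   # >= message.max.bytes
```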
- Kafka Heap Size
- By default the Kafka broker JVM heap is set to 1 GB; this can be increased using the Ambari kafka-env template. When you are sending large messages, JVM garbage collection can be an issue. Try to keep the Kafka heap size below 4 GB.
- Example: add the following settings in kafka-env.sh:

```sh
export KAFKA_HEAP_OPTS="-Xmx16g -Xms16g"
export KAFKA_JVM_PERFORMANCE_OPTS="-XX:MetaspaceSize=96m -XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 -XX:G1HeapRegionSize=16M -XX:MinMetaspaceFreeRatio=50 -XX:MaxMetaspaceFreeRatio=80"
```
- Dedicated Zookeeper
- Have a separate ZooKeeper cluster dedicated to Storm/Kafka operations. This will improve Storm/Kafka's performance for writing offsets to ZooKeeper, since it will not be competing with HBase or other components for read/write access.
- ZK on separate nodes from Kafka Broker
- Do not install ZooKeeper nodes on the same node as a Kafka broker if you want optimal Kafka performance: both Kafka and ZooKeeper are disk I/O intensive.
- Disk Tuning
- Please review the Kafka documentation on filesystem tuning parameters here.
- Disable transparent huge pages (THP) according to the documentation here (see the sketch after this list).
- Either ext4 or xfs filesystems are recommended for performance benefit.
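- A common way to disable THP at runtime is shown below; the exact sysfs path differs between distributions (e.g. RHEL 6 uses redhat_transparent_hugepage), so treat this as a sketch and make the change persistent through your init or grub configuration:

```sh
# Disable transparent huge pages until the next reboot (run as root)
echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/defrag
```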
- Minimal replication
- If you are doing replication, start with 2x rather than 3x for Kafka clusters larger than 3 machines. Alternatively, use 2x even on a 3-node cluster if you are able to reprocess data upstream from your source.
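- For example, creating a topic with 2x replication using the standard CLI (the topic name, partition count, and ZooKeeper address are placeholders for this sketch):

```sh
# Create a topic with replication factor 2 (0.8.x-style CLI; names and host/port are examples)
bin/kafka-topics.sh --create --zookeeper zk1:2181 \
  --topic my-topic --partitions 6 --replication-factor 2
```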
- Avoid Cross Rack Kafka deployments
- Avoid cross-rack Kafka deployments for now, until Kafka 0.8.2; see: https://issues.apache.org/jira/browse/KAFKA-1215