Kafka Tools
References:
https://cwiki.apache.org/confluence/display/KAFKA/System+Tools
https://cwiki.apache.org/confluence/display/KAFKA/Replication+tools
http://kafka.apache.org/documentation.html#quickstart
http://kafka.apache.org/documentation.html#operations
Kafka ships with a set of fairly powerful tools; for convenience, this post collects the ones that are needed most often.
Starting and stopping the Kafka server
- bin/kafka-server-start.sh config/server.properties
- bin/kafka-server-stop.sh
- JMX_PORT=9999 nohup bin/kafka-server-start.sh config/server.properties &
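A quick sanity check after starting is to confirm the broker registered itself in Zookeeper (a minimal sketch; it assumes zookeeper-shell.sh from the Kafka distribution and ZK on localhost:2181):
- # each live broker shows up as an ephemeral node under /brokers/ids
- bin/zookeeper-shell.sh localhost:2181 ls /brokers/ids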
Topic management
- bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
- bin/kafka-topics.sh --list --zookeeper localhost:2181
Describe the details of a topic
- bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic test
Alter the partition count of a topic (it can only be increased)
- bin/kafka-topics.sh --alter --zookeeper localhost:2181 --partitions 3 --topic test
Topic deletion is not officially supported until 0.8.2, which is currently a beta release:
- /usr/local/rds/kafka/bin/kafka-topics.sh --delete --topic topic_name --zookeeper localhost:2181
Note that delete.topic.enable=true must be set in the broker configuration.
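A hedged sketch of turning that flag on (it assumes a writable config/server.properties; the broker must be restarted to pick up the change):
- # append the flag if it is not already set
- grep -q delete.topic.enable config/server.properties || echo "delete.topic.enable=true" >> config/server.properties
- # restart the broker so the setting takes effect
- bin/kafka-server-stop.sh
- bin/kafka-server-start.sh config/server.properties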
List problematic partitions
- bin/kafka-topics.sh --describe --zookeeper localhost:2181 --unavailable-partitions --topic test
Per-topic configuration overrides
- > bin/kafka-topics.sh --zookeeper localhost:2181 --create --topic my-topic --partitions 1 --replication-factor 1 --config max.message.bytes=64000 --config flush.messages=1
- > bin/kafka-topics.sh --zookeeper localhost:2181 --alter --topic my-topic --config max.message.bytes=128000
- > bin/kafka-topics.sh --zookeeper localhost:2181 --alter --topic my-topic --deleteConfig max.message.bytes
Expanding the cluster
Adding a broker to the cluster is simple, but partitions of existing topics are not migrated automatically; the migration has to be triggered by hand. Kafka does, however, provide a convenient tool for it.
--generate: produce a candidate migration plan
Given a list of topics and a list of brokers, the tool proposes a plan that moves the topics entirely onto the new brokers.
- > cat topics-to-move.json
- {"topics": [{"topic": "foo1"},
-             {"topic": "foo2"}],
-  "version":1
- }
- > bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 --topics-to-move-json-file topics-to-move.json --broker-list "5,6" --generate
- Current partition replica assignment
- {"version":1,
-  "partitions":[{"topic":"foo1","partition":2,"replicas":[1,2]},
-                {"topic":"foo1","partition":0,"replicas":[3,4]},
-                {"topic":"foo2","partition":2,"replicas":[1,2]},
-                {"topic":"foo2","partition":0,"replicas":[3,4]},
-                {"topic":"foo1","partition":1,"replicas":[2,3]},
-                {"topic":"foo2","partition":1,"replicas":[2,3]}]
- }
- Proposed partition reassignment configuration
- {"version":1,
-  "partitions":[{"topic":"foo1","partition":2,"replicas":[5,6]},
-                {"topic":"foo1","partition":0,"replicas":[5,6]},
-                {"topic":"foo2","partition":2,"replicas":[5,6]},
-                {"topic":"foo2","partition":0,"replicas":[5,6]},
-                {"topic":"foo1","partition":1,"replicas":[5,6]},
-                {"topic":"foo2","partition":1,"replicas":[5,6]}]
- }
The tool prints both the current assignment and the proposed migration plan. It is worth saving both, since the current assignment is exactly what a rollback needs, as sketched below.
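A rollback workflow can be scripted around the tool's output (a sketch, not a built-in feature of the tool; plan.txt and rollback.json are hypothetical files):
- # capture the generated plan, including the current assignment
- bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 --topics-to-move-json-file topics-to-move.json --broker-list "5,6" --generate > plan.txt
- # copy the "Current partition replica assignment" JSON from plan.txt into rollback.json;
- # feeding it back to --execute restores the original assignment
- bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 --reassignment-json-file rollback.json --execute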
--execute: start the migration
- > bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 --reassignment-json-file expand-cluster-reassignment.json --execute
- Current partition replica assignment
- {"version":1,
-  "partitions":[{"topic":"foo1","partition":2,"replicas":[1,2]},
-                {"topic":"foo1","partition":0,"replicas":[3,4]},
-                {"topic":"foo2","partition":2,"replicas":[1,2]},
-                {"topic":"foo2","partition":0,"replicas":[3,4]},
-                {"topic":"foo1","partition":1,"replicas":[2,3]},
-                {"topic":"foo2","partition":1,"replicas":[2,3]}]
- }
- Save this to use as the --reassignment-json-file option during rollback
- Successfully started reassignment of partitions
- {"version":1,
-  "partitions":[{"topic":"foo1","partition":2,"replicas":[5,6]},
-                {"topic":"foo1","partition":0,"replicas":[5,6]},
-                {"topic":"foo2","partition":2,"replicas":[5,6]},
-                {"topic":"foo2","partition":0,"replicas":[5,6]},
-                {"topic":"foo1","partition":1,"replicas":[5,6]},
-                {"topic":"foo2","partition":1,"replicas":[5,6]}]
- }
--verify: check the current status of the migration
- > bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 --reassignment-json-file expand-cluster-reassignment.json --verify
- Status of partition reassignment:
- Reassignment of partition [foo1,0] completed successfully
- Reassignment of partition [foo1,1] is in progress
- Reassignment of partition [foo1,2] is in progress
- Reassignment of partition [foo2,0] completed successfully
- Reassignment of partition [foo2,1] completed successfully
- Reassignment of partition [foo2,2] completed successfully
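Because --verify only prints a snapshot, waiting for completion can be done by polling it in a loop (a sketch, assuming "in progress" appears in the output only while a reassignment is still running):
- while bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 --reassignment-json-file expand-cluster-reassignment.json --verify | grep -q "in progress"; do
-     sleep 10
- done
- echo "reassignment finished"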
Migrating selected replicas of selected partitions
The following moves partition 0 of topic foo1 to brokers 5,6 and partition 1 of topic foo2 to brokers 2,3.
- > cat custom-reassignment.json
- {"version":1,"partitions":[{"topic":"foo1","partition":0,"replicas":[5,6]},{"topic":"foo2","partition":1,"replicas":[2,3]}]}
- > bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 --reassignment-json-file custom-reassignment.json --execute
- Current partition replica assignment
- {"version":1,
-  "partitions":[{"topic":"foo1","partition":0,"replicas":[1,2]},
-                {"topic":"foo2","partition":1,"replicas":[3,4]}]
- }
- Save this to use as the --reassignment-json-file option during rollback
- Successfully started reassignment of partitions
- {"version":1,
-  "partitions":[{"topic":"foo1","partition":0,"replicas":[5,6]},
-                {"topic":"foo2","partition":1,"replicas":[2,3]}]
- }
Decommissioning brokers
The current version has no support for planning a decommission; that only arrives in 0.8.2. Decommissioning requires draining all replicas off the broker first.
Increasing the replication factor
Here the replica count of partition 0 grows from 1 to 3; the existing replica sits on broker 5, and new replicas are added on brokers 6 and 7.
- > cat increase-replication-factor.json
- {"version":1,
-  "partitions":[{"topic":"foo","partition":0,"replicas":[5,6,7]}]}
- > bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 --reassignment-json-file increase-replication-factor.json --execute
- Current partition replica assignment
- {"version":1,
-  "partitions":[{"topic":"foo","partition":0,"replicas":[5]}]}
- Save this to use as the --reassignment-json-file option during rollback
- Successfully started reassignment of partitions
- {"version":1,
-  "partitions":[{"topic":"foo","partition":0,"replicas":[5,6,7]}]}
Producer console
- > bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
- This is a message
- This is another message
Anything typed afterwards is sent as a message to the topic on the broker.
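The console producer also reads stdin until EOF, so a file of messages (one per line) can be replayed into a topic; a minimal sketch, where messages.txt is a hypothetical file:
- cat messages.txt | bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test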
Consumer console
- bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic test --from-beginning
This reads the topic from the beginning, and all data can be re-read repeatedly. I wondered why every run can replay everything; it turns out a random group id is generated on each run:
consumerProps.put("group.id","console-consumer-" + new Random().nextInt(100000))
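To make the console consumer resume instead of replaying, the group id can be pinned via a properties file (a sketch; whether the --consumer.config option exists depends on the Kafka version, so treat it as an assumption):
- # my-consumer.properties is a hypothetical file fixing the group id
- printf 'group.id=my-console-group\nzookeeper.connect=localhost:2181\n' > my-consumer.properties
- bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic test --consumer.config my-consumer.properties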
Consumer Offset Checker
Displays the offset status of a consumer group. --group is required; when --topic is omitted it defaults to all topics.
Displays the: Consumer Group, Topic, Partitions, Offset, logSize, Lag, Owner for the specified set of Topics and Consumer Group
bin/kafka-run-class.sh kafka.tools.ConsumerOffsetChecker
required argument: [group]
Option Description
------ -----------
--broker-info Print broker info
--group Consumer group.
--help Print this message.
--topic Comma-separated list of consumer
topics (all topics if absent).
--zkconnect ZooKeeper connect string. (default: localhost:2181)
Example,
bin/kafka-run-class.sh kafka.tools.ConsumerOffsetChecker --group pv
Group Topic Pid Offset logSize Lag Owner
pv page_visits 0 21 21 0 none
pv page_visits 1 19 19 0 none
pv page_visits 2 20 20 0 none
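The per-partition Lag column also lends itself to a scripted total-lag check (a hedged awk sketch; the field positions assume the exact output layout shown above):
- bin/kafka-run-class.sh kafka.tools.ConsumerOffsetChecker --group pv --zkconnect localhost:2181 | awk 'NR>1 {lag += $6} END {print "total lag:", lag}'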
Export Zookeeper Offsets
Dumps the offset information held in ZK to a file, in the format below.
A utility that retrieves the offsets of broker partitions in ZK and prints to an output file in the following format:
/consumers/group1/offsets/topic1/1-0:286894308
/consumers/group1/offsets/topic1/2-0:284803985
bin/kafka-run-class.sh kafka.tools.ExportZkOffsets
required argument: [zkconnect]
Option Description
------ -----------
--group Consumer group.
--help Print this message.
--output-file Output file
--zkconnect ZooKeeper connect string. (default: localhost:2181)
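A usage sketch built from the options listed above (the output path is an arbitrary choice):
- bin/kafka-run-class.sh kafka.tools.ExportZkOffsets --zkconnect localhost:2181 --group pv --output-file /tmp/pv-offsets.txt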
Update Offsets In Zookeeper
This one is quite useful for replays. The Kafka documentation is unhelpfully vague here; I only understood how to use the tool after reading the source.
A utility that updates the offset of every broker partition to the offset of earliest or latest log segment file, in ZK.
bin/kafka-run-class.sh kafka.tools.UpdateOffsetsInZK
USAGE: kafka.tools.UpdateOffsetsInZK$ [earliest | latest] consumer.properties topic
Example,
bin/kafka-run-class.sh kafka.tools.UpdateOffsetsInZK earliest config/consumer.properties page_visits
Group Topic Pid Offset logSize Lag Owner
pv page_visits 0 0 21 21 none
pv page_visits 1 0 19 19 none
pv page_visits 2 0 20 20 none
You can see the offsets have been reset to 0, and Lag now equals logSize.
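For the tool to know which group to reset, the consumer.properties passed in must at least name the group and the ZK ensemble; a minimal sketch of the assumed entries:
- group.id=pv
- zookeeper.connect=localhost:2181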
A more direct approach is to look inside Zookeeper itself: connect with zkCli.sh and browse with ls.
Broker Node Registry
- /brokers/ids/[0...N] --> host:port (ephemeral node)
Broker Topic Registry
- /brokers/topics/[topic]/[0...N] --> nPartitions (ephemeral node)
Consumer Id Registry
- /consumers/[group_id]/ids/[consumer_id] --> {"topic1": #streams, ..., "topicN": #streams} (ephemeral node)
Consumer Offset Tracking
- /consumers/[group_id]/offsets/[topic]/[broker_id-partition_id] --> offset_counter_value (persistent node)
Partition Owner registry
- /consumers/[group_id]/owners/[topic]/[broker_id-partition_id] --> consumer_node_id (ephemeral node)
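Putting these paths to use, a zkCli.sh session might look like this (a sketch reusing the pv / page_visits example; the offset node name follows the broker_id-partition_id format listed above):
- bin/zkCli.sh -server localhost:2181
- # inside the shell:
- ls /brokers/ids
- ls /consumers/pv/offsets/page_visits
- get /consumers/pv/offsets/page_visits/1-0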
Zookeeper简介 在上班之前都不知道有这样一个东西,在开始说假死脑裂之前先说说Zookeeper吧. Zookeeper zookeeper是一个分布式应用程序的协调服务.它是一个为分布式应用提 ...