Kafka Migration and Expansion
Reference (official documentation):
http://kafka.apache.org/documentation.html#basic_ops_cluster_expansion
https://cwiki.apache.org/confluence/display/KAFKA/Replication+tools#Replicationtools-6.ReassignPartitionsTool
Notes:
When expanding a Kafka cluster, two requirements need to be met (both are handled with the kafka-reassign-partitions.sh tool, sketched after this list):
- Migrate specified topics onto the nodes newly added to the cluster.
- Migrate specified partitions of a topic onto the newly added nodes.
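Both operations follow the same generate → execute → verify workflow. A minimal sketch of that workflow (the ZooKeeper address and file names here are placeholders, not values from this article):
# 1) generate a candidate assignment for the topics listed in topics-to-move.json
bin/kafka-reassign-partitions.sh --zookeeper <zk-host:port> --topics-to-move-json-file topics-to-move.json --broker-list "104,105,106" --generate
# 2) apply the assignment saved in reassignment.json
bin/kafka-reassign-partitions.sh --zookeeper <zk-host:port> --reassignment-json-file reassignment.json --execute
# 3) check progress; can be re-run until all partitions report "completed successfully"
bin/kafka-reassign-partitions.sh --zookeeper <zk-host:port> --reassignment-json-file reassignment.json --verify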
1. Migrating a topic to the newly added nodes
Suppose a Kafka cluster is currently running three brokers, with broker.id 101, 102 and 103. Because business data has surged, three more brokers need to be added, with broker.id 104, 105 and 106. The goal is to migrate push-token-topic onto the new nodes.
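Before any reassignment is run, the three new brokers must already have joined the cluster. A minimal server.properties sketch for one of the new brokers (the directory and port values below are illustrative assumptions, not taken from this article):
# server.properties on the new machine
broker.id=104                          # use 105 and 106 on the other two new brokers
zookeeper.connect=192.168.2.225:2183   # the same ZooKeeper ensemble the existing brokers use
log.dirs=/data/kafka-logs              # local data directory (assumed path)
port=9092                              # listener port (older, ZooKeeper-based Kafka versions)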
1. Create the file migration-push-token-topic.json with the following content:
{
  "topics":
  [
    {
      "topic": "push-token-topic"
    }
  ],
  "version": 1
}
2. Run the tool to generate a candidate assignment:
- root@localhost:$ ./bin/kafka-reassign-partitions.sh --zookeeper 192.168.2.225:2183 --topics-to-move-json-file migration-push-token-topic.json --broker-list "104,105,106" --generate
The tool prints the current partition assignment (save it for backup and rollback) and a proposed new assignment. Note that the sample output below was captured for a topic named cluster-switch-topic:
Current partition replica assignment
{"version":1,"partitions":[{"topic":"cluster-switch-topic","partition":10,"replicas":[8]},{"topic":"cluster-switch-topic","partition":5,"replicas":[4]},{"topic":"cluster-switch-topic","partition":3,"replicas":[5]},{"topic":"cluster-switch-topic","partition":4,"replicas":[5]},{"topic":"cluster-switch-topic","partition":9,"replicas":[5]},{"topic":"cluster-switch-topic","partition":1,"replicas":[5]},{"topic":"cluster-switch-topic","partition":11,"replicas":[4]},{"topic":"cluster-switch-topic","partition":7,"replicas":[5]},{"topic":"cluster-switch-topic","partition":2,"replicas":[4]},{"topic":"cluster-switch-topic","partition":0,"replicas":[4]},{"topic":"cluster-switch-topic","partition":6,"replicas":[4]},{"topic":"cluster-switch-topic","partition":8,"replicas":[4]}]}
The proposed partition reassignment JSON, saved here as migration-topic-cluster-switch-topic.json:
{"version":1,"partitions":[{"topic":"cluster-switch-topic","partition":10,"replicas":[5]},{"topic":"cluster-switch-topic","partition":5,"replicas":[4]},{"topic":"cluster-switch-topic","partition":4,"replicas":[5]},{"topic":"cluster-switch-topic","partition":3,"replicas":[4]},{"topic":"cluster-switch-topic","partition":9,"replicas":[4]},{"topic":"cluster-switch-topic","partition":1,"replicas":[4]},{"topic":"cluster-switch-topic","partition":11,"replicas":[4]},{"topic":"cluster-switch-topic","partition":7,"replicas":[4]},{"topic":"cluster-switch-topic","partition":2,"replicas":[5]},{"topic":"cluster-switch-topic","partition":0,"replicas":[5]},{"topic":"cluster-switch-topic","partition":6,"replicas":[5]},{"topic":"cluster-switch-topic","partition":8,"replicas":[5]}]}
3. Execute the reassignment:
- root@localhost:$ bin/kafka-reassign-partitions.sh --zookeeper 192.168.2.225:2183 --reassignment-json-file migration-topic-cluster-switch-topic.json --execute
To check the status of the reassignment, run --verify with the same JSON file that was passed to --execute (the official documentation's example uses the file name expand-cluster-reassignment.json):
- bin/kafka-reassign-partitions.sh --zookeeper 192.168.2.225:2183 --reassignment-json-file expand-cluster-reassignment.json --verify
- Reassignment of partition [push-token-topic,0] completed successfully // the move has finished
- Reassignment of partition [push-token-topic,1] is in progress // data is still being moved
- Reassignment of partition [push-token-topic,2] is in progress
- Reassignment of partition [push-token-topic,1] completed successfully
- Reassignment of partition [push-token-topic,2] completed successfully
Doing this does not disrupt the existing topic traffic on the cluster.
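Because --verify can be re-run at any time, progress can be watched with a simple polling loop; a sketch, assuming the same ZooKeeper address and the reassignment file used with --execute above:
# re-run --verify every 30 seconds until no partition is reported as "in progress"
while ./bin/kafka-reassign-partitions.sh --zookeeper 192.168.2.225:2183 --reassignment-json-file migration-topic-cluster-switch-topic.json --verify | grep -q "in progress"; do
    sleep 30
done
echo "reassignment finished"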
2. Changing a topic's replication factor
Suppose push-token-topic initially has a single replica and, to improve availability, we want to increase its replication factor. This is done by listing the full desired replica set for each partition in a reassignment JSON file.
The file replicas-update-push-token-topic.json looks like the following (the sample below is for a topic named log.mobile_nginx and assigns three replicas to each partition):
{
"partitions":
[
{
"topic": "log.mobile_nginx",
"partition": 0,
"replicas": [101,102,104]
},
{
"topic": "log.mobile_nginx",
"partition": 1,
"replicas": [102,103,106]
}
],
"version":1
}
Then execute the reassignment:
- root@localhost:$ ./bin/kafka-reassign-partitions.sh --zookeeper 192.168.2.225:2183 --reassignment-json-file replicas-update-push-token-topic.json --execute
After execution, the tool prints both the current and the proposed partition assignments. Check the status with --verify:
- bin/kafka-reassign-partitions.sh --zookeeper 192.168.2.225:2181 --reassignment-json-file replicas-update-push-token-topic.json --verify
The output looks like the following:
Status of partition reassignment:
Reassignment of partition [log.mobile_nginx,0] completed successfully
Reassignment of partition [log.mobile_nginx,1] completed successfully
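As a quick cross-check, the resulting replica assignment can also be inspected with kafka-topics.sh (same ZooKeeper address as above); it prints the partition count, leader, replicas and ISR for each partition:
./bin/kafka-topics.sh --zookeeper 192.168.2.225:2183 --describe --topic log.mobile_nginx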
3. Custom partition assignment and migration
1. The first step is to hand-craft the custom reassignment plan in a JSON file:
> cat custom-reassignment.json
{"version":1,"partitions":[{"topic":"foo1","partition":0,"replicas":[5,6]},{"topic":"foo2","partition":1,"replicas":[2,3]}]}
2. Then use the JSON file with the --execute option to start the reassignment process:
> bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 --reassignment-json-file custom-reassignment.json --execute
Current partition replica assignment
{"version":1,
"partitions":[{"topic":"foo1","partition":0,"replicas":[1,2]},
{"topic":"foo2","partition":1,"replicas":[3,4]}]
}
Save this to use as the --reassignment-json-file option during rollback
Successfully started reassignment of partitions
{"version":1,
"partitions":[{"topic":"foo1","partition":0,"replicas":[5,6]},
{"topic":"foo2","partition":1,"replicas":[2,3]}]
}
3. The --verify option can be used with the tool to check the status of the partition reassignment. Note that the same custom-reassignment.json (used with the --execute option) should be used with the --verify option:
> bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 --reassignment-json-file custom-reassignment.json --verify
Status of partition reassignment:
Reassignment of partition [foo1,0] completed successfully
Reassignment of partition [foo2,1] completed successfully
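If the cluster runs Kafka 0.10.1 or newer, kafka-reassign-partitions.sh also accepts a --throttle option that caps replication bandwidth while data is being moved; a sketch, assuming the same file and ZooKeeper address as above:
# cap replication traffic for this reassignment at 50000000 bytes/sec
> bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 --reassignment-json-file custom-reassignment.json --execute --throttle 50000000
Running --verify after the reassignment completes removes the throttle, so it should not be skipped when a throttle was set.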
4. Increasing the number of partitions of a topic
a. First increase the number of partitions. For example, push-token-topic starts with 12 partitions and needs to grow to 15:
root@localhost:$ ./bin/kafka-topics.sh --zookeeper 192.168.2.225:2183 --alter --partitions 15 --topic push-token-topic
b. Then assign replicas for the newly added partitions:
root@localhost:$ ./bin/kafka-reassign-partitions.sh --zookeeper 192.168.2.225:2183
--reassignment-json-file partitions-extension-push-token-topic.json --execute
The file partitions-extension-push-token-topic.json contains the following:
{
"partitions":
[
{
"topic": "push-token-topic",
"partition": 12,
"replicas": [101,102]
},
{
"topic": "push-token-topic",
"partition": 13,
"replicas": [103,104]
},
{
"topic": "push-token-topic",
"partition": 14,
"replicas": [105,106]
}
],
"version":1
}
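As in the earlier sections, the reassignment of the new partitions can be checked with --verify, and the final layout confirmed with kafka-topics.sh --describe (same addresses and file as above):
root@localhost:$ ./bin/kafka-reassign-partitions.sh --zookeeper 192.168.2.225:2183 --reassignment-json-file partitions-extension-push-token-topic.json --verify
root@localhost:$ ./bin/kafka-topics.sh --zookeeper 192.168.2.225:2183 --describe --topic push-token-topic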