Existing environment

All three existing nodes run CentOS Linux release 7.3.1611 (Core), with Kafka installed at /opt/kafka_2.12-1.0.0 and symlinked as /opt/kafka:

Hostname            IP address
sht-sgmhadoopdn-01  172.16.101.58
sht-sgmhadoopdn-02  172.16.101.59
sht-sgmhadoopdn-03  172.16.101.60

Node being added to the cluster

sht-sgmhadoopdn-04(172.16.101.66)

Procedure

I. Configure the new node with the same environment as the existing cluster nodes
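A minimal sketch of this step, assuming passwordless SSH from sht-sgmhadoopdn-01, the same JDK already installed on the new node, and /etc/hosts entries for sht-sgmhadoopdn-04 present on all nodes:

# rsync -az /opt/kafka_2.12-1.0.0 sht-sgmhadoopdn-04:/opt/
# ssh sht-sgmhadoopdn-04 "ln -s /opt/kafka_2.12-1.0.0 /opt/kafka && rm -rf /opt/kafka/data/*"

The data directory is emptied because the copied myid file and broker metadata belong to the source node.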

II. ZooKeeper configuration

1. Add the new node to the ZooKeeper configuration (zookeeper.properties) on every cluster node

The timing values and peer ports below are ZooKeeper's stock defaults; clientPort 2182 matches the port used by every command later in this post:

tickTime=2000
initLimit=10
syncLimit=5
dataDir=/opt/kafka/data
clientPort=2182
server.1=sht-sgmhadoopdn-01:2888:3888
server.2=sht-sgmhadoopdn-02:2888:3888
server.3=sht-sgmhadoopdn-03:2888:3888
server.4=sht-sgmhadoopdn-04:2888:3888
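ZooKeeper 3.4, the version bundled with Kafka 1.0.0, has no dynamic reconfiguration, so the existing servers only pick up the server.4 line after a restart; restart them one at a time so the ensemble never loses quorum. A sketch for one node (repeat per node; 2888:3888 are the default peer ports assumed above):

# ssh sht-sgmhadoopdn-02 "echo 'server.4=sht-sgmhadoopdn-04:2888:3888' >> /opt/kafka/config/zookeeper.properties"
# ssh sht-sgmhadoopdn-02 "/opt/kafka/bin/zookeeper-server-stop.sh; sleep 5; /opt/kafka/bin/zookeeper-server-start.sh -daemon /opt/kafka/config/zookeeper.properties"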

2. Create the server id on the new node

# echo 4 > /opt/kafka/data/myid

3. Start ZooKeeper on the new node

# /opt/kafka/bin/zookeeper-server-start.sh -daemon /opt/kafka/config/zookeeper.properties

4. Check the new node's ZooKeeper status

# echo stat | nc sht-sgmhadoopdn-04 2182 | grep Mode
Mode: follower
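To confirm the whole ensemble at once (one leader, three followers), the same check can be looped over every node:

# for h in sht-sgmhadoopdn-01 sht-sgmhadoopdn-02 sht-sgmhadoopdn-03 sht-sgmhadoopdn-04; do echo -n "$h "; echo stat | nc $h 2182 | grep Mode; done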

III. Kafka configuration

1. Configure server.properties on the new node

broker.id=3
listeners=PLAINTEXT://172.16.101.66:9092
advertised.listeners=PLAINTEXT://172.16.101.66:9092
log.dirs=/opt/kafka/data
zookeeper.connect=sht-sgmhadoopdn-01:2182,sht-sgmhadoopdn-02:2182,sht-sgmhadoopdn-03:2182,sht-sgmhadoopdn-04:2182

2. Add the new ZooKeeper node to zookeeper.connect on every existing broker (a roll-out sketch follows the snippet below)

zookeeper.connect=sht-sgmhadoopdn-01:2182,sht-sgmhadoopdn-02:2182,sht-sgmhadoopdn-03:2182,sht-sgmhadoopdn-04:2182
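One way to roll the change out, assuming the same config path on every broker; brokers read zookeeper.connect only at startup, so restart each one afterwards:

# for h in sht-sgmhadoopdn-01 sht-sgmhadoopdn-02 sht-sgmhadoopdn-03; do ssh $h "sed -i 's|^zookeeper.connect=.*|zookeeper.connect=sht-sgmhadoopdn-01:2182,sht-sgmhadoopdn-02:2182,sht-sgmhadoopdn-03:2182,sht-sgmhadoopdn-04:2182|' /opt/kafka/config/server.properties"; done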

3. Start Kafka on the new node

# /opt/kafka/bin/kafka-server-start.sh -daemon /opt/kafka/config/server.properties

4. Verify the brokers registered in the cluster

# echo dump | nc sht-sgmhadoopdn-01 2182 | grep broker
/brokers/ids/0
/brokers/ids/1
/brokers/ids/2
/brokers/ids/3

IV. Partition reassignment

1. Inspect the existing topics and their current partition assignment

# kafka-topics.sh --zookeeper 172.16.101.58:2182,172.16.101.59:2182,172.16.101.60:2182,172.16.101.66:2182 --list
__consumer_offsets
test-topic

# kafka-topics.sh --zookeeper 172.16.101.58:2182,172.16.101.59:2182,172.16.101.60:2182,172.16.101.66:2182 --describe --topic test-topic
Topic:test-topic    PartitionCount:6    ReplicationFactor:3    Configs:
    Topic: test-topic    Partition: 0    Leader: 1    Replicas: 1,2,0    Isr: 1,2,0
    Topic: test-topic    Partition: 1    Leader: 2    Replicas: 2,0,1    Isr: 2,0,1
    Topic: test-topic    Partition: 2    Leader: 0    Replicas: 0,1,2    Isr: 0,1,2
    Topic: test-topic    Partition: 3    Leader: 1    Replicas: 1,2,0    Isr: 1,2,0
    Topic: test-topic    Partition: 4    Leader: 2    Replicas: 2,0,1    Isr: 2,0,1
    Topic: test-topic    Partition: 5    Leader: 0    Replicas: 0,1,2    Isr: 0,1,2

As the output shows, all six partitions of test-topic sit entirely on the original brokers (0, 1 and 2); the newly added broker holds no replicas.

We now run a partition reassignment to spread the data evenly across all four brokers.

2. Create the topics-to-move JSON file

# cat topics-to-move.json
{"topics":[{"topic":"test-topic"}],"version":}

3. Generate a partition reassignment plan

[root@sht-sgmhadoopdn-01 kafka]# kafka-reassign-partitions.sh --zookeeper 172.16.101.58:2182,172.16.101.59:2182,172.16.101.60:2182,172.16.101.66:2182 --topics-to-move-json-file topics-to-move.json --broker-list "0,1,2,3" --generate
Current partition replica assignment
{"version":1,"partitions":[{"topic":"test-topic","partition":0,"replicas":[1,2,0],"log_dirs":["any","any","any"]},{"topic":"test-topic","partition":5,"replicas":[0,1,2],"log_dirs":["any","any","any"]},{"topic":"test-topic","partition":3,"replicas":[1,2,0],"log_dirs":["any","any","any"]},{"topic":"test-topic","partition":2,"replicas":[0,1,2],"log_dirs":["any","any","any"]},{"topic":"test-topic","partition":4,"replicas":[2,0,1],"log_dirs":["any","any","any"]},{"topic":"test-topic","partition":1,"replicas":[2,0,1],"log_dirs":["any","any","any"]}]} Proposed partition reassignment configuration
{"version":1,"partitions":[{"topic":"test-topic","partition":0,"replicas":[3,0,1],"log_dirs":["any","any","any"]},{"topic":"test-topic","partition":5,"replicas":[0,2,3],"log_dirs":["any","any","any"]},{"topic":"test-topic","partition":3,"replicas":[2,3,0],"log_dirs":["any","any","any"]},{"topic":"test-topic","partition":2,"replicas":[1,2,3],"log_dirs":["any","any","any"]},{"topic":"test-topic","partition":4,"replicas":[3,1,2],"log_dirs":["any","any","any"]},{"topic":"test-topic","partition":1,"replicas":[0,1,2],"log_dirs":["any","any","any"]}]}

Note that the "Proposed partition reassignment configuration" is only a plan generated by Kafka; nothing has actually been executed yet. Save the proposed JSON to a separate file, expand_cluster_reassignment.json, as shown below, and then execute that plan.
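One way to save it (the JSON is the proposed configuration printed above, copied verbatim):

# cat > expand_cluster_reassignment.json <<'EOF'
{"version":1,"partitions":[{"topic":"test-topic","partition":0,"replicas":[3,0,1],"log_dirs":["any","any","any"]},{"topic":"test-topic","partition":5,"replicas":[0,2,3],"log_dirs":["any","any","any"]},{"topic":"test-topic","partition":3,"replicas":[2,3,0],"log_dirs":["any","any","any"]},{"topic":"test-topic","partition":2,"replicas":[1,2,3],"log_dirs":["any","any","any"]},{"topic":"test-topic","partition":4,"replicas":[3,1,2],"log_dirs":["any","any","any"]},{"topic":"test-topic","partition":1,"replicas":[0,1,2],"log_dirs":["any","any","any"]}]}
EOF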

4. Execute the partition reassignment

# kafka-reassign-partitions.sh --zookeeper 172.16.101.58:2182,172.16.101.59:2182,172.16.101.60:2182,172.16.101.66:2182 --reassignment-json-file expand_cluster_reassignment.json --execute
Current partition replica assignment

{"version":1,"partitions":[{"topic":"test-topic","partition":0,"replicas":[1,2,0],"log_dirs":["any","any","any"]},{"topic":"test-topic","partition":5,"replicas":[0,1,2],"log_dirs":["any","any","any"]},{"topic":"test-topic","partition":3,"replicas":[1,2,0],"log_dirs":["any","any","any"]},{"topic":"test-topic","partition":2,"replicas":[0,1,2],"log_dirs":["any","any","any"]},{"topic":"test-topic","partition":4,"replicas":[2,0,1],"log_dirs":["any","any","any"]},{"topic":"test-topic","partition":1,"replicas":[2,0,1],"log_dirs":["any","any","any"]}]}

Save this to use as the --reassignment-json-file option during rollback
Successfully started reassignment of partitions.

Check the progress of the reassignment with --verify:

# kafka-reassign-partitions.sh --zookeeper 172.16.101.58:2182,172.16.101.59:2182,172.16.101.60:2182,172.16.101.66:2182 --reassignment-json-file expand_cluster_reassignment.json --verify
Status of partition reassignment:
Reassignment of partition test-topic-0 is still in progress
Reassignment of partition test-topic-5 completed successfully
Reassignment of partition test-topic-3 is still in progress
Reassignment of partition test-topic-2 is still in progress
Reassignment of partition test-topic-4 is still in progress
Reassignment of partition test-topic-1 is still in progress
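--verify only samples once; to block until the reassignment finishes, one simple option is to poll it:

# while kafka-reassign-partitions.sh --zookeeper 172.16.101.58:2182,172.16.101.59:2182,172.16.101.60:2182,172.16.101.66:2182 --reassignment-json-file expand_cluster_reassignment.json --verify | grep -q 'in progress'; do sleep 30; done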

5. After the reassignment finishes, check the topic's partition assignment again

# kafka-topics.sh --zookeeper 172.16.101.58:2182 --describe --topic test-topic
Topic:test-topic    PartitionCount:6    ReplicationFactor:3    Configs:
    Topic: test-topic    Partition: 0    Leader: 3    Replicas: 3,0,1    Isr: 3,0,1
    Topic: test-topic    Partition: 1    Leader: 0    Replicas: 0,1,2    Isr: 0,1,2
    Topic: test-topic    Partition: 2    Leader: 1    Replicas: 1,2,3    Isr: 1,2,3
    Topic: test-topic    Partition: 3    Leader: 2    Replicas: 2,3,0    Isr: 2,3,0
    Topic: test-topic    Partition: 4    Leader: 3    Replicas: 3,1,2    Isr: 3,1,2
    Topic: test-topic    Partition: 5    Leader: 0    Replicas: 0,2,3    Isr: 0,2,3
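All four brokers, including the new broker 3, now hold replicas of test-topic. Note that a reassignment does not guarantee that each partition's leader is its first-listed (preferred) replica; if leadership ends up skewed, Kafka 1.0.0 ships a tool to move it back to the preferred replicas:

# kafka-preferred-replica-election.sh --zookeeper 172.16.101.58:2182,172.16.101.59:2182,172.16.101.60:2182,172.16.101.66:2182

Run without a partition list, it triggers the election for every partition in the cluster.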
