Online migration of a Redis cluster
Address plan
Hostname | IP address | Ports |
redis01 | 10.0.0.10 | 6379, 6380 |
redis02 | 10.0.0.60 | 6379, 6380 |
redis03 | 10.0.0.61 | 6379, 6380 |
redis04 | 10.0.0.70 | 6379, 6380 |
redis05 | 10.0.0.71 | 6379, 6380 |
redis06 | 10.0.0.72 | 6379, 6380 |
The first three hosts are the old cluster nodes and the last three are the new cluster nodes. This article covers migrating the cluster online in a production environment, without downtime.
The cluster setup itself is described at https://www.cnblogs.com/zh-dream/p/12249767.html
Preparation
[root@redis01 module]# mkdir -p /etc
[root@redis01 module]# cp redis-5.0./etc/redis.conf /etc/
[root@redis01 module]# cp redis-5.0./etc/redis.conf /etc/
[root@redis01 module]# mkdir /{data,run,logs}
Modify the configuration files
[root@redis01 module]# sed -ri -e 's@^(dir /data/module/).*@\16380/data@' -e 's@^(pidfile ).*@\1/data/module/6380/run/redis_6380.pid@' -e 's/^(port )6379/\16380/' -e 's@^(logfile "/data/module/).*@\16380/logs/redis_6380.log"@' -e 's@^# cluster-config-file nodes-6379.conf@cluster-config-file nodes-6380.conf@' /etc/redis.conf
[root@redis01 module]# sed -ri -e 's@^(dir /data/module/).*@\16379/data@' -e 's@^(pidfile ).*@\1/data/module/6379/run/redis_6379.pid@' -e 's@^(logfile "/data/module/).*@\16379/logs/redis_6379.log"@' -e 's@^# cluster-config-file nodes-6379.conf@cluster-config-file nodes-6379.conf@' /etc/redis.conf
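For reference, after these edits each instance's configuration should end up with settings along the lines below. This is only a sketch: it assumes the per-instance layout from the setup article linked above (config, data, run and logs directories under /data/module/6379 and /data/module/6380) and that cluster mode was already enabled when the cluster was first built; check it against your own redis.conf.
# Hypothetical check of the 6380 instance (paths and port are assumptions, not taken from the output above)
grep -E '^(port|dir|pidfile|logfile|cluster-enabled|cluster-config-file)' /data/module/6380/etc/redis.conf
# expected, roughly:
#   port 6380
#   dir /data/module/6380/data
#   pidfile /data/module/6380/run/redis_6380.pid
#   logfile "/data/module/6380/logs/redis_6380.log"
#   cluster-enabled yes
#   cluster-config-file nodes-6380.conf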
Start the Redis instances
[root@redis01 module]# for i in 6379 6380;do redis-server $i/etc/redis.conf;done
[root@redis01 module]# ss -lntp
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 10.0.0.10: *:* users:(("redis-server",pid=,fd=))
LISTEN 10.0.0.10: *:* users:(("redis-server",pid=,fd=))
LISTEN *: *:* users:(("sshd",pid=,fd=))
LISTEN 127.0.0.1: *:* users:(("master",pid=,fd=))
LISTEN 10.0.0.10: *:* users:(("redis-server",pid=,fd=))
LISTEN 10.0.0.10: *:* users:(("redis-server",pid=,fd=))
LISTEN ::: :::* users:(("sshd",pid=,fd=))
LISTEN ::: :::* users:(("master",pid=,fd=))
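Before building the cluster it is worth confirming that both instances answer; a minimal check, assuming the ports 6379 and 6380 from the address plan:
# ports are taken from the address plan above; adjust if yours differ
redis-cli -h 10.0.0.10 -p 6379 ping   # expect PONG
redis-cli -h 10.0.0.10 -p 6380 ping   # expect PONG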
Create the cluster
Add the master nodes first
[root@redis01 module]# redis-cli --cluster create 10.0.0.10: 10.0.0.60: 10.0.0.61:
>>> Performing hash slots allocation on nodes...
Master[] -> Slots -
Master[] -> Slots -
Master[] -> Slots -
M: aca05ab1ffe0079493ad73cd045b14bd21941e07 10.0.0.10:
slots:[-] ( slots) master
M: e6fd058cb888fbe014bbc93d59eaf0595b1d514c 10.0.0.60:
slots:[-] ( slots) master
M: c934fb00e04727cbe3ebec8ec52b629df8a4c760 10.0.0.61:
slots:[-] ( slots) master
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
>>> Performing Cluster Check (using node 10.0.0.10:)
M: aca05ab1ffe0079493ad73cd045b14bd21941e07 10.0.0.10:
slots:[-] ( slots) master
M: c934fb00e04727cbe3ebec8ec52b629df8a4c760 10.0.0.61:
slots:[-] ( slots) master
M: e6fd058cb888fbe014bbc93d59eaf0595b1d514c 10.0.0.60:
slots:[-] ( slots) master
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All slots covered.
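The ports are missing from the captured output above. Assuming the old-cluster masters run on port 6379 (as in the address plan), the create command would look roughly like the sketch below; with three nodes and no --cluster-replicas option, redis-cli makes all three of them masters.
# sketch only - the port is an assumption
redis-cli --cluster create 10.0.0.10:6379 10.0.0.60:6379 10.0.0.61:6379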
Specify the master/slave relationships in the cluster
[root@redis01 module]# redis-cli --cluster add-node --cluster-slave 10.0.0.60: 10.0.0.10:
>>> Adding node 10.0.0.60: to cluster 10.0.0.10:
>>> Performing Cluster Check (using node 10.0.0.10:)
M: aca05ab1ffe0079493ad73cd045b14bd21941e07 10.0.0.10:
slots:[-] ( slots) master
M: c934fb00e04727cbe3ebec8ec52b629df8a4c760 10.0.0.61:
slots:[-] ( slots) master
M: e6fd058cb888fbe014bbc93d59eaf0595b1d514c 10.0.0.60:
slots:[-] ( slots) master
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All slots covered.
Automatically selected master 10.0.0.10:
>>> Send CLUSTER MEET to node 10.0.0.60: to make it join the cluster.
Waiting for the cluster to join
>>> Configure node as replica of 10.0.0.10:.
[OK] New node added correctly.
[root@redis01 module]# redis-cli --cluster add-node --cluster-slave 10.0.0.61: 10.0.0.60:
>>> Adding node 10.0.0.61: to cluster 10.0.0.60:
>>> Performing Cluster Check (using node 10.0.0.60:)
M: e6fd058cb888fbe014bbc93d59eaf0595b1d514c 10.0.0.60:
slots:[-] ( slots) master
M: aca05ab1ffe0079493ad73cd045b14bd21941e07 10.0.0.10:
slots:[-] ( slots) master
additional replica(s)
S: 25b3f9781fd913d2c783ab24fa2c79a74a08070b 10.0.0.60:
slots: ( slots) slave
replicates aca05ab1ffe0079493ad73cd045b14bd21941e07
M: c934fb00e04727cbe3ebec8ec52b629df8a4c760 10.0.0.61:
slots:[-] ( slots) master
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All slots covered.
Automatically selected master 10.0.0.60:
>>> Send CLUSTER MEET to node 10.0.0.61: to make it join the cluster.
Waiting for the cluster to join
>>> Configure node as replica of 10.0.0.60:.
[OK] New node added correctly.
[root@redis01 module]# redis-cli --cluster add-node --cluster-slave 10.0.0.10: 10.0.0.61:
>>> Adding node 10.0.0.10: to cluster 10.0.0.61:
>>> Performing Cluster Check (using node 10.0.0.61:)
M: c934fb00e04727cbe3ebec8ec52b629df8a4c760 10.0.0.61:
slots:[-] ( slots) master
M: aca05ab1ffe0079493ad73cd045b14bd21941e07 10.0.0.10:
slots:[-] ( slots) master
additional replica(s)
M: e6fd058cb888fbe014bbc93d59eaf0595b1d514c 10.0.0.60:
slots:[-] ( slots) master
additional replica(s)
S: 25b3f9781fd913d2c783ab24fa2c79a74a08070b 10.0.0.60:
slots: ( slots) slave
replicates aca05ab1ffe0079493ad73cd045b14bd21941e07
S: 9b88fbde76b12e035d71056881c8ed09ce6aeb0d 10.0.0.61:
slots: ( slots) slave
replicates e6fd058cb888fbe014bbc93d59eaf0595b1d514c
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All slots covered.
Automatically selected master 10.0.0.61:
>>> Send CLUSTER MEET to node 10.0.0.10: to make it join the cluster.
Waiting for the cluster to join
>>> Configure node as replica of 10.0.0.61:.
[OK] New node added correctly.
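In the three runs above redis-cli picked the master itself ("Automatically selected master ..."). To pin a slave to a specific master instead, add-node also accepts an explicit master ID; a sketch, assuming the 6380 instances are the slaves:
# sketch only - ports are assumptions; the ID is the master on 10.0.0.10 from the output above
redis-cli --cluster add-node 10.0.0.60:6380 10.0.0.10:6379 \
  --cluster-slave --cluster-master-id aca05ab1ffe0079493ad73cd045b14bd21941e07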
Check the cluster status
[root@redis01 module]# redis-cli --cluster check 10.0.0.10:
10.0.0.10: (aca05ab1...) -> keys | slots | slaves.
10.0.0.61: (c934fb00...) -> keys | slots | slaves.
10.0.0.60: (e6fd058c...) -> keys | slots | slaves.
[OK] keys in masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 10.0.0.10:)
M: aca05ab1ffe0079493ad73cd045b14bd21941e07 10.0.0.10:
slots:[-] ( slots) master
additional replica(s)
M: c934fb00e04727cbe3ebec8ec52b629df8a4c760 10.0.0.61:
slots:[-] ( slots) master
additional replica(s)
S: 9b88fbde76b12e035d71056881c8ed09ce6aeb0d 10.0.0.61:
slots: ( slots) slave
replicates e6fd058cb888fbe014bbc93d59eaf0595b1d514c
S: f2a07c9d27d6d62d40bec2bc6914fd0757e7d072 10.0.0.10:
slots: ( slots) slave
replicates c934fb00e04727cbe3ebec8ec52b629df8a4c760
S: 25b3f9781fd913d2c783ab24fa2c79a74a08070b 10.0.0.60:
slots: ( slots) slave
replicates aca05ab1ffe0079493ad73cd045b14bd21941e07
M: e6fd058cb888fbe014bbc93d59eaf0595b1d514c 10.0.0.60:
slots:[-] ( slots) master
additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All slots covered.
# Insert a few keys into the cluster here; they will be used to verify the data after the migration.
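For example, write a couple of keys through any master with redis-cli in cluster mode (-c follows MOVED redirections); the keys name and foo are the ones read back after the migration further below. The port is an assumption:
redis-cli -c -h 10.0.0.10 -p 6379 set name tom
redis-cli -c -h 10.0.0.10 -p 6379 set foo bar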
Add the migration target nodes to the cluster
Add the master nodes
[root@redis01 module]# redis-cli --cluster add-node 10.0.0.70: 10.0.0.10:
>>> Adding node 10.0.0.70: to cluster 10.0.0.10:
>>> Performing Cluster Check (using node 10.0.0.10:)
M: aca05ab1ffe0079493ad73cd045b14bd21941e07 10.0.0.10:
slots:[-] ( slots) master
additional replica(s)
M: c934fb00e04727cbe3ebec8ec52b629df8a4c760 10.0.0.61:
slots:[-] ( slots) master
additional replica(s)
S: 9b88fbde76b12e035d71056881c8ed09ce6aeb0d 10.0.0.61:
slots: ( slots) slave
replicates e6fd058cb888fbe014bbc93d59eaf0595b1d514c
S: f2a07c9d27d6d62d40bec2bc6914fd0757e7d072 10.0.0.10:
slots: ( slots) slave
replicates c934fb00e04727cbe3ebec8ec52b629df8a4c760
S: 25b3f9781fd913d2c783ab24fa2c79a74a08070b 10.0.0.60:
slots: ( slots) slave
replicates aca05ab1ffe0079493ad73cd045b14bd21941e07
M: e6fd058cb888fbe014bbc93d59eaf0595b1d514c 10.0.0.60:
slots:[-] ( slots) master
additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All slots covered.
>>> Send CLUSTER MEET to node 10.0.0.70: to make it join the cluster.
[OK] New node added correctly.
[root@redis01 module]# redis-cli --cluster add-node 10.0.0.71: 10.0.0.10:
>>> Adding node 10.0.0.71: to cluster 10.0.0.10:
>>> Performing Cluster Check (using node 10.0.0.10:)
M: aca05ab1ffe0079493ad73cd045b14bd21941e07 10.0.0.10:
slots:[-] ( slots) master
additional replica(s)
M: c934fb00e04727cbe3ebec8ec52b629df8a4c760 10.0.0.61:
slots:[-] ( slots) master
additional replica(s)
M: 76206b5fd10ffff1f0bf4287ff1bdbad9eb0c01a 10.0.0.70:
slots: ( slots) master
S: 9b88fbde76b12e035d71056881c8ed09ce6aeb0d 10.0.0.61:
slots: ( slots) slave
replicates e6fd058cb888fbe014bbc93d59eaf0595b1d514c
S: f2a07c9d27d6d62d40bec2bc6914fd0757e7d072 10.0.0.10:
slots: ( slots) slave
replicates c934fb00e04727cbe3ebec8ec52b629df8a4c760
S: 25b3f9781fd913d2c783ab24fa2c79a74a08070b 10.0.0.60:
slots: ( slots) slave
replicates aca05ab1ffe0079493ad73cd045b14bd21941e07
M: e6fd058cb888fbe014bbc93d59eaf0595b1d514c 10.0.0.60:
slots:[-] ( slots) master
additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All slots covered.
>>> Send CLUSTER MEET to node 10.0.0.71: to make it join the cluster.
[OK] New node added correctly.
[root@redis01 module]# redis-cli --cluster add-node 10.0.0.72: 10.0.0.10:
>>> Adding node 10.0.0.72: to cluster 10.0.0.10:
>>> Performing Cluster Check (using node 10.0.0.10:)
M: aca05ab1ffe0079493ad73cd045b14bd21941e07 10.0.0.10:
slots:[-] ( slots) master
additional replica(s)
M: fd7f797b72b9f6c04de8d879743b2f6b508a7415 10.0.0.71:
slots: ( slots) master
M: c934fb00e04727cbe3ebec8ec52b629df8a4c760 10.0.0.61:
slots:[-] ( slots) master
additional replica(s)
M: 76206b5fd10ffff1f0bf4287ff1bdbad9eb0c01a 10.0.0.70:
slots: ( slots) master
S: 9b88fbde76b12e035d71056881c8ed09ce6aeb0d 10.0.0.61:
slots: ( slots) slave
replicates e6fd058cb888fbe014bbc93d59eaf0595b1d514c
S: f2a07c9d27d6d62d40bec2bc6914fd0757e7d072 10.0.0.10:
slots: ( slots) slave
replicates c934fb00e04727cbe3ebec8ec52b629df8a4c760
S: 25b3f9781fd913d2c783ab24fa2c79a74a08070b 10.0.0.60:
slots: ( slots) slave
replicates aca05ab1ffe0079493ad73cd045b14bd21941e07
M: e6fd058cb888fbe014bbc93d59eaf0595b1d514c 10.0.0.60:
slots:[-] ( slots) master
additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All slots covered.
>>> Send CLUSTER MEET to node 10.0.0.72: to make it join the cluster.
[OK] New node added correctly.
Add the slave nodes
[root@redis01 module]# redis-cli --cluster add-node --cluster-slave 10.0.0.70: 10.0.0.71:
>>> Adding node 10.0.0.70: to cluster 10.0.0.71:
>>> Performing Cluster Check (using node 10.0.0.71:)
M: fd7f797b72b9f6c04de8d879743b2f6b508a7415 10.0.0.71:
slots: ( slots) master
S: 9b88fbde76b12e035d71056881c8ed09ce6aeb0d 10.0.0.61:
slots: ( slots) slave
replicates e6fd058cb888fbe014bbc93d59eaf0595b1d514c
M: 76206b5fd10ffff1f0bf4287ff1bdbad9eb0c01a 10.0.0.70:
slots: ( slots) master
M: e6fd058cb888fbe014bbc93d59eaf0595b1d514c 10.0.0.60:
slots:[-] ( slots) master
additional replica(s)
S: f2a07c9d27d6d62d40bec2bc6914fd0757e7d072 10.0.0.10:
slots: ( slots) slave
replicates c934fb00e04727cbe3ebec8ec52b629df8a4c760
M: d44d3c8baf3ffd9041ef22ad1fdbc840d886aca4 10.0.0.72:
slots: ( slots) master
S: 25b3f9781fd913d2c783ab24fa2c79a74a08070b 10.0.0.60:
slots: ( slots) slave
replicates aca05ab1ffe0079493ad73cd045b14bd21941e07
M: aca05ab1ffe0079493ad73cd045b14bd21941e07 10.0.0.10:
slots:[-] ( slots) master
additional replica(s)
M: c934fb00e04727cbe3ebec8ec52b629df8a4c760 10.0.0.61:
slots:[-] ( slots) master
additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All slots covered.
Automatically selected master 10.0.0.71:
>>> Send CLUSTER MEET to node 10.0.0.70: to make it join the cluster.
Waiting for the cluster to join
>>> Configure node as replica of 10.0.0.71:.
[OK] New node added correctly.
[root@redis01 module]# redis-cli --cluster add-node --cluster-slave 10.0.0.71: 10.0.0.72:
>>> Adding node 10.0.0.71: to cluster 10.0.0.72:
>>> Performing Cluster Check (using node 10.0.0.72:)
M: d44d3c8baf3ffd9041ef22ad1fdbc840d886aca4 10.0.0.72:
slots: ( slots) master
M: aca05ab1ffe0079493ad73cd045b14bd21941e07 10.0.0.10:
slots:[-] ( slots) master
additional replica(s)
S: f2a07c9d27d6d62d40bec2bc6914fd0757e7d072 10.0.0.10:
slots: ( slots) slave
replicates c934fb00e04727cbe3ebec8ec52b629df8a4c760
S: 2585c89e3e905202c3df1f8c0d75fe716591a972 10.0.0.70:
slots: ( slots) slave
replicates fd7f797b72b9f6c04de8d879743b2f6b508a7415
M: 76206b5fd10ffff1f0bf4287ff1bdbad9eb0c01a 10.0.0.70:
slots: ( slots) master
M: c934fb00e04727cbe3ebec8ec52b629df8a4c760 10.0.0.61:
slots:[-] ( slots) master
additional replica(s)
S: 9b88fbde76b12e035d71056881c8ed09ce6aeb0d 10.0.0.61:
slots: ( slots) slave
replicates e6fd058cb888fbe014bbc93d59eaf0595b1d514c
M: fd7f797b72b9f6c04de8d879743b2f6b508a7415 10.0.0.71:
slots: ( slots) master
additional replica(s)
S: 25b3f9781fd913d2c783ab24fa2c79a74a08070b 10.0.0.60:
slots: ( slots) slave
replicates aca05ab1ffe0079493ad73cd045b14bd21941e07
M: e6fd058cb888fbe014bbc93d59eaf0595b1d514c 10.0.0.60:
slots:[-] ( slots) master
additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All slots covered.
Automatically selected master 10.0.0.72:
>>> Send CLUSTER MEET to node 10.0.0.71: to make it join the cluster.
Waiting for the cluster to join
>>> Configure node as replica of 10.0.0.72:.
[OK] New node added correctly.
[root@redis01 module]# redis-cli --cluster add-node --cluster-slave 10.0.0.72: 10.0.0.70:
>>> Adding node 10.0.0.72: to cluster 10.0.0.70:
>>> Performing Cluster Check (using node 10.0.0.70:)
M: 76206b5fd10ffff1f0bf4287ff1bdbad9eb0c01a 10.0.0.70:
slots: ( slots) master
M: aca05ab1ffe0079493ad73cd045b14bd21941e07 10.0.0.10:
slots:[-] ( slots) master
additional replica(s)
M: c934fb00e04727cbe3ebec8ec52b629df8a4c760 10.0.0.61:
slots:[-] ( slots) master
additional replica(s)
S: f2a07c9d27d6d62d40bec2bc6914fd0757e7d072 10.0.0.10:
slots: ( slots) slave
replicates c934fb00e04727cbe3ebec8ec52b629df8a4c760
S: 9b88fbde76b12e035d71056881c8ed09ce6aeb0d 10.0.0.61:
slots: ( slots) slave
replicates e6fd058cb888fbe014bbc93d59eaf0595b1d514c
M: e6fd058cb888fbe014bbc93d59eaf0595b1d514c 10.0.0.60:
slots:[-] ( slots) master
additional replica(s)
S: 2585c89e3e905202c3df1f8c0d75fe716591a972 10.0.0.70:
slots: ( slots) slave
replicates fd7f797b72b9f6c04de8d879743b2f6b508a7415
M: fd7f797b72b9f6c04de8d879743b2f6b508a7415 10.0.0.71:
slots: ( slots) master
additional replica(s)
M: d44d3c8baf3ffd9041ef22ad1fdbc840d886aca4 10.0.0.72:
slots: ( slots) master
additional replica(s)
S: e6124e0803cc6a3b5b3b030a733f1e58d3cbb80c 10.0.0.71:
slots: ( slots) slave
replicates d44d3c8baf3ffd9041ef22ad1fdbc840d886aca4
S: 25b3f9781fd913d2c783ab24fa2c79a74a08070b 10.0.0.60:
slots: ( slots) slave
replicates aca05ab1ffe0079493ad73cd045b14bd21941e07
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All slots covered.
Automatically selected master 10.0.0.70:
>>> Send CLUSTER MEET to node 10.0.0.72: to make it join the cluster.
Waiting for the cluster to join
>>> Configure node as replica of 10.0.0.70:.
[OK] New node added correctly.
Check the slot count of the new nodes: the newly added masters all own 0 slots, so a reshard is needed
[root@redis01 module]# redis-cli --cluster check 10.0.0.70:
10.0.0.70: (76206b5f...) -> keys | slots | slaves.
10.0.0.10: (aca05ab1...) -> keys | slots | slaves.
10.0.0.61: (c934fb00...) -> keys | slots | slaves.
10.0.0.60: (e6fd058c...) -> keys | slots | slaves.
10.0.0.71: (fd7f797b...) -> keys | slots | slaves.
10.0.0.72: (d44d3c8b...) -> keys | slots | slaves.
[OK] keys in masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 10.0.0.70:)
M: 76206b5fd10ffff1f0bf4287ff1bdbad9eb0c01a 10.0.0.70:
slots: ( slots) master
additional replica(s)
M: aca05ab1ffe0079493ad73cd045b14bd21941e07 10.0.0.10:
slots:[-] ( slots) master
additional replica(s)
M: c934fb00e04727cbe3ebec8ec52b629df8a4c760 10.0.0.61:
slots:[-] ( slots) master
additional replica(s)
S: f2a07c9d27d6d62d40bec2bc6914fd0757e7d072 10.0.0.10:
slots: ( slots) slave
replicates c934fb00e04727cbe3ebec8ec52b629df8a4c760
S: 9b88fbde76b12e035d71056881c8ed09ce6aeb0d 10.0.0.61:
slots: ( slots) slave
replicates e6fd058cb888fbe014bbc93d59eaf0595b1d514c
M: e6fd058cb888fbe014bbc93d59eaf0595b1d514c 10.0.0.60:
slots:[-] ( slots) master
additional replica(s)
S: 2585c89e3e905202c3df1f8c0d75fe716591a972 10.0.0.70:
slots: ( slots) slave
replicates fd7f797b72b9f6c04de8d879743b2f6b508a7415
M: fd7f797b72b9f6c04de8d879743b2f6b508a7415 10.0.0.71:
slots: ( slots) master
additional replica(s)
M: d44d3c8baf3ffd9041ef22ad1fdbc840d886aca4 10.0.0.72:
slots: ( slots) master
additional replica(s)
S: e6124e0803cc6a3b5b3b030a733f1e58d3cbb80c 10.0.0.71:
slots: ( slots) slave
replicates d44d3c8baf3ffd9041ef22ad1fdbc840d886aca4
S: 25b3f9781fd913d2c783ab24fa2c79a74a08070b 10.0.0.60:
slots: ( slots) slave
replicates aca05ab1ffe0079493ad73cd045b14bd21941e07
S: 54924e469a1003913f135de3116c7d53c41b5e69 10.0.0.72:
slots: ( slots) slave
replicates 76206b5fd10ffff1f0bf4287ff1bdbad9eb0c01a
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All slots covered.
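With three masters the 16384 hash slots are split roughly evenly (0-5460, 5461-10922, 10923-16383, about 5461 slots each), so draining the old cluster means running one reshard per new master and entering, as the number of slots to move, everything the corresponding old master still owns. A quick way to see that number (port assumed):
# the per-master summary of check already shows the slot count; grep the line of the old master
redis-cli --cluster check 10.0.0.10:6379 | grep '10.0.0.10:6379'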
Reshard
[root@redis01 module]# redis-cli --cluster reshard 10.0.0.70:
>>> Performing Cluster Check (using node 10.0.0.70:)
M: 76206b5fd10ffff1f0bf4287ff1bdbad9eb0c01a 10.0.0.70:
slots: ( slots) master
additional replica(s)
M: aca05ab1ffe0079493ad73cd045b14bd21941e07 10.0.0.10:
slots:[-] ( slots) master
additional replica(s)
M: c934fb00e04727cbe3ebec8ec52b629df8a4c760 10.0.0.61:
slots:[-] ( slots) master
additional replica(s)
S: f2a07c9d27d6d62d40bec2bc6914fd0757e7d072 10.0.0.10:
slots: ( slots) slave
replicates c934fb00e04727cbe3ebec8ec52b629df8a4c760
S: 9b88fbde76b12e035d71056881c8ed09ce6aeb0d 10.0.0.61:
slots: ( slots) slave
replicates e6fd058cb888fbe014bbc93d59eaf0595b1d514c
M: e6fd058cb888fbe014bbc93d59eaf0595b1d514c 10.0.0.60:
slots:[-] ( slots) master
additional replica(s)
S: 2585c89e3e905202c3df1f8c0d75fe716591a972 10.0.0.70:
slots: ( slots) slave
replicates fd7f797b72b9f6c04de8d879743b2f6b508a7415
M: fd7f797b72b9f6c04de8d879743b2f6b508a7415 10.0.0.71:
slots: ( slots) master
additional replica(s)
M: d44d3c8baf3ffd9041ef22ad1fdbc840d886aca4 10.0.0.72:
slots: ( slots) master
additional replica(s)
S: e6124e0803cc6a3b5b3b030a733f1e58d3cbb80c 10.0.0.71:
slots: ( slots) slave
replicates d44d3c8baf3ffd9041ef22ad1fdbc840d886aca4
S: 25b3f9781fd913d2c783ab24fa2c79a74a08070b 10.0.0.60:
slots: ( slots) slave
replicates aca05ab1ffe0079493ad73cd045b14bd21941e07
S: 54924e469a1003913f135de3116c7d53c41b5e69 10.0.0.72:
slots: ( slots) slave
replicates 76206b5fd10ffff1f0bf4287ff1bdbad9eb0c01a
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All slots covered.
How many slots do you want to move (from to )?
What is the receiving node ID? 76206b5fd10ffff1f0bf4287ff1bdbad9eb0c01a
Please enter all the source node IDs.
Type 'all' to use all the nodes as source nodes for the hash slots.
Type 'done' once you entered all the source nodes IDs.
Source node #: aca05ab1ffe0079493ad73cd045b14bd21941e07
Source node #: done
Ready to move slots.
Source nodes:
M: aca05ab1ffe0079493ad73cd045b14bd21941e07 10.0.0.10:
slots:[-] ( slots) master
additional replica(s)
Destination node:
M: 76206b5fd10ffff1f0bf4287ff1bdbad9eb0c01a 10.0.0.70:
slots: ( slots) master
additional replica(s)
Resharding plan:
Moving slot from aca05ab1ffe0079493ad73cd045b14bd21941e07
Moving slot from aca05ab1ffe0079493ad73cd045b14bd21941e07
Moving slot from aca05ab1ffe0079493ad73cd045b14bd21941e07
Do you want to proceed with the proposed reshard plan (yes/no)? yes
Moving slot from 10.0.0.10: to 10.0.0.70::
Moving slot from 10.0.0.10: to 10.0.0.70::
Moving slot from 10.0.0.10: to 10.0.0.70::
Moving slot from 10.0.0.10: to 10.0.0.70::
Moving slot from 10.0.0.10: to 10.0.0.70::
Moving slot from 10.0.0.10: to 10.0.0.70::
Moving slot from 10.0.0.10: to 10.0.0.70::
Moving slot from 10.0.0.10: to 10.0.0.70::
Moving slot from 10.0.0.10: to 10.0.0.70::
[root@redis01 module]# redis-cli --cluster reshard 10.0.0.71:
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All slots covered.
How many slots do you want to move (from to )?
What is the receiving node ID? fd7f797b72b9f6c04de8d879743b2f6b508a7415
Please enter all the source node IDs.
Type 'all' to use all the nodes as source nodes for the hash slots.
Type 'done' once you entered all the source nodes IDs.
Source node #: e6fd058cb888fbe014bbc93d59eaf0595b1d514c
Source node #: done
Moving slot from e6fd058cb888fbe014bbc93d59eaf0595b1d514c
Moving slot from e6fd058cb888fbe014bbc93d59eaf0595b1d514c
Moving slot from e6fd058cb888fbe014bbc93d59eaf0595b1d514c
Moving slot from e6fd058cb888fbe014bbc93d59eaf0595b1d514c
Moving slot from e6fd058cb888fbe014bbc93d59eaf0595b1d514c
Moving slot from e6fd058cb888fbe014bbc93d59eaf0595b1d514c
Do you want to proceed with the proposed reshard plan (yes/no)? yes
Moving slot from 10.0.0.60: to 10.0.0.71::
Moving slot from 10.0.0.60: to 10.0.0.71::
Moving slot from 10.0.0.60: to 10.0.0.71::
Moving slot from 10.0.0.60: to 10.0.0.71::
Moving slot from 10.0.0.60: to 10.0.0.71::
Moving slot from 10.0.0.60: to 10.0.0.71::
Moving slot from 10.0.0.60: to 10.0.0.71::
[root@redis01 module]# redis-cli --cluster reshard 10.0.0.72:
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All slots covered.
How many slots do you want to move (from to )?
What is the receiving node ID? d44d3c8baf3ffd9041ef22ad1fdbc840d886aca4
Please enter all the source node IDs.
Type 'all' to use all the nodes as source nodes for the hash slots.
Type 'done' once you entered all the source nodes IDs.
Source node #: c934fb00e04727cbe3ebec8ec52b629df8a4c760
Source node #: done
Check whether all hash slots of the old cluster have been moved to the new cluster nodes
[root@redis01 module]# redis-cli --cluster check 10.0.0.70:
10.0.0.70: (76206b5f...) -> keys | slots | slaves.
10.0.0.10: (aca05ab1...) -> keys | slots | slaves.
10.0.0.61: (c934fb00...) -> keys | slots | slaves.
10.0.0.60: (e6fd058c...) -> keys | slots | slaves.
10.0.0.71: (fd7f797b...) -> keys | slots | slaves.
10.0.0.72: (d44d3c8b...) -> keys | slots | slaves.
[OK] keys in masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 10.0.0.70:)
M: 76206b5fd10ffff1f0bf4287ff1bdbad9eb0c01a 10.0.0.70:
slots:[-] ( slots) master
additional replica(s)
M: aca05ab1ffe0079493ad73cd045b14bd21941e07 10.0.0.10:
slots: ( slots) master
M: c934fb00e04727cbe3ebec8ec52b629df8a4c760 10.0.0.61:
slots: ( slots) master
S: f2a07c9d27d6d62d40bec2bc6914fd0757e7d072 10.0.0.10:
slots: ( slots) slave
replicates d44d3c8baf3ffd9041ef22ad1fdbc840d886aca4
S: 9b88fbde76b12e035d71056881c8ed09ce6aeb0d 10.0.0.61:
slots: ( slots) slave
replicates e6fd058cb888fbe014bbc93d59eaf0595b1d514c
M: e6fd058cb888fbe014bbc93d59eaf0595b1d514c 10.0.0.60:
slots:[] ( slots) master
additional replica(s)
S: 2585c89e3e905202c3df1f8c0d75fe716591a972 10.0.0.70:
slots: ( slots) slave
replicates fd7f797b72b9f6c04de8d879743b2f6b508a7415
M: fd7f797b72b9f6c04de8d879743b2f6b508a7415 10.0.0.71:
slots:[-] ( slots) master
additional replica(s)
M: d44d3c8baf3ffd9041ef22ad1fdbc840d886aca4 10.0.0.72:
slots:[-] ( slots) master
additional replica(s)
S: e6124e0803cc6a3b5b3b030a733f1e58d3cbb80c 10.0.0.71:
slots: ( slots) slave
replicates d44d3c8baf3ffd9041ef22ad1fdbc840d886aca4
S: 25b3f9781fd913d2c783ab24fa2c79a74a08070b 10.0.0.60:
slots: ( slots) slave
replicates 76206b5fd10ffff1f0bf4287ff1bdbad9eb0c01a
S: 54924e469a1003913f135de3116c7d53c41b5e69 10.0.0.72:
slots: ( slots) slave
replicates 76206b5fd10ffff1f0bf4287ff1bdbad9eb0c01a
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All slots covered.
[root@redis01 module]# redis-cli --cluster info 10.0.0.70:
10.0.0.70: (76206b5f...) -> keys | slots | slaves.
10.0.0.10: (aca05ab1...) -> keys | slots | slaves.
10.0.0.61: (c934fb00...) -> keys | slots | slaves.
10.0.0.60: (e6fd058c...) -> keys | slots | slaves.
10.0.0.71: (fd7f797b...) -> keys | slots | slaves.
10.0.0.72: (d44d3c8b...) -> keys | slots | slaves.
[OK] keys in masters.
0.00 keys per slot on average.
Check the previously inserted data on the new nodes
[root@redis01 module]# redis-cli -c -h 10.0.0.70 -p
10.0.0.70:> get name
-> Redirected to slot [] located at 10.0.0.71:
"tom"
10.0.0.71:> get foo
-> Redirected to slot [] located at 10.0.0.72:
"bar"
Tip: the interactive steps above can also be done non-interactively with redis-cli --cluster reshard <host>:<port> --cluster-from <sourcenode-id> --cluster-to <destnode-id> --cluster-slots <number of slots> --cluster-yes
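For example, draining the old master on 10.0.0.10 into the new master on 10.0.0.70 could look like the sketch below; the node IDs come from the check output above, while the port and the slot count (one third of 16384) are assumptions:
redis-cli --cluster reshard 10.0.0.70:6379 \
  --cluster-from aca05ab1ffe0079493ad73cd045b14bd21941e07 \
  --cluster-to 76206b5fd10ffff1f0bf4287ff1bdbad9eb0c01a \
  --cluster-slots 5461 --cluster-yes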
Remove the original nodes
Delete the slave nodes first
[root@redis01 module]# redis-cli --cluster del-node 10.0.0.10: e6124e0803cc6a3b5b3b030a733f1e58d3cbb80c
>>> Removing node e6124e0803cc6a3b5b3b030a733f1e58d3cbb80c from cluster 10.0.0.10:
>>> Sending CLUSTER FORGET messages to the cluster...
>>> SHUTDOWN the node.
[root@redis01 module]# redis-cli --cluster del-node 10.0.0.60: 25b3f9781fd913d2c783ab24fa2c79a74a08070b
>>> Removing node 25b3f9781fd913d2c783ab24fa2c79a74a08070b from cluster 10.0.0.60:
>>> Sending CLUSTER FORGET messages to the cluster...
>>> SHUTDOWN the node.
[root@redis01 module]# redis-cli --cluster del-node 10.0.0.61: 9b88fbde76b12e035d71056881c8ed09ce6aeb0d
>>> Removing node 9b88fbde76b12e035d71056881c8ed09ce6aeb0d from cluster 10.0.0.61:
>>> Sending CLUSTER FORGET messages to the cluster...
>>> SHUTDOWN the node.
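Before touching the masters, it is worth confirming that the cluster no longer lists the removed slaves; a minimal check with an assumed port:
redis-cli -h 10.0.0.70 -p 6379 cluster nodes   # the three deleted slaves should no longer appear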
Note:
Be very careful not to enter the wrong ID: del-node removes nodes by node ID. If you enter a wrong node ID, you will get an error like the following:
[ERR] Node 10.0.0.71: is not empty. Either the node already knows other nodes (check with CLUSTER NODES) or contains some key in database 0.
In that case, start that node again and run a fix:
[root@redis05 module]# redis-server /etc/redis.conf
[root@redis01 module]# redis-cli --cluster fix 10.0.0.71:
Remove the pre-migration master nodes from the cluster.
When removing a master node, pay attention to the following points:
- If the master has slave nodes, move the slaves to another master or delete them beforehand.
- If the master still owns slots, migrate the slots away first and only then delete the master. When removing a master you must make sure it owns 0 slots, i.e. it is empty; otherwise the whole Redis Cluster may stop working. If the master you want to remove is not empty, use the reshard command to move its data to other nodes first.
Another way to remove a master is to perform a manual failover first: wait for one of its slaves to be elected as the new master and for the old master to be re-added to the cluster as a slave, then remove it.
Obviously this does not help if your goal is to reduce the number of masters in the cluster; in that case you still have to reshard the data away before removing the node.
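A sketch of that failover route (addresses are placeholders, not taken from this migration): run CLUSTER FAILOVER on the slave that should take over, confirm the promotion, and only then remove the demoted, now-empty node.
# run on the slave that should be promoted
redis-cli -h <replica-ip> -p <replica-port> cluster failover
# confirm the role change before removing the old master
redis-cli -h <replica-ip> -p <replica-port> cluster nodes | grep myself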
[root@redis01 module]# redis-cli --cluster del-node 10.0.0.60: e6fd058cb888fbe014bbc93d59eaf0595b1d514c
>>> Removing node e6fd058cb888fbe014bbc93d59eaf0595b1d514c from cluster 10.0.0.60:
[ERR] Node 10.0.0.60: is not empty! Reshard data away and try again.
Since the slots of the original three masters have all been drained (each now owns 0 slots) and their respective slave nodes were already deleted above, the master nodes can now be removed.
[root@redis01 module]# redis-cli --cluster del-node 10.0.0.10: f2a07c9d27d6d62d40bec2bc6914fd0757e7d072
>>> Removing node f2a07c9d27d6d62d40bec2bc6914fd0757e7d072 from cluster 10.0.0.10:
>>> Sending CLUSTER FORGET messages to the cluster...
>>> SHUTDOWN the node.
[root@redis01 module]# redis-cli --cluster del-node 10.0.0.60: e6fd058cb888fbe014bbc93d59eaf0595b1d514c
>>> Removing node e6fd058cb888fbe014bbc93d59eaf0595b1d514c from cluster 10.0.0.60:
>>> Sending CLUSTER FORGET messages to the cluster...
>>> SHUTDOWN the node.
[root@redis01 module]# redis-cli --cluster del-node 10.0.0.61: c934fb00e04727cbe3ebec8ec52b629df8a4c760
>>> Removing node c934fb00e04727cbe3ebec8ec52b629df8a4c760 from cluster 10.0.0.61:
>>> Sending CLUSTER FORGET messages to the cluster...
>>> SHUTDOWN the node.
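With the old nodes gone, a final check from one of the new nodes plus a read of the test keys confirms the migration; the port is assumed:
redis-cli --cluster check 10.0.0.70:6379      # all 16384 slots should now sit on the three new masters
redis-cli -c -h 10.0.0.70 -p 6379 get name    # expect "tom"
redis-cli -c -h 10.0.0.70 -p 6379 get foo     # expect "bar"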