Redis documentation: http://doc.redisfans.com/

References: https://www.cnblogs.com/wuxl360/p/5920330.html

http://www.cnblogs.com/carryping/p/7447823.html

https://www.jianshu.com/p/2639549bedc8

1. Download and extract

cd /root/software
wget http://download.redis.io/releases/redis-3.2.3.tar.gz
tar -zxvf redis-3.2.3.tar.gz

2. Compile and install

cd redis-3.2.3
make PREFIX=/usr/local/redis-3.2.3 install
ln -sv /usr/local/redis-3.2.3 /usr/local/redis

3. Copy redis-trib.rb to /usr/local/bin

cd src
cp redis-trib.rb /usr/local/bin/  

4. Create the Redis nodes

First, create a cluster-test directory under /usr/local/redis/:

cd /usr/local/redis
mkdir cluster-test

Inside cluster-test, create directories named 7001, 7002, 7003, 7004, 7005 and 7006, and copy redis.conf (the default configuration file from the Redis source tree) into each of these six directories:

cd cluster-test
mkdir 7001 7002 7003 7004 7005 7006
cd /usr/local/redis
cp redis.conf cluster-test/7001
cp redis.conf cluster-test/7002
cp redis.conf cluster-test/7003
cp redis.conf cluster-test/7004
cp redis.conf cluster-test/7005
cp redis.conf cluster-test/7006

Edit each of the six configuration files as follows (only the port, dir, pidfile and cluster-config-file values differ per node):

bind 0.0.0.0
port 7001
dir /usr/local/redis/cluster-test/7001
pidfile /var/run/redis_7001.pid
cluster-enabled yes
cluster-config-file nodes_7001.conf
cluster-node-timeout 15000
appendonly yes
appendfilename "appendonly.aof"
daemonize yes
protected-mode yes
tcp-backlog 511
timeout 0
tcp-keepalive 300
supervised no
loglevel verbose
logfile ""
databases 16
save 900 1
save 300 10
save 60 10000
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
slave-serve-stale-data yes
slave-read-only yes
repl-diskless-sync no
repl-diskless-sync-delay 5
repl-disable-tcp-nodelay no
slave-priority 100
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
aof-load-truncated yes
lua-time-limit 5000
slowlog-log-slower-than 10000
slowlog-max-len 128
latency-monitor-threshold 0
notify-keyspace-events ""
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-size -2
list-compress-depth 0
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
hll-sparse-max-bytes 3000
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
hz 10
aof-rewrite-incremental-fsync yes

7001 redis.conf

bind 0.0.0.0
port 7002
dir /usr/local/redis/cluster-test/7002
pidfile /var/run/redis_7002.pid
cluster-enabled yes
cluster-config-file nodes_7002.conf
cluster-node-timeout 15000
appendonly yes
appendfilename "appendonly.aof"
daemonize yes
protected-mode yes
tcp-backlog 511
timeout 0
tcp-keepalive 300
supervised no
loglevel verbose
logfile ""
databases 16
save 900 1
save 300 10
save 60 10000
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
slave-serve-stale-data yes
slave-read-only yes
repl-diskless-sync no
repl-diskless-sync-delay 5
repl-disable-tcp-nodelay no
slave-priority 100
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
aof-load-truncated yes
lua-time-limit 5000
slowlog-log-slower-than 10000
slowlog-max-len 128
latency-monitor-threshold 0
notify-keyspace-events ""
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-size -2
list-compress-depth 0
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
hll-sparse-max-bytes 3000
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
hz 10
aof-rewrite-incremental-fsync yes

7002 redis.conf

bind 0.0.0.0
port 7003
dir /usr/local/redis/cluster-test/7003
pidfile /var/run/redis_7003.pid
cluster-enabled yes
cluster-config-file nodes_7003.conf
cluster-node-timeout 15000
appendonly yes
appendfilename "appendonly.aof"
daemonize yes
protected-mode yes
tcp-backlog 511
timeout 0
tcp-keepalive 300
supervised no
loglevel verbose
logfile ""
databases 16
save 900 1
save 300 10
save 60 10000
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
slave-serve-stale-data yes
slave-read-only yes
repl-diskless-sync no
repl-diskless-sync-delay 5
repl-disable-tcp-nodelay no
slave-priority 100
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
aof-load-truncated yes
lua-time-limit 5000
slowlog-log-slower-than 10000
slowlog-max-len 128
latency-monitor-threshold 0
notify-keyspace-events ""
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-size -2
list-compress-depth 0
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
hll-sparse-max-bytes 3000
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
hz 10
aof-rewrite-incremental-fsync yes

7003 redis.conf

bind 0.0.0.0
port 7004
dir /usr/local/redis/cluster-test/7004
pidfile /var/run/redis_7004.pid
cluster-enabled yes
cluster-config-file nodes_7004.conf
cluster-node-timeout 15000
appendonly yes
appendfilename "appendonly.aof"
daemonize yes
protected-mode yes
tcp-backlog 511
timeout 0
tcp-keepalive 300
supervised no
loglevel verbose
logfile ""
databases 16
save 900 1
save 300 10
save 60 10000
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
slave-serve-stale-data yes
slave-read-only yes
repl-diskless-sync no
repl-diskless-sync-delay 5
repl-disable-tcp-nodelay no
slave-priority 100
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
aof-load-truncated yes
lua-time-limit 5000
slowlog-log-slower-than 10000
slowlog-max-len 128
latency-monitor-threshold 0
notify-keyspace-events ""
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-size -2
list-compress-depth 0
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
hll-sparse-max-bytes 3000
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
hz 10
aof-rewrite-incremental-fsync yes

7004 redis.conf

bind 0.0.0.0
port 7005
dir /usr/local/redis/cluster-test/7005
pidfile /var/run/redis_7005.pid
cluster-enabled yes
cluster-config-file nodes_7005.conf
cluster-node-timeout 15000
appendonly yes
appendfilename "appendonly.aof"
daemonize yes
protected-mode yes
tcp-backlog 511
timeout 0
tcp-keepalive 300
supervised no
loglevel verbose
logfile ""
databases 16
save 900 1
save 300 10
save 60 10000
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
slave-serve-stale-data yes
slave-read-only yes
repl-diskless-sync no
repl-diskless-sync-delay 5
repl-disable-tcp-nodelay no
slave-priority 100
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
aof-load-truncated yes
lua-time-limit 5000
slowlog-log-slower-than 10000
slowlog-max-len 128
latency-monitor-threshold 0
notify-keyspace-events ""
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-size -2
list-compress-depth 0
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
hll-sparse-max-bytes 3000
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
hz 10
aof-rewrite-incremental-fsync yes

7005 redis.conf

bind 0.0.0.0
port 7006
dir /usr/local/redis/cluster-test/7006
pidfile /var/run/redis_7006.pid
cluster-enabled yes
cluster-config-file nodes_7006.conf
cluster-node-timeout 15000
appendonly yes
appendfilename "appendonly.aof"
daemonize yes
protected-mode yes
tcp-backlog 511
timeout 0
tcp-keepalive 300
supervised no
loglevel verbose
logfile ""
databases 16
save 900 1
save 300 10
save 60 10000
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
slave-serve-stale-data yes
slave-read-only yes
repl-diskless-sync no
repl-diskless-sync-delay 5
repl-disable-tcp-nodelay no
slave-priority 100
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
aof-load-truncated yes
lua-time-limit 5000
slowlog-log-slower-than 10000
slowlog-max-len 128
latency-monitor-threshold 0
notify-keyspace-events ""
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-size -2
list-compress-depth 0
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
hll-sparse-max-bytes 3000
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
hz 10
aof-rewrite-incremental-fsync yes

7006 redis.conf

The cluster-related settings to pay attention to in each file are:

port 7001                            // the port: 7001-7006, one per node
bind <host IP>                       // defaults to 127.0.0.1; change it to an address the other node machines can reach, otherwise they cannot connect to this port and the cluster cannot be created
daemonize yes                        // run Redis in the background
pidfile /var/run/redis_7001.pid      // pid file, matching the node: 7001-7006
cluster-enabled yes                  // enable cluster mode (remove the leading #)
cluster-config-file nodes_7001.conf  // cluster state file, generated automatically on first start; one per node: 7001-7006
cluster-node-timeout 15000           // node timeout, 15000 ms by default; adjust as needed
appendonly yes                       // enable the AOF log if you need it; every write operation is appended to the log
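
Editing six nearly identical files by hand is error-prone. A minimal sketch of how the 7002-7006 configs could be stamped out from the 7001 file with sed, assuming the 7001/redis.conf above is already complete and using the directory layout from this article:

# Generate the remaining configs by replacing every occurrence of 7001 with the target port
# (this covers port, dir, pidfile and cluster-config-file in one pass).
for port in 7002 7003 7004 7005 7006; do
    sed "s/7001/$port/g" /usr/local/redis/cluster-test/7001/redis.conf \
        > /usr/local/redis/cluster-test/$port/redis.conf
done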

5. Start the nodes and check

/usr/local/redis/bin/redis-server /usr/local/redis/cluster-test/7001/redis.conf
/usr/local/redis/bin/redis-server /usr/local/redis/cluster-test/7002/redis.conf
/usr/local/redis/bin/redis-server /usr/local/redis/cluster-test/7003/redis.conf
/usr/local/redis/bin/redis-server /usr/local/redis/cluster-test/7004/redis.conf
/usr/local/redis/bin/redis-server /usr/local/redis/cluster-test/7005/redis.conf
/usr/local/redis/bin/redis-server /usr/local/redis/cluster-test/7006/redis.conf

Check with netstat; you should see something like the following:
[root@localhost 7001]# netstat -ntlp |grep redis
tcp        0      0 0.0.0.0:7005                0.0.0.0:*                   LISTEN      19225/redis-server  
tcp        0      0 0.0.0.0:7006                0.0.0.0:*                   LISTEN      19229/redis-server  
tcp        0      0 0.0.0.0:17001               0.0.0.0:*                   LISTEN      19209/redis-server  
tcp        0      0 0.0.0.0:17002               0.0.0.0:*                   LISTEN      19213/redis-server  
tcp        0      0 0.0.0.0:17003               0.0.0.0:*                   LISTEN      19215/redis-server  
tcp        0      0 0.0.0.0:17004               0.0.0.0:*                   LISTEN      19221/redis-server  
tcp        0      0 0.0.0.0:17005               0.0.0.0:*                   LISTEN      19225/redis-server  
tcp        0      0 0.0.0.0:17006               0.0.0.0:*                   LISTEN      19229/redis-server  
tcp        0      0 0.0.0.0:7001                0.0.0.0:*                   LISTEN      19209/redis-server  
tcp        0      0 0.0.0.0:7002                0.0.0.0:*                   LISTEN      19213/redis-server  
tcp        0      0 0.0.0.0:7003                0.0.0.0:*                   LISTEN      19215/redis-server  
tcp        0      0 0.0.0.0:7004                0.0.0.0:*                   LISTEN      19221/redis-server  
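
Each node listens on two ports: the client port (7001-7006) and the cluster bus port, which is the client port plus 10000 (17001-17006), as the netstat output above shows. If a firewall such as firewalld is active on the machine, both ranges need to be reachable from the other nodes; a possible way to open them (not part of the original steps, adjust to your environment):

firewall-cmd --permanent --add-port=7001-7006/tcp
firewall-cmd --permanent --add-port=17001-17006/tcp
firewall-cmd --reload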

6. Create the cluster

redis-trib.rb create --replicas 1 192.168.8.102:7001 192.168.8.102:7002 192.168.8.102:7003 192.168.8.102:7004 192.168.8.102:7005 192.168.8.102:7006

redis-trib.rb needs the Ruby redis gem. Installing it with gem install redis fails:

ERROR: Error installing redis:
redis requires Ruby version >= 2.2.2.

The fix is to install RVM first and upgrade Ruby to 2.4.5.

1. Install curl:

sudo yum install curl

2. Install RVM:

curl -L get.rvm.io | bash -s stable

This fails again:

[root@localhost yum.repos.d]# curl -L get.rvm.io | bash -s stable
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 24173  100 24173    0     0  21587      0  0:00:01  0:00:01 --:--:--  128k
Downloading https://github.com/rvm/rvm/archive/1.29.7.tar.gz
Downloading https://github.com/rvm/rvm/releases/download/1.29.7/1.29.7.tar.gz.asc
gpg: Signature made Fri 04 Jan 2019 06:01:48 AM CST using RSA key ID 39499BDB
gpg: Can't check signature: No public key
GPG signature verification failed for '/usr/local/rvm/archives/rvm-1.29.7.tgz' - 'https://github.com/rvm/rvm/releases/download/1.29.7/1.29.7.tar.gz.asc'! Try to install GPG v2 and then fetch the public key:
    gpg2 --keyserver hkp://pool.sks-keyservers.net --recv-keys 409B6B1796C275462A1703113804BB82D39DC0E3 7D2BAF1CF37B13E2069D6956105BD0E739499BDB
or if it fails:
    command curl -sSL https://rvm.io/mpapis.asc | gpg2 --import -
    command curl -sSL https://rvm.io/pkuczynski.asc | gpg2 --import -
In case of further problems with validation please refer to https://rvm.io/rvm/security

Run the commands suggested in the error message, then retry the RVM install.

3. Load RVM into the current shell:

source /usr/local/rvm/scripts/rvm

4. List the Ruby versions RVM knows about:

rvm list known

5. Install a Ruby version:

rvm install 2.4.5

6. Switch to it:

rvm use 2.4.5

7. Remove the old version:

rvm remove 2.0.0

8. Check the version:

ruby --version

9. Now the gem installs cleanly:

gem install redis

Re-run the redis-trib.rb create command, type yes when prompted, and output like the following means the cluster was created successfully:
>>> Creating cluster
>>> Performing hash slots allocation on 6 nodes...
Using 3 masters:
192.168.8.102:7001
192.168.8.102:7002
192.168.8.102:7003
Adding replica 192.168.8.102:7004 to 192.168.8.102:7001
Adding replica 192.168.8.102:7005 to 192.168.8.102:7002
Adding replica 192.168.8.102:7006 to 192.168.8.102:7003
M: 229393055278b1cded847e554739255905b33fb3 192.168.8.102:7001
   slots:0-5460 (5461 slots) master
M: ef175d84db52e084b5d74cf9f1c414011bf6cce9 192.168.8.102:7002
   slots:5461-10922 (5462 slots) master
M: 1bca1b7b96f3fe936ad44f254d17da26da9fd186 192.168.8.102:7003
   slots:10923-16383 (5461 slots) master
S: 837eea90f07c9cdebfa7e1924d2e2788cf5573eb 192.168.8.102:7004
   replicates 229393055278b1cded847e554739255905b33fb3
S: 0f3d5f7e78dc857efc5b58ab674faee9fba876af 192.168.8.102:7005
   replicates ef175d84db52e084b5d74cf9f1c414011bf6cce9
S: b41ae432b4bd8d7ca44bf318c7b9382f8dbd7a79 192.168.8.102:7006
   replicates 1bca1b7b96f3fe936ad44f254d17da26da9fd186
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join....
>>> Performing Cluster Check (using node 192.168.8.102:7001)
M: 229393055278b1cded847e554739255905b33fb3 192.168.8.102:7001
   slots:0-5460 (5461 slots) master
   1 additional replica(s)
M: 1bca1b7b96f3fe936ad44f254d17da26da9fd186 192.168.8.102:7003
   slots:10923-16383 (5461 slots) master
   1 additional replica(s)
M: ef175d84db52e084b5d74cf9f1c414011bf6cce9 192.168.8.102:7002
   slots:5461-10922 (5462 slots) master
   1 additional replica(s)
S: 0f3d5f7e78dc857efc5b58ab674faee9fba876af 192.168.8.102:7005
   slots: (0 slots) slave
   replicates ef175d84db52e084b5d74cf9f1c414011bf6cce9
S: 837eea90f07c9cdebfa7e1924d2e2788cf5573eb 192.168.8.102:7004
   slots: (0 slots) slave
   replicates 229393055278b1cded847e554739255905b33fb3
S: b41ae432b4bd8d7ca44bf318c7b9382f8dbd7a79 192.168.8.102:7006
   slots: (0 slots) slave
   replicates 1bca1b7b96f3fe936ad44f254d17da26da9fd186
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

7. Test

[root@localhost 7001]# ../../bin/redis-cli -c -h 192.168.8.102 -p 7001
192.168.8.102:7001>
192.168.8.102:7001>
192.168.8.102:7001> set name zhangsan
-> Redirected to slot [5798] located at 192.168.8.102:7002
OK
[root@localhost 7001]# ../../../redis/bin/redis-cli -c -p 7006
127.0.0.1:7006>
127.0.0.1:7006>
127.0.0.1:7006>
127.0.0.1:7006> get name
-> Redirected to slot [5798] located at 192.168.8.102:7002
"zhangsan" 验证数据一致性
[root@localhost 7003]# md5sum dump.rdb
2604704e38811948117ddc473d62dc55  dump.rdb
[root@localhost 7001]# md5sum dump.rdb
2604704e38811948117ddc473d62dc55  dump.rdb

The checksums match, which shows the cluster is working as expected.
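
Besides spot-checking a key, the overall cluster state can be queried from any node with the standard cluster commands; this is an optional extra check, not part of the original steps:

redis-cli -c -p 7001 cluster info     # cluster_state:ok and cluster_slots_assigned:16384 mean all slots are covered
redis-cli -c -p 7001 cluster nodes    # lists every node with its role (master/slave) and slot ranges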

8. Test failover

Take down 192.168.8.102:7003:

[root@localhost 7001]# cat nodes_7001.conf
1bca1b7b96f3fe936ad44f254d17da26da9fd186 192.168.8.102:7003 master - 0 1551780190387 3 connected 10923-16383
ef175d84db52e084b5d74cf9f1c414011bf6cce9 192.168.8.102:7002 master - 0 1551780191393 2 connected 5461-10922
0f3d5f7e78dc857efc5b58ab674faee9fba876af 192.168.8.102:7005 slave ef175d84db52e084b5d74cf9f1c414011bf6cce9 0 1551780188372 5 connected
229393055278b1cded847e554739255905b33fb3 192.168.8.102:7001 myself,master - 0 0 1 connected 0-5460
837eea90f07c9cdebfa7e1924d2e2788cf5573eb 192.168.8.102:7004 slave 229393055278b1cded847e554739255905b33fb3 0 1551780187366 4 connected
b41ae432b4bd8d7ca44bf318c7b9382f8dbd7a79 192.168.8.102:7006 slave 1bca1b7b96f3fe936ad44f254d17da26da9fd186 0 1551780189379 6 connected
[root@localhost 7001]# netstat -ntlp |grep 7003
tcp        0      0 0.0.0.0:17003               0.0.0.0:*                   LISTEN      19215/redis-server  
tcp        0      0 0.0.0.0:7003                0.0.0.0:*                   LISTEN      19215/redis-server  
[root@localhost 7001]#
[root@localhost 7001]#
[root@localhost 7001]#
[root@localhost 7001]#
[root@localhost 7001]# kill -9 19215
[root@localhost 7001]#
[root@localhost 7001]# cat nodes_7001.conf
1bca1b7b96f3fe936ad44f254d17da26da9fd186 192.168.8.102:7003 master,fail - 1551784667064 1551784663541 3 disconnected
ef175d84db52e084b5d74cf9f1c414011bf6cce9 192.168.8.102:7002 master - 0 1551784682702 2 connected 5461-10922
0f3d5f7e78dc857efc5b58ab674faee9fba876af 192.168.8.102:7005 slave ef175d84db52e084b5d74cf9f1c414011bf6cce9 0 1551784681691 5 connected
229393055278b1cded847e554739255905b33fb3 192.168.8.102:7001 myself,master - 0 0 1 connected 0-5460
837eea90f07c9cdebfa7e1924d2e2788cf5573eb 192.168.8.102:7004 slave 229393055278b1cded847e554739255905b33fb3 0 1551784679675 4 connected
b41ae432b4bd8d7ca44bf318c7b9382f8dbd7a79 192.168.8.102:7006 master - 0 1551784680684 7 connected 10923-16383
vars currentEpoch 7 lastVoteEpoch 7
[root@localhost 7001]# redis-cli -c -p 7001
127.0.0.1:7001>
127.0.0.1:7001>
127.0.0.1:7001>
127.0.0.1:7001> get name
-> Redirected to slot [5798] located at 192.168.8.102:7002
"zhangsan"
The data is still accessible. Now take down 192.168.8.102:7006 as well:
[root@localhost 7001]# redis-cli -c -p 7001
127.0.0.1:7001>
127.0.0.1:7001>
127.0.0.1:7001>
127.0.0.1:7001> get name
(error) CLUSTERDOWN The cluster is down
127.0.0.1:7001>
[root@localhost 7001]# !cat
cat nodes_7001.conf
1bca1b7b96f3fe936ad44f254d17da26da9fd186 192.168.8.102:7003 slave,fail b41ae432b4bd8d7ca44bf318c7b9382f8dbd7a79 1551785439155 1551785434725 7 disconnected
ef175d84db52e084b5d74cf9f1c414011bf6cce9 192.168.8.102:7002 master - 0 1551785536517 2 connected 5461-10922
0f3d5f7e78dc857efc5b58ab674faee9fba876af 192.168.8.102:7005 slave ef175d84db52e084b5d74cf9f1c414011bf6cce9 0 1551785537528 5 connected
229393055278b1cded847e554739255905b33fb3 192.168.8.102:7001 myself,master - 0 0 1 connected 0-5460
837eea90f07c9cdebfa7e1924d2e2788cf5573eb 192.168.8.102:7004 slave 229393055278b1cded847e554739255905b33fb3 0 1551785539547 4 connected
b41ae432b4bd8d7ca44bf318c7b9382f8dbd7a79 :0 master,fail,noaddr - 1551785518472 1551785516358 7 disconnected 10923-16383
vars currentEpoch 7 lastVoteEpoch 7

Bring the master on port 7006 back up:
[root@localhost 7001]# cat ../start_cluster.sh
/usr/local/redis/bin/redis-server /usr/local/redis/cluster-test/7001/redis.conf
/usr/local/redis/bin/redis-server /usr/local/redis/cluster-test/7002/redis.conf
/usr/local/redis/bin/redis-server /usr/local/redis/cluster-test/7003/redis.conf
/usr/local/redis/bin/redis-server /usr/local/redis/cluster-test/7004/redis.conf
/usr/local/redis/bin/redis-server /usr/local/redis/cluster-test/7005/redis.conf
/usr/local/redis/bin/redis-server /usr/local/redis/cluster-test/7006/redis.conf
[root@localhost 7001]#
[root@localhost 7001]#
[root@localhost 7001]# /usr/local/redis/bin/redis-server /usr/local/redis/cluster-test/7006/redis.conf
[root@localhost 7001]#
[root@localhost 7001]# cat nodes_7001.conf
1bca1b7b96f3fe936ad44f254d17da26da9fd186 192.168.8.102:7003 slave,fail b41ae432b4bd8d7ca44bf318c7b9382f8dbd7a79 0 1551785119857 7 connected
ef175d84db52e084b5d74cf9f1c414011bf6cce9 192.168.8.102:7002 master - 0 1551785118344 2 connected 5461-10922
0f3d5f7e78dc857efc5b58ab674faee9fba876af 192.168.8.102:7005 slave ef175d84db52e084b5d74cf9f1c414011bf6cce9 0 1551785119352 5 connected
229393055278b1cded847e554739255905b33fb3 192.168.8.102:7001 myself,master - 0 0 1 connected 0-5460
837eea90f07c9cdebfa7e1924d2e2788cf5573eb 192.168.8.102:7004 slave 229393055278b1cded847e554739255905b33fb3 0 1551785117336 4 connected
b41ae432b4bd8d7ca44bf318c7b9382f8dbd7a79 192.168.8.102:7006 master - 0 1551785786843 7 connected 10923-16383

Check again; the data is accessible once more:
[root@localhost 7001]# redis-cli -c -p 7001
127.0.0.1:7001>
127.0.0.1:7001>
127.0.0.1:7001> get name
-> Redirected to slot [5798] located at 192.168.8.102:7002
"zhangsan" 注意:恢复时,先恢复主节点 再恢复从节点

A brief look at how it works

Redis Cluster was designed to be decentralized and free of middleware: every node in the cluster is an equal peer, and each node holds its own share of the data plus the state of the whole cluster. Every node keeps live connections to all the other nodes, so connecting to any single node is enough to reach the data held by the rest of the cluster.

Redis Cluster does not use traditional consistent hashing to distribute data; instead it uses hash slots. The cluster has 16384 slots in total. When a key is set, its slot is computed as CRC16(key) % 16384, and the key is stored on the node that owns that slot range. That is why, in the test above, set and get were redirected straight to the node on port 7002, which owns slot 5798.
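
The slot a key maps to can be checked directly with the built-in CLUSTER KEYSLOT command; for example, for the key used in the test above:

redis-cli -c -p 7001 cluster keyslot name    # returns (integer) 5798, matching the redirection seen earlier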

Redis Cluster stores each piece of data on one master node and synchronizes it from that master to its corresponding slave. Reads are likewise routed, via the key's hash slot, to the master that owns the slot. Only when a master goes down is its slave promoted to take over as master.
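
The role of each node and the master it replicates can be verified at any time with standard commands, shown here as an optional check:

redis-cli -p 7004 info replication    # role:slave, with master_host/master_port pointing at its master (7001 in this setup)
redis-cli -p 7001 cluster nodes       # each slave line carries the node ID of the master it replicates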

Note: a cluster needs at least three master nodes, otherwise cluster creation fails; and once the number of surviving masters drops below half of the total, the whole cluster stops serving requests.
