Handling a Redis Master-Slave Replication Break
A production alert reported that master-slave replication was broken. First, check the replication info on the slave (INFO replication):
# Replication
role:slave
master_host:master_host
master_port:6379
master_link_status:down
master_last_io_seconds_ago:-1
master_sync_in_progress:1
slave_repl_offset:1
master_sync_left_bytes:713983940
master_sync_last_io_seconds_ago:0
master_link_down_since_seconds:248
slave_priority:100
slave_read_only:1
connected_slaves:0
master_repl_offset:0
repl_backlog_active:0
repl_backlog_size:1048576
repl_backlog_first_byte_offset:0
repl_backlog_histlen:0
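The block above is the raw text returned by INFO replication. As a sketch (not from the original post), a small monitoring check could parse that text and alert on master_link_status; the field names follow the output shown above:

```python
def parse_info_replication(raw: str) -> dict:
    """Parse the 'key:value' lines of an INFO replication block."""
    fields = {}
    for line in raw.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and section headers like '# Replication'
        key, _, value = line.partition(":")
        fields[key] = value
    return fields

def replication_broken(fields: dict) -> bool:
    """A slave whose link to the master is not 'up' needs attention."""
    return fields.get("role") == "slave" and fields.get("master_link_status") != "up"

info = """# Replication
role:slave
master_link_status:down
master_link_down_since_seconds:248
"""
print(replication_broken(parse_info_replication(info)))  # True
```

In practice the raw string would come from `redis-cli info replication` or a client library rather than a literal.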
master_link_status is down, so replication has failed. Check the master's log:
[374] 15 Oct 16:41:28.146 # Connection with slave 10.72.26.55:6379 lost.
[374] 15 Oct 16:41:28.999 * Slave asks for synchronization
[374] 15 Oct 16:41:28.999 * Unable to partial resync with the slave for lack of backlog (Slave request was: 152340118946214).
[374] 15 Oct 16:41:28.999 * Starting BGSAVE for SYNC
[374] 15 Oct 16:41:29.447 * Background saving started by pid 11357
[11357] 15 Oct 16:41:57.325 * DB saved on disk
[11357] 15 Oct 16:41:57.555 * RDB: 231 MB of memory used by copy-on-write
[374] 15 Oct 16:41:57.980 * Background saving terminated with success
[374] 15 Oct 16:42:31.739 * Synchronization with slave succeeded
[374] 15 Oct 16:43:01.021 # Client id=6082455 addr=slave_host:55308 fd=329 name= age=93 idle=1 flags=S db=0 sub=0 psub=0 multi=-1 qbuf=0 qbuf-free=0 obl=0 oll=10657 omem=2504780296 events=rw cmd=replconf scheduled to be closed ASAP for overcoming of output buffer limits.
Now check the slave's log:
[372] 15 Oct 16:43:01.141 # Connection with master lost.
[372] 15 Oct 16:43:01.141 * Caching the disconnected master state.
[372] 15 Oct 16:43:01.213 * Connecting to MASTER masterhost:6379
[372] 15 Oct 16:43:01.213 * MASTER <-> SLAVE sync started
[372] 15 Oct 16:43:01.213 * Non blocking connect for SYNC fired the event.
[372] 15 Oct 16:43:01.572 * Master replied to PING, replication can continue...
[372] 15 Oct 16:43:01.599 * Trying a partial resynchronization (request cbc213a279fde141211f65d436595e4ed64198fa:152342150944513).
[372] 15 Oct 16:43:01.602 * Full resync from master: cbc213a279fde141211f65d436595e4ed64198fa:152344338348685
[372] 15 Oct 16:43:01.602 * Discarding previously cached master state.
[372] 15 Oct 16:43:30.326 * MASTER <-> SLAVE sync: receiving 1308737462 bytes from master
[372] 15 Oct 16:43:59.846 * MASTER <-> SLAVE sync: Flushing old data
[372] 15 Oct 16:44:01.534 * MASTER <-> SLAVE sync: Loading DB in memory
[372] 15 Oct 16:44:22.590 * MASTER <-> SLAVE sync: Finished with success
[372] 15 Oct 16:44:22.600 # Connection with master lost.
[372] 15 Oct 16:44:22.600 * Caching the disconnected master state.
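A quick sanity check (my own arithmetic, not from the original post) on why the partial resync attempt failed: the gap between the offset the slave requested and the master's current offset dwarfs the default 1 MB repl-backlog-size reported in INFO above, so a partial resync was never possible and Redis had to fall back to a full resync:

```python
requested_offset  = 152342150944513  # from the slave's partial resync request
master_offset     = 152344338348685  # from the 'Full resync from master' line
repl_backlog_size = 1048576          # default 1mb, from the INFO output above

gap = master_offset - requested_offset
print(gap)                      # 2187404172 bytes, roughly 2 GB
print(gap > repl_backlog_size)  # True: the backlog cannot cover the gap
```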
From the master's log (the CLIENT entry above, with omem=2504780296, roughly 2.3 GB) we can see that the slave's connection was forcibly closed for exceeding the output buffer limits. Let's look at the self-documenting redis.conf shipped with Redis 2.8:
# client-output-buffer-limit <class> <hard limit> <soft limit> <soft seconds>
#
# A client is immediately disconnected once the hard limit is reached, or if
# the soft limit is reached and remains reached for the specified number of
# seconds (continuously).
# So for instance if the hard limit is 32 megabytes and the soft limit is
# 16 megabytes / 10 seconds, the client will get disconnected immediately
# if the size of the output buffers reach 32 megabytes, but will also get
# disconnected if the client reaches 16 megabytes and continuously overcomes
# the limit for 10 seconds.
#
# By default normal clients are not limited because they don't receive data
# without asking (in a push way), but just after a request, so only
# asynchronous clients may create a scenario where data is requested faster
# than it can read.
#
# Instead there is a default limit for pubsub and slave clients, since
# subscribers and slaves receive data in a push fashion.
#
# Both the hard or the soft limit can be disabled by setting them to zero.
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
We care mainly about the limits for the slave class:
256mb is the hard limit: once the output buffer grows past 256 MB, the connection is closed immediately.
64mb 60 is the soft limit: if the output buffer stays above 64 MB continuously for 60 seconds, the connection is also closed.
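The semantics of the two thresholds can be sketched as a small decision function (an illustration of the rule described above, not Redis's actual implementation; all names are made up):

```python
from typing import Optional, Tuple

HARD_LIMIT = 256 * 1024 * 1024  # 256mb: disconnect immediately
SOFT_LIMIT = 64 * 1024 * 1024   # 64mb: disconnect if exceeded continuously...
SOFT_SECONDS = 60               # ...for this many seconds

def should_disconnect(buffer_bytes: int,
                      soft_since: Optional[float],
                      now: float) -> Tuple[bool, Optional[float]]:
    """Return (disconnect, new_soft_since) mimicking the hard/soft rule.

    soft_since is the timestamp at which the buffer first exceeded the
    soft limit, or None if it is currently below it.
    """
    if buffer_bytes >= HARD_LIMIT:
        return True, soft_since           # hard limit: drop at once
    if buffer_bytes >= SOFT_LIMIT:
        if soft_since is None:
            soft_since = now              # start the soft-limit clock
        elif now - soft_since >= SOFT_SECONDS:
            return True, soft_since       # over the soft limit continuously
        return False, soft_since
    return False, None                    # below the soft limit: reset the clock
```

With the omem=2504780296 (~2.3 GB) seen in the master's log, the hard branch fires at once, which matches the "scheduled to be closed ASAP" message.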
When connections spike and the dataset is large, the default limits can no longer sustain replication: the slave requests a sync, the master runs BGSAVE and streams the RDB, the writes accumulated during the transfer overflow the slave client's output buffer, the master drops the connection, and the slave immediately asks to sync again, so the cycle repeats indefinitely. The client-output-buffer-limit slave values must be sized according to the slave's actual usage; after raising them, replication synchronized normally.
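As a sketch of the fix (the exact numbers below are illustrative assumptions, not from the original post; size them from the RDB size, here about 1.3 GB, and the peak omem observed in CLIENT LIST, here about 2.3 GB):

```
# redis.conf -- raise the slave client buffer ceiling; values are examples
# only and must be sized for your own dataset and write rate
client-output-buffer-limit slave 4096mb 2048mb 120
```

The same change can be applied without a restart via CONFIG SET, e.g. `redis-cli config set client-output-buffer-limit "slave 4294967296 2147483648 120"` (byte values are the safe choice here). Note that a CONFIG SET change is lost on restart unless it is also written into redis.conf, or persisted with CONFIG REWRITE on versions that support it.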