Redis cross-instance migration & migrating Redis to the cloud
1) Redis cross-instance migration: copy db11 on the source instance to db30 on the target instance
root@fe2e836e4470:/data# redis-cli -a pwd1 -n 11 keys \* |while read key
> do
> echo "Copying $key"
> redis-cli -a pwd1 -n 11 --raw dump $key |head -c -1 \
> |redis-cli -h <dst_ip> -p 6379 -a pwd2 -n 30 -x restore $key 0
> done    ## or, written as a single line:
root@fe2e836e4470:/data# redis-cli -a pwd1 -n 11 keys \* |while read key; do echo "Copying $key"; redis-cli -a pwd1 -n 11 --raw dump $key |head -c -1 |redis-cli -h <dst_ip> -p 6379 -a pwd2 -n 30 -x restore $key 0; done
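Note that restore is called with a TTL of 0 above, so any expirations set on the source keys are dropped, and head -c -1 (GNU head) only strips the trailing newline that redis-cli appends to the raw dump. The variant below is a minimal sketch (not part of the original procedure) that also carries each key's remaining TTL across; it assumes the same pwd1/pwd2 placeholders and <dst_ip> as above:

#!/bin/bash
# Sketch: copy every key from local db11 to <dst_ip> db30, preserving TTLs.
redis-cli -a pwd1 -n 11 keys \* | while read key; do
  # PTTL returns the remaining lifetime in milliseconds; negative values
  # (-1 = no expiry, -2 = key already gone) are mapped to 0, i.e. "no TTL".
  ttl=$(redis-cli -a pwd1 -n 11 pttl "$key")
  [ "$ttl" -lt 0 ] && ttl=0
  echo "Copying $key (ttl=${ttl}ms)"
  redis-cli -a pwd1 -n 11 --raw dump "$key" | head -c -1 \
    | redis-cli -h <dst_ip> -p 6379 -a pwd2 -n 30 -x restore "$key" "$ttl"
done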
2) Migrating Redis to the cloud: migration to Alibaba Cloud
a. Click the redis-shake link in the reference documentation and download redis-shake.tar.gz to your local machine
b. Upload the downloaded redis-shake.tar.gz to the ECS instance where Redis runs, then copy it into the Redis container
docker cp /tmp/redis-shake.tar.gz docker_redis_1:/data/
c. Extract redis-shake.tar.gz
leyao-slb02 docker # docker-compose exec redis bash
root@fe2e836e4470:/data# tar -xvf redis-shake.tar.gz
root@fe2e836e4470:/data# ls -ahl
drwxr-xr-x 3 redis root 4.0K Jun 21 07:37 .
drwxr-xr-x 1 root root 4.0K Jun 10 07:45 ..
-rw-r--r-- 1 redis users 2.4K Jun 13 15:48 ChangeLog
-rw-r--r-- 1 redis root 8.6K Jun 21 06:44 redis-shake.conf
-rwxr-xr-x 1 redis users 11M Jun 13 15:48 redis-shake.linux64
-rw-r--r-- 1 redis root 3.7M Jun 21 06:01 redis-shake.tar.gz
d. Edit the redis-shake configuration file
leyao-slb02 docker # docker-compose exec redis bash
root@fe2e836e4470:/data# vim redis-shake.conf ...
source.address = localhost:6379
source.password_raw = localRedisPwd
target.address = r-uf65427cede42c14.redis.rds.aliyuncs.com:6379
target.password_raw = yourALIredisPwd
...
# keep the remaining parameters at their defaults
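If only part of the data should be migrated, the 1.6.x redis-shake.conf also exposes DB-mapping and filtering options. The two keys below are shown purely as an illustration and were left at their defaults in this migration; exact names and semantics vary between redis-shake versions, so check the comments in the redis-shake.conf shipped in the tarball:

# illustrative only; both stayed at their defaults (-1 and empty) here
target.db = -1        # >= 0 would write all migrated data into that single target db
filter.db =           # e.g. "filter.db = 11" would restrict the sync to source db 11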
e. Run the migration with the following command
leyao-slb02 docker # docker-compose exec redis bash
root@fe2e836e4470:/data# ./redis-shake.linux64 -type=sync -conf=redis-shake.conf
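The sync keeps running in the incremental stage until it is stopped, so it can be more convenient to start it detached from the terminal. A minimal sketch using plain shell (nothing redis-shake specific; the log file name is arbitrary):

root@fe2e836e4470:/data# nohup ./redis-shake.linux64 -type=sync -conf=redis-shake.conf > redis-shake.out 2>&1 &
root@fe2e836e4470:/data# tail -f redis-shake.out    # follows the same log shown below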
f. Watch the sync log to confirm the sync status. Once "sync rdb done" appears, the full (RDB) sync has completed and the migration has entered the incremental stage.
root@fe2e836e4470:/data# ./redis-shake.linux64 -type=sync -conf=redis-shake.conf
2019/06/27 06:53:56 [WARN]
______________________________
\ \ _ ______ |
\ \ / \___-=O'/|O'/__|
\ redis-shake, here we go !! \_______\ / | / )
/ / '/-==__ _/__|/__=-| -GM
/ / * \ | |
/ / (o)
------------------------------
if you have any problem, please visit https://github.com/alibaba/RedisShake/wiki/FAQ
2019/06/27 06:53:56 [INFO] redis-shake configuration: {"Id":"redis-shake","LogFile":"","LogLevel":"info","SystemProfile":9310,"HttpProfile":9320,"NCpu":0,"Parallel":32,"SourceType":"standalone","SourceAddress":"localhost:6379","SourcePasswordRaw":"bckBuqb5hDhCQfSr9eTVEYufn7gBxJ5k","SourcePasswordEncoding":"","SourceVersion":0,"SourceAuthType":"auth","SourceParallel":1,"SourceTLSEnable":false,"TargetAddress":"r-uf65427cede42c14.redis.rds.aliyuncs.com:6379","TargetPasswordRaw":"Karl@612500","TargetPasswordEncoding":"","TargetVersion":0,"TargetDBString":"-1","TargetAuthType":"auth","TargetType":"standalone","TargetTLSEnable":false,"RdbInput":["local"],"RdbOutput":"local_dump","RdbParallel":1,"RdbSpecialCloud":"","FakeTime":"","Rewrite":true,"FilterDB":"","FilterKey":[],"FilterSlot":[],"BigKeyThreshold":524288000,"Psync":false,"Metric":true,"MetricPrintLog":false,"HeartbeatUrl":"","HeartbeatInterval":3,"HeartbeatExternal":"test external","HeartbeatNetworkInterface":"","SenderSize":104857600,"SenderCount":5000,"SenderDelayChannelSize":65535,"KeepAlive":0,"PidPath":"","ScanKeyNumber":50,"ScanSpecialCloud":"","ScanKeyFile":"","Qps":200000,"ReplaceHashTag":false,"ExtraInfo":false,"SockFileName":"","SockFileSize":0,"SourceAddressList":["localhost:6379"],"TargetAddressList":["r-uf65427cede42c14.redis.rds.aliyuncs.com:6379"],"HeartbeatIp":"127.0.0.1","ShiftTime":0,"TargetRedisVersion":"4.0.11","TargetReplace":true,"TargetDB":-1,"Version":"improve-1.6.7,678f43481a4826764ed71fedd744a7ee23736536,go1.10.3,2019-06-13_23:48:39"}
2019/06/27 06:53:56 [INFO] routine[0] starts syncing data from localhost:6379 to [r-uf65427cede42c14.redis.rds.aliyuncs.com:6379] with http[9321]
2019/06/27 06:53:57 [INFO] dbSyncer[0] rdb file size = 3429472
2019/06/27 06:53:57 [INFO] Aux information key:redis-ver value:5.0.5
2019/06/27 06:53:57 [INFO] Aux information key:redis-bits value:64
2019/06/27 06:53:57 [INFO] Aux information key:ctime value:1561618436
2019/06/27 06:53:57 [INFO] Aux information key:used-mem value:27379792
2019/06/27 06:53:57 [INFO] Aux information key:repl-stream-db value:0
2019/06/27 06:53:57 [INFO] Aux information key:repl-id value:6641200d52e448927a79ce3e0a3cec641302da7f
2019/06/27 06:53:57 [INFO] Aux information key:repl-offset value:0
2019/06/27 06:53:57 [INFO] Aux information key:aof-preamble value:0
2019/06/27 06:53:57 [INFO] db_size:1 expire_size:1
2019/06/27 06:53:57 [INFO] db_size:3 expire_size:1
2019/06/27 06:53:57 [INFO] db_size:9 expire_size:9
2019/06/27 06:53:57 [INFO] db_size:7 expire_size:4
2019/06/27 06:53:57 [INFO] db_size:6 expire_size:0
2019/06/27 06:53:57 [INFO] db_size:6 expire_size:0
2019/06/27 06:53:57 [INFO] Aux information key:lua value:-- Pop the first job off of the queue...
local job = redis.call('lpop', KEYS[1])
local reserved = false if(job ~= false) then
-- Increment the attempt count and place job on the reserved queue...
reserved = cjson.decode(job)
reserved['attempts'] = reserved['attempts'] + 1
reserved = cjson.encode(reserved)
redis.call('zadd', KEYS[2], ARGV[1], reserved)
end return {job, reserved}
2019/06/27 06:53:57 [INFO] Aux information key:lua value:-- Get all of the jobs with an expired "score"...
local val = redis.call('zrangebyscore', KEYS[1], '-inf', ARGV[1]) -- If we have values in the array, we will remove them from the first queue
-- and add them onto the destination queue in chunks of 100, which moves
-- all of the appropriate jobs onto the destination queue very safely.
if(next(val) ~= nil) then
redis.call('zremrangebyrank', KEYS[1], 0, #val - 1) for i = 1, #val, 100 do
redis.call('rpush', KEYS[2], unpack(val, i, math.min(i+99, #val)))
end
end return val
2019/06/27 06:53:57 [INFO] Aux information key:lua value:return redis.call('exists',KEYS[1])<1 and redis.call('setex',KEYS[1],ARGV[2],ARGV[1])
2019/06/27 06:53:57 [INFO] dbSyncer[0] total=3429472 - 3429472 [100%] entry=35
2019/06/27 06:53:57 [INFO] dbSyncer[0] sync rdb done
2019/06/27 06:53:57 [WARN] dbSyncer[0] GetFakeSlaveOffset not enable when psync == false
2019/06/27 06:53:57 [INFO] dbSyncer[0] Event:IncrSyncStart Id:redis-shake
2019/06/27 06:53:58 [INFO] dbSyncer[0] sync: +forwardCommands=0 +filterCommands=0 +writeBytes=0
2019/06/27 06:53:59 [INFO] dbSyncer[0] sync: +forwardCommands=7 +filterCommands=0 +writeBytes=34
2019/06/27 06:54:00 [INFO] dbSyncer[0] sync: +forwardCommands=6 +filterCommands=0 +writeBytes=27
g. Log in to the Alibaba Cloud Redis instance and check that the data has been synchronized
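For a quick sanity check you can also compare the keyspace summary of both instances from the command line; the hostname and passwords below are the placeholders used in the steps above:

root@fe2e836e4470:/data# redis-cli -a localRedisPwd info keyspace
root@fe2e836e4470:/data# redis-cli -h r-uf65427cede42c14.redis.rds.aliyuncs.com -p 6379 -a yourALIredisPwd info keyspace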