Implementing a Distributed Lock with a Redis Cluster, JedisCluster, and Lua Scripts (reposted)
https://blog.csdn.net/qq_20597727/article/details/85235602
This article uses the Jedis client to run the Lua script, and also relies on Jedis's atomic SET variant to set the lock value and its expiration time in a single operation. The fundamental reason for using a Lua script is to guarantee atomicity across the two Redis commands involved in unlocking, which makes the distributed lock more reliable.
JedisCluster configuration
The example implements the distributed lock on a Redis cluster, so JedisCluster must be configured before the lock itself. The author develops on Spring Boot; the required JedisCluster setup is as follows.
First, add the dependencies shown below.
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-redis</artifactId>
</dependency>
<dependency>
    <groupId>redis.clients</groupId>
    <artifactId>jedis</artifactId>
</dependency>
Add the necessary configuration properties:
# Redis cluster connection settings
spring.redis.cluster.nodes=192.168.0.15:6379,192.168.0.15:6380,192.168.0.15:6381,192.168.0.15:6382,192.168.0.15:6383,192.168.0.15:6384
#redis
spring.redis.cluster.max-redirects=6
spring.redis.jedis.pool.max-active=80
spring.redis.jedis.pool.max-idle=30
spring.redis.jedis.pool.max-wait=2000s
spring.redis.jedis.pool.min-idle=10
Next comes the JedisCluster configuration itself. A standalone Jedis setup has an equivalent configuration, which is not covered here. The JedisCluster configuration looks like this.
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.util.StringUtils;
import redis.clients.jedis.HostAndPort;
import redis.clients.jedis.JedisCluster;

import java.util.HashSet;
import java.util.Set;

/**
 * @author zhoujy
 * @date 2018-12-19
 **/
@Configuration
public class RedisDistributeLockConfig {

    @Value("${spring.redis.cluster.nodes}")
    String redisNodes;

    // The distributed-lock bean; its implementation is explained below.
    @Bean
    public RedisDistributeLock redisDistributeLock(JedisCluster jedisCluster){
        return new RedisDistributeLock(jedisCluster);
    }

    // The JedisCluster bean used for all cluster operations.
    @Bean
    public JedisCluster jedisCluster(){
        return new JedisCluster(parseHostAndPorts());
    }

    private Set<HostAndPort> parseHostAndPorts(){
        if (StringUtils.isEmpty(redisNodes)){
            throw new RuntimeException("redis nodes can't be null or empty");
        }
        String[] hps = redisNodes.split(",");
        Set<HostAndPort> hostAndPorts = new HashSet<>();
        for (String hp : hps) {
            String[] hap = hp.split(":");
            hostAndPorts.add(new HostAndPort(hap[0], Integer.parseInt(hap[1])));
        }
        return hostAndPorts;
    }
}
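A caveat worth noting: the spring.redis.jedis.pool.* properties above are picked up by Spring Data Redis, but the plain new JedisCluster(Set<HostAndPort>) constructor used here does not apply them. If the pool settings should also govern this JedisCluster bean, they can be passed explicitly. A minimal sketch, assuming Jedis's five-argument constructor and commons-pool2 on the classpath; the timeout and retry values are illustrative, not from the original post.
// Sketch of an alternative bean definition (not part of the original configuration).
// Requires org.apache.commons.pool2.impl.GenericObjectPoolConfig on the classpath.
@Bean
public JedisCluster pooledJedisCluster(){
    GenericObjectPoolConfig poolConfig = new GenericObjectPoolConfig();
    poolConfig.setMaxTotal(80);   // mirrors spring.redis.jedis.pool.max-active
    poolConfig.setMaxIdle(30);    // mirrors spring.redis.jedis.pool.max-idle
    poolConfig.setMinIdle(10);    // mirrors spring.redis.jedis.pool.min-idle
    // connectionTimeout = 2000 ms, soTimeout = 2000 ms, maxAttempts = 6 (assumed values)
    return new JedisCluster(parseHostAndPorts(), 2000, 2000, 6, poolConfig);
}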
Distributed lock implementation
With JedisCluster configured, let's look at how the distributed lock is implemented.
The full code is shown below.
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.util.StringUtils;
import redis.clients.jedis.JedisCluster;
import redis.clients.jedis.exceptions.JedisNoScriptException;

import java.util.ArrayList;
import java.util.List;

/**
 * Distributed lock based on JedisCluster + a Lua script.
 * @author zhoujy
 * @date 2018-12-19
 **/
public class RedisDistributeLock {
    private Logger logger = LoggerFactory.getLogger(RedisDistributeLock.class);
    private JedisCluster jedisCluster;
    /**
     * Lua script: delete the key only if its value matches the caller's lock value,
     * i.e. only the lock holder may unlock; otherwise return 0 (unlock failed).
     */
    private static final String DISTRIBUTE_LOCK_SCRIPT_UNLOCK_VAL = "if" +
            " redis.call('get', KEYS[1]) == ARGV[1]" +
            " then" +
            " return redis.call('del', KEYS[1])" +
            " else" +
            " return 0" +
            " end";
    private volatile String unlockSha1 = "";
    private static final Long UNLOCK_SUCCESS_CODE = 1L;
    private static final String LOCK_SUCCESS_CODE = "ok";

    public RedisDistributeLock(JedisCluster jedisCluster) {
        this.jedisCluster = jedisCluster;
    }

    /**
     * Keep retrying in a loop for up to loopTryTime milliseconds.
     * @param lockKey     lock key
     * @param lockVal     lock value, used to verify ownership on unlock
     * @param expiryTime  lock expiration time in milliseconds
     * @param loopTryTime how long to keep retrying when acquisition fails, in milliseconds
     * @return whether the lock was acquired
     */
    public boolean tryLock(String lockKey, String lockVal, long expiryTime, long loopTryTime){
        long endTime = System.currentTimeMillis() + loopTryTime;
        // Note: this busy-spins until the deadline; every iteration sends a SET to Redis.
        while (System.currentTimeMillis() < endTime){
            if (tryLock(lockKey, lockVal, expiryTime)){
                return true;
            }
        }
        return false;
    }

    /**
     * Retry a fixed number of times, sleeping between attempts.
     * @param lockKey    lock key
     * @param lockVal    lock value, used to verify ownership on unlock
     * @param expiryTime lock expiration time in milliseconds
     * @param retryTimes number of retries
     * @param stepTime   interval between retries, in milliseconds
     * @return whether the lock was acquired
     */
    public boolean tryLock(String lockKey, String lockVal, long expiryTime, int retryTimes, long stepTime){
        while (retryTimes > 0){
            if (tryLock(lockKey, lockVal, expiryTime)){
                return true;
            }
            retryTimes--;
            try {
                Thread.sleep(stepTime);
            } catch (InterruptedException e) {
                logger.error("get distribute lock error" + e.getLocalizedMessage());
            }
        }
        return false;
    }

    /**
     * Single attempt, fail fast. Not reentrant.
     * @param lockKey    lock key
     * @param lockVal    lock value, used to verify ownership on unlock
     * @param expiryTime lock expiration time in milliseconds
     * @return whether the lock was acquired
     */
    public boolean tryLock(String lockKey, String lockVal, long expiryTime){
        // Unlike a naive implementation, SETNX and the expiry are combined into one atomic SET
        // (NX + PX), so a crash between two separate commands cannot leave a lock without a TTL.
        // This step could also be implemented with a Lua script.
        String result = jedisCluster.set(lockKey, lockVal, "NX", "PX", expiryTime);
        return LOCK_SUCCESS_CODE.equalsIgnoreCase(result);
    }

    /**
     * Release the distributed lock. The most likely cause of a failed release is that the business
     * logic ran longer than the key's TTL; tune the expiration time to the business scenario.
     * @param lockKey lock key
     * @param lockVal lock value
     * @return whether the lock was released
     */
    public boolean tryUnLock(String lockKey, String lockVal){
        List<String> keys = new ArrayList<>();
        keys.add(lockKey);
        List<String> argv = new ArrayList<>();
        argv.add(lockVal);
        try {
            Object result = jedisCluster.evalsha(unlockSha1, keys, argv);
            return UNLOCK_SUCCESS_CODE.equals(result);
        } catch (JedisNoScriptException e){
            // The target node has no cached script yet: load it and retry once.
            logger.info("try to store script......");
            storeScript(lockKey);
            Object result = jedisCluster.evalsha(unlockSha1, keys, argv);
            return UNLOCK_SUCCESS_CODE.equals(result);
        } catch (Exception e){
            e.printStackTrace();
            return false;
        }
    }

    /**
     * Because a Redis cluster is used, every node needs its own copy of the script.
     * @param slotKey key used to locate the slot (and therefore the node) to load the script on
     */
    public void storeScript(String slotKey){
        if (StringUtils.isEmpty(unlockSha1) || !jedisCluster.scriptExists(unlockSha1, slotKey)){
            // Redis caches the script and returns its SHA1 hash, which is used for later calls.
            unlockSha1 = jedisCluster.scriptLoad(DISTRIBUTE_LOCK_SCRIPT_UNLOCK_VAL, slotKey);
        }
    }
}
Let's walk through the code above step by step.
Acquiring the lock
Compared with a typical Redis distributed lock, locking here goes through Jedis's combined SET call, which guarantees atomicity between setting the value and setting the expiration. This avoids the case where the process crashes right after setting the value but before setting the TTL, leaving the key locked forever.
We could also implement this step ourselves with a Lua script, but fortunately Jedis already provides it.
/**
 * Single attempt, fail fast. Not reentrant.
 * @param lockKey    lock key
 * @param lockVal    lock value, used to verify ownership on unlock
 * @param expiryTime lock expiration time in milliseconds
 * @return whether the lock was acquired
 */
public boolean tryLock(String lockKey, String lockVal, long expiryTime){
    // Unlike a naive implementation, SETNX and the expiry are combined into one atomic SET,
    // so a crash between two separate commands cannot leave a lock without a TTL.
    // This step could also be implemented with a Lua script.
    // NX means "set only if the key does not exist"; PX means the expiry is given in milliseconds.
    String result = jedisCluster.set(lockKey, lockVal, "NX", "PX", expiryTime);
    return LOCK_SUCCESS_CODE.equalsIgnoreCase(result);
}
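As the comment above says, this acquisition step could also be done with a Lua script instead of the combined SET call. A minimal sketch of that variant is shown below; the script text, the method name and the eval call are an illustration, not part of the original implementation.
// Sketch only: acquire the lock via a Lua script; equivalent to the Jedis SET NX PX call above.
private static final String DISTRIBUTE_LOCK_SCRIPT_LOCK =
        "return redis.call('set', KEYS[1], ARGV[1], 'NX', 'PX', ARGV[2])";

public boolean tryLockWithLua(String lockKey, String lockVal, long expiryTime){
    Object result = jedisCluster.eval(DISTRIBUTE_LOCK_SCRIPT_LOCK,
            java.util.Collections.singletonList(lockKey),
            java.util.Arrays.asList(lockVal, String.valueOf(expiryTime)));
    // SET ... NX returns OK on success; if the key already exists the script returns nil (null here).
    return result != null;
}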
The lock acquisition also has a couple of simple overloads — one that retries a fixed number of times and one that loops until a deadline — to be used as the business scenario requires.
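A minimal usage sketch of the fixed-retry overload; the key, the value and the timings are made up for illustration, and redisDistributeLock is assumed to be the injected bean.
// Sketch: try up to 5 times, sleeping 200 ms between attempts; lock TTL is 10 seconds.
String lockKey = "ORDER_LOCK_1001";                        // hypothetical key
String lockVal = java.util.UUID.randomUUID().toString();   // unique per holder, checked on unlock
if (redisDistributeLock.tryLock(lockKey, lockVal, 10000, 5, 200)) {
    try {
        // ... protected business logic ...
    } finally {
        redisDistributeLock.tryUnLock(lockKey, lockVal);
    }
}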
Releasing the lock
Unlocking here is implemented with the help of a Lua script.
As is well known, a distributed lock must verify ownership before it unlocks. That requires two steps: GET the value stored under the key, then compare it with the caller's own lock value and, if they match, delete the key. Run as two separate commands, this sequence can delete another holder's lock. The failure case looks like this: competitor 1 holds lock1 and wants to release it, so it first GETs the value and confirms it is the holder, then sends a DEL. Between the GET and the DEL the key can expire; competitor 2 then acquires the lock and sets lock2. When competitor 1's DEL finally reaches redis-server it deletes lock2 by mistake, a third competitor can then acquire the lock while competitor 2 still believes it holds it, and the business logic fails in unpredictable ways.
The benefit of the Lua script is that Redis executes the whole script atomically: with GET and DEL inside one script, another competitor's lock can never be deleted by mistake. If the lock value happens to expire right after the GET inside the script, the worst that happens is the DEL returns 0; it never deletes someone else's lock.
/**
 * Release the distributed lock. The most likely cause of a failed release is that the business
 * logic ran longer than the key's TTL; tune the expiration time to the business scenario.
 * @param lockKey lock key
 * @param lockVal lock value
 * @return whether the lock was released
 */
public boolean tryUnLock(String lockKey, String lockVal){
    List<String> keys = new ArrayList<>();
    keys.add(lockKey);
    List<String> argv = new ArrayList<>();
    argv.add(lockVal);
    try {
        Object result = jedisCluster.evalsha(unlockSha1, keys, argv);
        return UNLOCK_SUCCESS_CODE.equals(result);
    } catch (JedisNoScriptException e){
        // The target node has no cached script yet: send the script again and cache it.
        logger.info("try to store script......");
        storeScript(lockKey);
        // retry the unlock
        Object result = jedisCluster.evalsha(unlockSha1, keys, argv);
        return UNLOCK_SUCCESS_CODE.equals(result);
    } catch (Exception e){
        e.printStackTrace();
        return false;
    }
}
The unlock script
/**
 * Lua script: delete the key only if its value matches the caller's lock value,
 * i.e. only the lock holder may unlock; otherwise return 0 (unlock failed).
 */
private static final String DISTRIBUTE_LOCK_SCRIPT_UNLOCK_VAL = "if" +
        " redis.call('get', KEYS[1]) == ARGV[1]" +
        " then" +
        " return redis.call('del', KEYS[1])" +
        " else" +
        " return 0" +
        " end";
Caching the Lua script
In a Redis cluster, resending the full script body on every call wastes network bandwidth. The SCRIPT LOAD command caches the script and returns a SHA1 hash that serves as a handle; subsequent calls only need to send that hash.
127.0.0.1:6381> script load "if redis.call('get', KEYS[1]) == ARGV[1] then return redis.call('del', KEYS[1]) else return 0 end"
"e9f69f2beb755be68b5e456ee2ce9aadfbc4ebf4"
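Once cached, the script can be invoked by its hash with EVALSHA. A hedged redis-cli sketch (the key and value are illustrative; here the key does not exist yet, so the script returns 0):
127.0.0.1:6381> evalsha e9f69f2beb755be68b5e456ee2ce9aadfbc4ebf4 1 TEST_LOCK_KEY TEST_LOCK_VAL_0
(integer) 0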
The above shows how to cache the script from redis-cli. In the program, the Lua script is stored as shown below. Jedis makes this straightforward: check whether the script is already cached, and if not, load it and keep the returned hash for subsequent calls.
Important: in a Redis cluster, every node needs its own copy of the cached script; otherwise the call fails with
NOSCRIPT No matching script. Please use EVAL.
which is why the code handles that case explicitly.
/**
 * Because a Redis cluster is used, every node needs its own copy of the script.
 * @param slotKey key used to locate the slot (and therefore the node) to load the script on
 */
public void storeScript(String slotKey){
    if (StringUtils.isEmpty(unlockSha1) || !jedisCluster.scriptExists(unlockSha1, slotKey)){
        // Redis caches the script and returns its SHA1 hash, which is used for later calls.
        unlockSha1 = jedisCluster.scriptLoad(DISTRIBUTE_LOCK_SCRIPT_UNLOCK_VAL, slotKey);
    }
}
The slotKey is simply the key used when setting the value; Redis runs the key through the CRC16 function to work out which slot it belongs to. If the Redis node that owns that slot has not cached the script, it raises a NOSCRIPT No matching script. Please use EVAL. error, so when that exception is caught the code simply resends the script so it gets cached (see the slot-calculation sketch after the code below).
/**
 * Release the distributed lock. The most likely cause of a failed release is that the business
 * logic ran longer than the key's TTL; tune the expiration time to the business scenario.
 * @param lockKey lock key
 * @param lockVal lock value
 * @return whether the lock was released
 */
public boolean tryUnLock(String lockKey, String lockVal){
    List<String> keys = new ArrayList<>();
    keys.add(lockKey);
    List<String> argv = new ArrayList<>();
    argv.add(lockVal);
    try {
        Object result = jedisCluster.evalsha(unlockSha1, keys, argv);
        return UNLOCK_SUCCESS_CODE.equals(result);
    } catch (JedisNoScriptException e){
        // No cached script on the target node: compute the slot from lockKey and
        // cache a copy of the script on the corresponding Redis node.
        logger.info("try to store script......");
        storeScript(lockKey);
        // retry the unlock
        Object result = jedisCluster.evalsha(unlockSha1, keys, argv);
        return UNLOCK_SUCCESS_CODE.equals(result);
    } catch (Exception e){
        e.printStackTrace();
        return false;
    }
}
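As a side note, the slot a key maps to can also be computed on the client. A minimal sketch, assuming Jedis's JedisClusterCRC16 utility is available (located at redis.clients.jedis.util.JedisClusterCRC16 in Jedis 3.x, redis.clients.util.JedisClusterCRC16 in older versions):
import redis.clients.jedis.util.JedisClusterCRC16;

public class SlotDemo {
    public static void main(String[] args) {
        // CRC16(key) mod 16384 gives the cluster slot; the node that owns this slot
        // is the one that must hold the cached unlock script.
        int slot = JedisClusterCRC16.getSlot("TEST_LOCK_KEY");
        System.out.println("TEST_LOCK_KEY maps to slot " + slot);
    }
}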
That is the whole approach to a distributed lock based on a Redis cluster plus Lua scripts.
Test case
import javax.annotation.Resource;

import lombok.extern.slf4j.Slf4j;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.context.junit4.SpringRunner;

@RunWith(SpringRunner.class)
@SpringBootTest(classes = {ActivityServiceApplication.class})
@Slf4j
public class ActivityServiceApplicationTests {

    @Resource
    private RedisDistributeLock redisDistributeLock;

    @Test
    public void testRedislock() throws InterruptedException {
        for (int i = 0; i < 50; i++) {
            int finalI = i;
            new Thread(() -> {
                // lock TTL = 100 s, keep retrying for up to 20 s
                if (redisDistributeLock.tryLock("TEST_LOCK_KEY", "TEST_LOCK_VAL_" + finalI, 1000 * 100, 1000 * 20)) {
                    try {
                        log.warn("get lock successfully with lock value:-----" + "TEST_LOCK_VAL_" + finalI);
                        Thread.sleep(2000);
                        if (!redisDistributeLock.tryUnLock("TEST_LOCK_KEY", "TEST_LOCK_VAL_" + finalI)) {
                            throw new RuntimeException("release lock fail");
                        }
                        log.warn("release lock successfully with lock value:-----" + "TEST_LOCK_VAL_" + finalI);
                    } catch (InterruptedException e) {
                        e.printStackTrace();
                    }
                } else {
                    log.warn("get lock fail with lock value:-----" + "TEST_LOCK_VAL_" + finalI);
                }
            }).start();
        }
        // keep the JVM alive long enough for the worker threads to finish
        Thread.sleep(1000 * 1000);
    }
}
Fifty threads try to acquire the distributed lock, each retrying for up to 20 seconds; a thread that gets the lock sleeps for 2 seconds and then releases it.
The result: about 10 threads acquire the lock one after another (a 20-second retry window divided by a 2-second hold per owner), while the remaining 40 threads time out and fail.
2018-12-24 15:35:16.462 WARN 42580 --- [ Thread-61] g.learn.ActivityServiceApplicationTests : get lock successfully with lock value:-----TEST_LOCK_VAL_47
2018-12-24 15:35:18.710 WARN 42580 --- [ Thread-61] g.learn.ActivityServiceApplicationTests : release lock successfully with lock value:-----TEST_LOCK_VAL_47
2018-12-24 15:35:18.711 WARN 42580 --- [ Thread-14] g.learn.ActivityServiceApplicationTests : get lock successfully with lock value:-----TEST_LOCK_VAL_0
2018-12-24 15:35:20.788 WARN 42580 --- [ Thread-14] g.learn.ActivityServiceApplicationTests : release lock successfully with lock value:-----TEST_LOCK_VAL_0
2018-12-24 15:35:20.789 WARN 42580 --- [ Thread-51] g.learn.ActivityServiceApplicationTests : get lock successfully with lock value:-----TEST_LOCK_VAL_37
2018-12-24 15:35:22.831 WARN 42580 --- [ Thread-47] g.learn.ActivityServiceApplicationTests : get lock successfully with lock value:-----TEST_LOCK_VAL_33
2018-12-24 15:35:22.831 WARN 42580 --- [ Thread-51] g.learn.ActivityServiceApplicationTests : release lock successfully with lock value:-----TEST_LOCK_VAL_37
2018-12-24 15:35:25.177 WARN 42580 --- [ Thread-47] g.learn.ActivityServiceApplicationTests : release lock successfully with lock value:-----TEST_LOCK_VAL_33
2018-12-24 15:35:25.177 WARN 42580 --- [ Thread-55] g.learn.ActivityServiceApplicationTests : get lock successfully with lock value:-----TEST_LOCK_VAL_41
2018-12-24 15:35:27.227 WARN 42580 --- [ Thread-55] g.learn.ActivityServiceApplicationTests : release lock successfully with lock value:-----TEST_LOCK_VAL_41
2018-12-24 15:35:27.228 WARN 42580 --- [ Thread-36] g.learn.ActivityServiceApplicationTests : get lock successfully with lock value:-----TEST_LOCK_VAL_22
2018-12-24 15:35:29.586 WARN 42580 --- [ Thread-36] g.learn.ActivityServiceApplicationTests : release lock successfully with lock value:-----TEST_LOCK_VAL_22
2018-12-24 15:35:29.587 WARN 42580 --- [ Thread-35] g.learn.ActivityServiceApplicationTests : get lock successfully with lock value:-----TEST_LOCK_VAL_21
2018-12-24 15:35:31.609 WARN 42580 --- [ Thread-35] g.learn.ActivityServiceApplicationTests : release lock successfully with lock value:-----TEST_LOCK_VAL_21
2018-12-24 15:35:31.610 WARN 42580 --- [ Thread-28] g.learn.ActivityServiceApplicationTests : get lock successfully with lock value:-----TEST_LOCK_VAL_14
2018-12-24 15:35:34.071 WARN 42580 --- [ Thread-28] g.learn.ActivityServiceApplicationTests : release lock successfully with lock value:-----TEST_LOCK_VAL_14
2018-12-24 15:35:34.071 WARN 42580 --- [ Thread-31] g.learn.ActivityServiceApplicationTests : get lock successfully with lock value:-----TEST_LOCK_VAL_17
2018-12-24 15:35:36.089 WARN 42580 --- [ Thread-31] g.learn.ActivityServiceApplicationTests : release lock successfully with lock value:-----TEST_LOCK_VAL_17
2018-12-24 15:35:36.089 WARN 42580 --- [ Thread-54] g.learn.ActivityServiceApplicationTests : get lock successfully with lock value:-----TEST_LOCK_VAL_40
2018-12-24 15:35:36.449 WARN 42580 --- [ Thread-60] g.learn.ActivityServiceApplicationTests : get lock fail with lock value:-----TEST_LOCK_VAL_46
2018-12-24 15:35:36.450 WARN 42580 --- [ Thread-52] g.learn.ActivityServiceApplicationTests : get lock fail with lock value:-----TEST_LOCK_VAL_38
2018-12-24 15:35:36.450 WARN 42580 --- [ Thread-56] g.learn.ActivityServiceApplicationTests : get lock fail with lock value:-----TEST_LOCK_VAL_42
2018-12-24 15:35:36.450 WARN 42580 --- [ Thread-59] g.learn.ActivityServiceApplicationTests : get lock fail with lock value:-----TEST_LOCK_VAL_45
2018-12-24 15:35:36.453 WARN 42580 --- [ Thread-38] g.learn.ActivityServiceApplicationTests : get lock fail with lock value:-----TEST_LOCK_VAL_24
2018-12-24 15:35:36.453 WARN 42580 --- [ Thread-37] g.learn.ActivityServiceApplicationTests : get lock fail with lock value:-----TEST_LOCK_VAL_23
2018-12-24 15:35:36.453 WARN 42580 --- [ Thread-45] g.learn.ActivityServiceApplicationTests : get lock fail with lock value:-----TEST_LOCK_VAL_31
2018-12-24 15:35:36.453 WARN 42580 --- [ Thread-43] g.learn.ActivityServiceApplicationTests : get lock fail with lock value:-----TEST_LOCK_VAL_29
2018-12-24 15:35:36.453 WARN 42580 --- [ Thread-49] g.learn.ActivityServiceApplicationTests : get lock fail with lock value:-----TEST_LOCK_VAL_35
2018-12-24 15:35:36.453 WARN 42580 --- [ Thread-39] g.learn.ActivityServiceApplicationTests : get lock fail with lock value:-----TEST_LOCK_VAL_25
2018-12-24 15:35:36.453 WARN 42580 --- [ Thread-34] g.learn.ActivityServiceApplicationTests : get lock fail with lock value:-----TEST_LOCK_VAL_20
2018-12-24 15:35:36.454 WARN 42580 --- [ Thread-16] g.learn.ActivityServiceApplicationTests : get lock fail with lock value:-----TEST_LOCK_VAL_2
2018-12-24 15:35:36.454 WARN 42580 --- [ Thread-26] g.learn.ActivityServiceApplicationTests : get lock fail with lock value:-----TEST_LOCK_VAL_12
2018-12-24 15:35:36.454 WARN 42580 --- [ Thread-21] g.learn.ActivityServiceApplicationTests : get lock fail with lock value:-----TEST_LOCK_VAL_7
2018-12-24 15:35:36.454 WARN 42580 --- [ Thread-22] g.learn.ActivityServiceApplicationTests : get lock fail with lock value:-----TEST_LOCK_VAL_8
2018-12-24 15:35:36.454 WARN 42580 --- [ Thread-15] g.learn.ActivityServiceApplicationTests : get lock fail with lock value:-----TEST_LOCK_VAL_1
2018-12-24 15:35:36.454 WARN 42580 --- [ Thread-27] g.learn.ActivityServiceApplicationTests : get lock fail with lock value:-----TEST_LOCK_VAL_13
2018-12-24 15:35:36.454 WARN 42580 --- [ Thread-17] g.learn.ActivityServiceApplicationTests : get lock fail with lock value:-----TEST_LOCK_VAL_3
2018-12-24 15:35:36.455 WARN 42580 --- [ Thread-25] g.learn.ActivityServiceApplicationTests : get lock fail with lock value:-----TEST_LOCK_VAL_11
2018-12-24 15:35:36.455 WARN 42580 --- [ Thread-62] g.learn.ActivityServiceApplicationTests : get lock fail with lock value:-----TEST_LOCK_VAL_48
2018-12-24 15:35:36.455 WARN 42580 --- [ Thread-57] g.learn.ActivityServiceApplicationTests : get lock fail with lock value:-----TEST_LOCK_VAL_43
2018-12-24 15:35:36.455 WARN 42580 --- [ Thread-40] g.learn.ActivityServiceApplicationTests : get lock fail with lock value:-----TEST_LOCK_VAL_26
2018-12-24 15:35:36.455 WARN 42580 --- [ Thread-33] g.learn.ActivityServiceApplicationTests : get lock fail with lock value:-----TEST_LOCK_VAL_19
2018-12-24 15:35:36.455 WARN 42580 --- [ Thread-30] g.learn.ActivityServiceApplicationTests : get lock fail with lock value:-----TEST_LOCK_VAL_16
2018-12-24 15:35:36.455 WARN 42580 --- [ Thread-41] g.learn.ActivityServiceApplicationTests : get lock fail with lock value:-----TEST_LOCK_VAL_27
2018-12-24 15:35:36.455 WARN 42580 --- [ Thread-42] g.learn.ActivityServiceApplicationTests : get lock fail with lock value:-----TEST_LOCK_VAL_28
2018-12-24 15:35:36.456 WARN 42580 --- [ Thread-48] g.learn.ActivityServiceApplicationTests : get lock fail with lock value:-----TEST_LOCK_VAL_34
2018-12-24 15:35:36.456 WARN 42580 --- [ Thread-63] g.learn.ActivityServiceApplicationTests : get lock fail with lock value:-----TEST_LOCK_VAL_49
2018-12-24 15:35:36.456 WARN 42580 --- [ Thread-29] g.learn.ActivityServiceApplicationTests : get lock fail with lock value:-----TEST_LOCK_VAL_15
2018-12-24 15:35:36.456 WARN 42580 --- [ Thread-32] g.learn.ActivityServiceApplicationTests : get lock fail with lock value:-----TEST_LOCK_VAL_18
2018-12-24 15:35:36.456 WARN 42580 --- [ Thread-50] g.learn.ActivityServiceApplicationTests : get lock fail with lock value:-----TEST_LOCK_VAL_36
2018-12-24 15:35:36.456 WARN 42580 --- [ Thread-46] g.learn.ActivityServiceApplicationTests : get lock fail with lock value:-----TEST_LOCK_VAL_32
2018-12-24 15:35:36.456 WARN 42580 --- [ Thread-19] g.learn.ActivityServiceApplicationTests : get lock fail with lock value:-----TEST_LOCK_VAL_5
2018-12-24 15:35:36.456 WARN 42580 --- [ Thread-20] g.learn.ActivityServiceApplicationTests : get lock fail with lock value:-----TEST_LOCK_VAL_6
2018-12-24 15:35:36.456 WARN 42580 --- [ Thread-53] g.learn.ActivityServiceApplicationTests : get lock fail with lock value:-----TEST_LOCK_VAL_39
2018-12-24 15:35:36.457 WARN 42580 --- [ Thread-23] g.learn.ActivityServiceApplicationTests : get lock fail with lock value:-----TEST_LOCK_VAL_9
2018-12-24 15:35:36.457 WARN 42580 --- [ Thread-58] g.learn.ActivityServiceApplicationTests : get lock fail with lock value:-----TEST_LOCK_VAL_44
2018-12-24 15:35:36.457 WARN 42580 --- [ Thread-18] g.learn.ActivityServiceApplicationTests : get lock fail with lock value:-----TEST_LOCK_VAL_4
2018-12-24 15:35:36.457 WARN 42580 --- [ Thread-24] g.learn.ActivityServiceApplicationTests : get lock fail with lock value:-----TEST_LOCK_VAL_10
2018-12-24 15:35:36.457 WARN 42580 --- [ Thread-44] g.learn.ActivityServiceApplicationTests : get lock fail with lock value:-----TEST_LOCK_VAL_30
2018-12-24 15:35:38.091 WARN 42580 --- [ Thread-54] g.learn.ActivityServiceApplicationTests : release lock successfully with lock value:-----TEST_LOCK_VAL_40