Preface

Following directly on the previous post, Distributed Lock Implementation (1): Redis, this article designs and implements a distributed lock with Zookeeper, and then studies the distributed-lock source code of the open-source client library Curator.

Design and Implementation

I. Basic Algorithm

1. Create an ephemeral sequential node under a designated parent node
2. Check whether the newly created node has the smallest sequence number among all children of that parent
3. If it is the smallest, the lock has been acquired; otherwise, place a watch on the child immediately preceding it, and when that node is deleted, re-attempt to acquire the lock
4. To unlock, delete the current node

II. Key Points

Ephemeral Sequential Nodes

The key to implementing a distributed lock on Zookeeper is its ephemeral sequential node feature. Zookeeper offers four node types (a small creation sketch follows the list):
1. PERSISTENT: a persistent node; it exists until explicitly deleted
2. PERSISTENT_SEQUENTIAL: a persistent sequential node; Zookeeper appends a monotonically increasing sequence number to the name
3. EPHEMERAL: an ephemeral node; it is deleted automatically when the client session ends
4. EPHEMERAL_SEQUENTIAL: an ephemeral sequential node; Zookeeper appends a sequence number, and the node is removed when the session ends
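
A minimal sketch of creating the four node types with ZkClient (the library used below; assumes a Zookeeper server on 127.0.0.1:2181 and a hypothetical /demo parent node):

import org.I0Itec.zkclient.ZkClient;

public class NodeTypesDemo {
    public static void main(String[] args) {
        ZkClient zk = new ZkClient("127.0.0.1:2181");

        zk.createPersistent("/demo");                               // survives session loss
        String p = zk.createPersistentSequential("/demo/p-", null);
        System.out.println(p);                                      // e.g. /demo/p-0000000000

        zk.createEphemeral("/demo/e");                              // deleted when this session closes
        String e = zk.createEphemeralSequential("/demo/e-", null);
        System.out.println(e);                                      // e.g. /demo/e-0000000001
    }
}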

Watches

Zookeeper provides an event-watch mechanism covering nodes, node data, and child nodes; we use this watcher mechanism to implement the waiting part of the lock.
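
The implementation below uses ZkClient's subscribeDataChanges for this. For comparison, the same one-shot wait can be sketched with the native Zookeeper API (assumes a connected org.apache.zookeeper.ZooKeeper handle named zk, plus the Watcher and Stat types from that package; the node path is hypothetical):

Stat stat = zk.exists("/distribute_lock/0000000003", event -> {
    if (event.getType() == Watcher.Event.EventType.NodeDeleted) {
        // the watched node is gone: the previous holder released the lock,
        // so re-check whether we are now the smallest child
    }
});
// stat == null means the node was already deleted before the watch was set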

III. Code Implementation

We build on the ZkClient client library; the native Zookeeper API would work much the same way. The Maven coordinates:

<dependency>
    <groupId>com.101tec</groupId>
    <artifactId>zkclient</artifactId>
    <version>0.2</version>
</dependency>

The code is as follows:

import org.I0Itec.zkclient.IZkDataListener;
import org.I0Itec.zkclient.ZkClient;

import java.util.Collections;
import java.util.List;
import java.util.concurrent.CountDownLatch;

public class MyDistributedLock {

    private static final String PARENT_LOCK_PATH = "/distribute_lock";

    private ZkClient zkClient;
    private String name;
    private String currentLockPath;
    private CountDownLatch countDownLatch;

    public MyDistributedLock(ZkClient zkClient, String name) {
        this.zkClient = zkClient;
        this.name = name;
    }

    // acquire the lock
    public void lock() {
        // create the parent node if it does not exist yet
        if (!zkClient.exists(PARENT_LOCK_PATH)) {
            try {
                // among concurrent threads only one create succeeds; the rest throw and are ignored
                zkClient.createPersistent(PARENT_LOCK_PATH);
            } catch (Exception ignored) {
            }
        }
        // create an ephemeral sequential node under the parent
        currentLockPath = zkClient.createEphemeralSequential(PARENT_LOCK_PATH + "/", System.currentTimeMillis());
        // check whether our node has the smallest sequence number
        checkMinNode(currentLockPath);
    }

    // release the lock
    public void unlock() {
        System.out.println("delete : " + currentLockPath);
        zkClient.delete(currentLockPath);
    }

    private boolean checkMinNode(String lockPath) {
        // list all children of the parent node
        List<String> children = zkClient.getChildren(PARENT_LOCK_PATH);
        Collections.sort(children);
        int index = children.indexOf(lockPath.substring(PARENT_LOCK_PATH.length() + 1));
        if (index == 0) {
            System.out.println(name + ":success");
            if (countDownLatch != null) {
                countDownLatch.countDown();
            }
            return true;
        } else {
            String waitPath = PARENT_LOCK_PATH + "/" + children.get(index - 1);
            // block until the previous node is released
            waitForLock(waitPath);
            return false;
        }
    }

    private void waitForLock(String prev) {
        System.out.println(name + " current path :" + currentLockPath + ":fail add listener" + " wait path :" + prev);
        countDownLatch = new CountDownLatch(1);
        zkClient.subscribeDataChanges(prev, new IZkDataListener() {
            @Override
            public void handleDataChange(String s, Object o) throws Exception {
            }

            @Override
            public void handleDataDeleted(String s) throws Exception {
                System.out.println("prev node is done");
                checkMinNode(currentLockPath);
            }
        });
        // the previous node may already be gone by now; if so, there is nothing to wait for
        if (!zkClient.exists(prev)) {
            return;
        }
        try {
            countDownLatch.await();
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        countDownLatch = null;
    }
}

Locking

  1. zkClient.exists first checks whether the parent node exists and creates it if not; Zookeeper guarantees the creation succeeds only once

  2. zkClient.createEphemeralSequential then creates an ephemeral sequential node under that directory; if it has the smallest sequence number, the lock has been acquired, otherwise we take the node immediately preceding ours and watch it

  3. waitForLock waits on that preceding node: subscribeDataChanges watches it for changes, and handleDataDeleted runs checkMinNode again

  4. After subscribing, we check once more whether the preceding node still exists, because it may have released the lock while the watch was being set up; if it is already gone, there is no need to wait. The waiting itself is implemented with a countDownLatch

Unlocking

Unlocking simply deletes the current node via zkClient's delete.

Test Case

Start a number of threads to exercise the lock/unlock sequence and check that the ordering is correct.

public class MyDistributedLockTest {

    public static void main(String[] args) {

        ZkClient zk = new ZkClient("127.0.0.1:2181", 5 * 10000);

        for (int i = 0; i < 20; i++) {
            String name = "thread" + i;
            Thread thread = new Thread(() -> {
                MyDistributedLock myDistributedLock = new MyDistributedLock(zk, name);
                myDistributedLock.lock();
                // try {
                //     Thread.sleep(1 * 1000);
                // } catch (InterruptedException e) {
                //     e.printStackTrace();
                // }
                myDistributedLock.unlock();
            });
            thread.start();
        }
    }
}

The output is shown below; with multiple threads, lock/unlock and the watches all behave as expected:

thread1 current path :/distribute_lock2/0000000007:fail add listener wait path :/distribute_lock2/0000000006
thread6 current path :/distribute_lock2/0000000006:fail add listener wait path :/distribute_lock2/0000000005
thread3:success
delete : /distribute_lock2/0000000000
thread2 current path :/distribute_lock2/0000000005:fail add listener wait path :/distribute_lock2/0000000004
thread7 current path :/distribute_lock2/0000000004:fail add listener wait path :/distribute_lock2/0000000003
thread9 current path :/distribute_lock2/0000000009:fail add listener wait path :/distribute_lock2/0000000008
thread5 current path :/distribute_lock2/0000000008:fail add listener wait path :/distribute_lock2/0000000007
thread0 current path :/distribute_lock2/0000000001:fail add listener wait path :/distribute_lock2/0000000000
thread8 current path :/distribute_lock2/0000000002:fail add listener wait path :/distribute_lock2/0000000001
thread4 current path :/distribute_lock2/0000000003:fail add listener wait path :/distribute_lock2/0000000002
delete : /distribute_lock2/0000000001
prev node is done
thread8:success
delete : /distribute_lock2/0000000002
prev node is done
thread4:success
delete : /distribute_lock2/0000000003
prev node is done
thread7:success
delete : /distribute_lock2/0000000004
prev node is done
thread2:success
delete : /distribute_lock2/0000000005
prev node is done
thread6:success
delete : /distribute_lock2/0000000006
prev node is done
thread1:success
delete : /distribute_lock2/0000000007
prev node is done
thread5:success
delete : /distribute_lock2/0000000008
prev node is done
thread9:success
delete : /distribute_lock2/0000000009

Curator Source Code Analysis

I. Basic Usage

RetryPolicy retryPolicy = new ExponentialBackoffRetry(1000, 3);
CuratorFramework client = CuratorFrameworkFactory.newClient("127.0.0.1:2181", retryPolicy);
client.start();
InterProcessMutex lock = new InterProcessMutex(client, "/test");
try {
    lock.acquire();
    // business logic
} catch (Exception e) {
    e.printStackTrace();
} finally {
    lock.release();
}
  1. CuratorFrameworkFactory.newClient obtains a Zookeeper client, retryPolicy specifies the retry strategy, and start() opens the client

  2. Curator ships several lock implementations; here we use the reentrant InterProcessMutex as the example. lock.acquire() acquires the lock and lock.release() releases it; acquire also has an overload that takes a maximum wait time (see the sketch after this list)
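
A sketch of that timed overload (acquire(long, TimeUnit) is part of InterProcessMutex's public API and returns false instead of blocking forever; the 3-second figure here is just an illustrative choice):

if (lock.acquire(3, TimeUnit.SECONDS)) {
    try {
        // business logic, same as above
    } finally {
        lock.release();
    }
} else {
    // lock not obtained within 3 seconds; handle the timeout
}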

II. Source Code Analysis

Locking

acquire simply delegates to internalLock, passing -1 as the wait time:

public void acquire() throws Exception {
    if (!this.internalLock(-1L, (TimeUnit) null)) {
        throw new IOException("Lost connection while trying to acquire lock: " + this.basePath);
    }
}

internalLock first handles reentrancy: a ConcurrentMap maps each thread to an atomic counter, and if the current thread already holds the lock, the counter is simply incremented. Otherwise attemptLock is called to actually acquire the lock (the per-thread bookkeeping is sketched after the method):

private boolean internalLock(long time, TimeUnit unit) throws Exception
{
    /*
       Note on concurrency: a given lockData instance
       can be only acted on by a single thread so locking isn't necessary
    */

    Thread currentThread = Thread.currentThread();

    LockData lockData = threadData.get(currentThread);
    if ( lockData != null )
    {
        // re-entering
        lockData.lockCount.incrementAndGet();
        return true;
    }

    String lockPath = internals.attemptLock(time, unit, getLockNodeBytes());
    if ( lockPath != null )
    {
        LockData newLockData = new LockData(currentThread, lockPath);
        threadData.put(currentThread, newLockData);
        return true;
    }

    return false;
}
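
For reference, the per-thread bookkeeping used above looks roughly like this in the Curator sources (field names match the methods quoted here; treat the exact shape as indicative rather than authoritative):

private final ConcurrentMap<Thread, LockData> threadData = Maps.newConcurrentMap();

private static class LockData
{
    final Thread owningThread;
    final String lockPath;
    final AtomicInteger lockCount = new AtomicInteger(1);

    private LockData(Thread owningThread, String lockPath)
    {
        this.owningThread = owningThread;
        this.lockPath = lockPath;
    }
}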

attemptLock runs the retry loop: createsTheLock creates the node, and internalLockLoop checks whether the current node is the smallest:

String attemptLock(long time, TimeUnit unit, byte[] lockNodeBytes) throws Exception
{
    final long startMillis = System.currentTimeMillis();
    final Long millisToWait = (unit != null) ? unit.toMillis(time) : null;
    final byte[] localLockNodeBytes = (revocable.get() != null) ? new byte[0] : lockNodeBytes;
    int retryCount = 0;

    String ourPath = null;
    boolean hasTheLock = false;
    boolean isDone = false;
    while ( !isDone )
    {
        isDone = true;

        try
        {
            ourPath = driver.createsTheLock(client, path, localLockNodeBytes);
            hasTheLock = internalLockLoop(startMillis, millisToWait, ourPath);
        }
        catch ( KeeperException.NoNodeException e )
        {
            // gets thrown by StandardLockInternalsDriver when it can't find the lock node
            // this can happen when the session expires, etc. So, if the retry allows, just try it all again
            if ( client.getZookeeperClient().getRetryPolicy().allowRetry(retryCount++, System.currentTimeMillis() - startMillis, RetryLoop.getDefaultRetrySleeper()) )
            {
                isDone = false;
            }
            else
            {
                throw e;
            }
        }
    }

    if ( hasTheLock )
    {
        return ourPath;
    }

    return null;
}

createsTheLock just calls Curator's fluent API to create the ephemeral sequential node. Note the withProtection() call: it prefixes the node name with a GUID so that, if the create succeeds on the server but the response is lost, the client can still identify and reclaim its own node rather than leaking it:

public String createsTheLock(CuratorFramework client, String path, byte[] lockNodeBytes) throws Exception
{
    String ourPath;
    if ( lockNodeBytes != null )
    {
        ourPath = client.create().creatingParentContainersIfNeeded().withProtection().withMode(CreateMode.EPHEMERAL_SEQUENTIAL).forPath(path, lockNodeBytes);
    }
    else
    {
        ourPath = client.create().creatingParentContainersIfNeeded().withProtection().withMode(CreateMode.EPHEMERAL_SEQUENTIAL).forPath(path);
    }
    return ourPath;
}

internalLockLoop makes the actual lock decision: driver.getsTheLock checks whether our node is the smallest under the directory. If it is, the lock is acquired; otherwise a watch is placed on previousSequencePath, and when the watch fires, the remaining wait time is re-evaluated:

private boolean internalLockLoop(long startMillis, Long millisToWait, String ourPath) throws Exception
{
    boolean haveTheLock = false;
    boolean doDelete = false;
    try
    {
        if ( revocable.get() != null )
        {
            client.getData().usingWatcher(revocableWatcher).forPath(ourPath);
        }

        while ( (client.getState() == CuratorFrameworkState.STARTED) && !haveTheLock )
        {
            List<String> children = getSortedChildren();
            String sequenceNodeName = ourPath.substring(basePath.length() + 1); // +1 to include the slash

            PredicateResults predicateResults = driver.getsTheLock(client, children, sequenceNodeName, maxLeases);
            if ( predicateResults.getsTheLock() )
            {
                haveTheLock = true;
            }
            else
            {
                String previousSequencePath = basePath + "/" + predicateResults.getPathToWatch();

                synchronized(this)
                {
                    try
                    {
                        // use getData() instead of exists() to avoid leaving unneeded watchers which is a type of resource leak
                        client.getData().usingWatcher(watcher).forPath(previousSequencePath);
                        if ( millisToWait != null )
                        {
                            millisToWait -= (System.currentTimeMillis() - startMillis);
                            startMillis = System.currentTimeMillis();
                            if ( millisToWait <= 0 )
                            {
                                doDelete = true;    // timed out - delete our node
                                break;
                            }

                            wait(millisToWait);
                        }
                        else
                        {
                            wait();
                        }
                    }
                    catch ( KeeperException.NoNodeException e )
                    {
                        // it has been deleted (i.e. lock released). Try to acquire again
                    }
                }
            }
        }
    }
    catch ( Exception e )
    {
        ThreadUtils.checkInterrupted(e);
        doDelete = true;
        throw e;
    }
    finally
    {
        if ( doDelete )
        {
            deleteOurPath(ourPath);
        }
    }
    return haveTheLock;
}
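
For completeness, driver.getsTheLock in the default StandardLockInternalsDriver boils down to an index check (quoted from the Curator sources; for InterProcessMutex, maxLeases is 1, so holding the lock means being at index 0, and the node to watch is the immediate predecessor):

public PredicateResults getsTheLock(CuratorFramework client, List<String> children, String sequenceNodeName, int maxLeases) throws Exception
{
    int ourIndex = children.indexOf(sequenceNodeName);
    validateOurIndex(sequenceNodeName, ourIndex);

    boolean getsTheLock = ourIndex < maxLeases;
    String pathToWatch = getsTheLock ? null : children.get(ourIndex - maxLeases);

    return new PredicateResults(pathToWatch, getsTheLock);
}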

Unlocking

release is comparatively simple: it first checks whether the map holds a lock count for the current thread, throwing an exception if not. If it does, the counter is atomically decremented; a still-positive count means the lock is held reentrantly and nothing more happens, a negative count is an error, and when the count reaches exactly zero, releaseLock (which internally deletes the node) runs and the entry is removed from the map:

public void release() throws Exception
{
    /*
       Note on concurrency: a given lockData instance
       can be only acted on by a single thread so locking isn't necessary
    */

    Thread currentThread = Thread.currentThread();
    LockData lockData = threadData.get(currentThread);
    if ( lockData == null )
    {
        throw new IllegalMonitorStateException("You do not own the lock: " + basePath);
    }

    int newLockCount = lockData.lockCount.decrementAndGet();
    if ( newLockCount > 0 )
    {
        return;
    }
    if ( newLockCount < 0 )
    {
        throw new IllegalMonitorStateException("Lock count has gone negative for lock: " + basePath);
    }
    try
    {
        internals.releaseLock(lockData.lockPath);
    }
    finally
    {
        threadData.remove(currentThread);
    }
}
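
A usage sketch of the reentrancy this bookkeeping enables (continuing the InterProcessMutex example from the basic-usage section):

lock.acquire();
lock.acquire();      // same thread re-enters: lockCount goes 1 -> 2, no extra ZK node
try {
    // business logic
} finally {
    lock.release();  // lockCount 2 -> 1, lock still held
    lock.release();  // lockCount 1 -> 0, node deleted, lock available to others
}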

Afterword

Redis and Zookeeper are currently the two mainstream ways to implement distributed locks. Compared with rolling our own, the designs of Redisson and Curator are considerably more polished and well worth studying and borrowing from.

A journey of a thousand li is the accumulation of single steps; a voyage of ten thousand li depends on the compass. Let us keep at it together.
