ConcurrentHashMap Source Code Analysis -- Java Containers
final Segment<K,V> segmentFor(int hash) {
    return segments[(hash >>> segmentShift) & segmentMask];
}
// Find power-of-two sizes best matching arguments
int sshift = 0;
int ssize = 1;
while (ssize < concurrencyLevel) {
    ++sshift;
    ssize <<= 1;
}
segmentShift = 32 - sshift;
segmentMask = ssize - 1;
this.segments = Segment.newArray(ssize);
ssize starts at 1. Suppose concurrencyLevel is 16: while ssize is repeatedly doubled by the left shift, sshift counts how many positions it has shifted in total. With a concurrencyLevel of 16, ssize goes from 1 to 16 in 4 shifts, so:

segmentShift = 32 - sshift = 32 - 4 = 28
segmentMask = ssize - 1 = 16 - 1 = 15, which in binary is 1111

So in the segmentFor method above, the hash value is unsigned-right-shifted by segmentShift bits and then ANDed with segmentMask, which extracts the top sshift bits of the original hash. In this example that is the top 4 bits, and 4 bits are exactly enough to address 16 Segments.
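To make the arithmetic concrete, here is a minimal standalone sketch (not the JDK source; the class name SegmentMath and the sample hash value are my own) that reproduces the same computation and shows which segment index a hash maps to:

public class SegmentMath {
    public static void main(String[] args) {
        int concurrencyLevel = 16;
        int sshift = 0;
        int ssize = 1;
        while (ssize < concurrencyLevel) {
            ++sshift;
            ssize <<= 1;
        }
        int segmentShift = 32 - sshift; // 28
        int segmentMask = ssize - 1;    // 15, binary 1111

        int hash = 0xA3C192F7;          // arbitrary sample hash
        // The top 4 bits of the hash select one of the 16 segments.
        int index = (hash >>> segmentShift) & segmentMask;
        System.out.println("segmentShift=" + segmentShift
                + " segmentMask=" + segmentMask
                + " index=" + index);   // index = 0xA = 10
    }
}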
Next, let's look at the rest of the constructor.
if (initialCapacity > MAXIMUM_CAPACITY)
    initialCapacity = MAXIMUM_CAPACITY;
int c = initialCapacity / ssize;
if (c * ssize < initialCapacity)
    ++c;
int cap = 1;
while (cap < c)
    cap <<= 1;
for (int i = 0; i < this.segments.length; ++i)
    this.segments[i] = new Segment<K,V>(cap, loadFactor);
The first two lines clamp the initial capacity: it may not exceed MAXIMUM_CAPACITY (1 << 30).

Next c is computed. ssize is the number of segments, so the total capacity is divided evenly into ssize shares; if c * ssize is still smaller than initialCapacity, c is incremented so that the segments together can hold at least initialCapacity entries. Straightforward enough.

By default, initialCapacity is 16 and ssize is 16 (segmentMask is 15), so c == 1 and cap stays at 1.

The shift loop that follows guarantees that every segment's table size is a power of two: cap is the smallest power of two greater than or equal to c.
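As a quick sanity check with non-default numbers (a hypothetical parameter choice, not from the source): initialCapacity = 33 and ssize = 16 give c = 33 / 16 = 2, then c * ssize = 32 < 33 bumps c to 3, and cap rounds up to 4. A small standalone sketch of the same rounding:

public class CapacityMath {
    // Round initialCapacity / ssize up to the next power of two,
    // mirroring the constructor's logic (illustrative sketch only).
    static int perSegmentCapacity(int initialCapacity, int ssize) {
        int c = initialCapacity / ssize;
        if (c * ssize < initialCapacity)
            ++c;           // round the division up
        int cap = 1;
        while (cap < c)
            cap <<= 1;     // smallest power of two >= c
        return cap;
    }

    public static void main(String[] args) {
        System.out.println(perSegmentCapacity(33, 16)); // 4
        System.out.println(perSegmentCapacity(16, 16)); // 1
    }
}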
Then the for loop initializes each Segment:
Segment(int initialCapacity, float lf) {
    loadFactor = lf;
    setTable(HashEntry.<K,V>newArray(initialCapacity));
}

@SuppressWarnings("unchecked")
static final <K,V> HashEntry<K,V>[] newArray(int i) {
    return new HashEntry[i];
}
In the Segment constructor, HashEntry's newArray method is called, creating a HashEntry array of size initialCapacity.
Reads in ConcurrentHashMap take no lock; visibility is achieved through volatile. A writer performs its plain writes first and then writes a volatile field (count, marked "write-volatile" in the code), while a reader reads that volatile field before touching the data. The semantics of volatile guarantee that everything written before a volatile write is visible to any thread that subsequently reads that volatile: conceptually, the volatile write flushes the writer's cache to main memory, and the volatile read invalidates the reader's cached copy so the value is fetched from main memory, ensuring the reader sees the latest data.
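A minimal sketch of this write-volatile / read-volatile publication pattern (an illustrative class of my own, not the JDK source; in the real Segment the volatile gate is the count field):

class VolatilePublish {
    private int payload;          // plain field written before the volatile
    private volatile int version; // volatile "gate" field

    void write(int value) {
        payload = value;          // 1. ordinary write
        version++;                // 2. volatile write publishes it ("write, then write-volatile")
    }

    int read() {
        // 1. Volatile read first: establishes happens-before with the
        //    writer's volatile write, so payload below is at least as fresh.
        int observed = version;
        // 2. Guaranteed to see the payload written before that version.
        return payload;
    }
}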
void rehash() {
    HashEntry<K,V>[] oldTable = table;
    int oldCapacity = oldTable.length;
    if (oldCapacity >= MAXIMUM_CAPACITY)
        return;

    /*
     * Reclassify nodes in each list to new Map. Because we are
     * using power-of-two expansion, the elements from each bin
     * must either stay at same index, or move with a power of two
     * offset. We eliminate unnecessary node creation by catching
     * cases where old nodes can be reused because their next
     * fields won't change. Statistically, at the default
     * threshold, only about one-sixth of them need cloning when
     * a table doubles. The nodes they replace will be garbage
     * collectable as soon as they are no longer referenced by any
     * reader thread that may be in the midst of traversing table
     * right now.
     */
    // Note the capacity only doubles, so each element's new index is
    // either unchanged or the old index plus oldCapacity.
    HashEntry<K,V>[] newTable = HashEntry.newArray(oldCapacity << 1);
    threshold = (int)(newTable.length * loadFactor);
    int sizeMask = newTable.length - 1;
    for (int i = 0; i < oldCapacity ; i++) {
        // We need to guarantee that any existing reads of old Map can
        // proceed. So we cannot yet null out each bin.
        HashEntry<K,V> e = oldTable[i]; // head of this bucket's list

        if (e != null) {
            HashEntry<K,V> next = e.next;
            // New index; note that sizeMask has already been recomputed
            // as newTable.length - 1, covering twice the old range.
            int idx = e.hash & sizeMask;

            // Single node on list
            if (next == null)
                // Only one node: just point the new slot at it. There is no
                // need to check whether newTable[idx] already holds anything:
                // because the table doubled, each new bin receives nodes from
                // exactly one old bin, so the slot is guaranteed to be null.
                newTable[idx] = e;
            else {
                // Reuse trailing consecutive sequence at same slot
                HashEntry<K,V> lastRun = e;
                int lastIdx = idx;
                for (HashEntry<K,V> last = next;   // walk the list
                     last != null;
                     last = last.next) {
                    int k = last.hash & sizeMask;  // new index of this node
                    if (k != lastIdx) {            // index differs from the current run: restart the run here
                        lastIdx = k;
                        lastRun = last;
                    }
                }
                // This preserves the longest trailing run of nodes (at least
                // the final node) that all map to the same new slot, reusing
                // those HashEntrys without cloning them.
                newTable[lastIdx] = lastRun;

                // Clone all remaining nodes
                for (HashEntry<K,V> p = e; p != lastRun; p = p.next) {
                    int k = p.hash & sizeMask;
                    HashEntry<K,V> n = newTable[k]; // each clone is prepended to its new list
                    newTable[k] = new HashEntry<K,V>(p.key, p.hash,
                                                     n, p.value);
                }
            }
        }
    }
    table = newTable;
}
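The claim that a node's new index is either its old index or its old index plus oldCapacity follows directly from the mask arithmetic. A tiny sketch verifying it (illustrative only, not JDK code):

public class SplitCheck {
    // With power-of-two tables, doubling adds exactly one bit to the mask:
    // hash & (2n - 1) equals (hash & (n - 1)) plus either 0 or n.
    static boolean splitsCorrectly(int hash, int oldCapacity) {
        int oldIdx = hash & (oldCapacity - 1);
        int newIdx = hash & (2 * oldCapacity - 1);
        return newIdx == oldIdx || newIdx == oldIdx + oldCapacity; // always true
    }

    public static void main(String[] args) {
        System.out.println(splitsCorrectly(0x5F3A, 16)); // true
        System.out.println(splitsCorrectly(0x5F2A, 16)); // true
    }
}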
Segment's remove method:
/**
 * Remove; match on key only if value null, else match both.
 */
V remove(Object key, int hash, Object value) {
    lock();
    try {
        int c = count - 1;
        HashEntry<K,V>[] tab = table;
        int index = hash & (tab.length - 1);
        HashEntry<K,V> first = tab[index];
        HashEntry<K,V> e = first;
        while (e != null && (e.hash != hash || !key.equals(e.key)))
            e = e.next;

        V oldValue = null;
        if (e != null) {
            V v = e.value;
            if (value == null || value.equals(v)) {
                oldValue = v;
                // Because the next pointers are final, every HashEntry in
                // front of the removed node has to be cloned.
                // All entries following removed node can stay
                // in list, but all preceding ones need to be
                // cloned.
                ++modCount;
                // Walk from the head to the removed node, prepending each
                // clone to the new head as we go.
                HashEntry<K,V> newFirst = e.next;
                for (HashEntry<K,V> p = first; p != e; p = p.next)
                    newFirst = new HashEntry<K,V>(p.key, p.hash,
                                                  newFirst, p.value);
                tab[index] = newFirst;
                count = c; // write-volatile
            }
        }
        return oldValue;
    } finally {
        unlock();
    }
}
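The clone-the-prefix trick is easiest to see in isolation. Here is a minimal standalone sketch of the same idea on an immutable singly-linked list (the Node class is hypothetical, not the JDK's HashEntry):

public class ImmutableListRemove {
    static final class Node {
        final int key;
        final Node next;
        Node(int key, Node next) { this.key = key; this.next = next; }
    }

    // Remove the first node with the given key: the suffix after it is
    // shared untouched, while the prefix before it is cloned (its order
    // reverses), exactly as Segment.remove does with final next pointers.
    static Node remove(Node head, int key) {
        Node target = head;
        while (target != null && target.key != key)
            target = target.next;
        if (target == null)
            return head;                // key absent: list unchanged
        Node newHead = target.next;     // reuse the tail as-is
        for (Node p = head; p != target; p = p.next)
            newHead = new Node(p.key, newHead); // clone the prefix
        return newHead;
    }
}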
public boolean isEmpty() {
    final Segment<K,V>[] segments = this.segments;
    /*
     * We keep track of per-segment modCounts to avoid ABA
     * problems in which an element in one segment was added and
     * in another removed during traversal, in which case the
     * table was never actually empty at any point. Note the
     * similar use of modCounts in the size() and containsValue()
     * methods, which are the only other methods also susceptible
     * to ABA problems.
     */
    int[] mc = new int[segments.length];
    int mcsum = 0;
    for (int i = 0; i < segments.length; ++i) {
        if (segments[i].count != 0)    // any non-zero count: definitely not empty
            return false;
        else
            mcsum += mc[i] = segments[i].modCount; // snapshot the modCount
    }
    // If mcsum happens to be zero, then we know we got a snapshot
    // before any modifications at all were made. This is
    // probably common enough to bother tracking.
    if (mcsum != 0) { // modifications may have happened during the scan: re-check
        for (int i = 0; i < segments.length; ++i) {
            if (segments[i].count != 0 ||       // check the count once more
                mc[i] != segments[i].modCount)  // or the modCount changed in the meantime
                return false;
        }
    }
    return true;
}
/**
* Returns the number of key-value mappings in this map. If the
* map contains more than <tt>Integer.MAX_VALUE</tt> elements, returns
* <tt>Integer.MAX_VALUE</tt>.
*
* @return the number of key-value mappings in this map
*/
public int size() {
    final Segment<K,V>[] segments = this.segments;
    long sum = 0;
    long check = 0;
    int[] mc = new int[segments.length];
    // Try a few times to get accurate count. On failure due to
    // continuous async changes in table, resort to locking.
    // At most RETRIES_BEFORE_LOCK (2) unlocked passes; if threads keep
    // modifying the map underneath us, fall back to locking.
    for (int k = 0; k < RETRIES_BEFORE_LOCK; ++k) {
        check = 0;
        sum = 0;
        int mcsum = 0;
        for (int i = 0; i < segments.length; ++i) {
            sum += segments[i].count;              // accumulate the counts
            mcsum += mc[i] = segments[i].modCount; // snapshot the modCounts
        }
        if (mcsum != 0) { // modifications may have happened during the pass
            for (int i = 0; i < segments.length; ++i) {
                check += segments[i].count;          // recompute as a cross-check
                if (mc[i] != segments[i].modCount) { // a modCount changed: must retry
                    check = -1; // force retry
                    break;
                }
            }
        }
        if (check == sum) // stable snapshot: done
            break;
    }
    if (check != sum) { // Resort to locking all segments, then recount
        sum = 0;
        for (int i = 0; i < segments.length; ++i)
            segments[i].lock();
        for (int i = 0; i < segments.length; ++i)
            sum += segments[i].count;
        for (int i = 0; i < segments.length; ++i)
            segments[i].unlock();
    }
    if (sum > Integer.MAX_VALUE)
        return Integer.MAX_VALUE;
    else
        return (int)sum;
}
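The same optimistic pattern reduced to its skeleton (a generic sketch of my own, not the JDK code; Part is a hypothetical lockable counter shard): snapshot the versions, compute, accept the result only if no version moved, otherwise lock everything.

import java.util.concurrent.locks.ReentrantLock;

public class OptimisticSum {
    static final class Part {
        volatile int count;
        volatile int version; // bumped on every modification
        final ReentrantLock lock = new ReentrantLock();
    }

    // Optimistic passes first; if a version moved between the snapshot and
    // the re-check, lock every shard and count under exclusion.
    static long total(Part[] parts, int retries) {
        for (int attempt = 0; attempt < retries; ++attempt) {
            long sum = 0;
            int[] versions = new int[parts.length];
            for (int i = 0; i < parts.length; ++i) {
                versions[i] = parts[i].version;   // snapshot before reading
                sum += parts[i].count;
            }
            boolean stable = true;
            for (int i = 0; i < parts.length; ++i)
                if (versions[i] != parts[i].version) { stable = false; break; }
            if (stable)
                return sum;   // no concurrent modification observed
        }
        for (Part p : parts) p.lock.lock();       // fallback: lock all shards
        try {
            long sum = 0;
            for (Part p : parts) sum += p.count;
            return sum;
        } finally {
            for (Part p : parts) p.lock.unlock();
        }
    }
}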
/**
 * Removes the key (and its corresponding value) from this map.
 * This method does nothing if the key is not in the map.
 *
 * @param key the key that needs to be removed
 * @return the previous value associated with <tt>key</tt>, or
 *         <tt>null</tt> if there was no mapping for <tt>key</tt>
 * @throws NullPointerException if the specified key is null
 */
public V remove(Object key) {
    int hash = hash(key.hashCode());
    // Passing null as the value tells Segment.remove to delete the entry
    // regardless of whatever value it currently maps to.
    return segmentFor(hash).remove(key, hash, null);
}

/**
 * {@inheritDoc}
 *
 * @throws NullPointerException if the specified key is null
 */
public boolean remove(Object key, Object value) {
    int hash = hash(key.hashCode());
    // A null value is rejected here, because null is reserved as the
    // "match any value" marker used above.
    if (value == null)
        return false;
    return segmentFor(hash).remove(key, hash, value) != null;
}
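A short usage example contrasting the two remove variants (standard java.util.concurrent API):

import java.util.concurrent.ConcurrentHashMap;

public class RemoveDemo {
    public static void main(String[] args) {
        ConcurrentHashMap<String, Integer> map = new ConcurrentHashMap<String, Integer>();
        map.put("a", 1);

        map.remove("a", 2);               // false: value does not match, nothing removed
        System.out.println(map.get("a")); // 1

        Integer old = map.remove("a");    // removes unconditionally
        System.out.println(old);          // 1 (the previous value)
    }
}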