Consolidate and Move Forward: Understanding HashMap, ConcurrentHashMap and Hashtable in Java

1. Preface

Much of what we learn or use is eventually forgotten, but if we understand the principles behind that knowledge and commit them to memory, it stays with us. A common technique is comparative memorization: putting several easily confused concepts side by side and comparing them, which helps greatly in both study and work. Comparing HashMap and Hashtable is one such example.

2. HashMap Basics

2.1 Introduction to HashMap

  1. HashMap is a hash table; it stores key-value mappings.
  2. HashMap extends the AbstractMap class and implements the Map, Cloneable and java.io.Serializable interfaces.
  3. HashMap is not synchronized, which means it is not thread-safe. Both its keys and its values may be null. In addition, the mappings in a HashMap are unordered.

An instance of HashMap has two parameters that affect its performance: the initial capacity and the load factor. The capacity is the number of buckets in the hash table, and the initial capacity is simply the capacity at the time the table is created. The load factor is a measure of how full the hash table is allowed to get before its capacity is automatically increased. When the number of entries exceeds the product of the load factor and the current capacity, the table is rehashed (that is, its internal data structures are rebuilt) so that it has approximately twice the number of buckets. The default load factor of 0.75 offers a good trade-off between time and space costs. A higher load factor decreases the space overhead but increases the lookup cost (reflected in most operations of the HashMap class, including get and put). The expected number of entries in the map and its load factor should be taken into account when setting the initial capacity, so as to minimize the number of rehash operations: if the initial capacity is greater than the maximum number of entries divided by the load factor, no rehash operation will ever occur.
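To make that sizing rule concrete, here is a minimal, hypothetical sketch (the class name and the figure of 1,000 entries are illustrative, not from the original text). It pre-sizes a map so that no rehash ever occurs, and also shows that one null key and null values are permitted:

import java.util.HashMap;
import java.util.Map;

public class CapacityDemo {
    public static void main(String[] args) {
        // Expecting ~1000 entries: with the default load factor 0.75,
        // an initial capacity of ceil(1000 / 0.75) = 1334 (rounded up by the
        // map itself to the next power of two, 2048) means no rehash occurs.
        Map<String, Integer> sized = new HashMap<>((int) Math.ceil(1000 / 0.75));

        // HashMap permits one null key and any number of null values,
        // and makes no guarantee about iteration order.
        sized.put(null, 0);
        sized.put("a", null);
        System.out.println(sized.get(null)); // 0
        System.out.println(sized.get("a"));  // null
    }
}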

2.2 Reading the HashMap Source Code

package java.util;
import java.io.*;

public class HashMap<K,V>
    extends AbstractMap<K,V>
    implements Map<K,V>, Cloneable, Serializable
{

    // The default initial capacity is 16; it must be a power of two.
    static final int DEFAULT_INITIAL_CAPACITY = 16;

    // The maximum capacity (must be a power of two <= 1<<30; a larger requested capacity is replaced by this value)
    static final int MAXIMUM_CAPACITY = 1 << 30;

    // The default load factor
    static final float DEFAULT_LOAD_FACTOR = 0.75f;

    // The Entry array that stores the data; its length is a power of two.
    // HashMap uses separate chaining: each Entry is essentially a singly linked list.
    transient Entry[] table;

    // The size of the HashMap: the number of key-value pairs it holds
    transient int size;

    // The HashMap's threshold, used to decide whether the capacity needs to be adjusted (threshold = capacity * load factor)
    int threshold;

    // The actual load factor
    final float loadFactor;

    // The number of times the HashMap has been structurally modified
    transient volatile int modCount;

    // Constructor that specifies both the initial capacity and the load factor
    public HashMap(int initialCapacity, float loadFactor) {
        if (initialCapacity < 0)
            throw new IllegalArgumentException("Illegal initial capacity: " +
                                               initialCapacity);
        // The capacity of a HashMap is capped at MAXIMUM_CAPACITY
        if (initialCapacity > MAXIMUM_CAPACITY)
            initialCapacity = MAXIMUM_CAPACITY;
        if (loadFactor <= 0 || Float.isNaN(loadFactor))
            throw new IllegalArgumentException("Illegal load factor: " +
                                               loadFactor);

        // Find the smallest power of two that is no smaller than initialCapacity
        int capacity = 1;
        while (capacity < initialCapacity)
            capacity <<= 1;

        // Set the load factor
        this.loadFactor = loadFactor;
        // Set the threshold: once the number of stored entries reaches threshold, the capacity is doubled.
        threshold = (int)(capacity * loadFactor);
        // Create the Entry array that holds the data
        table = new Entry[capacity];
        init();
    }

    // Constructor that specifies only the initial capacity
    public HashMap(int initialCapacity) {
        this(initialCapacity, DEFAULT_LOAD_FACTOR);
    }

    // Default constructor.
    public HashMap() {
        // Set the load factor
        this.loadFactor = DEFAULT_LOAD_FACTOR;
        // Set the threshold: once the number of stored entries reaches threshold, the capacity is doubled.
        threshold = (int)(DEFAULT_INITIAL_CAPACITY * DEFAULT_LOAD_FACTOR);
        // Create the Entry array that holds the data
        table = new Entry[DEFAULT_INITIAL_CAPACITY];
        init();
    }

    // Constructor that copies an existing Map
    public HashMap(Map<? extends K, ? extends V> m) {
        this(Math.max((int) (m.size() / DEFAULT_LOAD_FACTOR) + 1,
                      DEFAULT_INITIAL_CAPACITY), DEFAULT_LOAD_FACTOR);
        // Add every element of m to this HashMap
        putAllForCreate(m);
    }

    static int hash(int h) {
        h ^= (h >>> 20) ^ (h >>> 12);
        return h ^ (h >>> 7) ^ (h >>> 4);
    }

    // Returns the bucket index
    // h & (length-1) guarantees the returned index is less than length
    static int indexFor(int h, int length) {
        return h & (length-1);
    }

    public int size() {
        return size;
    }

    public boolean isEmpty() {
        return size == 0;
    }

    // Returns the value mapped to key
    public V get(Object key) {
        if (key == null)
            return getForNullKey();
        // Compute key's hash value
        int hash = hash(key.hashCode());
        // Search the chain for that hash for an element whose key equals key
        for (Entry<K,V> e = table[indexFor(hash, table.length)];
             e != null;
             e = e.next) {
            Object k;
            if (e.hash == hash && ((k = e.key) == key || key.equals(k)))
                return e.value;
        }
        return null;
    }

    // Returns the value of the element whose key is null.
    // HashMap stores the null-key element at table[0]!
    private V getForNullKey() {
        for (Entry<K,V> e = table[0]; e != null; e = e.next) {
            if (e.key == null)
                return e.value;
        }
        return null;
    }

    // Whether the HashMap contains key
    public boolean containsKey(Object key) {
        return getEntry(key) != null;
    }

    // Returns the entry whose key is key
    final Entry<K,V> getEntry(Object key) {
        // Compute the hash.
        // A null key is stored at table[0]; for a non-null key, hash() computes the hash value
        int hash = (key == null) ? 0 : hash(key.hashCode());
        // Search the chain for that hash for an element whose key equals key
        for (Entry<K,V> e = table[indexFor(hash, table.length)];
             e != null;
             e = e.next) {
            Object k;
            if (e.hash == hash &&
                ((k = e.key) == key || (key != null && key.equals(k))))
                return e;
        }
        return null;
    }

    // Adds the key-value pair to the HashMap
    public V put(K key, V value) {
        // If key is null, the pair is stored at table[0].
        if (key == null)
            return putForNullKey(value);
        // Otherwise, compute the key's hash and add the pair to the chain for that hash.
        int hash = hash(key.hashCode());
        int i = indexFor(hash, table.length);
        for (Entry<K,V> e = table[i]; e != null; e = e.next) {
            Object k;
            // If an entry with this key already exists, replace the old value with the new one and return.
            if (e.hash == hash && ((k = e.key) == key || key.equals(k))) {
                V oldValue = e.value;
                e.value = value;
                e.recordAccess(this);
                return oldValue;
            }
        }

        // If no entry with this key exists, add the key-value pair to the table
        modCount++;
        addEntry(hash, key, value, i);
        return null;
    }

    // putForNullKey() stores a null-key pair at table[0]
    private V putForNullKey(V value) {
        for (Entry<K,V> e = table[0]; e != null; e = e.next) {
            if (e.key == null) {
                V oldValue = e.value;
                e.value = value;
                e.recordAccess(this);
                return oldValue;
            }
        }
        // Reached only when no null-key entry exists yet: add a new one at table[0]
        modCount++;
        addEntry(0, null, value, 0);
        return null;
    }

    // The "create-time" add method of the HashMap.
    // Unlike put(), putForCreate() is an internal method, invoked by the constructors and the like to build the HashMap,
    // whereas put() is the public method for adding elements to the HashMap.
    private void putForCreate(K key, V value) {
        int hash = (key == null) ? 0 : hash(key.hashCode());
        int i = indexFor(hash, table.length);

        // If the table already contains an element whose key equals key, replace its value
        for (Entry<K,V> e = table[i]; e != null; e = e.next) {
            Object k;
            if (e.hash == hash &&
                ((k = e.key) == key || (key != null && key.equals(k)))) {
                e.value = value;
                return;
            }
        }

        // Otherwise add the key-value pair to the HashMap
        createEntry(hash, key, value, i);
    }

    // Adds all elements of "m" to the HashMap.
    // Called by the internal construction methods.
    private void putAllForCreate(Map<? extends K, ? extends V> m) {
        // Add the elements one by one via an iterator
        for (Iterator<? extends Map.Entry<? extends K, ? extends V>> i = m.entrySet().iterator(); i.hasNext(); ) {
            Map.Entry<? extends K, ? extends V> e = i.next();
            putForCreate(e.getKey(), e.getValue());
        }
    }

    // Resizes the HashMap; newCapacity is the new capacity
    void resize(int newCapacity) {
        Entry[] oldTable = table;
        int oldCapacity = oldTable.length;
        if (oldCapacity == MAXIMUM_CAPACITY) {
            threshold = Integer.MAX_VALUE;
            return;
        }

        // Create a new table, move every element of the old table into it,
        // then make it the current table.
        Entry[] newTable = new Entry[newCapacity];
        transfer(newTable);
        table = newTable;
        threshold = (int)(newCapacity * loadFactor);
    }

    // Moves all elements of the HashMap into newTable
    void transfer(Entry[] newTable) {
        Entry[] src = table;
        int newCapacity = newTable.length;
        for (int j = 0; j < src.length; j++) {
            Entry<K,V> e = src[j];
            if (e != null) {
                src[j] = null;
                do {
                    Entry<K,V> next = e.next;
                    int i = indexFor(e.hash, newCapacity);
                    e.next = newTable[i];
                    newTable[i] = e;
                    e = next;
                } while (e != null);
            }
        }
    }

    // Adds all elements of "m" to the HashMap
    public void putAll(Map<? extends K, ? extends V> m) {
        // Sanity check
        int numKeysToBeAdded = m.size();
        if (numKeysToBeAdded == 0)
            return;

        // Check whether the capacity is sufficient;
        // if the required capacity exceeds the current threshold, keep doubling the capacity.
        if (numKeysToBeAdded > threshold) {
            int targetCapacity = (int)(numKeysToBeAdded / loadFactor + 1);
            if (targetCapacity > MAXIMUM_CAPACITY)
                targetCapacity = MAXIMUM_CAPACITY;
            int newCapacity = table.length;
            while (newCapacity < targetCapacity)
                newCapacity <<= 1;
            if (newCapacity > table.length)
                resize(newCapacity);
        }

        // Add the elements of "m" one by one via an iterator.
        for (Iterator<? extends Map.Entry<? extends K, ? extends V>> i = m.entrySet().iterator(); i.hasNext(); ) {
            Map.Entry<? extends K, ? extends V> e = i.next();
            put(e.getKey(), e.getValue());
        }
    }

    // Removes the element whose key is key
    public V remove(Object key) {
        Entry<K,V> e = removeEntryForKey(key);
        return (e == null ? null : e.value);
    }

    // Removes and returns the entry whose key is key
    final Entry<K,V> removeEntryForKey(Object key) {
        // Compute the hash: 0 if key is null, otherwise via hash()
        int hash = (key == null) ? 0 : hash(key.hashCode());
        int i = indexFor(hash, table.length);
        Entry<K,V> prev = table[i];
        Entry<K,V> e = prev;

        // Remove the element whose key is key from the chain.
        // This is essentially "deleting a node from a singly linked list".
        while (e != null) {
            Entry<K,V> next = e.next;
            Object k;
            if (e.hash == hash &&
                ((k = e.key) == key || (key != null && key.equals(k)))) {
                modCount++;
                size--;
                if (prev == e)
                    table[i] = next;
                else
                    prev.next = next;
                e.recordRemoval(this);
                return e;
            }
            prev = e;
            e = next;
        }

        return e;
    }

    // Removes a key-value pair given as a Map.Entry
    final Entry<K,V> removeMapping(Object o) {
        if (!(o instanceof Map.Entry))
            return null;

        Map.Entry<K,V> entry = (Map.Entry<K,V>) o;
        Object key = entry.getKey();
        int hash = (key == null) ? 0 : hash(key.hashCode());
        int i = indexFor(hash, table.length);
        Entry<K,V> prev = table[i];
        Entry<K,V> e = prev;

        // Remove the entry e from the chain.
        // This is essentially "deleting a node from a singly linked list".
        while (e != null) {
            Entry<K,V> next = e.next;
            if (e.hash == hash && e.equals(entry)) {
                modCount++;
                size--;
                if (prev == e)
                    table[i] = next;
                else
                    prev.next = next;
                e.recordRemoval(this);
                return e;
            }
            prev = e;
            e = next;
        }

        return e;
    }

    // Empties the HashMap, setting all buckets to null
    public void clear() {
        modCount++;
        Entry[] tab = table;
        for (int i = 0; i < tab.length; i++)
            tab[i] = null;
        size = 0;
    }

    // Whether the map contains an element whose value is value
    public boolean containsValue(Object value) {
        // If value is null, delegate to containsNullValue()
        if (value == null)
            return containsNullValue();

        // Otherwise search the HashMap for a node whose value equals value.
        Entry[] tab = table;
        for (int i = 0; i < tab.length ; i++)
            for (Entry e = tab[i] ; e != null ; e = e.next)
                if (value.equals(e.value))
                    return true;
        return false;
    }

    // Whether the map contains a null value
    private boolean containsNullValue() {
        Entry[] tab = table;
        for (int i = 0; i < tab.length ; i++)
            for (Entry e = tab[i] ; e != null ; e = e.next)
                if (e.value == null)
                    return true;
        return false;
    }

    // Clones the HashMap and returns it as an Object
    public Object clone() {
        HashMap<K,V> result = null;
        try {
            result = (HashMap<K,V>)super.clone();
        } catch (CloneNotSupportedException e) {
            // assert false;
        }
        result.table = new Entry[table.length];
        result.entrySet = null;
        result.modCount = 0;
        result.size = 0;
        result.init();
        // Use putAllForCreate() to add all elements to the clone
        result.putAllForCreate(this);

        return result;
    }

    // Entry is a singly linked list node.
    // It is the chain used by HashMap's separate-chaining storage.
    // It implements the Map.Entry interface, i.e. the functions getKey(), getValue(), setValue(V value), equals(Object o), hashCode()
    static class Entry<K,V> implements Map.Entry<K,V> {
        final K key;
        V value;
        // Points to the next node
        Entry<K,V> next;
        final int hash;

        // Constructor.
        // Parameters: hash value (h), key (k), value (v), next node (n)
        Entry(int h, K k, V v, Entry<K,V> n) {
            value = v;
            next = n;
            key = k;
            hash = h;
        }

        public final K getKey() {
            return key;
        }

        public final V getValue() {
            return value;
        }

        public final V setValue(V newValue) {
            V oldValue = value;
            value = newValue;
            return oldValue;
        }

        // Tests whether two Entry objects are equal.
        // Returns true only if both the key and the value are equal;
        // otherwise returns false
        public final boolean equals(Object o) {
            if (!(o instanceof Map.Entry))
                return false;
            Map.Entry e = (Map.Entry)o;
            Object k1 = getKey();
            Object k2 = e.getKey();
            if (k1 == k2 || (k1 != null && k1.equals(k2))) {
                Object v1 = getValue();
                Object v2 = e.getValue();
                if (v1 == v2 || (v1 != null && v1.equals(v2)))
                    return true;
            }
            return false;
        }

        // Implements hashCode()
        public final int hashCode() {
            return (key==null ? 0 : key.hashCode()) ^
                   (value==null ? 0 : value.hashCode());
        }

        public final String toString() {
            return getKey() + "=" + getValue();
        }

        // recordAccess() is called whenever an existing entry's value is overwritten.
        // A no-op here
        void recordAccess(HashMap<K,V> m) {
        }

        // recordRemoval() is called whenever an entry is removed from the HashMap.
        // A no-op here
        void recordRemoval(HashMap<K,V> m) {
        }
    }

    // Adds a new Entry, inserting "key-value" at the given position; bucketIndex is the bucket index.
    void addEntry(int hash, K key, V value, int bucketIndex) {
        // Save the current head of bucket "bucketIndex" in "e"
        Entry<K,V> e = table[bucketIndex];
        // Make the new Entry the head of the bucket,
        // with "e" as the new Entry's next node
        table[bucketIndex] = new Entry<K,V>(hash, key, value, e);
        // If the size of the HashMap is not below the threshold, resize it
        if (size++ >= threshold)
            resize(2 * table.length);
    }

    // Creates an Entry, inserting "key-value" at the given position; bucketIndex is the bucket index.
    // The difference from addEntry():
    // (01) addEntry() is used when the new Entry may push the map's size past the threshold,
    //      e.g. when we create a HashMap and keep adding elements through put();
    //      put() adds entries via addEntry().
    //      In that case we cannot know in advance when the size will exceed the threshold,
    //      so addEntry() must be used.
    // (02) createEntry() is used when the new Entry cannot push the size past the threshold,
    //      e.g. in the Map-copying constructor, which adds all of the given Map's elements;
    //      the capacity and threshold have already been computed beforehand, so it is certain
    //      that adding all of the Map's elements will not exceed the threshold.
    //      In that case createEntry() suffices.
    void createEntry(int hash, K key, V value, int bucketIndex) {
        // Save the current head of bucket "bucketIndex" in "e"
        Entry<K,V> e = table[bucketIndex];
        // Make the new Entry the head of the bucket,
        // with "e" as the new Entry's next node
        table[bucketIndex] = new Entry<K,V>(hash, key, value, e);
        size++;
    }

    // HashIterator is the abstract parent of HashMap's iterators; it implements the shared logic.
    // It has three subclasses: the key iterator (KeyIterator), the value iterator (ValueIterator) and the entry iterator (EntryIterator).
    private abstract class HashIterator<E> implements Iterator<E> {
        // The next entry
        Entry<K,V> next;
        // expectedModCount implements the fail-fast mechanism.
        int expectedModCount;
        // Current bucket index
        int index;
        // Current entry
        Entry<K,V> current;

        HashIterator() {
            expectedModCount = modCount;
            if (size > 0) { // advance to first entry
                Entry[] t = table;
                // Point next at the first non-null element of table.
                // index starts at 0 and is advanced until a non-null element is found.
                while (index < t.length && (next = t[index++]) == null)
                    ;
            }
        }

        public final boolean hasNext() {
            return next != null;
        }

        // Returns the next entry
        final Entry<K,V> nextEntry() {
            if (modCount != expectedModCount)
                throw new ConcurrentModificationException();
            Entry<K,V> e = next;
            if (e == null)
                throw new NoSuchElementException();

            // Note!!!
            // Each Entry heads a singly linked list.
            // If this Entry has a next node, point next at it;
            // otherwise point next at the first non-null node of the next chain (i.e. the next bucket).
            if ((next = e.next) == null) {
                Entry[] t = table;
                while (index < t.length && (next = t[index++]) == null)
                    ;
            }
            current = e;
            return e;
        }

        // Removes the current entry
        public void remove() {
            if (current == null)
                throw new IllegalStateException();
            if (modCount != expectedModCount)
                throw new ConcurrentModificationException();
            Object k = current.key;
            current = null;
            HashMap.this.removeEntryForKey(k);
            expectedModCount = modCount;
        }

    }

    // Iterator over values
    private final class ValueIterator extends HashIterator<V> {
        public V next() {
            return nextEntry().value;
        }
    }

    // Iterator over keys
    private final class KeyIterator extends HashIterator<K> {
        public K next() {
            return nextEntry().getKey();
        }
    }

    // Iterator over entries
    private final class EntryIterator extends HashIterator<Map.Entry<K,V>> {
        public Map.Entry<K,V> next() {
            return nextEntry();
        }
    }

    // Returns a key iterator
    Iterator<K> newKeyIterator() {
        return new KeyIterator();
    }
    // Returns a value iterator
    Iterator<V> newValueIterator() {
        return new ValueIterator();
    }
    // Returns an entry iterator
    Iterator<Map.Entry<K,V>> newEntryIterator() {
        return new EntryIterator();
    }

    // The Set of the HashMap's entries
    private transient Set<Map.Entry<K,V>> entrySet = null;

    // Returns the set of keys; in fact it returns a KeySet object
    public Set<K> keySet() {
        Set<K> ks = keySet;
        return (ks != null ? ks : (keySet = new KeySet()));
    }

    // The set of keys.
    // KeySet extends AbstractSet, i.e. the collection contains no duplicate keys.
    private final class KeySet extends AbstractSet<K> {
        public Iterator<K> iterator() {
            return newKeyIterator();
        }
        public int size() {
            return size;
        }
        public boolean contains(Object o) {
            return containsKey(o);
        }
        public boolean remove(Object o) {
            return HashMap.this.removeEntryForKey(o) != null;
        }
        public void clear() {
            HashMap.this.clear();
        }
    }

    // Returns the value collection; in fact it returns a Values object
    public Collection<V> values() {
        Collection<V> vs = values;
        return (vs != null ? vs : (values = new Values()));
    }

    // The value collection.
    // Values extends AbstractCollection, unlike KeySet which extends AbstractSet:
    // the elements of Values may repeat, because different keys can map to the same value.
    private final class Values extends AbstractCollection<V> {
        public Iterator<V> iterator() {
            return newValueIterator();
        }
        public int size() {
            return size;
        }
        public boolean contains(Object o) {
            return containsValue(o);
        }
        public void clear() {
            HashMap.this.clear();
        }
    }

    // Returns the HashMap's entry set
    public Set<Map.Entry<K,V>> entrySet() {
        return entrySet0();
    }

    // Returns the HashMap's entry set; it actually returns an EntrySet object
    private Set<Map.Entry<K,V>> entrySet0() {
        Set<Map.Entry<K,V>> es = entrySet;
        return es != null ? es : (entrySet = new EntrySet());
    }

    // The entry set.
    // EntrySet extends AbstractSet, i.e. the collection contains no duplicate entries.
    private final class EntrySet extends AbstractSet<Map.Entry<K,V>> {
        public Iterator<Map.Entry<K,V>> iterator() {
            return newEntryIterator();
        }
        public boolean contains(Object o) {
            if (!(o instanceof Map.Entry))
                return false;
            Map.Entry<K,V> e = (Map.Entry<K,V>) o;
            Entry<K,V> candidate = getEntry(e.getKey());
            return candidate != null && candidate.equals(e);
        }
        public boolean remove(Object o) {
            return removeMapping(o) != null;
        }
        public int size() {
            return size;
        }
        public void clear() {
            HashMap.this.clear();
        }
    }

    // The write method for java.io.Serializable:
    // writes the HashMap's total capacity, actual size and all Entry objects to the output stream
    private void writeObject(java.io.ObjectOutputStream s)
        throws IOException
    {
        Iterator<Map.Entry<K,V>> i =
            (size > 0) ? entrySet0().iterator() : null;

        // Write out the threshold, loadfactor, and any hidden stuff
        s.defaultWriteObject();

        // Write out number of buckets
        s.writeInt(table.length);

        // Write out size (number of Mappings)
        s.writeInt(size);

        // Write out keys and values (alternating)
        if (i != null) {
            while (i.hasNext()) {
                Map.Entry<K,V> e = i.next();
                s.writeObject(e.getKey());
                s.writeObject(e.getValue());
            }
        }
    }

    private static final long serialVersionUID = 362498820763181265L;

    // The read method for java.io.Serializable: reads back what writeObject wrote,
    // i.e. the HashMap's total capacity, actual size and all Entry objects, in order
    private void readObject(java.io.ObjectInputStream s)
        throws IOException, ClassNotFoundException
    {
        // Read in the threshold, loadfactor, and any hidden stuff
        s.defaultReadObject();

        // Read in number of buckets and allocate the bucket array;
        int numBuckets = s.readInt();
        table = new Entry[numBuckets];

        init(); // Give subclass a chance to do its thing.

        // Read in size (number of Mappings)
        int size = s.readInt();

        // Read the keys and values, and put the mappings in the HashMap
        for (int i=0; i<size; i++) {
            K key = (K) s.readObject();
            V value = (V) s.readObject();
            putForCreate(key, value);
        }
    }

    // Returns the HashMap's total capacity
    int capacity() { return table.length; }
    // Returns the HashMap's load factor
    float loadFactor() { return loadFactor; }
}

HashMap source code walkthrough (JDK 1.6)
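Before moving on, here is a small usage sketch of the public API walked through above (the class name is illustrative):

import java.util.HashMap;
import java.util.Map;

public class HashMapUsage {
    public static void main(String[] args) {
        Map<String, String> map = new HashMap<>();
        map.put("one", "1");    // put() hashes the key and chains the entry into its bucket
        map.put("one", "uno");  // a second put with the same key replaces the old value
        System.out.println(map.get("one"));         // uno
        System.out.println(map.containsKey("two")); // false
        map.remove("one");                          // unlinks the entry from its bucket's chain
        System.out.println(map.size());             // 0
    }
}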

   In JDK 1.8, however, HashMap gained a red-black-tree mechanism: when a single bucket's chain grows to TREEIFY_THRESHOLD (8) nodes and the table has at least MIN_TREEIFY_CAPACITY (64) buckets, that chain is converted into a red-black tree.
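As a hedged illustration of that mechanism, the hypothetical BadKey class below forces every entry into the same bucket by returning a constant hashCode. Treeification happens internally and is not directly observable, but because BadKey is also Comparable, JDK 8 can keep the oversized bin as a red-black tree and serve lookups in O(log n) rather than O(n):

import java.util.HashMap;
import java.util.Map;

public class TreeifyDemo {
    // A hypothetical key whose hashCode deliberately collides: every
    // instance lands in the same bucket, so that bucket's chain keeps growing.
    static final class BadKey implements Comparable<BadKey> {
        final int id;
        BadKey(int id) { this.id = id; }
        @Override public int hashCode() { return 42; } // constant: all keys collide
        @Override public boolean equals(Object o) {
            return o instanceof BadKey && ((BadKey) o).id == id;
        }
        @Override public int compareTo(BadKey other) { return Integer.compare(id, other.id); }
    }

    public static void main(String[] args) {
        Map<BadKey, Integer> map = new HashMap<>();
        // Once the bucket holds TREEIFY_THRESHOLD (8) nodes and the table has
        // grown to MIN_TREEIFY_CAPACITY (64) buckets, JDK 8 turns the chain
        // into a red-black tree, so lookups stay O(log n) despite the collisions.
        for (int i = 0; i < 1000; i++)
            map.put(new BadKey(i), i);
        System.out.println(map.get(new BadKey(999))); // 999, found via the tree bin
    }
}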

   Let us also look at the relationship between HashMap and Map:

  1. HashMap extends the AbstractMap class and implements the Map interface. Map is the "key-value pair" interface, and AbstractMap provides the common implementations of that interface.
  2. HashMap is a hash table implemented with separate chaining. It has several important fields: table, size, threshold, loadFactor, and modCount.
     table is an array of type Entry[]; an Entry is essentially a singly linked list. All of the hash table's key-value pairs are stored in this Entry array.
     size is the size of the HashMap, i.e. the number of key-value pairs it holds.
     threshold is the HashMap's resize threshold, used to decide whether the capacity needs to be adjusted.
     threshold = capacity * load factor; once the number of stored entries reaches threshold, the capacity is doubled.
     loadFactor is the load factor.
     modCount is used to implement the fail-fast mechanism. The collection classes under java.util are fail-fast,
     while the classes under java.util.concurrent are fail-safe.
     A fail-fast iterator throws ConcurrentModificationException; a fail-safe iterator never throws that exception.
  3. When multiple threads operate on the same collection and, while one thread is iterating over it,
     the collection's contents are changed by another thread (that is, another thread changes modCount via add, remove, clear, and so on),
     a ConcurrentModificationException is thrown: a fail-fast event (see the sketch after this list).
  4. Fail-fast is an error-detection mechanism and can only be used to detect bugs, because the JDK does not guarantee that it will trigger.
     When collections are used in a multithreaded environment, it is advisable to replace the classes under java.util with those under java.util.concurrent.
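Here is a minimal sketch of a fail-fast event and its fail-safe counterpart (the class name is illustrative; removing an entry during iteration is simply a deterministic, single-threaded way to change modCount):

import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class FailFastDemo {
    public static void main(String[] args) {
        Map<String, Integer> map = new HashMap<>();
        map.put("a", 1);
        map.put("b", 2);
        try {
            for (String key : map.keySet())
                map.remove("a"); // structural modification changes modCount mid-iteration
        } catch (java.util.ConcurrentModificationException e) {
            System.out.println("fail-fast: " + e); // thrown on the iterator's next step
        }

        // A fail-safe collection from java.util.concurrent tolerates the same pattern:
        // its iterators are weakly consistent and never throw this exception.
        Map<String, Integer> safe = new ConcurrentHashMap<>(map);
        for (String key : safe.keySet())
            safe.remove(key); // no exception
    }
}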

Now let us look at how HashMap is implemented in JDK 1.8:

/*
 * Copyright (c) 1997, 2013, Oracle and/or its affiliates. All rights reserved.
 * ORACLE PROPRIETARY/CONFIDENTIAL. Use is subject to license terms.
 */

package java.util;

import java.io.IOException;
import java.io.InvalidObjectException;
import java.io.Serializable;
import java.lang.reflect.ParameterizedType;
import java.lang.reflect.Type;
import java.util.function.BiConsumer;
import java.util.function.BiFunction;
import java.util.function.Consumer;
import java.util.function.Function;

/**
 * Hash table based implementation of the <tt>Map</tt> interface. This
 * implementation provides all of the optional map operations, and permits
 * <tt>null</tt> values and the <tt>null</tt> key. (The <tt>HashMap</tt>
 * class is roughly equivalent to <tt>Hashtable</tt>, except that it is
 * unsynchronized and permits nulls.) This class makes no guarantees as to
 * the order of the map; in particular, it does not guarantee that the order
 * will remain constant over time.
 *
 * <p>This implementation provides constant-time performance for the basic
 * operations (<tt>get</tt> and <tt>put</tt>), assuming the hash function
 * disperses the elements properly among the buckets. Iteration over
 * collection views requires time proportional to the "capacity" of the
 * <tt>HashMap</tt> instance (the number of buckets) plus its size (the number
 * of key-value mappings). Thus, it's very important not to set the initial
 * capacity too high (or the load factor too low) if iteration performance is
 * important.
 *
 * <p>An instance of <tt>HashMap</tt> has two parameters that affect its
 * performance: <i>initial capacity</i> and <i>load factor</i>. The
 * <i>capacity</i> is the number of buckets in the hash table, and the initial
 * capacity is simply the capacity at the time the hash table is created. The
 * <i>load factor</i> is a measure of how full the hash table is allowed to
 * get before its capacity is automatically increased. When the number of
 * entries in the hash table exceeds the product of the load factor and the
 * current capacity, the hash table is <i>rehashed</i> (that is, internal data
 * structures are rebuilt) so that the hash table has approximately twice the
 * number of buckets.
 *
 * <p>As a general rule, the default load factor (.75) offers a good
 * tradeoff between time and space costs. Higher values decrease the
 * space overhead but increase the lookup cost (reflected in most of
 * the operations of the <tt>HashMap</tt> class, including
 * <tt>get</tt> and <tt>put</tt>). The expected number of entries in
 * the map and its load factor should be taken into account when
 * setting its initial capacity, so as to minimize the number of
 * rehash operations. If the initial capacity is greater than the
 * maximum number of entries divided by the load factor, no rehash
 * operations will ever occur.
 *
 * <p>If many mappings are to be stored in a <tt>HashMap</tt>
 * instance, creating it with a sufficiently large capacity will allow
 * the mappings to be stored more efficiently than letting it perform
 * automatic rehashing as needed to grow the table. Note that using
 * many keys with the same {@code hashCode()} is a sure way to slow
 * down performance of any hash table. To ameliorate impact, when keys
 * are {@link Comparable}, this class may use comparison order among
 * keys to help break ties.
 *
 * <p><strong>Note that this implementation is not synchronized.</strong>
 * If multiple threads access a hash map concurrently, and at least one of
 * the threads modifies the map structurally, it <i>must</i> be
 * synchronized externally. (A structural modification is any operation
 * that adds or deletes one or more mappings; merely changing the value
 * associated with a key that an instance already contains is not a
 * structural modification.) This is typically accomplished by
 * synchronizing on some object that naturally encapsulates the map.
 *
 * If no such object exists, the map should be "wrapped" using the
 * {@link Collections#synchronizedMap Collections.synchronizedMap}
 * method. This is best done at creation time, to prevent accidental
 * unsynchronized access to the map:<pre>
 *   Map m = Collections.synchronizedMap(new HashMap(...));</pre>
 *
 * <p>The iterators returned by all of this class's "collection view methods"
 * are <i>fail-fast</i>: if the map is structurally modified at any time after
 * the iterator is created, in any way except through the iterator's own
 * <tt>remove</tt> method, the iterator will throw a
 * {@link ConcurrentModificationException}. Thus, in the face of concurrent
 * modification, the iterator fails quickly and cleanly, rather than risking
 * arbitrary, non-deterministic behavior at an undetermined time in the
 * future.
 *
 * <p>Note that the fail-fast behavior of an iterator cannot be guaranteed
 * as it is, generally speaking, impossible to make any hard guarantees in the
 * presence of unsynchronized concurrent modification. Fail-fast iterators
 * throw <tt>ConcurrentModificationException</tt> on a best-effort basis.
 * Therefore, it would be wrong to write a program that depended on this
 * exception for its correctness: <i>the fail-fast behavior of iterators
 * should be used only to detect bugs.</i>
 *
 * <p>This class is a member of the
 * <a href="{@docRoot}/../technotes/guides/collections/index.html">
 * Java Collections Framework</a>.
 *
 * @param <K> the type of keys maintained by this map
 * @param <V> the type of mapped values
 *
 * @author  Doug Lea
 * @author  Josh Bloch
 * @author  Arthur van Hoff
 * @author  Neal Gafter
 * @see     Object#hashCode()
 * @see     Collection
 * @see     Map
 * @see     TreeMap
 * @see     Hashtable
 * @since   1.2
 */
public class HashMap<K,V> extends AbstractMap<K,V>
    implements Map<K,V>, Cloneable, Serializable {

    private static final long serialVersionUID = 362498820763181265L;

    /*
     * Implementation notes.
     *
     * This map usually acts as a binned (bucketed) hash table, but
     * when bins get too large, they are transformed into bins of
     * TreeNodes, each structured similarly to those in
     * java.util.TreeMap. Most methods try to use normal bins, but
     * relay to TreeNode methods when applicable (simply by checking
     * instanceof a node). Bins of TreeNodes may be traversed and
     * used like any others, but additionally support faster lookup
     * when overpopulated. However, since the vast majority of bins in
     * normal use are not overpopulated, checking for existence of
     * tree bins may be delayed in the course of table methods.
     *
     * Tree bins (i.e., bins whose elements are all TreeNodes) are
     * ordered primarily by hashCode, but in the case of ties, if two
     * elements are of the same "class C implements Comparable<C>",
     * type then their compareTo method is used for ordering. (We
     * conservatively check generic types via reflection to validate
     * this -- see method comparableClassFor). The added complexity
     * of tree bins is worthwhile in providing worst-case O(log n)
     * operations when keys either have distinct hashes or are
     * orderable, Thus, performance degrades gracefully under
     * accidental or malicious usages in which hashCode() methods
     * return values that are poorly distributed, as well as those in
     * which many keys share a hashCode, so long as they are also
     * Comparable. (If neither of these apply, we may waste about a
     * factor of two in time and space compared to taking no
     * precautions. But the only known cases stem from poor user
     * programming practices that are already so slow that this makes
     * little difference.)
     *
     * Because TreeNodes are about twice the size of regular nodes, we
     * use them only when bins contain enough nodes to warrant use
     * (see TREEIFY_THRESHOLD). And when they become too small (due to
     * removal or resizing) they are converted back to plain bins. In
     * usages with well-distributed user hashCodes, tree bins are
     * rarely used. Ideally, under random hashCodes, the frequency of
     * nodes in bins follows a Poisson distribution
     * (http://en.wikipedia.org/wiki/Poisson_distribution) with a
     * parameter of about 0.5 on average for the default resizing
     * threshold of 0.75, although with a large variance because of
     * resizing granularity. Ignoring variance, the expected
     * occurrences of list size k are (exp(-0.5) * pow(0.5, k) /
     * factorial(k)). The first values are:
     *
     * 0:    0.60653066
     * 1:    0.30326533
     * 2:    0.07581633
     * 3:    0.01263606
     * 4:    0.00157952
     * 5:    0.00015795
     * 6:    0.00001316
     * 7:    0.00000094
     * 8:    0.00000006
     * more: less than 1 in ten million
     *
     * The root of a tree bin is normally its first node. However,
     * sometimes (currently only upon Iterator.remove), the root might
     * be elsewhere, but can be recovered following parent links
     * (method TreeNode.root()).
     *
     * All applicable internal methods accept a hash code as an
     * argument (as normally supplied from a public method), allowing
     * them to call each other without recomputing user hashCodes.
     * Most internal methods also accept a "tab" argument, that is
     * normally the current table, but may be a new or old one when
     * resizing or converting.
     *
     * When bin lists are treeified, split, or untreeified, we keep
     * them in the same relative access/traversal order (i.e., field
     * Node.next) to better preserve locality, and to slightly
     * simplify handling of splits and traversals that invoke
     * iterator.remove. When using comparators on insertion, to keep a
     * total ordering (or as close as is required here) across
     * rebalancings, we compare classes and identityHashCodes as
     * tie-breakers.
     *
     * The use and transitions among plain vs tree modes is
     * complicated by the existence of subclass LinkedHashMap. See
     * below for hook methods defined to be invoked upon insertion,
     * removal and access that allow LinkedHashMap internals to
     * otherwise remain independent of these mechanics. (This also
     * requires that a map instance be passed to some utility methods
     * that may create new nodes.)
     *
     * The concurrent-programming-like SSA-based coding style helps
     * avoid aliasing errors amid all of the twisty pointer operations.
     */

    /**
     * The default initial capacity - MUST be a power of two.
     */
    static final int DEFAULT_INITIAL_CAPACITY = 1 << 4; // aka 16

    /**
     * The maximum capacity, used if a higher value is implicitly specified
     * by either of the constructors with arguments.
     * MUST be a power of two <= 1<<30.
     */
    static final int MAXIMUM_CAPACITY = 1 << 30;

    /**
     * The load factor used when none specified in constructor.
     */
    static final float DEFAULT_LOAD_FACTOR = 0.75f;

    /**
     * The bin count threshold for using a tree rather than list for a
     * bin. Bins are converted to trees when adding an element to a
     * bin with at least this many nodes. The value must be greater
     * than 2 and should be at least 8 to mesh with assumptions in
     * tree removal about conversion back to plain bins upon
     * shrinkage.
     */
    static final int TREEIFY_THRESHOLD = 8;

    /**
     * The bin count threshold for untreeifying a (split) bin during a
     * resize operation. Should be less than TREEIFY_THRESHOLD, and at
     * most 6 to mesh with shrinkage detection under removal.
     */
    static final int UNTREEIFY_THRESHOLD = 6;

    /**
     * The smallest table capacity for which bins may be treeified.
     * (Otherwise the table is resized if too many nodes in a bin.)
     * Should be at least 4 * TREEIFY_THRESHOLD to avoid conflicts
     * between resizing and treeification thresholds.
     */
    static final int MIN_TREEIFY_CAPACITY = 64;

    /**
     * Basic hash bin node, used for most entries. (See below for
     * TreeNode subclass, and in LinkedHashMap for its Entry subclass.)
     */
    static class Node<K,V> implements Map.Entry<K,V> {
        final int hash;
        final K key;
        V value;
        Node<K,V> next;

        Node(int hash, K key, V value, Node<K,V> next) {
            this.hash = hash;
            this.key = key;
            this.value = value;
            this.next = next;
        }

        public final K getKey()        { return key; }
        public final V getValue()      { return value; }
        public final String toString() { return key + "=" + value; }

        public final int hashCode() {
            return Objects.hashCode(key) ^ Objects.hashCode(value);
        }

        public final V setValue(V newValue) {
            V oldValue = value;
            value = newValue;
            return oldValue;
        }

        public final boolean equals(Object o) {
            if (o == this)
                return true;
            if (o instanceof Map.Entry) {
                Map.Entry<?,?> e = (Map.Entry<?,?>)o;
                if (Objects.equals(key, e.getKey()) &&
                    Objects.equals(value, e.getValue()))
                    return true;
            }
            return false;
        }
    }

    /* ---------------- Static utilities -------------- */

    /**
     * Computes key.hashCode() and spreads (XORs) higher bits of hash
     * to lower. Because the table uses power-of-two masking, sets of
     * hashes that vary only in bits above the current mask will
     * always collide. (Among known examples are sets of Float keys
     * holding consecutive whole numbers in small tables.) So we
     * apply a transform that spreads the impact of higher bits
     * downward. There is a tradeoff between speed, utility, and
     * quality of bit-spreading. Because many common sets of hashes
     * are already reasonably distributed (so don't benefit from
     * spreading), and because we use trees to handle large sets of
     * collisions in bins, we just XOR some shifted bits in the
     * cheapest possible way to reduce systematic lossage, as well as
     * to incorporate impact of the highest bits that would otherwise
     * never be used in index calculations because of table bounds.
     */
    static final int hash(Object key) {
        int h;
        return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);
    }

    /**
     * Returns x's Class if it is of the form "class C implements
     * Comparable<C>", else null.
     */
    static Class<?> comparableClassFor(Object x) {
        if (x instanceof Comparable) {
            Class<?> c; Type[] ts, as; Type t; ParameterizedType p;
            if ((c = x.getClass()) == String.class) // bypass checks
                return c;
            if ((ts = c.getGenericInterfaces()) != null) {
                for (int i = 0; i < ts.length; ++i) {
                    if (((t = ts[i]) instanceof ParameterizedType) &&
                        ((p = (ParameterizedType)t).getRawType() ==
                         Comparable.class) &&
                        (as = p.getActualTypeArguments()) != null &&
                        as.length == 1 && as[0] == c) // type arg is c
                        return c;
                }
            }
        }
        return null;
    }

    /**
     * Returns k.compareTo(x) if x matches kc (k's screened comparable
     * class), else 0.
     */
    @SuppressWarnings({"rawtypes","unchecked"}) // for cast to Comparable
    static int compareComparables(Class<?> kc, Object k, Object x) {
        return (x == null || x.getClass() != kc ? 0 :
                ((Comparable)k).compareTo(x));
    }

    /**
     * Returns a power of two size for the given target capacity.
     */
    static final int tableSizeFor(int cap) {
        int n = cap - 1;
        n |= n >>> 1;
        n |= n >>> 2;
        n |= n >>> 4;
        n |= n >>> 8;
        n |= n >>> 16;
        return (n < 0) ? 1 : (n >= MAXIMUM_CAPACITY) ? MAXIMUM_CAPACITY : n + 1;
    }

    /* ---------------- Fields -------------- */

    /**
     * The table, initialized on first use, and resized as
     * necessary. When allocated, length is always a power of two.
     * (We also tolerate length zero in some operations to allow
     * bootstrapping mechanics that are currently not needed.)
     */
    transient Node<K,V>[] table;

    /**
     * Holds cached entrySet(). Note that AbstractMap fields are used
     * for keySet() and values().
     */
    transient Set<Map.Entry<K,V>> entrySet;

    /**
     * The number of key-value mappings contained in this map.
     */
    transient int size;

    /**
     * The number of times this HashMap has been structurally modified
     * Structural modifications are those that change the number of mappings in
     * the HashMap or otherwise modify its internal structure (e.g.,
     * rehash). This field is used to make iterators on Collection-views of
     * the HashMap fail-fast. (See ConcurrentModificationException).
     */
    transient int modCount;

    /**
     * The next size value at which to resize (capacity * load factor).
     *
     * @serial
     */
    // (The javadoc description is true upon serialization.
    // Additionally, if the table array has not been allocated, this
    // field holds the initial array capacity, or zero signifying
    // DEFAULT_INITIAL_CAPACITY.)
    int threshold;

    /**
     * The load factor for the hash table.
     *
     * @serial
     */
    final float loadFactor;

    /* ---------------- Public operations -------------- */

    /**
     * Constructs an empty <tt>HashMap</tt> with the specified initial
     * capacity and load factor.
     *
     * @param  initialCapacity the initial capacity
     * @param  loadFactor      the load factor
     * @throws IllegalArgumentException if the initial capacity is negative
     *         or the load factor is nonpositive
     */
    public HashMap(int initialCapacity, float loadFactor) {
        if (initialCapacity < 0)
            throw new IllegalArgumentException("Illegal initial capacity: " +
                                               initialCapacity);
        if (initialCapacity > MAXIMUM_CAPACITY)
            initialCapacity = MAXIMUM_CAPACITY;
        if (loadFactor <= 0 || Float.isNaN(loadFactor))
            throw new IllegalArgumentException("Illegal load factor: " +
                                               loadFactor);
        this.loadFactor = loadFactor;
        this.threshold = tableSizeFor(initialCapacity);
    }

    /**
     * Constructs an empty <tt>HashMap</tt> with the specified initial
     * capacity and the default load factor (0.75).
     *
     * @param  initialCapacity the initial capacity.
     * @throws IllegalArgumentException if the initial capacity is negative.
     */
    public HashMap(int initialCapacity) {
        this(initialCapacity, DEFAULT_LOAD_FACTOR);
    }

    /**
     * Constructs an empty <tt>HashMap</tt> with the default initial capacity
     * (16) and the default load factor (0.75).
     */
    public HashMap() {
        this.loadFactor = DEFAULT_LOAD_FACTOR; // all other fields defaulted
    }

    /**
     * Constructs a new <tt>HashMap</tt> with the same mappings as the
     * specified <tt>Map</tt>. The <tt>HashMap</tt> is created with
     * default load factor (0.75) and an initial capacity sufficient to
     * hold the mappings in the specified <tt>Map</tt>.
     *
     * @param   m the map whose mappings are to be placed in this map
     * @throws  NullPointerException if the specified map is null
     */
    public HashMap(Map<? extends K, ? extends V> m) {
        this.loadFactor = DEFAULT_LOAD_FACTOR;
        putMapEntries(m, false);
    }

    /**
     * Implements Map.putAll and Map constructor
     *
     * @param m the map
     * @param evict false when initially constructing this map, else
     * true (relayed to method afterNodeInsertion).
     */
    final void putMapEntries(Map<? extends K, ? extends V> m, boolean evict) {
        int s = m.size();
        if (s > 0) {
            if (table == null) { // pre-size
                float ft = ((float)s / loadFactor) + 1.0F;
                int t = ((ft < (float)MAXIMUM_CAPACITY) ?
                         (int)ft : MAXIMUM_CAPACITY);
                if (t > threshold)
                    threshold = tableSizeFor(t);
            }
            else if (s > threshold)
                resize();
            for (Map.Entry<? extends K, ? extends V> e : m.entrySet()) {
                K key = e.getKey();
                V value = e.getValue();
                putVal(hash(key), key, value, false, evict);
            }
        }
    }

    /**
     * Returns the number of key-value mappings in this map.
     *
     * @return the number of key-value mappings in this map
     */
    public int size() {
        return size;
    }

    /**
     * Returns <tt>true</tt> if this map contains no key-value mappings.
     *
     * @return <tt>true</tt> if this map contains no key-value mappings
     */
    public boolean isEmpty() {
        return size == 0;
    }

    /**
     * Returns the value to which the specified key is mapped,
     * or {@code null} if this map contains no mapping for the key.
     *
     * <p>More formally, if this map contains a mapping from a key
     * {@code k} to a value {@code v} such that {@code (key==null ? k==null :
     * key.equals(k))}, then this method returns {@code v}; otherwise
     * it returns {@code null}. (There can be at most one such mapping.)
     *
     * <p>A return value of {@code null} does not <i>necessarily</i>
     * indicate that the map contains no mapping for the key; it's also
     * possible that the map explicitly maps the key to {@code null}.
     * The {@link #containsKey containsKey} operation may be used to
     * distinguish these two cases.
     *
     * @see #put(Object, Object)
     */
    public V get(Object key) {
        Node<K,V> e;
        return (e = getNode(hash(key), key)) == null ? null : e.value;
    }

    /**
     * Implements Map.get and related methods
     *
     * @param hash hash for key
     * @param key the key
     * @return the node, or null if none
     */
    final Node<K,V> getNode(int hash, Object key) {
        Node<K,V>[] tab; Node<K,V> first, e; int n; K k;
        if ((tab = table) != null && (n = tab.length) > 0 &&
            (first = tab[(n - 1) & hash]) != null) {
            if (first.hash == hash && // always check first node
                ((k = first.key) == key || (key != null && key.equals(k))))
                return first;
            if ((e = first.next) != null) {
                if (first instanceof TreeNode)
                    return ((TreeNode<K,V>)first).getTreeNode(hash, key);
                do {
                    if (e.hash == hash &&
                        ((k = e.key) == key || (key != null && key.equals(k))))
                        return e;
                } while ((e = e.next) != null);
            }
        }
        return null;
    }

    /**
     * Returns <tt>true</tt> if this map contains a mapping for the
     * specified key.
     *
     * @param   key   The key whose presence in this map is to be tested
     * @return <tt>true</tt> if this map contains a mapping for the specified
     * key.
     */
    public boolean containsKey(Object key) {
        return getNode(hash(key), key) != null;
    }

    /**
     * Associates the specified value with the specified key in this map.
     * If the map previously contained a mapping for the key, the old
     * value is replaced.
     *
     * @param key key with which the specified value is to be associated
     * @param value value to be associated with the specified key
     * @return the previous value associated with <tt>key</tt>, or
     *         <tt>null</tt> if there was no mapping for <tt>key</tt>.
     *         (A <tt>null</tt> return can also indicate that the map
     *         previously associated <tt>null</tt> with <tt>key</tt>.)
     */
    public V put(K key, V value) {
        return putVal(hash(key), key, value, false, true);
    }

    /**
     * Implements Map.put and related methods
     *
     * @param hash hash for key
     * @param key the key
     * @param value the value to put
     * @param onlyIfAbsent if true, don't change existing value
     * @param evict if false, the table is in creation mode.
     * @return previous value, or null if none
     */
    final V putVal(int hash, K key, V value, boolean onlyIfAbsent,
                   boolean evict) {
        Node<K,V>[] tab; Node<K,V> p; int n, i;
        if ((tab = table) == null || (n = tab.length) == 0)
            n = (tab = resize()).length;
        if ((p = tab[i = (n - 1) & hash]) == null)
            tab[i] = newNode(hash, key, value, null);
        else {
            Node<K,V> e; K k;
            if (p.hash == hash &&
                ((k = p.key) == key || (key != null && key.equals(k))))
                e = p;
            else if (p instanceof TreeNode)
                e = ((TreeNode<K,V>)p).putTreeVal(this, tab, hash, key, value);
            else {
                for (int binCount = 0; ; ++binCount) {
                    if ((e = p.next) == null) {
                        p.next = newNode(hash, key, value, null);
                        if (binCount >= TREEIFY_THRESHOLD - 1) // -1 for 1st
                            treeifyBin(tab, hash);
                        break;
                    }
                    if (e.hash == hash &&
                        ((k = e.key) == key || (key != null && key.equals(k))))
                        break;
                    p = e;
                }
            }
            if (e != null) { // existing mapping for key
                V oldValue = e.value;
                if (!onlyIfAbsent || oldValue == null)
                    e.value = value;
                afterNodeAccess(e);
                return oldValue;
            }
        }
        ++modCount;
        if (++size > threshold)
            resize();
        afterNodeInsertion(evict);
        return null;
    }

    /**
     * Initializes or doubles table size. If null, allocates in
     * accord with initial capacity target held in field threshold.
     * Otherwise, because we are using power-of-two expansion, the
     * elements from each bin must either stay at same index, or move
     * with a power of two offset in the new table.
     *
     * @return the table
     */
    final Node<K,V>[] resize() {
        Node<K,V>[] oldTab = table;
        int oldCap = (oldTab == null) ? 0 : oldTab.length;
        int oldThr = threshold;
        int newCap, newThr = 0;
        if (oldCap > 0) {
            if (oldCap >= MAXIMUM_CAPACITY) {
                threshold = Integer.MAX_VALUE;
                return oldTab;
            }
            else if ((newCap = oldCap << 1) < MAXIMUM_CAPACITY &&
                     oldCap >= DEFAULT_INITIAL_CAPACITY)
                newThr = oldThr << 1; // double threshold
        }
        else if (oldThr > 0) // initial capacity was placed in threshold
            newCap = oldThr;
        else {               // zero initial threshold signifies using defaults
            newCap = DEFAULT_INITIAL_CAPACITY;
            newThr = (int)(DEFAULT_LOAD_FACTOR * DEFAULT_INITIAL_CAPACITY);
        }
        if (newThr == 0) {
            float ft = (float)newCap * loadFactor;
            newThr = (newCap < MAXIMUM_CAPACITY && ft < (float)MAXIMUM_CAPACITY ?
                      (int)ft : Integer.MAX_VALUE);
        }
        threshold = newThr;
        @SuppressWarnings({"rawtypes","unchecked"})
        Node<K,V>[] newTab = (Node<K,V>[])new Node[newCap];
        table = newTab;
        if (oldTab != null) {
            for (int j = 0; j < oldCap; ++j) {
                Node<K,V> e;
                if ((e = oldTab[j]) != null) {
                    oldTab[j] = null;
                    if (e.next == null)
                        newTab[e.hash & (newCap - 1)] = e;
                    else if (e instanceof TreeNode)
                        ((TreeNode<K,V>)e).split(this, newTab, j, oldCap);
                    else { // preserve order
                        Node<K,V> loHead = null, loTail = null;
                        Node<K,V> hiHead = null, hiTail = null;
                        Node<K,V> next;
                        do {
                            next = e.next;
                            if ((e.hash & oldCap) == 0) {
                                if (loTail == null)
                                    loHead = e;
                                else
                                    loTail.next = e;
                                loTail = e;
                            }
                            else {
                                if (hiTail == null)
                                    hiHead = e;
                                else
                                    hiTail.next = e;
                                hiTail = e;
                            }
                        } while ((e = next) != null);
                        if (loTail != null) {
                            loTail.next = null;
                            newTab[j] = loHead;
                        }
                        if (hiTail != null) {
                            hiTail.next = null;
                            newTab[j + oldCap] = hiHead;
                        }
                    }
                }
            }
        }
        return newTab;
    }

    /**
     * Replaces all linked nodes in bin at index for given hash unless
     * table is too small, in which case resizes instead.
     */
    final void treeifyBin(Node<K,V>[] tab, int hash) {
        int n, index; Node<K,V> e;
        if (tab == null || (n = tab.length) < MIN_TREEIFY_CAPACITY)
            resize();
        else if ((e = tab[index = (n - 1) & hash]) != null) {
            TreeNode<K,V> hd = null, tl = null;
            do {
                TreeNode<K,V> p = replacementTreeNode(e, null);
                if (tl == null)
                    hd = p;
                else {
                    p.prev = tl;
                    tl.next = p;
                }
                tl = p;
            } while ((e = e.next) != null);
            if ((tab[index] = hd) != null)
                hd.treeify(tab);
        }
    }

    /**
     * Copies all of the mappings from the specified map to this map.
     * These mappings will replace any mappings that this map had for
     * any of the keys currently in the specified map.
     *
     * @param m mappings to be stored in this map
     * @throws NullPointerException if the specified map is null
     */
    public void putAll(Map<? extends K, ? extends V> m) {
        putMapEntries(m, true);
    }

    /**
     * Removes the mapping for the specified key from this map if present.
     *
     * @param  key key whose mapping is to be removed from the map
     * @return the previous value associated with <tt>key</tt>, or
     *         <tt>null</tt> if there was no mapping for <tt>key</tt>.
     *         (A <tt>null</tt> return can also indicate that the map
     *         previously associated <tt>null</tt> with <tt>key</tt>.)
     */
    public V remove(Object key) {
        Node<K,V> e;
        return (e = removeNode(hash(key), key, null, false, true)) == null ?
            null : e.value;
    }

    /**
     * Implements Map.remove and related methods
     *
     * @param hash hash for key
     * @param key the key
     * @param value the value to match if matchValue, else ignored
     * @param matchValue if true only remove if value is equal
     * @param movable if false do not move other nodes while removing
     * @return the node, or null if none
     */
    final Node<K,V> removeNode(int hash, Object key, Object value,
                               boolean matchValue, boolean movable) {
        Node<K,V>[] tab; Node<K,V> p; int n, index;
        if ((tab = table) != null && (n = tab.length) > 0 &&
            (p = tab[index = (n - 1) & hash]) != null) {
            Node<K,V> node = null, e; K k; V v;
            if (p.hash == hash &&
                ((k = p.key) == key || (key != null && key.equals(k))))
                node = p;
            else if ((e = p.next) != null) {
                if (p instanceof TreeNode)
                    node = ((TreeNode<K,V>)p).getTreeNode(hash, key);
                else {
                    do {
                        if (e.hash == hash &&
                            ((k = e.key) == key ||
                             (key != null && key.equals(k)))) {
                            node = e;
                            break;
                        }
                        p = e;
                    } while ((e = e.next) != null);
                }
            }
            if (node != null && (!matchValue || (v = node.value) == value ||
                                 (value != null && value.equals(v)))) {
                if (node instanceof TreeNode)
                    ((TreeNode<K,V>)node).removeTreeNode(this, tab, movable);
                else if (node == p)
                    tab[index] = node.next;
                else
                    p.next = node.next;
                ++modCount;
                --size;
                afterNodeRemoval(node);
                return node;
            }
        }
        return null;
    }

  853. /**
  854. * Removes all of the mappings from this map.
  855. * The map will be empty after this call returns.
  856. */
  857. public void clear() {
  858. Node<K,V>[] tab;
  859. modCount++;
  860. if ((tab = table) != null && size > 0) {
  861. size = 0;
  862. for (int i = 0; i < tab.length; ++i)
  863. tab[i] = null;
  864. }
  865. }
  866.  
  867. /**
  868. * Returns <tt>true</tt> if this map maps one or more keys to the
  869. * specified value.
  870. *
  871. * @param value value whose presence in this map is to be tested
  872. * @return <tt>true</tt> if this map maps one or more keys to the
  873. * specified value
  874. */
  875. public boolean containsValue(Object value) {
  876. Node<K,V>[] tab; V v;
  877. if ((tab = table) != null && size > 0) {
  878. for (int i = 0; i < tab.length; ++i) {
  879. for (Node<K,V> e = tab[i]; e != null; e = e.next) {
  880. if ((v = e.value) == value ||
  881. (value != null && value.equals(v)))
  882. return true;
  883. }
  884. }
  885. }
  886. return false;
  887. }
  888.  
  889. /**
  890. * Returns a {@link Set} view of the keys contained in this map.
  891. * The set is backed by the map, so changes to the map are
  892. * reflected in the set, and vice-versa. If the map is modified
  893. * while an iteration over the set is in progress (except through
  894. * the iterator's own <tt>remove</tt> operation), the results of
  895. * the iteration are undefined. The set supports element removal,
  896. * which removes the corresponding mapping from the map, via the
  897. * <tt>Iterator.remove</tt>, <tt>Set.remove</tt>,
  898. * <tt>removeAll</tt>, <tt>retainAll</tt>, and <tt>clear</tt>
  899. * operations. It does not support the <tt>add</tt> or <tt>addAll</tt>
  900. * operations.
  901. *
  902. * @return a set view of the keys contained in this map
  903. */
  904. public Set<K> keySet() {
  905. Set<K> ks;
  906. return (ks = keySet) == null ? (keySet = new KeySet()) : ks;
  907. }
  908.  
  909. final class KeySet extends AbstractSet<K> {
  910. public final int size() { return size; }
  911. public final void clear() { HashMap.this.clear(); }
  912. public final Iterator<K> iterator() { return new KeyIterator(); }
  913. public final boolean contains(Object o) { return containsKey(o); }
  914. public final boolean remove(Object key) {
  915. return removeNode(hash(key), key, null, false, true) != null;
  916. }
  917. public final Spliterator<K> spliterator() {
  918. return new KeySpliterator<>(HashMap.this, 0, -1, 0, 0);
  919. }
  920. public final void forEach(Consumer<? super K> action) {
  921. Node<K,V>[] tab;
  922. if (action == null)
  923. throw new NullPointerException();
  924. if (size > 0 && (tab = table) != null) {
  925. int mc = modCount;
  926. for (int i = 0; i < tab.length; ++i) {
  927. for (Node<K,V> e = tab[i]; e != null; e = e.next)
  928. action.accept(e.key);
  929. }
  930. if (modCount != mc)
  931. throw new ConcurrentModificationException();
  932. }
  933. }
  934. }
  935.  
  936. /**
  937. * Returns a {@link Collection} view of the values contained in this map.
  938. * The collection is backed by the map, so changes to the map are
  939. * reflected in the collection, and vice-versa. If the map is
  940. * modified while an iteration over the collection is in progress
  941. * (except through the iterator's own <tt>remove</tt> operation),
  942. * the results of the iteration are undefined. The collection
  943. * supports element removal, which removes the corresponding
  944. * mapping from the map, via the <tt>Iterator.remove</tt>,
  945. * <tt>Collection.remove</tt>, <tt>removeAll</tt>,
  946. * <tt>retainAll</tt> and <tt>clear</tt> operations. It does not
  947. * support the <tt>add</tt> or <tt>addAll</tt> operations.
  948. *
  949. * @return a view of the values contained in this map
  950. */
  951. public Collection<V> values() {
  952. Collection<V> vs;
  953. return (vs = values) == null ? (values = new Values()) : vs;
  954. }
  955.  
  956. final class Values extends AbstractCollection<V> {
  957. public final int size() { return size; }
  958. public final void clear() { HashMap.this.clear(); }
  959. public final Iterator<V> iterator() { return new ValueIterator(); }
  960. public final boolean contains(Object o) { return containsValue(o); }
  961. public final Spliterator<V> spliterator() {
  962. return new ValueSpliterator<>(HashMap.this, 0, -1, 0, 0);
  963. }
  964. public final void forEach(Consumer<? super V> action) {
  965. Node<K,V>[] tab;
  966. if (action == null)
  967. throw new NullPointerException();
  968. if (size > 0 && (tab = table) != null) {
  969. int mc = modCount;
  970. for (int i = 0; i < tab.length; ++i) {
  971. for (Node<K,V> e = tab[i]; e != null; e = e.next)
  972. action.accept(e.value);
  973. }
  974. if (modCount != mc)
  975. throw new ConcurrentModificationException();
  976. }
  977. }
  978. }
  979.  
  980. /**
  981. * Returns a {@link Set} view of the mappings contained in this map.
  982. * The set is backed by the map, so changes to the map are
  983. * reflected in the set, and vice-versa. If the map is modified
  984. * while an iteration over the set is in progress (except through
  985. * the iterator's own <tt>remove</tt> operation, or through the
  986. * <tt>setValue</tt> operation on a map entry returned by the
  987. * iterator) the results of the iteration are undefined. The set
  988. * supports element removal, which removes the corresponding
  989. * mapping from the map, via the <tt>Iterator.remove</tt>,
  990. * <tt>Set.remove</tt>, <tt>removeAll</tt>, <tt>retainAll</tt> and
  991. * <tt>clear</tt> operations. It does not support the
  992. * <tt>add</tt> or <tt>addAll</tt> operations.
  993. *
  994. * @return a set view of the mappings contained in this map
  995. */
  996. public Set<Map.Entry<K,V>> entrySet() {
  997. Set<Map.Entry<K,V>> es;
  998. return (es = entrySet) == null ? (entrySet = new EntrySet()) : es;
  999. }
  1000.  
  1001. final class EntrySet extends AbstractSet<Map.Entry<K,V>> {
  1002. public final int size() { return size; }
  1003. public final void clear() { HashMap.this.clear(); }
  1004. public final Iterator<Map.Entry<K,V>> iterator() {
  1005. return new EntryIterator();
  1006. }
  1007. public final boolean contains(Object o) {
  1008. if (!(o instanceof Map.Entry))
  1009. return false;
  1010. Map.Entry<?,?> e = (Map.Entry<?,?>) o;
  1011. Object key = e.getKey();
  1012. Node<K,V> candidate = getNode(hash(key), key);
  1013. return candidate != null && candidate.equals(e);
  1014. }
  1015. public final boolean remove(Object o) {
  1016. if (o instanceof Map.Entry) {
  1017. Map.Entry<?,?> e = (Map.Entry<?,?>) o;
  1018. Object key = e.getKey();
  1019. Object value = e.getValue();
  1020. return removeNode(hash(key), key, value, true, true) != null;
  1021. }
  1022. return false;
  1023. }
  1024. public final Spliterator<Map.Entry<K,V>> spliterator() {
  1025. return new EntrySpliterator<>(HashMap.this, 0, -1, 0, 0);
  1026. }
  1027. public final void forEach(Consumer<? super Map.Entry<K,V>> action) {
  1028. Node<K,V>[] tab;
  1029. if (action == null)
  1030. throw new NullPointerException();
  1031. if (size > 0 && (tab = table) != null) {
  1032. int mc = modCount;
  1033. for (int i = 0; i < tab.length; ++i) {
  1034. for (Node<K,V> e = tab[i]; e != null; e = e.next)
  1035. action.accept(e);
  1036. }
  1037. if (modCount != mc)
  1038. throw new ConcurrentModificationException();
  1039. }
  1040. }
  1041. }
  1042.  
  1043. // Overrides of JDK8 Map extension methods
  1044.  
  1045. @Override
  1046. public V getOrDefault(Object key, V defaultValue) {
  1047. Node<K,V> e;
  1048. return (e = getNode(hash(key), key)) == null ? defaultValue : e.value;
  1049. }
  1050.  
  1051. @Override
  1052. public V putIfAbsent(K key, V value) {
  1053. return putVal(hash(key), key, value, true, true);
  1054. }
  1055.  
  1056. @Override
  1057. public boolean remove(Object key, Object value) {
  1058. return removeNode(hash(key), key, value, true, true) != null;
  1059. }
  1060.  
  1061. @Override
  1062. public boolean replace(K key, V oldValue, V newValue) {
  1063. Node<K,V> e; V v;
  1064. if ((e = getNode(hash(key), key)) != null &&
  1065. ((v = e.value) == oldValue || (v != null && v.equals(oldValue)))) {
  1066. e.value = newValue;
  1067. afterNodeAccess(e);
  1068. return true;
  1069. }
  1070. return false;
  1071. }
  1072.  
  1073. @Override
  1074. public V replace(K key, V value) {
  1075. Node<K,V> e;
  1076. if ((e = getNode(hash(key), key)) != null) {
  1077. V oldValue = e.value;
  1078. e.value = value;
  1079. afterNodeAccess(e);
  1080. return oldValue;
  1081. }
  1082. return null;
  1083. }
  1084.  
  1085. @Override
  1086. public V computeIfAbsent(K key,
  1087. Function<? super K, ? extends V> mappingFunction) {
  1088. if (mappingFunction == null)
  1089. throw new NullPointerException();
  1090. int hash = hash(key);
  1091. Node<K,V>[] tab; Node<K,V> first; int n, i;
  1092. int binCount = 0;
  1093. TreeNode<K,V> t = null;
  1094. Node<K,V> old = null;
  1095. if (size > threshold || (tab = table) == null ||
  1096. (n = tab.length) == 0)
  1097. n = (tab = resize()).length;
  1098. if ((first = tab[i = (n - 1) & hash]) != null) {
  1099. if (first instanceof TreeNode)
  1100. old = (t = (TreeNode<K,V>)first).getTreeNode(hash, key);
  1101. else {
  1102. Node<K,V> e = first; K k;
  1103. do {
  1104. if (e.hash == hash &&
  1105. ((k = e.key) == key || (key != null && key.equals(k)))) {
  1106. old = e;
  1107. break;
  1108. }
  1109. ++binCount;
  1110. } while ((e = e.next) != null);
  1111. }
  1112. V oldValue;
  1113. if (old != null && (oldValue = old.value) != null) {
  1114. afterNodeAccess(old);
  1115. return oldValue;
  1116. }
  1117. }
  1118. V v = mappingFunction.apply(key);
  1119. if (v == null) {
  1120. return null;
  1121. } else if (old != null) {
  1122. old.value = v;
  1123. afterNodeAccess(old);
  1124. return v;
  1125. }
  1126. else if (t != null)
  1127. t.putTreeVal(this, tab, hash, key, v);
  1128. else {
  1129. tab[i] = newNode(hash, key, v, first);
  1130. if (binCount >= TREEIFY_THRESHOLD - 1)
  1131. treeifyBin(tab, hash);
  1132. }
  1133. ++modCount;
  1134. ++size;
  1135. afterNodeInsertion(true);
  1136. return v;
  1137. }
  1138.  
  1139. public V computeIfPresent(K key,
  1140. BiFunction<? super K, ? super V, ? extends V> remappingFunction) {
  1141. if (remappingFunction == null)
  1142. throw new NullPointerException();
  1143. Node<K,V> e; V oldValue;
  1144. int hash = hash(key);
  1145. if ((e = getNode(hash, key)) != null &&
  1146. (oldValue = e.value) != null) {
  1147. V v = remappingFunction.apply(key, oldValue);
  1148. if (v != null) {
  1149. e.value = v;
  1150. afterNodeAccess(e);
  1151. return v;
  1152. }
  1153. else
  1154. removeNode(hash, key, null, false, true);
  1155. }
  1156. return null;
  1157. }
  1158.  
  1159. @Override
  1160. public V compute(K key,
  1161. BiFunction<? super K, ? super V, ? extends V> remappingFunction) {
  1162. if (remappingFunction == null)
  1163. throw new NullPointerException();
  1164. int hash = hash(key);
  1165. Node<K,V>[] tab; Node<K,V> first; int n, i;
  1166. int binCount = 0;
  1167. TreeNode<K,V> t = null;
  1168. Node<K,V> old = null;
  1169. if (size > threshold || (tab = table) == null ||
  1170. (n = tab.length) == 0)
  1171. n = (tab = resize()).length;
  1172. if ((first = tab[i = (n - 1) & hash]) != null) {
  1173. if (first instanceof TreeNode)
  1174. old = (t = (TreeNode<K,V>)first).getTreeNode(hash, key);
  1175. else {
  1176. Node<K,V> e = first; K k;
  1177. do {
  1178. if (e.hash == hash &&
  1179. ((k = e.key) == key || (key != null && key.equals(k)))) {
  1180. old = e;
  1181. break;
  1182. }
  1183. ++binCount;
  1184. } while ((e = e.next) != null);
  1185. }
  1186. }
  1187. V oldValue = (old == null) ? null : old.value;
  1188. V v = remappingFunction.apply(key, oldValue);
  1189. if (old != null) {
  1190. if (v != null) {
  1191. old.value = v;
  1192. afterNodeAccess(old);
  1193. }
  1194. else
  1195. removeNode(hash, key, null, false, true);
  1196. }
  1197. else if (v != null) {
  1198. if (t != null)
  1199. t.putTreeVal(this, tab, hash, key, v);
  1200. else {
  1201. tab[i] = newNode(hash, key, v, first);
  1202. if (binCount >= TREEIFY_THRESHOLD - 1)
  1203. treeifyBin(tab, hash);
  1204. }
  1205. ++modCount;
  1206. ++size;
  1207. afterNodeInsertion(true);
  1208. }
  1209. return v;
  1210. }
  1211.  
  1212. @Override
  1213. public V merge(K key, V value,
  1214. BiFunction<? super V, ? super V, ? extends V> remappingFunction) {
  1215. if (value == null)
  1216. throw new NullPointerException();
  1217. if (remappingFunction == null)
  1218. throw new NullPointerException();
  1219. int hash = hash(key);
  1220. Node<K,V>[] tab; Node<K,V> first; int n, i;
  1221. int binCount = 0;
  1222. TreeNode<K,V> t = null;
  1223. Node<K,V> old = null;
  1224. if (size > threshold || (tab = table) == null ||
  1225. (n = tab.length) == 0)
  1226. n = (tab = resize()).length;
  1227. if ((first = tab[i = (n - 1) & hash]) != null) {
  1228. if (first instanceof TreeNode)
  1229. old = (t = (TreeNode<K,V>)first).getTreeNode(hash, key);
  1230. else {
  1231. Node<K,V> e = first; K k;
  1232. do {
  1233. if (e.hash == hash &&
  1234. ((k = e.key) == key || (key != null && key.equals(k)))) {
  1235. old = e;
  1236. break;
  1237. }
  1238. ++binCount;
  1239. } while ((e = e.next) != null);
  1240. }
  1241. }
  1242. if (old != null) {
  1243. V v;
  1244. if (old.value != null)
  1245. v = remappingFunction.apply(old.value, value);
  1246. else
  1247. v = value;
  1248. if (v != null) {
  1249. old.value = v;
  1250. afterNodeAccess(old);
  1251. }
  1252. else
  1253. removeNode(hash, key, null, false, true);
  1254. return v;
  1255. }
  1256. if (value != null) {
  1257. if (t != null)
  1258. t.putTreeVal(this, tab, hash, key, value);
  1259. else {
  1260. tab[i] = newNode(hash, key, value, first);
  1261. if (binCount >= TREEIFY_THRESHOLD - 1)
  1262. treeifyBin(tab, hash);
  1263. }
  1264. ++modCount;
  1265. ++size;
  1266. afterNodeInsertion(true);
  1267. }
  1268. return value;
  1269. }
  1270.  
  1271. @Override
  1272. public void forEach(BiConsumer<? super K, ? super V> action) {
  1273. Node<K,V>[] tab;
  1274. if (action == null)
  1275. throw new NullPointerException();
  1276. if (size > 0 && (tab = table) != null) {
  1277. int mc = modCount;
  1278. for (int i = 0; i < tab.length; ++i) {
  1279. for (Node<K,V> e = tab[i]; e != null; e = e.next)
  1280. action.accept(e.key, e.value);
  1281. }
  1282. if (modCount != mc)
  1283. throw new ConcurrentModificationException();
  1284. }
  1285. }
  1286.  
  1287. @Override
  1288. public void replaceAll(BiFunction<? super K, ? super V, ? extends V> function) {
  1289. Node<K,V>[] tab;
  1290. if (function == null)
  1291. throw new NullPointerException();
  1292. if (size > 0 && (tab = table) != null) {
  1293. int mc = modCount;
  1294. for (int i = 0; i < tab.length; ++i) {
  1295. for (Node<K,V> e = tab[i]; e != null; e = e.next) {
  1296. e.value = function.apply(e.key, e.value);
  1297. }
  1298. }
  1299. if (modCount != mc)
  1300. throw new ConcurrentModificationException();
  1301. }
  1302. }
  1303.  
  1304. /* ------------------------------------------------------------ */
  1305. // Cloning and serialization
  1306.  
  1307. /**
  1308. * Returns a shallow copy of this <tt>HashMap</tt> instance: the keys and
  1309. * values themselves are not cloned.
  1310. *
  1311. * @return a shallow copy of this map
  1312. */
  1313. @SuppressWarnings("unchecked")
  1314. @Override
  1315. public Object clone() {
  1316. HashMap<K,V> result;
  1317. try {
  1318. result = (HashMap<K,V>)super.clone();
  1319. } catch (CloneNotSupportedException e) {
  1320. // this shouldn't happen, since we are Cloneable
  1321. throw new InternalError(e);
  1322. }
  1323. result.reinitialize();
  1324. result.putMapEntries(this, false);
  1325. return result;
  1326. }
  1327.  
  1328. // These methods are also used when serializing HashSets
  1329. final float loadFactor() { return loadFactor; }
  1330. final int capacity() {
  1331. return (table != null) ? table.length :
  1332. (threshold > 0) ? threshold :
  1333. DEFAULT_INITIAL_CAPACITY;
  1334. }
  1335.  
  1336. /**
  1337. * Save the state of the <tt>HashMap</tt> instance to a stream (i.e.,
  1338. * serialize it).
  1339. *
  1340. * @serialData The <i>capacity</i> of the HashMap (the length of the
  1341. * bucket array) is emitted (int), followed by the
  1342. * <i>size</i> (an int, the number of key-value
  1343. * mappings), followed by the key (Object) and value (Object)
  1344. * for each key-value mapping. The key-value mappings are
  1345. * emitted in no particular order.
  1346. */
  1347. private void writeObject(java.io.ObjectOutputStream s)
  1348. throws IOException {
  1349. int buckets = capacity();
  1350. // Write out the threshold, loadfactor, and any hidden stuff
  1351. s.defaultWriteObject();
  1352. s.writeInt(buckets);
  1353. s.writeInt(size);
  1354. internalWriteEntries(s);
  1355. }
  1356.  
  1357. /**
  1358. * Reconstitute the {@code HashMap} instance from a stream (i.e.,
  1359. * deserialize it).
  1360. */
  1361. private void readObject(java.io.ObjectInputStream s)
  1362. throws IOException, ClassNotFoundException {
  1363. // Read in the threshold (ignored), loadfactor, and any hidden stuff
  1364. s.defaultReadObject();
  1365. reinitialize();
  1366. if (loadFactor <= 0 || Float.isNaN(loadFactor))
  1367. throw new InvalidObjectException("Illegal load factor: " +
  1368. loadFactor);
  1369. s.readInt(); // Read and ignore number of buckets
  1370. int mappings = s.readInt(); // Read number of mappings (size)
  1371. if (mappings < 0)
  1372. throw new InvalidObjectException("Illegal mappings count: " +
  1373. mappings);
  1374. else if (mappings > 0) { // (if zero, use defaults)
  1375. // Size the table using given load factor only if within
  1376. // range of 0.25...4.0
  1377. float lf = Math.min(Math.max(0.25f, loadFactor), 4.0f);
  1378. float fc = (float)mappings / lf + 1.0f;
  1379. int cap = ((fc < DEFAULT_INITIAL_CAPACITY) ?
  1380. DEFAULT_INITIAL_CAPACITY :
  1381. (fc >= MAXIMUM_CAPACITY) ?
  1382. MAXIMUM_CAPACITY :
  1383. tableSizeFor((int)fc));
  1384. float ft = (float)cap * lf;
  1385. threshold = ((cap < MAXIMUM_CAPACITY && ft < MAXIMUM_CAPACITY) ?
  1386. (int)ft : Integer.MAX_VALUE);
  1387. @SuppressWarnings({"rawtypes","unchecked"})
  1388. Node<K,V>[] tab = (Node<K,V>[])new Node[cap];
  1389. table = tab;
  1390.  
  1391. // Read the keys and values, and put the mappings in the HashMap
  1392. for (int i = 0; i < mappings; i++) {
  1393. @SuppressWarnings("unchecked")
  1394. K key = (K) s.readObject();
  1395. @SuppressWarnings("unchecked")
  1396. V value = (V) s.readObject();
  1397. putVal(hash(key), key, value, false, false);
  1398. }
  1399. }
  1400. }
  1401.  
  1402. /* ------------------------------------------------------------ */
  1403. // iterators
  1404.  
  1405. abstract class HashIterator {
  1406. Node<K,V> next; // next entry to return
  1407. Node<K,V> current; // current entry
  1408. int expectedModCount; // for fast-fail
  1409. int index; // current slot
  1410.  
  1411. HashIterator() {
  1412. expectedModCount = modCount;
  1413. Node<K,V>[] t = table;
  1414. current = next = null;
  1415. index = 0;
  1416. if (t != null && size > 0) { // advance to first entry
  1417. do {} while (index < t.length && (next = t[index++]) == null);
  1418. }
  1419. }
  1420.  
  1421. public final boolean hasNext() {
  1422. return next != null;
  1423. }
  1424.  
  1425. final Node<K,V> nextNode() {
  1426. Node<K,V>[] t;
  1427. Node<K,V> e = next;
  1428. if (modCount != expectedModCount)
  1429. throw new ConcurrentModificationException();
  1430. if (e == null)
  1431. throw new NoSuchElementException();
  1432. if ((next = (current = e).next) == null && (t = table) != null) {
  1433. do {} while (index < t.length && (next = t[index++]) == null);
  1434. }
  1435. return e;
  1436. }
  1437.  
  1438. public final void remove() {
  1439. Node<K,V> p = current;
  1440. if (p == null)
  1441. throw new IllegalStateException();
  1442. if (modCount != expectedModCount)
  1443. throw new ConcurrentModificationException();
  1444. current = null;
  1445. K key = p.key;
  1446. removeNode(hash(key), key, null, false, false);
  1447. expectedModCount = modCount;
  1448. }
  1449. }
  1450.  
  1451. final class KeyIterator extends HashIterator
  1452. implements Iterator<K> {
  1453. public final K next() { return nextNode().key; }
  1454. }
  1455.  
  1456. final class ValueIterator extends HashIterator
  1457. implements Iterator<V> {
  1458. public final V next() { return nextNode().value; }
  1459. }
  1460.  
  1461. final class EntryIterator extends HashIterator
  1462. implements Iterator<Map.Entry<K,V>> {
  1463. public final Map.Entry<K,V> next() { return nextNode(); }
  1464. }
  1465.  
  1466. /* ------------------------------------------------------------ */
  1467. // spliterators
  1468.  
  1469. static class HashMapSpliterator<K,V> {
  1470. final HashMap<K,V> map;
  1471. Node<K,V> current; // current node
  1472. int index; // current index, modified on advance/split
  1473. int fence; // one past last index
  1474. int est; // size estimate
  1475. int expectedModCount; // for comodification checks
  1476.  
  1477. HashMapSpliterator(HashMap<K,V> m, int origin,
  1478. int fence, int est,
  1479. int expectedModCount) {
  1480. this.map = m;
  1481. this.index = origin;
  1482. this.fence = fence;
  1483. this.est = est;
  1484. this.expectedModCount = expectedModCount;
  1485. }
  1486.  
  1487. final int getFence() { // initialize fence and size on first use
  1488. int hi;
  1489. if ((hi = fence) < 0) {
  1490. HashMap<K,V> m = map;
  1491. est = m.size;
  1492. expectedModCount = m.modCount;
  1493. Node<K,V>[] tab = m.table;
  1494. hi = fence = (tab == null) ? 0 : tab.length;
  1495. }
  1496. return hi;
  1497. }
  1498.  
  1499. public final long estimateSize() {
  1500. getFence(); // force init
  1501. return (long) est;
  1502. }
  1503. }
  1504.  
  1505. static final class KeySpliterator<K,V>
  1506. extends HashMapSpliterator<K,V>
  1507. implements Spliterator<K> {
  1508. KeySpliterator(HashMap<K,V> m, int origin, int fence, int est,
  1509. int expectedModCount) {
  1510. super(m, origin, fence, est, expectedModCount);
  1511. }
  1512.  
  1513. public KeySpliterator<K,V> trySplit() {
  1514. int hi = getFence(), lo = index, mid = (lo + hi) >>> 1;
  1515. return (lo >= mid || current != null) ? null :
  1516. new KeySpliterator<>(map, lo, index = mid, est >>>= 1,
  1517. expectedModCount);
  1518. }
  1519.  
  1520. public void forEachRemaining(Consumer<? super K> action) {
  1521. int i, hi, mc;
  1522. if (action == null)
  1523. throw new NullPointerException();
  1524. HashMap<K,V> m = map;
  1525. Node<K,V>[] tab = m.table;
  1526. if ((hi = fence) < 0) {
  1527. mc = expectedModCount = m.modCount;
  1528. hi = fence = (tab == null) ? 0 : tab.length;
  1529. }
  1530. else
  1531. mc = expectedModCount;
  1532. if (tab != null && tab.length >= hi &&
  1533. (i = index) >= 0 && (i < (index = hi) || current != null)) {
  1534. Node<K,V> p = current;
  1535. current = null;
  1536. do {
  1537. if (p == null)
  1538. p = tab[i++];
  1539. else {
  1540. action.accept(p.key);
  1541. p = p.next;
  1542. }
  1543. } while (p != null || i < hi);
  1544. if (m.modCount != mc)
  1545. throw new ConcurrentModificationException();
  1546. }
  1547. }
  1548.  
  1549. public boolean tryAdvance(Consumer<? super K> action) {
  1550. int hi;
  1551. if (action == null)
  1552. throw new NullPointerException();
  1553. Node<K,V>[] tab = map.table;
  1554. if (tab != null && tab.length >= (hi = getFence()) && index >= 0) {
  1555. while (current != null || index < hi) {
  1556. if (current == null)
  1557. current = tab[index++];
  1558. else {
  1559. K k = current.key;
  1560. current = current.next;
  1561. action.accept(k);
  1562. if (map.modCount != expectedModCount)
  1563. throw new ConcurrentModificationException();
  1564. return true;
  1565. }
  1566. }
  1567. }
  1568. return false;
  1569. }
  1570.  
  1571. public int characteristics() {
  1572. return (fence < 0 || est == map.size ? Spliterator.SIZED : 0) |
  1573. Spliterator.DISTINCT;
  1574. }
  1575. }
  1576.  
  1577. static final class ValueSpliterator<K,V>
  1578. extends HashMapSpliterator<K,V>
  1579. implements Spliterator<V> {
  1580. ValueSpliterator(HashMap<K,V> m, int origin, int fence, int est,
  1581. int expectedModCount) {
  1582. super(m, origin, fence, est, expectedModCount);
  1583. }
  1584.  
  1585. public ValueSpliterator<K,V> trySplit() {
  1586. int hi = getFence(), lo = index, mid = (lo + hi) >>> 1;
  1587. return (lo >= mid || current != null) ? null :
  1588. new ValueSpliterator<>(map, lo, index = mid, est >>>= 1,
  1589. expectedModCount);
  1590. }
  1591.  
  1592. public void forEachRemaining(Consumer<? super V> action) {
  1593. int i, hi, mc;
  1594. if (action == null)
  1595. throw new NullPointerException();
  1596. HashMap<K,V> m = map;
  1597. Node<K,V>[] tab = m.table;
  1598. if ((hi = fence) < 0) {
  1599. mc = expectedModCount = m.modCount;
  1600. hi = fence = (tab == null) ? 0 : tab.length;
  1601. }
  1602. else
  1603. mc = expectedModCount;
  1604. if (tab != null && tab.length >= hi &&
  1605. (i = index) >= 0 && (i < (index = hi) || current != null)) {
  1606. Node<K,V> p = current;
  1607. current = null;
  1608. do {
  1609. if (p == null)
  1610. p = tab[i++];
  1611. else {
  1612. action.accept(p.value);
  1613. p = p.next;
  1614. }
  1615. } while (p != null || i < hi);
  1616. if (m.modCount != mc)
  1617. throw new ConcurrentModificationException();
  1618. }
  1619. }
  1620.  
  1621. public boolean tryAdvance(Consumer<? super V> action) {
  1622. int hi;
  1623. if (action == null)
  1624. throw new NullPointerException();
  1625. Node<K,V>[] tab = map.table;
  1626. if (tab != null && tab.length >= (hi = getFence()) && index >= 0) {
  1627. while (current != null || index < hi) {
  1628. if (current == null)
  1629. current = tab[index++];
  1630. else {
  1631. V v = current.value;
  1632. current = current.next;
  1633. action.accept(v);
  1634. if (map.modCount != expectedModCount)
  1635. throw new ConcurrentModificationException();
  1636. return true;
  1637. }
  1638. }
  1639. }
  1640. return false;
  1641. }
  1642.  
  1643. public int characteristics() {
  1644. return (fence < 0 || est == map.size ? Spliterator.SIZED : 0);
  1645. }
  1646. }
  1647.  
  1648. static final class EntrySpliterator<K,V>
  1649. extends HashMapSpliterator<K,V>
  1650. implements Spliterator<Map.Entry<K,V>> {
  1651. EntrySpliterator(HashMap<K,V> m, int origin, int fence, int est,
  1652. int expectedModCount) {
  1653. super(m, origin, fence, est, expectedModCount);
  1654. }
  1655.  
  1656. public EntrySpliterator<K,V> trySplit() {
  1657. int hi = getFence(), lo = index, mid = (lo + hi) >>> 1;
  1658. return (lo >= mid || current != null) ? null :
  1659. new EntrySpliterator<>(map, lo, index = mid, est >>>= 1,
  1660. expectedModCount);
  1661. }
  1662.  
  1663. public void forEachRemaining(Consumer<? super Map.Entry<K,V>> action) {
  1664. int i, hi, mc;
  1665. if (action == null)
  1666. throw new NullPointerException();
  1667. HashMap<K,V> m = map;
  1668. Node<K,V>[] tab = m.table;
  1669. if ((hi = fence) < 0) {
  1670. mc = expectedModCount = m.modCount;
  1671. hi = fence = (tab == null) ? 0 : tab.length;
  1672. }
  1673. else
  1674. mc = expectedModCount;
  1675. if (tab != null && tab.length >= hi &&
  1676. (i = index) >= 0 && (i < (index = hi) || current != null)) {
  1677. Node<K,V> p = current;
  1678. current = null;
  1679. do {
  1680. if (p == null)
  1681. p = tab[i++];
  1682. else {
  1683. action.accept(p);
  1684. p = p.next;
  1685. }
  1686. } while (p != null || i < hi);
  1687. if (m.modCount != mc)
  1688. throw new ConcurrentModificationException();
  1689. }
  1690. }
  1691.  
  1692. public boolean tryAdvance(Consumer<? super Map.Entry<K,V>> action) {
  1693. int hi;
  1694. if (action == null)
  1695. throw new NullPointerException();
  1696. Node<K,V>[] tab = map.table;
  1697. if (tab != null && tab.length >= (hi = getFence()) && index >= 0) {
  1698. while (current != null || index < hi) {
  1699. if (current == null)
  1700. current = tab[index++];
  1701. else {
  1702. Node<K,V> e = current;
  1703. current = current.next;
  1704. action.accept(e);
  1705. if (map.modCount != expectedModCount)
  1706. throw new ConcurrentModificationException();
  1707. return true;
  1708. }
  1709. }
  1710. }
  1711. return false;
  1712. }
  1713.  
  1714. public int characteristics() {
  1715. return (fence < 0 || est == map.size ? Spliterator.SIZED : 0) |
  1716. Spliterator.DISTINCT;
  1717. }
  1718. }
  1719.  
  1720. /* ------------------------------------------------------------ */
  1721. // LinkedHashMap support
  1722.  
  1723. /*
  1724. * The following package-protected methods are designed to be
  1725. * overridden by LinkedHashMap, but not by any other subclass.
  1726. * Nearly all other internal methods are also package-protected
  1727. * but are declared final, so can be used by LinkedHashMap, view
  1728. * classes, and HashSet.
  1729. */
  1730.  
  1731. // Create a regular (non-tree) node
  1732. Node<K,V> newNode(int hash, K key, V value, Node<K,V> next) {
  1733. return new Node<>(hash, key, value, next);
  1734. }
  1735.  
  1736. // For conversion from TreeNodes to plain nodes
  1737. Node<K,V> replacementNode(Node<K,V> p, Node<K,V> next) {
  1738. return new Node<>(p.hash, p.key, p.value, next);
  1739. }
  1740.  
  1741. // Create a tree bin node
  1742. TreeNode<K,V> newTreeNode(int hash, K key, V value, Node<K,V> next) {
  1743. return new TreeNode<>(hash, key, value, next);
  1744. }
  1745.  
  1746. // For treeifyBin
  1747. TreeNode<K,V> replacementTreeNode(Node<K,V> p, Node<K,V> next) {
  1748. return new TreeNode<>(p.hash, p.key, p.value, next);
  1749. }
  1750.  
  1751. /**
  1752. * Reset to initial default state. Called by clone and readObject.
  1753. */
  1754. void reinitialize() {
  1755. table = null;
  1756. entrySet = null;
  1757. keySet = null;
  1758. values = null;
  1759. modCount = 0;
  1760. threshold = 0;
  1761. size = 0;
  1762. }
  1763.  
  1764. // Callbacks to allow LinkedHashMap post-actions
  1765. void afterNodeAccess(Node<K,V> p) { }
  1766. void afterNodeInsertion(boolean evict) { }
  1767. void afterNodeRemoval(Node<K,V> p) { }
  1768.  
  1769. // Called only from writeObject, to ensure compatible ordering.
  1770. void internalWriteEntries(java.io.ObjectOutputStream s) throws IOException {
  1771. Node<K,V>[] tab;
  1772. if (size > 0 && (tab = table) != null) {
  1773. for (int i = 0; i < tab.length; ++i) {
  1774. for (Node<K,V> e = tab[i]; e != null; e = e.next) {
  1775. s.writeObject(e.key);
  1776. s.writeObject(e.value);
  1777. }
  1778. }
  1779. }
  1780. }
  1781.  
  1782. /* ------------------------------------------------------------ */
  1783. // Tree bins
  1784.  
  1785. /**
  1786. * Entry for Tree bins. Extends LinkedHashMap.Entry (which in turn
  1787. * extends Node) so can be used as extension of either regular or
  1788. * linked node.
  1789. */
  1790. static final class TreeNode<K,V> extends LinkedHashMap.Entry<K,V> {
  1791. TreeNode<K,V> parent; // red-black tree links
  1792. TreeNode<K,V> left;
  1793. TreeNode<K,V> right;
  1794. TreeNode<K,V> prev; // needed to unlink next upon deletion
  1795. boolean red;
  1796. TreeNode(int hash, K key, V val, Node<K,V> next) {
  1797. super(hash, key, val, next);
  1798. }
  1799.  
  1800. /**
  1801. * Returns root of tree containing this node.
  1802. */
  1803. final TreeNode<K,V> root() {
  1804. for (TreeNode<K,V> r = this, p;;) {
  1805. if ((p = r.parent) == null)
  1806. return r;
  1807. r = p;
  1808. }
  1809. }
  1810.  
  1811. /**
  1812. * Ensures that the given root is the first node of its bin.
  1813. */
  1814. static <K,V> void moveRootToFront(Node<K,V>[] tab, TreeNode<K,V> root) {
  1815. int n;
  1816. if (root != null && tab != null && (n = tab.length) > 0) {
  1817. int index = (n - 1) & root.hash;
  1818. TreeNode<K,V> first = (TreeNode<K,V>)tab[index];
  1819. if (root != first) {
  1820. Node<K,V> rn;
  1821. tab[index] = root;
  1822. TreeNode<K,V> rp = root.prev;
  1823. if ((rn = root.next) != null)
  1824. ((TreeNode<K,V>)rn).prev = rp;
  1825. if (rp != null)
  1826. rp.next = rn;
  1827. if (first != null)
  1828. first.prev = root;
  1829. root.next = first;
  1830. root.prev = null;
  1831. }
  1832. assert checkInvariants(root);
  1833. }
  1834. }
  1835.  
  1836. /**
  1837. * Finds the node starting at root p with the given hash and key.
  1838. * The kc argument caches comparableClassFor(key) upon first use
  1839. * comparing keys.
  1840. */
  1841. final TreeNode<K,V> find(int h, Object k, Class<?> kc) {
  1842. TreeNode<K,V> p = this;
  1843. do {
  1844. int ph, dir; K pk;
  1845. TreeNode<K,V> pl = p.left, pr = p.right, q;
  1846. if ((ph = p.hash) > h)
  1847. p = pl;
  1848. else if (ph < h)
  1849. p = pr;
  1850. else if ((pk = p.key) == k || (k != null && k.equals(pk)))
  1851. return p;
  1852. else if (pl == null)
  1853. p = pr;
  1854. else if (pr == null)
  1855. p = pl;
  1856. else if ((kc != null ||
  1857. (kc = comparableClassFor(k)) != null) &&
  1858. (dir = compareComparables(kc, k, pk)) != 0)
  1859. p = (dir < 0) ? pl : pr;
  1860. else if ((q = pr.find(h, k, kc)) != null)
  1861. return q;
  1862. else
  1863. p = pl;
  1864. } while (p != null);
  1865. return null;
  1866. }
  1867.  
  1868. /**
  1869. * Calls find for root node.
  1870. */
  1871. final TreeNode<K,V> getTreeNode(int h, Object k) {
  1872. return ((parent != null) ? root() : this).find(h, k, null);
  1873. }
  1874.  
  1875. /**
  1876. * Tie-breaking utility for ordering insertions when equal
  1877. * hashCodes and non-comparable. We don't require a total
  1878. * order, just a consistent insertion rule to maintain
  1879. * equivalence across rebalancings. Tie-breaking further than
  1880. * necessary simplifies testing a bit.
  1881. */
  1882. static int tieBreakOrder(Object a, Object b) {
  1883. int d;
  1884. if (a == null || b == null ||
  1885. (d = a.getClass().getName().
  1886. compareTo(b.getClass().getName())) == 0)
  1887. d = (System.identityHashCode(a) <= System.identityHashCode(b) ?
  1888. -1 : 1);
  1889. return d;
  1890. }
  1891.  
  1892. /**
  1893. * Forms tree of the nodes linked from this node.
  1894. * @return root of tree
  1895. */
  1896. final void treeify(Node<K,V>[] tab) {
  1897. TreeNode<K,V> root = null;
  1898. for (TreeNode<K,V> x = this, next; x != null; x = next) {
  1899. next = (TreeNode<K,V>)x.next;
  1900. x.left = x.right = null;
  1901. if (root == null) {
  1902. x.parent = null;
  1903. x.red = false;
  1904. root = x;
  1905. }
  1906. else {
  1907. K k = x.key;
  1908. int h = x.hash;
  1909. Class<?> kc = null;
  1910. for (TreeNode<K,V> p = root;;) {
  1911. int dir, ph;
  1912. K pk = p.key;
  1913. if ((ph = p.hash) > h)
  1914. dir = -1;
  1915. else if (ph < h)
  1916. dir = 1;
  1917. else if ((kc == null &&
  1918. (kc = comparableClassFor(k)) == null) ||
  1919. (dir = compareComparables(kc, k, pk)) == 0)
  1920. dir = tieBreakOrder(k, pk);
  1921.  
  1922. TreeNode<K,V> xp = p;
  1923. if ((p = (dir <= 0) ? p.left : p.right) == null) {
  1924. x.parent = xp;
  1925. if (dir <= 0)
  1926. xp.left = x;
  1927. else
  1928. xp.right = x;
  1929. root = balanceInsertion(root, x);
  1930. break;
  1931. }
  1932. }
  1933. }
  1934. }
  1935. moveRootToFront(tab, root);
  1936. }
  1937.  
  1938. /**
  1939. * Returns a list of non-TreeNodes replacing those linked from
  1940. * this node.
  1941. */
  1942. final Node<K,V> untreeify(HashMap<K,V> map) {
  1943. Node<K,V> hd = null, tl = null;
  1944. for (Node<K,V> q = this; q != null; q = q.next) {
  1945. Node<K,V> p = map.replacementNode(q, null);
  1946. if (tl == null)
  1947. hd = p;
  1948. else
  1949. tl.next = p;
  1950. tl = p;
  1951. }
  1952. return hd;
  1953. }
  1954.  
  1955. /**
  1956. * Tree version of putVal.
  1957. */
  1958. final TreeNode<K,V> putTreeVal(HashMap<K,V> map, Node<K,V>[] tab,
  1959. int h, K k, V v) {
  1960. Class<?> kc = null;
  1961. boolean searched = false;
  1962. TreeNode<K,V> root = (parent != null) ? root() : this;
  1963. for (TreeNode<K,V> p = root;;) {
  1964. int dir, ph; K pk;
  1965. if ((ph = p.hash) > h)
  1966. dir = -1;
  1967. else if (ph < h)
  1968. dir = 1;
  1969. else if ((pk = p.key) == k || (k != null && k.equals(pk)))
  1970. return p;
  1971. else if ((kc == null &&
  1972. (kc = comparableClassFor(k)) == null) ||
  1973. (dir = compareComparables(kc, k, pk)) == 0) {
  1974. if (!searched) {
  1975. TreeNode<K,V> q, ch;
  1976. searched = true;
  1977. if (((ch = p.left) != null &&
  1978. (q = ch.find(h, k, kc)) != null) ||
  1979. ((ch = p.right) != null &&
  1980. (q = ch.find(h, k, kc)) != null))
  1981. return q;
  1982. }
  1983. dir = tieBreakOrder(k, pk);
  1984. }
  1985.  
  1986. TreeNode<K,V> xp = p;
  1987. if ((p = (dir <= 0) ? p.left : p.right) == null) {
  1988. Node<K,V> xpn = xp.next;
  1989. TreeNode<K,V> x = map.newTreeNode(h, k, v, xpn);
  1990. if (dir <= 0)
  1991. xp.left = x;
  1992. else
  1993. xp.right = x;
  1994. xp.next = x;
  1995. x.parent = x.prev = xp;
  1996. if (xpn != null)
  1997. ((TreeNode<K,V>)xpn).prev = x;
  1998. moveRootToFront(tab, balanceInsertion(root, x));
  1999. return null;
  2000. }
  2001. }
  2002. }
  2003.  
  2004. /**
  2005. * Removes the given node, that must be present before this call.
  2006. * This is messier than typical red-black deletion code because we
  2007. * cannot swap the contents of an interior node with a leaf
  2008. * successor that is pinned by "next" pointers that are accessible
  2009. * independently during traversal. So instead we swap the tree
  2010. * linkages. If the current tree appears to have too few nodes,
  2011. * the bin is converted back to a plain bin. (The test triggers
  2012. * somewhere between 2 and 6 nodes, depending on tree structure).
  2013. */
  2014. final void removeTreeNode(HashMap<K,V> map, Node<K,V>[] tab,
  2015. boolean movable) {
  2016. int n;
  2017. if (tab == null || (n = tab.length) == 0)
  2018. return;
  2019. int index = (n - 1) & hash;
  2020. TreeNode<K,V> first = (TreeNode<K,V>)tab[index], root = first, rl;
  2021. TreeNode<K,V> succ = (TreeNode<K,V>)next, pred = prev;
  2022. if (pred == null)
  2023. tab[index] = first = succ;
  2024. else
  2025. pred.next = succ;
  2026. if (succ != null)
  2027. succ.prev = pred;
  2028. if (first == null)
  2029. return;
  2030. if (root.parent != null)
  2031. root = root.root();
  2032. if (root == null || root.right == null ||
  2033. (rl = root.left) == null || rl.left == null) {
  2034. tab[index] = first.untreeify(map); // too small
  2035. return;
  2036. }
  2037. TreeNode<K,V> p = this, pl = left, pr = right, replacement;
  2038. if (pl != null && pr != null) {
  2039. TreeNode<K,V> s = pr, sl;
  2040. while ((sl = s.left) != null) // find successor
  2041. s = sl;
  2042. boolean c = s.red; s.red = p.red; p.red = c; // swap colors
  2043. TreeNode<K,V> sr = s.right;
  2044. TreeNode<K,V> pp = p.parent;
  2045. if (s == pr) { // p was s's direct parent
  2046. p.parent = s;
  2047. s.right = p;
  2048. }
  2049. else {
  2050. TreeNode<K,V> sp = s.parent;
  2051. if ((p.parent = sp) != null) {
  2052. if (s == sp.left)
  2053. sp.left = p;
  2054. else
  2055. sp.right = p;
  2056. }
  2057. if ((s.right = pr) != null)
  2058. pr.parent = s;
  2059. }
  2060. p.left = null;
  2061. if ((p.right = sr) != null)
  2062. sr.parent = p;
  2063. if ((s.left = pl) != null)
  2064. pl.parent = s;
  2065. if ((s.parent = pp) == null)
  2066. root = s;
  2067. else if (p == pp.left)
  2068. pp.left = s;
  2069. else
  2070. pp.right = s;
  2071. if (sr != null)
  2072. replacement = sr;
  2073. else
  2074. replacement = p;
  2075. }
  2076. else if (pl != null)
  2077. replacement = pl;
  2078. else if (pr != null)
  2079. replacement = pr;
  2080. else
  2081. replacement = p;
  2082. if (replacement != p) {
  2083. TreeNode<K,V> pp = replacement.parent = p.parent;
  2084. if (pp == null)
  2085. root = replacement;
  2086. else if (p == pp.left)
  2087. pp.left = replacement;
  2088. else
  2089. pp.right = replacement;
  2090. p.left = p.right = p.parent = null;
  2091. }
  2092.  
  2093. TreeNode<K,V> r = p.red ? root : balanceDeletion(root, replacement);
  2094.  
  2095. if (replacement == p) { // detach
  2096. TreeNode<K,V> pp = p.parent;
  2097. p.parent = null;
  2098. if (pp != null) {
  2099. if (p == pp.left)
  2100. pp.left = null;
  2101. else if (p == pp.right)
  2102. pp.right = null;
  2103. }
  2104. }
  2105. if (movable)
  2106. moveRootToFront(tab, r);
  2107. }
  2108.  
  2109. /**
  2110. * Splits nodes in a tree bin into lower and upper tree bins,
  2111. * or untreeifies if now too small. Called only from resize;
  2112. * see above discussion about split bits and indices.
  2113. *
  2114. * @param map the map
  2115. * @param tab the table for recording bin heads
  2116. * @param index the index of the table being split
  2117. * @param bit the bit of hash to split on
  2118. */
  2119. final void split(HashMap<K,V> map, Node<K,V>[] tab, int index, int bit) {
  2120. TreeNode<K,V> b = this;
  2121. // Relink into lo and hi lists, preserving order
  2122. TreeNode<K,V> loHead = null, loTail = null;
  2123. TreeNode<K,V> hiHead = null, hiTail = null;
  2124. int lc = 0, hc = 0;
  2125. for (TreeNode<K,V> e = b, next; e != null; e = next) {
  2126. next = (TreeNode<K,V>)e.next;
  2127. e.next = null;
  2128. if ((e.hash & bit) == 0) {
  2129. if ((e.prev = loTail) == null)
  2130. loHead = e;
  2131. else
  2132. loTail.next = e;
  2133. loTail = e;
  2134. ++lc;
  2135. }
  2136. else {
  2137. if ((e.prev = hiTail) == null)
  2138. hiHead = e;
  2139. else
  2140. hiTail.next = e;
  2141. hiTail = e;
  2142. ++hc;
  2143. }
  2144. }
  2145.  
  2146. if (loHead != null) {
  2147. if (lc <= UNTREEIFY_THRESHOLD)
  2148. tab[index] = loHead.untreeify(map);
  2149. else {
  2150. tab[index] = loHead;
  2151. if (hiHead != null) // (else is already treeified)
  2152. loHead.treeify(tab);
  2153. }
  2154. }
  2155. if (hiHead != null) {
  2156. if (hc <= UNTREEIFY_THRESHOLD)
  2157. tab[index + bit] = hiHead.untreeify(map);
  2158. else {
  2159. tab[index + bit] = hiHead;
  2160. if (loHead != null)
  2161. hiHead.treeify(tab);
  2162. }
  2163. }
  2164. }
  2165.  
  2166. /* ------------------------------------------------------------ */
  2167. // Red-black tree methods, all adapted from CLR
  2168.  
  2169. static <K,V> TreeNode<K,V> rotateLeft(TreeNode<K,V> root,
  2170. TreeNode<K,V> p) {
  2171. TreeNode<K,V> r, pp, rl;
  2172. if (p != null && (r = p.right) != null) {
  2173. if ((rl = p.right = r.left) != null)
  2174. rl.parent = p;
  2175. if ((pp = r.parent = p.parent) == null)
  2176. (root = r).red = false;
  2177. else if (pp.left == p)
  2178. pp.left = r;
  2179. else
  2180. pp.right = r;
  2181. r.left = p;
  2182. p.parent = r;
  2183. }
  2184. return root;
  2185. }
  2186.  
  2187. static <K,V> TreeNode<K,V> rotateRight(TreeNode<K,V> root,
  2188. TreeNode<K,V> p) {
  2189. TreeNode<K,V> l, pp, lr;
  2190. if (p != null && (l = p.left) != null) {
  2191. if ((lr = p.left = l.right) != null)
  2192. lr.parent = p;
  2193. if ((pp = l.parent = p.parent) == null)
  2194. (root = l).red = false;
  2195. else if (pp.right == p)
  2196. pp.right = l;
  2197. else
  2198. pp.left = l;
  2199. l.right = p;
  2200. p.parent = l;
  2201. }
  2202. return root;
  2203. }
  2204.  
  2205. static <K,V> TreeNode<K,V> balanceInsertion(TreeNode<K,V> root,
  2206. TreeNode<K,V> x) {
  2207. x.red = true;
  2208. for (TreeNode<K,V> xp, xpp, xppl, xppr;;) {
  2209. if ((xp = x.parent) == null) {
  2210. x.red = false;
  2211. return x;
  2212. }
  2213. else if (!xp.red || (xpp = xp.parent) == null)
  2214. return root;
  2215. if (xp == (xppl = xpp.left)) {
  2216. if ((xppr = xpp.right) != null && xppr.red) {
  2217. xppr.red = false;
  2218. xp.red = false;
  2219. xpp.red = true;
  2220. x = xpp;
  2221. }
  2222. else {
  2223. if (x == xp.right) {
  2224. root = rotateLeft(root, x = xp);
  2225. xpp = (xp = x.parent) == null ? null : xp.parent;
  2226. }
  2227. if (xp != null) {
  2228. xp.red = false;
  2229. if (xpp != null) {
  2230. xpp.red = true;
  2231. root = rotateRight(root, xpp);
  2232. }
  2233. }
  2234. }
  2235. }
  2236. else {
  2237. if (xppl != null && xppl.red) {
  2238. xppl.red = false;
  2239. xp.red = false;
  2240. xpp.red = true;
  2241. x = xpp;
  2242. }
  2243. else {
  2244. if (x == xp.left) {
  2245. root = rotateRight(root, x = xp);
  2246. xpp = (xp = x.parent) == null ? null : xp.parent;
  2247. }
  2248. if (xp != null) {
  2249. xp.red = false;
  2250. if (xpp != null) {
  2251. xpp.red = true;
  2252. root = rotateLeft(root, xpp);
  2253. }
  2254. }
  2255. }
  2256. }
  2257. }
  2258. }
  2259.  
  2260. static <K,V> TreeNode<K,V> balanceDeletion(TreeNode<K,V> root,
  2261. TreeNode<K,V> x) {
  2262. for (TreeNode<K,V> xp, xpl, xpr;;) {
  2263. if (x == null || x == root)
  2264. return root;
  2265. else if ((xp = x.parent) == null) {
  2266. x.red = false;
  2267. return x;
  2268. }
  2269. else if (x.red) {
  2270. x.red = false;
  2271. return root;
  2272. }
  2273. else if ((xpl = xp.left) == x) {
  2274. if ((xpr = xp.right) != null && xpr.red) {
  2275. xpr.red = false;
  2276. xp.red = true;
  2277. root = rotateLeft(root, xp);
  2278. xpr = (xp = x.parent) == null ? null : xp.right;
  2279. }
  2280. if (xpr == null)
  2281. x = xp;
  2282. else {
  2283. TreeNode<K,V> sl = xpr.left, sr = xpr.right;
  2284. if ((sr == null || !sr.red) &&
  2285. (sl == null || !sl.red)) {
  2286. xpr.red = true;
  2287. x = xp;
  2288. }
  2289. else {
  2290. if (sr == null || !sr.red) {
  2291. if (sl != null)
  2292. sl.red = false;
  2293. xpr.red = true;
  2294. root = rotateRight(root, xpr);
  2295. xpr = (xp = x.parent) == null ?
  2296. null : xp.right;
  2297. }
  2298. if (xpr != null) {
  2299. xpr.red = (xp == null) ? false : xp.red;
  2300. if ((sr = xpr.right) != null)
  2301. sr.red = false;
  2302. }
  2303. if (xp != null) {
  2304. xp.red = false;
  2305. root = rotateLeft(root, xp);
  2306. }
  2307. x = root;
  2308. }
  2309. }
  2310. }
  2311. else { // symmetric
  2312. if (xpl != null && xpl.red) {
  2313. xpl.red = false;
  2314. xp.red = true;
  2315. root = rotateRight(root, xp);
  2316. xpl = (xp = x.parent) == null ? null : xp.left;
  2317. }
  2318. if (xpl == null)
  2319. x = xp;
  2320. else {
  2321. TreeNode<K,V> sl = xpl.left, sr = xpl.right;
  2322. if ((sl == null || !sl.red) &&
  2323. (sr == null || !sr.red)) {
  2324. xpl.red = true;
  2325. x = xp;
  2326. }
  2327. else {
  2328. if (sl == null || !sl.red) {
  2329. if (sr != null)
  2330. sr.red = false;
  2331. xpl.red = true;
  2332. root = rotateLeft(root, xpl);
  2333. xpl = (xp = x.parent) == null ?
  2334. null : xp.left;
  2335. }
  2336. if (xpl != null) {
  2337. xpl.red = (xp == null) ? false : xp.red;
  2338. if ((sl = xpl.left) != null)
  2339. sl.red = false;
  2340. }
  2341. if (xp != null) {
  2342. xp.red = false;
  2343. root = rotateRight(root, xp);
  2344. }
  2345. x = root;
  2346. }
  2347. }
  2348. }
  2349. }
  2350. }
  2351.  
  2352. /**
  2353. * Recursive invariant check
  2354. */
  2355. static <K,V> boolean checkInvariants(TreeNode<K,V> t) {
  2356. TreeNode<K,V> tp = t.parent, tl = t.left, tr = t.right,
  2357. tb = t.prev, tn = (TreeNode<K,V>)t.next;
  2358. if (tb != null && tb.next != t)
  2359. return false;
  2360. if (tn != null && tn.prev != t)
  2361. return false;
  2362. if (tp != null && t != tp.left && t != tp.right)
  2363. return false;
  2364. if (tl != null && (tl.parent != t || tl.hash > t.hash))
  2365. return false;
  2366. if (tr != null && (tr.parent != t || tr.hash < t.hash))
  2367. return false;
  2368. if (t.red && tl != null && tl.red && tr != null && tr.red)
  2369. return false;
  2370. if (tl != null && !checkInvariants(tl))
  2371. return false;
  2372. if (tr != null && !checkInvariants(tr))
  2373. return false;
  2374. return true;
  2375. }
  2376. }
  2377.  
  2378. }

The HashMap source code in JDK 1.8

2.3、The basic design of HashMap

2.3.1、Determining the index in the hash bucket array

Whether we are inserting, deleting, or looking up a key-value pair, locating the right slot in the hash bucket array is the crucial first step. As noted earlier, HashMap's data structure combines an array with linked lists (separate chaining), so ideally the elements should be spread across the array as evenly as possible, with each slot holding only a single element; then, once the hash computation yields a position, the element found there is exactly the one we want, no list traversal is needed, and lookups are greatly sped up. How HashMap maps a hash to an array index therefore depends directly on how well the hash method disperses values. Let's look at the implementation:

  1. Method 1:
  2. static final int hash(Object key) { // jdk1.8 & jdk1.7
  3. int h;
  4. // h = key.hashCode(): step 1, take the hashCode value
  5. // h ^ (h >>> 16): step 2, mix the high bits into the low bits
  6. return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);
  7. }
  8. Method 2:
  9. static int indexFor(int h, int length) {
  10. // jdk1.7 source; jdk1.8 drops this method but applies the same logic inline
  11. return h & (length-1); // step 3: mask down to an index (the modulo step)
  12. }

The hash algorithm here boils down to three steps: take the key's hashCode, mix in the high bits, and mask the result down to an index (the modulo step).

For any given object, as long as its hashCode() returns the same value, the hash computed by method 1 is always the same. The obvious approach would be to take the hash modulo the array length, which distributes elements fairly evenly, but the modulo operation is relatively expensive. HashMap instead calls method 2 to decide which index of the table array the object belongs in: it derives the slot via h & (table.length - 1), and since the length of HashMap's underlying array is always a power of two, this is a deliberate speed optimization. Whenever length is a power of two, h & (length - 1) is equivalent to taking the value modulo length, i.e. h % length, but & is cheaper than %.
    The JDK 1.8 implementation refines the high-bit mixing step: it XORs the high 16 bits of hashCode() into the low 16 bits, (h = k.hashCode()) ^ (h >>> 16). This balances speed, utility, and hash quality: even when the table's length is small, both the high and low bits take part in the hash computation, at negligible cost.
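To make this concrete, here is a small standalone demo (not from the JDK; the key "foo" and the table sizes are arbitrary) showing the perturbation and that, for power-of-two lengths, the mask and the modulo agree:

public class HashIndexDemo {
    // The same perturbation JDK 1.8 uses: XOR the high 16 bits into the low 16 bits.
    static int hash(Object key) {
        int h;
        return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);
    }

    public static void main(String[] args) {
        int h = hash("foo");                         // an arbitrary sample key
        for (int length : new int[]{16, 32}) {       // power-of-two table lengths
            int byMask = h & (length - 1);           // what HashMap actually computes
            int byMod  = Math.floorMod(h, length);   // the modulo it is equivalent to
            System.out.println(length + ": " + byMask + " == " + byMod);
        }
    }
}

Math.floorMod is used because h may be negative, in which case Java's % operator would differ; the & form always lands in [0, length - 1].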

2.3.2、The put method of HashMap in JDK 1.8
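In outline, putVal in JDK 1.8 proceeds as follows. The sketch below is a simplified paraphrase written as if inside HashMap<K,V> (it uses java.util.Objects for brevity; the onlyIfAbsent/evict flags, LinkedHashMap callbacks and some edge handling are omitted), not the verbatim source:

final V putValSketch(int hash, K key, V value) {
    if (table == null || table.length == 0)
        table = resize();                               // 1. lazily allocate the table
    int n = table.length, i = (n - 1) & hash;
    Node<K,V> p = table[i];
    if (p == null) {
        table[i] = newNode(hash, key, value, null);     // 2. empty bin: insert directly
    } else {
        Node<K,V> e = null;
        if (p.hash == hash && Objects.equals(p.key, key))
            e = p;                                      // 3. the head node matches the key
        else if (p instanceof TreeNode)                 // 4. tree bin: insert into the red-black tree
            e = ((TreeNode<K,V>) p).putTreeVal(this, table, hash, key, value);
        else {
            for (int binCount = 0; ; ++binCount) {      // 5. walk the chain, append at the tail
                if ((e = p.next) == null) {
                    p.next = newNode(hash, key, value, null);
                    if (binCount >= TREEIFY_THRESHOLD - 1)
                        treeifyBin(table, hash);        // chain too long: convert to a tree
                    break;
                }
                if (e.hash == hash && Objects.equals(e.key, key))
                    break;                              // found an existing mapping
                p = e;
            }
        }
        if (e != null) {                                // 6. key already present: replace the value
            V oldValue = e.value;
            e.value = value;
            return oldValue;
        }
    }
    ++modCount;
    if (++size > threshold)                             // 7. grow once past the threshold
        resize();
    return null;
}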

2.3.3、The resizing mechanism

Resizing (resize) means recomputing the capacity: as elements keep being added to a HashMap and the internal array can no longer hold them, the map has to grow the array so that more elements fit. Since Java arrays cannot grow in place, the approach is to replace the existing small array with a new, larger one.
     Let's analyze the resize source. JDK 1.8 adds red-black trees and is more complex, so for clarity we use the JDK 1.7 code here; the essence is the same, and the concrete differences are covered later.

  1. void resize(int newCapacity) { // the new capacity is passed in
  2. Entry[] oldTable = table; // reference to the pre-resize Entry array
  3. int oldCapacity = oldTable.length;
  4. if (oldCapacity == MAXIMUM_CAPACITY) { // the old array has already reached the maximum size (2^30)
  5. threshold = Integer.MAX_VALUE; // set the threshold to the largest int (2^31-1) so no further resize ever happens
  6. return;
  7. }
  8.  
  9. Entry[] newTable = new Entry[newCapacity]; // allocate a new Entry array
  10. transfer(newTable); // !! move the data over into the new Entry array
  11. table = newTable; // point HashMap's table field at the new Entry array
  12. threshold = (int)(newCapacity * loadFactor); // recompute the threshold
  13. }

Here a larger array simply replaces the existing smaller one, and the transfer() method copies the elements of the old Entry array into the new one.

  1. void transfer(Entry[] newTable) {
  2. Entry[] src = table; // src references the old Entry array
  3. int newCapacity = newTable.length;
  4. for (int j = 0; j < src.length; j++) { // iterate over the old Entry array
  5. Entry<K,V> e = src[j]; // take each element of the old array
  6. if (e != null) {
  7. src[j] = null; // release the old array's reference (after the loop, the old Entry array references nothing)
  8. do {
  9. Entry<K,V> next = e.next;
  10. int i = indexFor(e.hash, newCapacity); // !! recompute each element's position in the new array
  11. e.next = newTable[i];
  12. newTable[i] = e; // place the element into the array
  13. e = next; // move on to the next element on the Entry chain
  14. } while (e != null);
  15. }
  16. }
  17. }

newTable[i] is assigned to e.next, i.e. head insertion into a singly linked list: a new element at a given slot always lands at the head of the chain, so the elements placed at an index first end up at the tail of the Entry chain (when hash collisions occur); this differs from JDK 1.8, as detailed below. Elements that shared one Entry chain in the old array may, once their indexes are recomputed, be placed at different positions in the new array.
    An example illustrates the resize process. Suppose the hash algorithm is simply key mod the table size (i.e. the array length). The bucket array table has size = 2, the keys are 3, 7 and 5, and the put order is 5, 7, 3; after mod 2 they all collide at table[1]. Assume the load factor loadFactor = 1, i.e. the table resizes once the actual number of entries (size) exceeds the table's actual size. The bucket array is then resized to 4 and every node is rehashed, as traced in the sketch below.
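Traced concretely, using the toy hash key % capacity from the example above:

// Before the resize (capacity 2): table[1] = 3 -> 7 -> 5
//   (head insertion, so the put order 5, 7, 3 ends up reversed on the chain)
// transfer() walks the chain 3, 7, 5 and head-inserts each node into the capacity-4 table:
//   3 % 4 = 3  ->  newTable[3] = 3
//   7 % 4 = 3  ->  newTable[3] = 7 -> 3   (same bucket again, so this sub-chain reverses)
//   5 % 4 = 1  ->  newTable[1] = 5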

Now let's walk through the optimization JDK 1.8 makes. Observe that we always expand by a power of two (the length doubles), so an element's position is either unchanged or shifted by exactly the old capacity. With n as the table length, the index is computed as (n - 1) & hash both before and after the resize, where the hash is the key's hash code after the high-bit mixing; what changes is the mask n - 1.

After n doubles, the mask n - 1 covers one extra high-order bit, so re-evaluating the index can change it in exactly one way: the new index equals the old index if that extra bit of the hash is 0, and the old index plus the old capacity if it is 1.

So when growing a HashMap there is no need to recompute each hash the way the JDK 1.7 implementation does; it suffices to look at whether the newly significant bit of the existing hash value is 1 or 0. If it is 0 the index is unchanged; if it is 1 the index becomes "original index + oldCap". This is exactly what happens when a table of 16 is expanded to 32.

This design is genuinely clever: it avoids the cost of recomputing hashes, and because the newly significant bit can be regarded as effectively random, the resize spreads previously colliding nodes evenly across the new buckets. This is the optimization point added in JDK 1.8. One difference worth noting: when JDK 1.7 rehashes, a chain migrated to the same index in the new table comes out reversed, whereas JDK 1.8 preserves the order.
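A tiny standalone demo of that rule (the hash values 5 and 21 are arbitrary; the bit tested is exactly hash & oldCap):

public class ResizeIndexDemo {
    public static void main(String[] args) {
        int oldCap = 16;                             // resizing from 16 to 32
        for (int hash : new int[]{5, 21}) {          // 21 = 5 + 16: same low 4 bits as 5
            int oldIndex = hash & (oldCap - 1);      // index before the resize
            int newIndex = (hash & oldCap) == 0      // is the newly significant bit 0?
                    ? oldIndex                       // 0: the index is unchanged
                    : oldIndex + oldCap;             // 1: move to "original index + oldCap"
            System.out.println(hash + ": " + oldIndex + " -> " + newIndex);
            // Matches recomputing from scratch: hash & (2 * oldCap - 1)
        }
    }
}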

  1. (1) Resizing is a particularly expensive operation, so when using a HashMap, estimate the map's size and pass a rough initial capacity at construction time to avoid frequent resizes (see the sizing sketch after this list).
  2. (2) The load factor can be modified, and may even exceed 1, but do not change it lightly; only truly exceptional situations warrant it.
  3. (3) HashMap is not thread-safe; never operate on the same HashMap concurrently. Use ConcurrentHashMap instead.
  4. (4) The red-black trees introduced in JDK 1.8 greatly improve HashMap's performance.
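For point (1), a minimal sketch of the sizing idiom (the figure of 1000 expected entries is just an example):

public class PresizeDemo {
    public static void main(String[] args) {
        int expected = 1000;
        // An initial capacity of at least expected / loadFactor means the threshold
        // is never crossed, so the map never resizes while filling up.
        java.util.Map<String, Integer> map =
                new java.util.HashMap<>((int) (expected / 0.75f) + 1);
        for (int i = 0; i < expected; i++)
            map.put("k" + i, i);
        System.out.println(map.size()); // 1000, reached without a single resize
    }
}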

三、An introduction to ConcurrentHashMap

HashMap is not thread-safe, which means such maps must not be modified from multiple threads: at best the data becomes inconsistent, and at worst concurrent insertion can turn a chain into a cycle (an insert may trigger a resize, and resizing rehashes the old array's elements into the new array; interleaved concurrent execution of that step can create a circular reference in a chain). A subsequent lookup on the cyclic chain then loops forever, dragging down the whole application.
     Collections.synchronizedMap(Map<K,V> m) can turn a Map into a thread-safe implementation. It is really just a wrapper class that delegates everything to the Map passed in, guarding each call with the synchronized keyword (Hashtable is likewise built on synchronized). Underneath is a mutual-exclusion lock (at any moment only the lock-holding thread may enter, while competing threads are put to sleep), so performance and throughput are relatively low.

  1. public static <K,V> Map<K,V> synchronizedMap(Map<K,V> m) {
  2. return new SynchronizedMap<>(m);
  3. }
  4. private static class SynchronizedMap<K,V>
  5. implements Map<K,V>, Serializable {
  6. private static final long serialVersionUID = 1978198479659022715L;
  7. private final Map<K,V> m; // Backing Map
  8. final Object mutex; // Object on which to synchronize
  9. SynchronizedMap(Map<K,V> m) {
  10. this.m = Objects.requireNonNull(m);
  11. mutex = this;
  12. }
  13. SynchronizedMap(Map<K,V> m, Object mutex) {
  14. this.m = m;
  15. this.mutex = mutex;
  16. }
  17. public int size() {
  18. synchronized (mutex) {return m.size();}
  19. }
  20. public boolean isEmpty() {
  21. synchronized (mutex) {return m.isEmpty();}
  22. }
  23. ............
  24. }

ConcurrentHashMap's implementation, however, is far less simple than that, and its performance is correspondingly much higher. It does not lock itself with a single global lock; instead it shrinks the lock granularity to minimize the blocking and contention caused by lock competition, and its retrieval operations need no lock at all.
     In Java 7, ConcurrentHashMap subdivides itself internally into a number of small HashMaps called segments (Segment), 16 of them by default. For a write, the hash code first determines which Segment the Entry belongs in, and then only that Segment needs to be locked. Ideally a default ConcurrentHashMap can accept 16 concurrent writers (provided they all touch different Segments). Segmented locking is useless for global operations such as size(): counting the entries requires traversing all Segments, acquiring all the locks, and then totaling. In practice, ConcurrentHashMap first tries to count without locks, making up to 3 attempts; if two consecutive passes observe identical Segment modCount totals, no modification occurred between them and the result can be returned as final. Otherwise it acquires every Segment's lock and recomputes the size.
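A schematic of that size() strategy (a simplified sketch of the JDK 1.7 logic, not the verbatim source; the RETRIES constant and the field accesses are illustrative, and in the real code Segment extends ReentrantLock, which is what provides lock()/unlock()):

// Simplified sketch of JDK 1.7 ConcurrentHashMap.size(): try lock-free first.
public int sizeSketch() {
    final int RETRIES = 3;                        // lock-free attempts before locking
    long lastModSum = -1;
    for (int attempt = 0; attempt < RETRIES; attempt++) {
        long modSum = 0, size = 0;
        for (Segment<K, V> s : segments) {        // no locks taken on this pass
            size += s.count;
            modSum += s.modCount;
        }
        if (modSum == lastModSum)                 // two consecutive passes agree:
            return (int) size;                    // nothing changed in between, trust it
        lastModSum = modSum;
    }
    for (Segment<K, V> s : segments)              // still unstable: lock every segment
        s.lock();
    try {
        long size = 0;
        for (Segment<K, V> s : segments)
            size += s.count;
        return (int) size;
    } finally {
        for (Segment<K, V> s : segments)
            s.unlock();
    }
}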


     Java 8's ConcurrentHashMap differs considerably from the Java 7 implementation. The segment design is abandoned entirely in favor of a layout much like HashMap's: a buckets array with separate chaining (bins are likewise treeified past a threshold, with tree-building logic little different from HashMap's, except that CAS is additionally needed for thread safety). The lock granularity is refined down to each array slot (HashMap itself gained many optimizations in Java 8, so even under heavy collisions a baseline of performance is kept, and Segments were not only bulky but also suffered from weak-consistency problems), so its concurrency level scales with the array length (in Java 7 it scaled with the number of segments).

3.1、ConcurrentHashMap's hash function

ConcurrentHashMap's hash function is essentially the same as HashMap's: XOR the high 16 bits of the key's hash code into the low 16 bits (ConcurrentHashMap's buckets array length is also always a power of two), then AND the scrambled hash code with the array length minus one (the largest reachable index); the result is the target position.

  1. // 2^31 - 1, the maximum value of an int
  2. // this mask marks the usable bits of a node hash, guaranteeing the hash is always a positive number
  3. static final int HASH_BITS = 0x7fffffff;
  4. static final int spread(int h) {
  5. return (h ^ (h >>> 16)) & HASH_BITS;
  6. }

3.2、The lookup operation

Here is the source of the lookup operation:

  1. public V get(Object key) {
  2. Node<K,V>[] tab; Node<K,V> e, p; int n, eh; K ek;
  3. int h = spread(key.hashCode());
  4. if ((tab = table) != null && (n = tab.length) > 0 &&
  5. (e = tabAt(tab, (n - 1) & h)) != null) {
  6. if ((eh = e.hash) == h) {
  7. // first check whether the head of the chain is the target; if so, return it directly
  8. if ((ek = e.key) == key || (ek != null && key.equals(ek)))
  9. return e.val;
  10. }
  11. else if (eh < 0)
  12. // eh < 0 means this is a special node (a TreeBin or ForwardingNode)
  13. // so call find() directly to search
  14. return (p = e.find(h, key)) != null ? p.val : null;
  15. // walk the chain
  16. while ((e = e.next) != null) {
  17. if (e.hash == h &&
  18. ((ek = e.key) == key || (ek != null && key.equals(ek))))
  19. return e.val;
  20. }
  21. }
  22. return null;
  23. }

An ordinary (chain) node's hash can never be negative (spread() has already ensured that), so a negative hash can only belong to a special node, which cannot be traversed with the chain-walking while loop. TreeBin is the head node of a red-black tree bin (the tree's nodes are TreeNodes); it carries no key or value of its own but points to a linked list of TreeNodes and to their root, and it uses CAS to implement a read-write lock that forces a writer (the lock holder) to wait for readers to finish before restructuring the tree. ForwardingNode is a temporary node used during data transfer (triggered by a resize) and gets inserted at the head of a bin. Both are subclasses of Node (as is TreeNode), and so that special nodes can be told apart, the hash fields of TreeBin and ForwardingNode hold sentinel values:

  1. static class Node<K,V> implements Map.Entry<K,V> {
  2. final int hash;
  3. final K key;
  4. volatile V val;
  5. volatile Node<K,V> next;
  6. Node(int hash, K key, V val, Node<K,V> next) {
  7. this.hash = hash;
  8. this.key = key;
  9. this.val = val;
  10. this.next = next;
  11. }
  12. public final V setValue(V value) {
  13. throw new UnsupportedOperationException();
  14. }
  15. ......
  16. /**
  17. * Virtualized support for map.get(); overridden in subclasses.
  18. */
  19. Node<K,V> find(int h, Object k) {
  20. Node<K,V> e = this;
  21. if (k != null) {
  22. do {
  23. K ek;
  24. if (e.hash == h &&
  25. ((ek = e.key) == k || (ek != null && k.equals(ek))))
  26. return e;
  27. } while ((e = e.next) != null);
  28. }
  29. return null;
  30. }
  31. }
  32. /*
  33. * Encodings for Node hash fields. See above for explanation.
  34. */
  35. static final int MOVED = -1; // hash for forwarding nodes
  36. static final int TREEBIN = -2; // hash for roots of trees
  37. static final int RESERVED = -3; // hash for transient reservations
  38. static final class TreeBin<K,V> extends Node<K,V> {
  39. ....
  40. TreeBin(TreeNode<K,V> b) {
  41. super(TREEBIN, null, null, null);
  42. ....
  43. }
  44.  
  45. ....
  46. }
  47. static final class ForwardingNode<K,V> extends Node<K,V> {
  48. final Node<K,V>[] nextTable;
  49. ForwardingNode(Node<K,V>[] tab) {
  50. super(MOVED, null, null, null);
  51. this.nextTable = tab;
  52. }
  53. .....
  54. }

我们在get()函数中并没有发现任何与锁相关的代码,那么它是怎么保证线程安全的呢?一个操作ConcurrentHashMap.get("a"),它的步骤基本分为以下几步:

  1. 根据散列函数计算出的索引访问table
  2. table中取出头节点。
  3. 遍历头节点直到找到目标节点。
  4. 从目标节点中取出value并返回。

所以只要保证访问table与节点的操作总是能够返回最新的数据就可以了。ConcurrentHashMap并没有采用锁的方式,而是通过volatile关键字来保证它们的可见性。在代码中可以发现,table、Node.val和Node.next都是被volatile关键字所修饰的。

  1. volatile关键字保证了多线程环境下变量的可见性与有序性,底层实现基于内存屏障(Memory Barrier)。为了优化性能,现代CPU工作时的指令执行顺序与应用程序的代码顺序其实是不一致的(有些编译器也会进行这种优化),也就是所谓的乱序执行技术。乱序执行可以提高CPU流水线的工作效率,前提是不改变单线程程序的执行结果(即所谓的as-if-serial语义)。不过如今是多核时代,如果随意乱序而不提供防护措施,那是会出问题的:每一个CPU核心都会进行乱序优化,单个核心所保证的逻辑次序可能会被其他核心所破坏。内存屏障就是针对此情况的防护措施,可以把它理解为一个同步点(它本身也是一条CPU指令)。例如IA32指令集中的SFENCE指令,在该指令之前的所有写操作必须全部完成,读操作仍可以乱序执行;LFENCE指令则保证之前的所有读操作必须全部完成;另外还有粒度更粗的MFENCE指令,保证之前的所有读写操作都必须全部完成。内存屏障就像是一个保护指令顺序的栅栏,使后面的指令不能跨越它与前面的指令重排。将内存屏障插入到写操作与读操作之间,就可以保证之后的读操作可以访问到最新的数据:屏障前的写操作已经把数据写回到内存(更准确地说,根据缓存一致性协议,并不会直接写回内存,而是先修改该CPU私有缓存中缓存行的状态,并通知其他CPU这个缓存行已被修改;另一个CPU在读取时会发现自己的缓存行已经失效,于是从修改方读取最新的缓存行,之后修改方才会更改状态并写回内存)。
  2. 例如,读一个被volatile修饰的变量V总是能够从JMMJava Memory Model)主内存中获得最新的数据。因为内存屏障的原因,每次在使用变量V(通过JVM指令use,后面说的也都是JVM中的指令而不是cpu)之前都必须先执行load指令(把从主内存中得到的数据放入到工作内存),根据JVM的规定,load指令必须发生在read指令(从主内存中读取数据)之后,所以每次访问变量V都会先从主内存中读取。相对的,写操作也因为内存屏障保证的指令顺序,每次都会直接写回到主内存。不过volatile关键字并不能保证操作的原子性,对该变量进行并发的连续操作是非线程安全的,所幸ConcurrentHashMap只是用来确保访问到的变量是最新的,所以也不会发生什么问题。
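
上面第2点提到volatile只保证可见性而不保证原子性,下面用一个经典的小例子直观演示可见性(示意代码):如果去掉stop上的volatile,工作线程可能一直读取自己工作内存中的旧值而无法停止。

  1. public class VolatileVisibilityDemo {
  2. // 若去掉volatile,worker线程可能永远看不到主线程对stop的修改
  3. static volatile boolean stop = false;
  4.  
  5. public static void main(String[] args) throws InterruptedException {
  6. Thread worker = new Thread(() -> {
  7. while (!stop) {
  8. // 空转,等待stop变为true
  9. }
  10. System.out.println("worker stopped");
  11. });
  12. worker.start();
  13. Thread.sleep(100);
  14. stop = true; // volatile写,随后worker线程的读一定能看到
  15. worker.join();
  16. }
  17. }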

出于性能考虑,Doug Lea(java.util.concurrent包的作者)直接通过Unsafe类来对table进行操作。Java号称是安全的编程语言,而保证安全的代价就是牺牲程序员自由操控内存的能力。像在C/C++中可以通过操作指针达到操作内存的目的(其实操作的是虚拟地址),但这种灵活性在新手手中也经常会带来一些低级错误,比如内存访问越界。Unsafe从字面意思就能看出是不安全的,它包含了许多本地方法(native method,主要由C/C++实现,通过JNI调用),这些方法支持对内存地址的直接操作,所以它才被称为是不安全的。虽然不安全,但像一些与操作系统交互的操作,本地代码肯定是快过Java的,毕竟Java与操作系统之间还隔了一层JVM抽象;代价则是失去了JVM带来的跨平台可移植性(本地代码换一个平台就要重新编译)。
    对table进行操作的函数有以下三个,都使用到了Unsafe(它在java.util.concurrent包中随处可见):

  1. @SuppressWarnings("unchecked")
  2. static final <K,V> Node<K,V> tabAt(Node<K,V>[] tab, int i) {
  3. // 从tab数组中获取一个引用,遵循Volatile语义
  4. // 参数2是一个在tab中的偏移量,用来寻找目标对象
  5. return (Node<K,V>)U.getObjectVolatile(tab, ((long)i << ASHIFT) + ABASE);
  6. }
  7. static final <K,V> boolean casTabAt(Node<K,V>[] tab, int i,
  8. Node<K,V> c, Node<K,V> v) {
  9. // 通过CAS操作将tab数组中位于参数2偏移量位置的值替换为v
  10. // c是期望值,如果期望值与实际值不符,返回false
  11. // 否则,v会成功地被设置到目标位置,返回true
  12. return U.compareAndSwapObject(tab, ((long)i << ASHIFT) + ABASE, c, v);
  13. }
  14. static final <K,V> void setTabAt(Node<K,V>[] tab, int i, Node<K,V> v) {
  15. // 设置tab数组中位于参数2偏移量位置的值,遵循Volatile语义
  16. U.putObjectVolatile(tab, ((long)i << ASHIFT) + ABASE, v);
  17. }
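
顺带解释一下上面代码中的ABASE与ASHIFT:它们用于把数组下标换算成内存偏移量,在ConcurrentHashMap的静态初始化块中通过Unsafe计算得出(以下为JDK 8源码节选,中文注释为补充说明):

  1. // 节选自ConcurrentHashMap的静态初始化块
  2. Class<?> ak = Node[].class;
  3. // 数组中第一个元素的起始偏移量
  4. ABASE = U.arrayBaseOffset(ak);
  5. // 每个数组元素所占的字节数(即对象引用的大小)
  6. int scale = U.arrayIndexScale(ak);
  7. if ((scale & (scale - 1)) != 0)
  8. throw new Error("data type scale not a power of two");
  9. // scale是2的幂,ASHIFT即log2(scale)
  10. // 于是((long)i << ASHIFT) + ABASE就是第i个元素的内存偏移量
  11. ASHIFT = 31 - Integer.numberOfLeadingZeros(scale);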
初始化:ConcurrentHashMap和HashMap一样是Lazy的,buckets数组会在第一次调用put()时才进行初始化,它的默认构造函数甚至是个空函数:

  1. /**
  2. * Creates a new, empty map with the default initial table size (16).
  3. */
  4. public ConcurrentHashMap() {
  5. }

但是有一点需要注意,ConcurrentHashMap是工作在多线程并发环境下的,如果有多个线程同时调用了put()函数该怎么办?这会导致重复初始化,所以必须要有对应的防护措施。ConcurrentHashMap声明了一个用于控制table的初始化与扩容的实例变量sizeCtl,默认值为0。当它是一个负数的时候,代表table正处于初始化或者扩容的状态:-1表示table正在进行初始化;其他负数则表示正在扩容(常见的“-N表示有N-1个线程正在扩容”的说法并不准确,实际编码是高16位为扩容标志、低16位为扩容线程数 + 1,详见下文扩容一节)。在其他情况下,如果table还未初始化(table == null),sizeCtl表示table进行初始化的数组大小(所以从构造函数传入的initialCapacity在经过计算后会被赋给它);如果table已经初始化过了,则表示下次触发扩容操作的阈值,算法为sizeCtl = n - (n >>> 2),也就是n的75%,与默认负载因子(0.75)的HashMap一致。

  1. private transient volatile int sizeCtl;

初始化table的操作位于函数initTable(),源码如下:

  1. /**
  2. * Initializes table, using the size recorded in sizeCtl.
  3. */
  4. private final Node<K,V>[] initTable() {
  5. Node<K,V>[] tab; int sc;
  6. while ((tab = table) == null || tab.length == 0) {
  7. // sizeCtl小于0,这意味着已经有其他线程进行初始化了
  8. // 所以当前线程让出CPU时间片
  9. if ((sc = sizeCtl) < 0)
  10. Thread.yield(); // lost initialization race; just spin
  11. // 否则,通过CAS操作尝试修改sizeCtl
  12. else if (U.compareAndSwapInt(this, SIZECTL, sc, -1)) {
  13. try {
  14. if ((tab = table) == null || tab.length == 0) {
  15. // 默认构造函数,sizeCtl = 0,使用默认容量(16)进行初始化
  16. // 否则,会根据sizeCtl进行初始化
  17. int n = (sc > 0) ? sc : DEFAULT_CAPACITY;
  18. @SuppressWarnings("unchecked")
  19. Node<K,V>[] nt = (Node<K,V>[])new Node<?,?>[n];
  20. table = tab = nt;
  21. // 计算阈值,n的75%
  22. sc = n - (n >>> 2);
  23. }
  24. } finally {
  25. // 阈值赋给sizeCtl
  26. sizeCtl = sc;
  27. }
  28. break;
  29. }
  30. }
  31. return tab;
  32. }

sizeCtl是一个volatile变量,只要有一个线程CAS操作成功,sizeCtl就会被暂时地修改为-1,这样其他线程就能够根据sizeCtl得知table是否已经处于初始化状态中,最后sizeCtl会被设置成阈值,用于触发扩容操作。

3.3、扩容

ConcurrentHashMap触发扩容的时机与HashMap类似,要么是在将链表转换成红黑树时判断table数组的长度是否小于阈值(64),如果小于就进行扩容而不是树化,要么就是在添加元素的时候,判断当前Entry数量是否超过阈值,如果超过就进行扩容。

  1. private final void treeifyBin(Node<K,V>[] tab, int index) {
  2. Node<K,V> b; int n, sc;
  3. if (tab != null) {
  4. // 小于MIN_TREEIFY_CAPACITY,进行扩容
  5. if ((n = tab.length) < MIN_TREEIFY_CAPACITY)
  6. tryPresize(n << 1);
  7. else if ((b = tabAt(tab, index)) != null && b.hash >= 0) {
  8. synchronized (b) {
  9. // 将链表转换成红黑树...
  10. }
  11. }
  12. }
  13. }
  14. ...
  15. final V putVal(K key, V value, boolean onlyIfAbsent) {
  16. ...
  17. addCount(1L, binCount); // 计数
  18. return null;
  19. }
  20. private final void addCount(long x, int check) {
  21. // 计数...(此处省略;下文用到的s为计数后得到的当前元素总数)
  22. if (check >= 0) {
  23. Node<K,V>[] tab, nt; int n, sc;
  24. // s(元素个数)大于等于sizeCtl,触发扩容
  25. while (s >= (long)(sc = sizeCtl) && (tab = table) != null &&
  26. (n = tab.length) < MAXIMUM_CAPACITY) {
  27. // 扩容标志位
  28. int rs = resizeStamp(n);
  29. // sizeCtl为负数,代表正有其他线程进行扩容
  30. if (sc < 0) {
  31. // 扩容已经结束,中断循环
  32. if ((sc >>> RESIZE_STAMP_SHIFT) != rs || sc == rs + 1 ||
  33. sc == rs + MAX_RESIZERS || (nt = nextTable) == null ||
  34. transferIndex <= 0)
  35. break;
  36. // 进行扩容,并设置sizeCtl,表示扩容线程 + 1
  37. if (U.compareAndSwapInt(this, SIZECTL, sc, sc + 1))
  38. transfer(tab, nt);
  39. }
  40. // 触发扩容(第一个进行扩容的线程)
  41. // 并设置sizeCtl告知其他线程
  42. else if (U.compareAndSwapInt(this, SIZECTL, sc,
  43. (rs << RESIZE_STAMP_SHIFT) + 2))
  44. transfer(tab, null);
  45. // 统计个数,用于循环检测是否还需要扩容
  46. s = sumCount();
  47. }
  48. }
  49. }


可以看到有关sizeCtl的操作牵涉到了大量的位运算,我们先来理解这些位运算的意义。首先是resizeStamp(),该函数返回一个用于数据校验的标志位(stamp),代表“对长度为n的table进行扩容”这一事件。它将n的前导零个数(最高有效位之前的零的数量)和1 << 15做或运算,结果的低16位中最高位为1,低位部分则是n的前导零个数。

  1. static final int resizeStamp(int n) {
  2. // RESIZE_STAMP_BITS = 16
  3. return Integer.numberOfLeadingZeros(n) | (1 << (RESIZE_STAMP_BITS - 1));
  4. }

初始化sizeCtl(第一个扩容线程执行的操作)的算法为(rs << RESIZE_STAMP_SHIFT) + 2。首先RESIZE_STAMP_SHIFT = 32 - RESIZE_STAMP_BITS = 16,rs << 16相当于把这个标志位移动到了高16位,这时最高位为1,所以sizeCtl此时是个负数;然后加2:低16位记录的是“扩容线程数 + 1”(这个1是为了与表示初始化状态的-1区分开,所以实际的线程个数要减去1),加2即代表当前有一个线程正在进行扩容。这样sizeCtl就被分割成了两部分:高16位是一个对n进行数据校验的标志位,低16位表示参与扩容操作的线程个数 + 1。可能会有读者疑惑,增加扩容线程数量的操作为什么是sc + 1而不是sc - 1,这是因为对sizeCtl的操作都是基于位运算的,并不关心它本身的数值(一个负数),只关心它二进制各位上的值,而sc + 1正好会在低16位上加1。
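
以默认容量n = 16为例,手工演算一遍这些位运算(示意代码,常量取值与源码一致):

  1. public class ResizeStampDemo {
  2. static final int RESIZE_STAMP_BITS = 16;
  3. static final int RESIZE_STAMP_SHIFT = 32 - RESIZE_STAMP_BITS;
  4.  
  5. static int resizeStamp(int n) {
  6. return Integer.numberOfLeadingZeros(n) | (1 << (RESIZE_STAMP_BITS - 1));
  7. }
  8.  
  9. public static void main(String[] args) {
  10. // 16的前导零个数为27(0x1B),27 | 0x8000 = 0x801B
  11. int rs = resizeStamp(16);
  12. System.out.printf("rs = 0x%X%n", rs);
  13. // 第一个扩容线程:高16位为校验标志,低16位为2(线程数1 + 1)
  14. int sizeCtl = (rs << RESIZE_STAMP_SHIFT) + 2;
  15. System.out.printf("sizeCtl = 0x%X (%d)%n", sizeCtl, sizeCtl);
  16. // 又有一个线程加入扩容:直接在低16位上加1
  17. System.out.printf("after +1: 0x%X%n", sizeCtl + 1);
  18. }
  19. }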


    tryPresize()函数跟addCount()的后半段逻辑类似,不断地根据sizeCtl判断当前的状态,然后选择对应的策略。

  1. private final void tryPresize(int size) {
  2. // 对size进行修正
  3. int c = (size >= (MAXIMUM_CAPACITY >>> 1)) ? MAXIMUM_CAPACITY :
  4. tableSizeFor(size + (size >>> 1) + 1);
  5. int sc;
  6. // sizeCtl是默认值或正整数
  7. // 代表table还未初始化
  8. // 或还没有其他线程正在进行扩容
  9. while ((sc = sizeCtl) >= 0) {
  10. Node<K,V>[] tab = table; int n;
  11. if (tab == null || (n = tab.length) == 0) {
  12. n = (sc > c) ? sc : c;
  13. // 设置sizeCtl,告诉其他线程,table现在正处于初始化状态
  14. if (U.compareAndSwapInt(this, SIZECTL, sc, -1)) {
  15. try {
  16. if (table == tab) {
  17. @SuppressWarnings("unchecked")
  18. Node<K,V>[] nt = (Node<K,V>[])new Node<?,?>[n];
  19. table = nt;
  20. // 计算下次触发扩容的阈值
  21. sc = n - (n >>> 2);
  22. }
  23. } finally {
  24. // 将阈值赋给sizeCtl
  25. sizeCtl = sc;
  26. }
  27. }
  28. }
  29. // 没有超过阈值或者大于容量的上限,中断循环
  30. else if (c <= sc || n >= MAXIMUM_CAPACITY)
  31. break;
  32. // 进行扩容,与addCount()后半段的逻辑一致
  33. else if (tab == table) {
  34. int rs = resizeStamp(n);
  35. if (sc < 0) {
  36. Node<K,V>[] nt;
  37. if ((sc >>> RESIZE_STAMP_SHIFT) != rs || sc == rs + 1 ||
  38. sc == rs + MAX_RESIZERS || (nt = nextTable) == null ||
  39. transferIndex <= 0)
  40. break;
  41. if (U.compareAndSwapInt(this, SIZECTL, sc, sc + 1))
  42. transfer(tab, nt);
  43. }
  44. else if (U.compareAndSwapInt(this, SIZECTL, sc,
  45. (rs << RESIZE_STAMP_SHIFT) + 2))
  46. transfer(tab, null);
  47. }
  48. }
  49. }


扩容操作的核心在于数据的转移,在单线程环境下数据的转移很简单,无非就是把旧数组中的数据迁移到新的数组。但是这在多线程环境下是行不通的,需要保证线程安全性:在扩容的时候其他线程也可能正在添加元素,这时又触发了扩容怎么办?有人可能会说,用一个互斥锁把数据转移操作的过程锁住不就好了?这确实是一种可行的解决方法,但同样也会带来极差的吞吐量。互斥锁会导致所有访问临界区的线程陷入阻塞,这会消耗额外的系统资源,内核需要保存这些线程的上下文并放入阻塞队列;持有锁的线程耗时越长,其他竞争线程就被阻塞得越久,因此吞吐量低下,响应时间缓慢。而且锁总是会伴随着死锁问题,一旦发生死锁,整个应用程序都会受到影响,所以加锁永远是最后的备选方案。
     Doug Lea没有选择直接加锁,而是基于CAS实现无锁的并发同步策略。令人佩服的是,他不仅没有把其他线程拒之门外,甚至还邀请它们一起来协助工作。那么如何才能让多个线程协同工作呢?Doug Lea把整个table数组当做多个线程之间共享的任务队列,然后只需维护一个指针:当有一个线程开始进行数据转移,就会先移动指针,表示指针划过的这片bucket区域由该线程负责。这个指针被声明为一个volatile整型变量(transferIndex),它的初始位置位于table的尾部,即等于table.length,很明显这个任务队列是逆向遍历的。

  1. /**
  2. * The next table index (plus one) to split while resizing.
  3. */
  4. private transient volatile int transferIndex;
  5. /**
  6. * 一个线程需要负责的最小bucket数
  7. */
  8. private static final int MIN_TRANSFER_STRIDE = 16;
  9.  
  10. /**
  11. * The next table to use; non-null only while resizing.
  12. */
  13. private transient volatile Node<K,V>[] nextTable;

一个已经迁移完毕的bucket会被替换成ForwardingNode节点,用来标记此bucket已经被其他线程迁移完毕了。ForwardingNode是一个特殊节点,可以通过hash域的虚拟值来识别它,它同样重写了find()函数,用来在新数组中查找目标。数据迁移的操作位于transfer()函数,多个线程之间依靠sizeCtl与transferIndex指针来协同工作,每个线程都有自己负责的区域,一个完成迁移的bucket会被设置为ForwardingNode,其他线程遇见这个特殊节点就跳过该bucket,处理下一个bucket。transfer()函数可以大致分为三部分,第一部分对后续需要使用的变量进行初始化:

  1. /**
  2. * Moves and/or copies the nodes in each bin to new table. See
  3. * above for explanation.
  4. */
  5. private final void transfer(Node<K,V>[] tab, Node<K,V>[] nextTab) {
  6. int n = tab.length, stride;
  7. // 根据当前机器的CPU数量来决定每个线程负责的bucket数
  8. // 避免因为扩容线程过多,反而影响到性能
  9. if ((stride = (NCPU > 1) ? (n >>> 3) / NCPU : n) < MIN_TRANSFER_STRIDE)
  10. stride = MIN_TRANSFER_STRIDE; // subdivide range
  11. // 初始化nextTab,容量为旧数组的两倍(n << 1)
  12. if (nextTab == null) { // initiating
  13. try {
  14. @SuppressWarnings("unchecked")
  15. Node<K,V>[] nt = (Node<K,V>[])new Node<?,?>[n << 1];
  16. nextTab = nt;
  17. } catch (Throwable ex) { // try to cope with OOME
  18. sizeCtl = Integer.MAX_VALUE;
  19. return;
  20. }
  21. nextTable = nextTab;
  22. transferIndex = n; // 初始化指针
  23. }
  24. int nextn = nextTab.length;
  25. ForwardingNode<K,V> fwd = new ForwardingNode<K,V>(nextTab);
  26. boolean advance = true;
  27. boolean finishing = false; // to ensure sweep before committing nextTab

第二部分为当前线程分配任务和控制当前线程的任务进度,这部分是transfer()的核心逻辑,描述了如何与其他线程协同工作:

  1. // i指向当前bucket,bound表示当前线程所负责的bucket区域的边界
  2. for (int i = 0, bound = 0;;) {
  3. Node<K,V> f; int fh;
  4. // 这个循环使用CAS不断尝试为当前线程分配任务
  5. // 直到分配成功或任务队列已经被全部分配完毕
  6. // 如果当前线程已经被分配过bucket区域
  7. // 那么会通过--i指向下一个待处理bucket然后退出该循环
  8. while (advance) {
  9. int nextIndex, nextBound;
  10. // --i表示将i指向下一个待处理的bucket
  11. // 如果--i >= bound,代表当前线程已经分配过bucket区域
  12. // 并且还留有未处理的bucket
  13. if (--i >= bound || finishing)
  14. advance = false;
  15. // transferIndex指针 <= 0 表示所有bucket已经被分配完毕
  16. else if ((nextIndex = transferIndex) <= 0) {
  17. i = -1;
  18. advance = false;
  19. }
  20. // 移动transferIndex指针
  21. // 为当前线程设置所负责的bucket区域的范围
  22. // 这个范围为[bound, i],其中i是区域内下标最大的bucket
  23. // 由于遍历是逆向的,i也是当前线程第一个要处理的bucket
  24. else if (U.compareAndSwapInt
  25. (this, TRANSFERINDEX, nextIndex,
  26. nextBound = (nextIndex > stride ?
  27. nextIndex - stride : 0))) {
  28. bound = nextBound;
  29. i = nextIndex - 1;
  30. advance = false;
  31. }
  32. }
  33. // 当前线程已经处理完了所负责的所有bucket
  34. if (i < 0 || i >= n || i + n >= nextn) {
  35. int sc;
  36. // 如果任务队列已经全部完成
  37. if (finishing) {
  38. nextTable = null;
  39. table = nextTab;
  40. // 设置新的阈值
  41. sizeCtl = (n << 1) - (n >>> 1);
  42. return;
  43. }
  44. // 工作中的扩容线程数量减1
  45. if (U.compareAndSwapInt(this, SIZECTL, sc = sizeCtl, sc - 1)) {
  46. // (resizeStamp << RESIZE_STAMP_SHIFT) + 2代表当前有一个扩容线程
  47. // 相对的,(sc - 2) != resizeStamp << RESIZE_STAMP_SHIFT
  48. // 表示当前还有其他线程正在进行扩容,所以直接返回
  49. if ((sc - 2) != resizeStamp(n) << RESIZE_STAMP_SHIFT)
  50. return;
  51. // 否则,当前线程就是最后一个进行扩容的线程
  52. // 设置finishing标识
  53. finishing = advance = true;
  54. i = n; // recheck before commit
  55. }
  56. }
  57. // 如果待处理bucket是空的
  58. // 那么插入ForwardingNode,以通知其他线程
  59. else if ((f = tabAt(tab, i)) == null)
  60. advance = casTabAt(tab, i, null, fwd);
  61. // 如果待处理bucket的头节点是ForwardingNode
  62. // 说明此bucket已经被处理过了,跳过该bucket
  63. else if ((fh = f.hash) == MOVED)
  64. advance = true; // already processed
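
在继续看第三部分之前,可以把上面“认领任务”的核心逻辑抽象成一个极简的示意模型(假设性代码,仅用于演示思想,并非JDK实现):多个线程用CAS从同一个逆向指针上认领长度为stride的bucket区间,互不重叠。

  1. import java.util.concurrent.atomic.AtomicInteger;
  2.  
  3. public class StrideClaimDemo {
  4. static final int N = 64; // 假设旧数组长度为64
  5. static final int STRIDE = 16; // 每个线程一次认领的bucket数
  6. static final AtomicInteger transferIndex = new AtomicInteger(N);
  7.  
  8. public static void main(String[] args) throws InterruptedException {
  9. Runnable worker = () -> {
  10. int nextIndex, nextBound;
  11. // 不断尝试认领区间,直到任务队列被分配完毕
  12. while ((nextIndex = transferIndex.get()) > 0) {
  13. nextBound = (nextIndex > STRIDE) ? nextIndex - STRIDE : 0;
  14. if (transferIndex.compareAndSet(nextIndex, nextBound)) {
  15. // 认领成功:负责下标在[nextBound, nextIndex - 1]内的bucket
  16. System.out.println(Thread.currentThread().getName()
  17. + " 负责 [" + nextBound + ", " + (nextIndex - 1) + "]");
  18. }
  19. }
  20. };
  21. Thread t1 = new Thread(worker, "t1");
  22. Thread t2 = new Thread(worker, "t2");
  23. t1.start(); t2.start();
  24. t1.join(); t2.join();
  25. }
  26. }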

最后一部分是具体的迁移过程(对当前指向的bucket),这部分的逻辑与HashMap类似,拿旧数组的容量当做一个掩码,然后与节点的hash进行与操作,可以得出该节点的新增有效位,如果新增有效位为0就放入一个链表A,如果为1就放入另一个链表B,链表A在新数组中的位置不变(跟在旧数组的索引一致),链表B在新数组中的位置为原索引加上旧数组容量。这个方法减少了rehash的计算量,而且还能达到均匀分布的目的。

  1. else {
  2. // 对于节点的操作还是要加上锁的
  3. // 不过这个锁的粒度很小,只锁住了bucket的头节点
  4. synchronized (f) {
  5. if (tabAt(tab, i) == f) {
  6. Node<K,V> ln, hn;
  7. // hash code不为负,代表这是条链表
  8. if (fh >= 0) {
  9. // fh & n 获得hash code的新增有效位,用于将链表分离成两类
  10. // 要么是0要么是1,关于这个位运算的更多细节
  11. // 请看本文中有关HashMap扩容操作的解释
  12. int runBit = fh & n;
  13. Node<K,V> lastRun = f;
  14. // 这个循环用于记录最后一段连续的同一类节点
  15. // 这个类别是通过fh & n来区分的
  16. // 这段连续的同类节点直接被复用,不会产生额外的复制
  17. for (Node<K,V> p = f.next; p != null; p = p.next) {
  18. int b = p.hash & n;
  19. if (b != runBit) {
  20. runBit = b;
  21. lastRun = p;
  22. }
  23. }
  24. // 0被放入ln链表,1被放入hn链表
  25. // lastRun是连续同类节点的起始节点
  26. if (runBit == 0) {
  27. ln = lastRun;
  28. hn = null;
  29. }
  30. else {
  31. hn = lastRun;
  32. ln = null;
  33. }
  34. // 将最后一段的连续同类节点之前的节点按类别复制到ln或hn
  35. // 链表的插入方向是往头部插入的,Node构造函数的第四个参数是next
  36. // 所以就算遇到类别与lastRun一致的节点也只会被插入到头部
  37. for (Node<K,V> p = f; p != lastRun; p = p.next) {
  38. int ph = p.hash; K pk = p.key; V pv = p.val;
  39. if ((ph & n) == 0)
  40. ln = new Node<K,V>(ph, pk, pv, ln);
  41. else
  42. hn = new Node<K,V>(ph, pk, pv, hn);
  43. }
  44. // ln链表被放入到原索引位置,hn放入到原索引 + 旧数组容量
  45. // 这一点与HashMap一致,如果看不懂请去参考本文对HashMap扩容的讲解
  46. setTabAt(nextTab, i, ln);
  47. setTabAt(nextTab, i + n, hn);
  48. setTabAt(tab, i, fwd); // 标记该bucket已被处理
  49. advance = true;
  50. }
  51. // 对红黑树的操作,逻辑与链表一样,按新增有效位进行分类
  52. else if (f instanceof TreeBin) {
  53. TreeBin<K,V> t = (TreeBin<K,V>)f;
  54. TreeNode<K,V> lo = null, loTail = null;
  55. TreeNode<K,V> hi = null, hiTail = null;
  56. int lc = 0, hc = 0;
  57. for (Node<K,V> e = t.first; e != null; e = e.next) {
  58. int h = e.hash;
  59. TreeNode<K,V> p = new TreeNode<K,V>
  60. (h, e.key, e.val, null, null);
  61. if ((h & n) == 0) {
  62. if ((p.prev = loTail) == null)
  63. lo = p;
  64. else
  65. loTail.next = p;
  66. loTail = p;
  67. ++lc;
  68. }
  69. else {
  70. if ((p.prev = hiTail) == null)
  71. hi = p;
  72. else
  73. hiTail.next = p;
  74. hiTail = p;
  75. ++hc;
  76. }
  77. }
  78. // 元素数量没有超过UNTREEIFY_THRESHOLD,退化成链表
  79. ln = (lc <= UNTREEIFY_THRESHOLD) ? untreeify(lo) :
  80. (hc != 0) ? new TreeBin<K,V>(lo) : t;
  81. hn = (hc <= UNTREEIFY_THRESHOLD) ? untreeify(hi) :
  82. (lc != 0) ? new TreeBin<K,V>(hi) : t;
  83. setTabAt(nextTab, i, ln);
  84. setTabAt(nextTab, i + n, hn);
  85. setTabAt(tab, i, fwd);
  86. advance = true;
  87. }

3.4、计数

在Java 7中ConcurrentHashMap对每个Segment单独计数,想要得到总数就需要获得所有Segment的锁,然后进行统计。由于Java 8抛弃了Segment,显然是不能再这样做了,而且这种方法虽然简单准确但也舍弃了性能。Java 8声明了一个volatile变量baseCount用于记录元素的个数,对这个变量的修改操作是基于CAS的,每当插入元素或删除元素时都会调用addCount()函数进行计数。

  1. private transient volatile long baseCount;
  2. private final void addCount(long x, int check) {
  3. CounterCell[] as; long b, s;
  4. // 尝试使用CAS更新baseCount失败
  5. // 转用CounterCells进行更新
  6. if ((as = counterCells) != null ||
  7. !U.compareAndSwapLong(this, BASECOUNT, b = baseCount, s = b + x)) {
  8. CounterCell a; long v; int m;
  9. boolean uncontended = true;
  10. // 在CounterCells未初始化
  11. // 或尝试通过CAS更新当前线程的CounterCell失败时
  12. // 调用fullAddCount(),该函数负责初始化CounterCells和更新计数
  13. if (as == null || (m = as.length - 1) < 0 ||
  14. (a = as[ThreadLocalRandom.getProbe() & m]) == null ||
  15. !(uncontended =
  16. U.compareAndSwapLong(a, CELLVALUE, v = a.value, v + x))) {
  17. fullAddCount(x, uncontended);
  18. return;
  19. }
  20. if (check <= 1)
  21. return;
  22. // 统计总数
  23. s = sumCount();
  24. }
  25. if (check >= 0) {
  26. // 判断是否需要扩容,在上文中已经讲过了
  27. }
  28. }

counterCells是一个元素为CounterCell的数组,该数组的大小与当前机器的CPU数量有关,并且它不会被主动初始化,只有在调用fullAddCount()函数时才会进行初始化。CounterCell是一个简单的内部静态类,每个CounterCell都是一个用于记录数量的单元:

  1. /**
  2. * Table of counter cells. When non-null, size is a power of 2.
  3. */
  4. private transient volatile CounterCell[] counterCells;
  5. /**
  6. * A padded cell for distributing counts. Adapted from LongAdder
  7. * and Striped64. See their internal docs for explanation.
  8. */
  9. @sun.misc.Contended static final class CounterCell {
  10. volatile long value;
  11. CounterCell(long x) { value = x; }
  12. }

注解@sun.misc.Contended用于解决伪共享问题。所谓伪共享,即是在同一缓存行(CPU缓存的基本单位)中存储了多个变量,当其中一个变量被修改时,就会影响到同一缓存行内的其他变量,导致它们也要跟着被标记为失效,其他变量的缓存命中率将会受到影响。解决伪共享问题的方法一般是对该变量填充一些无意义的占位数据,从而使它独享一个缓存行。
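
@sun.misc.Contended是JDK 8才引入的注解(对JDK之外的类还需要开启-XX:-RestrictContended才会生效);在它出现之前,经典做法是手动填充无意义字段,下面是一个示意(假设性代码,仿照早期LongAdder/Disruptor的写法):

  1. // 在value前后各填充7个long(56字节)
  2. // 使其尽量独占一个缓存行(通常为64字节)
  3. static final class PaddedCell {
  4. volatile long p0, p1, p2, p3, p4, p5, p6; // 前置填充
  5. volatile long value; // 真正的计数值
  6. volatile long q0, q1, q2, q3, q4, q5, q6; // 后置填充
  7. }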
     ConcurrentHashMap的计数设计与LongAdder类似。在低并发的情况下,只是简单地使用CAS操作来对baseCount进行更新;但只要这个CAS操作失败一次,就代表有多个线程正在竞争,那么就转而使用CounterCell数组进行计数,数组内的每个CounterCell都是一个独立的计数单元。每个线程都会通过ThreadLocalRandom.getProbe() & m寻址找到属于它的CounterCell,然后进行计数。ThreadLocalRandom是一个线程私有的伪随机数生成器,每个线程的probe都是不同的(这点基于ThreadLocalRandom的内部实现:它维护了一个类型为AtomicInteger的静态字段probeGenerator,每当初始化一个ThreadLocalRandom时,probeGenerator都会先自增一个常量,返回的整数即为当前线程的probe,probe变量被维护在Thread对象中),可以认为每个线程的probe就是它在CounterCell数组中的hash code。这种方法将竞争按照线程的粒度进行分离,相比所有线程对同一个共享变量不断进行CAS尝试,性能要高出许多,这也是高并发环境下LongAdder优于AtomicLong/AtomicInteger的原因。
     fullAddCount()函数根据当前线程的probe寻找对应的CounterCell进行计数,如果CounterCell数组未被初始化,则初始化CounterCell数组和CounterCell。该函数的实现与Striped64类(LongAdder的父类)的longAccumulate()函数是一样的,把CounterCell数组当成一个散列表,每个线程的probe就是hash code,散列函数也仅仅是简单的(n - 1) & probe。CounterCell数组的大小永远是一个2的n次方,初始容量为2,每次扩容的新容量都是之前容量乘以二,出于性能考虑,它的最大容量上限是机器的CPU数量。所以说CounterCell数组的碰撞冲突是很严重的,因为它的bucket基数太小了。而发生碰撞就代表着一个CounterCell会被多个线程竞争,为了解决这个问题,Doug Lea使用无限循环加上CAS来模拟出一个自旋锁来保证线程安全,自旋锁的实现基于一个被volatile修饰的整数变量,该变量只会有两种状态:0和1,当它被设置为0时表示没有加锁,当它被设置为1时表示已被其他线程加锁。这个自旋锁用于保护初始化CounterCell、初始化CounterCell数组以及对CounterCell数组进行扩容时的安全。CounterCell更新计数是依赖于CAS的,每次循环都会尝试通过CAS进行更新,如果成功就退出无限循环,否则就调用ThreadLocalRandom.advanceProbe()函数为当前线程更新probe,然后重新开始循环,以期望下一次寻址到的CounterCell没有被其他线程竞争。如果连着两次CAS更新都没有成功,那么会对CounterCell数组进行一次扩容,这个扩容操作只会在当前循环中触发一次,而且只能在容量小于上限时触发。
    fullAddCount()函数的主要流程如下:

  1. 首先检查当前线程有没有初始化过ThreadLocalRandom,如果没有则进行初始化。ThreadLocalRandom负责更新线程的probe,而probe又是在数组中进行寻址的关键。
  2. 检查CounterCell数组是否已经初始化,如果已初始化,那么就根据probe找到对应的CounterCell
  3. 如果这个CounterCell等于null,需要先初始化CounterCell,通过把计数增量传入构造函数,所以初始化只要成功就说明更新计数已经完成了。初始化的过程需要获取自旋锁。
  4. 如果不为null,就按上文所说的逻辑对CounterCell实施更新计数。
  5. CounterCell数组未被初始化,尝试获取自旋锁,进行初始化。数组初始化的过程会附带初始化一个CounterCell来记录计数增量,所以只要初始化成功就表示更新计数完成。
  6. 如果自旋锁被其他线程占用,无法进行数组的初始化,只好通过CAS更新baseCount
  1. private final void fullAddCount(long x, boolean wasUncontended) {
  2. int h;
  3. // 当前线程的probe等于0,证明该线程的ThreadLocalRandom还未被初始化
  4. // 以及当前线程是第一次进入该函数
  5. if ((h = ThreadLocalRandom.getProbe()) == 0) {
  6. // 初始化ThreadLocalRandom,当前线程会被设置一个probe
  7. ThreadLocalRandom.localInit(); // force initialization
  8. // probe用于在CounterCell数组中寻址
  9. h = ThreadLocalRandom.getProbe();
  10. // 未竞争标志
  11. wasUncontended = true;
  12. }
  13. // 冲突标志
  14. boolean collide = false; // True if last slot nonempty
  15. for (;;) {
  16. CounterCell[] as; CounterCell a; int n; long v;
  17. // CounterCell数组已初始化
  18. if ((as = counterCells) != null && (n = as.length) > 0) {
  19. // 如果寻址到的Cell为空,那么创建一个新的Cell
  20. if ((a = as[(n - 1) & h]) == null) {
  21. // cellsBusy是一个只有0和1两个状态的volatile整数
  22. // 它被当做一个自旋锁,0代表无锁,1代表加锁
  23. if (cellsBusy == 0) { // Try to attach new Cell
  24. // 将传入的x作为初始值创建一个新的CounterCell
  25. CounterCell r = new CounterCell(x); // Optimistic create
  26. // 通过CAS尝试对自旋锁加锁
  27. if (cellsBusy == 0 &&
  28. U.compareAndSwapInt(this, CELLSBUSY, 0, 1)) {
  29. // 加锁成功,声明Cell是否创建成功的标志
  30. boolean created = false;
  31. try { // Recheck under lock
  32. CounterCell[] rs; int m, j;
  33. // 再次检查CounterCell数组是否不为空
  34. // 并且寻址到的Cell为空
  35. if ((rs = counterCells) != null &&
  36. (m = rs.length) > 0 &&
  37. rs[j = (m - 1) & h] == null) {
  38. // 将之前创建的新Cell放入数组
  39. rs[j] = r;
  40. created = true;
  41. }
  42. } finally {
  43. // 释放锁
  44. cellsBusy = 0;
  45. }
  46. // 如果已经创建成功,中断循环
  47. // 因为新Cell的初始值就是传入的增量,所以计数已经完毕了
  48. if (created)
  49. break;
  50. // 如果未成功
  51. // 代表as[(n - 1) & h]这个位置的Cell已经被其他线程设置
  52. // 那么就从循环头重新开始
  53. continue; // Slot is now non-empty
  54. }
  55. }
  56. collide = false;
  57. }
  58. // as[(n - 1) & h]非空
  59. // 在addCount()函数中通过CAS更新当前线程的Cell进行计数失败
  60. // 会传入wasUncontended = false,代表已经有其他线程进行竞争
  61. else if (!wasUncontended) // CAS already known to fail
  62. // 设置未竞争标志,之后会重新计算probe,然后重新执行循环
  63. wasUncontended = true; // Continue after rehash
  64. // 尝试进行计数,如果成功,那么就退出循环
  65. else if (U.compareAndSwapLong(a, CELLVALUE, v = a.value, v + x))
  66. break;
  67. // 尝试更新失败,检查counterCell数组是否已经扩容
  68. // 或者容量达到最大值(CPU的数量)
  69. else if (counterCells != as || n >= NCPU)
  70. // 设置冲突标志,防止跳入下面的扩容分支
  71. // 之后会重新计算probe
  72. collide = false; // At max size or stale
  73. // 设置冲突标志,重新执行循环
  74. // 如果下次循环执行到该分支,并且冲突标志仍然为true
  75. // 那么会跳过该分支,到下一个分支进行扩容
  76. else if (!collide)
  77. collide = true;
  78. // 尝试加锁,然后对counterCells数组进行扩容
  79. else if (cellsBusy == 0 &&
  80. U.compareAndSwapInt(this, CELLSBUSY, 0, 1)) {
  81. try {
  82. // 检查是否已被扩容
  83. if (counterCells == as) {// Expand table unless stale
  84. // 新数组容量为之前的两倍(n << 1)
  85. CounterCell[] rs = new CounterCell[n << 1];
  86. // 迁移数据到新数组
  87. for (int i = 0; i < n; ++i)
  88. rs[i] = as[i];
  89. counterCells = rs;
  90. }
  91. } finally {
  92. // 释放锁
  93. cellsBusy = 0;
  94. }
  95. collide = false;
  96. // 重新执行循环
  97. continue; // Retry with expanded table
  98. }
  99. // 为当前线程重新计算probe
  100. h = ThreadLocalRandom.advanceProbe(h);
  101. }
  102. // CounterCell数组未初始化,尝试获取自旋锁,然后进行初始化
  103. else if (cellsBusy == 0 && counterCells == as &&
  104. U.compareAndSwapInt(this, CELLSBUSY, 0, 1)) {
  105. boolean init = false;
  106. try { // Initialize table
  107. if (counterCells == as) {
  108. // 初始化CounterCell数组,初始容量为2
  109. CounterCell[] rs = new CounterCell[2];
  110. // 初始化CounterCell
  111. rs[h & 1] = new CounterCell(x);
  112. counterCells = rs;
  113. init = true;
  114. }
  115. } finally {
  116. cellsBusy = 0;
  117. }
  118. // 初始化CounterCell数组成功,退出循环
  119. if (init)
  120. break;
  121. }
  122. // 如果自旋锁被占用,则只好尝试更新baseCount
  123. else if (U.compareAndSwapLong(this, BASECOUNT, v = baseCount, v + x))
  124. break; // Fall back on using base
  125. }
  126. }

对于统计总数,只要能够理解CounterCell的思想,就很简单了。仔细想一想,每次计数的更新都会被分摊在baseCount和CounterCell数组中的某一CounterCell,想要获得总数,把它们统计相加就是了。

  1. public int size() {
  2. long n = sumCount();
  3. return ((n < 0L) ? 0 :
  4. (n > (long)Integer.MAX_VALUE) ? Integer.MAX_VALUE :
  5. (int)n);
  6. }
  7. final long sumCount() {
  8. CounterCell[] as = counterCells; CounterCell a;
  9. long sum = baseCount;
  10. if (as != null) {
  11. for (int i = 0; i < as.length; ++i) {
  12. if ((a = as[i]) != null)
  13. sum += a.value;
  14. }
  15. }
  16. return sum;
  17. }

其实size()函数返回的总数可能并不是百分百精确的,试想如果前一个遍历过的CounterCell又进行了更新会怎么样?尽管只是一个估算值,但在大多数场景下都还能接受,而且性能上是要比Java 7好上太多了。
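
顺带一提,Java 8还为ConcurrentHashMap新增了mappingCount()方法,它同样基于sumCount(),但返回long,避免了size()在元素个数超过Integer.MAX_VALUE时被截断的问题,官方更推荐使用它:

  1. public long mappingCount() {
  2. long n = sumCount();
  3. return (n < 0L) ? 0L : n; // ignore transient negative values
  4. }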

3.5、其他操作

添加元素的主要逻辑与HashMap没什么区别,所以整体来说putVal()函数还是比较简单的,可能唯一需要注意的就是在对节点进行操作的时候需要通过互斥锁保证线程安全,这个互斥锁的粒度很小,只对需要操作的这个bucket加锁。

  1. public V put(K key, V value) {
  2. return putVal(key, value, false);
  3. }
  4. /** Implementation for put and putIfAbsent */
  5. final V putVal(K key, V value, boolean onlyIfAbsent) {
  6. if (key == null || value == null) throw new NullPointerException();
  7. int hash = spread(key.hashCode());
  8. int binCount = 0; // 节点计数器,用于判断是否需要树化
  9. // 无限循环+CAS,无锁的标准套路
  10. for (Node<K,V>[] tab = table;;) {
  11. Node<K,V> f; int n, i, fh;
  12. // 初始化table
  13. if (tab == null || (n = tab.length) == 0)
  14. tab = initTable();
  15. // bucket为null,通过CAS创建头节点,如果成功就结束循环
  16. else if ((f = tabAt(tab, i = (n - 1) & hash)) == null) {
  17. if (casTabAt(tab, i, null,
  18. new Node<K,V>(hash, key, value, null)))
  19. break; // no lock when adding to empty bin
  20. }
  21. // bucket为ForwardingNode
  22. // 当前线程前去协助进行扩容
  23. else if ((fh = f.hash) == MOVED)
  24. tab = helpTransfer(tab, f);
  25. else {
  26. V oldVal = null;
  27. synchronized (f) {
  28. if (tabAt(tab, i) == f) {
  29. // 节点是链表
  30. if (fh >= 0) {
  31. binCount = 1;
  32. for (Node<K,V> e = f;; ++binCount) {
  33. K ek;
  34. // 找到目标,设置value
  35. if (e.hash == hash &&
  36. ((ek = e.key) == key ||
  37. (ek != null && key.equals(ek)))) {
  38. oldVal = e.val;
  39. if (!onlyIfAbsent)
  40. e.val = value;
  41. break;
  42. }
  43. Node<K,V> pred = e;
  44. // 未找到节点,插入新节点到链表尾部
  45. if ((e = e.next) == null) {
  46. pred.next = new Node<K,V>(hash, key,
  47. value, null);
  48. break;
  49. }
  50. }
  51. }
  52. // 节点是红黑树
  53. else if (f instanceof TreeBin) {
  54. Node<K,V> p;
  55. binCount = 2;
  56. if ((p = ((TreeBin<K,V>)f).putTreeVal(hash, key,
  57. value)) != null) {
  58. oldVal = p.val;
  59. if (!onlyIfAbsent)
  60. p.val = value;
  61. }
  62. }
  63. }
  64. }
  65. // 根据bucket中的节点数决定是否树化
  66. if (binCount != 0) {
  67. if (binCount >= TREEIFY_THRESHOLD)
  68. treeifyBin(tab, i);
  69. // oldVal不等于null,说明没有新节点
  70. // 所以直接返回,不进行计数
  71. if (oldVal != null)
  72. return oldVal;
  73. break;
  74. }
  75. }
  76. }
  77. // 计数
  78. addCount(1L, binCount);
  79. return null;
  80. }

至于删除元素的操作位于函数replaceNode(Object key, V value, Object cv),当table[key].val等于期望值cv时(或cv等于null),更新节点的值为value,如果value等于null,那么删除该节点。
    remove()函数通过调用replaceNode(key, null, null)来达成删除目标节点的目的,replaceNode()的具体实现与putVal()没什么差别,只不过对链表的操作有所不同而已。
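
对应到对外的API上,remove(key)、remove(key, value)与replace(key, oldValue, newValue)最终都会走到replaceNode()这套逻辑,下面是一个简单的使用示例(示意代码):

  1. import java.util.concurrent.ConcurrentHashMap;
  2.  
  3. public class RemoveReplaceDemo {
  4. public static void main(String[] args) {
  5. ConcurrentHashMap<String, Integer> map = new ConcurrentHashMap<>();
  6. map.put("a", 1);
  7.  
  8. // remove(key, value):相当于cv = 2、value = null,值不匹配则删除失败
  9. System.out.println(map.remove("a", 2)); // false
  10. // 值匹配,原子地删除该节点
  11. System.out.println(map.remove("a", 1)); // true
  12.  
  13. map.put("b", 1);
  14. // replace(key, oldValue, newValue):CAS语义的原子替换
  15. System.out.println(map.replace("b", 1, 2)); // true
  16. System.out.println(map.get("b")); // 2
  17. }
  18. }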

四、Hashtable介绍

  1. HashMap一样,Hashtable 也是一个散列表,它存储的内容是键值对(key-value)映射。
  2. Hashtable 继承于Dictionary,实现了MapCloneablejava.io.Serializable接口。
  3. Hashtable 的函数都是同步的,这意味着它是线程安全的。它的keyvalue都不可以为null
  4. 此外,Hashtable中的映射不是有序的。

Hashtable的实例有两个参数影响其性能:初始容量和加载因子。容量是哈希表中桶的数量,初始容量就是哈希表创建时的容量。注意,这里的哈希表是“开放”的(即采用拉链法处理冲突):在发生“哈希冲突”的情况下,单个桶会存储多个条目,查找时必须顺序搜索这些条目。加载因子是对哈希表在其容量自动增加之前可以达到多满的一个尺度。初始容量和加载因子这两个参数只是对该实现的提示,关于何时以及是否调用rehash方法的具体细节则依赖于具体实现。通常,默认加载因子是0.75,这是在时间和空间成本上寻求的一种折衷。
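
Hashtable对null的限制可以用一个小例子验证(示意代码):

  1. import java.util.Hashtable;
  2.  
  3. public class HashtableNullDemo {
  4. public static void main(String[] args) {
  5. Hashtable<String, String> table = new Hashtable<>();
  6. table.put("k", "v"); // 正常
  7. try {
  8. table.put(null, "v"); // key为null:计算key.hashCode()时抛出NullPointerException
  9. } catch (NullPointerException e) {
  10. System.out.println("null key rejected");
  11. }
  12. try {
  13. table.put("k", null); // value为null:put()中显式抛出NullPointerException
  14. } catch (NullPointerException e) {
  15. System.out.println("null value rejected");
  16. }
  17. }
  18. }

下面结合带中文注释的源码来看Hashtable的具体实现: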

  1. package java.util;
  2. import java.io.*;
  3.  
  4. public class Hashtable<K,V>
  5. extends Dictionary<K,V>
  6. implements Map<K,V>, Cloneable, java.io.Serializable {
  7.  
  8. // Hashtable保存key-value的数组。
  9. // Hashtable是采用拉链法实现的,每一个Entry本质上是一个单向链表
  10. private transient Entry[] table;
  11.  
  12. // Hashtable中元素的实际数量
  13. private transient int count;
  14.  
  15. // 阈值,用于判断是否需要调整Hashtable的容量(threshold = 容量*加载因子)
  16. private int threshold;
  17.  
  18. // 加载因子
  19. private float loadFactor;
  20.  
  21. // Hashtable被改变的次数
  22. private transient int modCount = 0;
  23.  
  24. // 序列版本号
  25. private static final long serialVersionUID = 1421746759512286392L;
  26.  
  27. // 指定“容量大小”和“加载因子”的构造函数
  28. public Hashtable(int initialCapacity, float loadFactor) {
  29. if (initialCapacity < 0)
  30. throw new IllegalArgumentException("Illegal Capacity: "+
  31. initialCapacity);
  32. if (loadFactor <= 0 || Float.isNaN(loadFactor))
  33. throw new IllegalArgumentException("Illegal Load: "+loadFactor);
  34.  
  35. if (initialCapacity==0)
  36. initialCapacity = 1;
  37. this.loadFactor = loadFactor;
  38. table = new Entry[initialCapacity];
  39. threshold = (int)(initialCapacity * loadFactor);
  40. }
  41.  
  42. // 指定“容量大小”的构造函数
  43. public Hashtable(int initialCapacity) {
  44. this(initialCapacity, 0.75f);
  45. }
  46.  
  47. // 默认构造函数。
  48. public Hashtable() {
  49. // 默认构造函数,指定的容量大小是11;加载因子是0.75
  50. this(11, 0.75f);
  51. }
  52.  
  53. // 包含“子Map”的构造函数
  54. public Hashtable(Map<? extends K, ? extends V> t) {
  55. this(Math.max(2*t.size(), 11), 0.75f);
  56. // 将“子Map”的全部元素都添加到Hashtable中
  57. putAll(t);
  58. }
  59.  
  60. public synchronized int size() {
  61. return count;
  62. }
  63.  
  64. public synchronized boolean isEmpty() {
  65. return count == 0;
  66. }
  67.  
  68. // 返回“所有key”的枚举对象
  69. public synchronized Enumeration<K> keys() {
  70. return this.<K>getEnumeration(KEYS);
  71. }
  72.  
  73. // 返回“所有value”的枚举对象
  74. public synchronized Enumeration<V> elements() {
  75. return this.<V>getEnumeration(VALUES);
  76. }
  77.  
  78. // 判断Hashtable是否包含“值(value)”
  79. public synchronized boolean contains(Object value) {
  80. // Hashtable中“键值对”的value不能是null,
  81. // 若是null的话,抛出异常!
  82. if (value == null) {
  83. throw new NullPointerException();
  84. }
  85.  
  86. // 从后向前遍历table数组中的元素(Entry)
  87. // 对于每个Entry(单向链表),逐个遍历,判断节点的值是否等于value
  88. Entry tab[] = table;
  89. for (int i = tab.length ; i-- > 0 ;) {
  90. for (Entry<K,V> e = tab[i] ; e != null ; e = e.next) {
  91. if (e.value.equals(value)) {
  92. return true;
  93. }
  94. }
  95. }
  96. return false;
  97. }
  98.  
  99. public boolean containsValue(Object value) {
  100. return contains(value);
  101. }
  102.  
  103. // 判断Hashtable是否包含key
  104. public synchronized boolean containsKey(Object key) {
  105. Entry tab[] = table;
  106. int hash = key.hashCode();
  107. // 计算索引值,
  108. // % tab.length 的目的是防止数据越界
  109. int index = (hash & 0x7FFFFFFF) % tab.length;
  110. // 找到“key对应的Entry(链表)”,然后在链表中找出“哈希值”和“键值”与key都相等的元素
  111. for (Entry<K,V> e = tab[index] ; e != null ; e = e.next) {
  112. if ((e.hash == hash) && e.key.equals(key)) {
  113. return true;
  114. }
  115. }
  116. return false;
  117. }
  118.  
  119. // 返回key对应的value,没有的话返回null
  120. public synchronized V get(Object key) {
  121. Entry tab[] = table;
  122. int hash = key.hashCode();
  123. // 计算索引值,
  124. int index = (hash & 0x7FFFFFFF) % tab.length;
  125. // 找到“key对应的Entry(链表)”,然后在链表中找出“哈希值”和“键值”与key都相等的元素
  126. for (Entry<K,V> e = tab[index] ; e != null ; e = e.next) {
  127. if ((e.hash == hash) && e.key.equals(key)) {
  128. return e.value;
  129. }
  130. }
  131. return null;
  132. }
  133.  
  134. // 调整Hashtable的长度,将长度变成原来的(2倍+1)
  135. // (01) 将“旧的Entry数组”赋值给一个临时变量。
  136. // (02) 创建一个“新的Entry数组”,并赋值给“旧的Entry数组”
  137. // (03) 将“Hashtable”中的全部元素依次添加到“新的Entry数组”中
  138. protected void rehash() {
  139. int oldCapacity = table.length;
  140. Entry[] oldMap = table;
  141.  
  142. int newCapacity = oldCapacity * 2 + 1;
  143. Entry[] newMap = new Entry[newCapacity];
  144.  
  145. modCount++;
  146. threshold = (int)(newCapacity * loadFactor);
  147. table = newMap;
  148.  
  149. for (int i = oldCapacity ; i-- > 0 ;) {
  150. for (Entry<K,V> old = oldMap[i] ; old != null ; ) {
  151. Entry<K,V> e = old;
  152. old = old.next;
  153.  
  154. int index = (e.hash & 0x7FFFFFFF) % newCapacity;
  155. e.next = newMap[index];
  156. newMap[index] = e;
  157. }
  158. }
  159. }
  160.  
  161. // 将“key-value”添加到Hashtable中
  162. public synchronized V put(K key, V value) {
  163. // Hashtable中不能插入value为null的元素!!!
  164. if (value == null) {
  165. throw new NullPointerException();
  166. }
  167.  
  168. // 若“Hashtable中已存在键为key的键值对”,
  169. // 则用“新的value”替换“旧的value”
  170. Entry tab[] = table;
  171. int hash = key.hashCode();
  172. int index = (hash & 0x7FFFFFFF) % tab.length;
  173. for (Entry<K,V> e = tab[index] ; e != null ; e = e.next) {
  174. if ((e.hash == hash) && e.key.equals(key)) {
  175. V old = e.value;
  176. e.value = value;
  177. return old;
  178. }
  179. }
  180.  
  181. // 若“Hashtable中不存在键为key的键值对”,
  182. // (01) 将“修改统计数”+1
  183. modCount++;
  184. // (02) 若“Hashtable实际容量” >= “阈值”(阈值 = 总容量 * 加载因子)
  185. // 则调整Hashtable的大小
  186. if (count >= threshold) {
  187. // Rehash the table if the threshold is exceeded
  188. rehash();
  189.  
  190. tab = table;
  191. index = (hash & 0x7FFFFFFF) % tab.length;
  192. }
  193.  
  194. // (03) 将“Hashtable中index”位置的Entry(链表)保存到e中
  195. Entry<K,V> e = tab[index];
  196. // (04) 创建“新的Entry节点”,并将“新的Entry”插入“Hashtable的index位置”,并设置e为“新的Entry”的下一个元素(即“新Entry”为链表表头)。
  197. tab[index] = new Entry<K,V>(hash, key, value, e);
  198. // (05) 将“Hashtable的实际容量”+1
  199. count++;
  200. return null;
  201. }
  202.  
  203. // 删除Hashtable中键为key的元素
  204. public synchronized V remove(Object key) {
  205. Entry tab[] = table;
  206. int hash = key.hashCode();
  207. int index = (hash & 0x7FFFFFFF) % tab.length;
  208. // 找到“key对应的Entry(链表)”
  209. // 然后在链表中找出要删除的节点,并删除该节点。
  210. for (Entry<K,V> e = tab[index], prev = null ; e != null ; prev = e, e = e.next) {
  211. if ((e.hash == hash) && e.key.equals(key)) {
  212. modCount++;
  213. if (prev != null) {
  214. prev.next = e.next;
  215. } else {
  216. tab[index] = e.next;
  217. }
  218. count--;
  219. V oldValue = e.value;
  220. e.value = null;
  221. return oldValue;
  222. }
  223. }
  224. return null;
  225. }
  226.  
  227. // 将“Map(t)”的中全部元素逐一添加到Hashtable中
  228. public synchronized void putAll(Map<? extends K, ? extends V> t) {
  229. for (Map.Entry<? extends K, ? extends V> e : t.entrySet())
  230. put(e.getKey(), e.getValue());
  231. }
  232.  
  233. // 清空Hashtable
  234. // 将Hashtable的table数组的值全部设为null
  235. public synchronized void clear() {
  236. Entry tab[] = table;
  237. modCount++;
  238. for (int index = tab.length; --index >= 0; )
  239. tab[index] = null;
  240. count = 0;
  241. }
  242.  
  243. // 克隆一个Hashtable,并以Object的形式返回。
  244. public synchronized Object clone() {
  245. try {
  246. Hashtable<K,V> t = (Hashtable<K,V>) super.clone();
  247. t.table = new Entry[table.length];
  248. for (int i = table.length ; i-- > 0 ; ) {
  249. t.table[i] = (table[i] != null)
  250. ? (Entry<K,V>) table[i].clone() : null;
  251. }
  252. t.keySet = null;
  253. t.entrySet = null;
  254. t.values = null;
  255. t.modCount = 0;
  256. return t;
  257. } catch (CloneNotSupportedException e) {
  258. // this shouldn't happen, since we are Cloneable
  259. throw new InternalError();
  260. }
  261. }
  262.  
  263. public synchronized String toString() {
  264. int max = size() - 1;
  265. if (max == -1)
  266. return "{}";
  267.  
  268. StringBuilder sb = new StringBuilder();
  269. Iterator<Map.Entry<K,V>> it = entrySet().iterator();
  270.  
  271. sb.append('{');
  272. for (int i = 0; ; i++) {
  273. Map.Entry<K,V> e = it.next();
  274. K key = e.getKey();
  275. V value = e.getValue();
  276. sb.append(key == this ? "(this Map)" : key.toString());
  277. sb.append('=');
  278. sb.append(value == this ? "(this Map)" : value.toString());
  279.  
  280. if (i == max)
  281. return sb.append('}').toString();
  282. sb.append(", ");
  283. }
  284. }
  285.  
  286. // 获取Hashtable的枚举类对象
  287. // 若Hashtable的实际大小为0,则返回“空枚举类”对象;
  288. // 否则,返回正常的Enumerator的对象。(Enumerator实现了迭代器和枚举两个接口)
  289. private <T> Enumeration<T> getEnumeration(int type) {
  290. if (count == 0) {
  291. return (Enumeration<T>)emptyEnumerator;
  292. } else {
  293. return new Enumerator<T>(type, false);
  294. }
  295. }
  296.  
  297. // 获取Hashtable的迭代器
  298. // 若Hashtable的实际大小为0,则返回“空迭代器”对象;
  299. // 否则,返回正常的Enumerator的对象。(Enumerator实现了迭代器和枚举两个接口)
  300. private <T> Iterator<T> getIterator(int type) {
  301. if (count == 0) {
  302. return (Iterator<T>) emptyIterator;
  303. } else {
  304. return new Enumerator<T>(type, true);
  305. }
  306. }
  307.  
  308. // Hashtable的“key的集合”。它是一个Set,意味着没有重复元素
  309. private transient volatile Set<K> keySet = null;
  310. // Hashtable的“key-value的集合”。它是一个Set,意味着没有重复元素
  311. private transient volatile Set<Map.Entry<K,V>> entrySet = null;
  312. // Hashtable的“key-value的集合”。它是一个Collection,意味着可以有重复元素
  313. private transient volatile Collection<V> values = null;
  314.  
  315. // 返回一个被synchronizedSet封装后的KeySet对象
  316. // synchronizedSet封装的目的是对KeySet的所有方法都添加synchronized,实现多线程同步
  317. public Set<K> keySet() {
  318. if (keySet == null)
  319. keySet = Collections.synchronizedSet(new KeySet(), this);
  320. return keySet;
  321. }
  322.  
  323. // Hashtable的Key的Set集合。
  324. // KeySet继承于AbstractSet,所以,KeySet中的元素没有重复的。
  325. private class KeySet extends AbstractSet<K> {
  326. public Iterator<K> iterator() {
  327. return getIterator(KEYS);
  328. }
  329. public int size() {
  330. return count;
  331. }
  332. public boolean contains(Object o) {
  333. return containsKey(o);
  334. }
  335. public boolean remove(Object o) {
  336. return Hashtable.this.remove(o) != null;
  337. }
  338. public void clear() {
  339. Hashtable.this.clear();
  340. }
  341. }
  342.  
  343. // 返回一个被synchronizedSet封装后的EntrySet对象
  344. // synchronizedSet封装的目的是对EntrySet的所有方法都添加synchronized,实现多线程同步
  345. public Set<Map.Entry<K,V>> entrySet() {
  346. if (entrySet==null)
  347. entrySet = Collections.synchronizedSet(new EntrySet(), this);
  348. return entrySet;
  349. }
  350.  
  351. // Hashtable的Entry的Set集合。
  352. // EntrySet继承于AbstractSet,所以,EntrySet中的元素没有重复的。
  353. private class EntrySet extends AbstractSet<Map.Entry<K,V>> {
  354. public Iterator<Map.Entry<K,V>> iterator() {
  355. return getIterator(ENTRIES);
  356. }
  357.  
  358. public boolean add(Map.Entry<K,V> o) {
  359. return super.add(o);
  360. }
  361.  
  362. // 查找EntrySet中是否包含指定的Object(o)
  363. // 首先,在table中找到o对应的Entry(Entry是一个单向链表)
  364. // 然后,查找Entry链表中是否存在Object
  365. public boolean contains(Object o) {
  366. if (!(o instanceof Map.Entry))
  367. return false;
  368. Map.Entry entry = (Map.Entry)o;
  369. Object key = entry.getKey();
  370. Entry[] tab = table;
  371. int hash = key.hashCode();
  372. int index = (hash & 0x7FFFFFFF) % tab.length;
  373.  
  374. for (Entry e = tab[index]; e != null; e = e.next)
  375. if (e.hash==hash && e.equals(entry))
  376. return true;
  377. return false;
  378. }
  379.  
  380. // 删除元素Object(o)
  381. // 首先,在table中找到o对应的Entry(Entry是一个单向链表)
  382. // 然后,删除链表中的元素Object
  383. public boolean remove(Object o) {
  384. if (!(o instanceof Map.Entry))
  385. return false;
  386. Map.Entry<K,V> entry = (Map.Entry<K,V>) o;
  387. K key = entry.getKey();
  388. Entry[] tab = table;
  389. int hash = key.hashCode();
  390. int index = (hash & 0x7FFFFFFF) % tab.length;
  391.  
  392. for (Entry<K,V> e = tab[index], prev = null; e != null;
  393. prev = e, e = e.next) {
  394. if (e.hash==hash && e.equals(entry)) {
  395. modCount++;
  396. if (prev != null)
  397. prev.next = e.next;
  398. else
  399. tab[index] = e.next;
  400.  
  401. count--;
  402. e.value = null;
  403. return true;
  404. }
  405. }
  406. return false;
  407. }
  408.  
  409. public int size() {
  410. return count;
  411. }
  412.  
  413. public void clear() {
  414. Hashtable.this.clear();
  415. }
  416. }
  417.  
  418. // 返回一个被synchronizedCollection封装后的ValueCollection对象
  419. // synchronizedCollection封装的目的是对ValueCollection的所有方法都添加synchronized,实现多线程同步
  420. public Collection<V> values() {
  421. if (values==null)
  422. values = Collections.synchronizedCollection(new ValueCollection(),
  423. this);
  424. return values;
  425. }
  426.  
  427. // Hashtable的value的Collection集合。
  428. // ValueCollection继承于AbstractCollection,所以,ValueCollection中的元素可以重复的。
  429. private class ValueCollection extends AbstractCollection<V> {
  430. public Iterator<V> iterator() {
  431. return getIterator(VALUES);
  432. }
  433. public int size() {
  434. return count;
  435. }
  436. public boolean contains(Object o) {
  437. return containsValue(o);
  438. }
  439. public void clear() {
  440. Hashtable.this.clear();
  441. }
  442. }
  443.  
  444. // 重写equals()函数
  445. // 若两个Hashtable的所有key-value键值对都相等,则判断它们两个相等
  446. public synchronized boolean equals(Object o) {
  447. if (o == this)
  448. return true;
  449.  
  450. if (!(o instanceof Map))
  451. return false;
  452. Map<K,V> t = (Map<K,V>) o;
  453. if (t.size() != size())
  454. return false;
  455.  
  456. try {
  457. // 通过迭代器依次取出当前Hashtable的key-value键值对
  458. // 并判断该键值对,存在于Hashtable(o)中。
  459. // 若不存在,则立即返回false;否则,遍历完“当前Hashtable”并返回true。
  460. Iterator<Map.Entry<K,V>> i = entrySet().iterator();
  461. while (i.hasNext()) {
  462. Map.Entry<K,V> e = i.next();
  463. K key = e.getKey();
  464. V value = e.getValue();
  465. if (value == null) {
  466. if (!(t.get(key)==null && t.containsKey(key)))
  467. return false;
  468. } else {
  469. if (!value.equals(t.get(key)))
  470. return false;
  471. }
  472. }
  473. } catch (ClassCastException unused) {
  474. return false;
  475. } catch (NullPointerException unused) {
  476. return false;
  477. }
  478.  
  479. return true;
  480. }
  481.  
  482. // 计算Hashtable的哈希值
  483. // 若 Hashtable的实际大小为0 或者 加载因子<0,则返回0。
  484. // 否则,返回“Hashtable中的每个Entry的key和value的异或值 的总和”。
  485. public synchronized int hashCode() {
  486. int h = 0;
  487. if (count == 0 || loadFactor < 0)
  488. return h; // Returns zero
  489.  
  490. loadFactor = -loadFactor; // Mark hashCode computation in progress
  491. Entry[] tab = table;
  492. for (int i = 0; i < tab.length; i++)
  493. for (Entry e = tab[i]; e != null; e = e.next)
  494. h += e.key.hashCode() ^ e.value.hashCode();
  495. loadFactor = -loadFactor; // Mark hashCode computation complete
  496.  
  497. return h;
  498. }
  499.  
  500. // java.io.Serializable的写入函数
  501. // 将Hashtable的“总的容量,实际容量,所有的Entry”都写入到输出流中
  502. private synchronized void writeObject(java.io.ObjectOutputStream s)
  503. throws IOException
  504. {
  505. // Write out the length, threshold, loadfactor
  506. s.defaultWriteObject();
  507.  
  508. // Write out length, count of elements and then the key/value objects
  509. s.writeInt(table.length);
  510. s.writeInt(count);
  511. for (int index = table.length-1; index >= 0; index--) {
  512. Entry entry = table[index];
  513.  
  514. while (entry != null) {
  515. s.writeObject(entry.key);
  516. s.writeObject(entry.value);
  517. entry = entry.next;
  518. }
  519. }
  520. }
  521.  
  522. // java.io.Serializable的读取函数:根据写入方式读出
  523. // 将Hashtable的“总的容量,实际容量,所有的Entry”依次读出
  524. private void readObject(java.io.ObjectInputStream s)
  525. throws IOException, ClassNotFoundException
  526. {
  527. // Read in the length, threshold, and loadfactor
  528. s.defaultReadObject();
  529.  
  530. // Read the original length of the array and number of elements
  531. int origlength = s.readInt();
  532. int elements = s.readInt();
  533.  
  534. // Compute new size with a bit of room 5% to grow but
  535. // no larger than the original size. Make the length
  536. // odd if it's large enough, this helps distribute the entries.
  537. // Guard against the length ending up zero, that's not valid.
  538. int length = (int)(elements * loadFactor) + (elements / 20) + 3;
  539. if (length > elements && (length & 1) == 0)
  540. length--;
  541. if (origlength > 0 && length > origlength)
  542. length = origlength;
  543.  
  544. Entry[] table = new Entry[length];
  545. count = 0;
  546.  
  547. // Read the number of elements and then all the key/value objects
  548. for (; elements > 0; elements--) {
  549. K key = (K)s.readObject();
  550. V value = (V)s.readObject();
  551. // synch could be eliminated for performance
  552. reconstitutionPut(table, key, value);
  553. }
  554. this.table = table;
  555. }
  556.  
  557. private void reconstitutionPut(Entry[] tab, K key, V value)
  558. throws StreamCorruptedException
  559. {
  560. if (value == null) {
  561. throw new java.io.StreamCorruptedException();
  562. }
  563. // Makes sure the key is not already in the hashtable.
  564. // This should not happen in deserialized version.
  565. int hash = key.hashCode();
  566. int index = (hash & 0x7FFFFFFF) % tab.length;
  567. for (Entry<K,V> e = tab[index] ; e != null ; e = e.next) {
  568. if ((e.hash == hash) && e.key.equals(key)) {
  569. throw new java.io.StreamCorruptedException();
  570. }
  571. }
  572. // Creates the new entry.
  573. Entry<K,V> e = tab[index];
  574. tab[index] = new Entry<K,V>(hash, key, value, e);
  575. count++;
  576. }
  577.  
  578. // Hashtable的Entry节点,它本质上是一个单向链表。
  579. // 也因此,我们才能推断出Hashtable是由拉链法实现的散列表
  580. private static class Entry<K,V> implements Map.Entry<K,V> {
  581. // 哈希值
  582. int hash;
  583. K key;
  584. V value;
  585. // 指向的下一个Entry,即链表的下一个节点
  586. Entry<K,V> next;
  587.  
  588. // 构造函数
  589. protected Entry(int hash, K key, V value, Entry<K,V> next) {
  590. this.hash = hash;
  591. this.key = key;
  592. this.value = value;
  593. this.next = next;
  594. }
  595.  
  596. protected Object clone() {
  597. return new Entry<K,V>(hash, key, value,
  598. (next==null ? null : (Entry<K,V>) next.clone()));
  599. }
  600.  
  601. public K getKey() {
  602. return key;
  603. }
  604.  
  605. public V getValue() {
  606. return value;
  607. }
  608.  
  609. // 设置value。若value是null,则抛出异常。
  610. public V setValue(V value) {
  611. if (value == null)
  612. throw new NullPointerException();
  613.  
  614. V oldValue = this.value;
  615. this.value = value;
  616. return oldValue;
  617. }
  618.  
  619. // 覆盖equals()方法,判断两个Entry是否相等。
  620. // 若两个Entry的key和value都相等,则认为它们相等。
  621. public boolean equals(Object o) {
  622. if (!(o instanceof Map.Entry))
  623. return false;
  624. Map.Entry e = (Map.Entry)o;
  625.  
  626. return (key==null ? e.getKey()==null : key.equals(e.getKey())) &&
  627. (value==null ? e.getValue()==null : value.equals(e.getValue()));
  628. }
  629.  
  630. public int hashCode() {
  631. return hash ^ (value==null ? 0 : value.hashCode());
  632. }
  633.  
  634. public String toString() {
  635. return key.toString()+"="+value.toString();
  636. }
  637. }
  638.  
  639. private static final int KEYS = 0;
  640. private static final int VALUES = 1;
  641. private static final int ENTRIES = 2;
  642.  
  643. // Enumerator的作用是同时提供“通过elements()枚举Hashtable”和“通过entrySet()迭代Hashtable”两种遍历方式,因为它同时实现了Enumeration和Iterator两个接口。
  644. private class Enumerator<T> implements Enumeration<T>, Iterator<T> {
  645. // 指向Hashtable的table
  646. Entry[] table = Hashtable.this.table;
  647. // table数组的长度,作为逆向遍历的起始位置
  648. int index = table.length;
  649. Entry<K,V> entry = null;
  650. Entry<K,V> lastReturned = null;
  651. int type;
  652.  
  653. // Enumerator是 “迭代器(Iterator)” 还是 “枚举类(Enumeration)”的标志
  654. // iterator为true,表示它是迭代器;否则,是枚举类。
  655. boolean iterator;
  656.  
  657. // 在将Enumerator当作迭代器使用时会用到,用来实现fail-fast机制。
  658. protected int expectedModCount = modCount;
  659.  
  660. Enumerator(int type, boolean iterator) {
  661. this.type = type;
  662. this.iterator = iterator;
  663. }
  664.  
  665. // 从遍历table的数组的末尾向前查找,直到找到不为null的Entry。
  666. public boolean hasMoreElements() {
  667. Entry<K,V> e = entry;
  668. int i = index;
  669. Entry[] t = table;
  670. /* Use locals for faster loop iteration */
  671. while (e == null && i > 0) {
  672. e = t[--i];
  673. }
  674. entry = e;
  675. index = i;
  676. return e != null;
  677. }
  678.  
  679. // 获取下一个元素
  680. // 注意:从hasMoreElements() 和nextElement() 可以看出“Hashtable的elements()遍历方式”
  681. // 首先,从后向前的遍历table数组。table数组的每个节点都是一个单向链表(Entry)。
  682. // 然后,依次向后遍历单向链表Entry。
  683. public T nextElement() {
  684. Entry<K,V> et = entry;
  685. int i = index;
  686. Entry[] t = table;
  687. /* Use locals for faster loop iteration */
  688. while (et == null && i > 0) {
  689. et = t[--i];
  690. }
  691. entry = et;
  692. index = i;
  693. if (et != null) {
  694. Entry<K,V> e = lastReturned = entry;
  695. entry = e.next;
  696. return type == KEYS ? (T)e.key : (type == VALUES ? (T)e.value : (T)e);
  697. }
  698. throw new NoSuchElementException("Hashtable Enumerator");
  699. }
  700.  
  701. // 迭代器Iterator的判断是否存在下一个元素
  702. // 实际上,它是调用的hasMoreElements()
  703. public boolean hasNext() {
  704. return hasMoreElements();
  705. }
  706.  
  707. // 迭代器获取下一个元素
  708. // 实际上,它是调用的nextElement()
  709. public T next() {
  710. if (modCount != expectedModCount)
  711. throw new ConcurrentModificationException();
  712. return nextElement();
  713. }
  714.  
  715. // 迭代器的remove()接口。
  716. // 首先,它在table数组中找出要删除元素所在的Entry,
  717. // 然后,删除单向链表Entry中的元素。
  718. public void remove() {
  719. if (!iterator)
  720. throw new UnsupportedOperationException();
  721. if (lastReturned == null)
  722. throw new IllegalStateException("Hashtable Enumerator");
  723. if (modCount != expectedModCount)
  724. throw new ConcurrentModificationException();
  725.  
  726. synchronized(Hashtable.this) {
  727. Entry[] tab = Hashtable.this.table;
  728. int index = (lastReturned.hash & 0x7FFFFFFF) % tab.length;
  729.  
  730. for (Entry<K,V> e = tab[index], prev = null; e != null;
  731. prev = e, e = e.next) {
  732. if (e == lastReturned) {
  733. modCount++;
  734. expectedModCount++;
  735. if (prev == null)
  736. tab[index] = e.next;
  737. else
  738. prev.next = e.next;
  739. count--;
  740. lastReturned = null;
  741. return;
  742. }
  743. }
  744. throw new ConcurrentModificationException();
  745. }
  746. }
  747. }
  748.  
  749. private static Enumeration emptyEnumerator = new EmptyEnumerator();
  750. private static Iterator emptyIterator = new EmptyIterator();
  751.  
  752. // 空枚举类
  753. // 当Hashtable的实际大小为0;此时,又要通过Enumeration遍历Hashtable时,返回的是“空枚举类”的对象。
  754. private static class EmptyEnumerator implements Enumeration<Object> {
  755.  
  756. EmptyEnumerator() {
  757. }
  758.  
  759. // 空枚举类的hasMoreElements() 始终返回false
  760. public boolean hasMoreElements() {
  761. return false;
  762. }
  763.  
  764. // 空枚举类的nextElement() 抛出异常
  765. public Object nextElement() {
  766. throw new NoSuchElementException("Hashtable Enumerator");
  767. }
  768. }
  769.  
  770. // 空迭代器
  771. // 当Hashtable的实际大小为0;此时,又要通过迭代器遍历Hashtable时,返回的是“空迭代器”的对象。
  772. private static class EmptyIterator implements Iterator<Object> {
  773.  
  774. EmptyIterator() {
  775. }
  776.  
  777. public boolean hasNext() {
  778. return false;
  779. }
  780.  
  781. public Object next() {
  782. throw new NoSuchElementException("Hashtable Iterator");
  783. }
  784.  
  785. public void remove() {
  786. throw new IllegalStateException("Hashtable Iterator");
  787. }
  788.  
  789. }
  790. }

附:带官方Javadoc的Hashtable源码解析(从其中java.util.function、ThreadLocalRandom等导入可以看出,这份源码实际来自JDK 1.8)

  1. /*
  2. * Copyright (c) 1994, 2013, Oracle and/or its affiliates. All rights reserved.
  3. * ORACLE PROPRIETARY/CONFIDENTIAL. Use is subject to license terms.
  4. */
  25.  
  26. package java.util;
  27.  
  28. import java.io.*;
  29. import java.util.concurrent.ThreadLocalRandom;
  30. import java.util.function.BiConsumer;
  31. import java.util.function.Function;
  32. import java.util.function.BiFunction;
  33.  
  34. /**
  35. * This class implements a hash table, which maps keys to values. Any
  36. * non-<code>null</code> object can be used as a key or as a value. <p>
  37. *
  38. * To successfully store and retrieve objects from a hashtable, the
  39. * objects used as keys must implement the <code>hashCode</code>
  40. * method and the <code>equals</code> method. <p>
  41. *
  42. * An instance of <code>Hashtable</code> has two parameters that affect its
  43. * performance: <i>initial capacity</i> and <i>load factor</i>. The
  44. * <i>capacity</i> is the number of <i>buckets</i> in the hash table, and the
  45. * <i>initial capacity</i> is simply the capacity at the time the hash table
  46. * is created. Note that the hash table is <i>open</i>: in the case of a "hash
  47. * collision", a single bucket stores multiple entries, which must be searched
  48. * sequentially. The <i>load factor</i> is a measure of how full the hash
  49. * table is allowed to get before its capacity is automatically increased.
  50. * The initial capacity and load factor parameters are merely hints to
  51. * the implementation. The exact details as to when and whether the rehash
  52. * method is invoked are implementation-dependent.<p>
  53. *
  54. * Generally, the default load factor (.75) offers a good tradeoff between
  55. * time and space costs. Higher values decrease the space overhead but
  56. * increase the time cost to look up an entry (which is reflected in most
  57. * <tt>Hashtable</tt> operations, including <tt>get</tt> and <tt>put</tt>).<p>
  58. *
  59. * The initial capacity controls a tradeoff between wasted space and the
  60. * need for <code>rehash</code> operations, which are time-consuming.
  61. * No <code>rehash</code> operations will <i>ever</i> occur if the initial
  62. * capacity is greater than the maximum number of entries the
  63. * <tt>Hashtable</tt> will contain divided by its load factor. However,
  64. * setting the initial capacity too high can waste space.<p>
  65. *
  66. * If many entries are to be made into a <code>Hashtable</code>,
  67. * creating it with a sufficiently large capacity may allow the
  68. * entries to be inserted more efficiently than letting it perform
  69. * automatic rehashing as needed to grow the table. <p>
  70. *
  71. * This example creates a hashtable of numbers. It uses the names of
  72. * the numbers as keys:
  73. * <pre> {@code
  74. * Hashtable<String, Integer> numbers
  75. * = new Hashtable<String, Integer>();
  76. * numbers.put("one", 1);
  77. * numbers.put("two", 2);
  78. * numbers.put("three", 3);}</pre>
  79. *
  80. * <p>To retrieve a number, use the following code:
  81. * <pre> {@code
  82. * Integer n = numbers.get("two");
  83. * if (n != null) {
  84. * System.out.println("two = " + n);
  85. * }}</pre>
  86. *
  87. * <p>The iterators returned by the <tt>iterator</tt> method of the collections
  88. * returned by all of this class's "collection view methods" are
  89. * <em>fail-fast</em>: if the Hashtable is structurally modified at any time
  90. * after the iterator is created, in any way except through the iterator's own
  91. * <tt>remove</tt> method, the iterator will throw a {@link
  92. * ConcurrentModificationException}. Thus, in the face of concurrent
  93. * modification, the iterator fails quickly and cleanly, rather than risking
  94. * arbitrary, non-deterministic behavior at an undetermined time in the future.
  95. * The Enumerations returned by Hashtable's keys and elements methods are
  96. * <em>not</em> fail-fast.
  97. *
  98. * <p>Note that the fail-fast behavior of an iterator cannot be guaranteed
  99. * as it is, generally speaking, impossible to make any hard guarantees in the
  100. * presence of unsynchronized concurrent modification. Fail-fast iterators
  101. * throw <tt>ConcurrentModificationException</tt> on a best-effort basis.
  102. * Therefore, it would be wrong to write a program that depended on this
  103. * exception for its correctness: <i>the fail-fast behavior of iterators
  104. * should be used only to detect bugs.</i>
  105. *
  106. * <p>As of the Java 2 platform v1.2, this class was retrofitted to
  107. * implement the {@link Map} interface, making it a member of the
  108. * <a href="{@docRoot}/../technotes/guides/collections/index.html">
  109. *
  110. * Java Collections Framework</a>. Unlike the new collection
  111. * implementations, {@code Hashtable} is synchronized. If a
  112. * thread-safe implementation is not needed, it is recommended to use
  113. * {@link HashMap} in place of {@code Hashtable}. If a thread-safe
  114. * highly-concurrent implementation is desired, then it is recommended
  115. * to use {@link java.util.concurrent.ConcurrentHashMap} in place of
  116. * {@code Hashtable}.
  117. *
  118. * @author Arthur van Hoff
  119. * @author Josh Bloch
  120. * @author Neal Gafter
  121. * @see Object#equals(java.lang.Object)
  122. * @see Object#hashCode()
  123. * @see Hashtable#rehash()
  124. * @see Collection
  125. * @see Map
  126. * @see HashMap
  127. * @see TreeMap
  128. * @since JDK1.0
  129. */
  130. public class Hashtable<K,V>
  131. extends Dictionary<K,V>
  132. implements Map<K,V>, Cloneable, java.io.Serializable {
  133.  
  134. /**
  135. * The hash table data.
  136. */
  137. private transient Entry<?,?>[] table;
  138.  
  139. /**
  140. * The total number of entries in the hash table.
  141. */
  142. private transient int count;
  143.  
  144. /**
  145. * The table is rehashed when its size exceeds this threshold. (The
  146. * value of this field is (int)(capacity * loadFactor).)
  147. *
  148. * @serial
  149. */
  150. private int threshold;
  151.  
  152. /**
  153. * The load factor for the hashtable.
  154. *
  155. * @serial
  156. */
  157. private float loadFactor;
  158.  
  159. /**
  160. * The number of times this Hashtable has been structurally modified
  161. * Structural modifications are those that change the number of entries in
  162. * the Hashtable or otherwise modify its internal structure (e.g.,
  163. * rehash). This field is used to make iterators on Collection-views of
  164. * the Hashtable fail-fast. (See ConcurrentModificationException).
  165. */
  166. private transient int modCount = 0;
  167.  
  168. /** use serialVersionUID from JDK 1.0.2 for interoperability */
  169. private static final long serialVersionUID = 1421746759512286392L;
  170.  
  171. /**
  172. * Constructs a new, empty hashtable with the specified initial
  173. * capacity and the specified load factor.
  174. *
  175. * @param initialCapacity the initial capacity of the hashtable.
  176. * @param loadFactor the load factor of the hashtable.
  177. * @exception IllegalArgumentException if the initial capacity is less
  178. * than zero, or if the load factor is nonpositive.
  179. */
  180. public Hashtable(int initialCapacity, float loadFactor) {
  181. if (initialCapacity < 0)
  182. throw new IllegalArgumentException("Illegal Capacity: "+
  183. initialCapacity);
  184. if (loadFactor <= 0 || Float.isNaN(loadFactor))
  185. throw new IllegalArgumentException("Illegal Load: "+loadFactor);
  186.  
  187. if (initialCapacity==0)
  188. initialCapacity = 1;
  189. this.loadFactor = loadFactor;
  190. table = new Entry<?,?>[initialCapacity];
  191. threshold = (int)Math.min(initialCapacity * loadFactor, MAX_ARRAY_SIZE + 1);
  192. }
  193.  
  194. /**
  195. * Constructs a new, empty hashtable with the specified initial capacity
  196. * and default load factor (0.75).
  197. *
  198. * @param initialCapacity the initial capacity of the hashtable.
  199. * @exception IllegalArgumentException if the initial capacity is less
  200. * than zero.
  201. */
  202. public Hashtable(int initialCapacity) {
  203. this(initialCapacity, 0.75f);
  204. }
  205.  
  206. /**
  207. * Constructs a new, empty hashtable with a default initial capacity (11)
  208. * and load factor (0.75).
  209. */
  210. public Hashtable() {
  211. this(11, 0.75f);
  212. }
  213.  
  214. /**
  215. * Constructs a new hashtable with the same mappings as the given
  216. * Map. The hashtable is created with an initial capacity sufficient to
  217. * hold the mappings in the given Map and a default load factor (0.75).
  218. *
  219. * @param t the map whose mappings are to be placed in this map.
  220. * @throws NullPointerException if the specified map is null.
  221. * @since 1.2
  222. */
  223. public Hashtable(Map<? extends K, ? extends V> t) {
  224. this(Math.max(2*t.size(), 11), 0.75f);
  225. putAll(t);
  226. }
  227.  
  228. /**
  229. * Returns the number of keys in this hashtable.
  230. *
  231. * @return the number of keys in this hashtable.
  232. */
  233. public synchronized int size() {
  234. return count;
  235. }
  236.  
  237. /**
  238. * Tests if this hashtable maps no keys to values.
  239. *
  240. * @return <code>true</code> if this hashtable maps no keys to values;
  241. * <code>false</code> otherwise.
  242. */
  243. public synchronized boolean isEmpty() {
  244. return count == 0;
  245. }
  246.  
  247. /**
  248. * Returns an enumeration of the keys in this hashtable.
  249. *
  250. * @return an enumeration of the keys in this hashtable.
  251. * @see Enumeration
  252. * @see #elements()
  253. * @see #keySet()
  254. * @see Map
  255. */
  256. public synchronized Enumeration<K> keys() {
  257. return this.<K>getEnumeration(KEYS);
  258. }
  259.  
  260. /**
  261. * Returns an enumeration of the values in this hashtable.
  262. * Use the Enumeration methods on the returned object to fetch the elements
  263. * sequentially.
  264. *
  265. * @return an enumeration of the values in this hashtable.
  266. * @see java.util.Enumeration
  267. * @see #keys()
  268. * @see #values()
  269. * @see Map
  270. */
  271. public synchronized Enumeration<V> elements() {
  272. return this.<V>getEnumeration(VALUES);
  273. }
  274.  
  275. /**
  276. * Tests if some key maps into the specified value in this hashtable.
  277. * This operation is more expensive than the {@link #containsKey
  278. * containsKey} method.
  279. *
  280. * <p>Note that this method is identical in functionality to
  281. * {@link #containsValue containsValue}, (which is part of the
  282. * {@link Map} interface in the collections framework).
  283. *
  284. * @param value a value to search for
  285. * @return <code>true</code> if and only if some key maps to the
  286. * <code>value</code> argument in this hashtable as
  287. * determined by the <tt>equals</tt> method;
  288. * <code>false</code> otherwise.
  289. * @exception NullPointerException if the value is <code>null</code>
  290. */
  291. public synchronized boolean contains(Object value) {
  292. if (value == null) {
  293. throw new NullPointerException();
  294. }
  295.  
  296. Entry<?,?> tab[] = table;
  297. for (int i = tab.length ; i-- > 0 ;) {
  298. for (Entry<?,?> e = tab[i] ; e != null ; e = e.next) {
  299. if (e.value.equals(value)) {
  300. return true;
  301. }
  302. }
  303. }
  304. return false;
  305. }
  306.  
  307. /**
  308. * Returns true if this hashtable maps one or more keys to this value.
  309. *
  310. * <p>Note that this method is identical in functionality to {@link
  311. * #contains contains} (which predates the {@link Map} interface).
  312. *
  313. * @param value value whose presence in this hashtable is to be tested
  314. * @return <tt>true</tt> if this map maps one or more keys to the
  315. * specified value
  316. * @throws NullPointerException if the value is <code>null</code>
  317. * @since 1.2
  318. */
  319. public boolean containsValue(Object value) {
  320. return contains(value);
  321. }
  322.  
  323. /**
  324. * Tests if the specified object is a key in this hashtable.
  325. *
  326. * @param key possible key
  327. * @return <code>true</code> if and only if the specified object
  328. * is a key in this hashtable, as determined by the
  329. * <tt>equals</tt> method; <code>false</code> otherwise.
  330. * @throws NullPointerException if the key is <code>null</code>
  331. * @see #contains(Object)
  332. */
  333. public synchronized boolean containsKey(Object key) {
  334. Entry<?,?> tab[] = table;
  335. int hash = key.hashCode();
  336. int index = (hash & 0x7FFFFFFF) % tab.length;
  337. for (Entry<?,?> e = tab[index] ; e != null ; e = e.next) {
  338. if ((e.hash == hash) && e.key.equals(key)) {
  339. return true;
  340. }
  341. }
  342. return false;
  343. }
  344.  
  345. /**
  346. * Returns the value to which the specified key is mapped,
  347. * or {@code null} if this map contains no mapping for the key.
  348. *
  349. * <p>More formally, if this map contains a mapping from a key
  350. * {@code k} to a value {@code v} such that {@code (key.equals(k))},
  351. * then this method returns {@code v}; otherwise it returns
  352. * {@code null}. (There can be at most one such mapping.)
  353. *
  354. * @param key the key whose associated value is to be returned
  355. * @return the value to which the specified key is mapped, or
  356. * {@code null} if this map contains no mapping for the key
  357. * @throws NullPointerException if the specified key is null
  358. * @see #put(Object, Object)
  359. */
  360. @SuppressWarnings("unchecked")
  361. public synchronized V get(Object key) {
  362. Entry<?,?> tab[] = table;
  363. int hash = key.hashCode();
  364. int index = (hash & 0x7FFFFFFF) % tab.length;
  365. for (Entry<?,?> e = tab[index] ; e != null ; e = e.next) {
  366. if ((e.hash == hash) && e.key.equals(key)) {
  367. return (V)e.value;
  368. }
  369. }
  370. return null;
  371. }
  372.  
  373. /**
  374. * The maximum size of array to allocate.
  375. * Some VMs reserve some header words in an array.
  376. * Attempts to allocate larger arrays may result in
  377. * OutOfMemoryError: Requested array size exceeds VM limit
  378. */
  379. private static final int MAX_ARRAY_SIZE = Integer.MAX_VALUE - 8;
  380.  
  381. /**
  382. * Increases the capacity of and internally reorganizes this
  383. * hashtable, in order to accommodate and access its entries more
  384. * efficiently. This method is called automatically when the
  385. * number of keys in the hashtable exceeds this hashtable's capacity
  386. * and load factor.
  387. */
  388. @SuppressWarnings("unchecked")
  389. protected void rehash() {
  390. int oldCapacity = table.length;
  391. Entry<?,?>[] oldMap = table;
  392.  
  393. // overflow-conscious code
  394. int newCapacity = (oldCapacity << 1) + 1;
  395. if (newCapacity - MAX_ARRAY_SIZE > 0) {
  396. if (oldCapacity == MAX_ARRAY_SIZE)
  397. // Keep running with MAX_ARRAY_SIZE buckets
  398. return;
  399. newCapacity = MAX_ARRAY_SIZE;
  400. }
  401. Entry<?,?>[] newMap = new Entry<?,?>[newCapacity];
  402.  
  403. modCount++;
  404. threshold = (int)Math.min(newCapacity * loadFactor, MAX_ARRAY_SIZE + 1);
  405. table = newMap;
  406.  
  407. for (int i = oldCapacity ; i-- > 0 ;) {
  408. for (Entry<K,V> old = (Entry<K,V>)oldMap[i] ; old != null ; ) {
  409. Entry<K,V> e = old;
  410. old = old.next;
  411.  
  412. int index = (e.hash & 0x7FFFFFFF) % newCapacity;
  413. e.next = (Entry<K,V>)newMap[index];
  414. newMap[index] = e;
  415. }
  416. }
  417. }
  418.  
  419. private void addEntry(int hash, K key, V value, int index) {
  420. modCount++;
  421.  
  422. Entry<?,?> tab[] = table;
  423. if (count >= threshold) {
  424. // Rehash the table if the threshold is exceeded
  425. rehash();
  426.  
  427. tab = table;
  428. hash = key.hashCode();
  429. index = (hash & 0x7FFFFFFF) % tab.length;
  430. }
  431.  
  432. // Creates the new entry.
  433. @SuppressWarnings("unchecked")
  434. Entry<K,V> e = (Entry<K,V>) tab[index];
  435. tab[index] = new Entry<>(hash, key, value, e);
  436. count++;
  437. }
  438.  
  439. /**
  440. * Maps the specified <code>key</code> to the specified
  441. * <code>value</code> in this hashtable. Neither the key nor the
  442. * value can be <code>null</code>. <p>
  443. *
  444. * The value can be retrieved by calling the <code>get</code> method
  445. * with a key that is equal to the original key.
  446. *
  447. * @param key the hashtable key
  448. * @param value the value
  449. * @return the previous value of the specified key in this hashtable,
  450. * or <code>null</code> if it did not have one
  451. * @exception NullPointerException if the key or value is
  452. * <code>null</code>
  453. * @see Object#equals(Object)
  454. * @see #get(Object)
  455. */
  456. public synchronized V put(K key, V value) {
  457. // Make sure the value is not null
  458. if (value == null) {
  459. throw new NullPointerException();
  460. }
  461.  
  462. // Makes sure the key is not already in the hashtable.
  463. Entry<?,?> tab[] = table;
  464. int hash = key.hashCode();
  465. int index = (hash & 0x7FFFFFFF) % tab.length;
  466. @SuppressWarnings("unchecked")
  467. Entry<K,V> entry = (Entry<K,V>)tab[index];
  468. for(; entry != null ; entry = entry.next) {
  469. if ((entry.hash == hash) && entry.key.equals(key)) {
  470. V old = entry.value;
  471. entry.value = value;
  472. return old;
  473. }
  474. }
  475.  
  476. addEntry(hash, key, value, index);
  477. return null;
  478. }
  479.  
  480. /**
  481. * Removes the key (and its corresponding value) from this
  482. * hashtable. This method does nothing if the key is not in the hashtable.
  483. *
  484. * @param key the key that needs to be removed
  485. * @return the value to which the key had been mapped in this hashtable,
  486. * or <code>null</code> if the key did not have a mapping
  487. * @throws NullPointerException if the key is <code>null</code>
  488. */
  489. public synchronized V remove(Object key) {
  490. Entry<?,?> tab[] = table;
  491. int hash = key.hashCode();
  492. int index = (hash & 0x7FFFFFFF) % tab.length;
  493. @SuppressWarnings("unchecked")
  494. Entry<K,V> e = (Entry<K,V>)tab[index];
  495. for(Entry<K,V> prev = null ; e != null ; prev = e, e = e.next) {
  496. if ((e.hash == hash) && e.key.equals(key)) {
  497. modCount++;
  498. if (prev != null) {
  499. prev.next = e.next;
  500. } else {
  501. tab[index] = e.next;
  502. }
  503. count--;
  504. V oldValue = e.value;
  505. e.value = null;
  506. return oldValue;
  507. }
  508. }
  509. return null;
  510. }
  511.  
  512. /**
  513. * Copies all of the mappings from the specified map to this hashtable.
  514. * These mappings will replace any mappings that this hashtable had for any
  515. * of the keys currently in the specified map.
  516. *
  517. * @param t mappings to be stored in this map
  518. * @throws NullPointerException if the specified map is null
  519. * @since 1.2
  520. */
  521. public synchronized void putAll(Map<? extends K, ? extends V> t) {
  522. for (Map.Entry<? extends K, ? extends V> e : t.entrySet())
  523. put(e.getKey(), e.getValue());
  524. }
  525.  
  526. /**
  527. * Clears this hashtable so that it contains no keys.
  528. */
  529. public synchronized void clear() {
  530. Entry<?,?> tab[] = table;
  531. modCount++;
  532. for (int index = tab.length; --index >= 0; )
  533. tab[index] = null;
  534. count = 0;
  535. }
  536.  
  537. /**
  538. * Creates a shallow copy of this hashtable. All the structure of the
  539. * hashtable itself is copied, but the keys and values are not cloned.
  540. * This is a relatively expensive operation.
  541. *
  542. * @return a clone of the hashtable
  543. */
  544. public synchronized Object clone() {
  545. try {
  546. Hashtable<?,?> t = (Hashtable<?,?>)super.clone();
  547. t.table = new Entry<?,?>[table.length];
  548. for (int i = table.length ; i-- > 0 ; ) {
  549. t.table[i] = (table[i] != null)
  550. ? (Entry<?,?>) table[i].clone() : null;
  551. }
  552. t.keySet = null;
  553. t.entrySet = null;
  554. t.values = null;
  555. t.modCount = 0;
  556. return t;
  557. } catch (CloneNotSupportedException e) {
  558. // this shouldn't happen, since we are Cloneable
  559. throw new InternalError(e);
  560. }
  561. }
  562.  
  563. /**
  564. * Returns a string representation of this <tt>Hashtable</tt> object
  565. * in the form of a set of entries, enclosed in braces and separated
  566. * by the ASCII characters "<tt>,&nbsp;</tt>" (comma and space). Each
  567. * entry is rendered as the key, an equals sign <tt>=</tt>, and the
  568. * associated element, where the <tt>toString</tt> method is used to
  569. * convert the key and element to strings.
  570. *
  571. * @return a string representation of this hashtable
  572. */
  573. public synchronized String toString() {
  574. int max = size() - 1;
  575. if (max == -1)
  576. return "{}";
  577.  
  578. StringBuilder sb = new StringBuilder();
  579. Iterator<Map.Entry<K,V>> it = entrySet().iterator();
  580.  
  581. sb.append('{');
  582. for (int i = 0; ; i++) {
  583. Map.Entry<K,V> e = it.next();
  584. K key = e.getKey();
  585. V value = e.getValue();
  586. sb.append(key == this ? "(this Map)" : key.toString());
  587. sb.append('=');
  588. sb.append(value == this ? "(this Map)" : value.toString());
  589.  
  590. if (i == max)
  591. return sb.append('}').toString();
  592. sb.append(", ");
  593. }
  594. }
  595.  
  596. private <T> Enumeration<T> getEnumeration(int type) {
  597. if (count == 0) {
  598. return Collections.emptyEnumeration();
  599. } else {
  600. return new Enumerator<>(type, false);
  601. }
  602. }
  603.  
  604. private <T> Iterator<T> getIterator(int type) {
  605. if (count == 0) {
  606. return Collections.emptyIterator();
  607. } else {
  608. return new Enumerator<>(type, true);
  609. }
  610. }
  611.  
  612. // Views
  613.  
  614. /**
  615. * Each of these fields are initialized to contain an instance of the
  616. * appropriate view the first time this view is requested. The views are
  617. * stateless, so there's no reason to create more than one of each.
  618. */
  619. private transient volatile Set<K> keySet;
  620. private transient volatile Set<Map.Entry<K,V>> entrySet;
  621. private transient volatile Collection<V> values;
  622.  
  623. /**
  624. * Returns a {@link Set} view of the keys contained in this map.
  625. * The set is backed by the map, so changes to the map are
  626. * reflected in the set, and vice-versa. If the map is modified
  627. * while an iteration over the set is in progress (except through
  628. * the iterator's own <tt>remove</tt> operation), the results of
  629. * the iteration are undefined. The set supports element removal,
  630. * which removes the corresponding mapping from the map, via the
  631. * <tt>Iterator.remove</tt>, <tt>Set.remove</tt>,
  632. * <tt>removeAll</tt>, <tt>retainAll</tt>, and <tt>clear</tt>
  633. * operations. It does not support the <tt>add</tt> or <tt>addAll</tt>
  634. * operations.
  635. *
  636. * @since 1.2
  637. */
  638. public Set<K> keySet() {
  639. if (keySet == null)
  640. keySet = Collections.synchronizedSet(new KeySet(), this);
  641. return keySet;
  642. }
  643.  
  644. private class KeySet extends AbstractSet<K> {
  645. public Iterator<K> iterator() {
  646. return getIterator(KEYS);
  647. }
  648. public int size() {
  649. return count;
  650. }
  651. public boolean contains(Object o) {
  652. return containsKey(o);
  653. }
  654. public boolean remove(Object o) {
  655. return Hashtable.this.remove(o) != null;
  656. }
  657. public void clear() {
  658. Hashtable.this.clear();
  659. }
  660. }
  661.  
  662. /**
  663. * Returns a {@link Set} view of the mappings contained in this map.
  664. * The set is backed by the map, so changes to the map are
  665. * reflected in the set, and vice-versa. If the map is modified
  666. * while an iteration over the set is in progress (except through
  667. * the iterator's own <tt>remove</tt> operation, or through the
  668. * <tt>setValue</tt> operation on a map entry returned by the
  669. * iterator) the results of the iteration are undefined. The set
  670. * supports element removal, which removes the corresponding
  671. * mapping from the map, via the <tt>Iterator.remove</tt>,
  672. * <tt>Set.remove</tt>, <tt>removeAll</tt>, <tt>retainAll</tt> and
  673. * <tt>clear</tt> operations. It does not support the
  674. * <tt>add</tt> or <tt>addAll</tt> operations.
  675. *
  676. * @since 1.2
  677. */
  678. public Set<Map.Entry<K,V>> entrySet() {
  679. if (entrySet==null)
  680. entrySet = Collections.synchronizedSet(new EntrySet(), this);
  681. return entrySet;
  682. }
  683.  
  684. private class EntrySet extends AbstractSet<Map.Entry<K,V>> {
  685. public Iterator<Map.Entry<K,V>> iterator() {
  686. return getIterator(ENTRIES);
  687. }
  688.  
  689. public boolean add(Map.Entry<K,V> o) {
  690. return super.add(o);
  691. }
  692.  
  693. public boolean contains(Object o) {
  694. if (!(o instanceof Map.Entry))
  695. return false;
  696. Map.Entry<?,?> entry = (Map.Entry<?,?>)o;
  697. Object key = entry.getKey();
  698. Entry<?,?>[] tab = table;
  699. int hash = key.hashCode();
  700. int index = (hash & 0x7FFFFFFF) % tab.length;
  701.  
  702. for (Entry<?,?> e = tab[index]; e != null; e = e.next)
  703. if (e.hash==hash && e.equals(entry))
  704. return true;
  705. return false;
  706. }
  707.  
  708. public boolean remove(Object o) {
  709. if (!(o instanceof Map.Entry))
  710. return false;
  711. Map.Entry<?,?> entry = (Map.Entry<?,?>) o;
  712. Object key = entry.getKey();
  713. Entry<?,?>[] tab = table;
  714. int hash = key.hashCode();
  715. int index = (hash & 0x7FFFFFFF) % tab.length;
  716.  
  717. @SuppressWarnings("unchecked")
  718. Entry<K,V> e = (Entry<K,V>)tab[index];
  719. for(Entry<K,V> prev = null; e != null; prev = e, e = e.next) {
  720. if (e.hash==hash && e.equals(entry)) {
  721. modCount++;
  722. if (prev != null)
  723. prev.next = e.next;
  724. else
  725. tab[index] = e.next;
  726.  
  727. count--;
  728. e.value = null;
  729. return true;
  730. }
  731. }
  732. return false;
  733. }
  734.  
  735. public int size() {
  736. return count;
  737. }
  738.  
  739. public void clear() {
  740. Hashtable.this.clear();
  741. }
  742. }
  743.  
  744. /**
  745. * Returns a {@link Collection} view of the values contained in this map.
  746. * The collection is backed by the map, so changes to the map are
  747. * reflected in the collection, and vice-versa. If the map is
  748. * modified while an iteration over the collection is in progress
  749. * (except through the iterator's own <tt>remove</tt> operation),
  750. * the results of the iteration are undefined. The collection
  751. * supports element removal, which removes the corresponding
  752. * mapping from the map, via the <tt>Iterator.remove</tt>,
  753. * <tt>Collection.remove</tt>, <tt>removeAll</tt>,
  754. * <tt>retainAll</tt> and <tt>clear</tt> operations. It does not
  755. * support the <tt>add</tt> or <tt>addAll</tt> operations.
  756. *
  757. * @since 1.2
  758. */
  759. public Collection<V> values() {
  760. if (values==null)
  761. values = Collections.synchronizedCollection(new ValueCollection(),
  762. this);
  763. return values;
  764. }
  765.  
  766. private class ValueCollection extends AbstractCollection<V> {
  767. public Iterator<V> iterator() {
  768. return getIterator(VALUES);
  769. }
  770. public int size() {
  771. return count;
  772. }
  773. public boolean contains(Object o) {
  774. return containsValue(o);
  775. }
  776. public void clear() {
  777. Hashtable.this.clear();
  778. }
  779. }
  780.  
  781. // Comparison and hashing
  782.  
  783. /**
  784. * Compares the specified Object with this Map for equality,
  785. * as per the definition in the Map interface.
  786. *
  787. * @param o object to be compared for equality with this hashtable
  788. * @return true if the specified Object is equal to this Map
  789. * @see Map#equals(Object)
  790. * @since 1.2
  791. */
  792. public synchronized boolean equals(Object o) {
  793. if (o == this)
  794. return true;
  795.  
  796. if (!(o instanceof Map))
  797. return false;
  798. Map<?,?> t = (Map<?,?>) o;
  799. if (t.size() != size())
  800. return false;
  801.  
  802. try {
  803. Iterator<Map.Entry<K,V>> i = entrySet().iterator();
  804. while (i.hasNext()) {
  805. Map.Entry<K,V> e = i.next();
  806. K key = e.getKey();
  807. V value = e.getValue();
  808. if (value == null) {
  809. if (!(t.get(key)==null && t.containsKey(key)))
  810. return false;
  811. } else {
  812. if (!value.equals(t.get(key)))
  813. return false;
  814. }
  815. }
  816. } catch (ClassCastException unused) {
  817. return false;
  818. } catch (NullPointerException unused) {
  819. return false;
  820. }
  821.  
  822. return true;
  823. }
  824.  
  825. /**
  826. * Returns the hash code value for this Map as per the definition in the
  827. * Map interface.
  828. *
  829. * @see Map#hashCode()
  830. * @since 1.2
  831. */
  832. public synchronized int hashCode() {
  833. /*
  834. * This code detects the recursion caused by computing the hash code
  835. * of a self-referential hash table and prevents the stack overflow
  836. * that would otherwise result. This allows certain 1.1-era
  837. * applets with self-referential hash tables to work. This code
  838. * abuses the loadFactor field to do double-duty as a hashCode
  839. * in progress flag, so as not to worsen the space performance.
  840. * A negative load factor indicates that hash code computation is
  841. * in progress.
  842. */
  843. int h = 0;
  844. if (count == 0 || loadFactor < 0)
  845. return h; // Returns zero
  846.  
  847. loadFactor = -loadFactor; // Mark hashCode computation in progress
  848. Entry<?,?>[] tab = table;
  849. for (Entry<?,?> entry : tab) {
  850. while (entry != null) {
  851. h += entry.hashCode();
  852. entry = entry.next;
  853. }
  854. }
  855.  
  856. loadFactor = -loadFactor; // Mark hashCode computation complete
  857.  
  858. return h;
  859. }
  860.  
  861. @Override
  862. public synchronized V getOrDefault(Object key, V defaultValue) {
  863. V result = get(key);
  864. return (null == result) ? defaultValue : result;
  865. }
  866.  
  867. @SuppressWarnings("unchecked")
  868. @Override
  869. public synchronized void forEach(BiConsumer<? super K, ? super V> action) {
  870. Objects.requireNonNull(action); // explicit check required in case
  871. // table is empty.
  872. final int expectedModCount = modCount;
  873.  
  874. Entry<?, ?>[] tab = table;
  875. for (Entry<?, ?> entry : tab) {
  876. while (entry != null) {
  877. action.accept((K)entry.key, (V)entry.value);
  878. entry = entry.next;
  879.  
  880. if (expectedModCount != modCount) {
  881. throw new ConcurrentModificationException();
  882. }
  883. }
  884. }
  885. }
  886.  
  887. @SuppressWarnings("unchecked")
  888. @Override
  889. public synchronized void replaceAll(BiFunction<? super K, ? super V, ? extends V> function) {
  890. Objects.requireNonNull(function); // explicit check required in case
  891. // table is empty.
  892. final int expectedModCount = modCount;
  893.  
  894. Entry<K, V>[] tab = (Entry<K, V>[])table;
  895. for (Entry<K, V> entry : tab) {
  896. while (entry != null) {
  897. entry.value = Objects.requireNonNull(
  898. function.apply(entry.key, entry.value));
  899. entry = entry.next;
  900.  
  901. if (expectedModCount != modCount) {
  902. throw new ConcurrentModificationException();
  903. }
  904. }
  905. }
  906. }
  907.  
  908. @Override
  909. public synchronized V putIfAbsent(K key, V value) {
  910. Objects.requireNonNull(value);
  911.  
  912. // Makes sure the key is not already in the hashtable.
  913. Entry<?,?> tab[] = table;
  914. int hash = key.hashCode();
  915. int index = (hash & 0x7FFFFFFF) % tab.length;
  916. @SuppressWarnings("unchecked")
  917. Entry<K,V> entry = (Entry<K,V>)tab[index];
  918. for (; entry != null; entry = entry.next) {
  919. if ((entry.hash == hash) && entry.key.equals(key)) {
  920. V old = entry.value;
  921. if (old == null) {
  922. entry.value = value;
  923. }
  924. return old;
  925. }
  926. }
  927.  
  928. addEntry(hash, key, value, index);
  929. return null;
  930. }
  931.  
  932. @Override
  933. public synchronized boolean remove(Object key, Object value) {
  934. Objects.requireNonNull(value);
  935.  
  936. Entry<?,?> tab[] = table;
  937. int hash = key.hashCode();
  938. int index = (hash & 0x7FFFFFFF) % tab.length;
  939. @SuppressWarnings("unchecked")
  940. Entry<K,V> e = (Entry<K,V>)tab[index];
  941. for (Entry<K,V> prev = null; e != null; prev = e, e = e.next) {
  942. if ((e.hash == hash) && e.key.equals(key) && e.value.equals(value)) {
  943. modCount++;
  944. if (prev != null) {
  945. prev.next = e.next;
  946. } else {
  947. tab[index] = e.next;
  948. }
  949. count--;
  950. e.value = null;
  951. return true;
  952. }
  953. }
  954. return false;
  955. }
  956.  
  957. @Override
  958. public synchronized boolean replace(K key, V oldValue, V newValue) {
  959. Objects.requireNonNull(oldValue);
  960. Objects.requireNonNull(newValue);
  961. Entry<?,?> tab[] = table;
  962. int hash = key.hashCode();
  963. int index = (hash & 0x7FFFFFFF) % tab.length;
  964. @SuppressWarnings("unchecked")
  965. Entry<K,V> e = (Entry<K,V>)tab[index];
  966. for (; e != null; e = e.next) {
  967. if ((e.hash == hash) && e.key.equals(key)) {
  968. if (e.value.equals(oldValue)) {
  969. e.value = newValue;
  970. return true;
  971. } else {
  972. return false;
  973. }
  974. }
  975. }
  976. return false;
  977. }
  978.  
  979. @Override
  980. public synchronized V replace(K key, V value) {
  981. Objects.requireNonNull(value);
  982. Entry<?,?> tab[] = table;
  983. int hash = key.hashCode();
  984. int index = (hash & 0x7FFFFFFF) % tab.length;
  985. @SuppressWarnings("unchecked")
  986. Entry<K,V> e = (Entry<K,V>)tab[index];
  987. for (; e != null; e = e.next) {
  988. if ((e.hash == hash) && e.key.equals(key)) {
  989. V oldValue = e.value;
  990. e.value = value;
  991. return oldValue;
  992. }
  993. }
  994. return null;
  995. }
  996.  
  997. @Override
  998. public synchronized V computeIfAbsent(K key, Function<? super K, ? extends V> mappingFunction) {
  999. Objects.requireNonNull(mappingFunction);
  1000.  
  1001. Entry<?,?> tab[] = table;
  1002. int hash = key.hashCode();
  1003. int index = (hash & 0x7FFFFFFF) % tab.length;
  1004. @SuppressWarnings("unchecked")
  1005. Entry<K,V> e = (Entry<K,V>)tab[index];
  1006. for (; e != null; e = e.next) {
  1007. if (e.hash == hash && e.key.equals(key)) {
  1008. // Hashtable does not accept null values
  1009. return e.value;
  1010. }
  1011. }
  1012.  
  1013. V newValue = mappingFunction.apply(key);
  1014. if (newValue != null) {
  1015. addEntry(hash, key, newValue, index);
  1016. }
  1017.  
  1018. return newValue;
  1019. }
  1020.  
  1021. @Override
  1022. public synchronized V computeIfPresent(K key, BiFunction<? super K, ? super V, ? extends V> remappingFunction) {
  1023. Objects.requireNonNull(remappingFunction);
  1024.  
  1025. Entry<?,?> tab[] = table;
  1026. int hash = key.hashCode();
  1027. int index = (hash & 0x7FFFFFFF) % tab.length;
  1028. @SuppressWarnings("unchecked")
  1029. Entry<K,V> e = (Entry<K,V>)tab[index];
  1030. for (Entry<K,V> prev = null; e != null; prev = e, e = e.next) {
  1031. if (e.hash == hash && e.key.equals(key)) {
  1032. V newValue = remappingFunction.apply(key, e.value);
  1033. if (newValue == null) {
  1034. modCount++;
  1035. if (prev != null) {
  1036. prev.next = e.next;
  1037. } else {
  1038. tab[index] = e.next;
  1039. }
  1040. count--;
  1041. } else {
  1042. e.value = newValue;
  1043. }
  1044. return newValue;
  1045. }
  1046. }
  1047. return null;
  1048. }
  1049.  
  1050. @Override
  1051. public synchronized V compute(K key, BiFunction<? super K, ? super V, ? extends V> remappingFunction) {
  1052. Objects.requireNonNull(remappingFunction);
  1053.  
  1054. Entry<?,?> tab[] = table;
  1055. int hash = key.hashCode();
  1056. int index = (hash & 0x7FFFFFFF) % tab.length;
  1057. @SuppressWarnings("unchecked")
  1058. Entry<K,V> e = (Entry<K,V>)tab[index];
  1059. for (Entry<K,V> prev = null; e != null; prev = e, e = e.next) {
  1060. if (e.hash == hash && Objects.equals(e.key, key)) {
  1061. V newValue = remappingFunction.apply(key, e.value);
  1062. if (newValue == null) {
  1063. modCount++;
  1064. if (prev != null) {
  1065. prev.next = e.next;
  1066. } else {
  1067. tab[index] = e.next;
  1068. }
  1069. count--;
  1070. } else {
  1071. e.value = newValue;
  1072. }
  1073. return newValue;
  1074. }
  1075. }
  1076.  
  1077. V newValue = remappingFunction.apply(key, null);
  1078. if (newValue != null) {
  1079. addEntry(hash, key, newValue, index);
  1080. }
  1081.  
  1082. return newValue;
  1083. }
  1084.  
  1085. @Override
  1086. public synchronized V merge(K key, V value, BiFunction<? super V, ? super V, ? extends V> remappingFunction) {
  1087. Objects.requireNonNull(remappingFunction);
  1088.  
  1089. Entry<?,?> tab[] = table;
  1090. int hash = key.hashCode();
  1091. int index = (hash & 0x7FFFFFFF) % tab.length;
  1092. @SuppressWarnings("unchecked")
  1093. Entry<K,V> e = (Entry<K,V>)tab[index];
  1094. for (Entry<K,V> prev = null; e != null; prev = e, e = e.next) {
  1095. if (e.hash == hash && e.key.equals(key)) {
  1096. V newValue = remappingFunction.apply(e.value, value);
  1097. if (newValue == null) {
  1098. modCount++;
  1099. if (prev != null) {
  1100. prev.next = e.next;
  1101. } else {
  1102. tab[index] = e.next;
  1103. }
  1104. count--;
  1105. } else {
  1106. e.value = newValue;
  1107. }
  1108. return newValue;
  1109. }
  1110. }
  1111.  
  1112. if (value != null) {
  1113. addEntry(hash, key, value, index);
  1114. }
  1115.  
  1116. return value;
  1117. }
  1118.  
  1119. /**
  1120. * Save the state of the Hashtable to a stream (i.e., serialize it).
  1121. *
  1122. * @serialData The <i>capacity</i> of the Hashtable (the length of the
  1123. * bucket array) is emitted (int), followed by the
  1124. * <i>size</i> of the Hashtable (the number of key-value
  1125. * mappings), followed by the key (Object) and value (Object)
  1126. * for each key-value mapping represented by the Hashtable
  1127. * The key-value mappings are emitted in no particular order.
  1128. */
  1129. private void writeObject(java.io.ObjectOutputStream s)
  1130. throws IOException {
  1131. Entry<Object, Object> entryStack = null;
  1132.  
  1133. synchronized (this) {
  1134. // Write out the length, threshold, loadfactor
  1135. s.defaultWriteObject();
  1136.  
  1137. // Write out length, count of elements
  1138. s.writeInt(table.length);
  1139. s.writeInt(count);
  1140.  
  1141. // Stack copies of the entries in the table
  1142. for (int index = 0; index < table.length; index++) {
  1143. Entry<?,?> entry = table[index];
  1144.  
  1145. while (entry != null) {
  1146. entryStack =
  1147. new Entry<>(0, entry.key, entry.value, entryStack);
  1148. entry = entry.next;
  1149. }
  1150. }
  1151. }
  1152.  
  1153. // Write out the key/value objects from the stacked entries
  1154. while (entryStack != null) {
  1155. s.writeObject(entryStack.key);
  1156. s.writeObject(entryStack.value);
  1157. entryStack = entryStack.next;
  1158. }
  1159. }
  1160.  
  1161. /**
  1162. * Reconstitute the Hashtable from a stream (i.e., deserialize it).
  1163. */
  1164. private void readObject(java.io.ObjectInputStream s)
  1165. throws IOException, ClassNotFoundException
  1166. {
  1167. // Read in the length, threshold, and loadfactor
  1168. s.defaultReadObject();
  1169.  
  1170. // Read the original length of the array and number of elements
  1171. int origlength = s.readInt();
  1172. int elements = s.readInt();
  1173.  
  1174. // Compute new size with a bit of room 5% to grow but
  1175. // no larger than the original size. Make the length
  1176. // odd if it's large enough, this helps distribute the entries.
  1177. // Guard against the length ending up zero, that's not valid.
  1178. int length = (int)(elements * loadFactor) + (elements / 20) + 3;
  1179. if (length > elements && (length & 1) == 0)
  1180. length--;
  1181. if (origlength > 0 && length > origlength)
  1182. length = origlength;
  1183. table = new Entry<?,?>[length];
  1184. threshold = (int)Math.min(length * loadFactor, MAX_ARRAY_SIZE + 1);
  1185. count = 0;
  1186.  
  1187. // Read the number of elements and then all the key/value objects
  1188. for (; elements > 0; elements--) {
  1189. @SuppressWarnings("unchecked")
  1190. K key = (K)s.readObject();
  1191. @SuppressWarnings("unchecked")
  1192. V value = (V)s.readObject();
  1193. // synch could be eliminated for performance
  1194. reconstitutionPut(table, key, value);
  1195. }
  1196. }
  1197.  
  1198. /**
  1199. * The put method used by readObject. This is provided because put
  1200. * is overridable and should not be called in readObject since the
  1201. * subclass will not yet be initialized.
  1202. *
  1203. * <p>This differs from the regular put method in several ways. No
  1204. * checking for rehashing is necessary since the number of elements
  1205. * initially in the table is known. The modCount is not incremented
  1206. * because we are creating a new instance. Also, no return value
  1207. * is needed.
  1208. */
  1209. private void reconstitutionPut(Entry<?,?>[] tab, K key, V value)
  1210. throws StreamCorruptedException
  1211. {
  1212. if (value == null) {
  1213. throw new java.io.StreamCorruptedException();
  1214. }
  1215. // Makes sure the key is not already in the hashtable.
  1216. // This should not happen in deserialized version.
  1217. int hash = key.hashCode();
  1218. int index = (hash & 0x7FFFFFFF) % tab.length;
  1219. for (Entry<?,?> e = tab[index] ; e != null ; e = e.next) {
  1220. if ((e.hash == hash) && e.key.equals(key)) {
  1221. throw new java.io.StreamCorruptedException();
  1222. }
  1223. }
  1224. // Creates the new entry.
  1225. @SuppressWarnings("unchecked")
  1226. Entry<K,V> e = (Entry<K,V>)tab[index];
  1227. tab[index] = new Entry<>(hash, key, value, e);
  1228. count++;
  1229. }
  1230.  
  1231. /**
  1232. * Hashtable bucket collision list entry
  1233. */
  1234. private static class Entry<K,V> implements Map.Entry<K,V> {
  1235. final int hash;
  1236. final K key;
  1237. V value;
  1238. Entry<K,V> next;
  1239.  
  1240. protected Entry(int hash, K key, V value, Entry<K,V> next) {
  1241. this.hash = hash;
  1242. this.key = key;
  1243. this.value = value;
  1244. this.next = next;
  1245. }
  1246.  
  1247. @SuppressWarnings("unchecked")
  1248. protected Object clone() {
  1249. return new Entry<>(hash, key, value,
  1250. (next==null ? null : (Entry<K,V>) next.clone()));
  1251. }
  1252.  
  1253. // Map.Entry Ops
  1254.  
  1255. public K getKey() {
  1256. return key;
  1257. }
  1258.  
  1259. public V getValue() {
  1260. return value;
  1261. }
  1262.  
  1263. public V setValue(V value) {
  1264. if (value == null)
  1265. throw new NullPointerException();
  1266.  
  1267. V oldValue = this.value;
  1268. this.value = value;
  1269. return oldValue;
  1270. }
  1271.  
  1272. public boolean equals(Object o) {
  1273. if (!(o instanceof Map.Entry))
  1274. return false;
  1275. Map.Entry<?,?> e = (Map.Entry<?,?>)o;
  1276.  
  1277. return (key==null ? e.getKey()==null : key.equals(e.getKey())) &&
  1278. (value==null ? e.getValue()==null : value.equals(e.getValue()));
  1279. }
  1280.  
  1281. public int hashCode() {
  1282. return hash ^ Objects.hashCode(value);
  1283. }
  1284.  
  1285. public String toString() {
  1286. return key.toString()+"="+value.toString();
  1287. }
  1288. }
  1289.  
  1290. // Types of Enumerations/Iterations
  1291. private static final int KEYS = 0;
  1292. private static final int VALUES = 1;
  1293. private static final int ENTRIES = 2;
  1294.  
  1295. /**
  1296. * A hashtable enumerator class. This class implements both the
  1297. * Enumeration and Iterator interfaces, but individual instances
  1298. * can be created with the Iterator methods disabled. This is necessary
  1299. * to avoid unintentionally increasing the capabilities granted a user
  1300. * by passing an Enumeration.
  1301. */
  1302. private class Enumerator<T> implements Enumeration<T>, Iterator<T> {
  1303. Entry<?,?>[] table = Hashtable.this.table;
  1304. int index = table.length;
  1305. Entry<?,?> entry;
  1306. Entry<?,?> lastReturned;
  1307. int type;
  1308.  
  1309. /**
  1310. * Indicates whether this Enumerator is serving as an Iterator
  1311. * or an Enumeration. (true -> Iterator).
  1312. */
  1313. boolean iterator;
  1314.  
  1315. /**
  1316. * The modCount value that the iterator believes that the backing
  1317. * Hashtable should have. If this expectation is violated, the iterator
  1318. * has detected concurrent modification.
  1319. */
  1320. protected int expectedModCount = modCount;
  1321.  
  1322. Enumerator(int type, boolean iterator) {
  1323. this.type = type;
  1324. this.iterator = iterator;
  1325. }
  1326.  
  1327. public boolean hasMoreElements() {
  1328. Entry<?,?> e = entry;
  1329. int i = index;
  1330. Entry<?,?>[] t = table;
  1331. /* Use locals for faster loop iteration */
  1332. while (e == null && i > 0) {
  1333. e = t[--i];
  1334. }
  1335. entry = e;
  1336. index = i;
  1337. return e != null;
  1338. }
  1339.  
  1340. @SuppressWarnings("unchecked")
  1341. public T nextElement() {
  1342. Entry<?,?> et = entry;
  1343. int i = index;
  1344. Entry<?,?>[] t = table;
  1345. /* Use locals for faster loop iteration */
  1346. while (et == null && i > 0) {
  1347. et = t[--i];
  1348. }
  1349. entry = et;
  1350. index = i;
  1351. if (et != null) {
  1352. Entry<?,?> e = lastReturned = entry;
  1353. entry = e.next;
  1354. return type == KEYS ? (T)e.key : (type == VALUES ? (T)e.value : (T)e);
  1355. }
  1356. throw new NoSuchElementException("Hashtable Enumerator");
  1357. }
  1358.  
  1359. // Iterator methods
  1360. public boolean hasNext() {
  1361. return hasMoreElements();
  1362. }
  1363.  
  1364. public T next() {
  1365. if (modCount != expectedModCount)
  1366. throw new ConcurrentModificationException();
  1367. return nextElement();
  1368. }
  1369.  
  1370. public void remove() {
  1371. if (!iterator)
  1372. throw new UnsupportedOperationException();
  1373. if (lastReturned == null)
  1374. throw new IllegalStateException("Hashtable Enumerator");
  1375. if (modCount != expectedModCount)
  1376. throw new ConcurrentModificationException();
  1377.  
  1378. synchronized(Hashtable.this) {
  1379. Entry<?,?>[] tab = Hashtable.this.table;
  1380. int index = (lastReturned.hash & 0x7FFFFFFF) % tab.length;
  1381.  
  1382. @SuppressWarnings("unchecked")
  1383. Entry<K,V> e = (Entry<K,V>)tab[index];
  1384. for(Entry<K,V> prev = null; e != null; prev = e, e = e.next) {
  1385. if (e == lastReturned) {
  1386. modCount++;
  1387. expectedModCount++;
  1388. if (prev == null)
  1389. tab[index] = e.next;
  1390. else
  1391. prev.next = e.next;
  1392. count--;
  1393. lastReturned = null;
  1394. return;
  1395. }
  1396. }
  1397. throw new ConcurrentModificationException();
  1398. }
  1399. }
  1400. }
  1401. }

The listing above is the Hashtable source from JDK 1.8.

  Test program:

package com.hash.hashmaptest;

import java.util.*;

public class HashtableTest {

    public static void main(String[] args) {
        testHashtableAPIs();
    }

    private static void testHashtableAPIs() {
        // Initialize the random source
        Random r = new Random();
        // Create a new Hashtable
        Hashtable<String, Integer> table = new Hashtable<>();
        // Insertions
        table.put("one", r.nextInt(10));
        table.put("two", r.nextInt(10));
        table.put("three", r.nextInt(10));
        // Print the table
        System.out.println("table:" + table);
        // Traverse the key-value pairs with an Iterator
        Iterator<Map.Entry<String, Integer>> iter = table.entrySet().iterator();
        while (iter.hasNext()) {
            Map.Entry<String, Integer> entry = iter.next();
            System.out.println("next : " + entry.getKey() + " - " + entry.getValue());
        }
        // Number of key-value pairs in the Hashtable
        System.out.println("size:" + table.size());
        // containsKey(Object key): is the key present?
        System.out.println("contains key two : " + table.containsKey("two"));
        System.out.println("contains key five : " + table.containsKey("five"));
        // containsValue(Object value): is the value present?
        System.out.println("contains value 0 : " + table.containsValue(0));
        // remove(Object key): remove the mapping for this key
        table.remove("three");
        System.out.println("table:" + table);
        // clear(): empty the Hashtable
        table.clear();
        // isEmpty(): is the Hashtable empty?
        System.out.println(table.isEmpty() ? "table is empty" : "table is not empty");
    }
}
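The class javadoc at the top of the listing also draws a distinction worth seeing in action: the iterators returned by the collection views are fail-fast, while the Enumerations returned by keys() and elements() are not. A small sketch of that difference (the class name and output strings are ours):

import java.util.ConcurrentModificationException;
import java.util.Enumeration;
import java.util.Hashtable;
import java.util.Iterator;

public class FailFastDemo {
    public static void main(String[] args) {
        Hashtable<String, Integer> table = new Hashtable<>();
        table.put("a", 1);
        table.put("b", 2);

        // keys() returns an Enumeration, which is NOT fail-fast:
        // modifying the table mid-enumeration throws nothing.
        Enumeration<String> keys = table.keys();
        table.put("c", 3);
        while (keys.hasMoreElements()) {
            System.out.println("enum key: " + keys.nextElement());
        }

        // keySet().iterator() returns a fail-fast Iterator:
        Iterator<String> it = table.keySet().iterator();
        table.put("d", 4); // structural modification after iterator creation
        try {
            it.next(); // the modCount check fails here
        } catch (ConcurrentModificationException e) {
            System.out.println("fail-fast iterator detected the modification");
        }
    }
}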

5. Comparing HashMap and Hashtable

5.1 When they appeared, and who wrote them

Hashtable dates back to JDK 1.0 (note the @since JDK1.0 tag in the class javadoc above), while HashMap was introduced in JDK 1.2. On the time axis, then, HashMap is the later arrival. As for authors, Hashtable's @author tags name Arthur van Hoff, Josh Bloch, and Neal Gafter; HashMap's javadoc adds Doug Lea to that list.

5.2 Differences at the method level

The two classes have somewhat different inheritance hierarchies, even though both implement the Map, Cloneable, and Serializable interfaces. HashMap extends the abstract class AbstractMap, while Hashtable extends the abstract class Dictionary, and Dictionary is an obsolete class.

Hashtable also exposes two more public methods than HashMap. One is elements(), which comes from the abstract class Dictionary; since that class is obsolete, the method is of little use. The other extra method is contains(), which adds nothing either, because it behaves exactly like containsValue().

Furthermore, HashMap supports null keys and null values, whereas Hashtable throws a NullPointerException when it encounters null. This is not because some implementation constraint in Hashtable makes null impossible to support; it is simply that HashMap treats null specially, defining the hashCode of a null key as 0 and therefore storing it in bucket 0 of the hash table.
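To make the contrast concrete, here is a minimal demo (the class and variable names are ours):

import java.util.HashMap;
import java.util.Hashtable;
import java.util.Map;

public class NullKeyDemo {
    public static void main(String[] args) {
        Map<String, String> hashMap = new HashMap<>();
        hashMap.put(null, "null key is fine"); // stored in bucket 0
        hashMap.put("k", null);                // null values are fine too
        System.out.println(hashMap.get(null)); // prints: null key is fine

        Map<String, String> hashtable = new Hashtable<>();
        try {
            hashtable.put(null, "boom");       // Hashtable rejects null keys...
        } catch (NullPointerException e) {
            System.out.println("Hashtable rejected the null key");
        }
        try {
            hashtable.put("k", null);          // ...and null values
        } catch (NullPointerException e) {
            System.out.println("Hashtable rejected the null value");
        }
    }
}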

5.3 Differences at the algorithm level

    The initial capacity and the amount by which the capacity grows on each resize differ.

 The following code and comments are from java.util.Hashtable:

// The default initial capacity of the hash table is 11
public Hashtable() {
    this(11, 0.75f);
}

protected void rehash() {
    int oldCapacity = table.length;
    Entry<K,V>[] oldMap = table;
    // Each resize grows the capacity to 2n+1
    int newCapacity = (oldCapacity << 1) + 1;
    // ...
}
The following code and comments are from java.util.HashMap:

// The default initial capacity of the hash table is 2^4 = 16
static final int DEFAULT_INITIAL_CAPACITY = 1 << 4; // aka 16

void addEntry(int hash, K key, V value, int bucketIndex) {
    // Each resize doubles the capacity to 2n
    if ((size >= threshold) && (null != table[bucketIndex])) {
        resize(2 * table.length);
    }
    // ...
}

As the snippets show, Hashtable's default initial capacity is 11, and each resize grows it to 2n+1; HashMap's default initial capacity is 16, and each resize doubles it. If you pass an initial capacity at construction time, Hashtable uses that size directly, whereas HashMap rounds it up to the next power of two. In other words, Hashtable leans toward primes and odd numbers, while HashMap always uses a power of two as the table size. We know that when the table size is prime, a plain modulo hash distributes keys more evenly, so on that point alone Hashtable's sizing looks smarter. On the other hand, when the modulus is a power of two, the remainder can be computed with a single bitwise AND, which is far faster than a division, so in hashing speed HashMap comes out ahead. The reality, then, is that HashMap fixes the table size to a power of two to speed up hashing; this does make the distribution less uniform, so HashMap compensates with some changes to the hash algorithm itself. Concretely, let us look at how, once a key's hashCode has been obtained, Hashtable and HashMap each map it to a definite bucket (a position in the Entry array).

     Because HashMap uses power-of-two sizes, taking the modulus needs no division, only a bitwise AND. But since this aggravates hash collisions, HashMap follows the call to the object's hashCode() method with some extra bit operations to scatter the data.

 The following code and comments are from java.util.Hashtable:

// The hash must not exceed Integer.MAX_VALUE, so keep only its low 31 bits
int hash = hash(key);
int index = (hash & 0x7FFFFFFF) % tab.length;

// Essentially just key.hashCode()
private int hash(Object k) {
    // hashSeed will be zero if alternative hashing is disabled.
    return hashSeed ^ k.hashCode();
}

The following code and comments are from java.util.HashMap:

int hash = hash(key);
int i = indexFor(hash, table.length);

// After key.hashCode() is called, extra bit operations reduce hash collisions
final int hash(Object k) {
    int h = hashSeed;
    if (0 != h && k instanceof String) {
        return sun.misc.Hashing.stringHash32((String) k);
    }
    h ^= k.hashCode();
    // This function ensures that hashCodes that differ only by
    // constant multiples at each bit position have a bounded
    // number of collisions (approximately 8 at default load factor).
    h ^= (h >>> 20) ^ (h >>> 12);
    return h ^ (h >>> 7) ^ (h >>> 4);
}

// Taking the modulus no longer needs a division
static int indexFor(int h, int length) {
    // assert Integer.bitCount(length) == 1 : "length must be a non-zero power of 2";
    return h & (length-1);
}
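As a quick sanity check of the trick HashMap relies on: when length is a power of two, h & (length - 1) equals the non-negative remainder of h divided by length, even for negative hash codes. A small sketch of this, our own code rather than JDK source:

public class IndexForDemo {
    public static void main(String[] args) {
        int length = 16; // a power of two, like HashMap's table sizes
        int[] hashes = {42, 12345, -7, Integer.MAX_VALUE};
        for (int h : hashes) {
            int byAnd = h & (length - 1);                 // HashMap's indexFor
            int byMod = ((h % length) + length) % length; // non-negative remainder
            System.out.printf("h=%d  and=%d  mod=%d%n", h, byAnd, byMod);
        }
        // Both columns always agree, so the AND is a valid, division-free modulo.
    }
}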

In the JDK 7 code above, both HashMap and Hashtable use a variable named hashSeed when computing the hash. The reason is that Entry objects mapped to the same bucket are stored as a linked list, and linked lists are slow to search, so the performance of HashMap/Hashtable is very sensitive to hash collisions; an optional alternative hashing scheme (seeded by hashSeed) could therefore be enabled to reduce collisions. In fact, this optimization was removed in JDK 1.8, because in JDK 1.8 the Entry objects that map to the same bucket (array position) are stored in a red-black tree once the chain grows long, which greatly speeds up lookup.
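For reference, JDK 1.8's HashMap also simplified the multi-shift bit spreading shown above to a single XOR of the high and low halves of the hashCode. A sketch of that spreading function, quoted from memory, so treat it as illustrative:

// JDK 1.8 HashMap's hash(): XOR the top 16 bits into the bottom 16,
// so the high bits still influence the bucket index after the
// power-of-two mask h & (length - 1) is applied.
static final int hash(Object key) {
    int h;
    return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);
}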

5.4 Thread safety

Hashtable is synchronized and HashMap is not. That is, Hashtable needs no extra synchronization when used from multiple threads, while HashMap does. But the blanket use of the synchronized modifier on every method costs performance.
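If thread safety is needed today, the usual choices look like the sketch below; the counter example and all names in it are ours:

import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ThreadSafeMapDemo {
    public static void main(String[] args) throws InterruptedException {
        // Option 1: wrap a HashMap. Every call takes one common lock,
        // which is essentially Hashtable's locking model.
        Map<String, Integer> locked = Collections.synchronizedMap(new HashMap<>());
        locked.put("x", 1);

        // Option 2: ConcurrentHashMap allows concurrent readers,
        // fine-grained locking for writers, and atomic compound updates.
        ConcurrentHashMap<String, Integer> counts = new ConcurrentHashMap<>();
        Runnable task = () -> {
            for (int i = 0; i < 10_000; i++) {
                counts.merge("hits", 1, Integer::sum); // atomic increment
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();

        System.out.println(counts.get("hits")); // always prints 20000
    }
}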

5.5 Code style

HashMap's code is much cleaner than Hashtable's.

5.6 Which to use

Hashtable is obsolete; do not use it in new code. Put simply: if you do not need thread safety, use HashMap; if you do, use ConcurrentHashMap.

5.7 Ongoing optimization

Although the public interfaces of HashMap and Hashtable should not change, or at least not change often, every JDK release optimizes their internal implementations, for example the red-black tree optimization in JDK 1.8. So use the newest JDK you can: besides the flashy new features, the ordinary APIs also pick up performance improvements. Why optimize Hashtable at all when it is already obsolete? Because old code still uses it, and optimizing it lets that old code benefit too.

6. Summary

In this article we examined these three Map structures in depth, analyzing their implementation principles, initialization, insertion, deletion, update and lookup, resizing, and the techniques used to speed up lookups, all of which will be very helpful in our future use of them.

