1. Concepts

HashMap allows null keys and null values. It is not thread-safe and makes no guarantees about the order of the map. The time needed to iterate over its collection views is proportional to the "capacity" of the HashMap instance (the number of buckets) plus its size (the number of key-value mappings). Two parameters affect the performance of a HashMap instance: the initial capacity and the load factor. The capacity is the number of buckets in the hash table, and the initial capacity is simply the capacity at the time the table is created. The load factor is a measure of how full the hash table is allowed to get before its capacity is automatically increased. When the number of entries exceeds the product of the load factor and the current capacity, the table is rehashed (its internal data structures are rebuilt) so that it has roughly twice the number of buckets. The default load factor is 0.75. Storing many keys that share the same hashCode() degrades hash table performance.
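
As a quick, hedged illustration of the threshold rule (the class name ThresholdDemo is made up for this sketch): with the default initial capacity of 16 and load factor of 0.75, the threshold is 16 * 0.75 = 12, so the 13th insertion pushes size past the threshold and triggers a resize to 32 buckets.

    import java.util.HashMap;
    import java.util.Map;

    public class ThresholdDemo {
        public static void main(String[] args) {
            int capacity = 16;
            float loadFactor = 0.75f;
            int threshold = (int) (capacity * loadFactor); // 12

            Map<Integer, String> map = new HashMap<>(capacity, loadFactor);
            for (int i = 1; i <= threshold + 1; i++) {
                map.put(i, "v" + i); // the 13th put makes size exceed the threshold, so the table grows
            }
            System.out.println("size = " + map.size() + ", threshold was " + threshold);
        }
    }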

HashMap is not thread-safe. If you need a thread-safe map, the best approach is to wrap it at creation time:

  Map m = Collections.synchronizedMap(new HashMap(...));

Its iterators behave the same as LinkedList's: they are fail-fast.
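
A minimal sketch of that fail-fast behaviour (FailFastDemo is a made-up name): structurally modifying the map while iterating over one of its views typically makes the next call to the iterator throw ConcurrentModificationException; Iterator.remove() is the safe way to remove during iteration.

    import java.util.ConcurrentModificationException;
    import java.util.HashMap;
    import java.util.Map;

    public class FailFastDemo {
        public static void main(String[] args) {
            Map<String, Integer> map = new HashMap<>();
            map.put("a", 1);
            map.put("b", 2);
            map.put("c", 3);
            try {
                for (String key : map.keySet()) {
                    map.remove("b"); // structural modification outside the iterator
                }
            } catch (ConcurrentModificationException e) {
                System.out.println("fail-fast iterator detected the modification");
            }
            // The safe alternative is map.keySet().iterator() plus Iterator.remove(),
            // or map.entrySet().removeIf(...).
        }
    }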

Before JDK 1.8, HashMap was implemented as an array plus linked lists. No matter how good the hash function is, it is hard to achieve a perfectly uniform distribution of elements.

When a large number of elements end up in the same bucket, that bucket holds a long linked list and the HashMap effectively degenerates into a singly linked list: with n elements in the list, traversal costs O(n), and the map loses its main advantage.

To address this, JDK 1.8 introduced red-black trees (with O(log n) lookup time) for overfull buckets.
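
To see why this matters, here is a hedged worst-case sketch (BadKey and CollisionDemo are made-up names): a key type whose hashCode() is constant forces every entry into a single bucket, which is exactly the case the red-black tree conversion is designed to rescue.

    import java.util.HashMap;
    import java.util.Map;

    class BadKey {
        final int id;
        BadKey(int id) { this.id = id; }
        @Override public int hashCode() { return 42; } // constant hash: all keys collide
        @Override public boolean equals(Object o) {
            return (o instanceof BadKey) && ((BadKey) o).id == id;
        }
    }

    public class CollisionDemo {
        public static void main(String[] args) {
            Map<BadKey, Integer> map = new HashMap<>();
            for (int i = 0; i < 100; i++) {
                map.put(new BadKey(i), i); // all 100 entries share one bucket
            }
            // Before JDK 8 this bucket is a plain linked list, so get() degrades to O(n);
            // in JDK 8 the bin is converted to a red-black tree once it holds more than
            // 8 nodes (and the table capacity has reached 64), giving O(log n) lookups.
            System.out.println(map.get(new BadKey(77)));
        }
    }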

2. Fields and Constructors

public class HashMap<K,V> extends AbstractMap<K,V>
    implements Map<K,V>, Cloneable, Serializable {

    private static final long serialVersionUID = 362498820763181265L;

    /**
     * The default initial capacity - MUST be a power of two.
     */
    // Default initial capacity - must be a power of two: 2^4 = 16
    static final int DEFAULT_INITIAL_CAPACITY = 1 << 4; // aka 16

    /**
     * The maximum capacity, used if a higher value is implicitly specified
     * by either of the constructors with arguments.
     * MUST be a power of two <= 1<<30.
     */
    // Maximum capacity: 2^30 = 1073741824
    static final int MAXIMUM_CAPACITY = 1 << 30;

    /**
     * The load factor used when none specified in constructor.
     */
    // Default load factor: 0.75
    static final float DEFAULT_LOAD_FACTOR = 0.75f;

    /**
     * The bin count threshold for using a tree rather than list for a
     * bin. Bins are converted to trees when adding an element to a
     * bin with at least this many nodes. The value must be greater
     * than 2 and should be at least 8 to mesh with assumptions in
     * tree removal about conversion back to plain bins upon
     * shrinkage.
     */
    // Treeify threshold for a bin: 8. When an element is added to a bin that
    // already holds at least this many nodes, the linked-list nodes are
    // replaced with red-black tree nodes.
    static final int TREEIFY_THRESHOLD = 8;

    /**
     * The bin count threshold for untreeifying a (split) bin during a
     * resize operation. Should be less than TREEIFY_THRESHOLD, and at
     * most 6 to mesh with shrinkage detection under removal.
     */
    // Untreeify threshold: 6. When a bin is split during a resize and ends up
    // with fewer than this many nodes, the tree is converted back to a linked list.
    static final int UNTREEIFY_THRESHOLD = 6;

    /**
     * The smallest table capacity for which bins may be treeified.
     * (Otherwise the table is resized if too many nodes in a bin.)
     * Should be at least 4 * TREEIFY_THRESHOLD to avoid conflicts
     * between resizing and treeification thresholds.
     */
    // Minimum table capacity for treeification: 64. Bins are treeified only when
    // the table holds at least this many buckets; otherwise an overfull bin causes
    // a resize instead of treeification. To avoid conflicts between the resizing
    // and treeification thresholds, this value must be at least 4 * TREEIFY_THRESHOLD.
    static final int MIN_TREEIFY_CAPACITY = 64;

    static class Node<K,V> implements Map.Entry<K,V> {
        final int hash;   // result of hash() applied to the key's hashCode, cached in the node to avoid recomputation
        final K key;
        V value;
        Node<K,V> next;   // reference to the next entry: singly linked list structure

        Node(int hash, K key, V value, Node<K,V> next) {
            this.hash = hash;
            this.key = key;
            this.value = value;
            this.next = next;
        }

        public final K getKey()        { return key; }
        public final V getValue()      { return value; }
        public final String toString() { return key + "=" + value; }

        public final int hashCode() {
            return Objects.hashCode(key) ^ Objects.hashCode(value);
        }

        public final V setValue(V newValue) {
            V oldValue = value;
            value = newValue;
            return oldValue;
        }

        public final boolean equals(Object o) {
            if (o == this)
                return true;
            if (o instanceof Map.Entry) {
                Map.Entry<?,?> e = (Map.Entry<?,?>)o;
                if (Objects.equals(key, e.getKey()) &&
                    Objects.equals(value, e.getValue()))
                    return true;
            }
            return false;
        }
    }
    // Computes the hash of a key: XORs the higher bits of hashCode() into the
    // lower bits so that the low bits used for indexing carry more information.
    static final int hash(Object key) {
        int h;
        return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);
    }

    /**
     * Returns x's Class if it is of the form "class C implements
     * Comparable<C>", else null.
     */
    static Class<?> comparableClassFor(Object x) {
        if (x instanceof Comparable) {
            Class<?> c; Type[] ts, as; Type t; ParameterizedType p;
            if ((c = x.getClass()) == String.class) // bypass checks
                return c;
            if ((ts = c.getGenericInterfaces()) != null) {
                for (int i = 0; i < ts.length; ++i) {
                    if (((t = ts[i]) instanceof ParameterizedType) &&
                        ((p = (ParameterizedType)t).getRawType() ==
                         Comparable.class) &&
                        (as = p.getActualTypeArguments()) != null &&
                        as.length == 1 && as[0] == c) // type arg is c
                        return c;
                }
            }
        }
        return null;
    }

    /**
     * Returns k.compareTo(x) if x matches kc (k's screened comparable
     * class), else 0.
     */
    @SuppressWarnings({"rawtypes","unchecked"}) // for cast to Comparable
    static int compareComparables(Class<?> kc, Object k, Object x) {
        return (x == null || x.getClass() != kc ? 0 :
                ((Comparable)k).compareTo(x));
    }

    /**
     * Returns a power of two size for the given target capacity.
     */
    // Rounds the given target capacity up to the next power of two.
    static final int tableSizeFor(int cap) {
        int n = cap - 1;
        n |= n >>> 1;
        n |= n >>> 2;
        n |= n >>> 4;
        n |= n >>> 8;
        n |= n >>> 16;
        return (n < 0) ? 1 : (n >= MAXIMUM_CAPACITY) ? MAXIMUM_CAPACITY : n + 1;
    }

    /* ---------------- Fields -------------- */
    /**
     * The table, initialized on first use, and resized as
     * necessary. When allocated, length is always a power of two.
     * (We also tolerate length zero in some operations to allow
     * bootstrapping mechanics that are currently not needed.)
     */
    // The bucket array; allocated lazily on first use, and its length is always a power of two.
    transient Node<K,V>[] table;

    /**
     * Holds cached entrySet(). Note that AbstractMap fields are used
     * for keySet() and values().
     */
    transient Set<Map.Entry<K,V>> entrySet;

    /**
     * The number of key-value mappings contained in this map.
     */
    // The number of key-value mappings actually stored.
    transient int size;

    /**
     * The number of times this HashMap has been structurally modified.
     * Structural modifications are those that change the number of mappings in
     * the HashMap or otherwise modify its internal structure (e.g.,
     * rehash). This field is used to make iterators on Collection-views of
     * the HashMap fail-fast. (See ConcurrentModificationException).
     */
    transient int modCount;

    /**
     * The next size value at which to resize (capacity * load factor).
     *
     * @serial
     */
    // (The javadoc description is true upon serialization.
    // Additionally, if the table array has not been allocated, this
    // field holds the initial array capacity, or zero signifying
    // DEFAULT_INITIAL_CAPACITY.)
    // Resize threshold. While the table is still unallocated, this field holds
    // the initial capacity (16 by default); once the table has been allocated,
    // threshold is normally capacity * loadFactor.
    int threshold;

    // Load factor: how full the table may get before resizing; 0.75 by default.
    final float loadFactor;

    // Constructs an empty HashMap with the specified initial capacity and load factor.
    public HashMap(int initialCapacity, float loadFactor) {
        if (initialCapacity < 0)
            throw new IllegalArgumentException("Illegal initial capacity: " +
                                               initialCapacity);
        if (initialCapacity > MAXIMUM_CAPACITY) // cap at the maximum capacity
            initialCapacity = MAXIMUM_CAPACITY;
        if (loadFactor <= 0 || Float.isNaN(loadFactor))
            throw new IllegalArgumentException("Illegal load factor: " +
                                               loadFactor);
        this.loadFactor = loadFactor;
        this.threshold = tableSizeFor(initialCapacity);
    }

    // Constructs an empty HashMap with the specified initial capacity and the default load factor (0.75).
    public HashMap(int initialCapacity) {
        this(initialCapacity, DEFAULT_LOAD_FACTOR);
    }

    // Constructs an empty HashMap with the default initial capacity (16) and the default load factor (0.75).
    public HashMap() {
        this.loadFactor = DEFAULT_LOAD_FACTOR; // all other fields defaulted
    }

    /**
     * Constructs a new <tt>HashMap</tt> with the same mappings as the
     * specified <tt>Map</tt>. The <tt>HashMap</tt> is created with
     * default load factor (0.75) and an initial capacity sufficient to
     * hold the mappings in the specified <tt>Map</tt>.
     *
     * @param m the map whose mappings are to be placed in this map
     * @throws NullPointerException if the specified map is null
     */
    public HashMap(Map<? extends K, ? extends V> m) {
        this.loadFactor = DEFAULT_LOAD_FACTOR;
        putMapEntries(m, false);
    }
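
The tableSizeFor method above is why any requested initial capacity is rounded up to a power of two. The standalone sketch below (TableSizeForDemo is a made-up name) repeats the same bit-twiddling so it can be run in isolation:

    public class TableSizeForDemo {
        static final int MAXIMUM_CAPACITY = 1 << 30;

        // Rounds a requested capacity up to the next power of two, as HashMap does.
        static int tableSizeFor(int cap) {
            int n = cap - 1;
            n |= n >>> 1;
            n |= n >>> 2;
            n |= n >>> 4;
            n |= n >>> 8;
            n |= n >>> 16;
            return (n < 0) ? 1 : (n >= MAXIMUM_CAPACITY) ? MAXIMUM_CAPACITY : n + 1;
        }

        public static void main(String[] args) {
            System.out.println(tableSizeFor(10)); // 16
            System.out.println(tableSizeFor(16)); // 16
            System.out.println(tableSizeFor(17)); // 32
        }
    }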

3. Resizing Mechanism

The put() method and the resize() method:

For more background on resizing, these posts are useful references:

https://blog.csdn.net/qq_27093465/article/details/52270519

https://blog.csdn.net/u013494765/article/details/77837338

https://www.cnblogs.com/chengxiao/p/6059914.html

Why must the length of HashMap's backing array be a power of two? To a large extent it is so that (length - 1) & hash can be used as the bucket index: this spreads entries evenly over the table, makes full use of the available slots, reduces hash collisions, and keeps lookups fast.
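
A small sketch of that point (IndexDemo is a made-up name): when n is a power of two, (n - 1) & hash gives the same result as hash % n for non-negative hashes, but with a single AND instruction instead of a division:

    public class IndexDemo {
        public static void main(String[] args) {
            int n = 16; // table length, a power of two
            int[] hashes = {97, 98, 115, 1024, 12345};
            for (int h : hashes) {
                // bucket index by mask vs. bucket index by modulo
                System.out.println(h + " -> " + ((n - 1) & h) + " (same as " + (h % n) + ")");
            }
        }
    }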

    /**
     * Implements Map.putAll and Map constructor.
     *
     * @param m the map
     * @param evict false when initially constructing this map, else
     * true (relayed to method afterNodeInsertion).
     */
    final void putMapEntries(Map<? extends K, ? extends V> m, boolean evict) {
        int s = m.size();
        if (s > 0) {
            if (table == null) { // pre-size: the table has not been allocated yet
                float ft = ((float)s / loadFactor) + 1.0F;
                int t = ((ft < (float)MAXIMUM_CAPACITY) ?
                         (int)ft : MAXIMUM_CAPACITY);
                if (t > threshold) // the required capacity exceeds the current threshold, so raise it
                    threshold = tableSizeFor(t);
            }
            else if (s > threshold)
                resize(); // grow the table
            for (Map.Entry<? extends K, ? extends V> e : m.entrySet()) {
                K key = e.getKey();
                V value = e.getValue();
                putVal(hash(key), key, value, false, evict);
            }
        }
    }

    /**
     * Associates the specified value with the specified key in this map.
     * If the map previously contained a mapping for the key, the old
     * value is replaced.
     *
     * @param key key with which the specified value is to be associated
     * @param value value to be associated with the specified key
     * @return the previous value associated with <tt>key</tt>, or
     *         <tt>null</tt> if there was no mapping for <tt>key</tt>.
     *         (A <tt>null</tt> return can also indicate that the map
     *         previously associated <tt>null</tt> with <tt>key</tt>.)
     */
    public V put(K key, V value) {
        return putVal(hash(key), key, value, false, true);
    }
    /**
     * Implements Map.put and related methods.
     *
     * @param hash hash for key
     * @param key the key
     * @param value the value to put
     * @param onlyIfAbsent if true, don't change existing value
     * @param evict if false, the table is in creation mode.
     * @return previous value, or null if none
     */
    final V putVal(int hash, K key, V value, boolean onlyIfAbsent,
                   boolean evict) {
        Node<K,V>[] tab; Node<K,V> p; int n, i;
        if ((tab = table) == null || (n = tab.length) == 0) // the table has not been initialized yet
            n = (tab = resize()).length;                    // allocate it via resize()
        if ((p = tab[i = (n - 1) & hash]) == null)          // compute the bucket index i; (n - 1) & hash spreads entries evenly over the table
            tab[i] = newNode(hash, key, value, null);       // bucket i is empty: place the new node there
        else {                                              // hash collision: bucket i already holds a node
            Node<K,V> e; K k;
            if (p.hash == hash &&
                ((k = p.key) == key || (key != null && key.equals(k)))) // the first node has the same key
                e = p;
            else if (p instanceof TreeNode)                 // the bin has already been treeified
                e = ((TreeNode<K,V>)p).putTreeVal(this, tab, hash, key, value);
            else {
                for (int binCount = 0; ; ++binCount) {
                    if ((e = p.next) == null) {
                        // append the new node at the tail of the linked list
                        p.next = newNode(hash, key, value, null);
                        if (binCount >= TREEIFY_THRESHOLD - 1) // -1 for 1st
                            treeifyBin(tab, hash);
                        break;
                    }
                    if (e.hash == hash &&
                        ((k = e.key) == key || (key != null && key.equals(k))))
                        break;
                    p = e;
                }
            }
            if (e != null) { // existing mapping for key
                V oldValue = e.value;
                if (!onlyIfAbsent || oldValue == null)
                    e.value = value;
                afterNodeAccess(e);
                return oldValue;
            }
        }
        ++modCount;
        if (++size > threshold)
            resize();
        afterNodeInsertion(evict);
        return null;
    }
    /**
     * Initializes or doubles table size. If null, allocates in
     * accord with initial capacity target held in field threshold.
     * Otherwise, because we are using power-of-two expansion, the
     * elements from each bin must either stay at same index, or move
     * with a power of two offset in the new table.
     *
     * @return the table
     */
    // The inline values below assume the default capacity of 16 and the default load factor of 0.75.
    final Node<K,V>[] resize() {
        Node<K,V>[] oldTab = table;                          // the old table
        int oldCap = (oldTab == null) ? 0 : oldTab.length;   // old length, e.g. 16
        int oldThr = threshold;                              // old threshold, e.g. 12
        int newCap, newThr = 0;
        if (oldCap > 0) {
            if (oldCap >= MAXIMUM_CAPACITY) {                // old length is already at the maximum capacity (1073741824)
                // set threshold to Integer.MAX_VALUE = 2147483647 (about twice MAXIMUM_CAPACITY) and stop growing
                threshold = Integer.MAX_VALUE;
                return oldTab;
            }
            else if ((newCap = oldCap << 1) < MAXIMUM_CAPACITY && // newCap = 32
                     oldCap >= DEFAULT_INITIAL_CAPACITY)
                // the new table is twice as long as the old one,
                newThr = oldThr << 1; // double threshold: the threshold also doubles, e.g. 24
        }
        else if (oldThr > 0) // initial capacity was placed in threshold
            newCap = oldThr;
        else {               // zero initial threshold signifies using defaults
            newCap = DEFAULT_INITIAL_CAPACITY;
            newThr = (int)(DEFAULT_LOAD_FACTOR * DEFAULT_INITIAL_CAPACITY);
        }
        if (newThr == 0) {
            float ft = (float)newCap * loadFactor;
            newThr = (newCap < MAXIMUM_CAPACITY && ft < (float)MAXIMUM_CAPACITY ?
                      (int)ft : Integer.MAX_VALUE);
        }
        threshold = newThr; // new threshold, e.g. 24
        @SuppressWarnings({"rawtypes","unchecked"})
        Node<K,V>[] newTab = (Node<K,V>[])new Node[newCap]; // allocate the new table, e.g. length 32
        table = newTab;
        // if the old table held data, transfer it to the new table
        if (oldTab != null) {
            // walk the whole old table and copy every non-empty bucket
            for (int j = 0; j < oldCap; ++j) {
                Node<K,V> e;
                // take the j-th bucket of the old table
                if ((e = oldTab[j]) != null) {
                    oldTab[j] = null; // clear the old slot so it can be garbage collected
                    // a single node in the bucket: just rehash it into the new table
                    if (e.next == null)
                        // e.hash & (newCap - 1) determines its slot in the new table
                        newTab[e.hash & (newCap - 1)] = e;
                    else if (e instanceof TreeNode)
                        ((TreeNode<K,V>)e).split(this, newTab, j, oldCap);
                    else { // preserve order: split the list into a "low" list (stays at j) and a "high" list (moves to j + oldCap)
                        Node<K,V> loHead = null, loTail = null;
                        Node<K,V> hiHead = null, hiTail = null;
                        Node<K,V> next;
                        do {
                            next = e.next;
                            if ((e.hash & oldCap) == 0) {
                                if (loTail == null)
                                    loHead = e;
                                else
                                    loTail.next = e;
                                loTail = e;
                            }
                            else {
                                if (hiTail == null)
                                    hiHead = e;
                                else
                                    hiTail.next = e;
                                hiTail = e;
                            }
                        } while ((e = next) != null);
                        if (loTail != null) {
                            loTail.next = null;
                            newTab[j] = loHead;
                        }
                        if (hiTail != null) {
                            hiTail.next = null;
                            newTab[j + oldCap] = hiHead;
                        }
                    }
                }
            }
        }
        return newTab;
    }
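
The lo/hi split in resize() works because, with power-of-two growth, an entry's new index is fully determined by one extra hash bit. The hedged sketch below (ResizeSplitDemo is a made-up name) checks (hash & oldCap) for a few arbitrary hashes that all share old index 5 when oldCap is 16:

    public class ResizeSplitDemo {
        public static void main(String[] args) {
            int oldCap = 16, newCap = 32;
            int[] hashes = {5, 21, 37, 53}; // all map to index 5 in the old table
            for (int h : hashes) {
                int oldIndex = h & (oldCap - 1);
                int newIndex = h & (newCap - 1);
                boolean stays = (h & oldCap) == 0; // the bit resize() tests
                System.out.println("hash " + h + ": old index " + oldIndex
                        + ", new index " + newIndex
                        + (stays ? " (stays at j)" : " (moves to j + oldCap)"));
            }
        }
    }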

Some of the remaining methods:

putIfAbsent: inserts the mapping only if the key is absent or currently mapped to null; an existing non-null value is left unchanged and is returned.

computeIfAbsent: if the key is already mapped to a non-null value, returns that existing value; otherwise computes a new value with the mapping function, stores it if it is non-null, and returns it.

computeIfPresent: if the key is absent, returns null; if the key is present, applies the remapping function: a null result removes the node, and a non-null result replaces the node's value and is returned.

compute: if the key is absent, a new mapping is created and stored (provided the computed value is non-null); if the key is present, a null result removes the node, and a non-null result replaces the node's value and is returned.
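
A short usage sketch of these methods (ComputeDemo is a made-up name), assuming Java 8 or later:

    import java.util.HashMap;
    import java.util.Map;

    public class ComputeDemo {
        public static void main(String[] args) {
            Map<String, Integer> map = new HashMap<>();

            map.putIfAbsent("a", 1);              // inserts: key was absent
            map.putIfAbsent("a", 99);             // no effect: returns the existing 1

            map.computeIfAbsent("b", k -> 2);     // computes and stores 2
            map.computeIfAbsent("b", k -> 99);    // key present: keeps 2

            map.computeIfPresent("b", (k, v) -> v + 10);   // "b" -> 12
            map.computeIfPresent("missing", (k, v) -> 1);  // key absent: returns null

            map.compute("c", (k, v) -> (v == null) ? 1 : v + 1); // inserts 1
            map.compute("a", (k, v) -> null);     // remapping returns null: removes "a"

            System.out.println(map);              // {b=12, c=1}
        }
    }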
