Learning Data Structures (Collections): Map (Part 1)
Collections
Framework relationship diagram:
Note: Hashtable's parent class is Dictionary, not AbstractMap.
Map:
Map (the interface) and Collection are both part of the collections framework, but Map is not a subinterface or subtype of Collection. Map is special: it stores data as <key, value> pairs, where keys cannot repeat and each key maps to at most one value.
The three Map implementations most often asked about in interviews are HashMap, Hashtable, and TreeMap.
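Before diving into the source, a minimal usage sketch (the variable names are illustrative, not from the JDK) makes the key-value semantics above concrete:
import java.util.HashMap;
import java.util.Map;

public class MapBasics {
    public static void main(String[] args) {
        Map<String, Integer> ages = new HashMap<>();
        ages.put("alice", 30);
        ages.put("bob", 25);
        ages.put("alice", 31);   // duplicate key: the old value 30 is replaced
        ages.put(null, 0);       // HashMap permits one null key (and null values)

        System.out.println(ages.get("alice")); // 31
        System.out.println(ages.size());       // 3  ("alice", "bob", null)
    }
}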
HashMap:
A hash-table-based implementation of the Map interface. It provides all of the optional map operations and permits null keys and null values.
Before JDK 1.8, HashMap's underlying structure was "array + linked list"; JDK 1.8 optimized it to "array + linked list + red-black tree".
JDK 1.6: default initial capacity 16, load factor 0.75; the underlying structure is an Entry array plus linked lists:
// Default initial capacity
static final int DEFAULT_INITIAL_CAPACITY = 16;
// Default load factor
static final float DEFAULT_LOAD_FACTOR = 0.75f;
// Backing store: the bucket array
transient Entry[] table;
// Static inner class: every key/value pair is wrapped in an Entry placed in the array
static class Entry<K,V> implements Map.Entry<K,V> {
final K key;
V value;
Entry<K,V> next; // link to the next Entry in the same bucket (the chain)
final int hash;
/**
* Creates new entry.
*/
Entry(int h, K k, V v, Entry<K,V> n) {
value = v;
next = n;
key = k;
hash = h;
} public final K getKey() {
return key;
} public final V getValue() {
return value;
} public final V setValue(V newValue) {
V oldValue = value;
value = newValue;
return oldValue;
} public final boolean equals(Object o) {
if (!(o instanceof Map.Entry))
return false;
Map.Entry e = (Map.Entry)o;
Object k1 = getKey();
Object k2 = e.getKey();
if (k1 == k2 || (k1 != null && k1.equals(k2))) {
Object v1 = getValue();
Object v2 = e.getValue();
if (v1 == v2 || (v1 != null && v1.equals(v2)))
return true;
}
return false;
} public final int hashCode() {
return (key==null ? 0 : key.hashCode()) ^
(value==null ? 0 : value.hashCode());
} public final String toString() {
return getKey() + "=" + getValue();
} void recordAccess(HashMap<K,V> m) {
} void recordRemoval(HashMap<K,V> m) {
}
}
// put method
public V put(K key, V value) {
if (key == null)
return putForNullKey(value); // HashMap allows a null key
int hash = hash(key.hashCode()); // compute a hash from the key's hashCode
int i = indexFor(hash, table.length); // map the hash to a bucket index
for (Entry<K,V> e = table[i]; e != null; e = e.next) { // keys must be unique: if the key already exists, the new value replaces the old one
Object k;
if (e.hash == hash && ((k = e.key) == key || key.equals(k))) {
V oldValue = e.value;
e.value = value;
e.recordAccess(this);
return oldValue;
}
}
modCount++;
addEntry(hash, key, value, i); // otherwise add the new (hash, key, value) entry at the computed index
return null;
}
// Handling for key == null
private V putForNullKey(V value) {
for (Entry<K,V> e = table[0]; e != null; e = e.next) {
if (e.key == null) {
V oldValue = e.value;
e.value = value;
e.recordAccess(this);
return oldValue;
}
}
modCount++;
addEntry(0, null, value, 0);
return null;
}
// Compute the hash
static int hash(int h) {
// This function ensures that hashCodes that differ only by
// constant multiples at each bit position have a bounded
// number of collisions (approximately 8 at default load factor).
h ^= (h >>> 20) ^ (h >>> 12);
return h ^ (h >>> 7) ^ (h >>> 4);
}
// Map a hash to a bucket index
static int indexFor(int h, int length) {
return h & (length-1);
}
// Add an element
void addEntry(int hash, K key, V value, int bucketIndex) {
Entry<K,V> e = table[bucketIndex]; // on a hash collision (same index), keep a reference to the current bucket head
// The new entry is stored in the array with the old head as its next pointer -- this is HashMap's "chained" structure
table[bucketIndex] = new Entry<K,V>(hash, key, value, e);
if (size++ >= threshold)
resize(2 * table.length); // if the threshold is exceeded, grow to twice the array length (e.g. default 16 becomes 32)
}
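Note that addEntry inserts at the head of the bucket: the existing chain head becomes the new entry's next. A minimal standalone sketch of that pattern (illustrative names, not JDK code):
// Head insertion into a singly linked chain, as in JDK 1.6/1.7 addEntry.
class Chain<K, V> {
    static class Node<K, V> {
        final K key; V value; Node<K, V> next;
        Node(K key, V value, Node<K, V> next) { this.key = key; this.value = value; this.next = next; }
    }

    Node<K, V> head;

    void addFirst(K key, V value) {
        head = new Node<>(key, value, head); // the old head becomes the new node's next
    }
}
As a consequence, within a bucket the most recently inserted entry is visited first; JDK 1.8 later switched to tail insertion.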
// Resize method
void resize(int newCapacity) { // called with twice the current length
Entry[] oldTable = table;
int oldCapacity = oldTable.length;
if (oldCapacity == MAXIMUM_CAPACITY) {
threshold = Integer.MAX_VALUE;
return;
}
Entry[] newTable = new Entry[newCapacity]; // allocate the new array
transfer(newTable); // move the existing entries into it
table = newTable;
threshold = (int)(newCapacity * loadFactor);
}
// Move entries from the old array into the new one
void transfer(Entry[] newTable) {
Entry[] src = table;
int newCapacity = newTable.length;
for (int j = 0; j < src.length; j++) {
Entry<K,V> e = src[j];
if (e != null) {
src[j] = null;
do { // re-link each entry into the new array
Entry<K,V> next = e.next;
int i = indexFor(e.hash, newCapacity);
e.next = newTable[i];
newTable[i] = e;
e = next;
} while (e != null);
}
}
}
// get method
public V get(Object key) {
if (key == null)
return getForNullKey(); // scan bucket 0 for the null key
int hash = hash(key.hashCode()); // compute the hash from the key
for (Entry<K,V> e = table[indexFor(hash, table.length)]; // locate the bucket and take its first Entry
e != null;
e = e.next) { // walk the chain, moving to each Entry's next until a match is found
Object k;
if (e.hash == hash && ((k = e.key) == key || key.equals(k)))
return e.value; // when both hash and key match, return the value
}
return null;
}
// Look up the value stored for the null key
private V getForNullKey() {
for (Entry<K,V> e = table[0]; e != null; e = e.next) {
if (e.key == null)
return e.value;
}
return null;
}
// clear method
public void clear() {
modCount++;
Entry[] tab = table;
for (int i = 0; i < tab.length; i++) // null out every bucket, then reset size to 0
tab[i] = null;
size = 0;
}
// entrySet method: a Set view of the Entry objects stored in the array
private transient Set<Map.Entry<K,V>> entrySet = null;
public Set<Map.Entry<K,V>> entrySet() {
return entrySet0();
}
private Set<Map.Entry<K,V>> entrySet0() {
Set<Map.Entry<K,V>> es = entrySet;
return es != null ? es : (entrySet = new EntrySet());
}
// Backed by an iterator over the map's Entry objects
private final class EntrySet extends AbstractSet<Map.Entry<K,V>> {
public Iterator<Map.Entry<K,V>> iterator() {
return newEntryIterator();
}
public boolean contains(Object o) {
if (!(o instanceof Map.Entry))
return false;
Map.Entry<K,V> e = (Map.Entry<K,V>) o;
Entry<K,V> candidate = getEntry(e.getKey());
return candidate != null && candidate.equals(e);
}
public boolean remove(Object o) {
return removeMapping(o) != null;
}
public int size() {
return size;
}
public void clear() {
HashMap.this.clear();
}
}
final Entry<K,V> nextEntry() {
if (modCount != expectedModCount)
throw new ConcurrentModificationException();
Entry<K,V> e = next;
if (e == null)
throw new NoSuchElementException();
if ((next = e.next) == null) {
Entry[] t = table;
while (index < t.length && (next = t[index++]) == null) ;
}
current = e;
return e;
}
// keySet method
public Set<K> keySet() {
Set<K> ks = keySet;
return (ks != null ? ks : (keySet = new KeySet()));
}
// Also backed by an iterator over the Entry objects, returning each key
private final class KeySet extends AbstractSet<K> {
public Iterator<K> iterator() {
return newKeyIterator();
}
public int size() {
return size;
}
public boolean contains(Object o) {
return containsKey(o);
}
public boolean remove(Object o) {
return HashMap.this.removeEntryForKey(o) != null;
}
public void clear() {
HashMap.this.clear();
}
}
private final class KeyIterator extends HashIterator<K> {
public K next() {
return nextEntry().getKey();
}
}
// remove method
public V remove(Object key) {
Entry<K,V> e = removeEntryForKey(key);
return (e == null ? null : e.value);
}
final Entry<K,V> removeEntryForKey(Object key) {
int hash = (key == null) ? 0 : hash(key.hashCode());
int i = indexFor(hash, table.length);
Entry<K,V> prev = table[i];
Entry<K,V> e = prev;
while (e != null) { // walk the chain; when both hash and key match, unlink the entry by pointing the previous node at its next
Entry<K,V> next = e.next;
Object k;
if (e.hash == hash &&
((k = e.key) == key || (key != null && key.equals(k)))) {
modCount++;
size--;
if (prev == e)
table[i] = next;
else
prev.next = next;
e.recordRemoval(this);
return e;
}
prev = e;
e = next;
}
return e;
}
// size and isEmpty methods
transient int size; // maintained by put and remove: incremented on insert, decremented on removal
public int size() {
return size;
}
public boolean isEmpty() {
return size == 0;
}
// contains method
public boolean contains(Object o) {
return containsKey(o);
}
public boolean containsKey(Object key) {
return getEntry(key) != null;
}
final Entry<K,V> getEntry(Object key) {
if (size == 0) {
return null;
}
int hash = (key == null) ? 0 : hash(key);
for (Entry<K,V> e = table[indexFor(hash, table.length)];
e != null;
e = e.next) {
Object k;
if (e.hash == hash &&
((k = e.key) == key || (key != null && key.equals(k))))
return e;
}
return null;
}
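The entrySet() and keySet() views above are what the enhanced for loop iterates over. A short usage sketch (standalone example, not from the JDK source):
import java.util.HashMap;
import java.util.Map;

public class IterationDemo {
    public static void main(String[] args) {
        Map<String, Integer> map = new HashMap<>();
        map.put("a", 1);
        map.put("b", 2);

        for (Map.Entry<String, Integer> e : map.entrySet()) { // iterates the Entry objects directly
            System.out.println(e.getKey() + "=" + e.getValue());
        }
        for (String k : map.keySet()) {                        // iterates keys; values need an extra get
            System.out.println(k + "=" + map.get(k));
        }
    }
}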
Supplement 1:
A hash table stores records in a contiguous block of storage (that block is the hash table itself) and accesses them directly by key value: the key is mapped to a position in the table, which speeds up lookups.
The mapping "storage position = f(key)" is called the hash function.
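In other words, the bucket index is computed directly from the key rather than found by searching. A toy sketch of the idea (hypothetical hash function, collisions ignored for brevity, illustration only):
// f(key) maps a key straight to a slot in a fixed-size table.
public class ToyHashTable {
    private final String[] slots = new String[8];

    private int f(String key) { // a deliberately simple hash function
        return Math.floorMod(key.hashCode(), slots.length);
    }

    public void put(String key) { slots[f(key)] = key; }
    public boolean contains(String key) { return key.equals(slots[f(key)]); }
}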
Supplement 2:
If two objects are equal, their hashCodes must be the same; but two objects with the same hashCode are not necessarily equal.
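A quick illustration of that contract ("Aa" and "BB" are a well-known hashCode collision pair in Java):
public class HashCodeContract {
    public static void main(String[] args) {
        // Same hashCode does not imply equals.
        System.out.println("Aa".hashCode() == "BB".hashCode()); // true
        System.out.println("Aa".equals("BB"));                  // false

        // Equal objects must share a hashCode.
        String s1 = new String("key"), s2 = new String("key");
        System.out.println(s1.equals(s2) && s1.hashCode() == s2.hashCode()); // true
    }
}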
JDK 1.7 source (compared with 1.6):
// Difference 1: in 1.7 the default initial capacity is written as a bit shift
static final int DEFAULT_INITIAL_CAPACITY = 1 << 4; // aka 16
// Default load factor
static final float DEFAULT_LOAD_FACTOR = 0.75f;
// Backing store: the table starts out as a shared empty array
static final Entry<?,?>[] EMPTY_TABLE = {};
transient Entry<K,V>[] table = (Entry<K,V>[]) EMPTY_TABLE;
// Static inner class
static class Entry<K,V> implements Map.Entry<K,V> {
final K key;
V value;
Entry<K,V> next;
int hash;
/**
* Creates new entry.
*/
Entry(int h, K k, V v, Entry<K,V> n) {
value = v;
next = n;
key = k;
hash = h;
} public final K getKey() {
return key;
} public final V getValue() {
return value;
} public final V setValue(V newValue) {
V oldValue = value;
value = newValue;
return oldValue;
} public final boolean equals(Object o) {
if (!(o instanceof Map.Entry))
return false;
Map.Entry e = (Map.Entry)o;
Object k1 = getKey();
Object k2 = e.getKey();
if (k1 == k2 || (k1 != null && k1.equals(k2))) {
Object v1 = getValue();
Object v2 = e.getValue();
if (v1 == v2 || (v1 != null && v1.equals(v2)))
return true;
}
return false;
} public final int hashCode() {
return Objects.hashCode(getKey()) ^ Objects.hashCode(getValue());
} public final String toString() {
return getKey() + "=" + getValue();
} /**
* This method is invoked whenever the value in an entry is
* overwritten by an invocation of put(k,v) for a key k that's already
* in the HashMap.
*/
void recordAccess(HashMap<K,V> m) {
} /**
* This method is invoked whenever the entry is
* removed from the table.
*/
void recordRemoval(HashMap<K,V> m) {
}
}
// put method
public V put(K key, V value) {
if (table == EMPTY_TABLE) { // new in 1.7: the array is lazily initialized here
inflateTable(threshold);
}
if (key == null)
return putForNullKey(value);
int hash = hash(key);
int i = indexFor(hash, table.length);
for (Entry<K,V> e = table[i]; e != null; e = e.next) {
Object k;
if (e.hash == hash && ((k = e.key) == key || key.equals(k))) {
V oldValue = e.value;
e.value = value;
e.recordAccess(this);
return oldValue;
}
}
modCount++;
addEntry(hash, key, value, i);
return null;
}
// Initialize the array
private void inflateTable(int toSize) {
// Round the requested capacity up to the next power of two
int capacity = roundUpToPowerOf2(toSize);
threshold = (int) Math.min(capacity * loadFactor, MAXIMUM_CAPACITY + 1);
table = new Entry[capacity];
initHashSeedAsNeeded(capacity);
}
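roundUpToPowerOf2 is not reproduced above; a minimal equivalent (my own sketch, not the JDK's exact implementation) shows how a requested size is rounded up to the next power of two so that the h & (length - 1) indexing stays valid:
public class RoundUpDemo {
    // Smallest power of two >= toSize (for toSize in [1, 2^30]); not the JDK's exact code.
    static int roundUpToPowerOf2(int toSize) {
        int highest = Integer.highestOneBit(toSize);
        return (highest == toSize) ? toSize : highest << 1;
    }

    public static void main(String[] args) {
        System.out.println(roundUpToPowerOf2(10)); // 16
        System.out.println(roundUpToPowerOf2(16)); // 16
        System.out.println(roundUpToPowerOf2(17)); // 32
    }
}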
final boolean initHashSeedAsNeeded(int capacity) {
boolean currentAltHashing = hashSeed != 0;
boolean useAltHashing = sun.misc.VM.isBooted() &&
(capacity >= Holder.ALTERNATIVE_HASHING_THRESHOLD);
boolean switching = currentAltHashing ^ useAltHashing;
if (switching) {
hashSeed = useAltHashing
? sun.misc.Hashing.randomHashSeed(this)
: 0;
}
return switching;
}
// Handling for key == null
private V putForNullKey(V value) {
for (Entry<K,V> e = table[0]; e != null; e = e.next) {
if (e.key == null) {
V oldValue = e.value;
e.value = value;
e.recordAccess(this);
return oldValue;
}
}
modCount++;
addEntry(0, null, value, 0);
return null;
}
// Compute the hash
final int hash(Object k) {
int h = hashSeed;
if (0 != h && k instanceof String) {
return sun.misc.Hashing.stringHash32((String) k);
}
h ^= k.hashCode();
h ^= (h >>> 20) ^ (h >>> 12);
return h ^ (h >>> 7) ^ (h >>> 4);
}
// Map a hash to a bucket index
static int indexFor(int h, int length) {
// assert Integer.bitCount(length) == 1 : "length must be a non-zero power of 2";
return h & (length-1);
}
// Add an element
void addEntry(int hash, K key, V value, int bucketIndex) {
if ((size >= threshold) && (null != table[bucketIndex])) { // difference from 1.6: check whether a resize is needed before inserting
resize(2 * table.length);
hash = (null != key) ? hash(key) : 0;
bucketIndex = indexFor(hash, table.length);
}
createEntry(hash, key, value, bucketIndex);
}
// Resize method
void resize(int newCapacity) {
Entry[] oldTable = table;
int oldCapacity = oldTable.length;
if (oldCapacity == MAXIMUM_CAPACITY) {
threshold = Integer.MAX_VALUE;
return;
}
Entry[] newTable = new Entry[newCapacity];
transfer(newTable, initHashSeedAsNeeded(newCapacity));
table = newTable;
threshold = (int)Math.min(newCapacity * loadFactor, MAXIMUM_CAPACITY + 1);
}
// Move entries from the old array into the new one
void transfer(Entry[] newTable, boolean rehash) {
int newCapacity = newTable.length;
for (Entry<K,V> e : table) {
while(null != e) {
Entry<K,V> next = e.next;
if (rehash) {
e.hash = null == e.key ? 0 : hash(e.key);
}
int i = indexFor(e.hash, newCapacity);
e.next = newTable[i];
newTable[i] = e;
e = next;
}
}
}
// Create the entry (1.7 splits this step out of addEntry)
void createEntry(int hash, K key, V value, int bucketIndex) {
Entry<K,V> e = table[bucketIndex];
table[bucketIndex] = new Entry<>(hash, key, value, e);
size++;
}
// get method
public V get(Object key) {
if (key == null)
return getForNullKey();
Entry<K,V> entry = getEntry(key);
return null == entry ? null : entry.getValue();
}
// Look up the value stored for the null key
private V getForNullKey() {
if (size == 0) { // difference from 1.6: return early when the map is empty
return null;
}
for (Entry<K,V> e = table[0]; e != null; e = e.next) {
if (e.key == null)
return e.value;
}
return null;
}
// clear method
public void clear() {
modCount++;
Arrays.fill(table, null); // 1.7 delegates to Arrays.fill, but the principle is the same
size = 0;
}
public static void fill(Object[] a, Object val) {
for (int i = 0, len = a.length; i < len; i++)
a[i] = val;
}
// entrySet method: a Set view of the Entry objects stored in the array
private transient Set<Map.Entry<K,V>> entrySet = null;
public Set<Map.Entry<K,V>> entrySet() {
return entrySet0();
}
private Set<Map.Entry<K,V>> entrySet0() {
Set<Map.Entry<K,V>> es = entrySet;
return es != null ? es : (entrySet = new EntrySet());
}
// Backed by an iterator over the map's Entry objects
private final class EntrySet extends AbstractSet<Map.Entry<K,V>> {
public Iterator<Map.Entry<K,V>> iterator() {
return newEntryIterator();
}
public boolean contains(Object o) {
if (!(o instanceof Map.Entry))
return false;
Map.Entry<K,V> e = (Map.Entry<K,V>) o;
Entry<K,V> candidate = getEntry(e.getKey());
return candidate != null && candidate.equals(e);
}
public boolean remove(Object o) {
return removeMapping(o) != null;
}
public int size() {
return size;
}
public void clear() {
HashMap.this.clear();
}
}
// keySet method
public Set<K> keySet() {
Set<K> ks = keySet;
return (ks != null ? ks : (keySet = new KeySet()));
}
// Also backed by an iterator over the Entry objects, returning each key
private final class KeySet extends AbstractSet<K> {
public Iterator<K> iterator() {
return newKeyIterator();
}
public int size() {
return size;
}
public boolean contains(Object o) {
return containsKey(o);
}
public boolean remove(Object o) {
return HashMap.this.removeEntryForKey(o) != null;
}
public void clear() {
HashMap.this.clear();
}
}
// remove method
public V remove(Object key) {
Entry<K,V> e = removeEntryForKey(key);
return (e == null ? null : e.value);
}
final Entry<K,V> removeEntryForKey(Object key) {
if (size == 0) {
return null;
}
int hash = (key == null) ? 0 : hash(key);
int i = indexFor(hash, table.length);
Entry<K,V> prev = table[i];
Entry<K,V> e = prev;
while (e != null) {
Entry<K,V> next = e.next;
Object k;
if (e.hash == hash &&
((k = e.key) == key || (key != null && key.equals(k)))) {
modCount++;
size--;
if (prev == e)
table[i] = next;
else
prev.next = next;
e.recordRemoval(this);
return e;
}
prev = e;
e = next;
}
return e;
}
// size, isEmpty, and contains methods
public int size() {
return size;
}
public boolean isEmpty() {
return size == 0;
}
public boolean contains(Object o) {
return containsKey(o);
}
public boolean containsKey(Object key) {
return getEntry(key) != null;
}
final Entry<K,V> getEntry(Object key) {
if (size == 0) {
return null;
}
int hash = (key == null) ? 0 : hash(key);
for (Entry<K,V> e = table[indexFor(hash, table.length)];
e != null;
e = e.next) {
Object k;
if (e.hash == hash &&
((k = e.key) == key || (key != null && key.equals(k))))
return e;
}
return null;
}
Comparing the two, the core methods work essentially the same way in 1.6 and 1.7; 1.7 adds a few small optimizations. The other notable change is that, starting with 1.7, constructing a HashMap with the no-arg constructor no longer allocates the array up front; it is lazily initialized on first use.
JDK 1.6 no-arg constructor:
public HashMap() {
this.loadFactor = DEFAULT_LOAD_FACTOR;
threshold = (int)(DEFAULT_INITIAL_CAPACITY * DEFAULT_LOAD_FACTOR);
table = new Entry[DEFAULT_INITIAL_CAPACITY];
init();
}
JDK 1.7 no-arg constructor (the table is created later, inside put, when it is found to be empty):
public HashMap() {
this.loadFactor = DEFAULT_LOAD_FACTOR;
}
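The lazy allocation can be observed indirectly: a new HashMap does not allocate a real table until the first put. A small sketch that peeks at the internal table field via reflection (fragile and version-dependent, for illustration only; newer JDKs may require --add-opens java.base/java.util):
import java.lang.reflect.Field;
import java.util.HashMap;

public class LazyInitDemo {
    public static void main(String[] args) throws Exception {
        HashMap<String, String> map = new HashMap<>();
        Field tableField = HashMap.class.getDeclaredField("table");
        tableField.setAccessible(true);

        Object before = tableField.get(map);
        // 1.7 uses a shared zero-length EMPTY_TABLE, 1.8 uses null -- either way, no real array yet
        System.out.println(before == null || ((Object[]) before).length == 0); // true

        map.put("k", "v");
        System.out.println(((Object[]) tableField.get(map)).length); // 16: allocated on first put
    }
}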
HashMap in JDK 1.8: default initial capacity 16, load factor 0.75; the underlying structure is a Node array + linked lists + red-black trees:
// Default initial capacity
static final int DEFAULT_INITIAL_CAPACITY = 1 << 4; // aka 16
// Default load factor
static final float DEFAULT_LOAD_FACTOR = 0.75f;
// Threshold at which a bucket's linked list is converted to a red-black tree
static final int TREEIFY_THRESHOLD = 8;
// Threshold at which a tree is converted back to a linked list (checked again during resize)
static final int UNTREEIFY_THRESHOLD = 6;
// Minimum table capacity before buckets are treeified (below this, the table is resized instead)
static final int MIN_TREEIFY_CAPACITY = 64;
// Backing store: the array type changes from 1.7's Entry<K,V>[] to Node<K,V>[]
transient Node<K,V>[] table;
static class Node<K,V> implements Map.Entry<K,V> {
final int hash;
final K key;
V value;
Node<K,V> next;
Node(int hash, K key, V value, Node<K,V> next) {
this.hash = hash;
this.key = key;
this.value = value;
this.next = next;
} public final K getKey() { return key; }
public final V getValue() { return value; }
public final String toString() { return key + "=" + value; } public final int hashCode() {
return Objects.hashCode(key) ^ Objects.hashCode(value);
} public final V setValue(V newValue) {
V oldValue = value;
value = newValue;
return oldValue;
} public final boolean equals(Object o) {
if (o == this)
return true;
if (o instanceof Map.Entry) {
Map.Entry<?,?> e = (Map.Entry<?,?>)o;
if (Objects.equals(key, e.getKey()) &&
Objects.equals(value, e.getValue()))
return true;
}
return false;
}
}
// put method
public V put(K key, V value) {
return putVal(hash(key), key, value, false, true);
}
final V putVal(int hash, K key, V value, boolean onlyIfAbsent,
boolean evict) {
Node<K,V>[] tab; Node<K,V> p; int n, i;
if ((tab = table) == null || (n = tab.length) == 0)
n = (tab = resize()).length; // still starts by checking whether the array is empty and initializing it if so
if ((p = tab[i = (n - 1) & hash]) == null) // if the target bucket is empty, place the new node there
tab[i] = newNode(hash, key, value, null);
else { // otherwise a hash collision occurred: another key mapped to the same bucket
Node<K,V> e; K k;
if (p.hash == hash &&
((k = p.key) == key || (key != null && key.equals(k)))) // first check whether the key already exists; if so the new value will replace the old one
e = p;
else if (p instanceof TreeNode) // if the bucket head is a tree node, insert into the red-black tree
e = ((TreeNode<K,V>)p).putTreeVal(this, tab, hash, key, value);
else {
for (int binCount = 0; ; ++binCount) { // walk the bucket's linked list
if ((e = p.next) == null) {
p.next = newNode(hash, key, value, null);
if (binCount >= TREEIFY_THRESHOLD - 1) // if the chain has reached the threshold, convert it to a red-black tree
treeifyBin(tab, hash); // conversion method
break;
}
if (e.hash == hash &&
((k = e.key) == key || (key != null && key.equals(k))))
break;
p = e; // e is p.next, so the new node ends up at the end of the chain -- 1.8 uses tail insertion
}
}
if (e != null) { // an existing mapping for this key: replace the old value with the new one and return oldValue
V oldValue = e.value;
if (!onlyIfAbsent || oldValue == null)
e.value = value;
afterNodeAccess(e);
return oldValue;
}
}
++modCount;
if (++size > threshold) // if size exceeds the threshold (capacity * load factor), resize
resize();
afterNodeInsertion(evict);
return null;
}
// resize(): initializes an empty table and handles expansion
final Node<K,V>[] resize() {
Node<K,V>[] oldTab = table;
int oldCap = (oldTab == null) ? 0 : oldTab.length;
int oldThr = threshold;
int newCap, newThr = 0;
if (oldCap > 0) {
if (oldCap >= MAXIMUM_CAPACITY) {
threshold = Integer.MAX_VALUE;
return oldTab;
}
else if ((newCap = oldCap << 1) < MAXIMUM_CAPACITY &&
oldCap >= DEFAULT_INITIAL_CAPACITY)
newThr = oldThr << 1; // double threshold
}
else if (oldThr > 0) // initial capacity was placed in threshold
newCap = oldThr;
else { // zero initial threshold signifies using defaults
newCap = DEFAULT_INITIAL_CAPACITY;
newThr = (int)(DEFAULT_LOAD_FACTOR * DEFAULT_INITIAL_CAPACITY);
}
if (newThr == 0) {
float ft = (float)newCap * loadFactor;
newThr = (newCap < MAXIMUM_CAPACITY && ft < (float)MAXIMUM_CAPACITY ?
(int)ft : Integer.MAX_VALUE);
}
threshold = newThr;
@SuppressWarnings({"rawtypes","unchecked"})
Node<K,V>[] newTab = (Node<K,V>[])new Node[newCap];
table = newTab;
if (oldTab != null) {
for (int j = 0; j < oldCap; ++j) {
Node<K,V> e;
if ((e = oldTab[j]) != null) {
oldTab[j] = null;
if (e.next == null)
newTab[e.hash & (newCap - 1)] = e;
else if (e instanceof TreeNode)
((TreeNode<K,V>)e).split(this, newTab, j, oldCap);
else { // preserve order
Node<K,V> loHead = null, loTail = null;
Node<K,V> hiHead = null, hiTail = null;
Node<K,V> next;
do {
next = e.next;
if ((e.hash & oldCap) == 0) {
if (loTail == null)
loHead = e;
else
loTail.next = e;
loTail = e;
}
else {
if (hiTail == null)
hiHead = e;
else
hiTail.next = e;
hiTail = e;
}
} while ((e = next) != null);
if (loTail != null) {
loTail.next = null;
newTab[j] = loHead;
}
if (hiTail != null) {
hiTail.next = null;
newTab[j + oldCap] = hiHead;
}
}
}
}
}
return newTab;
}
// Insert into the red-black tree
final TreeNode<K,V> putTreeVal(HashMap<K,V> map, Node<K,V>[] tab,
int h, K k, V v) {
Class<?> kc = null;
boolean searched = false;
TreeNode<K,V> root = (parent != null) ? root() : this;
for (TreeNode<K,V> p = root;;) {
int dir, ph; K pk;
if ((ph = p.hash) > h)
dir = -1;
else if (ph < h)
dir = 1;
else if ((pk = p.key) == k || (k != null && k.equals(pk)))
return p;
else if ((kc == null &&
(kc = comparableClassFor(k)) == null) ||
(dir = compareComparables(kc, k, pk)) == 0) {
if (!searched) {
TreeNode<K,V> q, ch;
searched = true;
if (((ch = p.left) != null &&
(q = ch.find(h, k, kc)) != null) ||
((ch = p.right) != null &&
(q = ch.find(h, k, kc)) != null))
return q;
}
dir = tieBreakOrder(k, pk);
} TreeNode<K,V> xp = p;
if ((p = (dir <= 0) ? p.left : p.right) == null) {
Node<K,V> xpn = xp.next;
TreeNode<K,V> x = map.newTreeNode(h, k, v, xpn);
if (dir <= 0)
xp.left = x;
else
xp.right = x;
xp.next = x;
x.parent = x.prev = xp;
if (xpn != null)
((TreeNode<K,V>)xpn).prev = x;
moveRootToFront(tab, balanceInsertion(root, x));
return null;
}
}
}
// Convert a bucket's linked list into a red-black tree
final void treeifyBin(Node<K,V>[] tab, int hash) {
int n, index; Node<K,V> e;
if (tab == null || (n = tab.length) < MIN_TREEIFY_CAPACITY)
resize();
else if ((e = tab[index = (n - 1) & hash]) != null) {
TreeNode<K,V> hd = null, tl = null;
do {
TreeNode<K,V> p = replacementTreeNode(e, null);
if (tl == null)
hd = p;
else {
p.prev = tl;
tl.next = p;
}
tl = p;
} while ((e = e.next) != null);
if ((tab[index] = hd) != null)
hd.treeify(tab);
}
}
// Expansion is handled by the same resize() method shown above: when oldCap > 0 the capacity and threshold are doubled.
// get method: adds extra checks to guard against bad states
public V get(Object key) {
Node<K,V> e;
return (e = getNode(hash(key), key)) == null ? null : e.value;
}
final Node<K,V> getNode(int hash, Object key) {
Node<K,V>[] tab; Node<K,V> first, e; int n; K k;
if ((tab = table) != null && (n = tab.length) > 0 &&
(first = tab[(n - 1) & hash]) != null) {
if (first.hash == hash && // always check first node
((k = first.key) == key || (key != null && key.equals(k))))
return first;
if ((e = first.next) != null) {
if (first instanceof TreeNode)
return ((TreeNode<K,V>)first).getTreeNode(hash, key); // if the bucket is a red-black tree, use the tree lookup
do {
if (e.hash == hash &&
((k = e.key) == key || (key != null && key.equals(k))))
return e;
} while ((e = e.next) != null);
}
}
return null;
}
// 1.8 drops the contains method in favor of containsKey and containsValue
public boolean containsKey(Object key) {
return getNode(hash(key), key) != null;
}
public boolean containsValue(Object value) {
Node<K,V>[] tab; V v;
if ((tab = table) != null && size > 0) {
for (int i = 0; i < tab.length; ++i) {
for (Node<K,V> e = tab[i]; e != null; e = e.next) {
if ((v = e.value) == value ||
(value != null && value.equals(v)))
return true;
}
}
}
return false;
}
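Usage-wise the difference matters for cost: containsKey is a hashed bucket lookup, while containsValue scans every bucket, as the loops above show. A quick sketch:
import java.util.HashMap;
import java.util.Map;

public class ContainsDemo {
    public static void main(String[] args) {
        Map<String, Integer> map = new HashMap<>();
        map.put("a", 1);
        map.put("b", null);

        System.out.println(map.containsKey("a"));    // true, expected O(1): hash + bucket lookup
        System.out.println(map.containsKey("c"));    // false
        System.out.println(map.containsValue(1));    // true, but scans the whole table
        System.out.println(map.containsValue(null)); // true: null values are allowed
    }
}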
// entrySet method
public Set<Map.Entry<K,V>> entrySet() {
Set<Map.Entry<K,V>> es;
return (es = entrySet) == null ? (entrySet = new EntrySet()) : es;
}
final class EntrySet extends AbstractSet<Map.Entry<K,V>> {
public final int size() { return size; }
public final void clear() { HashMap.this.clear(); }
public final Iterator<Map.Entry<K,V>> iterator() {
return new EntryIterator();
}
public final boolean contains(Object o) {
if (!(o instanceof Map.Entry))
return false;
Map.Entry<?,?> e = (Map.Entry<?,?>) o;
Object key = e.getKey();
Node<K,V> candidate = getNode(hash(key), key);
return candidate != null && candidate.equals(e);
}
public final boolean remove(Object o) {
if (o instanceof Map.Entry) {
Map.Entry<?,?> e = (Map.Entry<?,?>) o;
Object key = e.getKey();
Object value = e.getValue();
return removeNode(hash(key), key, value, true, true) != null;
}
return false;
}
public final Spliterator<Map.Entry<K,V>> spliterator() {
return new EntrySpliterator<>(HashMap.this, 0, -1, 0, 0);
}
public final void forEach(Consumer<? super Map.Entry<K,V>> action) {
Node<K,V>[] tab;
if (action == null)
throw new NullPointerException();
if (size > 0 && (tab = table) != null) {
int mc = modCount;
for (int i = 0; i < tab.length; ++i) {
for (Node<K,V> e = tab[i]; e != null; e = e.next)
action.accept(e);
}
if (modCount != mc)
throw new ConcurrentModificationException();
}
}
}
// keySet method
public Set<K> keySet() {
Set<K> ks;
return (ks = keySet) == null ? (keySet = new KeySet()) : ks;
}
final class KeySet extends AbstractSet<K> {
public final int size() { return size; }
public final void clear() { HashMap.this.clear(); }
public final Iterator<K> iterator() { return new KeyIterator(); }
public final boolean contains(Object o) { return containsKey(o); }
public final boolean remove(Object key) {
return removeNode(hash(key), key, null, false, true) != null;
}
public final Spliterator<K> spliterator() {
return new KeySpliterator<>(HashMap.this, 0, -1, 0, 0);
}
public final void forEach(Consumer<? super K> action) {
Node<K,V>[] tab;
if (action == null)
throw new NullPointerException();
if (size > 0 && (tab = table) != null) {
int mc = modCount;
for (int i = 0; i < tab.length; ++i) {
for (Node<K,V> e = tab[i]; e != null; e = e.next)
action.accept(e.key);
}
if (modCount != mc)
throw new ConcurrentModificationException();
}
}
}
// remove method
public boolean remove(Object key, Object value) {
return removeNode(hash(key), key, value, true, true) != null;
}
final Node<K,V> removeNode(int hash, Object key, Object value,
boolean matchValue, boolean movable) {
Node<K,V>[] tab; Node<K,V> p; int n, index;
if ((tab = table) != null && (n = tab.length) > 0 &&
(p = tab[index = (n - 1) & hash]) != null) {
Node<K,V> node = null, e; K k; V v;
if (p.hash == hash &&
((k = p.key) == key || (key != null && key.equals(k))))
node = p;
else if ((e = p.next) != null) {
if (p instanceof TreeNode)
node = ((TreeNode<K,V>)p).getTreeNode(hash, key);
else {
do {
if (e.hash == hash &&
((k = e.key) == key ||
(key != null && key.equals(k)))) {
node = e;
break;
}
p = e;
} while ((e = e.next) != null);
}
}
if (node != null && (!matchValue || (v = node.value) == value ||
(value != null && value.equals(v)))) {
if (node instanceof TreeNode)
((TreeNode<K,V>)node).removeTreeNode(this, tab, movable);
else if (node == p)
tab[index] = node.next;
else
p.next = node.next;
++modCount;
--size;
afterNodeRemoval(node);
return node;
}
}
return null;
}
// size and isEmpty methods
transient int size; // maintained by put and remove: incremented on insert, decremented on removal
public int size() {
return size;
}
public boolean isEmpty() {
return size == 0;
}
Comparing the versions, HashMap changed substantially in JDK 1.8: besides many small optimizations, it introduced red-black trees to keep lookups fast when hash collisions pile up.
Supplement:
Why convert to a red-black tree?
A red-black tree offers efficient lookups. When a bucket holds only a few entries, a linked list is fine, but when the chain grows long (many hash collisions), put, get, and similar operations must traverse the whole list. The red-black tree keeps element lookup efficient even when collisions are frequent.
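To see why this matters, imagine a key type whose hashCode always collides, so every entry lands in one bucket. A hedged sketch (the BadKey class is invented for the example): with a JDK 1.8 HashMap, lookups in that bucket degrade to roughly O(log n) once it treeifies, instead of the O(n) chain walk a pure linked list would require.
import java.util.HashMap;
import java.util.Map;

public class CollisionDemo {
    // Every BadKey hashes to the same bucket; equality still depends on id.
    static final class BadKey implements Comparable<BadKey> {
        final int id;
        BadKey(int id) { this.id = id; }
        @Override public int hashCode() { return 42; }
        @Override public boolean equals(Object o) { return o instanceof BadKey && ((BadKey) o).id == id; }
        @Override public int compareTo(BadKey other) { return Integer.compare(id, other.id); }
    }

    public static void main(String[] args) {
        Map<BadKey, Integer> map = new HashMap<>();
        for (int i = 0; i < 10_000; i++) {
            map.put(new BadKey(i), i); // all entries collide into one bin, which treeifies past 8
        }
        System.out.println(map.get(new BadKey(7_777))); // 7777, still found efficiently
    }
}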
Why is the treeify threshold 8?
Converting a linked list to a red-black tree costs extra space and time. Based on a Poisson distribution, the JDK authors calculated that the probability of a bucket's chain reaching length 8 is already tiny, about 0.00000006, so the conversion is only triggered at that point.
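The 0.00000006 figure comes from a Poisson distribution with parameter lambda of about 0.5 (the average bucket fill the JDK source comment cites for the 0.75 load factor): P(X = k) = lambda^k * e^(-lambda) / k!. A quick check of that number (standalone arithmetic, not JDK code):
public class PoissonCheck {
    // P(X = k) for a Poisson distribution with parameter lambda.
    static double poisson(double lambda, int k) {
        double factorial = 1.0;
        for (int i = 2; i <= k; i++) factorial *= i;
        return Math.pow(lambda, k) * Math.exp(-lambda) / factorial;
    }

    public static void main(String[] args) {
        System.out.printf("%.8f%n", poisson(0.5, 8)); // ~0.00000006
    }
}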