
ConcurrentHashMap is another implementation of the Map interface, like HashMap and Hashtable.
Here, a lock is not taken on the whole map while adding or updating an entry; it is taken only on a segment of the map.
So now what is a segment?
The ConcurrentHashMap is divided into a number of segments when it is created. By default, 16 segments are created when a ConcurrentHashMap is initialized.
So 16 threads can work on the ConcurrentHashMap at the same time, performing insertions and updates without interfering with one another. Each thread holds a lock only on the segment it is working on. This is how ConcurrentHashMap allows concurrent access to the map.
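As a quick illustration of this concurrent access (a minimal usage sketch, not taken from the JDK source), several threads can insert into the same ConcurrentHashMap at once without any external locking:

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ConcurrentPutDemo {
    public static void main(String[] args) throws InterruptedException {
        ConcurrentHashMap<String, Integer> map = new ConcurrentHashMap<>();
        ExecutorService pool = Executors.newFixedThreadPool(4);
        // Each thread writes its own range of keys; internally only the
        // affected segment/bin is locked, never the whole map.
        for (int t = 0; t < 4; t++) {
            final int id = t;
            pool.submit(() -> {
                for (int i = 0; i < 1000; i++) {
                    map.put("key-" + id + "-" + i, i);
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
        System.out.println(map.size()); // prints 4000
    }
}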
Let’s look at the code implementation.
public ConcurrentHashMap(int initialCapacity, float loadFactor) {
    this(initialCapacity, loadFactor, 1);
}

public ConcurrentHashMap(int initialCapacity,
                         float loadFactor, int concurrencyLevel) {
    if (!(loadFactor > 0.0f) || initialCapacity < 0 || concurrencyLevel <= 0)
        throw new IllegalArgumentException();
    if (initialCapacity < concurrencyLevel)   // Use at least as many bins
        initialCapacity = concurrencyLevel;   // as estimated threads
    long size = (long)(1.0 + (long)initialCapacity / loadFactor);
    int cap = (size >= (long)MAXIMUM_CAPACITY) ?
        MAXIMUM_CAPACITY : tableSizeFor((int)size);
    this.sizeCtl = cap;
}
Here the map is initialized with an initialCapacity, a loadFactor and a concurrencyLevel.
Concurrency-Level: The estimated number of threads that will update the map concurrently; you can also think of it as the number of initial segments.
Load-Factor: The threshold, as a fraction of the capacity, used to control resizing of the map.
Initial Capacity: The number of elements the map is expected to accommodate initially.
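For example (an illustrative sketch with arbitrarily chosen values, not from the original article), a map expected to hold about 64 entries and be written by up to 8 threads could be created like this:

import java.util.concurrent.ConcurrentHashMap;

// initialCapacity = 64, loadFactor = 0.75f, concurrencyLevel = 8
ConcurrentHashMap<String, Integer> map = new ConcurrentHashMap<>(64, 0.75f, 8);

// Per the constructor above: size = 1 + 64/0.75 ≈ 86, which tableSizeFor()
// rounds up to the next power of two, so the initial table capacity is 128.

// The no-arg constructor simply uses the default settings instead.
ConcurrentHashMap<String, Integer> defaults = new ConcurrentHashMap<>();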
Now let’s look at the implementation of the segment.
private static final int DEFAULT_CONCURRENCY_LEVEL = 16;

private void writeObject(java.io.ObjectOutputStream s)
    throws java.io.IOException {
    // For serialization compatibility
    // Emulate segment calculation from previous version of this class
    int sshift = 0;
    int ssize = 1;
    while (ssize < DEFAULT_CONCURRENCY_LEVEL) {
        ++sshift;
        ssize <<= 1;
    }
    int segmentShift = 32 - sshift;
    int segmentMask = ssize - 1;
    @SuppressWarnings("unchecked")
    Segment<K,V>[] segments = (Segment<K,V>[])
        new Segment<?,?>[DEFAULT_CONCURRENCY_LEVEL];
    for (int i = 0; i < segments.length; ++i)
        segments[i] = new Segment<K,V>(LOAD_FACTOR);
    s.putFields().put("segments", segments);
    s.putFields().put("segmentShift", segmentShift);
    s.putFields().put("segmentMask", segmentMask);
    s.writeFields();

    Node<K,V>[] t;
    if ((t = table) != null) {
        Traverser<K,V> it = new Traverser<K,V>(t, t.length, 0, t.length);
        for (Node<K,V> p; (p = it.advance()) != null; ) {
            s.writeObject(p.key);
            s.writeObject(p.val);
        }
    }
    s.writeObject(null);
    s.writeObject(null);
    segments = null; // throw away
}
The default concurrency level is set to 16, so per the above code 16 segments are created.
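To make the bit arithmetic above concrete, here is a small standalone sketch (not JDK code) that reproduces the same loop for a concurrency level of 16 and prints the resulting values:

public class SegmentMathDemo {
    public static void main(String[] args) {
        int concurrencyLevel = 16;
        int sshift = 0;
        int ssize = 1;
        // Round ssize up to the next power of two >= concurrencyLevel
        while (ssize < concurrencyLevel) {
            ++sshift;
            ssize <<= 1;
        }
        System.out.println("segments     = " + ssize);         // 16
        System.out.println("segmentShift = " + (32 - sshift)); // 28
        System.out.println("segmentMask  = " + (ssize - 1));   // 15 (0x0F)
    }
}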
Now let’s look at what happens when an element is put into the ConcurrentHashMap.
int hashVal = hash(key);

static int hash(Object x) {
    int h = x.hashCode();
    return (h << 7) - h + (h >>> 9) + (h >>> 17);
}
First, the hash value of the key is calculated as shown above.
Then, using that hash value, we find the segment in which that particular entry will be stored.
Segment seg = segments[hash & segmentMask]; // segmentMask = number of segments - 1, e.g. 0x0F for 16 segments
Now that key is put into that segment. As discussed earlier, only that particular segment is locked while an insertion or update takes place in it, so synchronization is used to take the lock on the segment.
// Take a lock on that segment using the synchronized keyword
synchronized (seg) {
    int index = hash & table.length - 1;          // index is derived from the hash value
    Entry first = table[index];                   // table is an Entry array; this is the head of the chain at that index
    for (Entry e = first; e != null; e = e.next) {
        if ((e.hash == hash) && eq(key, e.key)) { // if the key already exists, update the value and return the old one (standard Map behaviour)
            Object oldValue = e.value;
            e.value = value;
            return oldValue;
        }
    }
    // Otherwise add a new entry at the head of the chain in that segment
    Entry newEntry = new Entry(hash, key, value, first);
    table[index] = newEntry;
}
Similar behaviour applies when we remove an entry or update the map: the synchronized block is taken on that segment so that the lock is held while the change is made.
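For completeness, a remove would follow the same pattern; here is a simplified sketch in the same pseudocode style as the put example above (not the actual JDK implementation):

synchronized (seg) {
    int index = hash & table.length - 1;
    Entry prev = null;
    for (Entry e = table[index]; e != null; prev = e, e = e.next) {
        if ((e.hash == hash) && eq(key, e.key)) {
            // Unlink the matching entry from the chain and return its value
            if (prev == null)
                table[index] = e.next; // removing the head of the chain
            else
                prev.next = e.next;
            return e.value;
        }
    }
    return null; // key not present in this segment
}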