Recently I ran into some high-concurrency scenarios at work that need locking to keep the business logic correct, without the locking costing too much performance. The initial idea was to lock data by keys such as timestamps and ids, so that different pieces of data could still be processed concurrently. The lock granularity offered by Java's own API is too coarse and can hardly satisfy both needs at once, so I wrote a few simple extensions myself...
1. Segment lock
Borrowing the segmentation idea from ConcurrentHashMap: create a fixed number of locks up front and, on each use, return the lock that the key maps to. This is the simplest and highest-performing of the several implementations, and the one ultimately adopted. The code is as follows:
```java
import java.util.HashMap;
import java.util.concurrent.locks.ReentrantLock;

/**
 * Segment lock: the system provides a fixed number of underlying locks and
 * picks one based on the hash value of the object passed in.
 * Note: if the hash value of the locked object changes between lock() and
 * unlock(), the lock may never be released!!!
 */
public class SegmentLock<T> {
    private Integer segments = 16; // default number of segments
    private final HashMap<Integer, ReentrantLock> lockMap = new HashMap<>();

    public SegmentLock() {
        init(null, false);
    }

    public SegmentLock(Integer counts, boolean fair) {
        init(counts, fair);
    }

    private void init(Integer counts, boolean fair) {
        if (counts != null) {
            segments = counts;
        }
        for (int i = 0; i < segments; i++) {
            lockMap.put(i, new ReentrantLock(fair));
        }
    }

    public void lock(T key) {
        // >>> 1 keeps the index non-negative even for Integer.MIN_VALUE
        ReentrantLock lock = lockMap.get((key.hashCode() >>> 1) % segments);
        lock.lock();
    }

    public void unlock(T key) {
        ReentrantLock lock = lockMap.get((key.hashCode() >>> 1) % segments);
        lock.unlock();
    }
}
```
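A minimal usage sketch (the userId key and the work inside the critical section are invented for illustration, not part of the original): lock and unlock must be called with keys of the same hash value, per the warning in the class comment.

```java
// Hypothetical usage of SegmentLock; userId and the critical section are
// illustrative assumptions.
SegmentLock<String> segmentLock = new SegmentLock<>();

void updateBalance(String userId) {
    segmentLock.lock(userId);
    try {
        // all keys that hash into this segment are serialized here
    } finally {
        segmentLock.unlock(userId); // unlock with the same (hash-stable) key
    }
}
```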
2. Hash lock
The second strategy, built on top of the segment lock above, achieves truly fine-grained locking: every object with a distinct hash value gets its own independent lock. In testing, when the locked code executes quickly, this is about 30% slower than the segment lock; with long-running critical sections the relative cost should feel smaller. The code is as follows:
```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.locks.ReentrantLock;

public class HashLock<T> {
    private boolean isFair = false;
    private final SegmentLock<T> segmentLock = new SegmentLock<>(); // guards lock creation/removal
    private final ConcurrentHashMap<T, LockInfo> lockMap = new ConcurrentHashMap<>();

    public HashLock() {
    }

    public HashLock(boolean fair) {
        isFair = fair;
    }

    public void lock(T key) {
        LockInfo lockInfo;
        segmentLock.lock(key);
        try {
            lockInfo = lockMap.get(key);
            if (lockInfo == null) {
                lockInfo = new LockInfo(isFair);
                lockMap.put(key, lockInfo);
            } else {
                lockInfo.count.incrementAndGet();
            }
        } finally {
            segmentLock.unlock(key);
        }
        lockInfo.lock.lock();
    }

    public void unlock(T key) {
        LockInfo lockInfo = lockMap.get(key);
        if (lockInfo.count.get() == 1) {
            segmentLock.lock(key);
            try {
                if (lockInfo.count.get() == 1) {
                    lockMap.remove(key); // last user: drop the entry
                }
            } finally {
                segmentLock.unlock(key);
            }
        }
        lockInfo.count.decrementAndGet();
        lockInfo.unlock();
    }

    private static class LockInfo {
        public ReentrantLock lock;
        public AtomicInteger count = new AtomicInteger(1); // holders + waiters for this key

        private LockInfo(boolean fair) {
            this.lock = new ReentrantLock(fair);
        }

        public void lock() {
            this.lock.lock();
        }

        public void unlock() {
            this.lock.unlock();
        }
    }
}
```
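To show the fine-grained behavior in use, a sketch (recordId and the surrounding method are invented): callers with equal keys serialize against each other, while different keys proceed in parallel.

```java
// Hypothetical usage of HashLock; recordId is an illustrative assumption.
HashLock<Long> hashLock = new HashLock<>();

void process(long recordId) {
    hashLock.lock(recordId);
    try {
        // only callers passing an equal recordId block each other;
        // other ids run concurrently
    } finally {
        hashLock.unlock(recordId);
    }
}
```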
3. Weak reference lock
The hash lock still feels flawed, since it has to bring in the segment lock to synchronize the creation and destruction of the per-key locks, so I wrote a third lock in pursuit of better performance and finer granularity. The idea is to hold each lock through a Java weak reference and hand its destruction over to the JVM's garbage collector, avoiding the extra bookkeeping.
It is a little regrettable that, because ConcurrentHashMap is used as the lock container, segment locking is never truly left behind. This lock performs roughly 10% better than HashLock. The code:
```java
import java.lang.ref.Reference;
import java.lang.ref.ReferenceQueue;
import java.lang.ref.WeakReference;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantLock;

/**
 * Weak reference lock: provides an independent lock for each distinct hash.
 */
public class WeakHashLock<T> {
    private ConcurrentHashMap<T, WeakLockRef<T, ReentrantLock>> lockMap = new ConcurrentHashMap<>();
    private ReferenceQueue<ReentrantLock> queue = new ReferenceQueue<>();

    public ReentrantLock get(T key) {
        if (lockMap.size() > 1000) {
            clearEmptyRef();
        }
        WeakReference<ReentrantLock> lockRef = lockMap.get(key);
        ReentrantLock lock = (lockRef == null ? null : lockRef.get());
        while (lock == null) {
            lockMap.putIfAbsent(key, new WeakLockRef<>(new ReentrantLock(), queue, key));
            lockRef = lockMap.get(key);
            lock = (lockRef == null ? null : lockRef.get());
            if (lock != null) {
                return lock;
            }
            // a stale (cleared) ref may still occupy the slot; purge and retry
            clearEmptyRef();
        }
        return lock;
    }

    @SuppressWarnings("unchecked")
    private void clearEmptyRef() {
        Reference<? extends ReentrantLock> ref;
        while ((ref = queue.poll()) != null) {
            WeakLockRef<T, ? extends ReentrantLock> weakLockRef = (WeakLockRef<T, ? extends ReentrantLock>) ref;
            lockMap.remove(weakLockRef.key);
        }
    }

    private static final class WeakLockRef<T, K> extends WeakReference<K> {
        final T key;

        private WeakLockRef(K referent, ReferenceQueue<? super K> q, T key) {
            super(referent, q);
            this.key = key;
        }
    }
}
```
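One point worth spelling out with a usage sketch (hypothetical; orderId is invented for illustration): the caller must hold the returned lock in a local variable for the duration of the critical section, since that strong reference is the only thing preventing the garbage collector from reclaiming the lock mid-use.

```java
// Hypothetical usage of WeakHashLock; orderId is an illustrative assumption.
WeakHashLock<String> weakHashLock = new WeakHashLock<>();

void handle(String orderId) {
    ReentrantLock lock = weakHashLock.get(orderId); // strong local ref keeps the lock alive
    lock.lock();
    try {
        // critical section for this orderId only
    } finally {
        lock.unlock();
    }
}
```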
Postscript
At first I wanted to implement the fine-grained locks myself with LockSupport and AQS, but as I wrote them I found the result differed little from Java's native locks, so I gave that up and fell back to wrapping Java's own locks; the detour wasted a fair amount of time.
In fact, implementing these fine-grained locks led to new ideas, such as using the same segmentation approach to submit data to dedicated threads, which could cut down the time a large number of threads spend blocked. I'll leave that for future exploration, but a rough sketch of the idea follows...
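A minimal sketch of what that might look like, assuming one single-threaded executor per segment (the class name and API are illustrative assumptions, not a finished design): tasks for the same key always land on the same worker and therefore run in order, with no explicit locking at all.

```java
// Illustrative sketch only: route same-key work to a dedicated worker thread.
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class SegmentDispatcher<T> {
    private final ExecutorService[] workers;

    public SegmentDispatcher(int segments) {
        workers = new ExecutorService[segments];
        for (int i = 0; i < segments; i++) {
            workers[i] = Executors.newSingleThreadExecutor();
        }
    }

    public void submit(T key, Runnable task) {
        // same-key tasks hash to the same worker, hence execute sequentially
        workers[(key.hashCode() >>> 1) % workers.length].submit(task);
    }
}
```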