Prior to Java 5.0, the only mechanisms for coordinating access to shared objects were synchronized and volatile. The synchronized keyword implements built-in locks, while the volatile keyword ensures memory visibility across threads. In most cases these mechanisms do the job well, but they cannot support some more advanced features: you cannot interrupt a thread that is waiting to acquire a lock, you cannot attempt a lock acquisition with a time limit, and you cannot implement non-block-structured locking. These more flexible locking mechanisms usually offer better liveness or performance. Therefore Java 5.0 added a new mechanism: ReentrantLock. The ReentrantLock class implements the Lock interface and provides the same mutual exclusion and memory visibility as synchronized; under the hood it implements thread synchronization through AQS. Compared with built-in locks, ReentrantLock not only provides a richer locking mechanism but is also not inferior in performance (it was even better than built-in locks in earlier JDK versions). Having covered so many advantages of ReentrantLock, let's open its source code and look at the concrete implementation.
1. Introduction to the synchronized keyword
Java provides built-in locks to support multi-thread synchronization. The JVM identifies a synchronized code block by the synchronized keyword. When a thread enters the synchronized block it automatically acquires the lock, and when it exits the block the lock is automatically released; while one thread holds the lock, other threads are blocked. Every Java object can serve as a lock for synchronization. The synchronized keyword can modify instance methods, static methods, and code blocks. When it modifies an instance method or a static method, the lock is the instance the method belongs to or the Class object, respectively; when it modifies a code block, an object must be supplied explicitly as the lock. The reason every Java object can be used as a lock is that a monitor object is associated with the object header. When a thread enters the synchronized block it automatically acquires the monitor, and when it exits it automatically releases the monitor; while the monitor is held, other threads are blocked. These synchronization operations are implemented inside the JVM, but there are still some differences between how synchronized methods and synchronized blocks are implemented. A synchronized method is implicitly synchronized, that is, it is not controlled through bytecode instructions: the JVM determines whether a method is synchronized from the ACC_SYNCHRONIZED access flag in the method table. A synchronized block, by contrast, is explicitly synchronized: the monitorenter and monitorexit bytecode instructions control the thread's acquisition and release of the monitor. The monitor object holds a _count field internally.
A _count equal to 0 means the monitor is not held, and a _count greater than 0 means it is held. Each time the holding thread re-enters, _count is incremented by 1, and each time it exits, _count is decremented by 1. This is how built-in lock reentrancy is implemented. In addition, the monitor object contains two queues, _EntryList and _WaitSet, which correspond to AQS's synchronization queue and condition queue. When a thread fails to acquire the lock it blocks in _EntryList; when the lock object's wait method is called, the thread enters _WaitSet to wait. This is how built-in locks implement thread synchronization and condition waiting.
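The reentrancy described above can be observed directly from Java code. Below is a minimal sketch (the class and field names are my own, for illustration): both methods synchronize on the same object, so outer() can call inner() without deadlocking, the monitor's _count simply going 1 → 2 → 1 → 0.

```java
// Minimal sketch of built-in lock reentrancy. Both methods lock the
// same object's monitor, so the nested call does not deadlock.
public class ReentrantMonitor {
    private int calls = 0;

    public synchronized int outer() {
        calls++;          // monitor is held once here
        return inner();   // re-enters the same monitor
    }

    public synchronized int inner() {
        calls++;
        return calls;     // 2 when reached through outer()
    }
}
```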
2. Comparison between ReentrantLock and synchronized
The synchronized keyword is the built-in lock mechanism provided by Java; its synchronization operations are implemented by the underlying JVM. ReentrantLock is an explicit lock provided by the java.util.concurrent package, and its synchronization operations are powered by the AQS synchronizer. ReentrantLock provides the same locking and memory semantics as built-in locks, and in addition offers other features including timed lock waiting, interruptible lock waiting, fair locking, and non-block-structured locking. ReentrantLock also had certain performance advantages in early JDK versions. Since ReentrantLock has so many advantages, why should we still use the synchronized keyword? In fact, many people do use ReentrantLock to replace synchronized. However, built-in locks still have their own advantages: they are familiar to many developers and are simpler and more compact to use. Because an explicit lock must be manually released with unlock in a finally block, it is also relatively safer to use built-in locks. At the same time, future performance improvements are more likely to favor synchronized than ReentrantLock: because synchronized is a built-in property of the JVM, the JVM can perform optimizations such as lock elision for thread-confined lock objects and lock coarsening that increases lock granularity to eliminate synchronization, and such optimizations are unlikely for library-based locks. So ReentrantLock should be used when its advanced features are needed, including timed, pollable, and interruptible lock acquisition, fair queuing, and non-block-structured locking; otherwise, synchronized should be preferred.
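As a concrete illustration of one advanced feature mentioned above, here is a sketch of timed lock acquisition with tryLock. The class and method names are my own, and the 100 ms timeout is arbitrary; the point is that the thread gives up after the timeout instead of blocking forever, which is impossible with synchronized.

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

// Sketch of timed lock acquisition, one of the features built-in
// locks lack. Names and the timeout value are illustrative.
public class TimedLockDemo {
    private static final ReentrantLock lock = new ReentrantLock();

    public static boolean doWithTimeout() {
        try {
            // Wait at most 100 ms for the lock instead of blocking forever
            if (lock.tryLock(100, TimeUnit.MILLISECONDS)) {
                try {
                    return true; // critical section would go here
                } finally {
                    lock.unlock();
                }
            }
        } catch (InterruptedException e) {
            // Timed acquisition is also interruptible
            Thread.currentThread().interrupt();
        }
        return false; // lock was not available in time
    }
}
```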
3. Operations of acquiring and releasing locks
Let's first look at sample code that uses ReentrantLock for locking.
```java
public void doSomething() {
    // The default constructor creates a non-fair lock
    ReentrantLock lock = new ReentrantLock();
    // Acquire the lock before entering the critical section
    lock.lock();
    try {
        // Perform the operation...
    } finally {
        // Always release the lock in finally
        lock.unlock();
    }
}
```
The following is the API for acquiring and releasing locks.
```java
// Acquire the lock
public void lock() {
    sync.lock();
}

// Release the lock
public void unlock() {
    sync.release(1);
}
```
You can see that acquiring and releasing the lock are delegated to the Sync object's lock method and release method, respectively.
```java
public class ReentrantLock implements Lock, java.io.Serializable {

    private final Sync sync;

    abstract static class Sync extends AbstractQueuedSynchronizer {
        abstract void lock();
    }

    // Synchronizer implementing the non-fair lock
    static final class NonfairSync extends Sync {
        final void lock() { ... }
    }

    // Synchronizer implementing the fair lock
    static final class FairSync extends Sync {
        final void lock() { ... }
    }
}
```
Each ReentrantLock object holds a reference of type Sync. Sync is an abstract inner class that extends AbstractQueuedSynchronizer, and its lock method is abstract. ReentrantLock's sync field is assigned during construction. Let's look at what the two constructors of ReentrantLock do.
```java
// Default no-argument constructor
public ReentrantLock() {
    sync = new NonfairSync();
}

// Constructor with a fairness parameter
public ReentrantLock(boolean fair) {
    sync = fair ? new FairSync() : new NonfairSync();
}
```
Calling the default no-argument constructor assigns a NonfairSync instance to sync, making the lock non-fair. The parameterized constructor lets the caller choose whether a FairSync or a NonfairSync instance is assigned to sync. NonfairSync and FairSync both extend the Sync class and override the lock() method, so fair and non-fair locks differ in how they acquire the lock, which we will discuss below. Let's first look at the operation of releasing the lock. Each call to unlock() simply executes sync.release(1), which invokes the release() method of AbstractQueuedSynchronizer. Let's review it.
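The effect of the two constructors can be observed through ReentrantLock's public isFair() method, which reports which Sync implementation backs the lock. A minimal sketch (the class and method names are my own):

```java
import java.util.concurrent.locks.ReentrantLock;

// Sketch: isFair() reveals whether sync is a FairSync or a NonfairSync.
public class FairnessDemo {
    public static boolean defaultIsFair() {
        // No-argument constructor -> NonfairSync -> not fair
        return new ReentrantLock().isFair();
    }

    public static boolean explicitFair() {
        // new ReentrantLock(true) -> FairSync -> fair
        return new ReentrantLock(true).isFair();
    }
}
```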
```java
// Release the lock (exclusive mode)
public final boolean release(int arg) {
    // Try to release the lock
    if (tryRelease(arg)) {
        // Get the head node
        Node h = head;
        // If the head node is not null and its wait status is non-zero,
        // wake up its successor
        if (h != null && h.waitStatus != 0) {
            unparkSuccessor(h);
        }
        return true;
    }
    return false;
}
```
This release method is the API that AQS provides for releasing a lock. It first calls the tryRelease method to try to release the lock. tryRelease is an abstract method whose implementation lives in the subclass Sync.
```java
// Try to release the lock
protected final boolean tryRelease(int releases) {
    int c = getState() - releases;
    // If the thread holding the lock is not the current thread, throw
    if (Thread.currentThread() != getExclusiveOwnerThread()) {
        throw new IllegalMonitorStateException();
    }
    boolean free = false;
    // A synchronization state of 0 means the lock is fully released
    if (c == 0) {
        // Mark the lock as released
        free = true;
        // Clear the owner thread
        setExclusiveOwnerThread(null);
    }
    setState(c);
    return free;
}
```
The tryRelease method first reads the current synchronization state and subtracts the passed-in argument from it to obtain the new synchronization state. It then checks whether the new state equals 0; if so, the lock is fully released, so it marks the lock as released and clears the thread that currently owns it. Finally it calls setState to store the new synchronization state and returns whether the lock was released.
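The state arithmetic in tryRelease can be watched from outside through getHoldCount(), which exposes the AQS state for the owning thread. A minimal sketch (the class and method names are my own):

```java
import java.util.concurrent.locks.ReentrantLock;

// Sketch: the hold count rises with each lock() and falls with each
// unlock(); only the final unlock() fully releases the lock.
public class HoldCountDemo {
    public static int[] trace() {
        ReentrantLock lock = new ReentrantLock();
        lock.lock();                       // state: 0 -> 1
        lock.lock();                       // reentry, state: 1 -> 2
        int during = lock.getHoldCount();  // 2 while doubly held
        lock.unlock();                     // state: 2 -> 1, not yet free
        lock.unlock();                     // state: 1 -> 0, fully released
        int after = lock.getHoldCount();   // 0 once released
        return new int[] { during, after };
    }
}
```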
4. Fair lock and unfair lock
Which concrete Sync instance a ReentrantLock's sync field points to determines its fairness. The sync field is assigned during construction: a NonfairSync instance means a non-fair lock, and a FairSync instance means a fair lock. With a fair lock, threads obtain the lock in the order in which they requested it; with a non-fair lock, barging is allowed: when a thread requests a non-fair lock, if the lock happens to become available at the moment the request is issued, the thread skips all the waiting threads in the queue and obtains the lock directly. Let's first look at how the non-fair lock is acquired.
```java
// Non-fair synchronizer
static final class NonfairSync extends Sync {
    // Implement the parent's abstract method to acquire the lock
    final void lock() {
        // Try to set the synchronization state with CAS
        if (compareAndSetState(0, 1)) {
            // Success means the lock was free; record the owner thread
            setExclusiveOwnerThread(Thread.currentThread());
        } else {
            // Otherwise the lock is held; call acquire, which may
            // enqueue the thread in the synchronization queue
            acquire(1);
        }
    }

    // Try to acquire the lock
    protected final boolean tryAcquire(int acquires) {
        return nonfairTryAcquire(acquires);
    }
}

// Acquire the lock in uninterruptible exclusive mode (from AQS)
public final void acquire(int arg) {
    if (!tryAcquire(arg) &&
        acquireQueued(addWaiter(Node.EXCLUSIVE), arg)) {
        selfInterrupt();
    }
}
```
As you can see, in the non-fair lock's lock method, the thread first uses CAS to change the synchronization state from 0 to 1. This operation is in effect an attempt to acquire the lock: if the change succeeds, the thread has just acquired the lock and does not need to queue in the synchronization queue; if it fails, the lock had not been released when the thread arrived, so the acquire method is called next. This acquire method is inherited from AbstractQueuedSynchronizer; let's review it. After a thread enters acquire, it first calls tryAcquire to try to acquire the lock. Since NonfairSync overrides tryAcquire and the override calls the nonfairTryAcquire method of the parent class Sync, nonfairTryAcquire is what actually runs here. Let's see what this method does.
```java
// Non-fair attempt to acquire the lock (in Sync)
final boolean nonfairTryAcquire(int acquires) {
    // Get the current thread
    final Thread current = Thread.currentThread();
    // Get the current synchronization state
    int c = getState();
    // A state of 0 means the lock is not held
    if (c == 0) {
        // Try to update the synchronization state with CAS
        if (compareAndSetState(0, acquires)) {
            // Record the thread now holding the lock
            setExclusiveOwnerThread(current);
            return true;
        }
    // Otherwise check whether the current thread already holds the lock
    } else if (current == getExclusiveOwnerThread()) {
        // Reentrant acquisition: just increase the synchronization state
        int nextc = c + acquires;
        if (nextc < 0) {
            throw new Error("Maximum lock count exceeded");
        }
        setState(nextc);
        return true;
    }
    // The acquisition attempt failed
    return false;
}
```
The nonfairTryAcquire method belongs to Sync. After a thread enters this method, it first reads the synchronization state. If the state is 0, it uses a CAS operation to change it, which is in effect another attempt to acquire the lock. If the state is not 0, the lock is held; in that case the method checks whether the thread holding the lock is the current thread, and if so the synchronization state is increased by 1 (a reentrant acquisition). Otherwise the attempt fails, and the addWaiter method will be called to add the thread to the synchronization queue. To sum up, in non-fair mode a thread attempts to acquire the lock twice before entering the synchronization queue; if either attempt succeeds it never queues, otherwise it enters the synchronization queue to wait. Next, let's look at how the fair lock is acquired.
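The barging behavior of nonfairTryAcquire is also visible through the public API: per the ReentrantLock javadoc, the no-argument tryLock() acquires the lock immediately if it is free, even on a fair lock with threads waiting. A small sketch (the class name is my own) showing a first acquisition followed by a reentrant one:

```java
import java.util.concurrent.locks.ReentrantLock;

// Sketch: tryLock() takes a free lock at once, and a second call from
// the same thread succeeds reentrantly (state goes 0 -> 1 -> 2).
public class TryLockDemo {
    public static boolean[] acquireTwice() {
        ReentrantLock lock = new ReentrantLock();
        boolean first = lock.tryLock();   // lock free: succeeds
        boolean second = lock.tryLock();  // reentrant: succeeds again
        lock.unlock();                    // state 2 -> 1
        lock.unlock();                    // state 1 -> 0, fully released
        return new boolean[] { first, second };
    }
}
```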
```java
// Synchronizer implementing the fair lock
static final class FairSync extends Sync {
    // Implement the parent's abstract method to acquire the lock
    final void lock() {
        // Call acquire directly, which may enqueue the thread
        // in the synchronization queue
        acquire(1);
    }

    // Try to acquire the lock fairly
    protected final boolean tryAcquire(int acquires) {
        // Get the current thread
        final Thread current = Thread.currentThread();
        // Get the current synchronization state
        int c = getState();
        // A state of 0 means the lock is not held
        if (c == 0) {
            // First check whether any thread is queued ahead of us
            if (!hasQueuedPredecessors() &&
                compareAndSetState(0, acquires)) {
                // No predecessor and the CAS succeeded: lock acquired
                setExclusiveOwnerThread(current);
                return true;
            }
        // Otherwise check whether the current thread already holds the lock
        } else if (current == getExclusiveOwnerThread()) {
            // Reentrant acquisition: just increase the synchronization state
            int nextc = c + acquires;
            if (nextc < 0) {
                throw new Error("Maximum lock count exceeded");
            }
            setState(nextc);
            return true;
        }
        // The acquisition attempt failed
        return false;
    }
}
```
When the fair lock's lock method is called, it invokes acquire directly. As before, acquire first calls tryAcquire, here overridden by FairSync, to try to acquire the lock. This method first reads the synchronization state; if the state is 0, the lock is free. The difference from the non-fair lock is that it first calls the hasQueuedPredecessors method to check whether any thread is waiting in the synchronization queue, and only if no one is queued does it try to change the synchronization state. You can see that the fair lock yields here rather than grabbing the lock immediately.
Apart from this step, the other operations are the same as for the non-fair lock. To sum up, a fair lock checks the state of the lock only once before entering the synchronization queue, and even if it finds the lock free, it does not acquire it immediately; instead it lets the threads already in the synchronization queue acquire it first. This ensures that under a fair lock all threads acquire the lock in first-come, first-served order, which guarantees fairness.
So why don't we want all locks to be fair? After all, fairness sounds like good behavior and unfairness like bad behavior. The reason is that suspending and resuming threads carries significant overhead and hurts system performance. Under heavy contention a fair lock causes frequent thread suspension and wake-up, while a non-fair lock can reduce these operations, so it performs better. Moreover, since most threads hold a lock only very briefly and waking a thread involves a delay, it is possible for thread B to acquire, use, and release the lock in the interval between thread A being woken and A actually running. This is a win-win: the moment at which thread A acquires the lock is not pushed back, thread B gets to use the lock earlier, and throughput improves.
5. The implementation mechanism of conditional queues
Built-in condition queues have some defects. Each built-in lock can have only one associated condition queue, so multiple threads may wait on the same condition queue for different condition predicates. Then, every time notifyAll is called, all waiting threads are woken up; a thread that wakes up and finds its condition predicate still false simply suspends again. This causes many useless wake-up and suspend operations, wasting system resources and reducing performance. If you want to write a concurrent object with multiple condition predicates, or you want more control over the visibility of the condition queue, you need to use explicit Lock and Condition objects instead of built-in locks and condition queues. A Condition is associated with a single Lock, just as a condition queue is associated with a built-in lock. To create a Condition, call Lock.newCondition on the associated Lock. Let's first look at an example using Condition.
```java
public class BoundedBuffer {
    final Lock lock = new ReentrantLock();
    final Condition notFull  = lock.newCondition(); // condition predicate: not full
    final Condition notEmpty = lock.newCondition(); // condition predicate: not empty
    final Object[] items = new Object[100];
    int putptr, takeptr, count;

    // Producer method
    public void put(Object x) throws InterruptedException {
        lock.lock();
        try {
            while (count == items.length)
                notFull.await(); // buffer full: wait on the notFull queue
            items[putptr] = x;
            if (++putptr == items.length) putptr = 0;
            ++count;
            notEmpty.signal(); // item produced: wake a waiter on notEmpty
        } finally {
            lock.unlock();
        }
    }

    // Consumer method
    public Object take() throws InterruptedException {
        lock.lock();
        try {
            while (count == 0)
                notEmpty.await(); // buffer empty: wait on the notEmpty queue
            Object x = items[takeptr];
            if (++takeptr == items.length) takeptr = 0;
            --count;
            notFull.signal(); // item consumed: wake a waiter on notFull
            return x;
        } finally {
            lock.unlock();
        }
    }
}
```
A lock object can produce multiple condition queues; here two are created, notFull and notEmpty. When the container is full, a thread calling the put method must block until the condition predicate "not full" becomes true and it is woken to continue; when the container is empty, a thread calling the take method must block until the condition predicate "not empty" becomes true and it is woken to continue. These two kinds of threads wait on different condition predicates, so they block in two different condition queues and are woken at the right moment through the API on the corresponding Condition object. The following is the implementation of the newCondition method.
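The same await-in-a-loop / signal idiom used by BoundedBuffer can be distilled into an even smaller sketch: a one-shot gate where waiters block on a Condition until it is opened. The Gate class and its method names are my own illustration.

```java
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

// Sketch of the Lock/Condition idiom: waiters loop on the predicate
// (guarding against spurious wake-ups) and block on the Condition.
public class Gate {
    private final Lock lock = new ReentrantLock();
    private final Condition opened = lock.newCondition();
    private boolean open = false;

    public void open() {
        lock.lock();
        try {
            open = true;
            opened.signalAll(); // wake every thread waiting on this condition
        } finally {
            lock.unlock();
        }
    }

    // Returns true once the gate is open, false if interrupted
    public boolean awaitOpen() {
        lock.lock();
        try {
            while (!open) {          // re-check the predicate after waking
                try {
                    opened.await();  // releases the lock while waiting
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return false;
                }
            }
            return true;
        } finally {
            lock.unlock();
        }
    }
}
```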
```java
// Create a condition queue
public Condition newCondition() {
    return sync.newCondition();
}

abstract static class Sync extends AbstractQueuedSynchronizer {
    // Create a new ConditionObject
    final ConditionObject newCondition() {
        return new ConditionObject();
    }
}
```
The condition queue implementation of ReentrantLock is based on AbstractQueuedSynchronizer. The Condition object obtained from newCondition is an instance of AQS's inner class ConditionObject, and all condition-queue operations are carried out through the API that ConditionObject provides. For the concrete implementation of ConditionObject, see my article "Java Concurrency Series [4]-----AbstractQueuedSynchronizer Source Code Analysis: Conditional Queue"; I will not repeat it here. With that, our analysis of the ReentrantLock source code comes to an end. I hope this article helps readers understand and master ReentrantLock.