synchronized keyword
synchronized, which we can think of as a lock, is mainly used to lock methods and code blocks. When a method or code block is marked synchronized, at most one thread executes that code at a time. When multiple threads access the locked method/code block of the same object, only one thread executes at a time; the remaining threads must wait for the current thread to finish the code segment before they can run it. However, the remaining threads can still access the non-locked code in the object.
synchronized is used in two forms: the synchronized method and the synchronized block.
synchronized method
Declare a synchronized method by adding the synchronized keyword to the method declaration, for example:
public synchronized void getResult();
A synchronized method controls access to class member variables. How does it do this? Marking a method with the synchronized keyword means the method is locked: each class instance corresponds to one lock, and each synchronized method must acquire the lock of the instance it is called on before it can execute; otherwise the calling thread blocks. Once a thread enters the method, it holds the lock exclusively, and the lock is not released until the method returns, at which point a blocked thread can acquire it.
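As a concrete sketch of a synchronized method (the Counter class here is an illustrative example, not from the original text): each Counter instance owns one lock, and every call to its synchronized methods must acquire that lock first, so concurrent increments never interleave.

```java
public class Counter {
    private int count = 0;

    // Both methods lock the same instance, so they exclude each other.
    public synchronized void increment() {
        count++; // only one thread at a time can execute this
    }

    public synchronized int get() {
        return count;
    }

    public static void main(String[] args) throws InterruptedException {
        Counter c = new Counter();
        Thread[] ts = new Thread[4];
        for (int i = 0; i < ts.length; i++) {
            ts[i] = new Thread(() -> {
                for (int j = 0; j < 1000; j++) c.increment();
            });
            ts[i].start();
        }
        for (Thread t : ts) t.join();
        System.out.println(c.get()); // prints 4000: no updates were lost
    }
}
```

Without synchronized, the `count++` read-modify-write could interleave across threads and the final total would often come out below 4000.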
In fact, the synchronized method has a drawback: declaring a large method synchronized can hurt efficiency badly. If multiple threads call a synchronized method, only one thread executes it at a time while the others wait; without synchronized, all threads could execute it concurrently, reducing total execution time. So if we know a method will not be executed by multiple threads, or involves no shared resources, we do not need the synchronized keyword. And when we do need synchronization, we can often replace the synchronized method with a synchronized code block that covers only the critical part.
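The replacement described above can be sketched as follows (class and method names are hypothetical): the slow, thread-local work runs outside the lock, and only the shared-state update is synchronized.

```java
public class Report {
    private final StringBuilder log = new StringBuilder();

    // Before: the whole method holds the lock, including the slow part.
    public synchronized void appendSlow(String item) {
        String line = expensiveFormat(item); // does not touch shared state
        log.append(line);
    }

    // After: only the shared-state update holds the lock.
    public void appendFast(String item) {
        String line = expensiveFormat(item); // can run concurrently
        synchronized (this) {
            log.append(line);
        }
    }

    private String expensiveFormat(String item) {
        return item + "\n"; // stand-in for costly, purely local work
    }

    public synchronized String dump() {
        return log.toString();
    }
}
```

Both versions are thread-safe; the second simply keeps the critical section as short as possible, which is exactly the point of the synchronized block below.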
synchronized block
The synchronized code block plays the same role as the synchronized method, except that it keeps the critical section as short as possible: it protects only the shared data that needs protection, leaving the rest of the code unsynchronized. The syntax is as follows:
synchronized (object) {
    // code with controlled access
}

When using synchronized this way, we must supply an object reference as the argument. Usually that argument is this:

synchronized (this) {
    // code with controlled access
}

There are the following points to understand about synchronized(this):
1. When two concurrent threads access the same synchronized(this) code block on the same object instance, only one thread can execute it at a time. The other thread must wait for the current thread to finish the block before it can enter.
2. However, while one thread is inside a synchronized(this) block of an object, another thread can still execute the object's non-synchronized code.
3. Crucially, while one thread is inside a synchronized(this) block of an object, other threads are blocked from entering every other synchronized(this) block of that same object.
4. Point 3 extends to all synchronized code on the object: when a thread enters a synchronized(this) block, it acquires the object lock of that object, so other threads' access to all synchronized sections of that object (including synchronized methods) is blocked for the duration.
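A minimal illustration of the four points (the Resource class is hypothetical): blockA and blockB exclude each other on the same instance because both lock this, while plain is never blocked by either.

```java
public class Resource {
    private int hits = 0;

    public void blockA() {
        synchronized (this) {
            // points 1, 3, 4: excludes blockA and blockB on this instance
            hits++;
        }
    }

    public void blockB() {
        synchronized (this) {
            // waits while another thread is inside blockA on this instance
            hits++;
        }
    }

    public int plain() {
        // point 2: not synchronized, never blocked by the locks above
        return hits;
    }
}
```

Note that two threads working on two *different* Resource instances never block each other, since each instance carries its own lock.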
Lock
Java multithreading follows a "first come, first served" principle: whoever grabs the key (the lock) first gets to use it first. To avoid problems caused by resource contention, Java uses a synchronization mechanism, and that mechanism is controlled through the concept of locks. So how are locks reflected in a Java program? We first need to get two concepts straight:
What is a lock? In daily life, it is a fastener added to doors, boxes, and drawers to keep others from peeking or stealing; it plays a protective role. The same is true in Java: a lock protects an object. If a thread has exclusive possession of a resource, other threads that want to use it simply have to wait until the owner is finished with it.
In a Java program running environment, the JVM needs to coordinate the data shared by two types of threads:
1. Instance variables saved in the heap
2. Class variables saved in the method area.
In the Java virtual machine, each object and class is logically associated with a monitor. For an object, the associated monitor protects the object's instance variables; for a class, the monitor protects its class variables. If an object has no instance variables, or a class has no class variables, the associated monitor protects nothing.
To implement the monitor's exclusive monitoring capability, the Java virtual machine associates a lock with every object and class, representing a privilege that only one thread can hold at any time. (Threads do not need a lock merely to access instance or class variables; locking is only enforced at monitor regions.) Once a thread acquires a lock, no other thread can acquire the same lock until it is released. A thread may lock the same object multiple times: for each object the Java virtual machine maintains a lock counter, incremented by 1 each time the thread acquires the lock and decremented by 1 each time it releases it. When the counter reaches 0, the lock is fully released.
Java programmers do not need to acquire these locks themselves; object locks are used internally by the Java virtual machine. In a Java program you only need to mark a monitor region with a synchronized block or synchronized method; whenever a thread enters the monitor region, the Java virtual machine automatically locks the object or class.
A simple lock
When using synchronized, we use locks like this:
public class ThreadTest {
    public void test() {
        synchronized (this) {
            // do something
        }
    }
}

synchronized ensures that only one thread at a time is executing the "do something" section. Here is the same idea using a Lock instead of synchronized:
public class ThreadTest {
    Lock lock = new ReentrantLock(); // Lock is an interface; ReentrantLock is its main implementation
    public void test() {
        lock.lock();
        try {
            // do something
        } finally {
            lock.unlock();
        }
    }
}

The lock() method locks the Lock instance, so all other threads calling lock() on that object are blocked until its unlock() method is called. (Putting unlock() in a finally block guarantees the lock is released even if the guarded code throws.)
What is locked?
Before answering, we must be clear: whether the synchronized keyword is applied to a method or to a code block, the lock it acquires is always an object. In Java every object can serve as a lock, which shows up in the following three cases:
For a synchronized instance method, the lock is the current instance object.
For a synchronized code block, the lock is the object named in the synchronized parentheses.
For a static synchronized method, the lock is the Class object of the class.
First, let’s look at the following example:
public class ThreadTest_01 implements Runnable {
    @Override
    public synchronized void run() {
        for (int i = 0; i < 3; i++) {
            System.out.println(Thread.currentThread().getName() + "run......");
        }
    }

    public static void main(String[] args) {
        for (int i = 0; i < 5; i++) {
            new Thread(new ThreadTest_01(), "Thread_" + i).start();
        }
    }
}

Partial run results:
Thread_2run......
Thread_2run......
Thread_4run......
Thread_4run......
Thread_3run......
Thread_3run......
Thread_3run......
Thread_3run......
Thread_2run......
Thread_4run......
This result differs from the expected one (the threads' output is interleaved). Logically, adding the synchronized keyword to run should produce a synchronization effect: the threads should execute run one after another. As noted above, adding synchronized to a member method locks on the object the method belongs to. In this example, however, we create five ThreadTest_01 objects with new, and each thread holds the object lock of its own Runnable instance, so no synchronization effect can possibly result. Conclusion: for these threads to be synchronized, the object lock they hold must be shared and unique!
Which object does synchronized lock here? It locks the object on which the synchronized method is called. That is, if the same ThreadTest_01 instance executes the synchronized method in different threads, the calls become mutually exclusive and the synchronization effect is achieved. So change the code above: create one shared instance first, ThreadTest_01 threadTest = new ThreadTest_01();, and replace new Thread(new ThreadTest_01(), "Thread_" + i).start(); with new Thread(threadTest, "Thread_" + i).start();
For a synchronized instance method, the lock is the current instance object.
The above example uses the synchronized method. Let's take a look at the synchronized code block:
public class ThreadTest_02 extends Thread {
    private String lock;
    private String name;

    public ThreadTest_02(String name, String lock) {
        this.name = name;
        this.lock = lock;
    }

    @Override
    public void run() {
        synchronized (lock) {
            for (int i = 0; i < 3; i++) {
                System.out.println(name + " run......");
            }
        }
    }

    public static void main(String[] args) {
        String lock = new String("test");
        for (int i = 0; i < 5; i++) {
            new ThreadTest_02("ThreadTest_" + i, lock).start();
        }
    }
}

Running results:
ThreadTest_0 run......
ThreadTest_0 run......
ThreadTest_0 run......
ThreadTest_1 run......
ThreadTest_1 run......
ThreadTest_1 run......
ThreadTest_4 run......
ThreadTest_4 run......
ThreadTest_4 run......
ThreadTest_3 run......
ThreadTest_3 run......
ThreadTest_3 run......
ThreadTest_2 run......
ThreadTest_2 run......
ThreadTest_2 run......
In main we create a single String object lock and pass the same reference to every ThreadTest_02 thread object's private lock field. (Note that new String("test") creates a fresh object on the heap rather than reusing the string pool; what matters is simply that all five threads receive the same reference.) Since every thread's lock field points to the same object, the object lock is unique and shared, and the threads are synchronized!
What synchronized locks here is the lock String object.
For a synchronized code block, the lock is the object named in the synchronized parentheses.
public class ThreadTest_03 extends Thread {
    public static synchronized void test() {
        for (int i = 0; i < 3; i++) {
            System.out.println(Thread.currentThread().getName() + " run......");
        }
    }

    @Override
    public void run() {
        test();
    }

    public static void main(String[] args) {
        for (int i = 0; i < 5; i++) {
            new ThreadTest_03().start();
        }
    }
}

Running results:
Thread-0 run......
Thread-0 run......
Thread-0 run......
Thread-4 run......
Thread-4 run......
Thread-4 run......
Thread-1 run......
Thread-1 run......
Thread-1 run......
Thread-2 run......
Thread-2 run......
Thread-2 run......
Thread-3 run......
Thread-3 run......
Thread-3 run......
In this example the run method calls a static synchronized method. So what does synchronized lock here? We know that static members belong to the class level, beyond any single object, so the lock is the Class instance of the class in which the static method is declared. In the JVM each loaded class has a unique Class object, in this case the single ThreadTest_03.class object. No matter how many instances of the class we create, its Class instance is still one! So the object lock is unique and shared, and the threads are synchronized!
For a static synchronized method, the lock is the Class object of the class.
If a class defines a synchronized static method A and a synchronized instance method B, then calling A and B on the same object Obj from multiple threads does not produce mutual exclusion, because their locks are different: the lock of method A is the Class object of Obj's class, while the lock of B is the object Obj itself.
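This can be demonstrated with a small sketch (class and method names are hypothetical): one thread parks inside the static synchronized method a, holding the class lock, yet another thread can still enter the instance synchronized method b.

```java
import java.util.concurrent.CountDownLatch;

public class TwoLocks {
    // Lock: TwoLocks.class
    public static synchronized void a(CountDownLatch entered, CountDownLatch release)
            throws InterruptedException {
        entered.countDown();
        release.await(); // hold the class lock until released
    }

    // Lock: the instance (this)
    public synchronized void b() {
        // can run even while another thread is parked inside a()
    }

    public static void main(String[] args) throws InterruptedException {
        CountDownLatch entered = new CountDownLatch(1);
        CountDownLatch release = new CountDownLatch(1);
        Thread t = new Thread(() -> {
            try { a(entered, release); } catch (InterruptedException ignored) { }
        });
        t.start();
        entered.await();    // thread t now holds the class lock
        new TwoLocks().b(); // still succeeds immediately: different lock
        release.countDown();
        t.join();
        System.out.println("b() ran while a() held the class lock");
    }
}
```

If b() were also static synchronized, the call to b() would block until release.countDown() freed the class lock, and this program would deadlock.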
Lock upgrade
A lock in Java has four states: unlocked, biased, lightweight, and heavyweight, which escalate gradually with contention. A lock can be upgraded but never downgraded: once a biased lock has been upgraded to a lightweight lock, it cannot revert to a biased lock. This upgrade-only strategy exists to improve the efficiency of acquiring and releasing locks. Much of what follows summarizes the blog post: Concurrency (II) Synchronized in Java SE1.6.
Lock spin
We know that when a thread reaches a synchronized method/code block and finds it occupied by another thread, it waits and enters a blocked state; this suspend-and-resume process performs poorly.
When it encounters contention for a lock, a thread can instead avoid blocking immediately and busy-wait briefly to see whether the lock is released soon. This is lock spinning, and within limits it can improve performance.
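The spinning idea can be sketched at the user level with an AtomicBoolean; this only illustrates the concept and is not how the JVM implements spinning internally:

```java
import java.util.concurrent.atomic.AtomicBoolean;

// A minimal spin lock: instead of blocking, a contending thread
// keeps retrying the CAS in a busy loop until the lock is free.
public class SpinLock {
    private final AtomicBoolean locked = new AtomicBoolean(false);

    public void lock() {
        while (!locked.compareAndSet(false, true)) {
            Thread.onSpinWait(); // hint to the CPU that we are busy-waiting (Java 9+)
        }
    }

    public void unlock() {
        locked.set(false);
    }
}
```

Spinning pays off only when critical sections are short; if the lock is held for long, the spinning thread just burns CPU, which is why real implementations fall back to blocking after a bounded number of spins.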
Biased lock
Biased locks address the performance cost of locking in the absence of contention. In most cases a lock not only sees no multi-threaded contention, but is acquired repeatedly by the same thread; biased locks were introduced so that this thread can acquire the lock more cheaply. A thread may lock the same object many times, but each acquisition normally incurs some overhead from a CAS (the CPU's Compare-And-Swap instruction) operation. To reduce this overhead, the lock becomes biased toward the first thread that acquires it: if no other thread acquires the lock afterwards, the thread holding the biased lock never needs to perform synchronization again.
When another thread starts to compete for a biased lock, the bias is revoked and the thread holding the biased lock releases it.
Lock coarsening
Making many separate acquisitions of a lock with too small a granularity is less efficient than a single acquisition with a larger granularity; the JVM can therefore merge adjacent fine-grained locked regions into one coarser one.
Lightweight lock
The premise on which the lightweight lock improves synchronization performance is that "for most locks, there is no contention during the entire synchronization cycle", which is an empirical observation. A lightweight lock creates a space called a lock record in the current thread's stack frame, used to store a copy of the lock object's current header (its Mark Word) and state. When there is no contention, the lightweight lock uses a CAS operation to avoid the overhead of a mutex; but when contention does occur, the CAS operation is incurred in addition to the mutex overhead, so under contention a lightweight lock is actually slower than a traditional heavyweight lock.
The fairness of the lock
The opposite of fairness is starvation. What is "starvation"? If a thread never gets CPU time because other threads always occupy the CPU, we say the thread is "starved". The remedy for starvation is called "fairness": every thread gets a fair chance to run on the CPU.
There are several main reasons for thread hunger:
High-priority threads consume all the CPU time of low-priority threads. We can set each thread's priority individually, from 1 to 10; the higher a thread's priority, the more CPU time it tends to get. For most applications it is best not to change priority values.
Threads are permanently blocked waiting to enter a synchronized block. Java's synchronized regions are a significant cause of thread starvation, because synchronized blocks make no guarantee about the order in which waiting threads enter. In theory one or more threads can remain blocked forever while trying to enter a synchronized region, because other threads keep winning access ahead of them, so they never get to run and are "starved".
A thread is waiting on an object that it may wait on forever. If multiple threads are parked in wait() on an object, calling notify() gives no guarantee about which one is awakened. There is therefore a risk that a particular waiting thread is never awakened, because the other waiting threads keep being chosen instead.
To solve the problem of thread "starvation", we can use fair locks.
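For example, ReentrantLock takes a fairness flag in its constructor; passing true makes waiting threads acquire the lock roughly in request order, at some cost in throughput (the class below is an illustrative sketch):

```java
import java.util.concurrent.locks.ReentrantLock;

public class FairnessDemo {
    // true = fair: the longest-waiting thread gets the lock next,
    // which prevents any one waiter from being starved indefinitely
    private final ReentrantLock lock = new ReentrantLock(true);
    private int count = 0;

    public void increment() {
        lock.lock();
        try {
            count++;
        } finally {
            lock.unlock();
        }
    }

    public int getCount() {
        lock.lock();
        try {
            return count;
        } finally {
            lock.unlock();
        }
    }
}
```

By contrast, both intrinsic synchronized locks and the default ReentrantLock (created with false) are unfair: an arriving thread may barge ahead of longer waiters.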
Reentrancy of locks
We know that when a thread requests a lock held by another thread, it blocks; but can a thread successfully request a lock it already holds itself? The answer is yes, and what guarantees this success is the "reentrancy" of thread locks.
"Reentrable" means that you can get your own internal lock again without blocking. as follows:
public class Father {
    public synchronized void method() {
        // do something
    }
}

public class Child extends Father {
    @Override
    public synchronized void method() {
        // do something
        super.method();
    }
}
If locks were not reentrant, the code above would deadlock: calling method() on a Child instance acquires that object's intrinsic lock, and the call to super.method() then has to acquire the very same lock again. Without reentrancy the second acquisition would block forever.
Java implements lock reentrancy by associating each lock with an acquisition count and the thread that owns it. When the count is 0, the lock is considered unheld and any thread may acquire it. When a thread acquires it successfully, the JVM records the owning thread and sets the count to 1; any other thread requesting the lock must then wait. When the owning thread acquires the lock again, the count is incremented; each time it exits a synchronized block, the count is decremented, and when it reaches 0 the lock is released, giving other threads a chance to own it.
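ReentrantLock makes this count observable through its getHoldCount() method, so the bookkeeping described above can be seen directly (a small sketch):

```java
import java.util.concurrent.locks.ReentrantLock;

public class HoldCountDemo {
    public static void main(String[] args) {
        ReentrantLock lock = new ReentrantLock();
        lock.lock();                             // count: 1
        lock.lock();                             // same thread reacquires, count: 2
        System.out.println(lock.getHoldCount()); // prints 2
        lock.unlock();                           // count: 1, still held
        lock.unlock();                           // count: 0, released
        System.out.println(lock.getHoldCount()); // prints 0
    }
}
```

Note that each lock() must be balanced by an unlock(); the lock is only released for other threads when the count returns to 0.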
Lock and its implementation classes
java.util.concurrent.locks provides a very flexible locking mechanism: a framework of interfaces and classes for locking and for waiting on conditions. It differs from built-in synchronization and monitors in that it allows locks and conditions to be used much more flexibly. Its class structure is as follows:
ReentrantLock: a reentrant mutual-exclusion lock, the main implementation of the Lock interface.
ReentrantReadWriteLock: the main implementation of ReadWriteLock, a reentrant read-write lock.
ReadWriteLock: maintains a pair of related locks, one for read-only operations and one for write operations.
Semaphore: a counting semaphore.
Condition: factors out the Object monitor methods (wait, notify) into distinct objects bound to a Lock, letting a thread release the lock and wait until a condition is met.
CyclicBarrier: a synchronization aid that allows a group of threads to wait for one another until they all reach a common barrier point.
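As a usage sketch of ReadWriteLock (the Cache class is hypothetical): many readers may hold the read lock at once, while the write lock is exclusive.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class Cache {
    private final Map<String, String> map = new HashMap<>();
    private final ReentrantReadWriteLock rw = new ReentrantReadWriteLock();

    public String get(String key) {
        rw.readLock().lock();      // shared: many readers may enter together
        try {
            return map.get(key);
        } finally {
            rw.readLock().unlock();
        }
    }

    public void put(String key, String value) {
        rw.writeLock().lock();     // exclusive: blocks all readers and writers
        try {
            map.put(key, value);
        } finally {
            rw.writeLock().unlock();
        }
    }
}
```

This pattern pays off for read-mostly data: readers no longer serialize behind one another as they would with a plain mutual-exclusion lock.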