Let’s first read a detailed explanation of synchronized:
synchronized is a keyword in the Java language. When it modifies a method or a code block, it guarantees that at most one thread executes that code at a time.
1. When two concurrent threads access the same synchronized(this) block of the same object instance, only one thread can execute it at a time. The other thread must wait until the current thread has exited the block before it can enter.
2. However, while one thread is inside a synchronized(this) block of an object, other threads can still execute that object's non-synchronized code.
3. Crucially, while one thread is inside a synchronized(this) block of an object, all other threads are blocked from entering every other synchronized(this) block of that same object.
4. Rule 3 generalizes to all synchronized code: when a thread enters any synchronized(this) block of an object, it acquires that object's monitor lock, so other threads' access to all synchronized code of that object is blocked until the lock is released.
5. The same rules apply when the lock is some other object rather than this.
Simply put, synchronized claims a lock on behalf of the current thread. The thread holding the lock may execute the code in the block; any other thread must wait for the lock to be released before it can acquire it and perform the same operation.
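As a minimal sketch of this mutual-exclusion guarantee (the class and method names here are mine, not from the text above): two threads incrementing a shared counter through a synchronized method never lose an update.

```java
public class CounterDemo {
    private int count = 0;

    // Both threads must acquire this instance's monitor lock before
    // executing the increment, so the read-modify-write is atomic.
    private synchronized void increment() {
        count++;
    }

    public static void main(String[] args) throws InterruptedException {
        CounterDemo d = new CounterDemo();
        Runnable task = () -> {
            for (int i = 0; i < 100_000; i++) {
                d.increment();
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        // join() establishes happens-before, so main sees the final value.
        System.out.println(d.count); // prints 200000
    }
}
```

Without the synchronized keyword on increment(), the two threads would interleave their read-modify-write cycles and the final count would typically fall short of 200000.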
This is very useful, but I encountered another strange situation.
1. A class has two methods, both declared with the synchronized keyword.
2. While executing one method, we need to wait for the other method (invoked as an asynchronous callback) to finish, so a CountDownLatch is used to wait.
3. The code, reduced to its skeleton, looks like this:
```java
synchronized void a() {
    countDownLatch = new CountDownLatch(1);
    // do something
    countDownLatch.await();
}

synchronized void b() {
    countDownLatch.countDown();
}
```
Method a is executed by the main thread; method b is executed by an asynchronous callback thread. The result: the main thread gets stuck inside method a and makes no further progress, no matter how long you wait.
This is a classic deadlock.
a waits for b to run, but b, even though it is a callback, is in turn waiting for a to finish. Why? Because of synchronized.
Generally, when we want to synchronize a block of code, we lock on a shared object, for example:
```java
byte[] mutex = new byte[0];

void a1() {
    synchronized (mutex) {
        // do something
    }
}

void b1() {
    synchronized (mutex) {
        // do something
    }
}
```

If you imagine the bodies of methods a and b moved into the synchronized blocks of a1 and b1 respectively, the situation becomes easy to understand.
Inside a1, the thread waits (indirectly, via the CountDownLatch) for b1 to execute. But a1 has not released mutex while it waits, so when the asynchronous callback invokes b1, b1 must first acquire the mutex lock and therefore can never run.
This caused a deadlock!
Putting the synchronized keyword on the method declaration works exactly the same way; the Java language merely hides the declaration and use of the mutex from you. All synchronized instance methods of the same object share the same lock, which is why even an asynchronous callback can deadlock. This class of bug comes from misusing the synchronized keyword: don't sprinkle it around casually, and make sure you use it correctly.
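The deadlock above can be reproduced in a self-contained program. This is a hedged sketch of the scenario, not the original author's code: the class name is mine, and the unbounded await() is replaced with a timed await so the demo terminates instead of hanging forever.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class SyncDeadlockDemo {
    private CountDownLatch latch;

    synchronized boolean a() throws InterruptedException {
        latch = new CountDownLatch(1);
        // In the original scenario b() is an asynchronous callback;
        // here we start the callback thread ourselves for the demo.
        new Thread(this::b).start();
        // b() cannot enter its synchronized method while we hold this
        // object's lock, so this wait times out instead of completing.
        return latch.await(2, TimeUnit.SECONDS);
    }

    synchronized void b() {
        latch.countDown();
    }

    public static void main(String[] args) throws InterruptedException {
        boolean completed = new SyncDeadlockDemo().a();
        System.out.println("latch released in time: " + completed);
    }
}
```

With a real untimed await() the program would never print anything; the timed variant makes the lock contention observable.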
So what exactly is this invisible mutex object?
The instance itself is the obvious candidate, because then there is no need to define an extra object to serve as the lock. To verify this idea, we can write a program.
The idea is simple. Define a class with two methods: one declared synchronized, the other using synchronized(this) in its body. Then start two threads that call the two methods respectively. If lock contention (waiting) occurs between the two methods, it shows that the invisible mutex of a synchronized method is in fact the instance itself.
```java
public class MultiThreadSync {

    public synchronized void m1() throws InterruptedException {
        System.out.println("m1 call");
        Thread.sleep(2000);
        System.out.println("m1 call done");
    }

    public void m2() throws InterruptedException {
        synchronized (this) {
            System.out.println("m2 call");
            Thread.sleep(2000);
            System.out.println("m2 call done");
        }
    }

    public static void main(String[] args) {
        final MultiThreadSync thisObj = new MultiThreadSync();

        Thread t1 = new Thread() {
            @Override
            public void run() {
                try {
                    thisObj.m1();
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
        };

        Thread t2 = new Thread() {
            @Override
            public void run() {
                try {
                    thisObj.m2();
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
        };

        t1.start();
        t2.start();
    }
}
```

The output is:
m1 call
m1 call done
m2 call
m2 call done
This shows that the synchronized block in m2 waited for m1 to finish, which confirms the idea above.
Note that when synchronized is applied to a static method, the method belongs to the class, so the lock taken is the Class object of the current class (e.g. MultiThreadSync.class). You can write a similar program to prove it; it is omitted here.
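Since the text omits that program, here is one possible sketch (class and method names are mine). It checks mutual exclusion directly: if a static synchronized method and a synchronized(TheClass.class) block used different locks, the second thread could observe the first one still inside its critical section. The 100 ms sleep is a timing assumption to let the first thread grab the lock before the check runs.

```java
public class StaticSyncDemo {
    private static boolean inside = false;

    // Locks on StaticSyncDemo.class implicitly.
    static synchronized void s1() throws InterruptedException {
        inside = true;
        Thread.sleep(500);
        inside = false;
    }

    static void s2() {
        // Locks on the Class object explicitly. If the two locks differed,
        // this could run while s1 is still inside and print "true".
        synchronized (StaticSyncDemo.class) {
            System.out.println("s1 still inside? " + inside);
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(() -> {
            try {
                s1();
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        });
        t1.start();
        Thread.sleep(100); // assumption: t1 has acquired the class lock by now
        s2();              // blocks until s1 releases the lock
        t1.join();
    }
}
```

Because both sides lock the same Class object, s2 blocks until s1 has finished and reset the flag, so the program prints "s1 still inside? false".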
Therefore, when reading code, a synchronized instance method can mentally be rewritten as a synchronized(this) { ... } block, which makes it easier to reason about.
```java
// These two forms are equivalent:
synchronized void method() {
    // biz code
}

void method() {
    synchronized (this) {
        // biz code
    }
}
```

Memory Visibility from synchronized
In Java, we all know that the synchronized keyword implements mutual exclusion between threads, but we often forget its other function: guaranteeing the visibility of variables in memory. That is, when two threads read and write the same variable, synchronized ensures that after the writing thread updates the variable, the reading thread sees the latest value the next time it accesses it.
Consider the following example:
```java
public class NoVisibility {
    private static boolean ready = false;
    private static int number = 0;

    private static class ReaderThread extends Thread {
        @Override
        public void run() {
            while (!ready) {
                Thread.yield(); // hint the scheduler to let other threads run
            }
            System.out.println(number);
        }
    }

    public static void main(String[] args) {
        new ReaderThread().start();
        number = 42;
        ready = true;
    }
}
```

What do you think the reader thread will output? 42? Usually it will output 42, but because of reordering it may output 0, or it may never output anything at all.
We know that the compiler may reorder code when compiling Java source into bytecode, and the CPU may reorder instructions when executing machine code, as long as the reordering does not change the result observed within a single thread. In other words, as long as the single-threaded result is preserved, there is no guarantee that operations execute in the order the program specifies, even if the reordering has a significant effect on other threads.
This means the statement ready = true may execute before number = 42. In that case the reader thread may output number's default value, 0.
Under the Java memory model, reordering leads to exactly this kind of memory visibility problem. Each thread has its own working memory (essentially CPU caches and registers); it operates on variables in its working memory, and threads communicate with each other through synchronization between main memory and each thread's working memory.
For example, in the program above, the writer thread has updated number to 42 and ready to true in its working memory, but it is quite possible that only number has been flushed to main memory (perhaps due to the CPU's write buffer). The reader thread would then keep seeing ready as false, and the program would never print anything.
If we use the synchronized keyword to synchronize the accesses, the problem disappears.
```java
public class NoVisibility {
    private static boolean ready = false;
    private static int number = 0;
    private static final Object lock = new Object();

    private static class ReaderThread extends Thread {
        @Override
        public void run() {
            // Acquire and release the lock on each pass; holding it across
            // the whole loop would block the writer thread forever.
            while (true) {
                synchronized (lock) {
                    if (ready) {
                        System.out.println(number);
                        return;
                    }
                }
                Thread.yield();
            }
        }
    }

    public static void main(String[] args) {
        new ReaderThread().start();
        synchronized (lock) {
            number = 42;
            ready = true;
        }
    }
}
```

This works because the Java memory model makes the following guarantee for synchronized.
When thread A releases lock M, the variables it has written (say x and y, sitting in its working memory) are flushed to main memory. When thread B later acquires the same lock M, B's working memory is invalidated, so B reloads the variables it accesses from main memory into its working memory (now x = 1 and y = 1, the latest values written by A). This is how communication from thread A to thread B is achieved.
This is in fact one of the happens-before rules defined by JSR 133. JSR 133 defines the following happens-before rules for the Java memory model:
1. Program order rule: each action in a thread happens-before every subsequent action in that thread, in program order.
2. Monitor lock rule: an unlock of a monitor happens-before every subsequent lock of that same monitor.
3. Volatile variable rule: a write to a volatile field happens-before every subsequent read of that field.
4. Thread start rule: a call to Thread.start() happens-before every action in the started thread.
5. Thread termination rule: every action in a thread happens-before any other thread detects that it has terminated (for example, by returning from Thread.join()).
6. Transitivity: if A happens-before B and B happens-before C, then A happens-before C.
This set of happens-before rules defines memory visibility between operations: if operation A happens-before operation B, then the results of A (such as writes to variables) are guaranteed to be visible when B executes.
To gain a deeper understanding of these happens-before rules, let's take an example:
```java
// Code shared by threads A and B
Object lock = new Object();
int a = 0;
int b = 0;
int c = 0;

// Thread A runs:
synchronized (lock) {
    a = 1;                 // 1
    b = 2;                 // 2
}                          // 3 (unlock)
c = 3;                     // 4

// Thread B runs:
synchronized (lock) {      // 5 (lock)
    System.out.println(a); // 6
    System.out.println(b); // 7
    System.out.println(c); // 8
}
```

We assume thread A runs first, assigning values to a, b, and c (note: a and b are assigned inside the synchronized block), and thread B runs afterwards, reading and printing the three variables. What values of a, b, and c does thread B print?
By the program order rule, within thread A operation 1 happens-before operation 2, operation 2 happens-before operation 3, and operation 3 happens-before operation 4. Likewise, within thread B operation 5 happens-before operation 6, operation 6 happens-before operation 7, and operation 7 happens-before operation 8. By the monitor lock rule, operation 3 (the unlock) happens-before operation 5 (the lock). By transitivity, we can conclude that operations 1 and 2 happen-before operations 6, 7, and 8.
By the memory semantics of happens-before, the results of operations 1 and 2 are visible to operations 6, 7, and 8, so thread B must print a = 1 and b = 2. For operations 4 and 8 on variable c, however, no existing happens-before rule lets us deduce that operation 4 happens-before operation 8. Thread B may therefore still read c as 0 rather than 3.
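One way to close that gap is to move the assignment of c inside thread A's synchronized block, so the monitor lock rule covers it as well. The following is a hedged sketch (class name and loop structure are mine): the reader polls under the lock until it observes a, at which point the happens-before chain guarantees it also sees b and c.

```java
public class HappensBeforeDemo {
    static final Object lock = new Object();
    static int a = 0, b = 0, c = 0;

    public static void main(String[] args) throws InterruptedException {
        Thread writerA = new Thread(() -> {
            synchronized (lock) {
                a = 1;
                b = 2;
                c = 3; // now inside the locked region, unlike the example above
            }
        });
        Thread readerB = new Thread(() -> {
            while (true) {
                synchronized (lock) {
                    // Once we observe a == 1, the unlock in writerA
                    // happens-before this lock, so b and c are visible too.
                    if (a == 1) {
                        System.out.println(a + " " + b + " " + c);
                        return;
                    }
                }
                Thread.yield();
            }
        });
        writerA.start();
        readerB.start();
        writerA.join();
        readerB.join();
    }
}
```

With all three writes inside the synchronized block, the reader is guaranteed to print 1 2 3; leaving c = 3 outside the block, as in the original example, would reintroduce the possibility of reading 0.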