Many readers have probably heard of the volatile keyword, and many have used it. Before Java 5 it was a controversial keyword: using it in programs often produced unexpected results. Only after Java 5 did the volatile keyword regain its usefulness.
Although the volatile keyword looks simple on the surface, it is not easy to use well. Since volatile is tied to Java's memory model, before discussing the keyword itself we will first cover the concepts behind the memory model, then analyze how volatile is implemented, and finally look at several scenarios where volatile is used.
Here is the outline of this article:
1. Related concepts of memory models
2. Three concepts in concurrent programming
3. The Java memory model
4. In-depth analysis of the volatile keyword
5. Scenarios using the volatile keyword
1. Related concepts of memory models
As we all know, the computer executes each instruction of a program in the CPU, and executing instructions inevitably involves reading and writing data. Temporary data used while a program runs is stored in main memory (physical memory), which creates a problem: the CPU executes instructions very quickly, while reading data from and writing data to memory is much slower. If every data operation had to go through main memory, instruction execution would slow down dramatically. This is why the CPU has a cache.
That is, when a program runs, the data needed for an operation is copied from main memory into the CPU's cache. The CPU can then read from and write to its cache directly while computing, and when the operation is finished the data in the cache is flushed back to main memory. Take a simple example, such as the following code:
i = i + 1;
When a thread executes this statement, it first reads the value of i from main memory and copies it into the cache; the CPU then executes the instruction to add 1 to i and writes the result to the cache; finally the latest value of i in the cache is flushed back to main memory.
This code is fine when run in a single thread, but problems arise when it runs in multiple threads. On a multi-core CPU, each thread may run on a different core, so each thread has its own cache while running. (The same problem can actually occur on a single-core CPU, with the threads interleaved by the scheduler.) This article uses a multi-core CPU as the example.
For example, two threads execute this code at the same time. If the value of i is 0 at the beginning, then we hope that the value of i will become 2 after the two threads have executed. But will this be the case?
One possible situation is the following: initially both threads read the value of i and store it in their respective CPU caches. Thread 1 then performs the add-1 operation and writes the latest value of i, 1, back to memory. At this point the value of i in thread 2's cache is still 0, so after its add-1 operation the value of i is 1, and thread 2 then writes its i back to memory.
The final value of i is 1 rather than 2. This is the famous cache consistency problem. A variable accessed by multiple threads like this is usually called a shared variable.
That is to say, if a variable is cached in multiple CPUs (usually only occurs during multithreading programming), then there may be a problem of cache inconsistency.
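The lost-update scenario above can be reproduced directly in Java. Below is a minimal sketch (the class and method names are invented for this illustration): two threads each increment a plain int field 100,000 times with no synchronization, and the final value is usually less than the expected 200,000.

```java
public class LostUpdateDemo {
    static int i = 0;  // shared variable, no synchronization

    static int run() throws InterruptedException {
        i = 0;
        final int perThread = 100_000;
        Runnable task = () -> {
            for (int n = 0; n < perThread; n++) {
                i = i + 1;  // read, add 1, write back: three steps, not atomic
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        return i;
    }

    public static void main(String[] args) throws InterruptedException {
        // Expected 200000 if the increments were atomic; lost updates
        // usually make the observed value smaller.
        System.out.println("i = " + run());
    }
}
```

Because each increment is three separate steps, one thread's write can overwrite another's, exactly as described in the two-thread example above.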
In order to solve the cache inconsistency problem, there are usually two solutions:
1) By adding LOCK# lock to the bus
2) Through the cache coherence protocol
These two methods are provided at the hardware level.
In early CPUs, the cache inconsistency problem was solved by placing a LOCK# lock on the bus. Because the CPU communicates with other components through the bus, locking the bus blocks other CPUs from accessing components such as memory, so only one CPU can use the memory holding the variable. In the example above, if a thread is executing i = i + 1 and a LOCK# signal is asserted on the bus while this code executes, the other CPUs can read the variable from the memory where i is located and perform their own operations only after this code has fully finished. This solves the cache inconsistency problem.
But the above method will have a problem, because other CPUs cannot access memory during the bus lock, resulting in inefficiency.
So cache coherence protocols emerged. The best known is Intel's MESI protocol, which keeps the copies of a shared variable in each cache consistent. Its core idea: when a CPU writes data and finds that the variable being operated on is a shared variable, i.e. a copy of it exists in other CPUs' caches, it signals those CPUs to set the cache line holding the variable to the invalid state. When another CPU later needs to read the variable and finds that the cache line holding it is invalid, it rereads the value from memory.
2. Three concepts in concurrent programming
In concurrent programming, we usually run into the following three problems: atomicity, visibility, and ordering. Let's look at these three concepts first:
1. Atomicity
Atomicity: one operation, or several operations together, either all execute without being interrupted by any factor, or none of them execute at all.
A very classic example is the bank account transfer problem:
For example, if you transfer 1,000 yuan from Account A to Account B, it will inevitably include 2 operations: subtract 1,000 yuan from Account A and add 1,000 yuan to Account B.
Imagine the consequences if these two operations are not atomic. Suppose 1,000 yuan is subtracted from Account A and the operation is then suddenly terminated before the money is added to Account B. The result is that Account A has lost 1,000 yuan, but Account B never received the transferred 1,000 yuan.
Therefore, these two operations must be atomic in order to ensure that there are no unexpected problems.
How does this show up in concurrent programming?
To give the simplest example: think about what would happen if assigning to a 32-bit variable were not atomic.
i = 9;
Suppose a thread executes this statement, and suppose that assigning a 32-bit variable consists of two steps: assigning the low 16 bits and assigning the high 16 bits.
Then a situation could arise where the write is suddenly interrupted after the low 16 bits have been written; if another thread reads i at that moment, it reads corrupted data.
2. Visibility
Visibility refers to when multiple threads access the same variable, one thread modifies the value of the variable, and other threads can immediately see the modified value.
For a simple example, see the following code:
//Code executed by thread 1:
int i = 0;
i = 10;

//Code executed by thread 2:
j = i;
Suppose thread 1 runs on CPU1 and thread 2 on CPU2. From the analysis above, when thread 1 executes the statement i = 10, the initial value of i is loaded into CPU1's cache and then assigned the value 10. The value of i in CPU1's cache becomes 10, but it is not immediately written back to main memory.
At this time, thread 2 executes j = i, and it will first go to the main memory to read the value of i and load it into the cache of CPU2. Note that the value of i in the memory is still 0, so the value of j will be 0, not 10.
This is the visibility issue. After thread 1 modifies variable i, thread 2 does not immediately see the value modified by thread 1.
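Conversely, Java does give a visibility guarantee when a volatile flag is involved, which the next sections build on. The sketch below (names are invented for this illustration) shows it: the write to the volatile field ready happens-before the read that observes it, so once thread 2 sees ready == true it is also guaranteed to see i == 10.

```java
public class VisibilityDemo {
    static int i = 0;
    static volatile boolean ready = false;

    static int run() throws InterruptedException {
        final int[] j = new int[1];
        Thread t2 = new Thread(() -> {
            while (!ready) { /* spin until thread 1 publishes */ }
            j[0] = i;  // guaranteed to read 10: the volatile write to
        });            // `ready` happens-before this read
        t2.start();
        i = 10;        // ordinary write
        ready = true;  // volatile write publishes it
        t2.join();
        return j[0];
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("j = " + run());
    }
}
```

Without the volatile modifier on ready, the reading thread could spin forever on a stale cached value, or read i before thread 1's write becomes visible.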
3. Order
Ordering: the program executes in the order in which the code is written. For a simple example, see the following code:
int i = 0;
boolean flag = false;
i = 1;        //Statement 1
flag = true;  //Statement 2
This code defines an int variable and a boolean variable, then assigns to each. In code order, statement 1 comes before statement 2. But when the JVM actually executes this code, is statement 1 guaranteed to execute before statement 2? Not necessarily. Why? Instruction reordering may occur here.
Let's explain instruction reordering. To improve performance, the processor may optimize the input code: it does not guarantee that the statements execute in exactly the order written, but it does guarantee that the final result of execution matches the result of executing the code in its written order.
For example, in the above code, which executes statement 1 and statement 2 first has no effect on the final program result, then it is possible that during the execution process, statement 2 is executed first and statement 1 is executed later.
Note, though, that while the processor may reorder instructions, it guarantees that the final result matches sequential execution. What ensures this? Look at the following example:
int a = 10;  //Statement 1
int r = 2;   //Statement 2
a = a + 3;   //Statement 3
r = a*a;     //Statement 4
This code has 4 statements, so one possible execution order is: Statement 2 -> Statement 1 -> Statement 3 -> Statement 4.
But could the execution order be: Statement 2 -> Statement 1 -> Statement 4 -> Statement 3?
No, because the processor considers data dependencies between instructions when reordering. If instruction 2 must use the result of instruction 1, the processor guarantees that instruction 1 executes before instruction 2. Here statement 4 depends on the value of a produced by statement 3, so statement 3 must execute before statement 4.
Although reordering will not affect the results of program execution within a single thread, what about multithreading? Let's see an example below:
//Thread 1:
context = loadContext();  //Statement 1
inited = true;            //Statement 2

//Thread 2:
while(!inited){
    sleep();
}
doSomethingwithconfig(context);

In the above code, since statements 1 and 2 have no data dependency, they may be reordered. If reordering occurs and statement 2 executes first during thread 1's run, thread 2 will believe that initialization has finished, jump out of the while loop, and execute the doSomethingwithconfig(context) method while context is still uninitialized, causing a program error.
As can be seen from the above, instruction reordering will not affect the execution of a single thread, but will affect the correctness of concurrent execution of threads.
In other words, in order to execute concurrent programs correctly, atomicity, visibility and orderliness must be ensured. As long as one is not guaranteed, it may cause the program to run incorrectly.
3. The Java memory model
The previous sections covered problems that can arise in memory models and concurrent programming. Now let's look at the Java memory model: what guarantees it provides, and what methods and mechanisms Java offers to ensure programs execute correctly under multithreading.
The Java virtual machine specification attempts to define a Java memory model (JMM) to hide the differences in memory access between hardware platforms and operating systems, so that Java programs achieve consistent memory-access behavior on every platform. What does the Java memory model specify? It defines the access rules for variables in a program and, put more broadly, the order of program execution. Note that, for better execution performance, the JMM does not prevent the execution engine from using the processor's registers or caches to speed up instruction execution, nor does it prevent the compiler from reordering instructions. In other words, the Java memory model also has cache consistency problems and instruction reordering problems.
The Java memory model stipulates that all variables are in main memory (similar to the physical memory mentioned above), and each thread has its own working memory (similar to the previous cache). All operations of a thread on a variable must be performed in working memory, and cannot directly operate on the main memory. And each thread cannot access the working memory of other threads.
To give a simple example: In java, execute the following statement:
i = 10;
The executing thread must first assign the value 10 to the copy of variable i in its own working memory, and then write it back to main memory; it does not write the value 10 directly into main memory.
So what guarantees does the Java language itself provide for atomicity, visibility and orderliness?
1. Atomicity
In Java, reads and assignments of variables of basic data types are atomic operations; that is, these operations cannot be interrupted, and they either execute completely or not at all.
Although this sentence seems simple, it is not that easy to understand. Look at the following example:
Please analyze which of the following operations are atomic operations:
x = 10;     //Statement 1
y = x;      //Statement 2
x++;        //Statement 3
x = x + 1;  //Statement 4
At first glance, some friends may say that the operations in the above four statements are all atomic operations. In fact, only statement 1 is an atomic operation, and none of the other three statements are atomic operations.
Statement 1 directly assigns the value 10 to x, which means that the thread executes this statement and writes the value 10 directly into working memory.
Statement 2 actually contains two operations: it first reads the value of x, then writes that value into y in working memory. Each of the two operations is atomic on its own, but together they are not atomic.
Similarly, x++ and x = x+1 include 3 operations: read the value of x, perform the operation of adding 1, and write the new value.
Therefore, only the operation of statement 1 in the above four statements is atomic.
In other words, only a simple read, or an assignment of a literal value to a variable, is atomic (assigning one variable to another is not atomic, because it involves reading one variable and writing another).
However, note one thing: on 32-bit platforms, reads and assignments of 64-bit data (long and double) take two operations, so their atomicity is not guaranteed. It appears, however, that in recent JDKs the JVM treats reads and assignments of 64-bit data as atomic as well.
From the above, it can be seen that the Java memory model only ensures that basic reads and assignments are atomic operations. If you want to achieve atomicity of a larger range of operations, it can be achieved through synchronized and Lock. Since synchronized and Lock can ensure that only one thread executes the code block at any time, there will naturally be no atomicity problem, thus ensuring atomicity.
2. Visibility
For visibility, Java provides the volatile keyword to ensure visibility.
When a shared variable is modified by volatile, it guarantees that a modified value is immediately flushed to main memory, and that when other threads need to read it, they read the new value from memory.
Ordinary shared variables cannot guarantee visibility, because it is uncertain when a modified ordinary variable is written back to main memory; when another thread reads it, memory may still hold the old value, so visibility cannot be guaranteed.
In addition, synchronized and Lock can also ensure visibility. Synchronized and Lock can ensure that only one thread acquires the lock at the same time and executes the synchronization code. Before releasing the lock, the modification of the variable will be refreshed to the main memory. Therefore, visibility can be guaranteed.
3. Order
The Java memory model allows compilers and processors to reorder instructions. Reordering does not affect single-threaded execution, but it can affect the correctness of concurrent, multithreaded execution.
In Java, a degree of "ordering" can be ensured with the volatile keyword (the underlying principle is explained in the next section). In addition, synchronized and Lock can guarantee ordering: they ensure only one thread executes the synchronized code at each moment, which amounts to letting threads execute that code sequentially, naturally guaranteeing order.
The Java memory model also has some innate "ordering" that holds without any special means; this is usually called the happens-before principle. If the execution order of two operations cannot be derived from the happens-before principle, their ordering is not guaranteed, and the virtual machine may reorder them at will.
Let's introduce the happens-before principle:
1) Program order rule: within a single thread, an operation written earlier happens-before an operation written later.
2) Lock rule: an unlock operation happens-before every subsequent lock operation on the same lock.
3) Volatile variable rule: a write to a volatile variable happens-before every subsequent read of that variable.
4) Transitivity rule: if operation A happens-before operation B, and B happens-before C, then A happens-before C.
5) Thread start rule: a Thread object's start() method happens-before every action of the started thread.
6) Thread termination rule: every action of a thread happens-before another thread's detection of its termination (for example via Thread.join()).
7) Thread interruption rule: a call to a thread's interrupt() method happens-before the interrupted thread's code detects the interruption.
8) Object finalization rule: the completion of an object's constructor happens-before the start of its finalizer.
These 8 principles are excerpted from "In-depth Understanding of the Java Virtual Machine".
Among these 8 rules, the first 4 rules are more important, while the last 4 rules are all obvious.
Let’s explain the first 4 rules below:
For the program order rule, my understanding is that a piece of code appears to execute in order within a single thread. Note that although the rule says "an operation written earlier happens-before an operation written later", this refers to the order the program appears to execute in, because the virtual machine may still reorder the code. Reordering is allowed precisely because the final result stays consistent with sequential execution: only instructions without data dependencies are reordered. So within a single thread, execution appears ordered, and this must be understood with care. This rule guarantees correct results only within a single thread; it says nothing about correctness across multiple threads.
The second rule is also easier to understand, that is, if the same lock is in a locked state, it must be released before the lock operation can be continued.
The third rule is relatively important and is the one relevant to the rest of this article. Intuitively: if one thread first writes a volatile variable and another thread then reads it, the write happens-before the read.
The fourth rule actually reflects that the happens-before principle is transitive.
4. In-depth analysis of volatile keywords
Everything discussed so far has been groundwork for the volatile keyword, so let's get to the topic.
1. Two-layer semantics of volatile keywords
Once a shared variable (class member variables, class static member variables) is modified by volatile, it has two layers of semantics:
1) Ensure visibility of different threads when operating this variable, that is, one thread modifies the value of a certain variable, and this new value is immediately visible to other threads.
2) It forbids instruction reordering.
Let’s look at a piece of code first. If thread 1 is executed first and thread 2 is executed later:
//Thread 1
boolean stop = false;
while(!stop){
    doSomething();
}

//Thread 2
stop = true;

This is a very typical piece of code, and many people may use this flag approach when stopping a thread. But will this code always run correctly; that is, will the thread always be stopped? Not necessarily. Most of the time it works, but it is also possible for the thread never to stop (the chance is very small, but once it happens the result is an infinite loop).
Let's explain why this code may cause the thread to fail to interrupt. As explained earlier, each thread has its own working memory during operation, so when thread 1 is running, it will copy the value of the stop variable and put it in its own working memory.
Then, if thread 2 changes the value of the stop variable but has not yet written it back to main memory before going off to do other things, thread 1 does not know about thread 2's change, so it keeps looping.
But after modifying with volatile it becomes different:
First: Using the volatile keyword will force the modified value to be written to the main memory immediately;
Second: with the volatile keyword, when thread 2 modifies the variable, the cache line holding stop in thread 1's working memory is invalidated (at the hardware level, the corresponding line in the CPU's L1 or L2 cache is invalidated);
Third: Since the cache line of the cache variable stop in thread 1's working memory is invalid, thread 1 will read it in main memory when it reads the value of the variable stop again.
Then when thread 2 modifies the stop value (of course, there are 2 operations here, modifying the value in thread 2's working memory, and then writing the modified value to memory), the cache line of the cache variable stop in thread 1's working memory will be invalid. When thread 1 reads, it finds that its cache line is invalid. It will wait for the corresponding main memory address of the cache line to be updated, and then read the latest value in the corresponding main memory.
Then what thread 1 reads is the latest correct value.
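The corrected stop-flag pattern can be written as a runnable sketch (the class name and the 100 ms pause are invented for this illustration). Because stop is volatile, the write in the main thread is flushed to main memory and the worker's cached copy is invalidated, so the loop is guaranteed to exit.

```java
public class StopFlagDemo {
    static volatile boolean stop = false;

    // Returns true if the worker thread exited after stop was set.
    static boolean run() throws InterruptedException {
        Thread worker = new Thread(() -> {
            while (!stop) {
                // doSomething();
            }
        });
        worker.start();
        Thread.sleep(100);  // let the worker spin for a moment
        stop = true;        // plays the role of thread 2's write
        worker.join(5000);  // with volatile, the loop exits promptly
        return !worker.isAlive();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("worker stopped: " + run());
    }
}
```

If the volatile modifier is removed, the JIT compiler is free to hoist the read of stop out of the loop, and the worker may never observe the change.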
2. Does volatile guarantee atomicity?
From the above, we know that the volatile keyword ensures the visibility of operations, but can volatile ensure that the operations on variables are atomic?
Let's see an example below:
public class Test {
    public volatile int inc = 0;

    public void increase() {
        inc++;
    }

    public static void main(String[] args) {
        final Test test = new Test();
        for(int i=0;i<10;i++){
            new Thread(){
                public void run() {
                    for(int j=0;j<1000;j++)
                        test.increase();
                }
            }.start();
        }

        while(Thread.activeCount()>1)  //Ensure that the previous threads have completed
            Thread.yield();
        System.out.println(test.inc);
    }
}

Think about what this program outputs. Some readers may think 10,000, but running it shows that the results differ from run to run and are always numbers less than or equal to 10,000.
Some readers may object: the code above performs an auto-increment on the variable inc, and since volatile guarantees visibility, each thread's increment should be immediately visible to the other threads. So after 10 threads each perform 1,000 increments, the final value of inc should be 1000*10=10000.
There is a misunderstanding here. The volatile keyword can ensure visibility, but the above program is wrong because it cannot guarantee atomicity. Visibility can only ensure that the latest value is read every time, but volatile cannot guarantee the atomicity of the operation of variables.
As mentioned earlier, auto-increment is not atomic: it includes reading the variable's original value, performing the add-1 operation, and writing to working memory. These three sub-operations of an increment may be split up, which can lead to the following situation:
If the value of variable inc at a certain time is 10,
Thread 1 performs self-increment operation on the variable. Thread 1 first reads the original value of the variable inc, and then thread 1 is blocked;
Then thread 2 performs an increment on the variable; it too reads the original value of inc. Since thread 1 only read inc and never modified it, the cache line holding inc in thread 2's working memory is not invalidated, so thread 2 reads inc directly from main memory, finds that its value is 10, performs the add-1 operation, writes 11 to working memory, and finally writes it back to main memory.
Thread 1 then continues with its add-1 operation. Since it has already read inc, note that in thread 1 the value of inc is still 10, so after adding 1 it gets 11, writes 11 to working memory, and finally writes it to main memory.
Then after the two threads perform a self-increment operation, inc only increases by 1.
Having come this far, some readers may object: doesn't modifying a volatile variable invalidate the cache line, so that other threads read the new value? Yes, that is correct; it is the volatile variable rule from the happens-before rules above. But note that thread 1 was blocked right after reading the variable, so it had not yet modified inc. Although volatile guarantees that thread 2 reads the value of inc from memory, thread 1 had not modified it, so there was no new value for thread 2 to see.
The root cause is that the autoincrement operation is not an atomic operation, and volatile cannot guarantee that any operation on variables is atomic.
Changing the above code in any of the following ways achieves the desired effect:
Use synchronized:
public class Test {
    public int inc = 0;

    public synchronized void increase() {
        inc++;
    }

    public static void main(String[] args) {
        final Test test = new Test();
        for(int i=0;i<10;i++){
            new Thread(){
                public void run() {
                    for(int j=0;j<1000;j++)
                        test.increase();
                }
            }.start();
        }

        while(Thread.activeCount()>1)  //Ensure that the previous threads have completed
            Thread.yield();
        System.out.println(test.inc);
    }
}

Using Lock:
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class Test {
    public int inc = 0;
    Lock lock = new ReentrantLock();

    public void increase() {
        lock.lock();
        try {
            inc++;
        } finally {
            lock.unlock();
        }
    }

    public static void main(String[] args) {
        final Test test = new Test();
        for(int i=0;i<10;i++){
            new Thread(){
                public void run() {
                    for(int j=0;j<1000;j++)
                        test.increase();
                }
            }.start();
        }

        while(Thread.activeCount()>1)  //Ensure that the previous threads have completed
            Thread.yield();
        System.out.println(test.inc);
    }
}

Using AtomicInteger:
import java.util.concurrent.atomic.AtomicInteger;

public class Test {
    public AtomicInteger inc = new AtomicInteger();

    public void increase() {
        inc.getAndIncrement();
    }

    public static void main(String[] args) {
        final Test test = new Test();
        for(int i=0;i<10;i++){
            new Thread(){
                public void run() {
                    for(int j=0;j<1000;j++)
                        test.increase();
                }
            }.start();
        }

        while(Thread.activeCount()>1)  //Ensure that the previous threads have completed
            Thread.yield();
        System.out.println(test.inc);
    }
}

The java.util.concurrent.atomic package, added in Java 1.5, provides atomic operation classes for basic data types: increment (add 1), decrement (subtract 1), addition (add a number), and subtraction (subtract a number), all guaranteed to be atomic. These classes use CAS (Compare And Swap) to implement atomic operations. CAS is in turn implemented with the CMPXCHG instruction provided by the processor, and the processor executes CMPXCHG atomically.
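The CAS retry loop that getAndIncrement performs internally can be sketched with the public AtomicInteger API (the class and method names in this sketch are invented for illustration): read the current value, compute the new value, and write it only if no other thread changed the variable in between; otherwise retry.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasDemo {
    static int casIncrement(AtomicInteger counter) {
        while (true) {
            int current = counter.get();  // read the old value
            int next = current + 1;       // compute the new value
            // compareAndSet writes `next` only if the value is still
            // `current`; on x86 this compiles down to a locked CMPXCHG.
            if (counter.compareAndSet(current, next)) {
                return next;
            }
            // another thread won the race: loop and retry
        }
    }

    public static void main(String[] args) {
        AtomicInteger counter = new AtomicInteger(0);
        for (int k = 0; k < 5; k++) {
            casIncrement(counter);
        }
        System.out.println(counter.get()); // prints 5
    }
}
```

Unlike the volatile inc++ version, the compare step makes the whole read-modify-write atomic: a concurrent modification simply causes a retry instead of a lost update.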
3. Can volatile ensure ordering?
As mentioned earlier, the volatile keyword can prohibit instruction reordering, so volatile can ensure order to a certain extent.
The volatile keyword's prohibition on reordering has two meanings:
1) When the program executes a read or write of the volatile variable, all changes made by preceding operations must already have completed, and their results must already be visible to subsequent operations; operations after it must not yet have been performed;
2) During instruction optimization, statements before an access to the volatile variable cannot be moved after it, and statements after the access cannot be moved before it.
Maybe what is said above is a bit confusing, so give a simple example:
//x and y are non-volatile variables
//flag is a volatile variable

x = 2;        //Statement 1
y = 0;        //Statement 2
flag = true;  //Statement 3
x = 4;        //Statement 4
y = -1;       //Statement 5
Since flag is a volatile variable, during instruction reordering statement 3 will not be moved before statements 1 and 2, nor after statements 4 and 5. However, the relative order of statements 1 and 2, and the relative order of statements 4 and 5, is not guaranteed.
Moreover, the volatile keyword guarantees that by the time statement 3 executes, statements 1 and 2 have completed, and their results are visible to statements 3, 4, and 5.
So let's go back to the previous example:
//Thread 1:
context = loadContext();  //Statement 1
inited = true;            //Statement 2

//Thread 2:
while(!inited){
    sleep();
}
doSomethingwithconfig(context);

When this example was given earlier, we noted that statement 2 may execute before statement 1, in which case context may not yet be initialized when thread 2 uses it, causing a program error.
If the inited variable is modified with the volatile keyword, this problem does not occur, because by the time statement 2 executes, context is guaranteed to have been initialized.
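This safe-publication pattern can be made concrete with a runnable sketch; the Context class, its config field, and the method names here are stand-ins invented for the illustration. Because inited is volatile, statement 1 cannot be reordered after statement 2, so thread 2 never observes inited == true with an uninitialized context.

```java
public class SafePublicationDemo {
    static class Context {
        final String config = "loaded";
    }

    static Context context;
    static volatile boolean inited = false;

    static String run() throws InterruptedException {
        final String[] seen = new String[1];
        Thread t2 = new Thread(() -> {
            while (!inited) { /* wait for initialization */ }
            // The volatile write to inited happens-before this read,
            // so context is guaranteed to be fully constructed here.
            seen[0] = context.config;
        });
        t2.start();
        context = new Context();  // statement 1: initialization
        inited = true;            // statement 2: volatile write publishes it
        t2.join();
        return seen[0];
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run());
    }
}
```

Without volatile, the assignment to inited could be reordered before the assignment to context, and thread 2 could dereference a null or half-constructed object.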
4. The principle and implementation mechanism of volatile
The preceding sections described some uses of the volatile keyword. Now let's look at how volatile ensures visibility and forbids instruction reordering at a lower level.
The following passage is excerpted from "In-depth Understanding of Java Virtual Machines":
"Observing the assembly code generated when adding the volatile keyword and not adding the volatile keyword, it is found that when adding the volatile keyword, there will be an additional lock prefix instruction."
The lock prefix instruction effectively acts as a memory barrier (also called a memory fence), which provides three functions:
1) It ensures that during reordering, instructions after the barrier are not moved before it, and instructions before the barrier are not moved after it; that is, by the time the instructions at the memory barrier execute, all operations before it have completed;
2) It will force the modification operations to the cache to be written to the main memory immediately;
3) If it is a write operation, it will cause the corresponding cache line in other CPUs to be invalid.
5. Scenarios using volatile keywords
The synchronized keyword prevents multiple threads from executing a piece of code at the same time, which can significantly hurt execution efficiency; in some cases volatile performs better than synchronized. Note, however, that volatile cannot replace synchronized, because volatile does not guarantee atomicity. In general, using volatile requires the following two conditions to hold:
1) Write operations to variables do not depend on the current value
2) The variable is not included in an invariant together with other variables
In effect, these conditions state that the valid values that can be written to the volatile variable are independent of any other program state, including the variable's own current state.
My understanding is that the two conditions above ensure the operations on the variable are atomic, which is required for a program using the volatile keyword to execute correctly under concurrency.
Below are several scenarios in Java using volatile.
1. Status flag

volatile boolean flag = false;

while(!flag){
    doSomething();
}

public void setFlag() {
    flag = true;
}

volatile boolean inited = false;

//Thread 1:
context = loadContext();
inited = true;

//Thread 2:
while(!inited){
    sleep();
}
doSomethingwithconfig(context);

2. Double check
class Singleton{
    private volatile static Singleton instance = null;

    private Singleton() {
    }

    public static Singleton getInstance() {
        if(instance==null) {
            synchronized (Singleton.class) {
                if(instance==null)
                    instance = new Singleton();
            }
        }
        return instance;
    }
}

References:
"Java Programming Thoughts"
"In-depth understanding of Java virtual machines"
That's all for this article. I hope it is helpful to everyone's learning.