In this article I will teach you how to analyze JVM thread dumps and how to find the root cause of a problem from the stack information. In my opinion, thread dump analysis is a skill that every Java EE product support engineer must master. The information stored in a thread dump usually goes far beyond what you would imagine, and we can put this information to work in our daily jobs.
My goal is to share the knowledge and experience I have accumulated in thread analysis over the past ten years. This knowledge and experience come from in-depth analysis of many JVM versions from many different vendors. Along the way I have also compiled a large number of common problem patterns.
So, are you ready? Add this article to your bookmarks now. Over the following weeks I will bring you this series of articles. What are you waiting for? Please share this thread analysis training plan with your colleagues and friends.
It sounds good. I really should improve my thread dump analysis skills... but where do I start?
My suggestion is to follow this thread analysis training plan with me. Below is the training content we will cover. Along the way I will also share the real cases I have handled, for everyone to study and understand.
1) Overview of thread dumps and basic knowledge
2) Thread dump principles and related tools
3) Thread dump formats of different JVMs (Sun HotSpot, IBM JRE, Oracle JRockit)
4) Introduction to thread dump logs and how to read them
5) Thread dump analysis and related techniques
6) Common problem patterns (thread contention, deadlocks, hanging IO calls, garbage collection / OutOfMemoryError problems, infinite loops, etc.)
7) Analysis of thread dump problems through examples
I hope this series of training will bring you real help, so please keep following the weekly article updates.
But what should I do if I have questions during the study process, or I cannot understand the content of an article?
Don't worry, just treat me as your mentor. You can consult me with any question about thread dumps (provided the question is not too basic). Please choose one of the following ways to get in touch with me:
1) Comment directly on this article (if you feel shy, you can stay anonymous)
2) Submit your thread dump data to the Root Cause Analysis forum
3) Send me an email; the address is [email protected]
Can you help me analyze the problems we run into on our products?
Of course. If you are willing, you can send me your live thread dump data by email or through the Root Cause Analysis forum. Real problems are the best teacher for learning and improving your skills.
I really hope everyone will like this training, so I will do my best to provide you with high-quality material and answer your various questions.
Before introducing thread dump analysis techniques and problem patterns, I must first cover the basics. So in this post I will go over the most fundamental content, so that everyone can better understand the interaction between the JVM, the middleware, and the Java EE container.
Java VM Overview
The Java virtual machine is the foundation of the Java EE platform. It is where the middleware and applications are deployed and run.
The JVM provides the following to the middleware software and to your Java / Java EE program:
A runtime environment for your Java / Java EE program (in binary form), along with program features and facilities (IO infrastructure, data structures, thread management, security, monitoring, etc.)
Dynamic memory allocation and management by means of garbage collection
Your JVM can run on many operating systems (Solaris, AIX, Windows, etc.) and can be configured according to your physical server. You can run one or more JVM processes on each physical / virtual server.
Interaction between JVM and middleware
The following diagram shows the high-level interaction model between the JVM, the middleware, and the applications.
The figure shows some simple and typical interactions between the JVM, the middleware, and the applications. As you can see, the thread allocation of a standard Java EE application is done between the middleware kernel and the JVM. (There are exceptions, of course: an application can call the Thread API directly to create threads. This approach is not common and needs to be used with care; a small sketch follows below.)
At the same time, please note that some threads are managed by the JVM itself. The typical example is the garbage collection threads, which the JVM uses to perform parallel garbage collection.
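For illustration, here is a minimal sketch of what such direct thread creation from application code looks like (the class and thread names are my own examples); inside a Java EE container this work is normally left to the middleware-managed thread pools:

    public class DirectThreadExample {

        public static void main(String[] args) throws InterruptedException {
            // Creating a thread directly from application code; a Java EE container
            // would normally do this for you through its managed thread pools.
            Thread worker = new Thread(() -> {
                System.out.println("Work executed by: " + Thread.currentThread().getName());
            }, "MyCustomWorker-1");

            // Middleware threads are typically daemon threads: they run in the
            // background and do not prevent the JVM from exiting.
            worker.setDaemon(true);
            worker.start();
            worker.join();
        }
    }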
Because most thread allocation is done by the Java EE container, it is important that you understand thread stack traces and can recognize them in the thread dump data. This lets you quickly find out what type of request the Java EE container is executing.
By analyzing a single thread dump snapshot, you will be able to tell the different thread pools found in the JVM apart and identify the types of requests they are handling.
The last section of this article gives you an overview of a HotSpot VM thread dump.
Please note that you can obtain the thread dump sample used in this article from the Root Cause Analysis forum.
JVM thread dump: what is it?
A JVM thread dump is a snapshot, taken at a given point in time, that provides you with a complete list of all the Java threads that have been created.
For every Java thread found, you get the following information (a programmatic sketch showing how to retrieve the same kind of data follows this list):
The thread name; it is often used by the middleware vendor to identify the thread, and usually carries the name of the thread pool it was assigned to and its state (running, stuck, etc.)
Thread type & priority, for example: daemon prio=3 ** Middleware software generally creates its threads as daemon threads, which means these threads run in the background and provide services to their users, e.g. to your Java EE application **
Java thread ID, for example: tid=0x000000011e52a800 ** This is the Java thread ID obtained via java.lang.Thread.getId() **
Native thread ID, for example: nid=0x251c ** This is crucial, because the native thread ID allows you to correlate, from the operating system's perspective, which threads inside your JVM are for example consuming the most CPU **
Java thread state and details, for example: waiting for monitor entry [0xfffffffea5afb000] java.lang.Thread.State: BLOCKED (on object monitor)
** This allows you to quickly understand the state of the thread and the possible reason why it is currently blocked **
Java thread stack trace; this is by far the most important data you can find in a thread dump. It is also where you will spend most of your analysis time, because the Java stack trace provides 90% of the information you need to find the root cause of many types of problems, as you will see later in the hands-on part of this series.
Java heap breakdown; starting with HotSpot VM 1.6, the heap memory usage can be seen at the end of the thread dump, e.g. the Java heap (YoungGen, OldGen) & the PermGen space. This information is very useful when analyzing problems caused by excessive GC, because you can correlate it with the thread data or patterns you have found and narrow the problem down quickly.
Heap
 PSYoungGen      total 466944K, used 178734K [0xffffffff45c00000, 0xffffffff70800000, 0xffffffff70800000)
  eden space 233472K, 76% used [0xffffffff45c00000, 0xffffffff50ab7c50, 0xffffffff54000000)
  from space 233472K, 0% used [0xffffffff62400000, 0xffffffff62400000, 0xffffffff70800000)
  to   space 233472K, 0% used [0xffffffff54000000, 0xffffffff54000000, 0xffffffff62400000)
 PSOldGen        total 1400832K, used 1400831K [0xfffffffef0400000, 0xffffffff45c00000, 0xffffffff45c00000)
  object space 1400832K, 99% used [0xfffffffef0400000, 0xffffffff45bfffb8, 0xffffffff45c00000)
 PSPermGen       total 262144K, used 248475K [0xfffffffed0400000, 0xfffffffee0400000, 0xfffffffef0400000)
  object space 262144K, 94% used [0xfffffffed0400000, 0xfffffffedf6a6f08, 0xfffffffee0400000)
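Here is the programmatic sketch mentioned above. It is only an illustration (the class name is my own) of where this per-thread data comes from: the java.lang.management ThreadMXBean API exposes much of the same information that a kill -3 / jstack thread dump prints for each thread:

    import java.lang.management.ManagementFactory;
    import java.lang.management.ThreadInfo;
    import java.lang.management.ThreadMXBean;

    public class ProgrammaticThreadDump {

        public static void main(String[] args) {
            ThreadMXBean mxBean = ManagementFactory.getThreadMXBean();
            // Dump all live threads, including locked monitor and synchronizer details.
            ThreadInfo[] threads = mxBean.dumpAllThreads(true, true);
            for (ThreadInfo info : threads) {
                // Thread name, Java thread id and state, similar to what a thread
                // dump shows for each thread (the native id is not exposed here).
                System.out.printf("\"%s\" id=%d state=%s%n",
                        info.getThreadName(), info.getThreadId(), info.getThreadState());
                for (StackTraceElement frame : info.getStackTrace()) {
                    System.out.println("    at " + frame);
                }
                System.out.println();
            }
        }
    }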
Detailed breakdown of the thread dump information
To help everyone understand it better, the following picture is provided. It breaks down in detail the thread dump information and the thread pools of a HotSpot VM, as shown below:
As you can see in the figure above, the thread dump is made up of several different sections. All of this information is important for problem analysis, but different problem patterns rely on different sections (the problem patterns will be simulated and demonstrated in later articles).
Now, using this sample, I will explain in detail each component of a HotSpot thread dump:
# Full thread dump
"Full thread dump" is a unique, global keyword that you can find in the middleware output log or in the standalone Java thread dump output (for example, generated on UNIX with: kill -3 <pid>). It marks the beginning of the thread dump snapshot.
Full thread dump Java HotSpot(TM) 64-Bit Server VM (20.0-b11 mixed mode):
# Java EE middleware, third-party, and custom application threads
This part is the core of the thread dump and is usually where you spend most of your analysis time. The number of threads in the dump depends on the middleware you use, the third-party libraries (which may have their own threads), and your application (if it creates custom threads, which is usually not a good practice).
In our sample thread dump, WebLogic is the middleware. Starting with WebLogic 9.2, it uses a single self-tuning thread pool identified by the queue name 'weblogic.kernel.Default (self-tuning)'.
"[Standby] Executethread: '414' for Queue: 'Weblogic.kernel.Default (Self-Tuning)'" Daemon Prio = 3 TID = 0x000000010916A800 NID = 0x2613 in Object .wait () [0XFFFFFFFFFE9EDFF000] java.lang.thread. State: Waiting (On Object Monitor) at java.lang.object.Wait (Native Method) -Waiting On <0xffffff27d44De0> (A Weblogic.Work.execuTetEthRead) . Lang.Object.Wait (Object.java:485) at weblogic.Work.Work.cutethread.WaitForrequest (Executethread.java:160) -Locked <0xFFFFFFFFF27D44DE0> (A weblogic.work.executetetHread) c.Work.executethread.run (executethread.java:181)
# HotSpot VM threads
These are internal threads managed by the HotSpot VM to perform internal (native) operations. Generally you do not need to worry about them, unless you observe high CPU usage on them (identified through the related thread dump entries and prstat or the native thread ID).
"VM Periodic Task Thread" Prio = 3 TID = 0x0000000101238800 NID = 0x19 Waiting On Condition
# HotSpot GC threads
When using HotSpot parallel GC (quite common nowadays in environments with multiple physical cores), the HotSpot VM creates, by default or according to your JVM configuration, a number of GC threads, each with its own identifier. These GC threads allow the VM to perform its periodic GC cleanup in parallel, which reduces the overall GC time; the trade-off is an increase in CPU usage.
"GC TASK Thread#0 (Parallelgc)" prio = 3 tid = 0x0000000100120000 nid = 0x3 runnable "GC TASK Thread#1 (Parallelgc)" prio = 3 tid = 0x0000131000 NID = 0x444 runnable ………………………………………………………………… ………………………………………………………………………………………………………………………………………………………
This is critical data, because when you face GC-related problems such as excessive GC or memory leaks, you will be able to correlate the native ID values of these threads with the OS / Java threads showing high CPU time, and then identify any offender. Later in the series you will learn how to recognize and diagnose this kind of problem.
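As a small illustration of that correlation step, here is a minimal helper sketch (the class and method names are my own, not part of any JDK tool) that converts the hexadecimal nid value from a thread dump into the decimal thread ID reported by OS tools such as top -H or prstat -L:

    public class NidConverter {

        // Converts a native thread id from a thread dump (e.g. "0x2613") into the
        // decimal LWP / thread id displayed by OS tools like top -H or prstat -L.
        public static long nidToDecimal(String nid) {
            return Long.parseLong(nid.replace("0x", ""), 16);
        }

        public static void main(String[] args) {
            System.out.println(nidToDecimal("0x2613")); // prints 9747
        }
    }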
# JNI global references
JNI (Java Native Interface) global references are basically references from native code to Java objects managed by the Java garbage collector. Their role is to prevent objects that are still in use by native code from being garbage collected.
It is also important to keep an eye on JNI references in order to detect JNI-related leaks. If your program uses JNI directly, or uses third-party tools such as monitoring agents, it can easily suffer from native memory leaks.
JNI global references: 1925
# Java heap usage view
This data was added as of JDK 1.6 and provides you with a short and fast view of the HotSpot heap. I find it very useful when dealing with problems where GC consumes too much CPU. Seeing the thread dump and the Java heap information in a single snapshot allows you to focus your analysis on (or rule out) a particular Java heap memory space at that point in time. As you can see in our sample, the Java heap OldGen space is at its maximum capacity!
Heap
 PSYoungGen      total 466944K, used 178734K [0xffffffff45c00000, 0xffffffff70800000, 0xffffffff70800000)
  eden space 233472K, 76% used [0xffffffff45c00000, 0xffffffff50ab7c50, 0xffffffff54000000)
  from space 233472K, 0% used [0xffffffff62400000, 0xffffffff62400000, 0xffffffff70800000)
  to   space 233472K, 0% used [0xffffffff54000000, 0xffffffff54000000, 0xffffffff62400000)
 PSOldGen        total 1400832K, used 1400831K [0xfffffffef0400000, 0xffffffff45c00000, 0xffffffff45c00000)
  object space 1400832K, 99% used [0xfffffffef0400000, 0xffffffff45bfffb8, 0xffffffff45c00000)
 PSPermGen       total 262144K, used 248475K [0xfffffffed0400000, 0xfffffffee0400000, 0xfffffffef0400000)
  object space 262144K, 94% used [0xfffffffed0400000, 0xfffffffedf6a6f08, 0xfffffffee0400000)
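To make that conclusion concrete, here is a tiny sketch (my own illustration, not part of any JDK tool) that computes the utilization of each space from the numbers above:

    public class HeapUsageCheck {

        // Utilization of a heap space given the "used" and "total" values (in KB)
        // reported at the end of the thread dump.
        static double utilization(long usedK, long totalK) {
            return 100.0 * usedK / totalK;
        }

        public static void main(String[] args) {
            System.out.printf("PSYoungGen: %.1f%%%n", utilization(178734, 466944));   // ~38.3%
            System.out.printf("PSOldGen:   %.1f%%%n", utilization(1400831, 1400832)); // ~100.0%: OldGen is full
            System.out.printf("PSPermGen:  %.1f%%%n", utilization(248475, 262144));   // ~94.8%
        }
    }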