Docker is hugely popular right now, and container technology can seem all-powerful, but that impression is a misconception. Don't be dazzled by the hype bubble. This article sets the hype aside and soberly lists five common misconceptions about Docker, from a Java programmer's perspective, to help you better understand Docker's strengths and its problems.
Setting aside the hype from the media and from vendors, how can we use Docker in a better, more rational way?
Docker has attracted a lot of attention recently, and the reasons are obvious. Delivering code successfully has always been a headache. Traditional container technology is a tangle of competing requirements and templates, while Docker lets you create containers simply and repeatably. Using Docker makes code delivery faster and more natural than with other containers, and so Docker took off. But along with the popularity come misunderstandings. Don't just take someone else's word on whether Docker is good or not; think it through rationally and comprehensively yourself, and you will know whether you really need it.
This article lists five major Docker misconceptions from a Java perspective. First, a bit of background. To understand Docker better, we consulted Avishai Ish-Shalom of Fewbytes, who has extensive Docker experience and is also an organizer of the DevOps Days conference. We compiled these misconceptions together with him.
The main misconceptions
1. Docker is a lightweight virtual machine
This is the main misconception people hold when they first encounter Docker, and it is understandable: Docker does look a bit like a virtual machine, and comparisons between Docker and virtual machines even appear on the Docker website. In reality, however, Docker is not a lightweight virtual machine; it is essentially an improved take on Linux containers (LXC). Docker and virtual machines are fundamentally different, and if you treat Docker containers as lightweight virtual machines you will run into plenty of problems.
Before using Docker, you must understand that there are many essential differences between Docker containers and virtual machines.
Resource isolation: Docker cannot provide the level of resource isolation a virtual machine can. A virtual machine's resources are strongly isolated, whereas Docker, by design, shares certain resources that it cannot isolate or protect, such as the page cache and the kernel entropy pool. (Note: the entropy pool is an interesting one. It collects and stores random bits generated by system activity, and the machine draws on it whenever it needs randomness, for example for password and cryptography work; a small Java sketch after these three points shows how that sharing can surface.) If one Docker container hogs such a shared resource, other processes simply have to wait until it is released.
Overhead: Most people know that a virtual machine's CPU and RAM can deliver near-physical performance, but that it adds considerable IO overhead. Because Docker does away with the guest OS, its images are smaller and need less storage than virtual machine images. That does not mean Docker is overhead-free, though: IO overhead still deserves attention in Docker containers, it is just not as severe as with virtual machines.
Kernel usage: Docker containers and virtual machines differ completely in how they use the kernel. Each virtual machine runs its own kernel, while all Docker containers on a host share one kernel. The shared kernel brings efficiency gains, but at the expense of availability and redundancy: if a virtual machine's kernel crashes, only that virtual machine is affected, but if the kernel under Docker crashes, every container on the host goes down with it.
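As promised above, here is a minimal Java sketch (not from the original article, just an illustration) of how the shared entropy pool can show up for a Java process. On Linux, SecureRandom.getInstanceStrong() typically reads seed material from /dev/random, which draws on the kernel pool shared by every container on the host; whether it actually blocks depends on the JDK's securerandom.source configuration and on how busy the host is, so treat this only as a way to observe the effect.

    import java.security.NoSuchAlgorithmException;
    import java.security.SecureRandom;

    public class EntropyDemo {
        public static void main(String[] args) throws NoSuchAlgorithmException {
            // The "strong" instance usually backs onto /dev/random on Linux,
            // i.e. the kernel entropy pool shared across all containers on the host.
            SecureRandom random = SecureRandom.getInstanceStrong();

            long start = System.nanoTime();
            byte[] seed = random.generateSeed(32); // may block if the shared pool is drained
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;

            System.out.printf("Generated %d seed bytes in %d ms%n", seed.length, elapsedMs);
        }
    }

If another container on the same host is consuming entropy heavily, the elapsed time reported here can grow noticeably, even though nothing in this container changed.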
2. Docker makes applications scalable
Because Docker can push code onto many servers in a very short time, it is natural to assume that Docker makes the application itself scalable. Unfortunately, that is wrong. The code is the foundation of the application, and Docker does not rewrite your code. An application's scalability still depends on the programmer. Using Docker does not automatically make your code scale; it only makes it easier to deploy across servers.
3. Docker is widely used in production environments
Because Docker is riding so high, many people assume it can already be used at scale in production. That is not quite right. Keep in mind that Docker is still a young technology, immature and evolving, which means there are plenty of annoying bugs and features still to be improved. It is fine to be excited about new technology, but it is best to work out the right usage scenarios and what to watch out for. Right now, Docker is easy to apply in development environments: it makes it easy to spin up many different environments (or at least it gives that impression), which is very useful for development.
In production, Docker's immaturity and rough edges limit its usage scenarios. For example, Docker does not directly support networking and resource monitoring across multiple machines, which makes some production use cases nearly impossible. Of course, there is plenty of potential, such as deploying the very same package from the development environment straight to production, and some of Docker's runtime features are genuinely useful in production. On balance, though, in production the drawbacks currently outweigh the advantages. That does not mean Docker cannot be applied successfully in production, only that you should not expect it to be mature and polished overnight.
4. Docker works on any OS
Another misconception is that Docker works on any operating system and in any environment. This probably comes from the shipping-container analogy, but the relationship between software and the operating system is not as simple and direct as that between cargo and a ship.
In fact, Docker is a Linux-only technology. Moreover, Docker relies on specific kernel features and needs a fairly recent kernel version. Given the differences between operating systems, if you rely on anything beyond the lowest common denominator of features across them, you will run into troublesome problems. Such problems may occur only 1% of the time, but when you deploy across many servers, that 1% can still be fatal.
Although Docker itself only runs on Linux, it can still be used on OS X or Windows: boot2docker runs a Linux virtual machine on the OS X or Windows machine, and Docker runs inside that virtual machine.
5. Docker enhances application security
It is also a misconception that Docker improves the security of your code and your delivery process. Here the difference between a physical shipping container and a software container matters. Docker is containerization technology with packaging and orchestration layered on top, but Linux containers have security vulnerabilities that can be attacked, and Docker does not add any security layer or patch on top of them. It is not armor that shields your application.
From a Java perspective
Some Java developers have already started using Docker, and some of its features do make it easier to build elastic environments. Unlike an uber-jar, a Docker image can bundle all of your dependencies, including the JVM itself, into a ready-to-ship unit. For developers, that is Docker's most attractive feature. But it also brings hidden costs. In general, programmers still need to monitor the application, write and debug code against it, connect to it, and tune it, and with Docker each of those tasks takes extra work.
For example, suppose we want to use jconsole, which relies on JMX, and JMX uses RMI over the network. With Docker this is not straightforward, and it takes a few tricks to expose the required ports. We first ran into this when building a Docker setup for Takipi, where we also had to run an agent process alongside the JVM inside the container. A detailed solution is on GitHub.
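One common workaround (a minimal sketch, not the exact setup from the GitHub solution mentioned above) is to pin both the JMX registry and the RMI connector to a single fixed port from inside the application, so that one port mapping such as docker run -p 9010:9010 is enough for jconsole to reach the JVM. The port number 9010 and the class name here are illustrative assumptions.

    import java.lang.management.ManagementFactory;
    import java.rmi.registry.LocateRegistry;
    import java.util.HashMap;
    import javax.management.MBeanServer;
    import javax.management.remote.JMXConnectorServer;
    import javax.management.remote.JMXConnectorServerFactory;
    import javax.management.remote.JMXServiceURL;

    public class JmxInContainer {
        public static void main(String[] args) throws Exception {
            int port = 9010; // hypothetical fixed port, published with `docker run -p 9010:9010`

            // Start the RMI registry and the JMX connector on the same port,
            // so only one port has to be exposed from the container.
            LocateRegistry.createRegistry(port);
            MBeanServer mbeanServer = ManagementFactory.getPlatformMBeanServer();
            JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi://0.0.0.0:" + port + "/jndi/rmi://0.0.0.0:" + port + "/jmxrmi");
            JMXConnectorServer server =
                JMXConnectorServerFactory.newJMXConnectorServer(url, new HashMap<>(), mbeanServer);
            server.start();

            System.out.println("JMX connector listening on port " + port);
            Thread.currentThread().join(); // keep the process alive so jconsole can attach
        }
    }

The same effect can usually be achieved with the standard com.sun.management.jmxremote JVM flags; the point is simply that the RMI port must be fixed and published, because Docker will not forward a port the JVM picks at random.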
Another serious problem is that performance tuning of Docker containers is quite hard. With containers, you often do not control exactly how much memory each container will end up with: run 20 containers and memory gets divided among them in ways you cannot easily predict. If you plan to tune the heap with -Xmx, that is difficult, because the JVM inside a Docker container depends on detecting how much memory it has been given, and it may not see the container's actual limit. Performance tuning is next to impossible if you do not know how much memory you really have.
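As a quick illustration (again a sketch of my own, not part of the article's tooling), a JVM can report what it believes its limits are. On HotSpot, the cast to com.sun.management.OperatingSystemMXBean exposes the physical-memory figure, and inside a container that figure may be the host's total RAM rather than the container's limit, which is exactly what turns -Xmx tuning into guesswork. The class name is made up for the example.

    import java.lang.management.ManagementFactory;

    public class MemoryCheck {
        public static void main(String[] args) {
            // What the JVM thinks it may use for the heap (-Xmx or ergonomics defaults).
            long maxHeap = Runtime.getRuntime().maxMemory();

            // Physical memory as the JVM detects it; inside a container this may
            // reflect the host, not the container's memory limit.
            com.sun.management.OperatingSystemMXBean os =
                (com.sun.management.OperatingSystemMXBean) ManagementFactory.getOperatingSystemMXBean();
            long physical = os.getTotalPhysicalMemorySize();

            System.out.printf("Max heap: %d MB%n", maxHeap / (1024 * 1024));
            System.out.printf("Detected physical memory: %d MB%n", physical / (1024 * 1024));
        }
    }

If the detected physical memory looks like the whole host while the container was started with a much smaller memory limit, any heap sizing based on "available memory" will be wrong.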
Conclusion
Docker is a very interesting technology with some real, effective use cases. Like any emerging technology, it will take time to fill in missing features and fix known bugs. And yes, there is a great deal of hype around it right now. But remember: hype is not the same as success.