Python and Ruby have similar frameworks, but because libraries containing synchronous code inevitably get pulled in during real-world use, those frameworks never took off. Before Node.js, server-side JavaScript was almost a blank slate, so Node.js was able to establish an ecosystem in which all IO is asynchronous.
The bottleneck of most web applications is IO: reading and writing disks, networks, and databases. The strategy for spending this waiting time is the key to improving performance.
PHP's strategy: run multiple processes, each of which simply blocks until its IO completes. Disadvantages: more processes consume more memory, and sharing data between processes is difficult.
A common C/C++ strategy: run multiple threads, with the program managing lock state itself. Disadvantages: high development cost, easy to get wrong, and hard to debug.
Python (Tornado): multiple requests take turns executing in a single process, switching to another request whenever the current one hits IO. Disadvantage: within a single request, time is still not used as efficiently as possible.
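For contrast, here is a minimal sketch of Node.js's strategy (the file path is a made-up example): the single process never blocks on IO; it registers a callback and immediately moves on, and the event loop runs the callback once the IO completes.

```js
// Node.js: one process, non-blocking IO via callbacks.
const fs = require('fs');

console.log('request A: start reading file');
fs.readFile('/tmp/data.txt', 'utf8', (err, data) => {
  // Invoked later by the event loop, when the disk read completes.
  if (err) throw err;
  console.log('request A: read finished,', data.length, 'chars');
});

// Printed before the callback above: the process handles
// other requests while A's IO is still in flight.
console.log('request B: handled while A waits on IO');
```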
What is "use time most efficiently"? For example, there are now two unrelated database queries. In PHP, one will usually be executed first, and the second one will be executed after the execution is completed (the total time is a + b). Obviously, this is not the most efficient. Two queries should be executed at the same time, and the time is max(a, b).
The problem with Python and other languages that support multithreading is that, at the language level, it is hard for the programmer to tell the runtime that two operations should run simultaneously; even where a way exists, it is cumbersome enough that most people don't bother (it isn't worth the trouble). Because Node.js stubbornly forces all IO to be asynchronous, Node.js programmers are thoroughly at home with this style, and combined with libraries that improve readability (promise, async), it is easy to run unrelated operations in parallel.
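As a sketch of this in Node.js, with a hypothetical query() helper standing in for a real database call: awaiting the two queries one after the other costs roughly a + b, while starting both and awaiting them with Promise.all costs roughly max(a, b).

```js
// Hypothetical query(): resolves with a fake result after `ms` milliseconds.
function query(sql, ms) {
  return new Promise((resolve) =>
    setTimeout(() => resolve(`result of: ${sql}`), ms));
}

// Sequential: the second query does not start until the first finishes.
async function sequential() {
  const a = await query('query A', 300); // a = 300 ms
  const b = await query('query B', 500); // b = 500 ms
  return [a, b];                         // total ≈ a + b = 800 ms
}

// Parallel: both queries are in flight at the same time.
async function parallel() {
  const [a, b] = await Promise.all([
    query('query A', 300),
    query('query B', 500),
  ]);
  return [a, b];                         // total ≈ max(a, b) = 500 ms
}

parallel().then(console.log);
```

The queries themselves are unchanged; only the point at which each one is started differs.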
The above covers how asynchronous IO is implemented, so where does its advantage actually show up? Asynchronous IO cannot magically reduce the load on a server: if you need to add a server, you still need to add a server. What it does is shorten each individual request by removing the pointless waiting inside it. The number of requests processed per unit time does not change, but the processing time of each request drops. From this perspective the server does save some resources as well, namely the memory consumed by keeping each request's connection open.