Recently, I have been studying the book "High Performance Web Sites" by Steve Souders. This article is my study notes, organizing what I have learned for easy reference later.
The Performance Golden Rule: only 10% to 20% of end-user response time is spent retrieving the requested HTML document. The remaining 80% to 90% is spent making HTTP requests for all the components (images, scripts, stylesheets, etc.) referenced by that HTML document.
--Steve Souders
1 Merge files (reduce the number of HTTP requests)
CSS Sprites
Use CSS sprites to merge the images used on the website into a single image, and display an individual icon by using background-position together with width and height to control which part of the background image is shown. This reduces multiple image requests to a single one. There are many tools for generating CSS sprites: grunt and gulp both have plug-ins for it, and CssGaga is also good.
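A minimal sketch (the sprite file name and icon coordinates are hypothetical):

.icon {
    background-image: url("sprite.png");  /* one merged image for all icons */
    background-repeat: no-repeat;
}
.icon-home {
    width: 16px;
    height: 16px;
    background-position: 0 0;             /* first icon in the sprite */
}
.icon-search {
    width: 16px;
    height: 16px;
    background-position: -16px 0;         /* shift left to reveal the second icon */
}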
Merge JS and CSS
Like the sprite map, merging CSS and JS files is another important way to reduce HTTP requests. Merging CSS files is uncontroversial at this point, but with JS modularity now prevalent, merging all JS files into one file looks like a step backwards. The better approach is to follow the compiled-language pattern: keep the JS modular in source, and during the build generate a target file containing only the JS actually used by the initial request.
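For example, using the gulp-concat plug-in mentioned above (the file paths and module list are hypothetical), a build task that merges only the scripts the initial request needs might look like:

const gulp = require('gulp');
const concat = require('gulp-concat');

// merge only the modules the initial request actually uses
gulp.task('scripts', function () {
    return gulp.src(['src/js/core.js', 'src/js/home.js'])  // hypothetical module list
        .pipe(concat('app.js'))
        .pipe(gulp.dest('dist/js'));
});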
2 Use a content delivery network (reduce HTTP request time)
Another factor influencing HTTP request time is the distance between the user and the web server. Obviously, the greater the distance, the longer the request takes, and a CDN can greatly improve this.
A CDN is a collection of web servers distributed across multiple geographical locations, used to deliver content to users more efficiently. Its main function is to serve static files to end users, and it also provides download, security, and other services.
3 Set up browser caching (avoid duplicate HTTP requests)
Using Expires/Cache-Control
Browsers can use a cache to avoid repeating the same requests. HTTP/1.0 and HTTP/1.1 implement caching differently: Expires (1.0) and Cache-Control (1.1). With Expires, the web server tells the client that its cached copy of a file can be used until a specified time, with no further requests to the server, for example:
Expires: Thu, 01 Dec 2016 16:00:00 GMT (GMT format)
This setting means the cached copy can be used until December 1, 2016 without making any further requests.
Expires has a limitation in how it expresses the deadline: it requires the client and server clocks to be strictly synchronized. The Cache-Control header introduced in HTTP/1.1 instead specifies a lifetime in seconds, so it does not have this limitation, for example:
Cache-Control: max-age=31536000
This setting means the cache lifetime is one year. Cache-Control is the recommended choice wherever HTTP/1.1 is supported. One more thing to note: when Cache-Control and Expires are both present, Cache-Control takes priority.
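For example, in Nginx (the file-extension list is only an illustration), the expires directive emits both headers at once:

# cache static assets for one year;
# sends both Expires and Cache-Control: max-age=31536000
location ~* \.(css|js|png|jpg|gif)$ {
    expires 1y;
}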
Configure or remove ETag
Using Expires/Cache-Control lets repeat visits be served from the local cache, avoiding duplicate HTTP requests and speeding up the site. However, if the user clicks the browser's refresh button or the copy has expired, an HTTP GET request is still sent to the server. If the file has not changed, the server does not return the entire file; instead it returns a 304 Not Modified status code.
The server has two bases for determining whether the file has changed: Last-Modified (the last modification date) and ETag (the entity tag).
ETag (Entity Tag) was introduced in HTTP/1.1 and takes priority over Last-Modified when both are present. The server compares the ETag sent by the client (in the If-None-Match header) with the file's current ETag: if they match, it returns 304 Not Modified; otherwise it returns the entire file with 200 OK.
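For example (the ETag value here is made up), a conditional request and its response look like:

GET /app.js HTTP/1.1
If-None-Match: "5e3f-1a2b3c"

HTTP/1.1 304 Not Modified
ETag: "5e3f-1a2b3c"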
ETag has a problem: by default it embeds server-specific values, so when the browser downloads a component from one server and later requests it from another, the ETags do not match. This is no issue if your website is hosted on a single server, but many websites now use multiple servers, where the default ETag greatly reduces the success rate of these validity checks.
The solution is to configure the ETag to drop the server-specific inode value and keep only the modification timestamp and size, or to remove the ETag entirely and rely on Last-Modified to validate the file.
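For example, Apache's FileETag directive supports both options (Nginx has a similar switch, etag off;):

# Apache: build the ETag from modification time and size only (no inode)
FileETag MTime Size

# or remove the ETag entirely and fall back to Last-Modified
FileETag None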
4 Compress components (reduce HTTP response size)
Compressing the files transmitted over HTTP reduces the size of responses and speeds up requests. GZIP is currently the most common and most effective compression method.
However, not all resource files should be compressed. Compression has costs: the server spends CPU cycles compressing, and the client must spend cycles decompressing, so weigh it against your own site's needs. Most websites compress their HTML documents, and some also compress JS and CSS, but almost none apply GZIP to images, PDFs, and similar files. The reason is that these files are already compressed; gzipping them again cannot make them smaller. In fact, adding the gzip headers, dictionary, and checksum to the response body actually makes it bigger, while also wasting CPU.
Enabling GZIP on a website is done through the web server's configuration (IIS, Nginx, Apache, etc.).
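For example, in Nginx (the values shown are typical, not prescriptive):

# enable gzip for text-based resources only; leave images and PDFs alone
gzip            on;
gzip_comp_level 5;      # trade CPU cost against compression ratio
gzip_min_length 256;    # skip tiny files where the gzip overhead outweighs the gain
gzip_types      text/plain text/css application/javascript application/json;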
5 Put CSS files at the top
Whether a CSS file is placed at the top or at the bottom does not change the HTTP requests, so the request time is the same either way. From the perspective of user experience, however, putting the CSS file at the top is better.
The reason is that the browser parses the HTML document from top to bottom. With the CSS file in the head, the page requests the CSS file first, then builds the DOM tree and renders it, so the page is presented to the user progressively.
Conversely, if the CSS file is placed at the end, the page builds the full DOM, then requests the CSS file, and only then renders the entire DOM tree and presents it to the user. From the user's perspective, the whole page is a blank white screen until the CSS file arrives. The white screen is deliberate browser behavior; David Hyatt explains it like this:
Rendering the DOM tree before the styles are fully loaded is wasted work, because it would have to be rendered again once the styles arrive, and it causes the FOUC (flash of unstyled content) problem.
Another thing to note is to use link instead of @import to include CSS stylesheets. Even if an @import rule is written in the head, the stylesheet can end up loading as if it were placed at the end of the document.
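A minimal sketch (the stylesheet name is hypothetical):

<!-- preferred: stylesheet linked in the head, so the page renders progressively -->
<head>
    <link rel="stylesheet" href="app.css">
</head>

<!-- avoid: @import may load the sheet as if it were at the bottom of the page -->
<style>
    @import url("app.css");
</style>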
6 Put JS files at the bottom
HTTP requests are made in parallel, and the number of parallel downloads differs between browsers (2, 4, or 8). Parallel downloads speed up HTTP requests, but a JS file placed at the top blocks not only the download of subsequent files but also the rendering of the page.
Why does this happen? There are two reasons:
The JS file may contain document.write calls that modify the page content, so the browser will not render the page until the script has finished executing.
JS files may depend on one another regardless of size, so they must execute in order; to guarantee that order, parallel downloads are blocked while a script is loading.
Therefore, the best approach is to place JS files at the end, so that all the visible components of the page load before the scripts are requested, improving the user experience.
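A minimal sketch (the script file name is hypothetical):

<body>
    <div id="content">...page content renders first...</div>
    <!-- scripts go just before the closing body tag -->
    <script src="app.js"></script>
</body>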
The above are some suggestions for improving website performance (Part I). I hope they are helpful to you. The next article will continue with Part II.