For this setup you can use your favorite web server; I use it mostly with Nginx.
Usually a properly configured Nginx can handle up to 400k-500k requests per second (clustered); most of what I have seen is 50k-80k requests per second (non-clustered) at about 30% CPU load. Granted, that was on 2 x Intel Xeon with hyperthreading enabled, but it should work without problems on slower machines.
You must understand that this config is used in a testing environment, not in production, so you will need to find a way to adapt most of these settings to fit your server.
First, you will need to install nginx:
yum install nginx
apt install nginx
Back up the original configuration, and you can begin reconfiguring it. Open /etc/nginx/nginx.conf with your favorite editor.
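The backup step can be scripted. The snippet below rehearses it on a scratch copy so it is runnable anywhere; the stand-in contents are an assumption, and on a real server you would point `conf` at /etc/nginx/nginx.conf instead:

```shell
# Sketch: back up nginx.conf before editing.
# Uses a temporary stand-in file so the example runs anywhere;
# on a real server, set conf=/etc/nginx/nginx.conf instead.
conf=$(mktemp /tmp/nginx.conf.XXXXXX)
printf 'worker_processes auto;\nevents { }\n' > "$conf"  # stand-in contents
cp -a "$conf" "$conf.bak"                                # the actual backup
ls -l "$conf" "$conf.bak"
```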
# you must set worker processes based on your CPU cores, nginx does not benefit from setting more than that
worker_processes auto; # newer versions calculate it automatically
# number of file descriptors used for nginx
# the limit for the maximum FDs on the server is usually set by the OS.
# if you don't set FD's then OS settings will be used which is by default 2000
worker_rlimit_nofile 100000;
# only log critical errors
error_log /var/log/nginx/error.log crit;
# provides the configuration file context in which the directives that affect connection processing are specified.
events {
# determines how many clients will be served per worker
# max clients = worker_connections * worker_processes
# max clients is also limited by the number of socket connections available on the system (~64k)
worker_connections 4000;
# optimized to serve many clients with each thread, essential for linux -- for testing environment
use epoll;
# accept as many connections as possible, may flood worker connections if set too low -- for testing environment
multi_accept on;
}
http {
# cache informations about FDs, frequently accessed files
# can boost performance, but you need to test those values
open_file_cache max=200000 inactive=20s;
open_file_cache_valid 30s;
open_file_cache_min_uses 2;
open_file_cache_errors on;
# to boost I/O on HDD we can disable access logs
access_log off;
# copies data between one FD and other from within the kernel
# faster than read() + write()
sendfile on;
# send headers in one piece, it is better than sending them one by one
tcp_nopush on;
# don't buffer data sent, good for small data bursts in real time
# https://brooker.co.za/blog/2024/05/09/nagle.html
# https://news.ycombinator.com/item?id=10608356
#tcp_nodelay on;
# reduce the data that needs to be sent over network -- for testing environment
gzip on;
# gzip_static on;
gzip_min_length 10240;
gzip_comp_level 1;
gzip_vary on;
gzip_disable "msie6";
gzip_proxied expired no-cache no-store private auth;
gzip_types
# text/html is always compressed by HttpGzipModule
text/css
text/javascript
text/xml
text/plain
text/x-component
application/javascript
application/x-javascript
application/json
application/xml
application/rss+xml
application/atom+xml
font/truetype
font/opentype
application/vnd.ms-fontobject
image/svg+xml;
# allow the server to close connection on non responding client, this will free up memory
reset_timedout_connection on;
# request timed out -- default 60
client_body_timeout 10;
# if client stops responding, free up memory -- default 60
send_timeout 2;
# server will close connection after this time -- default 75
keepalive_timeout 30;
# number of requests client can make over keep-alive -- for testing environment
keepalive_requests 100000;
}
Now you can save the configuration and run one of the commands below:
nginx -s reload
/etc/init.d/nginx start|restart
If you want to test the configuration first, you can run:
nginx -t
/etc/init.d/nginx configtest
server_tokens off;
This is far from a secure DDoS defense, but it can slow down some small-scale DoS attempts. This configuration is intended for a testing environment; you should use your own values.
# limit the number of connections per single IP
limit_conn_zone $binary_remote_addr zone=conn_limit_per_ip:10m;
# limit the number of requests for a given session
limit_req_zone $binary_remote_addr zone=req_limit_per_ip:10m rate=5r/s;
# zone which we want to limit by upper values, we want limit whole server
server {
limit_conn conn_limit_per_ip 10;
limit_req zone=req_limit_per_ip burst=10 nodelay;
}
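One possible refinement, sketched below with the same zone names: by default nginx rejects throttled requests with 503, which monitoring can mistake for a server fault, and `limit_conn_status` / `limit_req_status` let you report 429 (Too Many Requests) instead:

```nginx
server {
    limit_conn conn_limit_per_ip 10;
    limit_req zone=req_limit_per_ip burst=10 nodelay;
    # report throttling as 429 rather than the default 503
    limit_conn_status 429;
    limit_req_status 429;
}
```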
# if the request body size is more than the buffer size, then the entire (or partial)
# request body is written into a temporary file
client_body_buffer_size 128k;
# buffer size for reading client request header -- for testing environment
client_header_buffer_size 3m;
# maximum number and size of buffers for large headers to read from client request
large_client_header_buffers 4 256k;
# read timeout for the request body from client -- for testing environment
client_body_timeout 3m;
# how long to wait for the client to send a request header -- for testing environment
client_header_timeout 3m;
Now you can test the configuration again:
nginx -t # /etc/init.d/nginx configtest
Then reload or restart your nginx:
nginx -s reload
/etc/init.d/nginx reload|restart
You can test this configuration with tsung, and when you are satisfied with the results you can hit Ctrl+C, since it can run for hours.
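A minimal tsung scenario for such a run might look like the sketch below; the host names, arrival rate, and duration are made-up example values, and the DTD path may differ on your distribution:

```xml
<?xml version="1.0"?>
<!DOCTYPE tsung SYSTEM "/usr/share/tsung/tsung-1.0.dtd">
<tsung loglevel="notice">
  <clients>
    <client host="localhost" use_controller_vm="true"/>
  </clients>
  <servers>
    <server host="127.0.0.1" port="80" type="tcp"/>
  </servers>
  <load>
    <!-- 10 minutes, 100 new users per second: example values only -->
    <arrivalphase phase="1" duration="10" unit="minute">
      <users arrival_rate="100" unit="second"/>
    </arrivalphase>
  </load>
  <sessions>
    <session name="http-get" probability="100" type="ts_http">
      <request><http url="/" method="GET" version="1.1"/></request>
    </session>
  </sessions>
</tsung>
```

Run it with `tsung -f scenario.xml start`.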
Raising the nofile limit (max open files / file descriptors) for nginx on Linux: on RHEL/CentOS 7+ there are two ways to increase nginx's nofile limit. While nginx is running, check the current limits of the master process:
$ cat /proc/$(cat /var/run/nginx.pid)/limits | grep open.files
Max open files 1024 4096 files
And of the worker processes:
ps --ppid $(cat /var/run/nginx.pid) -o %p|sed '1d'|xargs -I{} cat /proc/{}/limits|grep open.files
Max open files 1024 4096 files
Max open files 1024 4096 files
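For comparison, the soft and hard limits of your own shell can be read with `ulimit`; nginx processes get their own limits, but the mechanics are the same:

```shell
# Soft and hard open-file limits of the current shell.
# Workers inherit similar limits unless worker_rlimit_nofile
# or LimitNOFILE override them.
soft=$(ulimit -Sn)
hard=$(ulimit -Hn)
echo "soft=$soft hard=$hard"
```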
Trying to set this with the worker_rlimit_nofile directive in {,/usr/local}/etc/nginx/nginx.conf fails, because the SELinux policy does not allow setrlimit. This shows up in /var/log/nginx/error.log:
2015/07/24 12:46:40 [alert] 12066#0: setrlimit(RLIMIT_NOFILE, 2342) failed (13: Permission denied)
And in the SELinux audit log:
type=AVC msg=audit(1437731200.211:366): avc: denied { setrlimit } for pid=12066 comm="nginx" scontext=system_u:system_r:httpd_t:s0 tcontext=system_u:system_r:httpd_t:s0 tclass=process
Raise the nofile limit via limits.conf:
# /etc/security/limits.conf
# /etc/default/nginx (ULIMIT)
$ nano /etc/security/limits.d/nginx.conf
nginx soft nofile 65536
nginx hard nofile 65536
# note: limits.conf takes effect for new login sessions; restart nginx afterwards
Or raise the limit via a systemd drop-in:
$ mkdir -p /etc/systemd/system/nginx.service.d
$ nano /etc/systemd/system/nginx.service.d/nginx.conf
[Service]
LimitNOFILE=30000
$ systemctl daemon-reload
$ systemctl restart nginx.service
Set the SELinux boolean httpd_setrlimit to true (1); this will set the FD limit for the worker processes. Leave the worker_rlimit_nofile directive in {,/usr/local}/etc/nginx/nginx.conf and run the following as root:
setsebool -P httpd_setrlimit 1
By default, max_ranges is not limited. A DoS attack can issue many range requests, which affects stability and I/O.
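To cap it, the `max_ranges` directive accepts a count; a sketch (the location and value here are example choices):

```nginx
location /downloads/ {
    # allow a single byte range per request; 0 disables range support entirely
    max_ranges 1;
}
```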
| Socket type | Latency (ms) | Latency stdev (ms) | CPU load |
|---|---|---|---|
| default | 15.65 | 26.59 | 0.3 |
| accept_mutex off | 15.59 | 26.48 | 10 |
| reuseport | 12.35 | 3.15 | 0.3 |
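The `reuseport` row in the table corresponds to a listen flag (nginx 1.9.1+, using the Linux SO_REUSEPORT socket option), which gives each worker its own listening socket and lets the kernel spread incoming connections; a sketch:

```nginx
server {
    # one listening socket per worker; the kernel distributes connections,
    # which produced the lower latency stdev in the table above
    listen 80 reuseport;
}
```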
Multi-threaded sending of files is currently supported only on Linux. Without the sendfile_max_chunk limit, one fast connection may seize the worker process entirely.
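Put together, a thread-pool sendfile setup might look like this sketch; nginx built with `--with-threads` is assumed, and the location and chunk size are example values:

```nginx
location /video/ {
    sendfile           on;
    # keep one fast client from monopolizing a worker
    sendfile_max_chunk 512k;
    # offload blocking disk reads to a thread pool (Linux only)
    aio                threads;
}
```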
# inside the stream {} context
map $ssl_preread_protocol $upstream {
"" ssh.example.com:22;
"TLSv1.2" new.example.com:443;
default tls.example.com:443;
}
# ssh and https on the same port
server {
listen 192.168.0.1:443;
proxy_pass $upstream;
ssl_preread on;
}
Enable BBR congestion control (Linux 4.9+): load the tcp_bbr module and make it persistent:
modprobe tcp_bbr && echo 'tcp_bbr' >> /etc/modules-load.d/bbr.conf
echo ' net.ipv4.tcp_congestion_control=bbr ' >> /etc/sysctl.d/99-bbr.conf
# fq is the recommended qdisc for production; with Linux v4.13-rc1+ BBR can be used with other qdiscs, not only fq (`q_disc`).
echo ' net.core.default_qdisc=fq ' >> /etc/sysctl.d/99-bbr.conf
sysctl --system
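After `sysctl --system`, you can confirm which congestion control is active; on a stock kernel without the module loaded this may still report cubic:

```shell
# Active TCP congestion control algorithm (Linux); falls back to
# "unknown" on systems where /proc/sys is not readable.
active=$(cat /proc/sys/net/ipv4/tcp_congestion_control 2>/dev/null || echo unknown)
echo "active: $active"
cat /proc/sys/net/ipv4/tcp_available_congestion_control 2>/dev/null || true
```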