This article describes how to implement dynamic/static separation and load balancing with Nginx and Tomcat. So-called dynamic/static separation means serving static files such as images and HTML through Nginx (or Apache, etc.) while handling dynamic requests such as .jsp and .do through Tomcat (or WebLogic), so that static and dynamic pages are served by different containers.
Part I: Introduction to Nginx
Nginx is a high-performance HTTP and reverse proxy server with high stability; it supports hot deployment and is easy to extend with modules. Under peak traffic or malicious slow-connection attacks, an ordinary web server may exhaust its physical memory, swap heavily, stop responding, and have to be restarted. Nginx uses phased resource allocation, serves static files directly, and provides non-caching reverse proxy acceleration, load balancing, and fault tolerance, so it holds up well under such high-concurrency access.
Part II: Nginx Installation and Configuration
First step: Download Nginx installation package http://nginx.org/en/download.html
Second step: Install Nginx on Linux
1. #tar zxvf nginx-1.7.8.tar.gz    # Unpack
2. #cd nginx-1.7.8
3. #./configure --with-http_stub_status_module --with-http_ssl_module    # Enable the server status page and the HTTPS module
If this step reports a missing PCRE library, as shown in the figure, first complete the third step below (installing PCRE), then re-run ./configure.
4. #make && make install    # Compile and install
5. Test whether the installation and configuration are correct; Nginx is installed at /usr/local/nginx:
#/usr/local/nginx/sbin/nginx -t, as shown in the figure:
Third step: Install PCRE on Linux
#tar zxvf pcre-8.10.tar.gz    # Unpack
#cd pcre-8.10
#./configure
#make && make install    # Compile and install
Part III: Nginx + Tomcat Implements Dynamic and Static Separation
Dynamic and static separation means that Nginx serves the static pages (HTML pages) and images requested by the client, while Tomcat handles the dynamic pages (JSP pages), because Nginx serves static pages more efficiently than Tomcat does.
First step: We need to configure the Nginx file
#vi /usr/local/nginx/conf/nginx.conf
#user  nobody;
worker_processes  1;
error_log  logs/error.log;
pid        logs/nginx.pid;
events {
    use epoll;
    worker_connections  1024;
}
http {
    include       mime.types;
    default_type  application/octet-stream;
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';
    access_log  logs/access.log  main;
    sendfile        on;
    keepalive_timeout  65;
    gzip on;
    gzip_min_length 1k;
    gzip_buffers 4 16k;
    gzip_http_version 1.0;
    gzip_comp_level 2;
    gzip_types text/plain application/x-javascript text/css application/xml;
    gzip_vary on;
    server {
        listen       80 default;
        server_name  localhost;
        # Static pages (html, images, js, css, etc.) are handled by Nginx
        location ~ .*\.(html|htm|gif|jpg|jpeg|bmp|png|ico|txt|js|css)$ {
            root /usr/tomcat/apache-tomcat-8081/webapps/ROOT;
            expires 30d;    # Cache on the client for 30 days
        }
        error_page  404              /404.html;
        # Redirect server error pages to the static page /50x.html
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }
        # All dynamic requests (.jsp, .do) are handed over to Tomcat
        location ~ \.(jsp|do)$ {
            proxy_pass http://192.168.74.129:8081;    # Requests ending in .jsp or .do go to Tomcat
            proxy_redirect off;
            proxy_set_header Host $host;
            # Lets the backend web server obtain the user's real IP via X-Forwarded-For
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            client_max_body_size 10m;        # Maximum size of a single file in a client request
            client_body_buffer_size 128k;    # Buffer size for client request bodies
            proxy_connect_timeout 90;        # Timeout for the connection between Nginx and the backend server
            proxy_read_timeout 90;           # Timeout waiting for the backend's response after the connection succeeds
            proxy_buffer_size 4k;            # Buffer size for the response headers from the backend
            proxy_buffers 6 32k;             # Response buffers; suits pages averaging under 32k
            proxy_busy_buffers_size 64k;     # Buffer size under high load (proxy_buffers * 2)
            proxy_temp_file_write_size 64k;  # Responses larger than this are written to temp files
        }
    }
}
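To make the routing rules concrete, here is an illustrative Python sketch (not part of the deployment) that mimics the decision Nginx makes with the two location regexes from the configuration above:

```python
import re

# Regexes mirroring the two location blocks in the nginx.conf above
STATIC_RE = re.compile(r'.*\.(html|htm|gif|jpg|jpeg|bmp|png|ico|txt|js|css)$')
DYNAMIC_RE = re.compile(r'.*\.(jsp|do)$')

def route(uri):
    """Return which backend would handle the given request URI."""
    if STATIC_RE.match(uri):
        return "nginx (static file from webapps/ROOT)"
    if DYNAMIC_RE.match(uri):
        return "tomcat (proxied to 192.168.74.129:8081)"
    return "nginx default handling"

print(route("/index.html"))  # static page, served by Nginx directly
print(route("/login.do"))    # dynamic request, proxied to Tomcat
```

The same URI never matches both branches, since the two extension lists are disjoint; anything else falls through to Nginx's default handling.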
Second step: Create a new static page index.html under Tomcat's webapps/ROOT directory, as shown in the figure:
Third step: Start the Nginx service
#sbin/nginx, as shown in the figure:
Fourth step: We access the page http://192.168.74.129/index.html and it displays the content normally, as shown in the figure:
Fifth step: Test how Nginx and Tomcat perform when serving static pages under high concurrency.
We use the Linux ab (ApacheBench) stress-testing command to measure the performance.
1.Test the performance of Nginx in handling static pages
ab -c 100 -n 1000 http://192.168.74.129/index.html
This means 100 concurrent requests at a time, for a total of 1000 requests to the index.html file, as shown in the figure:
2.Test the performance of Tomcat in handling static pages
ab -c 100 -n 1000 http://192.168.74.129:8081/index.html
This means 100 concurrent requests at a time, for a total of 1000 requests to the index.html file, as shown in the figure:
Serving the same static file, Nginx performs better than Tomcat: Nginx handles 5388 requests per second, while Tomcat handles only 2609.
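Using the figures above, and assuming ab reported them as requests per second, the rough comparison works out as follows:

```python
nginx_rps = 5388   # requests per second served by Nginx (figure above)
tomcat_rps = 2609  # requests per second served by Tomcat (figure above)

# ab computes requests/sec as completed requests divided by total test time,
# so 1000 requests at these rates take roughly:
nginx_time = 1000 / nginx_rps    # ~0.186 s
tomcat_time = 1000 / tomcat_rps  # ~0.383 s

speedup = nginx_rps / tomcat_rps
print(f"Nginx is about {speedup:.2f}x faster on this static page")
```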
Summary: In the Nginx configuration file, we configure static requests to be handled by Nginx and dynamic requests to be handled by Tomcat, which provides better performance.
Part IV: Nginx + Tomcat Load Balancing and Fault Tolerance
Under high concurrency, to improve server performance we adopt cluster deployment, which reduces the concurrent pressure on any single server. It also provides fault tolerance: if one server fails, the service remains accessible.
Step 1: We deploy two Tomcat servers here, at 192.168.74.129:8081 and 192.168.74.129:8082.
Step 2: Nginx acts as the proxy server. When clients request the service, load balancing distributes their requests evenly across the servers, reducing the pressure on each one. Configure the nginx.conf file of Nginx.
#vi /usr/local/nginx/conf/nginx.conf
#user  nobody;
worker_processes  1;
error_log  logs/error.log;
pid        logs/nginx.pid;
events {
    use epoll;
    worker_connections  1024;
}
http {
    include       mime.types;
    default_type  application/octet-stream;
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';
    access_log  logs/access.log  main;
    sendfile        on;
    keepalive_timeout  65;
    gzip on;
    gzip_min_length 1k;
    gzip_buffers 4 16k;
    gzip_http_version 1.0;
    gzip_comp_level 2;
    gzip_types text/plain application/x-javascript text/css application/xml;
    gzip_vary on;
    # Backend Tomcat cluster; ip_hash keeps each client on the same server
    upstream localhost_server {
        ip_hash;
        server 192.168.74.129:8081;
        server 192.168.74.129:8082;
    }
    server {
        listen       80 default;
        server_name  localhost;
        # Static pages (html, images, js, css, etc.) are handled by Nginx
        location ~ .*\.(html|htm|gif|jpg|jpeg|bmp|png|ico|txt|js|css)$ {
            root /usr/tomcat/apache-tomcat-8081/webapps/ROOT;
            expires 30d;    # Cache on the client for 30 days
        }
        error_page  404              /404.html;
        # Redirect server error pages to the static page /50x.html
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }
        # All dynamic requests (.jsp, .do) are handed over to the Tomcat cluster
        location ~ \.(jsp|do)$ {
            proxy_pass http://localhost_server;    # Requests ending in .jsp or .do go to the upstream group
            proxy_redirect off;
            proxy_set_header Host $host;
            # Lets the backend web server obtain the user's real IP via X-Forwarded-For
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            client_max_body_size 10m;        # Maximum size of a single file in a client request
            client_body_buffer_size 128k;    # Buffer size for client request bodies
            proxy_connect_timeout 90;        # Timeout for the connection between Nginx and the backend server
            proxy_read_timeout 90;           # Timeout waiting for the backend's response after the connection succeeds
            proxy_buffer_size 4k;            # Buffer size for the response headers from the backend
            proxy_buffers 6 32k;             # Response buffers; suits pages averaging under 32k
            proxy_busy_buffers_size 64k;     # Buffer size under high load (proxy_buffers * 2)
            proxy_temp_file_write_size 64k;  # Responses larger than this are written to temp files
        }
    }
}
Note:
1. Each upstream server entry gives the IP (or domain name) and port of a backend server, optionally followed by parameters:
1) weight: sets the forwarding weight of the server; the default value is 1.
2) max_fails: used together with fail_timeout; if forwarding to the server fails more than max_fails times within the fail_timeout period, the server is marked unavailable. The default value of max_fails is 1.
3) fail_timeout: the time window within which max_fails failed attempts mark the server as unavailable.
4) down: marks this server as unavailable.
5) backup: the ip_hash setting does not apply to a backup server; requests are forwarded to it only after all non-backup servers have failed.
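Taken together, these parameters might be combined in an upstream block like the following sketch (the weights, failure limits, and extra server ports here are illustrative, not part of the original setup; note it uses weighted round-robin, so no ip_hash):

```nginx
upstream localhost_server {
    server 192.168.74.129:8081 weight=2 max_fails=2 fail_timeout=30s;  # preferred, receives ~2x the traffic
    server 192.168.74.129:8082 weight=1;                               # default weight is 1
    server 192.168.74.129:8083 down;                                   # temporarily out of rotation
    server 192.168.74.129:8084 backup;                                 # used only when all others fail
}
```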
2. The ip_hash setting is for cluster servers. If requests from the same client were forwarded to different servers, each server might cache the same session data, wasting resources. With ip_hash, subsequent requests from a client are forwarded to the same server that handled its first request. However, ip_hash cannot be used together with weight.
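The effect of ip_hash can be illustrated with a simplified Python model (Nginx's actual implementation hashes part of the client IPv4 address; this toy version just hashes the whole address string, which preserves the key property of stickiness):

```python
import hashlib

SERVERS = ["192.168.74.129:8081", "192.168.74.129:8082"]

def pick_server(client_ip):
    """Simplified ip_hash: the same client IP always maps to the same backend."""
    digest = hashlib.md5(client_ip.encode()).hexdigest()
    return SERVERS[int(digest, 16) % len(SERVERS)]

# Repeated requests from one client stick to one server, so session
# data cached there is reused instead of duplicated across the cluster.
assert pick_server("10.0.0.7") == pick_server("10.0.0.7")
print(pick_server("10.0.0.7"))
```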
That's all for this article. I hope it is helpful to everyone's learning, and that you will continue to support the Yelling Tutorial.