
A king's time, I learned nginx

2022-04-29 16:58:34 · Bulst


1. Introduction to Nginx

Nginx ("engine x") is a high-performance HTTP and reverse proxy server, known for its low memory footprint and strong concurrency. In practice, nginx's concurrent performance is indeed among the best of web servers of its type. Major sites in mainland China that use nginx include Baidu, JD.com, Sina, NetEase, Tencent, and Taobao.

1.1 Web Server

Nginx can serve static pages as a web server, and it also supports dynamic languages over the CGI protocol, such as Perl and PHP. It does not support Java directly; Java applications are typically served by pairing Nginx with Tomcat. Nginx was developed with performance as the primary design goal: it focuses on efficiency, withstands high load well, and is reported to support up to 50,000 concurrent connections.

1.2 Reverse Proxy

  1. Forward proxy: proxies the client; the client must be configured to use the proxy.

  2. Reverse proxy: proxies the server; the client is unaware that a proxy is involved.

1.3 Load Balancing

Nginx's asynchronous architecture can handle a large number of concurrent requests. It accepts these requests and dispatches them to backend servers (the service pool, hereafter "the backend") for the actual computation, processing, and response. The benefits of this model are considerable: the business hosts stay hidden and are therefore more secure, public IP addresses are saved, and backend servers can easily be added as traffic grows.

This is where the concept of a cluster comes in: we add servers and distribute incoming requests across them instead of concentrating them on a single machine. Spreading the load over multiple servers is what we call load balancing.

1.4 Static/Dynamic Separation

To speed up page delivery, dynamic and static content can be served by different servers. This accelerates parsing and reduces the load on what would otherwise be a single server.

Nginx serves the static resources; Tomcat serves the dynamic resources.

2. Installation

Download the source packages from the Nginx official website.

2.1 Related Installation Packages

  • pcre-8.37.tar.gz
  • openssl-1.0.1t.tar.gz
  • zlib-1.2.8.tar.gz
  • nginx-1.11.1.tar.gz

2.2 Installation Process

  1. Install pcre: extract the pcre-xx.tar.gz package.
    Enter the extracted directory and run ./configure.
    If you are prompted that the compiler is missing, install gcc-c++ first from the Package directory of the installation CD (/media/CentOSXX/Package):
    rpm -ivh libstdc++-devel-4.4.7-17.el6.x86_64.rpm
    rpm -ivh gcc-c++-4.4.7-17.el6.x86_64.rpm
    After ./configure completes, go back to the pcre directory and run make, then make install.
  2. Install openssl: extract the openssl-xx.tar.gz package.
    Enter the extracted directory and run ./config.
    make && make install
  3. Install zlib: extract the zlib-xx.tar.gz package.
    Enter the extracted directory and run ./configure.
    make && make install
  4. Install nginx: extract the nginx-xx.tar.gz package.
    Enter the extracted directory and run ./configure.
    make && make install
  • Check the open ports:
    firewall-cmd --list-all
  • Open the port:
    firewall-cmd --add-service=http --permanent
    sudo firewall-cmd --add-port=80/tcp --permanent
  • Reload the firewall rules (or restart iptables on older systems):
    firewall-cmd --reload

2.3 Starting Nginx

Commands

  1. Start: run ./nginx in the /usr/local/nginx/sbin directory
  2. Stop: run ./nginx -s stop in the /usr/local/nginx/sbin directory
  3. Reload: run ./nginx -s reload in the /usr/local/nginx/sbin directory

To make nginx start automatically at boot:

  1. Edit the Linux startup script /etc/rc.d/rc.local
  2. Add the line: /usr/local/nginx/sbin/nginx
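On distributions that use systemd, a unit file is the more common way to achieve the same effect. Below is a minimal sketch, assuming the /usr/local/nginx install location from section 2.2 (adjust the paths to your build):

```
[Unit]
Description=nginx
After=network.target

[Service]
Type=forking
ExecStart=/usr/local/nginx/sbin/nginx
ExecReload=/usr/local/nginx/sbin/nginx -s reload
ExecStop=/usr/local/nginx/sbin/nginx -s stop

[Install]
WantedBy=multi-user.target
```

Save it as /etc/systemd/system/nginx.service, then run systemctl enable nginx to register the autostart.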

3. Nginx Core Configuration File

In the nginx installation directory, the default configuration files live in the conf subdirectory, with the main configuration file nginx.conf among them. Most subsequent use of nginx comes down to editing this file.


worker_processes  1;

events {
    worker_connections  1024;
}

http {
    include       mime.types;
    default_type  application/octet-stream;
    keepalive_timeout  65;

    server {
        listen       80;
        server_name  localhost;

        location / {
            root   html;
            index  index.html index.htm;
        }
    }
}

From the file above, we can clearly see that nginx.conf divides into three parts.

Part 1: the global block

This covers everything from the start of the file up to the events block. It sets directives that affect the overall operation of the nginx server: the user (and group) that runs it, the number of worker processes allowed, the path of the PID file, log paths and formats, and the inclusion of other configuration files.

For example, the first line of the configuration above: worker_processes 1;

This is a key setting for Nginx's concurrent request handling: the larger worker_processes is, the more concurrency the server can support, although the value is constrained by the available hardware and software.

Part 2: the events block

events {
    worker_connections  1024;
}

The events block contains directives that affect the network connections between the Nginx server and its users. Common settings include whether to serialize the acceptance of connections across multiple worker processes, whether a worker may accept multiple new connections at once, which event-driven model to use for handling connection requests, and the maximum number of simultaneous connections each worker process supports.

The example above states that each worker process supports at most 1024 connections.

This part of the configuration has a significant impact on Nginx's performance and should be tuned flexibly to the actual workload.

Part 3: the http block

http {
    include       mime.types;
    default_type  application/octet-stream;
    keepalive_timeout  65;

    server {
        listen       80;
        server_name  localhost;

        location / {
            root   html;
            index  index.html index.htm;
        }
    }
}

This is the most frequently edited part of the Nginx configuration: proxying, caching, log definitions, and most third-party module configuration all live here.

Note that the http block itself contains an http global block and one or more server blocks.

http global block
Directives in the http global block cover file inclusion, MIME-type definitions, log formats, connection timeouts, the maximum number of requests per connection, and so on.

server block
This is closely tied to virtual hosts. From the user's point of view, a virtual host behaves exactly like a standalone physical host; the technology exists to save the hardware cost of running Internet servers.

Each http block can contain multiple server blocks, and each server block is equivalent to one virtual host.

Each server block in turn consists of a global server section and can contain multiple location blocks.

  1. Global server section
    The most common settings here are the virtual host's listening port and its name or IP address.
  2. location block
    A server block may contain multiple location blocks.
    Their main job is to match the part of the request string beyond the host name (for example, the /uri-string part of server_name/uri-string, where the name may also be an IP alias) and process matching requests accordingly. Request routing, data caching, and response control are handled here, as is the configuration of many third-party modules.

4. Nginx in Practice: Reverse Proxy

The example configuration is as follows:

    server {
        listen       80;
        server_name  localhost;

        location / {
            proxy_pass http://localhost:8001;
        }
        location ~ /demo1 {
            proxy_pass http://localhost:8001;
        }
        location ~ /demo2 {
            proxy_pass http://localhost:8002;
        }
    }

The location directive
This directive matches URLs. Its syntax is:

location [ = | ~ | ~* | ^~ ] uri {
    ...
}
  1. = : placed before a uri that the request string must match exactly. On a successful
    match, the search stops and the request is processed immediately.
  2. ~ : indicates that the uri contains a regular expression, matched case-sensitively.
  3. ~* : indicates that the uri contains a regular expression, matched case-insensitively.
  4. ^~ : placed before a uri; once Nginx finds the location whose uri has the longest
    prefix match with the request string, it uses that location to process the request
    and no longer checks the regular-expression locations in the block.

Note: if the uri contains a regular expression, it must be marked with ~ or ~*.
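A small sketch of how the modifiers interact; the paths and ports here are illustrative, not from the article:

```
# = matches only the exact URI /ping.
location = /ping { return 200 "pong"; }

# ^~ wins on longest prefix and skips the regex locations for /static/... requests.
location ^~ /static/ { root /data; }

# ~ is a case-sensitive regex; ~* would also match .PHP, .Php, etc.
location ~ \.php$ { proxy_pass http://localhost:9000; }

# plain prefix match, used when nothing more specific applies.
location / { root html; }
```

A request for /static/app.php is handled by the ^~ block, not the regex block, which is exactly the behavior described in point 4 above.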

5. Nginx in Practice: Load Balancing

The example configuration is as follows:

http {
    upstream myserver {
        ip_hash;
        server localhost:8080 weight=1;
        server localhost:8081 weight=1;
    }

    server {
        listen       80;
        server_name  localhost;

        location / {
            proxy_pass http://myserver;
            proxy_connect_timeout 10;
        }
    }
}

On Linux, Nginx, LVS, HAProxy, and other services can all provide load balancing, and Nginx supports several distribution strategies:

Round robin (default)
Requests are distributed to the backend servers one by one in order; if a backend server goes down, it is removed automatically.

weight
weight sets a server's weight; the default is 1. The higher the weight, the more requests the server receives: the polling probability is proportional to the weight. This is useful when the backend servers have uneven capacity.

ip_hash
Each request is assigned according to a hash of the client IP, so each visitor always reaches the same backend server. This solves the session-stickiness problem.

fair (third party)
Requests are assigned according to the backend servers' response times; servers with shorter response times are preferred.
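The four strategies differ only in the upstream block. A sketch of each, using the same hypothetical backends as the example above (fair requires the third-party nginx-upstream-fair module to be compiled in):

```
# round robin: the default, no directive needed
upstream pool_rr {
    server localhost:8080;
    server localhost:8081;
}

# weight: 8080 receives roughly twice as many requests as 8081
upstream pool_weight {
    server localhost:8080 weight=2;
    server localhost:8081 weight=1;
}

# ip_hash: a given client IP always lands on the same backend
upstream pool_iphash {
    ip_hash;
    server localhost:8080;
    server localhost:8081;
}

# fair: third-party module, shortest response time first
upstream pool_fair {
    fair;
    server localhost:8080;
    server localhost:8081;
}
```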

6. Nginx in Practice: Static/Dynamic Separation

In current practice, static/dynamic separation comes in roughly two forms:

  • One is to move static files to a separate domain name on a separate server; this is the mainstream scheme today.
  • The other is to publish dynamic and static files together and let nginx separate them.

Use location to forward requests differently by file suffix, and use the expires directive to set a browser-side cache expiry, reducing the requests and traffic that hit the server. Concretely, Expires assigns a resource an expiration time, so the browser can check for expiry by itself without consulting the server, which costs no extra traffic. This suits resources that rarely change (for frequently updated files, Expires caching is not recommended). Here I set it to 3d: within 3 days, a visit to this URL sends a conditional request; if the file's last-modified time is unchanged compared with the server's copy, the file is not re-fetched and status 304 is returned; if it has been modified, the file is downloaded from the server again with status 200.
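A minimal sketch of the second scheme; the root path and backend port are illustrative assumptions, not values from the article:

```
server {
    listen       80;
    server_name  localhost;

    # Static resources are served by nginx directly, with the 3-day
    # browser cache described above.
    location ~* \.(html|css|js|jpg|png|gif)$ {
        root    /data/static;
        expires 3d;
    }

    # Everything else is treated as dynamic and forwarded to Tomcat.
    location / {
        proxy_pass http://localhost:8080;
    }
}
```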

7. Nginx Internals and Parameter Tuning

Benefits of the master-workers mechanism
First, each worker is an independent process, so no locking is needed; this saves the overhead of locks and makes both programming and debugging easier. Second, because the processes are independent, they cannot affect one another: if one process exits, the others keep working and the service is not interrupted, and the master process quickly starts a new worker. Of course, an abnormal worker exit means a program bug; it causes all requests on that worker to fail, but not all requests overall, so the risk is reduced.

How many workers should be configured?
Like Redis, Nginx uses an I/O multiplexing mechanism. Each worker is an independent process with a single main thread that handles requests asynchronously and without blocking, so even tens of thousands of requests pose no problem. Each worker can drive one CPU to its limit, so setting the number of workers equal to the server's CPU count is best: fewer wastes CPU, while more incurs the cost of frequent context switching.

# Set the number of workers.
worker_processes 4;
# Bind workers to CPUs (4 workers bound to 4 CPUs).
worker_cpu_affinity 0001 0010 0100 1000;
# Bind workers to CPUs (4 workers bound to 4 of 8 CPUs).
worker_cpu_affinity 00000001 00000010 00000100 00001000;

The number of connections: worker_connections
This value is the maximum number of connections each worker process can establish, so the maximum number of connections one nginx instance can hold is worker_connections * worker_processes. That is the total connection limit; for HTTP requests to local resources, the maximum supported concurrency is worker_connections * worker_processes. Since HTTP/1.1 browsers typically use two connections per visit, the maximum concurrency for ordinary static access is worker_connections * worker_processes / 2. If nginx acts as an HTTP reverse proxy, the maximum concurrency is worker_connections * worker_processes / 4, because as a reverse proxy each concurrent request occupies two connections: one to the client and one to the backend service.
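A quick back-of-the-envelope check of the formulas above, plugging in the example values worker_processes 4 and worker_connections 1024:

```python
worker_processes = 4
worker_connections = 1024

# Maximum total connections one nginx instance can hold.
max_connections = worker_connections * worker_processes

# Ordinary static access over HTTP/1.1: ~2 connections per visit.
max_static_concurrency = max_connections // 2

# Reverse proxying: each request holds a client-side and a backend-side connection.
max_proxy_concurrency = max_connections // 4

print(max_connections, max_static_concurrency, max_proxy_concurrency)
# prints: 4096 2048 1024
```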

8. Building a High-Availability Nginx Cluster

Note: this is a more advanced topic; the following points will be filled in over the coming days.

8.1 Keepalived + Nginx High-Availability Cluster (Master-Slave Mode)


8.2 Keepalived + Nginx High-Availability Cluster (Dual-Master Mode)


Copyright notice
Author: Bulst. Please include the original link when reprinting, thank you.
https://en.qdmana.com/2022/119/202204291505416789.html
