In the time of one round of Honor of Kings, I learned nginx
2022-04-29 16:58:34【Bulst】
Table of contents
- 1. Introduction to Nginx
- 2. Installation
- 3. The Nginx core configuration file
- 4. Nginx in practice: reverse proxy
- 5. Nginx in practice: load balancing
- 6. Nginx in practice: separating static and dynamic content
- 7. How Nginx works and tuning its parameters
- 8. Building a high-availability nginx cluster
1. Introduction to Nginx
Nginx ("engine x")
is a high-performance HTTP and reverse proxy server. Its hallmarks are a small memory footprint and strong concurrency; in practice Nginx's concurrency compares favorably with other web servers of the same type. Sites in mainland China that use Nginx include Baidu, JD.com, Sina, NetEase, Tencent, and Taobao.
1.1 Web server
Nginx can serve static pages as a web server, and it also supports dynamic languages through the CGI protocol, such as Perl and PHP. It does not run Java directly; Java applications are typically handled by pairing Nginx with Tomcat. Nginx was developed with performance as the primary goal: it focuses on efficiency, stands up well under heavy load, and is reported to support up to 50,000 concurrent connections.
1.2 Reverse proxy
- Forward proxy: proxies the client; the client must be explicitly configured to use it.
- Reverse proxy: proxies the server; the client is unaware that a proxy is involved.
1.3 Load balancing
Nginx's asynchronous framework can handle large numbers of concurrent requests: it holds these requests and distributes them to backend servers (the "service pool", referred to below simply as the backend) for the heavier computation, processing, and response. The benefits of this model are considerable: the business hosts are hidden and therefore more secure, public IP addresses are saved, and when traffic grows it is easy to add more backend servers.
This is where the concept of a cluster comes in. We increase the number of servers and distribute requests among them instead of concentrating all requests on a single machine. Spreading the load across different servers in this way is what we call load balancing.
1.4 Separating static and dynamic content
To speed up page delivery, dynamic and static content can be served by different servers, which accelerates parsing and reduces the pressure on any single server.
Typically Nginx serves the static resources while Tomcat serves the dynamic ones.
2. Installation
2.1 Required packages
pcre-8.37.tar.gz openssl-1.0.1t.tar.gz zlib-1.2.8.tar.gz nginx-1.11.1.tar.gz
2.2 Installation steps
- Install pcre: extract the pcre-xx.tar.gz package, enter the extracted directory, and run ./configure.
  If you are prompted that gcc-c++ is missing, install it first from the Packages directory of the installation CD (/media/CentOSXX/Package):
  rpm -ivh libstdc++-devel-4.4.7-17.el6.x86_64.rpm
  rpm -ivh gcc-c++-4.4.7-17.el6.x86_64.rpm
  After ./configure completes, go back to the pcre directory and run make, then make install.
- Install openssl: extract the openssl-xx.tar.gz package, enter the extracted directory, run ./config, then make && make install.
- Install zlib: extract the zlib-xx.tar.gz package, enter the extracted directory, run ./configure, then make && make install.
- Install nginx: extract the nginx-xx.tar.gz package, enter the extracted directory, run ./configure, then make && make install.
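Taken together, the steps above amount to the following shell sketch (the version numbers match the packages listed in 2.1; adjust the file names to whatever you actually downloaded, and run as root or with sudo):
tar -zxvf pcre-8.37.tar.gz && cd pcre-8.37
./configure && make && make install
cd ..
tar -zxvf openssl-1.0.1t.tar.gz && cd openssl-1.0.1t
./config && make && make install        # note: openssl uses ./config, not ./configure
cd ..
tar -zxvf zlib-1.2.8.tar.gz && cd zlib-1.2.8
./configure && make && make install
cd ..
tar -zxvf nginx-1.11.1.tar.gz && cd nginx-1.11.1
./configure && make && make install     # installs to /usr/local/nginx by default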
- Check which ports and services are currently open:
firewall-cmd --list-all
- Open port 80 (HTTP):
firewall-cmd --add-service=http --permanent
sudo firewall-cmd --add-port=80/tcp --permanent
- Reload the firewall so the changes take effect (on older systems that still use iptables: service iptables restart):
firewall-cmd --reload
2.3 Starting and stopping Nginx
Commands
- Start: in the /usr/local/nginx/sbin directory, run ./nginx
- Stop: in the /usr/local/nginx/sbin directory, run ./nginx -s stop
- Reload the configuration: in the /usr/local/nginx/sbin directory, run ./nginx -s reload
Configuring nginx to start automatically on boot
- Edit the Linux startup script /etc/rc.d/rc.local
- Add the line: /usr/local/nginx/sbin/nginx
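On newer distributions that use systemd, an alternative to editing rc.local is a unit file. The following is only a sketch for a source-built nginx under /usr/local/nginx; the path /etc/systemd/system/nginx.service where it would be saved is an assumption:
[Unit]
Description=nginx built from source
After=network.target

[Service]
Type=forking
ExecStart=/usr/local/nginx/sbin/nginx
ExecReload=/usr/local/nginx/sbin/nginx -s reload
ExecStop=/usr/local/nginx/sbin/nginx -s stop
PIDFile=/usr/local/nginx/logs/nginx.pid

[Install]
WantedBy=multi-user.target
It can then be enabled with systemctl enable nginx and started with systemctl start nginx.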
3. The Nginx core configuration file
In the nginx installation directory, the default configuration files live in the conf subdirectory, and the main configuration file is nginx.conf. Most subsequent use of nginx comes down to editing this file.
worker_processes  1;

events {
    worker_connections  1024;
}

http {
    include       mime.types;
    default_type  application/octet-stream;
    keepalive_timeout  65;

    server {
        listen       80;
        server_name  localhost;

        location / {
            root   html;
            index  index.html index.htm;
        }
    }
}
Based on the file above, the nginx.conf configuration file clearly falls into three parts.
Part one: the global block
This is everything from the start of the file up to the events block. It sets directives that affect the nginx server as a whole, including the user (and group) that runs the server, the number of worker processes allowed, the path of the PID file, the log path and format, and the import of other configuration files.
For example, the first line of the configuration above: worker_processes 1;
This is a key setting for how much concurrent work the Nginx server can do. The larger worker_processes is, the more concurrency nginx can handle, but the useful value is constrained by the hardware and software of the machine.
Part two: the events block
events {
    worker_connections  1024;
}
The directives in the events block mainly affect the network connections between the Nginx server and its users. Common settings include whether to serialize accepting connections across multiple worker processes, whether a worker may accept several new connections at once, which event-driven model is used to handle connection requests, and the maximum number of connections each worker process may hold at the same time.
The example above says that each worker process supports at most 1024 connections.
This part of the configuration has a significant impact on Nginx performance and should be tuned flexibly to the actual workload.
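As a hedged illustration of those settings (the directive names use, multi_accept, accept_mutex, and worker_connections are standard nginx; the values are only examples), an events block that spells them out might look like this:
events {
    use epoll;                # event-driven model (epoll on Linux)
    multi_accept on;          # let a worker accept several new connections at once
    accept_mutex on;          # serialize accepting connections across workers
    worker_connections 1024;  # maximum simultaneous connections per worker
}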
Part three: the http block
http {
    include       mime.types;
    default_type  application/octet-stream;
    keepalive_timeout  65;

    server {
        listen       80;
        server_name  localhost;

        location / {
            root   html;
            index  index.html index.htm;
        }
    }
}
This is the part of the Nginx configuration that is edited most often. Most features, such as proxying, caching, and log definitions, as well as the configuration of third-party modules, live here.
Note that the http block itself contains an http global block and one or more server blocks.
The http global block
Directives in the http global block cover file imports, MIME type definitions, log customization, connection timeouts, the maximum number of requests per connection, and so on.
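As a hedged sketch of those http-level directives (all directive names are standard nginx; the values and the log format are only examples):
http {
    include       mime.types;                 # file import: MIME type table
    default_type  application/octet-stream;   # fallback MIME type

    # log customization
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" $status';
    access_log  logs/access.log  main;

    keepalive_timeout   65;    # connection timeout in seconds
    keepalive_requests  100;   # maximum requests served over one keep-alive connection

    # server blocks follow here
}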
The server block
This is closely tied to virtual hosts. From the user's point of view a virtual host behaves exactly like a standalone physical host; the technology exists to save the hardware cost of Internet servers.
Each http block can contain multiple server blocks, and each server block is equivalent to one virtual host.
Each server block in turn consists of a global server block and can contain multiple location blocks.
- Global server block
The most common settings here are the virtual host's listening configuration and its name or IP configuration.
- location block
A server block can contain multiple location blocks.
The main job of a location block is to take the request string received by the Nginx server (for example server_name/uri-string) and match the part that follows the virtual host name (or IP alias), i.e. the /uri-string part mentioned above, in order to route specific requests. Features such as address redirection, data caching, and response control, as well as many third-party modules, are configured here.
4. Nginx in practice: reverse proxy
The example configuration is as follows:
server {
    listen       80;
    server_name  localhost;

    location / {
        proxy_pass http://localhost:8001;
    }
    location ~ /demo1 {
        proxy_pass http://localhost:8001;
    }
    location ~ /demo2 {
        proxy_pass http://localhost:8002;
    }
}
The location directive
This directive matches URLs. The syntax is:
location [ = | ~ | ~* | ^~ ] uri {
}
- = : placed before a URI that the request string must match exactly; if the match succeeds, the search stops and the request is handled immediately.
- ~ : indicates that the URI contains a regular expression, matched case-sensitively.
- ~* : indicates that the URI contains a regular expression, matched case-insensitively.
- ^~ : placed before a non-regex URI; once Nginx finds that this URI is the best prefix match for the request string, it uses this location to handle the request and no longer tests the regular-expression locations in the block against the request string.
Note: if the URI contains a regular expression, it must be marked with ~ or ~*.
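To make the modifiers concrete, here is a hedged sketch (the paths and the 8001 backend are invented for illustration) with one location of each kind:
server {
    listen 80;

    location = /health {               # = : exact match only for /health
        return 200 "ok";
    }
    location ^~ /static/ {             # ^~ : prefix match; regex locations are not consulted
        root /data/www;
    }
    location ~ \.php$ {                # ~ : regex, case-sensitive
        proxy_pass http://localhost:8001;
    }
    location ~* \.(gif|jpg|png)$ {     # ~* : regex, case-insensitive
        root /data/images;
    }
}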
5. Nginx in practice: load balancing
The example configuration is as follows:
http {
    upstream myserver {
        ip_hash;
        server localhost:8080 weight=1;
        server localhost:8081 weight=1;
    }

    server {
        listen       80;
        server_name  localhost;

        location / {
            proxy_pass http://myserver;
            proxy_connect_timeout 10;
        }
    }
}
On Linux, Nginx, LVS, HAProxy, and other services can all provide load balancing. Nginx supports several distribution strategies:
Round robin (the default)
Each request is assigned to the backend servers one by one in order of arrival; if a backend server goes down, it is removed automatically.
weight
weight sets the server's weight; the default is 1. The higher the weight, the more requests the server receives: the polling probability is proportional to the weight. This is used when the backend servers have uneven capacity.
ip_hash
Each request is assigned according to a hash of the client IP, so a given visitor always reaches the same backend server. This solves the session stickiness problem.
fair (third-party)
Requests are assigned according to the response time of the backend servers; servers with shorter response times are preferred.
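A hedged sketch of what each strategy looks like in an upstream block (the server addresses are placeholders, and the fair directive requires the third-party upstream-fair module to be compiled in):
# weighted round robin: 8080 receives roughly twice the traffic of 8081
upstream weighted_pool {
    server localhost:8080 weight=2;
    server localhost:8081 weight=1;
}

# ip_hash: the same client IP always lands on the same backend
upstream sticky_pool {
    ip_hash;
    server localhost:8080;
    server localhost:8081;
}

# fair (third-party): prefer the backend with the shortest response time
upstream fair_pool {
    fair;
    server localhost:8080;
    server localhost:8081;
}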
6. Nginx in practice: separating static and dynamic content
In terms of how it is implemented today, static/dynamic separation roughly takes one of two forms:
- One is to put static files under a separate domain name on a dedicated server; this is the mainstream approach today.
- The other is to publish dynamic and static files mixed together and have nginx separate them.
Different request suffixes are routed differently via location blocks. With the expires parameter you can set a browser cache expiry time, reducing the requests and traffic that reach the server. Specifically, Expires gives a resource an expiry date, so the browser can check expiry by itself without asking the server, and no extra traffic is generated. This suits resources that change infrequently (it is not recommended for frequently updated files). Here I set it to 3d, meaning: within 3 days, when this URL is visited again, the browser sends a request and the file's last modification time is compared with the server's; if it has not changed, the file is not fetched again and status code 304 is returned; if it has been modified, it is downloaded from the server again with status code 200.
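A hedged configuration sketch of the second approach (the /data paths and the 8080 backend are assumptions for illustration): suffix-matched static files are served directly by nginx with a 3-day expires, and everything else is proxied to the dynamic backend.
server {
    listen       80;
    server_name  localhost;

    # static resources: served by nginx, browser-cached for 3 days
    location ~ .*\.(html|css|js|gif|jpg|jpeg|png)$ {
        root    /data/static;
        expires 3d;
    }

    # everything else is treated as dynamic and proxied to tomcat
    location / {
        proxy_pass http://localhost:8080;
    }
}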
7. How Nginx works and tuning its parameters
Benefits of the master-workers model
First, each worker is an independent process, so no locking is needed and the cost of locks is saved; this also makes programming and troubleshooting easier. Second, because the processes are independent they do not affect one another: when one process exits, the others keep working, the service is not interrupted, and the master process quickly starts a new worker. Of course, an abnormal worker exit means there is a bug in the program; such an exit causes all requests on the current worker to fail, but it does not affect all requests, so the overall risk is reduced.
How many workers should be configured
Like Redis, Nginx uses an I/O multiplexing mechanism. Each worker is an independent process with a single main thread that handles requests asynchronously and without blocking, so even tens of thousands of requests are not a problem. Each worker can drive one CPU to its limit, so setting the number of workers equal to the number of server CPUs is most appropriate: fewer wastes CPU, more causes the overhead of frequent context switching.
# Set the number of workers.
worker_processes 4;
# Bind workers to CPUs (4 workers bound to 4 CPUs).
worker_cpu_affinity 0001 0010 0100 1000;
# Bind workers to CPUs (4 workers bound to 4 of 8 CPUs).
worker_cpu_affinity 00000001 00000010 00000100 00001000;
The number of connections: worker_connections
This value is the maximum number of connections each worker process can establish, so the maximum number of connections one nginx instance can hold is worker_connections * worker_processes. That is the raw maximum: for HTTP requests to local resources, the maximum supported concurrency is worker_connections * worker_processes, and since browsers supporting HTTP/1.1 need two connections per visit, the maximum concurrency for ordinary static access is worker_connections * worker_processes / 2. When nginx acts as an HTTP reverse proxy, the maximum concurrency is worker_connections * worker_processes / 4, because as a reverse proxy each concurrent request establishes a connection to the client and a connection to the backend service, occupying two connections.
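As a worked example with the settings shown above (worker_processes 4, worker_connections 1024): the instance can hold at most 4 * 1024 = 4096 connections; ordinary static access then supports roughly 4096 / 2 = 2048 concurrent clients, and reverse-proxy traffic roughly 4096 / 4 = 1024 concurrent requests.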
8. Building a high-availability nginx cluster
Note: this part is more advanced; the following topics will be filled in over the coming days.
8.1 Keepalived + Nginx high-availability cluster (master-slave mode)
8.2 Keepalived + Nginx high-availability cluster (dual-master mode)
Copyright notice
Author: [Bulst]. Please include the original link when reprinting, thank you.
https://en.qdmana.com/2022/119/202204291505416789.html