
How to configure load balancing for TCP in nginx server

2022-04-29 19:26:39 · Billion cloud speed


Today I'd like to share how to configure TCP load balancing on an Nginx server. The content is detailed and the logic is clear; most people are probably not too familiar with this topic, so I'm sharing this article for your reference. I hope you gain something from reading it. Now let's take a look.

I. Install nginx

1. Download nginx

# wget http://nginx.org/download/nginx-1.2.4.tar.gz

2. Download the TCP module patch

# wget https://github.com/yaoweibin/nginx_tcp_proxy_module/tarball/master

Source page: https://github.com/yaoweibin/nginx_tcp_proxy_module

3. Patch and install nginx

# tar xvf nginx-1.2.4.tar.gz
# tar xvf yaoweibin-nginx_tcp_proxy_module-v0.4-45-ga40c99a.tar.gz
# cd nginx-1.2.4
# patch -p1 < ../yaoweibin-nginx_tcp_proxy_module-a40c99a/tcp.patch
# ./configure --prefix=/usr/local/nginx --with-pcre=../pcre-8.30 --add-module=../yaoweibin-nginx_tcp_proxy_module-ae321fd/
# make
# make install

II. Modify the configuration file

Edit the nginx.conf configuration file:

# cd /usr/local/nginx/conf
# vim nginx.conf

worker_processes 1;

events {
    worker_connections 1024;
}

tcp {
    upstream mssql {
        server 10.0.1.201:1433;
        server 10.0.1.202:1433;
        check interval=3000 rise=2 fall=5 timeout=1000;
    }

    server {
        listen 1433;
        server_name 10.0.1.212;
        proxy_pass mssql;
    }
}
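Note that the tcp {} block above is the syntax of the third-party nginx_tcp_proxy_module. Since nginx 1.9.0, the bundled stream module provides TCP proxying without any patching; a minimal equivalent sketch (the active health-check directive is omitted, since in stock open-source nginx that feature is commercial):

```nginx
# Roughly equivalent configuration using the built-in stream module (nginx >= 1.9.0)
stream {
    upstream mssql {
        server 10.0.1.201:1433;
        server 10.0.1.202:1433;
    }

    server {
        listen 1433;
        proxy_pass mssql;
    }
}
```

On a modern nginx this is usually the better choice, since it needs no third-party patch and is maintained upstream.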

III. Start nginx

# cd /usr/local/nginx/sbin/
# ./nginx

Check that port 1433 is listening:

# lsof -i :1433

IV. Test

# telnet 10.0.1.201 1433
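Besides telnet, the connectivity test can be scripted; a minimal sketch (the port_open helper is hypothetical, and 10.0.1.212 is the proxy address from the config above):

```python
import socket

def port_open(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Probe the nginx listener from the configuration above
print(port_open("10.0.1.212", 1433))
```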

V. Test with a SQL Server client tool


VI. How TCP load balancing works

When nginx receives a new client connection on its listening port, it immediately runs the scheduling algorithm to obtain the IP of the server that should handle it, then creates a new upstream connection to that server.


TCP load balancing supports nginx's original scheduling algorithms, including round robin (the default) and hashing (always choosing the same server for the same key). The scheduler also cooperates with the health-check module, so each connection is routed to a suitable, healthy upstream server. If you use hash-based scheduling keyed on $remote_addr (the client IP), you get simple session persistence: connections from the same client IP always land on the same backend server.
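To make the scheduling concrete, here is a toy Python model of round-robin and IP-hash selection (illustrative only; nginx's real implementation is in C and uses a different hash function):

```python
from hashlib import md5

class Upstream:
    def __init__(self, servers):
        self.servers = servers
        self._next = 0

    def round_robin(self):
        # Default scheduling: hand out servers in rotation.
        server = self.servers[self._next % len(self.servers)]
        self._next += 1
        return server

    def ip_hash(self, remote_addr):
        # Hashing on the client IP gives simple session persistence:
        # the same $remote_addr always maps to the same server.
        digest = int(md5(remote_addr.encode()).hexdigest(), 16)
        return self.servers[digest % len(self.servers)]

pool = Upstream(["10.0.1.201:1433", "10.0.1.202:1433"])
print([pool.round_robin() for _ in range(4)])   # alternates between the two servers
print(pool.ip_hash("192.168.0.7") == pool.ip_hash("192.168.0.7"))  # True: sticky
```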

Like other upstream modules, the TCP stream module supports per-server load-balancing weights (configured with "weight=2"), as well as the backup and down parameters for taking failed upstream servers out of rotation. The max_conns parameter caps the number of TCP connections to a server; set it according to the server's capacity, which provides overload protection in high-concurrency scenarios.
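A sketch of those parameters in upstream configuration (the third server here is hypothetical, and exact directive support depends on the module and nginx version; max_conns arrived in open-source nginx 1.11.5):

```nginx
upstream mssql {
    server 10.0.1.201:1433 weight=2 max_conns=300;        # ~2x the share, capped at 300 connections
    server 10.0.1.202:1433 max_fails=3 fail_timeout=30s;  # kicked out after 3 failures within 30s
    server 10.0.1.203:1433 backup;                        # only used when the primaries are down
}
```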

nginx monitors both the client connection and the upstream connection. As soon as data arrives on one side, nginx reads it and immediately pushes it to the other side, without inspecting the data inside the TCP stream. nginx maintains an in-memory buffer for client and upstream writes; if either side transmits a large amount of data, the buffer grows accordingly.
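The forwarding loop can be sketched in a few lines of Python: a blind byte relay between the client and upstream sockets, with no inspection of the TCP payload (a toy model, not nginx's actual event loop):

```python
import select
import socket

def relay(client: socket.socket, upstream: socket.socket, bufsize: int = 4096) -> None:
    """Shuttle bytes in both directions until either side closes."""
    peer = {client: upstream, upstream: client}
    while True:
        readable, _, _ = select.select(list(peer), [], [])
        for sock in readable:
            data = sock.recv(bufsize)    # read whatever arrived...
            if not data:                 # ...an empty read means that side closed
                return
            peer[sock].sendall(data)     # ...and push it straight to the other side
```

nginx additionally grows its in-memory buffer when one side produces data faster than the other side can consume it.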


nginx closes the connection when it receives a close notification from either side, or when the TCP connection stays idle for longer than the configured proxy_timeout. For long-lived TCP connections, choose a suitable proxy_timeout value and pay attention to the socket's so_keepalive parameter to prevent premature disconnection.
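With the built-in stream module, those two knobs look like this (the values are illustrative, and the patched tcp module used in this article may name its directives differently):

```nginx
server {
    listen 1433 so_keepalive=30m::10;  # TCP keepalive: probe after 30m idle, up to 10 probes
    proxy_timeout 10m;                 # close the session after 10 minutes with no data either way
    proxy_pass mssql;
}
```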

PS: health checking (service robustness monitoring)

The TCP load-balancing module supports built-in health checking: if an upstream server refuses TCP connections for longer than the configured proxy_connect_timeout, it is considered failed. In that case, nginx immediately tries another healthy server in the upstream group. Connection failures are written to nginx's error log.

If a server fails repeatedly (exceeding the max_fails or fail_timeout limits), nginx kicks it out of rotation. 60 seconds after being kicked out, nginx occasionally tries to reconnect to it to check whether it has recovered. If the server is back to normal, nginx adds it back into the upstream group and slowly ramps up its share of connection requests.

The ramp-up is slow because a service usually has "hot data": 80% or more of requests are in practice absorbed by the hot-data cache, and only a few are actually processed by the backend. When a machine has just started, the hot-data cache is not yet populated; forwarding a flood of requests at that moment may well overwhelm the machine and bring it down again. Take MySQL as an example: typically 95% or more of queries are served from the in-memory cache, and only a few are actually executed.

In fact, whether for a single machine or a cluster, restarting or switching over under high concurrent load is risky. There are two main mitigations:

(1) Increase the request volume gradually, from few to many, accumulating hot data until the service reaches its normal state.
(2) Prepare the "commonly used" data in advance, actively "preheating" the cache, and only then open the server to traffic.
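Option (1) is what NGINX Plus exposes as the slow_start server parameter (a commercial feature; in open-source nginx you would have to approximate it externally):

```nginx
upstream mssql {
    # After a recovered server rejoins, ramp its share of traffic up
    # over 30 seconds instead of sending a full share immediately (NGINX Plus only).
    server 10.0.1.201:1433 slow_start=30s;
    server 10.0.1.202:1433 slow_start=30s;
}
```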

The principle of TCP load balancing is the same as that of LVS and similar systems: it works at a lower layer, so its performance is much higher than plain HTTP load balancing. That does not make it better than LVS, however: LVS runs as a kernel module, while nginx works in user space and is comparatively heavyweight. One more point, regrettably: this module is actually a paid feature.


That's all for "How to configure TCP load balancing in an Nginx server". Thank you for reading! I believe you will have gained a lot from this article; for more knowledge like this, please follow the Yisu cloud industry information channel.

Copyright notice
Author: Billion cloud speed. Please include the original link when reprinting, thank you.
https://en.qdmana.com/2022/119/202204291744570868.html
