
fasthttp: a Go framework ten times faster than net/http (server side)

2022-06-24 08:16:51luozhiyun

Please credit the source when reprinting. This article was originally published on luozhiyun's blog:

In the previous article we covered how the Go standard library's net/http is implemented. This time I came across fasthttp, a Go framework that claims to be ten times faster than net/http, so let's see what excellent design choices make it worth studying.

A typical HTTP service looks like this:


The standard HTTP service model has two ends: the client and the server. An HTTP request originates from the client; the server receives it, processes it, and returns a response. The HTTP server's job is therefore to accept requests from clients and send responses back to them.

This article covers the server-side implementation.

Implementation principles

Comparing the implementations of net/http and fasthttp

As we said when discussing net/http, its processing flow looks like this:

  1. Handlers are registered into a hash table so they can be matched to routes by key;
  2. Once registration is done, the server listens in a loop, creating one goroutine for every accepted connection;
  3. Each goroutine loops waiting for request data, matches the request path against the handler routing table, and dispatches the request to the matching handler.

This is fine when the number of connections is small, but when there are very many connections, creating one goroutine per connection puts real pressure on the system. This is the bottleneck that limits net/http under high concurrency.

Now let's take a look at how fasthttp does it:

  1. Start listening;
  2. Loop on the listening port, accepting connections;
  3. For each accepted connection, first try to take a workerChan from the ready queue; if none is available, get one from the object pool;
  4. Pass the accepted connection into the workerChan's channel;
  5. Each workerChan has a goroutine that loops reading from the channel; when it receives a connection, it processes the request and writes the response.

The workerChan mentioned above is in effect a connection-processing object: it holds a channel used to hand over connections, and each workerChan has a background goroutine that loops pulling connections out of that channel and processing them. If the maximum concurrency is not set, it defaults to 256 * 1024 workers. This keeps the service responsive even under very high concurrency.

Beyond that, fasthttp reuses large numbers of objects through sync.Pool to cut down on memory allocations, for example:

workerChanPool, ctxPool, readerPool, writerPool and others: more than 30 sync.Pool instances in all.

In addition to reusing objects, fasthttp also reuses slices: idioms such as s = s[:0] and s = append(s[:0], b...) avoid re-creating slices.

Because fasthttp has to work with strings in many places, it also works hard to avoid the memory allocation and copying incurred when converting between []byte and string.


To sum up, fasthttp improves performance mainly by:

  1. Capping the number of concurrent worker goroutines, 256 * 1024 by default;
  2. Using sync.Pool to reuse objects and slices, reducing memory allocations;
  3. Avoiding the allocation and copy cost of []byte-to-string conversions wherever possible.

Source code walkthrough

Let's start with a simple example:

func main() {
	if err := fasthttp.ListenAndServe(":8088", requestHandler); err != nil {
		log.Fatalf("Error in ListenAndServe: %s", err)
	}
}

func requestHandler(ctx *fasthttp.RequestCtx) {
	fmt.Fprintf(ctx, "Hello, world!\n\n")
}

Calling the ListenAndServe function starts the listener and waits for requests to process. ListenAndServe actually calls the Server's ListenAndServe method, so let's first look at the fields of the Server struct:


Listed above are the commonly used fields of the Server struct, including: the request handler, server name, request read timeout, response write timeout, the maximum number of requests per connection, and so on. There are many other parameters that let you control the server's behavior along various dimensions.
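As a sketch, tuning those fields looks roughly like this. The field names are real fasthttp.Server fields, but the values are only illustrative, and requestHandler is the handler from the example above:

```go
// Illustrative configuration; values are examples, not recommendations.
s := &fasthttp.Server{
	Handler:            requestHandler,
	Name:               "my-server",      // sent back in the Server response header
	ReadTimeout:        10 * time.Second, // request read timeout
	WriteTimeout:       10 * time.Second, // response write timeout
	MaxRequestsPerConn: 1000,             // max requests per keep-alive connection
	Concurrency:        256 * 1024,       // max concurrent connections (the default)
}
log.Fatal(s.ListenAndServe(":8088"))
```

Constructing a Server directly like this, instead of calling the package-level fasthttp.ListenAndServe helper, is how you reach these knobs.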

The Server's ListenAndServe method obtains a TCP listener and then calls the Serve method to run the server's main logic.


The Serve method mainly does the following:

  1. Initialize and start the worker pool;
  2. Accept incoming connections;
  3. Hand each connection to the worker pool for processing.
func (s *Server) Serve(ln net.Listener) error {
	var lastPerIPErrorTime time.Time
	var c net.Conn
	var err error

	// Initialize the worker pool
	wp := &workerPool{
		WorkerFunc:      s.serveConn,
		MaxWorkersCount: maxWorkersCount,
		LogAllErrors:    s.LogAllErrors,
		Logger:          s.logger(),
		connState:       s.setState,
	}
	// Start the worker pool
	wp.Start()

	// Process connections in a loop
	for {
		// Accept a connection
		if c, err = acceptConn(s, ln, &lastPerIPErrorTime); err != nil {
			if err == io.EOF {
				return nil
			}
			return err
		}
		s.setState(c, StateNew)
		atomic.AddInt32(&s.open, 1)
		// Hand the connection to the worker pool
		if !wp.Serve(c) {
			// Reaching this branch means the concurrency limit has been hit
		}
		c = nil
	}
}

The worker pool

The worker pool handles all incoming connections. Here is a quick look at the workerPool struct's fields:

  • WorkerFunc: the function that matches the request to a handler and executes it;
  • MaxWorkersCount: the maximum number of requests processed concurrently;
  • ready: the queue of idle workerChans;
  • workerChanPool: the workerChan object pool, of type sync.Pool;
  • workersCount: the number of requests currently being processed.

Now let's look at the workerPool's Start method:

func (wp *workerPool) Start() {
	if wp.stopCh != nil {
		panic("BUG: workerPool already started")
	}
	wp.stopCh = make(chan struct{})
	stopCh := wp.stopCh
	// Set the worker pool's creation function
	wp.workerChanPool.New = func() interface{} {
		return &workerChan{
			ch: make(chan net.Conn, workerChanCap),
		}
	}
	go func() {
		var scratch []*workerChan
		for {
			// Periodically clean up workerChans that have been idle too long
			wp.clean(&scratch)
			select {
			case <-stopCh:
				return
			default:
				// The default interval is 10s
				time.Sleep(wp.getMaxIdleWorkerDuration())
			}
		}
	}()
}

Start mainly does two things:

  1. Sets the workerChanPool's creation (New) function;
  2. Starts a goroutine that periodically cleans up workerChans that have sat idle in the workerPool's ready queue for too long, running once every 10s by default.

Accepting connections

func acceptConn(s *Server, ln net.Listener, lastPerIPErrorTime *time.Time) (net.Conn, error) {
	for {
		c, err := ln.Accept()
		if err != nil {
			if c != nil {
				panic("BUG: net.Listener returned non-nil conn and non-nil error")
			}
			return nil, io.EOF
		}
		if c == nil {
			panic("BUG: net.Listener returned (nil, nil)")
		}
		// Check the number of connections for each IP
		if s.MaxConnsPerIP > 0 {
			pic := wrapPerIPConn(s, c)
			if pic == nil {
				if time.Since(*lastPerIPErrorTime) > time.Minute {
					s.logger().Printf("The number of connections from %s exceeds MaxConnsPerIP=%d",
						getConnIP4(c), s.MaxConnsPerIP)
					*lastPerIPErrorTime = time.Now()
				}
				continue
			}
			c = pic
		}
		return c, nil
	}
}

There is nothing special about accepting connections: like the net/http library, it calls the TCPListener's Accept method to obtain a TCP connection.

Handling connections

To handle a connection, the pool first gets a workerChan. The workerChan struct contains two fields, lastUseTime and ch:

type workerChan struct {
	lastUseTime time.Time
	ch          chan net.Conn
}

  • lastUseTime records when the workerChan was last used;
  • ch is the channel used to hand over connections.

Once a connection is obtained, it is passed into the workerChan's channel; each workerChan has an associated goroutine that asynchronously processes the connections in its channel.

Getting a workerChan

func (wp *workerPool) Serve(c net.Conn) bool {
	// Get a workerChan
	ch := wp.getCh()
	if ch == nil {
		return false
	}
	// Put the connection into the channel
	ch.ch <- c
	return true
}

The Serve method gets a workerChan via getCh, then puts the current connection into that workerChan's channel.

func (wp *workerPool) getCh() *workerChan {
	var ch *workerChan
	createWorker := false

	wp.lock.Lock()
	// Try to get a workerChan from the idle queue
	ready := wp.ready
	n := len(ready) - 1
	if n < 0 {
		if wp.workersCount < wp.MaxWorkersCount {
			createWorker = true
			wp.workersCount++
		}
	} else {
		ch = ready[n]
		ready[n] = nil
		wp.ready = ready[:n]
	}
	wp.lock.Unlock()

	// If there was none, get one from the object pool
	if ch == nil {
		if !createWorker {
			return nil
		}
		vch := wp.workerChanPool.Get()
		ch = vch.(*workerChan)
		// Start a goroutine for the new workerChan
		go func() {
			// Process the data in the channel
			wp.workerFunc(ch)
			// When done, put the workerChan back into the object pool
			wp.workerChanPool.Put(vch)
		}()
	}
	return ch
}

getCh first tries the ready queue for an idle workerChan; if there is none, it gets one from the object pool, and any workerChan freshly taken from the object pool has a goroutine started for it to process the data in its channel.

Processing the connection

func (wp *workerPool) workerFunc(ch *workerChan) {
	var c net.Conn

	var err error
	// Consume the connections in the channel
	for c = range ch.ch {
		if c == nil {
			break
		}
		// Read the request data and write the response
		if err = wp.WorkerFunc(c); err != nil && err != errHijacked {
			// Log the error and close the connection (omitted)
		}
		c = nil
		// Put the current workerChan back into the ready queue
		if !wp.release(ch) {
			break
		}
	}
}

The loop pulls connections out of the workerChan's channel and calls the WorkerFunc function to handle each request; once a request is done, the current workerChan is put back into the ready queue for reuse.

Note that the loop exits when it receives a nil connection. This nil is sent into the channel by the workerPool's asynchronous clean method when it finds a workerChan that has been idle for too long.

The WorkerFunc set here is the Server's serveConn method, which parses the request parameters, dispatches to the corresponding handler, and then writes the response. serveConn is fairly long, so we won't walk through it here; interested readers can study it on their own.


We have now analyzed how fasthttp is implemented. From these principles we can see how fasthttp's implementation differs from net/http's, and therefore why it is fast; its implementation details also show concrete techniques for reducing memory allocation and improving performance.


Copyright notice: author luozhiyun. Please include a link to the original when reprinting; thank you.
