
Extreme Optimization of Tencent Classroom Mini Program Performance -- Comprehensive Chapter

2021-08-24 01:19:12 Front peak

Introduction | If your mini program also has performance problems, our hands-on experience may give you some ideas: we dug into everything from mini program startup and loading to interaction. Incidentally, this article has been praised by the technical lead for mini programs inside Tencent.

1. Origin

It all started on a pleasant weekend afternoon...

That day my phone suddenly woke up with a burst of WeChat messages. It was the group chat for a campus promotion running that weekend. Given how orderly development had gone and how smoothly we had communicated with the product team, the activity should have landed well.

Reality was harsher:

"Our mini program opens as slow as a dog!"

"The loading spinner goes on forever!"

"Scroll loading stutters, and it throws errors all the time..."

These were the complaints, verbatim.

Watching the users' screen recordings, the problems were real, so we needed to optimize the performance of the mini program's main flows. The three complaints boil down to three points:

  • The mini program starts slowly

  • Requests are slow

  • Interaction is slow

2. Locating the problems

2.1. Slow start

My first reaction to the feedback was that the users' network might simply be too slow. Running it myself, my phone had no problem at all and felt remarkably smooth, and my first instinct was to record a screen capture of my own and send it back as a reply.


But with the users' screen recordings in hand, we couldn't dismiss it that casually, so we checked the dashboards in the mini program admin console for startup data across different networks:

Network     Startup time
Overall     3.6s
WiFi        3.5s
4G          3.9s
2G/3G       4.1s


Statistically, with an overall startup time of about 3.7s, the network does affect startup time, but not by much; even the 2G/3G figure is only slightly slower than the overall dashboard number. Clearly things were not that simple.

So we looked at the dashboard data from another dimension:

Device tier    Startup time    JS injection    First render
Overall        3.6s            0.29s           0.16s
High-end       2.9s            0.19s           0.06s
Mid-range      4.8s            0.42s           0.19s
Low-end        7.9s            0.72s           0.43s

Here the problem shows up: device performance has a big impact on mini program startup speed. Low-end phones are 2-3x slower than high-end ones overall, and even 5-6x slower in the render layer, and the users who reported problems were indeed on mid- and low-end phones. We cannot control which phone a user owns, so is there anything we can optimize?

To answer that, we need to understand the startup flow of a mini program. According to the official documentation, startup can be broken into the following steps:

[Figure: mini program cold start flow]

The figure above describes a complete cold start, from the user tapping the mini program to the page requesting data. Mini program initialization (information preparation, environment preparation) takes a long time, but that part is handled by the WeChat client and developers cannot intervene, so we can only focus on the later steps (downloading the code package, injecting the code package, first render).

According to the official documentation, the levers for optimizing this part are:

  1. Reduce the size of the code package

  2. Reduce code complexity

  3. Reduce synchronous API calls

  4. Reduce the complexity of page structure

  5. Reduce the number of custom components

The last four items have no particularly good technical guardrails: they need code review to catch high-complexity, high-cost API calls (complexity can also be analyzed with tools such as CodeCC, Tencent's internal code-checking tool), and reducing the number of custom components is a harder call that trades off readability against reusability, so it was not the focus of this round of optimization.

So we focused on code package size. From our CI records we can see the total package size:

[Figure: package size from CI records]

The main package was 1949.71KB, close to the 2MB limit. Dependency analysis showed that, apart from some unused modules and components, a large share of the content was static resources. The official documentation also notes:

The mini program code package is compressed with the ZSTD algorithm when downloaded. Resource files take up a lot of package size and are usually hard to compress further, so their impact on download time is much greater than that of code files.

So to shrink the code package, the most direct approach is to remove unnecessary resources:

  • Optimize static resources and move static resource files that don't need to ship in the package to a CDN

  • Run dependency analysis on the mini program's components and filter out unused ones

We also noticed that some subpackages are very small, yet because they are ordinary subpackages, opening those pages still requires downloading the main package first, which wastes some of the package download time. Typical examples are WebView pages: they usually only need to handle parameters and depend very little on the main package. That gives us one more optimization point:

  • Move highly independent pages into independent subpackages to minimize package download time

2.2. Slow requests

From the logs we found that the user's home page data request took 3-4s to return. A slow request normally comes down to one of two causes:

  • A sudden spike in concurrency makes the server respond slowly

  • The user's network is slow, so sending and receiving the request is slow

Log statistics over the user's access window showed request volume in line with usual levels, and the dashboard's request latency showed no big fluctuations:

[Figure: request latency dashboard]

So we could basically rule out a backend problem. The dashboard shows around 500ms, but when the user's network is bad, how do we guarantee this part?

The answer, of course, is to pull ahead of time. On a cold start we can use the data prefetch capability provided by the mini program platform to pull data in advance. Judging from the startup time of the mini program, this can completely cover the cost of our interface request, so the page can render as soon as the mini program finishes starting.

For a warm start, slow requests mainly show up in requests made during interaction and during page switching. Interaction is analyzed in the next section; here we look at page switching. From our statistics, page switching takes about 400ms, and the window we can exploit is roughly 50ms-100ms.


[Figure: route switching time]


Using the page-switch window to load the page's data in advance reduces the perceived data-request time. In addition, after the first request the page data can be cached according to certain policies, so that the second visit to the page opens instantly.

In short, there are several ways to optimize slow requests, and in theory the effect should be significant:

  • Prefetch data on cold start

  • Pull data ahead of time during page route switching

  • Cache data

2.3. Slow interaction

First, what "slow interaction" means here. The feedback we received was: after the first screen loads successfully, subsequent scroll-loading and some button taps respond very slowly and frequently throw errors. It took a long time to pin this down. If slow requests were caused by the user's network, all requests should be slow; yet for these users, later loading and interaction were very slow while the first screen was fine.

Searching the logs, we found that the users' request errors were timeouts. Why would timeouts concentrate on interactive loading? After some more digging we noticed that one user's errors all happened when they scrolled or tapped immediately after the first screen loaded; if they tapped again a while later, no error occurred.

With that pattern in hand, we remembered a limitation in the official documentation's notes on network usage:

The maximum concurrency for wx.request, wx.uploadFile and wx.downloadFile is 10 requests.

Combined with our wrapper around wx.request, whose timeout timer starts as soon as wx.request is called, it is easy to hit a timeout once request concurrency exceeds the limit. When we request the first business interface we also fire a batch of reporting requests: pv, component exposure, key-path tracking and so on. Using Whistle's resDelay rule to delay our reporting requests by 5000ms, we reproduced exactly the situation users were describing.
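For reference, the Whistle rule we used looked roughly like the line below (the reporting domain is a placeholder; resDelay delays only the responses of matching requests):

# Whistle rule (hypothetical reporting domain): delay report responses by 5000ms
report.example.com resDelay://5000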

[Figure: reproducing the timeouts with delayed reporting requests]

Once the problem was found, the direction of optimization was clear:

  • Ensure that business requests tied to the user experience are sent normally

Is there any other reason for slow interaction? As we kept digging for bottlenecks, we found that the course detail page of the Tencent Classroom mini program has a lot of content, some 5-6 screens tall. Users only care whether the first screen shows up quickly, but our original handling was crude: after fetching the detail-page data we processed it, formatted the data for the whole page, and only then called this.setData once to update the page. So to improve first-screen speed, what we need here is:

  • Render the page step by step

2.4. Summary of optimization points

To sum up the points and directions that need optimizing:

  1. Slow startup: mainly optimize the code package:

    • Optimize static resources and move static resource files that don't need to ship in the package to a CDN

    • Run dependency analysis on the mini program's components and filter out unused ones

    • Move highly independent pages into independent subpackages to reduce main package download time

  2. Slow requests: mainly preloading and caching:

    • Prefetch data on cold start

    • Pull data ahead of time during page route switching

    • Cache data

  3. Slow interaction: tackle request dispatch and page rendering:

    • Ensure that business requests tied to the user experience are sent normally

    • Render the page step by step

3. Optimization

3.1. Startup optimization

3.1.1. Independent subpackages

The user feedback came mainly from the campus promotion activities. The activity pages are H5 pages embedded through WebView, and the startup flow of a WebView page differs from that of a native mini program page:


[Figure: WebView page startup flow]


In fact the WebView pages only need the login-state passing capability and depend very little on the main package, and the bigger performance problems of those pages have to be solved on the H5 side anyway, so they were the first thing we moved into an independent subpackage.
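For reference, the change itself is just a flag in app.json. A sketch with placeholder paths; the key point is "independent": true, which lets the subpackage start without downloading the main package first:

app.json (fragment, placeholder paths):
{
  "subpackages": [
    {
      "root": "packages/webview",
      "pages": ["pages/activity/activity"],
      "independent": true
    }
  ]
}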

The final result was good: because the main package no longer needs to be downloaded during startup, startup performance improved by 30%.

3.1.2. Moving static resources to a CDN

Our mini program consists of native pages plus kbone pages. kbone is an official solution built on webpack, for which there are plenty of existing schemes to package static resources separately. Our native pages are built with gulp, whose original job was mainly to compile ts to js and to turn css into wxss via postcss. Because wxss does not support references through relative paths, the images and fonts referenced in wxss were converted to base64, and the remaining files such as json and wxml were simply copied into the output.

This handling was crude: converting every local image referenced by background-image to base64 through postcss also makes many images take up two to three times their original volume in the project.


[Figure: CI flow - before optimization]


So first we need to pick out the static resources under the source tree and build them separately. To avoid collisions between files with the same name, the resources need a content hash in their file names; for that we use the gulp plugin gulp-rev, which renames resources based on a hash of their content.


[Figure: CI flow - after optimization]


After the images are on the CDN, the references in css, js, json and wxml are replaced with the CDN addresses. The replacement logic is shown in the figure below, followed by a simplified sketch.


[Figure: CDN replacement flow]
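A simplified gulpfile sketch of this pipeline might look like the following. The directory names and CDN prefix are placeholders, and the real build also rewrites references in js, json and wxml, not just css:

// gulpfile.js (sketch) -- hash static assets with gulp-rev, then rewrite
// css references to the CDN prefix. Paths and CDN_HOST are placeholders.
const gulp = require('gulp');
const rev = require('gulp-rev');
const replace = require('gulp-replace');

const CDN_HOST = 'https://cdn.example.com/miniprogram/';

// 1. Copy static resources out of the package, renamed with a content hash.
gulp.task('assets', () =>
  gulp.src('src/assets/**/*.{png,jpg,svg,ttf}')
    .pipe(rev())                  // foo.png -> foo-d41d8cd9.png
    .pipe(gulp.dest('cdn-dist'))  // this directory gets uploaded to the CDN
    .pipe(rev.manifest())         // writes the original -> hashed mapping
    .pipe(gulp.dest('cdn-dist'))
);

// 2. Rewrite local references in styles to the CDN address.
gulp.task('css', () => {
  const manifest = require('./cdn-dist/rev-manifest.json');
  let stream = gulp.src('src/**/*.css');
  Object.entries(manifest).forEach(([original, hashed]) => {
    stream = stream.pipe(replace(`assets/${original}`, CDN_HOST + hashed));
  });
  return stream.pipe(gulp.dest('dist'));
});

exports.default = gulp.series('assets', 'css');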


3.1.3. Filter unused components

As the business iterates, some components inevitably get abandoned and are hard to spot. Our team's mini program tool imweb-miniprogram-cli analyzes the components used by each page, filters out components the project never uses, and keeps them out of the final build. The rough idea is as follows:


[Figure: component dependency graph]


Starting from app.json, collect all pages and subpackages configured for the mini program, gather the custom components used by the App, by each page and by each subpackage, and recursively check the components used by those custom components. If an unused component is detected, the tool also prints a hint, which is quite friendly:


[Figure: component filtering output]


As you can see, several unused components were found in our project.
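A minimal Node.js sketch of this dependency walk (not the real imweb-miniprogram-cli code; it assumes every page and component has a sibling .json file whose usingComponents field lists its dependencies):

// collect-components.js (sketch): walk app.json -> pages/subpackages ->
// usingComponents recursively; anything never reached is an unused component.
const fs = require('fs');
const path = require('path');

const ROOT = process.cwd();
const readJson = (p) => JSON.parse(fs.readFileSync(p, 'utf8'));

const app = readJson(path.join(ROOT, 'app.json'));
const entries = [
  ...(app.pages || []),
  ...(app.subpackages || []).flatMap((sp) =>
    (sp.pages || []).map((page) => path.posix.join(sp.root, page))
  ),
];

const used = new Set();
function visit(entry) {
  // entry is a package-relative path like "pages/index/index" (no extension)
  const jsonPath = path.join(ROOT, `${entry}.json`);
  if (!fs.existsSync(jsonPath)) return;
  const { usingComponents = {} } = readJson(jsonPath);
  Object.values(usingComponents).forEach((comp) => {
    // absolute paths start with "/", relative ones resolve against the entry
    const resolved = comp.startsWith('/')
      ? comp.slice(1)
      : path.posix.join(path.posix.dirname(entry), comp);
    if (!used.has(resolved)) {
      used.add(resolved);
      visit(resolved); // recurse into the component's own usingComponents
    }
  });
}
entries.forEach(visit);

// Components on disk that never appear in `used` can be dropped from the build.
console.log([...used]);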

3.2. Request optimization

3.2.1. Data prefetch

Data prefetch has to be enabled in the mini program admin console. The data source can be a developer server or cloud development. A developer server comes with restrictions: if you fill in a CGI address directly, you can only pull one kind of data, which is inflexible, and building another service just for prefetching would be a lot of work. So we chose cloud development. The rough flow:


[Figure: data prefetch - overview]


When the mini program starts, the WeChat client invokes the configured cloud function. Inside the cloud function we call the business backend through cl5 to pull the required data; once pulled, the client caches the data locally. After the mini program has started, the business code calls wx.getBackgroundFetchData to read the prefetched data. If the cache contains what we need, we render directly; otherwise we degrade to having the business code request the interface again.

Inside the cloud function we can read the path and query parameters of this launch, so we can decide from those two parameters which backend service this prefetch should call. That way a single cloud function can prefetch the right data no matter which page the mini program starts from.

const preFetchMap = {
  'pages/index/index': fetchIndex,
  'pages/course/course': fetchCourse,
};

// Cloud function entry
exports.main = async (event) => {
  const { path, query = '' } = event;
  const fetchFn = preFetchMap[path];

  if (fetchFn) {
    const res = await fetchFn(query);
    return res;
  }

  return {
    error: {
      event,
      retcode: -1002,
      msg: `${path} has no prefetch logic configured`,
    },
  };
};

One thing to note: because the mini program platform itself does a lot of startup optimization, it is possible that the prefetched data has not come back yet by the time the mini program has started. So we went one step further: during the business pull we use wx.onBackgroundFetchData to listen for the prefetch response and render as soon as it arrives, so the prefetched data is used for the first screen whenever possible.
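On the client side, the logic looks roughly like this sketch (simplified; renderWithData and fetchFromServer are placeholder helpers standing in for our real rendering and request code):

// Sketch: use prefetched data for the first screen whenever possible.
function loadFirstScreen(page) {
  let rendered = false;
  const renderOnce = (data) => {
    if (!rendered && data) {
      rendered = true;
      renderWithData(page, data); // placeholder for the real render logic
    }
  };

  // 1. The cloud function may finish after startup: listen for its return.
  wx.onBackgroundFetchData((res) => {
    if (res.fetchedData) renderOnce(JSON.parse(res.fetchedData));
  });

  // 2. Try the data the WeChat client may already have cached locally.
  wx.getBackgroundFetchData({
    fetchType: 'pre',
    success: (res) => {
      if (res.fetchedData) renderOnce(JSON.parse(res.fetchedData));
      else fetchFromServer(page).then(renderOnce); // cache empty: fall back
    },
    // 3. Degrade to a normal business request as the final fallback.
    fail: () => fetchFromServer(page).then(renderOnce),
  });
}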


[Figure: data prefetch]


3.2.2. Pulling ahead of time & data caching

As mentioned earlier, pulling ahead of time means using the gap while the mini program switches pages to start fetching data, so the perceived request time shrinks. The overall approach: our encapsulated navigation logic attaches the appropriate data pull to each target page and mounts the resulting promise on the App instance; once the page switch completes, the page prefers the promise on the App to get its data.

Data caching means that after a successful pull, the relatively stable data is cached locally with wx.setStorage. On the second visit to the page, the locally cached data is rendered first and then refreshed by a new pull.
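A stripped-down sketch of both ideas together (fetchCourseDetail, the page path and the cache key are placeholders, and it assumes App() defines a globalData object):

// Sketch: start the request during the route switch, and use a local cache
// so that the second visit renders immediately.
const CACHE_KEY = 'course-detail-cache'; // placeholder key

function gotoCourseDetail(courseId) {
  // Kick off the request before navigateTo completes (~400ms of route time).
  getApp().globalData.courseDetailPromise = fetchCourseDetail(courseId);
  wx.navigateTo({ url: `/pages/course/course?id=${courseId}` });
}

// pages/course/course.js
Page({
  onLoad(options) {
    // 1. Render cached data first, so the second visit feels instant.
    wx.getStorage({
      key: CACHE_KEY,
      success: (res) => this.setData({ detail: res.data }),
    });

    // 2. Prefer the promise that was started during the route switch.
    const pending = getApp().globalData.courseDetailPromise;
    (pending || fetchCourseDetail(options.id)).then((detail) => {
      this.setData({ detail });
      wx.setStorage({ key: CACHE_KEY, data: detail }); // refresh the cache
    });
  },
});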


[Figure: pulling ahead of time]


3.3. Interaction optimization

3.3.1. Guaranteeing business requests

The core idea is to give business requests priority. We encapsulated a queued-request module that intercepts the wx.request API and assigns each request a priority according to configuration. Once the number of concurrent requests reaches a certain threshold, low-priority requests are pushed into a WaitingQueue, leaving enough channels for high-priority business requests.
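The core of the queuing module can be sketched as follows; the threshold and the priority field are simplifications of our real configuration:

// Sketch of the queued-request wrapper around wx.request. Low-priority
// traffic (pv, exposure and other reporting) waits once in-flight requests
// approach the platform limit of 10, so business requests always have slots.
const MAX_CONCURRENT = 10;
const RESERVED_FOR_BUSINESS = 4; // channels kept free for business requests
const waitingQueue = [];
let inFlight = 0;

function send(options) {
  inFlight += 1;
  const userComplete = options.complete;
  wx.request({
    ...options,
    complete: (res) => {
      inFlight -= 1;
      flush(); // a slot has freed up: drain the waiting queue
      if (userComplete) userComplete(res);
    },
  });
}

function flush() {
  while (waitingQueue.length && inFlight < waitingQueue[0].threshold) {
    send(waitingQueue.shift().options);
  }
}

// priority: 'high' for business requests, 'low' for reporting
function queuedRequest(options, priority = 'high') {
  const threshold =
    priority === 'high' ? MAX_CONCURRENT : MAX_CONCURRENT - RESERVED_FOR_BUSINESS;
  if (inFlight < threshold) {
    send(options);
  } else if (priority === 'high') {
    waitingQueue.unshift({ options, threshold }); // business requests jump the queue
  } else {
    waitingQueue.push({ options, threshold });
  }
}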


[Figure: request queuing]


3.3.2. Step-by-step rendering

The approach here is easy to guess: push the data required by the first screen to the view first via setData, then process the rest of the data. But note what the official documentation says:

setData sends data from the logical layer to the view layer (asynchronous), and at the same time changes the corresponding this.data value (synchronous).

Mini program code follows the JS event loop, so simply calling setData after processing one chunk of data and then continuing (even through a Promise) in the same task does not achieve step-by-step rendering, while nesting the renders inside setTimeout callbacks hurts readability and is not very elegant. Our solution is to wrap setTimeout in a Promise-compliant helper, so rendering can continue step by step with Promise.then:
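A simplified version of that wrapper, with placeholder data-processing helpers (processFirstScreen, processCatalog, processRest):

// Sketch: a Promise wrapper around setTimeout so that each setData lands in a
// separate task, letting the first screen render before the rest is processed.
const nextTick = (delay = 0) =>
  new Promise((resolve) => setTimeout(resolve, delay));

Page({
  async onDetailLoaded(raw) {
    // 1. Only the data needed above the fold, pushed to the view immediately.
    this.setData({ header: processFirstScreen(raw) });

    await nextTick();
    // 2. The large catalog tree is formatted and rendered in a later task.
    this.setData({ catalog: processCatalog(raw) });

    await nextTick();
    // 3. Everything else (recommendations, comments, ...).
    this.setData({ rest: processRest(raw) });
  },
});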




4. Results

After this series of optimizations, the effect is clear:

4.1. Package size

In terms of package size:

  • The total dropped from 9132.94KB to 6736.42KB, a reduction of 27%;

  • The main package dropped from 1949.71KB to 985.96KB, a reduction of 49.5%;

Looking at the startup timing data, both download time and JS injection time fell noticeably:


[Figure: startup time]


Looking at the time distribution, the proportion of users whose mini program opens within 3s increased noticeably, from 56.26% to 64.25%.


[Figure: open-time distribution]


4.2. Request latency

Data prefetch, pulling ahead of time and data caching work well during cold start and page switching:

Home page request time dropped from an average of 400ms to 50ms, an 87.5% improvement;

Course detail page request time dropped from an average of 800ms to 90ms, an 88.75% improvement;

Data caching lets a page open almost instantly on the second visit:


[Figure: second-visit loading]


After switching to queued requests, the effect on the ordering of network requests is also obvious: the average business request time for gray-released users dropped by 50-100ms, roughly a 15% improvement.

We also compared the effect at the 80th, 50th and 20th percentiles and found that the longer a request takes, the more obvious the optimization; in other words, it helps most under weak network conditions.


[Figure: request queuing results]


4.3. Rendering

With step-by-step rendering, the page can start rendering as soon as the basic first-screen data has been processed. Because our catalog structure is complex and takes a long time to process, the catalog is handled in the second step. The actual rendering effect is shown below:


[Figure: step-by-step rendering]


The first screen now renders 100ms-150ms earlier than before.

5. Summary

Our performance work dug into mini program startup, requests, interaction and rendering; this is about as far as we could push it while keeping the base library version requirements low.

Taking our core pages, the home page and the course detail page, as examples:

  • Home page cold start, for the parts developers can influence, went roughly from 1300 download + 300 inject + 170 first render + 430 request = 2200ms to 750 + 245 + 170 + 50 = 1215ms, a 45% improvement

  • Course detail page cold start, for the parts developers can influence, went roughly from 1300 download + 300 inject + 170 first render + 790 request = 2560ms to 750 + 245 + 170 + 100 = 1265ms, a 50.5% improvement

  • Page switching into the detail page (first visit) went from 400 route + 800 request + 450 processing = 1650ms to 400 + 720 + 300 = 1420ms, a 14% improvement

  • On the second visit to the detail page, the loading and rendering process is almost invisible

Are there more optimizations? The platform also offers some advanced features with higher base library version requirements, for example (a config sketch follows this list):

  • On-demand injection and injection-on-use of components can further reduce the cost of injecting and initializing code, but when we released with this feature it stopped the home page's custom components from loading, so we are not using it for now.

  • The initial render cache, supported since base library 2.11.1, lets the view start rendering without waiting for the logical layer to finish initializing.

  • Subpackage asynchronization, still experimental, loads modules asynchronously and can reduce both code package download time and JS injection time.
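For reference, enabling the first two is mostly configuration; a sketch with the field values from the official docs applied to hypothetical pages:

app.json (fragment): inject only the custom components a page actually uses
{
  "lazyCodeLoading": "requiredComponents"
}

pages/course/course.json (fragment): cache the initial render of a mostly
static page so the view can appear before the logic layer finishes initializing
{
  "initialRenderingCache": "static"
}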

 


Copyright notice
Author: Front peak. Please include a link to the original when reprinting, thank you.
https://en.qdmana.com/2021/08/20210824011812484h.html
