Electron / Chromium screen recording: the pits I stepped in
2021-08-27 04:12:39 【Byte front end】
Background
Screen recording on the web is nothing new. The most common scenarios are video conferencing and remote-desktop software; remote meeting tools have greatly eased communication and played a key role in keeping many businesses running online during WFH. Besides real-time screen sharing, there is another use case for recording: capturing live operations as an on-site record for later tracing and playback. That is our main business scenario, and our business depends heavily on the stability of this feature. Here are the hard requirements it places on recording:
Requirements
1. Recordings of arbitrary length; sessions longer than 6 hours must work.
2. Simultaneous audio capture: record the sound of whatever the screen is playing.
3. Cross-platform: compatible with Windows, macOS, and Linux.
4. Keep recording when a tab is dragged from window A to window B inside the app.
5. Keep recording while minimized, maximized, or fullscreen, and record only what is inside the app, never anything outside it.
6. Long, continuous recording for as long as the app stays open.
7. Seek anywhere on the timeline on the web without downloading the whole recording first.
8. Record multiple tabs simultaneously when the app switches among several tabs.
9. Record multiple app windows simultaneously when they live in the same system window.
10. Record live real-time streams.
11. Never keep the recording file locally; upload it automatically after recording and store it encrypted.
Exploring technical approaches
Today there are broadly two ways to record video directly inside Chromium: the rrweb approach and the WebRTC API approach. In an Electron context there is a third option: ffmpeg.
rrweb
Pros
- Can record the audio of the current tab along with the screen.
- Cross-platform.
- Keeps recording through window dragging, minimizing, maximizing, fullscreen, and so on.
- Small recording files.
- Timeline can be dragged on the web without downloading the whole recording.
- Good performance.
Cons
- Cannot record live streams; given its replay-based implementation, the recordable scenarios are limited.
- Does not survive closing the app tab: if the renderer process shuts down, recording terminates immediately and the capture is lost.
- In some scenarios it has side effects on the page DOM.
ffmpeg
Pros
- Better output quality for the same file size.
- Good performance.
- Can record live streams.
Cons
- Cross-platform compatibility is complex to handle.
- The capture region is not dynamic: a region can be selected, but if the app window moves, ffmpeg cannot follow it and keeps recording the original screen area.
- No support for pausing or resuming individual tabs when the app switches among multiple tabs.
- No support for continuing the recording when a tab is dragged from window A to window B.
- Intermediate recording files are written to disk, so they are exposed locally if the app closes mid-recording.
- No support for recording multiple app windows simultaneously.
WebRTC
Pros
- Supports all of requirements 1-11.
Cons
- Poor performance; CPU usage while recording is relatively high.
- Natively recorded files carry no video duration.
- Natively recorded files cannot be seeked on the timeline.
- No native support for very long recordings: an error is thrown once the file exceeds 1/10 of the free disk space.
- Native recording uses a lot of memory.
- Deleting the video relies on the garbage collection of V8 and Blob, so memory leaks are very easy to cause.
Given rrweb's better performance, our first version was in fact built on rrweb, but its inherent limitations eventually made us abandon it: losing the recording outright when the user closes the window was unacceptable, and its lack of live-stream support was the decisive reason we gave it up. Considering ffmpeg's limitations and our own requirements, we finally chose the WebRTC API for direct recording, and then stepped into a series of pits. Here is what we learned.
Acquiring media streams
In the WebRTC standard, every continuous producer of media is abstracted as a media stream. To record the screen and its sound, the key is to find a source for each, as shown in the figure below:
Video stream capture
To get a video stream, you first need the MediaSourceId of the stream you want to capture. Electron provides a general-purpose API to enumerate the MediaSourceId of every "window" and "screen":
import { desktopCapturer } from 'electron';
// Enumerate the mediaSourceId of every window or screen
desktopCapturer.getSources({
  types: ['screen', 'window'], // capture "screen" sources, "window" sources, or both
  thumbnailSize: {
    height: 300, // height of the thumbnail screenshot
    width: 300   // width of the thumbnail screenshot
  },
  fetchWindowIcons: true // if a source is a window with an icon, also capture the icon
}).then(sources => {
  sources.forEach(source => {
    // window icon, if the source is a window with an icon and fetchWindowIcons is true
    console.log(source.appIcon);
    // display id
    console.log(source.display_id);
    // the mediaSourceId used later to acquire the stream
    console.log(source.id);
    // window name, usually the same as the process name shown in the task manager
    console.log(source.name);
    // screenshot of the window or screen at the instant this API was called
    console.log(source.thumbnail);
  });
});
If you only want the MediaSourceId of the current window:
import { remote } from 'electron';
// Get the mediaSourceId of the current window
const mediaSourceId = remote.getCurrentWindow().getMediaSourceId();
With the mediaSourceId in hand, acquire the video stream as follows:
import { remote } from 'electron';
// Acquire the video stream
const videoSource: MediaStream = await navigator.mediaDevices.getUserMedia({
  audio: false, // explicitly skip audio here; it is acquired separately
  video: {
    mandatory: {
      chromeMediaSource: 'desktop',
      chromeMediaSourceId: remote.getCurrentWindow().getMediaSourceId()
    }
  }
});
If the video source is the entire desktop and the operating system is macOS, the "screen recording" permission must also be granted. Once the steps above run, the video source is easily obtained.
Audio source acquisition
Unlike the video source, the audio source is genuinely more complicated to obtain, and macOS and Windows need to be handled separately. First, on Windows, capturing system audio is simple and requires no authorization at all, so if you record audio there, be sure to inform the user clearly:
// Acquiring the audio stream on Windows
const audioSource: MediaStream = await navigator.mediaDevices.getUserMedia({
  audio: {
    mandatory: {
      // No mediaSourceId needed; this captures the system audio
      chromeMediaSource: 'desktop',
    },
  },
  // A video option must be present as well, or the call fails
  video: {
    mandatory: {
      chromeMediaSource: 'desktop',
    },
  },
});
// Then manually remove the unwanted video tracks to end up with an audio-only stream
(audioSource.getVideoTracks() || []).forEach(track => audioSource.removeTrack(track));
Next, the audio stream on macOS, where things get harder. Because of macOS's audio permission model (reference), nothing can record system audio directly unless a third-party kernel extension (Kext) is installed, such as Soundflower or BlackHole. Since BlackHole supports both arm64 (Apple M1) and x64 (Intel) processors (reference), we ultimately chose BlackHole to capture system audio. Before guiding the user to install BlackHole, we first check whether it is already present: if not, prompt for installation; if it is, continue. The check looks like this:
import { remote } from 'electron';
const isWin = process.platform === 'win32';
const isMac = process.platform === 'darwin';
declare type AudioRecordPermission =
  | 'ALLOWED'
  | 'RECORD_PERMISSION_NOT_GRANTED'
  | 'NOT_INSTALL_BLACKHOLE'
  | 'OS_NOT_SUPPORTED';
// Check whether Soundflower or BlackHole is installed on the user's machine
async function getIfAlreadyInstallSoundFlowerOrBlackHole(): Promise<boolean> {
  const devices = await navigator.mediaDevices.enumerateDevices();
  return devices.some(
    device => device.label.includes('Soundflower (2ch)') || device.label.includes('BlackHole 2ch (Virtual)')
  );
}
// Check microphone permission (BlackHole works by exposing system audio as a microphone)
function getMacAudioRecordPermission(): 'not-determined' | 'granted' | 'denied' | 'restricted' | 'unknown' {
  return remote.systemPreferences.getMediaAccessStatus('microphone');
}
// Request microphone permission (again, BlackHole presents system audio as a microphone)
function requestMacAudioRecordPermission(): Promise<boolean> {
  return remote.systemPreferences.askForMediaAccess('microphone');
}
async function getAudioRecordPermission(): Promise<AudioRecordPermission> {
  if (isWin) {
    // Windows supports this directly
    return 'ALLOWED';
  } else if (isMac) {
    if (await getIfAlreadyInstallSoundFlowerOrBlackHole()) {
      if (getMacAudioRecordPermission() !== 'granted') {
        if (!(await requestMacAudioRecordPermission())) {
          return 'RECORD_PERMISSION_NOT_GRANTED';
        }
      }
      return 'ALLOWED';
    }
    return 'NOT_INSTALL_BLACKHOLE';
  } else {
    // Audio recording on Linux is not supported yet
    return 'OS_NOT_SUPPORTED';
  }
}
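The branching above can be factored into a pure decision function and unit-tested without Electron. A sketch under my own naming: `PermissionEnv` is a hypothetical bag of injected checks standing in for `process.platform`, `enumerateDevices`, and the `systemPreferences` calls.

```typescript
type AudioRecordPermission =
  | 'ALLOWED'
  | 'RECORD_PERMISSION_NOT_GRANTED'
  | 'NOT_INSTALL_BLACKHOLE'
  | 'OS_NOT_SUPPORTED';

// Injected environment so the decision logic runs anywhere (names are mine)
interface PermissionEnv {
  platform: 'win32' | 'darwin' | 'linux';
  driverInstalled: () => Promise<boolean>; // BlackHole / Soundflower present?
  micGranted: () => 'granted' | 'denied' | 'not-determined' | 'restricted' | 'unknown';
  requestMic: () => Promise<boolean>;
}

// Same decision tree as getAudioRecordPermission, with all I/O injected
async function resolveAudioPermission(env: PermissionEnv): Promise<AudioRecordPermission> {
  if (env.platform === 'win32') return 'ALLOWED';
  if (env.platform !== 'darwin') return 'OS_NOT_SUPPORTED';
  if (!(await env.driverInstalled())) return 'NOT_INSTALL_BLACKHOLE';
  if (env.micGranted() !== 'granted' && !(await env.requestMic())) {
    return 'RECORD_PERMISSION_NOT_GRANTED';
  }
  return 'ALLOWED';
}
```

In the real app the four fields would simply wrap the Electron calls shown above.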
In addition, the Electron app must declare the audio recording permission in Info.plist before it can record audio at all. Taking an electron-builder packaging setup as an example:
// electron-builder configuration
const createMac = () => ({
  ...commonConfig,
  // afterPack hook, used below to handle i18n for the audio permission prompt
  afterPack: 'scripts/macAfterPack.js',
  mac: {
    ...commonMacConfig,
    // entitlements.mac.plist must be specified so the permission declarations are signed
    entitlements: 'scripts/entitlements.mac.plist',
    // the runtime must be "hardened" for the app to pass notarization
    hardenedRuntime: true,
    extendInfo: {
      // add multi-language support to Info.plist
      LSHasLocalizedDisplayName: true,
    }
  }
});
To obtain the audio recording permission, a custom entitlements.mac.plist must declare the following four keys:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
  <dict>
    <key>com.apple.security.cs.allow-jit</key>
    <true/>
    <key>com.apple.security.cs.allow-unsigned-executable-memory</key>
    <true/>
    <key>com.apple.security.cs.allow-dyld-environment-variables</key>
    <true/>
    <key>com.apple.security.device.audio-input</key>
    <true/>
  </dict>
</plist>
To make the pre-recording "microphone authorization" prompt multilingual, we manually write the custom text below into each language's .lproj/InfoPlist.strings file:
// macAfterPack.js
const fs = require('fs');
// i18n strings to be written into each xxx.lproj/InfoPlist.strings
const i18nNSStrings = {
  en: {
    NSMicrophoneUsageDescription: 'Please allow this program to access your system audio',
  },
  ja: {
    NSMicrophoneUsageDescription: 'このプログラムがシステムオーディオにアクセスして録音することを許可してください',
  },
  th: {
    NSMicrophoneUsageDescription: 'โปรดอนุญาตให้โปรแกรมนี้เข้าถึงและบันทึกเสียงระบบของคุณ',
  },
  ko: {
    NSMicrophoneUsageDescription: '이 프로그램이 시스템 오디오에 액세스하고 녹음 할 수 있도록 허용하십시오',
  },
  zh_CN: {
    NSMicrophoneUsageDescription: '请允许此程序访问并录制您的系统音频',
  },
};
exports.default = async context => {
  const { electronPlatformName, appOutDir } = context;
  if (electronPlatformName !== 'darwin') {
    return;
  }
  const productFilename = context.packager.appInfo.productFilename;
  const resourcesPath = `${appOutDir}/${productFilename}.app/Contents/Resources/`;
  console.log(
    `[After Pack] start create i18n NSString bundle, productFilename: ${productFilename}, resourcesPath: ${resourcesPath}`
  );
  return Promise.all(
    Object.keys(i18nNSStrings).map(langKey => {
      const infoPlistStrPath = `${langKey}.lproj/InfoPlist.strings`;
      let infos = '';
      const langItem = i18nNSStrings[langKey];
      Object.keys(langItem).forEach(infoKey => {
        infos += `"${infoKey}" = "${langItem[infoKey]}";\n`;
      });
      return new Promise((resolve, reject) => {
        const filePath = `${resourcesPath}${infoPlistStrPath}`;
        fs.writeFile(filePath, infos, err => {
          if (err) {
            return reject(err);
          }
          console.log(`[After Pack] ${filePath} create success`);
          resolve();
        });
      });
    })
  );
};
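The serialization inside macAfterPack.js is just Apple's `.strings` key/value format. A standalone sketch of the same formatting (the function name is mine):

```typescript
// Build the body of an InfoPlist.strings file from a key/value map.
// Apple's .strings format is simply one `"key" = "value";` per line.
function buildInfoPlistStrings(entries: Record<string, string>): string {
  return Object.entries(entries)
    .map(([key, value]) => `"${key}" = "${value}";`)
    .join('\n') + '\n';
}
```

Extracting it this way keeps the afterPack hook down to pure file I/O.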
With the above, the basic macOS audio recording capability and permissions are in place. Next, once BlackHole is installed, configure it as shown in the figure below:
1. Open Launchpad, search for the built-in "Audio MIDI Setup" app, and open it.
2. Click the "+" in the bottom-left corner and choose "Create Multi-Output Device".
3. In the "Use" column on the right, check "BlackHole" (required) and "Speakers"/"Headphones" (one or more); set the "Master Device" to "Speakers"/"Headphones".
4. In the menu-bar volume settings, select the newly created "Multi-Output Device" as the sound output device.
Yes, the macOS audio recording steps are very cumbersome, but this is the best option available today. With the "basic permission configuration" and the "BlackHole extended configuration" done, we can finally grab the audio stream in code:
if (process.platform === 'darwin') {
  const permission = await getAudioRecordPermission();
  switch (permission) {
    case 'ALLOWED': {
      const devices = await navigator.mediaDevices.enumerateDevices();
      const outputdevices = devices.filter(
        _device => _device.kind === 'audiooutput' && _device.deviceId !== 'default'
      );
      const soundFlowerDevices = outputdevices.filter(_device => _device.label === 'Soundflower (2ch)');
      const blackHoleDevices = outputdevices.filter(_device => _device.label === 'BlackHole 2ch (Virtual)');
      // If the user installed Soundflower or BlackHole, pick a deviceId by priority
      const deviceId = soundFlowerDevices.length
        ? soundFlowerDevices[0].deviceId
        : blackHoleDevices.length
          ? blackHoleDevices[0].deviceId
          : null;
      if (deviceId) {
        // With a usable deviceId, grab the audio stream
        const audioSource = await navigator.mediaDevices.getUserMedia({
          audio: {
            deviceId: {
              exact: deviceId, // use the deviceId found above
            },
            sampleRate: 44100,
            // Turn all three of these off to get the rawest possible audio;
            // otherwise Chromium applies some audio processing by default
            echoCancellation: false,
            noiseSuppression: false,
            autoGainControl: false,
          },
          video: false,
        });
      }
      break;
    }
    case 'NOT_INSTALL_BLACKHOLE':
      // Prompt the user that the driver is not installed
      break;
    case 'RECORD_PERMISSION_NOT_GRANTED':
      // Prompt the user that permission has not been granted
      break;
    default:
      break;
  }
}
A bit cumbersome, but at last we can record audio on both Windows and macOS. If everything is configured correctly, running the code above pops up the native authorization dialog shown in the figure. If the user accidentally denies it, recording can still be authorized later under "System Preferences - Security & Privacy - Microphone".
Merging the audio and video streams
With both streams in hand, we extract their respective tracks and merge them into a new MediaStream:
// Merge audio stream and video stream
const combinedSource = new MediaStream([...this._audioSource.getAudioTracks(), ...this._videoSource.getVideoTracks()]);
Recording the media stream
Encoding format
Having a recording source is not the same as having a recording: Chromium provides a class named MediaRecorder that records an incoming media stream to video, so creating a MediaRecorder and starting it is the core of screen recording. MediaRecorder only outputs the WebM container, but it supports several codecs (vp8, vp9, h264, and so on), and it offers a convenient API to probe which MIME types are supported:
let types: string[] = [
  'video/webm',
  'audio/webm',
  'video/webm;codecs=vp9',
  'video/webm;codecs=vp8',
  'video/webm;codecs=daala',
  'video/webm;codecs=h264',
  'audio/webm;codecs=opus',
  'video/mpeg'
];
for (let i in types) {
  // Probe whether each MIME type you need is supported
  console.log('Is ' + types[i] + ' supported? ' + (MediaRecorder.isTypeSupported(types[i]) ? 'Yes' : 'No :('));
}
In our tests, CPU usage while recording showed no essential difference across these encodings, so we recommend recording with VP9.
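In practice you usually want the first supported type from a preference list rather than logging them all. A small helper sketch (my own naming; the predicate is injected so the logic also runs outside a browser):

```typescript
// Return the first MIME type the recorder supports, or null.
// In the renderer, `isSupported` would be
// (t: string) => MediaRecorder.isTypeSupported(t);
// it is injected here so the selection logic is testable anywhere.
function pickMimeType(
  preferred: string[],
  isSupported: (type: string) => boolean
): string | null {
  return preferred.find(isSupported) ?? null;
}

// Hypothetical usage in the renderer:
// const mimeType = pickMimeType(
//   ['video/webm;codecs=vp9', 'video/webm;codecs=vp8', 'video/webm'],
//   t => MediaRecorder.isTypeSupported(t)
// );
```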
Creating the recording
With a codec chosen and the audio and video streams merged, we can actually start recording:
const recorder = new MediaRecorder(combinedSource, {
  mimeType: 'video/webm;codecs=vp9',
  // The bitrate can be set manually; 1.5 Mbps here keeps very high bitrates in check.
  // The encoder still uses variable bitrate, so this value is not exact.
  videoBitsPerSecond: 1.5e6,
});
const timeslice = 5000;
const fileBits: Blob[] = [];
// Called whenever data is available, in four cases:
// 1. when the MediaRecorder is stopped manually
// 2. when timeslice is set, once every timeslice interval
// 3. when all tracks of the media stream become inactive
// 4. when recorder.requestData() is called to flush the buffer
recorder.ondataavailable = (event: BlobEvent) => {
  fileBits.push(event.data as Blob);
}
recorder.onstop = () => {
  // Recording stopped; assemble the recording file.
  // This always fires after the final ondataavailable.
  const videoFile = new Blob(fileBits, { type: 'video/webm;codecs=vp9' });
}
if (timeslice === 0) {
  // Start recording and keep buffering data until stopped
  recorder.start();
} else {
  // Start recording and fire ondataavailable every timeslice milliseconds,
  // flushing and emptying the buffer each time (this matters a lot)
  recorder.start(timeslice);
}
setTimeout(() => {
  // stop after 30 seconds
  recorder.stop();
}, 30000);
Pausing / resuming the recording
// Pause recording
recorder.pause();
// Resume recording
recorder.resume();
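One caveat worth noting (my addition, based on the MediaStream Recording spec): start(), pause(), resume(), and stop() throw an InvalidStateError when called in the wrong recorder state. The legal transitions can be sketched as a tiny table; redundant pause/resume, which the real API treats as no-ops, are modeled as errors here for simplicity:

```typescript
type RecorderState = 'inactive' | 'recording' | 'paused';
type RecorderAction = 'start' | 'pause' | 'resume' | 'stop';

// Valid MediaRecorder state transitions (simplified sketch)
const transitions: Record<string, RecorderState> = {
  'inactive:start': 'recording',
  'recording:pause': 'paused',
  'recording:stop': 'inactive',
  'paused:resume': 'recording',
  'paused:stop': 'inactive',
};

// Compute the next state, or throw like the real API would
function nextState(state: RecorderState, action: RecorderAction): RecorderState {
  const next = transitions[`${state}:${action}`];
  if (!next) throw new Error(`InvalidStateError: cannot ${action} while ${state}`);
  return next;
}
```

Guarding calls this way avoids unhandled InvalidStateError exceptions when UI buttons race against the recorder.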
With these API calls in place, the MVP of our recording feature runs end to end.
Handling the recording output
As mentioned during the exploration above, recording directly in the browser comes with its own set of pits. Even so, this part is the real core of this article: solving them.
Lock screen stops the video stream
We discovered that the video stream obtained via navigator.mediaDevices.getUserMedia is interrupted when the screen locks (on macOS and Windows alike). The following code demonstrates the phenomenon:
import { remote } from 'electron';
// Acquire the video stream
const videoSource: MediaStream = await navigator.mediaDevices.getUserMedia({
  audio: false, // explicitly skip audio; it is acquired separately
  video: {
    mandatory: {
      chromeMediaSource: 'desktop',
      chromeMediaSourceId: remote.getCurrentWindow().getMediaSourceId()
    }
  }
});
const recorder = new MediaRecorder(videoSource, {
  mimeType: 'video/webm;codecs=vp9',
  // 1.5 Mbps cap; still variable bitrate, so not exact
  videoBitsPerSecond: 1.5e6,
});
recorder.ondataavailable = () => console.log('Data available');
recorder.onstop = () => console.log('Recording stopped');
// Start recording, wait 10 seconds, then lock the screen manually
recorder.start();
setInterval(() => {
  console.log('Stream active:', videoSource.active);
}, 1000);
Ten seconds later the console outputs:
Stream active: true
Stream active: true
Stream active: true
Stream active: true
Stream active: true
Stream active: true
Stream active: true
Stream active: true
Stream active: true
Data available
Recording stopped
Stream active: false
...
The experiment shows that locking the screen flips the video stream from "active" to "inactive", and the biggest problem is that after unlocking, the state does not return to active on its own: the developer must manually call navigator.mediaDevices.getUserMedia again to reacquire the video stream. So how do you know that the user locked the screen? Here is one trick I found:
// When starting the MediaRecorder, if it throws, reacquire the video stream
try {
  this.recorder.start(5000);
} catch (e) {
  this._combinedSource = await this.getSystemVideoMediaStream();
  this.recorder = new MediaRecorder(this._combinedSource, {
    mimeType: VIDEO_RECORD_FORMAT,
    videoBitsPerSecond: 1.5e6,
  });
  this.recorder.start(5000);
}
The second pit: the above covers pure video recording only. When recording audio + video together, the audio stream stays active through a lock screen while only the video stream goes inactive. Since not all tracks become inactive, MediaRecorder fires neither ondataavailable nor onstop: recording continues, but the video is black. That is the big trap of this problem. So how do we make a lock screen with an audio + video stream still trigger ondataavailable and onstop? Here is the approach I use:
// If the video stream goes inactive, stop the audio tracks.
// If the audio stream goes inactive, stop the video tracks
// (should not happen, but just in case).
const startStreamActivityChecker = () =>
  window.setInterval(() => {
    if (this._videoSource?.active === false) {
      this._audioSource?.getTracks().forEach(track => track.stop());
    }
    if (this._audioSource?.active === false) {
      this._videoSource?.getTracks().forEach(track => track.stop());
    }
  }, 1000);
Missing video duration and a non-seekable timeline
- Issue1: MediaRecorder output should have Cues element -bugs.chromium.org/p/chromium/…
- Issue2: Videos created with MediaRecorder API are not seekable / scrubbable -bugs.chromium.org/p/chromium/…
- Issue3: No duration or seeking cue for opus audio produced with mediarecoder -bugs.chromium.org/p/chromium/…
- Issue4: MediaRecorder: consider producing seekable WebM files -bugs.chromium.org/p/chromium/…
To my mind these two problems are the biggest design flaw of the MediaRecorder API. A WebM file stores its duration and seeking information in the file header, so while recording is still in progress the header's "Duration" is necessarily an ever-growing unknown. Because MediaRecorder can emit small Blob chunks on a timeslice, the first Blob's header cannot possibly contain the Duration field, and the seeking elements "SeekHead", "Seek", "SeekID", "SeekPosition", "Cues", "CueTime", "CueTrack", "CueClusterPosition", "CueTrackPositions", "CuePoint" are missing for the same reason. And since Blob was designed from the start as an immutable type, the final recorded file ends up without a Duration. Chromium has marked this issue "won't fix" and recommends that developers look to the community for a solution.
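For a concrete feel of why the header cannot simply be patched in place: every EBML element's id and size is a variable-length integer (VINT) whose first byte encodes its own width, and a live recording writes the Segment size as the all-ones "unknown size" VINT. A minimal decoder sketch, written from the EBML format description rather than taken from any of the libraries discussed below:

```typescript
// Decode one EBML variable-length integer (VINT) starting at `pos`.
// The number of leading zero bits in the first byte gives the total
// byte length; the remaining bits (marker bit stripped) are the value.
function readVint(buf: Uint8Array, pos: number): { length: number; value: number } {
  const first = buf[pos];
  let length = 1;
  let mask = 0x80;
  while (length <= 8 && !(first & mask)) {
    mask >>= 1;
    length++;
  }
  if (length > 8) throw new Error('invalid VINT');
  let value = first & (mask - 1); // strip the length-marker bit
  for (let i = 1; i < length; i++) {
    value = value * 256 + buf[pos + i];
  }
  return { length, value };
}
```

A one-byte VINT with all value bits set (0xFF) is the "unknown size" marker; repairing the file means replacing such open-ended sizes and inserting the missing elements, which shifts every following byte.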
Fixing with ffmpeg
One option within the community is to have ffmpeg "copy" the file back out, for example:
ffmpeg -i without_meta.webm -vcodec copy -acodec copy with_meta.webm
ffmpeg automatically computes the Duration and the seeking headers. The biggest problem with this approach: bundling ffmpeg in the client means operating on files directly, writing cross-platform handling, and exposing files locally; doing it server-side adds processing steps and time for every file. Workable, but not the final answer we were after.
Fixing with the npm library fix-webm-duration
This is another community solution: parse the WebM header, track the recording duration manually on the front end, and after parsing write that Duration into the WebM header. But this does not restore the missing seeking headers needed for timeline dragging, and it relies on a manually tracked duration, so the repair is limited.
Fixing with fix-webm-metainfo, built on ts-ebml
This is the final solution: fully parse the WebM EBML and Segment headers and compute the Duration and the seeking headers from the actual SimpleBlock sizes. We parse the WebM as EBML. Taking a file straight out of MediaRecorder as an example, its structure looks like this:
m 0 EBML
u 1 EBMLVersion 1
u 1 EBMLReadVersion 1
u 1 EBMLMaxIDLength 4
u 1 EBMLMaxSizeLength 8
s 1 DocType webm
u 1 DocTypeVersion 4
u 1 DocTypeReadVersion 2
m 0 Segment
m 1 Info segmentContentStartPos, all CueClusterPositions provided in info.cues will be relative to here and will need adjusted
u 2 TimecodeScale 1000000
8 2 MuxingApp Chrome
8 2 WritingApp Chrome
m 1 Tracks tracksStartPos
m 2 TrackEntry
u 3 TrackNumber 1
u 3 TrackUID 31790271978391090
u 3 TrackType 2
s 3 CodecID A_OPUS
b 3 CodecPrivate <Buffer 19>
m 3 Audio
f 4 SamplingFrequency 48000
u 4 Channels 1
m 2 TrackEntry
u 3 TrackNumber 2
u 3 TrackUID 24051277436254136
u 3 TrackType 1
s 3 CodecID V_VP9
m 3 Video
u 4 PixelWidth 1200
u 4 PixelHeight 900
m 1 Cluster clusterStartPos
u 2 Timecode 0
b 2 SimpleBlock track:2 timecode:0 keyframe:true invisible:false discardable:false lacing:1
According to the description on the WebM official site (link), a proper WebM header should parse like this:
m 0 EBML
u 1 EBMLVersion 1
u 1 EBMLReadVersion 1
u 1 EBMLMaxIDLength 4
u 1 EBMLMaxSizeLength 8
s 1 DocType webm
u 1 DocTypeVersion 4
u 1 DocTypeReadVersion 2
m 0 Segment
// This part is missing
m 1 SeekHead -> This is SeekPosition 0, so all SeekPositions can be calculated as (bytePos - segmentContentStartPos), which is 44 in this case
m 2 Seek
b 3 SeekID -> Buffer([0x15, 0x49, 0xA9, 0x66]) Info
u 3 SeekPosition -> infoStartPos =
m 2 Seek
b 3 SeekID -> Buffer([0x16, 0x54, 0xAE, 0x6B]) Tracks
u 3 SeekPosition { tracksStartPos }
m 2 Seek
b 3 SeekID -> Buffer([0x1C, 0x53, 0xBB, 0x6B]) Cues
u 3 SeekPosition { cuesStartPos }
m 1 Info
// This part is missing
f 2 Duration 32480 -> overwrite, or insert if it doesn't exist
u 2 TimecodeScale 1000000
8 2 MuxingApp Chrome
8 2 WritingApp Chrome
m 1 Tracks
m 2 TrackEntry
u 3 TrackNumber 1
u 3 TrackUID 31790271978391090
u 3 TrackType 2
s 3 CodecID A_OPUS
b 3 CodecPrivate <Buffer 19>
m 3 Audio
f 4 SamplingFrequency 48000
u 4 Channels 1
m 2 TrackEntry
u 3 TrackNumber 2
u 3 TrackUID 24051277436254136
u 3 TrackType 1
s 3 CodecID V_VP9
m 3 Video
u 4 PixelWidth 1200
u 4 PixelHeight 900
// This part is missing
m 1 Cues -> cuesStartPos
m 2 CuePoint
u 3 CueTime 0
m 3 CueTrackPositions
u 4 CueTrack 1
u 4 CueClusterPosition 3911
m 2 CuePoint
u 3 CueTime 600
m 3 CueTrackPositions
u 4 CueTrack 1
u 4 CueClusterPosition 3911
m 1 Cluster
u 2 Timecode 0
b 2 SimpleBlock track:2 timecode:0 keyframe:true invisible:false discardable:false lacing:1
As you can see, we only need to repair the missing Duration, SeekHead, and Cues to solve our problems. The overall flow is as follows: ts-ebml is a community open-source library that, on top of its EBML Decoder and Reader (which convert between ArrayBuffer and parsed EBML), adds a WebM repair feature. However, it cannot handle video files larger than 2GB, because it converts the whole Blob directly into a single ArrayBuffer, and an ArrayBuffer maxes out at 2046 * 1024 * 1024 bytes. For that reason I published an early npm package called fix-webm-metainfo, which slices the data and works on an array of buffers instead of a single buffer, solving the problem:
import { tools, Reader } from 'ts-ebml';
import LargeFileDecorder from './decoder';
// Early implementation of fix-webm-metainfo
async function fixWebmMetaInfo(blob: Blob): Promise<Blob> {
  // Works around ts-ebml's inability to handle video files larger than 2GB
  const decoder = new LargeFileDecorder();
  const reader = new Reader();
  reader.logging = false;
  const bufSlices: ArrayBuffer[] = [];
  // A Uint8Array or ArrayBuffer supports at most 2046 * 1024 * 1024 bytes
  const sliceLength = 1 * 1024 * 1024 * 1024;
  for (let i = 0; i < blob.size; i = i + sliceLength) {
    // Slice the Blob and read each slice into an ArrayBuffer
    const bufSlice = await blob.slice(i, Math.min(i + sliceLength, blob.size)).arrayBuffer();
    bufSlices.push(bufSlice);
  }
  // Decode the ArrayBuffers into readable, modifiable EBML elements,
  // then feed them to the reader to compute the Duration and Cues
  decoder.decode(bufSlices).forEach(elm => reader.read(elm));
  // All reads done; finalize the reader
  reader.stop();
  // Rebuild the metadata header with the computed cues and duration,
  // and serialize it back to an ArrayBuffer
  const refinedMetadataBuf = tools.makeMetadataSeekable(reader.metadatas, reader.duration, reader.cues);
  const firstPartSlice = bufSlices.shift() as ArrayBuffer;
  const firstPartSliceWithoutMetadata = firstPartSlice.slice(reader.metadataSize);
  // Reassemble everything back into a Blob
  return new Blob([refinedMetadataBuf, firstPartSliceWithoutMetadata, ...bufSlices], { type: blob.type });
}
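The 1 GB slicing loop above generalizes to a tiny helper; extracting it makes the boundary arithmetic easy to verify in isolation (the function name is mine):

```typescript
// Compute the [start, end) byte ranges that cover `total` bytes in
// chunks of at most `chunkSize`, mirroring the Blob.slice loop above.
function sliceRanges(total: number, chunkSize: number): Array<[number, number]> {
  const ranges: Array<[number, number]> = [];
  for (let i = 0; i < total; i += chunkSize) {
    ranges.push([i, Math.min(i + chunkSize, total)]);
  }
  return ranges;
}
```

The last range is allowed to be shorter than chunkSize, which is exactly what Blob.slice with a clamped end offset produces.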
The fix stutters, and the cache is not reused
As recordings grow longer, fix-webm-metainfo does handle the repair of large, long videos, but reading and processing a huge file all at once briefly blocks the rendering process.
Handling it in a Web Worker
A Web Worker is a natural fit for this scenario: without creating an extra process, we create an extra worker thread dedicated to parsing and processing the large video file, leaving the main thread unblocked. Better still, Web Workers can pass ArrayBuffers by reference as Transferable Objects, which makes them the best solution to this problem. First, enable nodeIntegrationInWorker in Electron's BrowserWindow:
webPreferences: {
...
nodeIntegration: true,
nodeIntegrationInWorker: true,
},
Then write the Worker code:
// index.worker.ts
import { tools, Reader } from 'ts-ebml';
import LargeFileDecoder from './decoder';
export interface IWorkerPostData {
type: 'transfer' | 'close';
data?: ArrayBuffer;
}
export interface IWorkerEchoData {
buffer: ArrayBuffer;
size: number;
duration: number;
}
const bufSlices: ArrayBuffer[] = [];
async function fixWebm(): Promise<void> {
const decoder = new LargeFileDecoder();
const reader = new Reader();
reader.logging = false;
decoder.decode(bufSlices).forEach(elm => reader.read(elm));
reader.stop();
const refinedMetadataBuf = tools.makeMetadataSeekable(reader.metadatas, reader.duration, reader.cues);
// Post the computed result back to the parent thread
self.postMessage({
buffer: refinedMetadataBuf,
size: reader.metadataSize,
duration: reader.duration
} as IWorkerEchoData, [refinedMetadataBuf]);
}
self.addEventListener('message', (e: MessageEvent<IWorkerPostData>) => {
switch (e.data.type) {
case 'transfer':
// Store the transferred ArrayBuffer
if (e.data.data) bufSlices.push(e.data.data);
break;
case 'close':
// Repair the WebM, then close the Worker
fixWebm().catch(self.postMessage).finally(() => self.close());
break;
default:
break;
}
});
And on the parent (main-thread) side:
import FixWebmWorker from './worker/index.worker';
import type { IWorkerPostData, IWorkerEchoData } from './worker/index.worker';
async function fixWebmMetaInfo(blob: Blob): Promise<Blob> {
// Create the Worker
const fixWebmWorker: Worker = new FixWebmWorker();
return new Promise(async (resolve, reject) => {
fixWebmWorker.addEventListener('message', (event: MessageEvent<IWorkerEchoData>) => {
if (Object.getPrototypeOf(event.data)?.name === 'Error') {
return reject(event.data);
}
let refinedMetadataBlob = new Blob([event.data.buffer], { type: blob.type });
// Shut down the Worker manually
fixWebmWorker.terminate();
let body: Blob;
let firstPartBlobSlice = blobSlices.shift();
body = firstPartBlobSlice.slice(event.data.size);
firstPartBlobSlice = null;
// Note: besides using a Web Worker, this version differs from the earlier
// scheme in that only the metadata ArrayBuffer becomes a new Blob; the body
// reuses the original Blob slices instead of freshly read ArrayBuffers.
// That saves a full file write and fixes the memory leak caused by
// unreleased references — the single most decisive step in this article.
let blobFinal = new Blob([refinedMetadataBlob, body, ...blobSlices], { type: blob.type });
refinedMetadataBlob = null;
body = null;
blobSlices = [];
resolve(blobFinal);
blobFinal = null;
});
fixWebmWorker.addEventListener('error', (event: ErrorEvent) => {
blobSlices = [];
reject(event);
});
let blobSlices: Blob[] = [];
let slice: Blob;
const sliceLength = 1 * 1024 * 1024 * 1024;
try {
for (let i = 0; i < blob.size; i = i + sliceLength) {
slice = blob.slice(i, Math.min(i + sliceLength, blob.size));
// Read the slice as an ArrayBuffer
const bufSlice = await slice.arrayBuffer();
// Post it to the Worker, using Transferable Objects for performance
fixWebmWorker.postMessage({
type: 'transfer',
data: bufSlice
} as IWorkerPostData, [bufSlice]);
blobSlices.push(slice);
slice = null;
}
// Signal that all slices have been sent
fixWebmWorker.postMessage({
type: 'close',
});
} catch (e) {
blobSlices = [];
slice = null;
reject(new Error(`[fix webm] read buffer failed: ${e?.message || e}`));
}
});
}
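The `postMessage(bufSlice, [bufSlice])` calls above rely on the transfer semantics of Transferable Objects: after a transfer, the sender's buffer is detached, so the bytes move between threads without being copied. The snippet below uses `structuredClone` purely to demonstrate that detach behavior outside a Worker — it is an illustration, not part of the recording code:

```typescript
// Transferring an ArrayBuffer detaches it on the sending side: the data is
// moved, not copied — which is why it is so cheap for gigabyte-sized slices.
const buf = new ArrayBuffer(8);
const moved = structuredClone(buf, { transfer: [buf] });

console.log(moved.byteLength); // 8 — the receiver owns the bytes now
console.log(buf.byteLength);   // 0 — the original is detached
```

This is also why the parent above pushes the `Blob` slice (not the buffer) into `blobSlices`: after the transfer, the `ArrayBuffer` on the sending side is unusable.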
By watching the paging files in the blob_storage temporary directory while the early fix-webm-metainfo repair ran, we saw clear signs of memory not being released and of duplicate files being generated; after removing the fix-webm logic, the problem no longer reproduced. This showed that fix-webm-metainfo, as it stood, neither reused the file cache nor dropped its file references (more on this below).
File cache reuse
So, when converting between ArrayBuffer and Blob, is there a lossless approach that also reuses the file cache? This is exactly why later iterations of fix-webm-metainfo build the repaired Blob by reusing existing Blobs, instead of creating it directly from ArrayBuffers. Observe the difference between the following two ways of generating a Blob:
// First create a Blob
const a = new Blob([new ArrayBuffer(10000000)]);
// Read its buffer
const buffer = await a.arrayBuffer();
// Way 1: how much memory will this actually use?
const b = new Blob([buffer]);
const c = new Blob([buffer]);
const d = new Blob([buffer]);
const e = new Blob([buffer]);
const f = new Blob([buffer]);
const g = new Blob([buffer]);
const h = new Blob([buffer]);
// Way 2: and what about this?
const i = new Blob([a]);
const j = new Blob([a]);
const k = new Blob([a]);
const l = new Blob([a]);
const m = new Blob([a]);
const n = new Blob([a]);
const o = new Blob([a]);
Can you guess the answer? Blob has a mechanism for reusing the local file cache: way 1 produces 7 identical copies in memory or on disk, while way 2 produces no extra files at all — i through o all reuse a's storage, so only a single copy exists in memory or on disk. Now, the webm repair essentially just rewrites bytes in the file header — does that approach reuse the same local file cache? Yes: the webm before and after the repair differ only in the head, and the bulk of the output is built from blobs sliced out of the same original Blob, so the storage is still shared.
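A minimal sketch of the reuse pattern, runnable anywhere `Blob` exists (browsers, or Node 18+). We cannot observe Chromium's file cache from script, but the construction pattern is the point: the repaired file should be assembled from slices of the original Blob, with only the small rebuilt header as fresh bytes. The sizes below are illustrative:

```typescript
// Rebuild a "repaired" file by reusing slices of the original Blob. Only
// `newHeader` is new data; `body` merely references the original storage.
const original = new Blob([new Uint8Array(1024 * 1024)]); // stand-in recording
const headerSize = 16;                                    // illustrative size

const newHeader = new Uint8Array(headerSize).fill(0xff);  // the patched head
const body = original.slice(headerSize, original.size);   // reused, not copied

const repaired = new Blob([newHeader, body], { type: original.type });
console.log(repaired.size === original.size); // true — same length, shared body
```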
Main process memory leak problem
Using Electron's official process.getProcessMemoryInfo() API, we added memory monitoring for both the main process and the renderer processes. It showed that for users who recorded their screen, the main process often reached 2 GB of memory, while for users who never recorded it stayed around 80 MB — which looked exactly like a memory leak. Before discussing that "leak", we have to mention how Blob is implemented. According to Chromium's official notes on the Blob implementation (slides), no matter how a Blob is created in a Renderer process, its data is ultimately transferred cross-process to the Browser (main) process. So although MediaRecorder records inside the renderer process, the moment the buffered data is emitted as a Blob (i.e. when ondataavailable fires), a cross-process transfer occurs. That is the root cause of recording in the "renderer process" while the "main process" footprint keeps growing. More specifically, though: how is a Blob transmitted? Knowing only that the binary data crosses over to the main process is not enough. If the file is large, what happens when the main process runs out of memory? How does Chromium manage and store the binary data behind a Blob?
Blob Transmission mode of
To find out, we read Chromium's BlobMemoryController (code) and added LOG(INFO) statements to observe:
// Purpose: decide the transport strategy
// storage/browser/blob/blob_memory_controller.cc
BlobMemoryController::Strategy BlobMemoryController::DetermineStrategy(
size_t preemptive_transported_bytes,
uint64_t total_transportation_bytes) const {
// A zero-byte Blob needs no transport
if (total_transportation_bytes == 0)
return Strategy::NONE_NEEDED;
// If the Blob exceeds both available memory and available disk space, fail immediately
if (!CanReserveQuota(total_transportation_bytes))
return Strategy::TOO_LARGE;
// Can be ignored for ordinary calls
if (preemptive_transported_bytes == total_transportation_bytes &&
pending_memory_quota_tasks_.empty() &&
preemptive_transported_bytes <= GetAvailableMemoryForBlobs()) {
return Strategy::NONE_NEEDED;
}
// When Chromium is compiled with file paging (the default) and override_file_transport_min_size is configured
if (UNLIKELY(limits_.override_file_transport_min_size > 0) &&
file_paging_enabled_ &&
total_transportation_bytes >= limits_.override_file_transport_min_size) {
return Strategy::FILE;
}
// Blobs no larger than 0.25 MB go straight over IPC
if (total_transportation_bytes <= limits_.max_ipc_memory_size)
return Strategy::IPC;
// File paging compiled in (the default),
// the Blob fits within the available disk space,
// and it exceeds the in-memory paging threshold
if (file_paging_enabled_ &&
total_transportation_bytes <= GetAvailableFileSpaceForBlobs() &&
total_transportation_bytes > limits_.memory_limit_before_paging()) {
return Strategy::FILE;
}
// Default strategy: shared memory, handed from the renderer to the main process
return Strategy::SHARED_MEMORY;
}
bool BlobMemoryController::CanReserveQuota(uint64_t size) const {
// Check both the available memory space and the available disk space
return size <= GetAvailableMemoryForBlobs() ||
size <= GetAvailableFileSpaceForBlobs();
}
// If current usage is below the cap (2 GB on x64: max_blob_in_memory_space = 2 * 1024 * 1024 * 1024),
// return how much memory quota remains
size_t BlobMemoryController::GetAvailableMemoryForBlobs() const {
if (limits_.max_blob_in_memory_space < memory_usage())
return 0;
return limits_.max_blob_in_memory_space - memory_usage();
}
// Compute how much disk quota remains
uint64_t BlobMemoryController::GetAvailableFileSpaceForBlobs() const {
if (!file_paging_enabled_)
return 0;
uint64_t total_disk_used = disk_used_;
if (in_flight_memory_used_ < pending_memory_quota_total_size_) {
total_disk_used +=
pending_memory_quota_total_size_ - in_flight_memory_used_;
}
if (limits_.effective_max_disk_space < total_disk_used)
return 0;
// Effective maximum disk space minus the disk already used
return limits_.effective_max_disk_space - total_disk_used;
}
We can see that Blob transport and storage fall into three basic strategies: "file", "shared memory", and "IPC":
- Files no larger than 0.25 MB are sent over "IPC".
- When the available memory space exceeds the file size, "shared memory" is preferred.
- When the available memory space is insufficient but the available disk space suffices, the "file" strategy is used.
- When neither the available memory space nor the available disk space suffices, the Blob is not transmitted at all, and the renderer eventually surfaces errors such as "File not readable".
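The branching can be restated compactly. This TypeScript sketch is only a readable summary of `DetermineStrategy` — the threshold constant mirrors `limits_.max_ipc_memory_size`, and the real logic also handles preemptive transport and the compile-time override:

```typescript
// Summary of Chromium's Blob transport decision (illustrative, not the API).
type Strategy = 'NONE_NEEDED' | 'IPC' | 'SHARED_MEMORY' | 'FILE' | 'TOO_LARGE';

const MAX_IPC_BYTES = 0.25 * 1024 * 1024; // limits_.max_ipc_memory_size

function determineStrategy(
  blobBytes: number,
  availableMemory: number,
  availableDisk: number,
): Strategy {
  if (blobBytes === 0) return 'NONE_NEEDED';                 // nothing to send
  if (blobBytes > availableMemory && blobBytes > availableDisk) return 'TOO_LARGE';
  if (blobBytes <= MAX_IPC_BYTES) return 'IPC';              // tiny: inline IPC
  if (blobBytes > availableMemory && blobBytes <= availableDisk) return 'FILE';
  return 'SHARED_MEMORY';                                    // default path
}

const GB = 1024 ** 3;
console.log(determineStrategy(100, 2 * GB, 12 * GB));     // 'IPC'
console.log(determineStrategy(1 * GB, 2 * GB, 12 * GB));  // 'SHARED_MEMORY'
console.log(determineStrategy(6 * GB, 2 * GB, 12 * GB));  // 'FILE'
console.log(determineStrategy(20 * GB, 2 * GB, 12 * GB)); // 'TOO_LARGE'
```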
Maximum storage limit
This raises a question: how are "available memory space" and "available disk space" defined and calculated? And it prompted another thought: if the available memory space is very large, what problems does that cause? With these questions in mind, we kept digging into the Chromium implementation:
BlobStorageLimits CalculateBlobStorageLimitsImpl(
const FilePath& storage_dir,
bool disk_enabled,
base::Optional<int64_t> optional_memory_size_for_testing) {
int64_t disk_size = 0ull;
int64_t memory_size = optional_memory_size_for_testing
? optional_memory_size_for_testing.value()
: base::SysInfo::AmountOfPhysicalMemory();
if (disk_enabled && CreateBlobDirectory(storage_dir) == base::File::FILE_OK)
disk_size = base::SysInfo::AmountOfTotalDiskSpace(storage_dir);
BlobStorageLimits limits;
if (memory_size > 0) {
#if !defined(OS_CHROMEOS) && !defined(OS_ANDROID) && defined(ARCH_CPU_64_BITS)
// Not ChromeOS, not Android, and a 64-bit architecture: the maximum available memory is 2 GB
constexpr size_t kTwoGigabytes = 2ull * 1024 * 1024 * 1024;
limits.max_blob_in_memory_space = kTwoGigabytes;
#elif defined(OS_ANDROID)
// Android: the maximum available memory is 1/100 of physical memory
limits.max_blob_in_memory_space = static_cast<size_t>(memory_size / 100ll);
#else
// Other platforms or architectures: 1/5 of physical memory
limits.max_blob_in_memory_space = static_cast<size_t>(memory_size / 5ll);
#endif
}
// Clamp: the maximum available memory must be at least the minimum page file size
if (limits.max_blob_in_memory_space < limits.min_page_file_size)
limits.max_blob_in_memory_space = limits.min_page_file_size;
if (disk_size >= 0) {
#if defined(OS_CHROMEOS)
// ChromeOS: the maximum available disk space is 1/2 of the disk size
limits.desired_max_disk_space = static_cast<uint64_t>(disk_size / 2ll);
#elif defined(OS_ANDROID)
// Android: 3/50 of the disk size
limits.desired_max_disk_space = static_cast<uint64_t>(3ll * disk_size / 50);
#else
// Other platforms or architectures: 1/10 of the disk size
limits.desired_max_disk_space = static_cast<uint64_t>(disk_size / 10);
#endif
}
if (disk_enabled) {
UMA_HISTOGRAM_COUNTS_1M("Storage.Blob.MaxDiskSpace2",
limits.desired_max_disk_space / kMegabyte);
}
limits.effective_max_disk_space = limits.desired_max_disk_space;
CHECK(limits.IsValid());
return limits;
}
To summarize, the two quotas depend on OS, architecture, memory size, and disk size:
Maximum available memory
- x64 architecture, and the platform is neither Chrome OS nor Android: 2 GB
- Android: physical memory size of the device / 100
- Other platforms or architectures (e.g. macOS arm64, Chrome OS): physical memory size of the device / 5
Maximum available disk space
- Chrome OS: size of the logical disk holding the partition the software lives on / 2
- Android: that logical disk size * 3/50
- Other platforms or architectures: that logical disk size / 10
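To make the numbers concrete, here is a tiny calculation for a typical x64 desktop. The helper is hypothetical (our own naming, not Chromium's), and it just applies the rules above:

```typescript
// Blob storage quotas for an x64 desktop, per the rules above (sizes in GB).
function desktopBlobQuotasGB(diskGB: number) {
  const maxInMemoryGB = 2;          // fixed 2 GB on x64 desktop
  const maxOnDiskGB = diskGB / 10;  // 1/10 of the disk holding the profile
  return { maxInMemoryGB, maxOnDiskGB };
}

// A machine with a 128 GB SSD gets only 12.8 GB of Blob disk quota —
// the cap a long recording runs into, however much space is actually free.
console.log(desktopBlobQuotasGB(128)); // { maxInMemoryGB: 2, maxOnDiskGB: 12.8 }
```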
What do these numbers mean? We found three problems:
- Problem 1: on x64 the maximum available memory is 2 GB, which is a lot. A screen recording is content the user rarely accesses, and the machine may only have 8 GB of RAM — keeping 2 GB occupied for nothing is a huge waste.
- Problem 2: the maximum available memory differs between x64 and non-x64 architectures.
- Problem 3: the maximum available disk space is only 1/10 of the physical disk. Take a 128 GB SSD: even with all 128 GB assigned to the C: drive, the quota is only 12.8 GB — and that is shared with every other Blob on disk. Even if the user has 100 GB free on C:, a recording still cannot escape the 12.8 GB cap.
So the truth came out: the main process has no "memory leak" — it is simply designed that way.
Modifying Chromium
So if we shrink the maximum memory space and enlarge the maximum available disk space, can we fix the main process's memory footprint and lift the recording size limit at the same time? Yes — and the modification is easy:
// If the amount of physical memory is known
if (memory_size > 0) {
#if !defined(OS_CHROMEOS) && !defined(OS_ANDROID)
// Drop the 64-bit check so that 32-bit Windows and arm64 macOS share the same logic,
// and lower the maximum in-memory recording space from 2 GB to 200 MB
constexpr size_t kTwoHundredMegabytes = 2ull * 100 * 1024 * 1024;
limits.max_blob_in_memory_space = kTwoHundredMegabytes;
#elif defined(OS_ANDROID)
limits.max_blob_in_memory_space = static_cast<size_t>(memory_size / 100ll);
#else
limits.max_blob_in_memory_space = static_cast<size_t>(memory_size / 5ll);
#endif
}
if (limits.max_blob_in_memory_space < limits.min_page_file_size)
limits.max_blob_in_memory_space = limits.min_page_file_size;
if (disk_size >= 0) {
#if defined(OS_CHROMEOS)
limits.desired_max_disk_space = static_cast<uint64_t>(disk_size / 2ll);
#elif defined(OS_ANDROID)
limits.desired_max_disk_space = static_cast<uint64_t>(3ll * disk_size / 50);
#else
// Remove the blob_storage size cap for recordings: raise the maximum disk space from 1/10 of the disk to the whole disk
limits.desired_max_disk_space = static_cast<uint64_t>(disk_size);
#endif
}
If you have similar needs, this modification can be reused directly and has no side effects.
Buffer memory release problem
With the understanding of the Blob implementation above, we can now map out the recording feature's entire transmission chain. The fix for the buffer memory release problem should be easy to guess: until MediaRecorder is stopped, all recorded data lives in the Renderer process, so memory usage grows abnormally — and the longer the recording, the larger this share becomes. The solution is simple: pass a timeslice, or call requestData() periodically:
const recorder = new MediaRecorder(combinedSource, {
mimeType: 'video/webm;codecs=vp9',
videoBitsPerSecond: 1.5e6,
});
const timeslice = 5000;
const fileBits: Blob[] = [];
recorder.ondataavailable = (event: BlobEvent) => {
fileBits.push(event.data as Blob);
}
recorder.onstop = () => {
const videoFile = new Blob(fileBits, { type: 'video/webm;codecs=vp9' });
}
// Option 1: pass a timeslice when starting, so that every timeslice milliseconds ondataavailable fires automatically, emitting and emptying the buffer (crucial)
recorder.start(timeslice);
// Option 2: empty the buffer manually with requestData() during recording
recorder.start();
setInterval(() => recorder.requestData(), timeslice);
Renderer process memory leaks
While writing this kind of code, small oversights can easily introduce memory leaks. How do we avoid them? In short, always follow these principles:
- Clear every reference to a Blob as soon as it is no longer needed.
- Prefer let for variables pointing at a Blob and null them out manually, so that references cannot linger.
// Example 1
const a = new Map();
a.set('key', {
blob: new Blob(['1']) // Blob #1
});
// Release it manually
a.get('key').blob = null;
// Example 2
let b = new Blob([]);
doSomething(b);
// Release it manually
b = null;
Observing references via chrome://blob-internals
If you want to debug by watching how Blob references are counted, visit chrome://blob-internals/ directly. As the example above shows, every Blob has a unique UUID; by watching the reference count of a given UUID's Blob, we can debug Blob leaks with relative ease.
Capturing heap snapshots with the Profiler
You can also capture heap snapshots with the DevTools memory Profiler.
Observing the blob_storage directory
If you are able to modify Chromium, you can change the "maximum available memory" to a small value (say 10 MB, to force Blobs onto disk via the file strategy) and then directly watch paging files appear in the blob_storage directory. Blob data is stored on the local disk as paging files whose size is dynamic — at least 5 MB and at most 100 MB. The directory is emptied every time the application closes, so keep the application open while observing. This is the most intuitive and practical method so far: generally, if the user keeps the app open and your code leaks, you will see a large number of paging files accumulate in this directory without ever being released.
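For a lightly automated variant of this observation, a small Node helper can poll the directory's total size while you exercise the recording flow. The blob_storage path varies per platform and per Electron userData directory, so treat the path you pass in as an assumption:

```typescript
import * as fs from 'fs';
import * as path from 'path';

// Sum the bytes of everything under a directory (e.g. your app's
// <userData>/blob_storage). If this total keeps growing after recordings
// have ended and uploaded, some Blob reference is still alive somewhere.
function directorySizeBytes(dir: string): number {
  let total = 0;
  for (const name of fs.readdirSync(dir)) {
    const entry = path.join(dir, name);
    const stat = fs.statSync(entry);
    total += stat.isDirectory() ? directorySizeBytes(entry) : stat.size;
  }
  return total;
}

// Example: poll every 10 s while the app is running.
// setInterval(() => console.log(directorySizeBytes(blobStorageDir)), 10_000);
```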
Follow-up performance optimization
Although all the repair problems above have been solved, one issue remains: the repair step itself uses a lot of memory. I will keep maintaining the fix-webm-metainfo library and solve this by streaming the ArrayBuffers incrementally rather than transferring them in full.
Follow us at 「 Byte front end ByteFE 」
Resume delivery contact email: 「[email protected]」
Copyright notice
Author: [Byte front end]. Please include the original link when reprinting, thank you.
https://en.qdmana.com/2021/08/20210827041225048u.html