
[Graphics Notes Series] The Rendering Pipeline

2021-08-26 00:21:37 Studying hard

Why learn the rendering pipeline?

Before I studied computer graphics, I was curious about how three-dimensional objects from the real world could be shown on a computer's two-dimensional screen. At the time my idea was very simple, as shown in the figure below: determine the input and output first, then study the algorithm that processes one into the other.


However, the input and output here are problematic:

  • First, "the three-dimensional coordinates of the object" is too vague. What is the basic unit used to describe an object: points, lines, or surfaces? What exactly is the data to be input? I still remember a phrase from class that impressed me: "points form lines, lines form surfaces, surfaces form solids." So you only need to record an object's key points, and the relationships between those points can reconstruct the object; in geometry, that relationship is simply whether two points are connected by an edge.
  • Second, geometric features alone are not enough to describe a real-world three-dimensional object. The visual effect produced by the interaction between the object's physical properties and light must also be key input data, and the corresponding output should include the color of each screen pixel, such as its hue, lightness, and saturation.
  • Third, lighting and viewing information are missing. As the second point shows, the true appearance of an object depends on how light interacts with its physical properties, so the light sources cannot be omitted. Moreover, in the real world the human eye's field of view is limited: we always look from one place, in a certain direction, at a limited range of objects, so we also cannot omit the observer's viewpoint.
  • Fourth, the description of the output data is too vague. What exactly is output? If it were only a simple conversion of point coordinates, a plain mapping would do. But to turn points into lines and points into surfaces, we must fill in the screen pixels that correspond to them, which raises the problem of sampling continuous lines and surfaces with discrete screen pixels, for example representing two points and the line through them with discrete pixels. The real output, then, is the set of screen pixel coordinates describing the image of the object as the observer sees it. After these corrections, my idea becomes the following.

[Figure: revised input/output diagram]

After learning some graphics, the inputs and outputs above map onto concrete concepts, as shown in the figure below. We use the object's vertex coordinates as the key points that build the object. The organization of those vertices is moved into the algorithmic processing stage and called primitive assembly: vertices may be organized into triangles, into line segments, or treated as independent points. A material represents the collection of physical attributes that govern how the object interacts with light. We also add a camera to the scene as the observer, and add lighting so the object's color can be computed. The output is stored in the frame buffer (a "frame" commonly means the picture at one instant): the frame buffer's two-dimensional index corresponds to a pixel's screen coordinates, and the stored value is, naturally, color information in RGBA form.

[Figure: inputs and outputs mapped onto graphics concepts]
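To make the frame-buffer idea above concrete, here is a minimal sketch in Python. The class and method names (`FrameBuffer`, `set_pixel`) are my own illustrative choices, not any real graphics API: a 2D grid indexed by screen coordinates, storing RGBA values.

```python
# A minimal frame-buffer sketch: a 2D grid indexed by (x, y) screen
# coordinates, storing RGBA tuples. Names here are illustrative only.

class FrameBuffer:
    def __init__(self, width, height, clear_color=(0, 0, 0, 255)):
        self.width, self.height = width, height
        # Row-major grid of RGBA tuples; pixels[y][x] is one pixel.
        self.pixels = [[clear_color] * width for _ in range(height)]

    def set_pixel(self, x, y, rgba):
        # Ignore writes outside the screen, like a real rasterizer would.
        if 0 <= x < self.width and 0 <= y < self.height:
            self.pixels[y][x] = rgba

    def get_pixel(self, x, y):
        return self.pixels[y][x]

fb = FrameBuffer(4, 3)
fb.set_pixel(1, 2, (255, 0, 0, 255))   # write one red pixel
```

Displaying a frame then amounts to reading every `(x, y)` cell of this grid out to the screen.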

With all that said, we can finally answer why we should learn the rendering pipeline: it is the core algorithmic flow that solves the problem of moving a real-world scene onto a computer screen. Of course, the full problem is much more complex than this, and, limited by computer performance, we more often render simplified models of real-world scenes, or models created artistically. Turning this around, people can also use the computer to create and design three-dimensional works.

What is a rendering pipeline?

After the thinking above, we should already have a sense of what the rendering pipeline is: a processing flow composed of a set of algorithms. Such a flow is not necessarily unique, and different platforms may differ, but not by much; what matters to me is experiencing the train of thought. Based on various materials and my own study, I drew the rendering pipeline diagram below. I will cover the concrete steps of the pipeline in another note; here I focus on the thinking: why is each step needed, and what does it produce?

[Figure: rendering pipeline diagram]

The main purpose of the model transform is to move the model's data into the world coordinate system using a matrix. The model's original coordinates are the ones it had when it was created; when we place it in a scene, the scene may follow a different coordinate system, so a conversion is needed.
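The model transform can be sketched as a 4x4 matrix applied to a vertex in homogeneous coordinates. A minimal example, assuming a model placed in the world by a pure translation (function names are illustrative):

```python
# A minimal model-transform sketch: multiply a homogeneous vertex by a
# 4x4 model matrix to place the model in the world coordinate system.

def mat_vec4(m, v):
    # 4x4 matrix (row-major, list of rows) times a 4-vector.
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def translation(tx, ty, tz):
    return [[1, 0, 0, tx],
            [0, 1, 0, ty],
            [0, 0, 1, tz],
            [0, 0, 0, 1]]

# A vertex at the model-space origin, moved to (5, 0, -2) in world space.
world = mat_vec4(translation(5, 0, -2), [0, 0, 0, 1])   # [5, 0, -2, 1]
```

In practice the model matrix also carries rotation and scale, but the multiplication pattern is the same.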

The main purpose of the view transform is to look at the scene from the camera's position, direction, and field of view. The origin of the coordinate system becomes the camera's location, and the models' coordinates are transformed from the world coordinate system into the coordinate system built with the camera at the origin.
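For the simplest case, a camera that is only translated (no rotation), the view transform is just the inverse translation: moving the camera to `eye` is the same as shifting the whole world by `-eye`, so the camera ends up at the new origin. A sketch under that assumption:

```python
# A minimal view-transform sketch for a translated-only camera:
# translate the world by -eye so the camera sits at the origin.

def view_translation(eye):
    ex, ey, ez = eye
    return [[1, 0, 0, -ex],
            [0, 1, 0, -ey],
            [0, 0, 1, -ez],
            [0, 0, 0, 1]]

def mat_vec4(m, v):
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

# A world-space point seen from a camera placed at (0, 0, 5):
view_space = mat_vec4(view_translation((0, 0, 5)), [1, 2, 5, 1])
# The point that sat exactly at the camera now has z = 0.
```

A full look-at matrix adds a rotation that aligns the camera's viewing direction with an axis, but the idea of re-expressing the world relative to the camera is the same.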

The projection transform is like drawing a sketch with perspective technique: near objects appear large and far objects appear small, which better matches how we actually observe the world.
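The "near big, far small" rule is exactly division by depth. A pinhole-style sketch (not a full projection matrix; `focal` is an illustrative parameter):

```python
# A minimal perspective sketch: on-screen size shrinks in proportion
# to distance, because projected coordinates are divided by depth z.

def project(point, focal=1.0):
    x, y, z = point
    # Doubling z halves the projected coordinates.
    return (focal * x / z, focal * y / z)

near = project((1.0, 1.0, 2.0))   # (0.5, 0.5)
far = project((1.0, 1.0, 4.0))    # (0.25, 0.25) -- same point, farther, smaller
```

A real perspective projection matrix packages this divide (plus near/far plane handling) into the homogeneous w coordinate, which is resolved in the NDC step below.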

The clip coordinate system and homogeneous clipping are not easy to explain briefly; I will come back to them later along with the concrete algorithms. In essence, clipping discards the geometry outside the view frustum and keeps only what lies inside it.

NDC (normalized device coordinate) normalization scales the scene into the cube from (-1, -1, -1) to (1, 1, 1), which makes the subsequent processing convenient.
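In a typical pipeline, landing in that cube is done by the perspective divide: each clip-space vertex (x, y, z, w) is divided by its w component. A minimal sketch:

```python
# A minimal perspective-divide sketch: dividing a clip-space vertex
# (x, y, z, w) by w maps visible geometry into the [-1, 1] NDC cube.

def to_ndc(clip):
    x, y, z, w = clip
    return (x / w, y / w, z / w)

ndc = to_ndc((2.0, -1.0, 0.5, 2.0))   # (1.0, -0.5, 0.25)
```

After this step every visible coordinate is in a fixed, device-independent range, which is what makes the later viewport mapping a simple rescale.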

Back-face culling, as the name suggests, deletes the invisible back faces of objects to optimize processing performance.
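One common way to detect a back face after projection is the winding order of the triangle's screen-space vertices. A sketch, assuming the convention that counter-clockwise winding means front-facing:

```python
# A minimal back-face test sketch: under a CCW-is-front convention, a
# triangle whose projected vertices wind clockwise (signed area <= 0)
# faces away from the camera and can be discarded before rasterization.

def signed_area(a, b, c):
    # Twice the signed area of triangle abc, via the 2D cross product.
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def is_back_face(a, b, c):
    return signed_area(a, b, c) <= 0

front = is_back_face((0, 0), (1, 0), (0, 1))   # CCW -> front face (False)
back = is_back_face((0, 0), (0, 1), (1, 0))    # CW  -> back face  (True)
```

For a closed opaque object this typically discards roughly half of all triangles before any per-pixel work is done.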

The main purpose of the viewport transform is to map the scene from the normalized device cube, (-1, -1, -1) to (1, 1, 1), to the size of your computer screen.
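The x and y part of that mapping is a simple rescale from [-1, 1] to pixel coordinates. A sketch, assuming the common (but not universal) convention that screen y grows downward:

```python
# A minimal viewport-transform sketch: map NDC x, y in [-1, 1] to
# pixel coordinates on a width x height screen (y flipped, assuming
# screen y grows downward).

def viewport(ndc_x, ndc_y, width, height):
    sx = (ndc_x + 1) * 0.5 * width
    sy = (1 - ndc_y) * 0.5 * height
    return (sx, sy)

center = viewport(0.0, 0.0, 800, 600)    # (400.0, 300.0) -- screen center
corner = viewport(-1.0, 1.0, 800, 600)   # (0.0, 0.0)     -- top-left corner
```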

The essence of primitive assembly is deciding how to organize the scene's vertices: into lines, into triangles, and so on. Triangles are the usual choice because the triangle is the smallest planar element; a surface is often split into many triangles so that the algorithms can process it uniformly.
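A common way to express this organization is an index buffer: the vertex list stores each vertex once, and the indices group them into triangles. A minimal sketch:

```python
# A minimal primitive-assembly sketch: an index buffer groups shared
# vertices into triangles, so a quad needs 4 vertices instead of 6.

vertices = [(0, 0), (1, 0), (1, 1), (0, 1)]   # a unit quad
indices = [0, 1, 2, 0, 2, 3]                  # two triangles sharing an edge

def assemble_triangles(vertices, indices):
    return [(vertices[indices[i]],
             vertices[indices[i + 1]],
             vertices[indices[i + 2]])
            for i in range(0, len(indices), 3)]

triangles = assemble_triangles(vertices, indices)   # 2 triangles
```

The same vertex list could instead be read three indices at a time as a line list or one at a time as points; the assembly mode is just a different grouping rule over the same data.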

The purpose of rasterization is to describe the assembled primitives with pixels on the screen. To describe a triangle, for example, we sample not only its border but also sample and fill its interior.
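One standard rasterization technique is the edge-function test: a pixel center is covered when it lies on the same side of all three triangle edges. A brute-force sketch (a real rasterizer would at least restrict the loop to the triangle's bounding box; the sign convention here assumes one consistent winding):

```python
# A minimal rasterization sketch using edge functions: a pixel center
# inside the triangle is on the non-negative side of all three edges,
# which samples both the border and the interior.

def edge(a, b, p):
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def rasterize(a, b, c, width, height):
    covered = []
    for y in range(height):
        for x in range(width):
            p = (x + 0.5, y + 0.5)   # sample at the pixel center
            if edge(a, b, p) >= 0 and edge(b, c, p) >= 0 and edge(c, a, p) >= 0:
                covered.append((x, y))
    return covered

pixels = rasterize((0, 0), (4, 0), (0, 4), 4, 4)   # 10 covered pixels
```

This "sample the pixel centers" step is exactly the discrete-sampling problem raised in the input/output discussion at the start of this note.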

What is fragment shading? This introduces the concept of the fragment (片元). I searched references for a long time without understanding it, and only got it after following along and implementing a simple software rasterizer myself. A fragment is essentially a candidate screen pixel: it is called a fragment rather than a pixel because, between the fragment and the pixel that finally appears on screen, there are still shading and testing stages to pass. Fragment shading, then, can be understood simply as coloring pixels; of course, the color value is computed by algorithms designed to reproduce convincing visual effects.
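One of the simplest such coloring algorithms is Lambertian diffuse shading: a fragment's brightness is the cosine of the angle between its surface normal and the light direction, clamped at zero for back-lit surfaces. A minimal sketch:

```python
import math

# A minimal fragment-shading sketch: Lambert diffuse lighting.
# brightness = max(0, dot(normal, light_direction)), both normalized.

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def lambert(normal, light_dir, base_color):
    n, l = normalize(normal), normalize(light_dir)
    intensity = max(0.0, sum(a * b for a, b in zip(n, l)))
    return tuple(c * intensity for c in base_color)

# Light shining straight along the normal gives full brightness:
lit = lambert((0, 0, 1), (0, 0, 1), (1.0, 0.5, 0.0))    # (1.0, 0.5, 0.0)
# Light from behind the surface gives black:
dark = lambert((0, 0, 1), (0, 0, -1), (1.0, 0.5, 0.0))  # (0.0, 0.0, 0.0)
```

Real shading models add specular highlights, textures, shadows, and so on, but they all run per fragment in this same slot of the pipeline.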

The main purpose of fragment testing is to test and process the fragments: the depth test, the alpha (transparency) test, the stencil test, blending, and so on.
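The depth test is the easiest of these to sketch: each pixel remembers the depth of the nearest fragment written so far, and a new fragment is kept only if it is closer. A minimal version, assuming the common convention that smaller depth means closer:

```python
# A minimal depth-test sketch: a fragment is written only when it is
# closer than what the depth buffer already holds at that pixel
# (smaller depth = closer, one common convention).

def depth_test(depth_buffer, x, y, depth):
    if depth < depth_buffer[y][x]:
        depth_buffer[y][x] = depth
        return True    # fragment passes and will be drawn
    return False       # fragment is hidden and discarded

# Depth buffer cleared to infinity: everything is initially "farther".
zbuf = [[float("inf")] * 2 for _ in range(2)]
first = depth_test(zbuf, 0, 0, 0.7)    # passes: buffer held infinity
behind = depth_test(zbuf, 0, 0, 0.9)   # fails: 0.9 is farther than 0.7
```

This is what lets triangles be drawn in any order while still producing correct occlusion.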

Finally, the fragments that pass these tests are stored in the frame buffer and then displayed by the pixels on the screen, and the process is complete.

References

GAMES101: Introduction to Modern Computer Graphics, Lingqi Yan

Fundamentals of Computer Graphics, Fourth Edition

Tiny renderer or how OpenGL works: software rendering in 500 lines of code

Recommended resources

How to learn computer graphics from scratch
