3D Rendering Pipeline
Reference: Real-Time Rendering, 3rd Edition, Chapters 2-4; OpenGL SuperBible, 6th Edition
Overview: Rendering Pipeline, Modern CG Inside a Desktop, Architecture, Shaders, Tools Stage, Asset Conditioning Stage, The Application Stage, The Geometry Stage, The Rasterizer Stage
https://youtu.be/gxi0l3yqbra
Fundamentals - Rendering Pipeline
The graphics rendering pipeline generates/renders a 2D image given a virtual camera, 3D objects, light sources, shading equations, textures, etc.
Similar to mechanical pipelines such as ski lifts and car assembly lines.
A non-pipelined system divided into n pipeline stages can, ideally, be sped up n times.
The stages of the pipeline are executed in parallel.
Throughput is limited by the slowest stage (the bottleneck).
Fundamentals - From 3D to 2D
A high-level view of our pipeline, hardware and application: 3D Models and Textures (Assets); Application; Input Devices (Mouse, Keyboard, Tablet); Processor; Graphics Processor; Memory; Frame Buffer; Display Device.
From Chapter 1, Interactive Computer Graphics, 5th Edition
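The speedup claim above can be made concrete with a small sketch. This is illustrative Python, not from the slides; the stage names are the pipeline's, but the timings are made-up example values:

```python
# Illustrative sketch: why a pipeline's throughput is limited by its
# slowest stage, and why the ideal speedup over a serial system is n.
stage_times_ms = {"application": 4.0, "geometry": 3.0, "rasterizer": 5.0}

# Non-pipelined: one frame passes through every stage sequentially.
serial_time = sum(stage_times_ms.values())   # 12.0 ms per frame

# Pipelined: all stages work in parallel on different frames, so a new
# frame finishes every time the slowest stage (the bottleneck) finishes.
bottleneck = max(stage_times_ms.values())    # 5.0 ms per frame

print(f"serial:    {1000 / serial_time:.1f} fps")
print(f"pipelined: {1000 / bottleneck:.1f} fps")
print(f"speedup:   {serial_time / bottleneck:.2f}x "
      f"(ideal would be {len(stage_times_ms)}x)")
```

Note the speedup here is 2.4x rather than the ideal 3x: the stages are not equally long, which is exactly the bottleneck effect the slide describes.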
Architecture
Tools Stage (Offline): creation of assets through authoring tools.
Asset Conditioning Stage (Offline): convert assets to a compatible format.
Application Stage: software running on the CPU (drawing API calls, collision detection, animation, physics, etc.).
Geometry Stage: geometry transformations/projections; determines what is drawn, where it is drawn, and whether it should be drawn (typically on the GPU).
Rasterizer Stage: draws the image using the generated data and per-pixel operations (completely on the GPU).
Controlling the Pipeline
Tools Stage (Offline): through authoring tools.
Asset Conditioning Stage (Offline): through exporters / converters.
Application Stage: through a programming language and Application Programming Interfaces (APIs).
Geometry Stage: through Vertex, Tessellation, and Geometry Shaders and various state settings.
Rasterizer Stage: through Fragment Shaders and various state settings.
Requires communication with the graphics hardware! We need a way to control this system!
Fixed Pipeline vs. Programmable Pipeline (diagrams): http://www.khronos.org/opengles/2_x/
Controlling the Pipeline
API (Application Programming Interface): the programmer uses an interface to control the graphics subsystem.
Graphics subsystems can be: a workstation (Quadro card), a desktop or laptop (GeForce / Radeon), a console system (custom graphics chipset), or a mobile phone or tablet (Tegra / Intel).
A standardized subsystem interface increases portability and lets developers ignore the platform (to a limit).
Controlling the Pipeline
The goal of any interface: provide an abstraction layer that separates the application from the graphics subsystem. Your app does not need to know the hardware.
APIs try to strike a balance in abstraction level:
Game engine: high abstraction; potentially needs significant rewrites to work for purposes outside games.
Console games: low abstraction lets designers get maximum performance, but does not port well between consoles. First-generation games are written before developers are familiar with the hardware, and it takes time to understand it.
Controlling the Pipeline
The application invokes commands, which are converted by a driver into commands for the underlying graphics hardware.
The hardware works on the commands as efficiently and quickly as possible.
Commands are queued / partially completed, and multiple stages can be processed in parallel.
Many commands are repetitive tasks (vertex or pixel commands) and independent of one another.
Controlling the Pipeline
GPUs consist of large numbers of small programmable processors (shader cores).
Cores run small programs called shaders.
Individual cores have low throughput and lack advanced processor features.
A GPU can contain hundreds to thousands of cores to perform large amounts of work.
The Six Types of Shaders
Vertex Shaders: enable operations to be performed per vertex.
Tessellation Control Shader: determines the level of tessellation and generates data for the tessellation engine.
Tessellation Evaluation Shader: operates on the output vertices of the tessellation engine.
Geometry Shaders: allow the creation and destruction of geometric primitives (points, lines, triangles) at run-time.
Pixel Shaders: enable operations to be performed per fragment.
Compute Shader: an independent pipeline that operates on work items.
Effects
The shaders require matching incoming and outgoing data between the stages; shaders do not exist in a vacuum.
The various shader stages can be utilized to create a myriad of effects, from non-realistic to hyper-realistic.
Fundamentals - Tools Stage
Artists use digital content creation (DCC) applications to produce content without understanding how the pipeline works.
Maya in this course; free student edition: http://www.autodesk.com/education/student-software
Utilize an image editing program such as Adobe Photoshop or GIMP to create 2D images. You can use the Rowan Cloud's copy: www.rowan.edu/cloud
Tools in this stage are expected to be easy to use and reliable so that non-technical people can get their work done!
Fundamentals - Asset Conditioning Stage
DCC data is usually far more complex than what the game engine can accept: graph structure, history of edits, animation controls, application-specific features.
The application's data format is usually a closed design.
We will utilize the FBX format to convert models from Maya to Unity in this course.
The Geometry Stage
Moving to the GPU! The Geometry Stage (conceptual) is divided into several functional stages.
The Geometry Stage - The Concept
Primary purpose: get your objects to the proper place.
The object was created with local coordinates and must be converted into world coordinates, oriented according to the view, distorted by the projection, and finally flattened from the 3D world into a 2D image (screen space coordinates).
The Geometry Stage
Local Coordinates vs. World Coordinates (figures)
The Geometry Stage - Model & World Transform
A model transitions between many spaces, or coordinate systems.
A model begins in Local Space or Model Space; the origin of the model is situated in a convenient place for the artist/programmer.
TIP: From here on out, remember this to keep your axes straight: XYZ = RGB
The Geometry Stage - Model & World Transform
This model needs a model transform, which dictates the necessary transformations (translate, rotate, scale) to move it from model space into world space.
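A model transform of the kind described above can be sketched in plain Python. This is illustrative only, not course code; the helper names and the example transform values are made up:

```python
import math

# Illustrative sketch: a model transform built as Translate * Rotate * Scale,
# applied to one vertex to carry it from model space into world space.

def mat_mul(a, b):
    """Multiply two 4x4 matrices stored as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def transform(m, v):
    """Apply a 4x4 matrix to a point (x, y, z); w is assumed to be 1."""
    x, y, z = v
    p = [x, y, z, 1.0]
    return tuple(sum(m[i][k] * p[k] for k in range(4)) for i in range(3))

def translate(tx, ty, tz):
    return [[1, 0, 0, tx], [0, 1, 0, ty], [0, 0, 1, tz], [0, 0, 0, 1]]

def rotate_y(deg):
    c, s = math.cos(math.radians(deg)), math.sin(math.radians(deg))
    return [[c, 0, s, 0], [0, 1, 0, 0], [-s, 0, c, 0], [0, 0, 0, 1]]

def scale(s):
    return [[s, 0, 0, 0], [0, s, 0, 0], [0, 0, s, 0], [0, 0, 0, 1]]

# World = T * R * S: scale first, then rotate, then translate.
model = mat_mul(translate(10, 0, 0), mat_mul(rotate_y(90), scale(2)))

# A vertex at (1, 0, 0) in model space: scaled to (2, 0, 0), rotated
# 90 degrees about Y to (0, 0, -2), then translated to (10, 0, -2).
world = transform(model, (1.0, 0.0, 0.0))
print(world)
```

The T * R * S ordering matters: reversing it would translate the vertex first and then rotate it around the origin, putting it somewhere else entirely.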
The Geometry Stage - Model & World Transform
A single model may have multiple model transforms.
We call the model data a mesh; each reference to that model data is an instance.
This allows us to have multiple copies of the same model without duplicating base geometry (saves on memory).
Local vs World
In Unity, we can affect an object through either its local or world coordinates.
If you modify the position or rotation, you are doing so in World Space.
If you modify the local position or local rotation, you do so with respect to the parent's space.
Let's create a set of planets with carefully set up pivot points and groups.
Local vs World
Create the following hierarchy: download textures of the Earth, Moon and Sun, then set up materials and assign one to each object.
The pivot objects are empty game objects just meant to provide a pivot point to offset about.
The Geometry Stage - Model & World Transform
A model is composed of vertices and normals.
Vertices can be thought of as points, which dictate the shape of the object.
Normals are utilized primarily in lighting to determine the amount of illumination on a surface.
Vertices and normals are transformed inside the pipeline in the vertex shader.
The Geometry Stage - Frustum, Eye Coordinates, Projection Coordinates (figures)
Unity Camera
Our camera component defines several important items: how the screen is cleared and with what color, the projection type, and the near and far planes, which can impact the visuals of your scene in several ways.
The Geometry Stage - World & View Transform
After world space, we transform into view space.
The view field is called the View Frustum.
The purpose of the view camera is to orient the entire world so that the camera is facing down the +/- Z axis (DirectX looks down +Z, OpenGL looks down -Z).
The Geometry Stage - World & View Transform
"There is no spoon" - The Matrix
The reality behind the camera is that there is no camera.
While we can create a camera class that operates like a real-life camera, it is still a mathematical abstraction and does not need to follow real-world rules.
We don't move a camera; we move the world in the opposite direction.
The Geometry Stage - View Transform
To transform all of the objects from world space into camera / eye / view space, we invert the camera's transformations.
This is accomplished by applying the opposite transforms in reverse order (for example, negating the camera's translation and rotation angles).
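As a minimal sketch of the "there is no camera" idea (illustrative Python, not course code; a translation-only camera is assumed for simplicity):

```python
# Illustrative sketch: we never move a camera -- we move the world by the
# inverse of the camera's transform. For a camera that is only translated,
# the view transform is a translation by the negated amounts.

def translate(tx, ty, tz):
    return [[1, 0, 0, tx], [0, 1, 0, ty], [0, 0, 1, tz], [0, 0, 0, 1]]

def transform(m, v):
    x, y, z = v
    p = [x, y, z, 1.0]
    return tuple(sum(m[i][k] * p[k] for k in range(4)) for i in range(3))

camera_pos = (0.0, 2.0, 5.0)   # where the camera sits in the world
view = translate(-camera_pos[0], -camera_pos[1], -camera_pos[2])

# A world-space point at the camera's position lands at the eye-space origin:
print(transform(view, camera_pos))

# A point 5 units in front of the camera (OpenGL looks down -Z):
print(transform(view, (0.0, 2.0, 0.0)))
```

A rotated camera would additionally need the inverse (transpose) of its rotation applied, in the reverse order of the camera's own transforms.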
The Geometry Stage - Projections
Our geometry still exists as 3D coordinates; we convert the 3D points into homogeneous coordinates.
Two types: Orthographic Projection and Perspective Projection.
Projection involves a volume, with orthographic being a rectangular box and perspective being a frustum.
All points are normalized into the volume between [-1,-1,-1] and [1,1,1].
The Geometry Stage - Orthographic Projection
Orthographic view: lines that are parallel stay parallel after being transformed.
Easiest method: disregard the z value through the following transform:

P_o = | 1 0 0 0 |
      | 0 1 0 0 |
      | 0 0 0 0 |
      | 0 0 0 1 |
The Geometry Stage - Orthographic Projection
This form of orthographic projection is non-invertible (its determinant is zero); once we step down from 3D to 2D, the data is lost!
The Geometry Stage - Orthographic Projection
This final cube is called the canonical view volume, and its coordinates are referred to as normalized device coordinates.
The Geometry Stage - Orthographic Projection
The canonical view volume is created through a scale and translate transform:

P_o = S(s) T(t) = | 2/(r-l)    0        0      0 |   | 1 0 0 -(l+r)/2 |
                  |   0     2/(t-b)     0      0 | * | 0 1 0 -(t+b)/2 |
                  |   0        0     2/(f-n)   0 |   | 0 0 1 -(f+n)/2 |
                  |   0        0        0      1 |   | 0 0 0     1    |

The Geometry Stage - Perspective Projection
In a perspective projection, parallel lines converge; it is meant to represent our own way of viewing the world.
The farther an object is, the smaller it becomes.
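The canonical-view-volume mapping above can be checked numerically. A minimal Python sketch (illustrative; the box extents are arbitrary example values):

```python
# Illustrative sketch: the combined S(s)T(t) orthographic transform that maps
# the view box [l,r] x [b,t] x [n,f] into the canonical view volume [-1,1]^3
# (normalized device coordinates).

def ortho(l, r, b, t, n, f):
    return [[2 / (r - l), 0, 0, -(r + l) / (r - l)],
            [0, 2 / (t - b), 0, -(t + b) / (t - b)],
            [0, 0, 2 / (f - n), -(f + n) / (f - n)],
            [0, 0, 0, 1]]

def transform(m, v):
    x, y, z = v
    p = [x, y, z, 1.0]
    return tuple(sum(m[i][k] * p[k] for k in range(4)) for i in range(3))

m = ortho(l=-10, r=10, b=-5, t=5, n=1, f=100)
print(transform(m, (-10, -5, 1)))    # one corner of the box -> (-1, -1, -1)
print(transform(m, (10, 5, 100)))    # the opposite corner   -> (1, 1, 1)
```

Because the matrix only scales and translates, parallel lines inside the box stay parallel, exactly as the orthographic slide requires.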
The Geometry Stage - Projection Transform
The view volume as seen from the side of a camera (figure).
The Geometry Stage - Projection Transform
A scene waiting to be projected into 2D space (figure).
The Geometry Stage - Projection Transform
An example of an orthographic projection transformation (figure).
The Geometry Stage - Projection Transform
An example of a perspective projection transformation (figure).
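The perspective foreshortening described earlier ("the farther an object is, the smaller it becomes") comes from the divide by w. A minimal sketch (illustrative Python; the image-plane distance d is an assumed parameter, not from the slides):

```python
# Illustrative sketch: a simple pinhole projection with the image plane at
# distance d places a point (x, y, z) at (d*x/z, d*y/z). In homogeneous
# form the projection matrix produces w = z, and dividing by w performs
# this perspective divide.

def project(point, d=1.0):
    x, y, z = point
    return (d * x / z, d * y / z)

# Two objects of the same height (1 unit) at different depths:
near = project((0.0, 1.0, 2.0))    # appears 0.5 units tall on screen
far = project((0.0, 1.0, 10.0))    # appears 0.1 units tall on screen
print(near, far)
```

The orthographic projection skips this divide entirely, which is why it preserves sizes regardless of depth.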
GPU Pipeline Overview
The GPU implements the geometry and rasterization stages of the pipeline.
Each of the GPU stages has a level of configurability / programmability:
Green - full programmability
Yellow - configurability
Blue - completely fixed
GPU Pipeline Overview
Vertex Shader: implements the model and view transforms, shading, and projection (output is a vertex in eye coordinates).
Geometry Shader: optional; operates on points, lines and triangles to destroy or create primitives.
Clipping, Screen Mapping, Triangle Setup, Triangle Traversal: fixed-function stages (implemented in hardware).
Pixel Shader: computes the final pixel value from fragments.
Merger: customizable stage for merging buffers.
Vertex Shader
The first stage that performs any graphical processing; it deals exclusively with vertices.
It has access to the vertex position, color, normal, UVs and more.
It is also responsible for transforming vertices from model space to homogeneous coordinate space.
Vertex Shader
Each incoming vertex is processed and output.
The vertex shader cannot create or destroy vertices.
There is no communication between vertices, and you don't know the order of processing.
Vertex Shader
Some uses: object deformations (twists, bends), procedural deformations for flags and cloth, per-vertex lighting, page curls, heat haze, water ripples.
The vertex information is passed on to the optional Tessellation/Geometry stages, then Clipping, Screen Mapping, Triangle Setup, and Triangle Traversal.
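One of the uses listed above, a procedural ripple deformation, can be sketched as the per-vertex function a vertex shader would run. This is illustrative Python, not shader code; the amplitude and frequency values are made up:

```python
import math

# Illustrative sketch: a per-vertex deformation of the kind a vertex shader
# performs, e.g. a sine-wave ripple for a flag or water surface. Each vertex
# is processed independently -- no vertex sees any other, matching the
# "no communication between vertices" rule above.

def ripple(vertex, time, amplitude=0.25, frequency=2.0):
    x, y, z = vertex
    # Displace y based on x and time; the same function runs on every vertex.
    return (x, y + amplitude * math.sin(frequency * x + time), z)

flat_strip = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (2.0, 0.0, 0.0)]
deformed = [ripple(v, time=0.0) for v in flat_strip]
print(deformed)
```

Animating `time` each frame makes the wave travel, all without ever modifying the mesh data stored in memory.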
Tessellation Control & Evaluation Shaders
Tessellation is the process of breaking large primitive patches into smaller primitives before rendering.
The most common use is to add geometric detail to lower-fidelity meshes.
It has three phases:
1. Tessellation Control Shader
2. Fixed-function tessellation engine
3. Tessellation Evaluation Shader
These stages are sandwiched between the vertex shader and geometry shader.
This shader is OPTIONAL and does not need to be active!
Tessellation Control & Evaluation Shaders
A patch is a group of vertices; input vertices are referred to as control points.
The tessellation control shader is responsible for: per-patch inner and outer tessellation factors; the position and other attributes of each output control point; per-patch user-defined varyings.
Tessellation Control tells the engine how to split up the patch based on the inner and outer factors.
The Tessellation Evaluation Shader takes in tessellated points in a coordinate system relative to the control points; it is up to YOU to determine the final location of those points.
Geometry Shader
Located after the vertex shader/tessellation shaders, and optional.
Input is an object such as triangles, line segments or points, and the associated vertices.
Unique in that it can generate / transform / delete the data passing through the pipeline.
Additional vertices outside the processed object can be passed in, which can be utilized for algorithms dependent on nearest neighbors.
Geometry Shader
An optional phase; when not included, vertex/tessellation data is interpolated and fed to the fragment shader.
We specify with layout qualifiers the type of primitive input and the expected primitive output.
Input / Output primitive modes: points, lines, triangles, lines_adjacency, triangles_adjacency.
Yes, we can change the input from one mode to another with geometry shaders!
Clipping
There is no reason to compute anything outside the view volume.
Any point that falls completely outside the -1 to +1 view cube will not be passed on to the next stage.
Primitives that lie on the boundary require clipping.
Clipping
You can also define additional clipping planes to slice the object, referred to as sectioning.
The clipping stage is almost always handled by a fixed-operation stage in the 3D hardware.
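Clipping against the canonical cube can be sketched for a single plane. This is illustrative Python only; real hardware clips homogeneous coordinates against all six planes, and this shows just the parametric-intersection idea:

```python
# Illustrative sketch: points fully outside [-1,1]^3 are rejected; a line
# segment crossing a boundary is clipped at the intersection point.

def inside(p):
    return all(-1.0 <= c <= 1.0 for c in p)

def clip_to_plane_x1(a, b):
    """Clip segment a-b against the plane x = 1 (keep the x <= 1 side)."""
    if a[0] <= 1.0 and b[0] <= 1.0:
        return a, b                        # fully inside: unchanged
    if a[0] > 1.0 and b[0] > 1.0:
        return None                        # fully outside: discarded
    t = (1.0 - a[0]) / (b[0] - a[0])       # parametric hit with x = 1
    hit = tuple(a[i] + t * (b[i] - a[i]) for i in range(3))
    return (a, hit) if a[0] <= 1.0 else (hit, b)

print(inside((0.5, 0.0, -0.5)))                            # True
print(clip_to_plane_x1((0.0, 0.0, 0.0), (2.0, 0.0, 0.0)))  # cut at x = 1
```

A user-defined sectioning plane works the same way, just with an arbitrary plane equation instead of x = 1.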
Screen Mapping
The X and Y coordinates are transformed into screen coordinates (at this point, we still retain the X, Y & Z coordinates).
All three coordinates together are referred to as window coordinates.
All that remains is scaling and translating from the -1 to +1 range into screen space.
Screen Mapping
We can map the camera's image to only a portion of the window by manipulating the Viewport Rect.
We could set up multiple cameras and have them all be visible by manipulating each Viewport Rect; here we see four cameras with four separate views in the window.
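The scale-and-translate from NDC into window coordinates can be written out directly (illustrative Python; the window size is an example value, not from the slides):

```python
# Illustrative sketch: screen mapping scales and translates NDC x/y from
# [-1, 1] into window coordinates; z is retained for the later depth test.

def screen_map(ndc, width, height):
    x, y, z = ndc
    sx = (x + 1.0) * 0.5 * width
    sy = (y + 1.0) * 0.5 * height
    return (sx, sy, z)

print(screen_map((0.0, 0.0, 0.5), 800, 600))    # center of an 800x600 window
print(screen_map((-1.0, -1.0, 0.5), 800, 600))  # one corner of the window
```

A Unity Viewport Rect effectively substitutes a sub-rectangle of the window for `width`/`height` and an offset origin, which is how several cameras can share one window.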
Rasterization Stage
Conversion of 2D vertices into colored pixels (picture elements).
This process is called Rasterization or Scan Conversion; these are fixed-functionality stages!
This stage is broken down into the following sub-stages:
Triangle Setup: data for the triangles is computed for scan conversion and interpolation.
Triangle Traversal: each pixel that has its center covered by a triangle has a fragment computed for it.
Determining which pixels are covered is called triangle traversal or scan conversion.
Fragments are computed using interpolation between the three vertices that make up the triangle.
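The coverage test at the heart of triangle traversal can be sketched with edge functions. This is illustrative Python, not hardware behavior; it assumes a counter-clockwise winding convention and tests pixel centers directly:

```python
# Illustrative sketch: a pixel center is covered when it lies on the same
# side of all three triangle edges. The same signed areas, normalized, give
# the barycentric weights used to interpolate vertex attributes.

def edge(a, b, p):
    """Twice the signed area of triangle (a, b, p)."""
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def covered(tri, p):
    a, b, c = tri
    w0, w1, w2 = edge(b, c, p), edge(c, a, p), edge(a, b, p)
    return w0 >= 0 and w1 >= 0 and w2 >= 0   # counter-clockwise triangle

tri = [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0)]
print(covered(tri, (1.0, 1.0)))   # pixel center inside the triangle
print(covered(tri, (3.5, 3.5)))   # pixel center outside the triangle
```

Dividing each w by their sum yields the interpolation weights, which is how a fragment's color, depth, and UVs are blended from the three vertices.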
Pixel Shader
The pixel shader can only operate on the fragments passed to it; you cannot operate on neighboring pixels.
There is one exception to this rule, and it has to do with gradient calculation.
You can return: a fragment color; depth information; rejection of the fragment; fog computations; alpha testing; and more.
You can have multiple render targets (MRTs).
GTA V: http://www.adriancourreges.com/blog/2015/11/02/gta-v-graphics-study/
Doom: http://www.adriancourreges.com/blog/2016/09/09/doom-2016-graphics-study/
http://www.valvesoftware.com/publications/2004/gdc2004_half-life2_shading.pdf
Rasterization - Pixel Shading
Applying textures is performed in this stage.
UV coordinates are passed from the vertex shader and used to look up the color value for the fragment being processed.
Merging Stage
Depths and colors of the fragments are merged into the frame buffer.
Stencil buffer and z-buffer operations are performed, as well as color blending for transparency and compositing.
This stage features a suite of highly configurable options, but it is not programmable.
You can specify the mathematical operation utilized when combining fragments, using addition/multiplication/etc., and even clamps and bitwise logical operations.
Rasterization - Merging
The z-buffer contains the depth of the current closest fragment for each pixel on the screen.
If a new fragment is closer, the z-buffer value is overwritten; if it is farther away, the fragment is discarded.
There is a significant weakness with the z-buffer: it requires semi-transparent objects to be rendered in proper order from back to front.
Rasterization - Merging
There are additional buffers beyond the color and z-buffer:
Alpha channel: alpha checks can be processed and fragments discarded if they do not match a set alpha value.
Stencil buffer: an offscreen buffer to record the locations of rendered primitives; can create cut-outs and only allow certain parts of our screen to be rendered (good for reflections).
Frame buffer: strictly, every buffer utilized in the application, though sometimes it refers to just the combined z and color buffers.
Accumulation buffer: accumulates multiple frames into one buffer (can be used for motion blur, depth of field, and a number of other effects).
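The z-buffer rule described above (closer overwrites, farther is discarded) fits in a few lines. A minimal sketch (illustrative Python; buffer size and colors are arbitrary example values):

```python
# Illustrative sketch: the z-buffer depth test. Each pixel keeps the depth
# of the closest fragment seen so far; nearer fragments overwrite it,
# farther fragments are discarded.

WIDTH, HEIGHT = 4, 4
depth = [[float("inf")] * WIDTH for _ in range(HEIGHT)]   # cleared to "far"
color = [[(0, 0, 0)] * WIDTH for _ in range(HEIGHT)]

def write_fragment(x, y, z, rgb):
    if z < depth[y][x]:          # closer than what is stored: keep it
        depth[y][x] = z
        color[y][x] = rgb
        return True
    return False                 # farther away: discarded

write_fragment(1, 1, 0.8, (255, 0, 0))   # red fragment at depth 0.8
write_fragment(1, 1, 0.3, (0, 255, 0))   # green is closer: overwrites red
write_fragment(1, 1, 0.5, (0, 0, 255))   # blue is farther: discarded
print(color[1][1], depth[1][1])
```

Notice the result is order-independent for opaque fragments, which is exactly why transparency breaks the scheme: blending needs the farther fragment's color, but the z-buffer has already thrown it away.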
Rasterization - Merging
This is the end of the pipeline; all primitives are now rasterized.
A double buffer is utilized, where graphics are drawn to an internal (not visible) back buffer.
Either a bit swap or a pointer swap is performed, and the monitor is updated with the new image.
In this lab a technique called quad buffering is employed: it utilizes 4 buffers, two for each eye (one front and one back buffer each), and requires additional memory and processing time.