Appendix E: Order of Operations

This book describes all the operations performed between the initial specification of vertices and the final writing of fragments into the framebuffer. The chapters of this book are arranged in an order that facilitates learning, rather than in the exact order in which these operations are actually performed. Sometimes the exact order of operations doesn't matter (for example, surfaces can be converted to polygons and then transformed, or transformed first and then converted to polygons, with identical results), and different implementations of OpenGL might do things differently. This appendix describes one possible order; any implementation is required to yield equivalent results. If you want more details than are presented here, see the OpenGL Specification at http://www.opengl.org/registry/.

This appendix has the following major sections: Overview, Geometric Operations, Pixel Operations, Fragment Operations, and Odds and Ends.

Overview

This section gives an overview of the order of operations. Figure E-1 is a schematic representation of the fixed-function OpenGL pipeline. Geometric data (vertices, lines, and polygons) follows the path through the row of boxes that includes evaluators and per-vertex operations, while pixel data (pixels, images, and bitmaps) is treated differently for part of the process. Both types of data undergo the rasterization and per-fragment operations before the final pixel data is written into the framebuffer.

[Figure E-1: The Fixed-Function Pipeline's Order of Operations (per-vertex operations and primitive assembly, pixel operations, texture memory, rasterization, per-fragment operations, framebuffer)]

All data, whether it describes geometry or pixels, can be saved in a display list or processed immediately. When a display list is executed, the data is sent from the display list just as if it were sent by the application.

All geometric primitives are eventually described by vertices. If evaluators are used, that data is converted to vertices and treated as vertices from then on. Vertex data may also be stored in and used from specialized vertex arrays. Per-vertex calculations are performed on each vertex, followed by rasterization to fragments. For pixel data, pixel operations are performed, and the results are either stored in texture memory, used for polygon stippling, or rasterized to fragments.
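As an illustration of the stage ordering just described, here is a minimal toy model in Python. Every name in it is invented for this sketch (nothing here is part of the OpenGL API); it carries a single point primitive through per-vertex transformation, rasterization, and a per-fragment write into a tiny framebuffer.

```python
# A toy model (invented names, not the OpenGL API) of the order of
# operations for a single point: per-vertex transform, rasterization,
# then a per-fragment write into a tiny framebuffer.

def mat_vec(m, v):
    """Multiply a 4x4 row-major matrix by a 4-component vector."""
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def identity():
    return [[1.0 if r == c else 0.0 for c in range(4)] for r in range(4)]

def per_vertex(vertex, modelview, projection):
    """Per-vertex stage: object space -> eye space -> clip space."""
    return mat_vec(projection, mat_vec(modelview, vertex))

def rasterize_point(clip, width, height):
    """Perspective division, then NDC [-1, 1] to integer window coords."""
    x, y, z, w = clip
    px = int((x / w * 0.5 + 0.5) * (width - 1))
    py = int((y / w * 0.5 + 0.5) * (height - 1))
    return px, py

def draw_point(framebuffer, vertex, color):
    clip = per_vertex(vertex, identity(), identity())
    px, py = rasterize_point(clip, len(framebuffer[0]), len(framebuffer))
    framebuffer[py][px] = color          # the per-fragment write

fb = [[None] * 4 for _ in range(4)]
draw_point(fb, [1.0, 1.0, 0.0, 1.0], "red")
print(fb[3][3])                          # -> red
```

The point of the sketch is the ordering: the vertex is fully transformed before rasterization decides which grid square it occupies, and only then is the framebuffer touched.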

Finally, the fragments are subjected to a series of per-fragment operations, after which the final pixel values are drawn into the framebuffer. This model differs only slightly for the programmable pipeline available in OpenGL Version 3.1 and greater, as illustrated in Figure E-2.

[Figure E-2: The Programmable Pipeline's Order of Operations]

Geometric Operations

Geometric data, whether it comes from a display list, an evaluator, the vertices of a rectangle, or is in the form of raw data, consists of a set of vertices and the type of primitive the data describes (a vertex, line, or polygon). Vertex data includes not only the (x, y, z, w) coordinates, but also other vertex attributes: a normal vector, texture coordinates, primary and secondary RGBA colors, a color index, material properties, edge-flag data, or entirely generic values assigned meaning in a programmable vertex shader. All of these elements except the vertex's coordinates can be specified in any order, and default values exist as well. As soon as a vertex rendering command (e.g., glVertex*(), glDrawArrays(), glDrawElements()) is issued, the components are padded, if necessary, to four dimensions (using z = 0 and w = 1), and the current values of all the elements are associated with the vertex. The complete set of vertex data is then processed. (If vertex arrays are used, vertex data may be batch processed, and processed vertices may be reused.)

Per-Vertex Operations

In the fixed-function pipeline, during the per-vertex stage of processing, each vertex's spatial coordinates are transformed by the modelview matrix, while the normal vector is transformed by that matrix's inverse transpose and renormalized if specified. If automatic texture generation is enabled, new texture coordinates are generated from the transformed vertex coordinates, and they replace the vertex's old texture coordinates. The texture coordinates are then transformed by the current texture matrix and passed on to the primitive assembly step. Meanwhile, the lighting calculations, if enabled, are performed using the transformed vertex and normal vector coordinates and the current material, lights, and lighting model. These calculations generate new colors or indices that are clamped or masked to the appropriate range and passed on to the primitive assembly step.

For vertex shader programs, the above sequence of operations is replaced by the execution of the user-defined shader, which must update the vertex's position, and may update the primary and secondary colors for the primitive's front- and back-facing colors, associated texture coordinates, and fog coordinates. Additionally, the point size and the vertex position to be used with user-defined clip planes may also be updated.

Primitive Assembly

Primitive assembly varies, depending on whether the primitive is a point, a line, or a polygon. If flat shading is enabled, the colors or indices of all the vertices in a line or polygon are set to the same value. If special clipping planes are defined and enabled, they're used to clip primitives of all three types. (The clipping-plane equations are transformed by the inverse transpose of the modelview matrix when they're specified.) Point clipping simply passes or rejects vertices; line or polygon clipping can add additional vertices, depending on how the line or polygon is clipped. After this clipping, the spatial coordinates of each vertex are transformed by the projection matrix, and the results are clipped against the standard viewing planes x = ±w, y = ±w, and z = ±w.
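The clip test against the standard viewing planes can be sketched as a simple point classification in Python. This only decides inside versus outside; it does not build the additional vertices that line or polygon clipping can introduce, and the function name is invented for the sketch.

```python
# After the projection matrix is applied, a clip-space point (x, y, z, w)
# lies inside the view volume exactly when -w <= x <= w, -w <= y <= w,
# and -w <= z <= w.

def inside_clip_volume(clip):
    x, y, z, w = clip
    return all(-w <= c <= w for c in (x, y, z))

print(inside_clip_volume((0.5, -0.5, 0.0, 1.0)))  # True: inside the volume
print(inside_clip_volume((2.0, 0.0, 0.0, 1.0)))   # False: x > w, so clipped
```

Testing against ±w rather than ±1 is what lets the test run before perspective division, which is exactly the ordering the text describes.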

If selection is enabled, any primitive not eliminated by clipping generates a selection-hit report, and no further processing is performed. Without selection, perspective division by w occurs, and the viewport and depth-range operations are applied. Also, if the primitive is a polygon, it's then subjected to a culling test (if culling is enabled). A polygon might convert to vertices or lines, depending on the polygon mode.

Finally, points, lines, and polygons are rasterized to fragments, taking into account polygon or line stipples, line width, and point size. Rasterization involves determining which squares of an integer grid in window coordinates are occupied by the primitive. If antialiasing is enabled, coverage (the portion of the square that is occupied by the primitive) is also computed. Color and depth values are also assigned to each such square. If polygon offset is enabled, depth values are slightly modified by a calculated offset value.

Pixel Operations

Pixel data from host memory or pixel unpack buffers is first unpacked into the proper number of components. The OpenGL unpacking facility handles a number of different formats. Next, the data is scaled, biased, and processed using a pixel map. The results are clamped to an appropriate range, depending on the data type, and then either written into texture memory for use in texture mapping or rasterized to fragments.

If pixel data is read from the framebuffer, pixel-transfer operations (scale, bias, mapping, and clamping) are performed. The results are packed into an appropriate format and then returned to processor memory. The pixel-copy operation is similar to a combination of the unpacking and transfer operations, except that packing and unpacking are unnecessary, and only a single pass is made through the transfer operations before the data is written back into the framebuffer.

Texture Memory

Texture images can be specified from framebuffer memory, as well as processor memory. All or a portion of a texture image may be replaced.
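The per-component pixel-transfer path from the Pixel Operations section above (scale, bias, optional pixel map, clamp) can be sketched as follows. The function name is invented, and the table lookup is deliberately simplified; it is not how OpenGL's pixel maps are actually indexed.

```python
def transfer(component, scale=1.0, bias=0.0, pixel_map=None):
    """Apply scale and bias, optionally remap through a lookup table,
    then clamp the result to [0, 1] (the range used for color data)."""
    value = component * scale + bias
    if pixel_map is not None:
        # Simplified lookup: treat the clamped value as a fraction of
        # the table's index range (real pixel maps index differently).
        index = int(max(0.0, min(1.0, value)) * (len(pixel_map) - 1))
        value = pixel_map[index]
    return max(0.0, min(1.0, value))     # final clamp to the valid range

print(transfer(0.5, scale=2.0))                   # -> 1.0 (scaled to the edge)
print(transfer(0.8, bias=0.5))                    # -> 1.0 (1.3 clamped down)
print(transfer(1.0, pixel_map=[0.0, 0.25, 0.5]))  # -> 0.5 (last table entry)
```

The defaults (scale 1, bias 0, no map) leave a component unchanged, which matches the idea that these operations are pass-through until the application configures them.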
Texture data may be stored in texture objects, which can be loaded into texture memory. If there are too many texture objects to fit into texture memory at the same time, the textures that have the highest priorities remain in texture memory.

Fragment Operations

In the fixed-function mode, if texturing is enabled, a texel is generated from texture memory for each fragment from every enabled texture unit (selected by means of glActiveTexture()) and is applied to the fragment. Then fog calculations are performed, if they're enabled, followed by the application of coverage (antialiasing) values, if antialiasing is enabled.

If a fragment shader program is enabled, texture sampling and application, per-pixel fog computations, and alpha-value assignment may be done in the fragment shader. Assuming the fragment was not discarded by the fragment shader, the fragment's color will be updated with the color assigned in the fragment shader, which may include the combination of colors generated from the iterated primary and secondary colors, texture application, fog computations, or other color values. The depth value for the fragment may also be updated, and any associated fragment data may also be updated.

Next comes scissoring, followed by the alpha test (in RGBA mode only, and only for OpenGL versions up to Version 3.0), the stencil test, and the depth-buffer test. If in RGBA mode, blending is performed. Blending is followed by dithering and the logical operation. All these operations may be disabled. The fragment is then masked by a color mask or an index mask, depending on the mode, and drawn into the appropriate buffer. If fragments are being written into the stencil or depth buffer, masking occurs after the stencil and depth tests, and the results are drawn into the framebuffer without performing the blending, dithering, or logical operation.

Odds and Ends

For the fixed-function pipeline, matrix operations deal with the current matrix stack, which can be the modelview, the projection, or the texture matrix stack. If the Imaging subset is included in the implementation, a color matrix stack will also be present.
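The per-fragment sequence from the Fragment Operations section above (scissor, alpha test, depth test, then blending) can be sketched as a single function in Python. The stencil test and dithering are omitted for brevity, and every name and the state layout are invented for this sketch.

```python
# Toy per-fragment pipeline: a fragment must survive the scissor, alpha,
# and depth tests, in that order, before blending writes it to the color
# buffer. Returning None models a discarded fragment.

def process_fragment(frag, state):
    x, y = frag["pos"]
    sx0, sy0, sx1, sy1 = state["scissor"]
    if not (sx0 <= x < sx1 and sy0 <= y < sy1):
        return None                       # scissored out
    if frag["alpha"] < state["alpha_ref"]:
        return None                       # fails the alpha test
    if frag["depth"] >= state["depth"][y][x]:
        return None                       # fails the depth test (GL_LESS-style)
    state["depth"][y][x] = frag["depth"]  # depth write after passing
    a = frag["alpha"]                     # classic source-alpha blend
    src, dst = frag["color"], state["color"][y][x]
    blended = tuple(a * s + (1 - a) * d for s, d in zip(src, dst))
    state["color"][y][x] = blended
    return blended

state = {
    "scissor": (0, 0, 2, 2),
    "alpha_ref": 0.1,
    "depth": [[1.0, 1.0], [1.0, 1.0]],
    "color": [[(0.0, 0.0, 0.0)] * 2 for _ in range(2)],
}
frag = {"pos": (1, 1), "alpha": 0.5, "depth": 0.4, "color": (1.0, 0.0, 0.0)}
print(process_fragment(frag, state))      # -> (0.5, 0.0, 0.0)
```

Note the ordering consequence the text calls out: each test can reject the fragment before any later stage runs, so a scissored fragment never touches the depth buffer at all.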
The commands glMultMatrix*(), glLoadMatrix*(), and glLoadIdentity() are applied to the top matrix on the stack, while glTranslate*(), glRotate*(), glScale*(), glOrtho(), and glFrustum() are used to create a matrix that's multiplied by the top matrix.
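The matrix-stack behavior just described can be sketched in Python. The helper names are invented, only translation is modeled, and the stack is an ordinary list; the comments name the OpenGL command each step imitates.

```python
# Load-style commands replace the top of the matrix stack; mult-style
# commands (glMultMatrix*, glTranslate*, glRotate*, ...) compose a new
# matrix onto the top. Push/pop save and restore the top.

def mat_mul(a, b):
    return [[sum(a[r][k] * b[k][c] for k in range(4)) for c in range(4)]
            for r in range(4)]

def identity():
    return [[1.0 if r == c else 0.0 for c in range(4)] for r in range(4)]

def translate(tx, ty, tz):
    m = identity()
    m[0][3], m[1][3], m[2][3] = tx, ty, tz
    return m

stack = [identity()]                 # the modelview stack, bottom to top
stack[-1] = identity()               # glLoadIdentity: replace the top
stack[-1] = mat_mul(stack[-1], translate(1, 2, 3))  # like glTranslatef
stack.append([row[:] for row in stack[-1]])         # like glPushMatrix
stack[-1] = mat_mul(stack[-1], translate(4, 0, 0))  # nested transform
stack.pop()                          # glPopMatrix restores the outer matrix
print(stack[-1][0][3], stack[-1][1][3])  # -> 1.0 2.0
```

The push/pop pair shows why the stack exists: a nested transform can be applied and then discarded without disturbing the matrix that surrounding geometry uses.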

When the modelview matrix is modified, its inverse transpose is also generated for normal vector transformation.

The commands that set the current raster position are treated exactly like a vertex command up until the point when rasterization would occur. At this point, the value is saved and used in the rasterization of pixel data.

The various glClear() commands bypass all operations except scissoring, dithering, and writemasking.