CSE 690: GPGPU. Lecture 2: Understanding the Fabric - Intro to Graphics. Klaus Mueller Stony Brook University Computer Science Department



Surface Graphics
Objects are explicitly defined by a surface or boundary representation (explicit inside vs. outside). This boundary representation can be given by:
- a mesh of polygons (figure: the same model at 200, 1,000, and 15,000 polygons)
- a mesh of spline patches (figure: an empty foot)

Polygon Mesh Definitions
v1, v2, v3: vertices (3D coordinates)
e1, e2, e3: edges, with e1 = v2 - v1 and e2 = v3 - v2
f1: polygon or face
n1: face normal, n1 = (e1 x e2) / |e1 x e2|
Adjacent faces traverse a shared edge in opposite directions (e.g., e21 = -e12).
Rule: if all edge vectors in a face are ordered counterclockwise, then the face normal vectors will always point towards the outside of the object. This enables quick removal of back-faces (back-faces are the faces hidden from the viewer):
- back-face condition: vp . n > 0, where vp is the view vector
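As an illustration of the face-normal formula and back-face condition above, here is a minimal C++ sketch; the Vec3 type and function names are assumptions, not part of the original slides.

#include <cmath>

// Hypothetical minimal vector type for illustration.
struct Vec3 { float x, y, z; };

Vec3 sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Vec3 cross(Vec3 a, Vec3 b) { return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x}; }
float dot(Vec3 a, Vec3 b)  { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Face normal from two edge vectors of a counterclockwise-ordered triangle:
// n = (e1 x e2) / |e1 x e2|
Vec3 faceNormal(Vec3 v1, Vec3 v2, Vec3 v3) {
    Vec3 e1 = sub(v2, v1), e2 = sub(v3, v2);
    Vec3 n = cross(e1, e2);
    float len = std::sqrt(dot(n, n));
    return {n.x / len, n.y / len, n.z / len};
}

// Back-face condition from the slide: cull the face if vp . n > 0,
// where vp is the view vector.
bool isBackFace(Vec3 viewVector, Vec3 n) { return dot(viewVector, n) > 0.0f; }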

Polygon Mesh Data Structure
Vertex list (v1, v2, v3, v4, ...): (x1, y1, z1), (x2, y2, z2), (x3, y3, z3), (x4, y4, z4), ...
Edge list (e1, e2, e3, e4, e5, ...): (v1, v2), (v2, v3), (v3, v1), (v1, v4), (v4, v2), ...
Face list (f1, f2, ...): (e1, e2, e3), (e4, e5, -e1), ... or (v1, v2, v3), (v1, v4, v2), ...
Normal list (n1, n2, ...), one per face or per vertex: (n1x, n1y, n1z), (n2x, n2y, n2z), ...
Use pointers or indices into the vertex and edge list arrays, where appropriate.
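A minimal C++ sketch of such a mesh data structure, using indices into the vertex list as the slide suggests; all type names are illustrative assumptions.

#include <vector>

struct Vertex { float x, y, z; };
struct Edge   { int v0, v1; };       // indices into the vertex list
struct Face   { int v0, v1, v2; };   // indices into the vertex list
struct Normal { float nx, ny, nz; };

struct Mesh {
    std::vector<Vertex> vertices;    // (x1,y1,z1), (x2,y2,z2), ...
    std::vector<Edge>   edges;       // (v1,v2), (v2,v3), ...
    std::vector<Face>   faces;       // (v1,v2,v3), (v1,v4,v2), ...
    std::vector<Normal> normals;     // one per face or one per vertex
};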

Object-Order Viewing - Overview
(Figure: the screen of width W and height H in world space, with center Cop and axes u, v, n; the eye/camera; and the same object after the viewing transform, projected onto the screen in viewing space.)
A view is specified by:
- eye position (Eye)
- view direction vector (n)
- screen center position (Cop)
- screen orientation (u, v)
- screen width W, height H
u, v, n are orthonormal vectors.
After the viewing transform:
- the screen center is at the coordinate system origin
- the screen is aligned with the x, y-axes
- the viewing vector points down the negative z-axis
- the eye is on the positive z-axis
All objects are transformed by the viewing transform.

Step 1: Viewing Transform
The sequence of transformations is:
- translate the screen Center Of Projection (COP) to the coordinate system origin (T_view)
- rotate the translated screen such that the view direction vector n points down the negative z-axis and the screen vectors u, v are aligned with the x, y-axes (R_view)
We get M_view = R_view T_view.
We transform all object points (vertices) by M_view:

[x']   [u_x u_y u_z 0]   [1 0 0 -Cop_x]   [x]
[y'] = [v_x v_y v_z 0] * [0 1 0 -Cop_y] * [y]
[z']   [n_x n_y n_z 0]   [0 0 1 -Cop_z]   [z]
[1 ]   [ 0   0   0  1]   [0 0 0    1  ]   [1]

Now the objects are easy to project since the screen is in a convenient position - but first we have to account for perspective distortion...
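A sketch of composing M_view = R_view * T_view in C++, assuming row-major 4x4 matrices and the orthonormal screen axes u, v, n; the types and function names are illustrative, not from the slides.

#include <array>

struct Vec3 { float x, y, z; };
using Mat4 = std::array<std::array<float, 4>, 4>;

// Plain row-major 4x4 matrix product.
Mat4 multiply(const Mat4& a, const Mat4& b) {
    Mat4 m{};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            for (int k = 0; k < 4; ++k)
                m[i][j] += a[i][k] * b[k][j];
    return m;
}

// u, v, n are the orthonormal screen axes; cop is the screen center (COP).
Mat4 makeViewMatrix(Vec3 u, Vec3 v, Vec3 n, Vec3 cop) {
    Mat4 R = {{{u.x, u.y, u.z, 0}, {v.x, v.y, v.z, 0},
               {n.x, n.y, n.z, 0}, {0,   0,   0,   1}}};
    Mat4 T = {{{1, 0, 0, -cop.x}, {0, 1, 0, -cop.y},
               {0, 0, 1, -cop.z}, {0, 0, 0, 1}}};
    return multiply(R, T);   // M_view = R_view * T_view
}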

Step 2: Perspective Projection
(Figure: the eye on the positive z-axis at distance eye from the screen plane; a point at height y' projects to screen height y_p.)
A (view-transformed) vertex with coordinates (x', y', z') projects onto the screen as follows:
y_p = eye * y' / (eye - z')
x_p = eye * x' / (eye - z')
x_p and y_p can be used to determine the screen coordinates of the object point (i.e., where to plot the point on the screen).

Step 1 + Step 2 = World-To-Screen Transform
Perspective projection can also be captured in a matrix M_proj, with a subsequent perspective divide by the homogeneous coordinate w:

[x_h]   [eye  0   0   0 ]   [x']
[y_h] = [ 0  eye  0   0 ] * [y']
[z_h]   [ 0   0   1   0 ]   [z']
[ w ]   [ 0   0  -1  eye]   [1 ]

x_p = x_h / w,  y_p = y_h / w

So the entire world-to-screen transform is:
M_trans = M_proj M_view = M_proj R_view T_view
with a subsequent divide by the homogeneous coordinate.
M_trans is composed only once per view and all object points (vertices) are multiplied by it.
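A sketch of applying the projection matrix and the perspective divide to a view-transformed vertex, under the same row-major matrix assumption as above; in practice the matrix would be pre-multiplied into M_trans, as the slide notes.

#include <array>
#include <cstdio>

using Mat4 = std::array<std::array<float, 4>, 4>;
struct Vec4 { float x, y, z, w; };   // homogeneous point, w = 1 for vertices

Vec4 transform(const Mat4& m, Vec4 p) {
    return { m[0][0]*p.x + m[0][1]*p.y + m[0][2]*p.z + m[0][3]*p.w,
             m[1][0]*p.x + m[1][1]*p.y + m[1][2]*p.z + m[1][3]*p.w,
             m[2][0]*p.x + m[2][1]*p.y + m[2][2]*p.z + m[2][3]*p.w,
             m[3][0]*p.x + m[3][1]*p.y + m[3][2]*p.z + m[3][3]*p.w };
}

// Project an already view-transformed vertex (x', y', z', 1) with eye distance 'eye'.
void projectVertex(float eye, Vec4 v) {
    Mat4 Mproj = {{{eye, 0, 0, 0}, {0, eye, 0, 0},
                   {0, 0, 1, 0},   {0, 0, -1, eye}}};
    Vec4 h = transform(Mproj, v);          // homogeneous screen coordinates
    float xp = h.x / h.w, yp = h.y / h.w;  // perspective divide
    std::printf("screen point: (%f, %f)\n", xp, yp);
}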

Step 3: Window Transform (1)
Note: our camera screen is still described in world coordinates. However, our display monitor is described on a pixel raster of size (Nx, Ny). The transformation of (perspective) viewing coordinates into pixel coordinates is called the window transform.
Assume:
- we want to display the rendered screen image in a window of size (Nx, Ny) pixels
- the width and height of the camera screen in world coordinates are (W, H)
- the center of the camera screen is at the origin of the screen coordinate system
Then:
- the valid range of object coordinates is (-W/2 ... +W/2, -H/2 ... +H/2)
- these have to be mapped into (0 ... Nx-1, 0 ... Ny-1):
x_s = (x_p + W/2) * (Nx - 1) / W
y_s = (y_p + H/2) * (Ny - 1) / H
(Figure: the camera screen of size W x H mapped to a window of Nx x Ny pixels.)

Step 3: Window Transform (2)
The window transform can be written as the matrix M_window:

[x_s]   [(Nx-1)/W     0      (Nx-1)/2]   [x_p]
[y_s] = [    0     (Ny-1)/H  (Ny-1)/2] * [y_p]
[ 1 ]   [    0         0         1   ]   [ 1 ]

After the perspective divide, all object points (vertices) are multiplied by M_window.
Note: we could fold the window transform into M_trans:
- in that case, there is only one matrix multiply per object point (vertex), with a subsequent perspective divide
- the OpenGL graphics pipeline does this
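A sketch of the window transform as a small function; the rounding to the nearest pixel is an assumption, the slide only gives the linear mapping.

// Map screen-plane coordinates in (-W/2..+W/2, -H/2..+H/2)
// to pixel coordinates in (0..Nx-1, 0..Ny-1).
struct Pixel { int x, y; };

Pixel windowTransform(float xp, float yp, float W, float H, int Nx, int Ny) {
    float xs = (xp + W * 0.5f) * (Nx - 1) / W;
    float ys = (yp + H * 0.5f) * (Ny - 1) / H;
    return { static_cast<int>(xs + 0.5f), static_cast<int>(ys + 0.5f) };  // round to nearest pixel
}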

Orthographic (Parallel) Projection
Leave out the perspective mapping (step 2) in the viewing pipeline. In orthographic projection, all object points project along parallel lines onto the screen, so x_p = x' and y_p = y'.
(Figure: side-by-side comparison of perspective projection and orthographic projection.)

Polygon Shading Methods - Faceted Shading
How are the pixel colors determined during z-buffer rasterization?
(Figure: a triangle with vertices 1-3 being rasterized; the pixel (x, y) on scanline y is being shaded, with normal N, light vector L, and eye vector E.)
The simplest method is flat or faceted shading:
- each polygon has a constant color
- compute the color at one point on the polygon (e.g., at the center) and use it everywhere
- assumption: the light source and eye are far away, i.e., N.L and H.E are constant across the face
Problem: discontinuities are likely to appear at face boundaries.
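A sketch of flat shading with a simple diffuse (N.L) term evaluated once per face; the slide does not fix a particular illumination model, so the Lambertian term here is an assumption.

#include <cmath>

struct Vec3 { float x, y, z; };
float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Evaluate a diffuse term once, at the face center, and reuse the result
// for every pixel of the face (the whole face gets one constant color).
float flatShadeFace(Vec3 faceNormal, Vec3 lightDir, float kd) {
    float ndotl = dot(faceNormal, lightDir);
    return kd * (ndotl > 0.0f ? ndotl : 0.0f);
}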

Polygon Shading Methods - Gouraud Shading
Colors are averaged across polygons along common edges, so there are no more discontinuities.
Steps:
- determine the average unit normal at each polygon vertex: N_v = (N_1 + N_2 + ... + N_n) / |N_1 + N_2 + ... + N_n|, where n is the number of faces that share vertex v
- apply the illumination model at each polygon vertex to get the vertex color C_v
- linearly interpolate vertex colors across edges
- linearly interpolate edge colors across scanlines
(Figure: vertex colors C1, C2, C3 are interpolated to edge colors C12, C23 and then to the pixel color Cp on the scanline.)
Downside: may miss specular highlights at off-vertex positions, or distort specular highlights.
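A sketch of the per-vertex normal averaging step; types and names are illustrative.

#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

// Average (and re-normalize) the normals of all faces that share the vertex:
// N_v = sum(N_k) / |sum(N_k)|
Vec3 averageVertexNormal(const std::vector<Vec3>& adjacentFaceNormals) {
    Vec3 sum{0, 0, 0};
    for (const Vec3& n : adjacentFaceNormals) {
        sum.x += n.x; sum.y += n.y; sum.z += n.z;
    }
    float len = std::sqrt(sum.x*sum.x + sum.y*sum.y + sum.z*sum.z);
    return { sum.x / len, sum.y / len, sum.z / len };
}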

Polygon Shading Methods - Phong Shading
Phong shading linearly interpolates normal vectors, not colors, which gives more realistic specular highlights.
Steps:
- determine the average normal at each vertex
- linearly interpolate normals across edges
- linearly interpolate normals across scanlines
- apply the illumination model at each pixel to calculate the pixel color
(Figure: vertex normals N1, N2, N3 are interpolated to edge normals N12, N23 and then to the per-pixel normal Np, from which the pixel color Cp is computed.)
Downside: more calculations are needed, since the illumination model is evaluated at each pixel.
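A sketch of the per-pixel step of Phong shading along one scanline: interpolate and re-normalize the normal, then evaluate the illumination model; the diffuse-only model is again an assumption for brevity.

#include <cmath>

struct Vec3 { float x, y, z; };
float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Linearly blend two normals and re-normalize the result.
Vec3 lerpNormalize(Vec3 a, Vec3 b, float t) {
    Vec3 n{ a.x + t*(b.x - a.x), a.y + t*(b.y - a.y), a.z + t*(b.z - a.z) };
    float len = std::sqrt(dot(n, n));
    return { n.x/len, n.y/len, n.z/len };
}

// t in [0,1] is the pixel's position between the scanline's two edge normals.
float phongShadePixel(Vec3 nLeft, Vec3 nRight, float t, Vec3 lightDir, float kd) {
    Vec3 np = lerpNormalize(nLeft, nRight, t);   // per-pixel interpolated normal
    float ndotl = dot(np, lightDir);
    return kd * (ndotl > 0.0f ? ndotl : 0.0f);   // illumination model per pixel
}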

Rendering the Polygonal Objects - The Hidden Surface Removal Problem
We have removed all faces that are definitely hidden: the back-faces. But even the surviving faces are only potentially visible - they may be obscured by faces closer to the viewer.
(Figure: face A of object 1 is partially obscured by face B of object 2, as seen from the screen.)
The problem of identifying those face portions that are visible is called the hidden surface problem.
Solutions:
- pre-ordering of the faces and subdivision into their visible parts before display (expensive)
- the z-buffer algorithm (cheap, fast, implementable in hardware)

The Z-Buffer (Depth-Buffer) Scan Conversion Algorithm
Two data structures:
- z-buffer: holds for each image pixel the z-coordinate of the closest object so far
- color-buffer: holds for each pixel the closest object's color
Basic z-buffer algorithm:

// initialize buffers
for all (x, y)
    z-buffer(x, y) = -infinity
    color-buffer(x, y) = background color
// scan-convert each front-face polygon
for each front-face polygon
    for each scanline y that traverses the projected polygon
        for each pixel x in scanline y and the projected polygon
            if z_poly(x, y) > z-buffer(x, y)
                z-buffer(x, y) = z_poly(x, y)
                color-buffer(x, y) = color_poly(x, y)

(Figure: the color buffer and z-buffer after initialization, after scan-converting face B of object 2, and after scan-converting face A of object 1.)
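A compact C++ sketch of the z-buffer update above; rasterization (finding the covered pixels and the polygon's depth and color at each of them) is assumed to happen elsewhere, and all names are illustrative.

#include <limits>
#include <vector>

struct Color { unsigned char r, g, b; };

struct FrameBuffers {
    int nx, ny;
    std::vector<float> zbuf;    // closest z so far, per pixel
    std::vector<Color> cbuf;    // color of the closest object so far, per pixel

    FrameBuffers(int w, int h, Color background)
        : nx(w), ny(h),
          zbuf(w * h, -std::numeric_limits<float>::infinity()),
          cbuf(w * h, background) {}

    // Called for every pixel (x, y) covered by a front-facing polygon.
    void writeIfCloser(int x, int y, float zPoly, Color cPoly) {
        int i = y * nx + x;
        if (zPoly > zbuf[i]) {      // larger z = closer, since the view is down -z
            zbuf[i] = zPoly;
            cbuf[i] = cPoly;
        }
    }
};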

Stencil Buffer
Allows a screen area to be stenciled out: no write will occur in these areas during rasterization.
(Figure: a stencil buffer masking part of the projection screen image.)
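A sketch of how a stencil mask is typically set up with the fixed-function OpenGL stencil test; the helper functions drawMaskShape() and drawScene() are hypothetical placeholders, and an existing OpenGL context with a stencil buffer (e.g., a GLUT window created with GLUT_STENCIL) is assumed.

#include <GL/gl.h>

void drawWithStencilMask(void (*drawMaskShape)(), void (*drawScene)()) {
    glEnable(GL_STENCIL_TEST);
    glClearStencil(0);
    glClear(GL_STENCIL_BUFFER_BIT);

    // Pass 1: write 1s into the stencil buffer where the mask shape covers
    // the screen, without touching the color buffer.
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    glStencilFunc(GL_ALWAYS, 1, 0xFF);
    glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
    drawMaskShape();

    // Pass 2: draw the scene only where the stencil value is still 0, so the
    // stenciled-out areas receive no writes.
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glStencilFunc(GL_EQUAL, 0, 0xFF);
    glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
    drawScene();
}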

Rendering With OpenGL (1)
(See also www.opengl.org)
glMatrixMode(GL_PROJECTION)
Define the viewing window:
- glOrtho() for parallel projection
- glFrustum() for perspective projection
glMatrixMode(GL_MODELVIEW)
Specify the viewpoint: gluLookAt(...) /* GLU utility function */
Model the scene: glTranslatef(), glRotatef(), glScalef(), ...
Modelview matrix stack (in code order): gluLookAt(...); glTranslatef(x, y, z); glRotatef(phi_y, 0, 1, 0); glRotatef(phi_z, 0, 0, 1); glRotatef(phi_x, 1, 0, 0) - order of execution on the vertices: rotate first, then translate, then apply the viewing transform.
OpenGL vertex transformation pipeline: object coordinates -> (Modelview Matrix) -> eye coordinates -> (Projection Matrix) -> clip coordinates -> (Perspective Division) -> normalized device coordinates -> (Viewport Transformation) -> window coordinates

Rendering With OpenGL (2)
Specify the light sources: glLightfv()
Enable the z-buffer: glEnable(GL_DEPTH_TEST)
Enable lighting: glEnable(GL_LIGHTING)
Enable the stencil test: glEnable(GL_STENCIL_TEST)
Enable light source i: glEnable(GL_LIGHTi) /* GL_LIGHTi is the symbolic name of light i */
Select the shading model: glShadeModel() /* GL_FLAT or GL_SMOOTH */
For each object:
  /* duplicate the matrix on top of the stack if extra transformations are to be applied to this object */
  glPushMatrix();
  glTranslatef(), glRotatef(), glScalef(); /* any transformation specific to this object */
  for all polygons of the object:
    /* specify the polygon (assume a triangle here); the normal must be set before the vertex it applies to */
    glBegin(GL_POLYGON);
    glColor3fv(c1); glNormal3fv(n1); glVertex3fv(v1); /* vertex 1 */
    glColor3fv(c2); glNormal3fv(n2); glVertex3fv(v2); /* vertex 2 */
    glColor3fv(c3); glNormal3fv(n3); glVertex3fv(v3); /* vertex 3 */
    glEnd();
  glPopMatrix(); /* discard the object-specific transformations, restore the saved matrix */