

Surface Graphics

Objects are explicitly defined by a surface or boundary representation (explicit inside vs. outside). This boundary representation can be given by:
- a mesh of polygons (slide examples at 200, 1,000, and 15,000 polygons)
- a mesh of spline patches (slide example: an empty foot)

Polygon Mesh Definitions

- v1, v2, v3: vertices (3D coordinates)
- e1, e2, e3: edges, e.g. e1 = v2 - v1 and e2 = v3 - v2
- f1: polygon or face
- n1: face normal, computed as the normalized cross product of two edges:

    n1 = (e1 x e2) / |e1 x e2|

For two adjacent faces this gives n1 = (e11 x e12) / |e11 x e12| and n2 = (e21 x e22) / |e21 x e22|, where e21 = -e12 along the shared edge.

Rule: if all edge vectors in a face are ordered counterclockwise (as seen from outside), then the face normal vectors will always point towards the outside of the object. This enables quick removal of back-faces (the faces hidden from the viewer), given the view vector vp from the view plane (screen):
- back-face condition: vp . n > 0

Polygon Mesh Data Structure

- Vertex list (v1, v2, v3, v4, ...): (x1, y1, z1), (x2, y2, z2), (x3, y3, z3), (x4, y4, z4), ...
- Edge list (e1, e2, e3, e4, e5, ...): (v1, v2), (v2, v3), (v3, v1), (v1, v4), (v4, v2), ...
- Face list (f1, f2, ...): (e1, e2, e3), (e4, e5, -e1), ... or (v1, v2, v3), (v1, v4, v2), ...
- Normal list (n1, n2, ...), one per face or per vertex: (n1x, n1y, n1z), (n2x, n2y, n2z), ...

Use pointers or indices into the vertex and edge list arrays, where appropriate.
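The indexed data structure above can be sketched in C; the type names (Vec3, Face, Mesh) and the hard-coded two-triangle example are illustrative, not from the slides:

```c
#include <assert.h>

/* A minimal indexed triangle mesh: faces store indices into the shared
   vertex list instead of duplicating coordinates. */
typedef struct { float x, y, z; } Vec3;
typedef struct { int v[3]; } Face;      /* indices into the vertex list */

typedef struct {
    Vec3 vertices[4];
    Face faces[2];
    Vec3 faceNormals[2];                /* one normal per face */
} Mesh;

/* The slide's example: f1 = (v1, v2, v3), f2 = (v1, v4, v2),
   written zero-based as (0, 1, 2) and (0, 3, 1). */
static const Mesh quad = {
    .vertices    = { {0,0,0}, {1,0,0}, {1,1,0}, {0,-1,0} },
    .faces       = { {{0, 1, 2}}, {{0, 3, 1}} },
    .faceNormals = { {0,0,1}, {0,0,1} },
};
```

Sharing vertices through indices is what makes per-vertex normals (used later for Gouraud and Phong shading) cheap to store.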

Object-Order Viewing - Overview

A view is specified by:
- eye position (Eye)
- view direction vector (n)
- screen center position (Cop)
- screen orientation (u, v)
- screen width W and height H

u, v, n are orthonormal vectors.

After the viewing transform:
- the screen center is at the coordinate system origin
- the screen is aligned with the x- and y-axes
- the viewing vector points down the negative z-axis
- the eye is on the positive z-axis

All objects are transformed by the viewing transform.

Step 1: Viewing Transform

The sequence of transformations is:
- translate the screen center of projection (Cop) to the coordinate system origin (T_view)
- rotate the translated screen such that the view direction vector n points down the negative z-axis and the screen vectors u, v are aligned with the x- and y-axes (R_view)

We get M_view = R_view T_view and transform all object points (vertices) by M_view:

  [x']   [u_x u_y u_z 0]   [1 0 0 -Cop_x]   [x]
  [y'] = [v_x v_y v_z 0] . [0 1 0 -Cop_y] . [y]
  [z']   [n_x n_y n_z 0]   [0 0 1 -Cop_z]   [z]
  [1 ]   [0   0   0   1]   [0 0 0   1   ]   [1]

Now the objects are easy to project since the screen is in a convenient position - but first we have to account for perspective distortion.
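The two-step construction M_view = R_view T_view can be sketched in C; Mat4, Vec3, and the function names are illustrative, and the code assumes u, v, n are already orthonormal:

```c
#include <assert.h>
#include <math.h>

typedef struct { double m[4][4]; } Mat4;
typedef struct { double x, y, z; } Vec3;

/* Row-by-column product of two 4x4 matrices. */
static Mat4 mat_mul(Mat4 a, Mat4 b)
{
    Mat4 r = {{{0}}};
    for (int i = 0; i < 4; i++)
        for (int j = 0; j < 4; j++)
            for (int k = 0; k < 4; k++)
                r.m[i][j] += a.m[i][k] * b.m[k][j];
    return r;
}

/* M_view = R_view * T_view: translate Cop to the origin, then rotate the
   screen basis (u, v, n) onto the coordinate axes. */
static Mat4 view_matrix(Vec3 u, Vec3 v, Vec3 n, Vec3 cop)
{
    Mat4 R = {{{u.x, u.y, u.z, 0},
               {v.x, v.y, v.z, 0},
               {n.x, n.y, n.z, 0},
               {0,   0,   0,   1}}};
    Mat4 T = {{{1, 0, 0, -cop.x},
               {0, 1, 0, -cop.y},
               {0, 0, 1, -cop.z},
               {0, 0, 0, 1}}};
    return mat_mul(R, T);   /* translation is applied first */
}

/* Apply a 4x4 transform to a point (homogeneous w = 1). */
static Vec3 transform_point(Mat4 M, Vec3 p)
{
    Vec3 r;
    r.x = M.m[0][0]*p.x + M.m[0][1]*p.y + M.m[0][2]*p.z + M.m[0][3];
    r.y = M.m[1][0]*p.x + M.m[1][1]*p.y + M.m[1][2]*p.z + M.m[1][3];
    r.z = M.m[2][0]*p.x + M.m[2][1]*p.y + M.m[2][2]*p.z + M.m[2][3];
    return r;
}
```

A quick sanity check: transforming Cop itself must land on the origin, since after the viewing transform the screen center sits at the origin.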

Step 2: Perspective Projection

A (view-transformed) vertex with coordinates (x', y', z') projects onto the screen as follows, where eye is the distance of the eye from the screen plane along the positive z-axis:

  x_p = (eye x') / (eye - z')
  y_p = (eye y') / (eye - z')

x_p and y_p can be used to determine the screen coordinates of the object point (i.e., where to plot the point on the screen).

Step 1 + Step 2 = World-To-Screen Transform

Perspective projection can also be captured in a matrix M_proj, with a subsequent perspective divide by the homogeneous coordinate w:

  [x_h]   [eye  0   0   0 ]   [x']
  [y_h] = [ 0  eye  0   0 ] . [y']
  [z_h]   [ 0   0   1   0 ]   [z']
  [w  ]   [ 0   0  -1  eye]   [1 ]

  x_p = x_h / w,  y_p = y_h / w

So the entire world-to-screen transform is:

  M_trans = M_proj M_view = M_proj R_view T_view

with a subsequent divide by the homogeneous coordinate. M_trans is composed only once per view, and all object points (vertices) are multiplied by it.
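The matrix form and the perspective divide can be sketched in C; the function and type names are illustrative. Note that w = eye - z' reproduces exactly the denominator of the direct formulas from Step 2:

```c
#include <assert.h>
#include <math.h>

typedef struct { double x, y; } Vec2;

/* Project a view-transformed vertex (x', y', z') onto the screen:
   multiply by M_proj, then divide by the homogeneous coordinate w. */
static Vec2 project(double eye, double x, double y, double z)
{
    /* homogeneous result of M_proj * (x, y, z, 1)^T */
    double xh = eye * x;
    double yh = eye * y;
    double w  = eye - z;          /* from the last row (0 0 -1 eye) */

    Vec2 p = { xh / w, yh / w };  /* perspective divide */
    return p;
}
```

For example, with eye = 2 a vertex at (1, 1, -2) has w = 4 and projects to (0.5, 0.5): points farther behind the screen shrink toward the center, which is the expected perspective foreshortening.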

Step 3: Window Transform (1)

Note: our camera screen is still described in world coordinates, while our display monitor is described on a pixel raster of size (Nx, Ny). The transformation of (perspective) viewing coordinates into pixel coordinates is called the window transform.

Assume:
- we want to display the rendered screen image in a window of size (Nx, Ny) pixels
- the width and height of the camera screen in world coordinates are (W, H)
- the center of the camera screen is at the center of the screen coordinate system

Then:
- the valid range of object coordinates is (-W/2 ... +W/2, -H/2 ... +H/2)
- these have to be mapped into (0 ... Nx-1, 0 ... Ny-1):

  x_s = (x_p + W/2) (Nx - 1) / W
  y_s = (y_p + H/2) (Ny - 1) / H

Step 3: Window Transform (2)

The window transform can be written as the matrix M_window:

  [x_s]   [(Nx-1)/W     0      (Nx-1)/2]   [x_p]
  [y_s] = [   0      (Ny-1)/H  (Ny-1)/2] . [y_p]
  [1  ]   [   0         0         1    ]   [1  ]

After the perspective divide, all object points (vertices) are multiplied by M_window.

Note: we could factor the window transform into M_trans - in that case, there is only one matrix multiply per object point (vertex), followed by a perspective divide. The OpenGL graphics pipeline does this.
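The mapping from screen-space coordinates to pixel coordinates can be sketched in C, directly implementing the formulas above (function names are illustrative):

```c
#include <assert.h>

/* Map x_p in (-W/2 .. +W/2) to a pixel coordinate in (0 .. Nx-1). */
static double window_x(double xp, double W, int Nx)
{
    return (xp + W / 2.0) * (Nx - 1) / W;
}

/* Map y_p in (-H/2 .. +H/2) to a pixel coordinate in (0 .. Ny-1). */
static double window_y(double yp, double H, int Ny)
{
    return (yp + H / 2.0) * (Ny - 1) / H;
}
```

The endpoints confirm the mapping: x_p = -W/2 lands on pixel 0 and x_p = +W/2 lands on pixel Nx-1, so the camera screen exactly covers the raster.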

Orthographic (Parallel) Projection

Leave out the perspective mapping (Step 2) in the viewing pipeline. In orthographic projection, all object points project along parallel lines onto the screen: x_p = x' and y_p = y'.

Rendering the Polygonal Objects - The Hidden Surface Removal Problem

We have removed all faces that are definitely hidden: the back-faces. But even the surviving faces are only potentially visible - they may be obscured by faces closer to the viewer (e.g., face A of object 1 is partially obscured by face B of object 2, which lies between it and the screen).

The problem of identifying those face portions that are visible is called the hidden surface problem. Solutions:
- pre-ordering of the faces and subdivision into their visible parts before display (expensive)
- the z-buffer algorithm (cheap, fast, implementable in hardware)

The Z-Buffer (Depth-Buffer) Scan Conversion Algorithm

Two data structures:
- z-buffer: holds, for each image pixel, the z-coordinate of the closest object so far
- color buffer: holds, for each pixel, the closest object's color

Basic z-buffer algorithm:

  // initialize buffers
  for all (x, y)
      z-buffer(x, y) = -infinity
      color-buffer(x, y) = background color
  // scan-convert each front-face polygon
  for each front-face polygon
      for each scanline y that traverses the projected polygon
          for each pixel x in scanline y and the projected polygon
              if z_poly(x, y) > z-buffer(x, y)
                  z-buffer(x, y) = z_poly(x, y)
                  color-buffer(x, y) = color_poly(x, y)
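The per-pixel depth test above can be sketched in C. Since the view vector points down the negative z-axis, a larger z means closer to the eye, so the buffer is cleared to "minus infinity". Buffer sizes, colors-as-ints, and function names are illustrative:

```c
#include <assert.h>
#include <float.h>

#define NX 4
#define NY 4

static double zbuf[NY][NX];          /* depth of the closest object so far */
static int    cbuf[NY][NX];          /* its color, as an integer id here */

/* Initialize both buffers before scan-converting any polygon. */
static void clear_buffers(int background)
{
    for (int y = 0; y < NY; y++)
        for (int x = 0; x < NX; x++) {
            zbuf[y][x] = -DBL_MAX;   /* stands in for -infinity */
            cbuf[y][x] = background;
        }
}

/* Called for every pixel that a scan-converted polygon covers. */
static void plot(int x, int y, double z, int color)
{
    if (z > zbuf[y][x]) {            /* closer than what is stored? */
        zbuf[y][x] = z;
        cbuf[y][x] = color;
    }
}
```

Scan-converting face B of object 2 and then face A of object 1 (or in the opposite order) yields the same image: the depth test, not the drawing order, decides visibility.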

Polygon Shading Methods - Faceted Shading

How are the pixel colors determined in the z-buffer algorithm? The simplest method is flat or faceted shading:
- each polygon has a constant color
- compute the color at one point on the polygon (e.g., at the center) and use it everywhere
- assumption: light source and eye are far away, i.e., N.L and H.E are constant across the polygon

Problem: discontinuities are likely to appear at face boundaries.

Polygon Shading Methods - Gouraud Shading

Colors are averaged across polygons along common edges, so there are no more discontinuities. Steps:
- determine the average unit normal at each polygon vertex:

    N_v = (sum_{k=1..n} N_k) / |sum_{k=1..n} N_k|

  where n is the number of faces that have vertex v in common
- apply the illumination model at each polygon vertex to get the vertex color C_v
- linearly interpolate vertex colors across edges (e.g., C_12 between C_1 and C_2, C_23 between C_2 and C_3)
- linearly interpolate edge colors across scanlines (e.g., C_p between C_12 and C_23)

Downside: may miss specular highlights at off-vertex positions, or distort specular highlights.

Polygon Shading Methods - Phong Shading

Phong shading linearly interpolates normal vectors, not colors, giving more realistic specular highlights. Steps:
- determine the average normal at each vertex
- linearly interpolate normals across edges (e.g., N_12 between N_1 and N_2)
- linearly interpolate normals across scanlines (e.g., N_p between N_12 and N_23)
- apply the illumination model at each pixel to calculate the pixel color C_p

Downside: needs more calculations, since the illumination model is evaluated at each pixel.

Rendering With OpenGL (1)

(See also www.opengl.org.)

Define the viewing window:
  glMatrixMode(GL_PROJECTION);
  glOrtho(...);    /* for parallel projection */
  glFrustum(...);  /* for perspective projection */

Specify the viewpoint:
  glMatrixMode(GL_MODELVIEW);
  gluLookAt(...);  /* from the GLU library */

Model the scene with glTranslate*(), glRotate*(), glScale*(), ...

Modelview matrix stack (the transform written last is applied to the vertices first - rotate first, then translate, then the viewing transform):
  gluLookAt(...)
  glTranslate*(x, y, z)
  glRotate*(phi_y, 0, 1, 0)
  glRotate*(phi_z, 0, 0, 1)
  glRotate*(phi_x, 1, 0, 0)

OpenGL rendering pipeline: a vertex (x, y, z, w) in object coordinates is multiplied by the modelview matrix (giving eye coordinates), then by the projection matrix (giving clip coordinates); the perspective division yields normalized device coordinates, and the viewport transformation yields window coordinates.

Rendering With OpenGL (2)

Specify the light sources: glLight*()
Enable the z-buffer: glEnable(GL_DEPTH_TEST)
Enable lighting: glEnable(GL_LIGHTING)
Enable light source i: glEnable(GL_LIGHTi)  /* GL_LIGHTi is the symbolic name of light i */
Select the shading model: glShadeModel()  /* GL_FLAT or GL_SMOOTH */

For each object:

  /* duplicate the matrix on the stack if you want to apply
     extra transformations to this object */
  glPushMatrix();
  glTranslate*(); glRotate*(); glScale*();  /* object-specific transformations */
  /* for all polygons of the object, specify the polygon (a triangle here);
     color and normal must be set before the vertex they apply to */
  glBegin(GL_POLYGON);
      glColor3fv(c1); glNormal3fv(n1); glVertex3fv(v1);  /* vertex 1 */
      glColor3fv(c2); glNormal3fv(n2); glVertex3fv(v2);  /* vertex 2 */
      glColor3fv(c3); glNormal3fv(n3); glVertex3fv(v3);  /* vertex 3 */
  glEnd();
  glPopMatrix();  /* discard the object-specific transformations, restoring the saved matrix */

Texture Mapping - Realistic Detail for Boring Polygons

(Image credit: Leonard McMillan, UNC.)

Texture Mapping - Large Walls

Take pictures and map them as textures onto large polygons.

Texture Mapping Large Walls - OpenGL Program

  glEnable(GL_TEXTURE_2D);
  for each polygon:
      glBindTexture(GL_TEXTURE_2D, textureName);
      glBegin(GL_QUADS);
          glColor3fv(c1); glTexCoord2d(0.0, 0.0); glVertex3fv(v1);  /* vertex 1 */
          glColor3fv(c2); glTexCoord2d(0.0, 1.0); glVertex3fv(v2);  /* vertex 2 */
          glColor3fv(c3); glTexCoord2d(1.0, 1.0); glVertex3fv(v3);  /* vertex 3 */
          glColor3fv(c4); glTexCoord2d(1.0, 0.0); glVertex3fv(v4);  /* vertex 4 */
      glEnd();

The texture coordinates (0, 0), (0, 1), (1, 1), (1, 0) pin the four corners of the texture image to the four polygon vertices; as with normals, each texture coordinate must be set before the vertex it applies to.

Texture Mapping - Small Facets

(Image credit: Leonard McMillan, UNC.)

Texture Mapping Small Facets - OpenGL Program

  glEnable(GL_TEXTURE_2D);
  glBindTexture(GL_TEXTURE_2D, textureName);
  for each polygon i in the mesh:
      glBegin(GL_TRIANGLES);
          glColor3fv(c[i][0]); glTexCoord2fv(t[i][0]); glVertex3fv(v[i][0]);  /* vertex 1 */
          glColor3fv(c[i][1]); glTexCoord2fv(t[i][1]); glVertex3fv(v[i][1]);  /* vertex 2 */
          glColor3fv(c[i][2]); glTexCoord2fv(t[i][2]); glVertex3fv(v[i][2]);  /* vertex 3 */
      glEnd();

Each vertex v[i][k] of the polygon mesh carries its own texture coordinate t[i][k] into the shared texture image; for every rasterized screen pixel, the texel is interpolated from the texture image across the polygon.

Complete Graphics Pipeline

Polygon primitives pass through the vertex shader (vertex engine), which performs the vertex transformation into screen space. The transformed vertices are handed to the fragment shader (rasterizers), which performs rasterization, shading, and texture mapping; fragment colors are interpolated from the texture image. The resulting shaded and textured fragments are written to the screen display (framebuffer).

Graphics Hardware - Peeking Under The Hood

Graphics hardware accelerates the vertex and fragment shaders:
- (almost) fully programmable
- enables accurate physics and visuals - realistic games
- latest: real-time movie production on the PC
- accelerates even general-purpose, scientific, and numerical computations (GPGPU)