Perspective Projection and Texture Mapping

Lecture 7: Perspective Projection and Texture Mapping Computer Graphics CMU 15-462/15-662, Spring 2018

Perspective & Texture

PREVIOUSLY:
- transformation (how to manipulate primitives in space)
- rasterization (how to turn primitives into pixels)

TODAY:
- see where these two ideas come crashing together!
- revisit perspective transformations
- talk about how to map texture onto a primitive to get more detail
- and how perspective creates challenges for texture mapping!

Why is it hard to render an image like this?

Perspective Projection

Perspective projection: distant objects appear smaller, and parallel lines converge at the horizon.

Early painting: incorrect perspective. (Carolingian painting from the 8th-9th century.)

Evolution toward correct perspective:
- Ambrogio Lorenzetti, Annunciation, 1344
- Brunelleschi, elevation of Santo Spirito, Florence, 1434-83
- Masaccio, The Tribute Money, c. 1426-27, fresco, The Brancacci Chapel, Florence

Later rejection of proper perspective projection

Return of perspective in computer graphics

Rejection of perspective in computer graphics

Transformations: From Objects to the Screen

[WORLD COORDINATES] original description of objects
[VIEW COORDINATES] all positions now expressed relative to the camera; the camera sits at the origin looking down the -z direction
[CLIP COORDINATES] everything visible to the camera is mapped to the unit cube, from (-1,-1,-1) to (1,1,1), for easy clipping
[NORMALIZED COORDINATES] unit cube mapped to the unit square, from (-1,-1) to (1,1), via perspective divide
[WINDOW COORDINATES] screen transform: objects now in 2D screen coordinates, from (0,0) to (w,h); primitives are now 2D and can be drawn via rasterization

Review: simple camera transform. Consider an object positioned in the world at (10, 2, 0), and a camera at (4, 2, 0) looking down the x-axis. What transform places the object in a coordinate space where the camera is at the origin and looking directly down the -z axis? Translating object vertex positions by (-4, -2, 0) yields positions relative to the camera. A rotation about y by π/2 then gives the object's position in a new coordinate system where the camera's view direction is aligned with the -z axis.

Camera looking in a different direction. Consider a camera looking in an arbitrary view direction. What transform places the object in a coordinate space where the camera is at the origin, looking directly down the -z axis? Let w be the unit vector pointing opposite the view direction, and form an orthonormal basis u, v, w around it. Consider the rotation matrix

$R = \begin{pmatrix} u_x & v_x & w_x \\ u_y & v_y & w_y \\ u_z & v_z & w_z \end{pmatrix}$

R maps the x-axis to u, the y-axis to v, and the z-axis to w. How do we invert it? Why is the transpose the inverse? Because the basis is orthonormal:

$R^T u = (u \cdot u,\ v \cdot u,\ w \cdot u)^T = (1, 0, 0)^T$
$R^T v = (u \cdot v,\ v \cdot v,\ w \cdot v)^T = (0, 1, 0)^T$
$R^T w = (u \cdot w,\ v \cdot w,\ w \cdot w)^T = (0, 0, 1)^T$

so $R^{-1} = R^T$ maps u, v, w back to the coordinate axes, aligning the camera's view direction with the -z axis.
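As a concrete sketch of this construction (assuming numpy; the function name and the choice of a world "up" vector to complete the basis are ours, not from the lecture):

```python
import numpy as np

def camera_rotation(view_dir, up=(0.0, 1.0, 0.0)):
    """Return R^T: maps world directions into a frame where the camera
    looks down -z. Assumes view_dir is not parallel to up."""
    w = -np.asarray(view_dir, float)
    w /= np.linalg.norm(w)                       # backward axis (w = -view)
    u = np.cross(up, w); u /= np.linalg.norm(u)  # right axis
    v = np.cross(w, u)                           # true up axis
    R = np.column_stack([u, v, w])               # maps x->u, y->v, z->w
    return R.T                                   # inverse of a rotation = transpose

# The slide's example: after translating by (-4,-2,0), the object sits at
# (6,0,0) and the camera looks down +x; the rotation lands it on the -z axis.
print(camera_rotation((1.0, 0.0, 0.0)) @ np.array([6.0, 0.0, 0.0]))  # -> [0. 0. -6.]
```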

View frustum. The view frustum is the region the camera (here, a pinhole camera) can see. The top/bottom/left/right planes correspond to the sides of the screen; the near/far planes (znear, zfar) correspond to the closest and farthest things we want to draw.

Clipping. In a real-time graphics pipeline, clipping is the process of eliminating triangles that aren't visible to the camera:
- Don't waste time computing pixels (or really, fragments) you can't see!
- Even tossing out individual fragments is expensive ("fine granularity")
- Makes more sense to toss out whole primitives ("coarse granularity")
- Still need to deal with primitives that are partially clipped
(Figure from: https://paroj.github.io/gltut/)

Aside: Near/Far Clipping. But why near/far clipping?
- Some primitives (e.g., triangles) may have vertices both in front of & behind the eye! (Causes headaches for rasterization, e.g., checking if fragments are behind the eye)
- Also important for dealing with the finite precision of the depth buffer / limitations on storing depth as floating point values
[DEMO] Z-fighting: compare near = 10^-1, far = 10^3 with near = 10^-5, far = 10^5. Floating point has more resolution near zero, hence more precise resolution of primitive-primitive intersections.

Mapping frustum to unit cube. Before mapping to 2D, map the corners of the frustum to the corners of the cube from (-1,-1,-1) to (1,1,1). Why do we do this?
1. Makes clipping much easier!
- can quickly discard points outside the range [-1,1]
- need to think a bit about partially-clipped triangles
2. Different maps to the cube yield different effects
- specifically, perspective or orthographic views
- perspective is a projective (not affine) transformation, implemented via homogeneous coordinates
- for an orthographic view, just use the identity matrix!
Perspective: set the homogeneous coordinate to z; distant objects get smaller. Orthographic: set the homogeneous coordinate to 1; distant objects remain the same size.

Review: homogeneous coordinates. A 2D point $x = (x, y)$ is represented in 2D-H as $\hat{x} = (x, y, 1)^T$, i.e., with $w = 1$. Many points in 2D-H correspond to the same point in 2D: $\hat{x}$ and $w\hat{x} = (wx, wy, w)^T$ correspond to the same 2D point (divide by $w$ to convert 2D-H back to 2D).

Perspective vs. Orthographic Projection. The most basic version of the perspective matrix: objects shrink with distance. The most basic version of the orthographic matrix: objects stay the same size. (Real projection matrices are a bit more complicated! :-))
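A minimal sketch of these two basic matrices in code (assuming numpy, and ignoring the frustum geometry and sign conventions that the real matrices on the next slide handle):

```python
import numpy as np

persp = np.array([[1., 0., 0., 0.],
                  [0., 1., 0., 0.],
                  [0., 0., 1., 0.],
                  [0., 0., 1., 0.]])   # copies z into w: divide shrinks distant points

ortho = np.eye(4)                      # w stays 1: distant objects keep their size

def project(M, p):
    x, y, z, w = M @ np.append(p, 1.0)
    return np.array([x, y, z]) / w     # homogeneous (perspective) divide

print(project(persp, [1.0, 1.0, 2.0]))  # -> [0.5 0.5 1. ]  (farther = smaller)
print(project(ortho, [1.0, 1.0, 2.0]))  # -> [1. 1. 2.]     (unchanged)
```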

Matrix for Perspective Transform. The real perspective matrix takes into account the geometry of the view frustum: left (l), right (r), top (t), bottom (b), near (n), far (f). For a derivation: http://www.songho.ca/opengl/gl_projectionmatrix.html
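A sketch of that frustum matrix, following the OpenGL-style derivation linked above (assuming numpy and the usual right-handed, camera-looking-down-negative-z convention):

```python
import numpy as np

def frustum(l, r, b, t, n, f):
    """OpenGL-style perspective matrix for frustum (l,r,b,t,n,f)."""
    return np.array([
        [2*n/(r-l), 0.0,        (r+l)/(r-l),  0.0        ],
        [0.0,       2*n/(t-b),  (t+b)/(t-b),  0.0        ],
        [0.0,       0.0,       -(f+n)/(f-n), -2*f*n/(f-n)],
        [0.0,       0.0,       -1.0,          0.0        ],
    ])

# Points inside the frustum land in the [-1,1]^3 cube after the divide:
P = frustum(-1, 1, -1, 1, 1, 100)
p = P @ np.array([0.0, 0.0, -1.0, 1.0])  # a point on the near plane
print(p[:3] / p[3])                      # -> [ 0.  0. -1.]
```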

Review: screen transform. After the divide, coordinates in [-1,1] have to be stretched to fit the screen. All points within the (-1,-1) to (1,1) region are on screen; (1,1) in normalized space maps to (W,0) in screen space (a W x H output image, with (0,0) at the top-left and (W,H) at the bottom-right). Step 1: reflect about the x-axis. Step 2: translate by (1,1). Step 3: scale by (W/2, H/2). (A small function implementing these steps follows.)
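The three steps as a sketch in code (names are ours):

```python
def screen_transform(x, y, W, H):
    """Map normalized [-1,1]^2 coordinates to a W x H image whose origin
    (0,0) is the top-left corner and (W,H) the bottom-right."""
    x, y = x, -y                       # step 1: reflect about the x-axis
    x, y = x + 1.0, y + 1.0            # step 2: translate by (1,1)
    return x * W / 2.0, y * H / 2.0    # step 3: scale by (W/2, H/2)

print(screen_transform(1.0, 1.0, 640, 480))    # -> (640.0, 0.0)
print(screen_transform(-1.0, -1.0, 640, 480))  # -> (0.0, 480.0)
```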

Transformations: From Objects to the Screen (with the transforms named)

[WORLD COORDINATES] original description of objects
-- view transform -->
[VIEW COORDINATES] all positions now expressed relative to the camera; the camera sits at the origin looking down the -z direction
-- projection transform -->
[CLIP COORDINATES] everything visible to the camera is mapped to the unit cube, from (-1,-1,-1) to (1,1,1), for easy clipping
-- perspective divide -->
[NORMALIZED COORDINATES] unit cube mapped to the unit square, from (-1,-1) to (1,1)
-- screen transform -->
[WINDOW COORDINATES] primitives are now in 2D screen coordinates, from (0,0) to (w,h), and can be drawn via rasterization

Coverage(x,y). In lecture 2 we discussed how to sample coverage given the 2D positions of the triangle's vertices a, b, c.

Consider sampling color(x,y). Given vertex colors a = red [1,0,0], b = green [0,1,0], c = blue [0,0,1]: what is the triangle's color at the point x?

Review: interpolation in 1D. $f_{\text{recon}}(x)$ is the linear interpolation between the values of the two samples closest to x. Between $x_2$ and $x_3$: $f_{\text{recon}}(t) = (1-t)f(x_2) + t f(x_3)$, where $t = (x - x_2)/(x_3 - x_2)$.
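A minimal sketch of this reconstruction, assuming unit-spaced samples stored in a list (names are ours):

```python
def reconstruct(samples, x):
    """Linearly interpolate between the two samples nearest x,
    assuming unit spacing: x_i = i."""
    i = max(0, min(int(x), len(samples) - 2))  # index of left neighbor x_i
    t = x - i                                  # t = (x - x_i) / (x_{i+1} - x_i)
    return (1 - t) * samples[i] + t * samples[i + 1]

f = [0.0, 2.0, 1.0, 3.0]
print(reconstruct(f, 1.5))   # halfway between f(x_1)=2 and f(x_2)=1 -> 1.5
```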

Consider similar behavior on a triangle with c = blue [0,0,1] and a = b = black [0,0,0]. Color depends on the distance from the edge $\overline{ba}$: the color at x is $t\,[0,0,1]^T + (1-t)\,[0,0,0]^T$, where t = (distance from x to line ba) / (distance from c to line ba). How can we interpolate in 2D between three values?

Interpolation via barycentric coordinates. The vectors (b-a) and (c-a) form a (non-orthogonal) basis for points in the triangle, with origin at a:

$x = a + \beta(b-a) + \gamma(c-a) = (1-\beta-\gamma)a + \beta b + \gamma c = \alpha a + \beta b + \gamma c$, with $\alpha + \beta + \gamma = 1$

The color at x is the same linear combination of the colors at the three triangle vertices:

$\text{color}(x) = \alpha\,\text{color}(a) + \beta\,\text{color}(b) + \gamma\,\text{color}(c)$

(a = red [1,0,0], b = green [0,1,0], c = blue [0,0,1])

Barycentric coordinates as scaled distances. The coordinate $\beta$ is proportional to the distance from x to the edge $\overline{ca}$: compute the distance of x from line ca, then divide by the distance of b from line ca (the "height"). (Similarly for the other two barycentric coordinates.) (Q: Is the mapping from x to barycentric coordinates affine? Linear?)

Barycentric coordinates as ratio of areas. Barycentric coordinates are also ratios of signed areas: $\alpha = A_A/A$, $\beta = A_B/A$, $\gamma = A_C/A$, where $A_A$, $A_B$, $A_C$ are the areas of the sub-triangles opposite vertices a, b, c, and A is the area of the whole triangle. Why must the coordinates sum to one? Why must they be between 0 and 1 (for points inside the triangle)? (A similar idea also works for, e.g., a tetrahedron.)
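A sketch that computes barycentric coordinates from signed areas and interpolates vertex colors as described above (assuming numpy; the cross-product form of the signed area is a standard identity, not spelled out in the transcript):

```python
import numpy as np

def signed_area(p, q, r):
    """Twice the signed area of triangle pqr (sign encodes orientation)."""
    return (q[0]-p[0])*(r[1]-p[1]) - (q[1]-p[1])*(r[0]-p[0])

def barycentric(x, a, b, c):
    A = signed_area(a, b, c)
    alpha = signed_area(x, b, c) / A   # sub-triangle opposite a
    beta  = signed_area(a, x, c) / A   # sub-triangle opposite b
    gamma = signed_area(a, b, x) / A   # sub-triangle opposite c
    return alpha, beta, gamma          # sums to 1 by construction

a, b, c = np.array([0., 0.]), np.array([1., 0.]), np.array([0., 1.])
red, green, blue = np.eye(3)           # vertex colors as on the slide
alpha, beta, gamma = barycentric(np.array([0.25, 0.25]), a, b, c)
print(alpha*red + beta*green + gamma*blue)   # color at x -> [0.5 0.25 0.25]
```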

Perspective-incorrect interpolation. Due to perspective projection (the homogeneous divide), barycentric interpolation of values on a triangle with different vertex depths is not an affine function of screen XY coordinates. Attribute values must be interpolated linearly in 3D object space. (Figure: on screen, the midpoint between attribute values A_0 and A_1 does not in general take the value (A_0 + A_1)/2.)

Example: perspective-incorrect interpolation. A good example is a quadrilateral split into two triangles: if we compute barycentric coordinates using the 2D (projected) coordinates, this can lead to a (derivative) discontinuity in the interpolation where the quad was split.

Perspective Correct Interpolation. Basic recipe, to interpolate some attribute ɸ:
- Compute depth z at each vertex
- Evaluate Z := 1/z and P := ɸ/z at each vertex
- Interpolate Z and P using standard (2D) barycentric coordinates
- At each fragment, divide the interpolated P by the interpolated Z to get the final value
For a derivation, see Low, "Perspective-Correct Interpolation". A sketch in code follows.
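Here is a minimal sketch of that recipe for a single fragment (function and argument names are ours):

```python
def perspective_correct(bary, phi, z):
    """bary: screen-space barycentric coords (alpha, beta, gamma);
    phi: attribute value at each vertex; z: depth at each vertex."""
    P = sum(w * p / d for w, p, d in zip(bary, phi, z))  # interpolate phi/z
    Z = sum(w / d for w, d in zip(bary, z))              # interpolate 1/z
    return P / Z                                         # per-fragment divide

# Screen-space midpoint of an edge whose endpoints sit at depths 1 and 3:
print(perspective_correct((0.5, 0.5, 0.0), (0.0, 1.0, 0.0), (1.0, 3.0, 1.0)))
# -> 0.25, not 0.5: the correct value is biased toward the nearer vertex
```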

Texture Mapping

Many uses of texture mapping. Define variation in surface reflectance, e.g. the pattern on a ball or the wood grain on a floor.

Describe surface material properties Multiple layers of texture maps for color, logos, scratches, etc.

Normal & Displacement Mapping. Normal mapping: use a texture value to perturb the surface normal to fake the appearance of a bumpy surface (note: the smooth silhouette and shadow reveal that the surface geometry is not actually bumpy!). Displacement mapping: dice up the surface geometry into tiny triangles and offset positions according to texture values (note the bumpy silhouette and shadow boundary).

Represent precomputed lighting and shadows Grace Cathedral environment map Environment map used in rendering

Texture coordinates. Texture coordinates define a mapping from surface coordinates (points on the triangle) to points in the texture domain. Example: eight triangles (one face of a cube) with a surface parameterization provided as per-vertex texture coordinates (0.0, 0.0), (0.5, 0.0), (1.0, 0.0), (0.0, 0.5), (0.5, 0.5), (1.0, 0.5), (0.0, 1.0), (0.5, 1.0), (1.0, 1.0). mytex(u,v) is a function defined on the $[0,1]^2$ domain (represented by a 2048x2048 image). The figure highlights one triangle's location in texture space (shown in red), the final rendered result (entire cube shown), and the triangle's location after projection onto the screen. (We'll assume the surface-to-texture space mapping is provided as per-vertex values.)

Visualization of texture coordinates. Texture coordinates are linearly interpolated over the triangle; the figure shades the triangle from (0.0, 0.0) (red) to (1.0, 0.0) and (0.0, 1.0) (green).

More complex mapping. Visualization of texture coordinates: each triangle vertex has a coordinate (u,v) in texture space. (Actually coming up with these coordinates is another story!)

Texture mapping adds detail. (Rendered result, with triangle vertices shown in texture space.)

Texture mapping adds detail: rendering without texture vs. rendering with texture, plus a zoom of the texture image. Each triangle copies a piece of the image back to the surface.

Another example: Sponza Notice texture coordinates repeat over surface.

Textured Sponza

Example textures used in Sponza

Texture Sampling 101. Basic algorithm for mapping texture to surface:
- Interpolate U and V coordinates across the triangle
- For each fragment:
  - Sample (evaluate) the texture at (U,V)
  - Set the color of the fragment to the sampled texture value
...sadly it's not this easy in general! (A naive sketch follows.)
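A naive sketch of this basic algorithm for one fragment, using an unfiltered nearest-neighbor lookup (assuming numpy; the array layout and names are ours):

```python
import numpy as np

def sample_nearest(tex, u, v):
    """Unfiltered point sample: just look up the nearest texel."""
    H, W = tex.shape[:2]
    i = min(int(v * H), H - 1)   # row from v
    j = min(int(u * W), W - 1)   # column from u
    return tex[i, j]

tex = np.zeros((64, 64, 3))
tex[::2, :] = 1.0                     # stripes: high-frequency content
print(sample_nearest(tex, 0.5, 0.5))  # a single, unfiltered point sample
# Point-sampling this texture at too low a rate is exactly what aliases.
```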

Texture space samples. Sample positions are uniformly distributed in XY screen space (the rasterizer samples the triangle's appearance at these locations), but the corresponding sample positions in texture space (where the texture function is sampled), numbered 1-5 in the figure, are not uniformly spaced.

Recall: aliasing. Undersampling a high-frequency signal f(x) can result in aliasing (1D example shown; 2D examples: Moiré patterns, jaggies).

Aliasing due to undersampling texture No pre-filtering of texture data (resulting image exhibits aliasing) Rendering using pre-filtered texture data

Aliasing due to undersampling (zoom) No pre-filtering of texture data (resulting image exhibits aliasing) Rendering using pre-filtered texture data

Filtering textures.
Minification:
- Area of a screen pixel maps to a large region of texture (filtering required -- averaging)
- One texel corresponds to far less than a pixel on screen
- Example: when the scene object is very far away
Magnification:
- Area of a screen pixel maps to a tiny region of texture (interpolation required)
- One texel maps to many screen pixels
- Example: when the camera is very close to the scene object (need a higher-resolution texture map)
Figure credit: Akeley and Hanrahan

Filtering textures. Texture minification (actual texture: 700x700 image; only a crop is shown) vs. texture magnification (actual texture: 64x64 image).

Mipmap (L. Williams '83). Idea: prefilter the texture data to remove high frequencies. Texels at higher levels store the integral of the texture function over a region of texture space (downsampled images); texels at higher levels represent a low-pass filtered version of the original texture signal. Level 0 = 128x128, Level 1 = 64x64, Level 2 = 32x32, Level 3 = 16x16, Level 4 = 8x8, Level 5 = 4x4, Level 6 = 2x2, Level 7 = 1x1.

Mipmap (L. Williams '83). Williams' original proposed mip-map layout packs all levels into one (u,v) image; the mip hierarchy level is denoted d. What is the storage overhead of a mipmap? (See the summary below: the level sizes form a geometric series, totaling 4/3 the original size.) Slide credit: Akeley and Hanrahan

Computing Mip Map Level. Compute differences between the texture coordinate values of neighboring screen samples (shown in both screen space and texture space).

Computing Mip Map Level. Compute differences between the texture coordinate values of neighboring fragments (u,v)_00, (u,v)_10, (u,v)_01:

du/dx = u_10 - u_00    dv/dx = v_10 - v_00
du/dy = u_01 - u_00    dv/dy = v_01 - v_00

These differentials measure the footprint L of a screen pixel in texture space; the mip-map level is d = log2(L).
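A sketch of this computation (assuming numpy, derivatives already expressed in texel units, and taking L as the longer edge of the approximate pixel footprint -- a standard convention that the transcript does not spell out):

```python
import numpy as np

def mip_level(du_dx, dv_dx, du_dy, dv_dy):
    # L = length of the longer edge of the (approximate) pixel footprint
    L = max(np.hypot(du_dx, dv_dx), np.hypot(du_dy, dv_dy))
    return max(0.0, np.log2(L))   # d = log2(L), clamped at the finest level

# A pixel step covering ~4 texels in each direction lands on level 2:
print(mip_level(4.0, 0.0, 0.0, 4.0))   # -> 2.0
```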

Sponza (bilinear resampling at level 0)

Sponza (bilinear resampling at level 2)

Sponza (bilinear resampling at level 4)

Visualization of mip-map level (bilinear filtering only: d clamped to nearest level)

Tri-linear filtering. Bilinear resampling (four neighboring texels at mip-map level d): four texel reads, 3 lerps (3 mul + 6 add). Trilinear resampling (bilinear at levels d and d+1, then blend): eight texel reads, 7 lerps (7 mul + 14 add). Figure credit: Akeley and Hanrahan. A sketch of both operations follows.
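A sketch of both resampling operations (assuming numpy, a list `mips` of progressively halved images with level 0 finest, and normalized (u,v); the clamping details are ours):

```python
import numpy as np

def lerp(a, b, t):
    return (1 - t) * a + t * b

def bilinear(img, x, y):
    """4 texel reads, 3 lerps, at one mip level."""
    x0, y0 = int(x), int(y)
    x1, y1 = min(x0 + 1, img.shape[1] - 1), min(y0 + 1, img.shape[0] - 1)
    tx, ty = x - x0, y - y0
    top = lerp(img[y0, x0], img[y0, x1], tx)   # lerp 1
    bot = lerp(img[y1, x0], img[y1, x1], tx)   # lerp 2
    return lerp(top, bot, ty)                  # lerp 3

def trilinear(mips, u, v, d):
    """8 texel reads, 7 lerps, blending levels floor(d) and floor(d)+1."""
    d0 = max(0, min(int(d), len(mips) - 2))
    def sample_level(level):
        H, W = mips[level].shape[:2]
        return bilinear(mips[level], u * (W - 1), v * (H - 1))
    return lerp(sample_level(d0), sample_level(d0 + 1), d - d0)  # 7th lerp
```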

Visualization of mip-map level (trilinear filtering: visualization of continuous d)

Pixel area may not map to an isotropic region in texture space; proper filtering requires an anisotropic filter footprint. (Figure: texture space viewed from a camera with perspective. Trilinear (isotropic) filtering overblurs in the u direction; anisotropic filtering does not.) Modern solution: combine multiple mip map samples.

Summary: texture filtering using the mip map.
Small storage overhead (33%)
- The mipmap is 4/3 the size of the original texture image (1 + 1/4 + 1/16 + ... = 4/3)
For each isotropically-filtered sampling operation:
- Constant filtering cost (independent of mip map level)
- Constant number of texels accessed (independent of mip map level)
Combat aliasing with prefiltering, rather than supersampling
- Recall: we used supersampling to address the aliasing problem when sampling coverage
Bilinear/trilinear filtering is isotropic and thus will overblur to avoid aliasing
- Anisotropic texture filtering provides higher image quality at higher compute and memory bandwidth cost (in practice: multiple mip map samples)

Real Texture Sampling
1. Compute u and v from the screen sample (x,y) (via evaluation of attribute equations)
2. Compute du/dx, du/dy, dv/dx, dv/dy differentials from screen-adjacent samples
3. Compute mip map level d
4. Convert normalized [0,1] texture coordinates (u,v) to texture coordinates (U,V) in [0,W] x [0,H]
5. Determine the texels required by the filter window
6. Load the required texels (eight texels for trilinear filtering)
7. Perform tri-linear interpolation according to (U, V, d)
Takeaway: a texture sampling operation is not just an image pixel lookup! It involves a significant amount of math. For this reason, modern GPUs have dedicated fixed-function hardware support for performing texture sampling operations.

Texturing summary
- Texture coordinates: define a mapping between points on the triangle's surface (object coordinate space) and points in texture coordinate space
- Texture mapping is a sampling operation and is prone to aliasing
  - Solution: prefilter the texture map to eliminate high frequencies in the texture signal
  - Mip-map: precompute and store multiple resampled versions of the texture image (each with a different amount of low-pass filtering)
  - During rendering: dynamically select how much low-pass filtering is required based on the distance between neighboring screen samples in texture space
  - The goal is to retain as much high-frequency content (detail) in the texture as possible, while avoiding aliasing