CMSC 23700 Introduction to Computer Graphics (Autumn 2015)
Project 3 (October 20, 2015)

Advanced Lighting Techniques
Due: Monday, November 2 at 10pm

1 Introduction

This assignment is the third and final part of the first project. In this part, you will add support both for richer modeling and for more advanced rendering modes. The assignment builds on your Project 2 code by adding the following features to your viewer:

- Orientation for objects: objects in the scene description now come with a frame that defines their orientation in world space.

- Heightfields: these are a compact way of representing terrain meshes (and they figure prominently in the final project).

- Normal mapping: this is a technique for creating the illusion of an uneven surface by perturbing the surface normals.

- Shadow mapping: this is a technique for adding shadows to the scene.

You should start by fixing any issues with your Project 2 code. For extra credit, you can implement a rendering mode that supports both normal mapping and shadowing.

2 Description

2.1 Scenes

We extend the scene format to support the new features. The extensions are the addition of a shadow factor, a ground object, and object frames to the scene.json file, and the addition of normal maps to the object materials (see Figure 1 for an example). We will provide an enhanced version of the Scene class that handles the new features. As before, your code will use the scene object to initialize your view state and object representations.

The lighting information contains an extra field, shadow, that specifies the darkness of the shadows. This value is called the shadow factor; see Section 2.5 for details.

Ground objects are represented as height fields (described below in Section 2.3). Each scene has at most one ground object, which will be specified as a JSON object with the following fields:

{ } "lighting" : { "direction" : { "x" : 0, "y" : -1, "z" : 0}, "intensity" : { "r" : 0.8, "b" : 0.8, "g" : 0.8}, "ambient" : { "r" : 0.2, "b" : 0.2, "g" : 0.2}, "shadow" : 0.25 }, "camera" : { "size" : { "wid" : 1024, "ht" : 768 }, "fov" : 120, "pos" : { "x" : 0, "y" : 3, "z" : -8 }, "look-at" : { "x" : 0, "y" : 3, "z" : 0 }, "up" : { "x" : 0, "y" : 1, "z" : 0 } }, "ground" : { "size" : { "wid" : 50, "ht" : 50 }, "v-scale" : 0.1, "height-field" : "ground-hf.png", "color" : { "r" : 0.5, "b" : 0.5, "g" : 0.5 }, "color-map" : "ground-cmap.png", "normal-map" : "ground-nmap.png" }, "objects" : [ { "file" : "box.obj", "color" : { "r" : 0, "b" : 1, "g" : 0 }, "pos" : { "x" : 0, "y" : 0, "z" : 0 }, "frame" : { "x-axis" : { "x" : 1, "y" : 0, "z" : -1}, "y-axis" : { "x" : 0, "y" : 1, "z" : 0}, "z-axis" : { "x" : 1, "y" : 0, "z" : 1} } } ] Figure 1: An example scene.json file 1. The size field specifies the width and height of the ground object in world-space units. By convention, the ground object is centered on the origin. 2. The v-scale field specifies the vertical scale in world-space units. 3. The height-field field specifies the name of a gray-scale PNG file, which defines the offsets from the XZ plane. 4. The color field specifies the color used to render the ground object in wireframe, flatshading, and diffuse modes. 5. The color-map field specifies a texture image used to render the ground object in texturing and normal-mapping modes. 6. The normal-map field specifies a texture image used to render the ground object in normalmapping model. Note that this file is not the definition of the vertex normals for the ground 2

The other change to the format of the scene.json file is the addition of object frames. A frame is a JSON object with three fields, x-axis, y-axis, and z-axis, that represent the object space's basis in world coordinates.

2.2 Object frames

Associated with each object instance in the scene is a frame that defines how the X, Y, and Z axes are oriented and scaled in world space. For example, the frame $\langle 1,0,0\rangle, \langle 0,1,0\rangle, \langle 0,0,1\rangle$ specifies the identity transformation, since the object-space axes coincide with the world-space axes. If we wanted to rotate an object by 90 degrees around the Y axis (counter-clockwise) and double its height, we would specify the frame $\langle 0,0,-1\rangle, \langle 0,2,0\rangle, \langle 1,0,0\rangle$. Taking the frame vectors as the columns of a $3 \times 3$ matrix and combining it with the object's position gives an affine transformation that maps points in object space to world space. Note that the transformation is not restricted to be isotropic. A sketch of building this transformation follows.
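For concreteness, here is a minimal sketch of turning a frame and position into a model matrix, assuming the glm math library; the Object struct and its field names are hypothetical stand-ins for however the provided Scene class exposes the frame and position.

    #include <glm/glm.hpp>

    // Hypothetical per-object data as loaded from scene.json.
    struct Object {
        glm::vec3 xAxis, yAxis, zAxis;  // object-space basis in world coordinates
        glm::vec3 pos;                  // object position in world space
    };

    // Build the object-to-world (model) matrix from the frame and position.
    glm::mat4 modelMatrix (Object const &obj)
    {
        // glm matrices are column-major, so each frame vector is a column
        // of the upper-left 3x3 block, and the position is the fourth column.
        return glm::mat4 (
            glm::vec4 (obj.xAxis, 0.0f),
            glm::vec4 (obj.yAxis, 0.0f),
            glm::vec4 (obj.zAxis, 0.0f),
            glm::vec4 (obj.pos,   1.0f));
    }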

2.3 Heightfields

Heightfields are a special case of mesh data structure in which only one number, the height, is stored per vertex; the other two coordinates are implicit in the grid position. (Height field data is often stored as a grayscale image.) If the world-space size of the ground object is $wid \times ht$ and the height field image size is $w \times h$, then the horizontal scaling factors in the X and Z dimensions are given by

    $s_x = \frac{wid}{w-1}$   and   $s_z = \frac{ht}{h-1}$

If $s_v$ is the vertical scale and $H$ is the height field, then the 3D coordinate of the vertex in row $i$ and column $j$ is

    $h_{i,j} = \langle x_0 + s_x\,j,\; s_v\,H[i,j],\; z_0 + s_z\,i \rangle$

where the upper-left corner of the height field has X and Z coordinates $x_0$ and $z_0$ (respectively). By convention, the top of the height field is north; thus, assuming a right-handed coordinate system, the positive X axis points east, the positive Y axis points up, and the positive Z axis points south.

Because of their regular structure, height fields are trivial to triangulate; for example, Figure 2 gives two possible triangulations of a $5 \times 5$ grid. The triangulation on the left lends itself to rendering as triangle fans, while the triangulation on the right can be rendered as a triangle strip. Rendering with these primitives reduces the number of vertices that are submitted; for example, rendering the left triangulation using fans of eight triangles each will reduce the number of vertices by about 45% (each fan of eight triangles takes 10 indices plus a restart index; see the documentation for glPrimitiveRestartIndex). But the space savings will be much less when using glDrawElements, since the indices only account for about 17% of the VAO data. Therefore, we recommend rendering the ground using GL_TRIANGLES, as in the sketch below.

[Figure 2: Possible height field triangulations: a fan-friendly cutting (left) and a strip-friendly cutting (right) of a 5 x 5 grid]
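To make the grid indexing concrete, here is a minimal sketch of building the ground mesh for GL_TRIANGLES, assuming glm and assuming the PNG has already been decoded into one 8-bit height value per texel; the VAO/VBO plumbing is elided, and whether heights are used raw or rescaled is up to your representation.

    #include <cstdint>
    #include <vector>
    #include <glm/glm.hpp>

    // Build heightfield vertex positions and GL_TRIANGLES indices, following
    // the conventions of Section 2.3: row i runs south (+Z), column j runs
    // east (+X), and the ground is centered on the origin.
    void buildGround (
        int w, int h,                       // height-field image size
        float wid, float ht,                // world-space size of the ground
        float vScale,                       // the v-scale field
        std::vector<uint8_t> const &heights, // decoded PNG, row-major, w*h texels
        std::vector<glm::vec3> &verts,      // output: one vertex per texel
        std::vector<uint32_t> &indices)     // output: three indices per triangle
    {
        float sx = wid / float(w - 1);
        float sz = ht / float(h - 1);
        float x0 = -0.5f * wid;             // upper-left (northwest) corner
        float z0 = -0.5f * ht;
        // vertex positions: h_{i,j} = <x0 + sx*j, sv*H[i,j], z0 + sz*i>
        for (int i = 0; i < h; i++) {
            for (int j = 0; j < w; j++) {
                float y = vScale * float(heights[i*w + j]);
                verts.push_back (glm::vec3 (x0 + sx*j, y, z0 + sz*i));
            }
        }
        // two triangles per grid cell, wound counter-clockwise when
        // viewed from above (+Y)
        for (int i = 0; i < h-1; i++) {
            for (int j = 0; j < w-1; j++) {
                uint32_t nw = i*w + j, ne = nw + 1;
                uint32_t sw = nw + w,  se = sw + 1;
                indices.insert (indices.end(), { nw, sw, ne,  ne, sw, se });
            }
        }
    }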

Unlike an object specified in an OBJ file, a height field does not come with normal vectors, so you will need to compute these as part of your initialization. There are a couple of different ways to compute the normals, but most approaches involve averaging the normals of the surrounding faces or edges.

For example, consider the vertex $h_{i,j}$. It has four neighboring right triangles ($A$, $B$, $C$, and $D$) for which it is the apex, as illustrated in the following diagram (note that these triangles are not the ones in the triangulation):

                   h_{i-1,j}
                       |
                  A    |    B
                       |
    h_{i,j-1} ------ h_{i,j} ------ h_{i,j+1}
                       |
                  C    |    D
                       |
                   h_{i+1,j}

To compute the vertex normal $n_{i,j}$ for $h_{i,j}$, we start by computing the four edge vectors radiating out from $h_{i,j}$:

    $e_{-1,0} = h_{i-1,j} - h_{i,j}$
    $e_{0,-1} = h_{i,j-1} - h_{i,j}$
    $e_{0,1}  = h_{i,j+1} - h_{i,j}$
    $e_{1,0}  = h_{i+1,j} - h_{i,j}$

Then we can compute the face normals of the triangles:

    $n_A = \frac{e_{-1,0} \times e_{0,-1}}{\| e_{-1,0} \times e_{0,-1} \|}$
    $n_B = \frac{e_{0,1} \times e_{-1,0}}{\| e_{0,1} \times e_{-1,0} \|}$
    $n_C = \frac{e_{0,-1} \times e_{1,0}}{\| e_{0,-1} \times e_{1,0} \|}$
    $n_D = \frac{e_{1,0} \times e_{0,1}}{\| e_{1,0} \times e_{0,1} \|}$

The vertex normal is then just the average of the triangle normals:

    $n_{i,j} = \frac{1}{4} (n_A + n_B + n_C + n_D)$

The remaining detail is to remember that vertices at the corners and edges of the grid have fewer adjacent faces.
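The following sketch transcribes these formulas for an interior vertex, reusing the vertex layout from the previous sketch; the corner and edge cases must be handled separately.

    // Vertex normal for an interior vertex (i,j): the average of the four
    // surrounding triangle normals (Section 2.3).  A minimal sketch only;
    // boundary vertices have fewer adjacent faces.
    glm::vec3 vertexNormal (
        std::vector<glm::vec3> const &verts, int w, int i, int j)
    {
        glm::vec3 hij = verts[i*w + j];
        // the four edge vectors radiating out from h_{i,j}
        glm::vec3 eN = verts[(i-1)*w + j] - hij;   // e_{-1,0} (north)
        glm::vec3 eW = verts[i*w + (j-1)] - hij;   // e_{0,-1} (west)
        glm::vec3 eE = verts[i*w + (j+1)] - hij;   // e_{0,1}  (east)
        glm::vec3 eS = verts[(i+1)*w + j] - hij;   // e_{1,0}  (south)
        // face normals of the four surrounding triangles (all upward-facing)
        glm::vec3 nA = glm::normalize (glm::cross (eN, eW));
        glm::vec3 nB = glm::normalize (glm::cross (eE, eN));
        glm::vec3 nC = glm::normalize (glm::cross (eW, eS));
        glm::vec3 nD = glm::normalize (glm::cross (eS, eE));
        // average of the triangle normals
        return 0.25f * (nA + nB + nC + nD);
    }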

2.4 Normal mapping

Normal mapping is a technique for simulating the lighting effects of small-scale geometry without directly modeling that geometry. (Normal mapping is often called bump mapping, but they are different techniques.) A normal map is a three-channel texture that holds normal vectors in a surface's tangent space. To use the normal map, we map the light vector into tangent space and then dot it with the normal vector at the fragment location. Figure 3 illustrates this process.

[Figure 3: Lighting a surface with a normal map. Part (a) shows an uneven surface and its surface normals; (b) shows the resulting vectors in the triangle's tangent space; and (c) shows the resulting illumination levels.]

2.4.1 Normal maps

We use a three-channel texture to represent the normal vectors on the surface. These normals are stored in the tangent space of the surface, so to compute the illumination of a fragment, one has to map the light vector(s) and view vector into the surface's tangent space. (Since we are only implementing diffuse and ambient lighting in this project, the view vector is not required.) Specifically, the red and green channels hold the X and Y components of the normal vector, which coincide with the S and T axes of the texture, and the blue channel holds the Z component. Note that the normal vectors stored in the normal texture are not unit vectors.

2.4.2 Tangent space

For each vertex $p_0$ in the mesh, we want to compute the tangent space at that vertex. Assume that we have a triangle $p_0, p_1, p_2$ with texture coordinates $\langle s_0, t_0\rangle$, $\langle s_1, t_1\rangle$, and $\langle s_2, t_2\rangle$ (respectively), and that $n$ is the vertex normal at $p_0$. As mentioned above, we need to map the light vector into the tangent space of the triangle. The tangent space is defined by three vectors: the tangent vector $t$, the bi-tangent vector $b$, and the normal vector $n$. These vectors (taken as columns) define the mapping from tangent space into object space.

When implementing a feature like normal mapping, it is important to keep the coordinate systems straight. The following picture helps:

    Tangent space --(TBN matrix)--> Object space --(Model matrix)--> World space

Lights are usually specified in world coordinates, so they need to be mapped to model space and then to tangent space. The tangent space should be chosen such that $p_i$ has the coordinates $\langle s_i - s_0,\; t_i - t_0,\; 0\rangle$ (i.e., $p_0$ is the origin of the tangent space). Let

    $u = p_1 - p_0$   (object-space vector)
    $v = p_2 - p_0$   (object-space vector)
    $\langle u_1, v_1\rangle = \langle s_1 - s_0,\; t_1 - t_0\rangle$   (tangent-plane coordinates of $p_1$)
    $\langle u_2, v_2\rangle = \langle s_2 - s_0,\; t_2 - t_0\rangle$   (tangent-plane coordinates of $p_2$)

as illustrated by the following picture:

[Diagram: the triangle $p_0$ (at $\langle s_0,t_0\rangle$), $p_1$ (at $\langle s_1,t_1\rangle$), $p_2$ (at $\langle s_2,t_2\rangle$) drawn in the S-T (texture) plane, with $u$ running from $p_0$ to $p_1$, $v$ running from $p_0$ to $p_2$, $t$ along the S axis, and $b$ along the T axis.]

Since $t$ and $b$ form a basis for the texture's coordinate system (i.e., the tangent plane), we have

    $u = u_1 t + v_1 b$
    $v = u_2 t + v_2 b$

Converting these equations to matrix form gives us

    $\begin{bmatrix} u^T \\ v^T \end{bmatrix} = \begin{bmatrix} u_1 & v_1 \\ u_2 & v_2 \end{bmatrix} \begin{bmatrix} t^T \\ b^T \end{bmatrix}$

Multiplying both sides by the inverse of the $\begin{bmatrix} u_1 & v_1 \\ u_2 & v_2 \end{bmatrix}$ matrix gives us the definition of the $t$ and $b$ vectors:

    $\begin{bmatrix} t^T \\ b^T \end{bmatrix} = \frac{1}{u_1 v_2 - u_2 v_1} \begin{bmatrix} v_2 & -v_1 \\ -u_2 & u_1 \end{bmatrix} \begin{bmatrix} u^T \\ v^T \end{bmatrix}$

The matrix $[t\; b\; n]$ maps from tangent space to object space, where $n$ is the normal vector assigned to $p_0$. If this matrix were guaranteed to be orthogonal, then we could take its transpose to get the mapping from object space to tangent space that we need. In practice, however, it is likely to be close to, but not exactly, orthogonal. We can use a technique called Gram-Schmidt orthogonalization to transform it into an orthogonal matrix. Using this technique, we get

    $t' = \mathrm{normalize}(t - \mathrm{proj}_n t) = \mathrm{normalize}(t - (t \cdot n)\,n)$
    $b' = \mathrm{normalize}(b - (b \cdot n)\,n - (b \cdot t')\,t')$

Finally, we can define the mapping from object space to tangent space to be the matrix whose rows are $t'^T$, $b'^T$, and $n^T$. Applying this transform to the light vector $l$ gives us $\langle t' \cdot l,\; b' \cdot l,\; n \cdot l\rangle$, which we can dot with the normal from the normal map to compute the lighting. We only need to compute the transformed light vector at the vertices and then let the rasterizer interpolate it across the triangle's surface. Also note that since $b' = \pm(n \times t')$, we only need to store four floats per vertex to define the mapping from object space into tangent space: in our vertex shader, we set $b' = t_w\,(n \times t'_{xyz})$, where $t_w \in \{-1, 1\}$. To support normal mapping, you will need to compute the four-component vector $\langle t'_{xyz}, t_w\rangle$ for each vertex in the mesh, as in the sketch below.
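Here is a per-triangle sketch of this computation, assuming glm; how you average tangents across the triangles that share a vertex, and where you store the resulting attribute, depend on your mesh representation.

    // Compute the <t'_xyz, t_w> tangent attribute for the corner p0 of a
    // triangle (p0,p1,p2) with texture coordinates st0, st1, st2 and vertex
    // normal n, following Section 2.4.2.  A robust implementation should
    // also guard against a degenerate (near-zero) determinant.
    glm::vec4 tangent (
        glm::vec3 p0, glm::vec3 p1, glm::vec3 p2,
        glm::vec2 st0, glm::vec2 st1, glm::vec2 st2,
        glm::vec3 n)
    {
        glm::vec3 u = p1 - p0, v = p2 - p0;            // object-space edges
        glm::vec2 duv1 = st1 - st0, duv2 = st2 - st0;  // tangent-plane coords
        // invert the 2x2 texture-coordinate matrix to solve for t and b
        float r = 1.0f / (duv1.x * duv2.y - duv2.x * duv1.y);
        glm::vec3 t = r * ( duv2.y * u - duv1.y * v);
        glm::vec3 b = r * (-duv2.x * u + duv1.x * v);
        // Gram-Schmidt: make t' orthogonal to n and unit length
        glm::vec3 tp = glm::normalize (t - glm::dot (t, n) * n);
        // handedness: t_w = +1 if (n x t') points along b, -1 otherwise
        float tw = (glm::dot (glm::cross (n, tp), b) < 0.0f) ? -1.0f : 1.0f;
        return glm::vec4 (tp, tw);
    }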

2.5 Shadows

Shadows are one of the most important visual cues for understanding the spatial relationships of objects in a 3D scene. As discussed in class, there are a number of techniques that can be used to render shadows using OpenGL. For this project, we will use shadow mapping. The idea is to compute a map (i.e., a texture) that identifies which screen locations are shadowed with respect to a given light. We do this by rendering the scene from the light's point of view into the depth buffer and then copying the buffer into a depth texture that is used when rendering the scene.

When rendering the scene, we compute an $\langle s, t, r\rangle$ texture coordinate for each point $p$, which corresponds to the coordinates of that point in the light's clip space. The $r$ value represents the depth of $p$ in the light's view, and it is compared against the value stored at $\langle s, t\rangle$ in the depth texture. If $r$ is greater than the texture value, then $p$ is shadowed.

To implement this technique, we must construct a transformation that maps eye-space coordinates to the light's clip space (see Figure 4).

    Object space --(Model matrix)--> World space
    World space --(View matrix)-------> View eye-space  --(Projection matrix)---------> View clip-space
    World space --(Light's view matrix)--> Light eye-space --(Light's projection matrix)--> Light clip-space

Figure 4: The coordinate-space transforms required for shadow mapping
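The handout above describes rendering into the depth buffer and then copying it into a depth texture; the sketch below uses the common equivalent of rendering depth directly into a texture attached to a framebuffer object. The resolution and filtering parameters here are design choices, not requirements.

    // Create a depth texture and an FBO for the light's render pass.
    // A minimal sketch; error checking is elided.
    GLuint depthTex, depthFBO;
    const GLsizei SHADOW_RES = 1024;   // shadow-map resolution (a choice)

    glGenTextures (1, &depthTex);
    glBindTexture (GL_TEXTURE_2D, depthTex);
    glTexImage2D (GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24,
                  SHADOW_RES, SHADOW_RES, 0,
                  GL_DEPTH_COMPONENT, GL_FLOAT, nullptr);
    glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    // If you sample with a sampler2DShadow, also enable depth comparison:
    //   glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE,
    //                    GL_COMPARE_REF_TO_TEXTURE);
    //   glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC, GL_LEQUAL);

    glGenFramebuffers (1, &depthFBO);
    glBindFramebuffer (GL_FRAMEBUFFER, depthFBO);
    glFramebufferTexture2D (GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                            GL_TEXTURE_2D, depthTex, 0);
    glDrawBuffer (GL_NONE);            // depth only; no color attachment
    glReadBuffer (GL_NONE);
    glBindFramebuffer (GL_FRAMEBUFFER, 0);

With the depth texture in hand, the remaining piece is the transformation itself.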

Let $M_{model}$ be the model matrix, $M_{view}$ be the eye's view matrix, $M_{light}$ be the light's view matrix, and $P_{light}$ be the light's projection matrix. To map a vertex $p$ to the light's homogeneous clip space, we first apply the model matrix ($M_{model}$), then the light's view matrix ($M_{light}$), and finally the light's projection matrix ($P_{light}$). After the perspective division, the coordinates will be in the range $[-1 \ldots 1]$, but we need values in the range $[0 \ldots 1]$ to index the depth texture, so we add a scaling transformation ($S(0.5)$) and a translation ($T(0.5, 0.5, 0.5)$).

You will need to compute the diffuse ($D_l$) intensity vectors using the techniques of Project 2. You should also compute the shadow factor $S_l$, which is the value given by the scene when the pixel is in shadow and 1.0 otherwise. Then, assuming that the input color is $C_{in}$ and $L$ is the set of enabled lights, the resulting color should be

    $C_{out} = \mathrm{clamp}\left( \sum_{l \in L} S_l\, D_l\, C_{in} \right)$
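A sketch of assembling this chain with glm follows; the orthographic projection suggested in the usage comment is an assumption for the scene's directional light, not something the handout mandates.

    #include <glm/glm.hpp>
    #include <glm/gtc/matrix_transform.hpp>

    // Map object-space points into [0,1]^3 shadow-map coordinates:
    //   shadowMtx = T(0.5,0.5,0.5) * S(0.5) * P_light * M_light * M_model
    glm::mat4 shadowMatrix (glm::mat4 const &modelMtx,
                            glm::mat4 const &lightView,
                            glm::mat4 const &lightProj)
    {
        glm::mat4 bias =
            glm::translate (glm::mat4(1.0f), glm::vec3(0.5f)) *
            glm::scale (glm::mat4(1.0f), glm::vec3(0.5f));
        return bias * lightProj * lightView * modelMtx;
    }

    // Example (placeholder parameters; pick bounds that cover your scene):
    //   glm::mat4 lightView = glm::lookAt (lightPos, lightPos + lightDir, up);
    //   glm::mat4 lightProj = glm::ortho (-25.f, 25.f, -25.f, 25.f, 1.f, 100.f);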

2.6 User interface

You will add support for two new commands in the Project 1 user interface (three if you do the extra credit):

    n, N    switch to normal-mapping mode
    s, S    switch to shadowed mode
    x, X    switch to extreme mode, which includes both normal mapping and shadows (extra credit)

In addition, your viewer should provide the camera controls that you implemented in Project 1.

Your viewer will support six different rendering modes (seven if you do the extra credit). To help you keep track of which features are enabled in which mode, consult the following table:

    Rendering mode      Polygon mode   Color   Diffuse    Texturing   Normal    Shadow
                                               lighting               mapping   mapping
    Wireframe (w)       GL_LINE        X
    Flat-shading (f)    GL_FILL        X
    Diffuse (d)         GL_FILL        X       X
    Textured (t)        GL_FILL                X          X
    Normal-mapping (n)  GL_FILL                X          X           X
    Shadowed (s)        GL_FILL                X          X                     X
    Extreme (x)         GL_FILL                X          X           X         X

3 Sample code

Once the Project 2 deadline has passed, we will seed a proj3 directory with a copy of your source code and shaders from Project 2. We will update this code with a new implementation of the Scene class. The seed code will also include a new Makefile in the build directory and new scenes.

4 Summary

For this project, you will have to do the following:

- Fix any issues with your Project 2 code.

- Modify your Project 2 data structures to support object frames and normal mapping.

- Add support for loading and rendering the ground.

- Write a shader that renders objects using normal maps.

- Write a shader that renders objects with shadows.

- Add support for the new rendering modes to the UI.

- For extra credit, implement an extreme mode that supports both normal mapping and shadows.

5 Submission

We have set up an svn repository for each student on the phoenixforge.cs.uchicago.edu server, and we will populate your repository with the sample code. You should commit the final version of your project by 10:00pm on Monday, November 2. Remember to make sure that your final version compiles before committing!

History

2015-10-22  Simplified description of bump mapping.
2015-10-16  Original version.