CS 4620 Midterm, October 23, 2018 SOLUTION


1. [20 points] Transformations

(a) Describe the action of each of the following matrices, as transformations in homogeneous coordinates, in terms of rotation, scaling, and/or translation. For instance, the answer to (0) is a scale by a factor of 2 in the x direction followed by a translation of 1 in x.

    (0):  [ 2  0  1 ]      (1):  [  cos(30°)  0  sin(30°)  0 ]
          [ 0  1  0 ]            [     0      1     0      0 ]
          [ 0  0  1 ]            [ -sin(30°)  0  cos(30°)  0 ]
                                 [     0      0     0      1 ]

    (2):  [ 0 -1  0 ]      (3):  [ 2  0 -1 ]
          [ 1  0  1 ]            [ 0  2 -1 ]
          [ 0  0  1 ]            [ 0  0  1 ]

(1)    (2)    (3)

(b) Write each of the matrices (2) and (3) from part (a) as a product of three matrices in the form T X T^-1, where T is a translation and X is a rotation or scale about the origin.

(2)    (3)

(c) Describe each of the matrices (2) and (3) from part (a) as a rotation or scale about a point. For instance, one answer to (0) would be a scale by a factor of 2 in the x direction about the point (-1, 0, 0).

(2)

(3)

2. [20 points] Manipulation

Suppose you are implementing a 2D manipulation interface for a touchscreen device. Like our Manipulators assignment, the UI has three modes, for translation, rotation, and scaling, and it stores the object's transformation as a product T R S, where T, R, and S are a translation, a rotation about the origin, and a scale about the origin. But things are simpler than in the assignment; since the input and the manipulated object are both 2D, there is no need for separate manipulation axes. Instead, when the user touches at p and moves to q, we want to apply a translation, a rotation about the origin, or a scale about the origin in the object's local coordinates, that will make the point that appears on the screen at p move to q (or come as close to q as possible, in the case of rotation).

[Figure: a drag from p to q in each of the three modes: Translate, Rotate, Scale.]

This is a 2D object, so the view transformation is a 2D affine transformation M that maps the visible area of the object to the rectangle [-1, 1] x [-1, 1]; we'll call this 2D space NDC (normalized device coordinates), just like in 3D. The view transformation is fixed during the manipulation process. (Probably the UI provides some way for the user to scroll and zoom, which would change M, but we're not concerned with that here.)

(a) What point in object space transforms to the point p in NDC?

(b) Given the points p and q in NDC, work out the object's new transformation after a touch interaction in each of the three modes, in terms of the matrices T, R, and S that make up its object transformation before the interaction. You can use the notation T(v) for a translation by the vector v, R(θ) for a rotation by the angle θ, and S(a, b) for a scale by a on the x axis and b on the y axis. For instance, your answer to (1) will be in the form T T(v) R S, where v is some vector (you have to figure out what it is) that depends on the points p and q.

(1) In translation mode, what is the object's transformation after a drag from p to q?
(2) In rotation mode, what is the object's transformation after a drag from p to q?
(3) In scaling mode, what is the object's transformation after a drag from p to q?

Hint: you can check all three answers by ensuring that the point that transforms to p under T R S transforms to q under your updated transformation.

Solution:

1(a)
(1) A counter-clockwise rotation about the y axis by 30 degrees.
(2) A counter-clockwise rotation by 90 degrees in the 2D plane, followed by a translation along the y axis by 1.
(3) A scale about the origin by a factor of 2 in both the x and y directions, followed by translations of -1 along both the x and y directions.

1(b)

    (2):  T = [ 1  0 -1/2 ]   X = [ 0 -1  0 ]
              [ 0  1  1/2 ]       [ 1  0  0 ]
              [ 0  0   1  ]       [ 0  0  1 ]

    (3):  T = [ 1  0  1 ]     X = [ 2  0  0 ]
              [ 0  1  1 ]         [ 0  2  0 ]
              [ 0  0  1 ]         [ 0  0  1 ]

1(c)
(2) A counter-clockwise rotation about (-0.5, 0.5) by 90 degrees.
(3) A scale about the point (1, 1) by a factor of 2 in both the x and y directions.
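As a sanity check, the 1(b)/1(c) decompositions can be multiplied out numerically. This Python sketch (not part of the exam; the matrix helpers are ours) verifies that T X T^-1 reproduces matrices (2) and (3):

```python
import math

def matmul(A, B):
    # 3x3 matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def translate(tx, ty): return [[1, 0, tx], [0, 1, ty], [0, 0, 1]]
def scale(a, b):       return [[a, 0, 0], [0, b, 0], [0, 0, 1]]
def rotate(deg):
    c, s = math.cos(math.radians(deg)), math.sin(math.radians(deg))
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def close(A, B):
    return all(abs(a - b) < 1e-9 for ra, rb in zip(A, B) for a, b in zip(ra, rb))

# Matrix (2) = T X T^-1 with T = translate(-1/2, 1/2), X = rotate(90):
M2 = matmul(matmul(translate(-0.5, 0.5), rotate(90)), translate(0.5, -0.5))
assert close(M2, [[0, -1, 0], [1, 0, 1], [0, 0, 1]])

# Matrix (3) = T X T^-1 with T = translate(1, 1), X = scale(2, 2):
M3 = matmul(matmul(translate(1, 1), scale(2, 2)), translate(-1, -1))
assert close(M3, [[2, 0, -1], [0, 2, -1], [0, 0, 1]])
```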

2(a) S^-1 R^-1 T^-1 M^-1 p

2(b)
(1) T T(v) R S, where v = M^-1 q - M^-1 p. (v = T^-1 M^-1 q - T^-1 M^-1 p is also OK.)
(2) T R R(θ) S, where θ is the angle between p' = T^-1 M^-1 p and q' = T^-1 M^-1 q. θ can be computed various ways; for example θ = atan2(q'_y, q'_x) - atan2(p'_y, p'_x), or θ = sin^-1(w_z) where w = [p̂'; 0] × [q̂'; 0]. The solution θ = cos^-1(p̂' · q̂') requires an additional check for the sign of θ. (p' = R^-1 T^-1 M^-1 p and q' = R^-1 T^-1 M^-1 q are also OK.)
(3) T R S S(a, b), where a and b are the ratios of the x and y coordinates of the points p' = R^-1 T^-1 M^-1 p and q' = R^-1 T^-1 M^-1 q; specifically a = q'_x / p'_x and b = q'_y / p'_y. (p' = S^-1 R^-1 T^-1 M^-1 p and q' = S^-1 R^-1 T^-1 M^-1 q are also OK.)
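The hint from the problem can be carried out concretely. This Python sketch (not part of the exam; the view transform M and the touch points p, q are made-up example values) builds the three updated transformations from 2(b) and checks that the point touched at p lands on q (exactly for translate and scale, and on the ray toward q for rotate):

```python
import math

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def chain(*Ms):
    out = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
    for Mx in Ms:
        out = matmul(out, Mx)
    return out

def apply(A, p):  # affine 3x3 applied to a 2D point
    return (A[0][0]*p[0] + A[0][1]*p[1] + A[0][2],
            A[1][0]*p[0] + A[1][1]*p[1] + A[1][2])

def T(tx, ty): return [[1, 0, tx], [0, 1, ty], [0, 0, 1]]
def S(a, b):   return [[a, 0, 0], [0, b, 0], [0, 0, 1]]
def R(deg):
    c, s = math.cos(math.radians(deg)), math.sin(math.radians(deg))
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

Tm, Rm, Sm = T(0.3, -0.2), R(25), S(1.5, 1.5)       # object transform T R S
Tin, Rin, Sin = T(-0.3, 0.2), R(-25), S(1/1.5, 1/1.5)  # their inverses
M, Min = S(0.5, 0.5), S(2.0, 2.0)                   # example view transform
p, q = (0.2, 0.1), (0.4, -0.3)                      # drag from p to q, in NDC

x = apply(chain(Sin, Rin, Tin, Min), p)             # 2(a): object-space touch point

def screen(transform, pt):
    return apply(matmul(M, transform), pt)

# (1) translate mode: v = M^-1 q - M^-1 p
Mp, Mq = apply(Min, p), apply(Min, q)
new = chain(Tm, T(Mq[0] - Mp[0], Mq[1] - Mp[1]), Rm, Sm)
assert all(abs(a - b) < 1e-9 for a, b in zip(screen(new, x), q))

# (3) scale mode: a = q'_x/p'_x, b = q'_y/p'_y with p' = R^-1 T^-1 M^-1 p
pp, qp = (apply(chain(Rin, Tin, Min), u) for u in (p, q))
new = chain(Tm, Rm, Sm, S(qp[0] / pp[0], qp[1] / pp[1]))
assert all(abs(a - b) < 1e-9 for a, b in zip(screen(new, x), q))

# (2) rotate mode: theta from primes p' = T^-1 M^-1 p; the result lands on
# the ray toward q (the closest point, not q itself in general)
pp, qp = (apply(chain(Tin, Min), u) for u in (p, q))
theta = math.degrees(math.atan2(qp[1], qp[0]) - math.atan2(pp[1], pp[0]))
new = chain(Tm, Rm, R(theta), Sm)
rx, ry = apply(chain(Tin, Min), screen(new, x))
assert abs(math.atan2(ry, rx) - math.atan2(qp[1], qp[0])) < 1e-9
```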

3. [20 points] Geometry images

In displacement mapping we use a scalar texture to modify the vertex positions of a mesh. In some cases the input geometry can be pretty boring (like a flat square) and everything interesting about the shape comes from the displacement texture (e.g. a mountain range made by using an elevation map for the displacement). A related technique to displacement mapping is geometry images, which takes this one step farther: the original vertex position is completely ignored, and the actual position is completely determined by a 3D position texture.

In this problem we want you to write a vertex shader for geometry images. Specifically, your shader should expect vertices with texture coordinates as the only vertex attribute, along with the same uniform matrices we used in the Shaders assignment (see declarations below). The object-space position of each vertex is defined by the value of the position texture at the corresponding uv coordinates. The fragment shader is provided and is completely standard: it uses the position and a varying normal to compute simple Lambertian lighting.

Hints: don't forget that you will need to compute the normal. The computation of the normal uses exactly the same ideas as in displacement mapping, but is much simpler. You don't need the varyings derivU and derivV from the assignment, but you do still need to differentiate the texture. You can use exactly the same approach as in the assignment (a finite difference with a fixed stepsize of 0.0001), and don't be concerned about any discretization or quantization issues for this problem.
// --- begin vertex shader ---

// = object.matrixWorld
uniform mat4 modelMatrix;
// = camera.matrixWorldInverse * object.matrixWorld
uniform mat4 modelViewMatrix;
// = camera.projectionMatrix
uniform mat4 projectionMatrix;
// = camera.matrixWorldInverse
uniform mat4 viewMatrix;
// = inverse transpose of modelViewMatrix
uniform mat3 normalMatrix;
// = camera position in world space
uniform vec3 cameraPosition;

attribute vec2 uv;

varying vec3 vNormal;

varying vec4 vPosition;

uniform sampler2D positionTexture;

void main() {
    // TODO: write your code here
}
// --- end vertex shader ---

// --- begin fragment shader ---
uniform vec3 lightColors[NUM_LIGHTS];
uniform vec3 lightPositions[NUM_LIGHTS]; // in eye space
uniform float exposure;
uniform vec3 diffuseColor;

varying vec3 vNormal;
varying vec4 vPosition;

vec3 to_srgb(vec3 c) { return pow(c, vec3(1.0/2.2)); }
vec3 from_srgb(vec3 c) { return pow(c, vec3(2.2)); }

void main() {
    vec3 N = normalize(vNormal);
    vec3 V = normalize(-vPosition.xyz);
    vec3 finalColor = vec3(0.0, 0.0, 0.0);

    for (int i = 0; i < NUM_LIGHTS; i++) {
        float r = length(lightPositions[i] - vPosition.xyz);
        vec3 L = normalize(lightPositions[i] - vPosition.xyz);
        vec3 H = normalize(L + V);

        // calculate diffuse term
        vec3 Idiff = diffuseColor * max(dot(N, L), 0.0);
        finalColor += lightColors[i] * Idiff / (r*r);
    }

    // Only shade if facing the light
    if (gl_FrontFacing) {
        gl_FragColor = vec4(to_srgb(finalColor * exposure), 1.0);
    } else {
        gl_FragColor = vec4(170.0/255.0, 160.0/255.0, 0.0, 1.0);
    }
}
// --- end fragment shader ---

Solution:

// In vertex shader
void main() {
    vec3 pos = texture2D(positionTexture, uv).xyz;              // 2 pts
    vPosition = modelViewMatrix * vec4(pos, 1.0);               // 3 pts

    // 4 pts
    float du = 0.001;
    float dv = 0.001;
    vec2 duVector = vec2(du, 0.0);
    vec2 dvVector = vec2(0.0, dv);
    vec3 posU1 = texture2D(positionTexture, uv - duVector).xyz;
    vec3 posU2 = texture2D(positionTexture, uv + duVector).xyz;
    vec3 posV1 = texture2D(positionTexture, uv - dvVector).xyz;
    vec3 posV2 = texture2D(positionTexture, uv + dvVector).xyz;

    // 3 pts
    vec3 dPdu = (posU2 - posU1) / du / 2.0;
    vec3 dPdv = (posV2 - posV1) / dv / 2.0;

    vec3 normal = normalize(cross(dPdu, dPdv));                 // 3 pts
    vNormal = normalize(normalMatrix * normal);                 // 3 pts

    gl_Position = projectionMatrix * vPosition;                 // 2 pts
}
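For intuition, the same central-difference normal computation can be mirrored outside GLSL. This Python sketch (not part of the exam) uses an analytic sphere parameterization as a stand-in for the position texture; since a unit sphere's outward normal equals its position, the finite-difference normal should match it:

```python
import math

def position(u, v):
    # "Texture lookup" stand-in: a unit sphere parameterized over [0,1]^2
    theta, phi = math.pi * u, 2.0 * math.pi * v
    return (math.sin(theta) * math.cos(phi),
            math.sin(theta) * math.sin(phi),
            math.cos(theta))

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def normalize(a):
    n = math.sqrt(sum(x * x for x in a))
    return tuple(x / n for x in a)

def normal(u, v, h=1e-4):
    # Central differences with a fixed stepsize, as in the shader solution
    dpdu = sub(position(u + h, v), position(u - h, v))
    dpdv = sub(position(u, v + h), position(u, v - h))
    return normalize(cross(dpdu, dpdv))

n = normal(0.1, 0.3)
expect = normalize(position(0.1, 0.3))  # a sphere's normal is its position
assert all(abs(a - b) < 1e-6 for a, b in zip(n, expect))
```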

4. [20 points] True/False

Circle whether each statement is true (T) or false (F). If it is true, explain why. If it is false, explain why or provide a counterexample.

(a) T F An indexed triangle mesh will generally consume more storage than the same mesh written as a list of separate triangles.
(b) T F The normal vector used for shading on a triangle mesh is always perpendicular to the surface being shaded.
(c) T F In barycentric coordinates, the coordinates of a triangle's vertices are all zeros and ones.
(d) T F Storing one additional byte at every vertex of an indexed triangle mesh generally uses more space than storing one additional byte for every triangle of the same mesh.
(e) T F The intersection between a ray and a sphere is generally found using an implicit model of the sphere.
(f) T F In orthographic viewing, the viewing rays all share the same ray direction.
(g) T F If A and B are two 2x2 rotation matrices, then AB = BA.
(h) T F If a 4x4 matrix represents a 3D affine transformation in homogeneous coordinates, then its last row must be [ 0 0 0 1 ].
(i) T F If a projective transformation is applied to a set of points evenly spaced along a line, the resulting points will be collinear and evenly spaced.
(j) T F In homogeneous coordinates, the 3D point (x, y, z) is represented by the 4D point (x, y, z, 0).

Solution:

(a) False. When stored as a list of separate triangles, each vertex is stored once for each triangle of which it is a member, which is six times on average. When stored as an indexed mesh, each vertex is stored only once. The total storage is on average 72 bytes per vertex when stored as a list of separate triangles, compared to 36 bytes per vertex in an indexed mesh.

(b) False. If this were the case, all triangle meshes would appear faceted. Averaging normals creates a smoother appearance by using normals that are not perpendicular to any of their adjacent triangles.

(c) True. The coordinates of a triangle's vertices in barycentric coordinates are (1, 0, 0), (0, 1, 0), and (0, 0, 1). The coordinates represent linear combinations of the vertices' positions.

(d) False. There are on average two triangles per vertex in a mesh, and in an indexed mesh each vertex is stored only once, so one additional byte per vertex uses about half the space of one additional byte per triangle. (The statement would be true if the mesh were not indexed: then each vertex is stored on average 6 times, so one additional byte per vertex would generally use more space.)

(e) True. Storing a sphere as an implicit model allows an algebraic solution for t when intersecting the sphere with a ray, and results in a smooth sphere appearance.

(f) True. This is the definition of orthographic viewing.

(g) True. Both rotations are about the origin in 2D, so composing them in either order simply adds their angles.

(h) True. When not using projective transformations, homogeneous coordinates allow for a compact representation of affine transformations, including translations. The final row then serves to copy over the w = 1 term.

(i) False. Perspective projection provides a counterexample. The denominator of y varies with z, so distance is not preserved: distances that are farther away appear shorter than equivalent distances that are closer. The statement is true for affine transformations.

(j) False. (x, y, z) is represented by (x, y, z, 1) in homogeneous coordinates; (x, y, z, 0) represents a point at infinity in the direction of (x, y, z).
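The storage arithmetic behind (a) and (d) can be spelled out. A quick Python sketch (not from the exam), assuming 12-byte positions (three 32-bit floats), 4-byte indices, and the usual mesh statistic of roughly twice as many triangles as vertices:

```python
V = 1000          # vertices in a typical closed triangle mesh
Tri = 2 * V       # a closed mesh has about twice as many triangles as vertices

separate = Tri * 3 * 12        # 3 vertices per triangle, 12 bytes each
indexed = V * 12 + Tri * 3 * 4  # positions stored once, plus 3 indices/triangle

assert separate // V == 72     # 72 bytes per vertex as a triangle list
assert indexed // V == 36      # 36 bytes per vertex indexed
assert separate > indexed      # so (a) is False

# (d): one extra byte per vertex vs. one extra byte per triangle
assert V * 1 < Tri * 1         # per-vertex costs less, so (d) is False
```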

5. [20 points] Meshes

Consider four OBJ files, all with the same vertex data but indexed in different ways.

(a) Which of these will produce faceted shading?
(b) Which has continuous texture coordinates?
(c) For each of these 4 meshes, match them with the corresponding rendering on the next page.

Hint: OBJ indices start at 1, and the ordering of index data is position/texture/normal.

--- vertex data ---
v 1 1 1
v 0 1 1
v 1 0 1
v 1 1 0
vn 1 1 1
vn 1 0 0
vn 0 1 0
vn 0 0 1
vn 0 1 1
vn 1 0 1
vn 1 1 0
vt 0 0
vt 0 1
vt 1 0
vt 0.75 0.75
vt 0.5 0.5

--- index data 1 ---
f 1/5/4 2/5/4 3/5/4
f 1/5/2 3/5/2 4/5/2
f 1/5/3 4/5/3 2/5/3

--- index data 2 ---
f 1/1/4 2/2/4 3/3/4
f 1/1/2 3/2/2 4/3/2
f 1/1/3 4/2/3 2/3/3

--- index data 3 ---
f 1/1/1 2/2/5 3/3/6
f 1/1/1 3/2/6 4/3/7
f 1/1/1 4/2/7 2/3/5

--- index data 4 ---
f 1/4/1 2/1/5 3/2/6
f 1/4/1 3/2/6 4/3/7
f 1/4/1 4/3/7 2/1/5

Texture File: [image]

Answer for each mesh: Has Faceted Shading? | Has Continuous Texture Coords? | Rendering Number

Solution:

Mesh           Has Faceted Shading?   Has Continuous Texture Coords?   Rendering Number
index data 1   Yes                    Yes                              2
index data 2   Yes                    No                               7
index data 3   No                     No                               8
index data 4   No                     Yes                              3
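The first two columns of this table can be derived mechanically from the index data: shading is faceted when some position index is paired with different normal indices in different faces, and texture coordinates are continuous when each position index is always paired with the same texture-coordinate index. A Python sketch (not part of the exam) of that check:

```python
# Faces transcribed from the four index-data blocks above (pos/vt/vn triples)
meshes = {
    1: [("1/5/4", "2/5/4", "3/5/4"), ("1/5/2", "3/5/2", "4/5/2"), ("1/5/3", "4/5/3", "2/5/3")],
    2: [("1/1/4", "2/2/4", "3/3/4"), ("1/1/2", "3/2/2", "4/3/2"), ("1/1/3", "4/2/3", "2/3/3")],
    3: [("1/1/1", "2/2/5", "3/3/6"), ("1/1/1", "3/2/6", "4/3/7"), ("1/1/1", "4/2/7", "2/3/5")],
    4: [("1/4/1", "2/1/5", "3/2/6"), ("1/4/1", "3/2/6", "4/3/7"), ("1/4/1", "4/3/7", "2/1/5")],
}

def attrs_per_vertex(faces, slot):
    """Map each position index to the set of attribute indices paired with it
    (slot 0 = texture coordinate, slot 1 = normal)."""
    seen = {}
    for face in faces:
        for corner in face:
            pos, vt, vn = corner.split("/")
            seen.setdefault(pos, set()).add((vt, vn)[slot])
    return seen

results = {}
for m, faces in meshes.items():
    faceted = any(len(s) > 1 for s in attrs_per_vertex(faces, 1).values())
    continuous = all(len(s) == 1 for s in attrs_per_vertex(faces, 0).values())
    results[m] = (faceted, continuous)

# Matches the first two columns of the solution table
assert results == {1: (True, True), 2: (True, False),
                   3: (False, False), 4: (False, True)}
```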

[Figure: candidate renderings (1) through (8), referenced by the table in problem 5.]