Blue colour text: questions. Black colour text: sample answers. Red colour text: further explanation or references for the sample answers.

Question 1.

a) (5 marks) Explain the OpenGL synthetic camera model, including how it relates to a pinhole camera and how it relates 3D space to 2D images.

The synthetic camera used in OpenGL is based on the pinhole camera model, in which the rays from all scene points pass through the centre of projection before reaching the image plane. With the image plane at distance d from the centre of projection, the projection of 3D space onto the 2D image is governed by the equations:

    x_p / z_p = x / z,    y_p / z_p = y / z,    z_p = d

Because the pinhole camera model produces images that are upside down and left-right reversed, the synthetic camera model of OpenGL modifies this by moving the image plane to the same side of the centre of projection as the scene. The projected image is then the right way up and no longer left-right reversed.
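For interest only (not part of the expected exam answer): a minimal C sketch of the pinhole projection above. The function name and sample values are my own illustration.

    #include <stdio.h>

    /* Project the 3D point (x, y, z) onto the image plane z_p = d of a
     * pinhole camera at the origin: x_p = x/(z/d), y_p = y/(z/d).
     * Assumes z != 0. */
    static void project(double x, double y, double z, double d,
                        double *xp, double *yp)
    {
        *xp = x / (z / d);
        *yp = y / (z / d);
    }

    int main(void)
    {
        double xp, yp;
        project(1.0, 2.0, 4.0, 1.0, &xp, &yp);   /* image plane at d = 1 */
        printf("projected point: (%g, %g)\n", xp, yp);   /* (0.25, 0.5) */
        return 0;
    }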

b) (3 marks) Describe the shape of the OpenGL default view volume. Describe what you would do if you want to use a view volume other than the default one. (Hint: think about other shapes of the view volume.)

The default view volume of OpenGL is a cube centred at the origin, having sides of length 2 units.

Other view volumes can be defined by specifying the bottom, top, left, right, near, and far clipping planes. For instance, we can use the Ortho function to define a rectangular view volume for orthographic projection. We can have a view volume taking the shape of a truncated pyramid for perspective projection, e.g., by calling the Perspective or the Frustum function.

For clarity, I have put some words in italics in the sample answer above. In the exam, you don't need to worry about doing the same. You can underline words if they are special terms (or names) that you want to distinguish from the rest of the sentence.
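As further illustration (the plane values are my own), the legacy glOrtho and glFrustum calls take the same six clipping-plane parameters as the Ortho and Frustum helpers mentioned above; with the core profile one would build the equivalent matrix in the application instead.

    /* Legacy (compatibility-profile) calls, e.g., in the reshape callback. */
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();

    /* Rectangular view volume for orthographic projection: */
    glOrtho(-2.0, 2.0,     /* left, right */
            -1.5, 1.5,     /* bottom, top */
             0.1, 10.0);   /* near, far   */

    /* Or a truncated pyramid (frustum) for perspective projection:
     * glFrustum(-1.0, 1.0, -1.0, 1.0, 1.0, 20.0);
     */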

c) (2 marks) What is clipping in the context of OpenGL, and where does it fit in the steps of the OpenGL pipeline architecture?

OpenGL has a default view volume defined. Only objects falling within the volume are processed further. This is similar to cameras having a finite field of view. In the OpenGL pipeline architecture, the clipping step takes place after the vertex processing step. After the vertices of the object have been transformed by the vertex processor, vertices are grouped together to form primitives. Primitives falling outside the view volume are clipped away.

Clipping takes place near the beginning of the pipeline for an obvious reason: only primitives that survive clipping need to be processed further.

d) (2 marks) What is rasterization in the context of the OpenGL pipeline architecture, and what are the inputs and outputs to the rasterization step?

The rasterizer's job is to produce a set of fragments from each primitive that is passed to it. In the OpenGL pipeline architecture, the rasterizer step is located after the primitive assembly and clipping steps but before the fragment processor step. The inputs to the rasterizer are therefore those primitives that survive clipping. The outputs produced by the rasterizer are fragments. Each fragment is a potential pixel: it carries the pixel location, but it usually contains additional information such as depth, colour, alpha (opacity), etc. If normals, view vectors, etc., are passed down to the rasterizer, then they are interpolated and output to the fragment processor as well.

Question 2.

a) (6 marks) Describe request mode and event mode. What is the difference between them? Use appropriate diagrams in your explanation.

Request mode: the program issues an input request from an input device (e.g., a mouse click) at a particular point in the program. The program just hangs there until the trigger process detects the input (the user clicks the mouse). A trigger is sent to the measure process, which returns a measure to the program (in the mouse click example, the measure would include the mouse click position, which mouse button was clicked, and the status of the mouse button (up or down)). In request mode, the program is hard coded to process events in a pre-defined order.

Event mode: in event mode, the program specifies a number of input events that are of interest, then enters an event handling loop. Each trigger of an event generates a measure for that event. The measures of all the events are put into an event queue by the window manager. On each pass of the event handling loop, the window manager goes through each queued event and passes its measure to the appropriate callback function. In event mode, the program does not need to know which events will be invoked by the user, or when, during execution.

b) (2 marks) What is the purpose of the idle callback in OpenGL? Include an example of what it can be used for.

The purpose of an idle callback function is to specify what the program should do when no window events are received. At each pass of the event handling loop, if no events are received, the window manager automatically calls the idle callback. Because of that, idle callback functions are suitable when we want continuous animation in the program, e.g., a cube continuously rotating.
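For interest, a minimal GLUT sketch of the rotating-cube use of the idle callback (the variable name and increment are my own illustration):

    #include <GL/glut.h>

    static float angle = 0.0f;            /* current rotation of the cube */

    /* Idle callback: run by GLUT whenever no window events are pending. */
    static void idle(void)
    {
        angle += 0.5f;                    /* advance the animation */
        if (angle >= 360.0f)
            angle -= 360.0f;
        glutPostRedisplay();              /* request a redraw of the window */
    }

    /* Registered once during start-up with: glutIdleFunc(idle); */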

c) (2 marks) What is double buffering? When would it be suitable to use double buffering?

Double buffering refers to using two buffers for rendering and displaying. While the front buffer is being displayed, the scene for the next frame is rendered into the back buffer. Double buffering is suitable when we want smooth animation (i.e., to avoid flickering).
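A sketch of how double buffering is requested and used in GLUT (continuing the rotating-cube illustration above):

    /* Requested at start-up with:
     *     glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA);
     */

    /* Display callback: render into the back buffer, then swap buffers. */
    static void display(void)
    {
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        /* ... draw the scene here ... */
        glutSwapBuffers();   /* show the finished frame; avoids flickering */
    }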

d) (2 marks) Give three common applications of vertex shaders in OpenGL.

Three common applications (see topic 5, slides 18-19):
(a) Geometric transformation: change the relative location, rotation, and scale of objects; apply 3D perspective transformation (make far objects appear smaller).
(b) Morphing vertices: e.g., compute wave motion and deformation, or simulate particle effects such as fire, smoke, and rain.
(c) Lighting/shading: calculate the shading colour using light and surface properties (per-vertex shading).

Question 3.

a) (6 marks) What is the difference between the Phong model of shading and the modified Phong/Blinn-Phong model, and what are their relative advantages?

Both models are shading formulae involving the diffuse, specular, and ambient terms for each colour (R, G, and B) component.

Phong model:

    I = k_d I_d (l · n) + k_s I_s (v · r)^α + k_a I_a

Blinn-Phong model:

    I = k_d I_d (l · n) + k_s I_s (n · h)^β + k_a I_a

where
    k_d, k_s, k_a: diffuse, specular, and ambient coefficients for the material
    I_d, I_s, I_a: similar coefficients, but for the light source
    l: light source vector
    n: normal vector
    v: view (or eye) vector
    r: perfect mirror reflection vector of l with respect to n
    h: half-way vector between l and v
    α: shininess coefficient
    β: shininess coefficient modified from α

Phong model advantage: gives a better rendering result, but requires the reflected vector r to be computed.

Blinn-Phong advantages: the half-way vector h is cheaper to compute; the rendering result is only slightly worse than the Phong model's; it is the default shading model used in OpenGL.

In the exam, you don't need to darken a symbol to boldface in order to denote that it is a vector. You can write it the same as scalar symbols. Alternatively, you can use one of the following common notations for denoting vectors in hand-writing: (1) put an arrow on top of the symbol; or (2) put a tilde below the symbol.
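Purely as an illustration (not required in the exam; the vector helpers are my own), the two specular terms could be computed like this in C, assuming l, n, and v are unit vectors:

    #include <math.h>

    typedef struct { double x, y, z; } Vec3;

    static double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
    static Vec3 scale(Vec3 a, double s) { Vec3 r = {a.x*s, a.y*s, a.z*s}; return r; }
    static Vec3 sub(Vec3 a, Vec3 b) { Vec3 r = {a.x-b.x, a.y-b.y, a.z-b.z}; return r; }
    static Vec3 add(Vec3 a, Vec3 b) { Vec3 r = {a.x+b.x, a.y+b.y, a.z+b.z}; return r; }
    static Vec3 normalize(Vec3 a) { return scale(a, 1.0 / sqrt(dot(a, a))); }

    /* Phong: needs the mirror reflection r = 2(l·n)n - l of l about n. */
    static double phong_specular(Vec3 l, Vec3 n, Vec3 v, double alpha)
    {
        Vec3 r = sub(scale(n, 2.0 * dot(l, n)), l);
        return pow(fmax(dot(v, r), 0.0), alpha);
    }

    /* Blinn-Phong: the half-way vector h = (l + v)/|l + v| is cheaper. */
    static double blinn_phong_specular(Vec3 l, Vec3 n, Vec3 v, double beta)
    {
        Vec3 h = normalize(add(l, v));
        return pow(fmax(dot(n, h), 0.0), beta);
    }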

b) (2 marks) Explain the difference between a directional light and a point source light in the context of OpenGL.

A point source is modelled with a position p_0 = (x, y, z) and an intensity I = (r, g, b). It illuminates equally in all directions. A distance term can be incorporated to attenuate I. A directional light illuminates objects with parallel rays of light. It is modelled with a direction and an intensity I = (r, g, b).

c) (4 marks) Why is a fourth component w added to the usual three dimensional coordinates in computer graphics systems like OpenGL? Give two common uses of the extra entries in the corresponding 4 x 4 matrices (those in the last row and column).

If 3 x 3 matrices are used to transform 3D points and vectors, we have to treat translation differently. For 3D rotation and scaling, we can define a 3 x 3 transformation matrix M and transform a 3D point p to p' by matrix-vector multiplication: p' = M p. However, for translation, we would have to do vector addition: p' = p + d, where d = (d_x, d_y, d_z) is a 3D vector.

By adding a fourth component w, so that 3D points (and 3D vectors) are represented in homogeneous coordinates, all of these transformations can be done via matrix-vector multiplication. The 3 x 3 matrix M earlier becomes the 4 x 4 matrix

    [ M 0 ]
    [ 0 1 ]

(in block form: M occupies the top-left 3 x 3 block, the 0s denote a zero column and a zero row, and the bottom-right entry is 1). For translation, the vector d can now appear in the last column of the matrix:

    [ 1 0 0 d_x ]
    [ 0 1 0 d_y ]
    [ 0 0 1 d_z ]
    [ 0 0 0  1  ]

Moreover, the last (4th) row of the 4 x 4 matrix can be used for perspective projection. For an affine transformation, the last row is (0, 0, 0, 1). For a perspective transformation, at least one of the first three entries of the last row is non-zero; for example, the simple pinhole projection of Question 1 has last row (0, 0, 1/d, 0).
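A small C sketch (the helper names are my own) showing how translation then becomes a matrix-vector product in homogeneous coordinates:

    /* Multiply a 4x4 matrix (row-major) with a homogeneous point (x, y, z, 1). */
    static void transform(const double M[4][4], const double p[4], double out[4])
    {
        for (int i = 0; i < 4; i++) {
            out[i] = 0.0;
            for (int j = 0; j < 4; j++)
                out[i] += M[i][j] * p[j];
        }
    }

    /* Translation by d = (dx, dy, dz) as a 4x4 matrix. */
    static void make_translation(double dx, double dy, double dz, double M[4][4])
    {
        const double T[4][4] = {
            {1, 0, 0, dx},
            {0, 1, 0, dy},
            {0, 0, 1, dz},
            {0, 0, 0, 1 },
        };
        for (int i = 0; i < 4; i++)
            for (int j = 0; j < 4; j++)
                M[i][j] = T[i][j];
    }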

Question 4.

a) (5 marks) Explain clearly what each function call does in the code below:

    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA);
    glutInitContextVersion(3, 2);
    glutInitContextProfile(GLUT_CORE_PROFILE);
    glutCreateWindow("Example");
    glutMouseFunc(mouse);
    glutReshapeFunc(reshape);
    glutDisplayFunc(display);
    init();
    glutMainLoop();

Line glutInit: initializes GLUT using the supplied command line arguments.
Line glutInitDisplayMode: specifies that the program requires double buffering and RGBA colour (including alpha) buffers.
Line glutInitContextVersion: specifies the OpenGL context version (here 3.2) needed by the program.
Line glutInitContextProfile: specifies that the core profile is needed.
Line glutCreateWindow: asks to create a window with the title "Example".
Line glutMouseFunc: registers function mouse as the mouse callback function.
Line glutReshapeFunc: registers function reshape as the reshape callback function.
Line glutDisplayFunc: registers function display as the display callback function.
Line init: calls the user-defined init function.
Line glutMainLoop: the program enters the event handling loop (never returns).

b) (3 marks) Suppose you want to build a program using OpenGL that draws two kinds of objects: shiny billiard balls, and dull wooden balls. How would you implement these in a way that realistically represents the difference in shininess between the two kinds of objects?

For the shiny billiard balls, I would look at the colour of the balls and set their diffuse coefficients (k_d,r, k_d,g, k_d,b) appropriately. For the specular coefficients (k_s,r, k_s,g, k_s,b), I would adjust them to larger numbers than the diffuse coefficients, so that when the Phong (or modified Phong) shading formula is applied, the specular term dominates. I would also set the shininess coefficient α to be large (between 100 and 200).

For the dull wooden balls, I would set the diffuse coefficients appropriately for the colour of wood. For the specular coefficients, I would set them smaller in proportion, so the diffuse term dominates in the shading computation. I would also set α to a small number (e.g., 5 to 10).

c) (2 marks) Suppose you have a situation where both the viewer and the only light source are always very distant from the objects in a scene. How could you take advantage of this to perform the lighting calculations more efficiently?

Because the viewer and the light source are very distant, the view vector v and the light vector l are essentially constant over the objects, so the computed shading varies slowly. I would therefore avoid doing per-fragment shading completely. I would program the vertex shader or the application program to compute the colour and let the colour pass through the pipeline to the fragment shader. If the rendering result looks acceptable, I would use flat shading (one shading computation per polygon) to further reduce computation.

d) (2 marks) What are Euler angles? Briefly explain how you would represent a rotation using these angles.

Euler angles are three angles describing the orientation of an object with respect to a fixed coordinate system. The Euler angles are the three rotation angles α, β, and γ, about the x, y, and z axes respectively. Any orientation can be described by a rotation matrix R computed as the product R_x(α) R_y(β) R_z(γ), where R_x(α) denotes rotating by angle α about the x-axis, and so on.
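For interest, a C sketch of that composition of rotations (matrix layout and helper names are my own illustration):

    #include <math.h>
    #include <string.h>

    /* 3x3 row-major rotation matrices about the coordinate axes. */
    static void rot_x(double a, double m[3][3]) {
        double c = cos(a), s = sin(a);
        double r[3][3] = {{1, 0, 0}, {0, c, -s}, {0, s, c}};
        memcpy(m, r, sizeof r);
    }
    static void rot_y(double b, double m[3][3]) {
        double c = cos(b), s = sin(b);
        double r[3][3] = {{c, 0, s}, {0, 1, 0}, {-s, 0, c}};
        memcpy(m, r, sizeof r);
    }
    static void rot_z(double g, double m[3][3]) {
        double c = cos(g), s = sin(g);
        double r[3][3] = {{c, -s, 0}, {s, c, 0}, {0, 0, 1}};
        memcpy(m, r, sizeof r);
    }
    static void mat_mul(const double a[3][3], const double b[3][3],
                        double out[3][3]) {
        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 3; j++) {
                out[i][j] = 0.0;
                for (int k = 0; k < 3; k++)
                    out[i][j] += a[i][k] * b[k][j];
            }
    }

    /* R = R_x(alpha) R_y(beta) R_z(gamma): the orientation from Euler angles. */
    static void euler_to_matrix(double alpha, double beta, double gamma,
                                double R[3][3]) {
        double Rx[3][3], Ry[3][3], Rz[3][3], T[3][3];
        rot_x(alpha, Rx); rot_y(beta, Ry); rot_z(gamma, Rz);
        mat_mul(Ry, Rz, T);
        mat_mul(Rx, T, R);
    }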

Question 5.

a) (3 marks) Consider a horizontal row of fragments that form part of a triangle during rasterization. Suppose the coordinates, normals and texture coordinates of the first and last fragment in the row have already been calculated. How can we quickly calculate the coordinates of the other fragments as we scan along the row from the first fragment to the last fragment?

We can calculate the coordinates of the other fragments by interpolating the coordinates of the first and last fragments. We first compute the distance D (in pixel units) between the two end fragments. Each time we move along the row one pixel to the right, the distance of the current pixel from the first pixel increases by 1. Let d be this distance value. Then the coordinates of each pixel along the row can be computed as (pseudo code):

    for d = 1 to D {
        x = ((D - d) / D) * x_1 + (d / D) * x_2
    }

where x_1 and x_2 denote the coordinates of the two end fragments. The normals and texture coordinates can be interpolated similarly.

b) (2 marks) The painter's algorithm for hidden surface removal is an alternative to the z-buffer algorithm that instead sorts scene objects by distance and then draws them starting from the furthest away, with each overwriting (or "overpainting") what has previously been drawn. Describe two situations where this algorithm will not work well.

The painter's algorithm fails under two situations:
1. when the triangles intersect;
2. when the triangles form a cycle of overlapping depth.

In the exam, you can draw simpler diagrams, and there is no need to put in colours.

c) (2 marks) What is mipmapping? What is the advantage of using mipmapping?

Mipmapping is a technique where an original high-resolution texture map is scaled and filtered into multiple lower resolution ones. Mipmapping lessens interpolation error for smaller textured objects.
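For interest, in OpenGL the mipmap pyramid can be built and used like this (a sketch; creating and binding the texture object is omitted):

    /* Build the mipmap chain for the currently bound 2D texture ... */
    glGenerateMipmap(GL_TEXTURE_2D);

    /* ... and select a minification filter that uses it (trilinear filtering). */
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER,
                    GL_LINEAR_MIPMAP_LINEAR);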

d) (2 marks) Describe the two options for texture sampling in OpenGL. Contrast these options in terms of their rendering outputs and computation times.

OpenGL supports two options for sampling textures:
1. Point sampling. This option uses the value of the texture element that is closest to the texture coordinates output by the rasterizer.
2. Linear filtering. This option computes a weighted average of the texture elements in the neighbourhood of the texture coordinates output by the rasterizer.

Option 1 is faster but produces poorer rendering results. Option 2 is slower but produces better results. (Topic 16 OpenGL Texture Mapping, slide 13.)

e) (3 marks) What is aliasing? Describe a way to ease the aliasing problem in texture mapping.

Aliasing refers to the process by which smooth curves or lines become jagged because of the resolution of the graphics device or insufficient data to represent the curves. In texture mapping, we can ease the aliasing problem by considering that each pixel is not just a point but occupies a finite area (region) on the computer screen. With that consideration, we should extract the texture area corresponding to the four corners of each pixel and assign the average value of that texture region to the pixel. Mapping area to area like this helps to ease the aliasing problem. (Topic 15 Texture Mapping, slides 17-18.)
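A sketch of that area-averaging idea: a simple box filter over the texel region covered by one pixel. The Colour type and the texel() accessor are hypothetical names of my own, not part of any OpenGL API.

    typedef struct { float r, g, b; } Colour;

    /* Hypothetical accessor returning one texel of the texture map. */
    extern Colour texel(int u, int v);

    /* Average the texels in the region [u0, u1) x [v0, v1) that one
     * screen pixel covers. Assumes a non-empty region. */
    static Colour box_filter(int u0, int v0, int u1, int v1)
    {
        Colour sum = {0.0f, 0.0f, 0.0f};
        int count = 0;
        for (int v = v0; v < v1; v++)
            for (int u = u0; u < u1; u++) {
                Colour t = texel(u, v);
                sum.r += t.r; sum.g += t.g; sum.b += t.b;
                count++;
            }
        sum.r /= count; sum.g /= count; sum.b /= count;   /* the average */
        return sum;
    }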

Question 6.

a) (3 marks) What kinds of objects are generally suitable for modelling using a subdivision surface technique? Include an example of where using a subdivision surface technique would be appropriate, and another of where it wouldn't.

Objects that are locally smooth (having continuous 1st derivatives everywhere and continuous 2nd derivatives almost everywhere) and that can be recursively generated from a simple polygonal mesh acting as the control cage are suitable for modelling using subdivision surface techniques.

Examples: objects suitable for modelling using subdivision surface techniques include articulated figures and cartoon characters. Objects not suitable: industrial engineering parts. For such objects, precision is important; since subdivision surface techniques smooth the object in the subdivision process, they are not suitable for modelling this type of object.

In subdivision surface terminology, a control cage is a coarse polygonal mesh whose vertices are used as control points for subdivision. Some references refer to subdivision surface techniques loosely as "subdivision surfaces". Sometimes the term "subdivision surfaces" is used to refer to the mesh being subdivided.

b) (3 marks) Suppose we apply the Catmull-Clark technique to subdivide surfaces starting with a box shape. What happens to the resulting surface if we add additional vertices near one corner without altering the shape of the box?

The Catmull-Clark technique uses an approximation scheme to subdivide surfaces. Under this scheme, the subdivided mesh does not necessarily pass through the vertices of the control cage. However, since all control points have the same weight on the subdivision surface, if we add additional vertices (control points) near one corner of the box shape (the control cage), then, with the extra weight from the added control points, the subdivision surface will be pulled toward these control points. We effectively create a sharp corner at that region. In general, if we want a sharp corner, we put several control points close together; if we want a smooth surface, we put the control points further apart. (Some other subdivision surface techniques use an interpolation scheme instead.)

c) (3 marks) What are bone weights in the context of animation?

In animation of articulated figures, vertices on the mesh should follow the movement of the underlying bones smoothly. In particular, for vertices that are near the joint of two or more bones, their movement should be affected by all of these bones. In animation, we therefore attach a small number (4 is often sufficient) of bone weights and bone IDs to each vertex. These bone weights should be non-negative and should sum to 1. Bones that are close to a vertex should have more influence, so their weights should be higher (close to 1). When animating the articulated figure, each vertex is then transformed by the weighted sum of the 4 bone transformations.

d) (3 marks) In a typical implementation of skinning in OpenGL that takes advantage of shader programs, like that in lectures, say what the main static data is that's passed to the shaders (i.e., not changing between frames) and what the main dynamic data is that's passed to the shaders (i.e., changing each frame).

Please inspect the vertex shader in part 2 of your project. What static data are passed from the application program to the vertex shader? Vertices of the mesh? Vertex normals? Bone IDs? Bone weights? Anything else? What main dynamic data is passed to the shader? Bone transforms? Anything else?
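As an illustrative sketch only (the types and names are mine; the course's actual shader may differ), the per-vertex skinning computation described in c) looks like this in C:

    /* Multiply a 4x4 matrix (row-major) with a homogeneous point. */
    static void mat_vec(const double M[4][4], const double p[4], double out[4])
    {
        for (int i = 0; i < 4; i++) {
            out[i] = 0.0;
            for (int j = 0; j < 4; j++)
                out[i] += M[i][j] * p[j];
        }
    }

    /* Linear-blend skinning: transform vertex p by the weighted sum of its
     * 4 bone transformations. boneMatrix[i] is the current-frame transform
     * of bone i; the weights are non-negative and sum to 1. */
    static void skin_vertex(const double boneMatrix[][4][4],
                            const int boneId[4], const double weight[4],
                            const double p[4], double out[4])
    {
        for (int i = 0; i < 4; i++) out[i] = 0.0;
        for (int b = 0; b < 4; b++) {
            double q[4];
            mat_vec(boneMatrix[boneId[b]], p, q);
            for (int i = 0; i < 4; i++)
                out[i] += weight[b] * q[i];
        }
    }

END OF PAPER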