COS340-A October/November 2009 — Examination questions with sample answers


QUESTION 1 [10]

a) OpenGL uses z-buffering for hidden surface removal. Explain how the z-buffer algorithm works and give one advantage of using this method. (5)

Answer: OpenGL uses a hidden-surface method called z-buffering, which saves depth information in an extra buffer as objects are rendered, so that parts of objects that lie behind other objects do not appear in the image. The z-buffer records the depth of each pixel, i.e. its distance from the COP. As polygons are scan-converted in arbitrary order, the algorithm compares the z-value of each pixel it is trying to draw with the z-value already stored at those coordinates in the frame buffer. If the existing pixel at these coordinates (if there already was one) has a z-value further from the viewer than the pixel being drawn, the colour and depth of the existing pixel are updated; otherwise the algorithm continues without writing to the colour buffer or the depth buffer.

Advantages (1 mark for either answer):
- easy to implement in either software or hardware
- compatible with pipeline architectures: it can execute at the same speed at which vertices pass through the pipeline

b) Orthogonal, oblique and axonometric view scenes are all parallel view scenes. Define the term parallel view and explain the differences between orthogonal, axonometric, and oblique view scenes. (5)

Answer:
- Parallel views: views with the COP at infinity.
- Orthogonal views: the projectors are perpendicular to the projection plane, and the projection plane is parallel to one of the principal faces of the object. A single orthogonal view is therefore restricted to one principal face of the object.
- Axonometric views: the projectors are perpendicular to the projection plane, but the projection plane can have any orientation with respect to the object.
- Oblique projections: the projectors are parallel but can make an arbitrary angle with the projection plane, and the projection plane can have any orientation with respect to the object.
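The per-pixel update rule described in the answer to part (a) can be sketched in a few lines of C++. This is a minimal illustration, not part of the exam: the buffer size, names and the "smaller z means nearer" convention are assumptions made for the example.

```cpp
#include <array>
#include <cassert>
#include <limits>

// Hypothetical 4x4 "screen": a depth buffer and a colour buffer.
constexpr int W = 4, H = 4;
std::array<float, W * H> depthBuf;
std::array<int, W * H> colourBuf; // colour stored as a simple int id

void clearBuffers() {
    // Every pixel starts "infinitely" far away and with the background colour.
    depthBuf.fill(std::numeric_limits<float>::max());
    colourBuf.fill(0);
}

// The z-buffer test: write the fragment only if it is nearer to the
// viewer than what is already stored at (x, y).
void plot(int x, int y, float z, int colour) {
    int i = y * W + x;
    if (z < depthBuf[i]) {   // new fragment is nearer: update both buffers
        depthBuf[i] = z;
        colourBuf[i] = colour;
    }                        // otherwise: hidden behind an existing surface, discard
}
```

Because each fragment is tested independently, polygons can be drawn in any order, which is exactly why the method suits a pipeline architecture.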

QUESTION 2 [8]

a) Define the term homogeneous coordinates and explain why they are used in computer graphics. (4)

Answer: Homogeneous coordinates are four-dimensional column matrices used to represent both points and vectors in three dimensions. When points and vectors are represented using three-dimensional column matrices, one cannot distinguish between a point and a vector; with homogeneous coordinates this distinction can be made.

Marks can also be awarded for any of the points mentioned below:
- A matrix multiplication in three dimensions cannot represent a change of frame, while this can be done using homogeneous coordinates.
- All affine transformations can be represented as matrix multiplications in homogeneous coordinates.
- Less arithmetic work is involved when using homogeneous coordinates.
- The uniform representation of all affine transformations makes carrying out successive transformations far easier than in three-dimensional space.
- Modern hardware implements homogeneous-coordinate operations directly, using parallelism to achieve high-speed calculations.

b) Consider the line segment with endpoints a and b at (1, 2, 3) and (2, 1, 0) respectively. Compute the coordinates of vertices a and b that result after an anticlockwise rotation by 15° about the z-axis. (Show your workings.) (4)

Hint: The transformation matrix for rotation about the z-axis (where θ is the angle of anticlockwise rotation) is

    R_z(θ) = | cos θ  −sin θ   0   0 |
             | sin θ   cos θ   0   0 |
             |   0       0     1   0 |
             |   0       0     0   1 |

Answer: To rotate a and b by 15° about the z-axis we apply R_z(15°), with cos 15° ≈ 0.966 and sin 15° ≈ 0.259.

For a = (1, 2, 3):
    x' = 1(0.966) − 2(0.259) = 0.448
    y' = 1(0.259) + 2(0.966) = 2.191
    z' = 3

Similarly, for b = (2, 1, 0):
    x' = 2(0.966) − 1(0.259) = 1.673
    y' = 2(0.259) + 1(0.966) = 1.484
    z' = 0

So a' = (0.448, 2.191, 3) and b' = (1.673, 1.484, 0). (4)

QUESTION 3 [8]

OpenGL is a pipeline model. In a typical OpenGL application a vertex will go through a sequence of 6 transformations or changes of frame. In each frame the vertex has different coordinates.

a) Name these coordinates in the order that they occur in the OpenGL vertex transformation process. [5]
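The workings above can be checked mechanically. The sketch below applies the same R_z matrix (only the x and y rows, since z is unchanged); the struct and function names are illustrative, not part of the exam answer.

```cpp
#include <cassert>
#include <cmath>

struct Point3 { double x, y, z; };

// Rotate a point anticlockwise by `deg` degrees about the z-axis,
// applying the standard rotation matrix R_z from the hint.
Point3 rotateZ(Point3 p, double deg) {
    const double pi = std::acos(-1.0);
    double rad = deg * pi / 180.0;
    double c = std::cos(rad), s = std::sin(rad);
    return { c * p.x - s * p.y,   // x' = x cos θ − y sin θ
             s * p.x + c * p.y,   // y' = x sin θ + y cos θ
             p.z };               // z is unchanged by a rotation about z
}
```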

Answer (1/2 mark each for correct names; 2 marks for correct order):
1. Object or model coordinates
2. World coordinates
3. Eye or camera coordinates
4. Clip coordinates
5. Normalized device coordinates
6. Window or screen coordinates

b) Define the term perspective division and state specifically where it is used in the OpenGL vertex transformation process. [3]

Answer: Perspective division is the division of homogeneous coordinates by the w component. It is used to transform the clip coordinates, which are still represented by homogeneous coordinates, into three-dimensional representations in normalized device coordinates.

QUESTION 4 [12]

a) Discuss the difference between the RGB colour model and the indexed colour model with respect to the depth of the frame (colour) buffer. (4)

Answer: In both models, the number of colours that can be displayed depends on the depth of the frame (colour) buffer. The RGB model is used when a lot of memory is available, e.g. 12 or 24 bits per pixel. These bits are divided into three groups, representing the intensity of red, green and blue at the pixel, respectively. The RGB model becomes unsuitable when the depth is small, because the available shades become too distinct (discrete). The indexed colour model is used where memory in the colour buffer is limited. The bits per pixel are used as an index into a colour-lookup table, in which any shades of any colours can be specified (limited only by the colours that the monitor can display).

b) Describe, with the use of diagrams, the Cohen-Sutherland line clipping algorithm. (8)
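Perspective division, as defined in part (b), is a single division per component. A minimal sketch (the type names are illustrative):

```cpp
#include <cassert>

struct Vec4 { double x, y, z, w; }; // clip coordinates (homogeneous)
struct Vec3 { double x, y, z; };    // normalized device coordinates

// Divide the homogeneous clip coordinates by w to obtain NDC.
// Assumes w != 0, which holds for points in front of the camera.
Vec3 perspectiveDivide(Vec4 clip) {
    return { clip.x / clip.w, clip.y / clip.w, clip.z / clip.w };
}
```

This is the step that turns the projective representation back into ordinary three-dimensional coordinates; it happens after clipping and before the viewport transformation.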

Answer: Angel 7.4.1 (2 bonus marks)

The algorithm starts by extending the sides of the clipping window to infinity, thus breaking space up into nine regions. Each region is assigned a 4-bit binary number, or outcode, as shown in the diagram below:

    1001 | 1000 | 1010
    -----+------+-----
    0001 | 0000 | 0010
    -----+------+-----
    0101 | 0100 | 0110

For each endpoint of a line segment, we first compute the endpoint's outcode. Consider a line segment whose outcodes are given by X (endpoint 1) and Y (endpoint 2). We have four cases:

- X = Y = 0: Both endpoints are inside the clipping window (segment AB in the diagram); the segment can be rasterized.
- X ≠ 0, Y = 0 (or vice versa): One endpoint is inside the window and one is outside (segment CD). We must shorten the line segment by calculating 1 or 2 intersections.
- X & Y ≠ 0: Taking the bitwise AND of the outcodes tells us whether the two endpoints lie on the same outside side of the window; if so, the line segment is discarded (segment EF).
- X & Y = 0, with X ≠ 0 and Y ≠ 0: Both endpoints are outside, but on different edges of the window. We cannot tell from the outcodes alone whether the segment can be discarded or must be shortened (segments GH and IJ). We need to intersect the segment with one side of the window and determine the outcode of the resulting point.
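The outcode computation and the two trivial tests can be sketched as follows. The bit layout matches the diagram above (top = 8, bottom = 4, right = 2, left = 1); the window bounds are illustrative values, not from the exam.

```cpp
#include <cassert>

const int LEFT = 1, RIGHT = 2, BOTTOM = 4, TOP = 8;
const double xmin = 0, xmax = 100, ymin = 0, ymax = 100;

// Compute the 4-bit Cohen-Sutherland outcode for a point.
int outcode(double x, double y) {
    int code = 0;
    if (x < xmin)      code |= LEFT;
    else if (x > xmax) code |= RIGHT;
    if (y < ymin)      code |= BOTTOM;
    else if (y > ymax) code |= TOP;
    return code;
}

// Case X = Y = 0: both endpoints inside, accept without clipping.
bool triviallyAccept(int a, int b) { return (a | b) == 0; }

// Case X & Y != 0: both endpoints on the same outside side, discard.
bool triviallyReject(int a, int b) { return (a & b) != 0; }
```

All other cases require computing at least one intersection with a window edge, as described above.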

QUESTION 5 [12]

a) Discuss the distinguishing features of ambient, point, spot and distant light sources. (8)

Answer (2 marks each):
- An ambient light source produces light of constant intensity throughout the scene; all objects are illuminated from all sides.
- A point light source emits light equally in all directions, but the intensity of the light diminishes with (i.e. is proportional to the inverse square of) the distance between the light and the objects it illuminates. Surfaces facing away from the light source are not illuminated.
- A spot light source is similar to a point light source, except that its illumination is restricted to a cone in a particular direction. A spot light source can also have more light concentrated at the centre of the cone.
- A distant light source is like a point light source except that the rays of light are all parallel. The main benefit is the improved speed of rendering calculations.

b) Describe the difference between flat and smooth shading. (4)

Answer: Flat shading is where a polygon is filled with a single colour or shade across its surface. A single normal is calculated for the whole surface, and this determines the single colour. Smooth shading is where colour is interpolated across the surface of a polygon. This is usually used to give the appearance of a rounded surface, and is achieved by defining different normals at each of the vertices of the polygon and interpolating the normals across the polygon.

QUESTION 6 [8]

a) Describe environment maps, and explain the difference between the use of cube maps and spherical maps to implement them. (4)
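The interpolation at the heart of smooth shading is, in its simplest form, a linear blend between the values computed at two vertices. A minimal sketch (the function name and the idea of interpolating a single intensity, rather than a full normal vector, are simplifications for illustration):

```cpp
#include <cassert>
#include <cmath>

// Linearly interpolate an intensity between two vertex values.
// t = 0 gives the value at vertex A, t = 1 the value at vertex B.
double lerp(double a, double b, double t) {
    return (1.0 - t) * a + t * b;
}

// Flat shading, by contrast, would use one value (computed from the
// single face normal) for every point of the polygon, so no
// interpolation takes place at all.
```

Repeating this blend along polygon edges and then along each scanline is what produces the smooth gradient across the surface.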

Answer: One way to create a fairly realistic rendering of an object with a highly reflective surface is to map a texture to its surface (called an environment map) which contains an image of the other objects around the reflective object, as they would be seen reflected in its surface. A spherical map is an environment map in the form of a sphere, which is then converted to a flat image (by some projection) to be applied to the surface. A cube map is in the form of a cube, i.e. six projections of the other objects in the scene. The renderer then picks the appropriate part of one of these projections to map to each polygon on the surface of the reflective object (as a texture).

b) Briefly discuss the main difference between the opacity of a surface and the transparency of a surface with respect to alpha blending. (4)

Answer (Angel Section 7.9): The alpha channel is the fourth colour in the RGBA (or RGBα) colour mode. Like the other colour components, the application program can control the value of A (or α) for each pixel. If blending is enabled in RGBA mode, the value of α controls how the RGB values are written into the frame buffer. Because the surfaces of multiple objects lying behind one another can contribute to the colour of a single pixel, we say that the colours of these surfaces are blended or composited together. The opacity of a surface is a measure of how much light the surface blocks. An opacity of 1 (α = 1) corresponds to a completely opaque surface that blocks all light from surfaces hidden behind it; a surface with an opacity of 0 is transparent, and all light passes through it. The transparency or translucency of a surface with opacity α is given by 1 − α.
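The compositing described in part (b) is usually the "over" operation: each channel of the result is α times the source colour plus (1 − α) times the colour already in the frame buffer, which is what glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) configures in OpenGL. A minimal sketch (struct and function names are illustrative):

```cpp
#include <cassert>
#include <cmath>

struct RGB { double r, g, b; };

// Blend a source fragment with opacity `alpha` over the destination
// colour already stored in the frame buffer, per channel:
//   result = alpha * source + (1 - alpha) * destination
RGB blendOver(RGB src, double alpha, RGB dst) {
    return { alpha * src.r + (1.0 - alpha) * dst.r,
             alpha * src.g + (1.0 - alpha) * dst.g,
             alpha * src.b + (1.0 - alpha) * dst.b };
}
```

With α = 1 the opaque source completely replaces the destination; with α = 0 the transparent source leaves it unchanged, matching the definitions of opacity and transparency above.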

QUESTION 7 [12]

Consider the following OpenGL program that draws two walls of a room that meet at a corner. In the middle of the room is a rotating square suspended in mid-air. Answer the questions that follow.

1.  #include <GL/glut.h>
2.  #include <fstream>
3.  #include <cmath>
4.  float eye_pos[3] = {100, 50, -100};
5.  GLfloat angle = 0;
6.  bool rotating = true;
7.  bool clockwise = true;
8.  void drawPolygon()
9.  {
10.   glColor3f(1.0, 1.0, 0.0);
11.   glBegin(GL_POLYGON); // Draw square
12.   glVertex3d(20, 20, 0);
13.   glVertex3d(20, -20, 0);
14.   glVertex3d(-20, -20, 0);
15.   glVertex3d(-20, 20, 0);
16.   glEnd();
17. }
18. void drawRoom()
19. {
20.   glColor3f(1.0, 1.0, 1.0); // left wall
21.   glBegin(GL_POLYGON);
22.   glVertex3d(0, 0, 0);
23.   glVertex3d(0, 100, 0);
24.   glVertex3d(0, 100, -100);
25.   glVertex3d(0, 0, -100);
26.   glEnd();
27.   glBegin(GL_POLYGON); // right wall
28.   glVertex3d(0, 0, 0);
29.   glVertex3d(100, 0, 0);
30.   glVertex3d(100, 100, 0);
31.   glVertex3d(0, 100, 0);
32.   glEnd();
33. }
34. void display()
35. { glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

36.   glMatrixMode(GL_MODELVIEW);
37.   glLoadIdentity();
38.   gluLookAt(eye_pos[0], eye_pos[1], eye_pos[2], 0.0, 50.0, 0.0, 0.0, 1.0, 0.0);
39.   drawRoom();
40.   glTranslatef(40, 40, -40);
41.   glRotatef(angle, 0, 1, 0);
42.   drawPolygon();
43.   glFlush();
44.   glutSwapBuffers(); }
45. void idle()
46. {
47.   if (rotating)
48.   { if (!clockwise) // rotate counter-clockwise
      { angle -= 0.5; if (angle < -360.0) angle += 360.0; }
      else // rotate clockwise
49.   { angle += 0.5; if (angle > 360.0) angle -= 360.0; }
50.   }
51.   glutPostRedisplay();
52. }
53. void myInit()
54. {
55.   glPolygonMode(GL_FRONT, GL_FILL);
56.   glMatrixMode(GL_PROJECTION);
57.   gluPerspective(90, 1, 10, 210);
58. }
59. int main(int argc, char** argv)
60. {
61.   glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH);
62.   glutInitWindowSize(700, 700);
63.   glutCreateWindow("Room with floating shapes");
64.   myInit();
65.   glutDisplayFunc(display);
66.   glutIdleFunc(idle);
67.   glutMainLoop();
68.   return 0;
69. }

a) Say the following function is inserted in the program, and is called between lines 64 and 65:

void setupMenus()
{
  int main_menu = glutCreateMenu(mainMenuCallback);
  glutAddMenuEntry("Rotate clockwise", 0);
  glutAddMenuEntry("Rotate anti-clockwise", 1);
  glutAddMenuEntry("Pause rotation", 2);
  glutAddMenuEntry("Quit", 3);
  glutAttachMenu(GLUT_RIGHT_BUTTON);
}

Write the callback function for this menu to do the following:
- Rotate square clockwise
- Rotate square anti-clockwise
- Pause rotation of square
- Exit program

Hint: The rotation of the square is controlled by the boolean variables rotating and clockwise (see lines 6 and 7, as well as the function idle). [4]

Answer:

void mainMenuCallback(int id)
{
  switch (id)
  {
    case 0: rotating = true;  clockwise = true;  break;
    case 1: rotating = true;  clockwise = false; break;
    case 2: rotating = false; break;
    case 3: exit(0);
  }
}

b) Write a keyboard callback function that will allow the user to zoom in towards the corner of the room and back out again. Use the < key to zoom in and the > key to zoom out. You do not need to set maximum or minimum values for the camera position. The y value of the camera position will remain constant, and the zoom increment can be set to 3.0. You can assume that glutKeyboardFunc(keyboard) is added to the main function.

Hint: Take note of the initial position of the camera in line 4 relative to the visible corner of the room.

Answer:

void keyboard(unsigned char key, int x, int y)
{
  switch (key)
  {
    case '<': eye_pos[0] += 3.0; // increment value can be different
              eye_pos[2] -= 3.0;
              break;
    case '>': eye_pos[0] -= 3.0;
              eye_pos[2] += 3.0;
              break;
  }
  return;
}

c) Consider the following function for loading an image from a file into an array, and for specifying it as a texture:

void setTexture()
{
  const int WIDTH = 512;
  const int HEIGHT = 1024;
  GLubyte image[WIDTH][HEIGHT][3];
  FILE* fd;
  fd = fopen("texture.bmp", "rb");
  fseek(fd, 55, SEEK_SET); // Move file pointer past header info
  fread(&image, 1, WIDTH * HEIGHT * 3, fd);
  fclose(fd);
  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
  glTexImage2D(GL_TEXTURE_2D, 0, 3, WIDTH, HEIGHT, 0, GL_RGB, GL_UNSIGNED_BYTE, image);
}

The file texture.bmp contains a 512x1024 image. If this function is inserted in the above program (and called in the main function just before myInit is called), give the statements that

need to be inserted in the drawRoom function (and state where) to apply the texture to the surface of the right wall of the room.

Answer: In the function drawRoom, insert the following calls of glTexCoord2f before the respective calls of glVertex3d:

glTexCoord2f(0, 0); // after line 27
glTexCoord2f(0, 1); // after line 28
glTexCoord2f(1, 1); // after line 29
glTexCoord2f(1, 0); // after line 30