Graphics for VEs Ruth Aylett
Overview VE Software Graphics for VEs The graphics pipeline Projections Lighting Shading
VR software
Two main types of software are used: off-line authoring or modelling packages, and runtime systems
Some runtime systems support some authoring
Sometimes limited, e.g. Java3D
Sometimes part of a large-scale toolkit, e.g. WorldViz, WorldToolkit
Toolkits are generally expensive and have had trouble sustaining a market
Runtime VR systems
Two major parts: initialisation and the update loop
Initialisation is executed once at program start-up
It generates or loads a run-time world database containing a description of all entities in the virtual world
Usually read from a file generated by a separate authoring or modelling program; the import formats supported must be known
It also loads lighting models and checks that input/output devices are working
Cyclic executive
Check and initialise hardware
Import or create 3D world
Create viewpoints
Render world
Check input devices
This software architecture can be used to implement simple runtime VR systems
The main simulation update loop continues until the program is halted
Each loop represents one time step of the simulation: render the current state of the virtual world, then check input devices and implement changes to the world
Tasks within an update loop
Updating state: reading values from input devices and applying those values to the objects being controlled by each device (including the participant's viewpoint or body positions)
For entities with autonomous behaviours: execute algorithms to calculate behaviours for the current time step, e.g. computing the new position and orientation of a falling or tumbling object, a self-propelled object, or an intelligent object
If collision detection is supported, all possible collisions must be checked and object movements adjusted as needed
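The update-loop tasks above can be sketched in code. This is a minimal, illustrative Python sketch (all class and function names are invented for the example): each loop iteration updates an autonomous entity's behaviour for one time step; a real system would also read input devices, resolve collisions and render the world at the point marked in the comments.

```python
class Entity:
    """Illustrative autonomous entity: an object falling under gravity."""
    def __init__(self, y, vy=0.0):
        self.y, self.vy = y, vy

    def update(self, dt):
        self.vy -= 9.8 * dt          # apply gravity to velocity
        self.y += self.vy * dt       # integrate position

def run_loop(entities, steps, dt=1.0/60):
    """One iteration = one simulation time step: update state, then render.
    Input handling, collision checks and rendering are omitted in this sketch."""
    for _ in range(steps):
        for e in entities:           # behaviour update for each entity
            e.update(dt)
        # read_inputs(); resolve_collisions(); render(world)  <- in a real loop
    return entities

ball = Entity(y=10.0)
run_loop([ball], steps=60)           # simulate one second at 60 Hz
print(round(ball.y, 2))
```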
Game Engines
Increasing recent use of 3D game engines: Doom, Quake; Unreal Tournament now the most popular; Half-Life and NeverWinter Nights growing
Game engines have optimised 3D engine features
Also large user communities for support
Supply of generic toolkits helps cut development time: collision detection, physics-based models, AI behaviours, multiplayer client/server modes etc.
Downsides include licensing problems
Also turning off the shooting
Proprietary formats for models and scripting
Free Runtime Software
A number of open source systems:
VR Juggler, with a separate scenegraph system
Virtual Rendering System (VRS3D)
OpenSG
OpenSceneGraph
3dml
Java 3D: an extension API to the core Java SDK; probably fundamentally too slow to be a serious tool for the development of professional VEs
On the web, badly impacted by its extension status and lack of browser functionality (especially MS IE)
Graphics Pipeline
3D DATA -> TRANSFORM -> 3D DATA IN CAMERA COORDS -> TRANSFORM -> 2D DATA IN SCREEN COORDS -> HIDDEN SURFACE & RENDER -> DATA FULLY ON SCREEN
Scenegraph and pipelines
Scene Description
World Cartesian coordinates: 2 dimensions (X, Y) or 3 dimensions (X, Y, Z)
Objects have shape, colour and location
Object Representation
POINTS: (x, y, z)
LINES: (x1, y1, z1), (x2, y2, z2)
POLYGONS: (x1, y1, z1) ... (xn, yn, zn)
MESHES: a set of vertices V1 = (x1, y1, z1) ... Vn = (xn, yn, zn) and a set of polygons P1, P2, ... Pm
Each polygon Pi is represented as an ordered list of vertices
Object Colour
RGB system: Red - Green - Blue, an additive light system
Colour represented as (R, G, B)
Examples: (1, 1, 0) yellow; (0, 0, 0) black; (0, 1, 0) green
Building a World Built from a set of objects Objects translated into position Objects rotated as appropriate Include light sources Ambient Non-directional Spotlight
Transforming objects
Translation: add an offset to every vertex of an object
Rotation: use trigonometry to calculate the new position of each vertex; allows rotation around each Cartesian axis; may be represented as matrices
Scale: multiply each vertex by a constant scale factor; always apply before the object is put into the scene
Translation
An object is translated by adding an offset to every vertex:
(x, y) -> (x + Tx, y + Ty)
Matrix Rotation: 2 dimensions
Rotating the point (X, Y) by angle a gives (X1, Y1):

  [X1]   [ cos(a)  -sin(a) ] [X]
  [Y1] = [ sin(a)   cos(a) ] [Y]
Matrix Rotation: 3 dimensions

X rotation:
  [X1]   [ 1     0        0    ] [X]
  [Y1] = [ 0   cos(a)  -sin(a) ] [Y]
  [Z1]   [ 0   sin(a)   cos(a) ] [Z]

Y rotation:
  [X1]   [  cos(a)  0  sin(a) ] [X]
  [Y1] = [    0     1    0    ] [Y]
  [Z1]   [ -sin(a)  0  cos(a) ] [Z]

Z rotation:
  [X1]   [ cos(a)  -sin(a)  0 ] [X]
  [Y1] = [ sin(a)   cos(a)  0 ] [Y]
  [Z1]   [   0        0     1 ] [Z]
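The rotation matrices above can be applied to vertices directly. A small Python sketch (helper names are illustrative) showing the Z-rotation matrix in action:

```python
import math

def rot_z(a):
    """3x3 matrix for rotation by angle a (radians) about the Z axis."""
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0],
            [s,  c, 0],
            [0,  0, 1]]

def apply(m, v):
    """Multiply 3x3 matrix m by column vector v."""
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

# Rotating (1, 0, 0) by 90 degrees about Z gives (0, 1, 0)
x, y, z = apply(rot_z(math.pi / 2), [1, 0, 0])
print(round(x, 6), round(y, 6), round(z, 6))
```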
Projection
Objects exist in 3 dimensions; the final image is in 2 dimensions
Projection reduces the three dimensions to two
Perspective projection mimics the scaling of objects with distance from the eye
Isometric projection does not scale with distance
Camera & Projection (figure: the world and the projection plane)
Camera & Projection
The camera moves according to the head movement
The projection plane is fixed in relation to either the camera or the world
Camera & Projection
Projection plane fixed to the camera
Characteristics: Camera & Projection Projection plane fixed to the camera Constant Field of View Projection plane normal to the viewing direction
Camera & Projection
Projection plane fixed to the world
Characteristics: Camera & Projection Projection plane fixed to the world Field of View depends on camera position viewing direction not always normal to projection plane
Clipping
Projected objects may lie:
Totally on screen
Partially on screen
Completely off screen
Polygons are clipped to the sides of the screen
Must make sure the new polygons are complete
Remaining polygons may obscure each other
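The slide does not name a specific clipping method; one standard choice is the Sutherland-Hodgman algorithm, which clips a polygon against one screen edge at a time, always producing a complete polygon. A Python sketch for a single clip edge (the left edge, x >= x_min; function and variable names are illustrative):

```python
def clip_left(polygon, x_min=0.0):
    """Clip a polygon (list of (x, y) vertices) against the edge x >= x_min,
    producing a complete polygon (Sutherland-Hodgman, one edge)."""
    out = []
    for i, cur in enumerate(polygon):
        prev = polygon[i - 1]
        cur_in, prev_in = cur[0] >= x_min, prev[0] >= x_min
        if cur_in != prev_in:
            # The polygon edge crosses the boundary: add the intersection point
            t = (x_min - prev[0]) / (cur[0] - prev[0])
            out.append((x_min, prev[1] + t * (cur[1] - prev[1])))
        if cur_in:
            out.append(cur)
    return out

# A triangle straddling x = 0 is cut into a complete quadrilateral
result = clip_left([(-1, 0), (1, 0), (1, 1)])
print(result)
```

A full clipper applies the same step for the right, top and bottom edges in turn.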
Polygon Sorting & Painting
Aim: identify which parts of each polygon are visible
Many solutions developed for special cases
Consider two methods:
Painter's algorithm
Z-buffer algorithm
Screen
The picture is drawn in a frame buffer, an area of memory
Think of the screen as a grid of squares; each square represents a pixel
Every pixel can hold a colour (R, G, B)
Drawing a picture is just colouring the squares
Painter's algorithm
Method:
Generate all the polygons
Order them by distance from the camera
Draw the polygons furthest away first and move towards the viewer
Visible polygons are drawn over distant polygons
Works when polygons are well-ordered; some cases cannot be solved
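A sketch of the painter's algorithm in Python, using the average vertex depth as the sort key (a common heuristic, which fails for the well-known cyclic-overlap cases; function names are illustrative):

```python
def painters_draw(polygons, draw):
    """Painter's algorithm sketch: sort polygons back-to-front by depth,
    then draw so that nearer polygons overwrite farther ones."""
    def depth(poly):
        # Average z of the polygon's (x, y, z) vertices
        return sum(v[2] for v in poly) / len(poly)
    for poly in sorted(polygons, key=depth, reverse=True):  # furthest first
        draw(poly)

order = []
tris = [
    [(0, 0, 1), (1, 0, 1), (0, 1, 1)],   # near triangle (z = 1)
    [(0, 0, 5), (1, 0, 5), (0, 1, 5)],   # far triangle  (z = 5)
]
painters_draw(tris, lambda p: order.append(p[0][2]))
print(order)   # the far polygon is drawn first
```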
Z-buffer algorithm
Method:
Retain distance information for each vertex
The frame buffer is extended with a depth buffer
While drawing a polygon, calculate the depth of each pixel:
FOR each pixel in polygon
  Calculate depth of pixel in polygon
  Compare depth with current frame-buffer depth
  IF polygon pixel depth is less than the one stored in the frame buffer
  THEN save polygon colour and depth in frame buffer
NEXT pixel
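The depth test at the heart of the Z-buffer can be sketched directly from the pseudocode above (buffer sizes and colours are illustrative):

```python
WIDTH, HEIGHT = 4, 4
FAR = float("inf")

# Frame buffer holds a colour per pixel; depth buffer holds its distance
frame = [[(0, 0, 0)] * WIDTH for _ in range(HEIGHT)]
depth = [[FAR] * WIDTH for _ in range(HEIGHT)]

def plot(x, y, z, colour):
    """Z-buffer test: keep the pixel only if it is nearer than what is stored."""
    if z < depth[y][x]:
        depth[y][x] = z
        frame[y][x] = colour

plot(1, 1, 5.0, (255, 0, 0))   # red pixel at depth 5
plot(1, 1, 2.0, (0, 255, 0))   # green is nearer: overwrites red
plot(1, 1, 9.0, (0, 0, 255))   # blue is farther: rejected
print(frame[1][1])
```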
Illumination Model The illumination model allows the colour to be calculated at a surface Approximates the behaviour of light within a scene Approximate since only considers individual surfaces Does not model reflections between them The Phong model is typically used
Phong Model A simple model that can be computed rapidly Three lighting components Diffuse Specular Ambient Uses four vectors To source (light) To viewer Normal Perfect reflector
Phong Illumination Model
The 3 types of lighting:
Ambient: constant background illumination; the same value for all objects in the scene; gives constant shading
Diffuse: light widely scattered by the surface; depends only on the angle between the light and the surface, and on the surface material
Specular: highlights on the object; depends on the position of the light with respect to the surface and the position of the viewer with respect to the surface, as well as the surface material
Diffuse Reflection
Mathematical description:
N = normal to the object at the intersection
L_i = vector from the intersection to light i
I_i = intensity of light i
k_d = coefficient of diffuse reflection
a_i = angle between L_i and N
The diffuse component is given by:
k_d (I_1 cos(a_1) + I_2 cos(a_2) + I_3 cos(a_3) + ...)
Specular Reflection
Mathematical description:
k_s = coefficient of specular reflection
L_i = vector from the intersection to light i
I_i = intensity of light i
R = direction of reflection
n = level of specularity
The specular component is given by:
k_s (I_1 (L_1 · R)^n + I_2 (L_2 · R)^n + ...)
Light Sources
In the Phong model, we add the results from each light source
Each light source has separate diffuse, specular, and ambient terms to allow for maximum flexibility, even though this form does not have a physical justification
Separate red, green and blue components
Hence 9 coefficients for each point source:
I_dr, I_dg, I_db, I_sr, I_sg, I_sb, I_ar, I_ag, I_ab
Partial Illumination Model
Ambient light only
Ambient light + diffuse reflections (single source, k_s = 0)
Full Illumination Model
Ambient light + diffuse reflections + specular reflections:
I_r = I_a k_ar + I_i (k_dr (L_i · N) + k_s (L_i · R)^n)
I_g = I_a k_ag + I_i (k_dg (L_i · N) + k_s (L_i · R)^n)
I_b = I_a k_ab + I_i (k_db (L_i · N) + k_s (L_i · R)^n)
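The per-channel equations above can be evaluated directly. A Python sketch for one colour channel with a single light source, following the slides' formulation (the clamping with max() is a standard addition to keep back-facing contributions at zero; all parameter values are illustrative):

```python
import math

def normalize(v):
    m = math.sqrt(sum(c * c for c in v))
    return tuple(c / m for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def phong_channel(Ia, ka, Ii, kd, ks, n, L, N, R):
    """One colour channel: I = Ia*ka + Ii*(kd*(L.N) + ks*(L.R)^n),
    as in the slides' illumination model (single light source)."""
    diffuse = max(dot(L, N), 0.0)
    specular = max(dot(L, R), 0.0) ** n
    return Ia * ka + Ii * (kd * diffuse + ks * specular)

# Light directly above a horizontal surface: L, N and R all point up
L = N = R = normalize((0.0, 0.0, 1.0))
I = phong_channel(Ia=0.2, ka=0.5, Ii=1.0, kd=0.6, ks=0.3, n=8, L=L, N=N, R=R)
print(round(I, 2))   # 0.2*0.5 + 1.0*(0.6 + 0.3) = 1.0
```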
Object Shading Flat Shading Gouraud Shading Phong Shading
Object Shading: Flat Shading
Simplest way to shade a polygon
Apply the Phong illumination model once per polygon
All pixels in the polygon are shaded the same
Quick, but limited realism for meshes
Object Shading: Gouraud Shading
Normal at each polygon vertex
(r, g, b) colour computed for each vertex
Pixel colour calculated by linear interpolation: colour is a linear function of the vertex colours (Ri, Gi, Bi)
Slower, but smooth appearance across the polygon
Object Shading: Phong Shading
Normal at each polygon vertex
Interpolate normals along edges: the normal N is a linear function of the vertex normals Ni
(r, g, b) colour computed from each interpolated normal using the illumination model
Interpolate colour across the polygon
Very slow, but smooth appearance and good quality
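The Gouraud-style linear interpolation of vertex colours mentioned above can be sketched with barycentric coordinates, a common way to express a linear function over a triangle (function and variable names are illustrative):

```python
def gouraud_colour(p, verts, colours):
    """Gouraud shading sketch: the colour at point p inside a triangle is a
    linear (barycentric) interpolation of the three vertex colours."""
    (x1, y1), (x2, y2), (x3, y3) = verts
    det = (y2 - y3) * (x1 - x3) + (x3 - x2) * (y1 - y3)
    w1 = ((y2 - y3) * (p[0] - x3) + (x3 - x2) * (p[1] - y3)) / det
    w2 = ((y3 - y1) * (p[0] - x3) + (x1 - x3) * (p[1] - y3)) / det
    w3 = 1.0 - w1 - w2
    # Blend each channel with the barycentric weights
    return tuple(w1 * c1 + w2 * c2 + w3 * c3
                 for c1, c2, c3 in zip(*colours))

verts = [(0, 0), (1, 0), (0, 1)]
colours = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]   # red, green, blue corners
# The centroid gets an equal mix of the three vertex colours
c = gouraud_colour((1/3, 1/3), verts, colours)
print(c)
```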
Texturing
Two-dimensional texturing: use an image to map onto an object
Either scale the image to fit once on the object, or tile the object with a number of copies of the image
Common projection methods:
plane map
cylindrical map
spherical map
Projections: plane map (figure: the unit image in (u, v) projected onto the object)
Projections: cylindrical map
(u, v) -> (r*cos(2*pi*u), r*sin(2*pi*u), h*v), for a cylinder of radius r and height h
Projections: spherical map
(u, v) -> (r*cos(2*pi*u)*cos(pi*v), r*sin(2*pi*u)*cos(pi*v), r*sin(pi*v)), for a sphere of radius r
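The cylindrical and spherical map formulas above, transcribed into Python as given on the slides (parameter defaults are illustrative):

```python
import math

def cylindrical_map(u, v, r=1.0, h=1.0):
    """Map texture coordinates (u, v) in [0,1]x[0,1] onto a cylinder
    of radius r and height h, following the slide's formula."""
    return (r * math.cos(2 * math.pi * u),
            r * math.sin(2 * math.pi * u),
            h * v)

def spherical_map(u, v, r=1.0):
    """Map (u, v) onto a sphere of radius r, following the slide's formula."""
    return (r * math.cos(2 * math.pi * u) * math.cos(math.pi * v),
            r * math.sin(2 * math.pi * u) * math.cos(math.pi * v),
            r * math.sin(math.pi * v))

# u sweeps around the axis: u = 0.25 is a quarter-turn around the cylinder
x, y, z = cylindrical_map(0.25, 0.0)
print(round(x, 6), round(y, 6), round(z, 6))
```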
3D Texturing
2-dimensional textures only cover the surface of an object
3-dimensional textures are defined for all points in space
Objects are cut from the texture space - much like sculpting a statue from marble
Alternative rendering techniques Raytracing Aims to improve realism of image Slower than polygon rendering Suitable for producing still images and animations Radiosity Further improvement of the shading model Attempts to improve the treatment of ambient light Slow, but can be pre-computed and used with polygon rendering or raytracing
Problems of the graphics pipeline Difficult to achieve realism Reflections and transparency are not accurate Real shadows increase rendering time Polygonal objects are not smooth Gouraud/Phong shading needed to increase quality Difficult to render shapes defined mathematically The pipeline is not a natural approach
Ray Tracing - the concept
Draw a grid on a piece of paper
Cut a square hole in a card and connect some wires across the hole, again forming a grid
Hold the card in front of the scene to be painted and draw the image seen through each grid square on the corresponding square on the paper
If the image through a grid square is too complex, increase the number of squares on the paper and card
When the grid is suitably fine, just paint the average colour seen through each grid hole in the corresponding position on the paper
Ray Tracing
1. Trace a ray from the eye through a pixel, into the scene
2. Find the nearest object intersected by the ray
3. Cast new reflected and refracted rays
4. Calculate the light arriving from the reflected and refracted rays, and from direct illumination by lights in the scene
5. Apply the shading model to get the colour of the object at the intersection
6. Paint the pixel with the computed colour
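Steps 1-2 above (casting a ray and finding the nearest intersected object) can be sketched for a scene of spheres. The ray-sphere test solves a quadratic for the distance t along the ray; all names are illustrative, and the ray direction is assumed to be unit length:

```python
import math

def ray_sphere(origin, direction, centre, radius):
    """Return the distance t along the ray to the nearest hit, or None.
    Solves |origin + t*direction - centre|^2 = radius^2 (quadratic in t)."""
    oc = [o - c for o, c in zip(origin, centre)]
    b = 2 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4 * c          # direction is unit length, so a = 1
    if disc < 0:
        return None               # ray misses the sphere
    t = (-b - math.sqrt(disc)) / 2
    return t if t > 0 else None

def trace(origin, direction, spheres):
    """Step 2 of the slide: find the nearest object intersected by the ray."""
    hits = [(t, s) for s in spheres
            if (t := ray_sphere(origin, direction, s[0], s[1])) is not None]
    return min(hits, default=None)

spheres = [((0, 0, 5), 1.0), ((0, 0, 10), 1.0)]   # two spheres on the z axis
hit = trace((0, 0, 0), (0, 0, 1), spheres)
print(hit[0])   # nearest hit: the front face of the first sphere, at t = 4
```

A full ray tracer would then recurse from the hit point with reflected and refracted rays (steps 3-4) before shading.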
Radiosity
Radiosity is a physically based model of global diffuse illumination
Developed at Cornell University in 1984 from radiative heat transfer
Assumes that all surfaces are ideal Lambertian (diffuse) reflectors - ray tracing assumes ideal specular
Radiosity discretises the scene and produces data independent of the viewer
In general, radiosity takes longer than ray tracing
Steps:
Build the environment
Determine the form factors
Solve the radiosity equation
Render the environment
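The "solve the radiosity equation" step amounts to solving B_i = E_i + rho_i * sum_j F_ij B_j for the patch radiosities B, where E is emitted light, rho is diffuse reflectivity and F holds the form factors. A Python sketch using simple fixed-point (Jacobi-style) iteration, with invented two-patch numbers:

```python
def solve_radiosity(E, rho, F, iterations=100):
    """Iteratively solve the radiosity equation B = E + rho * (F @ B)
    for n discretised patches (gathering formulation)."""
    n = len(E)
    B = list(E)                       # start from the emitted light
    for _ in range(iterations):
        B = [E[i] + rho[i] * sum(F[i][j] * B[j] for j in range(n))
             for i in range(n)]
    return B

# Two facing patches: one emitter, one purely reflective (illustrative numbers)
E   = [1.0, 0.0]                      # patch 0 emits light
rho = [0.0, 0.5]                      # patch 1 reflects half of what it gathers
F   = [[0.0, 0.2],                    # form factors between the two patches
       [0.2, 0.0]]
B = solve_radiosity(E, rho, F)
print([round(b, 3) for b in B])
```

Real solvers use matrix methods or progressive refinement rather than plain iteration, but the fixed point being sought is the same.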
Radiosity
Radiosity is not a rendering technique: it is independent of the viewer's position and produces data to be rendered later
Additional rendering is needed to produce an image, either through a graphics pipeline or using ray tracing methods
Best results when both ray tracing and radiosity are used
A characteristic radiosity feature is colour bleeding