Terrain Rendering using Multiple Optimally Adapting Meshes (MOAM)


Examensarbete (master's thesis) LITH-ITN-MT-EX--04/018--SE

Terrain Rendering using Multiple Optimally Adapting Meshes (MOAM)

Mårten Larsson

Department of Science and Technology (Institutionen för teknik och naturvetenskap)
Linköpings Universitet, Norrköping, Sweden

LITH-ITN-MT-EX--04/018--SE

Terrain Rendering using Multiple Optimally Adapting Meshes (MOAM)

Examensarbete (master's thesis) carried out in Media Technology at Linköpings Tekniska Högskola, Campus Norrköping

Mårten Larsson

Handledare (supervisor): Dr. Doug Roble
Examinator (examiner): Prof. Anders Ynnerman

Norrköping

Avdelning, Institution (Division, Department): Institutionen för teknik och naturvetenskap (Department of Science and Technology)
Språk (Language): Engelska (English)
Rapporttyp (Report category): Examensarbete (master's thesis)
ISRN: LITH-ITN-MT-EX--04/018--SE
Titel (Title): Terrain Rendering using Multiple Optimally Adapting Meshes (MOAM)
Författare (Author): Mårten Larsson
Nyckelord (Keywords): Terrain rendering, visual effects, ROAM, MOAM, Digital Domain

Terrain Rendering With Multiple Optimally Adapting Meshes (MOAM)

Mårten Larsson (marten@martenlarsson.com)

February 23

Abstract

Images of computer generated terrain are used widely in the visual effects industry. Entire render programs for terrain have been developed with great success; they create photorealistic landscapes with a correct atmosphere. One problem these systems can have is long render times. Creating an image of a procedurally generated terrain, at high enough resolution, takes a long time even for very fast computers. This thesis describes a new way to render terrain and landscapes for visual effects in feature films. The goal of the thesis is to shorten the render times of a terrain renderer called Terragen 3.5, used at Digital Domain (Venice CA, USA). It presents a new way to use real-time terrain rendering techniques in a non real-time render system. These techniques use frame to frame caching, and by doing so make the render times much shorter. A new way to use bucket rendering that maximises the effect of the caching scheme when doing distributed rendering over a network (in a render farm) is also presented.

Contents

1 Introduction and background
1.1 Introduction
1.1.1 Terrain rendering
1.1.2 Terragen renderer
1.2 Problem description
1.2.1 Render Speed
1.2.2 Holes
2 Approach
2.1 Cache
2.2 Frame to frame coherency
2.3 Buckets
3 The new rendering system
3.1 Overview
3.2 Binary tree mesh
3.2.1 Node
3.2.2 Split
3.2.3 Merge
3.2.4 Shadow nodes
3.2.5 Vertex blending
3.3 Planet
3.3.1 Generating the planet
3.3.2 Reset loop
3.3.3 Split loop
3.3.4 Merge loop
3.3.5 Sort loop
3.3.6 Raytracing
3.4 Bucket rendering
3.4.1 Bucket frame
3.4.2 Network rendering
4 Results
4.1 Render speed
4.1.1 Single images
4.1.2 Sequences
4.2 Holes
5 Conclusion
6 Future work
7 Acknowledgements

List of Figures

1.1 Overview of the Terragen render loop
3.1 Bin tree and mesh relationship, showing the first three levels of a bin tree and the mesh
3.2 A node's tree and mesh pointers
3.3 Triangle definition. v_x = vertex, e_x = edge and n_x = neighbour
3.4 T-intersection, creating a hole in the mesh
3.5 Definition of a triangle split
3.6 Pair splitting and an example of multiple force splitting
3.7 Sketch of how the backside of the planet has a lower resolution
3.8 Overview of necessary steps to generate the planet
4.1 Example One, single image with few shaders
4.2 Example Two, single image with many shaders
4.3 Example Three, image sequence with some camera movement
4.4 Example Four, image sequence with a lot of camera movement and roll
4.5 Example Five, an image rendered with a depth shader. All black dots are holes in the mesh. The top image is rendered with the old Terragen renderer and the lower image is rendered with a MOAM geometry

Chapter 1

Introduction and background

1.1 Introduction

Generating images of terrain or landscapes is an important part of making visual effects for feature films. Its uses range from creating set extensions (replacing parts of the background in an image with a model or a computer generated image) to generating complete, computer generated, photorealistic landscapes. Digital Domain (Venice CA, USA) is a visual effects company that produces visual effects for feature films. This report will present the results of a master's project done at Digital Domain. After starting to work on a feature film that required a large number of computer generated terrain shots, it was decided that the software used for landscape rendering was to be revised. Although it is a state of the art terrain renderer that generates very realistic terrain images, it is very slow. The main goal of this master's project is to increase the render speed of the software. The reader is expected to have a good knowledge of computer graphics. A good reference and introduction to computer graphics can be found in [12].

1.1.1 Terrain rendering

When generating images of landscapes, the landscape is often represented by a triangle mesh (a mesh consisting only of triangles such that each edge is adjacent to at most two triangles). It is not uncommon that these meshes contain millions of triangles, in order to accurately represent the surface without the triangle mesh structure being visible in the final rendered image. This many triangles are slow, and often unnecessary, to render. A way to reduce the number of triangles is to use a relative level of detail. This means that triangles far away in the image are allowed to be bigger in the 3d world and triangles closer in the image are made smaller. The aim is to make all triangles roughly the same size in the rendered image. This gives rise to a problem: when an object such as a terrain is big enough, it will contain triangles that are close in the image as well

as triangles that are far away. When moving the camera (see [14] chapter 8 for a definition of a camera) over the terrain an artifact known as popping will occur. This happens when a triangle moves from one level of detail to the next. This is addressed in [7], [6] and [9]. [14] gives a good overview of rendering computer graphics in general.

To give the landscape and surface its structure and colour, real height measurements from real terrain are often used. These measurements frequently do not provide enough resolution. When viewing the computer generated landscape from a short distance the details in the surface disappear. This is due to the fact that the terrain measurements do not contain data for small details in the landscape; typically the data-sets have measurements taken 30 meters apart. To get around this problem fractals are often used. A very complete reference of the techniques involved in generating virtual landscapes, and a good, comprehensive introduction to fractals, is given in [10]. Several techniques for generating large landscapes in real-time have been presented [1], [2], [3], [4] and [5]. Real-time visualisations have too low detail and often too simple atmosphere models to be used in visual effects for feature film.

1.1.2 Terragen renderer

The terrain renderer used at Digital Domain is a software called Terragen 3.5. This section will briefly describe the rendering architecture of Terragen.

Definitions

The Terragen render engine is an object based renderer: it renders one triangle from the world at a time, instead of one pixel in the final image at a time. Terragen uses a scene to store all information about all objects in the 3d environment. Typically the scene stores all materials, all particles, all lights and all geometries (triangles). Everything that will be rendered, and everything that is needed for rendering, is stored in the scene. Triangles are added to the scene by a scenemaker. Particles are added by a particle inflator.

A scenemaker is the basic geometry in Terragen. The standard scenemaker is the planet. All scenemakers generate a set of triangles called root triangles. These root triangles are a rough representation of the geometry that the scenemaker represents. A scenemaker is also responsible for creating a material that is added to the scene and associated with this scenemaker's root triangles. A material contains references to a tree-like structure of shaders. A shader is an object that, when given a point in the 3d world (for example a triangle vertex), can move it to a new position and give it a new colour or set any other surface property. This is called shading.

A particle inflator adds a particle to the scene. This particle is an object that is not rendered; instead it has a reference to the particle inflator that created it. In every frame that is rendered, the particle asks its particle inflator to add triangles to the scene. This way the particle has a very flexible definition: it can be any kind of geometry or particle system.

Render loop

When a render starts all scenemakers add their root triangles to the scene, and all particle inflators add their particles to the scene. All lights in the scene are set up. The rendering loop then takes a triangle from the scene and adds it to the renderer's render stack (a stack that contains triangles to be rendered; see [11] for a definition of a stack). In the next round of the rendering loop the top triangle in the render stack is removed from the stack and its size is tested. If the triangle is too big it is divided into two new triangles. The new point that is created, when the triangle is divided into two triangles, is sent to the tree of shaders that are associated with the triangle's material. The shaders move the point's coordinates in the 3d world and set its surface properties. The two new triangles are then added to the render stack. In the next round of the loop the top triangle is removed from the stack, its size is tested and the same procedure is carried out on that triangle until it is small enough to draw. When it is small enough to be drawn, it is transformed into a 2d triangle and drawn. When the rendering stack is empty a new triangle is added from the scene, and the loop starts over for this triangle. This continues until there are no more triangles left to be rendered in the scene.

For particles the loop is almost identical. The only difference is that their triangles are not in the scene. Instead, when the renderer asks the scene for a root triangle, the particle asks its particle inflator to add triangles directly to the render stack. These triangles are then rendered like the scenemakers' triangles. See figure 1.1 for an overview of the Terragen render loop.
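To make the flow of this loop concrete, here is a minimal C++ sketch of the stack-based subdivision described above. It is an illustration only, not Terragen's code: the type and function names (Triangle, screenSize, shadeMidpoint, drawTriangle, maxScreenSize) are hypothetical stand-ins, screenSize uses the 3d edge length as a placeholder for a real screen-space measure, and shadeMidpoint stands in for the shader tree that would displace and shade the new point.

#include <algorithm>
#include <cmath>
#include <stack>
#include <vector>

struct Vec3 { double x, y, z; };
struct Triangle { Vec3 v[3]; };

static double edgeLen(const Vec3& a, const Vec3& b)
{
    const double dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

// Placeholder for the triangle's longest edge in screen space (here: in 3d space).
static double screenSize(const Triangle& t)
{
    return std::max({ edgeLen(t.v[0], t.v[1]), edgeLen(t.v[1], t.v[2]), edgeLen(t.v[2], t.v[0]) });
}

// Index of the longest edge; edge i runs from v[i] to v[(i + 1) % 3].
static int longestEdge(const Triangle& t)
{
    int best = 0;
    for (int i = 1; i < 3; ++i)
        if (edgeLen(t.v[i], t.v[(i + 1) % 3]) > edgeLen(t.v[best], t.v[(best + 1) % 3]))
            best = i;
    return best;
}

// Placeholder for the shader tree: in the real renderer this displaces and shades the point.
static Vec3 shadeMidpoint(const Vec3& a, const Vec3& b)
{
    return { (a.x + b.x) * 0.5, (a.y + b.y) * 0.5, (a.z + b.z) * 0.5 };
}

static void drawTriangle(const Triangle&) { /* project to 2d and rasterise */ }

// Pull one root triangle at a time, subdivide it on a stack until each piece is
// small enough on screen, draw the small pieces and throw them away.
void renderScene(const std::vector<Triangle>& rootTriangles, double maxScreenSize)
{
    for (const Triangle& root : rootTriangles) {
        std::stack<Triangle> renderStack;
        renderStack.push(root);
        while (!renderStack.empty()) {
            Triangle t = renderStack.top();
            renderStack.pop();
            if (screenSize(t) <= maxScreenSize) {
                drawTriangle(t);                       // small enough: draw, then forget it
                continue;
            }
            const int e = longestEdge(t);              // split on the longest edge
            const int a = e, b = (e + 1) % 3, apex = (e + 2) % 3;
            const Vec3 mid = shadeMidpoint(t.v[a], t.v[b]);
            Triangle left  { { t.v[a], mid, t.v[apex] } };
            Triangle right { { mid, t.v[b], t.v[apex] } };
            renderStack.push(left);
            renderStack.push(right);
        }
    }
}

Note how nothing survives the loop: each drawn triangle is discarded immediately, which is exactly the behaviour the MOAM system later replaces with a persistent bin tree.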

Figure 1.1: Overview of the Terragen render loop.

1.2 Problem description

This architecture has some advantages. One of the main advantages is the small amount of memory needed to be able to render images. Terragen typically uses around 200 MB of memory (with no textures or any shadows) to render an image. This means that it can render images on almost any computer that has 256 MB of memory or more.

1.2.1 Render Speed

One of the main disadvantages with this architecture is that it is slow. Very slow. There are two reasons for this.

Multiple shader evaluation

The first and least significant reason it is slow is the fact that the same point in space will be calculated more than once by the shaders. This is due to the fact that an arbitrary number of triangles can share the same point, and since the triangles are evaluated one at a time on the rendering stack the corner vertices will be evaluated at least once per triangle. Evaluating the shaders is an expensive operation that takes a lot of time. Terragen can have an arbitrary number of shaders in its shader tree and typically a lot of shaders are needed to make a realistic landscape image. In Terragen this problem is minimized by using a cache to store newly shaded points in space. The reason this cache works reasonably well is the spatial coherence of the triangles on the stack. But this only works on some of the triangles (how many depends on the size of the cache). The reason for this is the fact that there will be no cached points for the bigger triangles when they are added to the stack, since their vertices are too far away (in 3d world distance) from the cached vertices.

No storing between frames

The main reason why this rendering scheme is slow is that no information is saved about the image or the 3d environment between frames. When a triangle is small enough to be drawn, it is drawn into the image and then thrown away. This is one of the things that enables Terragen to use so little memory; there are only a small number of triangles simultaneously in memory at any given time. When the rendering of an image is done, there is no information left about the triangle mesh, that is the landscape. If the camera does not move too much between the frames in an animation (which it typically doesn't), most of the landscape that was visible in the current frame will be visible in the next frame. So in any given frame most of the shader evaluations will be calculated on the exact same points as they were in the previous image. This means that the same point will be calculated at least once per frame in a number of successive frames where the point is visible, and that takes a lot of computation and a long time to do.

1.2.2 Holes

Another problem that this rendering scheme can cause is holes in the landscape. Depending on how the choice of which edge to divide when a triangle is split is defined, two neighbouring triangles might not be split on the same edge. This means that one triangle can have one or more points, that are moved in space by the shaders, on an edge where a neighbour has no points. The triangle without points on that edge will be flat along the entire edge, while the triangle having one or more points (after one or more splits) on the edge can be any shape. This shows up in the rendered image as holes or cracks in the landscape, places where you can see straight through the surface.

Chapter 2

Approach

Since one of the most time-consuming and computation heavy tasks in the renderer is the evaluation of the shader tree for every new vertex created, the overall goal for the new system is to save everything that can be saved. It should also ensure that no property or value is computed twice if it is not absolutely necessary.

2.1 Cache

To be able to store all the information about the mesh that forms the landscape, some kind of cache is needed. It should be able to store all geometry data such as triangles and triangle vertices. It also needs to store surface properties. It should also store information in a way that minimises the amount of memory needed to store it.

2.2 Frame to frame coherency

A goal of the new system is to store this cached information between frames. Since most of the surface from one frame is visible in the next frame, saving the surface between frames can save a lot of computation. Typically surface geometry information and surface properties (such as colour, specular colour, specular components and so on) can be saved. Some calculations can not be saved. Anything that is dependent on camera position can not be saved, since the camera will most likely move from frame to frame. Atmosphere calculations, for example, can not be stored since they change depending on where in the atmosphere the camera is, and how much atmosphere is between the camera and the surface. Shadows can not be stored either. This is due to the fact that vertices in the surface mesh that are not in shadow in one frame could end up in shadow in a later frame when the camera moves closer to that point. This happens because more detail is added to the surface the closer the camera is to it. At a lower resolution concavities in the surface may not be visible until the surface reaches a certain resolution.

2.3 Buckets

The whole scene can not be rendered in one go. There is simply not enough memory in the computers used today to render a full 2K frame (2048 x 1556 pixels, the resolution used for most visual effects today) and at the same time save the mesh between frames. So the new system must be able to handle bucket rendering (rendering the image as several smaller images instead of rendering the whole image in one go; this will be explained further in 3.4) and still use caching and frame to frame coherency.

Chapter 3

The new rendering system

The new system is partly based on ideas from a real-time technique called Real-time Optimally Adapting Mesh (ROAM) [1]. This is a technique used for visualising terrain models with an adapting level of detail in real-time. The basic idea of ROAM is to use a binary tree (see [11] for an explanation of a binary tree; called bin tree in the following text) for storing the triangles and the mesh between frames. It also includes a scheme for adapting the mesh's level of detail to fit every frame. The leaves in the bin tree are the triangles that are visible in the landscape. All triangles in the tree above the leaf level are triangles from the different subdivisions that were performed to get to the right triangle size that the leaves have. The tree is generated in the first frame, and saved for the following frames. In every frame after the first, all the triangles in the tree are tested to see if they are too big. If a triangle is too big it is split into two triangles. If a triangle is too small it is merged with its bin tree neighbour, making their bin tree parent a leaf triangle to be used in the mesh instead. This idea is used for the new Terragen system.

3.1 Overview

The new implementation of ROAM in Terragen is called Multiple Optimally Adapting Meshes (MOAM). The term Multiple will be explained in 3.4. The MOAM implementation of ROAM has a different scheme for adapting the mesh for every frame. It also has a different definition of the mesh. A MOAM planet, or any other MOAM geometry, starts with a set of root triangles just like a normal scenemaker. Each of these triangles is the root of a bin tree of triangles. When the root triangles are created they do not have any children. This makes every root triangle a bin tree of depth one with just one triangle in it. The MOAM geometry is added to the Terragen scene as a particle, and it interacts with the render loop as a particle. This means that it does not just add its root triangles to the scene and let the normal render loop split, displace and draw them, as any scenemaker would do. Instead, it adds a particle to the scene. When the render loop asks the particle for its triangles, the MOAM geometry starts to subdivide its root triangle trees until the leaf triangles in all

the trees are the right size. It then adds the leaf triangles to the normal render loop, which draws them. This is a big difference to how a normal scenemaker works in Terragen: the final leaf triangles are added and just drawn, instead of the root triangles being added, split and then drawn. The bin trees inside the MOAM geometry are then saved for the next frame. When a new frame is to be rendered, the MOAM geometry loops through all the bin trees, adjusts them to fit the current frame and starts adding new leaf triangles, to be drawn, to the render loop. The following sections will describe the definition of the bin tree structure and the algorithms used to generate a geometry with a MOAM mesh.

3.2 Binary tree mesh

All triangles are stored in a number of bin trees. The leaf nodes in a tree are the triangles that are in the current landscape mesh. These are the only triangles in a bin tree that are visible. All other triangles are intermediate split steps before the mesh was split enough to be drawn. The bin tree is a regular bin tree with each node being a triangle. A triangle can be divided into two new triangles. The two new triangles then become child nodes of the triangle that was split to create them. This way, a node's child nodes are always neighbours in the landscape triangle mesh. Also, two child nodes can be replaced by their parent without any gaps being created in the mesh. Figure 3.1 shows the relationship between the mesh and the bin tree.

3.2.1 Node

Each node has a reference to its two children and to its parent in the bin tree. These references are referred to as the tree pointers; they are used to keep track of relationships between nodes in the bin tree. Each node also has a reference to each of its three neighbours in the triangle mesh. These references are referred to as the mesh pointers; they are used to keep track of relationships between the triangles in the mesh. A clear distinction is made between the mesh information and the tree information. There does not have to be any relationship between the two. This way, a node (triangle) has full knowledge of both its surroundings in the tree and its surroundings in the mesh. Figure 3.2 shows a node's different pointers. A triangle's vertices are defined and numbered counter clockwise. The triangle edges are defined by the lowest vertex number on the edge. The mesh neighbours are defined by the number of the edge that the mesh neighbour is neighbour on. Figure 3.3 shows the triangle definition.

All triangles also store two values, called splittability and shadow splittability. They are used to store a calculated value each, based on the triangle's size in screen space (the size the triangle will have in the rendered image). This value is used as a measure to see if a triangle is small enough to be drawn or if it needs to be split into two new triangles. This will be explained further in 3.2.2. All triangles that have a vertex in the same exact position as another triangle will share the same vertex. This is one important step to remove holes. If the point is moved in space, all triangles sharing that point will automatically be moved as well.
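As a rough C++ sketch of what such a node could hold, the structure below separates the tree pointers from the mesh pointers and shares vertices between triangles, as described above. All field names are illustrative and not taken from the Terragen source.

#include <array>
#include <memory>

struct Vertex {
    double position[3];      // final shaded (displaced) position P
    double blendedPos[3];    // blended position P_w actually used when drawing
    double colour[3];        // shaded surface colour
    double blend = 1.0;      // per-frame blend value
};

// One bin tree node == one triangle. Tree pointers and mesh pointers are kept
// strictly separate, as described in 3.2.1.
struct TriNode {
    // Tree pointers (bin tree structure).
    TriNode* parent = nullptr;
    TriNode* child[2] = {nullptr, nullptr};               // left = 0, right = 1

    // Mesh pointers: neighbours n0, n1, n2 across edges e0, e1, e2.
    TriNode* neighbour[3] = {nullptr, nullptr, nullptr};

    // Shared vertices v0, v1, v2 (counter clockwise). Shared ownership means a
    // vertex moved by the shaders moves every triangle that references it.
    std::array<std::shared_ptr<Vertex>, 3> v;

    // Per-frame split measures.
    double splittability = 0.0;
    double shadowSplittability = 0.0;

    bool isLeaf() const { return child[0] == nullptr && child[1] == nullptr; }
};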

Figure 3.1: Bin tree and mesh relationship, showing the first three levels of a bin tree and the mesh.

Figure 3.2: A node's tree and mesh pointers.

Figure 3.3: Triangle definition. v_x = vertex, e_x = edge and n_x = neighbour.

3.2.2 Split

In each frame the splittability and shadow splittability of every triangle is calculated. These values are a measure of how big the triangle is in the 3d world and in the final image (screen space). splittability is defined as:

splittability = 0.5 + (log_2(sres * detail) - 0.5) / blend,  0 <= detail <= 1,  0 < blend <= 1    (3.1)

where sres is the length of the triangle's longest edge in screen space and detail is a variable used to control how small the triangles will appear in the final image. The variable blend is another variable used to control how much blending will be applied. Blending will be explained in 3.2.5; for now, assume that the term blend is set to one. The Terragen definition of shadow splittability is similar, but based on the 3d world size of the triangle edge instead of the screen size.
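A small C++ sketch of this measure is given below. Note that equation 3.1 is reconstructed from a garbled source, so the exact placement of blend and the clamp guarding the logarithm are assumptions; the function and parameter names are illustrative only.

#include <algorithm>
#include <cmath>

// Sketch of the splittability measure of equation 3.1: sres is the triangle's
// longest edge in screen space (pixels), detail in (0, 1] controls how small
// triangles get in the final image, blend in (0, 1] is the global blending control.
double splittability(double sres, double detail, double blend = 1.0)
{
    const double size = std::max(sres * detail, 1e-9);   // guard against log2(0)
    return 0.5 + (std::log2(size) - 0.5) / blend;
}

// A leaf triangle is split while its splittability exceeds 1; with blend = 1 this
// means its longest screen-space edge is longer than 2 / detail pixels.
bool needsSplit(double sres, double detail, double blend = 1.0)
{
    return splittability(sres, detail, blend) > 1.0;
}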

Triangle split definition

If a triangle is too large (its splittability is too high) it will be split into two new triangles. How this split is done is critical to prevent holes. The split in the new system ensures that no T-intersections will appear. Figure 3.4 shows a T-intersection.

Figure 3.4: T-intersection, creating a hole in the mesh.

The reason why T-intersections must be avoided is that they will create a hole in the surface. The point in the T-intersection will lie in the middle of the neighbouring triangle's edge. If the point is moved, a hole will appear, since the neighbouring triangle will be flat on the edge where the T-intersection appears.

When a triangle is to be split in the bin tree, a new vertex, v_n, is placed in the middle of the longest edge (in 3d world space) of that triangle. That longest edge is called the base edge. Then the triangle is split along the line connecting the new vertex and the apex vertex, v_a. The triangle that is split (called T) will get two child nodes in the bin tree. The left bin tree child node will be the left triangle in the split (called T_0), and the right bin tree child node will be the right triangle in the split (called T_1). T_0 and T_1 take T's place in the triangle mesh. See figure 3.5 for the definition of the split.

Figure 3.5: Definition of a triangle split.

Figure 3.6: Pair splitting and an example of multiple force splitting.

Forced split

In order to completely avoid T-intersections all triangles are split in pairs. This means that when a triangle (T) is to be split and the edge to split it on is decided, the neighbour on that edge, called the base neighbour (T_B), must be forced to split as well. If the base neighbour's (T_B's) edge that is facing the triangle (T) is not the longest edge on T_B, T_B must be split on another edge first. This split will force another neighbour of T_B to be split as well. One split might not be enough to make the edge on T_B that is facing T the longest edge on T_B; several splits might be necessary. The only time nodes are not split in pairs is when a triangle is on the edge of a mesh and the triangle edge to split is on the mesh edge. Forcing splits to be made in pairs with the force split scheme guarantees that the mesh can not have any holes. See figure 3.6 for an example of a force split and the definition of pair splitting. The next section will explain the split algorithm.

Split algorithm

The split algorithm splits a triangle and its neighbour (and possibly more neighbours) and replaces itself in the mesh with its new child nodes. It also sets all

tree and mesh pointers on all nodes it splits, and their neighbours, and makes sure that all triangles are split on their longest edge, even when they are force split. When going through the full algorithm it is very useful to look at figures 3.3, 3.5 and 3.6 to keep track of the triangle definitions. See Algorithm 1 for the full split algorithm.

Algorithm 1 Triangle splitting algorithm.
1: the triangle to split is T; T will be split on edge number x, called e_x in this example
2: get the neighbour (T_B) on the edge e_x
3: while T_B does not have its longest edge facing T's edge e_x do
4:   split T_B on its longest edge (and force split T_B's neighbour on that edge)
5: end while
6: create the new vertex (v_n) on T's base edge
7: run all shaders on v_n
8: for T and T_B do
9:   create child triangles T_(B)0 and T_(B)1 by copying T_(B)
10:   set T_(B)0's vertex v_((x+1) mod 3) to be v_n
11:   set T_(B)1's vertex v_x to be v_n
12:   set T_(B)0's and T_(B)1's parent tree pointer to be T_(B)
13:   set T_(B)'s left tree child pointer to be T_(B)0 and the right to be T_(B)1
14:   set the neighbour pointer in T_(B)'s neighbour on edge (x + 1) mod 3 that is pointing to T_(B) to point to T_(B)1
15:   set the neighbour pointer in T_(B)'s neighbour on edge (x + 2) mod 3 that is pointing to T_(B) to point to T_(B)0
16: end for
17: set T_0's neighbour pointer number x to point to T_B1
18: set T_0's neighbour pointer number (x + 1) mod 3 to point to T_1
19: set T_1's neighbour pointer number (x + 2) mod 3 to point to T_0
20: set T_1's neighbour pointer number x to point to T_B0
21: set T_B0's neighbour pointer number x to point to T_1
22: set T_B0's neighbour pointer number (x + 1) mod 3 to point to T_B1
23: set T_B1's neighbour pointer number (x + 2) mod 3 to point to T_B0
24: set T_B1's neighbour pointer number x to point to T_0
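The force-split loop (steps 3 to 5) hinges on one test: does the base neighbour already have the shared edge as its longest edge? The self-contained C++ sketch below shows that test on plain coordinates; the struct and function names are illustrative, and the exact-equality vertex comparison mirrors the shared-vertex rule from 3.2.1 rather than any real Terragen interface. The full pointer bookkeeping of steps 8 to 24 is deliberately left out.

#include <cmath>

struct Vec3 { double x, y, z; };

static double edgeLength(const Vec3& a, const Vec3& b)
{
    const double dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

// A triangle's base edge is its longest edge (the edge a split would bisect).
// Edge i runs from v[i] to v[(i + 1) % 3].
static int longestEdgeIndex(const Vec3 v[3])
{
    int best = 0;
    for (int i = 1; i < 3; ++i)
        if (edgeLength(v[i], v[(i + 1) % 3]) > edgeLength(v[best], v[(best + 1) % 3]))
            best = i;
    return best;
}

// True if the neighbour (given by its three vertices) has the shared edge a-b as its
// longest edge, so both triangles can be split as a pair without forcing further
// splits first. While this returns false, Algorithm 1 keeps force-splitting the
// neighbour on its own longest edge.
bool pairSplitIsImmediate(const Vec3 neighbour[3], const Vec3& a, const Vec3& b)
{
    const int e = longestEdgeIndex(neighbour);
    const Vec3& p = neighbour[e];
    const Vec3& q = neighbour[(e + 1) % 3];
    auto same = [](const Vec3& u, const Vec3& w) {
        return u.x == w.x && u.y == w.y && u.z == w.z;   // shared-vertex rule: exact match
    };
    return (same(p, a) && same(q, b)) || (same(p, b) && same(q, a));
}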

3.2.3 Merge

When the bin tree is traversed at the beginning of a frame, to adapt the cached tree from the previous frame to the current frame, some of the nodes in the tree will be too small. Keeping too-small nodes in the tree will result in using an unnecessarily large amount of memory. These nodes must be removed from the tree. This is done by the merge operation.

Triangle merge definition

If a triangle that is not a leaf in the bin tree is small enough to be drawn (its splittability is low enough) and its child nodes are too small (their splittability is too low), it can merge its child nodes. All merge operations are made in pairs, just as the split. This makes sure that no T-intersections (holes) are introduced in the merge. A merge works very much like a split, but backwards. The triangle that will merge its child nodes is called T. The left child node is called T_0 and the right is called T_1. If T_0 and T_1 are both leaf nodes they can be merged together. The vertex v_n that was added when T was split into T_0 and T_1 is deleted. T_0 and T_1 are also deleted and T takes their place as a leaf in the tree and becomes a triangle in the mesh.

Merge algorithm

In short, the algorithm checks if a merge is possible on the triangle T, finds its neighbour T_B and checks if T_B can be merged. If both can be merged, it merges T's child nodes by deleting them and replaces them with T in the mesh. The same thing happens to T_B. All mesh and tree pointers are restored to the state they were in before T and T_B were split. The figures (3.3, 3.5 and 3.6) are useful to look at to keep track of the triangle definitions while going through the merge algorithm. See Algorithm 2 for the full merge algorithm.

Algorithm 2 Triangle merging algorithm.
1: the triangle to merge the child nodes on is T; the vertex that will be removed in the merge lies on T's edge number x, called e_x in this example
2: get the neighbour (T_B) on the edge e_x
3: if T and T_B are small enough to be drawn (their splittability is small enough) then
4:   try to merge T's and T_B's child nodes' children
5:   if T's and T_B's child nodes' children were merged ok (making T's and T_B's child nodes leaf nodes) then
6:     for T and T_B do
7:       set T_(B)'s neighbour pointer number x to point to the same node that T_(B)1's neighbour pointer number x is pointing at
8:       set T_(B)'s neighbour pointer number (x + 1) mod 3 to point to the same node that T_(B)1's neighbour pointer number (x + 1) mod 3 is pointing at
9:       set T_(B)'s child tree pointers to nothing
10:      set the pointer pointing to T_(B)0 in T_(B)0's neighbour on edge number (x + 1) mod 3 to point to T_(B)
11:      set the pointer pointing to T_(B)1 in T_(B)1's neighbour on edge number x to point to T_(B)
12:    end for
13:    set T's neighbour pointer number (x + 2) mod 3 to point to T_B
14:    set T_B's neighbour pointer number (x + 2) mod 3 to point to T
15:    delete T_0, T_1, T_B0 and T_B1
16:  end if
17: end if
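A compact C++ sketch of the merge precondition (the tests of steps 3 to 5 above) is given below, reusing the illustrative TriNode structure from the sketch in 3.2.1; it is not Terragen's code, and the actual pointer restoration and deletion of steps 6 to 15 are only summarised in the closing comment.

static bool childrenAreLeaves(const TriNode& t)
{
    return t.child[0] && t.child[1] && t.child[0]->isLeaf() && t.child[1]->isLeaf();
}

// baseEdge is the edge of T that carries the vertex created when T was split.
// A node may merge its children only together with its base neighbour, and only
// when both are small enough to be drawn and both pairs of children are leaves.
bool canMergeDiamond(const TriNode& t, int baseEdge, double drawThreshold = 1.0)
{
    const TriNode* tb = t.neighbour[baseEdge];     // base neighbour (may be absent on a mesh border)
    if (!childrenAreLeaves(t))
        return false;
    if (t.splittability > drawThreshold)           // T itself must be small enough to draw
        return false;
    if (tb == nullptr)
        return true;                               // border case: T merges alone
    return childrenAreLeaves(*tb) && tb->splittability <= drawThreshold;
}
// When this holds, the merge deletes the shared vertex v_n and the four children,
// restores the mesh and tree pointers, and makes T and T_B leaf triangles again.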

3.2.4 Shadow nodes

In order to use the mesh for effective raytracing (see [12] and [13] for a definition of raytracing), some extra calculations and concepts must be added. Performing raytracing on the mesh at the leaf level of the tree would make the raytracing too slow; there are simply too many leaf nodes for this to be effective. In order to make any raytracing effective in the landscape mesh, some extra information is stored in the nodes, and the concept of shadow nodes is added. A shadow node is a node that is used for raytracing. The shadow nodes are nodes higher up in the bin tree, some levels above the leaf level. These nodes represent the mesh at a coarser subdivision level, that is the mesh at a lower resolution, before its triangles were split down to the leaf level (the tree contains all the resolution levels of the mesh, all the way from the root triangle mesh at the top level of the tree to the final mesh that is rendered at the leaf level). This is the mesh that will be used for raytracing.

To decide at what level the shadow nodes are located, the shadow splittability value and a multiplier are used. All nodes with a shadow splittability higher than a certain value will have the extra pre-calculated values and be part of the shadow nodes. All nodes with a shadow splittability lower than this value will not have any pre-calculated values for raytracing; this saves memory. These nodes are not part of the shadow nodes, and no raytracing will be done on them.

The values stored in the shadow nodes are pre-calculated values that make the raytracing faster. The information stored is the four coefficients of the implicit plane equation of the plane that the node triangle is part of, and a bounding sphere (a sphere that fully encloses the triangle in space) that is big enough to include the node and all its child shadow nodes. The raytracing algorithm will be explained in 3.3.6.
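The sketch below shows one way these per-shadow-node values could be precomputed in C++: the four plane coefficients and a simple enclosing sphere. It is an illustration under assumptions (plain structs, an unnormalised plane normal, a centroid-based sphere that would still have to be grown to cover the child shadow nodes), not Terragen's implementation.

#include <algorithm>
#include <cmath>

struct Vec3 { double x, y, z; };

static Vec3   sub(const Vec3& a, const Vec3& b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static double dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3   cross(const Vec3& a, const Vec3& b)
{
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}

struct ShadowPrecalc {
    double plane[4];      // a, b, c, d with a*x + b*y + c*z + d = 0 on the triangle's plane
    Vec3   sphereCentre;
    double sphereRadius;
};

ShadowPrecalc precalcShadowNode(const Vec3& v0, const Vec3& v1, const Vec3& v2)
{
    ShadowPrecalc s{};
    const Vec3 n = cross(sub(v1, v0), sub(v2, v0));      // plane normal (unnormalised)
    s.plane[0] = n.x; s.plane[1] = n.y; s.plane[2] = n.z;
    s.plane[3] = -dot(n, v0);

    // Simple enclosing sphere: centred on the centroid, radius to the farthest vertex.
    s.sphereCentre = { (v0.x + v1.x + v2.x) / 3.0,
                       (v0.y + v1.y + v2.y) / 3.0,
                       (v0.z + v1.z + v2.z) / 3.0 };
    const Vec3 d0 = sub(v0, s.sphereCentre), d1 = sub(v1, s.sphereCentre), d2 = sub(v2, s.sphereCentre);
    s.sphereRadius = std::sqrt(std::max({dot(d0, d0), dot(d1, d1), dot(d2, d2)}));
    return s;
}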

3.2.5 Vertex blending

When a triangle is split, a new vertex is added to the mesh. This vertex will be in a different location than the edge it was created from, because the shaders will move the new vertex. It can also get a different colour. For every split, the resolution of the mesh is increased. When a new vertex is created and moved into place in a frame, that part of the image will get a different appearance than the image before the split (since the new vertex changed the structure of the surface when it was moved). It will look like the landscape is popping: small parts of it will suddenly move to a new position as the resolution of the mesh increases. This is an undesirable effect that must be removed. This is done by blending.

A MOAM geometry has two kinds of blending, both used on vertices: position blending and colour blending. In every frame, all vertices are given a blend value. This value is based on the splittability values of the two triangles that were split when the vertex was created (T and T_B). The blend value is defined as:

blend = splittability_min / blendcutoff,  0 <= splittability_min <= blendcutoff    (3.2)

where:

splittability_min = min(splittability_T, splittability_T_B)    (3.3)

In equation 3.2 blendcutoff is a variable that defines how fast the blend changes relative to splittability_min, and in equation 3.3 splittability_T and splittability_T_B are the splittabilities of T and T_B.

Position blending

In every frame all vertices are traversed and their blended position, called P_w, is calculated. This position is the one used when the triangles that share this vertex are drawn in the image. When a vertex is created it is moved to its final position by the shaders. We call this final position P. All vertices also have a blended position. When a new vertex is created, it is created on the center point of the edge that was split on T and T_B. This point is called P_start. P_w is then calculated as:

P_w = P + (P_start - P) * blend    (3.4)

Colour blending

Before a leaf node is given to the render loop to be drawn, its colour is blended with its parent's colours. This is calculated with the following simple linear interpolation scheme:

C_blend = (C_v - C_blendparent) * blend + C_blendparent    (3.5)

where:

C_blendparent = (C_blendTvx + C_blendTv(x+1)) / 2    (3.6)

In equation 3.5 C_blend is the blended colour for the vertex being blended and C_v is the colour of the vertex before it is blended. In equation 3.6 C_blendTvx and C_blendTv(x+1) are the blended colours of the two vertices on the edge of T where the vertex being blended was created. The colours C_blendTvx and C_blendTv(x+1) are calculated in the same way as C_blend.
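A short C++ sketch of equations 3.2 to 3.6 follows. The equations themselves are reconstructed from a garbled source, so the division in equation 3.2 and the clamping of the blend value are assumptions; the function and variable names (blendValue, blendedPosition, blendedColour) are illustrative only.

#include <algorithm>

struct Colour { double r, g, b; };

// Equations 3.2 and 3.3: per-vertex blend value from the splittabilities of the two
// triangles that were split when the vertex was created.
double blendValue(double splittabilityT, double splittabilityTB, double blendCutoff)
{
    const double sMin = std::min(splittabilityT, splittabilityTB);      // eq 3.3
    return std::clamp(sMin / blendCutoff, 0.0, 1.0);                    // eq 3.2
}

// Equation 3.4: blended position used for drawing, applied per coordinate component.
double blendedPosition(double P, double Pstart, double blend)
{
    return P + (Pstart - P) * blend;
}

// Equations 3.5 and 3.6: blend a vertex colour towards the average of the blended
// colours of the two vertices on the edge it was created from.
Colour blendedColour(const Colour& Cv, const Colour& Cedge0, const Colour& Cedge1, double blend)
{
    const Colour Cparent{ (Cedge0.r + Cedge1.r) * 0.5,
                          (Cedge0.g + Cedge1.g) * 0.5,
                          (Cedge0.b + Cedge1.b) * 0.5 };                // eq 3.6
    return { (Cv.r - Cparent.r) * blend + Cparent.r,
             (Cv.g - Cparent.g) * blend + Cparent.g,
             (Cv.b - Cparent.b) * blend + Cparent.b };                  // eq 3.5
}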

3.3 Planet

The bin trees are used in the MOAM geometries in Terragen to store the meshes used for the landscape. The most basic geometry in Terragen is the planet. All landscapes are whole planets. This means that it is possible to fly from space, viewing the whole planet, to any point on the surface of the planet in the same render sequence. It also means that the whole planet will be in memory at the same time in any render. The way this is made possible is by making sure that the backside of the planet, and anything not visible in the frame, is not subdivided beyond a very coarse level. Almost all of the memory used by a MOAM geometry will be used by the triangles close to the camera that will be visible in the rendered image. See figure 3.7 for a sketch of how the backside of the planet has a lower resolution.

Figure 3.7: Sketch of how the backside of the planet has a lower resolution.

3.3.1 Generating the planet

A planet is generated in four loops: the reset loop, the split loop, the merge loop, and the sort loop. In the first frame of a render pass the root triangles, which represent the coarsest level of the geometry, are created. For the planet geometry this is a box made of twelve root triangles. The root triangles are only created in the first frame of a render. After they have been created the normal generate algorithm starts. The generate algorithm is run for every new frame.

The first thing that happens in the generate algorithm is that all the triangles in the bin trees are traversed and every node's split and blend values are set to zero. This step is called the reset loop.

All triangles are then tested for their size and their splittability and blend values are set for the current frame. If a triangle is too big (its splittability is too high) and it is a leaf in the tree, it is split. In every split at least one new vertex and two new triangles are created, as defined in 3.2.2. Every new vertex created is sent to the shader tree. The shader tree will displace the point (moving it to a new position in the 3d world) and set its surface properties. This is the only time the point is sent to the shaders; once it is shaded it will not be shaded again. The new triangles created from a split are tested for size and given a splittability value. If they are too big they will be split. This continues recursively until all the leaf nodes in the mesh are fine enough to be drawn. This recursive splitting and shading is called the splitting loop.

The bin trees are then traversed again and all triangles that are too small (their splittability is too low) and have a parent node that is small enough to be drawn are removed from the tree. This is done with the merge algorithm explained in 3.2.3. This step is called the merge loop.

The nodes are then traversed a final time in the generate pass. Nodes at a level in the tree defined by a multiplier and the shadow splittability value are added to a list. The nodes in the list are then sorted, so that the triangles that are closest to the camera are put first in the list. This list is called the sorted

draw list and is later used to add triangles to be drawn to the rendering loop. This final sorting is called the sorting loop.

This results in a number of bin trees of triangles, each with a root triangle from the basic geometry as its tree root. All leaf nodes in the trees are exactly the right size to be drawn for the current frame. The sorted draw list, created in the sort loop, is now traversed. The triangles in this list are not leaf triangles, so every triangle has more triangles under it in the tree. The triangles under each sorted draw list triangle are traversed and all the leaf nodes are added to the drawing loop in the renderer and drawn in the image. All illumination and all atmosphere calculations are done in the drawing loop in the renderer. This means that all raytracing is started from there. When the drawing loop illuminates a triangle, a ray is created going from the triangle center to the sun. All planets and geometries are then asked if they block this ray. The raytracing for a MOAM geometry is described in 3.3.6. See figure 3.8 for an overview of the planet generation steps.

Figure 3.8: Overview of necessary steps to generate the planet.

3.3.2 Reset loop

The reset loop is very simple. It iterates through all the triangles and sets all vertices' blend values, the splittability and the shadow splittability to zero. The algorithm is described in Algorithm 3.

Algorithm 3 Node reset loop.
1: the triangle to be reset is T; T's left and right child nodes in the bin tree are T_0 and T_1
2: reset blend, splittability and shadow splittability on T
3: if T is not a leaf node then
4:   send T_0 and T_1 to the reset loop
5: end if

3.3.3 Split loop

The split loop is also very simple. In this loop the blend value, the splittability value and the shadow splittability value are set. This is also where the mesh is created and subdivided to the resolution that it will be drawn in. The split loop is described in Algorithm 4.

Algorithm 4 Tree split loop.
1: the triangle to be split is T; T's left and right child nodes in the bin tree are T_0 and T_1
2: set the blend value, splittability and shadow splittability on T
3: set the blend position on T based on its blend value
4: if T is a leaf node then
5:   if splittability > 1 then
6:     split the node
7:     if the split was successful then
8:       send the newly created T_0 and T_1 to the split loop
9:     end if
10:  end if
11: else
12:   send T_0 and T_1 to the split loop
13: end if

3.3.4 Merge loop

In the merge loop all triangles that are too small for the current frame are removed and replaced by their parents. Pre-calculations for raytracing are also made in this loop. The merge loop is described in Algorithm 5.

3.3.5 Sort loop

The sort loop is where the sorted draw list is created. The reason this is done is to save time when the triangles are drawn. If triangles that are closer to the camera are drawn before triangles that are farther away, all triangles that will be occluded by closer triangles can be detected before they are drawn. These triangles will not be illuminated or raytraced. What level in the bin tree the triangles that will be added to the sort list are at is decided by their shadow splittability and a value called sortdetail that can be set by the user. The sort loop is described in Algorithm 6.

Algorithm 5 Tree merge loop.
1: the triangle to merge its child nodes on is T; T's left and right child nodes in the bin tree are T_0 and T_1
2: if T is a root node or if T's parent's shadow splittability > 0 then
3:   create the raytracing pre-calculations for T
4: else
5:   remove the raytracing pre-calculations for T
6: end if
7: if T is not a leaf node then
8:   if T_0 and T_1 are not leaf nodes then
9:     send T_0 and T_1 to the merge loop
10:  end if
11:  if T_0 and T_1 could merge their child nodes (making them leaf nodes) then
12:    try to merge T_0 and T_1
13:  end if
14: end if

Algorithm 6 Node sort loop.
1: the triangle to be added to the sort list is T; T's left and right child nodes in the bin tree are T_0 and T_1
2: if T is a leaf node or if T's shadow splittability is <= sortdetail then
3:   if any leaf node in the tree under T will be drawn (i.e. will be visible in the final image) then
4:     add T to the sorted draw list
5:   end if
6: else
7:   send T_0 and T_1 to the sort loop
8: end if
9: sort the sorted draw list based on triangle distance to the scene camera
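To show how the four loops fit together in one per-frame generate pass, here is a condensed C++ sketch of Algorithms 3, 4 and 6 and the orchestration around them. The Node struct is a pared-down stand-in for the bin tree node of 3.2.1, and the stubbed functions (computeSplitValues, splitNode, mergeLoop, anyLeafVisible) are placeholders for equation 3.1, Algorithm 1, Algorithm 5 and the visibility test; none of this is Terragen's actual interface.

#include <algorithm>
#include <vector>

struct Node {
    Node* child[2] = {nullptr, nullptr};
    double splittability = 0, shadowSplittability = 0, blend = 0;
    double distanceToCamera = 0;
    bool isLeaf() const { return child[0] == nullptr; }
};

// Placeholder stubs for the pieces specified elsewhere in this chapter.
void computeSplitValues(Node&) {}                       // equation 3.1, blend value, blend position
bool splitNode(Node&) { return false; }                 // Algorithm 1: pair / forced split, shades v_n
void mergeLoop(Node&) {}                                // Algorithm 5: merges too-small nodes,
                                                        // keeps raytracing pre-calculations up to date
bool anyLeafVisible(const Node&) { return true; }       // view test for the sort loop

void resetLoop(Node& t)                                 // Algorithm 3
{
    t.blend = t.splittability = t.shadowSplittability = 0;
    if (!t.isLeaf()) { resetLoop(*t.child[0]); resetLoop(*t.child[1]); }
}

void splitLoop(Node& t)                                 // Algorithm 4
{
    computeSplitValues(t);
    if (t.isLeaf()) {
        if (t.splittability > 1 && splitNode(t)) { splitLoop(*t.child[0]); splitLoop(*t.child[1]); }
    } else {
        splitLoop(*t.child[0]);
        splitLoop(*t.child[1]);
    }
}

void sortLoop(Node& t, double sortDetail, std::vector<Node*>& drawList)   // Algorithm 6
{
    if (t.isLeaf() || t.shadowSplittability <= sortDetail) {
        if (anyLeafVisible(t)) drawList.push_back(&t);
    } else {
        sortLoop(*t.child[0], sortDetail, drawList);
        sortLoop(*t.child[1], sortDetail, drawList);
    }
}

void generateFrame(std::vector<Node*>& roots, double sortDetail, std::vector<Node*>& drawList)
{
    drawList.clear();
    for (Node* r : roots) resetLoop(*r);
    for (Node* r : roots) splitLoop(*r);                // new vertices are shaded here, and only here
    for (Node* r : roots) mergeLoop(*r);
    for (Node* r : roots) sortLoop(*r, sortDetail, drawList);
    std::sort(drawList.begin(), drawList.end(),
              [](const Node* a, const Node* b) { return a->distanceToCamera < b->distanceToCamera; });
    // The caller then walks drawList front to back and hands every leaf triangle
    // under each listed node to the normal Terragen drawing loop.
}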

3.3.6 Raytracing

When a triangle is given to the renderer to be drawn it will be illuminated. When this is done, the render loop creates a ray going from the triangle to each light source (the sun and any number of fill lights). All geometries in the scene are then asked if they block this ray. If any object blocks the ray, the triangle is in shadow. Any object in Terragen can cast a shadow onto any other object, or onto itself. This can be a very slow process in the landscape mesh, since it is made of so many triangles. Because of this, not all triangles in the mesh are used for the raytracing; only triangles down to the shadow node level in the tree are used. But the raytracing is still a fairly slow process. In order to speed up this process two techniques are used. The first one takes advantage of the tree structure to quickly terminate testing triangles in any sub tree that is too far away from the ray. The other one uses ray coherency to avoid raytracing all rays in the tree. In order to be able to have geometry that is outside of the frame (and therefore not subdivided to the resolution it should be for casting shadows) cast shadows into the frame, the raytracing uses the split algorithm to subdivide the mesh to a finer level, if necessary. The two techniques used to speed up the raytracing, and the full algorithm, are explained in more detail in the following sections.

Using the bin tree

All nodes in the mesh that are part of the shadow nodes (see 3.2.4) have pre-calculated values used in the raytracing. They also have a bounding sphere. This bounding sphere will completely surround the triangle and all shadow nodes below it in the tree. When a ray is going to be tested for intersections in the mesh, it is first tested against the bounding spheres of the root triangles. If they occlude the ray, their child nodes' bounding spheres are tested for intersections. This continues down in the bin tree. If a node's bounding sphere does not occlude the ray, the sub tree under that node is not tested for intersection, since nothing in that sub tree is outside the bounding sphere of the node that was just tested. This continues down in the tree until the bottom of the shadow nodes in the tree is reached. Any node at the bottom of the shadow node tree whose bounding sphere occludes the ray is then tested for intersection with the ray. As soon as any node that occludes the ray is found, the testing is terminated.
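The sphere test that drives this tree walk is a standard segment-against-sphere check; a self-contained C++ sketch is given below. The types and names are illustrative, and the ray is treated as the segment from the shaded triangle to the light, as described above.

#include <algorithm>

struct Vec3 { double x, y, z; };

static Vec3   sub(const Vec3& a, const Vec3& b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static double dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// True if the segment from rayStart to rayEnd (triangle centre to light) passes
// through the sphere; if it misses, the whole sub tree under that shadow node can
// be skipped.
bool segmentHitsSphere(const Vec3& rayStart, const Vec3& rayEnd,
                       const Vec3& centre, double radius)
{
    const Vec3 d = sub(rayEnd, rayStart);
    const Vec3 m = sub(centre, rayStart);
    const double len2 = dot(d, d);
    // Parameter of the closest point on the segment to the sphere centre, clamped to [0, 1].
    const double t = len2 > 0.0 ? std::clamp(dot(m, d) / len2, 0.0, 1.0) : 0.0;
    const Vec3 offset{ rayStart.x + d.x * t - centre.x,     // closest point minus centre
                       rayStart.y + d.y * t - centre.y,
                       rayStart.z + d.z * t - centre.z };
    return dot(offset, offset) <= radius * radius;
}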

Ray coherency

Most of the rays used for the raytracing are more or less parallel, since they have their origins very close to each other (triangles close to each other in the mesh) and their ends in the same point (a light). The triangles that are close to each other in the mesh will also be raytraced almost directly after each other (since triangles that are close to each other in the tree often are close to each other in the mesh, and the tree is traversed in a regular pattern when the triangles are drawn). Because of this, a technique for speeding up the raytracing significantly can be used.

The idea is that when the mesh is tested for intersections by a ray, all the triangles that are tested for intersection (not triangles just tested for bounding sphere intersection) are stored in a list. Together with the triangles, the start and end position of the ray that was used to find the triangles are stored. A flag that indicates if the ray that created the list hit anything is also stored with the list. A number of lists like this are stored. When a new ray is going to be tested for intersections in the mesh, the lists are searched first. If a list is found that was created by a ray with its origin close enough to the new ray's origin and its end close enough to the new ray's end, then the triangles in the list are tested for intersection instead of testing the whole bin tree. If the new ray gets a different result when testing the triangles in the list than the ray that created the list, the whole tree is tested. Otherwise the result from testing the triangles in the list is used. Using these lists increases the speed of the raytracing significantly.

Algorithm

The full intersection algorithm for a geometry is described in Algorithm 7, and the full bin tree intersection algorithm is described in Algorithm 8.

Algorithm 7 Intersect MOAM geometry algorithm.
1: the ray to be used for intersection is R
2: if a stored triangle list is found that was created by a ray with origin and end close enough to R's origin and end then
3:   test all triangles in that list for intersections
4:   if the result of the intersection tests in the list for R is the same as for the ray that created the list then
5:     use this result as the result of the intersect test
6:   else
7:     start the full bin tree intersection test
8:   end if
9: else
10:   start the full bin tree intersection test, and create a new list for this ray
11: end if
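A C++ sketch of this ray-coherency cache, following the structure of Algorithm 7, is shown below. The cache layout, the distance tolerance and the two stubbed intersection functions are assumptions made for the illustration; the real versions would test against the shadow-node tree described earlier.

#include <cstddef>
#include <vector>

struct Vec3 { double x, y, z; };
using TriangleId = std::size_t;

// Placeholders: the real versions test a triangle, or walk the whole shadow-node tree
// while recording which triangles were tested.
bool intersectTriangle(TriangleId, const Vec3&, const Vec3&) { return false; }
bool intersectFullTree(const Vec3&, const Vec3&, std::vector<TriangleId>&) { return false; }

struct CachedRay {
    Vec3 start, end;
    std::vector<TriangleId> tested;   // triangles the original ray was tested against
    bool hit = false;                 // result of the ray that created the list
};

static double dist2(const Vec3& a, const Vec3& b)
{
    const double dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return dx * dx + dy * dy + dz * dz;
}

bool shadowQuery(const Vec3& start, const Vec3& end,
                 std::vector<CachedRay>& cache, double tolerance)
{
    for (const CachedRay& c : cache) {
        if (dist2(c.start, start) > tolerance * tolerance ||
            dist2(c.end, end) > tolerance * tolerance)
            continue;                                     // this list is not close enough to reuse
        bool hit = false;
        for (TriangleId id : c.tested)
            if (intersectTriangle(id, start, end)) { hit = true; break; }
        if (hit == c.hit)
            return hit;                                   // same outcome as the cached ray: trust it
        std::vector<TriangleId> tested;                   // disagreement: fall back to the full tree
        return intersectFullTree(start, end, tested);
    }
    CachedRay fresh{start, end, {}, false};               // no usable list: full test, store a new list
    fresh.hit = intersectFullTree(start, end, fresh.tested);
    cache.push_back(fresh);
    return fresh.hit;
}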

Algorithm 8 Intersect bin tree algorithm.
1: the ray to be used for intersection is R and the triangle to be tested for intersection is T; T's child nodes are T_0 and T_1
2: if T's bounding sphere intersects the ray then
3:   if T is a leaf and it has shadow splittability > 0 then
4:     split the node with the splitting algorithm
5:   else
6:     if T is a leaf or it has shadow splittability = 0 then
7:       test T for intersection with R and return the result
8:     end if
9:   end if
10:   send T's left child node T_0 to the intersect loop
11:   if T_0 was hit then
12:     stop the intersection tests and return the result
13:   else
14:     send T's right child node T_1 to the intersect loop and return the result
15:   end if
16: end if

3.4 Bucket rendering

Bucket rendering is nothing new in renderers. It has been used for a long time for different reasons. Simply put, the image is rendered in steps: smaller parts of the image, called buckets, are rendered separately. The reason for this can be that it is more efficient or that it saves a lot of memory. Bucket rendering is fully supported in Terragen.

When rendering in Terragen with a MOAM geometry, the whole landscape mesh is saved internally in the bin trees. This takes up a lot of memory. Because of this it is impossible to render the whole image in one go. The bucket rendering mechanism must be used to keep the image size down; this also keeps the size of the mesh down, since it is dependent on the detail setting and the image size. The MOAM geometry works well with bucket rendering; the mesh can easily be adapted to the new bucket with the normal planet generation loop. The only problem is that the caching mechanism will not be effective at all with normal bucket rendering. All triangles visible in a new bucket will not have been visible in the previous one (since the buckets are images next to each other in the same frame). This means that none of the triangles from the previous bucket can be used in the new bucket. All triangles in the new bucket will have to be generated in the split loop, and all triangles from the previous bucket will be thrown away in the merge loop. In order to take advantage of both the caching of the MOAM geometry and rendering in buckets, a new bucket rendering scheme has been developed for MOAM geometries.

3.4.1 Bucket frame

The concept of a bucket frame is introduced. In order to take advantage of the caching, each bucket will render a number of frames before moving on to the next bucket. This way the cache works, since most of the triangles in the same bucket will be visible from frame to frame, as they would be if the image was not rendered with buckets. A bucket frame is defined as a specific bucket with a number of frames associated with it. Bucket frame 0 will then be bucket 0 in the image and x number of frames. What this means is that the first bucket in the image will have been rendered for x number of frames before the next bucket is rendered. When all buckets have rendered x number of frames, the first bucket is rendered with the next x number of frames, and so on. This is a new way of rendering with buckets that works very well with a MOAM geometry and takes full advantage of its cache mechanism.

3.4.2 Network rendering

The concept of a bucket frame is used when network rendering Terragen with a MOAM geometry. Each CPU (a processor in a computer in the render farm, where a render farm is a network of computers used for rendering) would normally be assigned to render all buckets in a single frame. This is the normal way of doing a network

render. With a MOAM geometry each CPU is instead assigned to a bucket frame. This means that a single CPU renders one bucket for x frames. Another advantage of this is that a single frame can be network rendered, since every bucket can be sent to a CPU. The images from the different buckets in a network render are then stitched together when the whole render is done.
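As an illustration of how such bucket-frame jobs could be laid out for a render farm, the C++ sketch below enumerates one job per bucket per block of x consecutive frames, so a single CPU keeps its cached mesh warm for the whole block. The Job struct, the function name and the enumeration order are assumptions for this sketch, not Terragen's actual farm interface.

#include <algorithm>
#include <vector>

struct Job {
    int bucket;       // which tile of the image
    int firstFrame;   // first frame of the block
    int lastFrame;    // last frame of the block (inclusive)
};

std::vector<Job> makeBucketFrameJobs(int bucketCount, int totalFrames, int framesPerBlock)
{
    std::vector<Job> jobs;
    for (int start = 0; start < totalFrames; start += framesPerBlock) {
        const int end = std::min(start + framesPerBlock, totalFrames) - 1;
        for (int b = 0; b < bucketCount; ++b)
            jobs.push_back({b, start, end});   // one CPU renders bucket b for frames start..end
    }
    return jobs;
}
// Each job can go to a different CPU, so a single frame can also be distributed
// (every bucket of that frame is its own job); the finished buckets are stitched
// together once the whole render is done.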

Chapter 4

Results

All result renders are shown only with the images from renders with a MOAM geometry. The images rendered with the normal Terragen planet are identical, and therefore not included.

4.1 Render speed

4.1.1 Single images

Example One

This example shows a one frame render with very few shaders. This is a situation where most of the advantages of using a MOAM geometry are not as effective. Frame caching has no effect on a single frame, and the prevention of multiple shader evaluations on the same point has very little effect due to the small number of shaders in the shader tree; evaluating few shaders is not time consuming. The image is made at a resolution of 720x306 pixels with detail set to 0.5. The normal Terragen render took 8 minutes and 48 seconds, and the same image with a MOAM geometry took 6 minutes and 26 seconds. The speed increase with the MOAM geometry in single images is due to the fact that it prevents multiple shader evaluations on the same point in space. See figure 4.1 for the rendered image.

Example Two

This example shows a one frame render with more shaders. As in example one, this is a situation where most of the advantages of using a MOAM geometry are not as effective. The effect of the prevention of multiple shader evaluation can be seen in this example too. The image is made at a resolution of 720x306 pixels with detail set to 0.5. The normal Terragen render took 31 minutes and 21 seconds, and the same image with a MOAM geometry took 27 minutes and 40 seconds. See figure 4.2 for the rendered image.

Figure 4.1: Example One, single image with few shaders.

Figure 4.2: Example Two, single image with many shaders.

Figure 4.3: Example Three, image sequence with some camera movement.

4.1.2 Sequences

Example Three

This example shows a ten frame sequence render with few shaders. This example takes advantage of the frame to frame caching. Since it is only ten frames and few shaders, the effect of the caching is not that obvious. These images are made at a resolution of 720x306 pixels with detail set to 0.5. Each bucket that is rendered with a MOAM geometry is rendered for 10 frames per bucket. The normal Terragen render for ten frames took 5 hours and 12 minutes, and the same render with a MOAM geometry took 2 hours and 5 minutes. See figure 4.3 for the rendered images.

Example Four

This example shows a 50 frame sequence render with many shaders and a fast moving camera. This example takes advantage of the frame to frame caching. The fast camera motion reduces the number of cached vertices that can be reused from the previous frame, but the effect of the caching is still significant. These images are made at a resolution of 720x306 pixels with detail set to 0.5. Each bucket that is rendered with a MOAM geometry is rendered for 30 frames per bucket. The normal Terragen render for 50 frames took 34 hours and 23 minutes, and the same render with a MOAM geometry took 8 hours and 23 minutes. In frames after the first, the MOAM geometry render spent 2 minutes and 50 seconds on shader evaluation per frame; the rest of the time is spent on atmosphere, shadow raytracing and triangle drawing. The normal renderer spent about 34 minutes per frame on shader evaluations. See figure 4.4 for the rendered images.

4.2 Holes

Example Five

This example is rendered with a z-depth shader. This means that the surface will be lighter the farther away it is from the camera. Everything that is not the planet will be black (such as the sky). The top image is rendered with the normal Terragen renderer and the bottom image is rendered with a MOAM geometry. All black dots in the upper image are holes in the mesh. The images are both rendered at the same detail setting. The displacement is applied by a fractal shader.

Figure 4.4: Example Four, image sequence with a lot of camera movement and roll.

Figure 4.5: Example Five, an image rendered with a depth shader. All black dots are holes in the mesh. The top image is rendered with the old Terragen renderer and the lower image is rendered with a MOAM geometry.
