Volumetric Methods for Indirect Illumination

Volumetric Methods for Indirect Illumination

Jakob Udsholt and Christian Thode Larsen

Kongens Lyngby 2008
IMM-B.Sc

Technical University of Denmark
Informatics and Mathematical Modelling
Building 321, DK-2800 Kongens Lyngby, Denmark

Summary

This thesis describes a volumetric approach to indirect illumination of a scene in computer graphics. The method computes light in an environment which has been modeled as a volumetric grid, and the goal is to get visual results comparable to those of methods like radiosity, but with a runtime that is closer to that of real-time computer graphics techniques. Originally described by Rune Vendler, the method is implemented, analyzed and compared to existing indirect illumination methods, and then extended with functionality which should yield more visually realistic results. Finally, a possible approach to a shader implementation of the method is described, and further extensions are proposed.


Acknowledgements

This project was done under the supervision of Jakob Andreas Baerentzen and Bent Dalgaard Larsen, both of whom provided valuable input and guidance during the entire course of the project. Rune Vendler described the method that we have implemented and extended on the Computer Graphics Visionday at DTU 2007, and without his brilliant presentation, we would never have been inspired to do a project like this.


Contents

Summary
Acknowledgements
1 Introduction
  1.1 Motivation
  1.2 Hypothesis
2 Theory and concepts
  2.1 Global Illumination
  2.2 Rendering Equation
  2.3 Light Intensity
  2.4 Radiosity
  2.5 Irradiance Volumes
  2.6 Volume Graphics

3 Dynamic Irradiance Volumes
  3.1 Concept
  3.2 Design
  3.3 Implementation
4 Cell visibility
  4.1 Problem
  4.2 Constraints
  4.3 Solution
5 GPU Implementation
  5.1 Motivation
  5.2 Implementation
Results
Discussion
Further work
Conclusion
Bibliography

Chapter 1
Introduction

1.1 Motivation

The goal of global illumination in computer graphics is to create a realistic approximation of the way light behaves in reality. In most real-time applications, applications where changes to the scenery are visible immediately, light is calculated with a local light model, where only the interaction between light source and object is taken into account. In a global light model, light interaction between objects is also calculated, but unfortunately this is a costly process. Techniques exist to approximate this form of light interaction, radiosity and ray tracing being some of the more prominent approaches [1]. These techniques produce images with a quality that is by far superior to that of local light models, but are not suitable in real-time applications due to their calculation cost. Naturally, visual realism in real-time applications is desirable in many cases, computer games in particular. Thus, with a goal of achieving real-time rendering and realistic lighting, developing and experimenting with new lighting techniques becomes viable. As previously mentioned, neither ray tracing nor radiosity in their mathematically correct implementations are suited for real-time applications.(1) This justifies examination and experimentation with alternative techniques, irradiance volumes being one option.

(1) Variations can be made which improve running time at the expense of image quality.

Conceptually, this means to model the environment as a volume which contains irradiance samples, which are then used to compute dynamic object illumination [3]. Unfortunately, this technique does not handle dynamic lighting well, because the irradiance samples are computed as a pre-process before environment visualization. At the Graphics Visionday 2007 at DTU, Rune Vendler proposed an approach to dynamic irradiance volumes, where an environment is modeled as a volumetric grid, with light that is distributed from the light source to the grid cells based on a blur kernel. The kernel calculates the irradiance at a given cell by sampling the neighbor cell irradiance [5]. This method results in light that is calculated and converges during runtime, as opposed to preprocessed light calculations, and it is more tolerant in regards to dynamic lights and objects. The technique yields visually appealing results, and this makes the prospects of improving the method interesting. Finally, computation effectiveness and speed optimizations are worthwhile goals for any application, so an attempt at achieving these is a motivation in itself.

Figure 1.1: Dynamic Irradiance Volumes

1.2 Hypothesis

The dynamic irradiance volume method, proposed as an alternative to real-time global illumination approximations, produces arguably visually appealing results, as seen in [5]. We describe, implement and discuss this method, and compare it to the global illumination technique radiosity, with the expectation of comparable visual results, but much faster computations.

While the dynamic irradiance method produces indirect illumination, it does so without any consideration of the directional flow of light. This means light spreads through the environment freely, much like a fluid. The indirect illumination as well as the fluid-like light diffusion are both consequences of the way the dynamic irradiance volume blur kernel works. Light sampling occurs in cells which in reality would be occluded by the geometry, and thus would only receive illumination by light reflected from other objects. As a result, the geometry of these cells will receive an incorrectly high amount of light. From this follows a wrong light distribution throughout cells that sample from cells containing these high light intensities. Consequently, too much light is distributed throughout the environment, and the indirect illumination produces overlit results.

The solution to this could possibly be to include directional information about the flow of light in the method. This has proven to be a very difficult, arguably impossible, task. For this reason, we propose calculation and storage of cell visibility as a possible solution to this problem. By letting cell visibility affect how the light is sampled, it should be possible to determine the areas that would only receive reflected light, and thus possible to reduce the amount of light that is wrongly sampled in these occluded cells.

As mentioned in section 1.1, we want to approximate global illumination in a real-time program. For this reason, it is worthwhile to look at possible improvements to the method which will affect computation time. We suggest a GPU implementation, which should improve computation speed by several multiples.


Chapter 2
Theory and concepts

2.1 Global Illumination

Global illumination conceptually means to represent how light behaves in a global context. That is, not only are light interactions between light sources and objects taken into account, but also interactions between objects, which produces indirect illumination. This is opposed to local light models, where only the light interaction between light source and object is calculated [1]. Global illumination techniques generally try to approximate the rendering equation, described in more detail in section 2.2. The purpose of global illumination is to create a visually realistic light simulation, and as a product of that, create realistic images. Different techniques exist, where some handle diffuse light interaction and others specular, in particular radiosity (diffuse interaction) and ray tracing (specular interaction). Common to these techniques is that they cannot handle both types of light interaction, and that they are quite costly in computation time (as long as the light models are implemented in their mathematically correct definitions) [1].

As mentioned in section 1.1, it is viable to look at alternative techniques that use rougher but faster approximations, such as the dynamic irradiance volume method. It is loosely based on regular irradiance volumes, which rely on radiosity for precalculations of object illumination [4].

The dynamic irradiance volume method essentially illuminates scenery by performing light diffusion through a volume, and for this reason radiosity will be used to compare and discuss the visual results achieved. Radiosity is further described in section 2.4.

2.2 Rendering Equation

The rendering equation was proposed by Jim Kajiya in 1986 [1]. It is defined as:

    I(x, x') = g(x, x') \left[ \epsilon(x, x') + \int_S \rho(x, x', x'') \, I(x', x'') \, dx'' \right]

The term I(x, x') describes the light transfer from x' to x. g(x, x') represents the visibility between the two points, and is described by the inverse square distance equation, which is elaborated in section 2.3. In the case of no visibility, this factor is 0. ε(x, x') is the light emitted from x' to x, and ρ(x, x', x'') is the energy reflected towards point x from x', having originally arrived from point x'' [1].

The purpose of the rendering equation is to describe true light behavior for any given point on a surface. The equation is the foundation for most global illumination techniques, all of which try to solve it in different ways. The equation is recursive, and because of this, it is necessary to determine some threshold for the calculations to avoid an infinite loop. This is typically based on the given technique, and factors like time, depth of light reflections, or the amount of light exchanged.

The rendering equation is often written in a different form, the radiance equation, because this is a more useful definition for global illumination techniques [1]:

    L_o(x, \omega_{ref}) = L_e(x, \omega_{ref}) + \int_\Omega \rho(x, \omega_{in}, \omega_{out}) \, L_i(x, \omega_{in}) \, (\omega_{in} \cdot n) \, d\omega_{in}

The equation expresses the outgoing radiance L_o(x, ω_ref) in direction ω_ref for any given point x on a surface. The emitted radiance L_e(x, ω_ref) is summed with the reflected light ρ(x, ω_in, ω_out) for all ingoing light directions L_i(x, ω_in), multiplied by an attenuation factor, the dot product ω_in · n. This is also the theoretical basis for the radiosity technique, which is basically a finite element approach to solving the equation. Radiosity is conceptually described in section 2.4.
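Looking ahead to the radiosity sections, it is worth noting what the radiance equation reduces to for a perfectly diffuse (Lambertian) surface, where the reflectance is a direction-independent constant ρ_d/π. This standard simplification is our own addition for reference, not taken from the thesis:

    L_o(x) = L_e(x) + \frac{\rho_d}{\pi} \int_\Omega L_i(x, \omega_{in}) \, (\omega_{in} \cdot n) \, d\omega_{in}

Because the outgoing radiance no longer depends on the viewing direction, light exchange can be described per surface patch instead of per direction, which is exactly the assumption radiosity builds on.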

Intuitively, the radiance equation tells us that light is distributed into the scene from the light sources (the L_e term), and is then exchanged between the elements as the calculations progress. This means that the total amount of light in the environment is fixed to the amount of light emitted from the light sources, and that light continues to converge in the environment until the calculations are terminated. This is an important observation for later comparisons, because dynamic irradiance volumes deviate from this by not keeping a constant amount of light in the environment, but instead having an upper bound on the total amount of light, described in further detail in section 3.2.1.

2.3 Light Intensity

We generally experience that the light intensity at any given point on a surface is reduced the further away the surface point is from the light source. This is due to the fact that light, or more accurately energy, emitted from a point light source is equally distributed in all directions from the point light source, outwards in a sphere. This means that the further away the surface point is from the light source, the bigger the sphere becomes, and less energy is distributed to each point on the area of the sphere. Light intensity based on distance can then be defined as the source intensity multiplied by the inverse of the sphere surface area:

    I_{point} = I_{source} \cdot \frac{1}{4\pi r^2}

which can then be reduced to the simpler proportional relationship, telling us that light intensity is inversely proportional to the squared distance between light source and point:

    I_{point} \propto I_{source} \cdot \frac{1}{d^2}

There is an important note about this relation, namely that the environment has preservation of light (or energy). In other words, no light is lost during the transfer over distance; it is simply spread out, resulting in a lower intensity due to the increased area. As a result of this, light intensity can be estimated at all surface points visible from the light source, and their light intensities summed together equal the intensity of the light source.
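A quick worked example of the inverse square relation (our own illustration, with arbitrary numbers): doubling the distance from the light source quarters the received intensity, since

    \frac{I_{point}(2d)}{I_{point}(d)} = \frac{1/(4\pi (2d)^2)}{1/(4\pi d^2)} = \frac{1}{4}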

The relation between distance and intensity is used in the determination of form factors for radiosity patches, further described in section 2.4. Other relations are commonly used to diminish light as a function of distance, typically an inverse linear or exponential relation, but these do not maintain light equilibrium, and will not be described further. As mentioned in section 2.2, dynamic irradiance volumes do not preserve a constant amount of light or energy, but increase it up to an upper bound. This makes the intensity/distance relation relevant, because of the fundamentally different approximation to light transfer, and it thus illustrates major differences between existing global illumination techniques and the dynamic irradiance volume method. This is further elaborated in section 3.2.1.

2.4 Radiosity

As mentioned in section 2.2, radiosity is a finite element approach to solving the rendering equation. We recall that the rendering equation expresses the light exchange between infinitely many points on all surfaces in an environment. This is simplified and made finite by splitting all surfaces into patches, and then solving the equation for the patches instead of points. First, form factors between all patches are calculated, which represent the geometric relationships between the patches, based on visibility, how the patches are oriented, and the inverse squared distance relationship mentioned in section 2.3. It is important to note that the form factors have to be recalculated whenever the geometry moves, which is the case for dynamic objects. This is the major reason why dynamic objects and radiosity do not work well together: the form factors are simply too costly to recalculate between all patches for each update. This is also the reason why the dynamic irradiance method performs better, as it does not use this kind of factor to estimate the light. This will be covered in chapter 6. The patches then exchange energy based on an NxN matrix, where N is the number of patches. Several radiosity implementations exist, and while it is out of scope to describe radiosity in detail, the approach called progressive radiosity, or shooting, is worth mentioning, as it gradually increases lighting for all patches by continuously distributing the light from the patches with the highest intensity. This makes it possible to get a visual confirmation of image quality rather early in the iterations of light exchange. This is useful, because it may take a considerable amount of time to reach light convergence, depending on the number of patches [1].
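The finite element formulation can be summarized by the classical discrete radiosity equation (standard notation, added here for reference and not taken from the thesis), where B_i is the radiosity of patch i, E_i its emission, ρ_i its reflectivity, and F_ij the form factor between patches i and j:

    B_i = E_i + \rho_i \sum_{j=1}^{N} F_{ij} B_j

Solving for all B_i simultaneously is what gives rise to the NxN matrix mentioned above; progressive radiosity instead iterates towards the solution by repeatedly shooting the brightest unshot energy.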

Figure 2.1: Radiosity, rectangles highlight shadows

Radiosity has the ability to produce a very accurate diffuse light interaction between objects. The technique produces shadows because of the visibility calculations (the form factors), seen in figure 2.1, and it allows color bleeding (color from one object that bleeds onto another object), seen in figure 2.2. The technique is very slow though, and is not suited for real-time applications [1]. However, it is a very good way of creating a precalculated radiance data set. This data set may then be used for light maps, or, in the context of this thesis, for the creation of an irradiance volume [4]. Also, because radiosity and the dynamic irradiance volume method both handle diffuse light interaction, progressive radiosity will be used to compare the visual results between the two techniques.

Figure 2.2: Color bleeding

2.5 Irradiance Volumes

Irradiance volumes are one of the fundamental concepts upon which the dynamic irradiance volume method is built. For this reason, a brief and conceptual description of irradiance volume theory is provided. Irradiance is defined as the light energy density, measured in watts per square meter, that hits a given surface, without directional dependencies. An irradiance volume contains irradiance samples, and is a way to utilize these based on a spatial data structure partitioning scheme. The irradiance samples are precomputed, and are then available during runtime for the illumination calculations [3].

Distribution functions are used to calculate irradiance samples at any given point in space, based on surface irradiance precalculated with a technique like radiosity. These sample points provide an approximation of the light intensity at that given point. Based on the sampling points, the light intensity for any point in space can then be approximated by interpolation, and geometry can be illuminated accordingly [3]. The overall purpose of this technique is to get a map of light intensities based on the global illumination of a given scenery, much like the way ambient light maps work. The difference is that the samples can be used to illuminate dynamic objects instead of using a local light model [3]. An illustration of irradiance volumes, borrowed from [3], can be seen in figure 2.3.

While this makes it possible to approximate some degree of global illumination of a dynamic object during runtime, it is impossible to adjust the illumination without having to recompute the irradiance samples. In other words, the technique is able to handle dynamic objects correctly, but the light itself is static and cannot be changed. As a consequence, the technique is not able to handle dynamic light sources, that is, light sources that roam the scene or change intensity and/or color. The dynamic irradiance volume method solves this, although at the cost of accuracy. This will be elaborated in section 3.1.

2.6 Volume Graphics

To work with dynamic irradiance volumes, the world has to be represented in a grid. This means everything will be partitioned into cells, normally cubic, and thus some way to represent traditional meshes in the grid is needed. Volume graphics is a logical approach to solving this.

Figure 2.3: An irradiance volume

The term volume graphics means to represent a model in a volume representation, and is an integral part of dynamic irradiance volumes. One way to represent a volume is by partitioning it into volume pixels (which from now on will be referred to as voxels). A voxel is simply a pixel represented in three dimensions, thus defining a volumetric unit. Various methods can be used to create a voxel representation of an object, depending on how the original model is represented. One is to use several planes, or layers, acquired by scanning the model. These planes are then used to identify the voxels between each plane, and will result in a voxel based representation [1]. Another method, proposed by Jakob Andreas Baerentzen and Henrik Aanaes, is to use signed distance computations using angle weighted pseudo normals to determine whether a point is inside or outside a mesh. This can then be used to identify whether a voxel overlaps part of the model. The model would have to be closed to ensure a correct inside/outside evaluation [6]. Yet another approach, and the one we chose, is to identify the hull of a mesh represented object (intersection tests between the triangles and the voxels), which results in a voxel model that is hollow. Filling algorithms can then be used to mark the voxels in the hollow space, under the assumption that the model is closed (if not, the algorithm would fill the entire volume). For our purposes, it is not necessary to fill the model. This is further elaborated in section 3.2.3.


Chapter 3
Dynamic Irradiance Volumes

3.1 Concept

Before describing the dynamic irradiance volume method in detail, we provide a brief description of a concept implementation that shows how the method distributes light into the scenery. This should give a good idea of how the technique works, and at the same time illustrate the things the technique handles well, and those it does not. We begin with a demonstration of the technique in two dimensions, and then extend it to three (which is fairly trivial). Conceptually, we need two things for a simple implementation: some way of representing an environment in a grid, and a blur kernel which performs the light diffusion from the light source throughout the environment in every display loop. Basically, the grid is a two dimensional array containing cells, which in turn contain a light value and a boolean that describes whether the cell is solid or not. The blur kernel iterates over all cells in the grid, and updates the current cell light by sampling the upper, lower, left and right cell light values. For each vertex, illumination can then be approximated by interpolation of the closest light values. Kernel and grid will be described in detail in section 3.2.1 and section 3.2.2; a minimal code sketch of the two dimensional concept is given below.
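As a rough illustration of the concept just described, here is a minimal C++ sketch of the 2D grid and four-neighbor blur kernel. The names (Cell, Grid, blurStep) are our own, not from the thesis, and light is a single channel here for brevity (RGB in practice); the two-grid read/write split anticipates the double buffering discussed in section 3.2.2:

    #include <vector>

    // One cell of the 2D concept grid: a light value and a solid flag.
    struct Cell {
        float light = 0.0f;
        bool  solid = false;
    };

    struct Grid {
        int w, h;
        std::vector<Cell> cells;
        Grid(int w, int h) : w(w), h(h), cells(w * h) {}
        Cell&       at(int x, int y)       { return cells[y * w + x]; }
        const Cell& at(int x, int y) const { return cells[y * w + x]; }
    };

    // One blur iteration: every cell becomes the average of its four
    // neighbors (solids contribute zero light). Reads come from 'src'
    // and writes go to 'dst', so all samples are from the last frame.
    void blurStep(const Grid& src, Grid& dst) {
        const int dx[4] = { 1, -1, 0, 0 };
        const int dy[4] = { 0, 0, 1, -1 };
        for (int y = 0; y < src.h; ++y)
            for (int x = 0; x < src.w; ++x) {
                float sum = 0.0f;
                for (int i = 0; i < 4; ++i) {
                    int nx = x + dx[i], ny = y + dy[i];
                    if (nx < 0 || ny < 0 || nx >= src.w || ny >= src.h)
                        continue; // border cells simply sample fewer neighbors
                    const Cell& n = src.at(nx, ny);
                    if (!n.solid) sum += n.light;
                }
                dst.at(x, y).light = sum / 4.0f;
                dst.at(x, y).solid = src.at(x, y).solid;
            }
    }

Light source cells would simply be re-stamped with their intensity before each blur step, which is what makes the light spread outwards frame by frame.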

Figure 3.1: 2D Concept Demo, three light sources, no dynamic light

A two dimensional demonstration of the technique can be seen in figure 3.1. It is clear how the light has spread from the light sources, and is blended together where light from different sources collides. Light blending occurs as a direct consequence of the blur kernel sampling light without differentiating between lights or colors, and is one of the advantages of the technique. The blurring of light produces the approximation of indirect illumination in areas which normally would only receive reflected light from other objects, and not from the light sources. This also results in one of the biggest visual artifacts, namely that too much light is blurred into the areas which should only be indirectly illuminated. This problem will be subject to further work in order to get a more correct diffusion of indirect light, which is covered in chapter 4. Sampling in this manner has one added benefit, namely that the technique does not grow in computation cost as light sources are added. It only depends on the number of cells in the environment. Figure 3.2 illustrates the same situation as figure 3.1, but with a dynamic light roaming the two dimensional grid. Because the computation cost of the dynamic irradiance volume method scales with the number of cells and not the number of light sources, there is no difference in cost when introducing the dynamic light.

Figure 3.2: 2D Concept Demo, three light sources, a dynamic light roams the scene

It is visually clear how light is blurred through the cells, how the colors of static and dynamic light are blended together to provide a final color in each cell, and how the light blurs around corners. The effect of the light moving around would be far better illustrated by running a demonstration program, but it should be intuitively clear that the dynamic light source is able to roam, is distributed into the environment as it moves around, and gradually disappears when it moves to another area. This tolerance towards light that moves is one of the advantages of the dynamic irradiance volume method compared to normal irradiance volumes. The technique would be similarly tolerant towards dynamic objects, which are a major weakness of radiosity due to the form factor recalculations mentioned in section 2.4. In figure 3.3 the demonstration has been extended to three dimensions. This is a straightforward extension, and requires the blur kernel to sample the six cells up, down, left, right, forward and backward (as a minimum). Other than that, the technique remains the same. The same visual qualities as well as problems should be noticeable here, especially how the green light is able to flow around the corner. To summarize, the dynamic irradiance method is very tolerant towards changes in the environment, as opposed to irradiance volumes and radiosity. We have focused on dynamic lights only in this chapter, but a more in-depth discussion about dynamic objects and radiosity will be covered in chapter 6.

Figure 3.3: 3D Concept Demo, three light sources, a dynamic light roams the scene

3.2 Design

This section contains detailed descriptions of the various elements that constitute a dynamic irradiance volume. Some handle the actual mechanics of the technique (the blur kernel and grid being two of them), while others are not strictly necessary to implement the technique (voxelisation of models, for example). The latter are nevertheless relevant, because the foundation for a qualitative comparison of techniques like radiosity and dynamic irradiance volumes is to be able to use the same test environments in both cases. Following the individual element descriptions, a full implementation of the technique will be reviewed in regards to visual results, computation efficiency and possible extensions. This will provide a natural basis for describing the pursued extensions in detail, namely the inclusion of cell visibility and implementing the technique on the GPU.

3.2.1 Blur Kernel

The blur kernel defines the way that light is sampled in the cell of interest. The kernel can have various shapes, based on what kind of sampling is desired. As mentioned in section 3.1, the kernel samples surrounding cell light values, then stores an averaged result in the cell being processed, as seen in figure 3.4 and figure 3.5. This has several implications, namely how various parameters in the sampling and averaging process will produce different results in the stored light value. Relevant parameters to consider are: whether the cells being sampled from are solid or not (do we want to average the sampled light values based on the number of cells sampled, or only those that actually contain light); do we simply sum the light contributions together and average over the number of sampled cells; do we weigh the cells differently based on some kind of factor; and do we use diagonal cells when sampling, and if so, how do we average the light?

Figure 3.4: Simplest possible 2D blur kernel, the center cell accumulates 1/4 of the light from each sample cell

Figure 3.5: Simplest possible 3D blur kernel, the center cell accumulates 1/6 of the light from each sample cell

Differences between kernel designs and their visual results will be illustrated in section 3.3. As mentioned in section 3.1, this blur mechanic is also the reason that the dynamic irradiance volume method produces indirect illumination, although in a theoretically incorrect manner. Cell visibility, covered in chapter 4, is an attempt to correct this.

An important point in regards to these considerations is that manipulating the mentioned parameters will affect how the light is distributed. Ultimately, the kernel has to control the amount of light diffused, based on certain parameters (normally by averaging the sum of the surrounding cell light values). If the light values are not reduced, they continue to increase, which results in errors like overlighting and/or numerical errors due to overflow. This can be seen in figure 3.6. Depending on how the kernel performs the reduction/averaging of the light values, it is important to note that light might have to be represented in high dynamic range (from here on referred to by its abbreviation, HDR), to make sure that light survives the light diffusion [5]. It is important to note that when we speak of HDR light sources, we simply mean representing them with intensities higher than 1, with the added benefit that more precision in light values is achieved. We do not, at any point, work with the full HDR technique or tonemapping, neither of which are part of the scope of this thesis.

Figure 3.6: Scene overlighting, light values are not limited by an upper bound, and explode due to numerical errors

An example is a standard blur kernel, where light is calculated as follows (n is the number of surrounding cells sampled):

    L_{current} = \frac{1}{n} \sum_{i=1}^{n} L_i

This results in light values with an upper bound of L_source. It should be intuitively clear that light values sampled in the surroundings of cells that do not contribute light, solids in particular, will not be able to reach the upper bound. This is illustrated in figure 3.7, where the light sources all have intensity values in the range [0..1]. In particularly geometry-heavy models, this may result in light values that are reduced to zero within a few cells' distance of the light source, as can be seen in figure 3.8. The difference between the two illustrations should be clear. By introducing HDR light sources, the light is able to flow further, and the intensity of the light source then becomes a parameter that controls how far light is able to flow, as seen in figure 3.9.

Recalling the key factors about light behavior in regards to global illumination techniques from section 2.2 and section 2.3, namely the relationship between light intensity and distance as well as energy preservation, it is clear that a standard blur kernel behaves rather differently from global illumination techniques. The total amount of light spanning all cells does not remain constant. Instead, light intensity continues to increase in each cell, as light is blurred back and forth between cells while the iterations over all cells progress.

Figure 3.7: Illustration of the method with a basic blur kernel. The light sources have intensities in the range of [0..1]

Figure 3.8: Another basic blur kernel illustration, but with more geometry. The geometry heavily reduces the light diffusion.

Figure 3.9: Introducing HDR light sources, resulting in a better light flow

This continues until the upper bound for a cell, which depends on kernel design, is reached. In other words, the restraint on light intensity becomes the geometry and the way the blur kernel samples the surrounding cells, not the distance between cell and light source. This is obviously wrong compared to true light behavior, and while the visual results look fairly good, it is a rather rough way of calculating illumination. A different way to control how far light is able to flow from the light source is to introduce a light preservation factor, while keeping solid cells with a light contribution of zero out of the calculation (n now counts non-solid cells only, and f_p is a scalar representing the preservation factor in the range [0..1]):

    L_{current} = \left( \frac{1}{n} \sum_{i=1}^{n} L_i \right) f_p

An example can be seen in figure 3.10. Because any given light value is the average of the surrounding cell values, and each of these has been multiplied with the preservation factor as well, it is necessary to keep the factor fairly close to 1 to avoid light dying off too quickly.
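To get a feel for how sensitive the diffusion is to this parameter, consider a rough estimate of our own (ignoring the averaging and geometry): each blur iteration scales a cell's light by f_p, so the light has dropped to half after k iterations, where

    f_p^k = \tfrac{1}{2} \quad \Rightarrow \quad k = \frac{\ln(1/2)}{\ln f_p}

For f_p = 0.9 this gives roughly 7 iterations, while f_p = 0.99 gives roughly 69, which is why the factor has to stay close to 1 for light to travel a useful number of cells.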

Figure 3.10: Blurring the light while disregarding solids for sampling. The preservation factor ensures that the light is gradually reduced

It should be noted that the preservation factor is easily combined with light sources that have HDR intensities; the preservation factor simply has to be adjusted correspondingly. In situations where numerical inaccuracies may result in loss of light, it may even be necessary to use HDR light sources. An intuitive thought one might have, following the introduced preservation factor, is that it may be possible to remedy the lost accuracy in light intensity by connecting preservation with distance. While it most certainly is possible to express a relationship between cell distance and intensities in a general sense (without considering how the blur kernel actually works), it is important to remember one of the key attributes of the technique, namely that it does not depend on the number of light sources. By simply blending the light from the light sources as the light is spread through the environment, a computation cost that scales with the number of light sources is avoided. But this also limits the parameters available when blurring the light. To evaluate light intensity in a cell based on distance to a light source, the following implications would have to be evaluated as well: Which light source are we evaluating the distance to, and how should visibility of cells be handled in regards to light equilibrium, recalling from sections 2.3 and 2.4 that the inverse square law rests on a geometrical observation about surface point visibility?

How should the visibility be handled for multiple light sources, and what should be done when cells have light intensities from multiple light sources blended? Ultimately, the inverse square law and light equilibrium are factors that depend on a global evaluation of light flow and geometry. Considerations like these are fundamental for the results achieved with global illumination techniques, but they are also the reason that these techniques are so costly. This introduces two major problems when trying to achieve similar results with dynamic irradiance volumes. The first problem is that the blur kernel only evaluates local data, and does not, at any point, evaluate global factors like visibility and distance. The second problem is actually implied by the first, because of the lack of global information when performing blur kernel sampling. While it is possible to increase the amount of information included in the blur kernel cell sampling, it is not possible to do so in real time without increasing the computation cost. Any considerations about geometry, light sources, the positions of the light sources, the visibility between light sources and geometry, and the distances between them, would introduce computation costs that scale with the number of light sources and/or the amount of geometry. This would invalidate one of the main points of the technique: that the computation cost is linear in the number of cells only. This point is important to maintain, because it becomes difficult to run dynamic irradiance volumes in real time as the number of cells grows, which is mentioned in section 3.3. These considerations are heavily intertwined with extending the method with information about cell visibility, and the problems, limitations as well as possible workarounds will be further elaborated and illustrated in chapter 4.

3.2.2 Environment Grid

A grid is basically a two or three dimensional array, which stores a number of objects equal to the grid size, of a type that the grid has been designed to contain (or, by using templates, whatever data structure is needed based on implementation and purpose). Objects in a grid are accessed by their index value, which lies in the range [0..n] for each dimension. It is an advantage to represent the grid in a local coordinate system, where each positive axis defines the index values for each dimension (values below zero, or above the size of the grid in the x, y or z direction, should be handled accordingly). A transformation matrix may then be defined which translates the grid from local to world coordinates. Quite conveniently, the inverse of this matrix may be used to bring geometry from world coordinates into grid coordinates, and the appropriate cells are then easily accessed and modified as needed. Another important aspect of this matrix is scaling of the grid. By defining the scaling parameters in the transformation matrix, it is possible to increase or reduce the grid size, which becomes useful if you need to adjust how much detail you want to encapsulate in the grid. The use of this is described in more detail in section 3.2.3. Finally, it should be noted that it is also possible to rotate the grid. This is simply a possibility should some implementation require grids which are not aligned with the world's x, y and z coordinate axes.

For the purpose of dynamic irradiance volumes, the grid, in both its two and three dimensional versions, partitions the environment into cells, which contain the light values and geometry information. The cells are later extended to contain information about cell visibility, explained in detail in chapter 4. All cells in the grid are then evaluated in every display loop, and the values are updated according to the kernel design. Because the kernel iterates over all cells, and updates their values one cell at a time, it is necessary to implement the grid as a double buffer. If not implemented as such, the light diffusion will produce incorrect light sampling. One buffer contains the light values from the last iteration, which are used by the kernel to update the light values of the other buffer. The buffers are then swapped after each iteration; a sketch of this scheme follows below.
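A minimal C++ sketch of the double buffering just described, reusing the hypothetical Grid and blurStep from section 3.1 (the names and the std::swap approach are our own illustration, not the thesis implementation):

    #include <utility>

    // Holds two grids and alternates between them: the kernel always
    // reads the previous frame's values and writes the next frame's.
    struct DoubleBufferedGrid {
        Grid front, back;   // 'front' holds the most recent light values

        DoubleBufferedGrid(int w, int h) : front(w, h), back(w, h) {}

        // One display-loop update: blur front into back, then swap.
        void update() {
            blurStep(front, back);      // all reads come from 'front'
            std::swap(front, back);     // 'back' becomes the new 'front'
        }
    };

Without the second buffer, a cell updated early in the iteration would leak its new value into the samples of its not-yet-updated neighbors, which is exactly the incorrect sampling mentioned above.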

3.2.3 Voxels

As mentioned in section 3.2, it is necessary to compare techniques based on models loaded from the same resources, but with different internal representations based on the technique to be used. Most models are loaded in a mesh representation, and for many purposes this is sufficient. In radiosity, the meshes are then converted to patch representations, see section 2.4, while they are converted to volumetric representations in the dynamic irradiance volume method. The choice of volume graphics is logical, because volumetric representations are a very efficient way to populate the grid with geometry (solid/non-solid markings), and it makes it easy to perform intersection tests as well as manipulate how the blur kernel samples in the regions of the grid that are populated. To convert mesh based models to volumetric representations, we use intersection tests to populate the grid. Each triangle of the mesh is brought into grid coordinates with the inverted grid transformation matrix, see section 3.2.2. The bounding box of the triangle is then used to decide which cells possibly contain the geometry, and finally intersection tests between each cell of interest and the triangle are performed; a sketch of this loop is given below. As seen in figures 3.11 and 3.12, this method marks the environment geometry in the grid, although the volumetric result is somewhat rough. It is possible to refine the result into a closer volumetric approximation by increasing the grid size combined with adjusting the grid matrix scaling, which effectively partitions the environment into more cells. By doing this, it is possible to fine-grain the light diffusion, and, depending on the level of detail and the environment geometry, light flow might improve substantially. A consequence is that increasing the grid size comes at an increased computation cost. Thus, it is necessary to do a cost/benefit analysis of how detailed the environment should be represented in voxels to achieve a visually satisfying approximation.

Marking the geometry based on the cells which encapsulate it results in only the hull of models being marked in the grid. This has one downside, but several benefits. The downside is that light sampling occurs inside the models, which in some cases may result in unnecessary calculations. One benefit is that it is not necessary to determine whether the models are watertight or not. If they are watertight, light will not be able to flow into the inside of the model, and the calculations inside will simply result in light values of zero. If the models are not watertight, light will be able to flow inside, but it can then be argued that this may actually be desired because of how that model was designed to look, and that no intelligent decision can be made when filling the grid in regards to whether the model is supposed to be watertight or not. Another benefit is that the kernel can be kept fairly simple, because it does not have to make intelligent decisions about how to sample the light inside/outside models.
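A C++ sketch of the voxelization loop described above. All names are our own, and the triangle/box overlap test itself is left as a stub; a real implementation would use a standard routine such as the separating axis test:

    #include <algorithm>
    #include <cmath>
    #include <vector>

    struct Vec3 { float x, y, z; };
    struct Triangle { Vec3 v[3]; };

    // Hypothetical solid-marking grid with unit-cube cells.
    struct VoxelGrid {
        int w, h, d;
        std::vector<bool> solid;
        bool contains(int x, int y, int z) const {
            return x >= 0 && y >= 0 && z >= 0 && x < w && y < h && z < d;
        }
        void markSolid(int x, int y, int z) { solid[(z * h + y) * w + x] = true; }
    };

    // Stub: e.g. a separating-axis triangle/box overlap test.
    bool triangleOverlapsCell(const Triangle& t, int cx, int cy, int cz);

    // Marks every grid cell whose cube overlaps the triangle. 'tri' is
    // assumed to already be transformed into grid coordinates by the
    // inverse grid transformation matrix (section 3.2.2).
    void voxelizeTriangle(const Triangle& tri, VoxelGrid& grid) {
        // Compute the triangle's bounding box in grid coordinates.
        Vec3 lo = tri.v[0], hi = tri.v[0];
        for (int i = 1; i < 3; ++i) {
            lo.x = std::min(lo.x, tri.v[i].x); hi.x = std::max(hi.x, tri.v[i].x);
            lo.y = std::min(lo.y, tri.v[i].y); hi.y = std::max(hi.y, tri.v[i].y);
            lo.z = std::min(lo.z, tri.v[i].z); hi.z = std::max(hi.z, tri.v[i].z);
        }
        // Only cells inside the bounding box can possibly overlap.
        for (int z = (int)std::floor(lo.z); z <= (int)std::floor(hi.z); ++z)
            for (int y = (int)std::floor(lo.y); y <= (int)std::floor(hi.y); ++y)
                for (int x = (int)std::floor(lo.x); x <= (int)std::floor(hi.x); ++x)
                    if (grid.contains(x, y, z) && triangleOverlapsCell(tri, x, y, z))
                        grid.markSolid(x, y, z);
    }

Running this over every triangle of the mesh produces exactly the hollow hull marking discussed above.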

Figure 3.11: Voxelized scene, interior screenshot

Figure 3.12: Voxelized scene, exterior screenshot

3.3 Implementation

This section describes a full implementation of a dynamic irradiance volume, as described and presented by Rune Vendler. Light is simply blurred into all cells, without considering areas that might actually be occluded as a result of geometry and light source positions [5]. The first step is preprocessing the information needed to perform the calculations:

    begin
        load the scene;
        load light sources;
        voxelize the scene;
        prepare the kernel;
    end

During the preprocess step, the environment model is loaded into the program. Light sources for the environment are loaded (including dynamic lights), and the environment is then converted to a voxel representation. Based on the grid size, a world to grid transformation matrix is generated, which allows the model to be completely encapsulated by the grid. To avoid vertices that reside on the border of the grid becoming dark due to light value lookups at the border of the light texture, the grid is padded with an extra layer of cells, ensuring correct light values for all vertices. The cells of the grid are then flagged solid or non-solid, as described in section 3.2.3. Finally, the kernel is set up. Following this come trivial extensions, like setting up the camera and movement, and applying textures. When preprocessing is complete, the program enters the display loop:

    begin
        move dynamic light;
        update grid;
        for all shapes do
            draw the shapes using grid light values;
        end
    end

For every iteration the grid is updated. Based on the light source positions, the corresponding cells are updated with light values. The program then iterates over all cells in the grid, and light sampling is performed based on the kernel design. After the kernel has performed its work, a volumetric texture is generated based on the produced light values.

Figure 3.13: A simple blur kernel at work

The light values are clamped to [0..255], then stored in the texture. Finally, the texture is uploaded to graphics memory, and vertices are illuminated by a shader program performing lookups in the texture. This has the benefit that the irradiance samples are interpolated automatically for the vertices; a sketch of this upload step is given below. An illustration of the results produced by a simple blur kernel can be seen in figure 3.13. Sampling takes place as mentioned in sections 3.1 and 3.2.1, and can be described by the following pseudocode:

    begin
        current cell irradiance set to zero;
        for six cells to be sampled do
            add sampling cell irradiance to current cell irradiance
            (solids always contribute 0 light);
        end
        divide current cell irradiance by six;
        store current cell irradiance in buffer grid;
    end

Light values are summed together, then divided by six to keep the amount of light diffused constant. As seen in figure 3.14, this results in reduced light in the vicinity of geometry.
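A C++/OpenGL sketch of the texture generation and upload step described above, assuming an OpenGL 1.2+ header/loader so that glTexImage3D is available; the function and variable names are our own, but the GL calls and parameters are standard:

    #include <GL/gl.h>
    #include <algorithm>
    #include <vector>

    // Packs per-cell RGB light values (floats) into a byte texture and
    // uploads it as a 3D texture; the shader then samples it per vertex,
    // with GL_LINEAR providing the automatic trilinear interpolation.
    void uploadLightVolume(GLuint tex, int w, int h, int d,
                           const std::vector<float>& rgb /* 3 floats per cell */) {
        std::vector<unsigned char> bytes(rgb.size());
        for (size_t i = 0; i < rgb.size(); ++i)
            bytes[i] = (unsigned char)std::clamp(rgb[i] * 255.0f, 0.0f, 255.0f);

        glBindTexture(GL_TEXTURE_3D, tex);
        glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexImage3D(GL_TEXTURE_3D, 0, GL_RGB8, w, h, d, 0,
                     GL_RGB, GL_UNSIGNED_BYTE, bytes.data());
    }

The vertex shader would then transform each vertex into grid coordinates and read the texture there, getting the interpolated irradiance for free from the texture filtering.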

Figure 3.14: Dark edges and corners can be seen in the red rectangles

As mentioned in section 3.2.1, this can be modified to disregard solid cells, in which case the kernel only averages over cells that actually allow light flow. This can be computed as a preprocess step, and is simply a counter by which the final irradiance value is divided. Do note that the kernel still calculates light in the solid cells, to get a sample in the close vicinity of the geometry occupying the voxel. This is necessary if the shader is to interpolate automatically. Without introducing a light preservation factor, this provides the result shown in figure 3.15. From figure 3.15 it should be clearly visible that the cells always reach their upper bound on light intensity, because the geometry and its zero light contributions are no longer considered in the averaging. The following pseudocode describes the computation:

    begin
        current cell irradiance set to zero;
        for six cells to be sampled do
            if sampling cell not solid then
                add sampling cell irradiance to current cell irradiance;
            end
        end
        divide current cell irradiance by precomputed counter;
        store current cell irradiance in buffer grid;
    end

Figure 3.15: Contribution kernel that disregards solid cells, but does not have a light preservation factor

To remedy this, the light preservation factor is then introduced. Compared to the previous blur kernel design, the only difference lies in the preservation factor which is multiplied with the stored irradiance value, and for this reason the pseudocode is omitted. The result can be seen in figure 3.16. This is a much better result, and provides a decent illumination of the geometry. As mentioned in section 3.2.1, the preservation factor can be considered a very rough approximation of distance attenuation, but with the previously mentioned limitations.

Figure 3.16: Contribution kernel with light preservation

Figure 3.17: Edge and corner illumination with contribution kernel

Returning to the issue of illumination around edges and corners, notice the difference between figure 3.14 and figure 3.17. The edges and corners generally receive more light with the contribution kernel, but they still keep a constant dark shade. This happens as a result of the interpolation of vertex illumination, due to the voxelisation of the environment and the kernel design. The dark areas occur in cells that are completely surrounded by solid cells, or cells that have not received light. As a consequence, these irradiance samples have a value of 0. This specifically happens in corner cells where light cannot flow due to the surrounding geometry and occupied voxels. As a result, these samples contribute a light value of 0 to the light interpolation, which results in dark vertices in the vicinity of the sample. When smooth shading is performed, the result is the visible dark edges. This can be avoided by changing the kernel design to include sampling of the diagonal cells, but this in turn increases computation costs due to the additional sampling. Adding the diagonals to the sampling produces the result shown in figure 3.18. Alternatively, the vertex illumination can be modified to disregard irradiance samples from solid cells in the interpolation. This would also require the interpolation of the samples to be implemented manually in the shader program.

Another issue with light sampling exists, illustrated in figure 3.19, namely that part of the wall surface receives a brighter illumination than the rest of the wall. Recall from section 3.2.1 how the blur kernel disregards solid cells. This ensures that the sampled light values only get averaged by the number of cells that can possibly contribute light, and thus ensures a generally higher amount of light in the cell.

Figure 3.18: Diagonals added to cell sampling

But what happens if light is sampled from a non-solid cell that is never able to receive light? The upper bound on the cell light intensity becomes smaller, because the kernel still averages based on the solid/non-solid evaluation. This happens when sampling on the solid cells of a model hull, as we recall from section 3.2.3 that only the hull of models is marked solid, not the interior cells. In the case of figure 3.19 the problem is actually inverted, because the light diffusion takes place inside a model that is fully enclosed in solids due to the walls. This means light never escapes to the outside of the model, and once again results in wrong light sampling in the wall regions. Because the interpolation of the light value for vertex illumination uses samples with no light, the geometry consequently receives a darker shade. An easy fix is to evaluate whether the surrounding cells actually contain light, and then average accordingly: if the cell contains light, the value is added and a counter is increased (used for averaging the final result); if it does not contain light, nothing is done. This produces geometry that is lit up according to where light flow occurs, although it does not differentiate between different light sources that produce different flows on either side of the solid. The final contribution kernel design is illustrated in figure 3.20.

Figure 3.19: Issue with lighting because both sides of the wall contribute to the illumination

Figure 3.20: Final contribution kernel

The following pseudocode describes the complete contribution kernel design and blur process (a C++ sketch follows below):

    begin
        current cell irradiance set to zero;
        set lightcount to zero;
        for six cells to be sampled do
            if sampling cell not solid then
                add sampling cell irradiance to current cell irradiance;
                if sampling cell contains light then
                    increase lightcount by one;
                end
            end
        end
        if current cell not solid then
            divide current cell irradiance by precomputed counter and
            multiply current cell irradiance with light preservation factor;
        else
            if lightcount not zero then
                divide current cell irradiance by lightcount and
                multiply current cell irradiance with light preservation factor;
            end
        end
        store current cell irradiance in buffer grid;
    end
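For concreteness, here is a C++ sketch of this final contribution kernel for a single cell, in the spirit of the pseudocode above. VolumeGrid, Cell and the precomputed nonSolidNeighbors counter are our own hypothetical names, light is a single channel for brevity, and the grid is assumed to carry the padding layer from section 3.3 so the six neighbor lookups never go out of bounds:

    // One evaluation of the final contribution kernel at cell (x, y, z).
    // 'src' holds the previous iteration's values and 'dst' receives the
    // new ones (the double buffering from section 3.2.2).
    void contributionKernel(const VolumeGrid& src, VolumeGrid& dst,
                            int x, int y, int z, float preservation) {
        static const int off[6][3] = {
            {1, 0, 0}, {-1, 0, 0}, {0, 1, 0}, {0, -1, 0}, {0, 0, 1}, {0, 0, -1}
        };
        float irradiance = 0.0f;
        int lightCount = 0;
        for (const auto& o : off) {
            const Cell& n = src.at(x + o[0], y + o[1], z + o[2]);
            if (!n.solid) {
                irradiance += n.light;
                if (n.light > 0.0f) ++lightCount;  // neighbor actually carries light
            }
        }
        const Cell& c = src.at(x, y, z);
        if (!c.solid && c.nonSolidNeighbors > 0) {
            // Non-solid cells average over the precomputed neighbor count.
            irradiance = irradiance / c.nonSolidNeighbors * preservation;
        } else if (c.solid && lightCount > 0) {
            // Solid (hull) cells average only over lit neighbors, so one
            // lit side of a wall is not diluted by the unlit side.
            irradiance = irradiance / lightCount * preservation;
        } else {
            irradiance = 0.0f;
        }
        dst.at(x, y, z).light = irradiance;
    }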

Chapter 4
Cell visibility

4.1 Problem

As mentioned in sections 1.2 and 3.1, the dynamic irradiance volume method performs indirect illumination through a blur mechanism. Other methods like ray tracing and radiosity use reflections of light to calculate the indirect illumination, which is costly, but also far more accurate than the blur kernel. More specifically, the kernel does not consider global parameters like geometry and light source positions, which are key elements when calculating a reflection. This was already partially covered in section 3.2.1, and nicely illustrated by figure 3.1. This issue may be more or less negligible depending on the kernel design, which is most easily observed by inspection of the different two dimensional kernel illustrations in section 3.2.1. But even if the visual error is seen only by a keen observer, it remains viable to examine whether the kernel design can be modified to produce a more correct light diffusion.

This introduces the question of how to achieve this goal. As covered in section 3.2.1, solving this problem probably implies that some extra information has to be computed, because the problem fundamentally is a global consideration of how the geometry is positioned relative to the light sources. In other words, some areas of the environment are only able to receive indirect illumination, and it is these areas that receive too much light with the blur kernel. Since the blur kernel performs the update of cells, the solution should extend the cells with the needed information, and then let the kernel update this information together with the light value computations.

We now have a somewhat loose definition of the problem we want to solve, and of how we should approach the task of solving it. To make the solution domain clearer, it is necessary to examine the constraints that the solution should be devised under.

4.2 Constraints

As mentioned in section 3.2.1, dynamic irradiance volumes have a computation cost that is linear in the number of cells. This should be considered a constraint for two reasons. The first reason is a general observation about computation costs, namely that linear computation costs scale well as a program and the input to be evaluated grow. The second reason is the level of detail that the cells represent. The more cells we are able to represent in our program without compromising the goal of running the technique in real time, the more detailed the light diffusion will become. If the number of cells already provides a satisfactory level of detail in the light approximation, then we may instead increase the environment size until the computation cost has grown to make the program update slower than real time. This constraint tells us one important fact: the linear computation cost is fundamental for the real-time success of the dynamic irradiance volume method, and if we increase the cost by an additional dependency like the number of light sources and/or the amount of geometry, the technique is not likely to work in real-time applications. This effectively leaves us with a fairly limited number of options when we attempt to solve the problem.

Rune Vendler mentions two other constraints, namely that the environment must not change too fast, and that the environment should be kept fairly small. The first constraint is an implication of the fact that we want to present the user with a scenery where the light has achieved a decent level of convergence, something which is not compatible with fast changes in scenery. A good example of this would be the trailing of a dynamic light: the faster the dynamic light moves, the bigger the light trail becomes, and the light is never able to illuminate its surroundings sufficiently. This constraint is important on a general level, because the overall goal is to achieve visually good results, but it is not really relevant in regards to the light diffusion problem. The second constraint is actually an implication of the previously mentioned relation between level of detail and computation cost when increasing the number of cells.

4.3 Solution

Analysis and design

With the problem outlined in section 4.1 and the solution domain somewhat defined in section 4.2, we are now able to analyze the problem and devise a solution. To get some idea of an approach that solves the problem, we first make some observations about general light behavior and why it is fundamentally different from the blur kernel. Strictly speaking, all light can be considered energy dispersed into the environment: in one direction for a directional (parallel) light source, equally in all directions of a sphere for an ideal point light source, or in a limited set of directions based on angles for a spotlight [2]. In the case of the dynamic irradiance volume method, we operate with an attempt to simulate a point light source, due to the light sampling mentioned in section 3.2.1. The light then flows from the light source in the given directions. As the rendering equation mentioned in section 2.2 accounts for, the light is then reflected and/or absorbed by objects that obstruct the flow [1]. For a global illumination technique, the absorbed light would be used to consider illumination of the encountered object, and the reflected light would continue its journey through the environment until obstructed.

The light values of cells in dynamic irradiance volumes are only used to provide an estimate of light intensities near objects, and the technique does not consider light reflections at all. The reason why the dynamic irradiance volume method still can be considered an indirect illumination method is the way the blur kernel spreads light into areas which should only be able to receive reflected light.(1) Even so, the general observation about light behavior is still important, because it tells us that light is only able to reach the backside of an object through reflections from another object. In other words, it becomes clear that this is a question of the directional flow of light.

From this we are now able to raise the question of how to handle this directional flow. Regardless of approach, we fundamentally have to evaluate something that depends on the position of the light source and the position of the cell that receives light, to be able to derive the directional information. This tells us two things. The first is that we have to work with positional information about the light sources on a per cell basis, but recall that we do not allow the computation cost to scale with the number of light sources. A way to handle this must be determined.

(1) How to combine light reflections and dynamic irradiance volumes may be a good area for further examination, further elaborated in section 6.1.

The second is that we must devise a way to encapsulate this directional information in such a way that it can be used to control the light diffusion during the blurring process of the kernel. We have considered two possible approaches, and reached the conclusion that the first will not work, for reasons to be clarified, while the second solves the initial problem, but does so with limited applicability. Furthermore, it potentially violates the definition of the dynamic irradiance volume method as a way to produce indirect illumination. Even so, it remains the best solution we have been able to devise within the problem domain, due to the core mechanics of the dynamic irradiance method. This will also be elaborated.

The first approach would be to define a set of light transfer directions for each light source, and then let these directions blur through the cells in much the same way as the light intensities. When encountering an area of interest, more specifically cells that contain geometry, the directions should then be updated accordingly to account for the reduction in light flow, and the light values updated relative to the allowed directions of light transfer. The benefit of this would be that it would make it possible to achieve a more correct light flow. It would also make light reflections possible when sampling light values near solid cells. Unfortunately, there are some fundamental problems with this approach, which relate to the blur kernel mechanics. This was already described partially in section 3.2.1 and briefly mentioned in section 4.1. As stated by the general theory about illumination in sections 2.2 and 2.3, emission of light from the light source takes place in given directions. This is the largest difference between conventional illumination theory and the dynamic irradiance volume method, because the kernel's blur process is omnidirectional. This fundamental mechanism is the reason why indirect illumination is produced by the kernel, and also the reason why directions and blurring are conflicting concepts. The blur process makes it very difficult, if not impossible, to make any intelligent decision about how the light should be weighted and flow depending on the surrounding geometry. The result of this is that we cannot simply define a light direction, or range of light directions, for each light source, which are then blurred together with the light intensity values. To put it differently, following this approach would mean that the kernel at some point will evaluate cells where one or several of the surrounding cells are occupied by geometry. Evaluation of the solid cells may be performed to affect the directional information, but because the directional information is spread through the cells by blurring, directional information that is only valid(2) for a single cell still blurs into the surrounding cells, which produces wrong results. If, on the other hand, we instead choose not to sample the directions because it produces directions that are invalid, then we end up with no directional information at all.

(2) Partially valid, as the blurring process still obscures the information, but to a lesser extent.

Figure 4.1: Attempting to visualize directional information as a blur process. The arrows represent the directions of light flow, and the question marks illustrate the cells where deciding the light flow becomes problematic.

Also, this approach does not account for the situation where directions from multiple light sources meet. If sampled, the result would be wrong. If not sampled, the directions would collide and die. The problem is most easily visualized by considering the general case, where a cell has to be updated based on a mix of solids and non-solids. If a solid obstructs the light flow, then naturally all cells in the direction of the light flow behind the solid should be occluded. This can be determined when the kernel is evaluating the solid, but as soon as it samples non-solid cells again, the directional information about the limited light transfer blends with directional information from the surrounding non-solids that have not been obstructed. An attempt to visualize this can be seen in figure 4.1.

The problem essentially boils down to a question of what directions of light really represent. To illustrate the difference in behavior, which figures 4.2 and 4.3 attempt to visualize, imagine that the light is considered a directional flow of particles of energy. These particles have a fixed direction as soon as they have been emitted from the light source. This is not compatible with the cell and blur kernel representation, because the transfer of light can never be fixed to a single direction. If we tried to blur in separate directions, we would have to evaluate the light flow per direction and/or per light source. This is far better represented with techniques like ray tracing and radiosity.

The ideas from the first approach serve as inspiration for the second.

Figure 4.2: Expected particle behavior; the particles only change direction when obstructed.

Figure 4.3: Blur kernel behavior; the particles change direction even when not obstructed.

Working with the first approach has taught us much about the limitations of the dynamic irradiance volume method, namely that the blur process allows the light, or particles of energy if you will, to traverse in multiple directions. It has also become clear that directional light flow is a concept which conflicts with the dynamic irradiance volume method. This limits our options, so instead we change the definition of how the correct light flow should be achieved. Instead of trying to control the light flow itself, we try to manipulate the light flow in areas that should only receive indirect illumination.

This redefinition simplifies the problem, as the solution now becomes a question of deciding how much light should be allowed to flow in each cell; in other words, pretty much a question of cell occlusion. It also avoids the major limitations previously encountered, more specifically the constraint on computation cost (under the assumption that the cell visibility information is available when the kernel blurs the light), and the blur process itself, because the blur kernel simply continues to work as described in section 3.2.1, without (major) modifications.

One question remains, and that is how to determine the degree to which each cell allows light flow. If this information can be made available when the kernel samples, the constraints are not violated, because the kernel knows where and how much the light should be blurred. But the information still has to be computed, it has to be computed during runtime, and it has to account for changes in the scenery such as moving lights or geometry. As previously mentioned, cell visibility is a question of how the solid cells occlude other cells relative to the light source position, and fundamentally depends on the amount of geometry and the number of light sources. Precomputing the visibility could be one suggestion, but this invalidates the major strength of the dynamic irradiance volume method, namely its ability to handle dynamic light sources and objects.

To avoid all of the previously mentioned complications, we decide that each cell should keep an updated position of the closest light source, which is then used to determine occluded areas behind solids. This is not correct from a global perspective, because the actual visibility is a global consideration involving all geometry and lights. But from a perspective that states distance as the primary factor in determining which light sources potentially are most influential in a given area of the environment, and with the given constraints, this is our best option. To acquire the position of the closest light source, we simply update the cells with the positions as the blur kernel works. In the case where different light source positions are seen in the cells to be sampled, a squared distance evaluation is used to determine which light source is the closest, and the cells are updated accordingly. Keeping an updated light position for each cell, we are now able to evaluate the relation between the light source and the position of a solid. This information is then used to determine which cells the solids occlude (and the degree of occlusion), and the occluded areas are then expanded as the blur kernel processes all cells.

Because the light source positions and the occluded areas are constantly updated, the approach is tolerant toward light that moves and/or changes; the visibility, and thus the light flow, is updated accordingly. One visual issue remains with this approach because of the lack of global evaluation. Cells that are occluded relative to the nearest light source, but visible from one further away, will incorrectly be occluded. This results in areas that are incorrectly dark, where they in reality would appear lit up. Instead of allowing too much light to flow, we now allow too little, a consequence that cannot be avoided with the given approach.

For this reason, and several others, cell visibility may not be an optimal solution. One major concern is that cell visibility may cause the dynamic irradiance method to be mistaken for a direct illumination method, which it is not. This is due to the (rough) shadows produced by the visibility. It should be clear that cell visibility is only an attempt to correct the lighting approximations for indirect illumination, and not to reduce the dynamic irradiance method to a direct illumination method only. Other techniques exist for the purposes of direct illumination and shadows, and they do a rather good job of it; for shadows, one such is the shadow map method. Even so, a shadow effect is an unavoidable consequence of this approach. Another concern is the roughness of the produced shadows, and the fact that cell visibility only handles the primary light source for each cell. For this reason it may actually be a better idea to keep the standard blur kernel, and then apply a shadow technique like shadow maps. The quality of the various possible approaches, visually as well as computationally, will be further discussed in chapter 6. Even so, cell visibility remains the solution we were able to devise to solve the initial problem. The implementation details and visual results are described in section 4.3.2.

4.3.2 Implementation

The implementation of cell visibility consists of two extensions to the standard technique. First, the cells have to be extended with a scalar that represents the factor of visibility and an id of the closest light source. Second, the blur kernel has to update each cell with the id of the nearest light source, which is then used to perform the visibility update and the following light diffusion.

The visibility factor is defined to lie in the interval [0..1]; solids are initially set to a value of 0, non-solids to 1. The closest light source position is spread out from the light source as the blur kernel processes the cells. If the kernel evaluates a cell that has no light source defined, and sees one in one of the sampling cells, the cell is updated with that light source position. If the kernel sees multiple light sources in the sampling cells, a distance evaluation is performed based on the squared distance, and the closest light source is stored.

A distance vector is then calculated from the position of the light source and the evaluated cell. The sign of each component of this vector is used to decide which cells should be examined when determining the visibility of the current cell. Following this, the current cell visibility has to be calculated based on the cells determined to contribute. This is done by weighting how much each component contributes compared to the total of all components, or more specifically by dividing the absolute value of each component by the sum of all components. These weights ensure that the sampling cells are correctly weighted, and that a cell always has a visibility in the range [0..1]. Each sampling cell visibility is multiplied with the corresponding weight, and the results are finally summed together and stored in the current cell. An illustration of the stored visibility can be seen in figure 4.4. Figure 4.5 illustrates the cell visibility (not light!) for a single dynamic light source during runtime.

Figure 4.4: Cell visibility calculated in two dimensions
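To make the update concrete, the following C++ sketch shows the closest light source propagation and the component weighting described above. It is a minimal sketch under our own naming conventions (Cell, propagateClosestLight and cellVisibility are hypothetical names), not the actual thesis code:

    #include <cmath>

    // Hypothetical cell layout; the field names are illustrative.
    struct Vec3 { float x, y, z; };

    struct Cell {
        bool  solid;
        bool  hasLight;    // whether lightPos holds a valid position yet
        Vec3  lightPos;    // position of the closest light source seen so far
        float visibility;  // [0..1]; 0 for solids, 1 for unoccluded non-solids
    };

    static float sqrDist(const Vec3& a, const Vec3& b) {
        float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
        return dx * dx + dy * dy + dz * dz;
    }

    // Propagate the closest light source position into 'cell' from one of the
    // sampled neighbors, using the squared distance to avoid a square root.
    void propagateClosestLight(Cell& cell, const Cell& neighbor, const Vec3& cellPos) {
        if (!neighbor.hasLight)
            return;
        if (!cell.hasLight ||
            sqrDist(neighbor.lightPos, cellPos) < sqrDist(cell.lightPos, cellPos)) {
            cell.lightPos = neighbor.lightPos;
            cell.hasLight = true;
        }
    }

    // Compute the visibility of a non-solid cell from the three neighbors that
    // lie toward the light. visX, visY and visZ are the visibilities of the
    // neighbor cells selected by the sign of each component of the distance
    // vector between the light source and the cell.
    float cellVisibility(const Vec3& cellPos, const Vec3& lightPos,
                         float visX, float visY, float visZ) {
        float dx = std::fabs(cellPos.x - lightPos.x);
        float dy = std::fabs(cellPos.y - lightPos.y);
        float dz = std::fabs(cellPos.z - lightPos.z);
        float sum = dx + dy + dz;
        if (sum <= 0.0f)
            return 1.0f; // the cell coincides with the light source
        // Each weight is |component| / sum, so the weights add up to one and
        // the result stays in [0..1] when the inputs do.
        return (dx / sum) * visX + (dy / sum) * visY + (dz / sum) * visZ;
    }

Because the three weights sum to one, the stored visibility can never leave the [0..1] interval, which is exactly the property the weighting scheme is designed to guarantee.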

Figure 4.5: Demonstration of cell visibility in two dimensions

Finally, the light value is calculated for the cell by blurring according to one of the approaches mentioned in section 3.3, with the small difference that the final value is adjusted according to the cell visibility. If the visibility is above a certain threshold, the cell is assumed to allow full light flow, and the light value is multiplied by 1.0. If the visibility is below the threshold, the light value is multiplied by the cell visibility. This makes it possible to control when a cell is considered non-visible, and from this to adjust the sampling in the contribution kernel accordingly.

One of the advantages of calculating the cell visibility this way is that the occlusion scales, instead of simply marking cells visible/non-visible. This produces a softer change in illumination, which can be adjusted by using the threshold to determine when a cell is considered to allow full light flow. This is illustrated by figure 4.6. Finally, this approach also behaves rather well with a dynamic light source, illustrated in three dimensions by figure 4.7.
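The threshold adjustment itself can be expressed in a few lines. This is a sketch with names of our own choosing, assuming the blurred light value has already been computed per color channel:

    // Apply cell visibility to a blurred light value. 'threshold' controls
    // when a cell is treated as fully lit; below it, the light scales
    // linearly with the cell visibility.
    float applyVisibility(float blurredLight, float visibility, float threshold) {
        if (visibility >= threshold)
            return blurredLight;          // full light flow, multiply by 1.0
        return blurredLight * visibility; // scaled by the degree of occlusion
    }

Raising the threshold widens the region that counts as fully visible and thus hardens the transition between lit and occluded areas; lowering it softens the transition.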

Figure 4.6: Illustration of the difference in threshold and the resulting soft shadows

The full implementation of cell visibility roughly follows this pseudo code:

begin
    find the light source closest to the current cell;
    if current cell not solid then
        compute distance vector between closest light source and current cell;
        if squared distance greater than zero then
            store absolute values of vector components and their sum;
            evaluate cell visibility samples;
            set visibility to zero;
            weigh and sum the visibility samples;
        end
    end
    blur the light, adjust according to threshold, and multiply with cell visibility for each value;
end

Figure 4.7: Demonstration of cell visibility in three dimensions with a single dynamic light source

Chapter 5
GPU Implementation

5.1 Motivation

The development of programmable GPUs (graphics processing units) allows programmers to override the old fixed-function pipeline and put graphics cards to use in a wide range of other applications, which are not necessarily graphical in nature (physical simulations etc.). In our case we wish to move the core elements of the dynamic irradiance volume method from the CPU to the GPU, in the hope of achieving a noticeable speedup in overall performance. This would also allow us to increase the number of cells in the grid and, in turn, produce more appealing visual results.

The main reason that the GPU can be considered a possible speedup for the method lies in the fact that specialized graphics hardware is extremely fast at parallel processing of data, especially floating point values [8]. This is due to the pipelined design, which allows multiple primitives to be handled simultaneously. The speed gained by pipelining does, however, come with constraints on the type of data that can be processed. One of these constraints is that the calculation of the data cannot depend on any calculation which might take place at the same time, as this would forfeit the whole purpose of pipelining. Input data for processing is contained in textures, and the results are written back into a texture by redirecting the fragment data from the framebuffer.

For the same reasons as stated in the previous paragraph, it is not possible to use the same texture for both reading and writing at the same time. This means that we must use a solution very similar to our double buffering grid (see 3.2.2), where we read from the back texture and write into the front texture. This comes with the added bonus that we no longer have to upload the light texture from main memory to graphics memory on every pass, a major slowdown in the non-GPU-based version.

5.2 Implementation

As stated in 5.1, we hope to take advantage of the GPU by utilizing its multiple pipelines. Basically this means that we want to create a shader that can process a texture, specifically an irradiance grid, in the same way our kernel does it. The result should be a texture which holds the three-dimensional light data. The method we use is usually referred to as the ping-pong technique, so named for having two texture targets and iteratively ping-ponging (swapping) between them, always building on the last result. We use OpenGL's own shading language, GLSL, for writing our shader code. GLSL provides a C-like interface to OpenGL's data structures, and therefore allows the code to be ported to the shader with relative ease. The static data for the calculations, the geometric features of the scene, will be held in a texture that is created as part of the initialization, while variable parameters such as light positions and colors will be passed to the shader using uniform variables. The shader code itself resembles the code for the CPU version of the method quite closely; in fact the only major difference is that grid access has been replaced by texture lookups.

The ping-pong technique requires the use of a single framebuffer object (FBO) with two associated textures of equal dimensions. Two textures are needed because it is not possible, nor would it make sense, to use a texture for both reading and writing at the same time. Framebuffer objects are an OpenGL extension that allows the programmer to render the contents of buffers such as color, depth and stencil to a texture rather than to their original destination. Both textures will hold color data, for which reason we attach them to the FBO at GL_COLOR_ATTACHMENT0_EXT and GL_COLOR_ATTACHMENT1_EXT. This means that either texture can potentially be the target color buffer. The actual target is selected by calling glDrawBuffer with the desired attachment point. By continuously swapping between calling glDrawBuffer(GL_COLOR_ATTACHMENT0_EXT) and glDrawBuffer(GL_COLOR_ATTACHMENT1_EXT) we write to one texture while we read from the other.

Figure 5.1: Layout of the 3D texture in 2D

Figure 5.2: Layout of the 3D texture in 3D

The texture setup requires a bit of attention. The first problem is that framebuffers do not support rendering to a 3D texture. Thus, we have to lay out the data in a 2D texture. We store the entire grid in a single texture with each layer as a slice. The y coordinate specifies the slice number, while the x and z coordinates give the location within that slice, mapped as (x', y') = (x + y * width, z). The dimension of the texture is thus (width * height, depth). This can be seen in figures 5.1 and 5.2. Using this layout means that we cannot use GLSL's 3D texture sampling to perform the lookup, but have to implement our own tri-linear interpolation using samples in the 2D texture.

For more transparent code we utilize the texture rectangle extension. One of its features is that the texture can be accessed using pixel coordinates rather than normalized texture coordinates in the range [0..1]. The result is that our kernel code can be translated without scaling into texture coordinates.

Textures will also have interpolation disabled by setting GL_TEXTURE_MIN_FILTER and GL_TEXTURE_MAG_FILTER to GL_NEAREST. A final consideration is the choice of a proper texture format for the data. As discussed in section 3.2, it is necessary to represent the irradiance information in high dynamic range. We use OpenGL floating point textures, or rather half-float textures, as 32 bit precision is too slow for real-time applications, with GL_RGBA16F_ARB [7]. Half-floats offer a higher dynamic range compared to normalized 8 bit values, but represent a significant loss in precision compared to 32 bit floats. They are, however, sufficient for our needs.

In order to perform the calculation we need every fragment of the viewport covered. Prior to this, the viewport must be resized to the size of the textures. The fragments must have an associated pair of texture coordinates that matches the corresponding location in the read texture. This can be done by resizing the viewport to (width * height, depth) and setting up an orthographic projection. Then we render a quad filling the viewport, using the rectangular texture coordinates. Now there is a 1:1 correspondence between locations in the read texture (texture coordinates) and the write texture (fragment coordinates).

To condense the details of the prior sections, we present some pseudo code that covers the process of performing the dynamic irradiance volume method on the GPU:

begin
    create texture rectangles read, write and associate them as color buffers with the framebuffer fbo;
    load shader;
    create a 3D texture based on the scene geometry;
    while drawing do
        update light positions and colors;
        enable fbo and set write as drawbuffer;
        setup viewport, projection and enable the shader;
        pass light positions, colors and the geometry to the shader;
        render a quad filling the viewport;
        reset viewport, projection and disable the shader and fbo;
        render the scene using write as lightmap;
        swap read and write;
    end
    destroy textures, framebuffer and shader;
end
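As a concrete illustration of the setup described above, the following C++/OpenGL sketch shows the ping-pong buffers and the 3D-to-2D index mapping. The variable names (fbo, tex, readIdx, writeIdx, gridToTexel) are ours, error checking is omitted, and the actual thesis code may be organized differently:

    #include <GL/glew.h>
    #include <utility>

    GLuint fbo, tex[2];            // two color attachments, swapped every pass
    int readIdx = 0, writeIdx = 1;

    void initPingPong(int gridW, int gridH, int gridD) {
        glGenFramebuffersEXT(1, &fbo);
        glGenTextures(2, tex);
        for (int i = 0; i < 2; ++i) {
            glBindTexture(GL_TEXTURE_RECTANGLE_ARB, tex[i]);
            // Half-float RGBA, filtering disabled; lookups use pixel coordinates.
            glTexParameteri(GL_TEXTURE_RECTANGLE_ARB, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
            glTexParameteri(GL_TEXTURE_RECTANGLE_ARB, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
            glTexImage2D(GL_TEXTURE_RECTANGLE_ARB, 0, GL_RGBA16F_ARB,
                         gridW * gridH, gridD, 0, GL_RGBA, GL_HALF_FLOAT_ARB, 0);
        }
        glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
        glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                                  GL_TEXTURE_RECTANGLE_ARB, tex[0], 0);
        glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT1_EXT,
                                  GL_TEXTURE_RECTANGLE_ARB, tex[1], 0);
        glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
    }

    void pingPongPass() {
        glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
        // Render into the write texture while the shader samples the read texture.
        glDrawBuffer(writeIdx == 0 ? GL_COLOR_ATTACHMENT0_EXT : GL_COLOR_ATTACHMENT1_EXT);
        glBindTexture(GL_TEXTURE_RECTANGLE_ARB, tex[readIdx]);
        // ... enable the blur shader and render a viewport-filling quad here ...
        glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
        std::swap(readIdx, writeIdx); // the next pass builds on this result
    }

    // Map a 3D grid coordinate to a texel in the 2D layout: y selects the
    // slice, so (x', y') = (x + y * width, z).
    void gridToTexel(int x, int y, int z, int gridW, int& tx, int& ty) {
        tx = x + y * gridW;
        ty = z;
    }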

5.3 Results

The initial analysis suggested that the dynamic irradiance volume method could benefit from the multi-pipelined architecture of modern graphics hardware. In this section we examine whether or not this is actually the case. The examination will not rely on fps measurements, but rather on experiences gathered while experimenting with different grid sizes.

Our primary test machines have been current high-end desktop hardware with Nvidia 8800 graphics acceleration. On these machines we experienced a significant drop-off in performance with grid sizes above 32 x 16 x 32 when using the CPU implementation. At 64 x 32 x 64 the performance could no longer be considered real-time. On the same hardware the GPU version yielded much better results. We reached a grid size of 128 x 64 x 128 before we were cut off by the constraints on texture sizes (this could be remedied by choosing a better texture layout). At no point did we encounter problems with performance.

To stress the hardware further, we decided to do multiple updates to the grid per frame. This counters the problem briefly mentioned in 4.2, where light begins to trail after a moving light source due to the delay in convergence, a problem that becomes more visible at higher grid sizes. By increasing the number of updates, the scene converges faster. We did not notice an impact on performance before we reached the unreasonably high number of 9 updates per frame. Keeping in mind that a single update invokes a calculation for each cell once, the number of total single cell updates is 128 x 64 x 128 x 9 = 9,437,184, or close to 9.5 million updates per frame. Comparing this to the CPU limit of 64 x 32 x 64 = 131,072, it is obvious that the dynamic irradiance volume method lends itself well to a GPU solution. Roughly compared, this is a factor 72 speedup on the GPU compared to the CPU. Seeing as 9 updates per frame is very excessive, the speedup could instead be used to handle more than one grid, perhaps one per dynamic light source. This would address the issues we encountered in our cell visibility extension (section 4.2).

Testing the GPU solution on relatively new but less powerful hardware, still supporting the various extensions required by our project, yielded some interesting results. On a Nvidia 6100 Go (a notebook card) the performance was better using the CPU in every way, even at low grid sizes. Reasons for this include shared system and graphics memory and the generally lower performance of mobile graphics hardware. From this it becomes apparent that a GPU implementation may not necessarily be the best choice, depending on the target hardware.

The increased cell count also means that the grid becomes a better geometric representation of the scene. Figure 5.3 shows a comparison between two grids operating at 32 x 16 x 32 and 128 x 64 x 128 respectively.

Figure 5.3: Detail level at different grid sizes. Left: 32x16x32. Right: 128x64x128

It is clear that a lot more detail is visible in the high resolution grid; this detail could only be made available in the low resolution grid by combining it with another light model, e.g. local lighting.

Chapter 6
Discussion

The purpose of this chapter is to discuss the visual results of the dynamic irradiance volume method compared with radiosity, for the various kernel types, the visibility extension and the GPU extension. Based on this comparison, the usability of the technique as well as the extensions will be judged, specifically how these may find a use in contexts like image rendering or computer games. Reference screenshots of a radiosity illumination of our test environment, as well as screenshots showing dynamic irradiance volumes that emphasize the standard technique and the extensions, can be seen in figures 6.2 to 6.7. The figures are meant to illustrate the points made during the following discussion.

For the purpose of comparing radiosity and dynamic irradiance volumes, it is useful to highlight the key aspects of both. The main disadvantage of radiosity is speed due to the computation cost. Each iteration of the technique is far too slow for real-time purposes, and the image needs a long time, depending on the level of detail several minutes, to achieve a converged state. On the other hand, the technique accurately represents diffuse light interaction between objects, it produces color bleeding and soft shadows, and most importantly, the images produced are very convincing for artificial creation of realistic sceneries, an example of which can be seen in figure 6.1. It should also be noted that many variations have been made to the radiosity technique, with the result that it achieves a performance closer to a possible application for real-time rendering. We have not covered any of these variations, but the internet provides an abundance of suggestions for ways of improving its speed.

Figure 6.1: An example of the visual qualities achieved by radiosity

Dynamic irradiance volumes behave far better in regards to computation, but the light diffusion does not resemble the traditional light models, as highlighted earlier. The indirect illumination produced is a very rough approximation of light behavior compared to other global illumination techniques, if it can be called an approximation at all. The major reason for this, which was highlighted in section 4.3.1, is the lack of light reflections in any form, and the flow of light, which is omnidirectional. Due to the lack of directional light flow and of reflections, the dynamic irradiance volume method does not produce light interaction between objects, something radiosity excels at. Again, this is a consequence of the light diffusion being a blurring process, which simply distributes the light like a fluid. For this reason, the concept of object light interaction does not really make much sense here, but may be subject to further examination, as mentioned in section 6.1. Because of the lack of light interaction, dynamic irradiance volumes do not produce color bleeding either, another possible subject for further work.

Finally, the indirect illumination produced by blurring results in too much light in areas that normally would be illuminated by reflected light only. This may be partly remedied with the introduction of the visibility extension. The visual result becomes more correct, but is still visibly inaccurate due to the decision not to scale the computation cost with the number of light sources. Effectively, this produces areas of conflict where too little or no flow of light is allowed, as described earlier.

The GPU implementation allows for higher detail in the voxel representation, which consequently results in a more detailed light diffusion and object illumination. As mentioned in section 5.3, the speed is increased by several multiples, and this raises the question of whether it would be plausible to loosen the constraint of a linear computation cost, to allow for a dynamic irradiance volume per light source for the cell visibility calculations. While increasing computation and storage costs several times, the resources available may actually be large enough to allow for a real-time calculation, with the additional benefit that the areas of conflict would be eliminated, thus resulting in a combined light picture where the flow of light has been accurately represented.

We consider this suggestion out of scope in the context of our work, as it violates one of the fundamental benefits of the dynamic irradiance volume method, but it may be an interesting thing to consider regardless.

Then, with such an apparent level of inaccuracy compared to traditional techniques, what are dynamic irradiance volumes really good for? It is clear that dynamic irradiance volumes should not be used for applications where a high accuracy in approximation to real world illumination is needed. Specific examples would be synthetic images, or games where a high level of realism is desired. A standard dynamic irradiance volume implementation results in inaccurate, but fast illumination. Disregarding how the kernel samples the light, which is fundamentally just a cosmetic issue, we recall the major benefits that the technique introduces, namely the ability to dynamically insert, remove, change and move lights and objects around the environment without increasing the computation cost. This includes parameters such as the number of light sources and objects, their positions, and the color and intensity of the light sources. From this follows that in applications where a high level of flexibility with the lights and objects is needed, and where realism is not an issue, dynamic irradiance volumes will most likely be sufficient, if not favorable, and produce results that are visually attractive. This was also the major application mentioned by Rune Vendler at Visionday 2007 [5].

By combining the kernel with cell visibility, it is possible to manipulate the level of indirect illumination. While this would be essential if the technique were to be used for more realistic purposes, it is our opinion that the limitations mentioned earlier impair any practical application for illumination of a scene, possibly with an exception for single light source sceneries. Instead, the volume may be useful as a dynamic real-time map. This map could then provide information about dynamic light source illumination as a lookup feature for another visualization method, which would use the map to enhance its own illumination. As previously mentioned, a GPU implementation might actually prove to be a possible solution to the limitations caused by the visual dependency on the number of light sources.

We have mentioned tolerance toward dynamic elements as one of the strengths of the dynamic irradiance volume method. The actual implementations have dealt mainly with dynamic lights, but it is also worthwhile discussing the properties of dynamic objects in connection with the method. A major drawback of radiosity, as stated in section 2.4, is that if the geometry is updated, it is necessary to recompute the form factors for the surfaces, which is a very costly process.

This is not the case with the dynamic irradiance volume method, as only updates to the volumetric grid are required for the entire algorithm to adapt to a change. This can be done in many different ways and is a subject of research in itself, but a simple approach could be to use bounding boxes or spheres, as no mesh voxelization would be required. Rapidly changing geometry would prove a challenge for the same reasons that fast moving light sources can be a problem: it takes time for the light to converge and reflect the change. But as we described in section 5.3, raising the number of updates per frame can counteract this, if the hardware permits it. The ability to effectively handle dynamic objects in the world is a really interesting aspect that makes dynamic irradiance volumes, under the circumstances described, a viable alternative to radiosity.

A point we have stressed is that dynamic irradiance volumes is a method for indirect illumination. The cell visibility extension is an attempt to correct the light behavior by including directions, even though the results resemble an attempt to create shadows. This manner of direct shadows is the domain of shadow maps and shadow volumes, which are time tested and handle these things much better. We have included two figures, 6.6 and 6.7, where the first is an example of our cell visibility, and the latter is the contribution kernel combined with omnidirectional shadow maps. Examining the pictures, it is evident why the dynamic irradiance volume method should not be considered an alternative to direct shadows. Even with a rather crude shadow map implementation the results are much more convincing, and many extensions for shadow maps exist that can improve the effect (for example soft shadows). Using a low threshold value would dim the light in the shadowed areas rather than force a complete absence of light. This is the behavior that we wanted from cell visibility, and it could be used in connection with shadow maps to create a more correct direct and indirect illumination. Finally, it should be noted that the cell visibility computations abide by the constraint of a linear computation cost in the number of cells, and the extra cost of the extension is not great. If accuracy is not an issue, it may be enough to slightly dim the illumination with the cell visibility, and then create shadows only for dynamic objects with the shadow map technique.

To summarize, dynamic irradiance volumes cannot compete with other global illumination techniques in regards to realism (nor should they, as the method only produces indirect illumination). The method does, however, produce an illumination that (arguably) looks good. If accuracy is not an issue, it can provide easy, flexible and dynamic illumination which scales well with an increase or decrease in detail, and it is easy to implement.

Figure 6.2: Our demonstration environment with radiosity.

Figure 6.3: The contribution kernel at work. A much better light flow is achieved, and a fairly good visual result is produced.

Figure 6.4: GPU implemented contribution kernel. Notice how the HDR lighting produces a very realistic approximation of the light source intensity, with visually good results.

Figure 6.5: Extending the dynamic irradiance volume method with cell visibility. A more correct light flow is achieved, but the reduction in light that moves around geometry also reduces the indirect illumination in the less visible areas. This can be scaled by adjusting a threshold for the flow of light in the less visible areas.

Figure 6.6: Extending the dynamic irradiance volume method with cell visibility.

Figure 6.7: Omnidirectional shadow maps combined with the contribution kernel.

6.1 Further work

As mentioned earlier in this chapter, several subjects remain for further work on the dynamic irradiance volume method. Reflection of light could be one such subject. While this may prove to be just as incompatible with the method as the directional information about light flow (since light reflection is also a question of the positions of light sources and the reflection points on the geometry), it would nevertheless be an interesting extension, because it would increase the resemblance between the dynamic irradiance volume method and other global illumination techniques like radiosity. While we think a combination of the blur kernel with directional light is incompatible for the previously stated reasons, it is also possible that further research could uncover a compatible approach which we have not covered in our work.

A fairly straightforward extension, which we have not examined due to the time available for the project, is color bleeding. It is possible to provide a cell with information about the geometry that occupies it, and from this information it becomes possible to look up the material of the geometry and perform some overall estimation of how the color of light reflected from the cell should be affected by the material.

It would also be plausible to implement a more correct way of looking up the light intensity values for each vertex, instead of simply performing a tri-linear interpolation of the vertex light intensity. It might make sense to look at the normal of the given vertex, and then look up, interpolate and weigh the result accordingly (a small sketch of this idea is given at the end of this section). This would eliminate vertices being lit up from both sides of the geometry, for example vertices on the surface of a thin wall. Rune Vendler mentioned this in his presentation at the Visionday 2007 [5]. Vendler also mentioned various types of cell occlusion, such as directional occlusion and occlusion resulting in semi-transparent objects. Extensions like this bear a strong resemblance to our visibility extension, as we use occlusion to restrict light flow, and they remain (in our opinion) the most viable way to control the blur kernel.

Finally, we have discussed the applications of dynamic objects. Combining indirect illumination with dynamic objects is generally not desired in other global illumination techniques, as it is not possible to get a converged result while the object moves, which degrades the visual product. The dynamic irradiance volume method is tolerant toward dynamic objects, although the changes in the environment have to take place slowly. This allows the dynamic objects to influence the volume, but sampling the volume for the illumination of the dynamic objects may not be desirable, due to the computation costs and hardware compatibility. Instead a direct illumination approach can be used, which simulates the light intensities and positions in the volume. To get object shadows and reflections, traditional techniques, such as shadow maps, could be used. All of these are then combined in the final presentation of the environment.
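To make the normal-weighted lookup idea slightly more concrete, here is a small C++ sketch. It is only an illustration of the idea under hypothetical names (normalWeightedLookup, and the assumption that the eight surrounding cell intensities and their normalized offset directions are available); it is not an implementation we have tested:

    #include <cmath>

    struct Vec3 { float x, y, z; };

    static float dot(const Vec3& a, const Vec3& b) {
        return a.x * b.x + a.y * b.y + a.z * b.z;
    }

    // Weigh the eight surrounding cell samples by how well the direction
    // toward each cell agrees with the vertex normal, so a vertex on a thin
    // wall ignores light from the cells behind it. 'samples' holds the eight
    // cell intensities and 'offsets' the normalized directions from the
    // vertex to the corresponding cell centers.
    float normalWeightedLookup(const float samples[8], const Vec3 offsets[8],
                               const Vec3& normal) {
        float result = 0.0f, totalWeight = 0.0f;
        for (int i = 0; i < 8; ++i) {
            float w = dot(offsets[i], normal);
            if (w <= 0.0f)
                continue; // only cells on the front side of the surface contribute
            result += samples[i] * w;
            totalWeight += w;
        }
        return totalWeight > 0.0f ? result / totalWeight : 0.0f;
    }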


Chapter 7
Conclusion

As stated in section 1.2, the dynamic irradiance volume method has been fully described, analyzed and implemented, the details of which can be found in chapter 3. A brief review shows that the method is easy to implement, but deviates heavily from existing illumination techniques, and does not, from a theoretical point of view, respect the rendering equation mentioned in section 2.2 or the theory about light preservation mentioned in sections 2.2 and 2.3. Simple modifications to the kernel design in regards to sampling have been found to produce superior visual results compared to a standard blur kernel, and modifications of said designs remain a way of making cosmetic changes to the results. The method has been found to produce visually attractive results, but also comes with inaccuracies, specifically in regards to the light flow, the issues of which are visible to the keen observer.

The inaccurate light flow, which is a question of too much light flowing into areas only reachable by light reflections, has been analyzed and the reasons accounted for. A solution has been devised based on the constraint of a linear computation cost in the number of cells, and the fact that the blur kernel actually produces an omnidirectional light flow. Because of the latter, introducing a light flow which respects directional information has been deemed very difficult, if not impossible, and for this reason an alternative approach has been taken, which instead manipulates the areas which light is allowed to flow in. This achieves the goal of better control over the indirect illumination, but with a limited scope, because the mentioned manipulation only accounts for the nearest light source, a problem which directly relates to the constraint on computation cost.

The full problem, analysis and implementation description can be read in chapter 4. Furthermore, the cell visibility has the unfortunate side effect that too much manipulation of the light flow makes the technique resemble a direct illumination technique, which produces a shadowy side effect. Far better results when creating shadows are achieved with a technique like shadow maps, and the cell visibility should only be seen as an attempt to adjust the illumination of areas only reachable by reflected light. For this reason an optimal result may possibly be achieved by combining a contribution kernel with shadow maps. Alternatively the cell visibility kernel, running with a low grade of occlusion, can be used to adjust the illumination, and the result can then be combined with shadow maps. It would then be necessary to determine if the result is worth the extra computations.

Finally, the question of optimizing the computation speed by implementing the method on the GPU has been described in chapter 5. An implementation has been suggested, and the computation speed has been found to increase by several multiples due to the parallel processing of the data. Furthermore, the GPU implementation allows for a far better level of detail in the environment, which greatly improves the visual results. Due to the large increase in speed, it has been suggested that scaling the computation cost with the number of light sources, while violating a key aspect of the method, may actually be viable because it remedies a major limitation of the cell visibility extension. This has been covered in chapter 6, together with the possible applications of the standard dynamic irradiance method, the cell visibility extension and the GPU implementation.

Figure 7.1: Disco mode enabled


More information

Advanced Computer Graphics CS 563: Screen Space GI Techniques: Real Time

Advanced Computer Graphics CS 563: Screen Space GI Techniques: Real Time Advanced Computer Graphics CS 563: Screen Space GI Techniques: Real Time William DiSanto Computer Science Dept. Worcester Polytechnic Institute (WPI) Overview Deferred Shading Ambient Occlusion Screen

More information

CS770/870 Spring 2017 Radiosity

CS770/870 Spring 2017 Radiosity CS770/870 Spring 2017 Radiosity Greenberg, SIGGRAPH 86 Tutorial Spencer, SIGGRAPH 93 Slide Set, siggraph.org/education/materials/hypergraph/radiosity/radiosity.htm Watt, 3D Computer Graphics -- Third Edition,

More information

Previously... contour or image rendering in 2D

Previously... contour or image rendering in 2D Volume Rendering Visualisation Lecture 10 Taku Komura Institute for Perception, Action & Behaviour School of Informatics Volume Rendering 1 Previously... contour or image rendering in 2D 2D Contour line

More information

CS 325 Computer Graphics

CS 325 Computer Graphics CS 325 Computer Graphics 04 / 02 / 2012 Instructor: Michael Eckmann Today s Topics Questions? Comments? Illumination modelling Ambient, Diffuse, Specular Reflection Surface Rendering / Shading models Flat

More information

CS 465 Program 5: Ray II

CS 465 Program 5: Ray II CS 465 Program 5: Ray II out: Friday 2 November 2007 due: Saturday 1 December 2007 Sunday 2 December 2007 midnight 1 Introduction In the first ray tracing assignment you built a simple ray tracer that

More information

Advanced Real- Time Cel Shading Techniques in OpenGL Adam Hutchins Sean Kim

Advanced Real- Time Cel Shading Techniques in OpenGL Adam Hutchins Sean Kim Advanced Real- Time Cel Shading Techniques in OpenGL Adam Hutchins Sean Kim Cel shading, also known as toon shading, is a non- photorealistic rending technique that has been used in many animations and

More information

Advanced Ray Tracing

Advanced Ray Tracing Advanced Ray Tracing Thanks to Fredo Durand and Barb Cutler The Ray Tree Ni surface normal Ri reflected ray Li shadow ray Ti transmitted (refracted) ray 51 MIT EECS 6.837, Cutler and Durand 1 Ray Tree

More information

S U N G - E U I YO O N, K A I S T R E N D E R I N G F R E E LY A VA I L A B L E O N T H E I N T E R N E T

S U N G - E U I YO O N, K A I S T R E N D E R I N G F R E E LY A VA I L A B L E O N T H E I N T E R N E T S U N G - E U I YO O N, K A I S T R E N D E R I N G F R E E LY A VA I L A B L E O N T H E I N T E R N E T Copyright 2018 Sung-eui Yoon, KAIST freely available on the internet http://sglab.kaist.ac.kr/~sungeui/render

More information

Shadows for Many Lights sounds like it might mean something, but In fact it can mean very different things, that require very different solutions.

Shadows for Many Lights sounds like it might mean something, but In fact it can mean very different things, that require very different solutions. 1 2 Shadows for Many Lights sounds like it might mean something, but In fact it can mean very different things, that require very different solutions. 3 We aim for something like the numbers of lights

More information

Pipeline Operations. CS 4620 Lecture 14

Pipeline Operations. CS 4620 Lecture 14 Pipeline Operations CS 4620 Lecture 14 2014 Steve Marschner 1 Pipeline you are here APPLICATION COMMAND STREAM 3D transformations; shading VERTEX PROCESSING TRANSFORMED GEOMETRY conversion of primitives

More information

Computer Graphics I Lecture 11

Computer Graphics I Lecture 11 15-462 Computer Graphics I Lecture 11 Midterm Review Assignment 3 Movie Midterm Review Midterm Preview February 26, 2002 Frank Pfenning Carnegie Mellon University http://www.cs.cmu.edu/~fp/courses/graphics/

More information

Volume Rendering. Computer Animation and Visualisation Lecture 9. Taku Komura. Institute for Perception, Action & Behaviour School of Informatics

Volume Rendering. Computer Animation and Visualisation Lecture 9. Taku Komura. Institute for Perception, Action & Behaviour School of Informatics Volume Rendering Computer Animation and Visualisation Lecture 9 Taku Komura Institute for Perception, Action & Behaviour School of Informatics Volume Rendering 1 Volume Data Usually, a data uniformly distributed

More information

Volume Illumination, Contouring

Volume Illumination, Contouring Volume Illumination, Contouring Computer Animation and Visualisation Lecture 0 tkomura@inf.ed.ac.uk Institute for Perception, Action & Behaviour School of Informatics Contouring Scaler Data Overview -

More information

CS770/870 Spring 2017 Radiosity

CS770/870 Spring 2017 Radiosity Preview CS770/870 Spring 2017 Radiosity Indirect light models Brief radiosity overview Radiosity details bidirectional reflectance radiosity equation radiosity approximation Implementation details hemicube

More information

Global Illumination. Global Illumination. Direct Illumination vs. Global Illumination. Indirect Illumination. Soft Shadows.

Global Illumination. Global Illumination. Direct Illumination vs. Global Illumination. Indirect Illumination. Soft Shadows. CSCI 480 Computer Graphics Lecture 18 Global Illumination BRDFs Raytracing and Radiosity Subsurface Scattering Photon Mapping [Ch. 13.4-13.5] March 28, 2012 Jernej Barbic University of Southern California

More information

The Rendering Equation and Path Tracing

The Rendering Equation and Path Tracing The Rendering Equation and Path Tracing Louis Feng April 22, 2004 April 21, 2004 Realistic Image Synthesis (Spring 2004) 1 Topics The rendering equation Original form Meaning of the terms Integration Path

More information

CMSC427: Computer Graphics Lecture Notes Last update: November 21, 2014

CMSC427: Computer Graphics Lecture Notes Last update: November 21, 2014 CMSC427: Computer Graphics Lecture Notes Last update: November 21, 2014 TA: Josh Bradley 1 Linear Algebra Review 1.1 Vector Multiplication Suppose we have a vector a = [ x a y a ] T z a. Then for some

More information

coding of various parts showing different features, the possibility of rotation or of hiding covering parts of the object's surface to gain an insight

coding of various parts showing different features, the possibility of rotation or of hiding covering parts of the object's surface to gain an insight Three-Dimensional Object Reconstruction from Layered Spatial Data Michael Dangl and Robert Sablatnig Vienna University of Technology, Institute of Computer Aided Automation, Pattern Recognition and Image

More information

Pipeline Operations. CS 4620 Lecture 10

Pipeline Operations. CS 4620 Lecture 10 Pipeline Operations CS 4620 Lecture 10 2008 Steve Marschner 1 Hidden surface elimination Goal is to figure out which color to make the pixels based on what s in front of what. Hidden surface elimination

More information

S U N G - E U I YO O N, K A I S T R E N D E R I N G F R E E LY A VA I L A B L E O N T H E I N T E R N E T

S U N G - E U I YO O N, K A I S T R E N D E R I N G F R E E LY A VA I L A B L E O N T H E I N T E R N E T S U N G - E U I YO O N, K A I S T R E N D E R I N G F R E E LY A VA I L A B L E O N T H E I N T E R N E T Copyright 2018 Sung-eui Yoon, KAIST freely available on the internet http://sglab.kaist.ac.kr/~sungeui/render

More information

Dominic Filion, Senior Engineer Blizzard Entertainment. Rob McNaughton, Lead Technical Artist Blizzard Entertainment

Dominic Filion, Senior Engineer Blizzard Entertainment. Rob McNaughton, Lead Technical Artist Blizzard Entertainment Dominic Filion, Senior Engineer Blizzard Entertainment Rob McNaughton, Lead Technical Artist Blizzard Entertainment Screen-space techniques Deferred rendering Screen-space ambient occlusion Depth of Field

More information

Computer Graphics Ray Casting. Matthias Teschner

Computer Graphics Ray Casting. Matthias Teschner Computer Graphics Ray Casting Matthias Teschner Outline Context Implicit surfaces Parametric surfaces Combined objects Triangles Axis-aligned boxes Iso-surfaces in grids Summary University of Freiburg

More information

CS354 Computer Graphics Ray Tracing. Qixing Huang Januray 24th 2017

CS354 Computer Graphics Ray Tracing. Qixing Huang Januray 24th 2017 CS354 Computer Graphics Ray Tracing Qixing Huang Januray 24th 2017 Graphics Pipeline Elements of rendering Object Light Material Camera Geometric optics Modern theories of light treat it as both a wave

More information

Local vs. Global Illumination & Radiosity

Local vs. Global Illumination & Radiosity Last Time? Local vs. Global Illumination & Radiosity Ray Casting & Ray-Object Intersection Recursive Ray Tracing Distributed Ray Tracing An early application of radiative heat transfer in stables. Reading

More information

CS 4620 Program 4: Ray II

CS 4620 Program 4: Ray II CS 4620 Program 4: Ray II out: Tuesday 11 November 2008 due: Tuesday 25 November 2008 1 Introduction In the first ray tracing assignment you built a simple ray tracer that handled just the basics. In this

More information

Radiosity. Johns Hopkins Department of Computer Science Course : Rendering Techniques, Professor: Jonathan Cohen

Radiosity. Johns Hopkins Department of Computer Science Course : Rendering Techniques, Professor: Jonathan Cohen Radiosity Radiosity Concept Global computation of diffuse interreflections among scene objects Diffuse lighting changes fairly slowly across a surface Break surfaces up into some number of patches Assume

More information

Volume Illumination and Segmentation

Volume Illumination and Segmentation Volume Illumination and Segmentation Computer Animation and Visualisation Lecture 13 Institute for Perception, Action & Behaviour School of Informatics Overview Volume illumination Segmentation Volume

More information

Midterm Exam Fundamentals of Computer Graphics (COMP 557) Thurs. Feb. 19, 2015 Professor Michael Langer

Midterm Exam Fundamentals of Computer Graphics (COMP 557) Thurs. Feb. 19, 2015 Professor Michael Langer Midterm Exam Fundamentals of Computer Graphics (COMP 557) Thurs. Feb. 19, 2015 Professor Michael Langer The exam consists of 10 questions. There are 2 points per question for a total of 20 points. You

More information

Raytracing & Epsilon. Today. Last Time? Forward Ray Tracing. Does Ray Tracing Simulate Physics? Local Illumination

Raytracing & Epsilon. Today. Last Time? Forward Ray Tracing. Does Ray Tracing Simulate Physics? Local Illumination Raytracing & Epsilon intersects light @ t = 25.2 intersects sphere1 @ t = -0.01 & Monte Carlo Ray Tracing intersects sphere1 @ t = 10.6 Solution: advance the ray start position epsilon distance along the

More information

Wednesday, 26 January 2005, 14:OO - 17:OO h.

Wednesday, 26 January 2005, 14:OO - 17:OO h. Delft University of Technology Faculty Electrical Engineering, Mathematics, and Computer Science Mekelweg 4, Delft TU Delft Examination for Course IN41 5 1-3D Computer Graphics and Virtual Reality Please

More information

Ray Tracing. Computer Graphics CMU /15-662, Fall 2016

Ray Tracing. Computer Graphics CMU /15-662, Fall 2016 Ray Tracing Computer Graphics CMU 15-462/15-662, Fall 2016 Primitive-partitioning vs. space-partitioning acceleration structures Primitive partitioning (bounding volume hierarchy): partitions node s primitives

More information

Effects needed for Realism. Computer Graphics (Fall 2008) Ray Tracing. Ray Tracing: History. Outline

Effects needed for Realism. Computer Graphics (Fall 2008) Ray Tracing. Ray Tracing: History. Outline Computer Graphics (Fall 2008) COMS 4160, Lecture 15: Ray Tracing http://www.cs.columbia.edu/~cs4160 Effects needed for Realism (Soft) Shadows Reflections (Mirrors and Glossy) Transparency (Water, Glass)

More information

Lecture 17: Recursive Ray Tracing. Where is the way where light dwelleth? Job 38:19

Lecture 17: Recursive Ray Tracing. Where is the way where light dwelleth? Job 38:19 Lecture 17: Recursive Ray Tracing Where is the way where light dwelleth? Job 38:19 1. Raster Graphics Typical graphics terminals today are raster displays. A raster display renders a picture scan line

More information

Introduction to Visualization and Computer Graphics

Introduction to Visualization and Computer Graphics Introduction to Visualization and Computer Graphics DH2320, Fall 2015 Prof. Dr. Tino Weinkauf Introduction to Visualization and Computer Graphics Visibility Shading 3D Rendering Geometric Model Color Perspective

More information

Effects needed for Realism. Ray Tracing. Ray Tracing: History. Outline. Foundations of Computer Graphics (Spring 2012)

Effects needed for Realism. Ray Tracing. Ray Tracing: History. Outline. Foundations of Computer Graphics (Spring 2012) Foundations of omputer Graphics (Spring 202) S 84, Lecture 5: Ray Tracing http://inst.eecs.berkeley.edu/~cs84 Effects needed for Realism (Soft) Shadows Reflections (Mirrors and Glossy) Transparency (Water,

More information

Computergrafik. Matthias Zwicker. Herbst 2010

Computergrafik. Matthias Zwicker. Herbst 2010 Computergrafik Matthias Zwicker Universität Bern Herbst 2010 Today Bump mapping Shadows Shadow mapping Shadow mapping in OpenGL Bump mapping Surface detail is often the result of small perturbations in

More information

Computer Graphics. Lecture 10. Global Illumination 1: Ray Tracing and Radiosity. Taku Komura 12/03/15

Computer Graphics. Lecture 10. Global Illumination 1: Ray Tracing and Radiosity. Taku Komura 12/03/15 Computer Graphics Lecture 10 Global Illumination 1: Ray Tracing and Radiosity Taku Komura 1 Rendering techniques Can be classified as Local Illumination techniques Global Illumination techniques Local

More information

Global Illumination CS334. Daniel G. Aliaga Department of Computer Science Purdue University

Global Illumination CS334. Daniel G. Aliaga Department of Computer Science Purdue University Global Illumination CS334 Daniel G. Aliaga Department of Computer Science Purdue University Recall: Lighting and Shading Light sources Point light Models an omnidirectional light source (e.g., a bulb)

More information

Advanced 3D Game Programming with DirectX* 10.0

Advanced 3D Game Programming with DirectX* 10.0 Advanced 3D Game Programming with DirectX* 10.0 Peter Walsh Wordware Publishing, Inc. Acknowledgments Introduction xiii xv Chapter I Windows I A Word about Windows I Hungarian Notation 3 General Windows

More information

Reflection and Shading

Reflection and Shading Reflection and Shading R. J. Renka Department of Computer Science & Engineering University of North Texas 10/19/2015 Light Sources Realistic rendering requires that we model the interaction between light

More information

6.837 Introduction to Computer Graphics Final Exam Tuesday, December 20, :05-12pm Two hand-written sheet of notes (4 pages) allowed 1 SSD [ /17]

6.837 Introduction to Computer Graphics Final Exam Tuesday, December 20, :05-12pm Two hand-written sheet of notes (4 pages) allowed 1 SSD [ /17] 6.837 Introduction to Computer Graphics Final Exam Tuesday, December 20, 2011 9:05-12pm Two hand-written sheet of notes (4 pages) allowed NAME: 1 / 17 2 / 12 3 / 35 4 / 8 5 / 18 Total / 90 1 SSD [ /17]

More information

Computergrafik. Matthias Zwicker Universität Bern Herbst 2016

Computergrafik. Matthias Zwicker Universität Bern Herbst 2016 Computergrafik Matthias Zwicker Universität Bern Herbst 2016 Today More shading Environment maps Reflection mapping Irradiance environment maps Ambient occlusion Reflection and refraction Toon shading

More information

Screen Space Ambient Occlusion TSBK03: Advanced Game Programming

Screen Space Ambient Occlusion TSBK03: Advanced Game Programming Screen Space Ambient Occlusion TSBK03: Advanced Game Programming August Nam-Ki Ek, Oscar Johnson and Ramin Assadi March 5, 2015 This project report discusses our approach of implementing Screen Space Ambient

More information

2D rendering takes a photo of the 2D scene with a virtual camera that selects an axis aligned rectangle from the scene. The photograph is placed into

2D rendering takes a photo of the 2D scene with a virtual camera that selects an axis aligned rectangle from the scene. The photograph is placed into 2D rendering takes a photo of the 2D scene with a virtual camera that selects an axis aligned rectangle from the scene. The photograph is placed into the viewport of the current application window. A pixel

More information

Photorealistic 3D Rendering for VW in Mobile Devices

Photorealistic 3D Rendering for VW in Mobile Devices Abstract University of Arkansas CSCE Department Advanced Virtual Worlds Spring 2013 Photorealistic 3D Rendering for VW in Mobile Devices Rafael Aroxa In the past few years, the demand for high performance

More information