Real-Time Caustics Rendering


Real-Time Caustics Rendering
Master's Thesis at IMM, DTU
Andrei Diakonov, s
IMM-M.Sc.
Revised Edition - March 31st, 2009

Supervisor: Niels Jørgen Christensen
Second Supervisor: Bent Dalgaard Larsen

Technical University of Denmark
Institute of Mathematical Modelling
Building 321, DK-2800 Kongens Lyngby, Denmark
reception@imm.dtu.dk

Abstract

This report focuses on two different methods for rendering physically correct real-time caustics. The first method, known as caustics mapping, targets flat or surface-based caustics, while the second, caustics volumes, as the name suggests, deals with volumetric caustics. Caustics mapping is conceptually similar to shadow mapping and involves creating a caustics texture, which is then rendered at the points where the refracted light rays intersect the scene's geometry. Caustics volumes requires no textures; instead it creates a mesh from the caustics-casting object, which is then extruded to form a volume. Both methods are implemented to run on graphics hardware, leaving the CPU free for other tasks. The implementation of the first method was successful: the algorithm produces good-looking results while boasting high performance. The second method, however, was never implemented correctly, and the volumes it produces do not look realistic. Any future improvement of this investigation should therefore start by getting caustics volumes to work correctly, and by adding support for multiple light sources to the implementation of the caustics mapping algorithm.

Acknowledgments

First of all I would like to thank my supervisor, Niels Jørgen Christensen, for the guidance and advice he provided during the course of this project. I would also like to express my gratitude to Bent Dalgaard Larsen, who helped with the choice of topic during the early stages of the project. A very big thank you goes to Anders Wang Kristensen, who helped me a lot with the technical problems I encountered; thanks to his advice I was able to overcome several implementation issues with the caustics mapping algorithm. Another thank you goes to Mayas Fares, who proof-read this report and pointed out some important mistakes that I made. Last, but not least, I want to thank my friends and family, both in Denmark and abroad, who showed great support, even when I had my moments of doubt. If I forgot to mention someone, I hope that he or she accepts my apology and allows me to make it up by saying "Thank you, sir/ma'am!".

Contents

1 Introduction
2 Theory and Related Work
  2.1 Background Theory
  2.2 Related Work
3 Problem Analysis
4 Analysis of Algorithms
  4.1 Caustics Mapping
  4.2 Caustics Volumes
  4.3 Shadow Mapping
  4.4 Reflections and Refractions
5 Implementation
  5.1 Program Overview
  5.2 Caustics Mapping
  5.3 Caustics Volumes
  5.4 Adding Shadows
  5.5 Reflections and Refractions
6 Results
7 Conclusion
8 Further Work
A Image Credits

Chapter 1: Introduction

Rendering photo-realistic scenes has been the main focus of computer graphics research for many years now. Even with today's advances in computer hardware there are still many challenges in rendering physically correct scenes at high speeds. One of the greatest challenges arises when rendering reflective or refractive objects, because the presence of a visual phenomenon known as caustics is paramount to the physical correctness of the rendered scene. Being visually quite attractive in real life, caustics lift a 3D-rendered scene to a new level of realism.

So what are caustics? Simply put, caustics are complex patterns of bright light that form on non-reflective surfaces in the presence of a reflective or refractive object, like those formed next to a glass of water standing close to a window. More formally, whenever multiple light rays converge on the same point on a surface, they cause that region to become brighter than the surrounding regions. It is these non-uniform distributions of bright and dark regions that are known as caustics. The convergence of the light rays is generally caused by reflective or refractive objects, which, as the name suggests, reflect or refract the rays, causing them to change direction and focus on a single point.

Although the definition above may make rendering caustics seem simple, it is in fact quite a complex process, and it was only in 1986 that an algorithm capable of rendering realistic caustics was invented. However, even with the constant increase in computing power, algorithms that could render caustics could not run in real-time, since they involved complex and computationally expensive light-ray intersection tests against the scene geometry. This is a big problem, because 3D applications like computer games obviously require real-time algorithms. For example, when playing a first-person shooter, one cannot be expected to wait five minutes for the next scene to load. Therefore, in order to make this desirable physical phenomenon, with its high visual appeal, run in real-time, one had to come up with another way of rendering caustics, one which doesn't involve making numerous calculations at each step of the render process.

Figure 1.1: Example of caustics formed by refractive objects

Fortunately, besides algorithms that involve all these light-ray intersection tests, there are image-space algorithms, which do not involve path tracing, intersection testing, etc. Essentially, image-space algorithms allow real-time rendering of various optical effects that are otherwise intolerably slow if rendered using conventional ray tracing techniques. Hence it is possible to incorporate such algorithms in real-time 3D applications such as computer games and virtual reality systems.

As the title of this report suggests, the focus of the entire investigation is real-time algorithms that can be used to render realistic caustics. The final aim is to render an interactive scene that possesses both caustics and shadows. The algorithms used will be implemented on the GPU, thus ensuring that they are able to run in real-time. And since, over the years, there have been many attempts to render caustics, several different algorithms will be examined, so that in the end the best and most efficient one can be chosen.

Chapter 2: Theory and Related Work

2.1 Background Theory

Before taking a closer look at various attempts to implement caustics on computers, let us have a look at the physics behind this optical effect. As mentioned in the introduction, the formal definition of caustics is "the envelope of light rays reflected or refracted by a curved surface or object, or the projection of that envelope of rays on another surface"¹. From this definition it is clear that when rendering physically realistic caustics one has to account for reflection and refraction of light rays; in computer terms, one has to render a light ray that is emitted by a light source, hits a specular surface, gets reflected or refracted, and then hits a diffuse surface, where caustics will be formed. The sections below deal with the theory behind these two phenomena, together with a brief explanation of the physical nature of light in general.

Radiometry

The formal definition of radiometry is the measurement of the intensity of electromagnetic radiation in absolute units. Visible light detected by the human eye is a type of EM radiation with a wavelength between approximately 400 nm and 700 nm. If radiation having a frequency in the visible region of the EM spectrum reflects off an object, like a glass sphere (an object of great importance to this investigation), and then strikes our eyes, this results in our visual perception of the scene. Our brain's visual system processes the multitude of reflected frequencies into different shades and hues, and through this physical phenomenon most people perceive a glass sphere. Hence all things visible are covered by radiometry, which is the basis for all equations used in this project. So let us take a closer look at the most important concepts of radiometry.

¹ Definition from Wikipedia: http://en.wikipedia.org/wiki/Caustic_(optics)

The most basic concept is that of a wave, which is a disturbance that propagates through space and time. Every wave has a certain wavelength, defined as the distance between the repeating units of a propagating wave. Since this investigation deals with light transport, waves of light shall be represented by photons, the basic units of light: light is essentially a large number of photons emitted from the light source. For a certain wavelength $\lambda$ each photon has the energy

\[ e_\lambda = \frac{hc}{\lambda} \tag{2.1} \]

where $h$ is Planck's constant, equal to $6.626 \times 10^{-34}$ J s, and $c$ is the speed of light (299,792,458 m/s). Since the photon is a particle, it gives light rays a physical nature, a concept often used in computer graphics. Thus if a light source emits a large number of photons $n_\lambda$, it has a total radiant energy of

\[ Q_\lambda = e_\lambda n_\lambda \tag{2.2} \]

However, since each color in the EM spectrum has its own wavelength, the radiant energy $Q$ is the integral over all wavelengths:

\[ Q = \int_0^\infty Q_\lambda \, d\lambda \tag{2.3} \]

If the radiant energy is the measure of the total energy of a light source, then radiant flux $\Phi$ is the measure of total power. It is given by

\[ \Phi = \frac{dQ}{dt} \tag{2.4} \]

while its area density is given by

\[ E(x) = \frac{d\Phi}{dA} \tag{2.5} \]

The power per unit area is called radiant exitance when light is emerging from the surface, and irradiance when light is incident on the surface, as defined in the equation above. Light's intensity is measured by radiant intensity:

\[ I(\vec{\omega}) = \frac{d\Phi}{d\vec{\omega}} \tag{2.6} \]

This equation gives the power per unit solid angle (see next section). Finally, radiance and spectral radiance are radiometric measures that describe the amount of light that passes through, or is emitted from, a particular area and falls within a given solid angle in a specified direction. Radiance is defined by

\[ L = \frac{d^2\Phi}{dA \, d\omega \cos\theta} \approx \frac{\Phi}{\Omega A \cos\theta} \tag{2.7} \]

The unit of radiance is watts per steradian per unit projected area. Radiance is useful because it indicates how much of the power emitted by an emitting or reflecting surface will be received by an optical system (in this investigation, the view camera) looking at the surface from some angle of view. In this case, the solid angle of interest is the solid angle subtended by the optical system's entrance pupil. Since the eye is an optical system, radiance is a good indicator of how bright an object will appear.
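As a quick worked example of Equation 2.1 (using standard physical constants, not figures from the report): a green photon with $\lambda = 550$ nm carries

\[ e_\lambda = \frac{hc}{\lambda} = \frac{(6.626 \times 10^{-34}\,\mathrm{J\,s})(2.998 \times 10^8\,\mathrm{m/s})}{550 \times 10^{-9}\,\mathrm{m}} \approx 3.6 \times 10^{-19}\,\mathrm{J} \]

so, by Equation 2.2, a source radiating 100 W at this wavelength alone would emit on the order of $3 \times 10^{20}$ photons per second.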

Solid Angle

The solid angle $\Omega$ is the angle in 3D space that an object subtends² at a point. The solid angle is a measure of how big an object appears to an observer looking from that point. For instance, a small object nearby can subtend the same solid angle as a large object far away. It is proportional to the surface area, $S$, of a projection of that object onto a sphere centered at that point, and inversely proportional to the square of the sphere's radius, $R$. A solid angle is related to the surface of a sphere in the same way an ordinary angle is related to the circumference of a circle. It is a value often used in radiometry and has the unit steradian (sr). Measured from a point inside a sphere, the full solid angle is $4\pi$ sr, and the solid angle subtended at the center of a cube by one of its faces is one-sixth of that, or $2\pi/3$ sr. Illumination from a light source depends on the solid angle $\Omega$ subtended by that light source.

Figure 2.1: Illustration of a solid angle

Figure 2.1 shows an illustration of a solid angle in a hemisphere. The solid angle $\Omega$ subtended by a surface $S$ is defined as the area of a unit sphere covered by the surface's projection onto the sphere. This can be written as

\[ \Omega = \iint_S \frac{\hat{n} \cdot d\vec{A}}{r^2} \tag{2.8} \]

where $\hat{n}$ is the unit vector from the origin, $d\vec{A}$ is the differential area of a surface patch, and $r$ is the distance from the origin to the patch.

² An angle subtended by an arc is one whose two rays pass through the endpoints of the arc.
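As a worked example of Equation 2.8 (not taken from the report): a sphere of radius $R$ viewed from a distance $d$ projects onto the unit sphere as a spherical cap of half-angle $\theta$, with $\sin\theta = R/d$, giving

\[ \Omega = 2\pi(1 - \cos\theta) = 2\pi\left(1 - \sqrt{1 - (R/d)^2}\right) \]

which approaches the hemisphere value of $2\pi$ sr as $d \to R$ and falls off as $\pi R^2/d^2$ when $d \gg R$, matching the intuition above that a small object nearby can subtend the same solid angle as a large object far away.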

Rendering Equation

In computer graphics, the rendering equation is an integral equation in which the equilibrium radiance leaving a point is given as the sum of emitted plus reflected radiance under a geometric optics approximation. It was simultaneously introduced into computer graphics by David Immel [10] and James Kajiya [11] in 1986. Various realistic rendering techniques in computer graphics attempt to solve this equation.

The physical basis for the rendering equation is the law of conservation of energy. Given that $L$ denotes radiance, at each particular position and direction the outgoing light, $L_o$, is the sum of the emitted light, $L_e$, and the reflected light. The reflected light itself is the sum of the incoming light, $L_i$, from all directions, multiplied by the surface reflection and the cosine of the incident angle. The rendering equation can be written as

\[ L_o(x, \vec{\omega}, \lambda, t) = L_e(x, \vec{\omega}, \lambda, t) + \int_\Omega f_r(x, \vec{\omega}', \vec{\omega}, \lambda, t) \, L_i(x, \vec{\omega}', \lambda, t) \, (\vec{\omega}' \cdot \vec{n}) \, d\vec{\omega}' \tag{2.9} \]

where

- $\lambda$ is a particular wavelength of light
- $t$ is time
- $L_o(x, \vec{\omega}, \lambda, t)$ is the total amount of light of wavelength $\lambda$ directed outward along direction $\vec{\omega}$ at time $t$ from a particular position $x$
- $L_e(x, \vec{\omega}, \lambda, t)$ is emitted light
- $\int_\Omega \ldots \, d\vec{\omega}'$ is an integral over a hemisphere of inward directions
- $f_r(x, \vec{\omega}', \vec{\omega}, \lambda, t)$ is the bidirectional reflectance distribution function, the proportion of light reflected from $\vec{\omega}'$ to $\vec{\omega}$ at position $x$, time $t$ and wavelength $\lambda$
- $L_i(x, \vec{\omega}', \lambda, t)$ is light of wavelength $\lambda$ coming inward toward $x$ from direction $\vec{\omega}'$ at time $t$
- $\vec{\omega}' \cdot \vec{n}$ is the attenuation of inward light due to the incident angle

Solving the rendering equation for any given scene is the primary challenge in realistic rendering. There are several approaches to solving the equation, like the radiosity algorithm, path tracing, photon mapping, and Metropolis light transport, among others, but all of them have one thing in common: the goal of rendering a physically realistic scene on a computer.
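To make Equation 2.9 concrete, here is a minimal HLSL-style sketch (not part of the thesis code; all names are illustrative) of the special case most real-time renderers actually evaluate: a single point light, so the integral collapses to one incoming direction, with a Lambertian BRDF $f_r = \text{albedo}/\pi$.

float3 ShadeLambert(float3 x,        // shaded surface position
                    float3 n,        // unit surface normal
                    float3 Le,       // emitted radiance L_e at x
                    float3 lightPos, // position of the point light
                    float3 Li,       // radiance arriving from the light
                    float3 albedo)   // diffuse surface color
{
    float3 wi       = normalize(lightPos - x);  // single incoming direction w'
    float  cosTheta = saturate(dot(n, wi));     // (w' . n) attenuation, clamped
    float3 fr       = albedo / 3.14159265;      // Lambertian BRDF
    return Le + fr * Li * cosTheta;             // one-sample rendering equation
}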

Reflection

Reflection occurs as a result of the change in direction of a wavefront at an interface between two different media, such that the wavefront returns into the medium from which it originated. Although there are many types of reflection (sound, water waves, electromagnetic waves, etc.), this project focuses on light reflection. There are two types of light reflection, specular (that is, mirror-like) and diffuse (that is, not retaining the image, only the energy), depending on the nature of the interface. A mirror provides the most common model for specular light reflection, and typically consists of a glass sheet in front of a metallic coating where the reflection actually occurs. It is also possible for reflection to occur from the surface of transparent media, such as water or glass.

Figure 2.2: Example of light-ray reflection

In Figure 2.2, a light ray PO strikes a vertical mirror at point O, and the reflected ray is OQ. By projecting an imaginary line through point O perpendicular to the mirror, known as the normal, we can measure the angle of incidence, $\theta_i$, and the angle of reflection, $\theta_r$. The law of reflection states that $\theta_i = \theta_r$; in other words, the angle of incidence equals the angle of reflection.

Reflection of light may occur whenever light travels from a medium of a given refractive index into a medium with a different refractive index. In the most general case, a certain fraction of the light is reflected from the interface, and the remainder is refracted. The Fresnel equations can be used to predict how much of the light is reflected and how much is refracted in a given situation.

Figure 2.3: Example of both light-ray reflection and refraction

In Figure 2.3, an incident light ray PO strikes at point O the interface between two media of refractive indices $n_1$ and $n_2$. Part of the ray is reflected as ray OQ and part refracted as ray OS. The angles that the incident, reflected and refracted rays make with the normal of the interface are given as $\theta_i$, $\theta_r$ and $\theta_t$, respectively. The relationship between these angles is given by the law of reflection and Snell's law. The fraction of the incident power that is reflected from the interface is given by the reflection coefficient $R$, and the fraction that is refracted is given by the transmission coefficient $T$. The calculations of $R$ and $T$ depend on the polarization of the incident ray. If the light is polarized with the electric field perpendicular to the plane of Figure 2.3 (s-polarized), the reflection coefficient is given by

\[ R_s = \left[ \frac{\sin(\theta_t - \theta_i)}{\sin(\theta_t + \theta_i)} \right]^2 = \left[ \frac{n_1 \cos\theta_i - n_2 \cos\theta_t}{n_1 \cos\theta_i + n_2 \cos\theta_t} \right]^2 = \left[ \frac{n_1 \cos\theta_i - n_2 \sqrt{1 - \left( \frac{n_1}{n_2} \sin\theta_i \right)^2}}{n_1 \cos\theta_i + n_2 \sqrt{1 - \left( \frac{n_1}{n_2} \sin\theta_i \right)^2}} \right]^2 \tag{2.10} \]

where $\theta_t$ can be derived from $\theta_i$ using Snell's law. If the incident light is polarized in the plane of Figure 2.3 (p-polarized), then $R$ is given by

\[ R_p = \left[ \frac{\tan(\theta_t - \theta_i)}{\tan(\theta_t + \theta_i)} \right]^2 = \left[ \frac{n_1 \cos\theta_t - n_2 \cos\theta_i}{n_1 \cos\theta_t + n_2 \cos\theta_i} \right]^2 = \left[ \frac{n_1 \sqrt{1 - \left( \frac{n_1}{n_2} \sin\theta_i \right)^2} - n_2 \cos\theta_i}{n_1 \sqrt{1 - \left( \frac{n_1}{n_2} \sin\theta_i \right)^2} + n_2 \cos\theta_i} \right]^2 \tag{2.11} \]

The transmission coefficients can then be calculated as

\[ T_s = 1 - R_s \tag{2.12} \]

\[ T_p = 1 - R_p \tag{2.13} \]

If the incident light is unpolarized (containing an equal mix of s- and p-polarizations), the reflection coefficient is

\[ R = \frac{R_s + R_p}{2} \tag{2.14} \]

Relying on these equations it is possible to find out how much light is reflected and how much is refracted.

Refraction

Refraction is the change in direction of a wave due to a change in its speed. This phenomenon can usually be seen when a wave passes from one medium to another. As with reflection above, only refraction of light is of interest to this project.

Figure 2.4: Light-ray refraction in real life

Basically, refraction works in the following way: if a person looks at a straight object, such as a pencil or straw, which is placed at a sharp angle and partially submerged in water (see Figure 2.4), the object appears to bend at the water's surface. This happens due to the bending of light rays as they move from the water to the air. Once the rays reach the eye, the eye traces them back as straight lines (lines of sight). The lines of sight intersect at a higher position than where the actual rays originated, which causes the straw to appear higher, and the water to appear shallower, than it really is.

Refraction is described by Snell's law, which states that the angle of incidence is related to the angle of refraction by

\[ \frac{\sin\theta_i}{\sin\theta_r} = \frac{v_1}{v_2} = \frac{n_2}{n_1} \tag{2.15} \]

where $\theta_i$ and $\theta_r$ are the angles between the normal and the incident and refracted waves, $v_1$ and $v_2$ are the wave velocities through the respective media, and $n_1$, $n_2$ are the refractive indices. In optics, refraction occurs when light waves travel between two media with different refractive indices. At the boundary between the media, the wave's phase velocity is altered, it changes direction, and its wavelength increases or decreases, but its frequency remains constant. For example, a light ray refracts as it enters and leaves glass, and it is due to this effect that caustics form on the surface next to the glass. Air has a refractive index of about 1.0003, water has a refractive index of about 1.33, and glass has an index of refraction of about 1.5 (depending on the type of glass used). Keeping all this in mind, it is possible to account for light refraction on a computer and simulate physically correct caustics.
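As an illustration (a sketch under the equations above, not code from the thesis), the refracted direction and the unpolarized Fresnel reflectance can be computed in HLSL as follows, where eta is the index ratio $n_1/n_2$:

// Refract a unit incident direction i about unit normal n (Snell's law)
// and evaluate the unpolarized Fresnel reflectance R = (Rs + Rp) / 2.
// eta = n1 / n2 (e.g. about 1.0 / 1.5 when entering glass from air).
float3 RefractRay(float3 i, float3 n, float eta, out float R)
{
    float cosI  = -dot(n, i);                       // cos(theta_i)
    float sinT2 = eta * eta * (1.0 - cosI * cosI);  // sin^2(theta_t), Snell
    if (sinT2 > 1.0)                                // total internal reflection
    {
        R = 1.0;
        return reflect(i, n);                       // all light is reflected
    }
    float cosT = sqrt(1.0 - sinT2);                 // cos(theta_t)
    // Equations (2.10) and (2.11), with n1 and n2 folded into eta
    float rs = (eta * cosI - cosT) / (eta * cosI + cosT);
    float rp = (cosI - eta * cosT) / (cosI + eta * cosT);
    R = 0.5 * (rs * rs + rp * rp);                  // unpolarized mix (2.14)
    return eta * i + (eta * cosI - cosT) * n;       // refracted direction
}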

Ray Tracing

Ray tracing is a commonly used technique for generating an image by tracing the path of light through the pixels of an image plane. The technique is capable of producing a very high degree of photorealism, but at a high computational cost. In nature, a light source emits a ray of light which travels until it reaches a surface that interrupts its progress. One can think of this ray as a stream of photons traveling along the same path. In a perfect vacuum (or a computer-simulated environment) this ray will be a straight line. In reality, any combination of four things might happen to this light ray: absorption, reflection, refraction and fluorescence. A surface may reflect all or part of the light ray, in one or more directions. It might also absorb part of the light ray, resulting in a loss of intensity of the reflected and/or refracted light. If the surface has any transparent or translucent properties, it refracts a portion of the light beam into itself in a different direction while absorbing some (or all) of the spectrum (and possibly altering the color).

There are two types of ray tracing: backwards (or eye-based) ray tracing [1], which shoots rays from the eye towards the light source to render an image, and light-based ray tracing, where the rays are shot from the light source (see Figure 2.5). While direct illumination is generally best sampled using backwards ray tracing, an algorithm that casts rays directly from the lights onto reflective objects, tracing their paths to the eye, is usually a better choice when rendering caustics.

Figure 2.5: Ray tracing algorithm

The eye-based ray tracing algorithm is presented below³:

For each pixel in image
{
    Create ray from eyepoint passing through this pixel
    Initialize NearestT to INFINITY and NearestObject to NULL

    For every object in scene
    {
        If ray intersects this object
        {
            If t of intersection is less than NearestT
            {
                Set NearestT to t of the intersection
                Set NearestObject to this object
            }
        }
    }

    If NearestObject is NULL
    {
        Fill this pixel with background color
    }
    Else
    {
        Shoot a ray to each light source to check if in shadow
        If surface is reflective, generate reflection ray: recurse
        If surface is transparent, generate refraction ray: recurse
        Use NearestObject and NearestT to compute shading function
        Fill this pixel with color result of shading function
    }
}

³ Algorithm taken from http://en.wikipedia.org/wiki/Ray_tracing_(graphics)

Although in most cases ray tracing sacrifices speed in the name of physical correctness, it is possible to adapt an algorithm similar to the one above to a real-time environment. Such a real-time adaptation of the ray tracing algorithm will be used in this project.

2.2 Related Work

Having presented the physics behind reflection and refraction, we shall now take a look at various attempts to account for these phenomena on a computer, all in the name of drawing caustics. Rendering realistic caustics has been a hot topic in computer graphics for more than 20 years, and many different algorithms have been created which utilize various techniques to achieve the much-desired caustics effect. Since this report focuses on real-time caustics rendering, mostly attempts at real-time algorithms will be mentioned here, with a few exceptions. Naturally, any implementation of a computer graphics algorithm depends heavily on the graphics hardware it runs on, and since powerful and, more importantly, programmable graphics hardware on a standard PC only became available after the year 2000, the real-time algorithms mentioned below date from that time. But let us first take a look at earlier attempts to recreate caustics on a computer.

One of the very first works published in this field is called Backward Ray Tracing [1]. In this algorithm, simulating specularly reflected or refracted light originating from point light sources involved one or more passes of the backward ray tracer and the construction of illumination maps as a pre-processing step. As seen in Figure 2.6, the algorithm produces rather nice-looking results; however, in its original implementation it will not run in real-time.

Figure 2.6: The Backward Ray Tracing algorithm by [1]

In 1996, ten years after Backward Ray Tracing, a paper called Global Illumination Using Photon Maps [5] was published. It introduced a whole new way of rendering scenes based on the concept of photon maps. By using one high-resolution caustics photon map to render caustics that are visualized directly, and one low-resolution photon map used during scene rendering with a distribution ray tracing algorithm, the method produced really good-looking results, seen in Figure 2.7.

Figure 2.7: The Photon Mapping algorithm implemented by [5]

However, as with Backward Ray Tracing, this algorithm is too slow to run in real-time (it took 56 minutes to render the image in Figure 2.7 at 1280x960 resolution). The two algorithms mentioned above, although lacking performance, make it all up in quality, and the results produced by the photon mapping algorithm will be used as a benchmark for quality comparisons with the real-time algorithms. So now let us see how well the real-time algorithms perform, both quality- and speed-wise.

The Real-Time Caustics algorithm [3] discretizes a specular surface into sample points, where each point is treated as a pinhole camera which projects the image of the incoming light onto receiver surfaces. This algorithm runs in real-time for very simple scenes; however, it does not compute any visibility for the light sources, and only one specular bounce can be handled. The results of this implementation can be seen in Figure 2.8.

Bent Larsen [4] has used Henrik Jensen's [5] photon mapping algorithm, which handles caustics in a natural manner on arbitrary geometry and can also support volumetric caustics in participating media. It uses both the GPU and the CPU to render a nice-looking, but simple, scene in real-time. The main drawback is that as the number of photons is increased, the performance of the algorithm drops drastically and it is no longer able to run in real-time; however, the results it produces do look very attractive, as seen in Figure 2.9.

Figure 2.8: Real-Time Caustics algorithm created by [3]

Figure 2.9: Real-Time Photon Mapping algorithm made by Bent Larsen [4]

In their algorithm, Johannes Gunther, Ingo Wald and Philipp Slusallek [6] suggest using distributed photon maps. The algorithm uses real-time photon mapping combined with ray tracing, implemented to run in parallel on several PCs, making the performance heavily dependent on the number of CPUs and photons used (the authors used 25 CPUs for highest performance). Shortcomings of this implementation are that there is no direct support for non-caustic (global) illumination, and that the hardware requirements are too high. And although the images produced (see Figure 2.10) do look rather nice, even with multiple CPUs the performance cannot be called interactive (the average frame rate for rendered images was around 15 fps).

Figure 2.10: Use of distributed photon maps, as in [6]

Christoffer Sandberg and Tomas Falemo [7] introduce an improvement on the volumetric caustics rendering method, achieved by allowing arbitrary caustic receivers and non-planar light volumes, with all caustic calculations done in hardware shaders. Being implemented on the GPU, the performance of this algorithm depends on the graphics card used. Its main shortcoming is that it does not really support other types of caustics (e.g. glass) and renders no true refraction in water as seen from the eye. The water caustics it produces also lack realism, as Figure 2.11 illustrates, and run extremely slowly (less than 10 fps) at high resolutions.

In his algorithm, Sune Laresen [8] uses photon mapping with accelerated ray tracing, implemented with the DirectX SDK and the HLSL shader language. As in [4], the performance of this algorithm depends on the number of emitted photons, displaying crude caustics for a low number of photons and low speeds for a high number of emitted photons. The results produced by this algorithm, although rendered in real-time, unfortunately do not look all that attractive (see Figure 2.12).

While the graphics community was trying to come up with real-time versions of ray tracing or photon mapping, an entirely new approach called Caustics Mapping [2] was suggested. It works by creating a caustic map texture and then rendering caustics onto a non-shiny surface. Boasting both high performance (from 40 up to 200+ fps) and good image quality (seen in Figure 2.13), this algorithm is, so far, the best attempt at rendering realistic caustics in real-time.

Figure 2.11: Volumetric caustics rendering suggested by [7]

Figure 2.12: Photon Mapping with accelerated ray tracing implemented by [8]

Its only shortcomings are that the method does not extend to volumetric caustics and does not support area lights.

Figure 2.13: The Caustics Mapping algorithm, created by [2]

The latest work published in this area, called Hierarchical Caustic Maps [9], utilizes the geometry processing stage of newer GPUs to avoid processing every photon in the map and to render a pyramidal caustic map, which allows photon splats of varying diameters without inherently increased rasterization costs for large splats. As with all other algorithms which use photons, the performance depends on the photon count and the complexity of the reflective/refractive object; visually, however, the results are very impressive, as demonstrated in Figure 2.14.

Figure 2.14: Example of the Hierarchical Caustic Maps algorithm made by Chris Wyman [9]

The algorithm lacks support for area/environment lights, however, and has no interactive reflection/refraction.

All of the papers mentioned above, although taking different approaches, share a single goal: to render realistic caustics. And even though some algorithms are better than others, both visually and performance-wise, the general tendency is that the overall quality improves with time, since computer hardware is becoming more and more powerful, allowing for faster and more accurate renderings of the visual phenomenon known as caustics.

Chapter 3: Problem Analysis

As mentioned in the introduction, the aim of this project is to render realistic caustics in real-time. Now that the reader is familiar with the various attempts to render caustics, the time has come to present the method which will be used in this project. Of all the algorithms in the previous chapter, Caustics Mapping yielded the best results when one considers both quality and performance. It is therefore this algorithm which will be used to render caustics in this project. However, this will not just be an implementation of an already-existing algorithm: an attempt will be made to solve the two main problems of the Caustics Mapping algorithm, which are the lack of support for multiple light sources and the absence of volumetric caustics.

While Caustics Mapping, based on [2], is quite good at rendering flat (or surface-based) caustics, it is quite tedious to render volumetric caustics with it. That is why, to render volumetric caustics, one needs an entirely new algorithm. In an attempt to add volumetric caustics to the scene, an algorithm called Caustics Volumes, which, to the author's knowledge, has not been tried before, will be used. It is the opinion of the author of this report that the combination of these two algorithms could produce very nice-looking and physically correct caustics; however, the focus will be on rendering surface-based caustics, since volumetric caustics are not a very common phenomenon.

And even though this entire project focuses on rendering caustics, no scene is complete without shadows. So, to finalize the scene, the shadow mapping algorithm, which in several ways is similar to caustics mapping, will be used to add the much-needed shadows and make the resulting scene even more realistic.

Chapter 4: Analysis of Algorithms

This chapter covers the theory behind the two methods of rendering caustics in real-time. Although they share the same purpose and both operate in image space, these methods have several key differences, not only in their implementation but also in the results produced. Where caustics mapping involves creating, as the name suggests, a caustics map and then using it to render caustics, the second method involves creating a caustics mesh for the caustics volume, which is then extruded to form a volume. In theory, the creation of the caustics mesh in the second method allows rendering caustics volumes with relative simplicity, while possessing both high quality and performance. First, the theory behind the caustics mapping method will be discussed in detail, then caustics volumes, and finally the theory behind shadow mapping and cubic environment mapping will be presented.

4.1 Caustics Mapping

As mentioned before, the first method is called caustics mapping, an image-space algorithm first implemented by [2]. The main advantage of the caustics mapping algorithm is that it runs entirely on the GPU, so the calculations needed to render the scene are done on the fly. It can be used for rendering caustics caused both by reflective and by refractive objects without any modifications to the algorithm itself, and it can be combined with popular techniques such as shadow mapping to add shadows to the scene, making it even more realistic. The main drawback of the algorithm is that, in its original implementation, it only supports a single point light source. A natural improvement is therefore to add support for multiple light sources, regardless of whether they are point, area, or even volumetric lights. This, however, might not be as simple as it sounds, since the calculations involved become rather complex when trying to account for more than a single point light. All of this will be discussed below.

The algorithm itself is rather straightforward and somewhat intuitive. It begins with tracing light rays through a refractive object. (The algorithm works just as well with reflections, but the focus of this investigation is caustics caused by refractive objects. One must note, however, that reflections and/or refractions have to be rendered separately.) As mentioned above, the algorithm runs on the GPU, so the tracing of a light ray is also done by the GPU. When a light ray is traced through the object and refracted, its footprints are collected onto an image plane facing the light source to create a caustics map. Caustics are then rendered onto the scene by looking up this caustics map.

Figure 4.1: The caustics mapping algorithm: creation of the caustics map texture using point splatting. Small orange circles represent a photon emitted by the light source and traveling along a light ray (black line).

Basically, the main part of the algorithm involves just three steps: first, shoot a light ray from the light source and refract or reflect this ray when it encounters a specular object; second, find out where the reflected/refracted light ray intersects the scene geometry; third, render the caustics texture at this intersection point. Although the whole process of rendering caustics this way may sound simple, there are a few major problems that need to be addressed.

First of all, the intersection points of the refracted light rays with the scene geometry need to be computed to determine where the actual caustics will form. This has to be done fast enough that real-time performance is not compromised. It means that one needs a function that takes a light ray, refracts it as it passes through an object, and then returns the intersection point with the receiver geometry at virtually no cost, so that one can shoot a light ray from the light source to each point on the surface of the refractive object, perform the refraction, and then use the intersection function to compute where the refracted light ray intersects the surrounding scene (see Figure 4.1). Normally this requires expensive ray-geometry intersection testing, which is not feasible for real-time applications, but caustics mapping uses a much simpler image-space algorithm for estimating this intersection point. Before one can even start estimating the intersection point, however, one has to obtain the 3D positions of the receiver geometry. This is done by rendering the receiver geometry to a positions texture from the light's point of view, outputting 3D world coordinates for each pixel instead of color. This positions texture can then be used for ray-intersection estimation.
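A positions texture of this kind requires only a trivial shader pair. The HLSL sketch below uses illustrative names (it is not the thesis' own PixPosTex code) and assumes a floating-point render target:

struct VSOut
{
    float4 pos      : POSITION;  // clip-space position for the rasterizer
    float3 worldPos : TEXCOORD0; // world-space position, interpolated per pixel
};

VSOut VertPos(float4 pos : POSITION,
              uniform float4x4 world,
              uniform float4x4 lightViewProj)
{
    VSOut o;
    float4 wp  = mul(pos, world);        // object space -> world space
    o.worldPos = wp.xyz;
    o.pos      = mul(wp, lightViewProj); // world space -> light clip space
    return o;
}

float4 PixPos(VSOut i) : COLOR
{
    // Store the 3D world position where a color would normally go; a
    // floating-point texture keeps the values from being clamped to [0,1].
    return float4(i.worldPos, 1.0);
}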

Figure 4.2: Diagram of the intersection estimation algorithm. Solid-lined arrows correspond to the position texture lookups.

Estimating the ray-intersection point is then done in the following way: suppose $v$ is the position of the current vertex and $\vec{r}$ is the normalized refracted light vector. Points along the refracted ray are thus defined as

\[ P = v + d\vec{r} \tag{4.1} \]

where $d$ is the distance from the vertex $v$. To estimate the point of intersection one has to estimate the value of $d$, which is the distance between $v$ and the receiver geometry along $\vec{r}$. An initial value of 1 is assigned to $d$ and a new position, $P_1$, is computed:

\[ P_1 = v + 1 \cdot \vec{r} \tag{4.2} \]

$P_1$ is then projected into the light's view space and used to look up the positions texture found above. The distance, $d'$, between $v$ and the looked-up position is used as the estimate for $d$ in Equation 4.1 to obtain a new point, $P_2$. Finally, $P_2$ is projected into the light's view space and the positions texture is looked up once more to obtain the estimated intersection point. This is illustrated in Figure 4.2 (a shader sketch of this estimation procedure is given at the end of this section).

Having an estimate for the intersection point, we can deposit light into the caustics map at the projection of each ray's intersection point. When several refracted light rays intersect at the same point, multiple light deposits will accumulate in that region, causing it to become illuminated and thus forming caustics. However, this ray-accumulation process presents another challenge. Since the algorithm is implemented completely on the GPU, one cannot write to pixels randomly; they can only be written in the order that they are rasterized.

The pixels therefore have to be rasterized more than once, since one needs to write to certain pixels more than once. The caustics mapping algorithm solves this problem using vertex splatting (also known as point splatting), which simply displaces the vertices of the refractive object along the refracted light direction and renders them as point primitives. As a result of the displacement, multiple vertices can end up in the same position, thus providing the repeated rasterization of the same pixels that is required for the light accumulation. Finally, additive blending is used to sum up the contributions from the individual fragments.

Using vertex splatting to solve the problem above gives rise to another, smaller problem. Since the algorithm works using vertex splatting, the quality of the caustics produced depends heavily on the number of vertices in the refractive object's mesh. For example, if one wants to render caustics from a glass cube, which would generally be modeled using only eight vertices, the end result will be a random, disjoint pattern of light patches. A simple work-around is to tessellate the mesh so that it contains a large number of vertices. However, using highly tessellated meshes in games and other real-time applications is not desirable, especially for simple objects such as a cube. Thus a more elegant solution is utilized: one creates an auxiliary mesh, known as the refractive vertex grid, containing a set of vertices evenly distributed over the surface of the refractive object visible from the light source. This mesh can then be used for the splatting process instead of the original refractive object mesh.

Having solved this last problem with vertex splatting, all the steps of the caustics mapping algorithm have been covered, and one can move on to the next part: rendering volumetric caustics.
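The sketch below shows how the two-lookup intersection estimation of Equations 4.1 and 4.2 might look in an HLSL vertex shader (the thesis performs this in its VertCaustics shader; the names and helper here are illustrative assumptions). It requires the vs_3_0 profile for tex2Dlod:

// Project a world-space point into the light's clip space and convert
// the result to [0,1] texture coordinates for the positions-texture lookup.
float2 LightSpaceUV(float3 p, float4x4 lightViewProj)
{
    float4 clip = mul(float4(p, 1.0), lightViewProj);
    float2 ndc  = clip.xy / clip.w;        // perspective divide
    return float2(0.5, -0.5) * ndc + 0.5;  // NDC -> texture space
}

// Estimate where the refracted ray v + d*r hits the receiver geometry
// using two lookups into the positions texture (Equations 4.1 and 4.2).
float3 EstimateIntersection(float3 v, float3 r,
                            sampler2D positionsTex,
                            float4x4 lightViewProj)
{
    float3 p1   = v + 1.0 * r;                    // Eq. 4.2, initial d = 1
    float3 hit1 = tex2Dlod(positionsTex,
                           float4(LightSpaceUV(p1, lightViewProj), 0, 0)).xyz;
    float  d    = distance(v, hit1);              // refined estimate of d
    float3 p2   = v + d * r;                      // Eq. 4.1 with the new d
    return tex2Dlod(positionsTex,
                    float4(LightSpaceUV(p2, lightViewProj), 0, 0)).xyz;
}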

4.2 Caustics Volumes

If caustics mapping has some similarities with the shadow mapping algorithm, then the second method, caustics volumes, draws inspiration from shadow volumes. The main advantage of this method is that it is able to render volumetric caustics in real-time, just like shadow volumes render shadows in real-time. However, from step one there is a slight problem. When rendering shadow volumes, all geometry that lies within the shadow volume should not be lit by the particular light source. The shadow volume itself consists of three parts: a front cap, a back cap, and the sides. The front and back caps are created from the shadow-casting geometry, i.e. some object, while the sides are usually created by first determining the silhouette edges of the shadow-casting geometry and then generating faces that represent those silhouette edges extruded a large distance along the light direction. This means that the resulting volume is as wide as the object itself when the object (the shadow-casting geometry) is very close to the receiver geometry, and gradually gets wider as the distance between the shadow-casting geometry and the receiver geometry grows. Figure 4.3 shows a shadow volume generated by a dwarf figure.

Figure 4.3: A shadow volume created from occluding geometry

While this provides physically correct results for shadows, it is quite incorrect for caustics. Where the shadow volume should expand with distance, a caustic volume should, on the contrary, get thinner, as seen in Figure 4.4. Therefore, to render physically correct caustics volumes, a work-around must be found. To find such a solution one has to look closer at the shadow volume implementation. When rendering shadows, a mesh that can represent the shadow volume of the shadow-casting geometry regardless of light direction is generated. Once the mesh has been generated, a vertex shader is used to perform vertex extrusion as the shadow volume mesh is rendered. The key word in the solution of the problem is mesh.

Figure 4.4: A physically correct caustic volume

What if, instead of generating the mesh out of the object itself, one used a scaled (downsized) version of the object to create the mesh, and then, instead of performing vertex extrusion, oriented the mesh so that it gets thinner the further away it is from the object (in this case the caustic-casting geometry)? Once this is done, the mesh can be rendered with a caustics texture to make it look illuminated. Of course, this is not strictly physically correct, since light-ray refraction is not taken into account; however, it can produce rather nice-looking results and, more importantly, it is able to run in real-time. Having created the caustics volume, one can combine the result with the caustics mapping algorithm described in Section 4.1, which does account for light refraction and reflection and in itself produces very visually appealing results. The combination of the two caustics rendering algorithms could give a 3D scene that has both volumetric and surface-based caustics.
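One way the described tapering could be expressed in a vertex shader is sketched below. This is speculative, since the report does not give the exact transformation, and all names and the shrink model are assumptions:

// Taper the caustic-volume mesh: vertices are pulled towards the volume's
// central axis the further they are from the caustic-casting object, so
// the extruded volume narrows with distance instead of widening.
float4 VertCausticVolume(float4 pos : POSITION,
                         uniform float4x4 worldViewProj,
                         uniform float3 objectCenter, // center of casting object
                         uniform float3 lightDir,     // normalized light direction
                         uniform float shrinkRate)    // how fast the volume narrows
    : POSITION
{
    float3 fromCenter = pos.xyz - objectCenter;
    float  dist  = dot(fromCenter, lightDir);       // distance along extrusion axis
    float3 axis  = objectCenter + dist * lightDir;  // closest point on the axis
    float  scale = saturate(1.0 - shrinkRate * max(dist, 0.0));
    float3 p     = lerp(axis, pos.xyz, scale);      // pull towards the axis
    return mul(float4(p, 1.0), worldViewProj);
}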

4.3 Shadow Mapping

It is quite natural for anyone to want the best results in whatever he or she is doing, and to get the best results in the current investigation it is vital to add shadows, because no scene is complete without them. There are several techniques available for rendering realistic shadows; however, for the purpose of this investigation only two were considered: shadow mapping and shadow volumes. Both of these techniques run quite well in real-time and can be combined with other algorithms when rendering the final scene. After some consideration, shadow mapping was chosen for rendering the shadows. The main reason is that it is quite similar to caustics mapping, which is the focus of the entire investigation, but it is also simpler to implement and runs entirely on the GPU (at least the version of shadow mapping used in this investigation). Having decided which algorithm to use for rendering shadows, let us take a closer look at the details behind shadow mapping.

The concept of shadow mapping is pretty straightforward (even simpler than caustics mapping). If one looks out from a source of light, all of the objects one can see appear in light, but anything behind those objects is in shadow. One starts by rendering the entire scene to a texture from the light's perspective. This texture is called a shadow map and is generated using vertex and pixel shaders (i.e. generated by the GPU). In the pixel shader, the pixel depth is written instead of the pixel color. The resulting output from this pixel shader can be seen in Figure 4.5.

Figure 4.5: Results of depth map tests

Figure 4.6: Scene rendered using shadow mapping

When the scene is rendered, the distance between each pixel and the light is compared to the corresponding depth saved in the shadow map. If they match, the pixel is not in shadow. If the distance stored in the shadow map is smaller, the pixel is in shadow, and the shader can update the pixel color accordingly. Figure 4.6 shows the final scene rendered with a shadow map.

The most suitable type of light to use with shadow maps is a spotlight, since a shadow map is rendered by projecting the scene onto it. For real-time shadows, this technique is less accurate than shadow volumes, because the accuracy of a shadow map depends on the texture memory allotted to it, while shadow volumes are accurate to the pixel. However, shadow mapping can sometimes be a faster alternative, depending on how much fill time is required for either technique in a particular application. Shadow mapping also does not require the use of an additional stencil buffer, and can sometimes be modified to produce shadows with a soft edge. Unlike shadow volumes, though, the accuracy of a shadow map is limited by its resolution, which, in this investigation, is the same as the resolution of the caustics map. Despite these few downsides, due to its ease of implementation and high performance, it is this algorithm that shall be used for creating shadows in the final scene.
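The depth comparison described above amounts to a few lines of pixel shader code. A minimal HLSL sketch (illustrative names, not the thesis' VertShadow/PixShadow code):

// Shadow test: compare this pixel's light-space depth against the depth
// stored in the shadow map; a small bias avoids self-shadowing acne.
float ShadowFactor(float4 lightClipPos,  // pixel position in light clip space
                   sampler2D shadowMap)
{
    float2 uv = lightClipPos.xy / lightClipPos.w;  // perspective divide
    uv = float2(0.5, -0.5) * uv + 0.5;             // NDC -> texture space
    float pixelDepth  = lightClipPos.z / lightClipPos.w;
    float storedDepth = tex2D(shadowMap, uv).r;    // depth written earlier
    const float bias  = 0.001;
    // A stored depth smaller than the pixel's depth means an occluder
    // lies between the light and this pixel.
    return (storedDepth + bias < pixelDepth) ? 0.0 : 1.0;  // 0 = in shadow
}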

4.4 Reflections and Refractions

Having caustics in a scene is indeed very nice (it is in fact the aim of the entire project), but they will look ridiculous if drawn next to a completely opaque object. So, for the final scene to make sense, it is of the utmost importance to include reflection or refraction, or both (to my knowledge there are no perfect media in nature, i.e. no substances that let 100% of light through, so when rendering, say, glass, some light should always be reflected). For rendering accurate reflections and refractions, a technique known as cubic environment mapping was chosen (yet another mapping technique). Much like shadow or caustics mapping, this technique also runs entirely on the GPU, ensuring fast and efficient rendering.

In cubic environment mapping, the camera is placed inside the reflective/refractive object and the environment surrounding this object is rendered into a cube texture map (much as with shadow maps, except that there the object itself is rendered into the texture as well), so that the object can use the cube map to achieve complex lighting effects without memory-intensive lighting calculations. To achieve more realistic results, the technique uses floating-point textures, because color values in floating-point textures do not get clamped to [0, 1], much like lights in the real world. Unlike traditional textures in integer format, floating-point textures are capable of storing a wide range of color values.

The technique has one downside, however. When rendering to a cube texture, the camera looks out from the center of an object, while in reality a light ray gets reflected/refracted off the edge of that object. This is illustrated in Figure 4.7. For smaller scenes, however, this approximation error is negligible and the rendered reflections/refractions are accurate enough.

Figure 4.7: Figure showing the error in approximation of the reflected light ray

Once in possession of a cube texture, the vertex and pixel shaders are used to calculate the direction of the reflected or refracted light rays.

The final result of cubic environment mapping can be seen in Figure 4.8.

Figure 4.8: A scene rendered using cubic environment mapping

And since the rendering is done by the shaders, it is quite easy to combine reflections and refractions, so that one ends up with a realistic semi-transparent object, which in the real world would produce caustics.
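A pixel shader sketch of such a cube-map lookup is shown below (illustrative only; the thesis' own VertEnvMap/PixEnvMap code is not reproduced here). HLSL's reflect and refract intrinsics implement the laws from Section 2.1; the Schlick-style factor blending the two lookups is an assumption, not the thesis' exact weighting:

float4 PixEnvSketch(float3 worldPos : TEXCOORD0,
                    float3 normal   : TEXCOORD1,
                    uniform float3 eyePos,
                    uniform samplerCUBE envMap) : COLOR
{
    float3 n = normalize(normal);
    float3 v = normalize(worldPos - eyePos);       // view ray, eye -> surface
    float3 reflDir = reflect(v, n);                // law of reflection
    float3 refrDir = refract(v, n, 1.0 / 1.5);     // air -> glass, Snell's law
                                                   // (zero vector on total
                                                   //  internal reflection)
    float3 reflCol = texCUBE(envMap, reflDir).rgb; // cube map lookups
    float3 refrCol = texCUBE(envMap, refrDir).rgb;
    // Schlick-style approximation of the Fresnel mix.
    float f0 = 0.04;
    float fresnel = f0 + (1.0 - f0) * pow(1.0 - saturate(dot(-v, n)), 5.0);
    return float4(lerp(refrCol, reflCol, fresnel), 1.0);
}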

Chapter 5: Implementation

This chapter describes the implementation issues of the two methods for creating caustics, as well as the implementation details behind shadow mapping and cubic environment mapping. As mentioned in Section 4, there are a few differences between the two methods; however, the implementation that yields the visually best results is a combination of the two. To keep things simple, the methods shall be looked at separately, starting with caustics mapping.

Before we get down and dirty with the implementation details, it is worth mentioning that all code is written for DirectX 9 and the vertex and pixel shaders are in HLSL, even though none of the algorithms used have any restrictions on the language used, so one might just as well implement the following in OpenGL and GLSL/Cg. One should also note that instead of using dozens of HLSL shader files, a single effect file is used. It contains all the shaders, constants, samplers and techniques used for compiling the shaders. Although the final effect file has more than 700 lines of code, which is huge for an effect file, it is still a lot easier to use a single large file in combination with DirectX than numerous small ones. Now that we have this clarified, let us move on to the implementation details of all the algorithms mentioned in Section 4.

5.1 Program Overview

To make it easier for the reader to understand certain implementation details of the algorithms, this section describes the structure of the entire program. Since this is quite a large project involving thousands of lines of code, it can be difficult to find the areas where the actual rendering takes place; hopefully, after reading this section, it will be a lot easier to find one's way around the DirectX and HLSL code.

The entire project has only two files containing code: HLSL project.cpp, which contains all the DirectX/C++ code, and HLSL.fx (see Appendix ??), which contains all the shaders. Naturally, the first one is the one that gets compiled and executed.

Although it contains more than 2500 lines of code, there are only a few really relevant functions (the code for these functions is given in Appendix ??). When one runs the program, the first function to be executed is wWinMain; it is here that the callback functions are set, so that if any device changes take place, the application is immediately notified about them. The light and camera parameters are also initialized in this function. Now let us look closer at the callback functions. Even though there are quite a number of them, most are responsible for handling mouse or keyboard events, or for checking whether the hardware on which the application runs supports floating-point textures, and thus are not very relevant to the investigation itself.

The first function that does something important is OnCreateDevice, called immediately after the Direct3D device is created. It is here that the effect file is loaded, along with the object, background and light meshes. Next up is OnResetDevice, which gets called every time the Direct3D device is reset, e.g. the first time the application is run, or when toggling between full-screen and windowed modes. It is here that the refractive vertex grid is set up and all the textures, along with the depth stencil surfaces, are created. To make things easier and use fewer variables, it was decided that all the textures and depth stencil surfaces would have the same resolution. While there are a few textures which are loaded directly from file (like the alpha mask used in the final stage of the caustics mapping algorithm), most textures are created to be used as render targets. They will later be used for caustics mapping, shadow mapping and cubic environment mapping. Finally, it is here that the function which creates the caustic volume (GenerateCausticMesh) is called. The details of how this function actually works are given in Section 5.3.

Finally we come to the function where all rendering actually takes place: OnFrameRender. This function is called every frame and performs all the rendering calls for the scene. The first thing to be rendered is the cube map. Here the cube texture created in OnResetDevice is set as the render target and the corresponding depth stencil surface is set. Once this takes place, everything that surrounds the object to which the cube map is applied must be rendered. This means that both the light source and the background (receiver geometry) must be rendered, so the functions RenderLight and RenderBackground are called. Besides the current Direct3D device, these two functions take the view and projection matrices as arguments (the details of these matrices are given in Section 5.5). The RenderLight function's job is just to draw the light source, so it uses only two shaders: VertLight and PixLight. With RenderBackground, things are not so simple, because it is used in two cases: to render the background as is, and to render the background to a texture which is later used for certain calculations (see the next section). So, besides the two matrices, RenderBackground takes a boolean argument as well. When it is set to false, the function renders the background (receiver geometry) as is, using the VertScene and PixScene shaders, while a true value makes sure that the background is drawn using the VertLightRender and PixPosTex shaders.

This, in fact, is exactly what happens once the cube map has been created and the program moves on to rendering the positions texture used for creating the caustics map. Much as with the cube texture, the positions texture is set as the render target, but now only the background is rendered, and the boolean argument of the RenderBackground function is set to true. Once the positions texture has been rendered, it is the turn of the object texture. The render process is exactly the same as for the positions texture (the object texture is set as the render target, etc.), only now the RenderObject function is called instead of RenderBackground. The two functions are quite similar: where RenderBackground renders the receiver geometry, RenderObject renders the caustics- (and, of course, shadow-) casting object. It, too, takes the Direct3D device, the view and projection matrices, and a boolean variable as arguments. If this boolean is set to true, RenderObject uses the VertLightRender and PixIntersect shaders to render the object; the output of these shaders is the object texture, used to create the caustics map. A false value ensures that the VertEnvMap and PixEnvMap shaders are used, i.e. the object will be rendered with reflections and refractions.

Having created the positions and object textures, it is possible to create the caustics map. For this, the caustics texture created in OnResetDevice is set as the render target and the RenderCaustics function is called. Unlike the three other render functions mentioned above, RenderCaustics does not draw any objects; instead it draws the vertices from the refractive vertex grid. The positions of these vertices are determined in the VertCaustics and PixCaustics shaders. It is also here that the alpha texture is set.

After creating the caustics map, an attempt to create a caustics volume is made. Unlike all the previous render passes, the caustics volume does not use a texture, but instead uses the mesh created in OnResetDevice by the GenerateCausticMesh function; it is here that this mesh is drawn. To make sure that the object which creates the volume, and not just the volume, is drawn, there is a call to the RenderShadow function. This function is like RenderBackground and RenderObject put together, i.e. it renders both the object and the receiver geometry. It, too, has a boolean argument, which, when false, makes it use the VertVolumes and PixVolumes shaders to render the caustics volume.

Once the volume has been rendered, it is time to render the shadow map. Much as with the caustics texture, the shadow texture is used as the render target, and to make sure that both the receiver geometry and the shadow-casting object are drawn, the RenderShadow function is called. Now, however, the boolean value is set to true and RenderShadow uses the VertShadow and PixShadow shaders, so that at the end of this render pass one ends up with a complete shadow map. One should note that once a certain render pass is complete, the corresponding texture is sent to the effect file (e.g. having just created the shadow map, g_pShadowMap is passed to the effect file, where it is used for shadow map look-ups; the same is done for all other textures).

Now that we have the cubic environment map, the caustics map, the shadow map and the caustics volume, it is possible to render the final scene. There are no render targets here, just calls to the RenderBackground, RenderObject and RenderLight functions.
The view matrix used is the view camera s view matrix and the projection matrix is the view camera s projection matrix. Having done 41

Having done that, one ends up with a complete scene, with all the geometry, lights, shadows and, of course, caustics. An overview of all the functions and shaders mentioned above can be seen in Figure 5.1. In this figure the colored lines indicate which render function and corresponding shaders are used for rendering to a specific texture (e.g. when rendering to the positions texture (yellow line in the figure), one uses the RenderBackground function and calls the VertLightRender and PixPosTex shaders).

Figure 5.1: Schematic overview of the main functions in the HLSL project.cpp class

This concludes the overview of the HLSL project.cpp file, and although there are quite a number of other functions, their jobs are handling events, updating the frame and destroying the Direct3D device. The HLSL.fx effect file contains all the shaders; however, they are described in detail in the following sections, so there will be no in-depth analysis of them here. There are a few things worth mentioning, though. Firstly, all the global variables (with the exception of the ambient light) declared in the effect file are passed from the main.cpp file, so these variables are calculated in the main program, not in any of the shaders. Secondly, the shaders themselves are compiled inside the effect file using techniques (it is not the shaders themselves that are called from the DirectX code, but these techniques, which contain the compiled vertex and pixel shaders), so when the file is loaded, all the shaders are automatically compiled. This, in fact, is a slightly annoying feature, since if there is an error in one of the shaders, one has to check all of them to find it, because otherwise the application fails to start altogether.
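To give an idea of what such a technique looks like, below is a minimal sketch of a declaration as it could appear in the effect file (the technique name, shader model versions and render states here are assumptions for illustration, not the exact contents of HLSL.fx):

    technique RenderCausticsMap
    {
        pass P0
        {
            // The technique binds the compiled shader pair for one render pass.
            VertexShader     = compile vs_3_0 VertCaustics();
            PixelShader      = compile ps_3_0 PixCaustics();

            // Additive blending lets overlapping splats accumulate brightness.
            AlphaBlendEnable = true;
            SrcBlend         = One;
            DestBlend        = One;
        }
    }

The DirectX code then selects a technique like this by name and renders with it, which is also why a compilation error in any single shader prevents the whole file from loading.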

Now that the reader is familiar with the general implementation details, it is possible to move on to the implementation details of the various algorithms used in this investigation.

5.2 Caustics Mapping

After reading Section 4.1, one might think that the caustics mapping algorithm is rather tedious to implement. However, the algorithm has only three main steps, namely:

- Set up the refractive vertex grid.
- Create the caustics map.
- Render caustics on the receiver geometry using the caustics map.

It has already been mentioned that the caustics mapping algorithm bears a striking structural similarity to shadow mapping, with the exception of the first step. So let us look closer at each of the steps.

First, a refractive vertex grid has to be created. As mentioned in Section 4.1, this is simply a set of vertices that will be splatted wherever rays of light hit the receiver geometry. Splatting means exactly what it sounds like: one can think of it as a paint ball being thrown at a wall, thus creating a splat on it. In our case, the paint ball is a vertex which represents a photon (or a number of photons) emitted by the light source, and the wall is the receiver geometry, i.e., the surfaces on which the caustics can be formed. The idea is that if multiple vertices, traveling along the light rays, end up getting splatted at the same point due to refraction, that region will become brighter, and this is exactly what is needed for rendering caustics. The results of this splatting are stored in a texture called the caustics map. Then, in the final rendering stage, when the actual caustics are drawn, each point on the receiver geometry is projected into the caustics map texture to determine the amount of caustics it receives, if any.

It is now time to get down and dirty with the implementation details. To make it easier for the reader to understand how the algorithm is brought from theory to practice, each step will be looked at separately and we shall consider what needs to be done. But before we set off, the scene that is to be rendered has to be defined. First of all, a refractive (or reflective) object that will cause the light rays to converge, thus forming caustics, has to be positioned somewhere inside the scene. Second, there has to be a surface, or receiver geometry, on which the caustics will be formed. This surface obviously has to be close enough to the object to get any caustics at all. A good starting point would be to use a sphere for the refractive object and a flat plane underneath it as the caustic receiver geometry. As for lighting, instead of a point light source, a spotlight will be used. Adding several light sources would complicate things, so, for starters, let us look at a scene with just a single light.

Having configured the scene, we can start with the implementation of the first step of the algorithm: setting up the refractive vertex grid. The goal of the refractive vertex grid is to provide a uniform distribution of points on the surface of the refractive object visible from the light source. This can be achieved by rendering the refractive object from the light's point of view onto a texture of a certain resolution, but instead of outputting color, the 3D world positions are output at each pixel. If a rectangular vertex grid of the same resolution is then created, one ends up with a one-to-one correspondence between the pixels of the texture and the vertices of the grid. And since the texture contains the 3D positions of the points on the surface of the refractive object instead of color, we essentially end up with a 3D position for each grid vertex. One should note that this is the only part of the algorithm which does not utilize shaders - the refractive vertex grid is set up in the DirectX code and is thus created using the CPU.

This concludes the first part of the implementation - a set of vertices evenly distributed over the surface of the refractive object has been created, so now it is possible to proceed with the rest of the algorithm. But before proceeding to calculating the intersection points of the light rays with the receiver geometry, it is possible to introduce an improvement to the algorithm. Section 4.1 states that the whole point of the refractive vertex grid is to splat the vertices at the points of intersection of the refracted light rays and the receiver geometry. So instead of points on the surface of the refractive object, why not improve the process and render the final intersection points onto the texture and apply those to the grid, as seen in Figure 5.2? And that is exactly what this improvement does.

Figure 5.2: Final splat locations for the vertices in the refractive vertex grid.

Having set up the refractive vertex grid, the positions where the vertices will be splatted need to be determined in order to create the caustics map itself. These positions are the intersection points of the refracted light rays with the receiver geometry. However, this being an image-space algorithm, there will be no expensive (performance-wise, that is) ray-geometry intersection tests; instead an image-space approximation technique shall be used. This means that a function that can find these intersection points is required, and this is how it works:

- From the light's point of view, render the receiver geometry (do not render the refractive object) onto a texture. Instead of outputting color at every pixel, the 3D coordinates in world space are output. This will be referred to as the positions texture.
- From the light's point of view, render the refractive object onto a texture (referred to as the object texture).
- At every pixel, compute the refracted light ray and estimate its intersection point with the receiver geometry using the positions texture. Output a texture containing the 3D coordinates in world space of the intersection point.

One starts by creating several textures in DirectX, which will later be used as render targets. The textures are of the A32B32G32R32F format (32-bit floating point per channel). For caustics mapping three textures are used - one for the caustics map itself, one for the positions texture and one for the object texture (see below). Having created the textures, the actual rendering can take place. For implementing the three steps mentioned above, three shaders are used - a single vertex shader called VertLightRender, which places the camera inside the light source (remember that both the positions texture and the object texture have to be rendered from the light's point of view), and two pixel shaders - PixPosTex and PixIntersectPts, one for each of the two render passes mentioned above. In the first step the receiver geometry is rendered with the positions texture as the render target, and the PixPosTex shader outputs the 3D coordinates in world space. Then the caustics-casting object is rendered with the object texture as the render target. The PixIntersectPts shader then outputs the 3D coordinates in world space of the intersection point. To get these coordinates it uses an intersection function called raygeo. This intersection function implements an iterative root-finding method similar to the Newton-Raphson method. The basic idea is that the problem of finding the intersection point is posed as the problem of finding the root of a mathematical function. The function in our case is defined by the positions texture. Therefore, the root of the function in question is where a given light ray intersects the positions texture, which is essentially the intersection point that has to be found. There are in fact many root-finding methods out there, such as the Secant method, the Bisection method, etc., and they differ in accuracy, robustness, and the number of iterations it takes to converge to the solution. Any one of these methods can be used for the intersection estimation. After this step (recall that there are three steps all in all and this is the second one), one ends up with a texture containing the intersection points of the refracted light rays and the receiver geometry.

This concludes the main part of the caustics mapping algorithm; everything after this is just simple point rendering with regular texture mapping. In order to create the caustics map, all one needs to do now is splat the vertices from the refractive vertex grid at the intersection points that were just found. This is achieved by rendering the grid as point primitives using additive alpha blending onto a texture, i.e. the caustics map. In DirectX, point primitives are rendered as screen-aligned quads. Therefore, if they are used for splatting, the splats will look like squares and create some unwanted visual artifacts. However, one can apply an alpha mask with a Gaussian falloff to the point primitives so that they have a nice circular shape with a smooth gradient at the edges. An important step is to choose the right size of the point primitives, because if they are too small, one ends up with a lot of dots all over the screen (depending on the resolution of the refractive vertex grid), while if they are too large, the caustics become too vague and dim.

Moving on to the next step in the implementation, one starts by setting the caustics texture as the render target and then calling a function which draws the refractive vertex grid utilizing the VertCaustics vertex shader, where the intersection point is looked up from the texture created in the previous pass using the image-space intersection approximation. This point is where the point primitive will be rendered. However, since the point stored in the texture is in world space, it must first be projected into the light's view space so that it can be stored in the caustics map. The vertex shader output also consists of the light contribution from the current vertex. This is computed by dividing the total light intensity by the resolution of the refractive vertex grid. It is absolutely vital to get the light contribution right, since if it is too high, the caustics will end up as white areas with sharp edges on the receiver geometry, while a low light contribution will give uniformly grey caustics. In the pixel shader (PixCaustics), the interpolated light contribution is output, which gets accumulated when multiple point primitives overlap the same pixels (the more overlaps there are, the brighter the resulting caustics will become). This accumulation takes place thanks to additive alpha blending, which can be enabled in the effect file containing the shader. At the end of this render pass, one ends up with the caustics map texture.

All that is left to do now is perform the final rendering of the 3D scene (with caustics and shadows, naturally) and display it on the screen. For this step, one has to render the scene as one normally would. The only exception is that when computing lighting on the receiver surfaces, there is an additional light source besides the spotlight: the caustics map. Therefore, the total incident light at a point is given by the light source and the caustics falling on that point, if any. To account for this one needs a function called get_caustics that takes the current 3D point in world space, projects it into the light's view space, and finally looks up the caustics map texture. If the point is outside the cone of light from the spotlight, the function returns zero; otherwise the amount of light falling on that point due to caustics is output. In the pixel shader, PixScene, shadow tests are made besides the caustics look-ups (see Section 5.4 for more details), to avoid using extra shaders and so that the final scene is rendered with all features at the same time.

This concludes the implementation of the caustics mapping algorithm, and assuming everything went well in all the steps that were covered, one should end up with a visually appealing picture with caustics.
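To make the splatting step more concrete, below is a minimal sketch of a vertex shader in the spirit of VertCaustics. The variable names, the fixed point size and the use of a vertex texture fetch (tex2Dlod, which requires vs_3_0) are assumptions for illustration, not the exact code of this project:

    float4x4 g_mLightViewProj;   // light's view * projection matrix (assumed name)
    float    g_fLightIntensity;  // total intensity of the spotlight (assumed name)
    float    g_fGridRes;         // width (= height) of the refractive vertex grid
    texture  g_txIntersect;      // intersection points from the previous pass

    sampler IntersectSampler = sampler_state { Texture = <g_txIntersect>; };

    void VertCausticsSketch( float2 gridUV     : TEXCOORD0,
                             out float4 oPos   : POSITION,
                             out float  oSize  : PSIZE,
                             out float4 oLight : COLOR0 )
    {
        // Fetch the world-space intersection point estimated for this grid vertex.
        float4 worldPos = tex2Dlod( IntersectSampler, float4( gridUV, 0, 0 ) );

        // Project it into the light's view space so the splat lands in the caustics map.
        oPos = mul( float4( worldPos.xyz, 1.0f ), g_mLightViewProj );

        // The point size controls the splat radius (too small gives isolated
        // dots, too large washes the caustics out).
        oSize = 4.0f;

        // Each grid vertex carries an equal share of the total light energy.
        oLight = g_fLightIntensity / ( g_fGridRes * g_fGridRes );
    }

The pixel shader then simply multiplies this contribution by the Gaussian alpha mask and lets additive blending do the accumulation.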

5.3 Caustics Volumes

As has been mentioned before, this method draws its inspiration from shadow volumes [12], so its implementation is practically the same. The one key difference is that the mesh generated for the caustics volume is scaled down, so that the resulting volume is smaller than the corresponding shadow volume for the same object. However, it is obvious that simply scaling the mesh does not produce nice-looking volumetric caustics, so there is a little bit more to it. This section describes the steps needed to render volumetric caustics.

In Section 4.2 it has been mentioned that one needs to create a mesh which will represent the caustic volume. So instead of determining the silhouette and generating the caustic volume geometry on the CPU, a mesh that can represent the caustic volume of the occluding geometry regardless of light direction shall first be generated. This process of generating a volume takes place in a function called GenerateCausticMesh. Once this has been done, a vertex shader will be used to perform vertex extrusion. This will stretch the caustic volume, but unlike shadow volumes, the vertices are not extruded to infinity. Sounds simple? Not quite - unfortunately there are a few problems involved with generating volumes. While there is no problem with the front and back caps of the volume, a problem occurs at silhouette edges, where one triangle faces the light and its neighbor faces away from it. In this situation, each triangle causes its vertices to be processed differently, since the triangles facing the light stay where they are, while the ones facing away from the light will be extruded. Thus, one must determine how the vertices that are shared by the two triangles should be handled. To solve this issue, the two triangles are split by duplicating the shared vertices, so that each triangle has its own unique three vertices. When the common edge between the triangles becomes a silhouette edge, one triangle stays where it is and the other moves along the light direction. Because the triangles now possess their own unique set of vertices, moving one of them along the light's direction will create a gap, but a closed volume cannot have any gaps or holes. This can be fixed by adding a quad to the caustic volume mesh between the two triangles. The edge that is shared by the two triangles gets split, and then the four vertices define the quad. Before vertex extrusion, the quad is degenerate because the triangles are next to each other. However, when the triangles are far apart, the quad is stretched and automatically forms the side of the caustic volume. Figure 5.3 illustrates the process of splitting two triangle faces.

Figure 5.3: The process of splitting two faces

The biggest advantage of generating a static mesh for the caustic volume and then having a vertex shader extrude its vertices is that very few CPU cycles are required to render the actual caustics. The caustic volume mesh, once generated, has to be scaled so that the volume caustics produced will look realistic. No other changes are needed, regardless of where the light is positioned, because the vertex shader can extrude the vertices in the correct direction as it receives them.

The GenerateCausticMesh function mentioned above takes an input mesh and outputs a different, scaled mesh that represents the caustic volume for the input mesh. There are several things that this function does in order to generate the proper volume for the input mesh. As mentioned above, for every edge in the input mesh, the function must split it up into two edges, effectively separating the two faces that share the edge. Then, it creates a quad (or two triangles) that connects the two split edges. Figure 5.4 visualizes this process. By default, these quads are degenerate, because the split edges are co-linear. However, when one face is extruded and the other is not, the quad between them gets stretched and forms the side of the caustic volume.

Figure 5.4: Green mesh faces are split, and then red quads are inserted.

The function that creates the caustic volume mesh works by iterating through the faces in the input mesh. For each face iterated, three things happen. First, three new vertices and one new face are generated for the caustic volume mesh. Each face must have its own unique three vertices because the faces are separated by degenerate quads. Then, the normals of the new vertices are set to the normal of the new face, as illustrated in Figure 5.5. The reason this is necessary is that vertex extrusion is done by a vertex shader, and vertex shaders only see vertex normals, not face normals. By setting the vertex normals to match their face normals, the vertex shader will correctly extrude vertices when the faces they belong to are facing away from the light. Finally, the three edges of the face are added to an edge mapping table. An edge mapping entry contains one source edge, representing the edge in the input mesh, and two output edges, representing the split edges in the output mesh. Essentially, the table records the edges in the source mesh and the edges they split into in the output mesh. This information is needed when the quads are generated later. For each edge of the added face, the function looks through the edge mapping table, and if it cannot find an existing entry for the source edge, it creates one and initializes the source edge and one output edge of the edge mapping entry. However, if it finds that the source edge already has an entry in the table, then it has the four vertices of the quad for this edge, so it adds the two faces of the quad to the output mesh and removes the edge mapping entry from the table.

Figure 5.5: Vertex normals are set up to be identical to the face normals.

At this point, the output mesh contains all of the faces that are in the input mesh, and every edge in the input mesh has been converted to a quad in the output mesh. There is also a list of edges in the mapping table representing the edges that are not shared in the input mesh. The existence of these edges implies that the input mesh has holes in it, and the holes must be patched so that the caustic mesh becomes a closed volume. The patching algorithm looks through the mapping table and finds two edges that share a vertex in the original mesh. Then it patches the hole by generating a new face and three new vertices out of the two neighboring edges' vertices. After that, the code generates two quads to connect the patch face to the existing geometry of the output mesh. This process is illustrated in Figure 5.6.

Figure 5.6: Creating a closed volume shadow mesh.

Having created the mesh that will represent the caustic volume, it is now possible to draw the volume itself. This is where the implementation starts to differ from the shadow volume one. While all of the above (i.e. volume mesh generation) takes place in the DirectX code, the mesh is drawn using shaders, and this is where the trouble starts. First of all, unlike with shadow volumes, the vertices in the caustics volume mesh should not be extruded to infinity, but, on the contrary, should be quite finite. Secondly, a caustic volume should not become wider the further away it is from the casting object, but narrower. Unfortunately these two issues were not solved to the full extent. The distance of extrusion can be limited using the function from the caustics mapping algorithm that calculates the distance between the object and the receiver geometry. However, using this function makes the volume stretch all the way to the surface, and no alternative solution has yet been found. With the second problem things look even more grim - the caustic volume is generated in such a way that it becomes wider with greater distance from the object. Unfortunately, as of now, no solution to this problem has been found, so physically-correct caustic volumes remain something to be improved on.
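For reference, below is a minimal sketch of the kind of extrusion the volume vertex shader performs. The variable names and the fixed extrusion distance are assumptions for illustration; as described above, the actual implementation attempts to derive the extrusion distance from the receiver geometry instead:

    float4x4 g_mWorldViewProj;  // combined world-view-projection matrix (assumed name)
    float3   g_vLightPos;       // light position in object space (assumed name)
    float    g_fExtrudeDist;    // finite extrusion distance, unlike shadow volumes

    void VertVolumeSketch( float3 pos    : POSITION,
                           float3 normal : NORMAL,  // equal to the face normal by construction
                           out float4 oPos : POSITION )
    {
        float3 lightDir = normalize( pos - g_vLightPos );

        // Vertices of faces turned away from the light are pushed along the light
        // direction; the degenerate quads between split edges stretch to form the sides.
        if( dot( normal, -lightDir ) < 0.0f )
            pos += lightDir * g_fExtrudeDist;

        oPos = mul( float4( pos, 1.0f ), g_mWorldViewProj );
    }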

5.4 Adding Shadows

Having taken care of caustics, it is a good idea to add shadows to make the final scene more realistic. And, as has been mentioned before, since caustics mapping shares a lot in common with shadow mapping, it is this algorithm that is used to render shadows [14]. This is how it works. First, the shadow map is constructed by rendering the scene with the shadow map texture as the render target. The shaders used for this process are VertShadow and PixShadow. VertShadow transforms the input coordinates to projected light space (the projected coordinates as if the camera were looking out from the light) and then passes the projected z and w to the pixel shader as texture coordinates, so that the pixel shader has a unique z and w for each pixel. Then PixShadow outputs z/w to the render target. This value represents the depth of the pixel in the scene and ranges from 0 to 1: at the near clip plane it is 0, and at the far clip plane it is 1. When the rendering completes, the shadow map contains the depth values for each pixel and can be used for rendering the final scene.

As mentioned in Section 5.2, the final scene is rendered with shadows and caustics simultaneously. It takes place in two shaders: VertScene and PixScene. VertScene transforms the view position to projected coordinates and passes the texture coordinates to the pixel shader. In addition, it outputs the vertex coordinates in view space, the vertex normal in view space, and the vertex coordinates in projected light space. The first two are used for the lighting computation, while the vertex coordinates in projected light space are the shadow map (and caustics map) coordinates. These coordinates are obtained by transforming the world position to light view space using a view matrix as if the camera were looking out from the light, and then transforming the position by the projection matrix for the shadow map.

Besides looking up the caustics texture using the get_caustics function, the pixel shader tests each pixel to see if it is in shadow. First, a pixel is tested to see if it is within the cone of light from the spotlight, using the dot product of the light direction and the light-to-pixel vector. If the pixel is within the cone, the shader checks to see if it is in shadow. This is done by converting the range of PosLight (to between 0 and 1, to match the texture address range), inverting y (the positive y direction is down rather than up when addressing a texture), and then using the coordinate to perform a shadow map look-up. For each texture look-up, the pixel shader does 2-by-2 percentage closer filtering (by fetching from each of the four closest texels). For each texel fetched, the texel value is compared to the current pixel depth from the light, PosLight.z/PosLight.w. If the value from the shadow map is smaller, then the pixel is in shadow and the lighting amount for this texel is 0; otherwise, the lighting amount for this texel is 1. After this is done for all four texels, a bilinear interpolation is done to calculate the lighting factor for the pixel. The light contribution is scaled by this factor to provide the darkening effect of the shadow and is at the same time combined with the contribution from the additional light source - the caustics map.
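A minimal sketch of the core of this shadow test is given below, reduced to a single texel fetch instead of the 2-by-2 percentage closer filtering described above; the sampler name and the small depth bias are assumptions for illustration:

    sampler2D ShadowSampler;  // samples the shadow map (assumed name)

    float ShadowFactorSketch( float4 posLight )
    {
        // Perspective divide, then map from [-1,1] clip space to [0,1] texture
        // space, inverting y because texture addressing grows downward.
        float2 uv    = posLight.xy / posLight.w * float2( 0.5f, -0.5f ) + 0.5f;
        float  depth = posLight.z / posLight.w;

        // Compare the stored depth to this pixel's depth from the light; a small
        // bias (an assumption here) guards against self-shadowing artifacts.
        float stored = tex2D( ShadowSampler, uv ).r;
        return ( stored + 0.001f < depth ) ? 0.0f : 1.0f;
    }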

5.5 Reflections And Refractions

Having taken care of caustics and shadows, one thing still needs to be implemented, and that is making the caustics-casting object reflective and refractive. As stated in Section 4.4, a technique called cubic environment mapping [13] was chosen to make reflections and refractions possible. Below is a brief overview of how this technique works.

As the program loads, it creates a whole range of textures, one of which is the cube texture in the A32B32G32R32F format (32 bits per channel). When rendering, this texture is used to construct an environment map. A stencil surface with a size equal to the size of a cube texture face is also created at this time. This stencil surface will be used as the stencil buffer when the application renders the scene onto the cube texture.

Rendering itself takes place in two functions, OnFrameRender and RenderObject, using vertex and pixel shaders. When creating the cube map in the OnFrameRender function, the entire scene (minus the environment-mapped mesh) is rendered onto the cube texture. First, the function saves the current render target and stencil buffer, and sets the stencil surface for the cube texture as the device stencil buffer. Next, it iterates through the six faces of the cube texture. For each face, it sets the appropriate face surface as the render target. Then, it computes the view matrix to use for that particular face, with the camera at the origin looking out in the direction of the cube face. It then calls the RenderLight and RenderBackground functions, passing along the computed view matrix and a special projection matrix. This projection matrix has a 90 degree field of view and an aspect ratio of 1, since the render target is a square (a single face of a cube). After this process is complete for all six faces, the function restores the old render target and stencil buffer. The environment map is now fully constructed for the frame.

The shaders used for rendering the cubic environment map are called VertEnvMap and PixEnvMap. They are called from the RenderObject function. First, the vertex shader transforms the position from object space to screen space; then it computes the eye reflection vector (the reflection of the eye-to-vertex vector) in view space and the light-to-pixel vector. These two vectors are used to calculate the reflection and refraction vectors. HLSL has built-in functions that can calculate both reflection and refraction; however, they do not always work correctly, so the reflection and refraction vectors are calculated by hand, using the calculations presented in the theory section of this report. These vectors are then passed on to the pixel shader, which uses the cube map and one of the two vectors to make a texture look-up and then outputs the reflected or refracted color (depending on which of the two vectors was used to make the look-up). The final output of the shader is a combination of reflection and refraction, each scaled by factors that are determined using a Fresnel approximation.
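To make the by-hand calculation concrete, below is a minimal sketch of the two vector computations, assuming I is the unit incident direction pointing towards the surface, N is the unit surface normal, and eta is the ratio of refractive indices (roughly 1.0/1.5 when going from air into glass); the function names are illustrative:

    float3 ReflectSketch( float3 I, float3 N )
    {
        // Mirror the incident direction about the surface normal.
        return I - 2.0f * dot( N, I ) * N;
    }

    float3 RefractSketch( float3 I, float3 N, float eta )
    {
        // Snell's law in vector form; k < 0 signals total internal reflection.
        float cosi = dot( -I, N );
        float k    = 1.0f - eta * eta * ( 1.0f - cosi * cosi );
        if( k < 0.0f )
            return ReflectSketch( I, N );  // fall back to pure reflection
        return eta * I + ( eta * cosi - sqrt( k ) ) * N;
    }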


Chapter 6 Results

Now that the reader is familiar with the implementation details, the time has come to see where this implementation has led us. This section will focus on the results produced by the two different methods described in the previous sections. The hardware used for testing was the following:

- CPU - Intel Core 2 Duo 8500 running at 3160 MHz
- Memory - 2GB of DDR3 RAM
- Graphics card - ATI Radeon HD 4850 with 512 MB of memory and a GPU clock of 625 MHz
- Operating system - Windows Vista Home 32-bit
- Screen resolution - x1200 (for full screen tests)

(Alternatively, tests were run on an Nvidia GeForce 7800GT, which produced no visible caustics. This must be due to some limitation of the graphics card, since the algorithm worked fine on a newer GeForce 8800GTX.)

All of the tests were carried out in a confined space (basically a big box) with textured surfaces. The caustics-casting object itself was made to be both reflective and refractive, and in most screenshots presented below one will see more reflection than refraction. Unfortunately, due to lack of time and a range of technical problems, the goal of having multiple light sources never came to pass, so all testing was done with a single spot light source. Finally, the number of objects varied from test to test; however, in the default application (included on the CD - see Appendix ??) there is only a single object.

Caustics Mapping

So let us take a look at the first method, i.e. caustics mapping. It was this method that posed the biggest challenge during the entire investigation; however, it is also the main focus, since, as has been mentioned before, flat or surface-based caustics are an everyday phenomenon, while volumetric caustics are somewhat rarer. As one can surely recall, caustics mapping involves projecting a caustics texture onto the receiver geometry. This caustics texture is created by splatting multiple vertices at the points where the refracted light rays converge. The two factors that determine if the implementation is any good are the quality of the caustics produced and the performance of the application.

Figure 6.1 shows the result of this vertex splatting, where the caustics-casting object is a sphere with rough edges. The resulting image (perhaps image is not the correct word, since the scene is fully interactive and the frame rate is approximately 100 fps) is a nice picture of caustics formed directly under the object, with a large shadow around them. One should note that the observed caustics are not just a uniform white blob, but in fact show light and dark patterns, which is what one sees in real life.

Figure 6.1: Caustics formed by a sphere with rough edges

If a sphere was a good starting point, and, as can be seen in Figure 6.1, does produce good-looking results, let us look at a slightly more complex object, namely the Utah Teapot. Figure 6.2 shows caustics produced by the teapot, as if it were made of glass. Naturally, the resulting caustics pattern is more complex than that of a sphere. The bright circle in the middle is most likely caused by the top of the teapot, while the curved line is caused by its edges. Considering that the teapot is empty, the observed caustics do appear to look realistic, and the frame rate for the scene is above 120 fps. To make things even more interesting, an even more complex object is inserted into the scene - a skull. Figure 6.3 illustrates the caustics produced by such an exquisite object. The observed pattern is rather complex, and despite the complexity of the geometry, the frame rate is maintained at about 100 fps. The author of this report, unfortunately, does not have a glass skull in his possession, so there is nothing to compare to, but that does not make the results look any less appealing.

Figure 6.2: Caustics from the Utah Teapot

Figure 6.3: Caustics formed by a glass skull

Having established that the algorithm works fine with single objects (not only do the results look good, but the performance - between 70 and 120 fps - is also rather impressive), let us see what happens if multiple objects are introduced. Figure 6.4 shows a scene with a smooth sphere and a teapot. One has to look closely at the image to notice a few details: first, the caustics produced by the two objects are similar to the ones seen in Figures 6.1 and 6.2, with the exception that from this angle one gets only a single light pattern from the teapot; second, the caustics formed by the sphere are reflected in both objects; and finally, the light circle seen in the teapot's shadow is not an error in the algorithm, but a fault in the geometry of the object itself. Surprisingly, the frame rate with two objects is higher than the frame rate for a single one (183 fps). Although the screenshot in Figure 6.4 is taken in window mode (640x480 resolution), the frame rate does not really change in full screen mode. The explanation for this is that the size of the refractive vertex grid and all the textures (recall that they all share the same resolution) is half of the one used for the single-object tests presented above.

Figure 6.4: Caustics formed by a sphere and a teapot

But before jumping to conclusions, let us take a look at another scene with another pair of objects. Seen in Figure 6.5 are a wine glass and a cube. This is the first time one notices some issues with the caustics mapping algorithm (at least in this implementation). While the caustics formed by the wine glass look fine (one bright spot from the stem of the glass and a vague line from its sides), the caustics from the cube are virtually invisible (if one looks closely enough, one can notice that part of the cube's shadow is slightly lighter than the rest). The answer to why the cube produces such vague and dim caustics lies in its geometry. Since it is quite uniform, the refracted light rays are distributed evenly over a relatively large area, and hence the point splats from the refractive vertex grid are spread out, producing very few overlaps and resulting in such dim caustics. This problem can partially be cured by setting a higher value for the light intensity in the caustics vertex shader (seen in Figure 6.6; see Section 5.2 for details). Doing that will, however, result in way too bright caustics (basically white blobs) for other objects with less uniform geometry, so unfortunately there is no direct solution to this problem right now.
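One rough way to see why this happens (a back-of-the-envelope model, not something taken from the implementation) is to recall from Section 5.2 that each splatted vertex carries a light contribution of I_total / R^2, where I_total is the total light intensity and R is the resolution of the refractive vertex grid. Since contributions accumulate through additive blending, the brightness at a pixel p is roughly

    B(p) ≈ n(p) · I_total / R^2

where n(p) is the number of splats overlapping p. A uniform object spreads its splats evenly, keeping n(p) close to one everywhere, which is exactly why the cube's caustics come out so dim, and why raising I_total brightens all objects at once.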

Figure 6.5: Caustics from a wine glass and a cube

Figure 6.6: Scene with caustics from a cube with a higher light intensity

To see if adding more objects has a substantial effect on performance, another object is added to the scene. Figure 6.7 shows a scene containing three objects. In this scene the sphere and the wine glass are in the lit area, hence casting both shadows and caustics, while the cube (seen to the left) is outside the spotlight cone. Surprisingly, the frame rate is even higher than for scenes with just two objects. At 213 frames per second it is more than impressive.

Figure 6.7: Scene with three objects

So far we have established that the only factor with a substantial effect on the frame rate is the resolution of the textures used. This is not really surprising, since the textures are of a 32-bit floating point format, and the difference in memory needed to allocate a 512x512 texture (the resolution used in the multiple-object tests) and a 1024x1024 texture (single-object tests) is quite large, especially when one considers that there are five of these textures used as render targets. At 16 bytes per texel (four 32-bit channels), a single 1024x1024 render target occupies 16 MB, so the five targets together take 80 MB, compared to 20 MB at 512x512.

Caustics Volumes

Now let us take a look at the second method, the one with caustics volumes. Figure 6.8 shows the scene rendered with shadow and caustics volumes. Looking at this figure one can easily see two major problems. First of all, the caustic projection gets wider the further away it is from the caustics-casting object, and second, the caustics produced are too sharp and well-defined to look realistic. The screenshot seen in Figure 6.8 was in fact taken during the early stages of this investigation, before there was a working version of caustics mapping. The old implementation used shadow volumes as the shadowing technique, which is why one can see a vague volume formed by the sphere, engulfing the bright caustics volume.

Figure 6.8: Caustic volumes with a single light source

Unfortunately, due to lack of time, caustics volumes were never implemented correctly, even though it is possible to use caustics mapping for rendering surface-based caustics and caustics volumes for rendering volumetric ones. Seen in Figure 6.9 is how the final scene looks with all techniques combined. Obviously the huge white cone looks nothing like a physically realistic caustics volume; however, it is this author's opinion that it is possible to use the volumes technique to render good-looking volumetric caustics for simpler geometries. Note that even with the addition of volumes, the frame rate is still around 90 fps, which is rather impressive, considering the steps needed to generate and render a volume.

The reason why caustics volumes, in their current form, produce such poor-looking results is that the shaders which perform vertex extrusion are virtually the same as the ones used in shadow volumes, so the volume produced is basically a down-scaled version of the shadow volume for the current object. The original plan to get better results was to use the distance-calculating function from caustics mapping to limit the range of the extruded volume. But that did not work, because vertex extrusion takes place in the vertex shader and HLSL does not allow the use of functions in vertex shaders. The problem with the expanding volume also still remains, but introducing a few changes to the volume-generating algorithm could produce the sought result. Therefore, although presently unsuccessful, the caustics volumes technique is definitely not a dead end and can produce arguably good results if implemented properly.

Figure 6.9: A scene with shadows, caustics and volumes

Quality And Performance Considerations

So far we have witnessed that the caustics mapping algorithm performs quite well with one or more objects and a single spot light source. But let us take a closer look at the quality versus the performance of this implementation. To make things a tad easier, a simple, smooth glass sphere will be used as the caustics-casting object. Figure 6.10 shows caustics produced by such a sphere with the light source first positioned close to the sphere, then at a medium distance and finally far away. While at close and medium ranges the caustics are clearly visible, at long range there is but a dim speck on the wall where the caustics are supposed to be. This, however, is not necessarily a flaw in the algorithm, but a naturally-observable phenomenon - at a large range too few light rays focus on a particular point, thus producing such dim results. Such a phenomenon was observed in real life when the author of this report took a small flashlight and a glass sphere and gradually moved the light source further away from the sphere - when the flashlight was more than half a meter away from the sphere, the caustics produced were barely visible. One other thing that was observed is a significant performance change - at close range the frame rate is about 80 fps, while at long range it is more than 110 frames per second. The explanation for this is that at a large range fewer rays pass through the object, and hence fewer distance and intersection point calculations have to be made (recall that for every ray that passes through the object it is necessary to calculate its direction and its intersection point with the receiver geometry).

Figure 6.10: Caustics at close, medium and long ranges

Having established the effects of moving the light source away from the object, let us now look at the quality of the caustics when changing the resolution of the render target textures and the refractive vertex grid. From the results presented above, it was clear that it is this resolution that plays the biggest role in performance, so just how much does the quality deteriorate with a lower resolution? Figure 6.11 shows three screenshots with texture resolutions of 256x256, 512x512 and 1024x1024 pixels.

Figure 6.11: Caustics at 256x256, 512x512 and 1024x1024 texture resolutions

While there is not much difference between the 256x256 and 512x512 resolutions (except some aliasing), the 1024x1024 one looks a lot nicer. But this great visual appeal comes at a high cost - performance. If the frame rate for 256x256 is a whopping 407 fps, and a more moderate 173 fps for 512x512, then for the 1024x1024 case it is a modest 61 fps. This, however, is still more than acceptable and can without any doubt be considered real-time. Of course, if caustics mapping were to be implemented as part of a game engine with much more complex scenes, the frame rate would be even lower, but in this particular implementation it is this resolution that produces the optimal results. It is, however, possible to introduce some optimizations to the algorithm which would boost performance.

One limitation is that the shadows and caustics generated with shadow and caustics mapping do not have sharp edges like some shadows in real life, or like shadows generated with shadow volumes. This is because shadow and caustics mapping are image-based techniques, and even though filtering helps reduce the aliasing, it cannot completely eliminate the artifacts. But despite these drawbacks, the overall result does have great visual appeal, and a high texture resolution practically eliminates the visual artifacts.

Finally, to see if the current implementation managed to surpass its predecessors, it will be compared to the original caustics mapping technique [2] and photon mapping [5]. Figure 6.12 shows three similar images, where the caustics-casting object is a glass sphere. While the image with the sphere on a hardwood floor took five minutes to render with photon mapping, the sphere in a Cornell box and the scene used in this investigation are completely interactive and run in real-time.

Figure 6.12: Caustics created by photon mapping and caustics mapping

The quality of the caustics produced using this algorithm is the same as, if not better than, that of the two other ones. It is only in this implementation that one can see distinguishable light patterns in the formed caustics, which brings it closer to real life. And speaking of real life versus computer-generated images, let us make one more comparison: a photograph of caustics versus a screenshot from this implementation. To make things a bit more challenging, a somewhat complex object will be used for this comparison - namely a wine glass. Figure 6.13 shows a photograph of caustics put up against a screenshot. Even though there is a noticeable difference between the two images, the most important thing - the caustics - is rather similar: the bright ring seen on the surface near the glass can be observed in both images. The differences in the caustics are mainly attributed to the accuracy of the emulation of the refraction events that take place as light travels through the wine glass to the receiver surface. Of course the photograph bears a lot more detail than the screenshot, but that is not entirely the algorithm's fault - the geometry of the caustics-casting object that refracts the light rays also plays an important role, and a computer-drawn wine glass can never match the complexity of a real one.

Despite a few minor drawbacks, caustics mapping performs very well. A problem that was never solved, however, is that the original caustics mapping algorithm [2] never had support for area lights or multiple light sources, and in its current implementation the algorithm uses a spotlight for illumination, while multiple light sources were never implemented correctly. The reason for that is that in several places, both in the shader and the DirectX code, one has to iterate through all the light sources present, and that in itself poses a big challenge and a whole range of technical difficulties. But, both for caustics mapping and caustics volumes, adding a few extra light sources is not an impossible task, and it can serve as motivation for further work on this project.

Figure 6.13: Caustics in real life vs caustics created by caustics mapping


More information

1. What is the law of reflection?

1. What is the law of reflection? Name: Skill Sheet 7.A The Law of Reflection The law of reflection works perfectly with light and the smooth surface of a mirror. However, you can apply this law to other situations. For example, how would

More information

Illumination. The slides combine material from Andy van Dam, Spike Hughes, Travis Webb and Lyn Fong

Illumination. The slides combine material from Andy van Dam, Spike Hughes, Travis Webb and Lyn Fong INTRODUCTION TO COMPUTER GRAPHIC S Illumination The slides combine material from Andy van Dam, Spike Hughes, Travis Webb and Lyn Fong Andries van Dam October 29, 2009 Illumination Models 1/30 Outline Physical

More information

Consider a partially transparent object that is illuminated with two lights, one visible from each side of the object. Start with a ray from the eye

Consider a partially transparent object that is illuminated with two lights, one visible from each side of the object. Start with a ray from the eye Ray Tracing What was the rendering equation? Motivate & list the terms. Relate the rendering equation to forward ray tracing. Why is forward ray tracing not good for image formation? What is the difference

More information

Advanced Graphics. Path Tracing and Photon Mapping Part 2. Path Tracing and Photon Mapping

Advanced Graphics. Path Tracing and Photon Mapping Part 2. Path Tracing and Photon Mapping Advanced Graphics Path Tracing and Photon Mapping Part 2 Path Tracing and Photon Mapping Importance Sampling Combine importance sampling techniques Reflectance function (diffuse + specular) Light source

More information

Optics. a- Before the beginning of the nineteenth century, light was considered to be a stream of particles.

Optics. a- Before the beginning of the nineteenth century, light was considered to be a stream of particles. Optics 1- Light Nature: a- Before the beginning of the nineteenth century, light was considered to be a stream of particles. The particles were either emitted by the object being viewed or emanated from

More information

Introduction: The Nature of Light

Introduction: The Nature of Light O1 Introduction: The Nature of Light Introduction Optical elements and systems Basic properties O1.1 Overview Generally Geometrical Optics is considered a less abstract subject than Waves or Physical Optics

More information

Chapter 26 Geometrical Optics

Chapter 26 Geometrical Optics Chapter 26 Geometrical Optics 26.1 The Reflection of Light 26.2 Forming Images With a Plane Mirror 26.3 Spherical Mirrors 26.4 Ray Tracing and the Mirror Equation 26.5 The Refraction of Light 26.6 Ray

More information

S U N G - E U I YO O N, K A I S T R E N D E R I N G F R E E LY A VA I L A B L E O N T H E I N T E R N E T

S U N G - E U I YO O N, K A I S T R E N D E R I N G F R E E LY A VA I L A B L E O N T H E I N T E R N E T S U N G - E U I YO O N, K A I S T R E N D E R I N G F R E E LY A VA I L A B L E O N T H E I N T E R N E T Copyright 2018 Sung-eui Yoon, KAIST freely available on the internet http://sglab.kaist.ac.kr/~sungeui/render

More information

REFLECTION & REFRACTION

REFLECTION & REFRACTION REFLECTION & REFRACTION OBJECTIVE: To study and verify the laws of reflection and refraction using a plane mirror and a glass block. To see the virtual images that can be formed by the reflection and refraction

More information

Ambien Occlusion. Lighting: Ambient Light Sources. Lighting: Ambient Light Sources. Summary

Ambien Occlusion. Lighting: Ambient Light Sources. Lighting: Ambient Light Sources. Summary Summary Ambien Occlusion Kadi Bouatouch IRISA Email: kadi@irisa.fr 1. Lighting 2. Definition 3. Computing the ambient occlusion 4. Ambient occlusion fields 5. Dynamic ambient occlusion 1 2 Lighting: Ambient

More information

Understanding Variability

Understanding Variability Understanding Variability Why so different? Light and Optics Pinhole camera model Perspective projection Thin lens model Fundamental equation Distortion: spherical & chromatic aberration, radial distortion

More information

Computer Graphics. Lecture 10. Global Illumination 1: Ray Tracing and Radiosity. Taku Komura 12/03/15

Computer Graphics. Lecture 10. Global Illumination 1: Ray Tracing and Radiosity. Taku Komura 12/03/15 Computer Graphics Lecture 10 Global Illumination 1: Ray Tracing and Radiosity Taku Komura 1 Rendering techniques Can be classified as Local Illumination techniques Global Illumination techniques Local

More information

Rendering Algorithms: Real-time indirect illumination. Spring 2010 Matthias Zwicker

Rendering Algorithms: Real-time indirect illumination. Spring 2010 Matthias Zwicker Rendering Algorithms: Real-time indirect illumination Spring 2010 Matthias Zwicker Today Real-time indirect illumination Ray tracing vs. Rasterization Screen space techniques Visibility & shadows Instant

More information

CS 325 Computer Graphics

CS 325 Computer Graphics CS 325 Computer Graphics 04 / 02 / 2012 Instructor: Michael Eckmann Today s Topics Questions? Comments? Illumination modelling Ambient, Diffuse, Specular Reflection Surface Rendering / Shading models Flat

More information

Recollection. Models Pixels. Model transformation Viewport transformation Clipping Rasterization Texturing + Lights & shadows

Recollection. Models Pixels. Model transformation Viewport transformation Clipping Rasterization Texturing + Lights & shadows Recollection Models Pixels Model transformation Viewport transformation Clipping Rasterization Texturing + Lights & shadows Can be computed in different stages 1 So far we came to Geometry model 3 Surface

More information

Physics 11. Unit 8 Geometric Optics Part 1

Physics 11. Unit 8 Geometric Optics Part 1 Physics 11 Unit 8 Geometric Optics Part 1 1.Review of waves In the previous section, we have investigated the nature and behaviors of waves in general. We know that all waves possess the following characteristics:

More information

MITOCW MIT6_172_F10_lec18_300k-mp4

MITOCW MIT6_172_F10_lec18_300k-mp4 MITOCW MIT6_172_F10_lec18_300k-mp4 The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for

More information

Problem Set 4 Part 1 CMSC 427 Distributed: Thursday, November 1, 2007 Due: Tuesday, November 20, 2007

Problem Set 4 Part 1 CMSC 427 Distributed: Thursday, November 1, 2007 Due: Tuesday, November 20, 2007 Problem Set 4 Part 1 CMSC 427 Distributed: Thursday, November 1, 2007 Due: Tuesday, November 20, 2007 Programming For this assignment you will write a simple ray tracer. It will be written in C++ without

More information

Lecture 18: Primer on Ray Tracing Techniques

Lecture 18: Primer on Ray Tracing Techniques Lecture 18: Primer on Ray Tracing Techniques 6.172: Performance Engineering of Software Systems Joshua Slocum November 16, 2010 A Little Background Image rendering technique Simulate rays of light - ray

More information

4.5 Images Formed by the Refraction of Light

4.5 Images Formed by the Refraction of Light Figure 89: Practical structure of an optical fibre. Absorption in the glass tube leads to a gradual decrease in light intensity. For optical fibres, the glass used for the core has minimum absorption at

More information

SNC2D PHYSICS 4/27/2013. LIGHT & GEOMETRIC OPTICS L Light Rays & Reflection (P ) Light Rays & Reflection. The Ray Model of Light

SNC2D PHYSICS 4/27/2013. LIGHT & GEOMETRIC OPTICS L Light Rays & Reflection (P ) Light Rays & Reflection. The Ray Model of Light SNC2D PHYSICS LIGHT & GEOMETRIC OPTICS L Light Rays & Reflection (P.402-409) Light Rays & Reflection A driver adjusts her rearview mirror. The mirror allows her to see the cars behind her. Mirrors help

More information

Today. Global illumination. Shading. Interactive applications. Rendering pipeline. Computergrafik. Shading Introduction Local shading models

Today. Global illumination. Shading. Interactive applications. Rendering pipeline. Computergrafik. Shading Introduction Local shading models Computergrafik Matthias Zwicker Universität Bern Herbst 2009 Today Introduction Local shading models Light sources strategies Compute interaction of light with surfaces Requires simulation of physics Global

More information

Lab 10 - GEOMETRICAL OPTICS

Lab 10 - GEOMETRICAL OPTICS L10-1 Name Date Partners OBJECTIVES OVERVIEW Lab 10 - GEOMETRICAL OPTICS To examine Snell s Law. To observe total internal reflection. To understand and use the lens equations. To find the focal length

More information

x ~ Hemispheric Lighting

x ~ Hemispheric Lighting Irradiance and Incoming Radiance Imagine a sensor which is a small, flat plane centered at a point ~ x in space and oriented so that its normal points in the direction n. This sensor can compute the total

More information

Lecture 7 - Path Tracing

Lecture 7 - Path Tracing INFOMAGR Advanced Graphics Jacco Bikker - November 2016 - February 2017 Lecture 7 - I x, x = g(x, x ) ε x, x + S ρ x, x, x I x, x dx Welcome! Today s Agenda: Introduction Advanced Graphics 3 Introduction

More information

Radiometry (From Intro to Optics, Pedrotti 1-4) Radiometry is measurement of Emag radiation (light) Consider a small spherical source Assume a black

Radiometry (From Intro to Optics, Pedrotti 1-4) Radiometry is measurement of Emag radiation (light) Consider a small spherical source Assume a black Radiometry (From Intro to Optics, Pedrotti -4) Radiometry is measurement of Emag radiation (light) Consider a small spherical source Assume a black body type emitter: uniform emission Total energy radiating

More information

INTRODUCTION REFLECTION AND REFRACTION AT BOUNDARIES. Introduction. Reflection and refraction at boundaries. Reflection at a single surface

INTRODUCTION REFLECTION AND REFRACTION AT BOUNDARIES. Introduction. Reflection and refraction at boundaries. Reflection at a single surface Chapter 8 GEOMETRICAL OPTICS Introduction Reflection and refraction at boundaries. Reflection at a single surface Refraction at a single boundary Dispersion Summary INTRODUCTION It has been shown that

More information

A Brief Overview of. Global Illumination. Thomas Larsson, Afshin Ameri Mälardalen University

A Brief Overview of. Global Illumination. Thomas Larsson, Afshin Ameri Mälardalen University A Brief Overview of Global Illumination Thomas Larsson, Afshin Ameri Mälardalen University 1 What is Global illumination? Global illumination is a general name for realistic rendering algorithms Global

More information

Science 8 Chapter 5 Section 1

Science 8 Chapter 5 Section 1 Science 8 Chapter 5 Section 1 The Ray Model of Light (pp. 172-187) Models of Light wave model of light: a model in which light is a type of wave that travels through space and transfers energy from one

More information

The Rendering Equation and Path Tracing

The Rendering Equation and Path Tracing The Rendering Equation and Path Tracing Louis Feng April 22, 2004 April 21, 2004 Realistic Image Synthesis (Spring 2004) 1 Topics The rendering equation Original form Meaning of the terms Integration Path

More information

Illumination Algorithms

Illumination Algorithms Global Illumination Illumination Algorithms Digital Lighting and Rendering CGT 340 The goal of global illumination is to model all possible paths of light to the camera. Global Illumination Global illumination

More information

Lighting and Materials

Lighting and Materials http://graphics.ucsd.edu/~henrik/images/global.html Lighting and Materials Introduction The goal of any graphics rendering app is to simulate light Trying to convince the viewer they are seeing the real

More information

Photorealism: Ray Tracing

Photorealism: Ray Tracing Photorealism: Ray Tracing Reading Assignment: Chapter 13 Local vs. Global Illumination Local Illumination depends on local object and light sources only Global Illumination at a point can depend on any

More information

COMP 175 COMPUTER GRAPHICS. Lecture 11: Recursive Ray Tracer. COMP 175: Computer Graphics April 9, Erik Anderson 11 Recursive Ray Tracer

COMP 175 COMPUTER GRAPHICS. Lecture 11: Recursive Ray Tracer. COMP 175: Computer Graphics April 9, Erik Anderson 11 Recursive Ray Tracer Lecture 11: Recursive Ray Tracer COMP 175: Computer Graphics April 9, 2018 1/40 Note on using Libraries } C++ STL } Does not always have the same performance. } Interface is (mostly) the same, but implementations

More information

CS770/870 Spring 2017 Color and Shading

CS770/870 Spring 2017 Color and Shading Preview CS770/870 Spring 2017 Color and Shading Related material Cunningham: Ch 5 Hill and Kelley: Ch. 8 Angel 5e: 6.1-6.8 Angel 6e: 5.1-5.5 Making the scene more realistic Color models representing the

More information

Measuring Light: Radiometry and Cameras

Measuring Light: Radiometry and Cameras Lecture 11: Measuring Light: Radiometry and Cameras Computer Graphics CMU 15-462/15-662, Fall 2015 Slides credit: a majority of these slides were created by Matt Pharr and Pat Hanrahan Simulating a pinhole

More information

COMPUTER GRAPHICS AND INTERACTION

COMPUTER GRAPHICS AND INTERACTION DH2323 DGI17 COMPUTER GRAPHICS AND INTERACTION INTRODUCTION TO RAYTRACING Christopher Peters CST, KTH Royal Institute of Technology, Sweden chpeters@kth.se http://kth.academia.edu/christopheredwardpeters

More information

Topic 9: Lighting & Reflection models 9/10/2016. Spot the differences. Terminology. Two Components of Illumination. Ambient Light Source

Topic 9: Lighting & Reflection models 9/10/2016. Spot the differences. Terminology. Two Components of Illumination. Ambient Light Source Topic 9: Lighting & Reflection models Lighting & reflection The Phong reflection model diffuse component ambient component specular component Spot the differences Terminology Illumination The transport

More information

CS184 LECTURE RADIOMETRY. Kevin Wu November 10, Material HEAVILY adapted from James O'Brien, Brandon Wang, Fu-Chung Huang, and Aayush Dawra

CS184 LECTURE RADIOMETRY. Kevin Wu November 10, Material HEAVILY adapted from James O'Brien, Brandon Wang, Fu-Chung Huang, and Aayush Dawra CS184 LECTURE RADIOMETRY Kevin Wu November 10, 2014 Material HEAVILY adapted from James O'Brien, Brandon Wang, Fu-Chung Huang, and Aayush Dawra ADMINISTRATIVE STUFF Project! TODAY Radiometry (Abridged):

More information

Ray Optics I. Last time, finished EM theory Looked at complex boundary problems TIR: Snell s law complex Metal mirrors: index complex

Ray Optics I. Last time, finished EM theory Looked at complex boundary problems TIR: Snell s law complex Metal mirrors: index complex Phys 531 Lecture 8 20 September 2005 Ray Optics I Last time, finished EM theory Looked at complex boundary problems TIR: Snell s law complex Metal mirrors: index complex Today shift gears, start applying

More information

CS 5625 Lec 2: Shading Models

CS 5625 Lec 2: Shading Models CS 5625 Lec 2: Shading Models Kavita Bala Spring 2013 Shading Models Chapter 7 Next few weeks Textures Graphics Pipeline Light Emission To compute images What are the light sources? Light Propagation Fog/Clear?

More information

Shadows. COMP 575/770 Spring 2013

Shadows. COMP 575/770 Spring 2013 Shadows COMP 575/770 Spring 2013 Shadows in Ray Tracing Shadows are important for realism Basic idea: figure out whether a point on an object is illuminated by a light source Easy for ray tracers Just

More information

Global Illumination. Why Global Illumination. Pros/Cons and Applications. What s Global Illumination

Global Illumination. Why Global Illumination. Pros/Cons and Applications. What s Global Illumination Global Illumination Why Global Illumination Last lecture Basic rendering concepts Primitive-based rendering Today: Global illumination Ray Tracing, and Radiosity (Light-based rendering) What s Global Illumination

More information

Physics 4C Chapter 33: Electromagnetic Waves

Physics 4C Chapter 33: Electromagnetic Waves Physics 4C Chapter 33: Electromagnetic Waves Our greatest glory is not in never failing, but in rising up every time we fail. Ralph Waldo Emerson If you continue to do what you've always done, you'll continue

More information

CMSC427 Advanced shading getting global illumination by local methods. Credit: slides Prof. Zwicker

CMSC427 Advanced shading getting global illumination by local methods. Credit: slides Prof. Zwicker CMSC427 Advanced shading getting global illumination by local methods Credit: slides Prof. Zwicker Topics Shadows Environment maps Reflection mapping Irradiance environment maps Ambient occlusion Reflection

More information

782 Schedule & Notes

782 Schedule & Notes 782 Schedule & Notes Tentative schedule - subject to change at a moment s notice. This is only a guide and not meant to be a strict schedule of how fast the material will be taught. The order of material

More information

REAL-TIME GPU PHOTON MAPPING. 1. Introduction

REAL-TIME GPU PHOTON MAPPING. 1. Introduction REAL-TIME GPU PHOTON MAPPING SHERRY WU Abstract. Photon mapping, an algorithm developed by Henrik Wann Jensen [1], is a more realistic method of rendering a scene in computer graphics compared to ray and

More information

Topic 9: Lighting & Reflection models. Lighting & reflection The Phong reflection model diffuse component ambient component specular component

Topic 9: Lighting & Reflection models. Lighting & reflection The Phong reflection model diffuse component ambient component specular component Topic 9: Lighting & Reflection models Lighting & reflection The Phong reflection model diffuse component ambient component specular component Spot the differences Terminology Illumination The transport

More information

Electromagnetic waves and power spectrum. Rays. Rays. CS348B Lecture 4 Pat Hanrahan, Spring 2002

Electromagnetic waves and power spectrum. Rays. Rays. CS348B Lecture 4 Pat Hanrahan, Spring 2002 Page 1 The Light Field Electromagnetic waves and power spectrum 1 10 10 4 10 6 10 8 10 10 10 1 10 14 10 16 10 18 10 0 10 10 4 10 6 Power Heat Radio Ultra- X-Rays Gamma Cosmic Infra- Red Violet Rays Rays

More information

Chapter 24. Geometric optics. Assignment No. 11, due April 27th before class: Problems 24.4, 24.11, 24.13, 24.15, 24.24

Chapter 24. Geometric optics. Assignment No. 11, due April 27th before class: Problems 24.4, 24.11, 24.13, 24.15, 24.24 Chapter 24 Geometric optics Assignment No. 11, due April 27th before class: Problems 24.4, 24.11, 24.13, 24.15, 24.24 A Brief History of Light 1000 AD It was proposed that light consisted of tiny particles

More information

Lecture Ray Model of Light. Physics Help Q&A: tutor.leiacademy.org

Lecture Ray Model of Light. Physics Help Q&A: tutor.leiacademy.org Lecture 1201 Ray Model of Light Physics Help Q&A: tutor.leiacademy.org Reflection of Light A ray of light, the incident ray, travels in a medium. When it encounters a boundary with a second medium, part

More information

Photorealistic 3D Rendering for VW in Mobile Devices

Photorealistic 3D Rendering for VW in Mobile Devices Abstract University of Arkansas CSCE Department Advanced Virtual Worlds Spring 2013 Photorealistic 3D Rendering for VW in Mobile Devices Rafael Aroxa In the past few years, the demand for high performance

More information

Phys102 Lecture 21/22 Light: Reflection and Refraction

Phys102 Lecture 21/22 Light: Reflection and Refraction Phys102 Lecture 21/22 Light: Reflection and Refraction Key Points The Ray Model of Light Reflection and Mirrors Refraction, Snell s Law Total internal Reflection References 23-1,2,3,4,5,6. The Ray Model

More information

Lighting. To do. Course Outline. This Lecture. Continue to work on ray programming assignment Start thinking about final project

Lighting. To do. Course Outline. This Lecture. Continue to work on ray programming assignment Start thinking about final project To do Continue to work on ray programming assignment Start thinking about final project Lighting Course Outline 3D Graphics Pipeline Modeling (Creating 3D Geometry) Mesh; modeling; sampling; Interaction

More information

Reflection and Refraction

Reflection and Refraction rev 05/2018 Equipment List and Refraction Qty Items Part Numbers 1 Light Source, Basic Optics OS-8517 1 Ray Optics Set OS-8516 2 White paper, sheet 1 Metric ruler 1 Protractor Introduction The purpose

More information

Light and the Properties of Reflection & Refraction

Light and the Properties of Reflection & Refraction Light and the Properties of Reflection & Refraction OBJECTIVE To study the imaging properties of a plane mirror. To prove the law of reflection from the previous imaging study. To study the refraction

More information

Rendering: Reality. Eye acts as pinhole camera. Photons from light hit objects

Rendering: Reality. Eye acts as pinhole camera. Photons from light hit objects Basic Ray Tracing Rendering: Reality Eye acts as pinhole camera Photons from light hit objects Rendering: Reality Eye acts as pinhole camera Photons from light hit objects Rendering: Reality Eye acts as

More information

Homework Set 3 Due Thursday, 07/14

Homework Set 3 Due Thursday, 07/14 Homework Set 3 Due Thursday, 07/14 Problem 1 A room contains two parallel wall mirrors, on opposite walls 5 meters apart. The mirrors are 8 meters long. Suppose that one person stands in a doorway, in

More information

Computer Graphics. Lecture 9 Environment mapping, Mirroring

Computer Graphics. Lecture 9 Environment mapping, Mirroring Computer Graphics Lecture 9 Environment mapping, Mirroring Today Environment Mapping Introduction Cubic mapping Sphere mapping refractive mapping Mirroring Introduction reflection first stencil buffer

More information