Monte Carlo Ray-tracing and Rendering


ITN, Norrköping, February 3, 2012

Monte Carlo Ray-tracing and Rendering
PROJECT IN ADVANCED GLOBAL ILLUMINATION AND RENDERING, TNCG15

Authors: Henrik Bäcklund, Niklas Neijman
Contact: henba892@student.liu.se, nikne866@student.liu.se

Abstract

Photorealistic images have been desired for as long as images have been rendered. The images produced with the early methods were not good enough to be considered realistic. The methods have, however, improved with time and are today fully capable of producing photorealistic images. The purpose of this paper is to demonstrate how a Monte Carlo ray tracer has been implemented: a global illumination model that is able to render photorealistic images through stochastic sampling. The fundamentals of Monte Carlo ray tracing are discussed to give the reader a deep understanding of the method.

Contents

1 Introduction
  1.1 Whitted ray-tracing
  1.2 Radiosity
  1.3 Monte Carlo ray-tracing
  1.4 Two-pass rendering
  1.5 Photon mapping
  1.6 Ray-tracing of isosurfaces
2 Method
  2.1 Scene intersection
  2.2 Material (BRDF)
  2.3 Diffuse
    2.3.1 Sampling hemisphere
    2.3.2 Sampling hemisphere with cosine lobe
  2.4 Specular reflection
  2.5 Transparency
    2.5.1 Snell's Law
    2.5.2 Brewster angle
    2.5.3 Refracted ray
    2.5.4 Radiance distribution
  2.6 Direct Illumination
    2.6.1 Shadow ray
    2.6.2 Radiance contribution
    2.6.3 Geometry factor
    2.6.4 Distance factor
  2.7 Indirect Illumination
  2.8 Recursive stopping condition
  2.9 Anti-aliasing
  2.10 Gamma Correction
3 Results and benchmarks
  3.1 Monte Carlo ray-tracer
  3.2 Color bleed
  3.3 Caustics
  3.4 Hemisphere Sampling
  3.5 Multi Sampling
  3.6 Benchmark
4 Discussion
Bibliography

List of Figures

2.1 Ray-sphere intersection. Source: http://en.wikipedia.org
2.2 Ray-plane intersection. Source: http://en.wikipedia.org
2.3 Approximation errors.
2.4 Perfect reflection. Source: http://staffwww.itn.liu.se/mardi/TNCG15-2011/Lecture2.pdf
2.5 Refraction and reflection. Source: http://staffwww.itn.liu.se/mardi/TNCG15-2011/Lecture2.pdf
2.6 Shadow rays.
2.7 Total occlusion.
2.8 Partially occluded. Source: http://www.mcglaun.com/eclwhatis.htm
2.9 Geometry factor.
2.10 Distance factor.
2.11 Anti-aliasing.
3.1 Final image; the same image is shown on the title page.
3.2 Color bleeding for the red wall.
3.3 Color bleeding for the blue wall.
3.4 Caustics effects.
3.5 With cosine importance sampling.
3.6 Without cosine importance sampling.
3.7 10 samples per pixel.
3.8 50 samples per pixel.
3.9 100 samples per pixel.
3.10 500 samples per pixel.
3.11 5000 samples per pixel.

Chapter 1

Introduction

Today there is great interest in creating photorealistic scenes in computer graphics. To create such scenes, the old local lighting model is not good enough, because it does not consider global lighting, which is the most important part of photorealism. The focus is therefore on making a scene that is affected by light as in reality, i.e. making it as physically correct as possible without using supercomputers.

Obtaining a physically correctly illuminated scene in computer graphics is not straightforward. There are several methods, and combinations of methods, that can compute such scenes, some of which are explained below: Whitted ray-tracing, radiosity, Monte Carlo ray-tracing, two-pass rendering, photon mapping and ray-tracing of isosurfaces. These methods are different solutions to the so-called rendering equation, the equation behind an advanced globally illuminated scene.

L(x \to \theta) = L_e(x \to \theta) + \int_A f_r(x, \psi \to \theta)\, L(y \to \psi)\, V(x, y)\, G(x, y)\, \mathrm{d}A_y \quad (1.1)

The rendering equation is shown in equation 1.1 and was introduced in 1986 by J. T. Kajiya [1]. Its parts are explained as follows: L(x \to \theta) is the total light obtained at x in direction \theta when the equation is solved. L_e(x \to \theta) is the light that x emits in direction \theta, and to this term an integral over the hemisphere is added. The integral includes the BRDF (bidirectional reflectance distribution function) of the surface, f_r, which gives the amount of light reflected at x. The integral also includes the light arriving at point x from direction \psi, the visibility term V(x, y) and the geometry term G(x, y), which contains the cosine between the normal at x and \psi.

1.1 Whitted ray-tracing

Whitted ray-tracing was introduced in 1979 by Turner Whitted and is a backward ray-tracing algorithm, which means that a ray is shot from the eye through a pixel on the screen and into the scene, i.e. a camera-based ray-tracing algorithm. This makes it computationally more efficient than forward ray-tracing, because every traced ray is guaranteed to reach the camera; in forward ray-tracing a ray is instead shot from the light source through the scene in the hope that it eventually reaches the camera/eye. But what about reaching the light source with Whitted ray-tracing? That is solved by launching another ray from the intersected surface point towards the light source. This ray is called a shadow ray and stores information about the visibility of the surface point: if there is an object between the light source and the surface point in question, the surface point is not affected by the light. Whitted ray-tracing is often used with perfect reflection and perfect refraction surfaces, and this is usually the only indirect contribution to the illumination of the scene.

In contrast to Monte Carlo ray-tracing, Whitted ray-tracing follows at most two paths through the scene, the refracted ray and the reflected ray, which can be traced recursively.

1.2 Radiosity

Another method to solve the rendering equation is the radiosity method, which is a scene-based algorithm, unlike Whitted ray-tracing, which is a camera-based method. This gives radiosity its own advantages. One of the greatest is that radiosity can be precalculated: it only has to be computed once for a scene and does not have to be recalculated when the camera moves. The method does, however, only consider perfectly diffuse surfaces. To be able to calculate the radiosity, the scene has to be divided into smaller surfaces (the triangles of the mesh) and a radiosity value for each surface has to be calculated; this is an iterative computation. Initially only one patch is a radiosity source, i.e. the light source. After the first iteration the source has sent radiosity to the other patches in the scene. The receiving patches have a coefficient that tells how much of the radiosity should get reflected. The new patches then also send out radiosity, even back to the source.

1.3 Monte Carlo ray-tracing

So far radiosity and Whitted ray-tracing have been discussed, but both methods have their limits. The main limit is that they only handle perfect reflection/refraction or perfectly diffuse reflectors (e.g. Lambertian), and most materials in reality are not of that kind. So, to gain even more photorealism in the scene, materials should for example be almost perfect reflectors/refractors, or a mix between reflective and diffuse (i.e. glossy). By including some probability theory, such materials can be handled. What makes this algorithm so powerful is that the paths of the rays are random: e.g. when intersecting a diffuse surface, instead of just terminating the ray, a new path is created whose direction is sampled over the hemisphere of the intersection point. When solving the rendering equation with Monte Carlo ray-tracing, the equation can be split into the following parts: L_e (light emitted), direct illumination and indirect illumination. L_e has been discussed earlier in the report, and direct illumination resembles the Whitted algorithm, but instead of using point light sources, direct illumination uses area lights. By sampling more than one shadow ray towards the light source, soft shadows can be obtained. The most interesting part of a Monte Carlo ray tracer is the indirect illumination, because that term contributes effects like caustics, color bleeding and soft shadows. Indirect light is the part that makes the scene globally illuminated.

1.4 Two-pass rendering

Two-pass rendering [2] is a method that combines the view-independent radiosity method for diffuse interreflections with the view-dependent ray-tracing method for specular reflections and highlights. Radiosity is first calculated from the light source, followed by ray tracing from the view point; the results are then interpolated to obtain the final solution. Both methods are able to construct shadows, but by optimizing the two-pass rendering to only calculate shadows with the radiosity method, a lot of computation time can be saved, because the ray tracer can then avoid sending out shadow rays, which is a major part of the ray-tracing algorithm.
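To make the stochastic sampling of Section 1.3 concrete before moving on, the toy example below (written for this text only, not part of the project code) estimates the hemisphere integral of the cosine term, whose analytic value is pi, from uniformly sampled random directions. Replacing the integrand with the full rendering equation gives exactly the kind of estimator a Monte Carlo ray tracer evaluates at each bounce.

```cpp
#include <cstdio>
#include <random>

// Toy Monte Carlo estimator: integrate cos(theta) over the hemisphere.
// With uniform hemisphere sampling the pdf is p = 1/(2*pi), so the estimate is
// (1/N) * sum( cos(theta_i) / p ), which should converge to pi.
int main() {
    const double pi = 3.14159265358979323846;
    std::mt19937 rng(42);
    std::uniform_real_distribution<double> u(0.0, 1.0);

    const int N = 1000000;
    double sum = 0.0;
    for (int i = 0; i < N; ++i) {
        // For a uniformly sampled hemisphere direction, cos(theta) is itself
        // uniform in [0, 1], so the azimuth angle is not needed for this integrand.
        double cosTheta = u(rng);
        sum += cosTheta / (1.0 / (2.0 * pi));
    }
    std::printf("estimate = %f, exact = %f\n", sum / N, pi);
}
```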

1.5 Photon mapping

Photon mapping [3] is a method where photons are traced into the scene from the light source. The photons intersect with objects and are either reflected, refracted or absorbed at the surface point. The photon and its incoming direction are stored in a photon map at each intersection point. There are two different photon maps: a global photon map and a caustics photon map. The caustics photon map is created by sending photons towards the specular objects, while the global photon map is created by sending photons towards the whole scene. The caustics photon map has a higher density of photons than the global photon map in order to obtain more accurate caustics. Besides ordinary photons, the photon map also stores shadow photons (positions of shadow) and indirect photons (positions of indirect light). The photon maps are used when rays are traced from the camera point into the scene, where the shadow map is used to avoid the use of shadow rays. The indirect photon map is used to create soft indirect illumination. By combining the incoming direction and the BRDF, it is possible to calculate how much radiance leaves a surface point in a given direction.

1.6 Ray-tracing of isosurfaces

Rendering volume data has become popular, and a volume is defined as a three-dimensional scalar field. The data is obtained from e.g. MRI scanners, CT scanners etc., which makes ray-tracing of isosurfaces [4] useful within the medical field. The reason for using ray-tracing is that it could make it possible to do the rendering in real time, which would not be possible with older methods, e.g. marching cubes. The advantage of using a ray tracer in volume rendering algorithms is that it can contribute effects such as shadows, reflections, indirect illumination etc., which is wanted if a photorealistic image is the aim. The algorithm traces a ray through the volume data (voxel space) and calculates the radiance at the intersection with the isosurface. It can, however, be computationally heavy if an intersection test has to be done for every voxel. The process can therefore be optimized by using bigger voxels (a lower voxel resolution) far from the surface; the closer the ray gets to the surface, the higher the resolution gets.

The following report will first discuss how the intersection between a ray and an object is handled and how the material of the object influences the final result. Next, the report describes how direct light works and how it is calculated using shadow rays, geometry factors and distance factors. Indirect illumination is then discussed, followed by a description of how the stopping condition, anti-aliasing and OpenMP are implemented. The report ends with the results obtained and a short discussion.

Chapter 2

Method

2.1 Scene intersection

A ray that is shot into a scene can either miss an object or intersect with it. If an intersection has occurred, it is desired to know which object was intersected and at which surface point. Since the scene is built up of planes and spheres, there are only two types of intersection that can occur.

The first type of intersection is the ray-sphere intersection. When a ray intersects a sphere there can be either one intersection point, if the ray is tangent to the sphere, or, in the more general case, two intersection points. Figure 2.1 illustrates the two kinds of intersection that can occur when intersecting a sphere.

Figure 2.1: Ray-sphere intersection. Source: http://en.wikipedia.org

To be able to calculate the intersection with a sphere, two equations are needed: the definition of a ray (equation 2.1) and the definition of the surface of a sphere (equation 2.2).

\vec{p} = t\,\vec{l} + \vec{p}_0 \quad (2.1)

(\vec{p} - \vec{c}) \cdot (\vec{p} - \vec{c}) = r^2 \quad (2.2)

By substituting equation 2.1 into equation 2.2, the final ray-sphere equation is obtained (equation 2.3). This equation is then used to determine whether any points on the ray intersect the sphere.

(t\,\vec{l} + \vec{p}_0 - \vec{c}) \cdot (t\,\vec{l} + \vec{p}_0 - \vec{c}) - r^2 = 0 \quad (2.3)

The second kind of intersection is the ray-plane intersection. A ray can either intersect a plane at one point, or at an infinite number of points if the line is parallel to and lies within the plane. This can be seen in figure 2.2.

Figure 2.2: Ray-plane intersection. Source: http://en.wikipedia.org
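Equation 2.3 is an ordinary quadratic in t. The sketch below shows one way it can be solved in code; the Vec3 type, the function names and the epsilon threshold are illustrative choices for this text, not necessarily those of the project.

```cpp
#include <cmath>
#include <optional>

// Minimal vector helpers (illustrative; the project may use its own types).
struct Vec3 { double x, y, z; };
Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
double dot(Vec3 a, Vec3 b)     { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Solve equation 2.3: (t*l + p0 - c).(t*l + p0 - c) - r^2 = 0 for t.
// Returns the smallest positive t (the visible hit), or nothing on a miss.
std::optional<double> intersectSphere(Vec3 p0, Vec3 l, Vec3 c, double r) {
    Vec3 oc = p0 - c;
    double a = dot(l, l);                 // equals 1 if l is normalized
    double b = 2.0 * dot(l, oc);
    double k = dot(oc, oc) - r * r;
    double disc = b * b - 4.0 * a * k;
    if (disc < 0.0) return std::nullopt;  // ray misses the sphere
    double s = std::sqrt(disc);
    double t1 = (-b - s) / (2.0 * a);     // nearer root
    double t2 = (-b + s) / (2.0 * a);     // farther root
    if (t1 > 1e-6) return t1;             // small epsilon rejects self-hits
    if (t2 > 1e-6) return t2;             // ray starts inside the sphere
    return std::nullopt;
}
```

Choosing the smallest positive root gives the visible hit point, and the epsilon threshold rejects the degenerate roots that appear when the ray starts on the sphere itself.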

To calculate whether a ray intersects a plane, two equations are needed, just as in the ray-sphere case. The first one is once again equation 2.1, and the second is the definition of a plane (equation 2.4).

(\vec{p} - \vec{p}_0) \cdot \vec{n} = 0 \quad (2.4)

By combining equation 2.1 and 2.4 into equation 2.5, it is possible to find the point on the ray that lies in the plane (here \vec{l}_0 is the ray origin and \vec{p}_0 a point in the plane).

(d\,\vec{l} + \vec{l}_0 - \vec{p}_0) \cdot \vec{n} = 0 \quad (2.5)

A problem that can occur when calculating intersection points is that the actual starting position of the ray has been sampled to lie slightly below the surface. The result can be that the ray intersects the same surface it was launched from. This is illustrated in figure 2.3.

Figure 2.3: Approximation errors.

A solution to this problem is to move the ray's starting point a short distance along the new direction of the ray, according to equation 2.6,

\text{starting point} + \epsilon \cdot \text{direction} \quad (2.6)

where \epsilon is small but still able to lift the point up to the surface.

2.2 Material (BRDF)

There are many different materials that could be implemented; during this project the materials have been limited to diffuse, specular (mirror) and transparent (glass) surfaces. These three materials are discussed in detail later. When talking about materials in global illumination, the term used is the BRDF, denoted f_r(x, \psi \to \theta), which is the part of the rendering equation mentioned in the beginning of the report. The BRDF determines how much radiance the ray should reflect and in which direction, which varies depending on the material of the surface.

2.3 Diffuse

A diffuse surface has the property that it reflects light uniformly over the hemisphere, i.e. a diffuse surface looks the same from any viewing direction and the value of the BRDF is constant.

2.3.1 Sampling hemisphere

To compute the radiance at an intersection point on a diffuse surface, a random direction has to be generated. To do this, a new direction \psi is sampled over the hemisphere around the surface normal. This is done by randomly picking the azimuth angle \phi in [0, 2\pi[ and the inclination angle in [0, \pi/2]. To make sure that the ray points in the desired direction, a dot product with the normal is calculated; if the value is negative, \psi is multiplied by -1.
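A minimal sketch of such a hemisphere sample is given below. Instead of building a local coordinate frame around the normal, the sketch draws a direction on the full unit sphere and applies the dot-product flip described above, which yields the same uniform distribution over the hemisphere; the Vec3 helpers and names are illustrative only and not necessarily how the project does it.

```cpp
#include <algorithm>
#include <cmath>
#include <random>

// Minimal vector helpers (illustrative only).
struct Vec3 { double x, y, z; };
double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Uniform hemisphere sampling as in Section 2.3.1: pick a random direction and
// flip it if the dot product with the surface normal is negative.
Vec3 sampleHemisphere(Vec3 normal, std::mt19937& rng) {
    std::uniform_real_distribution<double> u(0.0, 1.0);
    const double pi = 3.14159265358979323846;
    double phi = 2.0 * pi * u(rng);              // azimuth in [0, 2*pi[
    double cosTheta = 1.0 - 2.0 * u(rng);        // uniform over the whole sphere
    double sinTheta = std::sqrt(std::max(0.0, 1.0 - cosTheta * cosTheta));
    Vec3 psi = { sinTheta * std::cos(phi), sinTheta * std::sin(phi), cosTheta };
    if (dot(psi, normal) < 0.0)                  // keep the hemisphere of the normal
        psi = Vec3{ -psi.x, -psi.y, -psi.z };
    return psi;
}
```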

2.3.2 Sampling hemisphere with cosine lobe

Even better results can be obtained by including cosine importance sampling according to equation 2.7,

p(\psi) = \frac{\cos(\psi, N_x)}{\pi} \quad (2.7)

which avoids sampling directions perpendicular to the normal and decreases the probability of generating directions that form a large angle with the normal, resulting in reduced noise in the final image. The probability of a sampled direction is lower near the horizon of the hemisphere, which is desired since such directions have little effect due to the geometry factor.

2.4 Specular reflection

Using the reflection laws from physics, the perfect reflection at a surface point x can be obtained according to figure 2.4.

Figure 2.4: Perfect reflection. Source: http://staffwww.itn.liu.se/mardi/TNCG15-2011/Lecture2.pdf

To calculate the reflected ray, the angle \alpha between the incoming ray \vec{I} and the normal \vec{N} is needed.

\vec{R} = \vec{I} - 2(\vec{I} \cdot \vec{N})\,\vec{N} \quad (2.8)

With some knowledge of linear algebra, the ray \vec{R} can be obtained without using any trigonometric functions, according to equation 2.8, which speeds up the rendering process.
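Equation 2.8 maps directly to a few lines of code. In this sketch (names and conventions are illustrative) I is the incoming ray direction and N the unit surface normal.

```cpp
// Minimal vector helpers (illustrative only).
struct Vec3 { double x, y, z; };
Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Vec3 operator*(double s, Vec3 v) { return {s * v.x, s * v.y, s * v.z}; }
double dot(Vec3 a, Vec3 b)      { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Equation 2.8: perfect mirror reflection, no trigonometric functions needed.
// I is the incoming ray direction, N the unit surface normal.
Vec3 reflect(Vec3 I, Vec3 N) {
    return I - 2.0 * dot(I, N) * N;
}
```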

2.5 Transparency

Another interesting surface is the transparent surface, which makes an object look like glass. Here additional laws have to be taken into consideration to handle both reflected and refracted rays, and also how the radiance should be distributed over the two rays.

Figure 2.5: Refraction and reflection. Source: http://staffwww.itn.liu.se/mardi/TNCG15-2011/Lecture2.pdf

2.5.1 Snell's Law

As shown in figure 2.5, the ray \vec{T} is needed in order to create a transparent object. As before there is an incoming ray \vec{I} and the angle \theta_1 between \vec{I} and the normal \vec{N}_x. By using Snell's law (equation 2.9), the angle \theta_2 can be obtained, which makes it possible to compute the refracted ray \vec{T}.

\frac{\sin(\theta_1)}{\sin(\theta_2)} = \frac{\eta_2}{\eta_1} \quad (2.9)

How much the ray is refracted depends on the materials and their refraction indices \eta_1 and \eta_2. It also depends on the order of the intersected media, e.g. from air to glass or vice versa.

2.5.2 Brewster angle

The Brewster angle is the critical angle that decides whether the ray is totally reflected or divided into one refracted and one reflected ray. This only has to be taken into consideration if \eta_1 > \eta_2: any ray with \alpha > \alpha_{brewster} is totally reflected.

2.5.3 Refracted ray

When calculating the refracted ray \vec{T}, some simplifications can be made that eliminate the trigonometric functions. Equation 2.10 shows the final expression; it has been formed by combining Snell's law with some linear algebra. Using this equation, a refracted ray can be computed.

\vec{T} = \frac{\eta_1}{\eta_2}\,\vec{I} + \vec{N}\left( \frac{\eta_1}{\eta_2}\left(-\vec{N} \cdot \vec{I}\right) - \sqrt{1 - \left(\frac{\eta_1}{\eta_2}\right)^2 \left(1 - (\vec{N} \cdot \vec{I})^2\right)} \right) \quad (2.10)

2.5.4 Radiance distribution

So far, we know how to compute the refracted and the reflected ray, but how much radiance flows through each of them? This is decided by equation 2.11.

R = \left( \frac{\eta_1 - \eta_2}{\eta_1 + \eta_2} \right)^2 \quad (2.11)

Expression 2.11 is an approximation of how much radiance should be distributed to the reflected ray, decided only by the indices of refraction. The distribution for the refracted ray is an equally simple expression (equation 2.12), also depending only on the refraction indices of the materials.

T = 1 - R = \frac{4\eta_1\eta_2}{(\eta_1 + \eta_2)^2} \quad (2.12)
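The sketch below implements equations 2.10-2.12 under the convention that I points towards the surface and N away from it. The Vec3 helpers, the names and the use of std::optional to signal total internal reflection are illustrative choices for this text, not necessarily the project's.

```cpp
#include <cmath>
#include <optional>

// Minimal vector helpers (illustrative only).
struct Vec3 { double x, y, z; };
Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec3 operator*(double s, Vec3 v) { return {s * v.x, s * v.y, s * v.z}; }
double dot(Vec3 a, Vec3 b)     { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Equation 2.10: refracted direction for incoming direction I (pointing towards
// the surface) and unit normal N (pointing away from it). Returns nothing when
// the angle exceeds the critical angle, i.e. total internal reflection.
std::optional<Vec3> refract(Vec3 I, Vec3 N, double eta1, double eta2) {
    double eta = eta1 / eta2;
    double cosI = -dot(N, I);
    double k = 1.0 - eta * eta * (1.0 - cosI * cosI);
    if (k < 0.0) return std::nullopt;               // totally reflected
    return eta * I + (eta * cosI - std::sqrt(k)) * N;
}

// Equations 2.11 and 2.12: approximate split of radiance between the reflected
// and refracted ray, depending only on the two refraction indices.
double reflectance(double eta1, double eta2) {
    double r = (eta1 - eta2) / (eta1 + eta2);
    return r * r;                                   // transmitted share is 1 - r*r
}
```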

2.6 Direct Illumination

Direct illumination sends rays out into the scene. The rays intersect with objects, and at each intersection point a number of calculations are made: whether the surface is occluded, whether there is any radiance contribution from the surface, how the surface geometry relates to the area light source, and how much light reaches the surface depending on its distance to the light source. The ray is then recursively traced further into the scene, depending on the material, and the procedure starts all over again. A more detailed explanation of the different calculations is given below.

2.6.1 Shadow ray

At each intersection point one has to decide whether the surface point is in shadow or lit by a light source. This is done by randomly sampling shadow rays towards the area light source. The surface point is lit if the shadow rays can reach the light source, as seen in figure 2.6, or in shadow if the surface is occluded by an object so that the rays cannot reach the light source.

Figure 2.6: Shadow rays.

Surface points that are totally in shadow are said to lie within the umbra. This is illustrated in figure 2.7.

Figure 2.7: Total occlusion.

The use of an area light source, however, leads to surface points that are only partially occluded by an object. The effect is smooth shadows between the totally shaded and the totally illuminated surfaces. The region where these smooth shadows occur is called the penumbra. The effect of area light sources is shown in figure 2.8.

Figure 2.8: Partially occluded. Source: http://www.mcglaun.com/eclwhatis.htm

2.6.2 Radiance contribution

Once it is determined that a surface point is lit, the radiance contribution from that surface can be calculated. The more light that reaches the surface, the more radiance is contributed by the surface.

2.6.3 Geometry factor

Surfaces that are parallel to the area light source receive much more radiance than surfaces that are perpendicular to it, which receive little radiance from the light, if any at all. This can be seen in figure 2.9.

Figure 2.9: Geometry factor.

The geometry factor is included in the project by calculating the scalar product between the area light source normal and the surface normal, which produces values in [-1.0, 1.0]. Parallel surfaces give a scalar product with magnitude 1.0 (the normals of two surfaces that face each other point towards each other, giving -1.0), while perpendicular surfaces give 0.0. Values of 0.0 or above indicate that the surfaces do not face each other; the closer the value is to -1.0, the more light the surface receives, while surface points with values of 0.0 or above receive no light.
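As an illustration of how these pieces fit together, the sketch below averages a number of shadow rays towards randomly sampled points on the area light, weighting each unoccluded sample by the geometry cosines and the distance falloff treated in the next subsection. The callbacks sampleLight and isOccluded, the parameter names and the omission of the surface's own BRDF are simplifications introduced for this text, not the project's actual code.

```cpp
#include <algorithm>
#include <cmath>
#include <functional>
#include <random>

// Minimal vector helpers (illustrative only).
struct Vec3 { double x, y, z; };
Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Vec3 operator*(double s, Vec3 v) { return {s * v.x, s * v.y, s * v.z}; }
double dot(Vec3 a, Vec3 b)     { return a.x * b.x + a.y * b.y + a.z * b.z; }

struct LightSample { Vec3 position; Vec3 normal; };  // random point on the area light

// Average several shadow rays towards the area light (Section 2.6.1). Each
// unoccluded sample is weighted by the geometry factor (the cosines at the
// surface and at the light, Section 2.6.3) and the distance factor 1/d^2
// (Section 2.6.4). The BRDF of the surface itself is omitted for brevity.
double directLight(Vec3 x, Vec3 nx, double lightRadiance, double lightArea,
                   int numShadowRays, std::mt19937& rng,
                   const std::function<LightSample(std::mt19937&)>& sampleLight,
                   const std::function<bool(Vec3, Vec3)>& isOccluded) {
    double sum = 0.0;
    for (int i = 0; i < numShadowRays; ++i) {
        LightSample ls = sampleLight(rng);
        Vec3 toLight = ls.position - x;
        double d = std::sqrt(dot(toLight, toLight));
        Vec3 w = (1.0 / d) * toLight;                     // unit direction to the light
        if (isOccluded(x + 1e-4 * nx, ls.position))       // shadow ray, epsilon offset
            continue;
        double cosSurface = std::max(0.0, dot(nx, w));         // geometry factor at x
        double cosLight   = std::max(0.0, -dot(ls.normal, w)); // ... and at the light
        sum += lightRadiance * cosSurface * cosLight / (d * d); // distance factor
    }
    return (lightArea / numShadowRays) * sum;
}
```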

2.6.4 Distance factor

The distance between a light source and a surface affects the amount of light reaching the surface: the further away a surface is from the light source, the less light it receives. Since the rays are launched in different directions from the light source, the rays of light get sparser the further away from the light source one gets. Figure 2.10 illustrates how a surface receives less light as it gets further away from the light source.

Figure 2.10: Distance factor.

2.7 Indirect Illumination

Indirect illumination focuses on making the paths as random as possible; the random paths produce a global illumination model. This part of the rendering equation has an important role in the final rendering since it adds the effect of photorealism. The whole algorithm is recursive, even for diffuse surfaces. The reason for having diffuse surfaces reflect rays is that this adds color bleeding to the scene. As mentioned in section 2.3 (Diffuse), the sampling of a ray has to be done over the hemisphere. To make it realistic, a diffuse surface has a high chance of absorbing the ray, which is decided by Russian roulette (discussed in section 2.8, Recursive stopping condition). In order for a ray to contribute radiance to a pixel, the recursion of the ray has to end up intersecting the light source. No radiance flows through rays that do not end up at the light source; the final radiance contribution of such a ray becomes zero, which contributes to noise in the image. The noise is reduced by multi-sampling the image. With indirect illumination the caustics effect can be obtained, due to the random paths and the reflection laws.

2.8 Recursive stopping condition

As rays are traced recursively through the scene, they become less important to the final result. If no stopping condition is introduced, the rays will bounce around an infinite number of times, calculating values with little importance to the scene. A natural situation for a ray to stop its recursive search for light-emitting surfaces is when one of the surfaces along the traced path absorbs the light. The first method used to terminate rays is Russian roulette, a method based on the fact that different materials have different chances of absorbing light; diffuse materials have a higher chance of absorption than specular materials. The chance of absorbing a ray is represented by a number \alpha in [0.0, 1.0], where 0.0 represents no chance of absorption and 1.0 is guaranteed to absorb the ray. A random number is then generated in [0.0, 1.0]. The ray gets reflected or refracted as long as the random number is below the reflection value (1.0 - \alpha); otherwise it is absorbed. Russian roulette is, however, combined with a second stopping condition: a depth variable keeps track of how many times a ray has been reflected or refracted, and if it exceeds a predefined value the recursion stops. The purpose of this second condition is to guarantee termination of the recursion.
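A compact sketch of this termination test (the names are illustrative; the project's implementation may look different):

```cpp
#include <random>

// Sketch of Section 2.8: decide whether a ray path is terminated. 'absorption'
// is the material's absorption probability (alpha in the text) and 'maxDepth'
// the hard recursion limit.
bool terminatePath(double absorption, int depth, int maxDepth, std::mt19937& rng) {
    if (depth >= maxDepth) return true;          // hard stop, guarantees termination
    std::uniform_real_distribution<double> u(0.0, 1.0);
    // Russian roulette: the ray survives only while the random number stays
    // below the reflection value (1 - alpha); otherwise it is absorbed.
    return u(rng) >= 1.0 - absorption;
}
```

In the recursive trace function this test would run at every bounce, before a new hemisphere direction is sampled.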

2.9 Anti-aliasing

When sending out only one sample per virtual pixel, it is likely that the final image will suffer from aliasing. The reason for the aliasing is explained in figure 2.11.

Figure 2.11: Anti-aliasing.

Two different directions through the same virtual pixel will give different results in the final image. In figure 2.11 a ray is launched through the virtual pixel: if direction 1 is selected the returned value will be red, while if direction 2 is selected the returned value will be blue. This leads to pixels that are either perfectly red or perfectly blue, which results in an aliased image. The solution is multi-sampling, a technique that sends multiple rays through the same virtual pixel and uses the average of the rays' results. If two rays were launched into the scene as in figure 2.11, the final result would be the average of blue and red, i.e. purple.

2.10 Gamma Correction

The reason for using gamma correction is that the human eye is not adapted to linear luminance functions, which is what the final rendering without gamma correction happens to be: most of the illumination data is stored in highlights and not so much in the shadow values, to which humans are most sensitive. To compensate for this, a gamma correction function is applied to the pixel values. The function is adapted to suit the properties of the eye, and equation 2.13 is a simplification of that.

Pixel_O = Intensity_{max} \cdot Pixel_I^{\gamma} \quad (2.13)

The first term in equation 2.13 is a constant, the maximum value that a pixel can have. It is followed by the old pixel value (the newly rendered pixel value) raised to the power \gamma, where \gamma is chosen to be 1/2.2.
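A sketch of how the two steps can be combined per pixel is shown below. The jittered sample positions, the single radiance channel and the radianceFor callback that stands in for the actual ray tracer are simplifications introduced for this illustration, not the project's code.

```cpp
#include <cmath>
#include <functional>
#include <random>

// Sketch of Sections 2.9-2.10: average several rays per virtual pixel, then
// apply the gamma correction of equation 2.13 before writing the pixel value.
// 'radianceFor' returns the radiance of one ray launched through a randomly
// jittered position inside pixel (px, py).
double renderPixel(int px, int py, int samplesPerPixel, double intensityMax,
                   const std::function<double(double, double)>& radianceFor,
                   std::mt19937& rng) {
    std::uniform_real_distribution<double> jitter(0.0, 1.0);
    double sum = 0.0;
    for (int s = 0; s < samplesPerPixel; ++s)
        sum += radianceFor(px + jitter(rng), py + jitter(rng));
    double pixel = sum / samplesPerPixel;              // multi-sampling average
    return intensityMax * std::pow(pixel, 1.0 / 2.2);  // equation 2.13, gamma = 1/2.2
}
```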

Chapter 3

Results and benchmarks

3.1 Monte Carlo ray-tracer

The image in figure 3.1 was rendered in 4 hours and 12 minutes with 4 threads at a resolution of 1024x768. It was sampled with 20,000 samples per pixel with cosine importance sampling over the hemisphere.

Figure 3.1: Final image; the same image is shown on the title page.

In figure 3.1 one can see four spheres: two of them are diffuse reflectors, one is reflective (the big sphere on the left) and one is transparent (the big sphere on the right). In the reflective sphere one can see the reflections of the room, the backside of the two diffuse spheres and the reflection of the light source. This gives visual proof that the rays shot towards the reflective sphere are totally reflected. The transparent sphere gives visual proof that rays shot from the view point are both reflected and refracted: the corner between the floor, the back wall and the blue wall is seen through the transparent sphere, and it is also possible to see the reflections of the light source and the reflective sphere in the transparent sphere. Other interesting effects in the scene are color bleeding and caustics. These effects are hard to see in figure 3.1 and are therefore treated in two separate sections below. The soft shadows in figure 3.1 are a result of the multiple shadow rays sent towards the light source, as discussed in section 2.6.1 (Shadow ray).

3.2 Color bleed

The color bleed in figures 3.2 and 3.3 is a result of the indirect light, where the path to the light is randomly generated. A path from the floor to the light source that includes a wall will make the floor partially affected by the wall included in its path.

Figure 3.2: Color bleeding for the red wall.

Figure 3.2 shows the color bleed from the red wall. The randomly generated path from the floor to the light source has included the red wall, and the floor has thereby been partially coloured red.

Figure 3.3: Color bleeding for the blue wall.

The floor in figure 3.3 is blue because the blue wall has been included in the path of the ray. The bright area in the figure is a caustic and is discussed in the next section.

3.3 Caustics

Figure 3.4 shows the caustics effects; the image is a crop from figure 3.1. As seen, there are two caustic effects: one directly under the glass ball, which is caused by the light source, and a second one (on the wall) created by the mirror ball reflecting light from the light source.

Figure 3.4: Caustics effects.

The area below the sphere is brighter because the path of the ray ends up at the light source, i.e. the importance factor is high. The ray from the view point is sent towards the floor, where it gets reflected towards the sphere. On the sphere surface the ray gets both reflected and refracted; however, the refracted ray carries more importance since it reaches the light source through non-absorptive surfaces.

3.4 Hemisphere Sampling

The image in figure 3.5 was rendered in 11 hours with 1 thread at a resolution of 800x600. It was sampled with 20,000 samples per pixel with cosine importance sampling over the hemisphere.

Figure 3.5: With cosine importance sampling.

The image in figure 3.6 was rendered in 6 hours and 6 minutes with 1 thread at a resolution of 800x600. It was sampled with 20,000 samples per pixel without cosine importance sampling over the hemisphere.

Figure 3.6: Without cosine importance sampling.

3.5 Multi Sampling

Figure 3.7: 10 samples per pixel.

Figure 3.8: 50 samples per pixel.

Figure 3.9: 100 samples per pixel.

Figure 3.10: 500 samples per pixel.

Figure 3.11: 5000 samples per pixel.

In figure 3.7, 10 rays per pixel have been used. Most likely, fewer than 10 of those rays end up at the light source, resulting in a noisy image. The solution is to send out a large number of rays; the rays that reach the light source will still be fewer than the number sent out, but since so many rays are sent, the final image contains much less noise. This can be seen in figure 3.11.

3.6 Benchmark

Table 3.1 shows benchmarks for the figures in section 3.5 (Multi Sampling). The image resolution is 400x300, rendered with 4 threads.

Samples per pixel | Rendering time (min)
10                | 0.02
50                | 0.1
100               | 0.2
500               | 1
5000              | 9.6

Table 3.1: Multi-sampling's impact on rendering time.

Chapter 4

Discussion

Looking at the final results, we can say that to get good results from our renderer, a lot of time needs to be spent on the computations. To get rid of most of the noise, one has to use multi-sampling. The images that were rendered with 20,000 samples per pixel took around 6 hours (with 1 thread) to render, but the time is worth spending since the result is a photorealistic image. To speed up the computations, other methods such as photon mapping could be implemented.

Focusing on the algorithm, the first thing that needs to be improved is the transparent surfaces and the radiance distribution. The equation used for the radiance distribution is just an approximation of the radiance that flows through each reflected and/or refracted ray. If Fresnel's equations were used instead, a much more accurate radiance distribution would be obtained.

Another part of our scene in need of improvement is the area between the roof and the back wall, where a white line occurs. The reason for its appearance is that the epsilon offset moves some of the ray origins above the roof; when a ray starts above the roof, the chance of intersecting the light sphere is high (since the part of the light sphere above the roof is large), resulting in a bright surface point.

The use of OpenMP has in some cases created bad artifacts. We believe that this is a consequence of using a random number generator seeded with the current time: at first the threads are in sync and therefore generate the same random numbers, but over time fewer artifacts occur as the calculations of the different threads drift apart in time.

The scene consists only of implicit surfaces, spheres and planes, with the corresponding intersection tests. To make the scene more interesting, an object loader could be implemented, meaning that shapes could be made in any 3D program and then imported into our program. With an object loader implemented, a ray-triangle intersection test and bounding boxes would also be needed.
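One common way to avoid the correlated random numbers described above is to give every OpenMP thread its own generator, seeded with its thread number. The following minimal sketch illustrates the idea; it is not the code used in the project.

```cpp
#include <cstdio>
#include <omp.h>
#include <random>

int main() {
    #pragma omp parallel
    {
        // One generator per thread, seeded with the thread id, so no two
        // threads produce the same sequence regardless of the start time.
        std::mt19937 rng(1234u + static_cast<unsigned>(omp_get_thread_num()));
        std::uniform_real_distribution<double> u(0.0, 1.0);

        #pragma omp for
        for (int pixel = 0; pixel < 8; ++pixel)
            std::printf("thread %d, pixel %d, sample %f\n",
                        omp_get_thread_num(), pixel, u(rng));
    }
    return 0;
}
```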

Bibliography

[1] James T. Kajiya. The Rendering Equation. California Institute of Technology, Pasadena, CA 91125. Last collected: 2011-12-08.

[2] François Sillion, Claude Puech. A General Two-Pass Method Integrating Specular and Diffuse Reflection. Laboratoire d'Informatique de l'École Normale Supérieure, U.R.A. 1327, CNRS. Last collected: 2011-12-08.

[3] Henrik Wann Jensen. Global Illumination using Photon Maps. Department of Graphical Communication, The Technical University of Denmark. Last collected: 2011-12-08.

[4] Ingo Wald, Heiko Friedrich, Gerd Marmitt, Philipp Slusallek. Faster Isosurface Ray Tracing Using Implicit KD-Trees. IEEE Computer Society. Last collected: 2011-12-08.

[5] Philip Dutré, Kavita Bala, Philippe Bekaert. Advanced Global Illumination, Second Edition. A K Peters, Ltd., 888 Worcester Street, Suite 230, Wellesley, MA 02482.