Cloud Rendering using 3D Textures

Edwin Vane
Computer Graphics Lab, University of Waterloo

Abstract

This paper presents a comparison of three different lighting techniques for clouds rendered using 3D textures. Before one can experiment with lighting, however, methods for generating and rendering volumetric cloud data are required. One method for generating data using Perlin noise [10] is described, as is a method for rendering the generated volumetric cloud data as a 3D texture. The analysis of each lighting method is based on performance as well as visual appeal (which in turn depends on a method's ability to handle physical light interaction). Each method is accompanied by implementation details that provide the basis for the comparisons. Finally, the lighting methods are compared with each other.

1 Introduction

Clouds are an important part of our natural environment. We are accustomed to seeing them in the sky nearly every day in various states. This should also be true of online graphics applications that model the sky, such as flight simulators or video games. The problem is that rendering clouds quickly and realistically enough for such online applications is nontrivial. Various assumptions can simplify the problem, such as using simple billboards for clouds or displaying the whole sky as a single picture with clouds included. Such simplifications have limitations, so it is interesting to look at other methods.

Two things are required for realistic clouds: realistic data describing the clouds (produced by simulation, perhaps) and lighting algorithms. With respect to data generation, simulation will not be covered. The focus of this work is to experiment with different methods for cloud lighting, so realistic data generation is not as important; a simpler procedural method for generating static cloud data is used instead. The implementations are not restricted to this procedurally generated data, however. Eventually the rendering and lighting methods discussed here could be used to visualize the output of a simulation.

There are many methods available for lighting clouds, each taking into account different physical effects. Behrens proposed an algorithm for producing shadows in volumetric data that has been extended to clouds here. Harris's method tries to take into account actual physical interactions of lit clouds by modelling light scattering. Since clouds are represented by a scalar field of liquid water density values, a gradient-based lighting scheme can also be considered. To serve as a basis for comparison, an unlit method for cloud rendering is also implemented.

1.1 Previous Work

Harris et al. [8] introduced a method for simulating clouds via fluid dynamics on the GPU. Their algorithm produces realistic motion of clouds for use in online applications. The work also deals with the rendering aspect of the problem, as they mention a method for rendering the 3D liquid water density information provided by their simulation. The method is an extension to 3D textures of an earlier paper by Harris and Lastra [9] in which clouds were rendered as particle systems. This extension is discussed in more detail in Harris's PhD dissertation [7]. The two methods are very similar and are both based on an even earlier method introduced by Dobashi et al. [4]. The common feature among all of these algorithms is that they approximate light scattering within the cloud. For instance, Dobashi et al. [4] approximated single scattering toward the viewer, whereas Harris and Lastra [9] approximated multiple forward scattering along the light vector and single forward scattering toward the viewer. All of these related algorithms operate in two passes. The first pass calculates scattering along the light vector and creates a lighting volume. This lighting volume depends on the light vector, which means changes in the light vector require a recomputation of the lighting volume. The information in the lighting volume is then applied during rendering (to calculate single forward scattering, as in Harris's method, for example).

A simpler approach was suggested by Behrens and Ratering [3]. Their algorithm is concerned only with volumetric shadowing and not with lighting calculations of any kind. As with the previously mentioned algorithms, the Behrens algorithm is carried out in two passes. The first pass calculates a shadowed 3D texture based on the opacity variations in the cloud. This has the effect of calculating how much light reaches each point in the 3D texture when the directional light source is fixed in a certain direction relative to the cloud. The second pass then renders this shadowed 3D texture using a rendering algorithm like the one described below in Section 3. As with the previous algorithms, if the light direction changes, the shadowing of the 3D texture must be recomputed.

Perlin noise [10] is a favourite for generating visually appealing procedural clouds using spatially coherent noise. However, Perlin noise does not generate the large empty spaces one would expect between clumps of cloud. To solve this problem, an exponential function is applied to pure Perlin noise to define sharper edges for the clouds and produce empty space. The idea was borrowed from [5].

Gradient techniques for lighting volumetric data exist in many forms. However, they are generally used for lighting opaque materials. The algorithms implemented here are similar to, but much simpler than, those found in Arvo's work on iso-contours [2].

2 Data Generation

As mentioned previously, the best way to get realistic cloud volume data is to carry out a numerical simulation that generates water density information over a domain. Simulation as a data source is too involved for experiments with rendering and lighting methods. Instead, a technique based on Perlin noise is used for generating data. Perlin's method produces spatially coherent noise which can produce visually appealing clouds when the parameters are tuned correctly. There are several parameters for the algorithm which allow it to be used in many situations other than producing clouds. The most useful parameters are the number of harmonics and the weight applied when summing the harmonics together. Higher harmonics produce a finer level of detail than lower ones. When multiple harmonics are summed, noise with both high and low frequencies is produced.
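As a minimal sketch of this harmonic summation, the following code sums weighted octaves where each successive harmonic doubles in frequency and is attenuated by 1/α (α = 2 in the configuration used later). The noise3() basis function here is a hypothetical hash-based stand-in, not true Perlin noise; a real implementation would smoothly interpolate gradient noise as in [10].

```cpp
#include <cmath>
#include <cstdint>

// Hypothetical stand-in for one harmonic of noise: repeatable
// pseudo-random values in [0, 1) at integer lattice points. A real
// implementation would use interpolated Perlin gradient noise [10].
static double noise3(int x, int y, int z) {
    uint32_t n = static_cast<uint32_t>(x) * 73856093u ^
                 static_cast<uint32_t>(y) * 19349663u ^
                 static_cast<uint32_t>(z) * 83492791u;
    n = (n << 13) ^ n;
    n = n * (n * n * 15731u + 789221u) + 1376312589u;
    return (n & 0x7fffffffu) / 2147483648.0;
}

// Sum `harmonics` (>= 1) octaves: frequency doubles and amplitude is
// divided by alpha each step, yielding noise with both low and high
// frequency content. Result is normalized back into [0, 1).
double layeredNoise(double x, double y, double z,
                    int harmonics, double alpha) {
    double sum = 0.0, amplitude = 1.0, frequency = 1.0, norm = 0.0;
    for (int i = 0; i < harmonics; ++i) {
        sum += amplitude * noise3(int(x * frequency), int(y * frequency),
                                  int(z * frequency));
        norm += amplitude;
        amplitude /= alpha;   // higher harmonics contribute less...
        frequency *= 2.0;     // ...at finer levels of detail
    }
    return sum / norm;
}
```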

Figure 1: Effect of the number of harmonics on Perlin noise. The left image uses one harmonic; the right image uses four.

Figure 2: Effect of the exponential parameters on Perlin noise. The left image uses c = 150 and s = .98 while the right uses c = 125 and s = .93 (both images use four harmonics).

When used for clouds, pure Perlin noise tends to produce overcast-type clouds: a domain where the water density is greater than 0 at almost every point. To produce more realistic clouds with definite edges and open space between clumps of cloud, an exponential function is applied to the pure Perlin noise. The function has the effect of producing sharper edges for clouds. A simple modification to an exponential function allows for control of the empty space between clouds and produces the following two equations:

    e = max(p − c, 0)        (1)
    d = 255 (1 − s^e)        (2)

where p is the output of the Perlin noise function, transformed into the range [0...255], and d is the resulting cloud density. c is a cloud cover constant which controls how much empty space exists between clouds and can range from 0 (for no empty space) to 255 (all empty space). s is a cloud sharpness constant which controls how sharp the density falloff at the edges of clouds is.
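A direct transcription of equations (1) and (2) is given below, under the assumption (following the cloud-cover technique in [5]) that equation (2) rescales the result back into [0, 255]; the function name is illustrative only.

```cpp
#include <algorithm>
#include <cmath>

// Equations (1) and (2): p is a Perlin noise value already mapped into
// [0, 255], c the cloud cover constant, s the cloud sharpness constant.
float cloudDensity(float p, float c, float s) {
    float e = std::max(p - c, 0.0f);          // (1): cover threshold
    return 255.0f * (1.0f - std::pow(s, e));  // (2): sharp density falloff
}
```

With c = 150 and s = .98 (the values used in Section 2.1), the density ramps up smoothly where the noise exceeds the cover threshold and is exactly zero elsewhere, producing the empty space between clouds.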
2.1 Implementation Details

The above algorithm for generating volume data was implemented as a separate program that produces input data for the main visualization program. The algorithm takes approximately 350 ms to run for the test volume. Figures 1 and 2 demonstrate some of the appealing results that can be generated (in the 2D domain at least). For use with the algorithms below, a 3D texture was created to represent this domain. Values of 150 and .98 were used for c and s respectively. Four harmonics were added together with a weight of 2 (this weight is the α value Perlin refers to [10]).

3 Rendering

With cloud volume data generated by the modified Perlin algorithm from Section 2, a rendering algorithm is now required. To visualize volumes represented as 3D textures, slicing algorithms are generally used. This involves taking a slice of the 3D texture and pasting it onto a quad. The quads are rendered close to each other and blended to render the entire volume. Increasing the number of quads increases visual quality and better represents the data; however, it also implies more work for the shaders discussed below. In my implementation, 64 slices were used, one for each layer of depth in the 3D test volume.

Slicing algorithms are classified into those that use view aligned slices and those that use object aligned slices. Object aligned slices are drawn perpendicular to the axes of the volume; view aligned slices are drawn perpendicular to the viewing direction. For object aligned slices, the data for one slice is contiguous in texture memory and is thus more cache friendly. However, three copies of the volume are required: one copy with slices aligned perpendicular to each of the three volume axes. View aligned slices require only one copy of the texture, but the voxels on one slice will probably not be contiguous in texture memory and are therefore not very cache friendly.

There are visual differences between the two methods as well. With object aligned slices, an orientation is chosen based on which volume face is most visible from the current view direction. As the object axes turn away from the viewer, the space between the planes becomes more visible, causing the viewer to perceive the volume as a stack of slices. This effect can be seen in Figure 3. As a new face becomes more dominant, the slice orientation will flip, which is visibly noticeable. View oriented slices do not suffer from these problems.

Figure 3: Example of the visual artefacts associated with object oriented slices.

3.1 Implementation Details

Object oriented slices were chosen as the rendering method. Although this method has visible problems, the added benefit of texture cache coherency removes the issue of cache coherency from the performance improvement phase of each lighting algorithm. All the lighting methods other than the simple unlit model have their performance restricted by other issues, so the benefit of cache coherency is not likely to be noticed anyway. No formal experiments with view aligned planes were carried out to verify this hypothesis.

A dot product test is used to determine which face of the volume is most visible to the user; I shall refer to this most visible face as the dominant orientation. The largest absolute value of the dot product of the view direction with the three perpendicular face normals determines this dominant orientation. A sketch of this test and of the slice rendering follows.
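The sketch below shows the dominant-axis test and object aligned slice rendering for one orientation. It assumes an active OpenGL context with the cloud 3D texture bound and alpha blending enabled; the function names and conventions are illustrative, not the exact implementation.

```cpp
#include <GL/gl.h>
#include <cmath>

// Pick the dominant orientation: the volume face whose normal has the
// largest |dot product| with the view direction. Since the face normals
// are the coordinate axes, this reduces to the largest |component|.
// Returns 0, 1, or 2 for the x, y, or z axis.
int dominantAxis(const float viewDir[3]) {
    int axis = 0;
    for (int i = 1; i < 3; ++i)
        if (std::fabs(viewDir[i]) > std::fabs(viewDir[axis]))
            axis = i;
    return axis;
}

// Render n object aligned slices perpendicular to the z axis of a
// volume occupying [-1, 1]^3; the other two orientations are analogous.
// Assumes the viewer is on the +z side so slices go back to front; for
// the opposite side the loop would run in reverse.
void drawSlicesZ(int n) {
    for (int i = 0; i < n; ++i) {
        float t = (i + 0.5f) / n;      // texture-space depth of the slice
        float z = 2.0f * t - 1.0f;     // object-space depth in [-1, 1]
        glBegin(GL_QUADS);
        glTexCoord3f(0, 0, t); glVertex3f(-1, -1, z);
        glTexCoord3f(1, 0, t); glVertex3f( 1, -1, z);
        glTexCoord3f(1, 1, t); glVertex3f( 1,  1, z);
        glTexCoord3f(0, 1, t); glVertex3f(-1,  1, z);
        glEnd();
    }
}
```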

The orientation of the slices is determined by the dominant orientation as well. However, this approach causes sudden, noticeable orientation changes. Smoothly blending between one orientation and another could solve this problem, but blending would require at least twice as many slices to be rendered, which implies a performance decrease. The performance drop was not investigated for two reasons:

- The quality of the rendering method is not important for the case of experimenting with and visualizing the lighting of cloud volumes.
- Work by Engel [6] can greatly increase the quality of the rendering while decreasing the number of view oriented slices required. That method would be the preferred way to improve rendering quality when quality is a concern.

The actual texture that is put onto a slice depends on the lighting model. Each model has its own simple shader for doing texture mapping. In most cases straight OpenGL could have been used for texture mapping, but shaders make the implementation a little easier and provide more control over texture mapping. However, there is some overhead associated with using vertex and fragment programs, and especially with using Sh.

4 Lighting

With a data generation method and a way of visualizing the data, lighting methods can be investigated. Below are four lighting methods, three of which were implemented. The fourth (Harris's method) is included for comparison.

4.1 The No Lighting Method

The first lighting method requires no lighting at all. This method provides the basis of comparison for the gradient method and the Behrens method. In the final implementation, Sh was used only for doing simple texture mapping. The input texture represents alpha values throughout the volume, calculated from the liquid water density values. A base colour of white is used and the alpha texture controls the opacity, producing clouds. The light position does not affect this algorithm at all.

4.1.1 Analysis

From a performance standpoint, using a fragment shader for such trivial work is probably not a good choice. Sh incurs more overhead than straight OpenGL (as investigated in Section 4.3). However, Sh abstracts away many of the details of OpenGL and thus makes the implementation a little easier. Future enhancements requiring shaders will also be easier to add. Frame rates are as high as 60 fps on the NVIDIA architecture and 130 fps on the ATI architecture for the test cloud volume, so performance is already quite high anyway.

Having the input volume represent strictly the alpha values and not the colour values of the volume was an attempt to save texture memory and reduce transfer times between the CPU and GPU. Since the shader for the Behrens method performs faster than the no-lighting shader, it would actually be faster, from a shader point of view, to use an input texture that represents both colour and alpha values. See Section 4.3.2 for details on the performance of the Behrens shader.

From a visual point of view, this method is satisfactory for basic visualization of the data. The high frame rates allow the user to manipulate the volume with precision, making it easier to perceive form. However, since there is no lighting, contours on the surface of individual clouds within the volume are hard or impossible to make out.
Since interaction with light is not taken into account, the cloud volume does not look very realistic either. Figure 4 shows an example of an unlit cloud volume. Notice how difficult it is to determine depth or small details on the surfaces of clouds facing the viewer.

Figure 4: Example of an unlit volume, complete with frame rate counter.

4.2 Gradient Method

Since clouds are represented as liquid water density over a three-dimensional domain, one can calculate the gradient of this scalar field. The gradient operator produces a vector at every point in a scalar field: the vector orientation is the direction of greatest change, and the magnitude is the amount of change. As a consequence, the gradient operator produces normals to surfaces. Surfaces are represented by areas where values are similar, which means gradient vectors will point away from the surface as normals do. The normals can then be used in lighting calculations. The 3D gradient operator is defined for Cartesian coordinates as follows:

    ∇u(x, y, z) = ( (∂u/∂x) x̂, (∂u/∂y) ŷ, (∂u/∂z) ẑ )        (3)

where x̂, ŷ, and ẑ are orthonormal basis vectors.

4.2.1 Implementation Details

Equation (3) is not useful on its own for an implementation; a discrete version of the gradient operator is required. The discrete version of (3) is:

    (∇u_x)_{i+1/2, j, k} = (u_{i+1,j,k} − u_{i,j,k}) / h
    (∇u_y)_{i, j+1/2, k} = (u_{i,j+1,k} − u_{i,j,k}) / h        (4)
    (∇u_z)_{i, j, k+1/2} = (u_{i,j,k+1} − u_{i,j,k}) / h

where ∇u = (∇u_x, ∇u_y, ∇u_z) and u_{i,j,k} is the discrete value of u (the voxel) at position (i, j, k) in the texture. Note that the equations above define half values for indices into the texture. If the integer indices are considered the centers of voxels, then values at the edges of the voxels are required.
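A minimal CPU sketch of equation (4), anticipating the implementation choices described next (edge values by linear interpolation between neighbouring voxels, and a one-voxel shell of zero density around the volume), might look as follows; the storage layout and names are illustrative only.

```cpp
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };

// Gradient of a w*h*d density volume stored flat in x-fastest order.
// Each edge value is the linear interpolation (average) of the two
// adjacent voxels, and the two edge values of a voxel are differenced
// over the edge spacing h, per equation (4). This mirrors the seven
// texture lookups mentioned below: the center voxel plus 6 neighbours.
Vec3 gradientAt(const std::vector<float>& u, int w, int h, int d,
                int i, int j, int k, float hSpacing) {
    // Sampling with the zero shell: out-of-range voxels have density 0.
    auto at = [&](int x, int y, int z) -> float {
        if (x < 0 || y < 0 || z < 0 || x >= w || y >= h || z >= d)
            return 0.0f;
        return u[(std::size_t(z) * h + y) * w + x];
    };
    float c = at(i, j, k);
    Vec3 g;
    g.x = (0.5f * (at(i + 1, j, k) + c) - 0.5f * (c + at(i - 1, j, k))) / hSpacing;
    g.y = (0.5f * (at(i, j + 1, k) + c) - 0.5f * (c + at(i, j - 1, k))) / hSpacing;
    g.z = (0.5f * (at(i, j, k + 1) + c) - 0.5f * (c + at(i, j, k - 1))) / hSpacing;
    return g;
}
```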

In my implementation the edge values are calculated as a linear interpolation between voxel (i, j, k) and its neighbours. h is the distance between the edges of a voxel, which is calculated from the dimensions of the texture (in voxels) and the actual space the volume takes up. For all lighting implementations, the volume was given spatial dimensions slightly smaller than the original Perlin noise domain; the reasoning was to prevent magnification artefacts arising from such a low texture resolution.

This scheme works for all voxels except those at the edges, since edge voxels are missing neighbours. Here, it is assumed that there is a one-voxel-wide shell of 0 density around the volume. This allows the interpolation scheme mentioned above to be used at all points.

Since the same operation is carried out at all points, the entire calculation is ideal for use as a stream program. Using the parallel SIMD structure of the fragment units, the calculation of the gradient as a stream program should be faster in parallel than in serial on the CPU (ignoring overhead for setup, etc.). The performance comparison was not formally tested, however. For the small volume used in these tests, performing the gradient operation on the CPU would likely be faster; this belief is based on the fact that the overhead for performing the operation on the GPU would outweigh any speed benefit offered by parallel execution. However, Amdahl's Law [1] implies that parallel speedup may manifest if the volume size is increased.

Sh streams were used to perform the gradient calculation. Input to the stream program consists of a single channel where each record is a three-tuple: one record for an index to each voxel in the cloud volume. The stream program performs seven texture lookups for the interpolation, calculates the gradient vector for a voxel, and then writes to a single output channel. Each record in the output channel is a four-tuple: three components for storing the gradient for that voxel, and the last for storing the alpha value at that voxel.

Since the output from this algorithm will not necessarily be in the range [0...1], a texture that supports unclamped values must be used. This is solved by using the NVIDIA texture rectangle extension. However, interpolation of such textures is not currently supported in hardware, so the shader that calculates the texture for each slice during rendering must do the interpolation itself, in addition to calculating diffuse lighting at every point in the volume. Interpolation in this case is bilinear per slice (no interpolation between slices is performed). This requires four texture lookups plus extra calculations to make sure the right voxels are chosen for interpolation for every fragment.

4.2.2 Analysis

The calculation of the gradient via a stream program takes 185.5 ms on average. This is a one-time cost since the gradient need only be calculated once. After the calculation, the volume can be manipulated and the light can move without any further recalculation (other than the work done by the shader texturing the slices). The speed of this algorithm could probably be improved if the Sh cond() functions were removed: on the NVIDIA architecture, early experiments indicated that conditional assignments are relatively slow. Future generations of hardware may improve the performance of conditional assignments, hence improving the performance of the gradient operation.
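For clarity, the per-fragment diffuse term that the slice shader evaluates from the stored gradient can be sketched on the CPU as below. This is an illustration only, not the actual Sh fragment program; the light direction is assumed to be already inverse-transformed into model space (an optimization discussed next) and normalized, and the sign convention of the normal depends on whether density increases into the cloud.

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Hypothetical diffuse term using the stored gradient as a shading
// normal: normalize, dot with the model-space light direction, and
// clamp back-facing results to zero.
float diffuseFromGradient(const Vec3& g, const Vec3& lightDir) {
    float len = std::sqrt(g.x * g.x + g.y * g.y + g.z * g.z);
    if (len == 0.0f)
        return 0.0f;  // zero gradient: no surface, shades to black
    float nDotL = (g.x * lightDir.x + g.y * lightDir.y +
                   g.z * lightDir.z) / len;
    return nDotL > 0.0f ? nDotL : 0.0f;
}
```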
The one-time cost of the gradient calculation is not as noticeable as the performance of the shader responsible for texturing the slices. The frame rate is approximately 3.75 fps, a three-fold improvement over the initial implementation, which ran at less than 1 fps. The improvement was achieved by the following changes:

- Instead of transforming the normal for every fragment into view space to do lighting, the light was inverse-transformed into model space. This eliminates many matrix multiplications and subsequent normalizations.
- The initial interpolation code required several floor(), frac(), and abs() calls, which seem to be quite slow. After some refactoring and streamlining, this was reduced to a single floor() and frac().
- The number of temporary program variables was reduced.

After these optimizations, the fragment program was 55 instructions long. The performance of rendering in the gradient method is still bound by fragment shader performance. The only difference in the rendering phase between the unlit method and the gradient method is that the gradient fragment program is longer and involves more time-consuming Sh functions (frac(), floor(), lerp(), and multiple texture lookups as opposed to a single texture lookup). This comparison leads one to conclude that the gradient fragment shader is the bottleneck for performance.

Figure 5: Examples of a cloud volume lit by the gradient method. The light location is specified by the bright pink dot in the left image. In the right image, the light is above and to the left of the volume.

Figure 5 shows an example of the gradient lighting method in action. The lighting is very harsh and does not produce very realistic clouds. The lighting method is very simple and has the following problems:

- Diffuse calculations are carried out at every point in the volume, so a surface oriented away from the light will be completely black. This is correct for opaque objects, but clouds are not opaque: they exhibit internal light scattering because of their transparency, so no part of a real cloud is ever completely black. Even though the opacity information is the same between the unlit and gradient methods, the clouds in the gradient method appear more solid for this reason. Harris's method, discussed in Section 4.4, attempts to model internal scattering of light in clouds.
- Lighting calculations are carried out inside clouds as well. Although there is no concept of occlusion, clouds can appear darker than they should if normals inside the clouds are oriented away from the light or if the normal is the zero vector. Remember, gradient vectors can be the zero vector in regions where there is no change. Since the clouds are partly transparent, the dark insides of clouds show through the surface, making the cloud look even darker.
- In the large regions where alpha values are 0, the gradient vectors all have magnitude zero. If these voxels were actually opaque, they would be black. This could have negative effects on interpolation, causing the edges of clouds to be darker than they should be.

- There is no concept of internal shadows in this lighting method. Internal shadows are very important for the lighting of clouds, since it is internal shadows that give clouds their appearance. Both the Behrens method (discussed in Section 4.3) and Harris's method attempt to model internal shadows.

The method is very good for visualizing the contours of clouds, however. By moving the light around, the high contrast between shades of white, grey, and black makes it easy to see the small-scale detail of the cloud.

4.3 Behrens Method

Behrens proposed a method for casting shadows inside a volume: data in one part of the volume casts shadows on data in other parts. The method does not take into account any surface characteristics or lighting and is not physically accurate as a result. It was meant for applying shadows to general volumetric data, such as medical images or CAD models, to provide depth cues; it is applied to clouds here to gauge its usefulness for online cloud rendering. Although the method is not physically correct, it does take into account two key aspects of shadows with respect to semi-transparent material:

- Opaque objects cast stronger shadows than semi-transparent ones. This makes sense since one would expect more light to pass through a semi-transparent object (thus lightening the shadow) than through an opaque object.
- Shadows that fall on opaque objects should appear stronger than the same shadow on a semi-transparent material. This follows from the fact that for a semi-transparent object, there is less reflecting material for the shadow to fall on.

The algorithm for calculating shadows works with 3D textures represented as object oriented slices. The slices are processed in order starting with the one closest to the light. The orientation of the slices is determined by the same dot product test mentioned in Section 3.1, using the light direction instead of the view direction. The light is assumed to be directional, so the light direction is the same at every point of a single face. Algorithm 1 describes the Behrens method. In this algorithm, p′_i is the final shadowed version of slice p_i, and the operation x → y should be interpreted as "blend x into y".

Algorithm 1: Behrens' Algorithm
    1: p′_1 ← p_1
    2: s_1 ← no shadow
    3: for each slice p_i, i > 1 do
    4:     s_i ← p_{i−1} → s_{i−1}
    5:     p′_i ← s_i → p_i
    6: end for

The first blend operation adds the shadow cast by slice p_{i−1} to the accumulated shadow cast by all previous slices. The second blend operation applies this shadow to the current slice to produce the shadowed version of the slice. The algorithm can be interpreted as a sequential process that accumulates the shadow cast by each slice and applies the collected shadow to the next slice. Note that the results are independent of the view direction, but not of the light direction: if the volume data or the light source direction changes, the shadows must be recalculated.

If the light direction is not perpendicular to a face, then shadows are cast at an angle through the volume. In order to use Algorithm 1, the stack of slices needs to be skewed to simulate light moving at an angle through the volume. This requires calculating a skew amount based on the light direction and then blending slices at a relative offset instead of aligning two slices exactly before blending. Figure 6 illustrates this change.

Figure 6: Illustration of how the stack of slices is skewed when light is not perpendicular to a volume face. The left figure shows the stack with an oblique light direction (l). The right figure shows how the stack is blended so that the light can be assumed to come from above.

4.3.1 Implementation Details

Steps 4 and 5 have alpha blending equations associated with them. If α = 1 is interpreted as a strong shadow and α = 0 as no shadow, then the blending function for step 4 is:

    α = 1 − (1 − α_s)(1 − α_d)        (5)

where α_d is the alpha value in s_{i−1} and α_s is the alpha value of the slice p_{i−1}. Remember that α_s = 1 where the slice is opaque and α_s = 0 where the slice is transparent. The blending function for step 5 is:

    c = (1 − α_d α_s) c_d,   α = α_d        (6)

where α_d is the alpha value of slice p_i and α_s is the alpha value of the shadow slice s_i. These blending functions cannot be implemented in OpenGL directly, so a modified algorithm is proposed by Behrens. Figure 7 summarizes the new algorithm, which is listed in detail in Algorithm 2.

Algorithm 2: Behrens' Algorithm, OpenGL adaptation
    1: for each slice p_i, i ∈ [0..n] do
    2:     Buffer 1 ← p_i
    3:     s_i ← p_i → s_i, in Buffer 3, using (7)
    4:     p′_i ← s_i → p_i, in Buffer 1, using (8)
    5:     s_{i+1} ← p_i → s_i, in Buffer 2, using (9)
    6:     copy Buffer 2 to Buffer 3
    7:     read the shadowed slice p′_i out of Buffer 1
    8: end for

The three blending operations in Algorithm 2 each use a different blending equation. Step 3 uses:

    α = α_s α_d        (7)

which corresponds to the OpenGL blending function glBlendFunc(GL_ZERO, GL_SRC_ALPHA). Step 4 uses:

    c = α_s c_d,   α = α_d        (8)

which corresponds to glBlendFunc(GL_ZERO, GL_SRC_ALPHA) again; however, to make sure the alpha value of the result is unchanged, glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_FALSE) must be used. Finally, step 5 uses:

    α = (1 − α_s) α_d        (9)

which corresponds to glBlendFunc(GL_ZERO, GL_ONE_MINUS_SRC_ALPHA).
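Before continuing with the OpenGL adaptation, the ideal equations (5) and (6) of Algorithm 1 can be made concrete with a small CPU reference. This is a sketch for clarity on normalized values in [0, 1] (with α = 1 meaning a strong shadow, as above); it ignores the skewing needed for oblique light directions and is not the OpenGL adaptation.

```cpp
#include <cstddef>
#include <vector>

// One grey value and one opacity per texel; all slices are assumed to
// have the same texel count and to be ordered closest-to-light first.
struct Slice {
    std::vector<float> colour;
    std::vector<float> alpha;
};

void shadowSlices(std::vector<Slice>& p) {
    std::size_t texels = p.front().alpha.size();
    std::vector<float> shadow(texels, 0.0f);       // s_1 = no shadow
    for (std::size_t i = 1; i < p.size(); ++i) {   // p[0] stays unshadowed
        for (std::size_t t = 0; t < texels; ++t) {
            // Equation (5): accumulate the shadow cast by slice p_{i-1}.
            shadow[t] = 1.0f - (1.0f - p[i - 1].alpha[t]) * (1.0f - shadow[t]);
            // Equation (6): darken slice p_i by the accumulated shadow,
            // scaled by the slice's own opacity; alpha is left unchanged.
            p[i].colour[t] *= 1.0f - p[i].alpha[t] * shadow[t];
        }
    }
}
```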
Figure 7: Illustration of the steps and buffers for the OpenGL adaptation of Behrens' Algorithm.

For these equations to work in OpenGL, the meaning of the alpha values in a shadow slice needs to be inverted: α = 0 now means a strong shadow and α = 1 means no shadow. However, equations (7) and (8) as given in Behrens' paper are incorrect: they result in fully lit slices (that is, slices with no shadows cast on them) that are semi-transparent shadowing themselves. For example, consider (7) with α_d = 1 (no accumulated shadow) and α_s = .8 (a semi-transparent slice). This gives α = .8. Note that the α in (7) becomes the α_s of equation (8). If c_d = 1 in (8) (meaning the slice is white), then the resulting colour is .8, which is incorrect. This problem was corrected by looking again at (6):

    c = (1 − α_d α_s) c_d
      = (1 − α_d (1 − α_s)) c_d        [invert meaning of α_s]
      = (1 − (α_d − α_s α_d)) c_d        (10)

α_s has its meaning inverted since (6) treats α_s = 1 as a strong shadow, whereas for the OpenGL adaptation α_s = 1 should mean no shadow. (10) can now be split into two separate blending equations to replace equations (7) and (8). The new version of (8) is:

    c = (1 − α_s) c_d,   α = α_d        (11)

which can be implemented in OpenGL using glBlendFunc(GL_ZERO, GL_ONE_MINUS_SRC_ALPHA). The replacement for (7) is:

    α = α_s − α_s α_d = (1 − α_d) α_s        (12)

which can be implemented in OpenGL using glBlendFunc(GL_ONE_MINUS_DST_ALPHA, GL_ZERO). With equations (12) and (11) replacing (7) and (8) respectively, correct shadowing behaviour is finally observed.

As suggested by Figure 7, all of these operations are carried out in the framebuffer. For further performance improvement, as many operations as possible were done on the graphics card without transferring data to and from the host. Setup and cleanup for Algorithm 2 is summarized by the following:

1. Download the unshadowed texture to the graphics card.
2. Carry out Algorithm 2, copying shadowed slices to an output texture in texture memory.
3. At the end of the algorithm, download the output texture to the host.

Steps 3 through 6 use the OpenGL glCopyPixels() function to perform blending and copying. Step 7 uses glCopyTexSubImage3D() to copy the shadowed slice to the output texture in texture memory. The output of Algorithm 2 is an RGBA texture (whereas the input texture is only an alpha texture). Therefore, the shader for applying this output texture to slices for rendering is even simpler than the unlit version: the fragment shader consists of a single texture lookup.

4.3.2 Analysis

Behrens claimed a frame rate of 3.7 fps on an SGI Octane [3]. With the final implementation, a frame rate of approximately 5 fps was observed on the NVIDIA architecture when recomputing the shadowed volume; the ATI architecture showed frame rates around 2 fps. Achieving this level of performance required several steps and many tests. The initial implementation used Sh to render the initial quad into the framebuffer (i.e., to perform step 2 of Algorithm 2). Sh was chosen to simplify texture mapping and to provide greater control of texture lookups, since a one-texel-to-one-framebuffer-pixel mapping is required. It was soon realized that the overhead of Sh and of having vertex and fragment programs enabled was much too costly: frame rates around 0.12 fps and less were observed at first. The texture mapping performed by Sh was then replaced by straight OpenGL, which greatly improved performance.
The original paper [3] seemed to suggest copying each shadowed slice off the graphics card at the end of every iteration of Algorithm 2. The method mentioned previously was implemented instead: shadowed slices were copied to a temporary texture on the graphics card, and at the end of the algorithm there was a single download of the entire shadowed volume to the host. This was an attempt to eliminate the overhead of copying data off the video card multiple times.
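A minimal sketch of this on-card copy (step 7 of Algorithm 2) is shown below, assuming an OpenGL 1.2 context with the output 3D texture bound and the shadowed slice occupying the lower-left w×h pixels of the read buffer; the function name is hypothetical.

```cpp
#include <GL/gl.h>

// Copy the shadowed slice in the framebuffer into layer `slice` of the
// bound output 3D texture, entirely on the graphics card. Only one
// full-volume download to the host happens at the end of the algorithm.
void copySliceToVolume(int slice, int w, int h) {
    glCopyTexSubImage3D(GL_TEXTURE_3D, 0,  // target, mip level
                        0, 0, slice,       // x, y, z offsets in the texture
                        0, 0,              // lower-left of framebuffer region
                        w, h);             // region size
}
```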

Figure 8: Examples of volumes lit with Behrens' Algorithm. The left image shows the silver lining effect as well as too-dark cloud centers. The right image shows clouds casting shadows on other clouds.

Figure 9: Example of how colour banding occurs in Behrens' Algorithm at oblique light directions.

Another optimization was to disable texturing after step 2 was complete. This avoids costly operations that would otherwise still be applied during the pixel blending performed by glCopyPixels() in the remaining steps. These optimization steps improved performance to the current levels: an almost 42-fold improvement was realized between the initial and final implementations.

The performance of the final implementation is controlled by the dimensions of the cloud volume (in voxels). This could suggest three things:

- The algorithm is raster limited. That is, the speed and efficiency of commands like glCopyPixels() and glCopyTexSubImage3D() control the speed of the algorithm; as they are forced to copy more data around, they become slower.
- There are unseen, extraneous, and unwanted movements of texture data between the CPU and GPU every iteration. Such movements of data would negatively affect performance.
- The algorithm is texture bandwidth limited. As the slice changes size, transfer rates between texture memory and cache will change if the texture size exceeds the bandwidth of the channel. In this case, one would expect to see a sudden drop in performance as the slice size crosses a threshold of bandwidth capability.

In experiments, the frame rate continued to increase as the volume was made smaller. This continuous increase, even for small textures, indicates either of the first two options; if the second option is true, the problem likely lies with the driver. The first option seems to make the most sense given the evidence. The shader for applying the output texture to slices during rendering can be removed from consideration in the above analysis, since the actual number of fragments was kept constant between the different volume sizes. The shader for displaying the results of the Behrens method is actually faster than the shader for the unlit method: when not forcing recalculation due to a change in light position, frame rates are approximately 80 fps on the NVIDIA platform and 140 fps on ATI.

Visually, the Behrens method is an improvement over the unlit and gradient methods. The soft shadows add greatly to the realism of the cloud even if the method is not physically plausible. The realism stems from the two features of shadows, mentioned in the method overview, that the method takes into account. An important result of the algorithm is that when the clouds are viewed from the side opposite the light direction, they appear to have dark centers with light outsides; this is the silver lining effect seen in reality. A drawback is that cloud centers are still too dark. This problem is a result of the fact that light scattering is not considered in this algorithm; Harris's method attempts to model a simplification of multiple light scattering within clouds to address it. See Figure 8 for an example of too-dark cloud centers.

The offset of slices within the framebuffer can have only pixel-sized accuracy. Therefore, when the light direction is oblique to the surface of a face and skewing is required, the slice offsets are determined in a manner similar to the Bresenham line algorithm. Figure 9 shows an example of how slices are lit with an oblique lighting direction.
This figure also demonstrates what happens when the slices are visualized as a volume again: bands of shadowed and unshadowed colour appear, which has an unappealing effect on the volume. The problem could be solved if sub-pixel accuracy in the framebuffer could be obtained. Even without sub-pixel accuracy, there are two tricks that have yet to be tried:

- Polygon anti-aliasing.
- Convolution filters applied to slices using the imaging pipeline in OpenGL. The imaging pipeline is not supported in hardware on all architectures, however (e.g., NVIDIA).

4.4 Harris Method

This method ([7], [8], [9]) is similar in idea to the Behrens method in that the 3D input texture is traversed in slices from the slice closest to the light to the furthest slice. In this case, the volume is transformed into light space so that it lies on the Z axis of the light space coordinate system. The slices are light oriented and parallel to the X-Y plane in light space. Therefore, the offset that was required in the Behrens method is not needed, and the banding artefact that the offset produced is also removed. This algorithm expects the input volume to represent liquid water densities and not opacities as in the previous algorithms.

The main idea of this algorithm is to solve two recurrence relations that describe the scattering of light in clouds:

    I_k = I_0,                          k = 1
    I_k = g_{k−1} + T_{k−1} I_{k−1},    2 ≤ k ≤ N        (13)

    E_k = S_k + T_k E_{k−1},    1 ≤ k ≤ N        (14)

In these equations, T_k is the transparency of particle k. Equation (13) handles multiple forward scattering along the light direction: starting at the edge of the cloud closest to the light, the intensity of the light reaching particle k (I_k) is the light scattered from particle k − 1 (g_{k−1}) plus the intensity of the light that shines through particle k − 1. Equation (14) handles single scattering toward the viewer: the light exiting any particle k (E_k) is the light it scatters (S_k) plus any light it does not absorb. The algorithms for solving these recurrence equations for 3D textures are described in some detail in Harris's dissertation [7].

The algorithm for solving (13) is executed as a preprocess. It only needs to be executed once for a particular volume and light direction.
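To make the chaining of (13) and (14) concrete, the following schematic CPU sketch solves both recurrences along one column of particles ordered closest-to-light first. The per-particle terms T_k, g_k, and S_k are taken as given here (Harris derives them from albedo, optical depth, and a phase function [7]); this illustrates the recurrences only, not Harris's GPU implementation.

```cpp
#include <cstddef>
#include <vector>

struct Particle {
    float T;  // transparency T_k of particle k
    float g;  // light scattered onward along the light direction (g_k)
    float S;  // light scattered toward the viewer (S_k)
};

// Equation (13): intensity of light reaching each particle, marching
// away from the light; I0 is the intensity at the lit edge of the cloud.
std::vector<float> lightIntensity(const std::vector<Particle>& p, float I0) {
    std::vector<float> I(p.size());
    I[0] = I0;  // the k = 1 case of (13)
    for (std::size_t k = 1; k < p.size(); ++k)
        I[k] = p[k - 1].g + p[k - 1].T * I[k - 1];
    return I;
}

// Equation (14): light exiting each particle toward the viewer,
// accumulated in the order the slices are composited (Eprev seeds E_0).
std::vector<float> exitingLight(const std::vector<Particle>& p, float Eprev) {
    std::vector<float> E(p.size());
    for (std::size_t k = 0; k < p.size(); ++k) {
        E[k] = p[k].S + p[k].T * Eprev;
        Eprev = E[k];
    }
    return E;
}
```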
Changes in the view direction have no effect on the results of the preprocess. The preprocess produces a secondary 3D texture called an oriented light volume (OLV), which is used in the algorithm that solves (14). This second algorithm uses the OLV and the original input volume to produce a texture to apply to the volume slices.

The preprocess and rendering algorithms are very similar to each other, and to a degree to the Behrens method. Multi-texturing and texture matrices are used to traverse the input volume one slice at a time to produce the OLV. Where the Behrens method used the framebuffer and multiple pixel copies and blends to compute shadow casting, Harris's method uses multi-texturing to read both the input volume and the OLV and blend them in one step. Texture matrices are used to index the previous slice in the OLV so that light scattering information is carried forward. Results are blended into the framebuffer, which is initially cleared to white, then read back and stored in the current slice of the OLV.

The rendering algorithm uses multi-texturing again to combine the input volume with the OLV to texture the volume slices for visualization. A texture matrix is used this time to transform texture coordinates from OLV space to world space to produce the correct combination of OLV and input volume. As view oriented slices are rendered back to front using normal alpha blending, (14) is solved. A phase function for calculating anisotropic lighting is applied per fragment as well. As with the Behrens method, the light is assumed to be directional, and any change in the volume or light direction requires the preprocess to be executed again.

4.4.1 Analysis

Since this algorithm was not implemented, a performance analysis cannot be given. However, some conjectures based on the literature and on my results from the other algorithms can be put forth. This algorithm requires view oriented slices for both the preprocess and rendering algorithms. As mentioned in Section 3, such slices offer visual improvements but possible performance problems due to texture cache coherency. Since the preprocess only needs to be executed when the light direction or volume data changes, its performance is not likely to be a problem: in reality, the light direction (i.e., the location of the sun in the sky) changes continuously but slowly, and the cloud volume changes slowly as well. Therefore the preprocess would not need to be executed as often as the rendering phase. As for the rendering algorithm, performance is likely to be bound by the calculations used to solve (14) and by the fragment program required to calculate the phase function for anisotropic lighting, so texture cache coherency will not be a large problem.

Two 3D textures need to be maintained, which implies the use of more texture memory. The OLV is usually of a coarser resolution than the actual input volume, since changes in the lighting occur at a low frequency within clouds; Harris states that he uses OLVs of a resolution half that of the input texture [8]. Keep in mind that this algorithm uses view oriented slices, so multiple copies of the input volume and the OLV are not required as with object oriented slices. Therefore, Harris's method will likely require less texture memory than those previously discussed.

The visual results are very impressive and make up for the need for extra texture memory. Of the algorithms considered in this paper, Harris's method is the only one that considers actual light transport within clouds.
The result is that shadows like those in the Behrens method are observed, but clouds are no longer unnaturally dark. As an added benefit, the anisotropic phase function provides the silver lining effect seen with the Behrens method. In general, clouds lit with this algorithm look much more realistic than with the other algorithms considered here. See Figure 10 for an example.

Figure 10: Example of clouds lit with Harris's method.

5 Future Work

Although the main goal of this paper was to experiment with different lighting methods for clouds, the original idea grew out of a desire to implement Harris's "Simulation of Cloud Dynamics on Graphics Hardware" [8]. The first future task along this path is to fully implement the Harris method discussed above and to come up with an efficient framework for handling volumes where the data changes over time (i.e., animation). After the visualization method is complete, cloud simulation [8] will be implemented. A major obstacle is the implementation of a fast Poisson equation solver. Harris implemented such a solver using the Red-Black Gauss-Seidel method, but it would be interesting to try other methods for solving the Poisson equation, such as domain decomposition. Any new method would then need to be implemented on graphics hardware. With the Poisson solver, the rest of the simulation can be implemented.

6 Conclusions

Four different algorithms for lighting clouds have been explored, each with its own strengths and weaknesses. There are many other algorithms taking into account different aspects of the nature of clouds as well; however, not all of them are well suited for online rendering, which is what is of interest here. The unlit model performs well but is not well suited for realistic lighting of clouds. The gradient method performs the worst and is also not good for cloud lighting, since it produces unrealistic images due to the assumptions made. However, the implemented gradient method is very simple, and if improvements can be made, both performance and visual appearance may improve. The Behrens method performs well and produces very satisfactory results; it would probably be well suited for use in interactive applications even if the lighting effects are not completely correct. Harris's method would appear to be the best of the four algorithms in visual appearance. It would be excellent for generating images of static clouds, since the performance of the light scattering calculations is not as important if they only need to be performed once. The suitability of Harris's method for dynamic clouds is unknown. The Harris and Behrens methods are very similar, but the Harris method appears more computationally expensive. More experimentation is required to make any conclusions about the Harris method.

Of the three implemented algorithms, the Behrens method is the preferred method for cloud lighting based on the visual and performance results.

A Hardware Details

The NVIDIA hardware used for all tests was a GeForce FX. The ATI hardware used was a Radeon.

References

[1] G. M. Amdahl. Validity of the single processor approach to achieving large scale computing capabilities. In AFIPS Conference Proceedings, pages 483-485, 1967.

[2] James Arvo and Kevin Novins. Iso-contour volume rendering. In Proceedings of the 1994 Symposium on Volume Visualization. ACM Press, 1994.

[3] Uwe Behrens and Ralf Ratering. Adding shadows to a texture-based volume renderer. In Symposium on Volume Visualization, pages 39-46, 1998.

[4] Yoshinori Dobashi et al. A simple, efficient method for realistic animation of clouds. In Proceedings of ACM SIGGRAPH, pages 19-28, 2000.

[5] Hugo Elias. Cloud cover [online, cited April 16, 2004]. Available from: elias/models/m_clouds.htm.

[6] Klaus Engel, Martin Kraus, and Thomas Ertl. High-quality pre-integrated volume rendering using hardware-accelerated pixel shading. In Proceedings of the ACM SIGGRAPH/Eurographics Workshop on Graphics Hardware, pages 9-16, 2001.

[7] Mark J. Harris. Real-Time Cloud Simulation and Rendering. PhD thesis, University of North Carolina, 2003.

[8] Mark J. Harris et al. Simulation of cloud dynamics on graphics hardware. In Proceedings of Graphics Hardware, 2003.

[9] Mark J. Harris and Anselmo Lastra. Real-time cloud rendering. In Eurographics Proceedings, pages 76-84, 2001.

[10] Ken Perlin. An image synthesizer. In Proceedings of the 12th Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH), pages 287-296. ACM Press, 1985.

Shadow Casting in World Builder. A step to step tutorial on how to reach decent results on the creation of shadows Shadow Casting in World Builder A step to step tutorial on how to reach decent results on the creation of shadows Tutorial on shadow casting in World Builder 3.* Introduction Creating decent shadows in

More information

Computer Graphics (CS 563) Lecture 4: Advanced Computer Graphics Image Based Effects: Part 2. Prof Emmanuel Agu

Computer Graphics (CS 563) Lecture 4: Advanced Computer Graphics Image Based Effects: Part 2. Prof Emmanuel Agu Computer Graphics (CS 563) Lecture 4: Advanced Computer Graphics Image Based Effects: Part 2 Prof Emmanuel Agu Computer Science Dept. Worcester Polytechnic Institute (WPI) Image Processing Graphics concerned

More information

Self-shadowing Bumpmap using 3D Texture Hardware

Self-shadowing Bumpmap using 3D Texture Hardware Self-shadowing Bumpmap using 3D Texture Hardware Tom Forsyth, Mucky Foot Productions Ltd. TomF@muckyfoot.com Abstract Self-shadowing bumpmaps add realism and depth to scenes and provide important visual

More information

Applications of Explicit Early-Z Z Culling. Jason Mitchell ATI Research

Applications of Explicit Early-Z Z Culling. Jason Mitchell ATI Research Applications of Explicit Early-Z Z Culling Jason Mitchell ATI Research Outline Architecture Hardware depth culling Applications Volume Ray Casting Skin Shading Fluid Flow Deferred Shading Early-Z In past

More information

Projective Shadows. D. Sim Dietrich Jr.

Projective Shadows. D. Sim Dietrich Jr. Projective Shadows D. Sim Dietrich Jr. Topics Projective Shadow Types Implementation on DirectX 7 HW Implementation on DirectX8 HW Integrating Shadows into an engine Types of Projective Shadows Static

More information

3D Programming. 3D Programming Concepts. Outline. 3D Concepts. 3D Concepts -- Coordinate Systems. 3D Concepts Displaying 3D Models

3D Programming. 3D Programming Concepts. Outline. 3D Concepts. 3D Concepts -- Coordinate Systems. 3D Concepts Displaying 3D Models 3D Programming Concepts Outline 3D Concepts Displaying 3D Models 3D Programming CS 4390 3D Computer 1 2 3D Concepts 3D Model is a 3D simulation of an object. Coordinate Systems 3D Models 3D Shapes 3D Concepts

More information

There are two lights in the scene: one infinite (directional) light, and one spotlight casting from the lighthouse.

There are two lights in the scene: one infinite (directional) light, and one spotlight casting from the lighthouse. Sample Tweaker Ocean Fog Overview This paper will discuss how we successfully optimized an existing graphics demo, named Ocean Fog, for our latest processors with Intel Integrated Graphics. We achieved

More information

Point based global illumination is now a standard tool for film quality renderers. Since it started out as a real time technique it is only natural

Point based global illumination is now a standard tool for film quality renderers. Since it started out as a real time technique it is only natural 1 Point based global illumination is now a standard tool for film quality renderers. Since it started out as a real time technique it is only natural to consider using it in video games too. 2 I hope that

More information

Shadow Volumes Revisited

Shadow Volumes Revisited Shadow Volumes Revisited Stefan Roettger, Alexander Irion, and Thomas Ertl University of Stuttgart, Faculty of Computer Science Visualization and Interactive Systems Group! ABSTRACT We present a method

More information

Advanced Real- Time Cel Shading Techniques in OpenGL Adam Hutchins Sean Kim

Advanced Real- Time Cel Shading Techniques in OpenGL Adam Hutchins Sean Kim Advanced Real- Time Cel Shading Techniques in OpenGL Adam Hutchins Sean Kim Cel shading, also known as toon shading, is a non- photorealistic rending technique that has been used in many animations and

More information

White Paper. Solid Wireframe. February 2007 WP _v01

White Paper. Solid Wireframe. February 2007 WP _v01 White Paper Solid Wireframe February 2007 WP-03014-001_v01 White Paper Document Change History Version Date Responsible Reason for Change _v01 SG, TS Initial release Go to sdkfeedback@nvidia.com to provide

More information

Simpler Soft Shadow Mapping Lee Salzman September 20, 2007

Simpler Soft Shadow Mapping Lee Salzman September 20, 2007 Simpler Soft Shadow Mapping Lee Salzman September 20, 2007 Lightmaps, as do other precomputed lighting methods, provide an efficient and pleasing solution for lighting and shadowing of relatively static

More information

CEng 477 Introduction to Computer Graphics Fall 2007

CEng 477 Introduction to Computer Graphics Fall 2007 Visible Surface Detection CEng 477 Introduction to Computer Graphics Fall 2007 Visible Surface Detection Visible surface detection or hidden surface removal. Realistic scenes: closer objects occludes the

More information

Project report Augmented reality with ARToolKit

Project report Augmented reality with ARToolKit Project report Augmented reality with ARToolKit FMA175 Image Analysis, Project Mathematical Sciences, Lund Institute of Technology Supervisor: Petter Strandmark Fredrik Larsson (dt07fl2@student.lth.se)

More information

Render-To-Texture Caching. D. Sim Dietrich Jr.

Render-To-Texture Caching. D. Sim Dietrich Jr. Render-To-Texture Caching D. Sim Dietrich Jr. What is Render-To-Texture Caching? Pixel shaders are becoming more complex and expensive Per-pixel shadows Dynamic Normal Maps Bullet holes Water simulation

More information

graphics pipeline computer graphics graphics pipeline 2009 fabio pellacini 1

graphics pipeline computer graphics graphics pipeline 2009 fabio pellacini 1 graphics pipeline computer graphics graphics pipeline 2009 fabio pellacini 1 graphics pipeline sequence of operations to generate an image using object-order processing primitives processed one-at-a-time

More information

MASSIVE TIME-LAPSE POINT CLOUD RENDERING with VR

MASSIVE TIME-LAPSE POINT CLOUD RENDERING with VR April 4-7, 2016 Silicon Valley MASSIVE TIME-LAPSE POINT CLOUD RENDERING with VR Innfarn Yoo, OpenGL Chips and Core Markus Schuetz, Professional Visualization Introduction Previous Work AGENDA Methods Progressive

More information

Chapter 7 - Light, Materials, Appearance

Chapter 7 - Light, Materials, Appearance Chapter 7 - Light, Materials, Appearance Types of light in nature and in CG Shadows Using lights in CG Illumination models Textures and maps Procedural surface descriptions Literature: E. Angel/D. Shreiner,

More information

TSBK03 Screen-Space Ambient Occlusion

TSBK03 Screen-Space Ambient Occlusion TSBK03 Screen-Space Ambient Occlusion Joakim Gebart, Jimmy Liikala December 15, 2013 Contents 1 Abstract 1 2 History 2 2.1 Crysis method..................................... 2 3 Chosen method 2 3.1 Algorithm

More information

Screen Space Ambient Occlusion TSBK03: Advanced Game Programming

Screen Space Ambient Occlusion TSBK03: Advanced Game Programming Screen Space Ambient Occlusion TSBK03: Advanced Game Programming August Nam-Ki Ek, Oscar Johnson and Ramin Assadi March 5, 2015 This project report discusses our approach of implementing Screen Space Ambient

More information

Physically-Based Laser Simulation

Physically-Based Laser Simulation Physically-Based Laser Simulation Greg Reshko Carnegie Mellon University reshko@cs.cmu.edu Dave Mowatt Carnegie Mellon University dmowatt@andrew.cmu.edu Abstract In this paper, we describe our work on

More information

Deferred Rendering Due: Wednesday November 15 at 10pm

Deferred Rendering Due: Wednesday November 15 at 10pm CMSC 23700 Autumn 2017 Introduction to Computer Graphics Project 4 November 2, 2017 Deferred Rendering Due: Wednesday November 15 at 10pm 1 Summary This assignment uses the same application architecture

More information

graphics pipeline computer graphics graphics pipeline 2009 fabio pellacini 1

graphics pipeline computer graphics graphics pipeline 2009 fabio pellacini 1 graphics pipeline computer graphics graphics pipeline 2009 fabio pellacini 1 graphics pipeline sequence of operations to generate an image using object-order processing primitives processed one-at-a-time

More information

The Terrain Rendering Pipeline. Stefan Roettger, Ingo Frick. VIS Group, University of Stuttgart. Massive Development, Mannheim

The Terrain Rendering Pipeline. Stefan Roettger, Ingo Frick. VIS Group, University of Stuttgart. Massive Development, Mannheim The Terrain Rendering Pipeline Stefan Roettger, Ingo Frick VIS Group, University of Stuttgart wwwvis.informatik.uni-stuttgart.de Massive Development, Mannheim www.massive.de Abstract: From a game developers

More information

GPU-Accelerated Deep Shadow Maps

GPU-Accelerated Deep Shadow Maps GPU-Accelerated Deep Shadow Maps for Direct Volume Rendering Markus Hadwiger, Andrea Kratz, Christian Sigg*, Katja Bühler VRVis Research Center, Vienna *ETH Zurich Andrea Kratz Motivation High-quality

More information

SAMPLING AND NOISE. Increasing the number of samples per pixel gives an anti-aliased image which better represents the actual scene.

SAMPLING AND NOISE. Increasing the number of samples per pixel gives an anti-aliased image which better represents the actual scene. SAMPLING AND NOISE When generating an image, Mantra must determine a color value for each pixel by examining the scene behind the image plane. Mantra achieves this by sending out a number of rays from

More information

Volume Ray Casting Neslisah Torosdagli

Volume Ray Casting Neslisah Torosdagli Volume Ray Casting Neslisah Torosdagli Overview Light Transfer Optical Models Math behind Direct Volume Ray Casting Demonstration Transfer Functions Details of our Application References What is Volume

More information

Rendering Grass Terrains in Real-Time with Dynamic Lighting. Kévin Boulanger, Sumanta Pattanaik, Kadi Bouatouch August 1st 2006

Rendering Grass Terrains in Real-Time with Dynamic Lighting. Kévin Boulanger, Sumanta Pattanaik, Kadi Bouatouch August 1st 2006 Rendering Grass Terrains in Real-Time with Dynamic Lighting Kévin Boulanger, Sumanta Pattanaik, Kadi Bouatouch August 1st 2006 Goal Rendering millions of grass blades, at any distance, in real-time, with:

More information

CS427 Multicore Architecture and Parallel Computing

CS427 Multicore Architecture and Parallel Computing CS427 Multicore Architecture and Parallel Computing Lecture 6 GPU Architecture Li Jiang 2014/10/9 1 GPU Scaling A quiet revolution and potential build-up Calculation: 936 GFLOPS vs. 102 GFLOPS Memory Bandwidth:

More information

Atmospheric Reentry Geometry Shader

Atmospheric Reentry Geometry Shader Atmospheric Reentry Geometry Shader Robert Lindner Introduction In order to simulate the effect of an object be it an asteroid, UFO or spacecraft entering the atmosphere of a planet, I created a geometry

More information

COMP Preliminaries Jan. 6, 2015

COMP Preliminaries Jan. 6, 2015 Lecture 1 Computer graphics, broadly defined, is a set of methods for using computers to create and manipulate images. There are many applications of computer graphics including entertainment (games, cinema,

More information

Scalable multi-gpu cloud raytracing with OpenGL

Scalable multi-gpu cloud raytracing with OpenGL Scalable multi-gpu cloud raytracing with OpenGL University of Žilina Digital technologies 2014, Žilina, Slovakia Overview Goals Rendering distant details in visualizations Raytracing Multi-GPU programming

More information

Hardware Shading: State-of-the-Art and Future Challenges

Hardware Shading: State-of-the-Art and Future Challenges Hardware Shading: State-of-the-Art and Future Challenges Hans-Peter Seidel Max-Planck-Institut für Informatik Saarbrücken,, Germany Graphics Hardware Hardware is now fast enough for complex geometry for

More information

Non-Linearly Quantized Moment Shadow Maps

Non-Linearly Quantized Moment Shadow Maps Non-Linearly Quantized Moment Shadow Maps Christoph Peters 2017-07-30 High-Performance Graphics 2017 These slides include presenter s notes for your convenience. 1 In this presentation we discuss non-linearly

More information

GUERRILLA DEVELOP CONFERENCE JULY 07 BRIGHTON

GUERRILLA DEVELOP CONFERENCE JULY 07 BRIGHTON Deferred Rendering in Killzone 2 Michal Valient Senior Programmer, Guerrilla Talk Outline Forward & Deferred Rendering Overview G-Buffer Layout Shader Creation Deferred Rendering in Detail Rendering Passes

More information

Surface shading: lights and rasterization. Computer Graphics CSE 167 Lecture 6

Surface shading: lights and rasterization. Computer Graphics CSE 167 Lecture 6 Surface shading: lights and rasterization Computer Graphics CSE 167 Lecture 6 CSE 167: Computer Graphics Surface shading Materials Lights Rasterization 2 Scene data Rendering pipeline Modeling and viewing

More information

Bringing Hollywood to Real Time. Abe Wiley 3D Artist 3-D Application Research Group

Bringing Hollywood to Real Time. Abe Wiley 3D Artist 3-D Application Research Group Bringing Hollywood to Real Time Abe Wiley 3D Artist 3-D Application Research Group Overview > Film Pipeline Overview and compare with Games > The RhinoFX/ATI Relationship > Ruby 1 and 2 The Movies > Breakdown

More information

Generating and Rendering Procedural Clouds in Real Time on

Generating and Rendering Procedural Clouds in Real Time on Generating and Rendering Procedural Clouds in Real Time on Programmable 3D Graphics Hardware M Mahmud Hasan ReliSource Technologies Ltd. M Sazzad Karim TigerIT (Bangladesh) Ltd. Emdad Ahmed Lecturer, NSU;Graduate

More information

Volume Rendering with libmini Stefan Roettger, April 2007

Volume Rendering with libmini Stefan Roettger, April 2007 Volume Rendering with libmini Stefan Roettger, April 2007 www.stereofx.org 1. Introduction For the visualization of volumetric data sets, a variety of algorithms exist which are typically tailored to the

More information

2D rendering takes a photo of the 2D scene with a virtual camera that selects an axis aligned rectangle from the scene. The photograph is placed into

2D rendering takes a photo of the 2D scene with a virtual camera that selects an axis aligned rectangle from the scene. The photograph is placed into 2D rendering takes a photo of the 2D scene with a virtual camera that selects an axis aligned rectangle from the scene. The photograph is placed into the viewport of the current application window. A pixel

More information

Volume Shadows Tutorial Nuclear / the Lab

Volume Shadows Tutorial Nuclear / the Lab Volume Shadows Tutorial Nuclear / the Lab Introduction As you probably know the most popular rendering technique, when speed is more important than quality (i.e. realtime rendering), is polygon rasterization.

More information

GPU-based Volume Rendering. Michal Červeňanský

GPU-based Volume Rendering. Michal Červeňanský GPU-based Volume Rendering Michal Červeňanský Outline Volume Data Volume Rendering GPU rendering Classification Speed-up techniques Other techniques 2 Volume Data Describe interior structures Liquids,

More information

Practical Shadow Mapping

Practical Shadow Mapping Practical Shadow Mapping Stefan Brabec Thomas Annen Hans-Peter Seidel Max-Planck-Institut für Informatik Saarbrücken, Germany Abstract In this paper we propose several methods that can greatly improve

More information

Visualizer An implicit surface rendering application

Visualizer An implicit surface rendering application June 01, 2004 Visualizer An implicit surface rendering application Derek Gerstmann - C1405511 MSc Computer Animation NCCA Bournemouth University OVERVIEW OF APPLICATION Visualizer is an interactive application

More information

Chapter 10 Computation Culling with Explicit Early-Z and Dynamic Flow Control

Chapter 10 Computation Culling with Explicit Early-Z and Dynamic Flow Control Chapter 10 Computation Culling with Explicit Early-Z and Dynamic Flow Control Pedro V. Sander ATI Research John R. Isidoro ATI Research Jason L. Mitchell ATI Research Introduction In last year s course,

More information

S U N G - E U I YO O N, K A I S T R E N D E R I N G F R E E LY A VA I L A B L E O N T H E I N T E R N E T

S U N G - E U I YO O N, K A I S T R E N D E R I N G F R E E LY A VA I L A B L E O N T H E I N T E R N E T S U N G - E U I YO O N, K A I S T R E N D E R I N G F R E E LY A VA I L A B L E O N T H E I N T E R N E T Copyright 2018 Sung-eui Yoon, KAIST freely available on the internet http://sglab.kaist.ac.kr/~sungeui/render

More information

Deep Opacity Maps. Cem Yuksel 1 and John Keyser 2. Department of Computer Science, Texas A&M University 1 2

Deep Opacity Maps. Cem Yuksel 1 and John Keyser 2. Department of Computer Science, Texas A&M University 1 2 EUROGRAPHICS 2008 / G. Drettakis and R. Scopigno (Guest Editors) Volume 27 (2008), Number 2 Deep Opacity Maps Cem Yuksel 1 and John Keyser 2 Department of Computer Science, Texas A&M University 1 cem@cemyuksel.com

More information

Emissive Clip Planes for Volume Rendering Supplement.

Emissive Clip Planes for Volume Rendering Supplement. Emissive Clip Planes for Volume Rendering Supplement. More material than fit on the one page version for the SIGGRAPH 2003 Sketch by Jan Hardenbergh & Yin Wu of TeraRecon, Inc. Left Image: The clipped

More information

CS452/552; EE465/505. Clipping & Scan Conversion

CS452/552; EE465/505. Clipping & Scan Conversion CS452/552; EE465/505 Clipping & Scan Conversion 3-31 15 Outline! From Geometry to Pixels: Overview Clipping (continued) Scan conversion Read: Angel, Chapter 8, 8.1-8.9 Project#1 due: this week Lab4 due:

More information

Raycasting. Ronald Peikert SciVis Raycasting 3-1

Raycasting. Ronald Peikert SciVis Raycasting 3-1 Raycasting Ronald Peikert SciVis 2007 - Raycasting 3-1 Direct volume rendering Volume rendering (sometimes called direct volume rendering) stands for methods that generate images directly from 3D scalar

More information

Volumetric Particle Shadows. Simon Green

Volumetric Particle Shadows. Simon Green Volumetric Particle Shadows Simon Green Abstract This paper describes an easy to implement, high performance method for adding volumetric shadowing to particle systems. It only requires a single 2D shadow

More information

Visualization Computer Graphics I Lecture 20

Visualization Computer Graphics I Lecture 20 15-462 Computer Graphics I Lecture 20 Visualization Height Fields and Contours Scalar Fields Volume Rendering Vector Fields [Angel Ch. 12] November 20, 2003 Doug James Carnegie Mellon University http://www.cs.cmu.edu/~djames/15-462/fall03

More information

Advanced Computer Graphics CS 563: Screen Space GI Techniques: Real Time

Advanced Computer Graphics CS 563: Screen Space GI Techniques: Real Time Advanced Computer Graphics CS 563: Screen Space GI Techniques: Real Time William DiSanto Computer Science Dept. Worcester Polytechnic Institute (WPI) Overview Deferred Shading Ambient Occlusion Screen

More information

Real Time Rendering of Expensive Small Environments Colin Branch Stetson University

Real Time Rendering of Expensive Small Environments Colin Branch Stetson University Real Time Rendering of Expensive Small Environments Colin Branch Stetson University Abstract One of the major goals of computer graphics is the rendering of realistic environments in real-time. One approach

More information

Photorealistic 3D Rendering for VW in Mobile Devices

Photorealistic 3D Rendering for VW in Mobile Devices Abstract University of Arkansas CSCE Department Advanced Virtual Worlds Spring 2013 Photorealistic 3D Rendering for VW in Mobile Devices Rafael Aroxa In the past few years, the demand for high performance

More information

Experiments with Edge Detection using One-dimensional Surface Fitting

Experiments with Edge Detection using One-dimensional Surface Fitting Experiments with Edge Detection using One-dimensional Surface Fitting Gabor Terei, Jorge Luis Nunes e Silva Brito The Ohio State University, Department of Geodetic Science and Surveying 1958 Neil Avenue,

More information

Ambien Occlusion. Lighting: Ambient Light Sources. Lighting: Ambient Light Sources. Summary

Ambien Occlusion. Lighting: Ambient Light Sources. Lighting: Ambient Light Sources. Summary Summary Ambien Occlusion Kadi Bouatouch IRISA Email: kadi@irisa.fr 1. Lighting 2. Definition 3. Computing the ambient occlusion 4. Ambient occlusion fields 5. Dynamic ambient occlusion 1 2 Lighting: Ambient

More information

Shadow Algorithms. CSE 781 Winter Han-Wei Shen

Shadow Algorithms. CSE 781 Winter Han-Wei Shen Shadow Algorithms CSE 781 Winter 2010 Han-Wei Shen Why Shadows? Makes 3D Graphics more believable Provides additional cues for the shapes and relative positions of objects in 3D What is shadow? Shadow:

More information

Enabling immersive gaming experiences Intro to Ray Tracing

Enabling immersive gaming experiences Intro to Ray Tracing Enabling immersive gaming experiences Intro to Ray Tracing Overview What is Ray Tracing? Why Ray Tracing? PowerVR Wizard Architecture Example Content Unity Hybrid Rendering Demonstration 3 What is Ray

More information

Shadows in the graphics pipeline

Shadows in the graphics pipeline Shadows in the graphics pipeline Steve Marschner Cornell University CS 569 Spring 2008, 19 February There are a number of visual cues that help let the viewer know about the 3D relationships between objects

More information