A GPU based Saliency Map for High-Fidelity Selective Rendering


Peter Longhurst, University of Bristol (Peter.Longhurst@bristol.ac.uk)
Kurt Debattista, University of Bristol
Alan Chalmers, University of Bristol

Abstract

The computation of high-fidelity images in real time remains one of the key challenges for computer graphics. Recent work has shown that, by understanding the human visual system, selective rendering may be used to render at high quality only those parts of a scene to which the human viewer is attending, and the rest of the scene at a much lower quality. This can result in a significant reduction in computational time without the viewer being aware of the quality difference. Selective rendering is guided by models of the human visual system, typically in the form of a 2D saliency map, which predict where the user will be looking in any scene. Computing these maps can itself take many seconds, precluding such an approach in any interactive system where many frames need to be rendered per second. In this paper we present a novel saliency map which exploits the computational performance of modern GPUs. With our approach it is thus possible to calculate this map in milliseconds, allowing it to be part of a real-time rendering system. In addition, we show how depth, habituation and motion can be added to the saliency map to further guide the selective rendering. This ensures that only the most perceptually important parts of any animated sequence need be rendered in high quality. The rest of the animation can be rendered at a significantly lower quality, and thus much lower computational cost, without the user being aware of the difference.

CR Categories: I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism

Keywords: selective rendering, saliency map, GPU, global illumination

1 Introduction

While high-fidelity graphics rendering using global illumination algorithms is a computationally intensive process, selective rendering algorithms that exploit human visual attention processes have emerged as methods of speeding up these rendering techniques. Selective rendering algorithms that make use of bottom-up visual attention processes require an image preview from which a saliency map may be calculated [Itti et al. 1998]. This saliency map is then used to direct the rendering in the final step. The image preview is either calculated using a low resolution pass [Cater et al. 2003], or a rapid estimate is created using rasterisation accelerated by graphics hardware [Yee et al. 2001]. Both these approaches may take many seconds, precluding their use in a real-time rendering system.

In this paper we present a novel GPU implementation for calculating the image preview and subsequently generating the saliency map. The image preview is generated by rasterisation through OpenGL, using programmable shaders to account for reflections, refractions and shadows, with materials similar to those in the full renderer. Furthermore, the GPU is then used to calculate a saliency map similar to that of Itti et al. [Itti et al. 1998], with novel extensions (depth, habituation and motion) that can only be produced efficiently by exploiting the object space information available from the image previewing phase.

The paper is divided as follows. Section 2 describes related work. Section 3 provides an overview of our selective rendering framework. Section 4 describes the GPU implementation of the saliency map in detail.
Section 5 provides a brief description of the test scenes we have used for the psychophysical validation in Section 6 and the timings obtained in Section 7. Finally, Section 8 concludes and suggests possible future work.

2 Related Work

Human visual perception has been studied for a long time. Results of this work have recently been introduced into computer graphics, both to improve the realism of images, for example [Meyer et al. 1986; Rushmeier et al. 1995; Ferwerda et al. 1996; Greenberg et al. 1997; Ramasubramanian et al. 1999], and to maintain visual fidelity at a significantly reduced computational cost [Bolin and Meyer 1998; Luebke and Hallen 2001; Myszkowski et al. 2001; Cater et al. 2003; Sundstedt et al. 2005].

The VDP (Visible Differences Predictor) was used by Myszkowski as part of a system to improve the efficiency and effectiveness of progressive global illumination computation [Myszkowski 1998]. Myszkowski et al. [Myszkowski et al. 2000] subsequently proposed a perceptual spatiotemporal Animation Quality Metric (AQM) designed specifically for handling synthetic animation sequences and dynamic environments. The central component of their model is the spatiovelocity contrast sensitivity function, which specifies the detection threshold for a stimulus as a function of its spatial and temporal frequencies. Osberger et al. [Osberger et al. 1998] suggested that adding further extensions to such early vision models does not produce any significant gain; nevertheless, Myszkowski et al. did demonstrate the AQM guiding global illumination computation for dynamic environments [Myszkowski et al. 2001].

Bolin and Meyer [Bolin and Meyer 1998] devised a similar scheme, also using a sophisticated vision model. They integrated a simplified version of the Sarnoff Visible Discrimination Model (VDM) [Lubin 1995] into an image synthesis algorithm. The VDM was used to detect threshold visible differences and, based on those differences, direct subsequent computational effort to regions of the image in most need of refinement. Their version of the VDM executed in approximately 1/60th of the time taken by the original model and resulted in image generation faster than other sampling strategies.

Ramasubramanian et al. [Ramasubramanian et al. 1999] reduced the cost of such metrics as the VDP and VDM by decoupling the computationally expensive spatial frequency component. They argued that this component would not change as the global illumination is calculated.

Yee et al. [Yee et al. 2001] adapted the Itti and Koch model of attention [Itti et al. 1998] in order to accelerate the global illumination computation in pre-rendered animations. For each frame they created a spatiotemporal error tolerance map [Daly 1998], constructed from data based on velocity-dependent contrast sensitivity, and a saliency map; their approach also included the addition of motion. These maps are created from either a rendered estimate of the final frame containing only direct lighting, or an OpenGL approximation. The two maps are combined to create an aleph map, which is used to dictate where computational effort should be spent during the lighting solution. Yee et al. used a version of Radiance modified so that the ambient accuracy can be modulated based on their aleph map.

Haber et al. [Haber et al. 2001] created a perceptually guided corrective splatting algorithm for interactive navigation of photometrically complex environments. Their algorithm uses a preprocessing particle tracing technique followed by frame-by-frame, view-dependent ray tracing guided by a saliency map from an extended Itti and Koch model that takes into account volition-controlled and task-dependent attention. Volitional control relates to the observation that users tend to place objects of interest towards the centre of an image; task-dependent attention is the judgment of the importance of objects due to a task at hand.

An alternative approach to perceptual global illumination acceleration was taken by Cater et al. [Cater et al. 2003]. They demonstrated how properties of the HVS known as change blindness and inattentional blindness could be exploited to accelerate the rendering of animated sequences. They proposed that, given prior knowledge of a viewer's task focus, rendering quality could be reduced in non-task areas. Using a series of controlled psychophysical experiments, they showed that human subjects consistently failed to notice degradations in quality in areas unrelated to their assigned task. Sundstedt et al. took this further and introduced the idea of an importance map to accelerate rendering in a selective global illumination renderer [Sundstedt et al. 2005]. This map was created from a combination of a task map and a saliency map. Sundstedt et al. showed, through a detailed psychophysical investigation, that animations rendered based on their importance map were, even under free viewing conditions, perceptually indistinguishable from reference high quality animations [Sundstedt et al. 2005].

3 Selective Rendering Framework

Our framework for rapid scene visualisation allows both visual and structural knowledge to be extracted from a frame of an animation before it is rendered. Our goal is to use this information to help reduce the computational cost of producing the final image. Provided the preview and the rendered result command a similar response, the information gained from the former can be used to tune the creation of the latter. An overview of the framework, which we term Snapshot, can be seen in Figure 1. A snapshot is a rapid image estimate of the rendering, produced using accelerated rasterisation techniques on modern graphics cards [Longhurst et al. 2005]. The resulting image is subsequently used as input to the selective guidance stage, which generates the saliency map, again using accelerated graphics hardware.
This map shows the importance of each pixel, relating both to the chance that the pixel will be attended and to the chance of an error at that pixel being perceived. The map is then used, within a high-fidelity rendering algorithm, to direct the required quality of each pixel. By lowering the quality in areas which are unlikely to be observed, or where errors will go unperceived, it is possible to reduce the overall time needed to produce a perceptually similar image.

3.1 Snapshot

The Snapshot framework for producing a rasterised preview image is based on the OpenGL API. We used a combination of techniques, often more associated with games and other high performance applications, to create the preview image. Shadow mapping [Williams 1978] via cubic texture maps was used in conjunction with material shaders, written in Nvidia Cg [Fernando and Kilgard 2003], to give a detailed approximation of surface shading. Similarly, we used cubic environment maps to approximate specular reflections, and stencil shadowing to accurately account for planar mirrors [Kilgard 1999]. In our system we perform one shading pass for each light source; for scenes which contain many lights we automatically select and use only a set of the most significant sources.

In order to make the preview image as similar as possible to the final result rendered in our selective renderer, we modelled our surface shaders on the Radiance plastic and metal material types [Ward and Shakespeare 1998]. Although this has an impact on the computational expense of creating the image, the result is closer to the global illumination solution and thus more appropriate for the subsequent saliency map generation. The time to create this preview depends on the complexity of the scene; however, even for relatively complicated scenes (100,000+ triangles) with many light sources (we take the closest 32 for each frame) we are able to create this image in under 5 seconds. This is still far less than the time we save later on by selective rendering, and several orders of magnitude less than the full global illumination solution; frames for simpler scenes can be created in real time (upwards of 30fps). Furthermore, although we have yet to exploit this, level of detail and culling techniques could also be used to alleviate costs related to complex geometry.
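To make the shading pass concrete, the C sketch below shows the kind of per-light evaluation the description above suggests the Cg material shaders perform. It is only a loose approximation of Radiance's plastic material (a diffuse colour plus a roughness-controlled specular lobe); the function names and the simple halfway-vector lobe are our own assumptions for illustration, not the paper's actual shader code.

    #include <math.h>

    typedef struct { float r, g, b; } Colour;

    /* Dot product of two 3-vectors stored as float[3]. */
    static float dot3(const float a[3], const float b[3]) {
        return a[0]*b[0] + a[1]*b[1] + a[2]*b[2];
    }

    /* Hypothetical per-light shading loosely modelled on Radiance's
     * "plastic" material: diffuse colour, specularity and roughness.
     * n, l, v are unit normal, light and view vectors. */
    Colour shade_plastic(Colour diffuse, float specularity, float roughness,
                         const float n[3], const float l[3], const float v[3],
                         Colour light)
    {
        Colour out = {0.0f, 0.0f, 0.0f};
        float ndotl = dot3(n, l);
        if (ndotl <= 0.0f) return out;       /* light is behind the surface */

        /* Halfway vector for the specular lobe. */
        float h[3] = { l[0]+v[0], l[1]+v[1], l[2]+v[2] };
        float len = sqrtf(dot3(h, h));
        h[0] /= len; h[1] /= len; h[2] /= len;

        /* Narrower lobe for smoother surfaces (assumed mapping). */
        float shininess = 1.0f / (roughness * roughness + 1e-4f);
        float spec = specularity * powf(fmaxf(dot3(n, h), 0.0f), shininess);

        out.r = light.r * (diffuse.r * ndotl + spec);
        out.g = light.g * (diffuse.g * ndotl + spec);
        out.b = light.b * (diffuse.b * ndotl + spec);
        return out;
    }

One pass of this kind per selected light source, plus the shadow map and environment map lookups described above, yields the preview image.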
3.2 Saliency map

Our saliency model is based on the model first suggested by Itti and Koch [Itti and Koch 2000] and then extended by Yee [Yee et al. 2001]. We have, however, made several improvements and additions to the model. The original algorithm was designed for image processing and computer vision applications in which photographs and video streams are processed, and in which it is assumed that there is no prior knowledge of the environment. The model is well suited for directing attention to a certain area for further processing in, for example, robot vision; it is not, however, very well suited for identifying saliency on a per pixel level [Longhurst and Chalmers 2004]. These models also suffer from lengthy execution times: because of the complexity of the calculations required, especially for high resolution images, this time can be of the order of many seconds, which is unacceptable for a system that we hope will approach interactive rates. Our new saliency model is described in detail in Section 4.

3.3 Selective renderer

For a selective renderer we used a modified version of the physically based light simulation package Radiance [Ward and Shakespeare 1998]. We term the modified version of Radiance's rpict srpict. srpict performs sampling based on a jittered stratified sampling scheme. A user-defined variable sets the maximum level of stratification per pixel. srpict differs from the normal version in that it allows the number of rays shot to be modulated on a per pixel level, according to the saliency map, between the user-set maximum and one.
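The mapping from saliency to sample count is simple to sketch. The C fragment below shows one plausible reading of the modulation just described, assuming saliency values normalised to [0,1] and a linear ramp between one ray and the user-set maximum; the actual interpolation used by srpict is not specified in the text.

    /* Choose the number of rays for a pixel from its saliency.
     * saliency is assumed normalised to [0,1]; max_rays is the
     * user-set maximum stratification level (e.g. 16).
     * The linear ramp is an assumption the paper does not specify. */
    int rays_for_pixel(float saliency, int max_rays)
    {
        if (saliency < 0.0f) saliency = 0.0f;
        if (saliency > 1.0f) saliency = 1.0f;
        int n = 1 + (int)(saliency * (float)(max_rays - 1) + 0.5f);
        return n < 1 ? 1 : (n > max_rays ? max_rays : n);
    }

A fully salient pixel then receives the full 16 rays used for the reference renderings in Section 5, while an unattended pixel receives a single ray.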

Figure 1: Overview of our framework.

4 Saliency model

We built our model of saliency into the Snapshot framework discussed previously. The model is designed to use fragment shaders, written in Nvidia Cg, executed on a GeForce 6 series graphics card. This language is especially good for image processing applications as it includes many mathematical functions and fast texture lookup routines. In addition, it is advantageous for us to use the graphics card both to create and to process our preview image; an approach that made more use of the CPU would suffer an additional overhead transferring data to and from the GPU.

As previously mentioned, we benefit from 3D scene information within our model. This means that our model can be split into two discrete sections: components calculated in model space from the scene description, and image space components calculated from the Snapshot preview image. Figure 2 shows an overview of our model. In total seven channels are combined to give the final map; these are described in Table 1. The image space components taken on their own can be used as a GPU implementation of a more traditional saliency map that takes an image as input. Figure 3 outlines the program structure of the entire Snapshot framework.

Table 1: The channels in the saliency model.

    Channel         Space   Description
    Motion          Model   Pixel saliency based on movement relative to the screen
    Habituation     Model   Object habituation (saliency reduction over time related to an object's screen presence)
    Depth           Model   Saliency related to the distance an object is from the screen
    R-G Opponency   Image   Red-green centre-surround differences
    B-Y Opponency   Image   Blue-yellow centre-surround differences
    Intensity       Image   Intensity centre-surround differences
    Edge            Image   Image edges (salient due to potential aliasing artifacts)

Figure 2: Saliency model.
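How the seven channels are merged into a single per-pixel value is not spelled out at this point in the text; Itti-style models typically normalise each channel and sum. The C sketch below assumes exactly that (an unweighted, clamped average), purely as an illustration of the data flow; the equal weighting is our assumption, not the paper's.

    /* Combine the seven channel values for one pixel into a single
     * saliency value in [0,1]. Channels are assumed pre-normalised.
     * Equal weighting is an assumption for illustration only. */
    #define NUM_CHANNELS 7

    float combine_channels(const float channel[NUM_CHANNELS])
    {
        float s = 0.0f;
        for (int i = 0; i < NUM_CHANNELS; ++i)
            s += channel[i];
        s /= (float)NUM_CHANNELS;
        return s > 1.0f ? 1.0f : (s < 0.0f ? 0.0f : s);
    }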

4.1 Image space measures

There are, broadly speaking, two measures of saliency which we compute from the image preview provided by Snapshot. The first accounts for centre-surround differences across three channels. Such differences occur at locations which are significantly different from the average colour of a region, for example a black patch on a white wall; the HVS is very sensitive to this form of stimulus. The three channels on which our model operates are red vs. green, blue vs. yellow, and luminance (dark vs. bright). The second image space measure accounts for edges present in the scene. Our edge map replaces the orientations channel of the Itti and Koch saliency model.

4.1.1 Creation of the Gaussian pyramid

To find areas salient due to the centre-surround effect, it is necessary to express an image at a variety of resolutions. An image pyramid is a standard way of generating a sequence of progressively lower resolution representations of an image: each level in the hierarchical structure reduces the size of the previous level by a constant factor, normally 2. We adopt the approach of Itti and Koch's saliency algorithm, whereby a Gaussian function is used to create the pyramid needed to calculate centre-surround differences. Each layer in the Gaussian pyramid is generated using a fragment shader program written in Cg. The program operates on a texture containing the previous layer; its other inputs are the Gaussian weight, w, and the depth of the current layer. By running this shader repeatedly we generate an array of eight textures to hold the pyramid.
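As a concrete reference for what each downsampling pass computes, the C routine below builds one pyramid level from the previous one with a small Gaussian kernel. The 3x3 binomial weights are an assumption for illustration; the paper only states that a Gaussian of weight w is applied per level, inside a Cg fragment shader rather than on the CPU.

    /* Fetch a pixel from a w x h greyscale image, clamping at borders. */
    static float px(const float *img, int w, int h, int x, int y)
    {
        if (x < 0) x = 0; if (x >= w) x = w - 1;
        if (y < 0) y = 0; if (y >= h) y = h - 1;
        return img[y * w + x];
    }

    /* Build the next pyramid level (half width and height) from src.
     * dst must hold (w/2) x (h/2) floats. A separable 3x3 binomial
     * kernel stands in for the Gaussian of weight w in the shader. */
    void pyramid_down(const float *src, int w, int h, float *dst)
    {
        static const float k[3] = { 0.25f, 0.5f, 0.25f };
        for (int y = 0; y < h / 2; ++y) {
            for (int x = 0; x < w / 2; ++x) {
                float sum = 0.0f;
                for (int j = -1; j <= 1; ++j)
                    for (int i = -1; i <= 1; ++i)
                        sum += k[i+1] * k[j+1]
                             * px(src, w, h, 2*x + i, 2*y + j);
                dst[y * (w / 2) + x] = sum;
            }
        }
    }

On the GPU this runs as one Cg fragment shader pass per level; repeated application produces the eight textures that hold the pyramid.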

4.1.2 Centre-surround colour and luminance saliency maps

Calculation of centre-surround differences is straightforward once the Gaussian pyramid has been created. Features are located by subtracting images from two different levels of the pyramid: visual neurons are most sensitive in small regions of visual space (the centre), while stimuli presented in a broader area (the surround) can inhibit the neuronal response. We use multiscale feature extraction and the same set of ratios between the centre and surround regions as used by Itti et al. [Itti et al. 1998]. Comparing different levels of the Gaussian pyramid in this way generates 6 sub-maps for each channel (2-5, 2-6, 3-6, 3-7, 4-7, 4-8). Each of the 6 comparisons is made at the resolution of the original frame. This is handled by another fragment shader program, which takes two texture maps representing different levels of the pyramid and returns their difference. The fact that the input maps are at a lower resolution than the target output is dealt with automatically within the shader, such that our low resolution maps are smoothly upsampled to the original frame size with no added computational overhead; this benefit is achieved by making use of hardware texture filtering. Performing the comparisons at this resolution yields an accurate per pixel result.

Figure 3: Snapshot program including saliency estimation.

We produce the final centre-surround map for each of the three channels by combining the 6 sub-maps. Equations 1 to 3 show the operations used to create the sub-maps for each channel. In each equation n refers to the total number of sub-maps per channel (6 in our implementation), C refers to the higher resolution (centre) image, and S to the lower resolution (surround) image to which it is compared. To create one map per channel the individual sub-maps are summed.

    RG = \frac{|(C_g - S_r) - (C_r - S_g)|}{n}    (1)

    BY = \frac{|(C_b - S_{(g+r)/2}) - (C_{(g+r)/2} - S_b)|}{n}    (2)

    L = \frac{|(0.64 S_r + S_g + S_b) - (0.64 C_r + C_g + C_b)|}{n}    (3)

4.1.3 Orientation / edge saliency map

Edges are the second image space component that we include in our saliency model. To calculate the location of edges in a frame we use a method similar to the classic Canny edge detector [Canny 1986]. Canny's algorithm comprises three steps:

1. The image is filtered by a Gaussian to remove noise that could result in the incorrect detection of edges.
2. Edge magnitudes are found using the sum of a horizontal and a vertical Sobel filter.
3. Non-edges are suppressed so that areas can easily be segmented based on the resulting edge map.

Our edge detector follows the first two stages only; the third stage is unnecessary for our application, as the edge map is not used further other than as a component of our final saliency map. Although there would be no deficit in using the full Canny algorithm, we decided to abandon the last stage in favour of keeping the computational expense of the detector to a minimum. To account for the first filtering step we use the first level Gaussian from the image pyramid computed previously.
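Read together, equations (1) to (3) and the two Sobel passes reduce to a few lines of per-pixel arithmetic. The C sketch below restates them for one pixel, given centre and surround RGB samples already fetched from the two pyramid levels; on the GPU the same maths runs in a Cg fragment shader with hardware-filtered texture reads.

    #include <math.h>

    #define N_SUBMAPS 6.0f  /* sub-maps per channel, the n of (1)-(3) */

    typedef struct { float r, g, b; } RGB;

    /* Equation (1): red-green centre-surround opponency. */
    float rg_submap(RGB c, RGB s)
    {
        return fabsf((c.g - s.r) - (c.r - s.g)) / N_SUBMAPS;
    }

    /* Equation (2): blue-yellow opponency, yellow taken as (g + r) / 2. */
    float by_submap(RGB c, RGB s)
    {
        float cy = (c.g + c.r) * 0.5f, sy = (s.g + s.r) * 0.5f;
        return fabsf((c.b - sy) - (cy - s.b)) / N_SUBMAPS;
    }

    /* Equation (3): luminance difference with the stated 0.64 red weight. */
    float lum_submap(RGB c, RGB s)
    {
        return fabsf((0.64f * s.r + s.g + s.b)
                   - (0.64f * c.r + c.g + c.b)) / N_SUBMAPS;
    }

    /* Edge magnitude: sum of horizontal and vertical Sobel responses on
     * a 3x3 luminance neighbourhood p[row][col] (Canny stages 1-2 only). */
    float edge_magnitude(const float p[3][3])
    {
        float gx = (p[0][2] + 2.0f * p[1][2] + p[2][2])
                 - (p[0][0] + 2.0f * p[1][0] + p[2][0]);
        float gy = (p[2][0] + 2.0f * p[2][1] + p[2][2])
                 - (p[0][0] + 2.0f * p[0][1] + p[0][2]);
        return fabsf(gx) + fabsf(gy);
    }

Summing each channel's six sub-maps, one per centre-surround pair, gives the three opponency maps of Table 1.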
4.2 Model space measures

There are three components of our saliency map which are based on the 3D scene description rather than on any visual information: motion saliency, habituation and depth. Habituation refers to the familiarity that objects gain over time, and depth refers to how far each object is from the virtual camera. Our model is uniquely applicable within computer graphics, where measurements can be made in the actual 3D scene; the alternative is to use an expensive image based estimate of the environment. Figure 3 shows the order in which the different components of our saliency map are generated. As the model space parts require no visual information, they can be generated before the actual preview is created. For efficiency all three are calculated simultaneously and stored as separate colour channels.

4.2.1 Motion saliency

Motion affects our perception of the world both cognitively and biologically. The cells in a human eye are sensitive to movement, especially movement in the periphery of our field of vision. The speed at which moving objects can be tracked by the human eye is limited by the performance of the muscles that control the eye: such items can be detected as moving but cannot be discerned clearly, and instead appear blurred. In addition, the human visual system's sensitivity decreases with motion. Kelly [Kelly and Kokaram 2004] studied this effect by measuring the threshold contrast for travelling sine waves. The contrast sensitivity of the eye is greatest when motion is around 0.15 degrees/second. As retinal velocity increases above this, the range of contrast to which the eye is sensitive decreases significantly. Any speed slower than 0.15 degrees/second can be considered stationary, as it is undetectable by the human eye.

In our framework we are able to benefit from knowledge of the scene construction to calculate exactly the movement of every pixel on the screen. This is done by virtually projecting every object to the screen twice: once for the current frame, and once for the previous frame. By subtracting the current pixel position from the previous pixel position, the movement of every pixel can be found. Using these distances, and the frame rate at which subsequent frames are displayed, the retinal velocity of each pixel is given by Equation 4:

    \nu = \frac{\sqrt{\delta x^2 + \delta y^2}}{t}, \qquad \delta x = \delta p_x \, k, \quad \delta y = \delta p_y \, k, \quad t = \frac{1}{f}    (4)

where \delta p_x and \delta p_y are the distances the pixel has moved (in pixels), k is a constant representing the retinal size of a pixel (in degrees), and f is the frame rate. To describe this motion we use Equation 5:

    S_M = \frac{1}{m \sqrt{2\pi}} \exp\left(-\frac{(\nu / A_M)^2}{2 m^2}\right), \qquad m = 0.4, \quad A_M = 20    (5)

where S_M is the motion saliency of a pixel with velocity \nu, and m and A_M are constants which control the shape of the decay. Increasing the value of A_M reduces the gradient of the curve, effectively increasing the saliency of fast moving objects. We found that setting A_M to 20 gave a good result for our test scenes.

4.2.2 Depth saliency

Objects which are close to us become salient due to proximity: as humans it is important for us to be aware of our immediate surroundings, both for navigating the world and in case of any immediate threat. This factor is easy to calculate within our model as it is simply a function of distance. To compensate for the rapid falloff for very close objects that is typical of a linear model, we use a model of exponential decay (Equation 6), where D is the distance of the object from the camera and d and A_D are constants chosen so that the overall rate of exponential decay approximates the linear model:

    S_D = \frac{1}{d \sqrt{2\pi}} \exp\left(-\frac{D^2}{2 d^2}\right) A_D, \qquad d = 0.6, \quad A_D = 1.5    (6)
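Equations (4) to (6) translate directly into code. The C sketch below computes a pixel's retinal velocity from its projected positions in two consecutive frames and then evaluates the motion and depth saliency terms; the distance D is assumed to be in the same (normalised) units for which the constants d and A_D were tuned.

    #include <math.h>

    static const float PI = 3.14159265f;

    /* Equation (4): retinal velocity in degrees/second from the pixel's
     * screen-space displacement (dpx, dpy) in pixels. k is the retinal
     * size of one pixel in degrees, f the frame rate in Hz (t = 1/f). */
    float retinal_velocity(float dpx, float dpy, float k, float f)
    {
        float dx = dpx * k;
        float dy = dpy * k;
        return sqrtf(dx * dx + dy * dy) * f;
    }

    /* Equation (5): motion saliency, a Gaussian-shaped decay in velocity. */
    float motion_saliency(float v)
    {
        const float m = 0.4f, A_M = 20.0f;
        float x = v / A_M;
        return (1.0f / (m * sqrtf(2.0f * PI)))
             * expf(-(x * x) / (2.0f * m * m));
    }

    /* Equation (6): depth saliency from the distance D to the camera. */
    float depth_saliency(float D)
    {
        const float d = 0.6f, A_D = 1.5f;
        return (1.0f / (d * sqrtf(2.0f * PI)))
             * expf(-(D * D) / (2.0f * d * d)) * A_D;
    }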

4.2.3 Habituation

Habituation refers to the effect whereby objects become familiar over time. Several research groups have used models of habituation to guide robotic attention [Markou and Singh 2003]: by incorporating this effect into how a robot senses the world around it, the machine's limited processing power can be better directed, allowing the robot to ignore persistent signals in favour of more novel ones. Habituation is controlled either by the number of presentations of a particular signal, or by the time over which the signal has been present. Marsland et al. [Marsland et al. 2002] suggest, depending on the stimulus, a minimum habituation time of 3.33 seconds and a maximum of 14.3 seconds; we used these times as a guide for our model. We initially mark every object as 100% salient when it first appears, and decrease its saliency over time. Again we use an exponential decay, similar to those used in our measures of motion and depth saliency. This decay is modelled by Equation 7, where h and A_H are constants and t is the time in seconds for which an object has been on screen; 50% saliency due to familiarity is reached over a period of approximately 6 seconds (150 frames at a rate of 24fps):

    S_H = \frac{1}{h \sqrt{2\pi}} \exp\left(-\frac{t^2}{2 h^2}\right) A_H, \qquad h = 8, \quad A_H = 20    (7)

Unlike our other model space measures of depth and motion, habituation is not calculated on a per pixel basis. Instead it is computed per object, based on the number of frames in which the object has appeared on screen. Within our framework it would be easy to calculate habituation on a portion of an object or on a per triangle basis instead. The visibility of each object is found using an OpenGL extension that performs occlusion queries. Before the scene is drawn to calculate saliency, it is rapidly drawn twice to compute per object visibility. The first pass simply fills the depth buffer. An occlusion query is performed for every object on the second pass (see Figure 3). The result of each query is the number of pixels of the relevant object that pass the depth and clipping tests, i.e. the number of pixels of the object that will appear on screen; a result of zero indicates that the object does not appear on screen. This information is used both in every further drawing of the scene for the current frame, and to count the number of frames in which each object has appeared. Although there is an added overhead in performing an occlusion query test for every object, this test is essential for our model of object habituation. Furthermore, time is saved when the scene is drawn to produce the Snapshot image estimate, as no attempt is made to draw hidden objects.
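The visibility pass can be sketched in C against the OpenGL 1.5 occlusion query interface (the ARB_occlusion_query extension on older drivers); draw_object and the per-object frame counters are placeholders for the application's own scene code. The habituation term then follows Equation (7).

    #include <GL/gl.h>
    #include <math.h>

    extern int  num_objects;
    extern void draw_object(int i);   /* application scene code (assumed) */
    extern int  frames_on_screen[];   /* per-object visibility counters */

    /* Count, for every object, whether any of its pixels survive the
     * depth and clipping tests this frame. */
    void update_visibility(void)
    {
        /* Pass 1: fill the depth buffer only (no colour writes). */
        glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
        for (int i = 0; i < num_objects; ++i)
            draw_object(i);

        /* Pass 2: redraw each object inside an occlusion query. Equal
         * depths must pass, so relax the test and freeze the buffer. */
        glDepthFunc(GL_LEQUAL);
        glDepthMask(GL_FALSE);
        for (int i = 0; i < num_objects; ++i) {
            GLuint query, visible_pixels = 0;
            glGenQueries(1, &query);
            glBeginQuery(GL_SAMPLES_PASSED, query);
            draw_object(i);
            glEndQuery(GL_SAMPLES_PASSED);
            glGetQueryObjectuiv(query, GL_QUERY_RESULT, &visible_pixels);
            glDeleteQueries(1, &query);
            if (visible_pixels > 0)
                frames_on_screen[i]++;   /* object on screen this frame */
        }
        glDepthMask(GL_TRUE);
        glDepthFunc(GL_LESS);
        glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    }

    /* Equation (7): habituation saliency after t seconds on screen. */
    float habituation_saliency(float t)
    {
        const float h = 8.0f, A_H = 20.0f, PI = 3.14159265f;
        return (1.0f / (h * sqrtf(2.0f * PI)))
             * expf(-(t * t) / (2.0f * h * h)) * A_H;
    }

With t = frames_on_screen[i] / 24.0f at 24fps, an object's saliency falls from 1.0 on first appearance towards the 50% familiarity level described above.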
5 Test scenes

In order to evaluate our saliency map we tested a number of scenes, both for perceptual validation and for computation times. During the course of our validation we tested two animated sequences and three still images. Every scene was rendered twice: once at high quality throughout (using srpict with no map) and once selectively, using srpict and a map created by our model. Each of the still images was rendered on the same machine (a 3.2GHz Pentium 4), which was not otherwise used during this process. To render the animations we used a collection of identical networked PCs (2.88GHz Pentium 4s); frames from each animation were partitioned so that each machine rendered only a small portion of the whole animation. Every saliency map was generated on a PC containing an Nvidia GeForce 6600 GT PCI-Express graphics card and a 3.2GHz Pentium 4 CPU.

For each high quality reference solution, 16 rays per pixel were traced across the entire image. For the selectively rendered images, between a minimum of 1 and a maximum of 16 rays were traced per pixel according to our map. Every scene was rendered at a resolution of and filtered to for display. Every Radiance image was converted to the .png format for display; this is a lossless image storage format. Individual frames were combined into animations, again using a lossless codec (raw RGB).

Scene 1: Bristol corridor. This environment is a fictional model based on the building layout of the Merchant Venturers Building, which hosts Bristol University's Department of Computer Science.

Scene 2: Simple test room. The second scene that we tested was designed to be relatively simple, but still allow us to test all the components of the Snapshot framework and our saliency algorithm.

Scene 3: Cornell Box. This is a standard scene which is commonly used to test physically based rendering techniques.

Scene 4: Tables test room. This was created to deliberately contain many sharp edges that would produce aliasing errors.

Scene 5: Kitchen. The final scene which we used to validate the work presented in this paper depicts a modern kitchen. This scene is the most complicated of our test cases, weighing in at over 3/4 million triangles.

6 Perceptual Validation

To assess the quality of the results produced using our method, we designed two experiments comparing them to the reference solutions. The first experiment compared the animations from scenes one and two; the second compared the still images rendered from the other three scenes. For each experiment, 16 subjects with normal or corrected-to-normal vision were used to investigate whether there was a perceivable difference between the accelerated, selectively rendered result and the high quality reference solution. This was achieved through an experimental procedure known as forced choice.

In the first experiment subjects were presented sequentially with two pairs of animations. Each pair consisted of the high quality reference animation and the animation computed using our framework. All the animations were displayed full screen; the screen was blanked for five seconds between the members of each pair and for ten seconds between pairs. The second experiment took a similar form; here observers were instead presented with three pairs of images, each image being either one created based on the map produced with Snapshot or a high quality reference frame. Each image was presented full screen for five seconds; a blank screen was displayed for five seconds between the members of each pair and for ten seconds between pairs.

Participants were given a verbal introduction to each experiment, in which they were told that they would be presented with pairs of images (or animations, depending on the experiment) that differed in quality. In each experiment subjects were asked to choose the animation or image from the pair that they judged to be of a higher quality.

Each observer was asked to indicate their choices in the pauses between the pairs.

Figure 4: Test scenes, from Scene 1 (left) to Scene 5 (right). Scenes represented by image preview (top), saliency map (middle) and selectively rendered image (bottom).

Animation results. Figure 5 shows the responses of the experiment's participants to the animations generated from scenes 1 and 2. A result where only 50% of participants correctly indicate the higher quality version is statistically indistinguishable from chance. For both of the animated sequences, the percentage of observers who correctly identified the HQ animation differs insignificantly from this chance percentage. Thus we may say that there was no perceivable difference between the high quality and selective quality animations.

Figure 5: Participant ability to determine the difference between animations (percentage of people who were able to correctly identify the HQ animation, for Scenes 1 and 2).

Still image results. The results gathered from the second experiment (in which we compared still images) were analysed in the same manner as those for the pairs of animations. Figure 6 indicates the proportion of participants who correctly identified the HQ image over one rendered according to our map. Again, if subjects pick correctly no more often than incorrectly, then the result can be considered to be due to chance. The results show that there is no significant perceptual difference between the high quality reference images and the selectively rendered ones.

Figure 6: Participant ability to determine the difference between still frames (percentage of people who were able to correctly identify the HQ image, for Scenes 3, 4 and 5).

7 Timings

In this section we present the timing results for the test scenes that were validated as perceptually similar in the previous section. Table 2 shows the results for the test scenes, including the complexity of each scene, the rendering time for traditional high quality rendering, the time taken to compute the preview image using Snapshot, the time taken to generate the saliency map, the combined time for creating the saliency map, and the selective rendering time. Note that preview times depend on the scene complexity, a combination of the number of triangles and light sources. Level of detail techniques [Luebke et al. 2002] could reduce the time spent on geometry rendering considerably, and occlusion techniques could improve the cost of rendering scenes with many light sources.

Table 2: Timing results for test scenes. Animated-scene entries are averages over the entire animations.

                                          Scene 1   Scene 2   Scene 3   Scene 4   Scene 5
    Number of triangles                   150,000
    Lights
    Frames
    High quality rendering time (mins)
    Preview image time (ms)
    Saliency map generation time (ms)
    Total map generation time (ms)
    Selective rendering time (mins)
    Speedup                                                   2.9       1.18

Speedup for the selective rendering varies between 1.18 for Scene 4 and 2.9 for Scene 3. Scene 4 is the most expensive in terms of saliency map generation, due to the complexity of the projected geometry. Selective rendering of still images suffers an additional cost, because with only one frame the habituation and motion channels contribute fully to the map; switching off these computations would improve the speedup. Furthermore, choosing a variable other than rays per pixel, such as ambient accuracy [Yee et al. 2001], to be modulated by the selective renderer would also improve the speedup.

7.1 GPU speedup

As we have already mentioned, we benefit from using the GPU to calculate our saliency map. Both the parallel nature of this processor and its built-in image processing and texture filtering operations make our model run significantly faster than it would on a conventional CPU. To assess this performance gain we compared the image space portion of our model to the same algorithm with no GPU support. Figure 7 shows the time taken to process an image at a range of resolutions. In this test we compared the GeForce 6600GT used in our experiments to the 3.4GHz Pentium 4 present in the same system.

Figure 7: Linear image filtering: Nvidia 6600GT GPU vs. 3.4GHz Pentium 4 CPU.

The graph shows that at the maximum resolution we tested, our approach is approximately 70 times faster than the CPU based approach. This difference is likely to increase, as graphics hardware tends to advance faster than other components of a computer [Owens et al. 2005].

8 Conclusions and Future Work

Selectively rendering a scene can significantly reduce the overall computation time of physically-based, high-fidelity computer graphics, without the viewer being aware of any quality difference across the image. Such an approach offers real potential for enabling high-fidelity images to be rendered in real time. Saliency maps are a key component of any selective rendering system, as they guide the renderer as to which pixels are perceptually the most important and thus should be rendered at the highest quality, while the others can be computed at a much lower quality. If selective rendering is to enable realism in real time, then it is crucial that the saliency map for each frame can be determined in just a few milliseconds. In this paper we have shown how a modern GPU may be used to significantly reduce the time needed to compute a sophisticated saliency map. Although we have yet to achieve the goal of realism in real time, the performance we have achieved allows, for the first time, such saliency maps to be considered for real-time selective rendering.

There are a number of additional techniques that could be included in our saliency map generation, for example LoD, to reduce even further the times required to process complex geometry. Future work will also consider combining our saliency maps with importance maps [Sundstedt et al.
2005], to reduce even further the number of pixels which need to be rendered at high quality, thereby lowering the overall computation time even more.

9 Acknowledgements

We would like to thank Veronica Sundstedt for the use of the corridor model and Patrick Ledda for the kitchen scene used in our experiments. We would also like to thank all those who took part in the experiments. This work was supported by the Rendering on Demand (RoD) project within the 3C Research programme, whose funding and support is gratefully acknowledged.

References

BOLIN, M. R., AND MEYER, G. W. 1998. A perceptually based adaptive sampling algorithm. Computer Graphics 32, Annual Conference Series.
CANNY, J. 1986. A computational approach to edge detection. IEEE Trans. Pattern Anal. Mach. Intell. 8, 6.
CATER, K., CHALMERS, A., AND WARD, G. 2003. Maintaining perceived quality for interactive tasks using selective rendering. Eurographics Rendering Symposium.
DALY, S. 1998. Engineering observations from spatiovelocity and spatiotemporal visual models. Human Vision and Electronic Imaging III, SPIE 3299.
FERNANDO, R., AND KILGARD, M. J. 2003. The Cg Tutorial: The Definitive Guide to Programmable Real-Time Graphics. Addison-Wesley Longman Publishing Co., Inc., Boston, MA, USA.
FERWERDA, J. A., PATTANAIK, S. N., SHIRLEY, P., AND GREENBERG, D. P. 1996. A model of visual adaptation for realistic image synthesis. Computer Graphics 30, Annual Conference Series.

GREENBERG, D. P., TORRANCE, K. E., SHIRLEY, P., ARVO, J., FERWERDA, J. A., PATTANAIK, S. N., LAFORTUNE, E., WALTER, B., FOO, S. C., AND TRUMBORE, B. 1997. A framework for realistic image synthesis. In Proceedings of SIGGRAPH 1997 (Special Session).
HABER, J., MYSZKOWSKI, K., YAMAUCHI, H., AND SEIDEL, H.-P. 2001. Perceptually guided corrective splatting. In Proceedings of Eurographics 2001 (Manchester, UK).
ITTI, L., AND KOCH, C. 2000. A saliency-based search mechanism for overt and covert shifts of visual attention. Vision Research 40, 10-12.
ITTI, L., KOCH, C., AND NIEBUR, E. 1998. A model of saliency-based visual attention for rapid scene analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence 20.
KELLY, F., AND KOKARAM, A. 2004. Graphics hardware for gradient-based motion estimation. Embedded Processors for Multimedia and Communications, SPIE 5309, 1.
KILGARD, M. 1999. Creating reflections and shadows using stencil buffers. In GDC 99.
LONGHURST, P., AND CHALMERS, A. 2004. User validation of image quality assessment algorithms. In EGUK 04, Theory and Practice of Computer Graphics, IEEE Computer Society.
LONGHURST, P., DEBATTISTA, K., AND CHALMERS, A. 2005. Snapshot: A rapid technique for driving a selective global illumination renderer. In WSCG 2005 Short Papers Proceedings.
LUBIN, J. 1995. A visual discrimination model for imaging system design and evaluation. Vision Models for Target Detection and Recognition.
LUEBKE, D., AND HALLEN, B. 2001. Perceptually driven simplification for interactive rendering. Rendering Techniques.
LUEBKE, D., WATSON, B., COHEN, J. D., REDDY, M., AND VARSHNEY, A. 2002. Level of Detail for 3D Graphics. Elsevier Science Inc., New York, NY, USA.
MARKOU, M., AND SINGH, S. 2003. Novelty detection: a review, part 2: neural network based approaches. Signal Processing 83, 12.
MARSLAND, S., NEHMZOW, U., AND SHAPIRO, J. 2002. Environment-specific novelty detection. In ICSAB: Proceedings of the Seventh International Conference on Simulation of Adaptive Behavior on From Animals to Animats, MIT Press, Cambridge, MA, USA.
MEYER, G., RUSHMEIER, H., COHEN, M., GREENBERG, D., AND TORRANCE, K. 1986. An experimental evaluation of computer graphics imagery. ACM Transactions on Graphics 5, 1.
MYSZKOWSKI, K., TAWARA, T., AKAMINE, H., AND SEIDEL, H.-P. 2001. Perception-guided global illumination solution for animation rendering. SIGGRAPH 2001 Conference Proceedings.
MYSZKOWSKI, K., ROKITA, P., AND TAWARA, T. 2000. Perception-based fast rendering and antialiasing of walkthrough sequences. IEEE Transactions on Visualization and Computer Graphics 6, 4.
MYSZKOWSKI, K. 1998. The Visible Differences Predictor: Applications to global illumination problems. Proceedings of the Eurographics Workshop on Rendering.
OSBERGER, W., MAEDER, A., AND BERGMANN, N. 1998. A technique for image quality assessment based on a human visual system model.
OWENS, J. D., LUEBKE, D., GOVINDARAJU, N., HARRIS, M., KRÜGER, J., LEFOHN, A. E., AND PURCELL, T. J. 2005. A survey of general-purpose computation on graphics hardware. In Eurographics 2005, State of the Art Reports.
RAMASUBRAMANIAN, M., PATTANAIK, S. N., AND GREENBERG, D. P. 1999. A perceptually based physical error metric for realistic image synthesis. In SIGGRAPH 1999, Computer Graphics Proceedings, Addison Wesley Longman, Los Angeles, A. Rockwood, Ed.
RUSHMEIER, H., LARSON, G., PIATKO, C., SANDERS, P., AND RUST, B. 1995. Comparing real and synthetic images: Some ideas about metrics. In Proceedings of the Eurographics Rendering Workshop.
SUNDSTEDT, V., DEBATTISTA, K., LONGHURST, P., CHALMERS, A., AND TROSCIANKO, T. 2005. Visual attention for efficient high-fidelity graphics. In Spring Conference on Computer Graphics (SCCG 2005).
WARD, G., AND SHAKESPEARE, R. A. 1998. Rendering with Radiance. Morgan Kaufmann Publishers.
WILLIAMS, L. 1978. Casting curved shadows on curved surfaces. In SIGGRAPH 78: Proceedings of the 5th Annual Conference on Computer Graphics and Interactive Techniques, ACM Press.
YEE, H., PATTANAIK, S., AND GREENBERG, D. P. 2001. Spatiotemporal sensitivity and visual attention for efficient rendering of dynamic environments. In ACM Transactions on Graphics, ACM Press.


More information

Lecture 15: Shading-I. CITS3003 Graphics & Animation

Lecture 15: Shading-I. CITS3003 Graphics & Animation Lecture 15: Shading-I CITS3003 Graphics & Animation E. Angel and D. Shreiner: Interactive Computer Graphics 6E Addison-Wesley 2012 Objectives Learn that with appropriate shading so objects appear as threedimensional

More information

Lecturer Athanasios Nikolaidis

Lecturer Athanasios Nikolaidis Lecturer Athanasios Nikolaidis Computer Graphics: Graphics primitives 2D viewing and clipping 2D and 3D transformations Curves and surfaces Rendering and ray tracing Illumination models Shading models

More information

Light. Properties of light. What is light? Today What is light? How do we measure it? How does light propagate? How does light interact with matter?

Light. Properties of light. What is light? Today What is light? How do we measure it? How does light propagate? How does light interact with matter? Light Properties of light Today What is light? How do we measure it? How does light propagate? How does light interact with matter? by Ted Adelson Readings Andrew Glassner, Principles of Digital Image

More information

)LGHOLW\0HWUL VIRU$QLPDWLRQ

)LGHOLW\0HWUL VIRU$QLPDWLRQ )LGHOLW\0HWUL VIRU$QLPDWLRQ &DURO2 6XOOLYDQ,PDJH 6\QWKHVLV *URXS 7ULQLW\ &ROOHJH 'XEOLQ 'XEOLQ,UHODQG &DURO26XOOLYDQ# VW GLH ABSTRACT In this paper, the problem of evaluating the fidelity of animations

More information

Consider a partially transparent object that is illuminated with two lights, one visible from each side of the object. Start with a ray from the eye

Consider a partially transparent object that is illuminated with two lights, one visible from each side of the object. Start with a ray from the eye Ray Tracing What was the rendering equation? Motivate & list the terms. Relate the rendering equation to forward ray tracing. Why is forward ray tracing not good for image formation? What is the difference

More information

INFOGR Computer Graphics. J. Bikker - April-July Lecture 10: Ground Truth. Welcome!

INFOGR Computer Graphics. J. Bikker - April-July Lecture 10: Ground Truth. Welcome! INFOGR Computer Graphics J. Bikker - April-July 2015 - Lecture 10: Ground Truth Welcome! Today s Agenda: Limitations of Whitted-style Ray Tracing Monte Carlo Path Tracing INFOGR Lecture 10 Ground Truth

More information

Using Perceptual Texture Masking for Efficient Image Synthesis

Using Perceptual Texture Masking for Efficient Image Synthesis EUROGRAPHICS 2002 / G. Drettakis and H.-P. Seidel (Guest Editors) Volume 21 (2002), Number 3 Using Perceptual Texture Masking for Efficient Image Synthesis Bruce Walter Sumanta N. Pattanaik Donald P. Greenberg

More information

Scene Management. Video Game Technologies 11498: MSc in Computer Science and Engineering 11156: MSc in Game Design and Development

Scene Management. Video Game Technologies 11498: MSc in Computer Science and Engineering 11156: MSc in Game Design and Development Video Game Technologies 11498: MSc in Computer Science and Engineering 11156: MSc in Game Design and Development Chap. 5 Scene Management Overview Scene Management vs Rendering This chapter is about rendering

More information

Global Illumination CS334. Daniel G. Aliaga Department of Computer Science Purdue University

Global Illumination CS334. Daniel G. Aliaga Department of Computer Science Purdue University Global Illumination CS334 Daniel G. Aliaga Department of Computer Science Purdue University Recall: Lighting and Shading Light sources Point light Models an omnidirectional light source (e.g., a bulb)

More information

Computer graphics and visualization

Computer graphics and visualization CAAD FUTURES DIGITAL PROCEEDINGS 1986 63 Chapter 5 Computer graphics and visualization Donald P. Greenberg The field of computer graphics has made enormous progress during the past decade. It is rapidly

More information

Point based global illumination is now a standard tool for film quality renderers. Since it started out as a real time technique it is only natural

Point based global illumination is now a standard tool for film quality renderers. Since it started out as a real time technique it is only natural 1 Point based global illumination is now a standard tool for film quality renderers. Since it started out as a real time technique it is only natural to consider using it in video games too. 2 I hope that

More information

ANTI-ALIASED HEMICUBES FOR PERFORMANCE IMPROVEMENT IN RADIOSITY SOLUTIONS

ANTI-ALIASED HEMICUBES FOR PERFORMANCE IMPROVEMENT IN RADIOSITY SOLUTIONS ANTI-ALIASED HEMICUBES FOR PERFORMANCE IMPROVEMENT IN RADIOSITY SOLUTIONS Naga Kiran S. P. Mudur Sharat Chandran Nilesh Dalvi National Center for Software Technology Mumbai, India mudur@ncst.ernet.in Indian

More information

International Journal of Advance Engineering and Research Development

International Journal of Advance Engineering and Research Development Scientific Journal of Impact Factor (SJIF): 4.72 International Journal of Advance Engineering and Research Development Volume 4, Issue 11, November -2017 e-issn (O): 2348-4470 p-issn (P): 2348-6406 Comparative

More information

Practical Shadow Mapping

Practical Shadow Mapping Practical Shadow Mapping Stefan Brabec Thomas Annen Hans-Peter Seidel Max-Planck-Institut für Informatik Saarbrücken, Germany Abstract In this paper we propose several methods that can greatly improve

More information

Computer Graphics 10 - Shadows

Computer Graphics 10 - Shadows Computer Graphics 10 - Shadows Tom Thorne Slides courtesy of Taku Komura www.inf.ed.ac.uk/teaching/courses/cg Overview Shadows Overview Projective shadows Shadow textures Shadow volume Shadow map Soft

More information

Real Time Rendering of Expensive Small Environments Colin Branch Stetson University

Real Time Rendering of Expensive Small Environments Colin Branch Stetson University Real Time Rendering of Expensive Small Environments Colin Branch Stetson University Abstract One of the major goals of computer graphics is the rendering of realistic environments in real-time. One approach

More information

Graphics and Interaction Rendering pipeline & object modelling

Graphics and Interaction Rendering pipeline & object modelling 433-324 Graphics and Interaction Rendering pipeline & object modelling Department of Computer Science and Software Engineering The Lecture outline Introduction to Modelling Polygonal geometry The rendering

More information

Saliency Extraction for Gaze-Contingent Displays

Saliency Extraction for Gaze-Contingent Displays In: Workshop on Organic Computing, P. Dadam, M. Reichert (eds.), Proceedings of the 34th GI-Jahrestagung, Vol. 2, 646 650, Ulm, September 2004. Saliency Extraction for Gaze-Contingent Displays Martin Böhme,

More information

Enhancing Traditional Rasterization Graphics with Ray Tracing. October 2015

Enhancing Traditional Rasterization Graphics with Ray Tracing. October 2015 Enhancing Traditional Rasterization Graphics with Ray Tracing October 2015 James Rumble Developer Technology Engineer, PowerVR Graphics Overview Ray Tracing Fundamentals PowerVR Ray Tracing Pipeline Using

More information

Final Project: Real-Time Global Illumination with Radiance Regression Functions

Final Project: Real-Time Global Illumination with Radiance Regression Functions Volume xx (200y), Number z, pp. 1 5 Final Project: Real-Time Global Illumination with Radiance Regression Functions Fu-Jun Luan Abstract This is a report for machine learning final project, which combines

More information

Last Time. Reading for Today: Graphics Pipeline. Clipping. Rasterization

Last Time. Reading for Today: Graphics Pipeline. Clipping. Rasterization Last Time Modeling Transformations Illumination (Shading) Real-Time Shadows Viewing Transformation (Perspective / Orthographic) Clipping Projection (to Screen Space) Scan Conversion (Rasterization) Visibility

More information

Visual Attention From a Graphics Point of View:

Visual Attention From a Graphics Point of View: Visual Attention From a Graphics Point of View: Computer Graphics Applications Sumanta Pattanaik Associate Professor Department of Computer Science University of Central Florida, Orlando, FL, USA sumant@cs.ucf.edu

More information

Real-Time Shadows. Computer Graphics. MIT EECS Durand 1

Real-Time Shadows. Computer Graphics. MIT EECS Durand 1 Real-Time Shadows Computer Graphics MIT EECS 6.837 Durand 1 Why are Shadows Important? Depth cue Scene Lighting Realism Contact points 2 Shadows as a Depth Cue source unknown. All rights reserved. This

More information

Rendering. Converting a 3D scene to a 2D image. Camera. Light. Rendering. View Plane

Rendering. Converting a 3D scene to a 2D image. Camera. Light. Rendering. View Plane Rendering Pipeline Rendering Converting a 3D scene to a 2D image Rendering Light Camera 3D Model View Plane Rendering Converting a 3D scene to a 2D image Basic rendering tasks: Modeling: creating the world

More information

NVIDIA Case Studies:

NVIDIA Case Studies: NVIDIA Case Studies: OptiX & Image Space Photon Mapping David Luebke NVIDIA Research Beyond Programmable Shading 0 How Far Beyond? The continuum Beyond Programmable Shading Just programmable shading: DX,

More information

Lecture 1. Computer Graphics and Systems. Tuesday, January 15, 13

Lecture 1. Computer Graphics and Systems. Tuesday, January 15, 13 Lecture 1 Computer Graphics and Systems What is Computer Graphics? Image Formation Sun Object Figure from Ed Angel,D.Shreiner: Interactive Computer Graphics, 6 th Ed., 2012 Addison Wesley Computer Graphics

More information

Selective rendering for efficient ray traced stereoscopic images

Selective rendering for efficient ray traced stereoscopic images Vis Comput (2010) 26: 97 107 DOI 10.1007/s00371-009-0379-4 ORIGINAL ARTICLE Selective rendering for efficient ray traced stereoscopic images Cheng-Hung Lo Chih-Hsing Chu Kurt Debattista Alan Chalmers Published

More information

Dynamic visual attention: competitive versus motion priority scheme

Dynamic visual attention: competitive versus motion priority scheme Dynamic visual attention: competitive versus motion priority scheme Bur A. 1, Wurtz P. 2, Müri R.M. 2 and Hügli H. 1 1 Institute of Microtechnology, University of Neuchâtel, Neuchâtel, Switzerland 2 Perception

More information

Augmenting Reality with Projected Interactive Displays

Augmenting Reality with Projected Interactive Displays Augmenting Reality with Projected Interactive Displays Claudio Pinhanez IBM T.J. Watson Research Center, P.O. Box 218 Yorktown Heights, N.Y. 10598, USA Abstract. This paper examines a steerable projection

More information

CHAPTER 1 Graphics Systems and Models 3

CHAPTER 1 Graphics Systems and Models 3 ?????? 1 CHAPTER 1 Graphics Systems and Models 3 1.1 Applications of Computer Graphics 4 1.1.1 Display of Information............. 4 1.1.2 Design.................... 5 1.1.3 Simulation and Animation...........

More information

For Intuition about Scene Lighting. Today. Limitations of Planar Shadows. Cast Shadows on Planar Surfaces. Shadow/View Duality.

For Intuition about Scene Lighting. Today. Limitations of Planar Shadows. Cast Shadows on Planar Surfaces. Shadow/View Duality. Last Time Modeling Transformations Illumination (Shading) Real-Time Shadows Viewing Transformation (Perspective / Orthographic) Clipping Projection (to Screen Space) Graphics Pipeline Clipping Rasterization

More information

Rendering Grass with Instancing in DirectX* 10

Rendering Grass with Instancing in DirectX* 10 Rendering Grass with Instancing in DirectX* 10 By Anu Kalra Because of the geometric complexity, rendering realistic grass in real-time is difficult, especially on consumer graphics hardware. This article

More information

Efficient Image-Based Methods for Rendering Soft Shadows. Hard vs. Soft Shadows. IBR good for soft shadows. Shadow maps

Efficient Image-Based Methods for Rendering Soft Shadows. Hard vs. Soft Shadows. IBR good for soft shadows. Shadow maps Efficient Image-Based Methods for Rendering Soft Shadows Hard vs. Soft Shadows Maneesh Agrawala Ravi Ramamoorthi Alan Heirich Laurent Moll Pixar Animation Studios Stanford University Compaq Computer Corporation

More information

Chapter 9 Object Tracking an Overview

Chapter 9 Object Tracking an Overview Chapter 9 Object Tracking an Overview The output of the background subtraction algorithm, described in the previous chapter, is a classification (segmentation) of pixels into foreground pixels (those belonging

More information

LOD and Occlusion Christian Miller CS Fall 2011

LOD and Occlusion Christian Miller CS Fall 2011 LOD and Occlusion Christian Miller CS 354 - Fall 2011 Problem You want to render an enormous island covered in dense vegetation in realtime [Crysis] Scene complexity Many billions of triangles Many gigabytes

More information

Lecture 17: Shadows. Projects. Why Shadows? Shadows. Using the Shadow Map. Shadow Maps. Proposals due today. I will mail out comments

Lecture 17: Shadows. Projects. Why Shadows? Shadows. Using the Shadow Map. Shadow Maps. Proposals due today. I will mail out comments Projects Lecture 17: Shadows Proposals due today I will mail out comments Fall 2004 Kavita Bala Computer Science Cornell University Grading HW 1: will email comments asap Why Shadows? Crucial for spatial

More information

Visual Perception in Realistic Image Synthesis

Visual Perception in Realistic Image Synthesis Volume 20 (2001), number 4 pp. 211 224 COMPUTER GRAPHICS forum Visual Perception in Realistic Image Synthesis Ann McNamara Department of Computer Science, Trinity College, Dublin 2, Ireland Abstract Realism

More information

Irradiance Gradients. Media & Occlusions

Irradiance Gradients. Media & Occlusions Irradiance Gradients in the Presence of Media & Occlusions Wojciech Jarosz in collaboration with Matthias Zwicker and Henrik Wann Jensen University of California, San Diego June 23, 2008 Wojciech Jarosz

More information

An Efficient Saliency Based Lossless Video Compression Based On Block-By-Block Basis Method

An Efficient Saliency Based Lossless Video Compression Based On Block-By-Block Basis Method An Efficient Saliency Based Lossless Video Compression Based On Block-By-Block Basis Method Ms. P.MUTHUSELVI, M.E(CSE), V.P.M.M Engineering College for Women, Krishnankoil, Virudhungar(dt),Tamil Nadu Sukirthanagarajan@gmail.com

More information

Computational Foundations of Cognitive Science

Computational Foundations of Cognitive Science Computational Foundations of Cognitive Science Lecture 16: Models of Object Recognition Frank Keller School of Informatics University of Edinburgh keller@inf.ed.ac.uk February 23, 2010 Frank Keller Computational

More information

The Traditional Graphics Pipeline

The Traditional Graphics Pipeline Last Time? The Traditional Graphics Pipeline Reading for Today A Practical Model for Subsurface Light Transport, Jensen, Marschner, Levoy, & Hanrahan, SIGGRAPH 2001 Participating Media Measuring BRDFs

More information

Salient Region Detection and Segmentation in Images using Dynamic Mode Decomposition

Salient Region Detection and Segmentation in Images using Dynamic Mode Decomposition Salient Region Detection and Segmentation in Images using Dynamic Mode Decomposition Sikha O K 1, Sachin Kumar S 2, K P Soman 2 1 Department of Computer Science 2 Centre for Computational Engineering and

More information

DEFERRED RENDERING STEFAN MÜLLER ARISONA, ETH ZURICH SMA/

DEFERRED RENDERING STEFAN MÜLLER ARISONA, ETH ZURICH SMA/ DEFERRED RENDERING STEFAN MÜLLER ARISONA, ETH ZURICH SMA/2013-11-04 DEFERRED RENDERING? CONTENTS 1. The traditional approach: Forward rendering 2. Deferred rendering (DR) overview 3. Example uses of DR:

More information

Deep Opacity Maps. Cem Yuksel 1 and John Keyser 2. Department of Computer Science, Texas A&M University 1 2

Deep Opacity Maps. Cem Yuksel 1 and John Keyser 2. Department of Computer Science, Texas A&M University 1 2 EUROGRAPHICS 2008 / G. Drettakis and R. Scopigno (Guest Editors) Volume 27 (2008), Number 2 Deep Opacity Maps Cem Yuksel 1 and John Keyser 2 Department of Computer Science, Texas A&M University 1 cem@cemyuksel.com

More information

Computer Graphics Global Illumination

Computer Graphics Global Illumination Computer Graphics 2016 14. Global Illumination Hongxin Zhang State Key Lab of CAD&CG, Zhejiang University 2017-01-09 Course project - Tomorrow - 3 min presentation - 2 min demo Outline - Shadows - Radiosity

More information

Today. Global illumination. Shading. Interactive applications. Rendering pipeline. Computergrafik. Shading Introduction Local shading models

Today. Global illumination. Shading. Interactive applications. Rendering pipeline. Computergrafik. Shading Introduction Local shading models Computergrafik Matthias Zwicker Universität Bern Herbst 2009 Today Introduction Local shading models Light sources strategies Compute interaction of light with surfaces Requires simulation of physics Global

More information

Advanced Deferred Rendering Techniques. NCCA, Thesis Portfolio Peter Smith

Advanced Deferred Rendering Techniques. NCCA, Thesis Portfolio Peter Smith Advanced Deferred Rendering Techniques NCCA, Thesis Portfolio Peter Smith August 2011 Abstract The following paper catalogues the improvements made to a Deferred Renderer created for an earlier NCCA project.

More information