Reusing previously rendered images for rendering acceleration

Johannes Scharl, MatrNr, June 13, 2006

Abstract

This paper presents an overview and comparison of several rendering methods that exploit spatial and temporal coherence to accelerate rendering. I focus on full-screen methods that redisplay previously rendered images in order to save rendering time. There are several ways to achieve this: interpolating between previously sampled discrete viewing positions, exploiting frame-to-frame coherence by reusing rendered images, and adaptive refinement of images. I have categorized the most important approaches, describe them, and discuss their advantages and disadvantages.

1 Introduction

Interactive visualization of three-dimensional environments and models is a basic requirement of any virtual reality system. The realism of a scene is roughly proportional to the number of polygons representing it, and the rendering cost grows proportionally as well. Recent graphics hardware, despite improving performance, cannot cope with the growing demands of producers and consumers of such scenes. Techniques that reduce the rendering cost of such scenes are therefore in great demand. These techniques are often based on culling non-visible parts of the scene. Another approach, the one discussed in this paper, is to reuse previously gathered information about objects from one frame to the next, since in most cases the difference between two adjacent frames is hardly noticeable.

A conventional graphics pipeline is capable of displaying a completely different geometric model or scene every frame. The viewpoint could change randomly without any path coherence; temporal and spatial coherence between adjacent frames are not considered.

In the following three sections, I discuss the most interesting methods exploiting spatial and temporal coherence. Section 2 describes methods that reuse rendered images by interpolating between previously sampled images. These reference images are either precomputed or rendered on the fly by predicting future viewpoints, varying from method to method. The interpolation can also be done in various ways and introduces difficulties in the form of parallax effects such as holes and overlaps, which must be dealt with. Section 3 discusses techniques that exploit spatial and temporal frame-to-frame coherence by using image composition. The elements that are composited together can be updated at different rates, depending on how much they change visually from frame to frame; background objects may be reused over their whole lifetime without rerendering. In section 4 I present methods that take a different approach to the quality-versus-performance tradeoff by adaptive refinement: they present the user a coarse image at high frame rates when the user input changes quickly, and further refine this image when the scene remains relatively static, thus minimizing spatial and temporal error.

2 Interpolating between discrete viewing positions

The methods discussed in this section try to reuse previously rendered or precomputed images by interpolating between them in various ways. Using information gathered at discrete viewpoints, they approximate images from new viewpoints without rendering a new frame there.

2.1 The Plenoptic Function

Introduced by Adelson et al. [AB91], the plenoptic function describes all radiant energy that can be perceived by an observer at a certain viewpoint. It is defined over the following parameter space:

    p = P(θ, φ, λ, Vx, Vy, Vz, t)

Here Vx, Vy, Vz describe the viewing position in space where the camera is located, θ and φ are the azimuth and elevation angles, λ describes a band of wavelengths, and t is the time at which the function is evaluated. The plenoptic function therefore describes all of the image information visible within a certain band of wavelengths, from a viewing position, in a particular viewing direction, at a certain point in time. One could also say that the plenoptic function describes all possible environment maps for a scene. McMillan et al. [MB95] claim that all of the following image-based rendering approaches are attempts to reconstruct or approximate the plenoptic function from a set of discrete samples. They define a complete sample of the plenoptic function as a full spherical map for a given viewpoint and time value, and an incomplete sample as some solid-angle subset of this spherical map.

2.2 Lippman's Movie Maps

Lippman [Lip80] presented an interactive walkthrough system in 1980 based on video discs, nowadays regarded as the basic work for most of today's walkthrough applications. Lippman took individual panoramic images with a special fisheye lens all over a city, in every street, every few meters. He achieved instantaneous freedom of choice in route selection by using two video discs: one played the sequence of images that corresponds to travelling along a street, while the other dynamically changed its position to the corresponding street at the next intersection. One disc held the streets running from north to south, the other the streets running from west to east. The movement was controlled with a joystick; if the user turned left or right at an intersection, the signal from the second disc was made visible. The discs could be played either backwards or forwards, so the user could move left, right, back and forth. It was also possible to stop or change the playback speed of the successive panoramic images, changing the speed of the walkthrough. Lippman suggested that with anamorphic mapping the transition between two spatially sparse images could be made smoother, but did not present any techniques for that. His work can be thought of as one of the foundations for most of the research in the field of image-based rendering systems.

2.3 View Interpolation for Image Synthesis

In 1993, Chen et al. [CW93] presented an approach that replaces the 3D representation of the scene with images rendered from discrete viewpoints and interpolates between these images to compute a smooth camera move. While this method is restricted to static scenes and view-independent shading (so no reflection mapping or specular reflections are possible), it uses synthetic images with depth information and morphs between them in real time if the camera moves between two precomputed views.
Chen et al. define image morphing as the simultaneous interpolation of shape and texture.

Their technique consists of two steps. The first step establishes the correspondence between two images, which is quite difficult if not done by hand. Chen et al. developed a method that uses the camera transformation and the depth information to determine the correspondence between two or more images. This correspondence is called forward mapping and describes the pixel-to-pixel correspondence from the source image to the destination image. The mapping is bi-directional and is acquired as follows: from each pixel's screen coordinates (x, y and z) and the known camera location, a three-dimensional spatial offset vector can be computed for each pixel. These offset vectors are illustrated in Figure 2.1; each offset vector describes the pixel's movement from one image to the next. The vectors are stored in a morph map, which describes the forward mapping from one whole image to another. To make this basic method more efficient, an MPEG-like block compression can be used, in which only one offset vector per block is computed.

Figure 2.1: Offset vectors. [CW93]

In the second step, Chen et al. use the acquired mapping to interpolate between two given images. To generate these in-between views of a pair of images, the offset vectors are interpolated linearly and the pixels from the source image are moved by the interpolated vectors. Examples of interpolated views are shown in Figure 2.2. Though more expensive to perform, quadratic or cubic interpolation should be possible in real time on today's hardware and would be closer to the exact solution than linear interpolation.

Figure 2.2: Interpolated views of a teapot - the left and the right pictures are sampled, the two in the center are morphed views. [CW93]

The two main problems of the forward mapping technique are that overlaps and holes occur in the interpolated images. Overlaps appear because many pixels can map to one pixel when an interpolated morph field is used; this can be solved with a standard Z-buffer algorithm. Holes may appear where areas of the image were occluded by foreground objects when the image was computed; if the viewpoint is morphed to a different position, these regions may become visible. This is shown in Figure 2.3.

Figure 2.3: Holes appear as objects in the background become visible by morphing the images. [CW93]

This problem can be minimized, though not completely solved, if the new view is interpolated from two or more source images.
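A minimal sketch of such a forward warp, assuming a per-pixel morph map of 2D offset vectors and a per-pixel depth buffer are already available (the array names and the hole marker are illustrative assumptions, not taken from [CW93]):

    import numpy as np

    def forward_warp(src_color, src_depth, morph_map, t):
        """Forward-map a source image towards a destination view.

        src_color: (H, W, 3) colours of the source image
        src_depth: (H, W) depth of each source pixel
        morph_map: (H, W, 2) pixel offset vectors (dx, dy) to the destination view
        t: interpolation parameter in [0, 1]
        Returns the warped colour image; unfilled pixels stay NaN (holes)."""
        h, w = src_depth.shape
        out_color = np.full((h, w, 3), np.nan)
        out_depth = np.full((h, w), np.inf)          # Z-buffer resolves overlaps

        for y in range(h):
            for x in range(w):
                dx, dy = t * morph_map[y, x]         # linearly interpolated offset
                xd, yd = int(round(x + dx)), int(round(y + dy))
                if 0 <= xd < w and 0 <= yd < h:
                    if src_depth[y, x] < out_depth[yd, xd]:   # keep the closest pixel
                        out_depth[yd, xd] = src_depth[y, x]
                        out_color[yd, xd] = src_color[y, x]
        return out_color

The remaining NaN pixels correspond to the holes discussed above; they would be filled in a post-process, or by warping a second source image into the same buffers.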

The remaining holes can be filled by interpolating between the adjacent pixels; this is done in a post-process. Cases where background objects are visible through missing pixels in foreground objects cannot be fixed.

Chen et al.'s method of view interpolation can also be used to compute motion blur and shadows efficiently. The motion vectors can be used to approximate motion blur by oversampling the motion in the temporal domain, with the sampling rate determined by the largest offset vector. Shadows of light sources located between other lights can be computed in the same way as views are interpolated between precomputed views, using a standard shadow buffer algorithm.

Chen et al.'s image rendering system allows interactive walkthroughs of complex scenes at reasonable performance cost. An interesting idea is to use photographed images instead of rendered ones by using a ranging camera [Bes88]. While the method greatly accelerates the calculation of motion blur and shadows, the restriction to static scenes and view-independent lighting shows its boundaries.

2.4 Plenoptic Modeling

Like other image-based rendering systems, McMillan et al. [MB95] use a series of images for the scene description. As the plenoptic sample representation they use a cylindrical projection around the viewpoint instead of a sphere or a cube, for the following reasons: a perfect cube is quite difficult to represent with reasonable storage on a computer, and a spherical projection cannot easily be mapped onto a plane, whereas a cylinder can easily be unrolled onto a planar map. A cylinder also avoids the optical distortions of a cubic mapping that appear at corners and edges. However, using a cylinder as projection creates the shortcoming of the boundary conditions at the top and bottom; by not sampling the top and bottom caps, McMillan et al. simply limited the vertical field of view.

McMillan et al. acquire the sample images by panning a camera and continuously taking pictures. They project each image into the cylindrical projection by using an appropriate homogeneous transformation consisting of two parts: an intrinsic transformation determined by the camera properties and an extrinsic transformation determined by the rotation of the camera. To determine the flow fields between images taken at different viewing positions, plenoptic modelling requires a set of corresponding points visible from both views. This establishes relative relationships between a pair of cylindrical projections. While a set of these tiepoints (see Figure 2.4) between two images may be set by hand, the process is far too tedious for a whole cylinder, let alone a scene. However, McMillan et al. have developed a method to identify corresponding points in cylindrical projections in O(n) runtime by approximating the cylindrical epipolar geometry equation [Fau92, RHC92]. This epipolar relationship can be used to calculate image flow fields using standard methods, like Faugeras' correlation method [Fau93].

Figure 2.4: Tie points are corresponding points visible in both views.

These flow fields are then used to reconstruct views from points between the plenoptic samples initially taken. A painter's-style rendering algorithm is used to establish correct visibility, and occluded regions are interpolated between their boundaries, since depth information is available.
McMillan et al. demonstrated their modelling system by taking images with a video camcorder on a levelled tripod. Figure 2.5 shows two (incomplete, because cylindrical) plenoptic samples taken in the backyard of one of the authors.
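A minimal sketch of the cylindrical sample representation, mapping a world-space viewing direction to panorama coordinates (the parameterization, the vertical field of view, and the names below are illustrative assumptions, not taken from [MB95]):

    import math

    def direction_to_cylinder(d, width, height, v_fov=math.radians(90.0)):
        """Map a unit viewing direction d = (dx, dy, dz), with z up, onto an
        unrolled cylindrical panorama of size width x height.

        Returns (u, v) pixel coordinates, or None if the direction falls on the
        unsampled top/bottom caps (outside the limited vertical field of view)."""
        dx, dy, dz = d
        azimuth = math.atan2(dy, dx)                     # angle around the cylinder axis
        elevation = math.atan2(dz, math.hypot(dx, dy))   # angle above the horizon
        if abs(elevation) > v_fov / 2.0:
            return None                                  # cap region was not sampled
        u = (azimuth + math.pi) / (2.0 * math.pi) * width
        v = (0.5 - math.tan(elevation) / (2.0 * math.tan(v_fov / 2.0))) * height
        return u, v

With such a mapping, each pixel of a panned source image can be splatted into the cylinder, and the inverse mapping turns a panorama coordinate back into a viewing ray for reconstruction.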

Figure 2.5: Panoramic views acquired with a video camcorder on a levelled tripod. [MB95]

The images in Figure 2.6 were reconstructed in real time.

Figure 2.6: Reconstructed images. [MB95]

2.5 Post-Rendering 3D Warping

In 1997, Mark et al. [MMB97] presented an approach that exploits frame-to-frame coherence to completely avoid conventional rendering of most frames, called Post-Rendering 3D Warping.

A 3D warp uses a per-pixel disparity value that can easily be computed from the standard Z-buffer as part of the warp computation. Unlike similar warps, the 3D warp correctly accounts for viewpoint translation as well as rotation. However, accounting for translation introduces two difficulties. First, because adjacent pixels may move different distances as a result of the warp, image reconstruction is more difficult than in other warps: the reference frame is treated as a mesh, which creates rubber-sheet effects when the image is warped.

Figure 2.7: Rubber-sheet effects appear when the reference frame is treated as a mesh. [MMB97]

These rubber sheets must not be treated as real meshes in the compositing process, otherwise they may occlude other objects. Second, there is still the occlusion problem: after a warp, objects may become visible that were occluded in the reference frame, and therefore no information about them exists.

Figure 2.8: Holes caused by parallax effects after a 3D warp. [MMB97]

Mark et al. solve this problem by warping multiple reference frames to obtain each derived frame: the compositing algorithm warps both reference images and then composites them together. For each pixel in the derived image, it has to be decided which of the two warped reference frames will determine that pixel's colour. This decision is based on the pixel's Z and confidence value in each reference frame; a low confidence value indicates that the pixel is undersampled. If the warped Z values are different, the pixel with the closer Z value is stored. If the warped Z values are the same within some tolerance, the pixel with the greater confidence value is stored.
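A minimal sketch of this per-pixel compositing rule, assuming both warped reference frames already provide colour, Z and confidence buffers (the array names and the tolerance value are illustrative assumptions, not the actual code of [MMB97]):

    import numpy as np

    def composite(colorA, zA, confA, colorB, zB, confB, z_tol=1e-3):
        """Composite two warped reference frames into one derived frame.

        For each pixel: if the warped depths differ, keep the closer surface;
        if they agree within z_tol, keep the better-sampled (higher-confidence) pixel."""
        h, w, _ = colorA.shape
        out = np.empty_like(colorA)
        for y in range(h):
            for x in range(w):
                if abs(zA[y, x] - zB[y, x]) > z_tol:
                    use_a = zA[y, x] < zB[y, x]            # closer Z wins
                else:
                    use_a = confA[y, x] >= confB[y, x]     # higher confidence wins
                out[y, x] = colorA[y, x] if use_a else colorB[y, x]
        return out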

To interpolate a moving viewpoint from two reference frames, the system has to predict a future position from the current motion of the user (predictive tracking). As soon as the user passes this viewpoint, it is used as the past reference frame, and the future reference frame is calculated from the next predicted viewpoint.

Figure 2.9: Derived frames are interpolated between sampled reference frames. [MMB97]

This ensures that a single convex occluder causes no occlusion artefacts in the derived frame, as demonstrated in Figure 2.10 (a). For multiple occluders, however, this may not be the case (b). Mark et al. state that in practice these remaining artefacts are few and barely visible.

Figure 2.10: While a single occluder (a) cannot cause occlusion artefacts if the user is moving on a path between two sampled views, this can be the case with multiple occluders (b). [MMB97]

2.6 Conclusions

The presented methods of image-based rendering can produce interpolated frames of near reference-frame quality at much higher rates than classical double-buffered rendering. One of the main problems of these approaches is the acquisition of future reference frames. There are quite a few possibilities to add more flexibility to these methods: for example, if the user does not move, it would make sense to acquire several reference frames in a cloud around the user's position, so that a reference frame is ready for the next, unpredictable movement. Also, in an architectural scene, placing reference-frame viewpoints in doorways would be a good idea, because chances are high that the user will pass through them.

Most of the presented techniques cannot cope with the problem of warping specular shading - specular highlights will jump around at the reference frame rate. Furthermore, affine and 2D warps as used in most of the presented approaches cannot by themselves completely compensate for viewpoint translation if the objects in the scene are not all coplanar. One way to work around this is Mark et al.'s approach of Post-Rendering 3D Warping. But affine and 2D warps can at least partially compensate for translations if the scene is separated into layers and an affine or 2D warp is used for each of these layers.

In the next section, this idea will be the central point of the presented methods.

3 Exploiting spatial and temporal frame-to-frame coherence by using image composition

When designing a graphics system, one struggles with two fundamental problems: memory bandwidth and system latency. To keep costs down, one also has to consider memory cost. A conventional graphics pipeline is capable of displaying a completely different geometric model every frame; the viewpoint could skip randomly without any path coherence, and temporal and spatial coherence between successive frames are not taken into account. Various approaches have been suggested to exploit temporal and spatial coherence for rendering acceleration by using image composition. I present a few of them in this section.

3.1 Priority Rendering with an Address Recalculation Pipeline

In 1994, Regan et al. [RP94] presented a graphics system called Priority Rendering with an Address Recalculation Pipeline, specifically designed for use in virtual reality systems. It performs the orientation viewport mapping after rendering, which means that as long as the user does not move, the scene does not have to be re-rendered, even if the viewing direction (the head orientation) changes. To achieve this, the viewing orientation is detached from the rendering process using the Address Recalculation Pipeline displayed in Figure 3.1. The pipeline consists of three stages. In the first stage, each pixel's screen location is converted into a 3D vector pointing away from the camera, simulating a wide-angle viewing lens. In the second stage, each of these vectors is multiplied with the matrix containing the viewing direction, resulting in another 3D vector that points in the direction in which the pixel is seen in world coordinates. In the third stage, this vector is converted into a display memory location.

Figure 3.1: The Address Recalculation Pipeline. [RP94]

This means that the rendering overhead is much bigger, since no clipping or view frustum culling can be applied. However, a once-rendered scene can be reused as long as the user does not change position and only changes the viewing direction.
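A minimal sketch of the per-pixel address recalculation, assuming the pre-rendered scene is stored as a latitude-longitude environment map (the lens model, the map layout and the names below are illustrative assumptions; the actual hardware pipeline of [RP94] differs):

    import math

    def recalculate_address(px, py, width, height, view_matrix, env_w, env_h,
                            h_fov=math.radians(120.0)):
        """Stages of the address recalculation pipeline for one display pixel.

        Stage 1: convert the pixel's screen location into a direction vector
                 away from the camera (wide-angle pinhole model).
        Stage 2: rotate that vector by the current head-orientation matrix.
        Stage 3: convert the world-space direction into a display-memory address,
                 here an index into a latitude-longitude environment map."""
        # Stage 1: screen location -> camera-space direction
        aspect = height / width
        x = (2.0 * px / width - 1.0) * math.tan(h_fov / 2.0)
        y = (1.0 - 2.0 * py / height) * math.tan(h_fov / 2.0) * aspect
        d = (x, y, -1.0)

        # Stage 2: rotate into world space by the 3x3 viewing-orientation matrix
        wx = sum(view_matrix[0][k] * d[k] for k in range(3))
        wy = sum(view_matrix[1][k] * d[k] for k in range(3))
        wz = sum(view_matrix[2][k] * d[k] for k in range(3))

        # Stage 3: world-space direction -> display memory location
        azimuth = math.atan2(wx, -wz)
        elevation = math.atan2(wy, math.hypot(wx, wz))
        u = int((azimuth + math.pi) / (2.0 * math.pi) * env_w) % env_w
        v = min(env_h - 1, int((0.5 - elevation / math.pi) * env_h))
        return v * env_w + u        # linear address into the environment map

Because only the head-orientation matrix changes between frames here, the same environment-map contents can be redisplayed for any viewing direction without rerendering the scene.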

To reuse rendered parts of the scene as often as possible, Regan et al. also use image composition, as illustrated in Figure 3.2.

Figure 3.2: Image composition. [RP94]

Using image composition in combination with an address recalculation pipeline has the advantage that images in the display memory do not necessarily become invalid when the viewing direction changes, so an image can be used longer for composition without updating it. Furthermore, an image may even stay valid when the user changes position; for example, a static background may never require rerendering.

Regan et al. also presented a method to subdivide a virtual world into parts that are rendered at different rates, called priority rendering. Priority rendering is demand driven, which means objects are not redrawn until their image within the display buffer has changed by a certain threshold. This would not be very effective in a conventional graphics system, since most view orientation changes would change the display memory; in an address recalculation pipeline, the display buffer has to be updated far less frequently for most objects. The threshold for determining when an object should be redrawn is ideally less than the minimum feature size the human eye can detect. It is calculated from the distance of the object to the user's eye, the user's translation and the direction of movement, as demonstrated in Figure 3.3.

Figure 3.3: An object's image can be reused without rerendering for a maximum translation distance that depends on the distance to the user and the user's viewing direction. [RP94]

Regan et al. set the maximum update rate to 60 Hz, which was the standard display update rate at the time. All other update rates are exponential harmonics of this rate, so that the images are updated synchronously and can be swapped between rates (from lower to higher, if necessary). This can be seen in Figure 3.4.

Figure 3.4: Update rates are exponential harmonics (2^n) of the standard display rate. [RP94]

In a virtual reality system, using an address recalculation pipeline with priority rendering greatly reduces the latency for head rotations, and objects in the background are updated less frequently. This comes at the cost of rendering the whole scene without viewport clipping or view frustum culling. When the user moves a lot at changing speed, as is the case, for example, in modern computer games, the scene has to be updated frequently. The address recalculation pipeline then has six times the rendering overhead of a conventional graphics system, because the whole scene is rendered in every direction but cannot be reused due to the changed viewpoint, and no view frustum culling can be applied.
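A minimal sketch of assigning an object to one of the harmonic update rates, assuming the perceptible change of an object's image is estimated from the user's translation speed and the object's distance (this error estimate and the names below are illustrative assumptions, not the exact criterion of [RP94]):

    import math

    DISPLAY_RATE_HZ = 60.0                                     # maximum update rate
    HARMONICS = [DISPLAY_RATE_HZ / 2**n for n in range(6)]     # 60, 30, 15, 7.5, ...

    def choose_update_rate(distance, user_speed, threshold_rad):
        """Pick the lowest harmonic update rate at which the object's image is
        expected to change by less than threshold_rad (an angular threshold,
        ideally below the minimum feature size the eye can detect) per update."""
        for rate in reversed(HARMONICS):                       # try the slowest rates first
            interval = 1.0 / rate
            # crude estimate of angular change: lateral translation over distance
            angular_change = math.atan2(user_speed * interval, distance)
            if angular_change < threshold_rad:
                return rate
        return DISPLAY_RATE_HZ                                 # fast-moving or nearby: full rate

    # Example: a distant, slowly passed building can be redrawn far less often
    # than a nearby object.
    print(choose_update_rate(distance=200.0, user_speed=1.5, threshold_rad=0.002))
    print(choose_update_rate(distance=2.0,   user_speed=1.5, threshold_rad=0.002))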

3.2 The Talisman hardware architecture: Rendering with coherent layers

Torborg et al. [TK96] from Microsoft introduced an alternative hardware rendering architecture in 1996, codenamed Talisman, which exploits both spatial and temporal coherence to accelerate the rendering of complex scenes. The Talisman architecture implements an image composition method similar to the one described by Regan et al. in [RP94]: multiple independent image layers are composited together to create the output signal. These image layers can be rendered and manipulated independently, and therefore each object can be updated independently. Image layers can be of any shape, and operations like scaling, rotation and subpixel positioning can be applied to them. Many 3D transformations can be simulated by 2D imaging operations; for example, an object moving away from the user just needs to be scaled down. Processing requirements are thereby reduced significantly, since image layer transforms can be processed a lot faster than rerendering the geometry - typically 10 to 20 times faster.

Furthermore, Talisman uses an image compression scheme very similar to JPEG, called TREC, in combination with chunking to increase performance. Chunking means that each image layer is divided into 32x32 pixel regions called chunks. The geometry is pre-sorted based on which chunk it will be rendered into. Since all geometry in one chunk is rendered before proceeding to the next, the depth buffer only needs to be as large as a single chunk. Antialiasing also becomes considerably easier, since each chunk can be dealt with independently and the anti-aliasing algorithm only has to maintain information about the pixels of the chunk instead of every pixel on the screen, which would be the case if the pixels were accessed in random order.

Lengyel et al. [LS97] enhanced and further explained the Talisman architecture by specifying how to factor separate elements into different layers and how to best approximate 3D transformations with 2D affine transformations. They factor geometry by considering the following properties of different geometric objects:

- Relative geometry: for example, if two objects move away from each other, they should be separated into different layers.
- Perceptual distinctness: elements in the background do not need to be updated as frequently as foreground objects, since their appearance does not change as much.
- Ratio of clear pixels to used ones: if many objects are aggregated into one layer, a lot of the sprite area (and therefore memory) is wasted; splitting the layer into smaller ones typically decreases this ratio.

Visibility sorting is done with a kd-tree containing the bounding polyhedra of each layer to quickly determine occluded objects. Even shading may be factored into separate layers, as demonstrated in Figure 3.5. To take advantage of temporal coherence, highlights or reflections that move faster than the objects can be put in a separate layer and updated more often, while blurry highlights and shadows can be given fewer samples.

Figure 3.5: A 3D mesh of a cow is represented by three sprites containing the shadow, the gouraud-shaded and textured cow, and the specular highlights with an alpha map. [LS97]
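A minimal sketch of approximating a layer's 3D motion with a 2D affine transform of its cached sprite, in the spirit of warping a layer image instead of rerendering its geometry (fitting the affine to projected bounding-box corners is an illustrative choice, not the exact method of [TK96] or [LS97]):

    import numpy as np

    def fit_affine(points_prev, points_curr):
        """Least-squares 2D affine transform A (2x3) mapping the screen positions
        a layer's reference points had when its sprite was rendered (points_prev)
        to their positions in the current frame (points_curr).

        Both arguments are (N, 2) arrays of projected points, e.g. the corners
        of the object's bounding box."""
        n = len(points_prev)
        # Build [x, y, 1] rows so that A @ [x, y, 1]^T gives the new position.
        X = np.hstack([points_prev, np.ones((n, 1))])        # (N, 3)
        A, _, _, _ = np.linalg.lstsq(X, points_curr, rcond=None)
        return A.T                                           # (2, 3) affine matrix

    def screen_space_error(A, points_prev, points_curr):
        """Maximum screen-space distance between the affinely warped sprite points
        and the true projected positions. If it exceeds a threshold, the layer
        should be rerendered instead of warped."""
        n = len(points_prev)
        X = np.hstack([points_prev, np.ones((n, 1))])
        warped = X @ A.T
        return float(np.max(np.linalg.norm(warped - points_curr, axis=1)))

This kind of screen-space error corresponds to the geometric fiducials described below, which decide when a sprite warp is no longer good enough.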

In the Talisman architecture, a scene is split into the smallest renderable units, which are grouped into layers using these factoring guidelines. To increase sprite reuse, the clipping area is extended, so even layers that are not completely visible are not clipped and can be used again. Sprites are mapped to the screen using affine transformations; to avoid wasting sprite space, the tightest-fitting bounding rectangle is calculated for these transformations. Even motion blur can be approximated using image layers by undersampling the image along the desired axis.

Lengyel and Snyder measure the fidelity of their approximation techniques with so-called fiducials. There are four types:

- Geometric fiducials measure the error in the projected positions of the images compared to the original geometry.
- Photometric fiducials measure the error in lighting and shading.
- Sampling fiducials determine the degree of distortion of the image samples.
- Visibility fiducials measure visibility artefacts.

The results show that the Talisman architecture and the concept of rendering with coherent layers use far fewer resources, though there are of course some pixel errors due to the transformation approximations.

3.3 Conclusions

Exploiting frame-to-frame coherence by using image composition can greatly accelerate the rendering process. This changes when fast camera or object motion occurs and the whole image has to be updated frequently: the performance gains from exploiting coherence are then low, and due to the extra calculations the overall performance is worse than with classical rendering. Furthermore, there has to be a trade-off between image quality and resource usage. In the Talisman architecture, however, the resource usage can be scaled much better, since critical sprites can be updated more often.

4 Image Rendering by Adaptive Refinement

In 1986, Bergman et al. [BFGS86] described techniques for improving the performance of image rendering by first generating a crude image rapidly and then adaptively refining it where necessary. One way to provide faster visual feedback and enhanced interactivity is to give the user approximated results rather than waiting for exact results to become available. Their goal was to find what they call the golden thread: a single step that, repeated a few times, generates a crude image, but repeated many times generates a high-quality image. This idea is used in the approaches presented in this section.

4.1 The Render Cache

In 1999, Walter et al. [WDP99] presented the render cache, which provides visual feedback at a rate faster than a renderer such as ray tracing or path tracing can generate complete frames, at the cost of producing approximated images during camera and object translations. The aim is to provide ray tracing or path tracing image quality at interactive frame rates in real-time applications. The renderer is separated from the synchronous part of the visual feedback loop, as illustrated in Figure 4.1.

Figure 4.1: (a) The classical visual feedback loop: the renderer calculates every image the user sees anew and as a whole. Using the render cache (b), the renderer is separated and samples only important regions of reference frames. [WDP99]

This reduces the frame rate's dependence on the speed of the renderer, but the display process does not replace the renderer and depends on it for all shading computations. The display process may be used with various renderers; the main requirement is that the renderer must be able to compute individual rays, which makes ray tracing a perfect candidate. Images between two frames computed by the renderer are generated in the display process by projecting rendered points from the render cache onto an image plane called the point image. This projection consists of a transform based on the current camera parameters, with Z-buffering to handle cases where more than one point of the cache maps to the same image pixel. The results of this projection alone show artefacts, as seen in the left illustration of Figure 4.2. To remove them, Walter et al. use depth culling, smoothing and interpolation. The depth culling heuristic examines each pixel's 3x3 neighbourhood and computes an average depth value; if a pixel's depth differs significantly from that average, pixels from an occluded surface are evidently shining through the occluder, and these pixels are removed. After that, interpolation and smoothing filters fill the remaining gaps in the point image. Figure 4.2 illustrates the whole process.

Figure 4.2: A projected image (left) is first depth culled, then smoothed and interpolated. [WDP99]

Reference images are not sampled as a whole; instead, the renderer samples single pixels or regions in order of their importance. The render cache is thus itself an approximation of that golden thread, incrementally improving image quality and correctness. To choose the samples that should be rendered next, a priority image is calculated and an age is assigned to each pixel. A pixel's age starts at zero when it is rendered and is incremented each frame; when the pixel reaches a certain age, it is re-rendered. To exploit frame-to-frame coherence, the speed of a pixel's aging can be altered: for example, pixels with many valid (not interpolated) neighbours get a lower priority and age more slowly, because it is more important to sample regions with lower local point density.
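A minimal sketch of the point-image projection and the 3x3 depth-culling heuristic, assuming cached points are stored as world-space positions with colours and the camera is given as a simple projection function (the names and the outlier test are illustrative assumptions, not the exact code of [WDP99]):

    import numpy as np

    def project_points(points, colors, project, width, height):
        """Splat cached points into a point image with a Z-buffer.
        'project' maps a world-space point to (px, py, depth) or None."""
        color_img = np.full((height, width, 3), np.nan)
        depth_img = np.full((height, width), np.inf)
        for p, c in zip(points, colors):
            hit = project(p)
            if hit is None:
                continue
            px, py, depth = hit
            x, y = int(px), int(py)
            if 0 <= x < width and 0 <= y < height and depth < depth_img[y, x]:
                depth_img[y, x] = depth
                color_img[y, x] = c
        return color_img, depth_img

    def depth_cull(color_img, depth_img, rel_tol=0.1):
        """Remove pixels that are much farther away than their 3x3 neighbourhood
        average: these are occluded points shining through a hole in the occluder."""
        h, w = depth_img.shape
        culled = color_img.copy()
        for y in range(1, h - 1):
            for x in range(1, w - 1):
                window = depth_img[y - 1:y + 2, x - 1:x + 2]
                finite = window[np.isfinite(window)]
                if finite.size and depth_img[y, x] > (1.0 + rel_tol) * finite.mean():
                    culled[y, x] = np.nan   # hole: to be smoothed/interpolated later
        return culled

The remaining NaN pixels would then be filled by the smoothing and interpolation filters mentioned above.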

In 2002, Walter et al. [WDG02] presented some impressive enhancements to their render cache:

- Predictive sampling: by predicting the camera's movement a few frames ahead in time, the renderer can sample image regions that are not visible at the moment.
- Tiled Z-buffer: because points in the cache are unordered, projecting them onto the image plane results in completely random (and therefore slow) access to the image-plane data structures. By dividing the image into regions - tiles - the points are roughly bucket-sorted on the image plane. This requires extra work, but greatly reduces memory access latency.
- Pre-filtering: the image is pre-filtered with a larger filter kernel to better handle sparse image regions. To avoid completely blurred images, the 3x3 filter is used instead wherever it produces valid data.
- Point eviction: if a point becomes invalid after some time (which can happen because of non-diffuse shading effects), it is deleted once its age reaches a certain limit.
- SIMD instructions of Intel's MMX, SSE and SSE2 extensions are used to project four points at a time.

Experiments show that the render cache can achieve interactive ray tracing even on a single-processor system with processing power comparable to that of today's computers. Even if the renderer can only compute a small number of new samples per frame (e.g., 1/64th of the image resolution), the render cache achieves good interactivity and satisfactory image quality. A clear drawback of the render cache is the lack of good anti-aliasing: since anti-aliasing is view dependent, only supersampling can be used, which considerably increases the computational expense.

4.2 Tapestry

Simmons et al. [SSE00] presented a method for interactive viewing of dynamically sampled environments based on a 3D mesh reconstruction called a tapestry. Tapestry is an enhancement of the Holodeck interactive ray cache previously developed by Simmons et al. [WS99, Sim00]. A tapestry is a 3D mesh that serves as the display representation as well as a cache for the reuse of previously taken samples. Samples of the scene are taken by ray tracing and projected onto the tapestry as points in the mesh. The mesh is refined adaptively while the viewpoint remains in place, and reconstructed and changed when it moves.

The tapestry is established as a unit sphere centered at the viewpoint, and an icosahedral base mesh is created to cover the surface of the sphere, as displayed in Figure 4.3 (a). For every new sample projected onto that sphere (Figure 4.3 (b)), a vertex is added to the mesh, resulting in a spherical mesh as seen in Figure 4.3 (c). The whole algorithm is shown in Figure 4.4: samples are selected, generated by ray tracing, and inserted into the mesh as vertices. Since every vertex also contains colour information, the mesh can be rendered using OpenGL hardware.

Figure 4.3: (a) Basic icosahedral mesh around the viewpoint. (b) Samples projected onto the unit sphere. (c) The resulting spherical mesh. [Sim00]

Figure 4.4: Overview of the algorithm used in tapestry. [SSE00]

Because the number of samples that can be taken for each frame is limited, it is important to generate those samples that contribute most to the image's quality. To achieve that, a priority is assigned to each triangle in the tapestry. This is done with a heuristic that estimates how well the gouraud-shaded triangle approximates the geometry it represents; if the approximation is crude, the sampling density in this triangle is refined, i.e. more samples are added and the triangle is split into smaller ones.

When the viewpoint remains static, the mesh can be used over multiple frames, because it is valid for all viewing directions; resources are then used to further refine the mesh and improve image quality. Furthermore, the mesh evolves during user motion: samples that are no longer valid are deleted from the tapestry, triangles that become back-facing in the new view are deleted, and new samples are added as described above. Resulting images are shown in Figure 4.5; in the first row of images, 50 samples were added each frame, in the second row only 5 samples per frame.

Figure 4.5: The images in the first row were reconstructed by adding 50 samples per frame, the images in the second row with 5 samples per frame. [SSE00]

The tapestry cache can use generated samples over multiple frames and solves the problem of occlusion errors quite elegantly: since the tapestry is a complete, gouraud-shaded 3D mesh, no holes or overlaps appear. During fast motion the image is quite blurry and crude, but frame times are low; when the image changes are limited to a change in view direction, image quality improves and frame times go up due to the finer mesh. However, evolving the mesh creates a considerable overhead during motion. Also, since the tapestry is one single mesh, no view frustum culling can be applied. The main visible artefacts are blurry edges resulting from low sample density; these are minimized by adaptive sampling, but remain visible as long as the user moves.
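A minimal sketch of such priority-driven refinement, assuming a triangle's priority is estimated from the colour variation among its vertex samples and that new samples are inserted at triangle centroids (these choices, and the attribute names tri.colors, tri.centroid and tri.subdivide, are illustrative assumptions rather than the exact heuristic of [SSE00]):

    def triangle_priority(vertex_colors):
        """Priority of a tapestry triangle: the colour variation among its vertex
        samples. A large variation suggests that Gouraud interpolation is a poor
        approximation of the geometry and shading the triangle covers."""
        spread = 0.0
        for i in range(3):                                   # per colour channel
            channel = [c[i] for c in vertex_colors]
            spread += max(channel) - min(channel)
        return spread

    def refine_tapestry(triangles, trace, budget):
        """Spend the per-frame sample budget on the highest-priority triangles;
        each chosen triangle gets a new ray-traced sample at its centroid, which
        is inserted as a vertex, splitting the triangle into smaller ones."""
        ordered = sorted(triangles, key=lambda t: triangle_priority(t.colors),
                         reverse=True)
        for tri in ordered[:budget]:
            tri.subdivide(trace(tri.centroid))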

4.3 Interruptible Rendering

Woolley et al. [WLWD03] proposed a new approach to the fidelity-versus-performance tradeoff. They aim to unify spatial and temporal error and developed a method that renders coarse images at a high frame rate when the user or objects move fast, and highly detailed images at lower frame rates when the scene is relatively still. Spatial error results from rendering coarse approximations for speed; it is caused by low resolution or a low level of detail (LOD) of the models. Temporal error results from the delay imposed by rendering; it is caused by a low frame rate or high latency. Both errors are illustrated in Figure 4.6.

Figure 4.6: Spatial and temporal error: the silhouette represents the ideal image rendered from the latest input. The left, coarsely sampled image has a high spatial error, but no visible temporal error. The right image is finely sampled and therefore has no spatial error, but is displayed too late and is therefore displaced. [WLWD03]

Woolley et al. introduced a unified measure of spatial and temporal error called dynamic visual error. It approximates the difference between an ideal image, rendered in full detail from the very latest user input, and the actual rendered image; it thus combines spatial and temporal error and makes the two comparable. The spatial error is estimated depending on the rendering method used; with progressive ray casting, it is estimated by the maximum size of the image region sampled by a ray. The temporal error is estimated by comparing a small, precomputed set of vertices that surround the model with the current position of the model: the temporal error results from the offset of the model from the surrounding vertices.

Using this dynamic visual error, Woolley et al. suggest a new approach to fidelity-versus-performance control called interruptible rendering: the image in the front buffer is refined as long as the temporal error does not exceed the spatial error. If it does, a coarse image is rendered into the back buffer and continuously refined until its temporal error exceeds its spatial error; at that point further refinement is useless, the front and back buffers are swapped, and rendering begins again. While the image in the back buffer is being refined, its overall dynamic visual error is also compared to the error of the front buffer; if it is lower, the buffers are swapped as well, because the image in the back buffer is closer to the ideal image than the one in the front buffer.

To avoid visibility artefacts when refining images, progressive hulls are used: a lower level of detail is completely contained inside any higher level (like a Russian Matryoshka doll), which ensures that every higher level of detail completely occludes the previous ones, as displayed in Figure 4.7.

Figure 4.7: Progressive hulls: each finer level of detail encloses the previous rendering. [WLWD03]

The result is a system that displays coarse images at a high frame rate when the user input is changing rapidly, and detailed images at a low frame rate when the input is static, nicely fitting the human ability to perceive high spatial frequencies in still scenes but only low frequencies during fast motion. To compare their technique to other fidelity control schemes, Woolley et al. compared each frame of a previously recorded, offline-rendered ideal walkthrough with the frame displayed at the corresponding time by each fidelity control scheme, using root mean squared (RMS) error. The results are quite impressive: interruptible ray casting is two times more accurate than constant-fidelity ray casting and even four times more accurate than traditional, unmanaged ray casting according to RMS error.
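A minimal sketch of the interruptible refinement loop, assuming a progressive renderer that exposes one refinement step at a time and simple stand-ins for the spatial and temporal error estimates (the function names and the way the two errors are combined are illustrative assumptions, not the actual system of [WLWD03]):

    import time

    def interruptible_loop(start_coarse_image, refine_step, spatial_error,
                           temporal_error, display):
        """Front/back buffer scheme driven by dynamic visual error.

        start_coarse_image(): begin a new progressive rendering from the latest input
        refine_step(img):     perform one refinement step, returns the refined image
        spatial_error(img):   error due to the coarseness of img
        temporal_error(t):    error due to the age of the input img was started from
        display(img):         put img into the front buffer
        """
        front, front_started = start_coarse_image(), time.time()
        display(front)
        while True:
            back, back_started = start_coarse_image(), time.time()
            # Refine the back buffer until further refinement is useless,
            # i.e. its temporal error exceeds its remaining spatial error.
            while temporal_error(back_started) <= spatial_error(back):
                back = refine_step(back)
                back_error = spatial_error(back) + temporal_error(back_started)
                front_error = spatial_error(front) + temporal_error(front_started)
                if back_error < front_error:
                    break        # back buffer is already closer to the ideal image
            # Swap: the refined (or merely newer) image becomes visible.
            front, front_started = back, back_started
            display(front)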
But interruptible rendering also has some costs: it requires some computation not needed by traditional LOD schemes and can produce some overdraw. It could be improved by eliminating progressive rendering within non-interruptible rendering chunks. However, these additional computations are necessary and very useful.

4.4 Frameless Rendering

Bishop et al. [BFMZ94] presented an alternative rendering strategy that computes each pixel in random order based on the most recent input, so that each pixel accurately represents the time at which it is computed and displayed. This works somewhat like the human visual system [Hub89], where photoreceptor cells on the retina absorb photons in random order, by means of photopigments called opsins, and transmit signals to the brain, where the image is put together as a composition of all these signals; after some time the opsins regenerate and are ready to absorb the next photon. Classical double-buffered rendering works more like a video projector, since each frame is rendered completely before it is displayed. Frameless rendering is thus an approximation of that golden thread, since it computes a fraction of the pixels in the image and immediately updates them on the display. Compared to double-buffered rendering there is an impression of smooth movement, but it is rather blurred; Bishop et al. describe it as a rough approximation of motion blur. The major issue with frameless rendering is that the image may become a confusing mixture of past and current images when scene cuts or fast movements happen.

The idea of frameless rendering was picked up and enhanced more recently by Dayal et al. in their proposal of Adaptive Frameless Rendering [DWWL05]. They use ray tracing for adaptive sampling and reconstruction and display each taken sample instantly. The adaptive frameless rendering system consists of two components, as displayed in Figure 4.8.

Figure 4.8: The components of the adaptive rendering system. [DWWL05]

The sampler consists of a controller, a ray tracer instructed by the controller what to sample next, and a deep buffer that stores the samples taken by the ray tracer. The reconstructor also keeps a deep buffer of samples; these are the input for the reconstructor's adaptive filter bank, which computes images reconstructed from the latest samples. The sampler aims to sample image regions that change often at a higher frequency than regions that are relatively still. To achieve that, the controller uses image-space tiling of the deep buffer, as shown in Figure 4.9.

Figure 4.9: The tiling used by the sampler to identify constantly changing regions. Object edges and occlusions are covered by finer tiling. [DWWL05]

Of course, because the rendering is frameless, the sampled content is always changing; therefore, the tiling has to be adjusted continuously. Each tile covers roughly the same amount of colour variation, which basically means that edges and regions that change more often are covered with more (smaller) tiles. The controller chooses a random tile to decide what to sample next; therefore, regions that change more often are sampled more often, because they are tiled more finely.
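A minimal sketch of this tile-driven sample selection, assuming tiles are kept in a flat list and split whenever the colour variation inside them exceeds a bound (the splitting criterion and the names below are illustrative assumptions, not the controller of [DWWL05]):

    import random

    class Tile:
        def __init__(self, x, y, size):
            self.x, self.y, self.size = x, y, size
            self.variation = 0.0      # running estimate of colour change inside the tile

    def choose_sample(tiles, max_variation=0.05, min_size=2):
        """Pick the next pixel to ray trace. Tiles are chosen uniformly at random,
        so regions covered by many small tiles (edges, occlusions, motion) are
        sampled more often than calm regions covered by a few large tiles."""
        # Keep the tiling adapted: split tiles whose content varies too much.
        for tile in list(tiles):
            if tile.variation > max_variation and tile.size > min_size:
                tiles.remove(tile)
                half = tile.size // 2
                for dx in (0, half):
                    for dy in (0, half):
                        tiles.append(Tile(tile.x + dx, tile.y + dy, half))
        tile = random.choice(tiles)
        px = tile.x + random.randrange(tile.size)
        py = tile.y + random.randrange(tile.size)
        return px, py                 # the ray tracer samples this pixel next

Each traced sample would then be written into the deep buffer together with its timestamp, so that the reconstructor's space-time filters can weight newer samples more heavily in dynamic tiles.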

In the same spirit as interruptible rendering, the reconstructor aims to provide highly detailed images when the scene is static, and coarser images at higher frame rates when the scene is changing rapidly. This is achieved with adaptive space-time filtering: the filters are shaped and sized using the tiling information provided by the sampler. In a highly dynamic image region the reconstructor will only use recently acquired samples, while in a more static area older buffer samples can also be used, resulting in higher detail. In adaptive frameless rendering, in contrast to most other fidelity-versus-performance control schemes, the tradeoff decision is not made frame by frame, but continuously for the tiled regions of the current image. This further improves the adaptive rendering approach by not trading quality for performance over the whole image, but only in specific areas of it. The difference between traditional frameless rendering, other fidelity control schemes such as the render cache [WDP99], and adaptive frameless rendering is demonstrated in Figure 4.10 and, more impressively, in a video (see footnote 1).

Figure 4.10: Adaptive frameless rendering compared to other methods: (a) frameless rendering [BFMZ94] creates many visible artefacts; (b) adaptive reconstruction eliminates many of these artefacts; (c) shows the results of using the render cache [WDP99]; (d) adaptive reconstruction combined with frameless rendering clarifies edges and fast motions. [DWWL05]

1 luebke/publications/afr.egsr.submitted.small.mp4

Adaptive frameless rendering provides ray tracing at interactive frame rates with good image quality. To achieve that, it exploits spatial and particularly temporal coherence through intelligently selected samples. However, spatial and temporal coherence have their limits, and eventually the renderer is forced to sample more often if quality and performance are to be improved any further. The rapidly improving performance of recent graphics hardware and the future possibility of hardware-accelerated ray tracing make adaptive frameless rendering a promising approach after all.

4.5 Conclusions

A lot of recent research has gone into rendering by adaptive refinement, and the results are quite impressive: ray tracing image quality at interactive frame rates. As in most other methods that exploit frame-to-frame coherence, there is a tradeoff between image quality and performance. Even though the overall image quality is quite good, clearly visible artefacts and - in some of the discussed methods - difficult anti-aliasing remain. The more recent methods, like interruptible rendering and adaptive frameless rendering, take the interesting approach of minimizing spatial and temporal error by intelligent sampling and adaptive reconstruction.

Considering future improvements in hardware-accelerated ray tracing, these approaches seem very promising.

5 Acknowledgements

The author would like to thank Stefan Jeschke for his advice and a very helpful overview of the methods that use previously rendered images for rendering acceleration, and Sabine Meyer for translation support and grammar corrections.

References

[AB91] E. H. Adelson and J. R. Bergen. The plenoptic function and the elements of early vision. In Computational Models of Visual Processing, chapter 1, edited by Michael Landy and J. Anthony Movshon, pages 3-20. The MIT Press, 1991.

[Bes88] P. J. Besl. Active optical range imaging sensors. Machine Vision and Applications, Vol. 1, 1988.

[BFGS86] Larry Bergman, Henry Fuchs, Eric Grant, and Susan Spach. Image rendering by adaptive refinement. In SIGGRAPH 86: Proceedings of the 13th annual conference on Computer graphics and interactive techniques, pages 29-37, New York, NY, USA, 1986. ACM Press.

[BFMZ94] Gary Bishop, Henry Fuchs, Leonard McMillan, and Ellen J. Scher Zagier. Frameless rendering: double buffering considered harmful. In SIGGRAPH 94: Proceedings of the 21st annual conference on Computer graphics and interactive techniques, New York, NY, USA, 1994. ACM Press.

[CW93] Shenchang Eric Chen and Lance Williams. View interpolation for image synthesis. In SIGGRAPH 93: Proceedings of the 20th annual conference on Computer graphics and interactive techniques, New York, NY, USA, 1993. ACM Press.

[DWWL05] Abhinav Dayal, Cliff Woolley, Benjamin Watson, and David P. Luebke. Adaptive frameless rendering. In Rendering Techniques, 2005.

[Fau92] O. Faugeras. What can be seen in three dimensions from an uncalibrated stereo rig? In Proceedings of the 2nd European Conference on Computer Vision, Santa Margherita Ligure, Italy, 1992. Springer-Verlag.

[Fau93] Olivier Faugeras. Three-Dimensional Computer Vision: A Geometric Viewpoint. MIT Press, Cambridge, Massachusetts, 1993.

[Hub89] David H. Hubel. Eye, Brain and Vision. Scientific American Library, New York, 1989.

[Lip80] Andrew Lippman. Movie-maps: An application of the optical videodisc to computer graphics. In SIGGRAPH 80: Proceedings of the 7th annual conference on Computer graphics and interactive techniques, pages 32-42, New York, NY, USA, 1980. ACM Press.

[LS97] Jed Lengyel and John Snyder. Rendering with coherent layers. In SIGGRAPH 97: Proceedings of the 24th annual conference on Computer graphics and interactive techniques, New York, NY, USA, 1997. ACM Press/Addison-Wesley Publishing Co.

[MB95] Leonard McMillan and Gary Bishop. Plenoptic modeling: an image-based rendering system. In SIGGRAPH 95: Proceedings of the 22nd annual conference on Computer graphics and interactive techniques, pages 39-46, New York, NY, USA, 1995. ACM Press.

[MMB97] William R. Mark, Leonard McMillan, and Gary Bishop. Post-rendering 3D warping. In SI3D 97: Proceedings of the 1997 symposium on Interactive 3D graphics, pages 7-17, New York, NY, USA, 1997. ACM Press.

[RHC92] R. Hartley, R. Gupta, and T. Chang. Stereo from uncalibrated cameras. In Proceedings of the Conference on Computer Vision and Pattern Recognition, Urbana-Champaign, Illinois, 1992.

[RP94] Matthew Regan and Ronald Pose. Priority rendering with a virtual reality address recalculation pipeline. In SIGGRAPH 94: Proceedings of the 21st annual conference on Computer graphics and interactive techniques, New York, NY, USA, 1994. ACM Press.

[Sim00] Maryann Simmons. A dynamic mesh display representation for the holodeck ray cache system. Technical Report CSD, University of California, Berkeley, January 13, 2000.

[SSE00] M. Simmons and C. Sequin. Tapestry: A dynamic mesh-based display representation for interactive rendering, 2000.

[TK96] Jay Torborg and James T. Kajiya. Talisman: commodity realtime 3D graphics for the PC. In SIGGRAPH 96: Proceedings of the 23rd annual conference on Computer graphics and interactive techniques, New York, NY, USA, 1996. ACM Press.

[WDG02] Bruce Walter, George Drettakis, and Donald P. Greenberg. Enhancing and optimizing the render cache. In P. Debevec and S. Gibson, editors, Eurographics Workshop on Rendering. Springer-Verlag, 2002.

[WDP99] Bruce Walter, George Drettakis, and Steven Parker. Interactive rendering using the render cache. In Dani Lischinski and Gregory Ward Larson, editors, Rendering Techniques 99, Proceedings of the Eurographics Workshop in Granada, Spain, June 21-23, 1999. Springer, 1999.

[WLWD03] Cliff Woolley, David Luebke, Benjamin Watson, and Abhinav Dayal. Interruptible rendering. In SI3D 03: Proceedings of the 2003 symposium on Interactive 3D graphics, New York, NY, USA, 2003. ACM Press.

[WS99] Gregory Ward Larson and Maryann Simmons. The holodeck interactive ray cache, 1999.


More information

Graphics for VEs. Ruth Aylett

Graphics for VEs. Ruth Aylett Graphics for VEs Ruth Aylett Overview VE Software Graphics for VEs The graphics pipeline Projections Lighting Shading VR software Two main types of software used: off-line authoring or modelling packages

More information

Image-Based Rendering. Johns Hopkins Department of Computer Science Course : Rendering Techniques, Professor: Jonathan Cohen

Image-Based Rendering. Johns Hopkins Department of Computer Science Course : Rendering Techniques, Professor: Jonathan Cohen Image-Based Rendering Image-Based Rendering What is it? Still a difficult question to answer Uses images (photometric( info) as key component of model representation What s Good about IBR Model acquisition

More information

Real Time Rendering. CS 563 Advanced Topics in Computer Graphics. Songxiang Gu Jan, 31, 2005

Real Time Rendering. CS 563 Advanced Topics in Computer Graphics. Songxiang Gu Jan, 31, 2005 Real Time Rendering CS 563 Advanced Topics in Computer Graphics Songxiang Gu Jan, 31, 2005 Introduction Polygon based rendering Phong modeling Texture mapping Opengl, Directx Point based rendering VTK

More information

Hybrid Rendering for Collaborative, Immersive Virtual Environments

Hybrid Rendering for Collaborative, Immersive Virtual Environments Hybrid Rendering for Collaborative, Immersive Virtual Environments Stephan Würmlin wuermlin@inf.ethz.ch Outline! Rendering techniques GBR, IBR and HR! From images to models! Novel view generation! Putting

More information

Reading. 18. Projections and Z-buffers. Required: Watt, Section , 6.3, 6.6 (esp. intro and subsections 1, 4, and 8 10), Further reading:

Reading. 18. Projections and Z-buffers. Required: Watt, Section , 6.3, 6.6 (esp. intro and subsections 1, 4, and 8 10), Further reading: Reading Required: Watt, Section 5.2.2 5.2.4, 6.3, 6.6 (esp. intro and subsections 1, 4, and 8 10), Further reading: 18. Projections and Z-buffers Foley, et al, Chapter 5.6 and Chapter 6 David F. Rogers

More information

A Three Dimensional Image Cache for Virtual Reality

A Three Dimensional Image Cache for Virtual Reality A Three Dimensional Image Cache for Virtual Reality Gernot Schaufler and Wolfgang Stürzlinger GUP, Johannes Kepler Universität Linz, Altenbergerstr.69, A- Linz, Austria/Europe schaufler@gup.uni-linz.ac.at

More information

Overview. A real-time shadow approach for an Augmented Reality application using shadow volumes. Augmented Reality.

Overview. A real-time shadow approach for an Augmented Reality application using shadow volumes. Augmented Reality. Overview A real-time shadow approach for an Augmented Reality application using shadow volumes Introduction of Concepts Standard Stenciled Shadow Volumes Method Proposed Approach in AR Application Experimental

More information

Point based Rendering

Point based Rendering Point based Rendering CS535 Daniel Aliaga Current Standards Traditionally, graphics has worked with triangles as the rendering primitive Triangles are really just the lowest common denominator for surfaces

More information

Visible-Surface Detection Methods. Chapter? Intro. to Computer Graphics Spring 2008, Y. G. Shin

Visible-Surface Detection Methods. Chapter? Intro. to Computer Graphics Spring 2008, Y. G. Shin Visible-Surface Detection Methods Chapter? Intro. to Computer Graphics Spring 2008, Y. G. Shin The Visibility Problem [Problem Statement] GIVEN: a set of 3-D surfaces, a projection from 3-D to 2-D screen,

More information

Accelerated Ambient Occlusion Using Spatial Subdivision Structures

Accelerated Ambient Occlusion Using Spatial Subdivision Structures Abstract Ambient Occlusion is a relatively new method that gives global illumination like results. This paper presents a method to accelerate ambient occlusion using the form factor method in Bunnel [2005]

More information

Scene Management. Video Game Technologies 11498: MSc in Computer Science and Engineering 11156: MSc in Game Design and Development

Scene Management. Video Game Technologies 11498: MSc in Computer Science and Engineering 11156: MSc in Game Design and Development Video Game Technologies 11498: MSc in Computer Science and Engineering 11156: MSc in Game Design and Development Chap. 5 Scene Management Overview Scene Management vs Rendering This chapter is about rendering

More information

Range Imaging Through Triangulation. Range Imaging Through Triangulation. Range Imaging Through Triangulation. Range Imaging Through Triangulation

Range Imaging Through Triangulation. Range Imaging Through Triangulation. Range Imaging Through Triangulation. Range Imaging Through Triangulation Obviously, this is a very slow process and not suitable for dynamic scenes. To speed things up, we can use a laser that projects a vertical line of light onto the scene. This laser rotates around its vertical

More information

Lightcuts. Jeff Hui. Advanced Computer Graphics Rensselaer Polytechnic Institute

Lightcuts. Jeff Hui. Advanced Computer Graphics Rensselaer Polytechnic Institute Lightcuts Jeff Hui Advanced Computer Graphics 2010 Rensselaer Polytechnic Institute Fig 1. Lightcuts version on the left and naïve ray tracer on the right. The lightcuts took 433,580,000 clock ticks and

More information

Image-Based Rendering. Image-Based Rendering

Image-Based Rendering. Image-Based Rendering Image-Based Rendering Image-Based Rendering What is it? Still a difficult question to answer Uses images (photometric info) as key component of model representation 1 What s Good about IBR Model acquisition

More information

CHAPTER 1 Graphics Systems and Models 3

CHAPTER 1 Graphics Systems and Models 3 ?????? 1 CHAPTER 1 Graphics Systems and Models 3 1.1 Applications of Computer Graphics 4 1.1.1 Display of Information............. 4 1.1.2 Design.................... 5 1.1.3 Simulation and Animation...........

More information

Shadow Rendering EDA101 Advanced Shading and Rendering

Shadow Rendering EDA101 Advanced Shading and Rendering Shadow Rendering EDA101 Advanced Shading and Rendering 2006 Tomas Akenine-Möller 1 Why, oh why? (1) Shadows provide cues about spatial relationships among objects 2006 Tomas Akenine-Möller 2 Why, oh why?

More information

Image-Based Rendering and Light Fields

Image-Based Rendering and Light Fields CS194-13: Advanced Computer Graphics Lecture #9 Image-Based Rendering University of California Berkeley Image-Based Rendering and Light Fields Lecture #9: Wednesday, September 30th 2009 Lecturer: Ravi

More information

Pipeline Operations. CS 4620 Lecture 10

Pipeline Operations. CS 4620 Lecture 10 Pipeline Operations CS 4620 Lecture 10 2008 Steve Marschner 1 Hidden surface elimination Goal is to figure out which color to make the pixels based on what s in front of what. Hidden surface elimination

More information

Computer Graphics. Lecture 14 Bump-mapping, Global Illumination (1)

Computer Graphics. Lecture 14 Bump-mapping, Global Illumination (1) Computer Graphics Lecture 14 Bump-mapping, Global Illumination (1) Today - Bump mapping - Displacement mapping - Global Illumination Radiosity Bump Mapping - A method to increase the realism of 3D objects

More information

GUERRILLA DEVELOP CONFERENCE JULY 07 BRIGHTON

GUERRILLA DEVELOP CONFERENCE JULY 07 BRIGHTON Deferred Rendering in Killzone 2 Michal Valient Senior Programmer, Guerrilla Talk Outline Forward & Deferred Rendering Overview G-Buffer Layout Shader Creation Deferred Rendering in Detail Rendering Passes

More information

Image-Based Rendering

Image-Based Rendering Image-Based Rendering COS 526, Fall 2016 Thomas Funkhouser Acknowledgments: Dan Aliaga, Marc Levoy, Szymon Rusinkiewicz What is Image-Based Rendering? Definition 1: the use of photographic imagery to overcome

More information

Screen Space Ambient Occlusion TSBK03: Advanced Game Programming

Screen Space Ambient Occlusion TSBK03: Advanced Game Programming Screen Space Ambient Occlusion TSBK03: Advanced Game Programming August Nam-Ki Ek, Oscar Johnson and Ramin Assadi March 5, 2015 This project report discusses our approach of implementing Screen Space Ambient

More information

Visibility and Occlusion Culling

Visibility and Occlusion Culling Visibility and Occlusion Culling CS535 Fall 2014 Daniel G. Aliaga Department of Computer Science Purdue University [some slides based on those of Benjamin Mora] Why? To avoid processing geometry that does

More information

Graphics and Interaction Rendering pipeline & object modelling

Graphics and Interaction Rendering pipeline & object modelling 433-324 Graphics and Interaction Rendering pipeline & object modelling Department of Computer Science and Software Engineering The Lecture outline Introduction to Modelling Polygonal geometry The rendering

More information

So far, we have considered only local models of illumination; they only account for incident light coming directly from the light sources.

So far, we have considered only local models of illumination; they only account for incident light coming directly from the light sources. 11 11.1 Basics So far, we have considered only local models of illumination; they only account for incident light coming directly from the light sources. Global models include incident light that arrives

More information

PowerVR Hardware. Architecture Overview for Developers

PowerVR Hardware. Architecture Overview for Developers Public Imagination Technologies PowerVR Hardware Public. This publication contains proprietary information which is subject to change without notice and is supplied 'as is' without warranty of any kind.

More information

Rendering: Reality. Eye acts as pinhole camera. Photons from light hit objects

Rendering: Reality. Eye acts as pinhole camera. Photons from light hit objects Basic Ray Tracing Rendering: Reality Eye acts as pinhole camera Photons from light hit objects Rendering: Reality Eye acts as pinhole camera Photons from light hit objects Rendering: Reality Eye acts as

More information

3D Rasterization II COS 426

3D Rasterization II COS 426 3D Rasterization II COS 426 3D Rendering Pipeline (for direct illumination) 3D Primitives Modeling Transformation Lighting Viewing Transformation Projection Transformation Clipping Viewport Transformation

More information

A Survey of Light Source Detection Methods

A Survey of Light Source Detection Methods A Survey of Light Source Detection Methods Nathan Funk University of Alberta Mini-Project for CMPUT 603 November 30, 2003 Abstract This paper provides an overview of the most prominent techniques for light

More information

Chapter 7. Conclusions and Future Work

Chapter 7. Conclusions and Future Work Chapter 7 Conclusions and Future Work In this dissertation, we have presented a new way of analyzing a basic building block in computer graphics rendering algorithms the computational interaction between

More information

Morphable 3D-Mosaics: a Hybrid Framework for Photorealistic Walkthroughs of Large Natural Environments

Morphable 3D-Mosaics: a Hybrid Framework for Photorealistic Walkthroughs of Large Natural Environments Morphable 3D-Mosaics: a Hybrid Framework for Photorealistic Walkthroughs of Large Natural Environments Nikos Komodakis and Georgios Tziritas Computer Science Department, University of Crete E-mails: {komod,

More information

Shape as a Perturbation to Projective Mapping

Shape as a Perturbation to Projective Mapping Leonard McMillan and Gary Bishop Department of Computer Science University of North Carolina, Sitterson Hall, Chapel Hill, NC 27599 email: mcmillan@cs.unc.edu gb@cs.unc.edu 1.0 Introduction In the classical

More information

A million pixels, a million polygons. Which is heavier? François X. Sillion. imagis* Grenoble, France

A million pixels, a million polygons. Which is heavier? François X. Sillion. imagis* Grenoble, France A million pixels, a million polygons. Which is heavier? François X. Sillion imagis* Grenoble, France *A joint research project of CNRS, INRIA, INPG and UJF MAGIS Why this question? Evolution of processing

More information

Synthesizing Realistic Facial Expressions from Photographs

Synthesizing Realistic Facial Expressions from Photographs Synthesizing Realistic Facial Expressions from Photographs 1998 F. Pighin, J Hecker, D. Lischinskiy, R. Szeliskiz and D. H. Salesin University of Washington, The Hebrew University Microsoft Research 1

More information

Implementation of a panoramic-based walkthrough system

Implementation of a panoramic-based walkthrough system Implementation of a panoramic-based walkthrough system Abstract A key component in most virtual reality systems is the ability to perform a walkthrough of a virtual environment from different viewing positions

More information

Ray Tracing III. Wen-Chieh (Steve) Lin National Chiao-Tung University

Ray Tracing III. Wen-Chieh (Steve) Lin National Chiao-Tung University Ray Tracing III Wen-Chieh (Steve) Lin National Chiao-Tung University Shirley, Fundamentals of Computer Graphics, Chap 10 Doug James CG slides, I-Chen Lin s CG slides Ray-tracing Review For each pixel,

More information

Efficient View-Dependent Sampling of Visual Hulls

Efficient View-Dependent Sampling of Visual Hulls Efficient View-Dependent Sampling of Visual Hulls Wojciech Matusik Chris Buehler Leonard McMillan Computer Graphics Group MIT Laboratory for Computer Science Cambridge, MA 02141 Abstract In this paper

More information

Lecture 17: Shadows. Projects. Why Shadows? Shadows. Using the Shadow Map. Shadow Maps. Proposals due today. I will mail out comments

Lecture 17: Shadows. Projects. Why Shadows? Shadows. Using the Shadow Map. Shadow Maps. Proposals due today. I will mail out comments Projects Lecture 17: Shadows Proposals due today I will mail out comments Fall 2004 Kavita Bala Computer Science Cornell University Grading HW 1: will email comments asap Why Shadows? Crucial for spatial

More information

Lecture 15: Image-Based Rendering and the Light Field. Kayvon Fatahalian CMU : Graphics and Imaging Architectures (Fall 2011)

Lecture 15: Image-Based Rendering and the Light Field. Kayvon Fatahalian CMU : Graphics and Imaging Architectures (Fall 2011) Lecture 15: Image-Based Rendering and the Light Field Kayvon Fatahalian CMU 15-869: Graphics and Imaging Architectures (Fall 2011) Demo (movie) Royal Palace: Madrid, Spain Image-based rendering (IBR) So

More information

Texture-Mapping Tricks. How Bad Does it Look? We've Seen this Sort of Thing Before. Sampling Texture Maps

Texture-Mapping Tricks. How Bad Does it Look? We've Seen this Sort of Thing Before. Sampling Texture Maps Texture-Mapping Tricks Filtering Textures Textures and Shading Bump Mapping Solid Textures How Bad Does it Look? Let's take a look at what oversampling looks like: Click and drag the texture to rotate

More information

High-quality Shadows with Improved Paraboloid Mapping

High-quality Shadows with Improved Paraboloid Mapping High-quality Shadows with Improved Paraboloid Mapping Juraj Vanek, Jan Navrátil, Adam Herout, and Pavel Zemčík Brno University of Technology, Faculty of Information Technology, Czech Republic http://www.fit.vutbr.cz

More information

calibrated coordinates Linear transformation pixel coordinates

calibrated coordinates Linear transformation pixel coordinates 1 calibrated coordinates Linear transformation pixel coordinates 2 Calibration with a rig Uncalibrated epipolar geometry Ambiguities in image formation Stratified reconstruction Autocalibration with partial

More information

Orthogonal Projection Matrices. Angel and Shreiner: Interactive Computer Graphics 7E Addison-Wesley 2015

Orthogonal Projection Matrices. Angel and Shreiner: Interactive Computer Graphics 7E Addison-Wesley 2015 Orthogonal Projection Matrices 1 Objectives Derive the projection matrices used for standard orthogonal projections Introduce oblique projections Introduce projection normalization 2 Normalization Rather

More information

Visibility. Tom Funkhouser COS 526, Fall Slides mostly by Frédo Durand

Visibility. Tom Funkhouser COS 526, Fall Slides mostly by Frédo Durand Visibility Tom Funkhouser COS 526, Fall 2016 Slides mostly by Frédo Durand Visibility Compute which part of scene can be seen Visibility Compute which part of scene can be seen (i.e., line segment from

More information

Mach band effect. The Mach band effect increases the visual unpleasant representation of curved surface using flat shading.

Mach band effect. The Mach band effect increases the visual unpleasant representation of curved surface using flat shading. Mach band effect The Mach band effect increases the visual unpleasant representation of curved surface using flat shading. A B 320322: Graphics and Visualization 456 Mach band effect The Mach band effect

More information

Pipeline Operations. CS 4620 Lecture 14

Pipeline Operations. CS 4620 Lecture 14 Pipeline Operations CS 4620 Lecture 14 2014 Steve Marschner 1 Pipeline you are here APPLICATION COMMAND STREAM 3D transformations; shading VERTEX PROCESSING TRANSFORMED GEOMETRY conversion of primitives

More information

Rendering. Converting a 3D scene to a 2D image. Camera. Light. Rendering. View Plane

Rendering. Converting a 3D scene to a 2D image. Camera. Light. Rendering. View Plane Rendering Pipeline Rendering Converting a 3D scene to a 2D image Rendering Light Camera 3D Model View Plane Rendering Converting a 3D scene to a 2D image Basic rendering tasks: Modeling: creating the world

More information

An Efficient Approach for Emphasizing Regions of Interest in Ray-Casting based Volume Rendering

An Efficient Approach for Emphasizing Regions of Interest in Ray-Casting based Volume Rendering An Efficient Approach for Emphasizing Regions of Interest in Ray-Casting based Volume Rendering T. Ropinski, F. Steinicke, K. Hinrichs Institut für Informatik, Westfälische Wilhelms-Universität Münster

More information

Consider a partially transparent object that is illuminated with two lights, one visible from each side of the object. Start with a ray from the eye

Consider a partially transparent object that is illuminated with two lights, one visible from each side of the object. Start with a ray from the eye Ray Tracing What was the rendering equation? Motivate & list the terms. Relate the rendering equation to forward ray tracing. Why is forward ray tracing not good for image formation? What is the difference

More information

Pipeline Operations. CS 4620 Lecture Steve Marschner. Cornell CS4620 Spring 2018 Lecture 11

Pipeline Operations. CS 4620 Lecture Steve Marschner. Cornell CS4620 Spring 2018 Lecture 11 Pipeline Operations CS 4620 Lecture 11 1 Pipeline you are here APPLICATION COMMAND STREAM 3D transformations; shading VERTEX PROCESSING TRANSFORMED GEOMETRY conversion of primitives to pixels RASTERIZATION

More information

Multi-View Stereo for Static and Dynamic Scenes

Multi-View Stereo for Static and Dynamic Scenes Multi-View Stereo for Static and Dynamic Scenes Wolfgang Burgard Jan 6, 2010 Main references Yasutaka Furukawa and Jean Ponce, Accurate, Dense and Robust Multi-View Stereopsis, 2007 C.L. Zitnick, S.B.

More information

Chapter 4. Chapter 4. Computer Graphics 2006/2007 Chapter 4. Introduction to 3D 1

Chapter 4. Chapter 4. Computer Graphics 2006/2007 Chapter 4. Introduction to 3D 1 Chapter 4 Chapter 4 Chapter 4. Introduction to 3D graphics 4.1 Scene traversal 4.2 Modeling transformation 4.3 Viewing transformation 4.4 Clipping 4.5 Hidden faces removal 4.6 Projection 4.7 Lighting 4.8

More information

Today. Global illumination. Shading. Interactive applications. Rendering pipeline. Computergrafik. Shading Introduction Local shading models

Today. Global illumination. Shading. Interactive applications. Rendering pipeline. Computergrafik. Shading Introduction Local shading models Computergrafik Matthias Zwicker Universität Bern Herbst 2009 Today Introduction Local shading models Light sources strategies Compute interaction of light with surfaces Requires simulation of physics Global

More information

EE795: Computer Vision and Intelligent Systems

EE795: Computer Vision and Intelligent Systems EE795: Computer Vision and Intelligent Systems Spring 2012 TTh 17:30-18:45 FDH 204 Lecture 14 130307 http://www.ee.unlv.edu/~b1morris/ecg795/ 2 Outline Review Stereo Dense Motion Estimation Translational

More information

Shadows for Many Lights sounds like it might mean something, but In fact it can mean very different things, that require very different solutions.

Shadows for Many Lights sounds like it might mean something, but In fact it can mean very different things, that require very different solutions. 1 2 Shadows for Many Lights sounds like it might mean something, but In fact it can mean very different things, that require very different solutions. 3 We aim for something like the numbers of lights

More information

Simple Silhouettes for Complex Surfaces

Simple Silhouettes for Complex Surfaces Eurographics Symposium on Geometry Processing(2003) L. Kobbelt, P. Schröder, H. Hoppe (Editors) Simple Silhouettes for Complex Surfaces D. Kirsanov, P. V. Sander, and S. J. Gortler Harvard University Abstract

More information

Page 1. Area-Subdivision Algorithms z-buffer Algorithm List Priority Algorithms BSP (Binary Space Partitioning Tree) Scan-line Algorithms

Page 1. Area-Subdivision Algorithms z-buffer Algorithm List Priority Algorithms BSP (Binary Space Partitioning Tree) Scan-line Algorithms Visible Surface Determination Visibility Culling Area-Subdivision Algorithms z-buffer Algorithm List Priority Algorithms BSP (Binary Space Partitioning Tree) Scan-line Algorithms Divide-and-conquer strategy:

More information

Recent Advances in Monte Carlo Offline Rendering

Recent Advances in Monte Carlo Offline Rendering CS294-13: Special Topics Lecture #6 Advanced Computer Graphics University of California, Berkeley Monday, 21 September 2009 Recent Advances in Monte Carlo Offline Rendering Lecture #6: Monday, 21 September

More information

Topic 12: Texture Mapping. Motivation Sources of texture Texture coordinates Bump mapping, mip-mapping & env mapping

Topic 12: Texture Mapping. Motivation Sources of texture Texture coordinates Bump mapping, mip-mapping & env mapping Topic 12: Texture Mapping Motivation Sources of texture Texture coordinates Bump mapping, mip-mapping & env mapping Texture sources: Photographs Texture sources: Procedural Texture sources: Solid textures

More information

CS 498 VR. Lecture 20-4/11/18. go.illinois.edu/vrlect20

CS 498 VR. Lecture 20-4/11/18. go.illinois.edu/vrlect20 CS 498 VR Lecture 20-4/11/18 go.illinois.edu/vrlect20 Review from last lecture Texture, Normal mapping Three types of optical distortion? How does texture mipmapping work? Improving Latency and Frame Rates

More information

Triangle Rasterization

Triangle Rasterization Triangle Rasterization Computer Graphics COMP 770 (236) Spring 2007 Instructor: Brandon Lloyd 2/07/07 1 From last time Lines and planes Culling View frustum culling Back-face culling Occlusion culling

More information

Advanced Ray Tracing

Advanced Ray Tracing Advanced Ray Tracing Thanks to Fredo Durand and Barb Cutler The Ray Tree Ni surface normal Ri reflected ray Li shadow ray Ti transmitted (refracted) ray 51 MIT EECS 6.837, Cutler and Durand 1 Ray Tree

More information

Real-time Generation and Presentation of View-dependent Binocular Stereo Images Using a Sequence of Omnidirectional Images

Real-time Generation and Presentation of View-dependent Binocular Stereo Images Using a Sequence of Omnidirectional Images Real-time Generation and Presentation of View-dependent Binocular Stereo Images Using a Sequence of Omnidirectional Images Abstract This paper presents a new method to generate and present arbitrarily

More information

Texture Mapping II. Light maps Environment Maps Projective Textures Bump Maps Displacement Maps Solid Textures Mipmaps Shadows 1. 7.

Texture Mapping II. Light maps Environment Maps Projective Textures Bump Maps Displacement Maps Solid Textures Mipmaps Shadows 1. 7. Texture Mapping II Light maps Environment Maps Projective Textures Bump Maps Displacement Maps Solid Textures Mipmaps Shadows 1 Light Maps Simulates the effect of a local light source + = Can be pre-computed

More information

Towards a Perceptual Method of Blending for Image-Based Models

Towards a Perceptual Method of Blending for Image-Based Models Towards a Perceptual Method of Blending for Image-Based Models Gordon Watson, Patrick O Brien and Mark Wright Edinburgh Virtual Environment Centre University of Edinburgh JCMB, Mayfield Road, Edinburgh

More information

Topic 11: Texture Mapping 11/13/2017. Texture sources: Solid textures. Texture sources: Synthesized

Topic 11: Texture Mapping 11/13/2017. Texture sources: Solid textures. Texture sources: Synthesized Topic 11: Texture Mapping Motivation Sources of texture Texture coordinates Bump mapping, mip mapping & env mapping Texture sources: Photographs Texture sources: Procedural Texture sources: Solid textures

More information

Chapter 11 Global Illumination. Part 1 Ray Tracing. Reading: Angel s Interactive Computer Graphics (6 th ed.) Sections 11.1, 11.2, 11.

Chapter 11 Global Illumination. Part 1 Ray Tracing. Reading: Angel s Interactive Computer Graphics (6 th ed.) Sections 11.1, 11.2, 11. Chapter 11 Global Illumination Part 1 Ray Tracing Reading: Angel s Interactive Computer Graphics (6 th ed.) Sections 11.1, 11.2, 11.3 CG(U), Chap.11 Part 1:Ray Tracing 1 Can pipeline graphics renders images

More information

Advanced Computer Graphics

Advanced Computer Graphics Advanced Computer Graphics Lecture 2: Modeling (1): Polygon Meshes Bernhard Jung TU-BAF, Summer 2007 Overview Computer Graphics Icon: Utah teapot Polygon Meshes Subdivision Polygon Mesh Optimization high-level:

More information

Interactive Rendering using the Render Cache

Interactive Rendering using the Render Cache Author manuscript, published in "Rendering techniques '99 (Proceedings of the 10th Eurographics Workshop on Rendering) 10 (1999) 235--246" Interactive Rendering using the Render Cache Bruce Waltery, George

More information

Adaptive Supersampling Using Machine Learning Techniques

Adaptive Supersampling Using Machine Learning Techniques Adaptive Supersampling Using Machine Learning Techniques Kevin Winner winnerk1@umbc.edu Abstract Previous work in adaptive supersampling methods have utilized algorithmic approaches to analyze properties

More information

This work is about a new method for generating diffusion curve style images. Although this topic is dealing with non-photorealistic rendering, as you

This work is about a new method for generating diffusion curve style images. Although this topic is dealing with non-photorealistic rendering, as you This work is about a new method for generating diffusion curve style images. Although this topic is dealing with non-photorealistic rendering, as you will see our underlying solution is based on two-dimensional

More information

Shadows. COMP 575/770 Spring 2013

Shadows. COMP 575/770 Spring 2013 Shadows COMP 575/770 Spring 2013 Shadows in Ray Tracing Shadows are important for realism Basic idea: figure out whether a point on an object is illuminated by a light source Easy for ray tracers Just

More information

Computer Graphics. Shadows

Computer Graphics. Shadows Computer Graphics Lecture 10 Shadows Taku Komura Today Shadows Overview Projective shadows Shadow texture Shadow volume Shadow map Soft shadows Why Shadows? Shadows tell us about the relative locations

More information