1 State of The Art for Volume Rendering


Technical Report
Jianlong Zhou and Klaus D. Tönnies
Institute for Simulation and Graphics, University of Magdeburg, Magdeburg, Germany

The interpretation of 3D scalar fields is considerably difficult because of their intrinsic complexity. In the context of this thesis the term volume rendering refers to the visualization of static 3D scalar fields, especially 3D imaging in medicine. A variety of different approaches have been developed for volume rendering in the past decades. This chapter tries to assess different volume rendering algorithms and to draw an overall picture of significant concepts and ideas. It first examines existing volume rendering algorithms and compares the advantages and disadvantages of each. By investigating the current literature and systems, we identify the currently active research topics for volume rendering. Then different approaches are proposed to address the problems that still exist in volume data information exploration.

1.1 Basics for Volume Rendering in Medicine

Volume Data

Traditionally, computer graphics represented a model as a set of vectors which were displayed on vector graphic displays. With the introduction of raster displays, polygons became the basic rendering primitive: the polygons of a model are rasterized into pixels, which constitute the contents of the frame buffer. Compared to surface data, which solely describes the outer shell of an object, volume data is used to describe the internal structures of a solid object. A volume is a regular 3D array of data values, called voxels. The three-dimensional array can also be seen as a stack of two-dimensional arrays of data values, and each of these two-dimensional arrays as an image (or slice), where each of the data values represents a pixel (Figure 1.1). This alternative view is motivated by the slice-oriented, traditional way physicians look at a volumetric dataset.
It is denoted by a matrix V ∈ Γ^(X×Y×Z) with X rows, Y columns and Z slices, which represents a discrete grid of volume elements (voxels) v ∈ {1,…,X} × {1,…,Y} × {1,…,Z}. For each

voxel we denote by I(v) : ℕ³ → Γ its gray value, which, for example, reflects the X-ray intensity in CT volume data. Of course, the voxel value can also be a vector representing object properties in some specific fields (e.g. computational fluid dynamics). Each voxel is characterized by its position in the 3D grid. Medical volume data obtained from MRI (Magnetic Resonance Imaging) and CT (Computed Tomography) scanners is typically anisotropic, with an equal sampling density in the x and y directions but a coarser density along the z direction. Such data sets typically comprise 100 slices or more, which requires special care concerning the efficiency of the developed algorithms. These data sets V are the basis for our assessment of volume rendering algorithms. Figure 1.1: 3D volume data representation.

Optical Model for Volume Rendering

The basic goal of volume rendering is to find a good approximation of the low-albedo volume rendering optical model that expresses the relation between the volume intensity and opacity function and the intensity in the image plane. In this section we describe a typical physical model on which volume rendering algorithms are based; our intention is to explain their theoretical foundations. Physical optical models are provided by radiative transport theory, which views a volume as a cloud populated with particles. The transport of light is studied by considering the various phenomena at work: light from a source can either be scattered or absorbed by particles, and there might be a net increase when particles emit light themselves. Models which take all these phenomena into account tend to be very complicated; in practice much simpler local models are used. Most of the standard volume rendering algorithms therefore approximate a volume rendering integral (VRI) equation (Max 1995; Meissner et al.
2000) by:

$$I_\lambda(x, r) = \int_0^L C_\lambda(s)\, \mu(s)\, e^{-\int_0^s \mu(t)\, dt}\, ds \qquad (1.1)$$

where the amount of light of wavelength λ coming from ray direction r that is received at location x on the image plane is I_λ, L is the length of the ray r, µ is the density of the volume particles, which receive light from all surrounding light sources and reflect this light towards the observer according to their specular and diffuse material properties, and C_λ is the light of wavelength λ reflected and/or emitted at location s in the direction of r. In the general case, Equation 1.1 cannot be computed analytically. Hence, in practice, most volume rendering algorithms obtain a numeric solution of Equation 1.1 by employing a zero-order quadrature of the inner integral along with a first-order approximation of the exponential. The outer integral is also solved by a finite sum of uniform samples. We then get the familiar compositing equation (Levoy 1990):

$$I_\lambda(x, r) = \sum_{k=1}^{M} C_\lambda(s_k)\, \alpha(s_k) \prod_{i=1}^{k-1} \bigl(1 - \alpha(s_i)\bigr) \qquad (1.2)$$

where α(s_k) are the opacity samples along the ray and C_λ(s_k) are the local color values derived from the illumination model. This expression is referred to as the discretized VRI (DVRI), where opacity α = 1.0 − transparency. Equation 1.2 represents a common theoretical framework for all volume rendering algorithms. All algorithms obtain colors and opacities at discrete intervals along a linear path and composite them in front-to-back order. However, the algorithms can be distinguished by the process in which the colors C_λ(s_k) and opacities α(s_k) are calculated in each interval k, and by how the interval width Δs is chosen (Meissner et al. 2000). C and α are now transfer functions, commonly implemented as lookup tables. The raw volume densities are used to index the transfer functions for color and opacity; thus the fine details of the volume data can be expressed in the final image by using different transfer functions.
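As a concrete illustration, Equation 1.2 can be evaluated per ray with a short loop. The lookup tables, sample values and termination threshold below are illustrative assumptions, not part of any particular published algorithm:

```python
def composite_ray(densities, color_tf, opacity_tf, threshold=0.95):
    """Front-to-back evaluation of the discretized VRI for one ray.

    densities: raw volume samples s_1..s_M along the ray
    color_tf, opacity_tf: transfer functions as lookup tables
    """
    color, alpha = 0.0, 0.0
    for d in densities:
        c = color_tf[d]          # C(s_k) from the color transfer function
        a = opacity_tf[d]        # alpha(s_k) from the opacity transfer function
        color += (1.0 - alpha) * c * a   # weight by accumulated transparency
        alpha += (1.0 - alpha) * a
        if alpha >= threshold:   # early ray termination
            break
    return color, alpha

# a toy 3-entry transfer function: density 2 is bright and nearly opaque
color_tf, opacity_tf = [0.0, 0.2, 1.0], [0.0, 0.1, 0.9]
pixel, _ = composite_ray([0, 1, 2], color_tf, opacity_tf)
```

Swapping the lookup tables changes the rendition without touching the raw densities, which is exactly the flexibility the transfer-function formulation provides.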
1.2 Direct Volume Rendering

In general, volume rendering approaches are divided into two categories. One is the indirect volume rendering approaches; surface rendering belongs to this category. The other is direct volume rendering (DVR). Volume rendering methods that do not explicitly extract geometric structures from the volume data, but render the volume based on a fuzzy segmentation through transfer functions, are called direct volume rendering. Ray casting is the typical direct volume rendering method. In addition, other techniques have been proposed for volume rendering. Frequency domain volume rendering transforms the original data set to the frequency domain and then renders it. Recently, nonphotorealistic rendering has been used for volume rendering, which definitely extends the possibilities for investigating volume data. In this section, we show how practical volume rendering algorithms are obtained and explain the various stages of the pipeline in some detail.

Ray Casting

Of all volume rendering algorithms, ray casting has the largest body of publications over the years. The basic goal of ray casting is to make the best use of the three-dimensional data and not attempt to impose any geometric structure on it. It solves one of the most important limitations of surface extraction techniques, namely the way in which they display a projection of a thin shell in the acquisition space. Surface extraction techniques fail to take into account that, particularly in medical imaging, data may originate from fluid and other materials which may be partially transparent and should be modeled as such. Ray casting does not suffer from this limitation. Figure 1.2: A ray R is cast into the voxels of the 3D volume data. Currently, most volume rendering that uses ray casting is based on the Blinn/Kajiya model (see Figure 1.2) and accomplished with Equation 1.2. In the image-space oriented ray casting approaches, rays are cast from the viewpoint through the view plane into the volume. Along their way through the volume, data are defined at the corners of each cell, and samples are usually calculated at equal distances between two sample points. A sample is computed by trilinear interpolation within a cell of eight voxels. Thereafter, it is classified according to the transfer functions. If the sample contributes to the ray, the normal gradient is computed based on a trilinear interpolation of the normalized central differences at the eight voxels of the cell which contains the sample point. Finally, the sample is composited with the previous samples along the ray path. The fastest implementations of this method are achieved by combining several common computer graphics techniques, like early ray termination, octree decomposition and adaptive sampling (Ogata et al. 1998).
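The per-sample trilinear interpolation described above can be sketched as follows; the corner-indexing convention is an illustrative assumption, not taken from a particular implementation:

```python
def trilinear(corners, fx, fy, fz):
    """Interpolate within a cell of eight voxels.

    corners[i][j][k] is the voxel value at corner offset (i, j, k);
    (fx, fy, fz) is the sample's fractional position inside the cell.
    """
    # interpolate along x on the four x-parallel edges of the cell
    c00 = corners[0][0][0] * (1 - fx) + corners[1][0][0] * fx
    c10 = corners[0][1][0] * (1 - fx) + corners[1][1][0] * fx
    c01 = corners[0][0][1] * (1 - fx) + corners[1][0][1] * fx
    c11 = corners[0][1][1] * (1 - fx) + corners[1][1][1] * fx
    # then along y, then along z
    c0 = c00 * (1 - fy) + c10 * fy
    c1 = c01 * (1 - fy) + c11 * fy
    return c0 * (1 - fz) + c1 * fz
```

At the cell center (fx = fy = fz = 0.5) the result is the mean of the eight corner values; the same weights are applied to the central-difference gradients when a contributing sample is shaded.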
Early ray termination is a technique that can be used if the rays are traversed front-to-back. It simply ends the ray traversal after the accumulated opacity for that ray is above a certain threshold. Octree decomposition is

a hierarchical spatial enumeration technique that permits fast traversal of empty space, thus saving substantial time in traversing the volume and calculating trilinear interpolations. Adaptive sampling tries to minimize work by taking advantage of the homogeneous parts of the volume: for each square in the image, one traverses the rays going out of the vertices of its bounding box and recursively subdivides the square into smaller ones if the difference in the image pixel values is larger than a threshold. Although these acceleration techniques obtain a significant speedup in rendering times, ray casting is far from being an interactive technique. Nevertheless, it generates images of the highest quality. Figure 1.3: Ray casting of a volume dataset with parallel projection and uniform sampling.

Splatting

Splatting was proposed by Westover (Westover 1990). In splatting algorithms the voxels are represented by 3D reconstruction kernels, commonly Gaussian kernels with amplitudes scaled by the voxel values. Integration of these kernels along the line of sight results in building blocks called footprints. A mapping to the image plane by superposition of the footprints weighted by the voxel values forms the image in the view plane. Figure 1.4 shows the principle of splatting. High speed is obtained by a footprint lookup table of the pre-computed, radially symmetric kernel function. The procedure is called splatting because it can be likened to throwing a snowball (voxel) at a glass plate: the snow contribution at the center of impact will be high and will drop off further away from the center of impact. Although the splatting technique operates differently from ray casting, it may produce high quality images similar to those produced by the other algorithms. A major advantage of splatting is that only the voxels relevant to the image must be projected and rasterized.
This can tremendously reduce the volume data that needs to be both processed and stored (Mueller et al. 1999).
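A toy sketch of the footprint rasterization for one voxel follows. The kernel radius, σ and image size are illustrative assumptions; real splatters rasterize from a precomputed footprint table rather than evaluating the Gaussian per pixel:

```python
import math

def splat(image, px, py, value, radius=2, sigma=1.0):
    """Accumulate one voxel's radially symmetric footprint into `image`."""
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            x, y = px + dx, py + dy
            if 0 <= x < len(image[0]) and 0 <= y < len(image):
                weight = math.exp(-(dx * dx + dy * dy) / (2.0 * sigma ** 2))
                image[y][x] += value * weight   # "snow" falls off from the center

image = [[0.0] * 8 for _ in range(8)]
splat(image, 4, 4, 1.0)   # one snowball thrown at the glass plate
```

Only voxels whose footprint overlaps the image need to be visited, which is the source of the data-reduction advantage noted above.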

Figure 1.4: Splats arranged on view-plane aligned voxel slabs are projected into a sheet buffer, which is in turn composited front to back into the final image on the view plane. There are some disadvantages to the splatting approach: the use of pre-integrated kernels introduces inaccuracies into the compositing process, since the 3D reconstruction kernel is composited as a whole and not piecewise, as part of an interpolated sample along a viewing ray. Due to this, the colors of hidden background objects may bleed into the final image, which causes severe brightness variations. Mueller (Mueller and Crawfis 1998) modified the splatting approach such that slabs of the voxel kernels are processed in an image-aligned fashion and projected into sheet buffers, which in turn are composited into the final image on the view plane. Other modifications include early splat elimination for the removal of non-contributing splats from rasterization (Mueller et al. 1999). Here, the splats are added within consecutive cache sheets, represented by the volume slices most parallel to the image plane. The sheets are subsequently composited together in back-to-front (or front-to-back) order. Problems with aliasing artifacts were a topic of advanced research, as was the improvement of the visual quality of the images.

Shear-Warp

Shear-Warp was proposed by Lacroute (Lacroute and Levoy 1994) and has been recognized as the fastest software volume renderer to date. It achieves this by performing a run-length encoding (RLE) compression of the volume to allow fast streaming through the volume data; rendering uses a simultaneous object-order and image-order traversal to rapidly splat entire slices of the volume onto the image plane. The shear-warp factorization algorithm operates as follows (Figure 1.5): 1) First, transform the volume data to sheared object space by translating and resampling each slice according to S.
P specifies which of the three possible slicing directions to use. 2) Composite the resampled slices together in front to back order using the over operator (Porter and Duff 1984). This step projects the volume into a 2D intermediate image in sheared

object space. 3) Transform the intermediate image to image space by warping according to M_warp in Equation 1.3. This second resampling step produces the correct final image.

$$M_{\mathrm{view}} = P \cdot S \cdot M_{\mathrm{warp}} \qquad (1.3)$$

Figure 1.5: The shear-warp algorithm includes three conceptual steps: shear and resample the volume slices, project resampled voxel scanlines onto intermediate image scanlines, and warp the intermediate image into the final image (Lacroute and Levoy 1994). Figure 1.6: A volume is transformed to sheared object space for a parallel projection by translating each slice. The projection in sheared object space is simple and efficient (Lacroute and Levoy 1994). The projection follows a synchronized scanline order that is aligned to both the volume data and the image. The projection onto the scanline requires a resampling of the data, which is done by a 2D resampling filter for reasons of efficiency. Similar to the cell projection algorithm mentioned before, pixels marked as opaque are not processed further. The final image is obtained by warping and resampling the intermediate image. The advantage of this technique is that scanlines of the volume data and scanlines of the intermediate image are always aligned and that the necessary projections are straightforward and quite efficient. Furthermore, the algorithm gains speed and efficiency since the filter operations are performed in 2D. Shear-warp rendering is capable of generating images of reasonably sized data volumes at interactive frame rates; nevertheless, a good deal of image quality is sacrificed. Several acceleration techniques have been developed for shear-warp: parallel rendering approaches (Lacroute 1996) or optimized sparse data representations (Csebfalvi 1999). Additionally, techniques for the hybrid rendering of volume data and polygonal structures based on the shear-warp factorization were introduced (Zakaria et al. 1999). The concept of shear-warp, however, has some major drawbacks. First, the choice of resampling filters for both the scanline resampling and the image resampling is limited to 2D; therefore artifacts become easily visible. Furthermore, efficient access to the data from arbitrary directions requires maintaining three persistent copies of the volume, each compressed along a different axis. Finally, the algorithm exhibits slower performance if a significant number of voxels have low opacity values.

Maximum Intensity Projection

Maximum intensity projection (MIP) (Sakas et al. 1995; Sato et al. 1998; Mroz et al. 2000) is a volumetric visualization method capable of improving the perception of the location, shape, and topology of objects. The main idea is simple: the intensity assigned to a pixel in the rendition is the maximum of the scene intensity encountered along a viewing ray within the scene domain. MIP is mainly used to visualize high-intensity structures within volumetric data. It is especially suitable for depicting blood vessels in medical imaging applications. Usually, MIP contains no shading information; depth and occlusion information is lost. Structures with a higher data value lying behind a lower-valued object appear to be in front of it.
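Per ray, the MIP operator reduces to a maximum over the samples. The sketch below (with illustrative sample values) also shows why depth order is lost: the maximum discards where along the ray the bright value occurred.

```python
def mip(samples):
    """Pixel intensity is the maximum sample along the viewing ray."""
    return max(samples)

# Two rays whose bright structure lies at different depths yield the
# same pixel value, so the rendition cannot convey which is in front:
front = [250, 30, 10]   # bright voxel close to the viewer
back  = [10, 30, 250]   # bright voxel far from the viewer
assert mip(front) == mip(back) == 250
```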
This leads to irregular depth positions of the projected voxels. The most common way to ease the interpretation of such images is to animate or interactively change the viewpoint while viewing. Recent research on MIP therefore aims at interactivity and high image quality (Mroz et al. 2000; Sakas et al. 1995). Sato (Sato et al. 1998) developed a local maximum intensity projection (LMIP) algorithm, an extended version of MIP. In this approach, the rendered image is created by tracing an optical ray traversing the 3D data from the viewpoint in the viewing direction, and then selecting the first local maximum encountered that is larger than a pre-selected threshold, instead of the global maximum. LMIP can depict spatial information to some degree. Because of the limitations of MIP, it is only used in limited areas.

3D Texture Mapping Volume Rendering

Texture mapping is a widely supported technique in traditional 3D graphics to increase the realism of synthetic images. The use of texture mapping for volume rendering was

popularized by Cabral (Cabral et al. 1994). Figure 1.7 shows the principle of 3D texture mapping volume rendering. The basic idea is to interpret the voxel array as a 3D texture defined over [0, 1]³ and to understand 3D texture mapping as the trilinear interpolation of the volume data set at an arbitrary point within this domain. At the core of the algorithm, multiple planes parallel to the viewing plane are clipped against the parametric texture domain and sent to the geometry processing unit. The hardware is then used for interpolating the 3D texture coordinates issued at the polygon vertices and for reconstructing the texture samples by trilinear interpolation within the volume. Finally, pixel values are blended appropriately into the frame buffer in order to approximate the continuous volume rendering integral (Westermann and Ertl 1998). The sampling of the texture slices from the volume is either trilinear, if 3D texture mapping hardware is available, or bilinear, if only 2D textures are supported. Shading can be achieved by the computation of a pre-shaded color volume (van Gelder and Kim 1996), or by using multi-pass methods to visualize isosurfaces (Westermann and Ertl 1998) or transparent volumes (Meissner et al. 1999). Texture-based multiresolution volume rendering is used for interactive volume rendering of very large data sets (LaMar et al. 1999). Multitexturing and pre-integrated volume rendering have been used to improve image quality and rendering performance (Rezk-Salama et al. 2000; Engel et al. 2001). Figure 1.7: Volume rendering by 3D texture slicing (Westermann and Ertl 1998): 3D texture slices are generated from the volume, perpendicular to the viewing direction (left); the textures are mapped onto the screen (middle); blended textures of previous slices (right).
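The frame-buffer blending that the slicing approach relies on can be sketched for a single pixel. The slice order and the (color, alpha) pairs below are illustrative assumptions:

```python
def blend_slices_back_to_front(slices):
    """Blend textured slices with the 'over' operator, back to front.

    `slices` holds one (color, alpha) pair per slice for a single pixel,
    ordered from the farthest slice to the nearest.
    """
    color = 0.0
    for c, a in slices:
        color = c * a + color * (1.0 - a)   # over operator in the frame buffer
    return color

# an opaque near slice completely hides the slices behind it
assert blend_slices_back_to_front([(1.0, 1.0), (0.3, 1.0)]) == 0.3
```

In hardware this loop is performed per fragment by the blending stage, which is why the limited bit resolution of the frame buffer directly constrains the achievable quality.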
The drawback of 3D texture mapping is that larger volumes require swapping volume bricks in and out of the limited texture memory. The factor that constrains quality in 3D texture mapping hardware approaches is that the frame buffer has limited bit resolution (8-12 bits). This is far less than the floating-point precision that can be obtained with software algorithms. The latest generation of graphics boards (Radeon 9700 and NVIDIA GeForce FX) does support 16-bit textures, but only with nearest-neighbour interpolation. So currently there is no easy way to directly use data with more than 8 bits. (This might change with forthcoming hardware.) Real-time graphics hardware is becoming programmable. This hardware exposes a different kind of hardware abstraction, namely programmable vertex/fragment processing through OpenGL extensions such as NVIDIA's NV_vertex_program and NV_register_combiners (Kilgard 2003). NVIDIA's register combiner architecture is register-based: a processing unit named register combiner operates on a set of registers and constants to compute new values, which are then written back into registers. It operates on one fragment at a time. The architecture allows the number of registers and textures to vary with the degree of multitexture support and also allows the number of register combiner units to vary. It changes the traditional texture compositing pipeline through multitexturing and multi-stage rasterization. It provides fully customizable per-pixel engines; most useful for our case is that the RGB and alpha computations of a texture are separate. This flexibility gives us the freedom to modulate RGB or alpha separately without affecting the other. Previous research (Rezk-Salama et al. 2000; Rezk-Salama 2001; Engel et al. 2001; Kniss et al. 2001; Lum and Ma 2002) shows the powerful abilities of programmable hardware for improving volume rendering performance.

1.3 Surface Rendering

Within the family of surface rendering approaches, the earliest method was the polygon-oriented technique, which appeared with triangulation algorithms using planar contours. Another method for surface rendering is the transfer-function-oriented technique (Levoy 1988). This method does not explicitly extract geometrical objects from the volume data and uses a transfer function to render surfaces. These methods have been mostly applied to the 3D representation of anatomical structures from parallel slices. Using a polygon approach to represent surfaces, some work has also been done to display 3D structures from data other than parallel slices, such as the display of the vascular system from angiograms. The voxel-oriented approaches for surface rendering have improved the accuracy and the reliability of 3D representations.
Within these approaches, a surface tracking algorithm is executed to extract the surface components (voxels or faces of voxels) in order to build up a surface. The marching cubes algorithm (Lorensen and Cline 1987) is such an algorithm; it performs surface tracking to construct polygonal data based on the surface voxel neighborhood topology. Surface rendering algorithms classify each voxel within a volume as belonging or not belonging to the object being rendered, usually by comparing each voxel to one or more user-selected threshold values which define the range of intensity values that represent the material of interest. Identifying all voxels belonging to an object effectively describes the object's surface, which the computer typically models as a collection of polygons and displays with surface shading. This technique produces surfaces in the domain of the scalar quantity on which the scalar quantity has the same value, the so-called isosurface value. The surfaces can be colored according to the isosurface value, or they can be colored according to another scalar field using the texture technique. The latter case allows searching for correlations between different scalar quantities. There are different methods to generate the surfaces from a discrete set of data points. All methods use interpolation to construct a continuous function. The correctness of the

generated surfaces depends on how well the constructed continuous function matches the underlying continuous function representing the discrete data set.

1.4 Fourier Volume Rendering

The frequency domain volume rendering approach is based on the Fourier projection-slice theorem and provides high frame rates for the computation of two-dimensional intensity projections of volumetric data sets. It allows projections of volume data to be generated in O(n² log n) time for a volume of size n³. In general, this technique operates on the three-dimensional frequency domain representation of the data set. The Fourier volume rendering approach consists of the following steps: 1) Preprocessing: compute the 3D discrete Fourier transform of the volume data by FFT. 2) Actual volume rendering: for each direction θ perpendicular to the image plane, first interpolate the Fourier-transformed data and resample it on a regular grid of points in the slice plane orthogonal to θ (slice extraction), then compute the 2D inverse Fourier transform (IFFT); this yields an X-ray-like image (Westenberg and Roerdink 2000). Figure 1.8: The main steps of frequency domain volume rendering: instead of directly projecting the volume data, the data is transformed to the frequency domain; a slice is then extracted and inverse-transformed (Totsuka and Levoy 1993). The frequency domain rendering concept allows renderings of X-ray images or linearly depth-cued and directionally diffuse-shaded volumes (Totsuka and Levoy 1993; Lippert et al. 1997), but no occlusion and no perspective projection. The speed improvements of these methods result from the low complexity of the inverse fast Fourier transform, but they suffer from the costs of the two-dimensional resampling. Another big disadvantage of frequency domain volume rendering is its high memory demand.
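The projection-slice theorem underlying these steps can be verified numerically in a few lines (shown in 2D for brevity; the 3D case is analogous, with a slice plane instead of a slice line):

```python
import numpy as np

rng = np.random.default_rng(0)
vol = rng.random((16, 16))            # toy 2D "volume"

# spatial-domain route: integrate along one axis (an X-ray-like projection)
projection = vol.sum(axis=0)

# frequency-domain route: FFT, extract the central slice (zero frequency
# along the projection axis), then a 1D inverse FFT of that slice
spectrum = np.fft.fftn(vol)
central_slice = spectrum[0, :]
reconstruction = np.fft.ifft(central_slice).real

assert np.allclose(projection, reconstruction)
```

The preprocessing FFT is paid once per volume, after which each view costs only a slice extraction and a lower-dimensional inverse transform, which is the source of the O(n² log n) per-image complexity.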
While spatial domain acceleration techniques often result in a compression of the volume, this is not possible in the frequency domain (Theußl 1999). The usefulness of frequency domain volume rendering is at least questionable, since there are other techniques that operate in the spatial domain and are capable of generating similar

results in real time (e.g. maximum intensity projection) and are more flexible, whereas frequency domain volume rendering is restricted to X-ray-like images (Theußl 1999; Westenberg and Roerdink 2000).

1.5 Nonphotorealistic Based Volume Rendering

The increasing trend towards photorealism in 3D graphics has been balanced by interest in the effects of nonphotorealistic approaches. Nonphotorealistic rendering (NPR) uses the techniques of artists and traditional media to convey visual information about an object. NPR tends to show a reduced level of detail of objects compared with photorealistic computer-generated images, in which the goal is to produce an image with the amount of detail that a camera would show. Recently, nonphotorealistic rendering, which originally had been used for computer graphics in general (Lansdown and Schofield 1995; Saito and Takahashi 1990), has been proposed for volume rendering (Hauser et al. 2001; Rheingans and Ebert 2001; Csebfalvi et al. 2001; Lu et al. 2002; Lum and Ma 2002), definitely extending the abilities for the investigation of 3D data.

Nonphotorealistic Rendering (NPR) Revisited

Figure 1.9: Rendering object silhouette edges using NPR to show object structure information (Raskar and Cohen 1999). In computer graphics, photorealistic rendering attempts to make artificial images of simulated 3D environments that look just like the real world. Nonphotorealistic rendering is thus any technique that produces images of a simulated 3D world in a style other than realism. Often these styles are reminiscent of paintings (painterly rendering), or

of various other styles of artistic illustration (sketch, pen and ink, etching, lithograph, etc.). Of particular commercial interest are techniques that can render 3D scenes in styles which match the look of traditionally animated films. Often called toon shading, these techniques allow a seamless combination of 3D elements with traditional cel animation. Another important application of nonphotorealistic rendering is to help the user understand that a depiction is only approximate. Psychologically, photorealistic rendering seems to imply an exactness and perfection which may overstate the fidelity of the simulated scene to a real object, while nonphotorealistic rendering tends to show an approximation and overview of an object, making the user concentrate on its important features. The default tendency of nonphotorealistic rendering systems is to generate imagery that superficially looks like that made by artists, for example pen and wash, watercolor or charcoal (Lansdown and Schofield 1995). It aims at generating an image that appears to be something other than a photograph of the real world, while mostly still resembling a photograph of a painting or sketch. The main topics of NPR research include: 1) lighting models for NPR (Gooch et al. 1998); 2) simulating artistic renderings, for example painting on a canvas using brush strokes (Hertzmann 1998); 3) depicting object shape using controlled-density hatching (Winkenbach and Salesin 1996); 4) representing 3D shape using principal directions and principal curvatures to guide the placement of the lines of stroke texture (Curtis et al. 1997); 5) generating a line drawing of a 3D scene and depicting object shape using silhouettes and outlines (Hertzmann 1999). Nonphotorealistic rendering has the effect of highlighting important details of objects while diminishing the importance of extraneous data.
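The silhouette primitive used by line-drawing NPR rests on a simple per-edge test, sketched below for a polygonal mesh; the face normals and view direction are illustrative assumptions:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def is_silhouette_edge(n1, n2, view_dir):
    """An edge shared by two faces is a silhouette edge when one face
    points toward the viewer and the other points away.

    n1, n2: outward normals of the two faces sharing the edge.
    """
    return (dot(n1, view_dir) > 0.0) != (dot(n2, view_dir) > 0.0)

# with view direction (0, 0, 1), faces with opposite z-facing normals
# share a silhouette edge, while two front-facing faces do not
assert is_silhouette_edge((0, 0, 1), (0, 0, -1), (0, 0, 1))
assert not is_silhouette_edge((0, 0, 1), (0, 0, 1), (0, 0, 1))
```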
Silhouette edges and outlines can effectively convey a great deal of information with a few strokes and are active research topics in the field of NPR (Raskar and Cohen 1999; Hertzmann 1999). Currently, technical illustration is considered to be one of the most effective means of conveying information through an image. Gooch explored the use of lighting in computer graphics to emulate the effects of such hand-drawn images (Gooch et al. 1998).

NPR for Volume Rendering

In contrast to photorealistic approaches, which stick to models of real physical translucent media, nonphotorealistic approaches allow depicting and enhancing user-specified features, like regions of significant surface curvature, silhouettes and shading textures. Most previous research on nonphotorealistic rendering has been done for polygonal surface data, not volumetric data. Recently, nonphotorealistic rendering has been proposed for volume rendering and has become an area of active research (Lu et al. 2002; Hauser et al. 2001; Rheingans and Ebert 2001; Csebfalvi et al. 2001; Lum and Ma 2002). Nonphotorealistic rendering can be used effectively to clarify fine structures in a volume. Nonphotorealistic volume rendering will be investigated in more detail later in this thesis.

1.6 Comparison of Different Rendering Algorithms

Surface Rendering versus Direct Volume Rendering

Surface rendering has several important advantages. Because they reduce the original data volume to a compact surface model, surface rendering algorithms can operate very rapidly on modern computers. The realistic lighting models used in many surface rendering algorithms can provide the most three-dimensionally intuitive images. Finally, the distinct surfaces in surface reconstructions facilitate clinical measurements. Two serious drawbacks are associated with the use of surface rendering for the display of volume data sets (e.g. medical data sets). Most fundamentally, taking bone rendering as an example, surface renderings depict only the bone surface; most of the available data is not incorporated into the 3D image. In cases where the pathology of interest is subcortical or obscured by overlying bone, surface rendering does not display the most important information in the dataset. The second serious drawback is poor image fidelity. Surface renderings simplify the data into a binary form, classifying each voxel as either 100% bone or 0% bone. The finite voxel size in medical data produces many voxels that are only fractionally composed of bone, and classifying them as all or none introduces stair-step artifacts into the image. By varying the threshold minimally, fracture gaps can appear to open and close, bony processes lengthen and shorten, and holes in the cortex are created and fused. Any data-driven surface generation would create grossly wrong shapes. Model-driven surface extraction may not be available because the model is not explicitly defined (although an expert may have a model in mind). These artifacts, coupled with the inability to show subcortical detail, can make it impossible to visualize important aspects of the volume data set.
Direct volume rendering has two principal advantages over surface rendering. First, fuzzy classification through a transfer function provides a physically realistic depiction of volume-averaged CT data. Because the voxels are of finite size, many voxels contain multiple tissue types and are only fractionally composed of bone. Direct volume rendering with fuzzy classification accurately depicts this physical reality. Second, direct volume rendering incorporates all of the data contained in the volume into the displayed image. Volume renderings can show multiple overlying and internal features, and the displayed intensity is related to the amount of bone encountered along a line extending through the volume. Surface shading and increased opacity can be used to enhance the 3D understanding of volume-rendered images. The main drawbacks associated with direct volume rendering are the increased computational cost and the difficulty of appreciating 3D relationships in very transparent volume-rendered images. Another problem of direct volume rendering is that finding a good transfer function to depict volume samples is difficult; this process is often laborious trial and error. The flexibility of the direct volume rendering technique is an important advantage over surface rendering. Ideally, the user would be able to change these parameters interactively in real time, thereby maximizing both 3D perception and subcortical visualization in a single image.

DVR Algorithms Comparison

Each volume rendering approach has its own advantages and disadvantages. Generally, when a rendering approach aims at realizing high image quality, it has to sacrifice some performance, and vice versa. Shear-warp and 3D texture mapping volume rendering are devised to maximize frame rates at the expense of image quality, while image-aligned splatting and ray-casting are devised to achieve high image quality at the expense of performance. The image quality achieved with texture mapping usually shows severe color-bleeding artifacts due to the non-opacity-weighted colors, as well as staircasing. The latter is due to the limited bit precision of the frame buffer and can be reduced by increasing the number of slices. Some approaches have been proposed to improve the image quality of texture-based volume rendering (Rezk-Salama et al. 2000; Engel et al. 2001). The image created using shear-warp shows a little blurring, and significant aliasing in the form of staircasing is present. This is due to the ray sampling rate being less than 1.0 and can be disturbing in animated viewing. Image-aligned splatting offers a rendering quality similar to that of ray-casting. It, however, produces smoother images due to the z-averaged kernel and the anti-aliasing effect of the larger Gaussian filter. Both 3D texture mapping and shear-warp are consistently and substantially faster than ray-casting and splatting, in most cases by one or two orders of magnitude (Meissner et al. 2000). Splatting can generate a high-quality image faster than ray-casting when visualizing large volume data, and does not cause the extensive blurring of shear-warp. In contrast to ray-casting, splatting considers each voxel only once (for a 2D interpolation on the screen) and not several times (for a 3D interpolation in world space).
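The per-ray work that ray-casting performs can be sketched as front-to-back compositing with early ray termination. This is a minimal sketch with invented sample values; the opacity cutoff of 0.95 is an assumption, and opacity-weighted colors are used to avoid the color-bleeding artifacts mentioned above:

```python
import numpy as np

def composite_ray(colors, alphas, opacity_cutoff=0.95):
    """Front-to-back alpha compositing of the samples along one ray,
    stopping early once the accumulated opacity is nearly 1 (early
    ray termination, the classic ray-casting acceleration)."""
    acc_color = np.zeros(3)
    acc_alpha = 0.0
    for c, a in zip(colors, alphas):
        # Each sample contributes its opacity-weighted color, attenuated
        # by the transparency accumulated so far.
        acc_color += (1.0 - acc_alpha) * a * np.asarray(c, dtype=float)
        acc_alpha += (1.0 - acc_alpha) * a
        if acc_alpha >= opacity_cutoff:  # early ray termination
            break
    return acc_color, acc_alpha

# Three half-transparent samples (red, green, blue) along one ray:
color, alpha = composite_ray(
    colors=[(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)],
    alphas=[0.5, 0.5, 0.5])
# The front sample dominates: color ~ (0.5, 0.25, 0.125), alpha 0.875.
```

Repeating this loop once per pixel, with a 3D interpolation per sample, is what makes ray-casting considerably more expensive than the object-order approaches.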
Additionally, as an object-order approach, splatting need only consider the relevant voxels, which in many cases constitute only 10 percent of the volume's voxels (Wilhelms and Gelder 1991). Table 1.1 compares distinguishing features and conceptual differences of the typical volume rendering algorithms.

1.7 Recent Active Research Topics for Volume Rendering

Although many researchers have dedicated themselves to volume rendering for decades, some crucial problems still need to be solved. Especially with the appearance of new techniques in computer graphics and graphics hardware, volume rendering research offers excellent opportunities. Recent active research on volume rendering is mainly focused on improving image quality and showing more detailed information about volume data. Interactive volume rendering is also an active research

Table 1.1: Comparison of the typical volume rendering algorithms (splatting, shear-warp, 3D texture mapping, ray casting, polygon-based rendering, Fourier volume rendering, and focal region based volume rendering) with respect to sampling rate (freely selectable for most algorithms, fixed at [1.0, 0.58] for shear-warp), sample evaluation (point sampled vs. averaged across sheets), interpolation kernel (trilinear, Gaussian, or bilinear), classification scheme (pre- vs. post-classified, with or without opacity-weighted colors), acceleration technique (early ray termination, early splat elimination, RLE opacity encoding, graphics hardware, FFT, focal region and graphics hardware), precision per channel (floating point vs. 8-12 bits), voxels considered (all vs. relevant), speed, quality, perspective projection, support for irregular grids, hardware acceleration, combination with polygons, and information richness. (Speed, quality and information richness are rated +: general, ++: good, +++: very good; capabilities are marked Y: yes, N: no.)

topic for volume rendering. At the same time, new techniques from computer graphics have been adapted for volume rendering. Altogether, the current active research topics for volume rendering mainly include transfer function generation, nonphotorealistic volume rendering, hybrid volume rendering, hardware-accelerated volume rendering, combining segmentation with volume rendering, and other approaches for improving image quality and rendering performance.

Transfer Function

The transfer function is a critical component of the volume rendering process that specifies the relation between scalar data (e.g. computerized tomography Hounsfield units), as well as derived values (e.g. the gradient volume of an MRI dataset), and optical characteristics (e.g. color and opacity). Finding good transfer functions has been listed among the top ten problems in volume visualization (Pfister et al. 2001). Consequently, much effort has recently been spent on improving this situation (He et al. 1996; Bajaj et al. 1997; Marks et al. 1997; Fang et al. 1998; Kindlmann and Durkin 1998; König and Gröller 2001; Botha and Post 2002; Kniss et al. 2001). Existing schemes range from fully manual to semi-automatic techniques for finding transfer functions. Trial and error is one of the most widely used approaches for finding a good transfer function for volume data. It usually involves arbitrarily and repeatedly manipulating the coefficients of some mathematical representation of the transfer function to adjust the visualization outcome. Common forms of such mathematical representations are piecewise linear functions, polynomials and splines. If special volume rendering hardware that can do this at interactive rates is not available, this can be a very laborious and time-consuming process.
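Such a trial-and-error representation, e.g. a piecewise linear opacity transfer function, can be sketched as follows. The control points are invented for illustration; in practice these are exactly the coefficients the user manipulates repeatedly:

```python
import numpy as np

# Control points of a hypothetical piecewise linear opacity transfer
# function over CT-like scalar values (breakpoints are made up).
scalar_pts  = [0.0, 100.0, 300.0, 1000.0]
opacity_pts = [0.0,   0.0,   0.8,    1.0]

def opacity_tf(value):
    """Evaluate the piecewise linear transfer function: values below
    100 stay fully transparent, then opacity ramps up linearly."""
    return float(np.interp(value, scalar_pts, opacity_pts))

opacity_tf(200.0)  # halfway between 100 (0.0) and 300 (0.8) -> 0.4
```

Color channels are typically specified the same way, giving one such curve per RGBA component; adjusting a breakpoint and re-rendering is one iteration of the trial-and-error loop.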
Manual transfer function design is primarily based on experience, allowing users to bring in their knowledge of the specific data as well as their personal taste. Recent research on transfer functions can be categorized into image-driven and data-driven techniques. Fang presented an image-based transfer function model that integrates 3D image processing tools (image enhancement and boundary detection) into the volume visualization pipeline to facilitate the search for an image-based transfer function in volume data visualization and exploration (Fang et al. 1998). The model defines a transfer function as a sequence of 3D image processing procedures, and allows users to adjust a set of qualitative and descriptive parameters to achieve their subjective visualization goals. The design galleries approach is another viable alternative for facilitating transfer function selection (Marks et al. 1997). It generates a broad set of candidate transfer functions simultaneously based on automatic analysis, each representing a different configuration of the transfer function. A satisfactory transfer function is then selected from these representatives and implicitly optimized. The principal technical challenges are to automatically generate the different transfer functions that produce a wide spread of

dissimilar output renderings and to arrange the resulting designs for easy browsing. This technique relies on fast rendering hardware to reach its full potential as an interactive method, considering that potentially hundreds of different renderings have to be made. One must also take into account that many optimised software volume rendering techniques, such as shear-warp factorization, require pre-calculated transfer function lookups, which makes them less applicable to this problem, where the transfer functions are continuously modified (Botha and Post 2002). König combined elements of the design galleries approach and trial-and-error techniques with the use of real-time ray-casting hardware (König and Gröller 2001). In this approach, transfer function specification is simplified into a three-step process: first the user indicates scalar ranges of interest, then assigns colors to these ranges, and finally assigns opacities to them. Numerous feedback renderings are performed during this process in order to arrive at a good transfer function. The method presumes that the user already knows the scalar range of interest. While these methods reportedly succeed in finding useful transfer functions, and while they both allow the user to inspect the transfer function behind a rendering, the systems are fundamentally designed for finding good renderings, not good transfer functions. These processes are driven entirely by analysis of rendered images, not of the dataset itself. Rather than having a high-level interface to control the transfer function, the user has to choose a transfer function from among those randomly generated, making it hard to gain insight into what makes a transfer function appropriate for a given dataset. Bajaj's contour spectrum for transfer functions consists of metrics that are computed over a scalar field (Bajaj et al. 1997).
This more data-centric approach visually summarizes the space of isosurfaces in terms of metrics such as surface area and mean gradient magnitude, thereby guiding the choice of isovalue for isosurfacing and also providing information useful for transfer function generation. Kindlmann proposed a semi-automatic, highly regarded technique for generating transfer functions from volume data (Kindlmann and Durkin 1998). This method makes the reasonable assumption that the features of interest in the data are the boundary regions between different materials. It uses a data structure named the histogram volume to capture the relationship between data values and boundary representations (first and second directional derivatives). Bajaj's contour spectrum and Kindlmann's semi-automatic method are designed to create transfer functions with minimal or no user interaction. In practice, however, providing meaningful feedback and fine-tuning is a necessary step toward a good transfer function, and transfer function specification methods should take this into account. Kniss presented a three-dimensional transfer function mechanism for scalar data (based on data value, gradient magnitude, and a second directional derivative) (Kniss et al. 2001). It also provides a set of manipulation widgets which make specifying transfer functions intuitive and convenient. To make this process genuinely interactive, it exploits

the graphics hardware, especially 3D texture memory and pixel texturing operations. This work is an extension of the semi-automatic transfer function method (Kindlmann and Durkin 1998). Multidimensional transfer function specification is highly regarded work in volume rendering; it shows that hardware-supported interactive transfer function specification is both possible and useful. These transfer function approaches use the scalar value, gradient and/or second derivative as the basic parameters for designing transfer function specification methods. They differ in that they create different widgets and use these parameters in different ways (data-centric or image-centric). For an actual volume dataset, the voxel position also plays an important role in depicting object information. For example, the user may only be interested in the objects near the eye and want to emphasize them. One solution for this situation is to consider the voxel position in transfer function specification. This is a challenging and interesting research topic in volume visualization, and it is also useful for the focal region based volume rendering we propose in this thesis, where we account for this factor through the distance between the focal region center and the current volume sample position.

NPR Enhanced Volume Rendering

Nonphotorealistic rendering was originally proposed in computer graphics, and nonphotorealistic rendering for volumetric data visualization has recently become an area of active research. NPR is mainly used to enhance volume features (e.g. silhouettes, boundaries) in volume visualization (Rheingans and Ebert 2001; Csebfalvi et al. 2001; Lum and Ma 2002; Lu et al. 2002).
An interesting experience from working with applications in the field of volume rendering, which is especially important for our approach, is that a volumetric data set is often interpreted as being composed of distinct objects, for example, organs within medical data sets. For a useful depiction of 3D objects, boundary surfaces are often used as the visual representation of the objects, mostly due to the need to avoid visual clutter as much as possible. Visible structures are strongly related to object boundaries. Consequently, silhouette and boundary outline rendering of volumetric data using NPR constitutes a main part of the field of nonphotorealistic rendering for volumetric data visualization (Rheingans and Ebert 2001; Csebfalvi et al. 2001; Lum and Ma 2002). Rheingans introduced the volume illustration approach (Rheingans and Ebert 2001), in which a broad spectrum of nonphotorealistic visual cues (boundary enhancement, sketch lines, silhouettes, feature halos, etc.) is integrated within the volume visualization pipeline. This approach can be considered a feature-based technique: the original volume features (e.g. boundaries) are enhanced using nonphotorealistic rendering techniques. The main idea is that the features to be enhanced are defined on the basis of local volume characteristics (e.g. gradient) and can be enhanced locally. Csebfalvi

developed an NPR technique for volumetric data that visualizes object contours depending on the magnitude of the gradient (Csebfalvi et al. 2001). 3D structures characterized by regions exhibiting significant changes of data values (greater gradient magnitude) become visible without being obstructed by visual representations of the rather homogeneous regions within the 3D data set. This approach provides fast interactive investigation of 3D data and lets users quickly learn about internal structures. Lum presented a method for interactive nonphotorealistic volume rendering using hardware-accelerated rendering techniques with a PC cluster (Lum and Ma 2002). In this method, several perceptually effective NPR techniques, such as tone shading, silhouette illustration and depth-based color cues, are implemented in hardware. By using multiple graphics cards spread across a PC cluster, high-resolution volumes can be rendered interactively using nonphotorealistic visualization techniques. Lu (Lu et al. 2002) presented a framework for volume stippling. By combining the principles of artistic and scientific illustration, it explores several feature enhancement techniques (boundaries, silhouettes, resolution, distance, etc.) to create effective, interactive visualizations of scientific and medical datasets. This approach introduces a rendering mechanism that generates appropriate point lists at all resolutions during an automatic preprocess, and modifies rendering styles through different combinations of these feature enhancements. Volume stippling thus provides another method for investigating volume models.

Figure 1.10: Rendering object feature lines using NPR to show object structure information (Raskar and Cohen 1999).

Interrante enhanced surface shape and position using textured ridge and valley lines (Interrante et al. 1995). Figure 1.10 shows the effect of enhancing surface shape with a ridge and valley line texture.
Both ridge and valley lines correspond to geometric features of the surface, and as such they can be computed automatically from local

measures of the surface's geometry, for example surface normals and normal curvatures. This method gives a direct perception of object shape. Nagy described a volume hatching technique that illustrates volume data sets by simulating free-hand line-art drawing, suitable for generating technical illustrations and sketches (Nagy et al. 2002). Hatching fields are created and rendered in several separate passes: first, higher-order differential characteristics of the volumetric data field are computed and encoded in a hierarchical data structure. At run time, based on a user-defined importance criterion, a representative set of hatching strokes is computed, each of which is effectively encoded by a connected group of line segments. Strokes are finally displayed as colored and shaded line strips employing OpenGL functionality and the anisotropic model of Banks (Banks 1994). These previous NPR techniques for volume rendering mainly focused on creating volume illustration, volume hatching or volume stippling effects, which have been widely developed in computer graphics; they mainly focused on simulating these techniques in volume rendering. For computer graphics research, they contributed important advances in this area, but they did not consider how to enhance an existing volume scene based on existing volume information. Using these techniques alone to investigate information in medical imaging is not enough, because medical imaging needs precise information about objects. They must be combined with direct volume rendering to show their advantages for investigating information.
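The gradient-based boundary and silhouette enhancements discussed in this subsection can be sketched as a per-sample opacity modulation. The weighting terms and parameter names below are illustrative stand-ins in the spirit of these techniques, not the published formulas:

```python
import numpy as np

def enhanced_opacity(base_opacity, gradient, view_dir,
                     k_boundary=1.0, k_sil=1.0, sil_power=4.0):
    """Boost a sample's opacity where the local gradient magnitude is
    large (boundary enhancement) and where the gradient direction is
    nearly perpendicular to the view direction (silhouette enhancement)."""
    g = np.asarray(gradient, dtype=float)
    v = np.asarray(view_dir, dtype=float)
    v = v / np.linalg.norm(v)
    gmag = np.linalg.norm(g)
    if gmag == 0.0:
        return base_opacity  # homogeneous region: leave it unenhanced
    n = g / gmag
    boundary = k_boundary * gmag
    silhouette = k_sil * (1.0 - abs(float(np.dot(n, v)))) ** sil_power
    return min(1.0, base_opacity * (1.0 + boundary + silhouette))

# A homogeneous sample is untouched; a boundary seen edge-on is boosted:
enhanced_opacity(0.1, (0.0, 0.0, 0.0), (0.0, 0.0, 1.0))  # stays 0.1
enhanced_opacity(0.1, (1.0, 0.0, 0.0), (0.0, 0.0, 1.0))  # boosted
```

Plugging such a modulation into the compositing step is what lets contours and boundaries emerge without obstructing the homogeneous interior regions.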
This area therefore needs to be developed further with real applications of NPR in medical imaging in mind.

Combining Segmentation with Volume Rendering

As visualization tasks grow in size and complexity, the problem of presenting data effectively is accompanied by another, potentially more difficult problem: how to extract presentable data from the flood of raw information. Thus, segmentation is intimately tied to data visualization. Segmentation is the process of distinguishing objects in the data set from their surroundings so as to facilitate the creation of geometric models. For example, in medical imaging it is often important to measure the shape, surface area, or volume of tissues in the body; once the dataset is segmented, those quantities are easily measured. Segmentation is an area of active ongoing research. Manual segmentation is a slow and laborious process requiring expert knowledge, often involving the tracing of object outlines in a series of dataset slices. Automated techniques, on the other hand, are rarely robust and reliable, and often require extensive supervision (Wells et al. 1996). Recently, segmentation has been integrated into the volume rendering pipeline to explore detailed information from volumetric data. In fact, segmentation and volume rendering cannot be separated completely: segmentation needs visualization to show its final result, and volume rendering can show more detailed and clearer information if segmentation is integrated into the pipeline. Significant work on integrating segmentation and volume rendering includes the analysis of vascular structures and the analysis of tumors (Hahn et al. 2001; Bullitt and Aylward 2001). When combining segmentation

with the volume rendering pipeline, it is easy to realize selective volume rendering, and this approach can create high-quality images.

Hybrid Volume Rendering

Traditionally, most volume visualization approaches use only a single rendering method on a volume data set to depict information. Although a single technique can provide useful insights into volume data, it is insufficient for many problems. Recent research tends to combine several rendering methods in one volume data set to extract more meaningful information (Hauser et al. 2001). Especially in cases where the simultaneous visualization of all the data is not possible, e.g., for very large data sets or data of high dimensionality, the questions of what subset of the data to show and what rendering method to use become very important decisions during investigation. In medical applications, for example, when visualizing 3D data sets it is in general not possible to show all the data information concurrently. Instead, certain selective rendering approaches are used for visualization: voxels of maximal intensity, which are displayed using maximum-intensity projection; iso-surfaces, which bound certain subsets of the data; as well as several others. Hauser introduced a two-level volume rendering approach, which allows different rendering techniques to be used selectively for different subsets of a 3D data set (Hauser et al. 2001). Different structures within the data set are rendered locally on an object-by-object basis by DVR, MIP, surface rendering or nonphotorealistic rendering, and the results of the individual object renderings are combined globally in a merging step. In this approach, NPR is mainly used to render and enhance regions of high gradient magnitude (e.g. contours, boundary surfaces). Figure 1.11 shows the result of two-level volume rendering of part of a hand.
Two-level volume rendering suggests that combining different rendering methods can provide rich data information. The main challenge in hybrid volume rendering is how to combine different rendering methods efficiently, so that their respective advantages can be used to explore detailed information from volume data.

Figure 1.11: Two-level volume rendering of part of a human hand: bones are rendered using DVR, surface rendering is used for vessels, and nonphotorealistic rendering is used for skin (Hauser et al. 2001).
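The local/global split of two-level rendering can be sketched as a per-object dispatch, where each segmented object is assigned its own rendering technique locally and the per-object results are merged globally. The object names, mode table and shader stubs below are invented for illustration:

```python
def dvr_shade(sample):      # placeholder for a transfer-function lookup
    return ("DVR", sample)

def surface_shade(sample):  # placeholder for iso-surface shading
    return ("surface", sample)

def npr_shade(sample):      # placeholder for contour enhancement
    return ("NPR", sample)

# Hypothetical per-object technique assignment, as in Figure 1.11.
RENDER_MODE = {"bone": "DVR", "vessel": "surface", "skin": "NPR"}

def render_sample(object_id, sample):
    """Local level: dispatch a volume sample to the rendering style
    chosen for the object it belongs to. The global level would then
    merge the per-object results into one image."""
    shaders = {"DVR": dvr_shade, "surface": surface_shade, "NPR": npr_shade}
    return shaders[RENDER_MODE.get(object_id, "DVR")](sample)

render_sample("vessel", 0.5)  # routed to the surface-rendering stub
```

The open question the text raises, i.e. how to merge such heterogeneous per-object results efficiently and consistently, sits in the global step that this sketch leaves abstract.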


More information

Hardware Accelerated Volume Visualization. Leonid I. Dimitrov & Milos Sramek GMI Austrian Academy of Sciences

Hardware Accelerated Volume Visualization. Leonid I. Dimitrov & Milos Sramek GMI Austrian Academy of Sciences Hardware Accelerated Volume Visualization Leonid I. Dimitrov & Milos Sramek GMI Austrian Academy of Sciences A Real-Time VR System Real-Time: 25-30 frames per second 4D visualization: real time input of

More information

Applications of Explicit Early-Z Culling

Applications of Explicit Early-Z Culling Applications of Explicit Early-Z Culling Jason L. Mitchell ATI Research Pedro V. Sander ATI Research Introduction In past years, in the SIGGRAPH Real-Time Shading course, we have covered the details of

More information

Volume Rendering - Introduction. Markus Hadwiger Visual Computing Center King Abdullah University of Science and Technology

Volume Rendering - Introduction. Markus Hadwiger Visual Computing Center King Abdullah University of Science and Technology Volume Rendering - Introduction Markus Hadwiger Visual Computing Center King Abdullah University of Science and Technology Volume Visualization 2D visualization: slice images (or multi-planar reformation:

More information

An Efficient Approach for Emphasizing Regions of Interest in Ray-Casting based Volume Rendering

An Efficient Approach for Emphasizing Regions of Interest in Ray-Casting based Volume Rendering An Efficient Approach for Emphasizing Regions of Interest in Ray-Casting based Volume Rendering T. Ropinski, F. Steinicke, K. Hinrichs Institut für Informatik, Westfälische Wilhelms-Universität Münster

More information

This work is about a new method for generating diffusion curve style images. Although this topic is dealing with non-photorealistic rendering, as you

This work is about a new method for generating diffusion curve style images. Although this topic is dealing with non-photorealistic rendering, as you This work is about a new method for generating diffusion curve style images. Although this topic is dealing with non-photorealistic rendering, as you will see our underlying solution is based on two-dimensional

More information

Level of Details in Computer Rendering

Level of Details in Computer Rendering Level of Details in Computer Rendering Ariel Shamir Overview 1. Photo realism vs. Non photo realism (NPR) 2. Objects representations 3. Level of details Photo Realism Vs. Non Pixar Demonstrations Sketching,

More information

Non-Photo Realistic Rendering. Jian Huang

Non-Photo Realistic Rendering. Jian Huang Non-Photo Realistic Rendering Jian Huang P and NP Photo realistic has been stated as the goal of graphics during the course of the semester However, there are cases where certain types of non-photo realistic

More information

Mirrored LH Histograms for the Visualization of Material Boundaries

Mirrored LH Histograms for the Visualization of Material Boundaries Mirrored LH Histograms for the Visualization of Material Boundaries Petr Šereda 1, Anna Vilanova 1 and Frans A. Gerritsen 1,2 1 Department of Biomedical Engineering, Technische Universiteit Eindhoven,

More information

Image Base Rendering: An Introduction

Image Base Rendering: An Introduction Image Base Rendering: An Introduction Cliff Lindsay CS563 Spring 03, WPI 1. Introduction Up to this point, we have focused on showing 3D objects in the form of polygons. This is not the only approach to

More information

CHAPTER 1 Graphics Systems and Models 3

CHAPTER 1 Graphics Systems and Models 3 ?????? 1 CHAPTER 1 Graphics Systems and Models 3 1.1 Applications of Computer Graphics 4 1.1.1 Display of Information............. 4 1.1.2 Design.................... 5 1.1.3 Simulation and Animation...........

More information

cs6630 November TRANSFER FUNCTIONS Alex Bigelow University of Utah

cs6630 November TRANSFER FUNCTIONS Alex Bigelow University of Utah cs6630 November 14 2014 TRANSFER FUNCTIONS Alex Bigelow University of Utah 1 cs6630 November 13 2014 TRANSFER FUNCTIONS Alex Bigelow University of Utah slide acknowledgements: Miriah Meyer, University

More information

coding of various parts showing different features, the possibility of rotation or of hiding covering parts of the object's surface to gain an insight

coding of various parts showing different features, the possibility of rotation or of hiding covering parts of the object's surface to gain an insight Three-Dimensional Object Reconstruction from Layered Spatial Data Michael Dangl and Robert Sablatnig Vienna University of Technology, Institute of Computer Aided Automation, Pattern Recognition and Image

More information

2. Review of current methods

2. Review of current methods Transfer Functions for Direct Volume Rendering Gordon Kindlmann gk@cs.utah.edu http://www.cs.utah.edu/~gk Scientific Computing and Imaging Institute School of Computing University of Utah Contributions:

More information

Scalar Data. Alark Joshi

Scalar Data. Alark Joshi Scalar Data Alark Joshi Announcements Pick two papers to present Email me your top 3/4 choices. FIFO allotment Contact your clients Blog summaries: http://cs.boisestate.edu/~alark/cs564/participants.html

More information

Clipping. CSC 7443: Scientific Information Visualization

Clipping. CSC 7443: Scientific Information Visualization Clipping Clipping to See Inside Obscuring critical information contained in a volume data Contour displays show only exterior visible surfaces Isosurfaces can hide other isosurfaces Other displays can

More information

Scalar Visualization

Scalar Visualization Scalar Visualization 5-1 Motivation Visualizing scalar data is frequently encountered in science, engineering, and medicine, but also in daily life. Recalling from earlier, scalar datasets, or scalar fields,

More information

Introduction. Illustrative rendering is also often called non-photorealistic rendering (NPR)

Introduction. Illustrative rendering is also often called non-photorealistic rendering (NPR) Introduction Illustrative rendering is also often called non-photorealistic rendering (NPR) we shall use these terms here interchangeably NPR offers many opportunities for visualization that conventional

More information

Chapter 7. Conclusions and Future Work

Chapter 7. Conclusions and Future Work Chapter 7 Conclusions and Future Work In this dissertation, we have presented a new way of analyzing a basic building block in computer graphics rendering algorithms the computational interaction between

More information

L1 - Introduction. Contents. Introduction of CAD/CAM system Components of CAD/CAM systems Basic concepts of graphics programming

L1 - Introduction. Contents. Introduction of CAD/CAM system Components of CAD/CAM systems Basic concepts of graphics programming L1 - Introduction Contents Introduction of CAD/CAM system Components of CAD/CAM systems Basic concepts of graphics programming 1 Definitions Computer-Aided Design (CAD) The technology concerned with the

More information

Raycasting. Ronald Peikert SciVis Raycasting 3-1

Raycasting. Ronald Peikert SciVis Raycasting 3-1 Raycasting Ronald Peikert SciVis 2007 - Raycasting 3-1 Direct volume rendering Volume rendering (sometimes called direct volume rendering) stands for methods that generate images directly from 3D scalar

More information

Scalar Algorithms: Contouring

Scalar Algorithms: Contouring Scalar Algorithms: Contouring Computer Animation and Visualisation Lecture tkomura@inf.ed.ac.uk Institute for Perception, Action & Behaviour School of Informatics Contouring Scaler Data Last Lecture...

More information

Multidimensional Transfer Functions in Volume Rendering of Medical Datasets. Master thesis. Tor Øyvind Fluør

Multidimensional Transfer Functions in Volume Rendering of Medical Datasets. Master thesis. Tor Øyvind Fluør UNIVERSITY OF OSLO Department of Informatics Multidimensional Transfer Functions in Volume Rendering of Medical Datasets Master thesis Tor Øyvind Fluør February 2006 Abstract In volume rendering, transfer

More information

Computer Graphics. - Volume Rendering - Philipp Slusallek

Computer Graphics. - Volume Rendering - Philipp Slusallek Computer Graphics - Volume Rendering - Philipp Slusallek Overview Motivation Volume Representation Indirect Volume Rendering Volume Classification Direct Volume Rendering Applications: Bioinformatics Image

More information

Indirect Volume Rendering

Indirect Volume Rendering Indirect Volume Rendering Visualization Torsten Möller Weiskopf/Machiraju/Möller Overview Contour tracing Marching cubes Marching tetrahedra Optimization octree-based range query Weiskopf/Machiraju/Möller

More information

graphics pipeline computer graphics graphics pipeline 2009 fabio pellacini 1

graphics pipeline computer graphics graphics pipeline 2009 fabio pellacini 1 graphics pipeline computer graphics graphics pipeline 2009 fabio pellacini 1 graphics pipeline sequence of operations to generate an image using object-order processing primitives processed one-at-a-time

More information

GPU-based Volume Rendering. Michal Červeňanský

GPU-based Volume Rendering. Michal Červeňanský GPU-based Volume Rendering Michal Červeňanský Outline Volume Data Volume Rendering GPU rendering Classification Speed-up techniques Other techniques 2 Volume Data Describe interior structures Liquids,

More information

graphics pipeline computer graphics graphics pipeline 2009 fabio pellacini 1

graphics pipeline computer graphics graphics pipeline 2009 fabio pellacini 1 graphics pipeline computer graphics graphics pipeline 2009 fabio pellacini 1 graphics pipeline sequence of operations to generate an image using object-order processing primitives processed one-at-a-time

More information

Volume Illumination and Segmentation

Volume Illumination and Segmentation Volume Illumination and Segmentation Computer Animation and Visualisation Lecture 13 Institute for Perception, Action & Behaviour School of Informatics Overview Volume illumination Segmentation Volume

More information

Medical Visualization - Illustrative Visualization 2 (Summary) J.-Prof. Dr. Kai Lawonn

Medical Visualization - Illustrative Visualization 2 (Summary) J.-Prof. Dr. Kai Lawonn Medical Visualization - Illustrative Visualization 2 (Summary) J.-Prof. Dr. Kai Lawonn Hatching 2 Hatching Motivation: Hatching in principle curvature direction Interrante et al. 1995 3 Hatching Hatching

More information

Data Representation in Visualisation

Data Representation in Visualisation Data Representation in Visualisation Visualisation Lecture 4 Taku Komura Institute for Perception, Action & Behaviour School of Informatics Taku Komura Data Representation 1 Data Representation We have

More information

Isosurface Rendering. CSC 7443: Scientific Information Visualization

Isosurface Rendering. CSC 7443: Scientific Information Visualization Isosurface Rendering What is Isosurfacing? An isosurface is the 3D surface representing the locations of a constant scalar value within a volume A surface with the same scalar field value Isosurfaces form

More information

Computational Strategies

Computational Strategies Computational Strategies How can the basic ingredients be combined: Image Order Ray casting (many options) Object Order (in world coordinate) splatting, texture mapping Combination (neither) Shear warp,

More information

A Study of Medical Image Analysis System

A Study of Medical Image Analysis System Indian Journal of Science and Technology, Vol 8(25), DOI: 10.17485/ijst/2015/v8i25/80492, October 2015 ISSN (Print) : 0974-6846 ISSN (Online) : 0974-5645 A Study of Medical Image Analysis System Kim Tae-Eun

More information

GPU-Accelerated Deep Shadow Maps for Direct Volume Rendering

GPU-Accelerated Deep Shadow Maps for Direct Volume Rendering Graphics Hardware (2006) M. Olano, P. Slusallek (Editors) GPU-Accelerated Deep Shadow Maps for Direct Volume Rendering Markus Hadwiger Andrea Kratz Christian Sigg Katja Bühler VRVis Research Center ETH

More information

CSE528 Computer Graphics: Theory, Algorithms, and Applications

CSE528 Computer Graphics: Theory, Algorithms, and Applications CSE528 Computer Graphics: Theory, Algorithms, and Applications Hong Qin State University of New York at Stony Brook (Stony Brook University) Stony Brook, New York 11794--4400 Tel: (631)632-8450; Fax: (631)632-8334

More information

Efficient View-Dependent Sampling of Visual Hulls

Efficient View-Dependent Sampling of Visual Hulls Efficient View-Dependent Sampling of Visual Hulls Wojciech Matusik Chris Buehler Leonard McMillan Computer Graphics Group MIT Laboratory for Computer Science Cambridge, MA 02141 Abstract In this paper

More information

Interactive Boundary Detection for Automatic Definition of 2D Opacity Transfer Function

Interactive Boundary Detection for Automatic Definition of 2D Opacity Transfer Function Interactive Boundary Detection for Automatic Definition of 2D Opacity Transfer Function Martin Rauberger, Heinrich Martin Overhoff Medical Engineering Laboratory, University of Applied Sciences Gelsenkirchen,

More information

The Traditional Graphics Pipeline

The Traditional Graphics Pipeline Last Time? The Traditional Graphics Pipeline Reading for Today A Practical Model for Subsurface Light Transport, Jensen, Marschner, Levoy, & Hanrahan, SIGGRAPH 2001 Participating Media Measuring BRDFs

More information

Soft shadows. Steve Marschner Cornell University CS 569 Spring 2008, 21 February

Soft shadows. Steve Marschner Cornell University CS 569 Spring 2008, 21 February Soft shadows Steve Marschner Cornell University CS 569 Spring 2008, 21 February Soft shadows are what we normally see in the real world. If you are near a bare halogen bulb, a stage spotlight, or other

More information

Photorealism vs. Non-Photorealism in Computer Graphics

Photorealism vs. Non-Photorealism in Computer Graphics The Art and Science of Depiction Photorealism vs. Non-Photorealism in Computer Graphics Fredo Durand MIT- Lab for Computer Science Global illumination How to take into account all light inter-reflections

More information

Point based Rendering

Point based Rendering Point based Rendering CS535 Daniel Aliaga Current Standards Traditionally, graphics has worked with triangles as the rendering primitive Triangles are really just the lowest common denominator for surfaces

More information

Shear-Warp Volume Rendering. Volume Rendering Overview

Shear-Warp Volume Rendering. Volume Rendering Overview Shear-Warp Volume Rendering R. Daniel Bergeron Department of Computer Science University of New Hampshire Durham, NH 03824 From: Lacroute and Levoy, Fast Volume Rendering Using a Shear-Warp- Factorization

More information

Point-Based Rendering

Point-Based Rendering Point-Based Rendering Kobbelt & Botsch, Computers & Graphics 2004 Surface Splatting (EWA: Elliptic Weighted Averaging) Main Idea Signal Processing Basics Resampling Gaussian Filters Reconstruction Kernels

More information

CMSC427 Advanced shading getting global illumination by local methods. Credit: slides Prof. Zwicker

CMSC427 Advanced shading getting global illumination by local methods. Credit: slides Prof. Zwicker CMSC427 Advanced shading getting global illumination by local methods Credit: slides Prof. Zwicker Topics Shadows Environment maps Reflection mapping Irradiance environment maps Ambient occlusion Reflection

More information

Particle-Based Volume Rendering of Unstructured Volume Data

Particle-Based Volume Rendering of Unstructured Volume Data Particle-Based Volume Rendering of Unstructured Volume Data Takuma KAWAMURA 1)*) Jorji NONAKA 3) Naohisa SAKAMOTO 2),3) Koji KOYAMADA 2) 1) Graduate School of Engineering, Kyoto University 2) Center for

More information

Hot Topics in Visualization

Hot Topics in Visualization Hot Topic 1: Illustrative visualization 12 Illustrative visualization: computer supported interactive and expressive visualizations through abstractions as in traditional illustrations. Hot Topics in Visualization

More information

Scalar Visualization

Scalar Visualization Scalar Visualization Visualizing scalar data Popular scalar visualization techniques Color mapping Contouring Height plots outline Recap of Chap 4: Visualization Pipeline 1. Data Importing 2. Data Filtering

More information

Local Illumination. CMPT 361 Introduction to Computer Graphics Torsten Möller. Machiraju/Zhang/Möller

Local Illumination. CMPT 361 Introduction to Computer Graphics Torsten Möller. Machiraju/Zhang/Möller Local Illumination CMPT 361 Introduction to Computer Graphics Torsten Möller Graphics Pipeline Hardware Modelling Transform Visibility Illumination + Shading Perception, Interaction Color Texture/ Realism

More information

Chapter 7 - Light, Materials, Appearance

Chapter 7 - Light, Materials, Appearance Chapter 7 - Light, Materials, Appearance Types of light in nature and in CG Shadows Using lights in CG Illumination models Textures and maps Procedural surface descriptions Literature: E. Angel/D. Shreiner,

More information

0. Introduction: What is Computer Graphics? 1. Basics of scan conversion (line drawing) 2. Representing 2D curves

0. Introduction: What is Computer Graphics? 1. Basics of scan conversion (line drawing) 2. Representing 2D curves CSC 418/2504: Computer Graphics Course web site (includes course information sheet): http://www.dgp.toronto.edu/~elf Instructor: Eugene Fiume Office: BA 5266 Phone: 416 978 5472 (not a reliable way) Email:

More information

Introduction to Visualization and Computer Graphics

Introduction to Visualization and Computer Graphics Introduction to Visualization and Computer Graphics DH2320, Fall 2015 Prof. Dr. Tino Weinkauf Introduction to Visualization and Computer Graphics Visibility Shading 3D Rendering Geometric Model Color Perspective

More information

Hot Topics in Visualization. Ronald Peikert SciVis Hot Topics 12-1

Hot Topics in Visualization. Ronald Peikert SciVis Hot Topics 12-1 Hot Topics in Visualization Ronald Peikert SciVis 2007 - Hot Topics 12-1 Hot Topic 1: Illustrative visualization Illustrative visualization: computer supported interactive and expressive visualizations

More information

Ray Tracing Acceleration Data Structures

Ray Tracing Acceleration Data Structures Ray Tracing Acceleration Data Structures Sumair Ahmed October 29, 2009 Ray Tracing is very time-consuming because of the ray-object intersection calculations. With the brute force method, each ray has

More information

Render methods, Compositing, Post-process and NPR in NX Render

Render methods, Compositing, Post-process and NPR in NX Render Render methods, Compositing, Post-process and NPR in NX Render Overview What makes a good rendered image Render methods in NX Render Foregrounds and backgrounds Post-processing effects Compositing models

More information

The Traditional Graphics Pipeline

The Traditional Graphics Pipeline Last Time? The Traditional Graphics Pipeline Participating Media Measuring BRDFs 3D Digitizing & Scattering BSSRDFs Monte Carlo Simulation Dipole Approximation Today Ray Casting / Tracing Advantages? Ray

More information

Geometric Representations. Stelian Coros

Geometric Representations. Stelian Coros Geometric Representations Stelian Coros Geometric Representations Languages for describing shape Boundary representations Polygonal meshes Subdivision surfaces Implicit surfaces Volumetric models Parametric

More information

Let s start with occluding contours (or interior and exterior silhouettes), and look at image-space algorithms. A very simple technique is to render

Let s start with occluding contours (or interior and exterior silhouettes), and look at image-space algorithms. A very simple technique is to render 1 There are two major classes of algorithms for extracting most kinds of lines from 3D meshes. First, there are image-space algorithms that render something (such as a depth map or cosine-shaded model),

More information

Vector Visualization

Vector Visualization Vector Visualization Vector Visulization Divergence and Vorticity Vector Glyphs Vector Color Coding Displacement Plots Stream Objects Texture-Based Vector Visualization Simplified Representation of Vector

More information

CS 5630/6630 Scientific Visualization. Volume Rendering I: Overview

CS 5630/6630 Scientific Visualization. Volume Rendering I: Overview CS 5630/6630 Scientific Visualization Volume Rendering I: Overview Motivation Isosurfacing is limited It is binary A hard, distinct boundary is not always appropriate Slice Isosurface Volume Rendering

More information

3/29/2016. Applications: Geology. Appliations: Medicine. Applications: Archeology. Applications: Klaus Engel Markus Hadwiger Christof Rezk Salama

3/29/2016. Applications: Geology. Appliations: Medicine. Applications: Archeology. Applications: Klaus Engel Markus Hadwiger Christof Rezk Salama Tutorial 7 Real-Time Volume Graphics Real-Time Volume Graphics [01] Introduction and Theory Klaus Engel Markus Hadwiger Christof Rezk Salama Appliations: Medicine Applications: Geology Deformed Plasticine

More information

CS 130 Final. Fall 2015

CS 130 Final. Fall 2015 CS 130 Final Fall 2015 Name Student ID Signature You may not ask any questions during the test. If you believe that there is something wrong with a question, write down what you think the question is trying

More information

Using image data warping for adaptive compression

Using image data warping for adaptive compression Using image data warping for adaptive compression Stefan Daschek, 9625210 Abstract For years, the amount of data involved in rendering and visualization has been increasing steadily. Therefore it is often

More information

Computer Graphics Ray Casting. Matthias Teschner

Computer Graphics Ray Casting. Matthias Teschner Computer Graphics Ray Casting Matthias Teschner Outline Context Implicit surfaces Parametric surfaces Combined objects Triangles Axis-aligned boxes Iso-surfaces in grids Summary University of Freiburg

More information

CSC Computer Graphics

CSC Computer Graphics // CSC. Computer Graphics Lecture Kasun@dscs.sjp.ac.lk Department of Computer Science University of Sri Jayewardanepura Polygon Filling Scan-Line Polygon Fill Algorithm Span Flood-Fill Algorithm Inside-outside

More information

Interactive Computer Graphics A TOP-DOWN APPROACH WITH SHADER-BASED OPENGL

Interactive Computer Graphics A TOP-DOWN APPROACH WITH SHADER-BASED OPENGL International Edition Interactive Computer Graphics A TOP-DOWN APPROACH WITH SHADER-BASED OPENGL Sixth Edition Edward Angel Dave Shreiner Interactive Computer Graphics: A Top-Down Approach with Shader-Based

More information

Multipass GPU Surface Rendering in 4D Ultrasound

Multipass GPU Surface Rendering in 4D Ultrasound 2012 Cairo International Biomedical Engineering Conference (CIBEC) Cairo, Egypt, December 20-21, 2012 Multipass GPU Surface Rendering in 4D Ultrasound Ahmed F. Elnokrashy 1,2, Marwan Hassan 1, Tamer Hosny

More information

Fast Interactive Region of Interest Selection for Volume Visualization

Fast Interactive Region of Interest Selection for Volume Visualization Fast Interactive Region of Interest Selection for Volume Visualization Dominik Sibbing and Leif Kobbelt Lehrstuhl für Informatik 8, RWTH Aachen, 20 Aachen Email: {sibbing,kobbelt}@informatik.rwth-aachen.de

More information

Direct Volume Rendering. Overview

Direct Volume Rendering. Overview Direct Volume Rendering Department of Computer Science University of New Hampshire Durham, NH 03824 Based on: Brodlie and Wood, Recent Advances in Visualization of Volumetric Data, Eurographics 2000 State

More information

Overview. Direct Volume Rendering. Volume Rendering Integral. Volume Rendering Integral Approximation

Overview. Direct Volume Rendering. Volume Rendering Integral. Volume Rendering Integral Approximation Overview Direct Volume Rendering Department of Computer Science University of New Hampshire Durham, NH 03824 Based on: Brodlie and Wood, Recent Advances in Visualization of Volumetric Data, Eurographics

More information