Illustrating Dynamics of Time-Varying Volume Datasets in Static Images

Jennis Meyer-Spradow, Timo Ropinski, Jan Vahrenhold, Klaus Hinrichs
Institut für Informatik, Westfälische Wilhelms-Universität Münster
Einsteinstraße 62, Münster, Germany

Abstract

We present visualization techniques for time-varying volume datasets, i.e., time series of 3D data volumes. Instead of sequentially displaying the consecutive 3D volumes contained in such a time series, our method depicts the dynamics in a single static image. We extract the motion dynamics inherent to a time-varying volume dataset automatically during a preprocessing step by using three-dimensional block matching. The final rendering of a 3D volume associated with a specific point in time incorporates the extracted information to depict dynamics in a single static image by non-photorealistic overlays in the form of glyphs and other graphical elements. In addition, the preceding and following volumes of the time series may be rendered and combined with the final image.

1 Introduction

Volume rendering has become a mature field of interactive 3D computer graphics. It supports professionals from different domains when exploring volume datasets representing, for example, medical or meteorological structures and processes. Current medical scanners produce datasets with a high spatial resolution. Especially Computed Tomography (CT) datasets are highly suitable for showing anatomical structures. However, when dynamic processes have to be explored, a single 3D volume dataset is often insufficient. For example, Positron Emission Tomography (PET) imaging techniques are used for exploring the dynamics of metabolism. Successive scans produce a time series of images, each showing the distribution of molecules at a certain point in time. Time-varying volume data is also produced when applying functional Magnetic Resonance (MR) imaging techniques to evaluate neurological reactions to certain stimuli. In the subsequent sections we will use the term time-varying (volume) dataset, or just dataset, for a time series of 3D data volumes, and 3D volume for a single data volume at a specific point in time.

Although 3D volumes can be displayed separately to view the data for specific points in time, the dynamics contained in a time-varying dataset can only be extracted when considering all of them. Therefore time-varying volume datasets are often visualized by displaying the 3D volumes sequentially in the order in which they have been acquired. When rendering these 3D volumes at a high frame rate, the viewer is able to construct a mental image of the dynamics. Current graphics hardware supports rendering of time-varying datasets of moderate dimensions at appropriately high frame rates. However, this rendering technique is often inappropriate, since in some domains it is preferable to view still images rather than dynamic image sequences. Two problems arise when dealing with image sequences of 3D volumes. When several people watch an image sequence, a viewer who wants to exchange findings with the group usually points at a specific feature within an image. Since the images are dynamic, it is hard to pinpoint such an item of interest, especially if it is moving, and the animation needs to be paused. Another problem is the exchange of visualization results. In medical departments, diagnostic findings are often exchanged on static media such as film or paper. However, it is not possible to exchange time-varying volume datasets in this manner. Although the sequential viewing process could be simulated by showing all 3D volumes next to each other, this could lead to registration problems, i.e., it would be more difficult to identify a reference item across the 3D volumes.

This paper proposes visualization techniques which represent motion dynamics extracted from a time series of 3D volumes within a single static image.

To this end we extract motion information between successive 3D volumes in a preprocessing step. Our visualization techniques depict motion by composing the successive 3D volumes and additional visualization elements. These visualization elements are used to augment the rendering of data extracted from the time-varying volume dataset at a specific point in time. To achieve cartoon-inspired effects for displaying motion, we combine polygonal elements with the volume rendering of the source datasets to obtain the final image. In the next section we discuss related work. Section 3 briefly introduces the motion estimation technique we use. Section 4 describes the visualization techniques used to depict the extracted motion information, in particular the use of transparency, speedlines, and motion glyphs. After discussing the application of our techniques to datasets from different domains in Section 5, the paper concludes in Section 6.

2 Related Work

For depicting motion in a single image, mostly non-photorealistic rendering techniques are used. Masuch et al. [13] present an approach to visualize motion extracted from polygonal data by speedlines, repeatedly drawn contours, and arrows; different line styles and other stylizations give the viewer the impression of a hand-drawn image. Nienhaus and Döllner [15] present a technique to extract motion information from a behavior graph, which represents events and animation processes. Their system automatically creates cartoon-like graphical representations. Joshi and Rheingans [9] use speedlines and flow ribbons to visualize motion in volume datasets. However, their approach handles only feature extraction data and is mainly demonstrated on experimental datasets.

A lot of research in the area of dynamic volume visualization has been dedicated to flow visualization. Motion arrows are widely used [10], but are often difficult to understand because information is lost when projecting to the 2D screen. Boring and Pang [4] tried to tackle this problem by highlighting arrows that point in a direction specified by the user. Texture-based approaches compute and visualize dense representations of flow [14]. First introduced for 2D flow, they have also been used for 3D flow visualization, for example volume line integral convolution as used by Interrante and Grosch [7]. Svakhine et al. [16] emulate traditional flow illustration techniques as well as interactive simulation of Schlieren photography. Hanson and Cross [6] present an approach to visualize surfaces and volumes embedded in four-dimensional space; to this end they use illuminated 4D surface rendering with 4D shading and occlusion coding. Woodring et al. [18] interpret time-varying datasets as four-dimensional data fields and provide an intuitive user interface to specify 4D hyperplanes, which are rendered with different techniques. Ji et al. [8] extract time-varying isosurfaces and interval volumes by considering the 4D data directly.

Our work is based partially on a block matching algorithm for extracting motion information from time-varying volume datasets presented by de Leeuw and van Liere [11]. They render the estimated motion using polygonal models combined with volume rendering results. In contrast to their approach, we detect and visualize object motion instead of flow induced by single voxels.

3 Extracting Dynamics from Time-Varying Volume Datasets

To visualize dynamics expressively, motion information needs to be retrieved from the underlying data.
Usually motion is not accessible directly and has to be extracted by applying appropriate algorithms. For our visualization we use an algorithm introduced by de Leeuw and van Liere [11]. Here we give only a brief overview of the underlying concepts; a complete explanation can be found in the original paper. The goal is to extract dynamic motion from time-varying volume datasets. For each voxel in one 3D volume the algorithm searches for a corresponding voxel with a similar intensity in the successive 3D volume and interprets the spatial difference as motion. However, considering only one voxel for the comparison would be insufficient. Hence, all voxels in a block around the two candidates are used, which works similarly to the block matching algorithms used in 2D video compression [1]. Since comparing a voxel block with every voxel block in the next 3D volume is not very efficient, only those blocks of the successive 3D volume are tested which lie in the neighborhood of the current block. This test region is called the search window, and its size is dataset dependent. Blocks are compared using a matching operator whose result is greater for less similar blocks and smaller for more similar ones. We calculate the absolute differences of corresponding voxels and scale them with a Gaussian weighting function centered in the block matching region. This weighting expresses that we are interested in the motion at a particular position, so nearby voxels are more important than those farther away.

We have adapted the 3D block matching algorithm to be better suited for our visualization techniques, because a meaningful estimation was not possible for all blocks. We discard blocks with only sparse content, i.e., blocks containing only a small number of voxels with non-zero intensity. This elimination prevents sparse blocks from matching arbitrary destination blocks. For example, a block containing only a single voxel lying on the border of an object may match several other blocks that contain only one voxel with a similar intensity value. Furthermore, if more than one candidate has the same best matching value, we choose the candidate with the minimal distance.
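To make the matching step concrete, the following Python sketch outlines a Gaussian-weighted 3D block matching along the lines described above. It is an illustrative reimplementation rather than the authors' code: the function name, the block size, the search radius, the sparsity threshold, and the Gaussian width are assumptions chosen for readability.

import numpy as np

def estimate_block_motion(vol_a, vol_b, block=4, search=3, min_nonzero=8, sigma=2.0):
    # One motion vector per block: for every sufficiently dense block of vol_a,
    # find the most similar block inside a small search window of vol_b using a
    # Gaussian-weighted sum of absolute differences; ties are broken by choosing
    # the candidate with the smallest displacement.
    ax = np.arange(block) - (block - 1) / 2.0
    gx, gy, gz = np.meshgrid(ax, ax, ax, indexing="ij")
    weights = np.exp(-(gx**2 + gy**2 + gz**2) / (2.0 * sigma**2))

    motion = {}
    dims = vol_a.shape
    for x in range(0, dims[0] - block + 1, block):
        for y in range(0, dims[1] - block + 1, block):
            for z in range(0, dims[2] - block + 1, block):
                src = vol_a[x:x+block, y:y+block, z:z+block].astype(np.float32)
                if np.count_nonzero(src) < min_nonzero:
                    continue  # skip sparse blocks that would match arbitrarily
                best = (np.inf, np.inf, (0, 0, 0))  # (cost, squared distance, offset)
                for dx in range(-search, search + 1):
                    for dy in range(-search, search + 1):
                        for dz in range(-search, search + 1):
                            u, v, w = x + dx, y + dy, z + dz
                            if (u < 0 or v < 0 or w < 0 or u + block > dims[0]
                                    or v + block > dims[1] or w + block > dims[2]):
                                continue  # candidate block lies outside the volume
                            dst = vol_b[u:u+block, v:v+block, w:w+block].astype(np.float32)
                            cost = float(np.sum(weights * np.abs(src - dst)))
                            dist = dx * dx + dy * dy + dz * dz
                            if (cost, dist) < best[:2]:
                                best = (cost, dist, (dx, dy, dz))
                motion[(x, y, z)] = best[2]
    return motion

For a pair of consecutive volumes, estimate_block_motion(volumes[t], volumes[t + 1]) would return one displacement per block; in practice the search window size has to be chosen per dataset, as noted above.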

4 Illustrating Dynamics

When visualizing time-varying volume datasets, we use the current 3D volume, i.e., the volume which would be rendered at this point in time during sequential playback, as a frame of reference. This frame is visualized by applying direct volume rendering techniques and contributes the most to the image. The information obtained from preceding and successive 3D volumes is used to augment the image. In the following subsections we explain three different techniques for performing this augmentation. The first subsection explains how to merge the information directly contained in different 3D volumes, while the following two subsections explain how to use the meta information extracted from the dataset.

Figure 1: Three visualization techniques to depict the motion of the golfball volume dataset. The current 3D volume is rendered using isosurface shading. Edges of preceding and successive 3D volumes are shown (top), preceding 3D volumes are rendered semi-transparently (center), and dynamic motion is depicted with speedlines (bottom). Renderings of ten time steps extracted from the dataset are shown (left).

4.1 Depicting Multiple Datasets

To directly visualize the content of preceding or successive 3D volumes while rendering the current frame with direct volume rendering techniques, we propose a combination of transparency and edge detection. In contrast to motion blur, these techniques avoid blurring the information contained in the preceding or successive 3D volumes. Both techniques can be parameterized based on the difference in time between the preceding/successive 3D volume and the point in time given by the current 3D volume. The edge detection we apply is an image-based technique which allows us to emphasize the silhouettes of objects contained in a 3D volume by applying appropriate filter kernels. To select which edges should be emphasized, we introduce an edge threshold. This threshold ensures that only edges exceeding a certain thickness end up in the final image. The color of the edges is parameterized depending on the point in time of the 3D volume to be processed. In Figure 1 (top), cold colors are used to depict the silhouettes of preceding 3D volumes, while warm colors are used to emphasize the silhouettes of objects contained in successive 3D volumes. For the preceding 3D volumes the edge threshold is set depending on the difference in time, such that 3D volumes with a greater difference in time are depicted with thinner edges. Inspired by an advertisement illustration (see Figure 9), we have also applied transparency techniques in order to show the preceding 3D volumes. In these cases the transparency used to show the objects contained in the 3D volumes varies depending on their corresponding points in time. An image generated with this technique is shown in Figure 1 (center). Once the current 3D volume as well as the desired preceding and successive 3D volumes are rendered, the images need to be composed in order to obtain a single image as the final result. For this composition we can either blend the results or mask them by considering the background color.
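To give an impression of how the time-dependent parameterization of edges and transparency might be realized, the following Python sketch overlays thresholded, color-coded edges of a single neighboring time step onto the rendering of the current volume. The gradient-based edge detector, the cold/warm color values, the threshold falloff, and the blending weights are illustrative assumptions and not the exact parameters used in our implementation.

import numpy as np

def edge_overlay(rendered, base_image, dt, max_dt=3, base_threshold=0.1):
    # rendered   : grayscale rendering (H x W, floats in [0, 1]) of a neighboring time step
    # base_image : RGB rendering (H x W x 3) of the current 3D volume
    # dt         : signed time difference to the current volume (negative = preceding)
    #
    # Edges are detected in image space via the gradient magnitude; the threshold
    # grows with |dt| so that temporally distant volumes contribute thinner edges,
    # and the edge color is cold (blue) for preceding and warm (orange) for
    # successive volumes.
    gy, gx = np.gradient(rendered)
    magnitude = np.sqrt(gx**2 + gy**2)

    threshold = base_threshold * (1.0 + abs(dt) / max_dt)
    mask = magnitude > threshold

    color = np.array([0.2, 0.4, 1.0]) if dt < 0 else np.array([1.0, 0.5, 0.1])
    alpha = max(0.2, 1.0 - abs(dt) / (max_dt + 1))  # fade out with temporal distance

    out = base_image.copy()
    out[mask] = (1.0 - alpha) * out[mask] + alpha * color
    return out

# Usage sketch (renderings is assumed to be a list of grayscale images indexed by
# time step, base the rendering of the current volume):
# final = base
# for dt in (-2, -1, 1, 2):
#     final = edge_overlay(renderings[t + dt], final, dt)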

4.2 Speedlines

Speedlines are a very common approach to depict past motion in cartoons. We have developed an algorithm to automatically generate speedlines for 3D volumes by using the motion information extracted in a pre-processing step.

Figure 2: Hand-drawn speedlines are used. To enhance the desired cartoon-like impression they are perturbed during rendering.

4.2.1 Direction and Shape Estimation

Our algorithm starts by estimating the motion's predominant direction m. If all voxels move with approximately the same speed, we can simply use the motion vector with median slope among all vectors. For vector fields that exhibit a significant variance with respect to the absolute speed of the objects, we compute the weighted median of the slopes, where each slope is weighted with the (normalized) length of the corresponding motion vector. Both variants of median-finding can be implemented by first sorting the slopes according to their polar angle, followed by no more than a single scan over the sorted set of slopes [5, Ch. 9]. We then compute the convex hull of the motion vectors; the convex hull will be used as a conservative approximation of the vector field's shape. Based upon the estimated direction and shape, we compute the line l that is orthogonal to m and tangent to the convex hull such that the extension of m to a ray does not intersect l. Finally, we project the convex hull along m onto l. For reasons that become evident later, the interval obtained by this projection is shrunk to 85% of its original length, resulting in an interval l′ (see Figure 3).

Figure 3: Projection of the convex hull along the median slope m and the shrunken interval l′.

4.2.2 Speedline Positioning

The number k and the actual positions of the speedlines can be controlled by the user. The algorithm first subdivides l′ into k equal-sized subintervals and constructs for each of the k + 1 interval end points a support line parallel to m (dotted lines in Figure 4). Finally, a speedline is positioned on each of the k + 1 support lines such that it is at the distance ε selected by the user from the convex hull. The position is slightly blurred, and a randomly selected, hand-drawn model is used as speedline to enhance the visual impression (see Figure 2).
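The following Python sketch illustrates both the direction/shape estimation and the speedline positioning under simplifying assumptions: the motion vectors are assumed to be already projected into the image plane, the (weighted) median slope is obtained by sorting by polar angle and scanning once, and scipy's full convex hull serves as a stand-in for the partial hull described in the implementation details below. The circular-order handling of the angles and the exact tangency construction are omitted; function names and parameter values are illustrative.

import numpy as np
from scipy.spatial import ConvexHull  # stand-in for the partial hull computation

def estimate_direction(vectors, weighted=True):
    # Median (or length-weighted median) direction of a set of 2D motion vectors:
    # sort by polar angle, then scan the sorted list until half of the total
    # weight has been accumulated.
    angles = np.arctan2(vectors[:, 1], vectors[:, 0])
    order = np.argsort(angles)
    weights = np.linalg.norm(vectors, axis=1) if weighted else np.ones(len(vectors))
    weights = weights[order] / weights.sum()
    idx = order[np.searchsorted(np.cumsum(weights), 0.5)]
    v = vectors[idx]
    return v / np.linalg.norm(v)

def speedline_anchors(points, m, k=4, eps=5.0, shrink=0.85):
    # Anchor positions for the k + 1 speedlines: project the convex hull of the
    # points onto the line orthogonal to m, shrink the projected interval, and
    # place one anchor per subinterval end point behind the shape, offset by eps
    # against the motion direction (a simplification of the tangency construction).
    n = np.array([-m[1], m[0]])                   # direction of the line l
    hull_pts = points[ConvexHull(points).vertices]
    t = hull_pts @ n                              # projection onto l
    center = 0.5 * (t.min() + t.max())
    half = 0.5 * (t.max() - t.min()) * shrink     # shrunken interval l'
    back = hull_pts @ m                           # extent along m
    offsets = np.linspace(center - half, center + half, k + 1)
    return np.array([s * n + (back.min() - eps) * m for s in offsets])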

Figure 4: Influence of the parameters k and ε.

An application of the speedline technique is shown in Figure 1 (bottom). If we had not shrunk the interval, an extreme, yet not unlikely, configuration in which the slope of the top- or bottommost edge of the convex hull is close to m would result in a visually irritating effect: the top- or bottommost speedline would be positioned relatively far away from the other speedlines, since their support lines would run almost parallel to the top- or bottommost edges of the convex hull (see, e.g., the gray lines in Figure 4).

4.2.3 Implementation Details

Our implementation of the convex hull algorithm is based upon Andrew's algorithm [2], a convex hull algorithm that is especially efficient if the given point set is presorted in some lexicographically consistent way. We obtain such an order by computing the projection of the points, i.e., the origins and the tips of the vectors, onto the line orthogonal to m. Furthermore, since the speedlines will be drawn on only one side, say the left side, of the convex hull, it is sufficient to compute the corresponding left hull of the motion vectors. Thus, we can prune all points to the right of the line through the minimal and maximal points with respect to the above projection and simply compute the left hull of the remaining points (see Figure 5).

Figure 5: Computation of a partial convex hull.

To efficiently sort the vectors according to their slopes, we use the orientation predicate [3], which provides an elegant way of implicitly dealing with infinite slopes. However, since the polar angle order is a circular order, we have to define a zero direction such that for any two directions the circular order yields an unambiguous result. For the rendering of the first frame, we set this direction to be the direction of the negative x-axis, and for each consecutive frame, we assume temporal coherence and set it to be the inverse direction of the median m used in the previous frame.
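A compact sketch of the partial hull computation is given below, again assuming 2D points in the image plane: the points are presorted by their projection onto the direction orthogonal to m, points on the far side of the line through the two extremal points are pruned, and Andrew's monotone-chain construction builds the remaining one-sided hull using the orientation predicate. This is an illustrative reimplementation; the zero-direction bookkeeping for the circular angle order is not shown.

import numpy as np

def orientation(p, q, r):
    # Orientation predicate: > 0 if p -> q -> r is a left turn, < 0 for a right turn.
    return (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])

def left_hull(points, m):
    # One-sided (left) hull of 2D points relative to the main motion direction m.
    # Points are sorted by their projection onto the direction orthogonal to m
    # (the presorting exploited by Andrew's algorithm), points to the right of
    # the line through the extremal points are pruned, and the remaining points
    # are folded into a single chain using the orientation predicate.
    n = np.array([-m[1], m[0]])
    pts = sorted(map(tuple, points), key=lambda p: (np.dot(p, n), np.dot(p, m)))
    lo, hi = pts[0], pts[-1]
    pts = [p for p in pts if orientation(lo, hi, p) >= 0]  # keep the left side only
    chain = []
    for p in pts:
        while len(chain) >= 2 and orientation(chain[-2], chain[-1], p) <= 0:
            chain.pop()  # drop points that would create a right turn
        chain.append(p)
    return np.array(chain)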
4.3 Motion Arrows

Since speedlines require a main direction of motion within the 3D volume, different visualization techniques are needed for cases in which arbitrary directions are present. Motion arrows provide a good alternative for these cases. Furthermore, they can be used to show multiple arbitrary motion directions overlaying the main direction of movement. Arrows are appropriate because they can visualize the direction of motion and, encoded as arrow length, its velocity. However, to avoid visual clutter, only a subset of the arrows should be visualized. Another important aspect is the three-dimensional character of the arrows. This is especially hard to communicate when the arrow size falls below a certain minimum value, as is often the case when combining arrows with volume renderings in order to avoid occlusions. Thus it has to be ensured that the depth information of arrows that are almost collinear with the view vector is communicated.

4.3.1 Arrow Selection

First we address the problem of selecting a subset of the arrows for rendering. We use a two-step approach: in the first step position-dependent information is considered, in the second step motion- and orientation-dependent information. To classify the arrows based on their position, probably the simplest idea is to use a three-dimensional raster, selecting only every n-th arrow in each dimension of the 3D volume. However, this may result in several arrows occluding each other and thus confusing the observer. Nevertheless this approach is sometimes adequate to obtain an overview of the motion in the dataset, especially when the motion direction is almost uniform.
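The raster-based first step can be combined with a simple velocity criterion in a few lines of Python; positions are assumed to be voxel coordinates carrying one motion vector each, the thresholds are illustrative, and the velocity test anticipates the second, motion-based selection step described below.

import numpy as np

def select_arrows(motion, n=4, min_speed=0.5, max_speed=None):
    # motion : dict mapping voxel positions (x, y, z) to motion vectors
    # n      : keep only every n-th position in each dimension (position-based step)
    # min_speed, max_speed : velocity range of the motion-based step
    # Returns a list of (position, vector) pairs that survive both criteria.
    selected = []
    for (x, y, z), vec in motion.items():
        if x % n or y % n or z % n:
            continue  # first step: regular 3D raster sub-sampling
        speed = float(np.linalg.norm(vec))
        if speed < min_speed or (max_speed is not None and speed > max_speed):
            continue  # second step: discard arrows outside the velocity range
        selected.append(((x, y, z), vec))
    return selected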

An alternative approach is to select only the arrows lying on an isosurface given by the visualized object(s). Note that this surface cannot be computed in a pre-processing step, because it is highly dependent on various rendering parameters such as thresholding and the applied transfer function. Thus, before ray-casting the 3D volume, we compute entry points at the corresponding isosurface of the volume. Since we are using GPU-based ray-casting, we obtain these parameters in image space. Furthermore, to avoid clutter we use a two-dimensional raster to select only every n-th motion sample. Since we are working in image space, we thus obtain appropriately placed arrows. In the second step of the selection process we take into account the motion velocity or the motion direction to discard certain arrows. Meaningful parameters are the minimal and maximal velocity in the first case, and angle and variance in the second case. The user has to keep in mind that some of the motion information may be unwanted. As a hint, we can render the corresponding arrows in a different color.

4.3.2 Arrow Rendering

We first render the 3D volume and then overlay it with the arrows. As mentioned above, rendering 3D arrows may make it difficult for the user to recognize the arrows' directions. This recognition problem can be reduced by rendering only a few, relatively large arrows having an orientation that is easier to perceive. In cases where these larger arrows are not sufficient due to size restrictions, we render 2D arrows aligned in image space.

5 Use Cases

5.1 Hurricane Isabel

Figure 6: Visualization of Hurricane Isabel. Speedlines show the motion of the hurricane and the distribution of rain simultaneously (left). Volume rendering of the clouds modality, with the silhouettes of the next two time steps visualized using edge detection (right).

Hurricane Isabel was the only Category 5 hurricane in the 2003 Atlantic hurricane season. A simulation of different variables of this hurricane, such as rain, snow, or wind speed, has been performed by the National Center for Atmospheric Research in the United States. In 2004 this simulation was used in the IEEE Visualization Contest. The data for each variable consists of 48 time steps, each stored as a voxel grid with float precision. Almost all of the participating teams used a combination of volume and polygonal rendering. Only one team rendered voxels as semi-transparent quads and spheres in different colors. Another team tried to use vortex detection algorithms but was not convinced by the results of this approach; however, their automatic detection of the central region of the hurricane succeeded. The winning team extracted data with complex attributes ("high velocity" and "clouds") and rendered various properties of the hurricane.

For testing our methods we applied an appropriate scaling to obtain a data size of one byte per voxel, but we did not change the resolution. Our first aim was to visualize the main motion direction of the hurricane with speedlines while simultaneously visualizing rain. We applied our motion estimation to the rain dataset. To filter out disturbances, we selected the motion arrows by length and then used our speedline algorithm. For visualizing the rain distribution we used direct volume rendering with an appropriate transfer function combined with edge enhancement. Additionally, the motion within the hurricane was visualized with 2D arrows. To ease spatial comprehension, a map of the region is displayed below the volume rendering (see Figure 6 (left)).

We have also visualized three time steps of the cloud dataset. The first time step is rendered using direct volume rendering; the second and third time steps are rendered by displaying the silhouette and a semi-transparent filling using different shades of gray. Again we use the map layer to improve spatial comprehension. This method produces good results from a top view (see Figure 6 (right)). Viewing the dataset from the south looking northward allows interesting insights into the temporal cloud distribution. As can be seen in Figure 8, the current cloud will expand to the shape depicted by the red edges and then expand further as indicated by the blue edges. The examination of the dataset can be enhanced further by using a clipping plane aligned with the viewing plane, such that only one slice of the hurricane is visualized.

5.2 Heart Motion

Figure 7: Cut through a beating human heart. The green and blue edges depict the next two time steps (see color plate).

In the area of medical visualization we have experimented with our technique in order to visualize heart motion. We have used a segmented time-varying volume dataset, which we have cut with a clipping plane to show the motion of only the left ventricle. Again we used the edge detection technique to display one time step, while we used silhouettes for the other time steps. In this case we also give some context information by rendering the organs behind the clipping plane using toon shading. As can be seen in Figure 7, the heart motion is clearly visible. To achieve an impression similar to visualizations used in medical illustrations, we added a rendering of the body's silhouette behind the volume rendering.

6 Conclusions and Future Work

In this paper we have presented visualization techniques for depicting dynamics in still renderings of time-varying volume datasets. To extract motion information from these datasets, we have applied a slightly modified version of the three-dimensional block matching algorithm. Three different techniques have been proposed to visualize motion: (1) applying edge enhancement and transparency techniques when rendering multiple 3D volumes, (2) overlaying speedlines, and (3) rendering motion arrows. All these techniques are applied automatically and do not need any user input. In order to obtain interactive frame rates, they have been implemented as extensions to GPU-based ray-casting. To demonstrate the usability of our visualization techniques, we have described their application to real-world datasets. With the presented techniques it is possible to present a reasonable subset of the motion information contained in a time-varying dataset in a single image. Thus this information can be communicated more easily, since it can also be visualized on static media.

Our approach has the drawback that using speedlines with volume data containing different objects moving in several directions will result in wrong speedline positioning. We will address this problem by extending the motion detection to support clustering. Thus it will be possible to track only specific regions of interest within the dataset, to obtain better results when dealing with more chaotic data, or to show different speedlines with distinct directions for the specific objects.

7 Acknowledgments

This work was partly supported by grants from the Deutsche Forschungsgemeinschaft (DFG), SFB 656 MoBil Münster, Germany (project Z1). The golfball dataset is available at voreen.uni-muenster.de; it has been generated with the voxelization library vxt developed by Milos Sramek. The Hurricane Isabel dataset is courtesy of the National Center for Atmospheric Research in the United States. The medical dataset has been generated using the 4D NCAT phantom.

References

[1] P. Anandan. A computational framework and an algorithm for the measurement of visual motion. International Journal of Computer Vision, 2(3), 1989.
[2] A. M. Andrew. Another efficient algorithm for convex hulls in two dimensions. Information Processing Letters, 9(5), 1979.
[3] M. de Berg, M. van Kreveld, M. Overmars, and O. Schwarzkopf. Computational Geometry: Algorithms and Applications. 2nd ed., Springer, 2000.
[4] E. Boring and A. Pang. Directional Flow Visualization of Vector Fields. In IEEE Visualization, 1996.
[5] T. Cormen, C. Leiserson, R. Rivest, and C. Stein. Introduction to Algorithms. 2nd ed., MIT Press, 2001.
[6] A. J. Hanson and R. A. Cross. Interactive visualization methods for four dimensions. In IEEE Visualization, 1993.
[7] V. Interrante and C. Grosch. Strategies for Effectively Visualizing 3D Flow with Volume LIC. In IEEE Visualization, 1997.
[8] G. Ji, H.-W. Shen, and R. Wenger. Volume Tracking using Higher Dimensional Isocontouring. In IEEE Visualization, 2003.
[9] A. Joshi and P. Rheingans. Illustration-inspired techniques for visualizing time-varying data. In IEEE Visualization, 2005.
[10] R. S. Laramee, H. Hauser, H. Doleisch, B. Vrolijk, F. H. Post, and D. Weiskopf. The State of the Art in Flow Visualization: Dense and Texture-Based Techniques. Computer Graphics Forum, 23(2), 2004.
[11] W. de Leeuw and R. van Liere. BM3D: Motion estimation in time dependent volume data. In IEEE Visualization, 2002.
[12] A. Lu, C. Morris, D. Ebert, P. Rheingans, and C. Hansen. Non-photorealistic volume rendering using stippling techniques. In IEEE Visualization, 2002.
[13] M. Masuch, S. Schlechtweg, and R. Schulz. Speedlines: Depicting Motion in Motionless Pictures. In SIGGRAPH 99 Conference Abstracts and Applications, 1999, 277.
[14] N. Max, R. Crawfis, and D. Williams. Visualizing Wind Velocities by Advecting Cloud Textures. In IEEE Visualization, 1992.
[15] M. Nienhaus and J. Döllner. Depicting Dynamics Using Principles of Visual Art and Narrations. IEEE Computer Graphics & Applications, 25(3):40-51, 2005.
[16] N. Svakhine, Y. Jang, D. Ebert, and K. Gaither. Illustration and Photography Inspired Visualization of Flows and Volumes. In IEEE Visualization, 2005.
[17] C. Tietjen, T. Isenberg, and B. Preim. Combining Silhouettes, Surface and Volume Rendering for Surgery Education and Planning. In IEEE VGTC Symposium on Visualization, 2005.
[18] J. Woodring, C. Wang, and H.-W. Shen. High Dimensional Direct Rendering of Time-Varying Volumetric Data. In IEEE Visualization, 2003.

Figure 8: Side view of the cloud dataset superimposed with edges. In each subfigure the first volume is rendered using direct volume rendering; of the second and third volumes only the silhouette is shown, in dark and bright red, respectively.

Figure 9: A golf hangar advertisement depicting motion by showing preceding golfballs. (Image courtesy of Golf Hangar at the Düsseldorf Airport, Germany.)

Figure 10: Color version of the cut through a beating human heart. The green and blue edges depict the next two time steps.

Figure 11: Color version of the visualization of Hurricane Isabel. Speedlines show the motion of the hurricane and the distribution of rain simultaneously (top). Volume rendering of the clouds modality, with the silhouettes of the next two time steps visualized using edge detection (bottom).


More information

Data Visualization (DSC 530/CIS )

Data Visualization (DSC 530/CIS ) Data Visualization (DSC 530/CIS 60-0) Isosurfaces & Volume Rendering Dr. David Koop Fields & Grids Fields: - Values come from a continuous domain, infinitely many values - Sampled at certain positions

More information

Shadows in the graphics pipeline

Shadows in the graphics pipeline Shadows in the graphics pipeline Steve Marschner Cornell University CS 569 Spring 2008, 19 February There are a number of visual cues that help let the viewer know about the 3D relationships between objects

More information

Interactive Collision Detection for Engineering Plants based on Large-Scale Point-Clouds

Interactive Collision Detection for Engineering Plants based on Large-Scale Point-Clouds 1 Interactive Collision Detection for Engineering Plants based on Large-Scale Point-Clouds Takeru Niwa 1 and Hiroshi Masuda 2 1 The University of Electro-Communications, takeru.niwa@uec.ac.jp 2 The University

More information

The convex hull of a set Q of points is the smallest convex polygon P for which each point in Q is either on the boundary of P or in its interior.

The convex hull of a set Q of points is the smallest convex polygon P for which each point in Q is either on the boundary of P or in its interior. CS 312, Winter 2007 Project #1: Convex Hull Due Dates: See class schedule Overview: In this project, you will implement a divide and conquer algorithm for finding the convex hull of a set of points and

More information

Multi-variate Visualization of 3D Turbulent Flow Data

Multi-variate Visualization of 3D Turbulent Flow Data Multi-variate Visualization of 3D Turbulent Flow Data Sheng-Wen Wang a, Victoria Interrante a, Ellen Longmire b a University of Minnesota, 4-192 EE/CS Building, 200 Union St. SE, Minneapolis, MN b University

More information

Pipeline Operations. CS 4620 Lecture 14

Pipeline Operations. CS 4620 Lecture 14 Pipeline Operations CS 4620 Lecture 14 2014 Steve Marschner 1 Pipeline you are here APPLICATION COMMAND STREAM 3D transformations; shading VERTEX PROCESSING TRANSFORMED GEOMETRY conversion of primitives

More information

Scalar Data. CMPT 467/767 Visualization Torsten Möller. Weiskopf/Machiraju/Möller

Scalar Data. CMPT 467/767 Visualization Torsten Möller. Weiskopf/Machiraju/Möller Scalar Data CMPT 467/767 Visualization Torsten Möller Weiskopf/Machiraju/Möller Overview Basic strategies Function plots and height fields Isolines Color coding Volume visualization (overview) Classification

More information

3D object recognition used by team robotto

3D object recognition used by team robotto 3D object recognition used by team robotto Workshop Juliane Hoebel February 1, 2016 Faculty of Computer Science, Otto-von-Guericke University Magdeburg Content 1. Introduction 2. Depth sensor 3. 3D object

More information

Computer Graphics (CS 563) Lecture 4: Advanced Computer Graphics Image Based Effects: Part 2. Prof Emmanuel Agu

Computer Graphics (CS 563) Lecture 4: Advanced Computer Graphics Image Based Effects: Part 2. Prof Emmanuel Agu Computer Graphics (CS 563) Lecture 4: Advanced Computer Graphics Image Based Effects: Part 2 Prof Emmanuel Agu Computer Science Dept. Worcester Polytechnic Institute (WPI) Image Processing Graphics concerned

More information

Differential Structure in non-linear Image Embedding Functions

Differential Structure in non-linear Image Embedding Functions Differential Structure in non-linear Image Embedding Functions Robert Pless Department of Computer Science, Washington University in St. Louis pless@cse.wustl.edu Abstract Many natural image sets are samples

More information

Volume rendering for interactive 3-d segmentation

Volume rendering for interactive 3-d segmentation Volume rendering for interactive 3-d segmentation Klaus D. Toennies a, Claus Derz b a Dept. Neuroradiology, Inst. Diagn. Radiology, Inselspital Bern, CH-3010 Berne, Switzerland b FG Computer Graphics,

More information

Whole Body MRI Intensity Standardization

Whole Body MRI Intensity Standardization Whole Body MRI Intensity Standardization Florian Jäger 1, László Nyúl 1, Bernd Frericks 2, Frank Wacker 2 and Joachim Hornegger 1 1 Institute of Pattern Recognition, University of Erlangen, {jaeger,nyul,hornegger}@informatik.uni-erlangen.de

More information

Point Cloud Filtering using Ray Casting by Eric Jensen 2012 The Basic Methodology

Point Cloud Filtering using Ray Casting by Eric Jensen 2012 The Basic Methodology Point Cloud Filtering using Ray Casting by Eric Jensen 01 The Basic Methodology Ray tracing in standard graphics study is a method of following the path of a photon from the light source to the camera,

More information

AudioFlow. Flow-influenced sound. Matthias Adorjan, BSc February 12, 2015

AudioFlow. Flow-influenced sound. Matthias Adorjan, BSc February 12, 2015 AudioFlow Flow-influenced sound Matthias Adorjan, BSc e0927290@student.tuwien.ac.at February 12, 2015 1 Introduction Nowadays flow visualization is used in various application fields to make invisible

More information

Efficient View-Dependent Sampling of Visual Hulls

Efficient View-Dependent Sampling of Visual Hulls Efficient View-Dependent Sampling of Visual Hulls Wojciech Matusik Chris Buehler Leonard McMillan Computer Graphics Group MIT Laboratory for Computer Science Cambridge, MA 02141 Abstract In this paper

More information

Fmri Spatial Processing

Fmri Spatial Processing Educational Course: Fmri Spatial Processing Ray Razlighi Jun. 8, 2014 Spatial Processing Spatial Re-alignment Geometric distortion correction Spatial Normalization Smoothing Why, When, How, Which Why is

More information