ADVANCED FLOW VISUALIZATION


DISSERTATION

Presented in Partial Fulfillment of the Requirements for the Degree
Doctor of Philosophy in the Graduate School of The Ohio State University

By

Liya Li, B.E., M.S.

The Ohio State University
2007

Dissertation Committee:
Professor Han-Wei Shen, Adviser
Professor Roger Crawfis
Professor Yusu Wang

Approved by: Adviser, Graduate Program in Computer Science and Engineering

© Copyright by Liya Li 2007

ABSTRACT

Flow visualization has been playing a substantial role in many engineering and scientific applications, such as the automotive industry, computational fluid dynamics, chemical processing, and weather simulation and climate modelling. Many methods have been proposed in the past decade to visualize steady and time-varying flow fields, in which texture-based and geometry-based visualization are widely used to explore the underlying fluid dynamics. This dissertation presents a view-dependent flow texture algorithm, an illustrative streamline placement algorithm for two-dimensional vector fields, and an image-based streamline placement algorithm for three-dimensional vector fields.

Flow texture, generated through convolution and filtering of texture values according to the local flow vectors, is a dense representation of the vector field that provides global information about the flow structure. A view-dependent algorithm for multi-resolution flow texture advection on two-dimensional structured rectilinear and curvilinear grids is presented. By using an intermediate representation of the underlying flow fields, the algorithm can adjust the resolution of the output texture on the fly as the user zooms in and out of the field, which avoids aliasing while ensuring enough detail.

Geometry-based methods use geometries, such as lines, tubes, or balls, to represent the motion paths advected from the vector fields. They provide a sparse representation

and an intuitive visualization of flow trajectories. For two-dimensional vector fields, a streamline placement strategy is presented to generate representative and illustrative streamlines, which can effectively prevent visual overload by emphasizing the essential and deemphasizing the trivial or repetitive flow patterns. A user study was performed to quantify the effectiveness of this visualization algorithm, and the results are provided. For three-dimensional vector fields, an image-based streamline seeding algorithm is introduced to better display the streamlines and reduce visual cluttering in the output images. Various effects can be achieved to enhance the visual understanding of three-dimensional flow lines.

To Tao, and my parents.

ACKNOWLEDGMENTS

I am grateful to my advisor, Dr. Han-Wei Shen, who expertly guided me through my PhD study and helped me through many research difficulties. I would like to express my sincere gratitude to Dr. Roger Crawfis, Dr. Yusu Wang, Dr. Garry McKenzie, and my other committee members. Thank you very much for your valuable time and effort, insightful criticism, and advice.

A heartfelt thanks to my colleagues Dr. Jinzhu Gao, Dr. Antonio Garcia, Teng-Yok Lee, Dr. Naeem Shareef, Dr. Chaoli Wang, and Jonathan Woodring, whose hard work and passion encouraged me. I enjoyed working with them and learning from them. I would like to extend my thanks to the other members of the Computer Graphics group, with whom I shared memorable experiences over the past five years. I wish you all the best in your respective research and careers.

My deepest gratitude is to my husband Tao Li, for everything. For love and for life. I am very grateful to my loyal friends for their encouragement.

VITA

Born: Hubei, China
B.E. Computer Science, Beijing Institute of Technology, China
M.S. Computer Science, Beijing Institute of Technology, China
M.S. Computer Science, The Ohio State University
September - August: University Fellow, The Ohio State University
September - March: Graduate Teaching Associate, The Ohio State University
April - August: Graduate Research Associate, The Ohio State University
June - September: Research Intern, The National Center for Atmospheric Research
September - November: Intern, NVIDIA

PUBLICATIONS

Refereed Papers

Liya Li, Hsien-Hsi Hsieh, and Han-Wei Shen, "Illustrative Streamline Placement and Visualization." IEEE Pacific Visualization Symposium, March.

Liya Li and Han-Wei Shen, "Image-Based Streamline Generation and Rendering." IEEE Transactions on Visualization and Computer Graphics, 13(3), May.

Liya Li and Han-Wei Shen, "View-dependent Multi-resolutional Flow Texture Advection." Visualization and Data Analysis.

Chaoli Wang, Jinzhu Gao, Liya Li, and Han-Wei Shen, "A Multiresolution Volume Rendering Framework for Large-Scale Time-Varying Data Visualization." In Proceedings of International Workshop on Volume Graphics 2005, Stony Brook, New York, pages 11-19, June.

Jinzhu Gao, Chaoli Wang, Liya Li, and Han-Wei Shen, "A Parallel Multiresolution Volume Rendering Algorithm for Large Data Visualization." Parallel Computing (Special Issue on Parallel Graphics and Visualization), 31(2), February.

Unrefereed Papers

Liya Li and Han-Wei Shen, "Image-Based Streamline Generation and Rendering." Technical Report OSU-CISRC8/06-TR71, Department of Computer Science and Engineering, The Ohio State University.

FIELDS OF STUDY

Major Field: Computer Science and Engineering

Studies in:
Computer Graphics (Professor Han-Wei Shen)
Computer Architecture (Professor Gagan Agrawal)
Computer Networking (Professor Dong Xuan)

TABLE OF CONTENTS

Abstract
Dedication
Acknowledgments
Vita
List of Tables
List of Figures

Chapters:

1. Introduction
2. Background
   Vector Fields
   Grids
   Integral Curves
   Critical Points and Flow Topology
3. Related Work
   Flow Texture
   Two-dimensional Streamline Placement
   Simplification of Vector Fields
   Streamline Clustering
   Three-dimensional Streamline Placement
   Visualization Enhancement

4. View-dependent Multi-resolutional Flow Texture Advection
   Algorithm Overview
   Flow Field Representation
   Texture Advection
   Spatial Coherence
   Multi-resolutional Texture Advection
   Adjustment of Advection Step Size
   Results
5. Illustrative Streamline Placement
   Algorithm Overview
   Distance Field
   Computation of Local Dissimilarity
   Influence from Multiple Streamlines
   Computation of Global Dissimilarity
   Selection of Candidate Seeds
   Topology-Based Enhancement
   Quality Analysis
   Quantitative Comparison
   User Study
   Results
6. Image-Based Streamline Generation and Rendering
   Algorithm Overview
   Image Space Streamline Placement
   Evenly-spaced Streamlines in Image Space
   Streamline Placement Strategies
   Additional Run Time Control
   Results
7. Conclusions

Bibliography

LIST OF TABLES

4.1 Datasets used in the experiments. Note that the size for the vortex data includes all 31 time steps. The sizes are in KBytes.
The time for trace slice preprocessing and texture creation and loading (in seconds).
The size of trace slices (in MBytes) including all time steps. Note that the Vortex dataset is time-varying.
The percentages of user rankings for each image based on the easiness to follow the underlying flow paths.
The percentages of user rankings for each image based on the easiness to locate the critical points by observing the streamlines.
The percentages of user rankings for each image based on the overall effectiveness of visualization considering the flow paths and critical points.
Information of four different datasets, and the number of streamlines generated by the algorithm.
Timings (in seconds) measured for generating streamlines. Each row corresponds to a data set listed in the same row of the preceding table.

LIST OF FIGURES

1.1 Hand-drawn streamlines for a flow field around a cylinder. Image courtesy of Greg Turk [63].
2.1 Different types of grids (a) regular Cartesian grid (b) irregular Cartesian grid (c) structured grid (d) unstructured grid.
2.2 Classification of critical points of two-dimensional vector fields. R1 and R2 denote the real parts of the eigenvalues of the Jacobian matrix, while I1 and I2 denote the imaginary parts. Image courtesy of Helman and Hesselink [24].
The creation of trace slices by backward advection.
Texture advection using two-stage texture lookups.
Comparison of particle position errors for travelling 1 to 10 time steps using the down-sampled trace slices and the down-sampled vortex dataset reduced from 100x100 to 50x50. The X axis indicates the time steps that the particles have travelled, and the Y axis indicates particle position errors compared to the accurate traces using the Euclidean distance in the field.
Comparison of particle position errors for travelling 1 to 10 time steps using the down-sampled trace slice and the down-sampled vortex dataset reduced from 100x100 to 25x25. The X axis indicates the time steps that the particles have travelled, and the Y axis indicates particle position errors compared to the accurate traces using the Euclidean distance in the field.
Rendering of the post dataset (a) with (b) without the multi-resolution level of detail control.
(a) With the correction of noise distribution, no stretched pattern can be seen. (b) Rendering using LIC with the original resolution of 52x.

4.7 The image on the left was generated when zoomed in. As the user zoomed out from the image on the left, the algorithm was able to produce a clearer pattern by switching to a lower resolution of trace slices and noise texture (upper right), while the algorithm with no LOD control produced an aliased result (lower right).
A similar test as Figure 4.7 using the time-varying vortex dataset. It can be seen that this algorithm produced a better image (upper right) compared with no level of detail adjustment (lower right).
Streamlines generated by the algorithm.
Assume the flow field is linear and streamlines are straight lines. The circle in the images denotes the region where a critical point is located. Black lines represent the exact streamlines seeded around the critical point. The orange lines represent the approximate vectors by considering the influence of only one closest streamline (left), and the blended influence of the two closest streamlines (right).
Streamlines generated by the algorithm on the Oceanfield data.
Streamlines generated when the flow topology is considered. There are three saddle and two attracting focus critical points in this data.
(a) Representative streamlines generated by the algorithm. (b) Gray scale image colored by one minus a normalized value of the cosine of the angle between vectors from the original field and the reconstructed field. Dark color means the two vectors are almost aligned with each other, while brighter color means more error. The maximal difference between the vector directions in this image is about 26 degrees, and the minimal difference is 0 degrees.
(a) Gray scale image colored by the distance errors (in the unit of cells) between two streamlines integrated from each grid point in the original vector field and the reconstructed one. Dark color means low error, while brighter color means higher error. (b) Histogram of the streamline errors collected from all grid points in the field. The X axis is the error, while the Y axis is the frequency of the corresponding error value. The maximal difference is 23.1 and the minimal is 0.0. The dimensions of the field are 100x.

5.7 A group of images used in the first task of the user study.
Streamlines generated by Mebarki et al.'s algorithm (left), Liu et al.'s algorithm (middle), and my algorithm (right).
Interface for predicting particle advection paths. Blue arrows on red streamlines show the flow directions. The red point is the particle to be advected from.
Mean errors for the advection task on the four different datasets. The X axis stands for the radius of circles around the selected points, and the Y axis depicts the mean error plus or minus the standard deviation. A larger value along the Y axis means higher error. The Y axis starts from -1 to make the graphs easier to visualize. Dimensions of the datasets: (a) 64x64 (b) 64x64 (c) 64x64 (d) 100x.
Visualization pipeline of the image-based streamline generation scheme.
Streamlines generated on two different stream surfaces.
Seeding templates for different types of critical points - left: repelling or attracting node; middle: attracting or repelling saddle and spiral saddle; right: attracting or repelling spiral (critical point classification image courtesy of Alex Pang).
Streamlines generated from critical point templates. Three sphere templates stand for sinks, while the two-cone template stands for a saddle.
(a) An isosurface of velocity magnitude colored by using the velocity (u,v,w) as (r,g,b). (b) Streamlines generated from the isosurface.
(a) A slicing plane colored by using the velocity (u,v,w) as (r,g,b). (b) Streamlines generated from the slicing plane.
(a) The cylinder as an external object. (b) Streamlines generated from a cylinder.
Level-of-detail streamlines generated at three different scales. It can be seen that as the field is projected to a larger area, more streamlines that can better reveal the flow features are generated.

6.9 Streamlines computed using different offsets from a depth map generated by a sphere. (a) no offset from the original depth map (b) by increasing a value from the original depth map (c) by further increasing a value from the original depth map (d) by decreasing a value from the original depth map.
An example of peeling away one layer of streamlines by not allowing them to integrate beyond a fixed distance from the input depth map.
First row: rendering images of a stream surface from different viewpoints. Second row: streamlines generated at the corresponding viewpoint. Third row: the combined images of streamlines rendered from four different views.
Streamlines generated from three different cylinder locations (left three images) are combined together and rendered into the image on the right.
Streamline densities are controlled by velocity magnitude on a slice. (a) larger velocity magnitudes are displayed in brighter colors (b) the streamlines generated from the slice.
Streamlines generated and rendered with three different styles by the image-based algorithm.
The percentage of total time each main step used.
The pink curve (the left axis as scale) shows the number of streamlines, while the blue one (the right axis as scale) shows the number of line segments generated.
The time (in seconds) to generate streamlines from an isosurface for different separating distances (pixels) using the Plume data set.

CHAPTER 1

INTRODUCTION

Simulations play an important role in scientific and engineering fields; they can be used to simulate a phenomenon, analyze what has happened, and predict what will happen. By utilizing graphics techniques to process the data output from simulations, visualization [17] provides an intuitive way to interpret the data and further explore the phenomena described by the data. This visual information bridges the gap between the explicit and implicit information inherent in the data and the end users.

Visualization of vector fields has long been a major interest as a way to explore fluid dynamics [1, 66], and it has evolved from experimental flow visualization to computer-simulated flow visualization. Taking some of the experimental flow visualization techniques [16] as examples, we can see how helpful they are in solving different problems. To study the flow field on a surface, color pigments are mixed with oil and painted on the surface of a model in a wind tunnel [48]. The air flowing over the surface carries the oil with it, and a streaky deposit of the paint remains to mark the directions of the flow. To visualize the flow dynamics of a liquid, color dyes are injected into the liquid. This was a popular way to visualize how the flow converges, diverges, and mixes downstream as the dyes flow through. When using different colors, we can

know how streams from different regions mix together. For experimental techniques, both the setup of equipment and the materials used can introduce errors during the experiments. In addition, it is not easy to reproduce the results of previous experiments with special setups. With faster computing power and higher complexity of simulations, more and more data have been analyzed using computational models in recent decades.

Among the existing computerized techniques to visualize flow fields, texture-based and geometry-based methods are the two most popular. The goal of my research is to design new algorithms to effectively visualize vector fields and to address the following issues.

Texture-based Flow Visualization: Existing texture advection techniques for structured recti- and curvi-linear grid data can be classified into object space and image space methods. In object space methods such as [12, 42], the computation of textures is first performed in the domain that defines the flow field, using techniques such as Line Integral Convolution (LIC). The resulting texture is then mapped to the proxy geometry representing the underlying surface and displayed on the screen. Image space methods such as IBFVS [70] or ISA [34], on the other hand, perform the calculation directly on the screen. In those methods, the computation is done at the per-fragment level through successive advection and blending of textures.

In general, object space methods do not consider the viewing parameters related to the display when the texture advection is performed. As the resulting flow texture is mapped to the surface mesh, aliasing or distortion can happen if there is a large discrepancy between the density of the mesh and the resolution of the screen. When multiple grid cells are projected to a single pixel, for example, a straightforward

mapping of the flow texture to the geometry will produce an aliased result because the texture is under-sampled. When the density of the grid after projection is much lower than the screen resolution, on the other hand, the output texture does not possess enough granularity to depict the flow directions clearly, because the texels in the flow texture will only get enlarged or interpolated. In fact, both aliasing and a lack of clear depiction of flow directions can exist simultaneously when the grid density varies substantially across the field domain. This is particularly common for data defined on curvilinear grids.

For the image space methods, when the underlying flow field is defined on a surface mesh existing in three-dimensional space, the mesh is first projected onto the image plane before the texture advection and blending are performed. IBFVS [70] does the projection after the vertices are advected in the flow field, while ISA [34] projects both the vertices and the vector field before the advection is performed. In those methods, since the input texture is defined and advected in screen space, the texture aliasing problem is alleviated. Performing the texture advection after projection, however, may encounter several problems. First, since the input noise is defined in screen space and thus has the same size and frequency everywhere regardless of the distances, forms, and sizes of the objects, important cues for depth and shape reasoning in three-dimensional space are lost. Also, when multiple cells are projected onto the same pixel, since the advection and blending are performed in image space, the path of the texture advection can be incorrect. Finally, the restriction of only using input textures defined on the image plane makes it more difficult to control the appearance of the output, especially when it is desirable to advect textures that adhere to the object surfaces.

A view-dependent flow texture advection algorithm is presented based on a hybrid image and object space approach. The algorithm can be applied to two-dimensional steady and time-varying vector fields defined on structured rectilinear and curvilinear surface meshes. It is an image space method in the sense that the flow texture is computed at each fragment at the rasterization stage, when the screen projection of the mesh has already been determined by the given viewing parameters. It is an object space method because the input texture to be advected and the flow line advection paths are all defined in the original domain where the flow field is defined. This preserves important depth cues that allow better depiction of shapes. The proposed algorithm is based on a novel intermediate representation of the flow field, called a Trace Slice, which allows the flow texture to be generated at a desired resolution interactively based on the run-time viewing parameters. The algorithm can generate flow patterns with appropriate granularity in the output texture even at places where the mesh is sparse. As the user zooms in and out of the field, a flow texture of an appropriate resolution is computed at an interactive rate. When the view is constantly changing, it will not produce a blurred result as in the image-based methods [70, 34], where the texture only clears up over the course of several frames.

Two-dimensional Streamline Placement: Generally speaking, the main challenge for streamline-based methods is the placement of seeds. On the one hand, placing too many streamlines can make the final images cluttered, and hence the data become more difficult to understand. On the other hand, placing too few streamlines can miss important flow features.

Hand-drawn streamlines are frequently shown in the scientific literature to provide concise and illustrative descriptions of the underlying physics. Fig. 1.1 shows such an example. The abstract information provided by the streamlines in the image clearly shows the primary features of the flow field. Even though the streamlines do not cover the entire field, we human beings are able to create a mental model to reconstruct the flow field when looking at this concise illustration. That means that, to depict a flow field, it is unnecessary to draw streamlines at a very high density. Abstraction can effectively prevent visual overload by emphasizing only the essential while deemphasizing the trivial or repetitive flow patterns.

In the visualization research literature, several streamline seeding algorithms have been proposed in the past [63, 28, 72, 45, 39]. Most of these methods, however, are based on evenly-spaced distribution criteria, namely streamlines are spaced evenly apart at a pre-set distance threshold across the entire field. While those methods can reduce visual cluttering by terminating the advection of streamlines when they are too close to each other, more streamlines than necessary are often generated as a result. In addition, there is no visual focus provided to help the viewers quickly identify the overall structure of the flow field.

Figure 1.1: Hand-drawn streamlines for a flow field around a cylinder. Image courtesy of Greg Turk [63].
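The evenly-spaced criterion used by those prior methods can be sketched as follows. This is a minimal illustration only, not the algorithm presented in this dissertation: the toy circular field, the forward-Euler integrator, and the distance thresholds are all assumptions for the demo.

```python
# Sketch of an evenly-spaced seeding criterion (in the spirit of prior
# methods such as [28]): a candidate seed is accepted only if it lies at
# least d_sep away from every point of every streamline placed so far.
import numpy as np

def field(p):
    # Hypothetical 2D vector field: a simple circular flow.
    x, y = p
    return np.array([-y, x])

def trace(seed, h=0.01, steps=500):
    # Forward Euler with normalized steps; RK4 would be used in practice.
    pts = [np.asarray(seed, float)]
    for _ in range(steps):
        v = field(pts[-1])
        n = np.linalg.norm(v)
        if n < 1e-9:
            break                      # stop near critical points
        pts.append(pts[-1] + h * v / n)
    return np.array(pts)

def place_evenly_spaced(candidates, d_sep=0.2):
    placed = []                        # list of streamline point arrays
    for seed in candidates:
        if placed:
            all_pts = np.vstack(placed)
            if np.min(np.linalg.norm(all_pts - seed, axis=1)) < d_sep:
                continue               # too close to an existing line
        placed.append(trace(seed))
    return placed

grid = [np.array([x, y]) for x in np.linspace(-1, 1, 10)
                         for y in np.linspace(-1, 1, 10)]
lines = place_evenly_spaced(grid)
print(len(lines))   # far fewer streamlines than the 100 candidate seeds
```

Note how the pre-set threshold `d_sep` alone controls density everywhere, which is exactly the uniformity that the illustrative placement in Chapter 5 moves away from.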

Spatial coherence often exists in a flow field, meaning neighboring regions have similar vector directions and nearby streamlines resemble each other. To create a concise and illustrative visualization of streamlines, a seeding strategy is presented that utilizes the spatial coherence of streamlines in two-dimensional vector fields. The goal is to succinctly and effectively illustrate vector fields, rather than uniformly laying out streamlines with equal distances among them, as in most of the existing methods. The density of streamlines in the final image is varied to reflect the coherence of the underlying flow patterns and provide visual focus. In the algorithm, two-dimensional distance fields representing the distances from each grid point in the field to the nearby streamlines are computed. From the distance fields, a local metric is derived to measure the dissimilarity between the vectors from the original field and an approximate field computed from the nearby streamlines. A global metric is defined to measure the dissimilarity between streamlines. A greedy method chooses a point as the next seed if both its local and global dissimilarity satisfy the requirements.

Three-dimensional Streamline Placement: Streamline placement becomes more difficult for three-dimensional vector fields. For line primitives, after being projected to the screen, the relative depth relationship between neighboring line segments is lost. Thus, even though two lines are far away from each other in three-dimensional space, they might give the impression that they overlap or intersect with each other in two-dimensional space. For visualizing streamlines in three-dimensional vector fields, spatial perception is important to consider as well. An ideal streamline seed

placement algorithm should be able to generate visually pleasing and technically illustrative images. It should also allow the user to focus on important local features in the flow field.

To better display three-dimensional streamlines and reduce visual cluttering in the output images, an image-based approach is presented. Visual cluttering happens because streamlines can arbitrarily intersect or overlap with each other after being projected to the screen, which makes it difficult for the user to perceive the underlying flow structures. In the algorithm, instead of placing streamline seeds in three-dimensional space, seeds are placed on the image plane and then unprojected back to object space before streamline integration takes place. The three-dimensional positions of the seeds can be uniquely determined by the selected image positions and their depth values obtained from an input depth map. By carefully spacing out the streamlines in image space as they are integrated, visual cluttering can be effectively reduced, which minimizes the depth ambiguity caused by overlapping streamlines in the image. It is feasible to achieve a variety of effects, such as level of detail, depth peeling, and stylized rendering, to enhance the perception of three-dimensional flow lines. Another advantage is that seed placement and streamline visualization become more tightly coupled with other visualization techniques. As the user is exploring other flow-related variables, when interesting features are spotted on the screen, the seeds can be directly placed on the image without a separate process to find seed positions surrounding the features generated by the visualization technique in use.

The remainder of this dissertation is organized as follows: Chapter 2 presents some definitions and background on vector fields that will be useful for subsequent

chapters. Chapter 3 reviews the related work. New algorithms are described in Chapters 4, 5, and 6, respectively. The dissertation is concluded in Chapter 7. Portions of this research have been published in citations [37] and [38].
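The image-based seeding idea introduced above hinges on one operation: mapping a chosen pixel plus its depth-map value back to a 3D position. A minimal sketch of that unprojection follows; the camera matrix (an identity, standing in for a real inverse view-projection matrix) and the pixel values are assumptions for the demo, not the dissertation's implementation.

```python
# Sketch: unproject an image-plane seed through a depth map to object space.
import numpy as np

def unproject(px, py, depth, inv_viewproj, width, height):
    """Map pixel (px, py) with depth in [0, 1] to a 3D world position."""
    # Pixel coordinates -> normalized device coordinates in [-1, 1].
    ndc = np.array([2.0 * px / width - 1.0,
                    2.0 * py / height - 1.0,
                    2.0 * depth - 1.0,
                    1.0])
    world = inv_viewproj @ ndc
    return world[:3] / world[3]        # perspective divide

# Demo with an identity "camera" (assumption): the screen center at mid
# depth unprojects to the world origin.
inv_vp = np.linalg.inv(np.eye(4))
seed3d = unproject(320, 240, 0.5, inv_vp, 640, 480)
print(seed3d)   # seed position in object space; integration starts here
```

With a real camera, `inv_viewproj` would be the inverse of the view-projection matrix used to render the depth map, so every accepted screen-space seed lands exactly on the visible surface it was picked from.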

CHAPTER 2

BACKGROUND

This chapter is devoted to the introduction of vector fields and related knowledge that will be useful for subsequent chapters.

2.1 Vector Fields

A vector field $f$ defined on an $n$-dimensional domain $S \subset \mathbb{R}^n$ can be represented by a mapping $f(x): S \to \mathbb{R}^n$, where $x$ is defined in standard Euclidean coordinates $(x_1, x_2, \ldots, x_n)$. In the case of $n = 3$, this represents a static three-dimensional vector field. Usually, flow data is given with respect to time; such a field, called a time-varying flow field, can be defined as $f(x, t): S \times I \to \mathbb{R}^n$, where $I \subset \mathbb{R}$ and $t \in I$ serves as a description of time.

Grids

In real applications, flow simulations are performed on a certain type of grid. For example, if a simulation is performed along the surface of a plane, the grid is constructed to wrap the shape of the plane. Therefore, flow fields can be defined on various types of grids. Generally speaking, there are regular and irregular types. For the regular type, for example Fig. 2.1 (a), (b), and (c), there is a

mathematical relationship among the composing points and cells, so the grid can be implicitly represented, which saves memory storage and computation for operations such as data interpolation and point location. The irregular type, for example Fig. 2.1 (d), is the most general form: both the topology and geometry are completely unstructured.

Figure 2.1: Different types of grids (a) regular Cartesian grid (b) irregular Cartesian grid (c) structured grid (d) unstructured grid.

Regular Cartesian Grid

The regularity of topology and geometry of this type suggests a natural mapping to the x-y-z coordinate system. A particular point or cell can be uniquely indexed by three indices i-j-k, which simplifies both the interpolation of data and the location of points and cells in the grid.

Irregular Cartesian Grid

The difference between regular and irregular Cartesian grids is the regularity of the geometry. For irregular Cartesian grids, geometry information, such as the cell size, needs to be considered in some operations, such as interpolation.
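The indexing and interpolation advantages of regular Cartesian grids described above can be sketched as follows; the grid values, the unit cell spacing, and the function names are illustrative assumptions.

```python
# Sketch: on a regular Cartesian grid, cell location and interpolation
# weights follow directly from the coordinates, with no stored topology.
import numpy as np

def flat_index(i, j, k, nx, ny):
    # Implicit i-j-k indexing into a flat array of nx*ny*nz samples.
    return i + nx * (j + ny * k)

def bilinear_sample(grid2d, x, y):
    """Sample a 2D scalar grid (unit spacing) at continuous position (x, y)."""
    i, j = int(np.floor(x)), int(np.floor(y))      # O(1) point location
    fx, fy = x - i, y - j                          # local weights
    return ((1 - fx) * (1 - fy) * grid2d[j, i] +
            fx * (1 - fy) * grid2d[j, i + 1] +
            (1 - fx) * fy * grid2d[j + 1, i] +
            fx * fy * grid2d[j + 1, i + 1])

g = np.array([[0.0, 1.0],
              [2.0, 3.0]])
print(flat_index(1, 2, 3, nx=4, ny=4))   # -> 57
print(bilinear_sample(g, 0.5, 0.5))      # -> 1.5 (average of the corners)
```

On an irregular Cartesian or curvilinear grid, neither step stays this cheap: point location needs a search, and the weights depend on per-cell geometry.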

Structured Curvilinear Grid

A structured curvilinear grid is a type of grid with regular topology but irregular geometry. That is to say, the topology of the grid can be implicitly represented by specifying the dimensions; the geometry, however, needs to be explicitly represented by an array of point coordinates. A typical application is Computational Fluid Dynamics (CFD), where a grid is generated to wrap the surface of objects in the flow field. The density of the grid can vary, depending on the structure of the object and the required accuracy: the higher the required accuracy, or the larger the local gradients, the denser the grid. Usually the visualization of vector fields defined on curvilinear grids can be performed in two different spaces:

Physical space: In physical space, the parameter surfaces are described and the motion is defined as well. Properties of the vector field, such as velocity, density, pressure, or temperature, are generated and stored at each grid point. The coordinates of a grid point in this space can be denoted as $x = (x, y, z)$.

Computational space: This space lies on a regular Cartesian grid, which is transformed from physical space using the inverse Jacobian matrix. The coordinates of a grid point in this space can be denoted as $\xi = (\xi, \eta, \zeta)$.

Even though it is feasible to directly apply visualization techniques in physical space on a curvilinear grid, usually the vector field is transformed to computational space to perform numerical operations. This is because the regularity of the topology in computational space simplifies interpolation and point location. In computational space, at any location, the vector can be reconstructed by linear interpolation of the vectors at neighboring grid points. However, in physical space,

the physical geometry is irregular and every cell has its own shape and size, so the reconstruction becomes much more complex.

The velocity in physical space and computational space can be denoted as Equations 2.1 and 2.2, respectively:

$$\frac{\partial \vec{x}}{\partial t} = \left(\frac{\partial x}{\partial t},\; \frac{\partial y}{\partial t},\; \frac{\partial z}{\partial t}\right)^T \qquad (2.1)$$

$$\frac{\partial \vec{\xi}}{\partial t} = \left(\frac{\partial \xi}{\partial t},\; \frac{\partial \eta}{\partial t},\; \frac{\partial \zeta}{\partial t}\right)^T \qquad (2.2)$$

The transformation of the vector field from physical space to computational space is then specified by

$$\begin{pmatrix} \xi_t \\ \eta_t \\ \zeta_t \end{pmatrix} = \begin{pmatrix} x_\xi & y_\xi & z_\xi \\ x_\eta & y_\eta & z_\eta \\ x_\zeta & y_\zeta & z_\zeta \end{pmatrix}^{-1} \begin{pmatrix} x_t \\ y_t \\ z_t \end{pmatrix} \qquad (2.3)$$

Unstructured Grid

For unstructured grids, both the topology and geometry are irregular. Compared with structured grids, the connectivity relationships between vertices must be stored explicitly. Ueng et al. [64] proposed an efficient method to construct streamlines on unstructured grids.

Integral Curves

One of the main tasks of applying visualization techniques to vector fields is to explore the dynamical evolution of a fluid system, which gives rise to a set of integral curves. These curves are defined by ordinary differential equations with different initial conditions,

$$\frac{\partial \vec{x}(t)}{\partial t} = \vec{f}(\vec{x}, t) \qquad (2.4)$$

where $\vec{x}(t)$ represents the particle position at time $t$, and $t$ is the integration time.

Streamline: A streamline is a curve that is everywhere tangent to the instantaneous local vector field. In an unsteady flow field, the instantaneous vector at a fixed time is considered. At an instantaneous time $\tau$, a streamline is the solution to

$$\frac{\partial \vec{x}(t)}{\partial t} = \vec{f}(\vec{x}, \tau), \quad \text{where } \vec{x}(t_0) = \vec{x}_0 \qquad (2.5)$$

Pathline: A pathline is the actual path traveled by an individual fluid particle over some time period. Starting from time $t_0$, a particle path is the solution to

$$\frac{\partial \vec{x}(t)}{\partial t} = \vec{f}(\vec{x}, t), \quad \text{where } \vec{x}(t_0) = \vec{x}_0 \qquad (2.6)$$

Streakline: A streakline is the line joining the positions of all particles that have been released previously from the same point. To get a streakline at time $t$, a set of particles are released from a position $\vec{x}_0$ at times $s \in [t_1, t]$, and the position of each particle at time $t$ is the solution to Equation 2.6 with its corresponding initial condition $(\vec{x}_0, s)$.

To solve these ODEs, numerical methods [6] are usually applied. Depending on the required accuracy, performance, and complexity, different approaches can be used. In this dissertation, I mainly use the fourth-order Runge-Kutta (RK4) numerical integration method [49] to better approximate the local behavior of the integral curves. For a static vector field, RK4 is defined by:

$$\begin{aligned}
k_1 &= h\,\mathbf{v}(\mathbf{x}_n) \\
k_2 &= h\,\mathbf{v}(\mathbf{x}_n + \tfrac{1}{2}k_1) \\
k_3 &= h\,\mathbf{v}(\mathbf{x}_n + \tfrac{1}{2}k_2) \\
k_4 &= h\,\mathbf{v}(\mathbf{x}_n + k_3) \\
\mathbf{x}_{n+1} &= \mathbf{x}_n + \tfrac{1}{6}k_1 + \tfrac{1}{3}k_2 + \tfrac{1}{3}k_3 + \tfrac{1}{6}k_4 + O(h^5)
\end{aligned} \quad (2.7)$$

where h is the step size and can be adjusted adaptively: when the flow becomes turbulent, a smaller step size is used to capture the changes, and when the flow becomes stable, a larger step size is used to save integration time.

2.2 Critical Points and Flow Topology

Critical points are points at which the magnitude of the vector vanishes. In mathematics, a critical point is a point in the domain of a function where the derivative is zero or the function is not differentiable; it is also called a stationary point. The critical points in a vector field determine the topology of that field [22, 23, 2], which is very important for analyzing the underlying flow dynamics. Critical points can be characterized according to the behavior of nearby tangent curves (two-dimensional) or tangent surfaces (three-dimensional). To simplify the discussion, a two-dimensional vector field is used as an example for discussing critical points and their classification. A vector (u, v) in the vicinity of a critical point (x_0, y_0) can be expressed by the first-order Taylor series expansion

$$u(dx_1, dy_1) \approx \frac{\partial u}{\partial x_1}\,dx_1 + \frac{\partial u}{\partial y_1}\,dy_1, \qquad v(dx_1, dy_1) \approx \frac{\partial v}{\partial x_1}\,dx_1 + \frac{\partial v}{\partial y_1}\,dy_1 \quad (2.8)$$

where dx_1 and dy_1 are small distance increments from the position of the critical point. This critical point can be classified according to the eigenvalues of the Jacobian matrix, as defined in 2.3, of the vector (u, v) with respect to (x_0, y_0). Fig. 2.2 shows the classification of critical points according to the eigenvalues for two-dimensional vector fields. A real eigenvector of the matrix defines a direction such that, moving slightly off the critical point in that direction, the field is parallel to the direction of movement. Thus, at the critical point, the real eigenvectors are tangent to the trajectories that end on the point. The positive or negative real part of an eigenvalue indicates the attracting (incoming) or repelling (outgoing) nature of a critical point: when the real part is greater than zero, it is a repelling critical point; otherwise, it is an attracting one. The imaginary part denotes whether the point presents a circulation pattern: when the imaginary part is nonzero, the corresponding critical point is a repelling or attracting focus rather than a node. A saddle point is distinct from the other types because only four tangent curves end at the point. These curves are tangent to the two eigenvectors of the Jacobian matrix, which are the separatrices of the saddle point. It has one positive and one negative eigenvalue. Near a saddle, the vector field approaches the critical point along the negative eigendirections and recedes along the positive eigendirections. The same principle and method for classifying critical points can be applied to three-dimensional vector fields, except that there are more types of critical points.
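As an illustration of the two pieces of background above, the RK4 step of Equation 2.7 and the eigenvalue-based classification of critical points can be sketched in code. This is a hypothetical NumPy example written for this discussion, not code from the dissertation; the test field v(x, y) = (−x − y, x − y) is an assumed analytic example with an attracting focus at the origin.

```python
import numpy as np

def rk4_step(v, x, h):
    """One fourth-order Runge-Kutta step for dx/dt = v(x) (Eq. 2.7)."""
    k1 = h * v(x)
    k2 = h * v(x + k1 / 2.0)
    k3 = h * v(x + k2 / 2.0)
    k4 = h * v(x + k3)
    return x + k1 / 6.0 + k2 / 3.0 + k3 / 3.0 + k4 / 6.0

def classify_critical_point(J, eps=1e-9):
    """Classify a 2D critical point from the Jacobian's eigenvalues (Fig. 2.2)."""
    w = np.linalg.eigvals(J)
    re, im = w.real, w.imag
    if re[0] * re[1] < 0:                  # one positive, one negative eigenvalue
        return "saddle"
    kind = "focus" if np.any(np.abs(im) > eps) else "node"
    return ("repelling " if re.max() > 0 else "attracting ") + kind

# Example: v(x, y) = (-x - y, x - y) has eigenvalues -1 +/- i at the origin.
v = lambda x: np.array([-x[0] - x[1], x[0] - x[1]])
J = np.array([[-1.0, -1.0], [1.0, -1.0]])   # Jacobian of v (constant here)

x = np.array([1.0, 0.0])
for _ in range(2000):                       # integrate a streamline with RK4
    x = rk4_step(v, x, 0.01)

print(classify_critical_point(J))           # prints "attracting focus"
print(np.linalg.norm(x) < 1e-3)             # trajectory converged to the origin
```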

Figure 2.2: Classification of critical points of two-dimensional vector fields. R1 and R2 denote the real parts of the eigenvalues of the Jacobian matrix, while I1 and I2 denote the imaginary parts. Image courtesy of Helman and Hesselink [24].

CHAPTER 3

RELATED WORK

In this chapter, I discuss related work in the areas of flow texture, streamline placement, vector field simplification, streamline clustering, and perception enhancement.

3.1 Flow Texture

Texture advection has been widely used for visualizing flow fields [44, 51, 33]. It provides full spatial coverage of the field, which can better show its global characteristics. Many techniques have been proposed to visualize textures on two-dimensional or three-dimensional vector fields. Among the existing texture advection methods, Line Integral Convolution (LIC) [5] and Spot Noise [67] are generally considered classic. In LIC, convolution is performed along the streamline path originating from each pixel in a two-dimensional grid to create coherent flow patterns. In Spot Noise, random spots are warped along the local flow directions and blended together to create the final image. Both algorithms have inspired a substantial amount of follow-up research in the past decade [56, 9, 12, 53, 31, 74, 78, 73, 41, 54, 19, 8, 71, 58, 65], to name a few.
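The core of LIC as described above — convolving a noise texture along the streamline through each pixel — can be sketched in a few lines. The following is a simplified, hypothetical NumPy version (Euler integration, box kernel, wrap-around boundaries), not any published implementation:

```python
import numpy as np

def lic(vx, vy, noise, L=10, h=0.5):
    """Naive LIC: average the noise along the streamline through each pixel."""
    H, W = noise.shape
    out = np.zeros_like(noise, dtype=float)
    for py in range(H):
        for px in range(W):
            total, count = 0.0, 0
            for sign in (1.0, -1.0):           # trace forward and backward
                x, y = float(px), float(py)
                for _ in range(L):
                    i, j = int(round(y)) % H, int(round(x)) % W
                    total += noise[i, j]
                    count += 1
                    u, v = vx[i, j], vy[i, j]
                    n = np.hypot(u, v)
                    if n < 1e-12:
                        break                  # stop at critical points
                    x += sign * h * u / n      # step along the unit flow direction
                    y += sign * h * v / n
            out[py, px] = total / count
    return out

rng = np.random.default_rng(0)
noise = rng.random((32, 32))
vx = np.ones((32, 32))                         # uniform horizontal flow
vy = np.zeros((32, 32))
img = lic(vx, vy, noise)
# Convolution along the flow smooths intensities in that direction,
# so the output variance drops relative to the input noise.
print(img.var() < noise.var())
```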

To speed up the computation of the original LIC algorithm, Stalling and Hege [56] introduced a new line integral algorithm called FastLIC. Unlike the original LIC algorithm, which integrates a streamline from every pixel in the output image and performs convolution, FastLIC efficiently reuses the intensities obtained along the convolution path of every streamline by spreading the values out to the many pixels covered by that streamline. It not only saves a large amount of computation, but also makes it feasible to compute images at arbitrary resolution. For time-varying fields, Shen and Kao [54] presented UFLIC, an Unsteady Flow Line Integral Convolution algorithm. Their algorithm uses a time-accurate value scattering scheme to model the texture advection process. To further enhance the coherence of the flow animation, they successively update the convolution results over time by using the output from the previous step as input to the next step. Jobard et al. [27] proposed a Lagrangian-Eulerian Advection (LEA) algorithm for unsteady flows which performs texture advection at each fragment at interactive speed. van Wijk [69] proposed Image Based Flow Visualization (IBFV), which advects the underlying mesh by the flow field. Through successive updates of texture coordinates at the mesh vertices, an input texture is continuously advected and blended. Recently, Li et al. [36, 55] proposed the Chameleon algorithm, which utilizes GPU-based dependent texture hardware for more flexible control of texture appearance to visualize three-dimensional steady and unsteady flows. Xue et al. [76] proposed two techniques to render implicit flow fields, which are constructed to record information about flow advection. The algorithm provides a way to visualize information inside the flow volume. For non-parametric surfaces, van Wijk recently extended his IBFV [69] to IBFVS [70], which first advects the mesh vertices in three-dimensional space, and then

projects the vertices to screen space to advect a screen-space-aligned input texture. Laramee et al. [34] proposed another image-space-based method, called ISA, which projects the mesh vertices as well as the vector field to the two-dimensional screen before texture advection is performed. Weiskopf et al. [75] proposed a unified framework for two-dimensional time-varying fields that can generate animated flow textures to highlight both instantaneous and time-dependent flow features. There have also been algorithms proposed specifically to visualize flow textures on curvilinear grids, even though the techniques for generating flow textures on surfaces can be applied as well. Forssell and Cohen [12] extended the original LIC to visualize flow on curvilinear grids. First, the vectors in physical space, which defines the warped structure of the curvilinear grid, are converted to computational space, which defines the grid's logical organization as a regular grid, by multiplying the vector at each grid point by the inverse Jacobian matrix. The conventional LIC algorithm is then performed in computational space to generate the two-dimensional flow texture, which is finally mapped back onto the three-dimensional surface in physical space. As the sizes of cells in a curvilinear grid can differ dramatically, when mapping the texture back to physical space the distortion of the texture in each cell can differ as well, which might give users a wrong impression of the underlying field. This effect can become more severe if animation is used to emphasize the flow motion. To address the aliasing caused by the uniform convolution length used in the LIC algorithm, they proposed a varying convolution length based on the grid density in the direction of the flow. In [42], Mao et al.
pointed out that the solution proposed by Forssell and Cohen is not enough to completely solve the problem, because the noise granularity is just as important when generating flow textures

on curvilinear grids. They proposed to use a multi-granularity noise texture based on a stochastic sampling technique called Poisson ellipse sampling. The computational space is re-sampled into a set of randomly distributed points, and the sizes of the ellipses are adjusted according to the local cells in physical space. The final noise image, reflecting the density of the grid, is reconstructed from these points and ellipses and used as input to LIC. As for multi-frequency noise images for LIC, Kiu and Banks [31] presented an explicit method using the velocity magnitude. The vector field is divided into intervals, with each interval mapping to a range of vector magnitudes. The noise frequency assigned to each interval is inversely proportional to the vector magnitude, and the final noise image is composed of a sequence of images with different frequencies. Although the problem of aliasing in high-density grid regions is alleviated, their method is not interactive and cannot adapt to arbitrary viewing conditions as the user zooms in and out of the field.

3.2 Two-dimensional Streamline Placement

For two-dimensional static vector fields, there exist several streamline seeding strategies. One general strategy is to place streamlines evenly according to the distance between them in the field. In this layout, the image space is uniformly divided by the streamlines. Turk and Banks first proposed the image-guided streamline placement algorithm in [63], which uses an energy function to measure the difference between a low-pass filtered streamline image and an image of the desired visual density. The motivation is that the energy of evenly placed streamlines should be even too. A high energy value means streamlines are close to each other, while low energy

means the regions are devoid of streamlines. With this energy function, a random optimization process is performed iteratively to reduce the energy through some predefined operations on the streamlines. This method produces high-quality images; however, convergence is slow and the computation is expensive. Jobard and Lefer [28] proposed a simple and fast method to generate evenly-spaced streamlines. A new seed is chosen at a minimal distance away from existing streamlines, and the streamline from this seed is integrated until it comes too close to the existing streamlines or leaves the domain. The process continues until there is no void region left in the field. It explicitly controls the distance between adjacent streamlines to achieve the desired density. Unlike the energy-based algorithm, which spends its time trying candidate operations, the most time-consuming step here is the computation of distances between streamlines. Because new seeds are placed near existing streamlines, the algorithm can conflict with the preference for long streamlines: the new streamline tends to approach an existing streamline quickly, so its integration is terminated before it can travel far. To address this issue, Mebarki et al. [45] proposed a two-dimensional streamline seeding algorithm that places a new streamline at the point farthest away from all existing streamlines. The purpose of their algorithm is to generate long and evenly spaced streamlines, and seeding a streamline in the largest empty region indeed favors longer streamlines. A Delaunay triangulation is used to tessellate the regions between streamlines, which finds the largest void region and controls the distance between streamlines. Flow coherence is improved, even though discontinuities still appear in the results as the density increases. Liu et al.
[39] proposed an advanced evenly-spaced streamline placement strategy which prioritizes topological seeding and

long streamlines to minimize discontinuities. Adaptive distance control based on local flow variance is used to address the cavity problem. Even though the evenly-spaced streamline placement strategy is popular, some potential issues remain. First, due to its simplicity, it takes no account of the underlying flow structure. On the one hand, if the distance threshold is set too large, it is easy to miss important flow features; on the other hand, if the distance is set too small, the streamlines become very dense, which can cause a visual aliasing effect. Second, the termination of streamline integration is decided by the distance between streamlines. As a result, it can be unclear whether a termination is caused by the flow field itself, for example by hitting a critical point, or by the distance constraints with neighbors in the final image. Some algorithms intentionally favor longer streamlines, but as their authors note, it is not easy to generate streamlines satisfying both the distance criterion and the length preference. This effect becomes worse for vector fields with convergent flow structure, because the flow in those regions tends to squeeze together: streamlines there can easily be terminated early, which can leave void regions in the final images. Another strategy places streamlines according to the flow topology. The flow topology can be determined by the types of critical points, and the flow field can be divided into stable regions by critical points and tangent curves (two-dimensional) or tangent surfaces (three-dimensional). The topology skeleton is important for analyzing flow fields, so it is natural to place streamlines based on this information. Verma et al. [72] proposed a seed placement strategy based on flow topology characterized by critical points in two-dimensional vector fields.
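The distance-controlled seeding loop of the evenly-spaced strategy discussed earlier in this section can be sketched as follows. This is a stripped-down, hypothetical version that uses a coarse occupancy grid in place of exact streamline distances; it is not Jobard and Lefer's actual implementation:

```python
import numpy as np

def place_streamlines(v, W, H, d_sep=4, h=0.5, max_len=400):
    """Greedy distance-controlled placement: integrate a streamline until it
    enters a coarse grid cell already claimed by another line (a stand-in for
    an exact distance test), then look for the next free seed."""
    occupied = np.zeros((H // d_sep + 1, W // d_sep + 1), dtype=bool)
    lines = []
    for sy in range(0, H, d_sep):
        for sx in range(0, W, d_sep):
            if occupied[sy // d_sep, sx // d_sep]:
                continue                      # seed too close to existing lines
            x, y, pts, own = float(sx), float(sy), [], set()
            for _ in range(max_len):
                if not (0.0 <= x < W and 0.0 <= y < H):
                    break                     # left the domain
                c = (int(y) // d_sep, int(x) // d_sep)
                if occupied[c] and c not in own:
                    break                     # too close to another streamline
                own.add(c)
                pts.append((x, y))
                u, w = v(x, y)
                n = (u * u + w * w) ** 0.5
                if n < 1e-12:
                    break                     # reached a critical point
                x += h * u / n                # unit-speed Euler step
                y += h * w / n
            for c in own:
                occupied[c] = True
            if len(pts) > 1:
                lines.append(pts)
    return lines

# Uniform horizontal flow on a 40x40 domain: one line survives per seed row.
lines = place_streamlines(lambda x, y: (1.0, 0.0), 40, 40, d_sep=4)
print(len(lines))                             # 10 parallel lines
```

Note how the termination ambiguity discussed above shows up even in this sketch: the same `break` structure handles leaving the domain, hitting a critical point, and violating the distance constraint.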

The algorithm was designed to capture the important flow patterns while also covering the region sufficiently with streamlines. Critical points in the field are first located and their types identified. The field is divided into regions, each containing one critical point, and the corresponding pre-defined template is applied according to the type of each critical point. The shape and size of the templates are determined by the influence region covered by the critical points. For sufficient coverage, additional seed points are randomly distributed in empty regions using a Poisson disk distribution. In this way, the important features will not be missed, no matter how dense or how sparse the final density of streamlines is.

3.3 Simplification of Vector Fields

In past years, many techniques have been proposed to simplify two-dimensional vector fields. One general class of simplification, also known as clustering, works on the vector field itself, for example by constructing a hierarchy of vector fields. Heckel et al. [18] proposed to generate a top-down segmentation of the discrete field by splitting clusters of points. At the beginning, all points of the vector field are placed in a single cluster, defined by a single representative point and an associated vector, both computed by averaging the coordinates and vector values of the original vector field. To decide how to split this initial cluster, two streamlines are integrated from each point in the cluster, one based on the simplified vector field and the other based on the original vector field. The accumulated distance between the sequences of sampling points on the two streamlines serves as the error value at that point. The error value of the whole cluster is then the maximum error value over all points in the cluster. Each

cluster is split using a bisection plane. The construction of the hierarchy is an iterative process, which always picks the cluster with the maximal error as the next cluster to split, until the maximal error of each cluster is less than a threshold value. Since the algorithm uses the visual difference shown by streamlines as the error metric to guide the splitting process, it implicitly involves some information about the topology of the underlying flow field. Telea and van Wijk [60] presented a method to hierarchically construct the clusters bottom-up from the input flow field. Starting with each node as a cluster, the algorithm repeatedly selects the two most similar neighboring clusters and merges them to form a larger cluster, until a single cluster covering the whole field is generated. The metric to evaluate the similarity between vectors is based on comparisons of direction, magnitude, and position. Du and Wang [10] proposed to use Centroidal Voronoi tessellations (CVTs) to simplify and visualize vector fields. Given the definition of the distance between two points in the vector field, which involves both the angle between the two vectors and the Euclidean distance, each point in a Voronoi region is closer to that region's generator than to the generator of any other Voronoi region. The result of the tessellation is the simplified vector field, with the centroid of each cell as the representative vector. The properties of CVT ensure that the results are globally optimized rather than locally greedy. This algorithm is fast and easy to implement.
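The bottom-up merging scheme described above can be sketched as follows. This is a simplified, hypothetical version that folds position and vector differences into a single cost, not Telea and van Wijk's actual metric or implementation:

```python
import numpy as np

def simplify(points, vectors, n_clusters):
    """Greedy bottom-up clustering: repeatedly merge the two most similar
    clusters until n_clusters remain. Each cluster is represented by the
    mean of its points and the mean of its vectors."""
    clusters = [([i], points[i].astype(float), vectors[i].astype(float))
                for i in range(len(points))]
    while len(clusters) > n_clusters:
        best, pair = np.inf, None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                # Cost combines position distance and vector difference
                # (the latter captures both direction and magnitude).
                cost = (np.linalg.norm(clusters[a][1] - clusters[b][1]) +
                        np.linalg.norm(clusters[a][2] - clusters[b][2]))
                if cost < best:
                    best, pair = cost, (a, b)
        a, b = pair
        ids = clusters[a][0] + clusters[b][0]
        p = np.mean([points[i] for i in ids], axis=0)    # representative point
        w = np.mean([vectors[i] for i in ids], axis=0)   # representative vector
        clusters = [c for k, c in enumerate(clusters) if k not in (a, b)]
        clusters.append((ids, p, w))
    return clusters

# Two groups of samples: vectors pointing +x on the left, +y on the right.
pts = np.array([[0, 0], [0, 1], [10, 0], [10, 1]], dtype=float)
vecs = np.array([[1, 0], [1, 0], [0, 1], [0, 1]], dtype=float)
result = simplify(pts, vecs, 2)
print(sorted(sorted(c[0]) for c in result))   # [[0, 1], [2, 3]]
```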

3.4 Streamline Clustering

Fiber tracking, also known as streamline tracing, is widely used in visualizing the results of Diffusion Tensor Imaging (DTI) [35]. Bundles constructed by clustering fibers convey anatomically meaningful information, defining the connections between different grey-matter regions. Fibers in DTI differ from general streamlines in conventional vector fields because the fields have an inherent clustering structure, meaning the spatial coherence in local regions is more pronounced than in general vector fields. The methods used to cluster streamlines [46] can nevertheless inspire how to compare streamlines in conventional vector fields. Corouge et al. [7] proposed to use the position and shape similarity between pairs of fibers and tested several distance metrics. For a pair of fibers, the distance between them can be evaluated as the closest point distance, the mean of closest-point distances, or the Hausdorff distance between corresponding points on the fibers. The shape-based distance is computed by extracting geometric features from fibers, such as length, center of mass, and second-order moments. Brun et al. [4] presented a clustering method using normalized cuts. Representative information, such as mean vectors and the covariance matrices of points on the traces, is first extracted and mapped to a Euclidean feature space. In the feature space, fiber traces are compared pairwise, and a weighted, undirected graph is created. This graph is partitioned into coherent sets using the normalized cut criterion.

3.5 Three-dimensional Streamline Placement

For three-dimensional vector fields, the flow topology becomes more complex than in two-dimensional fields. For rendering, when three-dimensional streamlines are

projected to the two-dimensional screen, they can intersect or overlap with each other, and depth information is lost. Thus, existing methods for two-dimensional fields cannot be extended effectively to three-dimensional fields. To generate aesthetically pleasing streamlines in three-dimensional flow fields, spacing control in object space alone is not enough; spacing control in image space is just as important. Compared with streamline seeding for two-dimensional vector fields, less work has been proposed for three-dimensional fields that addresses these issues. Following the topology-based seed placement [72] for two-dimensional vector fields, Ye et al. [77] extended this strategy to three-dimensional fields. Critical points are first located and classified, then appropriate templates are applied in the vicinity of the critical points. Finally, Poisson seeding is used to populate the remaining empty regions. Placing streamlines with higher importance first can reduce visual clutter to a certain degree. Even though this method is effective for placing streamlines in three-dimensional vector fields, analyzing the topology is not easy, because critical points are detected and classified numerically and approximately rather than exactly. Some three-dimensional fields contain no critical points at all, in which case the seeds are simply placed with a Poisson distribution. Work presented by Mattausch et al. in [43] was an extension of Jobard and Lefer's evenly-spaced algorithm [28] and multi-resolution strategy [29]. Spatial perception of the three-dimensional flow was improved by using depth cueing and halos. They also applied focus+context methods, ROI-driven streamline placement, and spotlights to address the occlusion problem. This method only controls the spacing between streamlines in

object space; there is no guarantee of spacing in image space or of the completeness of the flow pattern, where completeness of flow information refers to the topology conveyed by the final set of streamlines. In this case, both visual clutter and loss of information are unavoidable.

3.6 Visualization Enhancement

Some research has addressed issues of visual clutter, occlusion, and perception in the context of flow visualization. Taking three-dimensional streamlines as an example: when projected to image space, the spatial information of streamlines is lost, which can hinder users' understanding and exploration of the underlying flow patterns. Lighting is one element that improves spatial perception, especially when streamlines are bundled. Stalling and Zöckler [57] employed a realistic shading model to interactively render a large number of properly illuminated fieldlines using two-dimensional textures. A unique outward normal vector, which is well defined on surfaces, does not exist for line primitives; instead, traditional lighting equations are transformed into a form defined by the light vector and the tangent vector of the line primitives. The approach is based on a maximum lighting principle, which gives a good approximation of specular reflection. To improve diffuse reflection, Mallo et al. [40] proposed a view-dependent lighting model based on averaged Phong/Blinn lighting of infinitesimally thin cylindrical tubes, using a simplified expression for cylinder averaging. To emphasize depth discontinuities, which intuitively present depth separation in a projected view, Interrante and Grosch [26] used a visibility-impeding volumetric halo function to highlight the locations and strengths of depth discontinuities. Interactive clipping of three-dimensional LIC volumes [50]

addresses the occlusion issue. Li et al. [36] used lighting, silhouettes, and tone shading to incorporate various depth cues in their rendering framework. Limb darkening [20, 21] can be used to convey the three-dimensional shape and depth relations of the fieldlines by creating a halo effect around each line.

CHAPTER 4

VIEW-DEPENDENT MULTI-RESOLUTIONAL FLOW TEXTURE ADVECTION

4.1 Algorithm Overview

The primary goal is to visualize two-dimensional steady and unsteady flow fields defined on recti- or curvi-linear surface meshes that exist in three-dimensional space. An example of such data is a computational plane from a curvilinear mesh, as shown in Figure 4.7 in the results section (Section 4.6). To provide interactivity at run time, texture advection is computed directly at each fragment in image space using graphics hardware. Particle paths and the input texture to be advected are defined in object space to generate accurate and correct appearances. For grid cells that cover multiple pixels, the goal is to generate flow patterns with enough granularity within the projection region of those cells in the output texture, without up-sampling the flow field or integrating additional particle paths. The main idea is to generate textures of various resolutions to match the screen resolution as the user zooms in and out of the data. To do so, it is important to have an intermediate representation of the underlying flow field whose resolution can be easily adjusted at

run time to allow flexible and efficient texture advection. This intermediate representation should avoid the problems commonly encountered when up- or down-sampling a flow field: up-sampling the vector field would create a large overhead for computing additional particle traces, while down-sampling the vector field can generate incorrect flow paths. The latter is because a small error in each vector of the down-sampled field can accumulate into a large error during numerical integration. Another important criterion is that such an intermediate representation can be used directly by the texture advection algorithm and allows for effective use of modern graphics hardware.

4.2 Flow Field Representation

The core of the algorithm is a novel representation of the underlying flow field, called a Trace Slice, which is used for texture advection under different viewing conditions. The primary advantage is that instead of generating multi-resolution vector fields and using the approximated field to perform texture advection, the advection paths of flow textures can be obtained more accurately at arbitrary resolutions using the trace slices. The trace slices also make it feasible to exploit programmable GPUs to perform view-dependent texture advection for each fragment at interactive speed. A trace slice S^{T_d}_{T_s} is a two-dimensional array with the same dimensions as the input grid. Each element of the trace slice corresponds to a mesh vertex, which can be indexed by coordinates (i, j) defined on the parametric surface. The attribute stored in the trace slice, denoted S^{T_d}_{T_s}(i, j), is a 2-tuple (a, b), which means that if a particle is released from (a, b) in the flow field at time step T_s, it reaches vertex (i, j) at time

46 step T d, where T d > T s. In essence, the information stored in a trace slice S T d T s is primarily used to advect a texture N released at time step T s to time step T d. This is done by using the 2-tuple (a, b) stored in S T d T s (i, j) as the texture coordinates to look up the input texture N defined at T s. To create the trace slices for a given flow field, for every grid point (i, j) in the mesh, perform the following steps for each time step T d = 0...T max, where T max is the maximal time step in the time-varying field. Given a time T d, we first perform a backward advection from the grid point (i, j), which results in a pathline travelling in the space-time domain. Then, we sample the pathline at a sequence of time instants t i = T d i t, i = 1..K to get K positions of the pathline for a constant K as long as t i stays greater than zero. Here the discussion about the choice of K is deferred to section 4.4. Since the pathline is advected backwards, this space-time position (a, b, t i ) implies that a particle released from (a, b) at time step t i travels forwards and arrives at (i, j) at time T d. According to our definition of trace slices, the position (a, b) is stored into S T d t i (i, j). If the same process is repeated for all grid points from the input mesh, a two-dimensional array S T d T i (i, j), i [0, I max ], j [0, J max ] is formed, which is a trace slice. Because the pathlines are sampled K times for different t i, there are K trace slices, which all have the same destination time step T d. If the underlying field is a steady flow field, the virtual time is used for computing streamlines in this process. Figure 4.1 shows the process of creating K trace slices. 4.3 Texture Advection This section first describes how the trace slices are used to perform flow texture advections at run time for a fixed resolution. Multi-resolution texture advection will 31

Figure 4.1: The creation of trace slices by backward advection.

be described in Section 4.5. Since the underlying flow field is defined on a two-dimensional structured recti- or curvi-linear mesh, the mesh surface can be rendered using one polygon (for a regular Cartesian grid) or multiple polygons (one per cell for a structured curvilinear grid) mapped with flow textures that are computed at run time. The texture rendering algorithm takes as input an initial texture to be advected and a set of trace slices loaded in as textures. Without loss of generality, in the following it is assumed that the texture to be advected is a noise texture, although any texture can be used as input for the advection. A mapping is needed from the trace slice to the mesh surface. This is necessary because when the mesh geometry is rasterized, each fragment looks up the trace slices and uses the 2-tuples stored in the corresponding locations to look up the noise texture. In the algorithm, this mapping is established by using the two-dimensional mesh parameters (i, j), i ∈ [0, I_max], j ∈ [0, J_max], as the texture coordinates for the mesh vertices, where I_max and J_max are the dimensions of the structured mesh. Besides the mapping between the trace slices and the surface mesh, a mapping from the input noise texture to the mesh is also needed. Conceptually this mapping

can be seen as distributing the noise on the surface in order for the advection to take place. In the algorithm, when the mesh is a regular Cartesian grid, this mapping is the same as the mapping of the trace slices to the mesh surface. For a curvilinear mesh, however, care should be taken so that the noise is mapped to the mesh in the physical domain as uniformly as possible, regardless of cell size or shape. With the mappings from the trace slices and the input noise to the mesh established, the texture advection algorithm can now be explained as follows. Given an input texture N released at time T_s, and knowing that the texture color at (x, y) from time step T_s is to be advected to the point (i, j) at time step T_d if the trace slice S^{T_d}_{T_s}(i, j) = (x, y), a two-stage texture lookup can be performed to advect the texture N to time step T_d. First, the mesh polygon is rasterized, where each fragment interpolates the texture coordinates provided at the neighboring mesh vertices and then looks up the trace slice texture using the interpolated texture coordinates. Then, the 2-tuple (x, y) retrieved from the trace slice texture S^{T_d}_{T_s} for the fragment is used as the texture coordinates to look up the input noise texture N. The advection of the noise texture from T_s to T_d is expressed as:

$$C(i, j, T_d) = N(x, y) = N(S^{T_d}_{T_s}(i, j)) \quad (4.1)$$

where C(i, j, T_d) is the color of the fragment with texture coordinates (i, j) at T_d, (x, y) is the 2-tuple stored at (i, j) in the trace slice, and N(x, y) is the texel from the noise texture. Figure 4.2 shows the two-stage texture lookup algorithm for texture advection.

Figure 4.2: Texture advection using two-stage texture lookups.

4.4 Spatial Coherence

The advection algorithm presented above only calculates the influence of the input texture released at T_s on the frame at T_d. In fact, a fragment at time T_d receives contributions from the noise textures released at multiple time steps. Therefore, to compute the output color of each fragment at time T_d, the algorithm is changed to:

$$C(i, j, T_d) = \frac{1}{K} \sum_{k=1}^{K} N(S^{T_d}_{T_d - k\Delta t}(i, j)) \quad (4.2)$$

where K represents the number of previous time steps before T_d that influence the final color of each fragment. The average of the color contributions from the K input noise textures is assigned to the fragment. As demonstrated by the LIC algorithm [5], coherence of the pixel intensity along the flow lines provides effective cues to illustrate the underlying flow direction. The combination of textures described here establishes such pixel-intensity coherence along pathlines that are relatively steady. This is because adjacent pixels along a pathline have a large overlap in their backward advection traces.
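Equations 4.1 and 4.2 amount to an indirect array lookup followed by an average over K such lookups. A CPU-side NumPy sketch of what the fragment program computes, using an assumed synthetic flow (uniform horizontal motion of one texel per step) rather than the dissertation's GPU code:

```python
import numpy as np

H, W, K = 64, 64, 8
rng = np.random.default_rng(1)
noise = rng.random((H, W))                 # input texture N

# Trace slice S^{Td}_{Td - k*dt}: at (i, j) it stores the seed (a, b) whose
# particle, released k steps before T_d, reaches (i, j) at T_d.
def trace_slice(k):
    a = np.broadcast_to(np.arange(H)[:, None], (H, W))
    b = np.broadcast_to((np.arange(W) - k) % W, (H, W))
    return a, b

# Eq. 4.1: two-stage lookup -- fetch (a, b) from the trace slice, then use it
# as texture coordinates into the noise texture.
a, b = trace_slice(5)
advected = noise[a, b]                     # C(i, j, T_d) = N(S(i, j))
print(np.allclose(advected, np.roll(noise, 5, axis=1)))   # True

# Eq. 4.2: average the contributions of the noise released at the K previous
# time steps; averaging along the flow yields LIC-like intensity coherence.
frame = np.mean([noise[trace_slice(k)] for k in range(1, K + 1)], axis=0)
print(frame.var() < noise.var())           # True: smoothing along the flow
```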

Note that here the method to create the texture advection result at each animation frame T_d differs from some of the existing methods (LEA [27], IBFV [69], IBFVS [70]) in the sense that there is no need to use the output of the previous frame as the input texture for the current frame. There are several reasons for this choice. First, the removal of inter-frame dependency allows the user to change the camera position or transform the mesh surface continuously, since consecutive frames no longer need to be rendered under the same view, a requirement of the previous methods. Second, when the underlying domain is not a simple two-dimensional flat plane and occlusions may occur, it is not possible to transform the output from the frame buffer back to object space and continue the advection for the next frame. Furthermore, as will be described in the next section, the algorithm computes the output image using different resolutions of the trace slices S_{T_s}^{T_d} and the input texture N based on OpenGL mipmapping mechanisms. The resolution of the trace slice and noise texture to use is determined for each fragment independently. Therefore, unless the texture advection for the previous frame is computed at all levels, which is expensive, it is difficult to satisfy the needs of all fragments that may be rendered at different mipmap levels. Finally, the animation frames in the algorithm can be generated simultaneously, so it becomes possible to use different threads with multiple graphics cards to implement the algorithm when the underlying dataset is large.

4.5 Multi-resolution Texture Advection

In essence, the motivation for generating multi-resolution textures is to address the problems of aliasing and a lack of detail when visualizing the flow fields. With the

trace slices and the texture advection algorithm presented above, the algorithm can generate multi-resolution flow textures under various viewing conditions by adjusting the resolutions of the input noise and the trace slices on the fly. Specifically, to avoid rendering artifacts, it is important that both the ratio between the size of a trace slice texel and the size of a fragment within the object's projected area on the screen, and the ratio between the size of a trace slice texel and the size of an input noise texel, remain approximately one. This requirement can be enforced with the classic mipmapping algorithm.

In the algorithm, OpenGL's mipmapping function is exploited to implement multi-resolution texture advection. Starting from a base level of the input noise texture at a pre-defined resolution, every 2x2 block of texels is averaged recursively to create a sequence of mip-mapped input textures. The same operation is applied to each of the trace slices, which is equivalent to creating a multi-resolution version of the particle trace locations. It is worth noting that although conceptually creating a lower resolution trace slice is similar to down-sampling the flow field, it is in fact fundamentally different, in the sense that the trace slices are computed from the original field. Integrating a particle using a lower resolution vector field is much more susceptible to accumulated errors, which make the pathline drift away from the correct path.

As the user zooms out of the field, when the density of the projected mesh cells exceeds that of the screen pixels, OpenGL mipmapping is triggered and automatically chooses a lower resolution trace slice for each fragment to perform the texture advection algorithm presented above. An appropriate level of the input noise texture is chosen when the trace slice 2-tuple S_{T_s}^{T_d}(i, j) is used to access the noise texture. In

the algorithm, mipmapping mainly helps when multiple cells are projected to a single pixel, which prevents aliasing. When a cell is projected to multiple pixels, since the texture advection is computed per fragment, flow patterns of fine granularity within the cell are still generated. This is because each fragment within the cell looks up the trace slice based on the texture coordinates interpolated from the corners of the cell and performs the advection of noise texels along the interpolated pathline locations. As long as the input noise texture has enough resolution, the resulting flow texture can convey discernible flow patterns.

4.5.1 Adjustment of Advection Step Size

According to equation 4.2, each fragment takes a sequence of samples from the input noise texture following the pathline traces. To avoid aliasing and ensure spatial coherence between adjacent fragments along the same pathline, it is important for each fragment to sample contiguous texels from the input noise. This allows adjacent fragments along the pathline to average a similar set of noise inputs, hence creating spatial coherence. Previously, van Wijk and Jobard [27, 69] made similar observations and suggested that the step size for the particle integration should satisfy the following rule:

|v| · Δt <= w    (4.3)

where |v| is the velocity magnitude at the current fragment location, Δt is the step size, and w is the texel width.

In the algorithm, Δt should be adjusted based on the resolution of the noise texture chosen for the fragment by OpenGL's mipmapping algorithm. However,

since the level of detail for each fragment is determined by OpenGL independently at run time and is not directly known to the application program, it is difficult to determine Δt for each fragment from the program. Fortunately, since Δt is proportional to the texel size, and hence related to the noise texture resolution, a set of mipmapped Δt textures can be computed to accompany the mipmapped noise texture. To do this, for the noise texture at the highest resolution, the texel size is first mapped to the space where pathlines are computed. Then, a Δt is computed for each grid point based on its local velocity. This produces a two-dimensional array of Δt at the same resolution as the noise texture. A sequence of down-sampled Δt textures can then be created in a similar manner as other mipmaps, except that at each level the down-sampled Δt values must be multiplied by two, since the corresponding texel in the noise texture, when mapped to the mesh surface, becomes twice as large in each dimension every time the resolution is reduced by one level.

Once the Δt mipmaps are created, at run time the same texture coordinates used for the noise texture are used to look up the mipmapped Δt slices. Since the Δt texture has the same resolution and the same number of mipmapping levels as the noise texture, each fragment uses an appropriate Δt to match the noise texel size w. According to equation 4.2, the Δt values accessed from the mipmapped texture are used to access the trace slices. It is possible that the time t_k = T_d − k·Δt in the equation falls between the sampled trace slices. In this case, a linear interpolation between the 2-tuples of adjacent trace slices is performed in the fragment program before looking up the noise texture.
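The construction of the noise mipmaps (2x2 averaging) and the accompanying Δt mipmaps might look like the following sketch. Grid dimensions are assumed to be powers of two, and all names are illustrative.

```python
import numpy as np

def downsample_2x2(t):
    """Average every 2x2 block of texels, as in classic mipmap construction."""
    return 0.25 * (t[0::2, 0::2] + t[1::2, 0::2] + t[0::2, 1::2] + t[1::2, 1::2])

def build_mipmaps(tex, n_levels):
    """Mipmap chain for the noise texture (or a trace slice component)."""
    levels = [np.asarray(tex, dtype=float)]
    for _ in range(n_levels - 1):
        levels.append(downsample_2x2(levels[-1]))
    return levels

def dt_mipmaps(speed, texel_width, n_levels):
    """Sketch of the Δt mipmaps: at the base level Δt = w/|v|, so that
    |v|·Δt <= w (Eq. 4.3) holds with equality; each coarser level doubles
    the down-sampled Δt because the corresponding noise texel is twice as
    wide when mapped to the mesh surface."""
    dt = texel_width / np.maximum(speed, 1e-8)  # guard against zero velocity
    levels = [dt]
    for _ in range(n_levels - 1):
        levels.append(2.0 * downsample_2x2(levels[-1]))
    return levels
```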

4.6 Results

The algorithm was implemented using OpenGL 1.5 and the OpenGL Shading Language (GLSL) 2.0, running on a PC with an Intel Pentium GHz processor, 768 MB of memory, and an NVIDIA GeForce 6800 GT graphics card with 256 MB of video memory. The flow advection algorithm described above is primarily implemented in a fragment program. Each fragment is provided with the texture coordinates used to access the trace slices. The textures input to the fragment program include the noise texture, the Δt texture, and K trace slices, where K is the convolution kernel size according to equation 4.2. The mipmaps for all the input textures are implicitly managed by the OpenGL run-time system, so no special handling is needed in the fragment program.

Dataset    Dimensions    Total Size
Post       32x
Shuttle    52x62
Vortex     100x100

Table 4.1: Datasets used in the experiments. Note that the size for the vortex data includes all 31 time steps. The sizes are in KBytes.

Three datasets were used to test the algorithm, as listed in Table 4.1. The vortex dataset is a time-varying flow on a regular Cartesian grid, and the rest are steady-state flows on curvilinear grids. The trace slices were computed by first starting a backward pathline at every time step from each grid point, and then sampling the backward pathline locations. Each pathline was advected backwards for as many as K time steps, where K is the convolution size described in equation 4.2. When the underlying dataset is a steady field, a pseudo time for the particle integration is

used. In all of the experiments, K was set to 10. The value of K affects whether the convolution can be completed in a single rendering pass. If K exceeds the maximum number of active textures allowed by GLSL, the texture advection is implemented using multiple rendering passes. The experimental results presented in this dissertation were all produced in a single rendering pass. In the process of computing the trace slices, if a particle goes out of bounds before K time steps, its advection is terminated and the trace slice values for the remaining time steps are set to the location where the particle exits the domain.

All trace slices were computed in a preprocessing stage. The second column in Table 4.2 lists the total preprocessing time for each dataset. For a steady-state dataset, the preprocessing time is within a few seconds. For the time-varying vortex data, the preprocessing time is slightly larger. It is worth noting that the preprocessing only needs to be done once, and can be used for all output resolutions and different viewing conditions.

Dataset    Pre-Processing    Texture Creation and Loading
Post
Shuttle
Vortex

Table 4.2: The time for trace slice preprocessing and texture creation and loading (in seconds).

When the user zooms in and out of the field, the level of detail for both the trace slices and the noise is adjusted automatically and the texture advection is computed on

the fly. Using graphics hardware, this multi-resolution texture advection can be done very fast. For all the datasets used, after the textures were loaded into video memory, the frame rate to advect and render the texture exceeded several hundred frames per second while the level of detail was being adjusted automatically. In fact, the NVIDIA GeForce 6800 GT can render 5.6 billion texels and 525 million vertices per second. The amount of geometry and texture data was considerably lower than the peak load that the graphics hardware can handle. The third column in Table 4.2 lists the time for creating and loading all the necessary textures into video memory. Note that this only needs to be done once at the beginning of the program, so it is part of the program set-up time.

Additional tests were performed to verify the core idea of trace slices, namely that creating a down-sampled version of a trace slice is more accurate than integrating particles using the down-sampled flow field. Using the vortex dataset with a resolution of 100x100, the trace slices S_t^{10}(i, j), t ∈ [0, 9], were first created at a resolution of 100x100 and then down-sampled to resolutions of 50x50 and 25x25. The flow field was also down-sampled to 50x50 and 25x25, and backward pathlines were computed using the down-sampled data. The particle locations computed from the down-sampled trace slices and from the down-sampled fields were compared with the particle locations computed using the original field. Figure 4.3 shows the results for the 50x50 resolution. It can be seen that as the particles travelled farther, larger errors were accumulated when the down-sampled field was used. For the trace slices, the errors were bounded and no accumulation took place. As the dataset was further down-sampled to 25x25, as shown in Figure 4.4, the error of the particle locations became larger when using the down-sampled field.

Figure 4.3: Comparison of particle position errors for travelling 1 to 10 time steps using the down-sampled trace slices and the down-sampled vortex dataset reduced from 100x100 to 50x50. The x axis indicates the number of time steps the particles have travelled, and the y axis indicates particle position errors relative to the accurate traces, measured by the Euclidean distance in the field.

Figure 4.4: Comparison of particle position errors for travelling 1 to 10 time steps using the down-sampled trace slices and the down-sampled vortex dataset reduced from 100x100 to 25x25. The x axis indicates the number of time steps the particles have travelled, and the y axis indicates particle position errors relative to the accurate traces, measured by the Euclidean distance in the field.

The left image in Figure 4.5 shows a snapshot of the Post dataset with multi-resolution texture advection, while the right one shows a snapshot without such control. The mesh of the Post dataset has some interesting characteristics: the cells toward the center of the mesh quickly become much smaller than those close to the outside boundary, with several orders of magnitude difference. It can be seen that the algorithm generates a result with less aliasing.

(a) (b)

Figure 4.5: Rendering of the post dataset (a) with and (b) without the multi-resolution level-of-detail control.

The two images in Figure 4.6 compare the images for the shuttle dataset generated by this algorithm and by traditional LIC. Each image was generated at the original resolution of 52x62 for the portion that is shown. The algorithm generates the image on the left with clearer flow patterns without up-sampling the field, while the LIC image on the right clearly does not include enough detail to show the flow directions.

Figure 4.7 shows the test results for the shuttle dataset. Starting from a close-up view and zooming out, it can be seen from the image on the upper right that the multi-resolution algorithm quickly switched to a lower resolution of trace slices and noise and thus still produced a clear flow pattern. The image on the lower right did not use the multi-resolution algorithm. The same test was performed on the vortex dataset. The image on the left of Figure 4.8 is a close-up view. The image on the upper right is the result generated with the multi-resolution algorithm, and the image

on the lower right without it. Similar to the shuttle dataset, the algorithm was able to generate a clearer flow pattern since aliasing was avoided.

(a) (b)

Figure 4.6: (a) With the correction of noise distribution, no stretched pattern can be seen. (b) Rendering using LIC at the original resolution of 52x62.

The overhead of the algorithm is the additional space for storing the trace slices. Table 4.3 shows the total size of the trace slices created for each dataset. For the datasets tested, the overhead was moderate.

Dataset    Size of Trace Slices
Post       1.00
Shuttle    1.32
Vortex     21.8

Table 4.3: The size of trace slices (in MBytes), including all time steps. Note that the Vortex dataset is time-varying.

Figure 4.7: The image on the left was generated when zoomed in. As the user zoomed out from the image on the left, the algorithm was able to produce a clearer pattern by switching to a lower resolution of trace slices and noise texture (upper right), while the algorithm with no LOD control produced an aliased result (lower right).

Figure 4.8: A similar test to Figure 4.7 using the time-varying vortex dataset. It can be seen that the algorithm produced a better image (upper right) compared with no level-of-detail adjustment (lower right).

CHAPTER 5

ILLUSTRATIVE STREAMLINE PLACEMENT

5.1 Algorithm Overview

The primary goal is to generate streamlines succinctly for two-dimensional flow fields by emphasizing the essential and deemphasizing the trivial or repetitive flow patterns. Fig. 5.1 shows an example of streamlines generated by the algorithm, in which the selection of streamlines is based on a similarity measure among streamlines in the nearby region. The similarity is measured locally by the directional difference between the original vector at each grid point and an approximate vector derived from the nearby streamlines, and globally by the accumulation of the local dissimilarity at every integrated point along the streamline path. To approximate the vector field from existing streamlines, two-dimensional distance fields recording the closest distance from each grid point in the field to the nearby streamlines are first computed. Then the approximate vector direction is derived from the gradients of the distance fields. The algorithm greedily chooses as the next candidate seed the point with the least degree of similarity according to these metrics. The algorithm has unique characteristics when compared with the existing streamline seeding algorithms [63, 28, 72, 45, 39].
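The greedy selection described in this overview (and detailed in section 5.1.5) can be sketched as follows. All names are illustrative: D_l maps a grid point to its precomputed local dissimilarity, trace_and_Dg integrates a streamline from a point and returns it together with its global dissimilarity, and T_l and T_g are the chapter's two user-specified thresholds.

```python
def choose_next_seed(grid_points, D_l, trace_and_Dg, T_l, T_g):
    """Sketch of the greedy next-seed selection."""
    # Visit candidates from most to least locally dissimilar.
    for p in sorted(grid_points, key=lambda p: -D_l[p]):
        if D_l[p] <= T_l:
            return None            # sorted descending: nothing better remains
        line, d_g = trace_and_Dg(p)
        if d_g > T_g:
            return p, line         # accept this seed and display its streamline
    return None
```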

Figure 5.1: Streamlines generated by the algorithm.

First, the density of streamlines. Some of the existing techniques favor uniformly spaced streamlines. In this algorithm, however, the density of streamlines is allowed to vary in different regions. The different streamline densities reflect different degrees of coherence in the field, which allows the viewer to focus on more important flow features. Regions with sparse streamlines imply that the flow is relatively coherent, while regions with dense streamlines mean that more seeds are needed to capture the essential flow features. This characteristic of the algorithm matches one of the general principles of visual design by Tufte [62]: different regions should carry different weights, depending on their importance. The information can be conveyed in a layered manner by means of distinctions in shape, color, density, or size.

Second, the representativeness of streamlines. The general goal of streamline placement is to visualize the flow field without missing important features, which can be characterized by critical points. Since the flow directions around critical points can change rapidly compared to non-critical regions, the algorithm is able to capture those regions and place more streamline seeds accordingly.

Finally, the completeness of flow patterns. In previous streamline placement algorithms with explicit inter-streamline distance control, the advection of streamlines can be artificially terminated. This may cause visual discontinuity of the flow pattern, especially in the vicinity of critical points. The seeding algorithm in this chapter, however, only determines where to drop seeds and allows the streamlines to be integrated as long as possible, until they leave the domain, reach a critical point, or form a loop. Without abruptly stopping the streamlines, the flow patterns shown in the visualization are much more complete and hence easier to understand.

5.1.1 Distance Field

A distance field [30] represents the distance from every point in the domain to the closest point on any object. The distance can be unsigned or signed, where the sign denotes whether the point in question is inside or outside of the object. From the distance field, some geometric properties can be derived, such as the surface normal [14]. The concept of distance fields has been used in various applications such as morphology [52], visualization [79], animation [13], and collision detection [3].

In this algorithm, unsigned distance fields are used to record the closest distance from every point in the field to the nearby streamlines that have been computed. In practice, a mathematically smooth streamline is approximated by a series of polylines integrated bidirectionally through numerical integration. Given a line segment s_i = {p_i, p_{i+1}}, where p ∈ R^3, i ∈ N, and a vector v_i = p_{i+1} − p_i, the nearest point p_q on the line segment s_i to an arbitrary point q can be computed by:

p_q = p_i + t · v_i    (5.1)

where

t = clamp(((q − p_i) · v_i) / |v_i|^2),  clamp(x) = min(max(x, 0), 1)    (5.2)

The distance d(q, s_i) from the point q to the line segment s_i is the Euclidean distance between q and p_q. For a given streamline L, where L = { s_i | s_i = {p_i, p_{i+1}}, i ∈ N, p ∈ R^3 } and s_i is a line segment of L, the unsigned distance function at a point q with respect to L is:

d(q, L) = min{ d(q, s_i) | s_i ∈ L }    (5.3)

To speed up the computation of distance fields, they are computed on the GPU; the discussion of the GPU implementation is deferred to section 5.4. The resulting distance fields are used to derive an approximate vector field, which in turn is used to measure the dissimilarity between streamlines in local regions.

5.1.2 Computation of Local Dissimilarity

Because of spatial coherence in the field, neighboring points can have advection paths with similar shapes, even though they may not be exactly the same. Given a streamline, a distance field can be computed by considering the closest distance from every point in the field to this streamline. The iso-contours of this distance field locally resemble the streamline, i.e., the closer a contour is to the streamline, the more similar their shapes will be. This is the basic idea of how to locally approximate

streamlines in the empty regions from existing ones, which forms the basis for measuring the coherence of the vector field in local regions.

From the distance field, a gradient field is computed using the central difference operator. Each vector of this gradient field, after a 90-degree rotation, yields an approximate vector derived from the single streamline. Whether to rotate the gradient clockwise or counter-clockwise is decided based on the flow direction of the nearby streamline, so that the resulting approximate vector points in roughly the same direction as the flow. To measure the local coherence, a local dissimilarity metric is defined as the directional difference between the true vector at the point in question and its approximate vector. For a point p ∈ R^3, the local dissimilarity D_l(p) at this point is written as:

D_l(p) = 1 − (v'(p) · v(p) / (|v'(p)| |v(p)|) + 1) / 2    (5.4)

where v'(p) is the approximate vector at p, and v(p) is the original vector. The value is in the range 0.0 to 1.0; the larger the value, the more dissimilar the true and approximate vectors are at that point. It is worth noting that this metric only denotes the local dissimilarity between the vectors at the point, not the dissimilarity between the streamline originating from this point and its nearby streamline. Also, so far I have only considered the case where a single streamline exists in the field. The next section discusses how to handle multiple streamlines in the field and modify the dissimilarity metric accordingly, which is the more general case assumed in the algorithm.
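The computations of equations 5.1 through 5.4 can be sketched compactly in NumPy; this is an illustrative CPU model (the dissertation computes the distance fields on the GPU), and the function names are illustrative.

```python
import numpy as np

def point_segment_distance(q, p0, p1):
    """Eqs. 5.1-5.2: distance from q to the segment [p0, p1]."""
    v = p1 - p0
    t = np.clip(np.dot(q - p0, v) / np.dot(v, v), 0.0, 1.0)
    return float(np.linalg.norm(q - (p0 + t * v)))

def point_polyline_distance(q, pts):
    """Eq. 5.3: unsigned distance from q to a polyline (a streamline)."""
    return min(point_segment_distance(q, pts[i], pts[i + 1])
               for i in range(len(pts) - 1))

def local_dissimilarity(v_true, v_approx):
    """Eq. 5.4: D_l = 1 - (cos(angle) + 1)/2, a value in [0, 1]."""
    c = np.dot(v_true, v_approx) / (np.linalg.norm(v_true) * np.linalg.norm(v_approx))
    return 1.0 - (c + 1.0) / 2.0

def approx_vector(dist, i, j, flow_dir):
    """Central-difference gradient of the distance field at grid point
    (i, j), rotated 90 degrees; the rotation sign is picked so the result
    points roughly along the nearby streamline's direction flow_dir."""
    g = np.array([(dist[i + 1, j] - dist[i - 1, j]) / 2.0,
                  (dist[i, j + 1] - dist[i, j - 1]) / 2.0])
    v = np.array([-g[1], g[0]])  # 90-degree rotation of the gradient
    return v if np.dot(v, flow_dir) >= 0.0 else -v
```

Identical vectors give D_l = 0, opposite vectors give D_l = 1, and orthogonal vectors give D_l = 0.5, matching the range stated above.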

5.1.3 Influence from Multiple Streamlines

When multiple streamlines exist in the field, it is not sufficient to use the standard definition of the distance field, compute a single smallest distance from each point to the streamlines, and evaluate the dissimilarity metric as presented above. This is because the distance field computed in this way generates a discrete segmentation of the field. For example, the left image in Fig. 5.2 shows the approximate vectors in orange given two existing streamlines S1 and S2 in black. The points in the lower triangular region under the dotted line are classified as closest to streamline S2, while the points in the upper triangle are closest to streamline S1. If only a single distance field computed from the two lines is used to approximate the local vectors, the resulting vectors are generated in a binary manner, as shown by the orange vectors. This binary segmentation causes discontinuity in the approximate vector field. Given two lines as shown in the example, a more reasonable approximation of the vectors in the empty space between them should transition smoothly from one line to the other, as shown on the right in Fig. 5.2.

In the algorithm, a smooth transition of vector directions between streamlines is achieved by blending the influences from multiple nearby streamlines. The previous section discusses how to compute the dissimilarity metric when only one streamline exists. For the more general case where multiple streamlines are present, the M nearest streamlines are picked for each point, and the dissimilarity function in equation 5.4 is evaluated for each streamline respectively. Finally, the M values D_lk(p) are blended together to compute the final dissimilarity value at p as:

Figure 5.2: Assume the flow field is linear and streamlines are straight lines. The circle in the images denotes the region where a critical point is located. Black lines represent the exact streamlines seeded around the critical point. The orange lines represent the approximate vectors obtained by considering the influence of only the one closest streamline (left), and the blended influence of the two closest streamlines (right).

D_l(p) = Σ_{k=1..M} w_k · D_lk(p)    (5.5)

where w_k is the weight of the influence from streamline k, decided by the distance between point p and streamline k, and D_lk(p) is the dissimilarity value computed at point p using the distance field generated by streamline k. Analogously, the approximate vector at p is the blend of the vectors generated from the M nearest streamlines, where each vector is a 90-degree rotation of the gradient computed from the corresponding streamline, as described above. It is worth noting that different methods for assigning the weights can be used in the equation, depending on the requirements of the user. For all the images presented in this chapter, the two nearest streamlines are blended, that is, M = 2 in equation 5.5.
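Equation 5.5 might be realized as in the sketch below. The inverse-distance weighting is only one plausible choice, since the text leaves the exact weighting to the user; all names are illustrative.

```python
import numpy as np

def blended_dissimilarity(dists, d_locals):
    """Sketch of Eq. 5.5 with M = len(dists) nearby streamlines.

    dists[k] is the distance from the point to streamline k, and
    d_locals[k] is D_lk(p). Weights fall off with distance so that the
    nearest streamline dominates.
    """
    w = 1.0 / (np.asarray(dists, dtype=float) + 1e-8)  # avoid division by zero
    w = w / w.sum()                                    # normalize: sum(w_k) = 1
    return float(np.dot(w, np.asarray(d_locals, dtype=float)))
```

When the two streamlines are equally far away, the result is the plain average of their local dissimilarities; as the point approaches one streamline, that streamline's value dominates, giving the smooth transition motivated above.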

5.1.4 Computation of Global Dissimilarity

As mentioned in the previous section, each point has a local dissimilarity measure that represents the directional difference between the true vector at that point and the approximate vector derived from the nearby streamlines. However, the local dissimilarity only captures the coherence of the local vectors, not the similarity between streamlines. If only the local dissimilarity were used to decide seed placement, many streamlines could appear in the final images even though most of them resemble the nearby streamlines and differ only in some local segments. In order to capture the coherence between a streamline originating from a point and its nearby streamlines, a global dissimilarity measure is defined by accumulating the local dissimilarity at every integrated point along the streamline path:

D_g(p) = Σ_{n=1..L} u_n · D_l(x_n, y_n)    (5.6)

where D_g(p) is the global dissimilarity at point p, (x_n, y_n) is the nth integrated point along the streamline originating from p, and L is the length of the streamline. D_l(x_n, y_n) is computed by interpolating the local dissimilarity values at the four corner grid points. Depending on the metric, u_n can be computed differently. In the algorithm, the local dissimilarity values are averaged along the streamline path, i.e., u_n = 1/L.

5.1.5 Selection of Candidate Seeds

Before discussing the algorithm, two user-specified threshold values, T_l and T_g, are first introduced. T_l is the threshold for the minimum local dissimilarity, while T_g

is the threshold for the minimum global dissimilarity. To avoid drawing unnecessary streamlines, only seeds from grid points satisfying equation 5.7 are chosen.

D_l(i, j) > T_l  and  D_g(i, j) > T_g    (5.7)

The initial input is a streamline seeded at a random location in the field. For example, the central point of the domain can be used as the initial seed to generate the streamline. With the first streamline, the distance field is calculated and the dissimilarity value at each grid point is computed. The important step now is how to choose the next seed. A greedy but efficient method for this purpose is presented here. Given the two threshold values, the algorithm for choosing the next seed is as follows:

1. Sort the grid points in descending order of the local dissimilarity values computed from equation 5.5.

2. Dequeue the first point (i, j) in the sorted queue. If D_l(i, j) is larger than T_l, integrate a streamline from this point bidirectionally and compute the global dissimilarity value D_g(i, j) using equation 5.6. Otherwise, if D_l(i, j) is smaller than T_l, the iteration terminates.

3. If D_g(i, j) is larger than T_g, this seed is accepted as the new seed and the streamline being integrated is displayed. Otherwise, go back to step (2).

When a new streamline is generated, the nearest streamlines to each grid point are updated and the dissimilarity values described above are re-computed. The above algorithm runs iteratively to place more streamlines. As more streamlines

are placed, the dissimilarity values at the grid points become smaller. The program terminates when no seed can be found that satisfies equation 5.7. At this point, there are enough streamlines to represent the underlying flow field according to the user-specified coherence thresholds.

To speed up the process of choosing candidate seeds, when D_g(i, j) is smaller than T_g during the process mentioned above, this grid point is marked, and the grid points at the four corners of the cells passed by the streamline originating from (i, j) are marked too. These points are excluded from consideration in later iterations, because there already exist nearby streamlines similar to the streamlines that would have been computed from them, so it is unnecessary to check those grid points again. Generally speaking, for a dataset with sufficient resolution, the flow within a cell is very likely to be coherent, so this heuristic will not noticeably affect the quality of the visualization output: in most cases, streamlines from those grid points will be similar to the streamline that has already been rejected. This makes it possible to substantially reduce the number of streamlines to compute and test, without visible quality differences.

Fig. 5.3 shows an image of streamlines generated from an ocean-field dataset using this algorithm. For rendering, since the algorithm allows streamlines to be integrated as long as possible until they leave the domain, reach critical points, or form a loop, the local density of ink in some regions may be higher than in others. To even out the distribution of ink, the streamlines are rendered with alpha blending, where the alpha value of each line segment is adjusted according to the density distribution of the projected streamline points in image space.
Each sampling point on the streamlines is first mapped to image space, and the corresponding screen space point is treated

as an energy source, which can be modeled by a Gaussian function. Then, an energy distribution map based on all streamlines is generated. This energy map is converted to an opacity map to control the opacity of the streamline segments as they are drawn, which effectively reduces the intensity of the lines where they are cluttered together.

Figure 5.3: Streamlines generated by the algorithm on the Oceanfield data.

5.2 Topology-Based Enhancement

Although the algorithm does not explicitly consider the flow topology, it naturally places more streamlines around the critical points because of the lack of coherence there. Sometimes it is desirable to highlight the streamline patterns around the critical points so that the viewer can clearly identify the type of each critical point. To achieve this goal, the algorithm can be adapted by placing an initial set of streamlines with specific patterns around the critical points, instead of randomly dropping the first seed. This is similar to the idea of seed templates proposed by Verma et al. [72]. For each type of critical point, a minimal set of streamlines that distinguishes it from the others is used. For a source or sink, four seeds are placed

along the perimeter of a circle around the critical point, each at an intersection point of the local x-y axes with the circle; for a saddle, four seeds are placed along the two lines bisecting the eigendirections, with two seeds on each line; for a spiral or center, one seed is placed along a straight line emanating from the critical point. Fig. 5.4 shows an image of streamlines generated with the topology information taken into account.

Figure 5.4: Streamlines generated when the flow topology is considered. There are three saddle and two attracting-focus critical points in this data.

Streamline placement guided by topology information alone is not always effective: there may be no critical point in the field, or there may be too many. When there are too many critical points, the final image can easily get cluttered. On the other hand, if there is no critical point at all, then no rules can be applied to guide the placement of streamlines. The algorithm presented here considers both the vector coherence and the flow topology.
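The seed templates above can be sketched as follows. This is a hedged illustration, not the dissertation's implementation: the function name `template_seeds`, the default radius, and the exact layout parameters are assumptions; only the per-type seed counts and placements follow the text.

```python
import math

# Sketch of the topology seed templates described above (assumed names;
# radius is an illustrative choice). source/sink -> four seeds where the
# local x-y axes cross a circle around the point; saddle -> four seeds
# on the two lines bisecting the eigendirections; spiral/center -> one
# seed on a ray from the critical point.
def template_seeds(kind, center, radius=1.0, eigendirs=None):
    cx, cy = center
    if kind in ("source", "sink"):
        return [(cx + radius, cy), (cx - radius, cy),
                (cx, cy + radius), (cx, cy - radius)]
    if kind == "saddle":
        # angles of the two eigendirections; the bisectors lie between them
        a1, a2 = (math.atan2(d[1], d[0]) for d in eigendirs)
        seeds = []
        for bis in ((a1 + a2) / 2.0, (a1 + a2) / 2.0 + math.pi / 2.0):
            # two seeds per bisecting line, one on each side of the point
            seeds.append((cx + radius * math.cos(bis), cy + radius * math.sin(bis)))
            seeds.append((cx - radius * math.cos(bis), cy - radius * math.sin(bis)))
        return seeds
    if kind in ("spiral", "center"):
        return [(cx + radius, cy)]  # one seed along a ray from the point
    raise ValueError("unknown critical point type: " + kind)
```

Streamlines traced from these seeds then form the initial set, replacing the random first seed.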

5.3 Quality Analysis

As mentioned above, this algorithm generates representative streamlines to illustrate the flow patterns of the underlying field. Given appropriate threshold values, the algorithm selects streamlines based on the flow coherence via the dissimilarity measures defined above. The density of the selected streamlines can vary with the degree of coherence in the local regions. As seen in Fig. 5.3, there are void regions between the displayed streamlines, indicating that the streamlines in those regions would look similar to each other and hence can be easily inferred; the algorithm therefore does not place many seeds there. Since only a small subset of the streamlines in the whole vector field is drawn, it is necessary to analyze the quality of the method. One method of analysis, which can be performed quantitatively, is to compare the original vector field with an approximate vector field derived from the streamlines selected by the algorithm. Another is to perform user studies to verify whether users can correctly interpret the field in the empty regions, and whether this representation is an effective way to depict vector fields. In the following, I first describe the approach used for quantitative analysis along with some results, and then present findings from the user studies.

5.3.1 Quantitative Comparison

The quantitative analysis consists of a data-level comparison and a streamline-level comparison. For the data-level comparison, a vector field is first reconstructed from the streamlines generated by the algorithm, and the local vectors of the reconstructed field are compared with those of the original field. For the streamline-level comparison, originating from each grid point, two streamlines are integrated respectively in the

original vector field and the reconstructed one, and the errors between these two streamlines are then computed. It is worth noting that these errors are only used to study whether the algorithm misses any regions that require more streamlines to be drawn. They do not represent errors in the visualization, since every streamline presented to the user is computed from the original vector field. In the following, I first describe how to reconstruct a vector field from the streamlines that are displayed, and then present the data-level and streamline-level comparison results.

Reconstruction of Flow Field

The process of reconstructing the approximate flow field from the selected streamlines is the same as the process presented earlier for iteratively introducing streamline seeds. The main difference is that the final set of streamlines used to generate the gradient fields is now given. For each streamline in the final set, a distance field can be computed, from which its derived gradients can be obtained. In section 5.1.3, I discuss the computation of the local dissimilarity by considering multiple nearby streamlines. With the same idea, for each grid point, the nearest M streamlines are first identified, and the distances to these streamlines are used to generate M gradients at that point. After rotating the gradients by 90 degrees to obtain the approximate vectors, the final reconstructed vector at the grid point is computed by interpolating the M vectors with weights inversely proportional to the distances from the point to the corresponding streamlines. As mentioned above, the current implementation only considers the nearest two streamlines for each grid point, that is, M = 2. For grid points that are selected as seeds or that have streamlines passing through them, the original vectors are used as the reconstructed vectors.
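The per-point reconstruction step can be sketched as follows, under the assumptions stated in the text. The function name and data layout are illustrative; the two operations it performs — rotating each distance-field gradient by 90 degrees and blending with inverse-distance weights — follow the description above.

```python
# Sketch of the reconstruction at one empty grid point (assumed names).
# gradients: (gx, gy) of the distance fields of the M nearest streamlines;
# distances: distance from the grid point to each of those streamlines.
def reconstruct_vector(gradients, distances, eps=1e-9):
    # rotating a distance-field gradient by 90 degrees gives a direction
    # tangent to the nearby streamline, i.e. an approximate flow vector
    vectors = [(-gy, gx) for gx, gy in gradients]
    # weights inversely proportional to the distance to each streamline
    weights = [1.0 / (d + eps) for d in distances]
    wsum = sum(weights)
    vx = sum(w * v[0] for w, v in zip(weights, vectors)) / wsum
    vy = sum(w * v[1] for w, v in zip(weights, vectors)) / wsum
    return (vx, vy)
```

With M = 2, as in the implementation, the two lists simply hold the two nearest streamlines' gradients and distances.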

Data Level Comparison

Data-level comparison is performed between the original and the reconstructed vector fields at every grid point. The goal is to evaluate how well the streamlines displayed by the algorithm represent the original vectors in the empty regions, based on the computational model introduced above. One challenge in performing the data-level comparison is designing appropriate metrics to quantify the errors. Since the goal is to evaluate how well the true vector direction at each grid point is aligned with the reconstructed vector, the cosine of the angle between the original and reconstructed vectors at each grid point is taken as the similarity measure. Fig. 5.5 shows a result of the comparison on one vector dataset. In the image, dark pixels indicate that the two vectors at the grid points are almost the same, while brighter pixels indicate larger errors. From the image, it can be seen that the displayed streamlines are representative of the original vector field, because in most of the empty regions the approximate vectors derived from the streamlines are well aligned with those in the original field. There are a few regions with higher errors, which mostly fall into the following cases. The first case is regions near the domain boundary. The algorithm explicitly excludes the grid points on the boundary from being selected as candidate seeds, because the vectors on the boundary are sometimes distorted by sampling issues, whereas the fieldlines downstream or upstream tend to be more regular and stable. The second case of error is due to the implementation: when selecting the next candidate seed, a grid point is excluded from being a candidate if it is too close to an existing streamline, for example within one cell of it. This is not a cause for concern, because even if the streamline integrated from this point is different from this existing

streamline, eventually some point elsewhere on or near this streamline will be picked as a seed. The third case might be caused by the linear interpolation operator used to blend the influence of multiple nearby streamlines based on the distances from the grid points to those streamlines.

Figure 5.5: (a) Representative streamlines generated by the algorithm. (b) Gray-scale image colored by one minus the normalized cosine of the angle between vectors from the original field and the reconstructed field. Dark color means the two vectors are almost aligned with each other, while brighter color means more error. The maximal difference between the vector directions in this image is about 26 degrees, and the minimal difference is 0 degrees.

Streamline Level Comparison

Besides comparing the original and the reconstructed vector fields at the raw-data level, the two fields can be compared in terms of global features, such as streamlines. To do this, from every grid point, streamlines are simultaneously integrated forward and backward in the original vector field and in the reconstructed field, and the distance between the two streamlines is computed at every integration step

based on some metric, such as the Euclidean distance or the Manhattan distance. Fig. 5.6 shows a result of streamline comparison on the same vector field as Fig. 5.5, where the average Euclidean distance between the two streamlines is computed. Similar to the cases discussed in section 5.3.1, some errors are detectable in a few local regions, but they are quite small. Fig. 5.6 (b) shows the histogram of the distance errors, which shows that the streamlines originating from most grid points bear only small errors.

Figure 5.6: (a) Gray-scale image colored by the distance errors (in units of cells) between the two streamlines integrated from each grid point in the original vector field and the reconstructed one. Dark color means low errors, while brighter color means higher errors. (b) Histogram of the streamline errors collected from all grid points in the field. The X axis is the error, and the Y axis is the frequency of the corresponding error value. The maximal difference is 23.1 and the minimal is 0.0. The dimensions of the field are 100x
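The streamline-level comparison can be sketched as follows. This is a simplified illustration with assumed names: it uses plain Euler steps for brevity (the dissertation's implementation uses fourth-order Runge-Kutta), and `field` stands for any callable returning the vector at a point.

```python
import math

# Sketch of the streamline-level comparison: trace one streamline in the
# original field and one in the reconstructed field from the same seed,
# then average the Euclidean distance between corresponding samples.
# `field` is any callable (x, y) -> (vx, vy); names are illustrative.
def integrate(field, seed, step=0.1, n_steps=50):
    pts, (x, y) = [seed], seed
    for _ in range(n_steps):
        vx, vy = field(x, y)
        x, y = x + step * vx, y + step * vy  # Euler step, for brevity
        pts.append((x, y))
    return pts

def mean_streamline_error(field_a, field_b, seed, **kw):
    pa = integrate(field_a, seed, **kw)
    pb = integrate(field_b, seed, **kw)
    dists = [math.hypot(ax - bx, ay - by)
             for (ax, ay), (bx, by) in zip(pa, pb)]
    return sum(dists) / len(dists)
```

Running this from every grid point and histogramming the results yields plots of the kind shown in Fig. 5.6.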

5.3.2 User Study

Abstract or illustrative presentations have been widely used and accepted in non-photorealistic rendering and artistic design to depict information succinctly. A user study is a way to quantify the effectiveness of new methods, as in [32]. To evaluate the effectiveness of the illustrative streamlines generated by this algorithm, a user study was conducted that contained four questions categorized into two tasks. The tasks and questions were related to the visualization of four different two-dimensional vector fields.

Participants

The subjects for the user study were 12 unpaid graduate students from the Department of Computer Science and Engineering. Five of them major or will major in Computer Graphics, and the others are in other research groups, such as Artificial Intelligence and Networking. Two of them knew a little about the concepts of flow fields and streamlines, but none of them had studied fluid mechanics or related courses. There were four female students and eight male students. All have normal or corrected vision and could see the presented images clearly. The study took about 30 minutes per subject, and before the test, the subjects were given a tutorial introducing them to the application. I explained the purpose of using streamlines to visualize flow fields, and the different flow features depicted by different types of critical points. The tests did not start until the subjects could easily identify the flow features in the training datasets without help.

Tasks and Procedure

The first task was to evaluate whether the users were able to effectively identify the underlying flow features, including flow paths and critical points, from the visualization generated by the algorithm. In particular, I wanted to verify whether the streamline representation was as effective as, or more effective than, other existing algorithms in allowing the users to understand the vector fields. This part was conducted on paper handed out to the subjects and involved three questions. For the test, I chose two existing two-dimensional streamline placement algorithms, by Mebarki et al. [45] and Liu et al. [39], plus the method presented here, and generated images using four datasets. We first described the tasks to be completed and gave a brief introduction to the related background knowledge. The subjects were shown 15 groups of images; each group included three images generated by the three algorithms respectively and was organized like Fig. 5.7. Within each group, the images generated by the algorithms of Mebarki and Liu had similar streamline densities, but the density of streamlines differed between groups. To avoid possible bias caused by a fixed ordering of the three algorithms' images, the order of the three images was changed randomly in each group. Fig. 5.8 shows three groups of images used in the user study. At the beginning of this task, the subjects were given detailed instructions about the questions and were required to fully understand them before giving answers. The first question asked the subjects to rate the three images in each group according to how easily they depicted the flow paths in the vector fields, where 1 was the best and 3 was the worst. The second question was about critical

points. If there were critical points in the fields, subjects were asked to circle them and rate how helpful the streamlines presented in the visualization were for detecting those critical points. The third question was about the overall effectiveness of the visualization, considering both the flow paths and the critical points. In the study, the subjects were not asked to classify the critical points. If the subjects thought all three images were equally helpful, they could rate them equally. The second task was to evaluate how correctly the subjects were able to interpret the flow directions, using the images generated by the algorithm, in the empty regions where no streamlines were drawn. This task was run with a completely automated program on four datasets. I pre-generated streamlines using the algorithm on each dataset, which were used as the input to the program. When the program started with each dataset, four random seed points were generated in the void regions. For each point, six circles with increasing radii were generated in sequence. The subjects were asked to mark where the streamlines advected from the seeds would intersect the circles. That is, given a seed point, the circle with the smallest radius was shown first, and the subject marked the streamline intersection point on the circle; then another circle with a larger radius was shown around the same point. This process was repeated six times for each seed point. For some seed points, if the subjects believed the advection would leave the domain or terminate at some point before reaching the circle, such as at a stagnation point, they could identify the last point inside the circle instead of on it. Fig. 5.9 shows a screen snapshot of the interface for this task with only one circle drawn.

This user study was not timed, so subjects had enough time to give their answers. In summary, the questions involved were:

1. Rate images based on the easiness of following the underlying flow paths.
2. Rate images based on the easiness of locating the critical points by observing the streamlines.
3. Rate images based on the overall effectiveness of the visualization, considering both the flow paths and the critical points.
4. Predict where a particle randomly picked in the field will go in the subsequent steps.

Results and Discussions

For the task of rating how easily the streamline images allow the subjects to follow the flow paths, the study result is shown in Table 5.1. The result shows that most of the subjects preferred the images generated by the algorithm. When analyzing the results from individual subjects in detail, I found that for some images generated by the algorithm, if they were too abstract, some subjects tended to rate the evenly-spaced methods higher. Even though the subjects could tell and follow the flow directions in the images from the algorithm, the evenly-spaced methods made it easier for them to pinpoint the vectors at local points, because the streamlines were uniformly placed and covered the whole domain. I also found that six subjects liked the images generated by the algorithm very much and always rated them the highest, while one subject did not like any of the images generated by this algorithm and rated them all the lowest.

Algorithm          Rank 1   Rank 2   Rank 3
Mebarki et al.'s    5.4%    45.5%    51.0%
Liu et al.'s       20.1%    46.9%    30.0%
Li et al.'s        74.5%     7.6%    19.0%

Table 5.1: The percentages of user rankings for each image based on the easiness of following the underlying flow paths.

Even though the algorithm does not explicitly place more streamlines near critical points, it indeed captures most of the features around them. This is because vectors around critical points are less coherent, and the algorithm is designed to place streamlines based on streamline coherence. Additionally, streamlines converging or diverging around critical points contribute more ink in their neighborhood, which makes the critical points much more noticeable. The second question in the first task asked the subjects to rank how helpful the streamlines in the images were for detecting critical points. The study result, shown in Table 5.2, suggests that the images generated by the algorithm are more helpful for detecting the critical points. This result is in accordance with the initial expectation, since the algorithm allows the viewer to focus on the more prominent flow features. The algorithm allows the streamlines to advect as far as possible once they start; around critical points, relatively speaking, the streamlines become dense and converge in a small region near each critical point. According to Tufte [61], more data ink should be accumulated in the more important regions. The third question asked the users to rate the overall effectiveness of the visualization considering both the flow paths and directions, and let the subjects decide what they think is more important for visualizing a vector field and how to balance the

possible conflict between those two criteria. It is possible that some images are good at depicting flow paths while others are good at depicting critical points. The study result is shown in Table 5.3.

Algorithm          Rank 1   Rank 2   Rank 3
Mebarki et al.'s    3.3%    42.5%    60.0%
Liu et al.'s        7.7%    52.7%    37.8%
Li et al.'s        89%       4.8%     2.2%

Table 5.2: The percentages of user rankings for each image based on the easiness of locating the critical points by observing the streamlines.

Algorithm          Rank 1   Rank 2   Rank 3
Mebarki et al.'s    3.5%    42.5%    57.0%
Liu et al.'s       19.9%    52.7%    37.8%
Li et al.'s        76.6%     4.8%     5.2%

Table 5.3: The percentages of user rankings for each image based on the overall effectiveness of the visualization considering the flow paths and critical points.

For the task of predicting the advection paths of particles, error was measured as the Euclidean distance, in units of cells, between the user-selected point and the correct point obtained by integration in the actual vector data. Mean errors are shown in Fig. 5.10, with error bars depicting plus and minus one standard deviation. I observe that as the radius of the circle increased, the error became slightly larger; in other words, the closer to the starting seed point, the easier it was for the subjects to pinpoint the particle path, except where the flow becomes convergent. In that case, even as the radius of the circle becomes larger, because the space between

streamlines becomes smaller, it is still easy for the subjects to locate the advection path. Overall, the test results show that the errors were well bounded; in other words, the subjects were able to predict the flow paths reasonably well given the illustrative streamlines drawn by the algorithm. In general, the error range is related to and constrained by the spacing between streamlines, which depends on how similar the nearby streamlines are.

5.4 Results

The algorithm was tested on a PC with an Intel Core GHz processor, 2 GB of memory, and an NVIDIA GeForce 7950 GX2 graphics card with 512 MB of video memory. The streamlines were numerically integrated using the fourth-order Runge-Kutta integrator with a constant step size. In an earlier section, comparative results generated by Mebarki et al.'s, Liu et al.'s, and this algorithm were presented in Fig. 5.8. Generally speaking, algorithms generating evenly-spaced streamlines are fast, and their performance is relatively independent of the flow features. The algorithm presented here generates streamlines by evaluating flow features locally and globally; nevertheless, the timings listed in Table 5.5 for the four datasets (Table 5.4) show that it can also run at interactive speeds. There are three main steps in the implementation: updating distance fields (section 5.1.1), computing local dissimilarity (section 5.1.2), and selecting seeds (section 5.1.5), which includes computing the global dissimilarity values. Updating distance fields takes place whenever a new streamline is generated. This part was implemented on GPUs: for each line segment of the newly generated streamline, a quadrilateral is drawn to a window of the same size as the flow field. The fragment shader

computes the distance from each fragment to the line segment. This distance is set as the depth of the fragment. After all line segments of a streamline are drawn, the depth test supported by the graphics hardware leaves, in the depth buffer, the smallest distance from every pixel to the streamline, which is then read back to main memory. On the CPU, the distances to the nearest M streamlines are recorded for each pixel. The computation of local dissimilarity is also performed on the CPU, by blending the influence of multiple nearby streamlines. The timings show that as the size of the flow field increases, more time is spent in the portion of the algorithm that runs on the CPU. Although I have not done so, the computation of the dissimilarity metric for each pixel could potentially be implemented on GPUs as well, which would also reduce the overhead of transferring data from the CPU to the GPU and reading it back.

Dataset       Dimension   # of lines   # of line segments
Fig. 5.8(c)   64x
Fig. 5.5(a)   100x
Fig.          x
Fig.          x

Table 5.4: Information of four different datasets, and the number of streamlines generated by the algorithm.
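The per-pixel minimum-distance update that the GPU pass performs via the depth test can be sketched on the CPU as follows. This is a simplified reference, not the actual shader: names are assumptions, and the triple loop stands in for the rasterized quadrilaterals and the hardware depth test.

```python
import math

# CPU sketch of the distance-field update: for each pixel, the distance
# to the nearest segment of a newly added streamline, folded into a
# running per-pixel minimum -- the role played by the depth test in the
# GPU version described above.
def point_segment_distance(px, py, ax, ay, bx, by):
    abx, aby = bx - ax, by - ay
    ab2 = abx * abx + aby * aby
    if ab2 == 0.0:
        return math.hypot(px - ax, py - ay)  # degenerate segment
    # clamp the projection parameter so the closest point stays on the segment
    t = max(0.0, min(1.0, ((px - ax) * abx + (py - ay) * aby) / ab2))
    return math.hypot(px - (ax + t * abx), py - (ay + t * aby))

def update_distance_field(dist, polyline):
    """dist: 2D list (rows indexed by y) of current per-pixel minimum
    distances, modified in place; polyline: streamline sample points."""
    for y, row in enumerate(dist):
        for x in range(len(row)):
            for (ax, ay), (bx, by) in zip(polyline, polyline[1:]):
                d = point_segment_distance(x, y, ax, ay, bx, by)
                if d < row[x]:
                    row[x] = d  # keep the minimum, like the depth test
```

The GPU version amortizes this by rasterizing one quadrilateral per segment, so only fragments near each segment are evaluated.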

Total Timing   Updating Distance Field   Computing Local Dissimilarity   Finding Seeds

Table 5.5: Timings (in seconds) measured for generating streamlines. Each row corresponds to the dataset listed in the same row of Table 5.4.
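The streamlines timed above are integrated with a fourth-order Runge-Kutta scheme at a constant step size (section 5.4). A minimal 2D sketch, with `field` standing for any callable returning the vector at a point (names are illustrative):

```python
# One fourth-order Runge-Kutta step of streamline integration in 2D.
# `field` is any callable (x, y) -> (vx, vy); h is the constant step size.
def rk4_step(field, x, y, h):
    k1x, k1y = field(x, y)
    k2x, k2y = field(x + 0.5 * h * k1x, y + 0.5 * h * k1y)
    k3x, k3y = field(x + 0.5 * h * k2x, y + 0.5 * h * k2y)
    k4x, k4y = field(x + h * k3x, y + h * k3y)
    # weighted average of the four slope estimates
    return (x + h * (k1x + 2 * k2x + 2 * k3x + k4x) / 6.0,
            y + h * (k1y + 2 * k2y + 2 * k3y + k4y) / 6.0)
```

Repeating this step until the streamline leaves the domain, reaches a critical point, or loops yields the sample points used throughout the chapter.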

Figure 5.7: A group of images used in the first task of the user study.

Figure 5.8: Streamlines generated by Mebarki et al.'s algorithm (left), Liu et al.'s algorithm (middle), and my algorithm (right).

Figure 5.9: Interface for predicting particle advection paths. Blue arrows on red streamlines show the flow directions. The red point is the particle to be advected from.

Figure 5.10: Mean errors for the advection task on the four different datasets. The X axis stands for the radius of the circles around the selected points, and the Y axis depicts the mean error plus or minus the standard deviation. A larger value along the Y axis means higher error. The Y axis starts from -1 to make the graphs easier to read. Dimensions of the datasets: (a) 64x64 (b) 64x64 (c) 64x64 (d) 100x

CHAPTER 6

IMAGE BASED STREAMLINE GENERATION AND RENDERING

6.1 Algorithm Overview

The primary goal of this work is to control scene cluttering when visualizing three-dimensional streamlines and to allow the user to focus on important local features in the flow field. For three-dimensional data, however, addressing the issue of visual cluttering in object space is more challenging, since even if streamlines are well organized in object space, they may still clutter together after being projected to the screen. Artists usually paint strokes one by one onto the canvas: where a region gets cluttered, fewer strokes are placed, and vice versa. Inspired by applying this principle to flow visualization, I propose to place streamlines based on how they are distributed across the image plane. Fig. 6.1 shows the visualization pipeline of the image-based streamline seeding and generation algorithm. The input to the algorithm is a three-dimensional vector field and a two-dimensional image with a depth map. The image and depth map can come from rendering vector-field-related properties such as stream surfaces, or can be the output of other visualization techniques such as isosurfaces or

slicing planes of various scalar variables. The algorithm generates streamlines by placing seeds on the image plane. Seeds selected in image space, within the region covered by the depth map, can be unprojected back to object space, and the streamlines are then integrated in 3D object space. The algorithm ensures that streamlines will not come too close to each other after they are projected to the screen.

Figure 6.1: Visualization pipeline of the image-based streamline generation scheme.

With this algorithm, it is possible to avoid the scene cluttering caused by streamlines with very high depth complexity. Although researchers have previously proposed drawing haloed lines to resolve the ambiguity of streamline depths [43], when a large number of line segments with the haloing effect are displayed, the relative depth relationship between the streamlines becomes very difficult to comprehend. By controlling the spacing of streamlines on the image plane, the visualization can be prevented from becoming overly crowded. Another advantage of the image-based

approach is that it enhances the understanding of the correlation between the underlying flow field and other scalar variables. When analyzing a flow field, the user often needs to visualize additional variables in order to understand the underlying physical properties in detail. Directly dropping seeds in the regions of interest defined by those scalar properties can help the user create a better mental model of the data. Traditionally, visualization of streamlines and of other scalar properties is performed independently, and placing streamlines in regions signified by other data attributes often requires the seed placement algorithm to have knowledge of the specific features. In this work, a simple and unified framework is provided such that when users find interesting features in the image, they can directly drop seeds in those regions and visualize the corresponding vector field. This way, the process of issuing queries to answer the user's hypotheses in both scalar and vector fields can be performed more coherently. One key issue in realizing this idea is how to place seeds and generate streamlines so that the visual complexity of the output image is well controlled. Based on the image-space approach, better visualizations of streamlines can be created.

6.2 Image Space Streamline Placement

To generate well-organized streamlines on three-dimensional vector fields, one issue to address is how to control the spacing between streamlines when they are projected to the image plane. Researchers have previously proposed several two-dimensional evenly-spaced streamline placement algorithms [63, 28, 45] and extended the idea to three-dimensional vector fields by ensuring evenly spaced streamlines in object space [43]. However, such a straightforward extension of the two-dimensional streamline

placement methods to three-dimensional space does not always produce the desired results, since evenly spaced streamlines in three-dimensional space do not guarantee visual clarity after they are projected to the screen. The main idea of the algorithm is that, to ensure streamlines are well organized in the resulting image, it is more effective to place seeds directly on the image plane. Those screen-space seed positions can be unprojected back to unique positions in object space if a depth value is given at each corresponding position. When a streamline is integrated in object space, it is necessary to make sure that this line does not come too close to the existing streamlines in image space.

Evenly-spaced Streamlines in Image Space

To start the algorithm, a random seed is first selected on the image plane and mapped back to object space; here, a depth value is assumed to be available for every pixel on the screen. (Different ways of generating the depth map are discussed in the next section.) From the initial seed position, a streamline is integrated and placed into a queue Q. All streamlines are required to keep a distance of d_sep away from each other on the image plane. To ensure this, the following steps are repeated until Q is empty:

1. Dequeue the oldest streamline in Q as the current streamline.
2. Select all possible seed points on the image plane at a distance d_sep away from the projection of the current streamline. For each projected sample point on the current streamline, there are two candidate positions for the seeds, one on each side of the streamline.

3. For each candidate seed, a new streamline is integrated for as long as possible, until it comes within the distance d_sep of other streamlines on the screen. The new streamline is then enqueued.
4. Go back to step 1.

The algorithm above is very similar to the one presented in [28], which works well for two-dimensional flow fields. For three-dimensional vector fields, however, the projection from object space to image space raises some issues that need to be addressed.

Perspective Projection

The algorithm in [28] approximates the distance between a seed point and the nearby streamlines using the distances from the seed point to the streamlines' sample points, i.e., the points computed at every step of streamline integration. These distances are compared with the desired distance threshold d_sep to make sure that streamlines are not too close to each other. For this approximation to be acceptable, the distance between the sample points along a streamline must be smaller than d_sep. In the algorithm, the streamline distance threshold d_sep is defined in image space. Since the integration step size is controlled in object space, the distance between the sample points along a streamline may be shortened or lengthened after perspective projection to the screen, which may violate the minimum d_sep requirement. To address this issue, for each integration step, the projected distance between two consecutive points on the streamline is computed in image space; if this distance is larger than d_sep, intermediate sample points on the streamline are generated by interpolation.
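The image-space seeding loop above can be sketched as follows. All helper names are assumptions: `trace_from_seed` stands for unprojecting a screen seed via the depth map and integrating in object space until the projected line violates the d_sep spacing; `candidate_seeds_along` offers points at distance d_sep on each side of a projected streamline; `too_close` tests screen-space spacing against the placed lines.

```python
from collections import deque

# High-level sketch of the image-space evenly-spaced seeding loop
# (assumed helper names; the spacing, projection, and depth handling
# live inside the callables passed in).
def place_streamlines(first_seed, trace_from_seed,
                      candidate_seeds_along, too_close):
    queue = deque([trace_from_seed(first_seed)])
    placed = [queue[0]]
    while queue:
        current = queue.popleft()                   # 1. dequeue oldest line
        for seed in candidate_seeds_along(current): # 2. candidates at d_sep
            if too_close(seed, placed):
                continue
            line = trace_from_seed(seed)            # 3. integrate new line
            if line is not None:
                placed.append(line)
                queue.append(line)                  # 4. repeat until empty
    return placed
```

The FIFO queue makes the streamlines grow outward from the first seed, the same breadth-first order as the two-dimensional algorithm in [28].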

Depth Comparison

Streamlines can overlap or intersect each other after being projected from object space to image space. In the two-dimensional evenly-spaced streamline algorithm [28], a new sample point on a streamline is invalid if it is within d_sep of existing streamlines or if it leaves the domain defining the flow field; in those cases, the streamline is terminated. In this algorithm, simply terminating a streamline when it comes too close to existing streamlines' projections on the image plane is not always desirable, because a streamline closer to the viewpoint should not be terminated by those far behind it. To deal with this, when a newly generated point of the current streamline is too close to an existing streamline, the algorithm first checks whether this point is behind that existing streamline. If so, the integration is terminated. Otherwise, it checks whether the streamline segment connected to this new point intersects the existing streamline on the image plane. If they intersect and the new segment is closer to the viewpoint, the intersected segment of the old streamline becomes invalid and is removed, and the integration of the current streamline continues; if they do not intersect, the integration of the current streamline simply continues. This ensures that a correct depth relationship between the streamlines is displayed.

Streamline Placement Strategies

Having described how to control the spacing between streamlines, this section discusses several strategies for placing streamlines on the image plane. Since the streamline integration is performed in object space by unprojecting the seeds back to object space, depth values for the screen pixels, i.e., a depth map, are needed. In

the algorithm, this depth map is generated by rendering objects derived from the input data set, which define the regions of interest to the user.

Implicit Stream Surfaces

Visualizing stream surfaces can be an effective way to explore flow fields, since streamlines always adhere to such a surface and the local flow direction is perpendicular to the surface normal. By visualizing different stream surfaces, the user can get a better understanding of the flow field's global structure. Showing only the stream surfaces, however, is not sufficient, since no information about the flow directions on the surface is displayed, as shown in the top images of Fig. To create a more effective visualization, a stream surface is first rendered, and the depth map from the rendered result is then used as the input to the algorithm to create well-organized streamlines. Generating stream surfaces requires computing a volumetric stream function. Previously, van Wijk [68] proposed a method to generate implicit stream functions by computing a backward streamline from every grid point in the volume and recording its intersection point with the domain boundary. If scalar values are assigned to the boundary, each grid point can be assigned the boundary value at the intersection point of its backward streamline, producing a stream function. Isosurfaces can then be extracted from this function to represent stream surfaces. He proposed to paint certain patterns on the boundary and observe how the patterns evolve as the flow travels from the boundary into the domain. Here, a new method is proposed to assign scalar values to the boundary based on preselected streamlines. The goal is to more clearly visualize the flows in the regions spanned by those streamlines. First, calculate the intersections of those streamlines with

the boundary. Then treat each intersection point on the boundary as the source of a potential function that emits energy to its surrounding area on the boundary. The energy distribution is a Gaussian function whose intensity falls off with distance to the source. For every grid point on the boundary, the energy contributions from all sources are summed, and the resulting scalar field on the boundary is used to create the implicit stream function. With this setup, the generated stream surfaces enclose the input streamlines in layers for different isovalues, and the image space method places streamlines on each of the stream surfaces to depict the flow directions. Fig. 6.2 shows two examples of stream surfaces with different isovalues and the streamlines generated using the proposed method. The data set was generated as part of a simulation that models the core collapse of a supernova.

Flow Topology Based Templates

A great deal of insight about a flow field can often be obtained by visualizing the topology of the field, which is defined by the critical points and the tangent curves or surfaces connecting them. With the topology information, the behavior of the flows, and to some extent the structure of the entire field, can be inferred. Different types of critical points characterize different flow patterns in their neighborhoods. Given a critical point, the eigenvalues and eigenvectors of its Jacobian matrix can be computed. The eigenvalues are used to classify the type of the critical point, and the eigenvectors to find its invariant manifold. Previously, Globus et al. [15] proposed to use three-dimensional glyphs to visualize the flow patterns around critical points. Ye et al. [77] proposed a template-based seeding strategy for visualizing three-dimensional flow fields. Briefly speaking, the method in [77] is to first identify and

classify critical points in the field. Then seeds are placed on pre-defined templates around the critical points. Finally, Poisson seeding is used to populate the empty regions. The main goal of this method is to reveal the flow patterns in the vicinity of critical points.

Figure 6.2: Streamlines generated on two different stream surfaces.

To incorporate this idea into the algorithm and highlight the flow topology, what is needed is a depth map that signifies the critical points. Solid objects are used as the templates, and rendering them generates the input depth map for seed placement. Based on the eigenvalues and eigenvectors, different templates are used for different types of critical points. Fig. 6.3 illustrates the templates for the following types of critical points. Note that seeds are not dropped directly on the solid object template

in object space, but on the image, by rendering those templates from a given view. Thus only the shape, orientation, size, and type of the templates matter, not how many seeds to drop or where to drop them.

Figure 6.3: Seeding templates for different types of critical points - left: repelling or attracting node; middle: attracting or repelling saddle and spiral saddle; right: attracting or repelling spiral (critical point classification image courtesy of Alex Pang).

Nodes: Critical points of this type are sources or sinks of streamlines. The template is a solid sphere centered at the position of the critical point, with its radius scaled by the real part of the eigenvalues.

Node Saddles: Two cones are used as the template for this type, pointing in opposite directions away from the local eigenplane spanned by the eigenvectors. The radius and height are scaled by the real part of the eigenvalues.

Spiral: Two cones are used as the template for this type, pointing toward each other from opposite sides of the local eigenplane spanned by the eigenvectors. The radius and height are scaled by the real part of the eigenvalues.

Spiral Saddles: The template for this type is the same as that of the Node Saddle.
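The eigenvalue test that selects among these templates can be sketched as below. The coarse taxonomy (signs of the real parts for node versus saddle, nonzero imaginary parts for spiralling) is a simplified reading of the classification described above, not the dissertation's exact implementation:

```python
import numpy as np

def classify_critical_point(jacobian):
    """Classify a 3D critical point from the eigenvalues of its Jacobian:
    mixed signs of the real parts indicate a saddle, and a nonzero
    imaginary part indicates spiralling."""
    ev = np.linalg.eigvals(np.asarray(jacobian, dtype=float))
    spiral = bool(np.any(np.abs(ev.imag) > 1e-9))
    if np.all(ev.real > 0):
        kind = "repelling"
    elif np.all(ev.real < 0):
        kind = "attracting"
    else:
        return "spiral saddle" if spiral else "node saddle"
    return kind + (" spiral" if spiral else " node")
```

For example, a Jacobian with eigenvalues -0.1 +/- i and -1 (all real parts negative, nonzero imaginary parts) classifies as an attracting spiral.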

For a three-dimensional flow field, more than one critical point may be present. In that case, each critical point has its corresponding template, and all templates are rendered together into the same image. The resulting depth map then has separate regions representing the different templates, from which seeds are dropped. Fig. 6.4 shows streamlines integrated from the depth map generated by rendering solid object templates for a synthetic flow field with four critical points: three sinks and one saddle.

Figure 6.4: Streamlines generated from critical point templates. Three sphere templates stand for sinks, while the two-cones template stands for a saddle.

Isosurfaces of Flow-Related Scalar Quantities

Many scalar variables are related to the properties of a flow field. For instance, vorticity magnitude can often reveal the degree of local rotation, while the Laplacian can show the second-order derivatives of the flow. As described in [59], these scalar quantities are often important for understanding flow fields even though

they are not necessarily directly related to the flow directions. When exploring a flow field, one can first generate images from isosurfaces of those variables. When users find interesting features on an isosurface, they can use the image space method to drop seeds on the screen directly, enriching the image and highlighting the correlations between the scalar variable and the flow directions. Fig. 6.5 shows an example of streamlines generated from an isosurface of velocity magnitude using the algorithm. The data set is from a simulated flow field of thermal downflow plumes in the solar surface layer.

Figure 6.5: (a) An isosurface of velocity magnitude colored by using the velocity (u,v,w) as (r,g,b). (b) Streamlines generated from the isosurface.

Slicing Planes

One effective way to visualize volume data, particularly for regions that are easily occluded, is to slice through the volume and visualize only the data on the slice plane.

Although slicing planes are used frequently for visualizing three-dimensional scalar fields, they are used less often for vector field visualization. One of the primary reasons is that visualizing only the vectors on the plane does not reveal enough about the global flow directions, while visualizing a large number of streamlines seeded from a slice plane can easily clutter the scene. With the image space method, visual clarity can be enhanced by first rendering selected slice planes to the screen, colored with optional scalar or vector attributes, and then dropping seeds on the planes to compute the streamlines. With the spacing control mechanism, it is feasible to control the depth complexity and show only the outer layer of the streamlines originating from the slice. Fig. 6.6 shows an example of streamlines computed from seeds dropped on a slicing plane. Note that the streamlines are computed in 3D space rather than constrained to the slice.

Figure 6.6: (a) A slicing plane colored by using the velocity (u,v,w) as (r,g,b). (b) Streamlines generated from the slicing plane.
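Seeds dropped on a rendered slice plane, like every other depth-map source in this section, are unprojected back to object space before integration. A minimal sketch of that unprojection, assuming a 4x4 model-view-projection matrix and a [0,1] depth range (both conventions chosen for illustration):

```python
import numpy as np

def unproject(px, py, depth, inv_mvp, viewport=(512, 512)):
    """Map a screen pixel and its depth-map value back to an object-space
    seed position by inverting the model-view-projection transform."""
    ndc = np.array([2.0 * px / viewport[0] - 1.0,
                    2.0 * py / viewport[1] - 1.0,
                    2.0 * depth - 1.0,
                    1.0])
    q = inv_mvp @ ndc
    return q[:3] / q[3]                      # homogeneous divide
```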

External Objects

Another application of the image space method is to drop seeds on the surface of a user-selected object. This external object can be thought of as a three-dimensional rake [25] from which streamline seeds are emitted. While widgets have previously been proposed as seed placement tools, the seeds were explicitly placed on the surface of the widget, which requires an explicit discretization of the rake surface to determine the seed positions. In the image space method, all that is needed is a two-dimensional depth map from the rendering of the object. The seed density is determined in image space and thus adapts easily to the image resolution. Fig. 6.7 shows streamlines computed from seeds on the surface of a cylinder. Note that in this image the depth cue is enhanced by mapping the computed streamlines with a texture that emphasizes their outlines. Details about the rendering are described in section

Figure 6.7: (a) The cylinder as an external object. (b) Streamlines generated from the cylinder.

6.2.3 Additional Run Time Control

This section describes several additional controls and effects that can be achieved with the algorithm.

Level of Detail Rendering

In computer graphics, level of detail (LOD) is commonly used to avoid unnecessary rendering work for objects whose details are too small to be seen on the screen. Rendering low-resolution data can also reduce rendering artifacts when the screen resolution is too low to sample high-frequency detail; one such example is the texture mip-mapping supported by OpenGL. For streamline visualization, to improve clarity, Jobard and Lefer [29] proposed computing a sequence of streamlines with different densities, while Mebarki et al. [45] proposed elongating all previously generated streamlines before placing new ones when the density is increased. The idea of LOD can be adopted here by adjusting the number of streamlines displayed according to the projected size of the domain on the screen. To achieve this, a constant screen-space spacing between streamlines is used. As the user zooms out of the scene, the screen projection area of the domain becomes smaller, so fewer streamlines are generated in the attempt to maintain the constant distance between streamlines. As the user zooms in, a larger projection area of the domain is displayed, so more streamlines are generated and displayed. Fig. 6.8 shows an example of LOD streamlines generated at different zoom scales.
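A back-of-the-envelope sketch, not part of the algorithm itself, of why a fixed screen-space spacing produces this LOD behaviour: the projected area divided by d_sep squared roughly bounds how many evenly-spaced streamline samples fit, so the count shrinks as the user zooms out:

```python
def streamline_sample_budget(projected_area_px, d_sep_px):
    """Rough upper bound on how many streamline samples fit in the domain's
    screen projection: a fixed screen-space separation d_sep carves the
    projected area into cells of roughly d_sep^2 pixels each."""
    return max(1, int(projected_area_px / float(d_sep_px) ** 2))
```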

Figure 6.8: Level of detail streamlines generated at three different scales.

As the field is projected to a larger area, more streamlines are generated, which better reveal the flow features.

Temporal Coherence

When the user zooms in or out of the scene, or rotates it, the projection of the surface changes. Re-running the algorithm to generate a completely new set of streamlines whenever such changes occur can cause unwanted flickering and other distracting artifacts. To avoid this, temporal coherence must be maintained between the streamlines generated in consecutive frames. When the user zooms into the surface, the projection area becomes larger. The streamlines from the previous projection are retained and placed into the queue as the initial set of streamlines (see section 6.2.1); these streamlines are first elongated before new ones are generated. Some sample points along the streamlines may leave the view frustum and thus become invalid. When the user zooms out, the sample points along the streamlines from the previous frame are first verified, and those that are too close to other streamlines under the new projection are invalidated. After this, new streamlines

will be added to fill any holes. Rotation involves elongation, validation, and insertion of new lines, similar to the zoom operations.

Figure 6.9: Streamlines computed using different offsets from a depth map generated by a sphere. (a) No offset from the original depth map. (b) Increasing an offset value from the original depth map. (c) Further increasing the offset. (d) Decreasing an offset value from the original depth map.

Layered Display of Streamlines

To improve the clarity of visualization, it is sometimes necessary to reduce the rendered streamlines to a few depth layers. One technique related to controlling the depth of rendered scenes is depth peeling [11] for polygonal models. However,

depth peeling for lines is not well defined, since lines cannot form effective occluders: the space between them is not occupied. The image space method lends itself well to effective depth control and peeling, because seeds are placed on top of the depth map on the image plane. The user can peel into the flows by gradually increasing or decreasing an offset δz applied to the original depth map before dropping the initial seeds and generating streamlines. Fig. 6.9 shows examples where streamlines are computed using different offsets from a depth map generated by a sphere.

Figure 6.10: An example of peeling away one layer of streamlines by not allowing them to integrate beyond a fixed distance from the input depth map.

The display of streamlines can also be controlled by constraining them to integrate within +/- δz of the input depth map, which effectively controls the depth complexity of the rendered scene. This essentially creates clipping surfaces to remove streamlines outside the allowed depth range; these surfaces conform to the shape of the initial depth map and are thus more flexible than traditional planar clipping planes. Fig. 6.10 shows an example of opening up

a portion of the streamlines in the middle section by not allowing streamlines to go beyond a small δz from the input depth map.

Generating Streamlines From Multiple Views

Figure 6.11: First row: rendered images of a stream surface from different viewpoints. Second row: streamlines generated at the corresponding viewpoints. Third row: the combined images of streamlines rendered from four different views.

It can sometimes be beneficial to combine the streamlines generated from multiple views and display them together. For each individual view, the spacing constraints are still enforced, i.e., streamlines are not allowed to come too close to each other. When combining the streamlines from multiple views, however, there is no constraint

enforced. The motivation is that even though the projections of streamlines from different views may intersect and overlap in image space, as long as the depth complexity in each view and the number of combined views are well controlled, the combined streamlines can enhance the depth perception of the scene. In the algorithm, the selection of different views is done by the user. Given an object displayed on the screen, a stream surface for example, the user can rotate the surface, identify a good view, and place streamlines based on the current view using the image space algorithm. The user can then rotate the scene again to reveal regions that were invisible in the previous view and place more streamlines. When the scene starts getting too cluttered, the accumulation of streamlines can be stopped. Fig. 6.11 illustrates this process with images from four different views and the combined result. Another strategy for combining streamlines is to keep the current camera view but move a probing object to different locations. For example, the user can use a cylinder to probe the flow field and place streamlines from the projection of the cylinder surface, keeping the camera fixed while changing the cylinder's location to gradually populate the scene, until the image reveals enough about the flow field without becoming overly crowded. Fig. 6.12 shows an example of combining the streamlines generated from three different cylinder probe locations.

Importance Driven Streamline Placement

To distinguish regions of different importance, different spacing thresholds can be used when placing the streamlines. For instance, more streamlines can be placed in regions with higher velocity magnitudes and fewer elsewhere. To achieve this, the algorithm takes an importance map as input, which can

be generated by evaluating any function on the flow field, such as velocity or vorticity magnitude. At every step of streamline integration, the point is projected to the screen and the importance value at the corresponding screen position is retrieved. After mapping the importance value to a streamline separation distance, the algorithm decides whether to continue or terminate the integration of the current streamline. Different transfer functions can be used to map importance values to streamline distance thresholds; the implementation uses the bias and gain functions proposed by Perlin and Hoffert [47] to create non-linear mapping effects. Fig. 6.13 shows an example of using velocity magnitude as the importance value to determine the streamline density on a slice plane, where more streamlines are placed in regions with higher velocity magnitudes.

Figure 6.12: Streamlines generated from three different cylinder locations (left three images) are combined and rendered into the image on the right.
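The bias curve from Perlin and Hoffert can be sketched as below and used to shape the importance-to-spacing mapping; the d_min, d_max, and bias parameter values are illustrative choices, not values taken from the dissertation:

```python
import math

def bias(t, b):
    """Perlin-Hoffert bias: a power curve through (0,0) and (1,1) with
    bias(0.5, b) == b, used here to create a non-linear remapping."""
    return t ** (math.log(b) / math.log(0.5))

def importance_to_d_sep(importance, d_min=4.0, d_max=24.0, b=0.3):
    """Map a normalized importance value to a screen-space separation
    threshold: high importance gives a small d_sep, i.e., denser
    streamlines in important regions."""
    t = bias(min(max(importance, 0.0), 1.0), b)
    return d_max - t * (d_max - d_min)
```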

Figure 6.13: Streamline densities are controlled by velocity magnitude on a slice. (a) Larger velocity magnitudes are displayed in brighter colors. (b) The streamlines generated from the slice.

Stylish Drawing

One advantage of the image-based streamline placement algorithm is that streamlines are well spaced out on the screen under user control. With the spacing controlled, it becomes much easier to draw patches of desired widths along the streamlines on the screen to enhance their visualization, since overlap between the stream patches is easy to avoid. To compute a stream patch, first compute the screen projection of the streamline as the skeleton. Then extend the width of the patch along the direction perpendicular to the streamline's local tangent on the screen. The width of the stream patch is controlled by the local spacing of the streamlines, which is determined by the image-based algorithm. A variety of textures can be mapped onto the stream patches to enhance depth cues and simulate different rendering styles, and the width and transparency of the patches can be varied based on local flow properties. Fig. 6.14 shows three examples of stylish drawing of streamlines using different textures.
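The perpendicular extrusion of a stream patch from its screen-space skeleton can be sketched as follows (the clamped-neighbour tangent estimate is an illustrative choice):

```python
import numpy as np

def extrude_stream_patch(polyline, half_width):
    """Extrude a screen-space streamline polyline into a patch: every
    vertex is offset to both sides along the normal of the local tangent,
    giving the left and right borders of a quad strip."""
    pts = np.asarray(polyline, dtype=float)
    left, right = [], []
    for i, p in enumerate(pts):
        a = pts[max(i - 1, 0)]               # backward neighbour (clamped)
        b = pts[min(i + 1, len(pts) - 1)]    # forward neighbour (clamped)
        t = (b - a) / np.linalg.norm(b - a)  # local tangent direction
        n = np.array([-t[1], t[0]])          # screen-space normal
        left.append(p + half_width * n)
        right.append(p - half_width * n)
    return np.array(left), np.array(right)
```

The two returned borders can then be triangulated into a strip and texture-mapped.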

Figure 6.14: Streamlines generated and rendered with three different styles by the image-based algorithm.

6.3 Results

The algorithm was tested on a PC with an Intel Pentium M 2 GHz processor, 2 GB of memory, and an NVIDIA 6800 graphics card with 256 MB of video memory. Two synthetic data sets (Fig. 6.4, 6.9) and two 3D flow simulation data sets (Plume and TSI) were used to test the algorithm and generate the images shown throughout this chapter. The Plume data set (Fig. 6.5, 6.6, 6.7, 6.8, 6.10, 6.12, 6.13, 6.14) is a three-dimensional turbulent flow field with dimensions of 126x126x512. The original data set is time-varying and models turbulence in a solar simulation performed by scientists at the National Center for Atmospheric Research. The TSI data set (Fig. 6.2, 6.11) is a three-dimensional flow field with dimensions of 200x200x200; it models the core collapse of a supernova and was generated through a collaboration between Oak Ridge National Laboratory and eight universities. A few time steps of each data set were used in the tests. When running the algorithm, the user can control the streamline density by specifying different separating distances in screen space. The coverage of the visualized objects in the input depth map affects the generation of streamlines: the larger the

area is, the more streamlines are generated, compared with a smaller area at the same separating distance. In the tests, the scene was zoomed in so that the geometries producing the depth map covered as much of the screen as possible. A fourth-order Runge-Kutta integrator with a constant step size was used to compute the streamlines. The performance of the algorithm was measured using the Plume data set. The main steps include the transformation of streamline points between object and image space, streamline integration, seed point selection, and validation of streamline points. Since longer streamlines were generally preferred, streamlines that were too short were discarded. In the experiments, the threshold for the minimum streamline length was 20 and the fixed integration step size was 1.0, both in voxels; for a streamline to be accepted, it must therefore have at least 20 integration points. The larger this threshold, the higher the probability that a streamline generated by the distance control algorithm is discarded, and thus the higher the percentage of total computation time wasted on generating short streamlines. Fig. 6.15 shows the percentage of time spent on each of the main steps of the algorithm; streamline integration is the most time-consuming part. Fig. 6.16 shows the number of streamlines and line segments generated with different distance thresholds. Fig. 6.17 shows the timings for generating streamlines with different separating distances, which directly influence the number of streamlines computed. Throughout this chapter, all images of streamlines are rendered with stylish drawing; the average time to render one line segment is about ms.
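The integration setup named here, fourth-order Runge-Kutta with a constant step size plus the minimum-length filter, can be sketched as follows (the domain test, step counts, and the example vector field are illustrative):

```python
import numpy as np

def rk4_step(v, p, h):
    """One fourth-order Runge-Kutta step through the vector field v with a
    constant step size h."""
    k1 = v(p)
    k2 = v(p + 0.5 * h * k1)
    k3 = v(p + 0.5 * h * k2)
    k4 = v(p + h * k3)
    return p + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

def integrate_streamline(v, seed, in_domain, h=1.0, max_steps=50, min_pts=20):
    """Integrate until the streamline leaves the domain or max_steps is
    reached; streamlines shorter than the minimum accepted length (20
    integration points in the experiments) are discarded."""
    pts = [np.asarray(seed, dtype=float)]
    for _ in range(max_steps):
        nxt = rk4_step(v, pts[-1], h)
        if not in_domain(nxt):
            break
        pts.append(nxt)
    return pts if len(pts) >= min_pts else None
```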

Figure 6.15: The percentage of total time each main step used.

Figure 6.16: The pink curve (left axis) shows the number of streamlines, while the blue curve (right axis) shows the number of line segments generated.

Figure 6.17: The time (in seconds) to generate streamlines from an isosurface for different separating distances (in pixels), using the Plume data set.


More information

Scientific Visualization

Scientific Visualization Scientific Visualization Dr. Ronald Peikert Summer 2007 Ronald Peikert SciVis 2007 - Introduction 1-1 Introduction to Scientific Visualization Ronald Peikert SciVis 2007 - Introduction 1-2 What is Scientific

More information

Surface Reconstruction. Gianpaolo Palma

Surface Reconstruction. Gianpaolo Palma Surface Reconstruction Gianpaolo Palma Surface reconstruction Input Point cloud With or without normals Examples: multi-view stereo, union of range scan vertices Range scans Each scan is a triangular mesh

More information

CIS 467/602-01: Data Visualization

CIS 467/602-01: Data Visualization CIS 467/602-01: Data Visualization Vector Field Visualization Dr. David Koop Fields Tables Networks & Trees Fields Geometry Clusters, Sets, Lists Items Items (nodes) Grids Items Items Attributes Links

More information

Interactive 3D Flow Visualization Based on Textures and Geometric Primitives

Interactive 3D Flow Visualization Based on Textures and Geometric Primitives Interactive 3D Flow Visualization Based on Textures and Geometric Primitives Robert S. Laramee and Helwig Hauser www.vrvis.at 1 SUMMARY As the size of CFD simulation data sets expand, the job of the engineer

More information

An Introduction to Flow Visualization (1) Christoph Garth

An Introduction to Flow Visualization (1) Christoph Garth An Introduction to Flow Visualization (1) Christoph Garth cgarth@ucdavis.edu Motivation What will I be talking about? Classical: Physical experiments to understand flow. 2 Motivation What will I be talking

More information

Flow Web: A Graph Based User Interface for 3D Flow Field Exploration

Flow Web: A Graph Based User Interface for 3D Flow Field Exploration Flow Web: A Graph Based User Interface for 3D Flow Field Exploration Lijie Xu and Han-Wei Shen Ohio State University, 395 Dreese Laboratories 2015 Neil Avenue, Columbus Ohio, USA ABSTRACT While there have

More information

Vector Visualization Chap. 6 March 7, 2013 March 26, Jie Zhang Copyright

Vector Visualization Chap. 6 March 7, 2013 March 26, Jie Zhang Copyright ector isualization Chap. 6 March 7, 2013 March 26, 2013 Jie Zhang Copyright CDS 301 Spring, 2013 Outline 6.1. Divergence and orticity 6.2. ector Glyphs 6.3. ector Color Coding 6.4. Displacement Plots (skip)

More information

Scientific Visualization Example exam questions with commented answers

Scientific Visualization Example exam questions with commented answers Scientific Visualization Example exam questions with commented answers The theoretical part of this course is evaluated by means of a multiple- choice exam. The questions cover the material mentioned during

More information

Volume Illumination & Vector Field Visualisation

Volume Illumination & Vector Field Visualisation Volume Illumination & Vector Field Visualisation Visualisation Lecture 11 Institute for Perception, Action & Behaviour School of Informatics Volume Illumination & Vector Vis. 1 Previously : Volume Rendering

More information

Range Imaging Through Triangulation. Range Imaging Through Triangulation. Range Imaging Through Triangulation. Range Imaging Through Triangulation

Range Imaging Through Triangulation. Range Imaging Through Triangulation. Range Imaging Through Triangulation. Range Imaging Through Triangulation Obviously, this is a very slow process and not suitable for dynamic scenes. To speed things up, we can use a laser that projects a vertical line of light onto the scene. This laser rotates around its vertical

More information

Biometrics Technology: Image Processing & Pattern Recognition (by Dr. Dickson Tong)

Biometrics Technology: Image Processing & Pattern Recognition (by Dr. Dickson Tong) Biometrics Technology: Image Processing & Pattern Recognition (by Dr. Dickson Tong) References: [1] http://homepages.inf.ed.ac.uk/rbf/hipr2/index.htm [2] http://www.cs.wisc.edu/~dyer/cs540/notes/vision.html

More information

Function Based 2D Flow Animation

Function Based 2D Flow Animation VISUAL 2000: MEXICO CITY SEPTEMBER 18-22 100 Function Based 2D Flow Animation Ergun Akleman, Sajan Skaria, Jeff S. Haberl Abstract This paper summarizes a function-based approach to create 2D flow animations.

More information

INTERACTIVE FOCUS+CONTEXT GLYPH AND STREAMLINE VECTOR VISUALIZATION

INTERACTIVE FOCUS+CONTEXT GLYPH AND STREAMLINE VECTOR VISUALIZATION INTERACTIVE FOCUS+CONTEXT GLYPH AND STREAMLINE VECTOR VISUALIZATION by Joshua Joseph Anghel A thesis submitted in partial fulfillment of the requirements for the degree of Master of Science in Computer

More information

A Texture-Based Framework for Spacetime-Coherent Visualization of Time-Dependent Vector Fields

A Texture-Based Framework for Spacetime-Coherent Visualization of Time-Dependent Vector Fields A Texture-Based Framework for Spacetime-Coherent Visualization of Time-Dependent Vector Fields Daniel Weiskopf 1 Gordon Erlebacher 2 Thomas Ertl 1 1 Institute of Visualization and Interactive Systems,

More information

Scaling the Topology of Symmetric, Second-Order Planar Tensor Fields

Scaling the Topology of Symmetric, Second-Order Planar Tensor Fields Scaling the Topology of Symmetric, Second-Order Planar Tensor Fields Xavier Tricoche, Gerik Scheuermann, and Hans Hagen University of Kaiserslautern, P.O. Box 3049, 67653 Kaiserslautern, Germany E-mail:

More information

Volume Rendering. Lecture 21

Volume Rendering. Lecture 21 Volume Rendering Lecture 21 Acknowledgements These slides are collected from many sources. A particularly valuable source is the IEEE Visualization conference tutorials. Sources from: Roger Crawfis, Klaus

More information

Chapter 1 - Basic Equations

Chapter 1 - Basic Equations 2.20 Marine Hydrodynamics, Fall 2017 Lecture 2 Copyright c 2017 MIT - Department of Mechanical Engineering, All rights reserved. 2.20 Marine Hydrodynamics Lecture 2 Chapter 1 - Basic Equations 1.1 Description

More information

ADAPTIVE TILE CODING METHODS FOR THE GENERALIZATION OF VALUE FUNCTIONS IN THE RL STATE SPACE A THESIS SUBMITTED TO THE FACULTY OF THE GRADUATE SCHOOL

ADAPTIVE TILE CODING METHODS FOR THE GENERALIZATION OF VALUE FUNCTIONS IN THE RL STATE SPACE A THESIS SUBMITTED TO THE FACULTY OF THE GRADUATE SCHOOL ADAPTIVE TILE CODING METHODS FOR THE GENERALIZATION OF VALUE FUNCTIONS IN THE RL STATE SPACE A THESIS SUBMITTED TO THE FACULTY OF THE GRADUATE SCHOOL OF THE UNIVERSITY OF MINNESOTA BY BHARAT SIGINAM IN

More information

Particle tracing in curvilinear grids. David M. Reed, Lawson Wade, Peter G. Carswell, Wayne E. Carlson

Particle tracing in curvilinear grids. David M. Reed, Lawson Wade, Peter G. Carswell, Wayne E. Carlson Particle tracing in curvilinear grids David M. Reed, Lawson Wade, Peter G. Carswell, Wayne E. Carlson Advanced Computing Center for the Arts and Design and Department of Computer and Information Science

More information

1.2 Numerical Solutions of Flow Problems

1.2 Numerical Solutions of Flow Problems 1.2 Numerical Solutions of Flow Problems DIFFERENTIAL EQUATIONS OF MOTION FOR A SIMPLIFIED FLOW PROBLEM Continuity equation for incompressible flow: 0 Momentum (Navier-Stokes) equations for a Newtonian

More information

This research aims to present a new way of visualizing multi-dimensional data using generalized scatterplots by sensitivity coefficients to highlight

This research aims to present a new way of visualizing multi-dimensional data using generalized scatterplots by sensitivity coefficients to highlight This research aims to present a new way of visualizing multi-dimensional data using generalized scatterplots by sensitivity coefficients to highlight local variation of one variable with respect to another.

More information

Tutorial 2. Modeling Periodic Flow and Heat Transfer

Tutorial 2. Modeling Periodic Flow and Heat Transfer Tutorial 2. Modeling Periodic Flow and Heat Transfer Introduction: Many industrial applications, such as steam generation in a boiler or air cooling in the coil of an air conditioner, can be modeled as

More information

Scalar Algorithms: Contouring

Scalar Algorithms: Contouring Scalar Algorithms: Contouring Computer Animation and Visualisation Lecture tkomura@inf.ed.ac.uk Institute for Perception, Action & Behaviour School of Informatics Contouring Scaler Data Last Lecture...

More information

Computer Graphics Ray Casting. Matthias Teschner

Computer Graphics Ray Casting. Matthias Teschner Computer Graphics Ray Casting Matthias Teschner Outline Context Implicit surfaces Parametric surfaces Combined objects Triangles Axis-aligned boxes Iso-surfaces in grids Summary University of Freiburg

More information

CS205b/CME306. Lecture 9

CS205b/CME306. Lecture 9 CS205b/CME306 Lecture 9 1 Convection Supplementary Reading: Osher and Fedkiw, Sections 3.3 and 3.5; Leveque, Sections 6.7, 8.3, 10.2, 10.4. For a reference on Newton polynomial interpolation via divided

More information

Realtime Water Simulation on GPU. Nuttapong Chentanez NVIDIA Research

Realtime Water Simulation on GPU. Nuttapong Chentanez NVIDIA Research 1 Realtime Water Simulation on GPU Nuttapong Chentanez NVIDIA Research 2 3 Overview Approaches to realtime water simulation Hybrid shallow water solver + particles Hybrid 3D tall cell water solver + particles

More information

Data Visualization (CIS/DSC 468)

Data Visualization (CIS/DSC 468) Data Visualization (CIS/DSC 468) Vector Visualization Dr. David Koop Visualizing Volume (3D) Data 2D visualization slice images (or multi-planar reformating MPR) Indirect 3D visualization isosurfaces (or

More information

Let s start with occluding contours (or interior and exterior silhouettes), and look at image-space algorithms. A very simple technique is to render

Let s start with occluding contours (or interior and exterior silhouettes), and look at image-space algorithms. A very simple technique is to render 1 There are two major classes of algorithms for extracting most kinds of lines from 3D meshes. First, there are image-space algorithms that render something (such as a depth map or cosine-shaded model),

More information

CFD MODELING FOR PNEUMATIC CONVEYING

CFD MODELING FOR PNEUMATIC CONVEYING CFD MODELING FOR PNEUMATIC CONVEYING Arvind Kumar 1, D.R. Kaushal 2, Navneet Kumar 3 1 Associate Professor YMCAUST, Faridabad 2 Associate Professor, IIT, Delhi 3 Research Scholar IIT, Delhi e-mail: arvindeem@yahoo.co.in

More information

Volume Rendering. Computer Animation and Visualisation Lecture 9. Taku Komura. Institute for Perception, Action & Behaviour School of Informatics

Volume Rendering. Computer Animation and Visualisation Lecture 9. Taku Komura. Institute for Perception, Action & Behaviour School of Informatics Volume Rendering Computer Animation and Visualisation Lecture 9 Taku Komura Institute for Perception, Action & Behaviour School of Informatics Volume Rendering 1 Volume Data Usually, a data uniformly distributed

More information

Isosurface Rendering. CSC 7443: Scientific Information Visualization

Isosurface Rendering. CSC 7443: Scientific Information Visualization Isosurface Rendering What is Isosurfacing? An isosurface is the 3D surface representing the locations of a constant scalar value within a volume A surface with the same scalar field value Isosurfaces form

More information

Halftoning and quasi-monte Carlo

Halftoning and quasi-monte Carlo Halftoning and quasi-monte Carlo Ken Hanson CCS-2, Methods for Advanced Scientific Simulations Los Alamos National Laboratory This presentation available at http://www.lanl.gov/home/kmh/ LA-UR-04-1854

More information

1 Mathematical Concepts

1 Mathematical Concepts 1 Mathematical Concepts Mathematics is the language of geophysical fluid dynamics. Thus, in order to interpret and communicate the motions of the atmosphere and oceans. While a thorough discussion of the

More information

FLUENT Secondary flow in a teacup Author: John M. Cimbala, Penn State University Latest revision: 26 January 2016

FLUENT Secondary flow in a teacup Author: John M. Cimbala, Penn State University Latest revision: 26 January 2016 FLUENT Secondary flow in a teacup Author: John M. Cimbala, Penn State University Latest revision: 26 January 2016 Note: These instructions are based on an older version of FLUENT, and some of the instructions

More information

Data Partitioning. Figure 1-31: Communication Topologies. Regular Partitions

Data Partitioning. Figure 1-31: Communication Topologies. Regular Partitions Data In single-program multiple-data (SPMD) parallel programs, global data is partitioned, with a portion of the data assigned to each processing node. Issues relevant to choosing a partitioning strategy

More information

mjb March 9, 2015 Chuck Evans

mjb March 9, 2015 Chuck Evans Vector Visualization What is a Vector Visualization Problem? A vector has direction and magnitude. Typically science and engineering problems that work this way are those involving fluid flow through a

More information

A Level-Set Method for Flow Visualization

A Level-Set Method for Flow Visualization A Level-Set Method for Flow Visualization Rüdiger Westermann, Christopher Johnson, and Thomas Ertl Scientific Computing and Visualization Group, University of Technology Aachen Scientific Computing and

More information

Fast marching methods

Fast marching methods 1 Fast marching methods Lecture 3 Alexander & Michael Bronstein tosca.cs.technion.ac.il/book Numerical geometry of non-rigid shapes Stanford University, Winter 2009 Metric discretization 2 Approach I:

More information

Abstract. Introduction. Kevin Todisco

Abstract. Introduction. Kevin Todisco - Kevin Todisco Figure 1: A large scale example of the simulation. The leftmost image shows the beginning of the test case, and shows how the fluid refracts the environment around it. The middle image

More information

Motion Analysis. Motion analysis. Now we will talk about. Differential Motion Analysis. Motion analysis. Difference Pictures

Motion Analysis. Motion analysis. Now we will talk about. Differential Motion Analysis. Motion analysis. Difference Pictures Now we will talk about Motion Analysis Motion analysis Motion analysis is dealing with three main groups of motionrelated problems: Motion detection Moving object detection and location. Derivation of

More information

Praktikum 2014 Parallele Programmierung Universität Hamburg Dept. Informatics / Scientific Computing. October 23, FluidSim.

Praktikum 2014 Parallele Programmierung Universität Hamburg Dept. Informatics / Scientific Computing. October 23, FluidSim. Praktikum 2014 Parallele Programmierung Universität Hamburg Dept. Informatics / Scientific Computing October 23, 2014 Paul Bienkowski Author 2bienkow@informatik.uni-hamburg.de Dr. Julian Kunkel Supervisor

More information

Lecture 10: Semantic Segmentation and Clustering

Lecture 10: Semantic Segmentation and Clustering Lecture 10: Semantic Segmentation and Clustering Vineet Kosaraju, Davy Ragland, Adrien Truong, Effie Nehoran, Maneekwan Toyungyernsub Department of Computer Science Stanford University Stanford, CA 94305

More information

Texture Mapping using Surface Flattening via Multi-Dimensional Scaling

Texture Mapping using Surface Flattening via Multi-Dimensional Scaling Texture Mapping using Surface Flattening via Multi-Dimensional Scaling Gil Zigelman Ron Kimmel Department of Computer Science, Technion, Haifa 32000, Israel and Nahum Kiryati Department of Electrical Engineering

More information

Iterative Estimation of 3D Transformations for Object Alignment

Iterative Estimation of 3D Transformations for Object Alignment Iterative Estimation of 3D Transformations for Object Alignment Tao Wang and Anup Basu Department of Computing Science, Univ. of Alberta, Edmonton, AB T6G 2E8, Canada Abstract. An Iterative Estimation

More information

The State of the Art in Flow Visualization, part 1: Direct, Texture-based, and Geometric Techniques

The State of the Art in Flow Visualization, part 1: Direct, Texture-based, and Geometric Techniques Volume 22 (2003), Number 2, yet unknown pages The State of the Art in Flow Visualization, part 1: Direct, Texture-based, and Geometric Techniques Helwig Hauser, Robert S. Laramee, Helmut Doleisch, Frits

More information

1. Interpreting the Results: Visualization 1

1. Interpreting the Results: Visualization 1 1. Interpreting the Results: Visualization 1 visual/graphical/optical representation of large sets of data: data from experiments or measurements: satellite images, tomography in medicine, microsopy,...

More information

Data Representation in Visualisation

Data Representation in Visualisation Data Representation in Visualisation Visualisation Lecture 4 Taku Komura Institute for Perception, Action & Behaviour School of Informatics Taku Komura Data Representation 1 Data Representation We have

More information

Surfaces and Integral Curves

Surfaces and Integral Curves MODULE 1: MATHEMATICAL PRELIMINARIES 16 Lecture 3 Surfaces and Integral Curves In Lecture 3, we recall some geometrical concepts that are essential for understanding the nature of solutions of partial

More information

MAE 3130: Fluid Mechanics Lecture 5: Fluid Kinematics Spring Dr. Jason Roney Mechanical and Aerospace Engineering

MAE 3130: Fluid Mechanics Lecture 5: Fluid Kinematics Spring Dr. Jason Roney Mechanical and Aerospace Engineering MAE 3130: Fluid Mechanics Lecture 5: Fluid Kinematics Spring 2003 Dr. Jason Roney Mechanical and Aerospace Engineering Outline Introduction Velocity Field Acceleration Field Control Volume and System Representation

More information

Stream Hulls: A 3D Visualization Technique for Chaotic Dynamical Systems

Stream Hulls: A 3D Visualization Technique for Chaotic Dynamical Systems Stream Hulls: A 3D Visualization Technique for Chaotic Dynamical Systems Kenny Gruchalla Elizabeth Bradley Department of Computer Science, University of Colorado at Boulder, Boulder, Colorado 80309 gruchall@cs.colorado.edu

More information

2D rendering takes a photo of the 2D scene with a virtual camera that selects an axis aligned rectangle from the scene. The photograph is placed into

2D rendering takes a photo of the 2D scene with a virtual camera that selects an axis aligned rectangle from the scene. The photograph is placed into 2D rendering takes a photo of the 2D scene with a virtual camera that selects an axis aligned rectangle from the scene. The photograph is placed into the viewport of the current application window. A pixel

More information

MSA220 - Statistical Learning for Big Data

MSA220 - Statistical Learning for Big Data MSA220 - Statistical Learning for Big Data Lecture 13 Rebecka Jörnsten Mathematical Sciences University of Gothenburg and Chalmers University of Technology Clustering Explorative analysis - finding groups

More information

DiFi: Distance Fields - Fast Computation Using Graphics Hardware

DiFi: Distance Fields - Fast Computation Using Graphics Hardware DiFi: Distance Fields - Fast Computation Using Graphics Hardware Avneesh Sud Dinesh Manocha UNC-Chapel Hill http://gamma.cs.unc.edu/difi Distance Fields Distance Function For a site a scalar function f:r

More information

Feature Descriptors. CS 510 Lecture #21 April 29 th, 2013

Feature Descriptors. CS 510 Lecture #21 April 29 th, 2013 Feature Descriptors CS 510 Lecture #21 April 29 th, 2013 Programming Assignment #4 Due two weeks from today Any questions? How is it going? Where are we? We have two umbrella schemes for object recognition

More information

GPUFLIC: Interactive and Accurate Dense Visualization of Unsteady Flows

GPUFLIC: Interactive and Accurate Dense Visualization of Unsteady Flows Eurographics/ IEEE-VGTC Symposium on Visualization (2006) Thomas Ertl, Ken Joy, and Beatriz Santos (Editors) GPUFLIC: Interactive and Accurate Dense Visualization of Unsteady Flows Guo-Shi Li 1 and Xavier

More information

A DISSERTATION SUBMITTED TO THE FACULTY OF THE GRADUATE SCHOOL OF THE UNIVERSITY OF MINNESOTA BY. Sheng-Wen Wang

A DISSERTATION SUBMITTED TO THE FACULTY OF THE GRADUATE SCHOOL OF THE UNIVERSITY OF MINNESOTA BY. Sheng-Wen Wang Effectively Identifying and Segmenting Individual Vortices in 3D Turbulent Flow A DISSERTATION SUBMITTED TO THE FACULTY OF THE GRADUATE SCHOOL OF THE UNIVERSITY OF MINNESOTA BY Sheng-Wen Wang IN PARTIAL

More information

coding of various parts showing different features, the possibility of rotation or of hiding covering parts of the object's surface to gain an insight

coding of various parts showing different features, the possibility of rotation or of hiding covering parts of the object's surface to gain an insight Three-Dimensional Object Reconstruction from Layered Spatial Data Michael Dangl and Robert Sablatnig Vienna University of Technology, Institute of Computer Aided Automation, Pattern Recognition and Image

More information

Interactive 3D Flow Visualization Based on Textures and Geometric Primitives

Interactive 3D Flow Visualization Based on Textures and Geometric Primitives Interactive 3D Flow Visualization Based on Textures and Geometric Primitives Robert S. Laramee and Helwig Hauser www.vrvis.at September 14, 2004 Abstract As the size of CFD simulation data sets expand,

More information

2D image segmentation based on spatial coherence

2D image segmentation based on spatial coherence 2D image segmentation based on spatial coherence Václav Hlaváč Czech Technical University in Prague Center for Machine Perception (bridging groups of the) Czech Institute of Informatics, Robotics and Cybernetics

More information

Computer Experiments: Space Filling Design and Gaussian Process Modeling

Computer Experiments: Space Filling Design and Gaussian Process Modeling Computer Experiments: Space Filling Design and Gaussian Process Modeling Best Practice Authored by: Cory Natoli Sarah Burke, Ph.D. 30 March 2018 The goal of the STAT COE is to assist in developing rigorous,

More information

MET71 COMPUTER AIDED DESIGN

MET71 COMPUTER AIDED DESIGN UNIT - II BRESENHAM S ALGORITHM BRESENHAM S LINE ALGORITHM Bresenham s algorithm enables the selection of optimum raster locations to represent a straight line. In this algorithm either pixels along X

More information

Week 5: Geometry and Applications

Week 5: Geometry and Applications Week 5: Geometry and Applications Introduction Now that we have some tools from differentiation, we can study geometry, motion, and few other issues associated with functions of several variables. Much

More information

Skåne University Hospital Lund, Lund, Sweden 2 Deparment of Numerical Analysis, Centre for Mathematical Sciences, Lund University, Lund, Sweden

Skåne University Hospital Lund, Lund, Sweden 2 Deparment of Numerical Analysis, Centre for Mathematical Sciences, Lund University, Lund, Sweden Volume Tracking: A New Method for Visualization of Intracardiac Blood Flow from Three-Dimensional, Time-Resolved, Three-Component Magnetic Resonance Velocity Mapping Appendix: Theory and Numerical Implementation

More information

Visualizing Unsteady Flows on Surfaces Using Spherical Parameterization

Visualizing Unsteady Flows on Surfaces Using Spherical Parameterization 1 Visualizing Unsteady Flows on Surfaces Using Spherical Parameterization Guo-Shi Li, Xavier Tricoche, Charles Hansen UUSCI-2007-013 Scientific Computing and Imaging Institute University of Utah Salt Lake

More information

Structured Light II. Thanks to Ronen Gvili, Szymon Rusinkiewicz and Maks Ovsjanikov

Structured Light II. Thanks to Ronen Gvili, Szymon Rusinkiewicz and Maks Ovsjanikov Structured Light II Johannes Köhler Johannes.koehler@dfki.de Thanks to Ronen Gvili, Szymon Rusinkiewicz and Maks Ovsjanikov Introduction Previous lecture: Structured Light I Active Scanning Camera/emitter

More information

Interactive Visualization of Divergence in Unsteady Flow by Level-Set Dye Advection

Interactive Visualization of Divergence in Unsteady Flow by Level-Set Dye Advection Interactive Visualization of Divergence in Unsteady Flow by Level-Set Dye Advection Daniel Weiskopf Ralf Botchen Thomas Ertl Universität Stuttgart Abstract Dye advection is an intuitive and versatile technique

More information

Simpler Soft Shadow Mapping Lee Salzman September 20, 2007

Simpler Soft Shadow Mapping Lee Salzman September 20, 2007 Simpler Soft Shadow Mapping Lee Salzman September 20, 2007 Lightmaps, as do other precomputed lighting methods, provide an efficient and pleasing solution for lighting and shadowing of relatively static

More information

Lecture notes: Visualization I Visualization of vector fields using Line Integral Convolution and volume rendering

Lecture notes: Visualization I Visualization of vector fields using Line Integral Convolution and volume rendering Lecture notes: Visualization I Visualization of vector fields using Line Integral Convolution and volume rendering Anders Helgeland FFI Chapter 1 Visualization techniques for vector fields Vector fields

More information

Scalar Visualization

Scalar Visualization Scalar Visualization Visualizing scalar data Popular scalar visualization techniques Color mapping Contouring Height plots outline Recap of Chap 4: Visualization Pipeline 1. Data Importing 2. Data Filtering

More information

2.11 Particle Systems

2.11 Particle Systems 2.11 Particle Systems 320491: Advanced Graphics - Chapter 2 152 Particle Systems Lagrangian method not mesh-based set of particles to model time-dependent phenomena such as snow fire smoke 320491: Advanced

More information