
Preface

This thesis is written as a required part of the Cand. Scient. (Master of Science) degree in informatics at the Department of Informatics, University of Oslo, Norway. The work was started in September 2000 and finished in July 2002. The majority of the programming and writing was undertaken at FFI (Norwegian Defence Research Establishment). This thesis covers methods and strategies for achieving a more effective visualization of large three-dimensional vector fields.

I would like to thank my supervisor, Øyvind Andreassen, for his encouragement and assistance through this work. I would also like to thank Jan Olav Langseth, Bjørn Anders Pettersson Reif and the rest of the staff at FFIBM for their support and help, and my internal supervisor Knut Mørken for his direction and assistance to my study. Special thanks go to my wife Gro Bente R. Helgeland for devoting a huge amount of love and support to my study as well.

This thesis is available in different formats on the web page: ftp://ftp.ffi.no/spub/stsk/ahe/index.html A summary of this thesis in an online HTML version, including pictures and movies, is also available on this web site.

Oslo, July 2002
Anders Helgeland


Contents

1 Introduction
    Introduction
    The problem
    Organization of the thesis
2 Background
    Definitions
        Vector
        Vector field
        Field line and streamline
        Path line
        Streak line
    Field line integration
        The ODE system
        The grid
        Interpolation
        Numerical solution
    Visualization techniques for vector fields
        Hedgehogs and glyphs
        Curve representation
        Texture based techniques
    The data sets
        Turbulence
        Data file format
3 Volume rendering
    Transparency, opacity and alpha values
    Color mapping
    Texture mapping
    Volume rendering techniques
        Geometric rendering
        Direct volume rendering
        Direct volume rendering with 3D texture mapping
    VIZ
    VoluViz

4 Line Integral Convolution
    Introduction to Line Integral Convolution
        Convolution
        Convolution along a vector
    LIC
    Fast LIC
    Some improvements
5 Volume LIC
    Choice of input texture
        Region Of Interest
        Sparse input texture
        Spot size
        Detail enlargement
    Seed LIC
    Aliasing
6 Volume visualization with LIC
    Assignment of color and opacity values
    Clipping functionality
    Halo effect
    Shading in volume visualization
    Shading with LIC
    Two fields visualization
    Polkagris visualization
7 Summary and conclusion
    Future work

Chapter 1 Introduction

1.1 Introduction

Visualization is a part of our everyday life, from weather maps to the exciting computer graphics used by the entertainment industry. Informally, visualization is the transformation of data or information into pictures [1]. It is a tool that engages the human senses, including our eyes and brain, and is an effective medium for communicating complex information. The engineering and scientific communities were early adopters of visualization. Computers were used as a tool to simulate physical processes such as ballistic trajectories, fluid flows and structural mechanics. As the size of the computer simulations grew, it became necessary to transform the results from calculations into pictures, because the large amount of data overwhelmed the ability of human perception. In fact, pictures became so important that early visualizations were created manually by plotting data. Today, we can take advantage of advances in computer graphics, computer hardware and software. But, whatever the technology, the application of visualization is the same: to display the results of simulations, experiments, measured data and fantasy, and to use these pictures and movies to communicate, understand and entertain [1]. In scientific visualization, the key goal is to transform data into a visual form that enables us to reveal important information about the data. With modern visualization techniques we can discover details in data sets that would otherwise have remained undiscovered. In this way, visualization helps us to better understand various physical phenomena. For scientists working with large digital data sets, the importance of modern visualization techniques can be compared with the astronomer's use of the telescope. Advances in modern supercomputers have made it possible to do bigger and more accurate simulations of physical phenomena of increasing complexity.
The study of turbulence, which is a component of the field of fluid dynamics, is an example of an area that has been dependent on the development of computer technology. It is now generally accepted that the three-dimensional, time dependent solution of the Navier-Stokes equations describes the evolution of incompressible flows. In these simulations, referred to as direct numerical simulations (DNS), all scales of motion are resolved in both time and space. The number of grid points needed for a reasonably accurate simulation is proportional to $Re^{9/4}$, where $Re$ is the Reynolds number [2], expressing the ratio between inertial and viscous forces 1. The Reynolds number is

1 Turbulence occurs when $Re \gg 1$.

dependent on the characteristic size of the object, the characteristic flow velocity and the kinematic viscosity of the fluid. For example, for air flowing over the fuselage of a commercial aircraft at cruising speed, the Reynolds number is in the neighborhood of $10^8$. For the air flowing past a golf ball, the Reynolds number of the average velocity is about $10^5$, and for blood flowing in a midsize artery it is about $10^3$. We quickly realize that this leads to enormous amounts of data. Consider a 50 meter long airplane that has wings with a chord length (the distance from the leading to the trailing edge) of about five meters. If the aircraft is cruising at 250 meters per second at an altitude of 10,000 meters, about $10^{16}$ grid points are required to simulate the turbulence near the surface with reasonable detail [2]. Using the biggest supercomputers we have today, it would take thousands of years to compute the flow for one second of flight time. Fortunately, researchers need not simulate the flow over the entire aircraft to produce useful information. Typically, only the effect of turbulence on quantities of engineering significance is of interest, such as the mean flow of a fluid or, in the case of an aircraft, the drag and lift forces [2]. These averaged turbulent flows are smoother than the actual flow and drastically reduce the number of grid points necessary to simulate a field.

1.2 The problem

The enormous size of today's data sets has led to an increasing demand for more efficient and advanced visualization tools for analyzing and interpreting the data. This becomes especially evident when visualizing vector fields in 3D. Vector fields play an important role in science and engineering. They allow us to describe a wide variety of phenomena like fluid flow and electromagnetic fields. Large vector fields often exhibit quite complex structures, which can be difficult to reveal.
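The $Re^{9/4}$ scaling cited from [2] can be turned into a quick back-of-the-envelope estimate. The sketch below is illustrative only: the viscosity value, the flow parameters and the helper names are our own assumptions, not values taken from the thesis.

```python
# Rough DNS cost estimate based on the N ~ Re^(9/4) scaling cited in the
# text. All flow parameters here are illustrative assumptions.

def reynolds_number(velocity, length, kinematic_viscosity):
    """Re = U * L / nu, the ratio of inertial to viscous forces."""
    return velocity * length / kinematic_viscosity

def dns_grid_points(re):
    """Grid points required for DNS grow roughly as Re^(9/4)."""
    return re ** (9.0 / 4.0)

# Air (nu ~ 1.5e-5 m^2/s, an assumed sea-level value) moving at 250 m/s
# over a 5 m wing chord:
re = reynolds_number(250.0, 5.0, 1.5e-5)
n = dns_grid_points(re)
print(f"Re ~ {re:.1e}, DNS grid points ~ {n:.1e}")
```

Even for the wing chord alone the estimate lands at an astronomical grid size, which is the point the text makes about full-aircraft DNS.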
Making an efficient visualization of a vector field is one of the current challenges in scientific visualization. Traditionally, vector data have been represented by glyphs. By glyphs we are referring to any 2D or 3D geometric representation indicating vector magnitude and direction, such as an arrow or a cone [1]. More sophisticated methods include the display of field lines, stream surfaces [3] and flow volumes [4]. Large vector fields, and vector fields with wide dynamic ranges in magnitude, can be difficult to visualize effectively using the techniques above. First, both arrows and field lines, if placed densely in space, can produce cluttered and confusing images. Especially in areas of complex flow topology, for example turbulence, arrows and field lines can be difficult to interpret because of the variety of scales and structures in such flows. Second, the limited number of field lines that can be displayed without cluttering the image makes the visualization dependent on the choice of seed points, which are the start positions of the integrated lines. It is not obvious how to distribute the field lines in space without missing important details of the field. By using texture based techniques, we avoid some of the problems above. These techniques allow the generation of images with a much higher number of field lines, making the position of an individual line less important. A powerful texture-based visualization method is Line Integral Convolution (LIC), proposed by Cabral and Leedom [5]. Traditionally in LIC, a random texture is blurred along the field lines of a stationary 2D vector field, producing an output texture that reveals the structure of the flow. 3D LIC volumes can be computed in the same manner as in 2D LIC, but this approach leads to dense images where the inner structures of the vector field are difficult to see.

In this thesis we will propose and study methods and strategies for more effectively visualizing three-dimensional vector fields with LIC. Another necessity is the opportunity to interactively explore and manipulate the 3D data. In order to properly study a vector field we must be able to rotate the volume, zoom in and out and change subset at an interactive rate. A high degree of interactivity is important due to the short-term memory of the human brain. If the transformations are carried out too slowly, we will forget what was displayed before the next image is rendered 2, and then lose track of the information. Fast response when changing parameters like color and opacity is also important when investigating large data sets. For a scientist who does not know which parameters give good results, it should be possible to make adjustments without having to spend too much time. The making of a good color table is one example of this. These achievements are far from obvious, especially for big data sets, which can contain on the order of $10^9$ data points. Texture based volume rendering allows some of the necessary functionality.

1.3 Organization of the thesis

Chapter 2 is a continuation of the introduction and covers some concepts concerning visualization of vector fields. This includes a discussion of various vector field visualization techniques. Chapter 3 covers topics related to volume rendering and interactive visualization. In chapter 4, we describe the basic ideas of the Line Integral Convolution technique, while in chapters 5 and 6 we propose and study methods for achieving a more effective visualization of three-dimensional vector fields with LIC.

2 Rendering is the process that generates 2D images on the computer screen.

Chapter 2 Background

In this chapter we will first introduce some mathematical background. Then some visualization techniques for vector fields will be presented and evaluated. Finally, a brief description of the data used in this thesis is given. Since one of the examples involves turbulence data, we try to give an explanation of what turbulence is and how to visualize it.

2.1 Definitions

Visualization often deals with the time evolution of various fields defined in a three-dimensional space. A key goal in visualization is to identify and clarify certain details of motion contained in the data. Kinematics is the branch of mechanics that deals with quantities involving the description of motion. It treats variables such as displacement, velocity, acceleration, deformation and rotation of objects. This section introduces some concepts that are of relevance to the visualization of vector fields [6], [7]. For a detailed treatment of kinematics, see the book by Kundu [6].

Vector

Vectors are often described as quantities having both magnitude and direction. In a Cartesian coordinate system, a three-dimensional vector $\mathbf{v}$ with components $(v_1, v_2, v_3)$ can be expressed as $\mathbf{v} = v_1 \mathbf{i} + v_2 \mathbf{j} + v_3 \mathbf{k}$, where $\mathbf{i}$, $\mathbf{j}$ and $\mathbf{k}$ are the unit vectors along the three coordinate axes.

Vector field

A vector field is defined by a map $\mathbf{F} : \mathbb{R}^3 \times \mathbb{R} \to \mathbb{R}^3$, $(\mathbf{x}, t) \mapsto \mathbf{F}(\mathbf{x}, t)$, which assigns a vector to each point $\mathbf{x}$ at time $t$ (see figure 2.1). A vector field $\mathbf{F}(\mathbf{x}, t)$ has three component scalar fields $F_1(\mathbf{x}, t)$, $F_2(\mathbf{x}, t)$ and $F_3(\mathbf{x}, t)$, so that $\mathbf{F} = (F_1, F_2, F_3)$. When a vector field is independent of time, it is called stationary.

Figure 2.1: A vector field assigns a vector $\mathbf{F}(\mathbf{x}, t)$ to each point $\mathbf{x}$ of its domain at time $t$.

Field line and streamline

Field lines can be derived from any vector field, as well as flows. If $\mathbf{F}(\mathbf{x}, t)$ is a vector field, a field line for $\mathbf{F}$ at time $t$ is a curve $\boldsymbol{\sigma}(s)$ where the tangent vectors of the curve coincide with the vector field, see figure 2.2. For our purpose the curve expressed by the field line is parameterized by the arc length $s$. The field lines can then be characterized by the equation

    $\dfrac{d\boldsymbol{\sigma}}{ds} = \dfrac{\mathbf{F}(\boldsymbol{\sigma}(s), t)}{\|\mathbf{F}(\boldsymbol{\sigma}(s), t)\|}$.    (2.1)

Figure 2.2: Field lines for a vector field.

Substituting $\mathbf{F}$ in 2.1 with the velocity field $\mathbf{u}$ yields streamlines.

Path line

When a vector field is considered as a velocity field, we can define the path line of a particle $P$ in the vector field as the trajectory of motion for $P$ over a period of time, see figure 2.3. A path line for a particle with initial position $\mathbf{x}(0) = \mathbf{x}_0$ can be described by the relation $\mathbf{x} = \mathbf{x}(\mathbf{x}_0, t)$. The path line is obtained by solving the equation

    $\dfrac{d\mathbf{x}}{dt} = \mathbf{u}(\mathbf{x}, t), \qquad \mathbf{x}(0) = \mathbf{x}_0$.    (2.2)

Figure 2.3: Path line of a particle.

Streak line

A streak line is another concept used in flow visualization. It is defined as the current location of all fluid particles that have passed through a fixed spatial point at some previous time. It is determined by injecting dye or smoke at a fixed point during an interval of time and is often used in wind tunnel experiments. Streamlines, path lines and streak lines are all identical in a steady flow. In this thesis we will study vector fields at different instants of time, limiting our study to field lines.

2.2 Field line integration

The ODE system

A field line may be viewed as a solution of the following first order ODE (ordinary differential equation) system. Choosing a starting point $\mathbf{x}_0 = (x_0, y_0, z_0)$ for the field line, we can write equation 2.1 as

    $\dfrac{dx}{ds} = F_1(x(s), y(s), z(s), t)$,
    $\dfrac{dy}{ds} = F_2(x(s), y(s), z(s), t)$,    (2.3)
    $\dfrac{dz}{ds} = F_3(x(s), y(s), z(s), t)$,

with the initial conditions

    $x(0) = x_0, \quad y(0) = y_0, \quad z(0) = z_0$.    (2.4)

The grid

When dealing with numerical data, the vector field is not available in analytical form. It is given numerically at discrete locations. In our case we will assume that the data are given on a uniform grid.
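Once the vector field can be sampled between grid points, the ODE system above can be integrated step by step. The sketch below combines trilinear sampling of a gridded field (unit spacing assumed) with a classical fourth-order Runge-Kutta step; the function names and the constant test field are our own illustrative choices, not code from the thesis.

```python
# Field line tracing: trilinear interpolation of a gridded 3-vector field
# plus a fourth-order Runge-Kutta step. Unit grid spacing is assumed.

def trilinear(field, x, y, z):
    """Interpolate a grid of 3-vectors field[i][j][k] at (x, y, z).
    Coordinates are clamped to stay inside the grid."""
    nx, ny, nz = len(field), len(field[0]), len(field[0][0])
    x = min(max(x, 0.0), nx - 1.001)
    y = min(max(y, 0.0), ny - 1.001)
    z = min(max(z, 0.0), nz - 1.001)
    i, j, k = int(x), int(y), int(z)
    u, v, w = x - i, y - j, z - k
    out = []
    for c in range(3):  # interpolate each vector component separately
        c00 = field[i][j][k][c]     * (1-u) + field[i+1][j][k][c]     * u
        c10 = field[i][j+1][k][c]   * (1-u) + field[i+1][j+1][k][c]   * u
        c01 = field[i][j][k+1][c]   * (1-u) + field[i+1][j][k+1][c]   * u
        c11 = field[i][j+1][k+1][c] * (1-u) + field[i+1][j+1][k+1][c] * u
        out.append((c00*(1-v) + c10*v) * (1-w) + (c01*(1-v) + c11*v) * w)
    return out

def rk4_step(field, p, h):
    """One classical fourth-order Runge-Kutta step of length h."""
    k1 = trilinear(field, *p)
    k2 = trilinear(field, *[p[c] + 0.5*h*k1[c] for c in range(3)])
    k3 = trilinear(field, *[p[c] + 0.5*h*k2[c] for c in range(3)])
    k4 = trilinear(field, *[p[c] + h*k3[c] for c in range(3)])
    return [p[c] + h*(k1[c] + 2*k2[c] + 2*k3[c] + k4[c])/6.0 for c in range(3)]

def field_line(field, seed, h=0.1, steps=50):
    """Integrate a field line forward from a seed point."""
    pts = [list(seed)]
    for _ in range(steps):
        pts.append(rk4_step(field, pts[-1], h))
    return pts

# Sanity check: a constant field along +x gives straight field lines.
n = 4
const = [[[[1.0, 0.0, 0.0] for _ in range(n)] for _ in range(n)] for _ in range(n)]
line = field_line(const, (0.0, 1.0, 1.0), h=0.05, steps=20)
print(line[-1])  # close to [1.0, 1.0, 1.0]
```

In practice the upstream half of the line is obtained the same way by stepping with a negative $h$.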

Figure 2.4: A uniform grid.

By a uniform grid, we mean a collection of points and cells arranged on a regular, rectangular lattice [1] as shown in figure 2.4. The rows, columns and planes of the lattice are parallel to the global x-y-z coordinate system. Uniform grids consist of line elements (1D), pixels (2D) or voxels (3D) (see the figures 2.4 and 2.5 for illustrations of a voxel). Each pixel or voxel in a uniform grid is identical in shape. The number of points in the dataset is $n_x \times n_y \times n_z$, where $n_x$, $n_y$ and $n_z$ specify the number of points in the $x$, $y$ and $z$ directions. The number of cells is $(n_x - 1) \times (n_y - 1) \times (n_z - 1)$. If the domain is $[x_{min}, x_{max}] \times [y_{min}, y_{max}] \times [z_{min}, z_{max}]$, the spacing between the grid points in each direction is given by $\Delta x = (x_{max} - x_{min})/(n_x - 1)$, $\Delta y = (y_{max} - y_{min})/(n_y - 1)$ and $\Delta z = (z_{max} - z_{min})/(n_z - 1)$.

Interpolation

When working with data in discrete form, vector values between the grid or mesh points have to be computed by interpolation. We will in this thesis assume the data to be sufficiently smooth, so that the use of trilinear interpolation is accurate enough.

Figure 2.5: Trilinear interpolation.

Trilinear interpolation uses data values from the 8 vertices $V_0, \ldots, V_7$, as shown in figure 2.5, to estimate the data value at the point $(x, y, z)$. We

see that this can be done by first interpolating the four vertex pairs along the $x$ direction,

    $a_0 = V_0 + (V_1 - V_0)\,u$,
    $a_1 = V_2 + (V_3 - V_2)\,u$,
    $a_2 = V_4 + (V_5 - V_4)\,u$,
    $a_3 = V_6 + (V_7 - V_6)\,u$,

then the two intermediate values along the $y$ direction,

    $b_0 = a_0 + (a_1 - a_0)\,v$,
    $b_1 = a_2 + (a_3 - a_2)\,v$,

and finally the data value $V$ at the intermediate point along the $z$ direction,

    $V = b_0 + (b_1 - b_0)\,w$,

where $u = (x - x_0)/\Delta x$, $v = (y - y_0)/\Delta y$ and $w = (z - z_0)/\Delta z$. Here $(x_0, y_0, z_0)$ are the coordinates of the cell point $V_0$, and $\Delta x$, $\Delta y$ and $\Delta z$ are the lengths of the voxel in each direction.

Numerical solution

The ODE system (2.3)-(2.4) can be solved by numerical methods. The simplest numerical scheme is Euler's method, which is derived from using the first two terms of the Taylor series. A point located a distance $\Delta s$ ahead of a point on the same field line can then be found by computing

    $\mathbf{x}(s + \Delta s) = \mathbf{x}(s) + \Delta s\,\mathbf{F}(\mathbf{x}(s), t)$.    (2.5)

More accurate methods, like the higher-order Runge-Kutta methods [8], can be derived by including more terms of the Taylor series. We will use a fourth-order Runge-Kutta method to compute the field lines.

2.3 Visualization techniques for vector fields

As mentioned before, vector fields are useful for describing a number of physical phenomena and there are many ways of representing them.

Hedgehogs and glyphs

A natural vector visualization technique is to draw an oriented, scaled line for each vector. The line is drawn starting at a grid point and is oriented in the direction of the vector components associated with that point. The color and length of each line can be set by the vector magnitude. This technique is often referred to as a hedgehog or oriented lines. To get a better impression of the direction of the vector field, arrowheads can be added to the lines. Any 2D or 3D geometric representation indicating vector magnitude and direction is called a glyph (see figure 2.6). These techniques are best suited for small data sets. If the placement of the glyphs is too dense and the variations in magnitude are too big, the images tend to be cluttered and visually confusing.
The results can be improved if some form of thresholding is applied. One example which can remove some of the clutter is to neglect the drawing of glyphs where the length of the

vector is below a certain value $t\,\|\mathbf{v}\|_{max}$. The threshold $t$ is typically a normalized quantity in the range $[0, 1]$. If $t = 0$, every vector is displayed. If $t = 1$, only the vectors with the largest magnitude are present in the resulting image. Another method is to scale the vectors so that the overlapping of the glyphs is reduced. In figure 2.7, we have used threshold and scale to emphasize regions where the information of the vector field is important. We see from the bottom image that suppressing a larger number of the least significant vectors may show relevant physical information more clearly.

Figure 2.6: Glyphs in 2D and 3D.

Curve representation

A better way of representing vector fields is to draw curves that reveal the orientation and structure of the field. The curves can be any of the curves defined in section 2.1, depending on what we wish to see. The lines can be colored according to vector magnitude, but other scalar quantities such as temperature or pressure may also be used to color the lines. The computation of path lines and streak lines strongly depends on the capabilities of the underlying hardware. Both these techniques are time dependent, and vector data for multiple time steps have to be stored in the computer during the calculations. The memory requirements can quickly reach many gigabytes, and not all computers are big enough to handle that. A possible problem concerning the rendering of field lines is the spatial perception of the objects in the scene. On common graphics workstations, field lines and other curves are displayed using flat shaded line segments, impairing the spatial impression of the image [9]. Phong type shading models [1] are traditionally applied to surface elements, but can be generalized to line primitives [9]. Such generalizations have been used to render fur or human hair.
However, on current graphics workstations there is no direct hardware support for the display of illuminated line primitives [9]. Therefore, major parts of the illumination calculations have to be performed in software. In 1997 Stalling, Zöckler and Hege [9] presented a method to achieve fast and accurate line illumination by exploiting the texture mapping capabilities of modern graphics hardware. This shading technique allows the visualization of large numbers of field lines in a vector field [9]. Other ways to enhance the three-dimensional impression of the vector field are to represent the field lines by polygonal objects, for example tubes. One of these techniques is called streamribbons. A streamribbon can be constructed by generating two adjacent field lines and then bridging the lines with a polygonal mesh. This technique works well as long as the field lines remain relatively close to one another [1]. If the field lines diverge, the resulting ribbons will not accurately depict the vector field, because we expect the surface of a ribbon to be everywhere

Figure 2.7: Visualization of a vector field using glyphs. In the top image we have set the threshold $t$ and the glyph scale $s$ in terms of the largest of the grid spacings $\Delta x$, $\Delta y$ and $\Delta z$. In the bottom image a larger threshold and scale are used, suppressing all but the most significant vectors. The value $s$ determines the length of the largest glyphs.

tangent to the vector field (i.e., the definition of a field line). A streamsurface is a collection of an infinite number of field lines passing through a curve. The curve defines the starting points for the field lines, and if the curve is closed, as in a circle, the surface is closed and we get a streamtube. Streamsurfaces can be computed by generating a set of field lines from selected points on the curve. A polygonal mesh is then constructed by connecting adjacent field lines. As with streamribbons, the separation of the field lines can introduce large errors into the surface. A problem with all these techniques, with the exception of the one proposed by Stalling, Zöckler and Hege [9] 1, is the limited number of field lines that can be displayed in the scene without cluttering the image. This makes the visualization dependent on the choice of seed points. As mentioned before, it is not obvious how to distribute the field lines in space without missing important details of the field. In figure 2.8, the image is a little cluttered because of the large number of field lines rendered in the vector field. As in figure 2.7, we have focused on a region of interest by thresholding the distribution of seed points.

Figure 2.8: Visualization of a vector field using field lines. The red lines are at the downstream side of the seed point whereas the green ones are at the upstream side.

Texture based techniques

The use of texture based techniques is an alternative method for visualizing vector fields. Examples of these techniques are spot noise [10], [11], illuminated field lines [9] and Line Integral Convolution [5], [12], [13]. These techniques avoid some of the problems with vector visualization discussed in section 1.2 and the preceding subsections. Figure 2.9 shows the result after applying LIC on the same vector field as visualized with other techniques in figures 2.7 and 2.8. The vector field is obtained from [16].
1 The Fast display of illuminated field lines method allows the generation of images with thousands of field lines at interactive rates [9]. This means that the positioning of an individual field line becomes less important.

Figure 2.9: Visualization of a vector field using Line Integral Convolution.

This thesis focuses on Line Integral Convolution, which will be presented more thoroughly in chapter 4. In two dimensions, LIC takes a bitmap (a texture) and a two-dimensional vector field as inputs and computes a new bitmap. The computed LIC texture will look like an image that is covered with spatially oriented structures along the vector field, see figure 4.6 on page 30. The advantage of this approach, as opposed to other field line techniques, is that it depicts all parts of the vector field. Line Integral Convolution evaluates the vector field at every pixel, hence it is independent of the choice of seed points. LIC is also independent of resolution 2. This allows the use of textures that are larger than the grid size of the vector field, without having to resample the vector field. Thus, if the data are sufficiently smooth and the interpolation is sufficiently accurate, we can, by increasing the resolution of the texture, produce more detailed images. In three dimensions, Line Integral Convolution [5], [12] leads to dense images, where the inner structure of the field can be difficult to depict (see figure 5.1 on page 34). Methods that reveal some of the inner structure will be discussed in chapters 5 and 6. One approach is the application of sparse input textures [13], [14]; another approach is the use of clip planes. Line Integral Convolution is quite a compute intensive technique. In 1995, Stalling and Hege [12] proposed a much faster and more accurate LIC algorithm, which made LIC a popular technique for displaying vector fields on two dimensional surfaces [15]. When LIC is used to depict a 3D flow through a volume, however, even the algorithm presented by Stalling and Hege [12] may take considerable time to create the 3D LIC texture. This is the main reason for proposing a fast LIC algorithm in 3D, which we have called Seed LIC.
This technique exploits the sparsity of the input texture. The discussion of Seed LIC will take place in a later section.

The data sets

Data used in the visualizations in this thesis comes from numerical simulations computed at the Norwegian Defence Research Establishment (FFI) and the Colorado Research Associates

2 Stalling and Hege [12] made LIC independent of resolution. In the algorithm proposed by Cabral and Leedom [5], the vector field, the input texture and the output texture had to be of the same resolution.
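The basic 2D idea, blurring a noise texture along field lines, can be sketched in a few lines. This toy version uses nearest-neighbour sampling, Euler stepping and a box filter; it is an illustrative simplification in the spirit of Cabral and Leedom [5], not the Fast LIC algorithm of Stalling and Hege [12], and all names are our own.

```python
# Toy 2D LIC: each output pixel is the average of the input noise texture
# sampled along a short field line traced through that pixel.

import random

def lic2d(vx, vy, tex, length=10, h=0.5):
    n = len(tex)
    out = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            acc, cnt = 0.0, 0
            for sign in (1.0, -1.0):           # downstream and upstream
                x, y = i + 0.5, j + 0.5
                for _ in range(length):
                    ii, jj = int(x) % n, int(y) % n
                    acc += tex[ii][jj]
                    cnt += 1
                    u, v = vx[ii][jj], vy[ii][jj]
                    mag = (u*u + v*v) ** 0.5 or 1.0
                    x += sign * h * u / mag    # Euler step along the field
                    y += sign * h * v / mag
            out[i][j] = acc / cnt
    return out

random.seed(1)
n = 16
noise = [[random.random() for _ in range(n)] for _ in range(n)]
# Uniform field along +x: the noise is smeared into streaks, so the output
# varies much less along the field direction than across it.
vx = [[1.0] * n for _ in range(n)]
vy = [[0.0] * n for _ in range(n)]
img = lic2d(vx, vy, noise)
```

A full 3D LIC replaces the pixel loop with a voxel loop and the 2D trace with a 3D one, which is exactly what makes the volume variant so expensive.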

Figure 2.10: Visualization of a synthetic vector field using a few field lines. The red lines are at the downstream side of the seed point whereas the green ones are at the upstream side.

(CoRA/NWRA). In addition we have employed a synthetic data set given by a formula of the form $\mathbf{v}(x, y, z) = (-(y - y_0),\; x - x_0,\; 0)$. The data set is a vector field that rotates around a line parallel to the z axis, see figure 2.10. These data sets have been used in the development of algorithms and to study and compare different visualization techniques. The data set made at FFI was obtained from a simulation of shock waves from an explosion [16]. The problem comes from computational fluid dynamics (CFD) and was modeled by the three-dimensional Euler equations. One practical application of such problems is the study of how vorticity 3 produced by shock waves mixes two different gases. The solution contains vortices generated by the shock waves, which can be seen by visualizing for instance the vorticity field. The computation was performed on a uniform grid. The data set from Colorado Research Associates was obtained from a simulation of stratified shear turbulence [17]. This is the 3D direct numerical simulation of highest resolution reported to date of the Kelvin-Helmholtz (KH) [18] instability. The solution offers the most accurate characterization of stratified turbulence presently available. KH instability generates vortices, or KH billows. The resulting turbulence is often found to be an efficient mixing and dissipating process. The simulation reveals the breakdown of a single KH billow and was solved with a pseudo-spectral Galerkin method with field variables represented by Fourier series. The spatial resolution (number of spectral modes) was varied during the time evolution, so that small-scale features were always properly represented. The data set used involves a very large number of modes.
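A synthetic rotating field of this kind can be generated along the following lines. The component formula $\mathbf{v} = (-(y - y_0),\, x - x_0,\, 0)$ is our assumed form of a field that rotates about a vertical line through $(x_0, y_0)$; the exact formula of the thesis data set did not survive transcription, and the grid size below is illustrative.

```python
# Sketch of a synthetic vector field rotating about a line parallel to the
# z axis through (x0, y0). Formula and grid size are our own assumptions.

def rotating_field(n, x0, y0):
    """Sample the rotating field on an n^3 uniform grid with unit spacing."""
    field = [[[None] * n for _ in range(n)] for _ in range(n)]
    for i in range(n):          # i ~ x, j ~ y, k ~ z
        for j in range(n):
            for k in range(n):
                field[i][j][k] = (-(j - y0), i - x0, 0.0)
    return field

f = rotating_field(8, 3.5, 3.5)
# Each vector is perpendicular to the radius vector from the axis, so the
# field is everywhere tangent to horizontal circles around that axis.
```

Such a field has known, closed-form field lines (circles), which makes it convenient for checking an integration or visualization pipeline.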
KH billows occur fairly frequently in the atmosphere, with wavelengths up to a few kilometers. As they induce vertical air motion, they sometimes generate billow clouds, see figure 2.11. KH billows occur at the interface between two fluids of different density and velocity.

3 The vorticity is the curl of the velocity vector and can be written as $\boldsymbol{\omega} = \nabla \times \mathbf{u}$.

Figure 2.11: Billow clouds.

An important field of research is the study of turbulent flows. Next, we describe some basic characteristics of turbulence, and how to visualize it.

Turbulence

Most flows occurring in nature and in engineering applications are turbulent. Blood moving through the heart in our body is turbulent. The flow of water in rivers and canals is turbulent. Most combustion processes, like the mixing of fuel in an engine, involve turbulence and often depend on it. Practically all the fluid flows that interest scientists and engineers are turbulent ones. An understanding of turbulence can for example allow engineers to reduce the aerodynamic drag on a race car or a commercial airplane, increase the maneuverability of a jet fighter or improve the efficiency of an engine. Turbulence is not always an unfortunate phenomenon that has to be eliminated at every opportunity. In certain fields many engineers work hard trying to increase it. One example is the dimples on a golf ball. The dimples increase the turbulence close to the surface, bringing the airstream closer to the ball, see figure 2.12. This reduces the drag of the golf ball and allows a skilled golfer to drive the ball 250 meters instead of 100 meters. Another example is a combustion engine, where the turbulence enhances the mixing of fuel and produces cleaner and more efficient combustion. But what exactly is turbulence? Everyone who has seen smoke streaming upward into still air from a burning cigarette has some idea about the nature of turbulent flow. Immediately above the cigarette, the flow is smooth. Such a flow is known as laminar. A little higher up, it becomes rippled and diffusive, or in other words turbulent. The same thing can be seen with water flowing from a kitchen tap. If we open the tap just a little, the flow is smooth and transparent. Open the tap a bit further, and the flow becomes more rough and fuzzy.
However, it is very difficult to give a precise definition of turbulence. All we can do is try to describe some characteristics of turbulent flows. In for instance Tennekes and Lumley [19], there is a list of such characteristics. Turbulence is for example irregular, or random. Turbulence is characterized by a high level of fluctuating vorticity. It is rotational and three-dimensional and always occurs at high Reynolds numbers. Turbulence is composed of eddies, or vortices, moving randomly around and about the overall direction of motion [2]. These vortices are continually forming and breaking down.

Figure 2.12: The drag on a golf ball is dominated by pressure forces. The drag arises when the pressure in front of the ball is significantly higher than the pressure behind the ball. The dimples of a golf ball increase the turbulence close to the surface, bringing the high speed airstream closer and increasing the pressure behind the ball. The effect is plotted in the chart, which shows that the drag is much lower for the dimpled ball than for a smooth sphere, where the flow remains laminar over a great portion of the surface. The figures are taken from [2]. (Images: Slim Films.)

Large vortices break down into smaller ones, which break down into smaller vortices, and so on. The largest eddies are fed by external forcing whereas the smallest are dissipated into heat by viscous action. Supercomputers have made it possible to simulate turbulence. Due to the huge amount of data derived from direct numerical simulations, it is a challenge to reveal the complex structures when visualizing such flows. Visualization is a tool used to verify and interpret numerical data. Knowing little about the expected behavior of a given problem, we can get an impression of whether the simulations are reasonable or not by studying plots and animations. In some applications it can be sufficient to study for example the velocity field, but when it comes to turbulent flows, the instantaneous velocity field tends to be very complex and difficult to study. The vorticity field depicts the structure in a turbulent flow better than, for instance, the velocity field, due to the fact that vortices are coherent, elongated structures [20]. Since vorticity dynamics plays an essential role in the description of turbulent flow, this particular choice is rather intuitive. Both vorticity ($\boldsymbol{\omega} = \nabla \times \mathbf{u}$) and enstrophy ($|\boldsymbol{\omega}|^2$), where $\mathbf{u}$ is the velocity field, appear to render the turbulent field very nicely and reveal the vortical structure of the flow.
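Vorticity and enstrophy can be computed directly from a gridded velocity field with central differences. The sketch below assumes unit grid spacing and uses a simple shear flow as a check; the grid layout and names are our own illustrative choices.

```python
# Vorticity omega = curl(u) and enstrophy |omega|^2 from a gridded velocity
# field, using second-order central differences (unit grid spacing assumed).

def curl(u, v, w, i, j, k):
    """Central-difference curl of velocity (u, v, w) at an interior point."""
    dwdy = (w[i][j+1][k] - w[i][j-1][k]) / 2.0
    dvdz = (v[i][j][k+1] - v[i][j][k-1]) / 2.0
    dudz = (u[i][j][k+1] - u[i][j][k-1]) / 2.0
    dwdx = (w[i+1][j][k] - w[i-1][j][k]) / 2.0
    dvdx = (v[i+1][j][k] - v[i-1][j][k]) / 2.0
    dudy = (u[i][j+1][k] - u[i][j-1][k]) / 2.0
    return (dwdy - dvdz, dudz - dwdx, dvdx - dudy)

def enstrophy(omega):
    """Squared magnitude of the vorticity vector."""
    return sum(c * c for c in omega)

# Simple shear u = (y, 0, 0): its curl is (0, 0, -1) everywhere.
n = 5
u = [[[float(j) for k in range(n)] for j in range(n)] for i in range(n)]
zero = [[[0.0] * n for _ in range(n)] for _ in range(n)]
om = curl(u, zero, zero, 2, 2, 2)
print(om)              # (0.0, 0.0, -1.0)
print(enstrophy(om))   # 1.0
```

Rendering the enstrophy (or vorticity magnitude) volume computed this way is what produces images like figure 2.13.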
Once a good visual comprehension of the mean structure of the flow is achieved, we can begin searching for dynamical processes relating the structures in the solution. An active field of research in the turbulence community is the identification of coherent structures, or in a sense uniform structures. Coherent vortices can be described as regions of the flow satisfying two conditions [21]:

1. The vorticity concentration $\omega$ should be high enough that a local roll-up of the surrounding fluid is possible.

2. They should approximately keep their shape during a time $\tau$ that is long compared with the

Figure 2.13: Vorticity field, represented as vorticity magnitude, of a Kelvin-Helmholtz billow in a stratified fluid.

local turnover time $\omega^{-1}$.

Examples of criteria that have been used to investigate coherent vortices and to visualize the structures of a turbulent flow are pressure, vorticity, enstrophy, the $Q$-method [22] and the $\lambda_2$-approach [21]. Figure 2.13 shows an example where the vorticity magnitude is used to render data from the simulation of stratified shear turbulence [17].

Data file format

Computational scientists rarely use only one computer. Typically they use one or more computers to do the simulations and another computer to visualize and analyze the data. Also, they may share data files with other scientists who use different machines and software. To help scientists reduce the time they were spending trying to convert data sets to familiar formats, some standard formats were created. Some examples are HDF, CDF, netCDF, SAIF, SDTS and HDS. A standard format used at FFI is the Hierarchical Data Format (HDF), a data file format designed by the National Center for Supercomputing Applications (NCSA) to assist users in the storage, manipulation and access of scientific data across diverse operating systems and machines. HDF comes with a library of callable routines and a set of utility programs and tools for creating and using HDF files. It was designed to address many requirements for storing scientific data, including:

1. Support for the types of data and metadata⁴ commonly used by scientists.

2. Efficient storage of and access to large data sets.

3. Platform independence.

4. Extensibility for future enhancements and compatibility with other standard formats.

There are two distinct versions of HDF [23], known as HDF (version 4 and earlier) and the newer HDF5. HDF5 was designed to address some of the limitations of the older HDF product, which is restricted to a file size of 2 gigabytes (32-bit addressing) and does not support parallel I/O effectively. The Line Integral Convolution application developed in conjunction with this thesis uses HDF to read and write data. The application is implemented in the C++ programming language and uses the GUI⁵ library Qt [24].

⁴ HDF files are self-describing. The term self-describing means that, for each HDF data structure in the file, there is comprehensive information about the data and its location in the file. This information is often referred to as metadata.

⁵ Graphical user interface.

Chapter 3

Volume rendering

Volume rendering, in scientific visualization, is the process used to create images from volumetric scalar, vector and tensor data. Large data sets obtained from numerical simulations have led to an increasing demand for more efficient visualization techniques, and as a result several volume rendering techniques have emerged. In this chapter we will focus on volume graphics and how to achieve interactive visualizations. We begin the chapter by describing a few concepts important in volume visualization: transparency, color mapping and texture mapping. We then continue with a discussion of various rendering techniques and finish by presenting the two volume rendering applications used in this thesis.

3.1 Transparency, opacity and alpha values

An important concept in the visualization of volumetric data is transparency, or opacity. Although many visualization techniques, such as glyphs and streamtubes, involve rendering of opaque objects, there are applications that can benefit from the ability to render objects that emit light. The internal data from an MRI (Magnetic Resonance Imaging) scan can, for instance, be shown by making the skin semitransparent, see figure 3.1.

Figure 3.1: The skull of a head is emphasized by assigning low opacity to the soft tissues.

Opacity and transparency are complements in the sense that high opacity implies low transparency; in computer graphics both are often referred to as alpha. The opacity, or alpha value, is a normalized quantity in the range $[0, 1]$. If an object has maximum opacity ($\alpha = 1$), it is opaque, and the objects and light behind it are shielded and invisible. If $0 < \alpha < 1$, the object is semitransparent and leaves the objects behind it visible. An alpha value of zero ($\alpha = 0$) represents a completely transparent object.

3.2 Color mapping

Color mapping is a common scalar visualization technique that maps scalar data into color values to be rendered. In color mapping, the scalar values are divided into equal intervals and serve as indices into a lookup table, see figure 3.2. A scalar value $s$ is mapped to the table index

$$i = n \, \frac{s - \min}{\max - \min},$$

clamped so that $i = 0$ for $s < \min$ and $i = n - 1$ for $s > \max$, where $n$ is the number of table entries.

Figure 3.2: Mapping scalars to colors via a lookup table.

The lookup table holds an array of colors that can be represented, for example, by the RGBA (red, green, blue, alpha) [1] or the HSVA (hue, saturation, value, alpha) [1] color system. The RGBA system describes colors based on their red, green, blue and alpha intensities and is used in raster graphics systems [1]. The HSVA system, which scientists have found to give good control over colors in scientific visualizations, represents colors based on hue, saturation, value and alpha. In this system, the hue component refers to the wavelength that enables us to distinguish one color from another. The value, also known as the intensity component, represents how much light is in the color, and the saturation indicates how much of the hue is mixed into the color. The use of colors is important in visualization and should emphasize various features of the data set. However, making a good color table that communicates relevant information is a rather challenging task. Wrong use of colors may exaggerate unimportant details.
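The index computation of figure 3.2 and the subsequent table fetch are simple to express in code. A minimal sketch (the function and the tiny table below are ours, not taken from Viz or VoluViz):

```python
import numpy as np

def color_map(scalars, lut, smin, smax):
    """Map scalar values to RGBA colors through a lookup table with n
    entries: i = n * (s - smin) / (smax - smin), clamped to [0, n-1]."""
    n = len(lut)
    idx = (n * (np.asarray(scalars, dtype=float) - smin)
           / (smax - smin)).astype(int)
    idx = np.clip(idx, 0, n - 1)
    return lut[idx]

# A tiny 4-entry grayscale table whose opacity grows with the scalar value.
lut = np.array([[i / 3, i / 3, i / 3, i / 3] for i in range(4)])
colors = color_map([-1.0, 0.1, 0.6, 2.0], lut, smin=0.0, smax=1.0)
```

Out-of-range scalars clamp to the first and last table entries, matching the behavior sketched in figure 3.2.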
Some advice on making a color table is given in [25]. Figure 3.3 illustrates the use of color tables in volume visualization.

3.3 Texture mapping

Geometric objects are, in computer graphics, represented by polygonal primitives. In order to render a complex scene, millions of vertices have to be used to capture the details. A technique that adds detail to a scene without requiring explicit modeling of the detail with polygons is texture mapping. Texture mapping maps, or pastes, an image (a texture) onto the surface of an object in the scene. The image is called a texture map and its individual elements are called texels.
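A texture fetch with normalized coordinates amounts to mapping $(s, t) \in [0, 1]$ to texel indices. A nearest-neighbor sketch (real texture hardware additionally filters between texels; the names are ours):

```python
import numpy as np

def fetch_texel(texture, s, t):
    """Nearest-neighbor lookup of the texel at normalized coordinates
    (s, t) in [0, 1]; texture has shape (height, width, components)."""
    h, w = texture.shape[:2]
    i = min(int(t * h), h - 1)   # row index from the t coordinate
    j = min(int(s * w), w - 1)   # column index from the s coordinate
    return texture[i, j]

# A tiny one-component (intensity) texture map.
tex = np.arange(16.0).reshape(4, 4, 1)
first = fetch_texel(tex, 0.0, 0.0)
last = fetch_texel(tex, 1.0, 1.0)
```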

Figure 3.3: Visualization of a sphere using the HSVA color system. The scalar data are represented as a voxel set with 8-bit precision (in the range $[0, 255]$) and serve as indices into a lookup table. A clip plane is used to reveal the various layers of the sphere, represented in different colors.

Texture maps can be both two- and three-dimensional. A texture may contain from one to four components. A texture with one component contains only the intensity value and is often referred to as an intensity map. A two-component texture contains information about the intensity value and the alpha value. A three-component texture contains RGB values, and a texture with four components contains RGBA values. To determine how to map the texture onto the polygons, each vertex has an associated texture coordinate. The texture coordinate maps the vertex into the texture map. The texture map is addressed by the coordinates $(s, t)$ in 2D and $(s, t, r)$ in 3D, where $s$, $t$ and $r$ are in the range $[0, 1]$.

Texture mapping is a hardware-dependent feature and is designed to display complex scenes at real-time rates. While most graphics systems have support for 2D texture hardware, some systems, like the InfiniteReality [26], have support for 3D texture mapping in graphics hardware. 3D textures can be used to store volumetric scalar data obtained from numerical simulations. The scalar values in scientific visualization are often normalized and represented as a voxel set with 8-bit precision (in the range $[0, 255]$). These values can be used as indices into a lookup table. In that case, texel values in the volume texture are mapped to color values to be rendered. For some graphics systems (like the InfiniteReality), the color tables are implemented in texture hardware. This allows an instant update of the color and opacity in the scene after altering the lookup table.
If the color tables are not supported in hardware, the textures have to be regenerated every time the color table changes. InfiniteReality graphics systems support three basic texel sizes: 16-bit, 32-bit and 48-bit. Texture memory is presently a very expensive resource. To save memory, 3D textures are often represented with a depth of 16 bits. Using 16-bit textures, a texture memory of 64 MByte

has capacity to store an 8-bit voxel set of about 32 million voxels. 32-bit textures require twice the storage of 16-bit textures, while 48-bit textures require four times the storage of 16-bit textures. Generally, 16-bit textures render faster than 32-bit textures, and 32-bit textures render faster than 48-bit textures [26]. Since the smallest texel supported by the hardware is 16 bits, a texture represented as bytes (8-bit) will end up taking twice the amount of texture memory. The graphics library OpenGL Volumizer 2 [27] allows interleaving of textures, which utilizes texture memory efficiently.

3.4 Volume rendering techniques

There are many different rendering techniques, and they can be divided into two groups: geometric rendering and direct volume rendering.

3.4.1 Geometric rendering

In geometric rendering, which is the most common group, geometric objects made up of points, lines and polygons are constructed from the 3D data and then rendered. Glyphs, field lines and streamtubes are all examples of visualizations of vector data using geometric rendering techniques. Since Line Integral Convolution maps a vector field onto a scalar field, we will focus on the rendering of scalar data. A typical way of visualizing scalar data is to display the volume by drawing isosurfaces. An isosurface, or 3D contour, consists of many polygon primitives and is created by selecting a scalar value (an isovalue), resulting in a surface showing the regions of the chosen contour level. This works best for volumes with strong and obvious structures, for example in terrain visualizations and when showing the bone from an MRI scan of a part of the human body. Isosurfaces are not suited for volumes with more complex and diffuse topology, like various fluids. To extract useful information, only a few surfaces can be rendered in the same scene, and it is hard to get an impression of a complex flow using a few contours only.
A cloud is an example where it is difficult to give a realistic rendering using 3D contours. Different algorithms have been proposed for efficiently reconstructing polygonal representations of isosurfaces from scalar volume data [28], [29], [30], but unfortunately none of these approaches can be used efficiently in an interactive application [31]. This is due to the effort that has to be spent fitting the surface, and also to the enormous number of triangles produced. In the paper by Westermann and Ertl [31], an isosurface was reconstructed with a marching cubes algorithm [28], [29] from an abdomen data set. It took about half a minute to generate 1.4 million triangles. In addition comes the time involved in rendering the triangle list, which on a high-end graphics computer takes several seconds. Interactive manipulation of the isovalue in large data sets with geometric rendering is therefore difficult. Westermann and Ertl [31] proposed a direct approach for rendering isosurfaces. This approach avoids a polygonal representation by using 3D texture mapping, and it used approximately one second to render the same data set.

3.4.2 Direct volume rendering

The other main group of rendering techniques is direct volume rendering, which is most common in connection with visualization of scalar data. In direct volume rendering, voxels are used as

building blocks to represent the entire volume. Typically, each voxel is associated with a single data point and contains information about color and opacity. As opposed to indirect techniques, such as isosurface extraction [28], [29], [30], the direct method displays the voxel data immediately. This method tries to give a visual impression of the complete 3D data set by taking into account the emission and absorption effects of the volumetric data. According to [32], all known approaches to direct volume rendering can be reduced to the transport theory model, which describes the propagation of light in materials.

A rendering technique that tries to describe the propagation of light is ray tracing, or ray casting [33], [34]. The basic idea of ray tracing is to determine the value of each pixel in the image by sending a ray through the pixel into the scene. Typically, when rendering volumetric data, the rays are parallel to each other and perpendicular to the view plane, see figure 3.4. The pixel value is computed by evaluating the voxels encountered along the ray using some specified function.

Figure 3.4: Ray tracing.

Although ray tracing gives high-quality renderings, it is seldom used in scientific visualization. The process is very compute intensive, and since it is implemented in software (as it is difficult to implement on dedicated hardware) it does not yet provide the interactivity needed for visualizing large scientific data. However, interactive ray tracing is an active topic of research [35], [36], and researchers believe that ray tracing still has room for performance improvements and may become capable of interactive rendering, even on standard PC hardware [36].

3.4.3 Direct volume rendering with 3D texture mapping

Hardware-assisted volume rendering using 3D textures can provide interactive visualizations of 3D scalar fields [37], [38], and was presented by Cabral et al. [37].
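Both ray casting and the texture-based technique described next ultimately composite colored, semitransparent samples along the viewing direction. A hedged front-to-back sketch of that per-pixel compositing (illustrative only, not any renderer's actual code; early ray termination as commonly used in ray casters):

```python
import numpy as np

def cast_ray(colors, alphas):
    """Composite RGBA samples encountered along one ray, front to back:
    each sample contributes its color weighted by its own opacity and by
    the transparency accumulated in front of it."""
    acc_color = np.zeros(3)
    acc_alpha = 0.0
    for c, a in zip(colors, alphas):
        acc_color = acc_color + (1.0 - acc_alpha) * a * np.asarray(c, dtype=float)
        acc_alpha = acc_alpha + (1.0 - acc_alpha) * a
        if acc_alpha >= 0.999:   # early ray termination
            break
    return acc_color, acc_alpha

# A fully opaque red sample hides the green sample behind it.
color, alpha = cast_ray([[1, 0, 0], [0, 1, 0]], [1.0, 1.0])
```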
The basic idea of the 3D texture mapping approach is to use the scalar field as a 3D texture. If the texture memory is large enough, the entire volume is downloaded into texture memory once, as a preprocessing step. To render the voxel set, a set of equally spaced planes (slices) parallel to the image plane is clipped against the volume (see figure 3.5). The hardware is then exploited to interpolate 3D

texture coordinates at the polygon vertices and to reconstruct the texture samples by trilinear interpolation within the volume.

Figure 3.5: Volume rendering by 3D texture slicing.

If a color table is used, the interpolated data values are passed through a lookup table that maps the values into color and opacity values. This way, graphics hardware allows fast response when modifying color and opacity. Finally, the volume is displayed by blending the textured polygons back to front onto the viewing plane. This technique is called volume slicing. Due to trilinear interpolation and dedicated hardware, this technique lets us produce images of high quality at interactive rates.

The results of ray casting and volume slicing are, according to SGI (Silicon Graphics), identical, but there are some important differences between the two techniques in processing the volumes. First, volume slicing is faster than ray casting because its computations are performed by the dedicated texture hardware, whereas ray casting computations are performed by the CPU. Second, volume slicing reduces the volume to a series of texture-mapped semitransparent polygons. These polygons can be merged with any other polygonal database and handed to any graphics API (for example OpenGL) for drawing.

Although 3D texture mapping is a powerful method, it strongly depends on the capabilities of the underlying hardware. In the methods used in [37], [38] the entire volume has to be stored in texture memory. Some graphics libraries allow the paging of textures [27], [39]; however, such methods for dealing with volumes whose size exceeds the physical texture memory severely hamper the interactivity of the rendering [40]. When the size of the volume data set exceeds the amount of available texture memory, the data can be split into subvolumes, or bricks, that are small enough to fit into memory.
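Splitting a volume into bricks is a simple tiling of the index space. A sketch (the layout and the function name are our assumptions, not Volumizer's API):

```python
import numpy as np

def make_bricks(volume, brick_shape):
    """Split a 3D volume into bricks of at most brick_shape voxels, and
    record the offset of each brick inside the full volume so the bricks
    can be rendered (or reassembled) in the right place."""
    bz, by, bx = brick_shape
    nz, ny, nx = volume.shape
    bricks = []
    for z in range(0, nz, bz):
        for y in range(0, ny, by):
            for x in range(0, nx, bx):
                bricks.append(((z, y, x),
                               volume[z:z + bz, y:y + by, x:x + bx]))
    return bricks

vol = np.arange(4 * 4 * 4).reshape(4, 4, 4)
bricks = make_bricks(vol, (2, 2, 2))   # eight 2x2x2 bricks
```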
Each brick is then rendered separately, but since the bricks have to be reloaded for every frame, the rendering performance decreases considerably. To reduce texture loading, Weiler et al. [40] have proposed a level-of-detail representation of the textures. In this method each brick stores approximations of the original data at coarser resolutions. These smaller bricks may be used when rendering the volume, allowing regions of interest to be displayed at higher resolution than other parts of the data set.

In this thesis we will make use of the volume renderers Viz [41] and VoluViz (an application we developed using the OpenGL Volumizer 2 [27] API). Both use a similar direct volume rendering technique. The rendering algorithm in Viz starts with a voxel set and a color table containing a color and alpha entry for each of the data values associated with the voxel set. Before rendering, the major axis is derived. The major axis is the coordinate axis whose direction is closest to the screen normal. After identifying the major axis, the voxel set is rendered in 3D using a set of slice planes, as in volume slicing (see figure 3.6). The difference is that in this method the

slices are derived from the intersection of the voxel set with a set of planes perpendicular to the major axis.

Figure 3.6: An illustration of Viz's rendering process: voxel data are passed through a data color editor to 2D/3D textures and drawn as textured quadrilaterals onto the screen.

Such an approach is used when restricted to 2D texture hardware. What separates Viz from 2D texture-mapped volume rendering is that Viz can perform trilinear interpolation and utilizes memory better¹. Finally, the slices are rendered back to front.

In OpenGL Volumizer 2, the volume is first tessellated into a tetrahedral mesh. After sorting the tetrahedra in back-to-front visibility order, they are rendered separately using the volume slicing technique (see figure 3.7).

Figure 3.7: Back-to-front composited slices for one, three, and five tetrahedra.

3.5 VIZ

Viz is a volume rendering application previously developed at FFI. It is a highly interactive renderer when run on a system with texture hardware. Viz has many features, including trilinear interpolation, an interactive color table (both for RGBA and HSVA), picking of subsets, clip planes, and blending of geometries and voxel data, and it offers visualization of two fields in the same scene. Viz is restricted to data sets on uniform grids.

¹ To avoid reloading the textures in 2D texture mapping, all slices from the three major volume orientations have to be stored in memory.

3.6 VoluViz

VoluViz is an application we developed at FFI using the OpenGL Volumizer 2 API (see figure 3.3 on page 20). As VoluViz is based on Viz, many of the features in Viz are now implemented in VoluViz. The OpenGL Volumizer 2 API is a library of C++ classes that facilitates the display and manipulation of volumetric data. OpenGL Volumizer 2 is specifically designed for volume visualization applications. It hides the details of low-level graphics languages and exposes only those functions necessary for viewing volumetric data. OpenGL Volumizer 2 provides a simple interface to the high-end graphics features available in InfiniteReality systems (such as 3D texture mapping and texture lookup tables). Since OpenGL Volumizer 2 utilizes a tetrahedral mesh, it can handle both regular grids and unstructured meshes. OpenGL Volumizer 2 supports all SGI graphics systems with 3D texture mapping and color tables.

Chapter 4

Line Integral Convolution

In this chapter we present the Line Integral Convolution (LIC) technique. We begin the chapter with an introduction to LIC. We continue with a description of convolution and of a technique called DDA convolution, which is a predecessor of LIC. The rest of the chapter describes the Line Integral Convolution algorithm and presents various improvements to the original algorithm proposed by Cabral and Leedom [5].

4.1 Introduction to Line Integral Convolution

Line Integral Convolution (LIC) is a powerful technique used to represent vector fields with high accuracy. It is a texture-based technique that can be used to display both two- and three-dimensional fields. LIC is essentially a filtering technique that blurs a texture locally along a given vector field, causing voxel intensities to be highly correlated along the field lines but independent in directions perpendicular to them. It takes a pixel/voxel set and a vector field as inputs and produces a new pixel/voxel set as output, see figure 4.1.

Figure 4.1: A vector field and a voxel set are inputs to the Line Integral Convolution, resulting in a new voxel set.

Since its introduction in 1993 by Cabral and Leedom [5], Line Integral Convolution has been an active field of research within the computer graphics and visualization community. Several researchers have developed the LIC algorithm further, and the method has found many application

areas, ranging from computer art to scientific visualization. Two examples of LIC images are shown in figure 4.2.

Figure 4.2: Examples of LIC images. The image on the left depicts the computed velocity field close to a racing car, computed at the Italian Aerospace Research Center (CIRA). The image on the right is a picture of flowers convolved by a given 2D vector field, taken from [5].

4.2 Convolution

Convolution is a mathematical operation that is applied in several areas, such as image processing, optics and signal processing. The convolution of two real functions $f = f(x)$ and $g = g(x)$ is defined as

$$(f * g)(x) = \int_{-\infty}^{\infty} f(x') \, g(x - x') \, dx'. \qquad (4.1)$$

If we convolve $f(x)$ with the Dirac delta function, defined by

$$\delta(x) = 0 \ \text{for} \ x \neq 0, \qquad \int_{-\infty}^{\infty} \delta(x) \, dx = 1, \qquad (4.2)$$

we obtain

$$(f * \delta)(x) = \int_{-\infty}^{\infty} f(x') \, \delta(x - x') \, dx' = f(x). \qquad (4.3)$$

In figure 4.3 we see how a convolution with a box function leads to a smearing of the function $f$.

Convolution is commonly used in image processing. The convolution is then typically represented by a two-dimensional convolution matrix $F = (f_{ij})$, where the matrix elements describe the blurring effect applied to the image. The intensity $I'(x, y)$ of a pixel in the new image is found by adding the intensity values of the neighboring pixels in the original image, each multiplied by the matrix element matching its position relative to the pixel. If, for example, a picture is convolved by the $3 \times 3$

Figure 4.3: Convolution of the function $f$ with a box function $g$.

matrix

$$F = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{pmatrix}, \qquad (4.4)$$

the only pixel to contribute to the new value $I'(x, y)$ is $I(x, y)$ itself, and the result after such a convolution is the original image. In general, the intensity of a pixel after convolution with a $(2T+1) \times (2W+1)$ matrix can be found by

$$I'(x, y) = \frac{1}{k} \sum_{s=-T}^{T} \sum_{t=-W}^{W} f_{st} \, I(x + s, y + t), \qquad (4.5)$$

where $k$ is a normalization constant. Figure 4.4 demonstrates the effect of convolving an image by a blurring matrix. Notice the blurring effect in the right image.

Figure 4.4: Blurring of a picture.

4.3 Convolution along a vector

Line Integral Convolution is a modification of a technique called DDA convolution [5]. In this method, each vector in the field is used to compute a DDA line which is oriented along the vector and goes some distance $L$ in the positive and negative vector directions. A convolution is then applied to the texture along the DDA line. The input texture pixels under the convolution kernel

are summed, normalized by the length of the convolution kernel, $2L$, and placed in an output pixel image at the position of the vector. Figure 4.5 illustrates this operation for a single vector in the field.

Figure 4.5: Convolution along a vector. The pixel in the output texture is a weighted average of all the input texture pixels covered by the DDA line.

The DDA approach depicts the vector field inaccurately. It assumes that the local vector field can be approximated by a straight line. As a result, DDA convolution gives an uneven rendering, treating linear portions of the field more accurately than areas with high curvature, such as areas with small eddies or vortices. This becomes a problem in the visualization of vector fields, since details in the small-scale structure are lost. Line Integral Convolution solves some of this problem, as the convolution takes place along curved segments.

4.4 LIC

Given a vector field $v$, the idea of Line Integral Convolution is to blur an input texture along the field lines of $v$. The LIC algorithm carries out the blurring by applying a one-dimensional convolution throughout the input texture. Each voxel in the output texture is determined by the convolution kernel and the texture voxels along the local field line indicated by the vector field. As a result, the intensity values of the output scalar field are strongly correlated along the field lines, whereas perpendicular to them almost no correlations appear. LIC images can therefore provide a clear visual impression of the directional structure of $v$. This is illustrated in figure 4.6.
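This blurring can be condensed into a small 2D sketch: every output pixel averages the input texture along the local field line, a few unit steps in each direction (box kernel, nearest-neighbor sampling, clamping at the borders). This is a didactic simplification of our own, far slower and cruder than the algorithms discussed in this chapter:

```python
import numpy as np

def lic_2d(texture, vx, vy, L=5):
    """Minimal 2D Line Integral Convolution with a box kernel: walk L
    unit steps downstream and upstream from every pixel and average the
    texture values visited along the way."""
    h, w = texture.shape
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            total, count = float(texture[i, j]), 1
            for sign in (1.0, -1.0):
                y, x = float(i), float(j)
                for _ in range(L):
                    yi = min(max(int(round(y)), 0), h - 1)
                    xi = min(max(int(round(x)), 0), w - 1)
                    n = np.hypot(vx[yi, xi], vy[yi, xi])
                    if n == 0.0:          # critical point: stop the walk
                        break
                    x += sign * vx[yi, xi] / n
                    y += sign * vy[yi, xi] / n
                    yi = min(max(int(round(y)), 0), h - 1)
                    xi = min(max(int(round(x)), 0), w - 1)
                    total += float(texture[yi, xi])
                    count += 1
            out[i, j] = total / count
    return out

# For a constant horizontal field, LIC reduces to a moving average
# along the rows of the texture.
tex = np.arange(20.0).reshape(4, 5)
smeared = lic_2d(tex, np.ones((4, 5)), np.zeros((4, 5)), L=1)
```

For the constant field the interior pixels are exactly the three-point row averages, which is an easy way to convince oneself that the walk and the normalization are correct.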

Figure 4.6: A 2D example where Line Integral Convolution is applied to a white noise input texture. We see how the input texture is blurred along the field lines of the vector field. The images are taken from [42].

Given a field line $\sigma$, Line Integral Convolution can mathematically be described by

$$I(x_0) = \int_{s_0 - L}^{s_0 + L} k(s - s_0) \, T(\sigma(s)) \, ds, \qquad (4.6)$$

where $I(x_0)$ is the intensity for a voxel located at $x_0 = \sigma(s_0)$. In this equation $k$ denotes the filter kernel of length $2L$, and $T$ denotes the input texture. The curve $\sigma(s)$ is parameterized by the arc length $s$. The filter length, or convolution length, determines how much the texture is smeared in the direction of the vector field. With $L$ equal to zero, the input texture is passed through unchanged. As the value of $L$ increases, the output texture is blurred to a greater extent. Stalling and Hege [12] found good results by choosing the convolution length $2L$ to be about one tenth of the image width.

In the algorithm (for 2D) proposed by Cabral and Leedom [5], referred to as CL-LIC hereafter, the computation of field lines was done by a variable-step Euler's method. The local behavior of the vector field is approximated by computing a local field line that starts at the center $x_0$ of a pixel and moves out in the downstream and upstream directions:

$$P_0 = x_0, \qquad P_i = P_{i-1} + \frac{v(P_{i-1})}{|v(P_{i-1})|} \, \Delta s_{i-1},$$

$$P'_0 = x_0, \qquad P'_i = P'_{i-1} - \frac{v(P'_{i-1})}{|v(P'_{i-1})|} \, \Delta s'_{i-1}. \qquad (4.7)$$

The convolution is expressed as follows:

$$I'(x_0) = \frac{\sum_{i=0}^{l} I(P_i) \, h_i + \sum_{i=0}^{l'} I(P'_i) \, h'_i}{\sum_{i=0}^{l} h_i + \sum_{i=0}^{l'} h'_i}, \qquad h_i = \int_{s_i}^{s_i + \Delta s_i} k(w) \, dw,$$

where $v(P_i)$ is the vector from the input vector field at the point $P_i$; $I'(x_0)$ is the output pixel value at the point $x_0$; $I(P_i)$ is the input pixel value at the point $P_i$; $l$ and $l'$ are the convolution distances along the positive and negative directions, respectively; $P_i$ represents the $i$th cell the field line steps through in the positive direction, and $P'_i$ the $i$th cell in the negative direction; $k(w)$ is the convolution filter function; and $\Delta s_i$ is the arc length between the points $s_i$ and $s_{i+1}$ along the field line, with $s_0 = 0$. This is done for each pixel, eventually producing an output LIC image.

4.5 Fast LIC

The algorithm suggested by Cabral and Leedom [5] is very compute intensive. Even in 2D, the algorithm involves a large number of arithmetic operations and can be rather slow. In 1995 Stalling and Hege [12] proposed a faster and more accurate LIC algorithm.

In the LIC algorithm proposed by Cabral and Leedom, a separate field line segment and a separate convolution integral are computed for each pixel in the output image. Stalling and Hege point out two types of redundancy in this approach. First, a single field line usually covers many image pixels, so in CL-LIC large parts of a field line are recomputed very frequently. Second, for a constant filter kernel $k$, very similar convolution integrals occur for pixels covered by the same field line. This is not utilized by Cabral and Leedom's algorithm.

Consider two points located on the same field line, $x_1 = \sigma(s_1)$ and $x_2 = \sigma(s_2)$. Assume that the points are separated by a small distance $\Delta s = s_2 - s_1$. Then, for a constant filter kernel $k$, the convolution integral (4.6) for $x_2$ can be written as
$$I(x_2) = I(x_1) - k \int_{s_1 - L}^{s_2 - L} T(\sigma(s)) \, ds + k \int_{s_1 + L}^{s_2 + L} T(\sigma(s)) \, ds. \qquad (4.8)$$

The intensities differ by only two small correction terms, which are rapidly computed by a numerical integrator. By calculating long field line segments that cover many pixels, and by restricting

to a constant filter kernel, we avoid both types of redundancy present in CL-LIC.

Figure 4.7: The input texture is sampled at evenly spaced locations $x_i$ along a field line $\sigma$. For each location the convolution integral $I(x_i)$ is added to the pixel (or voxel in 3D) containing $x_i$. A new field line is computed only for those pixels or voxels where the number of samples does not already exceed a user-defined limit.

The length of the field line, or field line length, is typically larger than the convolution length. In designing the fast-LIC algorithm, Stalling and Hege suggest an approach which relies on computing the convolution integral by sampling the input texture at evenly spaced locations $x_i = \sigma(s_0 + i h)$ along a pre-computed field line $\sigma$. First a field line is computed for some location $x_0 = \sigma(s_0)$ (see figure 4.7). The convolution integral (4.6) for this location is approximated as

$$I(x_0) \approx k \sum_{i=-n}^{n} T(x_i), \qquad (4.9)$$

where $h$ is the distance between different sample points. To ensure normalization we set $k = 1/(2n+1)$. After having computed $I(x_0)$, we step in both directions along the current field line, updating the convolution as follows:

$$I(x_{m+1}) = I(x_m) + k \, \big[ T(x_{m+1+n}) - T(x_{m-n}) \big], \qquad m = 0, 1, 2, \ldots$$

$$I(x_{-(m+1)}) = I(x_{-m}) + k \, \big[ T(x_{-(m+1)-n}) - T(x_{-m+n}) \big], \qquad m = 0, 1, 2, \ldots \qquad (4.10)$$

For each sample point the corresponding output image pixel is determined and the current intensity is added to that pixel. In this way, we efficiently obtain intensities for many pixels covered by the same field line. Running through all output image pixels, the algorithm requires that the total number of hits already accumulated in each pixel is larger than some minimum. If the number of hits in a pixel is smaller than the minimum, a new field line is computed; otherwise that pixel is skipped. At the end, the accumulated intensities of all pixels have to be normalized by the number of hits.
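The heart of the speed-up is that, for a box kernel, successive intensities along one field line differ by a single added and a single removed sample. Detached from the field line computation, the sliding update can be sketched as follows (function and variable names are ours):

```python
import numpy as np

def sliding_convolution(samples, n):
    """Box-kernel convolution of texture samples along one field line,
    computed incrementally as in fast-LIC: only the first intensity is
    evaluated directly; each subsequent one is an O(1) update."""
    k = 1.0 / (2 * n + 1)
    out = {}
    intensity = k * samples[0:2 * n + 1].sum()   # direct evaluation once
    out[n] = intensity                           # first fully covered point
    for m in range(n + 1, len(samples) - n):
        intensity += k * (samples[m + n] - samples[m - 1 - n])
        out[m] = intensity
    return out

# Along a line whose samples grow linearly, the box average at position m
# is just the sample value at m, which makes the update easy to verify.
samples = np.arange(12.0)
intensities = sliding_convolution(samples, n=2)
```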
The algorithm, referred to as fast-LIC, can be described by the pseudocode presented in figure 4.8.

Accuracy is especially important in fast-LIC because multiple field lines determine the intensity of a single pixel. If these lines are computed incorrectly, the LIC pattern gets disturbed. This is most evident near the center of a vortex in the vector field. The LIC algorithm proposed by Cabral and Leedom used a variable-step Euler's method for the computation of field lines. Stalling and Hege [12] employ a fourth-order Runge-Kutta method, thus making the algorithm more accurate.
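A fourth-order Runge-Kutta step for the field line equation $d\sigma/ds = v(\sigma)$ can be sketched as follows; the normalized circular test field is our own choice, picked because it makes it easy to check that the integrator stays on the correct field line:

```python
import numpy as np

def rk4_step(x, v, h):
    """One fourth-order Runge-Kutta step of length h for the field line
    equation dx/ds = v(x), with v a callable unit-vector field."""
    k1 = v(x)
    k2 = v(x + 0.5 * h * k1)
    k3 = v(x + 0.5 * h * k2)
    k4 = v(x + h * k3)
    return x + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

def vfield(x):
    """Normalized circular field: field lines are circles about the origin."""
    d = np.array([-x[1], x[0]])
    return d / np.linalg.norm(d)

x = np.array([1.0, 0.0])
for _ in range(100):            # trace one radian of arc length
    x = rk4_step(x, vfield, 0.01)
```

With this step size the traced point stays on the unit circle to high accuracy, whereas a simple Euler walk on the same field slowly spirals outward, which is exactly the failure mode that disturbs LIC patterns near vortex centers.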

for each pixel p
    if (numHits(p) < minNumHits) then
        initiate field line computation with x_0 = center of p
        compute convolution I(x_0)
        add result to pixel p
        set m = 1
        while m < some limit M
            update convolutions I(x_m) and I(x_{-m})
            add results to pixels containing x_m and x_{-m}
            set m = m + 1
for each pixel p
    normalize intensity according to numHits(p)

Figure 4.8: Pseudocode of fast-LIC.

If the step size between the sample points is too big, we may miss some of the pixels (voxels in 3D) along the computed field line. This can lead to images with aliasing [1] problems. Stalling and Hege have found a step size of 0.5 times the width of a texture cell to be sufficient.

4.6 Some improvements

After the first LIC algorithm was introduced in 1993, a number of improvements have been suggested. In 1994, Forsell [43] described an extension that makes it possible to map flat LIC images onto curvilinear surfaces. Until then, the algorithm only worked for vector fields over regular two-dimensional Cartesian grids 1. In 1995, Stalling and Hege [12] proposed the fast-LIC algorithm discussed in section 4.5. In 1996, Shen, Johnson and Ma [44] introduced a technique for injecting dye into the LIC field to highlight local features of the flow field. The dye insertion method utilizes LIC's natural smearing to simulate advection of dye within the flow field. The simulation of dye injection is done by assigning colors to isolated local regions in the input white noise texture. Cells whose streamlines pass through such regions receive color contributions from the dye. In 1997, Wegenkittl, Gröller and Purgathofer [45] presented Oriented Line Integral Convolution (OLIC), in which information about the orientation of the vector field is also present in the resulting image. And in 1998, Interrante and Grosch [13] examined techniques for visualizing 3D flow through a volume. We will take a closer look at the last paper later in this thesis.
1 In this thesis we only work with uniform grids.

Chapter 5

Volume LIC

Although Line Integral Convolution is most commonly used to depict 2D flows, or flows over a surface in 3D, LIC methods can equally well be used to depict 3D flows through a volume [13]. When LIC is applied to a solid noise texture, the output is a solid LIC texture that is blurred along the directions of the vector field. For 2D vector fields and surfaces in 3D this works well, because the resulting LIC texture is two-dimensional. But when working with volumetric data, we see from figure 5.1 that it can be difficult to get a good impression of the vector field from a series of solid or partially opaque 2D slices rendered via direct volume rendering. The image of the vector field will be incomplete and the inner details are completely lost.

Figure 5.1: Left: A solid white noise input 3D texture. Right: The output texture after Line Integral Convolution. The visualized vector field is a subset of a synthetic data set, used in experimenting with volume LIC. Both images are rendered with VoluViz.

In this chapter, we will study techniques for visualizing three-dimensional vector fields more effectively with volume LIC. We begin by presenting some techniques that improve the presentation of the data. This includes specifying a Region Of Interest (ROI) and the application of sparse input textures. We then propose and study a fast LIC algorithm in 3D and conclude with a discussion of techniques to reduce aliasing.

5.1 Choice of input texture

5.1.1 Region Of Interest

When working with scientific data, we can make use of scalar values like enstrophy (see page 15), temperature and the absolute value of velocity to specify a critical region in the volume where the information of the vector field is especially important [13]. Hence, we can clarify the presentation of the data by isolating and emphasizing information in these critical regions.

Interrante and Grosch [13] found that when LIC is used together with a Region Of Interest (ROI), better results can be achieved if the ROI mask is applied as a preprocess to the input texture, before the Line Integral Convolution, rather than as a postprocess to the output afterwards. In the first case, in which the ROI mask is applied before LIC, the Region Of Interest mask is guided by the flow itself, with the result that the boundaries of the ROI will everywhere be aligned with the direction of the vector field. In the second case, the visible portion of the vector field in the LIC texture will be completely determined by the ROI mask, creating boundaries which will not in general follow the direction of the flow.

In figure 5.2, we see the result after applying LIC to an input texture that has been masked by a Region Of Interest. The visualized vector field is a vorticity field obtained from a simulation done at FFI [16]. The vorticity magnitude was used to specify the ROI mask. This was done by only inserting white noise data into the input texture where the vorticity magnitude exceeds a specified threshold value. The rest of the voxels in the input texture are set to zero (see figure 5.2). The textures were defined to be twice as large as the vector field, so that the details could be seen more easily.

Figure 5.2: The masked input texture and the resulting LIC texture.
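As an illustration, the masking step described above (white noise only where the scalar exceeds the threshold, zero elsewhere) might be sketched as follows; the function name and the use of NumPy's random generator are our own choices, not the thesis implementation:

```python
import numpy as np

def roi_masked_noise(scalar, threshold, seed=0):
    """White-noise input texture restricted to a Region Of Interest.

    Voxels where the scalar field (e.g. vorticity magnitude) exceeds
    `threshold` receive white noise in 0..255; all other voxels are set
    to zero, so the mask is applied BEFORE the convolution, as the text
    recommends.
    """
    rng = np.random.default_rng(seed)
    noise = rng.integers(0, 256, size=scalar.shape)
    return np.where(scalar > threshold, noise, 0).astype(np.uint8)
```

The resulting texture can then be fed directly to the LIC pass, so the visible boundaries of the ROI end up aligned with the flow.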
The rendered textures are subsets of the computed textures.

5.1.2 Sparse input texture

When Line Integral Convolution is applied to a solid noise texture, even one that has been masked by a Region Of Interest function, the output image looks more or less like a solid

Figure 5.3: The input texture, where 9514 points are distributed according to the vorticity magnitude, and the resulting LIC texture.

object. The details of the vector field can still be difficult to depict. By applying LIC to an input texture consisting of a sparse set of points ([13], [14]), we can produce an output image which gives a much better impression of the vector field. Instead of a solid object we now produce a collection of densely placed field lines.

One of the strengths of Line Integral Convolution applied to dense (white noise) input textures is that it does not depend on the choice of seed points. When LIC is applied to a sparse input texture, though, this is not the case. The LIC texture is then computed by generating strokes through the volume by advecting the distributed points in the input texture, with empty space between them. As a result, the output texture depends on the placement of the distributed points. However, since texture based techniques allow the display of a much larger number of lines simultaneously in an image, making the position of each stroke less important, we can apply statistical methods for distributing the points in the volume.

We have tried different approaches to distributing the points or voxels in the 3D texture. In the first approach, the idea was to make a texture where the points are distributed according to the scalar value that was used in making the ROI mask. Hence, we get output images where regions with high scalar values are more emphasized than regions with lower values. In this approach, the regions with the highest scalar values become more cluttered than the regions with lower scalar values. Another option is a more random approach. This method leads to a LIC texture where the field lines are more evenly distributed, and with some data sets, like the synthetic data set used in this thesis, it can give a better impression of the vector field.
Figures 5.3, 5.4 and 5.5 show some examples of Line Integral Convolution applied to input textures with different distribution functions. In figure 5.3, the points in the input texture are distributed according to the vorticity magnitude, while in figures 5.4 and 5.5, a random approach is used. The number of points or spots in the input textures in figures 5.3 and 5.4 is about 9500. The algorithm for computing a random input texture can be described by the pseudocode in

Figure 5.4: The input texture, where 9528 points are distributed randomly, and the resulting LIC texture.

Figure 5.5: The input texture, where the points are distributed randomly, and the resulting LIC texture.

figure 5.6.

for each voxel v
    set input texture value(v) to zero
for each voxel v
    if (scalar value(v) > threshold value) then
        compute random number in [0,1]
        if (random number > density factor in [0,1])
            input texture value(v) = 255

Figure 5.6: Pseudocode for a random input texture.

The density of the distributed points in the input texture is determined by the density factor. The final set of chosen points is set to 255; the rest of the voxels are set to zero. To differentiate the strokes in the output texture, the use of white noise data has been common when applying LIC to a dense input texture. When applying LIC to a sparse input texture, though, the use of various levels of grey is not necessary. Instead, we differentiate the individual field lines by employing a shading technique called limb darkening. This will be discussed in section 6.3.

The algorithm for computing a weighted input texture is similar to the algorithm for computing a random input texture and can be described by the pseudocode in figure 5.7.

for each voxel v
    set input texture value(v) to zero
for each voxel v
    if (scalar value(v) > threshold value) then
        compute random number in [0,1]
        if (random number > weight function(scalar value(v)) in [0,1])
            input texture value(v) = 255

Figure 5.7: Pseudocode for a weighted input texture.

In this approach, we employ a weight function to select the points in the input texture. The weight function returns a number between 0 and 1: a low number for high scalar values, and a high number for low scalar values. Which function should be used depends on the range of the scalar field and on how dense we want the input texture to be. The input texture shown in figure 5.3 was computed using the weight function w(s) = (1 - s)/(1 - t), where s is a normalized scalar value in the range [0,1] and t is the threshold value. This function returns the value 1 when the scalar value s is equal to the threshold t.
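The two pseudocode variants differ only in whether the rejection limit is a constant density factor or a weight function of the scalar value, so they can share one sketch; the names and the vectorized NumPy formulation are our own:

```python
import numpy as np

def sparse_input_texture(scalar, threshold, density, weight=None, seed=0):
    """Sparse input texture: random and weighted variants in one function.

    scalar    : 3-D scalar field used for the ROI test (e.g. vorticity
                magnitude), assumed to be normalized to [0, 1].
    threshold : ROI threshold t; voxels at or below it stay zero.
    density   : density factor in [0, 1] for the random variant.
    weight    : optional vectorized w(s) returning values in [0, 1];
                when given, a voxel is selected if the random number
                exceeds w(s) instead of the constant density factor.
    """
    rng = np.random.default_rng(seed)
    tex = np.zeros(scalar.shape, dtype=np.uint8)
    r = rng.random(scalar.shape)
    limit = weight(scalar) if weight is not None else density
    tex[(scalar > threshold) & (r > limit)] = 255   # chosen points -> 255
    return tex
```

For the weighted variant one would pass, e.g., `weight=lambda s: (1 - s)/(1 - t)` for a threshold `t`, which rejects almost all candidate points near the threshold and keeps most of those with high scalar values.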
Best results were achieved when requiring a minimum distance between the selected points in the input texture. This prevents the spots in the input texture, and thus the field lines in the output image, from getting too close. In this approach, the details of the vector field are displayed more clearly. To prevent the lines from getting too close, ideally the distribution of the field lines itself should be controlled, rather than the distribution of points [9]. However, by

limiting the total length of the field line, and if the field lines are integrated an equal distance in the upstream and downstream directions (as in LIC), we obtain reasonable results by just controlling the position of the points. Figure 5.8 shows the result after applying LIC to a sparse random input texture with a minimum distance between the selected points. The visualized vector field is obtained from [17] and depicts half of the domain of the vorticity field in the Kelvin-Helmholtz billow shown in figure 2.13 on page 16. With the fast-LIC algorithm 1 we implemented, the output texture took 25 hours to compute. The long computation time required for large volumes is the reason that we propose the Seed LIC. When using rather sparse input textures, aliasing often occurs in the rendered image. Aliasing will be discussed in section 5.3.

5.1.3 Spot size

A method that can bring additional information about a flow field into the rendered image is the use of varying spot sizes in the input texture ([13], [14]). If we let the size of the spots depend on some additional scalar variable, we get images where the line width of the strokes or field lines is connected to the scalar field. We have not tried this technique, but we have used bigger spots or droplets to try to create a halo effect and to reduce aliasing. This will be discussed in section 6.3.2.

5.1.4 Detail enlargement

For data on a coarse grid, or if we want to study details in a subset, detail enlargement [12] is an effective method to achieve images with higher resolution. Detail enlargement involves the use of an input and output texture which are bigger than the size of the grid, so that each data cell is covered by many texture voxels. This makes the strokes or field lines in the LIC texture finer and more accurate.
When computing the convolution integrals (4.9) and (4.10), we have to use a smaller step size between the sample points in order to ensure that we still hit most of the voxels covered by the field line. If we for example use textures twice as large as the vector field, we should reduce the step size by a factor of two. Some examples of detail enlargement are shown in figure 5.9.

Stalling and Hege [12] made the size of the output image in fast-LIC independent of both the vector field resolution and the input texture. By doing this, they avoid resampling the input texture whenever they want to create a LIC texture of a different size. That the input texture remains unchanged also allows better comparisons between images of different resolution. The fast-LIC algorithm was initially made for the two-dimensional case. In volume LIC, one approach is to apply LIC to sparse input textures. One of the reasons for doing this is to prevent the depicted field lines in the output image from getting too close. If we make the input texture independent of the resolution, we lose this property. When just increasing the resolution of the output texture, many voxels in the output texture will cover a single cell in the input texture. A single spot in the input texture therefore maps to several neighboring output voxels, resulting in a cluster of smaller strokes in the output image where the details of the single strokes can be difficult to depict. If the local field lines of the vector field have similar directions, the cluster will look like one big field line. In such cases the details are preserved

1 Our fast-LIC algorithm is not an optimized algorithm.

Figure 5.8: Visualization of a vorticity field using Line Integral Convolution applied to a sparse input texture with a specified minimum distance between the distributed points.

Figure 5.9: Details of the synthetic vector field defined on page 13, displayed at different resolution factors (1, 2 and 4).

in the image. It is when the local field lines diverge from one another that the strokes in the output image may get a little cluttered.

5.2 Seed LIC

Computing Line Integral Convolution is a very time consuming task. For very large data sets, even fast-LIC takes considerable time to create a LIC image. As an example, it took 25 hours to compute the LIC texture displayed in figure 5.15 on page 46 with the fast-LIC algorithm we implemented. We will now present a method which is quicker than fast-LIC, and perhaps a better solution for large data sets if we want to reduce the computation time.

When using a sparse input texture that has been masked by a Region Of Interest, most of the voxels in the 3D input texture are set to zero. Using the fast-LIC algorithm on a texture like this, many computations will be performed in areas where nothing happens. In fast-LIC, enough field lines are calculated to cover every voxel in the output texture, correlating the voxel values along these lines. But in the cases where the field lines do not hit cells in the input texture that are turned on (set to 255), no additional information is brought to the voxels along them. The values in those voxels will be the same in the input and output texture (namely zero), and the

computations in these cells seem unnecessary. By only calculating field lines from the voxels that are turned on (the seed points), we save a lot of computing time. The algorithm, which we have called Seed LIC, is based upon fast-LIC and can be described by the pseudocode in figure 5.10.

for each voxel v
    set voxel value to zero
for each voxel v
    if (input texture value(v) > 0) then
        initiate field line computation with x_0 = center of v
        compute convolution I(x_0)
        add result to voxel v
        set m = 1
        while m < some limit M
            update convolutions I(x_m) and I(x_{-m})
            add results to voxels containing x_m and x_{-m}
            set m = m + 1
for each voxel v
    normalize intensity according to numHits(v)
for each voxel v
    normalize intensity so that the highest value is 255

Figure 5.10: Pseudocode of Seed LIC.

The algorithm behaves similarly to fast-LIC. The main difference is that in Seed LIC, we initiate the field lines and compute the convolution starting from the seed points only. In addition, we normalize the intensity of the voxels so that the voxel values vary from 0 to 255. Hence, we increase the range of the data. All the voxels are also set to zero at the beginning of the algorithm. The voxels that are not altered during the convolution retain their values.

The Seed LIC does not give quite as good results as the fast-LIC. In Stalling and Hege's [12] algorithm, the values in every voxel of the output image are computed, while in our algorithm we only compute some of them. As a result, we lose some of the information in the final image. Nevertheless, we see from figure 5.11 that Seed LIC results in images where the directional structure of the vector field is clearly visible, though the details are not as clear as for the fast-LIC.

To compare the computational times of the two algorithms, we have implemented both the fast-LIC and the Seed LIC and applied the two techniques to textures of different sizes. The vector field used in the test was the synthetic vector field defined on page 13.
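A full Seed LIC needs a field line integrator; to keep a sketch self-contained we show the degenerate case of a uniform field along the x axis, where every field line is a grid row and the convolution collapses to a 1-D box filter. The seed test and the final normalization to 255 follow figure 5.10; everything else is simplified and the function name is ours:

```python
import numpy as np

def seed_lic_uniform_x(tex, n):
    """Seed LIC sketch for the special case of a uniform field along x.

    In a uniform x-directed field every field line is a grid row, so the
    constant-kernel convolution reduces to a 1-D box filter along each
    row. As in Seed LIC, only lines that actually contain a seed (a
    voxel > 0) are processed; all other voxels stay zero. Finally the
    result is normalized so that the highest value is 255.
    """
    out = np.zeros(tex.shape, dtype=float)
    rows = tex.reshape(-1, tex.shape[-1])        # one row per field line
    orows = out.reshape(-1, tex.shape[-1])
    kernel = np.ones(2 * n + 1) / (2 * n + 1)    # constant filter kernel
    for r in range(rows.shape[0]):
        if not rows[r].any():                    # no seed on this line: skip
            continue
        orows[r] = np.convolve(rows[r], kernel, mode='same')
    peak = out.max()
    return out * (255.0 / peak) if peak > 0 else out
```

Rows without seeds are never touched, which is exactly where the speed-up over fast-LIC comes from on sparse input textures.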
The textures were defined using resolution factors of 1, 2, 4 and 8 times the vector field size. The LIC textures were computed using a field line length of 15 times the length of the grid cells, and the length of the filter kernel, or the convolution length, was set to 3 times the grid cell. In table 5.1, the results from the computations can be found. We see that by using Seed LIC, we may for sparse input textures reduce the computation time immensely.

There is a relation between the sparsity of the input texture and the resulting LIC images after fast-LIC and Seed LIC. The denser the input texture is, the closer our algorithm

Figure 5.11: Left: The output image after applying the fast-LIC to an input texture. Right: The output image after applying the Seed LIC to the same input texture. The texture computed with fast-LIC took 752 seconds, while the texture computed with Seed LIC took 4.7 seconds. The images show a subset of the computed textures. While the Seed LIC algorithm only assigns values to the voxels along the field lines from the seed points, the fast-LIC assigns values to nearby voxels, which leads to a smearing of the field lines.

Resolution factor   Δs     N      fast-LIC   Seed LIC
1                   ...    ...    ... s      0.09 s
2                   ...    ...    ... s      0.69 s
4                   ...    ...    ... s      2.73 s
8                   ...    ...    ... s      4.70 s

Table 5.1: The computation times using fast-LIC and Seed LIC. The value Δs is the distance between the different sample points and N is the number of voxels turned on (set to 255) in the input texture.

Figure 5.12: Visualization of a texture after Seed LIC was applied to a denser input texture. The rendered texture is a subset of the computed texture.

becomes to the fast-LIC algorithm. For a full 2 voxel set, there are as many seed points as there are voxels. When applying Seed LIC to a texture like this, field lines and convolution integrals are computed in every voxel. If we add a minimum-number-of-voxel-hits test to the Seed LIC algorithm applied to a full input texture, the two algorithms become identical. Figure 5.12 shows the result of Seed LIC applied to a somewhat denser input texture. By increasing the resolution and using a properly sparse input texture, we can reveal the details of the vector field. In comparison, it took 923 seconds to create the corresponding fast-LIC image.

5.3 Aliasing

Aliasing [1] can be a problem when using voxel graphics. Representing a field line with voxels, as in LIC, typically results in a stair-stepped appearance. Some of the aliasing present in the output texture is removed by the trilinear interpolation performed during rendering 3. The reason is that when Line Integral Convolution is applied to a sparse input texture, the resulting field lines in the LIC texture are mostly covered by very low values, and interpolation between these values and the core values results in smoother field lines. Figure 5.13 demonstrates the effect of the interpolation. We see from the figure that trilinear interpolation not only reduces aliasing but also, with proper color and opacity tables, creates a halo effect which makes it easier to separate the individual lines from one another. The halo effect will be discussed in section 6.3.

2 A dense input texture, for example white noise, where all the voxels in the texture are turned on and vary from 0 to 255.
3 Both Viz and Volumizer 2 have support for trilinear interpolation.
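The trilinear interpolation relied on here can be written out directly; a plain reference sketch in voxel coordinates (renderers such as those named above do this in hardware, so this is only for illustration):

```python
import numpy as np

def trilinear(vol, p):
    """Trilinearly interpolate the volume `vol` at a continuous point
    p = (x, y, z) given in voxel coordinates."""
    x, y, z = p
    # clamp the base corner so the 2x2x2 neighborhood stays in bounds
    i = min(int(np.floor(x)), vol.shape[0] - 2)
    j = min(int(np.floor(y)), vol.shape[1] - 2)
    k = min(int(np.floor(z)), vol.shape[2] - 2)
    fx, fy, fz = x - i, y - j, z - k
    value = 0.0
    for di in (0, 1):                 # blend the 8 surrounding voxels
        for dj in (0, 1):
            for dk in (0, 1):
                w = ((fx if di else 1.0 - fx)
                     * (fy if dj else 1.0 - fy)
                     * (fz if dk else 1.0 - fz))
                value += w * vol[i + di, j + dj, k + dk]
    return value
```

Sampling between a bright field line core and its near-zero surroundings yields the smooth intermediate values that soften the stair-stepped voxel appearance.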

Figure 5.13: The effect of trilinear interpolation. Left: A rendered LIC texture without the use of trilinear interpolation. Right: A rendered LIC texture with the use of trilinear interpolation. Both LIC textures are computed with fast-LIC. Trilinear interpolation leads to smoother field lines.

Figure 5.14: Left: The result after applying Seed LIC. Right: The result after convolving the output texture obtained from Seed LIC. Convolving the LIC texture leads to smoother field lines.

The Seed LIC is more influenced by aliasing than the fast-LIC. This is due to the smaller number of voxels calculated in the Seed LIC algorithm. While this algorithm only assigns values to the voxels along the field lines from the seed points, the fast-LIC assigns values to nearby voxels, which leads to a smearing of the field lines. To reduce the aliasing present in the Seed LIC textures, we propose a method which involves convolving the LIC texture with a 3 x 3 x 3 matrix. Convolving the texture with a convolution matrix F, where the center entry is 1 and the rest of the entries are set to, for example, 0.25, leads to a smearing of the field lines, resulting in smoother strokes. Figure 5.14 shows both the result obtained from Seed LIC and the combined Seed LIC and convolution technique. Convolving the output texture leads to thicker strokes. If preserving the thickness of the individual strokes is desired, this can be achieved by increasing the resolution of the input and output textures by a factor of three prior to the convolution.

It should be mentioned that the goal in scientific visualization is not to render the scene in a photo-realistic way, but to generate images which provide maximum insight into the data and the underlying processes. Nevertheless, the convolution technique leads to a smearing of the

data and, with an appropriate color table, can provide a better spatial perception of the rendered LIC volume.

The voxel sets used to compare the visualization techniques depicted in figures 5.13 and 5.14 are subsets of the computed textures. While the fast-LIC texture took about 25 hours to compute, the Seed LIC took only a matter of seconds, as did the convolution of the LIC texture. Although convolution increases the computation time, it still computes much faster than fast-LIC. The complete fast-LIC texture and the convolved Seed LIC texture are displayed in figures 5.15 and 5.16.

Figure 5.15: Visualization of a vorticity field, obtained from [17], using fast-LIC.
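The 3 x 3 x 3 smoothing convolution is straightforward to sketch; we assume the center weight 1 and neighbor weight 0.25 mentioned in the text, and zero-padding at the borders is our own choice:

```python
import numpy as np

def smooth_lic(tex, neighbor_weight=0.25):
    """Smear a Seed-LIC texture with a 3x3x3 convolution matrix whose
    center entry is 1 and whose 26 other entries share one weight."""
    src = np.pad(tex.astype(float), 1)        # zero-pad the borders
    out = np.zeros(tex.shape, dtype=float)
    nx, ny, nz = tex.shape
    for di in (-1, 0, 1):                     # accumulate shifted copies
        for dj in (-1, 0, 1):
            for dk in (-1, 0, 1):
                w = 1.0 if (di, dj, dk) == (0, 0, 0) else neighbor_weight
                out += w * src[1 + di:1 + di + nx,
                               1 + dj:1 + dj + ny,
                               1 + dk:1 + dk + nz]
    return out
```

Every voxel keeps its own value and additionally receives a quarter of each neighbor's, which is what thickens and smooths the strokes.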

Figure 5.16: Visualization of a vorticity field, obtained from [17], using Seed LIC and convolution.

Chapter 6

Volume visualization with LIC

In chapter 3, we found that hardware assisted volume rendering using 3D textures can provide interactive visualization of 3D scalar fields. In this chapter, we will show how volume visualization techniques used together with Line Integral Convolution can be used to explore the details of a vector field interactively, and to improve the 3D perception of the rendered data. We begin the chapter by describing some techniques that simplify the process of producing meaningful visualizations of volume LIC textures. This includes interactive assignment of color and opacity tables and the use of clip planes. Both these techniques can easily be implemented in a volume renderer application. We then suggest the use of a shading technique called limb darkening to convey the 3D shape of the field lines traced by LIC. Finally, a description of visualization of multiple scalar fields is given, and we propose a two-field visualization technique to depict the behavior of the vorticity field inside vortical structures obtained from simulations of turbulence.

6.1 Assignment of color and opacity values

Some graphics systems allow interactive modification of the texture lookup tables used for the assignment of color and opacity values. This is an indispensable feature that simplifies the process of finding an appropriate color table and of producing a meaningful visualization of the LIC texture. A possible problem in representing volume LIC textures is how to clearly and effectively convey the inner details of the texture and the 3D shape and relative depth relations among the similarly directed, densely clustered field lines traced by LIC. From subsection 5.1.2, we have seen that the use of a sparse input texture is one way to reveal the inner details of a vector field. The manipulation of the opacity value is another possibility.
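Interactive editing of lookup tables usually operates on a simple 1-D table indexed by the data value; a sketch of a ramp-like alpha (or value) table that suppresses the lowest data values, where the cut-off fraction is an arbitrary example of ours:

```python
import numpy as np

def ramp_table(size=256, low_cut=0.2):
    """Ramp-like lookup table for alpha (and value in HSVA): data values
    below `low_cut` (as a fraction of the range) are fully suppressed,
    and the entries then rise linearly to 1.0 at the highest value."""
    x = np.linspace(0.0, 1.0, size)
    return np.clip((x - low_cut) / (1.0 - low_cut), 0.0, 1.0)
```

Applied as an opacity table, such a ramp makes the near-zero background of a LIC texture fully transparent while keeping the bright field line cores opaque.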
Suppressing or assigning low opacity to the lowest scalar values results in a semi-transparent representation of the LIC texture and can be used as an alternative to the use of sparse input textures [15]. When creating most of the images in this thesis, we have used a combination of these techniques. Figure 6.1 shows how the transfer function for the opacity or alpha values affects the visualization of a dense LIC texture. The visualized vector field is the synthetic vector field defined on page 13.

Color and opacity can also be used to enhance the contrast between the various strokes present in the output texture. This is especially needed when applying Line Integral Convolution to sparse input textures, since most of the voxel values in the resulting LIC texture tend to be rather low. Good results are achieved with a ramp-like function for alpha and value (in the HSVA model) that increases from low to high data values (see figure 6.1).

Figure 6.1: Visualization of a dense LIC texture with different alpha and value functions.

Manipulation of color and opacity can also be used to reveal the 3D shape of the individual field lines in the LIC texture. This will be discussed in section 6.3.

6.2 Clipping functionality

The use of clip planes is another approach that allows the user to explore the interior structures of the LIC texture. By interactively moving a clip plane inside the volume, we are able to follow the direction of the field lines more clearly and significantly improve the spatial understanding of the vector field. Figures 6.2 and 6.3 demonstrate the effect of interactive clip planes in a volume renderer application. Figure 6.2 depicts the nature of the synthetic vector field, while figure 6.3 reveals the directional structures of the vorticity field inside a vortex obtained from [17]. Since the surfaces of the clip planes do not generally follow the direction of the field lines, best results are achieved when the field lines are separated from one another. This can be done either by using a sparse input texture or by manipulating the opacity values. If a dense texture is used, we see from figure 6.4 that it can be difficult to get an impression of the directional structures of the vector field because of the apparent lack of correlation.

6.3 Halo effect

We mentioned earlier the problem of effectively displaying the 3D shape and depth relations among the densely clustered field lines in the final LIC texture. Interrante and Grosch

Figure 6.2: Visualization of a LIC texture using a clip plane.

Figure 6.3: Visualization of a vortex using a clip plane.

Figure 6.4: Visualization of a dense LIC texture using a clip plane.

[13] solve this problem by using gaps to highlight the depth relations. Two LIC textures are computed, one for the field lines and one for an enclosing set of halos. The final image is rendered using ray casting volume rendering, taking both textures as input. As tracing proceeds through the volume of halos, entries into and exits from the halo region are recorded. Here we will look at a different approach to making the 3D shape and depth relations among the field lines more clear.

6.3.1 Shading in volume visualization

Although there exist lighting models in volume graphics, they are seldom used when rendering voxel data in scientific visualization [16]. While geometric objects have reflective properties, so that the use of light sources emphasizes the 3D shape, a volume set should be thought of as emitting light, where the emitted light expresses the data value of a particular voxel. One could then anticipate that volume rendering would result in flat images, with little information about the depth relations in the volume. By using a technique named limb darkening [16], [25], we can get images with a more three-dimensional look. The technique has its name because the effect obtained is similar to one well known in astrophysics. Looking at the sun with a small telescope, it is evident that the center is brighter than the limb. This is because when watching the center of the sun we see deeper into its atmosphere, where the temperature is higher than in the layers we see closer to the edge. Since higher temperatures imply higher light intensity and are visible as brighter, we observe a darkening effect at the limb. This can be utilized in volume visualization as well. By assigning darker values and decreasing the opacity near the edges of an object, we obtain a more three-dimensional look. This is illustrated in figure 6.5.
If we use the HSVA color model we can control the temperature or

Figure 6.5: Limb darkening used to visualize a sphere with voxel graphics.

the intensity of the light emitted, through the value V in the HSVA color table, and the opacity through the alpha table. An effect like limb darkening is much harder to control using RGBA.

6.3.2 Shading with LIC

Limb darkening can also be utilized when visualizing LIC textures. When using a volume renderer with support for trilinear interpolation 1, some of the halo effect automatically occurs in the rendered image. The reason is that when Line Integral Convolution is applied to a sparse input texture, the resulting field lines in the LIC texture are mostly covered by very low values, and interpolation between these values and the core values results in a darker layer around the core of the field lines; see figure 5.13 on page 45. By using proper color and opacity tables, we are then able to create a reasonable limb darkening (see figure 6.6).

To emphasize the effect we have tried two different techniques. In the first technique, we made the spot sizes in the input texture bigger by assigning lower values to the neighboring voxels of the originally chosen cells. This works well if the integrated lines from the modified spots have somewhat similar directions, but in places where the field lines diverge from one another we lose the effect. To avoid this problem, we propose an improvement of the shading effect by convolving the LIC texture with a 3 x 3 x 3 convolution matrix F. Convolution leads to a smearing of the field lines and makes the strokes thicker and the 3D shape more clear. We have achieved good results using a convolution matrix F where the center entry is 1 and the rest of the entries are set to 0.25. Both these techniques reduce the aliasing present in the output texture. Figure 5.14 on page 45 demonstrates the effect of the convolution applied to a texture computed by the Seed LIC.

The natural halo effect caused by the interpolation is stronger in fast-LIC than in Seed LIC.
This is because the fast-lic algorithm already leads to a smearing of the individual field 1 Both Viz and Volumizer 2 have support for trilinear interpolation.

Figure 6.6: Limb darkening used to visualize field lines in a LIC texture. Left: By letting the alpha change from zero to higher opacity values and the value V go from dark to bright, we obtain a three-dimensional look. Right: If a constant value V is applied to all the voxels in the LIC volume, this leads to a more flat appearance.
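The convolution step proposed above for thickening the strokes can be sketched as follows. The kernel weights (a dominant center entry and a small constant everywhere else) and the clamping are illustrative assumptions; the exact entries of the matrix F are not reproduced here.

```python
def convolve3x3x3(vol, center=1.0, neighbor=0.05):
    """Convolve a volume (nested lists, indexed vol[z][y][x]) with a
    3x3x3 kernel whose center weight dominates.  The small neighbor
    weights smear each field line slightly, thickening the strokes
    and strengthening the limb-darkening halo."""
    nz, ny, nx = len(vol), len(vol[0]), len(vol[0][0])
    out = [[[0.0] * nx for _ in range(ny)] for _ in range(nz)]
    for z in range(nz):
        for y in range(ny):
            for x in range(nx):
                acc = 0.0
                for dz in (-1, 0, 1):
                    for dy in (-1, 0, 1):
                        for dx in (-1, 0, 1):
                            zz, yy, xx = z + dz, y + dy, x + dx
                            if 0 <= zz < nz and 0 <= yy < ny and 0 <= xx < nx:
                                w = center if (dz, dy, dx) == (0, 0, 0) else neighbor
                                acc += w * vol[zz][yy][xx]
                out[z][y][x] = min(acc, 1.0)   # clamp to the texture range
    return out

# A single bright voxel spreads into its 26 neighbors:
vol = [[[0.0] * 3 for _ in range(3)] for _ in range(3)]
vol[1][1][1] = 1.0
smoothed = convolve3x3x3(vol)
```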

The proposed methods that emphasize the 3D shape of the field lines are therefore mostly intended for textures computed by the Seed LIC algorithm.

6.4 Two fields visualization

Direct volume rendering is a powerful tool that allows the display of multiple scalar fields in the same image. The rendering of two fields can, for example, be achieved by using two independent sets of color and opacity tables. The merging of the two data sets is done by applying a double set of quads, one for each field. This is useful when comparing two fields. Another approach is to let one of the fields define the structure or the body through its opacity, and the other field determine the color. This method is useful for conveying information about related scalar quantities obtained from a simulation.

As an example of the first approach, we have in figure 6.7 visualized both the enstrophy (the squared magnitude of the vorticity) and a LIC texture displaying the vorticity field. The visualized data are obtained from [17]. To obtain smooth field lines, we oversampled the textures by a factor of three before computing the Seed LIC, and then convolved the output texture obtained from the Seed LIC. In order to compare the two data sets, we also oversampled the enstrophy scalar field by a factor of three. This was achieved by trilinearly interpolating the data.

The latter approach can be used to display additional scalar variables over a 3D flow. This can be achieved by letting the scalar values from the LIC texture define the opacity, and a related scalar quantity, like the temperature or the vorticity magnitude, define the color. In figure 6.8, color is used to indicate vector magnitude across the synthetic vector field defined on page 13. Since the LIC texture only determines the opacity, shading of the field lines with limb darkening becomes difficult². To improve the clarity of the depth relations among the field lines, we therefore employ a black background.
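A minimal sketch of the first approach, in which each field is classified through its own RGBA table and both contributions are blended into the pixel along a viewing ray. Treating the double set of quads as interleaved samples composited back to front is an assumption of this sketch, and the two tables (hypothetical enstrophy and LIC tables) are illustrative.

```python
def blend_over(dst, src):
    """Composite one RGBA sample over the pixel accumulated so far
    (back-to-front 'over' operator, straight alpha)."""
    r1, g1, b1, a1 = dst
    r2, g2, b2, a2 = src
    a = a2 + a1 * (1.0 - a2)
    if a == 0.0:
        return (0.0, 0.0, 0.0, 0.0)
    mix = lambda c2, c1: (c2 * a2 + c1 * a1 * (1.0 - a2)) / a
    return (mix(r2, r1), mix(g2, g1), mix(b2, b1), a)

def composite_ray(samples_a, samples_b, table_a, table_b):
    """Walk one viewing ray back to front; at every sample position the
    two fields are classified independently through their own RGBA
    tables and both contributions are blended into the pixel."""
    pixel = (0.0, 0.0, 0.0, 0.0)
    for sa, sb in zip(samples_a, samples_b):
        for s, table in ((sa, table_a), (sb, table_b)):
            idx = int(round(s * (len(table) - 1)))
            pixel = blend_over(pixel, table[idx])
    return pixel

# Hypothetical tables: one per field, each with its own color and opacity.
N = 256
table_a = [(1.0, 0.2, 0.0, i / (N - 1)) for i in range(N)]        # e.g. enstrophy
table_b = [(0.0, 0.4, 1.0, 0.3 * i / (N - 1)) for i in range(N)]  # e.g. LIC, kept faint
```

Keeping one table faint (low alpha) lets the other field remain readable when both are shown in the same image, as in figure 6.7.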
For sparse LIC textures, the use of a black background makes it easier to differentiate the individual lines.

Polkagris visualization

Finally, we present a variation of the two field visualization technique to depict the behavior of the vorticity field inside vortex tubes obtained from [17]. In this method, which we have called polkagris³ visualization, the enstrophy is used to define the opacity, while the voxel set obtained from LIC (applied to the vorticity field) is used to define the color. Since enstrophy expresses the vortical structures of the flow, we get images of vortex tubes colored by the LIC texture, conveying the directional structure of the vorticity field. The reason for naming the technique polkagris visualization can be seen in figure 6.9: the field lines of the vorticity field twist around the vortices, resulting in objects very similar to the candy humbug or candy cane. Figure 6.10 shows the color table used to visualize the candy canes in figure 6.9. The low values in the sparse LIC texture are set to white, while the field lines are colored red. Examples of the polkagris technique are shown in figures 6.11, 6.12, 6.13 and 6.14.

² To achieve shading of the field lines with limb darkening, both the opacity and the value in the HSVA color model have to be defined by the LIC texture.
³ Polkagris is a Swedish candy similar to the English humbug or the American candy cane.
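A per-voxel sketch of the polkagris combination, assuming field values normalized against a known maximum; the white-to-red threshold is an illustrative stand-in for the color table of figure 6.10.

```python
def polkagris_color(lic_value, threshold=0.3):
    """Color table for the polkagris technique: low values in the
    sparse LIC texture are set to white, field lines are colored red.
    The threshold separating the two is an illustrative assumption."""
    if lic_value < threshold:
        return (1.0, 1.0, 1.0)        # background inside the tube: white
    return (1.0, 0.0, 0.0)            # field-line strokes: red

def polkagris_voxel(enstrophy, lic_value, max_enstrophy):
    """RGBA for one voxel: the opacity follows the enstrophy, so only
    the vortex tubes are visible, while the color follows the LIC
    texture, striping the tubes like a candy cane."""
    r, g, b = polkagris_color(lic_value)
    alpha = min(enstrophy / max_enstrophy, 1.0)
    return (r, g, b, alpha)
```

Because the opacity comes entirely from the enstrophy, voxels outside the vortex tubes vanish regardless of their LIC value.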

Figure 6.7: Two field visualization, using two independent sets of color and opacity tables. Top: The enstrophy field is displayed by assigning low opacity values to the LIC texture. The color varies from yellow to red with increasing enstrophy value. Middle: The vorticity field is displayed by assigning low opacity to the enstrophy field. Bottom: Both fields displayed in the same image. The visualized data are obtained from [17].

Figure 6.8: Visualization of two fields simultaneously. The opacity is defined by the LIC texture and the color is defined by the vector magnitude. The color varies from red to blue with increasing vector magnitude.

Figure 6.9: Polkagris visualization.

Figure 6.10: The color table for the visualization of the LIC texture shown in figure 6.9.

Figure 6.11: Polkagris visualization used on the same subset as visualized in figure 6.7. The opacity is defined by enstrophy, and the color is defined by a LIC texture conveying the directional structures of a vorticity field inside the vortices.

Figure 6.12: Visualization of two scalar fields simultaneously. The opacity is defined by enstrophy, and the color is defined by a LIC texture conveying the directional structures of a vorticity field inside the vortices.

Figure 6.13: Visualization of two scalar fields simultaneously, using a clip plane. The opacity is defined by enstrophy, and the color is defined by a LIC texture conveying the directional structures of a vorticity field inside the vortices.

Figure 6.14: Visualization of two scalar fields simultaneously. The opacity is defined by enstrophy, and the color is defined by a LIC texture conveying the directional structures of a vorticity field inside the vortices.


More information

The Traditional Graphics Pipeline

The Traditional Graphics Pipeline Last Time? The Traditional Graphics Pipeline Participating Media Measuring BRDFs 3D Digitizing & Scattering BSSRDFs Monte Carlo Simulation Dipole Approximation Today Ray Casting / Tracing Advantages? Ray

More information

Emissive Clip Planes for Volume Rendering Supplement.

Emissive Clip Planes for Volume Rendering Supplement. Emissive Clip Planes for Volume Rendering Supplement. More material than fit on the one page version for the SIGGRAPH 2003 Sketch by Jan Hardenbergh & Yin Wu of TeraRecon, Inc. Left Image: The clipped

More information

Computer Graphics I Lecture 11

Computer Graphics I Lecture 11 15-462 Computer Graphics I Lecture 11 Midterm Review Assignment 3 Movie Midterm Review Midterm Preview February 26, 2002 Frank Pfenning Carnegie Mellon University http://www.cs.cmu.edu/~fp/courses/graphics/

More information

Chapter 8 Visualization and Optimization

Chapter 8 Visualization and Optimization Chapter 8 Visualization and Optimization Recommended reference books: [1] Edited by R. S. Gallagher: Computer Visualization, Graphics Techniques for Scientific and Engineering Analysis by CRC, 1994 [2]

More information

Pipeline Operations. CS 4620 Lecture Steve Marschner. Cornell CS4620 Spring 2018 Lecture 11

Pipeline Operations. CS 4620 Lecture Steve Marschner. Cornell CS4620 Spring 2018 Lecture 11 Pipeline Operations CS 4620 Lecture 11 1 Pipeline you are here APPLICATION COMMAND STREAM 3D transformations; shading VERTEX PROCESSING TRANSFORMED GEOMETRY conversion of primitives to pixels RASTERIZATION

More information

Supplemental Material Deep Fluids: A Generative Network for Parameterized Fluid Simulations

Supplemental Material Deep Fluids: A Generative Network for Parameterized Fluid Simulations Supplemental Material Deep Fluids: A Generative Network for Parameterized Fluid Simulations 1. Extended Results 1.1. 2-D Smoke Plume Additional results for the 2-D smoke plume example are shown in Figures

More information

3D Programming. 3D Programming Concepts. Outline. 3D Concepts. 3D Concepts -- Coordinate Systems. 3D Concepts Displaying 3D Models

3D Programming. 3D Programming Concepts. Outline. 3D Concepts. 3D Concepts -- Coordinate Systems. 3D Concepts Displaying 3D Models 3D Programming Concepts Outline 3D Concepts Displaying 3D Models 3D Programming CS 4390 3D Computer 1 2 3D Concepts 3D Model is a 3D simulation of an object. Coordinate Systems 3D Models 3D Shapes 3D Concepts

More information

lecture 21 volume rendering - blending N layers - OpenGL fog (not on final exam) - transfer functions - rendering level surfaces

lecture 21 volume rendering - blending N layers - OpenGL fog (not on final exam) - transfer functions - rendering level surfaces lecture 21 volume rendering - blending N layers - OpenGL fog (not on final exam) - transfer functions - rendering level surfaces - 3D objects Clouds, fire, smoke, fog, and dust are difficult to model with

More information

COMP environment mapping Mar. 12, r = 2n(n v) v

COMP environment mapping Mar. 12, r = 2n(n v) v Rendering mirror surfaces The next texture mapping method assumes we have a mirror surface, or at least a reflectance function that contains a mirror component. Examples might be a car window or hood,

More information

Visualization, Lecture #2d. Part 3 (of 3)

Visualization, Lecture #2d. Part 3 (of 3) Visualization, Lecture #2d Flow visualization Flow visualization, Part 3 (of 3) Retrospect: Lecture #2c Flow Visualization, Part 2: FlowVis with arrows numerical integration Euler-integration Runge-Kutta-integration

More information

Visualization of Turbulent Flow by Spot Noise

Visualization of Turbulent Flow by Spot Noise Visualization of Turbulent Flow by Spot Noise Willem C. de Leeuw Frits H. Post Remko W. Vaatstra Delft University of Technology, Faculty of Technical Mathematics and Informatics, P.O.Box 356, 2600AJ Delft,

More information

0. Introduction: What is Computer Graphics? 1. Basics of scan conversion (line drawing) 2. Representing 2D curves

0. Introduction: What is Computer Graphics? 1. Basics of scan conversion (line drawing) 2. Representing 2D curves CSC 418/2504: Computer Graphics Course web site (includes course information sheet): http://www.dgp.toronto.edu/~elf Instructor: Eugene Fiume Office: BA 5266 Phone: 416 978 5472 (not a reliable way) Email:

More information

COMP Preliminaries Jan. 6, 2015

COMP Preliminaries Jan. 6, 2015 Lecture 1 Computer graphics, broadly defined, is a set of methods for using computers to create and manipulate images. There are many applications of computer graphics including entertainment (games, cinema,

More information

Topics and things to know about them:

Topics and things to know about them: Practice Final CMSC 427 Distributed Tuesday, December 11, 2007 Review Session, Monday, December 17, 5:00pm, 4424 AV Williams Final: 10:30 AM Wednesday, December 19, 2007 General Guidelines: The final will

More information

Volume Illumination and Segmentation

Volume Illumination and Segmentation Volume Illumination and Segmentation Computer Animation and Visualisation Lecture 13 Institute for Perception, Action & Behaviour School of Informatics Overview Volume illumination Segmentation Volume

More information

CS 563 Advanced Topics in Computer Graphics QSplat. by Matt Maziarz

CS 563 Advanced Topics in Computer Graphics QSplat. by Matt Maziarz CS 563 Advanced Topics in Computer Graphics QSplat by Matt Maziarz Outline Previous work in area Background Overview In-depth look File structure Performance Future Point Rendering To save on setup and

More information

Volume Visualization

Volume Visualization Volume Visualization Part 1 (out of 3) Overview: Volume Visualization Introduction to volume visualization On volume data Surface vs. volume rendering Overview: Techniques Simple methods Slicing, cuberille

More information

Quantifying Three-Dimensional Deformations of Migrating Fibroblasts

Quantifying Three-Dimensional Deformations of Migrating Fibroblasts 45 Chapter 4 Quantifying Three-Dimensional Deformations of Migrating Fibroblasts This chapter presents the full-field displacements and tractions of 3T3 fibroblast cells during migration on polyacrylamide

More information

Let s start with occluding contours (or interior and exterior silhouettes), and look at image-space algorithms. A very simple technique is to render

Let s start with occluding contours (or interior and exterior silhouettes), and look at image-space algorithms. A very simple technique is to render 1 There are two major classes of algorithms for extracting most kinds of lines from 3D meshes. First, there are image-space algorithms that render something (such as a depth map or cosine-shaded model),

More information

Data Visualization (DSC 530/CIS )

Data Visualization (DSC 530/CIS ) Data Visualization (DSC 530/CIS 60-01) Scalar Visualization Dr. David Koop Online JavaScript Resources http://learnjsdata.com/ Good coverage of data wrangling using JavaScript Fields in Visualization Scalar

More information

CS 5630/6630 Scientific Visualization. Volume Rendering I: Overview

CS 5630/6630 Scientific Visualization. Volume Rendering I: Overview CS 5630/6630 Scientific Visualization Volume Rendering I: Overview Motivation Isosurfacing is limited It is binary A hard, distinct boundary is not always appropriate Slice Isosurface Volume Rendering

More information

Simulation vs. measurement vs. modelling 2D vs. surfaces vs. 3D Steady vs time-dependent d t flow Direct vs. indirect flow visualization

Simulation vs. measurement vs. modelling 2D vs. surfaces vs. 3D Steady vs time-dependent d t flow Direct vs. indirect flow visualization Flow Visualization Overview: Flow Visualization (1) Introduction, overview Flow data Simulation vs. measurement vs. modelling 2D vs. surfaces vs. 3D Steady vs time-dependent d t flow Direct vs. indirect

More information