Multi-Resolution Volume Rendering of Large Medical Data Sets on the GPU


LITH-ITN-MT-EX--07/056--SE

Multi-Resolution Volume Rendering of Large Medical Data Sets on the GPU

Ajden Towfeek

Department of Science and Technology, Linköping University, SE Norrköping, Sweden

LITH-ITN-MT-EX--07/056--SE

Multi-Resolution Volume Rendering of Large Medical Data Sets on the GPU

Master's thesis in Media Technology carried out at the Institute of Technology, Linköping University

Ajden Towfeek

Supervisor: Fredrik Häll
Examiner: Anders Ynnerman

Norrköping

Copyright

The publishers will keep this document online on the Internet - or its possible replacement - for a considerable time from the date of publication barring exceptional circumstances. The online availability of the document implies a permanent permission for anyone to read, to download, to print out single copies for your own use and to use it unchanged for any non-commercial research and educational purpose. Subsequent transfers of copyright cannot revoke this permission. All other uses of the document are conditional on the consent of the copyright owner. The publisher has taken technical and administrative measures to assure authenticity, security and accessibility. According to intellectual property law the author has the right to be mentioned when his/her work is accessed as described above and to be protected against infringement.

For additional information about the Linköping University Electronic Press and its procedures for publication and for assurance of document integrity, please refer to its WWW home page.

Ajden Towfeek

Abstract

Volume rendering techniques can be powerful tools when visualizing medical data sets. The ability to capture 3-D internal structures makes the technique attractive. Scanning equipment produces medical images of rapidly increasing resolution, resulting in heavily increased data set sizes. Despite the great amount of processing power CPUs deliver, the required precision in image quality can be hard to obtain in real-time rendering. Therefore, it is highly desirable to optimize the rendering process. Modern GPUs possess much more computational power and are available for general-purpose programming through high-level shading languages. Efficient representations of the data are crucial due to the limited memory provided by the GPU. This thesis describes the theoretical background and the implementation of an approach presented by Patric Ljung, Claes Lundström and Anders Ynnerman at Linköping University. The main objective is to implement a fully working multi-resolution framework with two separate pipelines for pre-processing and real-time rendering, which uses the GPU to visualize large medical data sets.

Acknowledgements

I would like to give special thanks to Fredrik Häll, my supervisor at Sectra Imtec AB, and his colleague Aron Ernvik, for great support and many fruitful discussions that made this thesis what it is. Thanks to Patric Ljung, whose Ph.D. thesis was a big inspiration, and for his helpful hints and discussions. Thanks to my academic supervisor Anders Ynnerman and my opponent Anders Hagvall for giving valuable feedback on the report. Many thanks also to my family and all my friends for their support, just by being there. Finally, a special thanks to my fiancée for all her love and support.

Abbreviations

CPU   Central Processing Unit
GPU   Graphics Processing Unit
CT    Computed Tomography
MRI   Magnetic Resonance Imaging
FPS   Frames Per Second
TF    Transfer Function
LOD   Level of Detail
II    Interblock Interpolation
NB    Nearest Block
SW    Software
PACS  Picture Archiving and Communications System
IDS   Image Display System, a Sectra Workstation

Contents

List of Figures
List of Tables

1 Introduction
  1.1 Problem Description
  1.2 Thesis Objectives
  1.3 Outline of Report
  1.4 Reader Prerequisites

2 Background
  2.1 Medical Imaging
    2.1.1 Modalities
  2.2 Medical Volume Visualization
    2.2.1 Volume Data
    2.2.2 Transfer Functions
    2.2.3 Direct Volume Rendering
      Volume Rendering Integral
      Ray Casting
      Alpha Blending
  2.3 Graphics Processing Unit
    2.3.1 The Graphics Pipeline
    2.3.2 The Programmable Graphics Pipeline
      Vertex Shaders
      Fragment Shaders

3 State-of-the-art Direct Volume Rendering
  3.1 Multi-resolution Volumes
    3.1.1 Flat and Hierarchical Blocking
  3.2 Pre-processing Pipeline
    3.2.1 Volume Subdivision and Blocking
  3.3 Multi-resolution Volume Rendering
    3.3.1 Sampling of Multi-resolution Volumes
      Intrablock Volume Sampling
      Interblock Interpolation

4 Implementation
  4.1 Application Environment
  4.2 Extending The Class Hierarchy
  4.3 Packing The Volume Texture
  4.4 Meta-data
  4.5 Level-of-Detail Selection
  4.6 Rendering Pipeline
  4.7 GPU-based Ray Casting
    Sampling Algorithm
    Interpolation Algorithm

5 Results
  5.1 Data Sets and Test Environment
  5.2 Volume Pre-Processing
  5.3 Nearest Block Sampling
  5.4 Interblock Interpolation

6 Discussion
  6.1 Conclusions
  6.2 Related Work
  6.3 Implemented Methods
  6.4 Future Work

Bibliography

List of Figures

2.1 Illustration of volume data as voxels
2.2 Direct volume rendering with different TF-settings: (a) Bone, (b) Skin, (c) Dense bone, (d) Air
2.3 Raycasting from the viewer through a pixel on the screen
2.4 The graphics pipeline
3.1 Illustration of a flat multi-resolution blocking grid
3.2 Comparison of hierarchical and flat blocking
3.3 Illustration of block neighborhood and boundaries: (a) Sample boundary, (b) Eight block neighborhood
4.1 The class hierarchy extension
4.2 Texture packing and lookup coordinates: (a) Original texture, (b) Packed texture, (c) Scale factors and new coordinates
5.1 Comparison of 512x512x512 data sets with NB sampling: (a) NB Sampling, (b) SW Based, (c) NB Sampling, (d) SW Based
5.2 Comparison of 512x512x512 data sets with II sampling: (a) II Sampling, (b) SW Based, (c) II Sampling, (d) SW Based
5.3 2% texture usage for II and NB, and 100% for SW: (a) NB Sampling, (b) II Sampling, (c) SW Full Resolution
5.4 Comparison of single- and multi-resolution rendering: (a) Single-resolution, (b) Multi-resolution, (c) Full Resolution

List of Tables

5.1 Time performance measured in seconds and reduction ratio
5.2 FPS for II and NB on GeForce 8800 GTX and block count
5.3 FPS for II and NB on ATI X1950 Pro and block count

Chapter 1 Introduction

This chapter introduces the reader to the thesis. The first section gives a short overview of the problems that are addressed, the following section summarizes the main objectives, followed by an outline of the report; finally, recommended prerequisites are given.

1.1 Problem Description

Supporting volume rendering techniques to visualize medical image stacks has become increasingly important for Sectra PACS. Capturing internal structures of blood vessels and the skeleton provides the user with valuable information during diagnosis. Furthermore, the ability to rotate, zoom and change the point of view in 3-D helps end-users other than radiologists to understand the shapes of the organs and their locations.

Scanning equipment produces medical images of rapidly increasing resolution, resulting in heavily increased data set sizes. Despite the great amount of processing power CPUs deliver, the required precision in image quality can be hard to obtain in real-time rendering. Therefore, it is highly desirable to optimize the rendering process. Modern GPUs possess much more computational power and are available for general-purpose programming through high-level shading languages. Efficient representations of the data are crucial due to the limited memory provided by the GPU.

This thesis implements an approach presented by Patric Ljung, Claes Lundström and Anders Ynnerman at Linköping University [1]. The main objective is to implement a fully working multi-resolution framework with two separate pipelines for pre-processing and real-time rendering, which uses the GPU to visualize large medical data sets.

Problems such as how to interpolate between arbitrary resolutions in multi-resolution volumes are active areas of research; this thesis implements an interblock interpolation technique developed by Ljung et al. [1]. Several approaches have been proposed for selecting the LOD; this thesis presents a new and fairly simple method that bases the selection on the TF. However, the main focus of this thesis is to integrate a multi-resolution rendering framework in the system IDS7, developed by Sectra Imtec AB. This thesis extends the existing volume rendering framework to support multi-resolution representations. To make this possible, a pre-processing pipeline must be implemented and modifications need to be made in the real-time rendering loop. Overall, this thesis describes the theoretical background and the implementation of an approach presented by Ljung et al. [1] and its capability to integrate with IDS7.

1.2 Thesis Objectives

Given the problem description in the previous section, the objectives of this thesis are the following:

- Evaluate the existing technique for volume rendering in IDS7.
- Implement a multi-resolution rendering framework [1].
- Integrate and test the methods in IDS7.

1.3 Outline of Report

Chapter 2 gives an introduction to the area of medical imaging in terms of how image data is produced and visualized. Furthermore, a description of volume rendering techniques is given. Chapter 3 presents methods for improving direct volume rendering proposed by Ljung et al. [1], e.g. efficient volume texture packing and multi-resolution volume rendering. The implementation of the methods in chapter 3 is described in chapter 4. Chapter 5 presents the results of the methods proposed by Ljung et al. and their implementations. Finally, chapter 6 states the conclusions drawn from previous work in the area of research. The fulfillment of the presented state-of-the-art methods is also discussed with respect to their objectives and how they can be further developed.

1.4 Reader Prerequisites

To fully understand and appreciate the contents of this thesis, good knowledge of computer graphics and of image processing and analysis is recommended. Furthermore, familiarity with the concept of volume rendering and its use in medical imaging will also help in understanding the contents of this report.

Chapter 2 Background

This chapter introduces the reader to the area of medical imaging, and a brief overview is given of techniques available for visualizing volume data sets. The first section describes how medical image data is produced and the characteristics of the data. Furthermore, tools for medical image volume visualization are presented, i.e. the volume rendering techniques that are the foundation for the objectives of this thesis. Finally, a more thorough presentation of the GPU is given. The last section presents the programmability of modern GPUs and how this can be used for efficient volume rendering and optimization.

2.1 Medical Imaging

Medical imaging is a powerful tool for diagnostics in medicine. The traditional way of producing medical images is to use a flat-panel x-ray detector to visualize the bone structure [2]. The films produced by the equipment are then put in front of a light screen for analysis. Nowadays, this technique is being digitized, and many radiology departments produce digital images and use a PACS instead. Full-body virtual autopsy is, for instance, one field of use [3]. The gain in cost and time efficiency is convincing.

2.1.1 Modalities

The apparatus that produces medical images is called a modality. The primary modality types are Computed Tomography (CT), Magnetic Resonance Imaging (MRI), Ultrasound, PET, Mammography, and Computed and Digital Radiography (CR/DR). The modalities of interest for this thesis are CT and MRI scanners, since they have the ability to produce 3-D medical image data sets [2].

Computed Tomography, also known as CAT (Computed Axial Tomography) scanning, is one of the most commonly used modality types and can produce 3-D data sets. A CT modality uses x-rays and has the characteristic of capturing materials that absorb radiation, such as bone. The images produced by the CT scanner are slices with a normal in the direction of the main axis going through the body. These are produced by letting a tube rotate in a spiral around the patient, emitting x-ray radiation and measuring the amount of attenuation on the opposite side. A slice is produced each time the tube has rotated 360 degrees.

Magnetic Resonance Imaging can also produce 3-D image data sets. The produced images are somewhat noisy, but this technique also has the ability to separate different kinds of soft tissue. The patient being scanned is surrounded by a strong magnetic field. Radio pulses are emitted with a frequency that puts the hydrogen nuclei in the body into a high-energy state. Extra energy is then emitted when the pulse is turned off and the nuclei return to their normal state. When the modality has detected the time it took for the emission to decay, the final image is produced.

2.2 Medical Volume Visualization

The images that the scanners produce need to be visualized. The most straightforward way of doing so is to view each image slice one after another, yielding a volume. Taking it one step further, we can produce a slice with an arbitrary normal direction by interpolating its values from the volume data. This plane can then be moved along its normal, giving the opportunity to interactively explore the content of the data set. The technique is called Multi Planar Reconstruction (MPR); an image can be produced from any plane in the volume data.

2.2.1 Volume Data

A pixel in an image is in some sense the 2-D equivalent of a voxel in volume data.
Discrete volume data sets can be thought of as a three-dimensional array of cubic elements, as shown in figure 2.1. Each small cube represents a voxel and together they form the volume data. It is not correct to assume that the entire voxel has the same value. Instead, we should think of the voxel as a small volume in which we know the density value at an infinitesimal point in its center [2]. The distance

Figure 2.1: Illustration of volume data as voxels.

between voxel centers then becomes the sampling distance. The Nyquist theorem states that when sampling a continuous signal we need to sample at at least twice the highest frequency in the signal [2]. Ideal reconstruction according to theory requires the convolution of the sample points with a sinc function in the spatial domain, but this is computationally too expensive. Instead, when reconstructing a continuous signal from an array of voxels in practice, the sinc filter is usually replaced by either a box filter or a tent filter. Convolution with the box filter gives nearest neighbor interpolation, which results in a rather blocky appearance due to the sharp discontinuities between neighboring voxels. Trilinear interpolation is achieved by convolution with a 3-D tent filter, resulting in a smooth interpolation with a good trade-off between cost and quality.

2.2.2 Transfer Functions

Density values from the scans must be mapped to some form of optical property in order to be visualized. Most often the desirable result is not to let all voxels in the volume contribute to the final image. Instead, a TF maps the value of a voxel to optical properties, such as color and opacity [4]. Interaction is allowed in this step to let the user decide what to visualize. It is unlikely that one would like to see the air surrounding the body, but if the density values for air are mapped to opaque colors they will be visualized. The mapping between data and optical properties is often referred to as classification. Much research has been done in the area of designing TFs and

it is a quite complex task. Therefore, presets are often used, where the user can simply choose what to visualize, e.g. bone, skin or dense bone, instead of mapping the density values by hand; see figure 2.2.

Figure 2.2: Direct volume rendering with different TF-settings: (a) Bone, (b) Skin, (c) Dense bone, (d) Air.
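To make the classification step concrete, the sketch below builds a 1-D TF as a lookup table that maps density to RGBA by piecewise-linear interpolation between control points. The preset name, density break-points and colors are illustrative assumptions, not values taken from the thesis:

```python
def make_tf_lut(control_points, size=256):
    """Build a 1-D transfer function lookup table (a list of RGBA tuples)
    by piecewise-linear interpolation between (density, RGBA) control points."""
    d0, d1 = control_points[0][0], control_points[-1][0]
    lut = []
    for i in range(size):
        d = d0 + (d1 - d0) * i / (size - 1)
        for (da, ca), (db, cb) in zip(control_points, control_points[1:]):
            if da <= d <= db:
                t = 0.0 if db == da else (d - da) / (db - da)
                lut.append(tuple(a + t * (b - a) for a, b in zip(ca, cb)))
                break
    return lut

def classify(density, lut, d_min, d_max):
    """Map a density value to RGBA through the lookup table."""
    idx = round((density - d_min) / (d_max - d_min) * (len(lut) - 1))
    return lut[max(0, min(len(lut) - 1, idx))]

# Hypothetical "bone" preset: air fully transparent, soft tissue faint, bone opaque.
bone_preset = [
    (0,    (0.0, 0.0, 0.0, 0.0)),
    (100,  (0.8, 0.5, 0.4, 0.05)),
    (400,  (1.0, 1.0, 0.9, 0.9)),
    (1000, (1.0, 1.0, 1.0, 1.0)),
]
lut = make_tf_lut(bone_preset)
```

During rendering, each sample would be classified through such a table (in practice stored as a small 1-D texture on the GPU) before compositing.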

2.2.3 Direct Volume Rendering

Direct volume rendering is the process of generating images directly from a volumetric data set, operating directly on the data values of each voxel. Each voxel's density value can be represented with a color, as described in the previous section, yielding a resulting color image. Many methods exist for performing volume rendering. Ray casting is a direct volume rendering method, meaning that rendering is performed by evaluating an optical model. In comparison, surface rendering uses geometric primitives to describe features in the data set. Evaluating an optical model implies assigning optical characteristics to the density values of each voxel in the volume data. This mapping of density values to colors is usually done with TFs and is often referred to as classification, as described in the previous section.

Volume Rendering Integral

When performing the actual rendering, the volume rendering integral needs to be evaluated [2]. The integral numerically approximates rays from a light source to the eye of a viewer. The evaluation considers both emission and absorption along the ray, and the integral is continuous in its definition. In computer graphics, visual correctness is more important than physical correctness; therefore most volume rendering integrals only account for absorption and reflection. Equation (2.1) is known as the low-albedo volume rendering integral [5].

I_\lambda(x, r) = \int_0^L C_\lambda(s)\,\mu(s)\,e^{-\int_0^s \mu(t)\,dt}\,ds    (2.1)

The equation calculates the intensity, I, of a certain wavelength, \lambda, that is received at position x on the image plane from direction r. C represents the material properties that determine how the light is reflected, \mu is the density of the particles and L is the length of the ray.
The discrete version of the integral follows as:

I_\lambda(x, r) = \sum_{i=0}^{L/\Delta s} C_\lambda(s_i)\,\alpha(s_i) \prod_{j=0}^{i-1} (1 - \alpha(s_j))    (2.2)

The material properties, C_\lambda, together with the transparency, \alpha, produce a mapping between the characteristics of the volume data and the visual properties.

Figure 2.3: Raycasting from the viewer through a pixel on the screen.

Ray Casting

There are generally two approaches to evaluating the volume rendering integral. The object-order approach determines how each voxel in the volume contributes to the rendered image. In contrast, image-order rendering determines how much the volume contributes to each pixel in the final image [1]. It is natural to assume that ray tracing starts at the light source, since light sources emit rays of light. However, even if this is theoretically possible, most of the rays never reach the observer, wasting computational power. Instead, rays are cast from the observer into the scene, as shown in figure 2.3. Ray casting is a simplification of ray tracing in the sense that it does not allow for any secondary interactions. For each pixel in the image a ray is shot into the volume, and along this ray samples are taken. A simple optimization is to implement early ray termination, integrating in a front-to-back order. The idea of this algorithm is to terminate the integration if at some point it reaches full opacity, since samples lying behind will not contribute to the final pixel [2].

Alpha Blending

Alpha blending is a technique for blending two semi-transparent colors; e.g. looking through a pair of sunglasses causes colors to look different, because they are blended with the color of the glass [5]. Blending is usually done with the following equation:

C_{out} = C_{src}\,\alpha_{src} + (1 - \alpha_{src})\,C_{dst}    (2.3)

C_{out} is the color that is displayed, C_{src} and \alpha_{src} are the RGB and alpha values of the color being blended, and C_{dst} is the present color before the blending [2]. Alpha blending is useful when solving the volume rendering integral numerically, and introducing indices in equation (2.3) gives:

\hat{C}_i = C_i\,\alpha_i + (1 - \alpha_i)\,\hat{C}_{i+1}    (2.4)

Stepping the equation from n-1 to 0, n being the number of samples, solves the discrete version of the integral numerically. Since this iterates in back-to-front order it can be very inefficient; hence the following equations solve the alpha blending in front-to-back order [2].

\hat{C}_i = \hat{C}_{i-1} + (1 - \hat{\alpha}_{i-1})\,\alpha_i C_i    (2.5)

\hat{\alpha}_i = \hat{\alpha}_{i-1} + (1 - \hat{\alpha}_{i-1})\,\alpha_i    (2.6)

Equations (2.5) and (2.6) can then be evaluated by stepping from i = 1 to the number of samples n. This allows for checking whether early ray termination is possible, by examining whether the current accumulated opacity is high enough to hide anything that lies behind.

2.3 Graphics Processing Unit

The modern GPU is a highly optimized data-parallel streaming processor. The major innovation in recent years is the programmable pipeline that has replaced the traditional fixed-function pipeline. It allows the programmer to upload user-written programs to be executed very efficiently on the GPU. Programs written for the GPU are well suited for implementing object-order and image-order algorithms for direct volume rendering.

2.3.1 The Graphics Pipeline

Rendering is the process of producing an image from a higher-level description of its components. The GPU efficiently handles as many computations as possible; this is done by pipelining. For convenience, the graphics pipeline can be divided into three steps to provide a general overview [2], depicted in figure 2.4.
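Returning to the alpha-blending equations (2.5) and (2.6) above: a minimal CPU-side sketch of front-to-back compositing with early ray termination might look as follows (scalar color for brevity; the sample values and threshold are illustrative):

```python
def composite_front_to_back(samples, alpha_threshold=0.99):
    """Accumulate (color, alpha) samples along a ray in front-to-back
    order, terminating early once the accumulated opacity is high
    enough to hide anything lying behind."""
    acc_c, acc_a = 0.0, 0.0  # accumulated color and opacity
    for c, a in samples:
        acc_c += (1.0 - acc_a) * a * c  # equation (2.5)
        acc_a += (1.0 - acc_a) * a      # equation (2.6)
        if acc_a >= alpha_threshold:    # early ray termination
            break
    return acc_c, acc_a

# Two semi-transparent samples: the second is partly hidden by the first.
color, opacity = composite_front_to_back([(1.0, 0.5), (0.5, 0.5)])
# color = 0.5 + 0.5*0.5*0.5 = 0.625, opacity = 0.5 + 0.5*0.5 = 0.75
```

In the GPU ray caster this loop runs per pixel in the fragment shader; the early-out saves work exactly as described for equation (2.6).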

Figure 2.4: The graphics pipeline.

Geometry Processing computes linear transformations of the input vertices, such as rotation, translation and scaling. Basically, it is the stage at which the higher-level description is processed into geometric primitives such as points, lines, triangles and polygons.

Rasterization decomposes the primitives into fragments, such that each fragment corresponds to a single pixel on the screen.

Fragment Operations are performed subsequently; the per-fragment operations modify the fragments' attributes, such as color and transparency. The resulting raster image contained in the frame buffer can be displayed on the screen or written to a file.

The basic graphics pipeline is often referred to as the fixed-function pipeline, due to the fact that the programmer has very little control over the process [6]. On modern graphics cards it is possible to control the details of this process by using the so-called programmable graphics pipeline, which is reviewed in the next section.

2.3.2 The Programmable Graphics Pipeline

True programmability is the major innovation provided by today's GPUs. Programs can be uploaded to the GPU's memory and executed at the geometry stage (vertex shader) and the rasterization unit (fragment shader). A vertex shader is a program that runs on the GPU; it can change properties such as the position of each vertex. For instance, it can make a plain grid look bumpy by randomly translating the vertices in some direction. A fragment shader works on a per-pixel level; it can change properties such as the color of each pixel. The

terms vertex shader, vertex program, fragment shader and fragment program have the same meanings, respectively [5]. Shader programs are usually written in C-like programming languages, such as Cg or HLSL. Standards are important, since different hardware supports different levels of programmability. A widely known standard developed by Microsoft is the Shader Model [5]. Until recently, the main limitation has been restricted programmability: Shader Model 1.1 does not support loops, only straight-line code. Shader Model 2.x introduces the ability to loop. Conditional branching for dynamic if-else blocks first came with Shader Model 3.0. Shader Model 4.0 introduces geometry shaders for creating vertices on the GPU and the unification of shaders, i.e. no more differences between pixel and vertex shaders.

Vertex Shaders

Objects in a 3-D scene are typically described using triangles, which in turn are defined by their vertices. A vertex shader is a graphics processing function used to add special effects to objects in a 3-D environment by performing mathematical operations on the objects' vertex data. Each vertex can be defined by many different variables. For instance, a vertex is always defined by its location in a 3-D environment using the x-, y- and z-coordinates [5]. Vertices may also be defined by colors, texture mapping coordinates (e.g. each pixel in an RGB texture can represent a position by defining the domain coordinates) and lighting characteristics. Vertex shaders do not change the type of data; instead they change its values, so that a vertex emerges with a different color, different textures or a different position in space.

Fragment Shaders

Fragment shaders create ambiance with materials and surfaces that mimic reality. Material effects replace artificial-looking surfaces with organic ones. By altering the lighting and surface effects, artists are able to manipulate colors, textures or shapes and to generate complex, realistic scenes.
A fragment shader is a graphics function that calculates effects on a per-pixel basis. Depending on the resolution, over 2 million pixels may need to be rendered, lit, shaded and colored for each frame, at 30 frames per second [5]. That in turn creates a tremendous computational load. Moreover, applying multiple textures in one pass almost always yields better performance than performing multiple passes. Multiple passes translate into multiple geometry transformations and multiple Z-buffer calculations, slowing down the overall rendering process [2].

Chapter 3 State-of-the-art Direct Volume Rendering

This chapter presents methods proposed by Ljung et al. [1] for efficient direct volume rendering of large data sets on the GPU. Due to the limited memory capacity of GPUs, techniques for efficient rendering of volume data sets are required. A theoretical background of several methods that optimize the volume for rendering is described, and a more detailed step-by-step algorithm overview is presented in the implementation chapter. The first section clarifies why multi-resolution volumes are desirable and how storage space can be saved. Furthermore, the fundamentals of flat and hierarchical blocking are presented. The following section focuses on the actual rendering rather than the structure of the data. A general introduction to volume rendering of multi-resolution volumes is given, going on to issues that can occur when sampling. Two different sampling methods are presented in the subsections, NB and II sampling. These methods are essential for the reader to understand, since the fundamentals of this thesis rely upon them. The content of this chapter is based on the work done by Ljung et al. [1] at NVIS (Norrköping Visualization and Interaction Studio), Linköping University.

3.1 Multi-resolution Volumes

The traditional way of storing medical data sets is to align each slice along the z-direction and directly store it in a volume texture. This approach will store data that might be invisible given a specific TF. For instance, it is not necessary to store density values that map to air. By dividing the volume data into several smaller axis-aligned blocks and classifying which ones are visible given the TF, we can avoid wasting storage space on invisible data.

It would also be highly desirable to store regions at different resolutions depending on their importance. Homogeneous regions could then be stored at a lower resolution without reducing image quality, while areas with great variation and high importance would be stored at full resolution. Simply skipping invisible blocks might not reduce the volume size sufficiently; the footprint of the remaining visible blocks may still exceed the available texture memory. The memory capacity of the GPU is strictly limited, which is one of the main causes of the need for efficient usage of the available texture memory. Several methods have been proposed for efficiently packing the volume texture, and this thesis implements an approach similar to Kraus et al. [7]. Fundamental aspects of flat and hierarchical multi-resolution blocking are presented in the following sections, and the differences between the schemes are described at some length.

3.1.1 Flat and Hierarchical Blocking

The concepts presented in this thesis are based on flat multi-resolution blocking; therefore an overview of that blocking scheme is given here. Samples and block data are located at the centers of the uniform grid cells instead of at the cell vertices. Figure 3.1 shows the placement of samples in red, the grid in black, and the blue grid indicates the block grid. Furthermore, each block's multi-resolution representation is created individually, either from a wavelet transform or through average downsampling; the latter approach is used in this thesis. Since no global hierarchy is created, this scheme is referred to as flat multi-resolution blocking, or flat blocking. The spatial extent of a block is constant and does not grow with reduced resolution level.
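A sketch of the average-downsampling step, under the assumption that a block is a cubic array of scalar densities stored as nested lists (the tiny example blocks are illustrative):

```python
def downsample_block(block):
    """Average-downsample a cubic block of side 2n to side n by
    collapsing each 2x2x2 voxel neighborhood into its mean value."""
    n = len(block) // 2
    out = [[[0.0] * n for _ in range(n)] for _ in range(n)]
    for z in range(n):
        for y in range(n):
            for x in range(n):
                s = sum(block[2 * z + dz][2 * y + dy][2 * x + dx]
                        for dz in (0, 1) for dy in (0, 1) for dx in (0, 1))
                out[z][y][x] = s / 8.0
    return out

def build_levels(block):
    """Build the block's resolution levels, from full resolution down to
    a single averaged voxel; each level is created independently of any
    neighboring block (flat blocking has no global hierarchy)."""
    levels = [block]
    while len(levels[-1]) > 1:
        levels.append(downsample_block(levels[-1]))
    return levels
```

For a 16^3 block this yields levels of side 16, 8, 4, 2 and 1; the LOD selection then picks one level per block, independently of its neighbors.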
The key advantages of this approach are that uniform addressing schemes is supported and arbitrary resolution differences between neighboring blocks can also be supported, since a block is independent of its neighbors and the resolution is not restricted to be in powers of two. Flat blocking also provides higher memory efficiency than a corresponding hierarchical scheme, but a disadvantage is that the number of blocks is constant, while hierarchical schemes scale with reduced memory budget. On the other hand, it is trivial to exploit parallelism in many processing task for flat blocking, since there are no hierarchical dependencies. Figure 3.2 compares hierarchical and flat multi-resolution blocking, blue squares are blocks with constant spatial position and size. The resolution of each block is arbitrary and independent of neighbouring blocks. The LOD is selected so that a block on the interior should have the second lowest

26 State-of-the-art Direct Volume Rendering 15 Figure 3.1: Illustration of a flat multi-resolution blocking grid. resolution, level 1, while blocks that intersect the boundary of the embedded object must have full resolution. Blocks on the exterior are to be ignored, level 0. The LOD selection is indicated with level-specific coloring. The most common scheme is however the hierarchical scheme. It is created by recursive downsampling of the original volume, resulting in that each lower resolution level is 2 3 the size of the previous level. The block size is usually kept equal at each resolution level. Blocking is suitable for hierarchical representations as well, yielding the ability to select different resolution levels in different parts of the volume. 3.2 Pre-processing Pipeline The purpose of the pre-processing pipeline is to reconstruct the data into a more efficient representation for rendering. The first section describes how reorganizing data into blocks can improve the performance for data access in the memory. The algorithm for creating the packed volume texture is outlined in the implementation chapter. Blocks with varying resolution will allocate varying amount of memory and these need to be packed tightly in order to make efficient use of the memory. Furthermore, the creation of metadata is presented. Meta-data may hold information about the location and resolution of blocks in the volume texture and also minimum and maximum block values, thus serving as accelerating structures in the rendering pipeline Volume Subdivision and Blocking The block size is usually derived from the size of the CPU s level 1 and 2 caches. Numerous publications indicate that the optimal size for many block related processing tasks is blocking by 16 or 32 voxels [8, 9]. The

Figure 3.2: Comparison of hierarchical and flat blocking.

The addressing of blocks and the sampling within blocks are straightforward in the 2-D case and slightly more complicated in 3-D; this problem is addressed in the implementation chapter. Blocking introduces an additional level of complexity in the handling of block boundaries, especially when a sample request into a neighboring block must be ignored. When volume data is passed to the texture manager for the first time, the volume texture is created. The next step is to create downsampled versions of the data set that correspond to the LODs. In this implementation the volume data is subdivided into uniform blocks of 16^3 voxels, and each block is classified as either visible or invisible given a preset TF. The spatial dimensions of each block are constant and independent of the resolution level at which the block is selected. Data sets are stored block-wise at all resolution levels, in powers of two, resulting in a 14% data increase. The creation of meta-data is parallelized with the LOD creation. If a block is classified as visible, a LOD is set based on similarities between the histograms of the block and the TF. Finally, the block is added to the volume texture buffer. The meta-data, i.e. the positions and resolutions of the bricks, is held in a buffer, which is used in a later step when writing it to a texture. The complete algorithm is presented in the implementation chapter. The downsampling is done by averaging an eight-voxel (2x2x2) neighborhood to a single

voxel, so that each downsampled level is 2^-3 the size of the previous level. The concept is to use the memory efficiently by packing the volume texture tightly, making it possible to store a large data set in a single volume texture. Skipping voxels that only contain air might not reduce the memory footprint sufficiently, which creates a requirement for the ability to store different regions at separate resolutions, independent of neighboring blocks.

3.3 Multi-resolution Volume Rendering

Special care needs to be taken when rendering volumes that have blocks of varying resolution. Approaches like octree-based hierarchies have been presented, where each brick is rendered separately using texture slicing. Performance can be improved by rendering lower resolution blocks with a lower sampling density. The opacity, α, must be modified when the sampling density along a ray is changed, so that the final result is invariant to the sampling density in a homogeneous material. This can generally be expressed as:

    α_adj = 1 - (1 - α_orig)^(Δ_adj / Δ_orig)    (3.1)

where Δ_adj and Δ_orig specify the adjusted and original sampling distances, respectively [1]. Artifacts may occur at block boundaries when rendering each block separately. Primarily, these artifacts occur due to discontinuities when neighboring blocks are of different resolution levels. A general block-based rendering scheme, Adaptive Texture Maps, is presented by Kraus and Ertl [7]. The fundamental idea of their approach is that an index texture redirects the sampling to a texture holding the packed blocks. This technique supports rendering of the whole volume instead of block-wise rendering, and it is also the approach implemented in this thesis.

Sampling of Multi-resolution Volumes

A uniform addressing scheme is provided by the flat multi-resolution blocking structure. The volume range is defined by the number of blocks along each dimension.
The block index can then be retrieved as the integer part of the position p within the volume. The intrablock local coordinate is then defined by the fractional part of the position, p' = frac(p). The block index map holds the size of each block and the location q of the block in the packed volume. These are then used for computing the coordinate of the sample to take.
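The addressing just described can be sketched in Python (illustrative only; the thesis implementation is C#/HLSL). Positions are expressed in block units, so the volume spans [0, num_blocks) along each axis:

```python
import math

def block_address(p, num_blocks):
    """Split a volume-space position p into a block index (integer part)
    and an intrablock local coordinate (fractional part, frac(p))."""
    # Clamp so a position exactly on the upper face maps to the last block.
    index = tuple(min(int(math.floor(c)), n - 1) for c, n in zip(p, num_blocks))
    local = tuple(c - math.floor(c) for c in p)  # frac(p)
    return index, local

idx, loc = block_address((2.25, 0.5, 7.75), (8, 8, 8))
# idx == (2, 0, 7), loc == (0.25, 0.5, 0.75)
```

The block index would then be used to fetch the block's size and packed-volume location q from the index map before the final sample coordinate is computed.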

Figure 3.3: Illustration of block neighborhood and boundaries: (a) sample boundary, (b) eight block neighborhood.

Special care has to be taken when sampling in a block, since blocks that are adjacent in the packed volume are rarely neighbors in the spatial domain.

Intrablock Volume Sampling

This approach restricts sampling to only access data completely within the current block. The inset from the block's spatial boundaries is denoted δ_i for block i. The restricted sample location, p_C, is defined as:

    p_C = C_δ^(1-δ)(p')    (3.2)

where C_α^β(x) clamps the value of x to the interval [α, β]. Figure 3.3a shows the valid coordinate domain for intrablock samples as squares with red, dashed lines. The sample boundary is defined as the smallest box spanning all samples: for a given block of resolution level l, it is a box inset δ(l) from all edges of the block boundary, equation (3.3):

    δ(l) = 2^(l-1)    (3.3)

that is, half a voxel at resolution level l, expressed in full-resolution voxel units. This sampling method is referred to as Nearest Block (NB) sampling by Ljung et al. [10], meaning that no interpolation between blocks is performed, which results in artifacts at block boundaries.

Interblock Interpolation

Block artifacts can be overcome using the interblock interpolation technique developed by Ljung et al. [11]. Other approaches suggest sample replication

and padding between blocks. Sample replication counteracts the data reduction and may also distort the samples when a block has higher resolution than its neighbors. Interblock interpolation is a scheme for direct interpolation between blocks of arbitrary resolution, and it removes the need for sample replication. The idea is to take samples from each of the closest neighboring blocks using NB sampling and compute a sample value as a normalized weighted sum. The domain for interblock interpolation is indicated by the shaded area between block centers in figure 3.3a. A block neighborhood is illustrated in figure 3.3b. The local coordinates (r, s, t) of the centers of blocks 1 and 8 are (-0.5, -0.5, -0.5) and (0.5, 0.5, 0.5), respectively. A sample, φ_b, is taken from each of the blocks using (r, s, t) as the intrablock coordinates, adjusted with unit offsets specific to each block's location relative to the local eight block neighborhood. Using the labeling in figure 3.3b, three edge sets E_r, E_s, E_t are introduced for edges of equal orientation:

    E_r = {(1, 2), (3, 4), (5, 6), (7, 8)}
    E_s = {(1, 3), (2, 4), (5, 7), (6, 8)}
    E_t = {(1, 5), (2, 6), (3, 7), (4, 8)}    (3.4)

For each edge in the neighborhood, the edge weights e_i,j in [0, 1] are computed, and they determine the block weights, ω_b. The sample value is computed as a normalized sum of all block samples according to equation (3.5):

    φ = (Σ_{b=1}^{8} ω_b φ_b) / (Σ_{b=1}^{8} ω_b)    (3.5)

where φ_b is an NB sample from block b and the block weights, ω_b, are defined as:

    ω_1 = (1 - e_1,2) (1 - e_1,3) (1 - e_1,5)
    ω_2 = e_1,2 (1 - e_2,4) (1 - e_2,6)
    ω_3 = (1 - e_3,4) e_1,3 (1 - e_3,7)
    ω_4 = e_3,4 e_2,4 (1 - e_4,8)
    ω_5 = (1 - e_5,6) (1 - e_5,7) e_1,5
    ω_6 = e_5,6 (1 - e_6,8) e_2,6
    ω_7 = (1 - e_7,8) e_5,7 e_3,7
    ω_8 = e_7,8 e_6,8 e_4,8    (3.6)
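The NB clamping of equation (3.2) and the weighting of equations (3.4)-(3.6) can be sketched as follows (Python for illustration; the thesis implementation is C#/HLSL, and the edge-weight values here are plain inputs):

```python
def clamp(x, lo, hi):
    """C_lo^hi(x): clamp x to the interval [lo, hi]."""
    return max(lo, min(hi, x))

def nb_sample_pos(p_local, delta):
    """Eq. (3.2): restrict a local sample position to the block's
    sample boundary, inset delta from every face."""
    return tuple(clamp(c, delta, 1.0 - delta) for c in p_local)

# Edge sets of eq. (3.4), one set per axis of the eight block neighborhood.
E_r = [(1, 2), (3, 4), (5, 6), (7, 8)]
E_s = [(1, 3), (2, 4), (5, 7), (6, 8)]
E_t = [(1, 5), (2, 6), (3, 7), (4, 8)]

def block_weights(e):
    """Eq. (3.6): the eight block weights from the twelve edge weights,
    e[(i, j)] in [0, 1] for each edge in E_r, E_s, E_t."""
    return [
        (1 - e[1, 2]) * (1 - e[1, 3]) * (1 - e[1, 5]),  # w1
        e[1, 2] * (1 - e[2, 4]) * (1 - e[2, 6]),        # w2
        (1 - e[3, 4]) * e[1, 3] * (1 - e[3, 7]),        # w3
        e[3, 4] * e[2, 4] * (1 - e[4, 8]),              # w4
        (1 - e[5, 6]) * (1 - e[5, 7]) * e[1, 5],        # w5
        e[5, 6] * (1 - e[6, 8]) * e[2, 6],              # w6
        (1 - e[7, 8]) * e[5, 7] * e[3, 7],              # w7
        e[7, 8] * e[6, 8] * e[4, 8],                    # w8
    ]

def interblock_sample(phi, w):
    """Eq. (3.5): normalized weighted sum of the eight NB samples."""
    return sum(wb * pb for wb, pb in zip(w, phi)) / sum(w)

# Midway between equal-resolution blocks every edge weight is 0.5, all
# block weights become 1/8, and the result is the plain average:
e_mid = {edge: 0.5 for edge in E_r + E_s + E_t}
w = block_weights(e_mid)
```

Setting an edge weight to 0 or 1 instead removes the blocks on one side of that edge from the sum, which is how the scheme degenerates gracefully toward a single block near a sample boundary.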

The contributions of the II sampling technique can be summarized as:

- Provision of high quality rendering without discontinuities caused by blocking.

- Permitting high LOD adaptivity through smooth interpolation between arbitrary resolutions.

- Maintaining the data reduction rate by avoiding data replication.

- Supporting highly parallel LOD pre-processing, since the method does not impose any interblock dependencies.

Chapter 4
Implementation

This chapter presents the practical methods that were implemented to improve direct volume rendering. For the reader to fully understand the context of this chapter, the theoretical description presented in the previous chapter needs to be understood. The first section presents the working environment and the extent of the existing class hierarchy in the application IDS7. Next, the pre-processing pipeline is outlined, covering how the volume texture structure is built by subdividing the volume data. Furthermore, it is described how meta-data is used to retrieve the sample position. Finally, the interblock interpolation technique is described, which helps to avoid sampling artifacts at block boundaries.

4.1 Application Environment

This thesis is implemented in the PACS system IDS7, developed by Sectra-Imtec AB. The language of choice was naturally C#, since the rest of the application is implemented in it, with DirectX [12] for graphics programming. The High Level Shading Language (HLSL) was used for programs written for execution on the GPU. The development environment was Microsoft Visual Studio 2005 with .NET.

Extending the Class Hierarchy

The current volume rendering technique in IDS7 uses ray casting for direct volume rendering, and this thesis extends the existing framework to support multi-resolution representations. The essential changes lie in how we pre-process the volume and the texturing. The extension of the class hierarchy is outlined in figure 4.1. The volume render object is the base for

Figure 4.1: The class hierarchy extension.

all volume rendering techniques; we extend this object to a multi-resolution object that handles pre-processing of the volume and implements the texture manager. The ray casting class performs the actual rendering.

Packing the Volume Texture

Introducing a block map structure allows for arbitrary placement of blocks and packing in memory. Blocks that are invisible given the applied TF are ignored, which saves memory. The generation of the packed volume texture is explained for two dimensions, since this is likely to be more comprehensible than a discussion of the general case; the example presented here is for illustrational purposes only. The generation of the volume texture from its initial state is shown in figure 4.2a, and the algorithm can be described in the following steps:

1. Build a hierarchy of downsampled versions of the original grid, letting the i-th level be of size 2^-i N_s x 2^-i N_t vertices, where N_s, N_t denote the dimensions of the index data.

2. Decompose the original grid, i.e. the 0-th level of the hierarchy, into n_s x n_t cells of size b_s x b_t, letting b_s and b_t be the maximum block size.

3. For each cell in step 2, determine if the data values of the cell are empty. In that case, mark the cell as empty; otherwise determine a scale factor m = 2^-i and copy a corresponding data block of size (m(b_s - 1) + 1) x (m(b_t - 1) + 1) from the data of the i-th level of the grid hierarchy.

4. Build a buffer of the data blocks created in step 3 and append an empty data block, referenced by every empty cell.

5. Pack all data blocks, i.e. the data buffer, into a grid of size n_s x n_t, which represents the packed data of the volume texture.

The result of the algorithm can be compared to figure 4.2b. So far we have established the transformation from a regular texture with a lot of empty space, allocating unnecessary memory, to a tightly packed texture. Although the cells of the coarse grid are of uniform size, the packed data blocks are of different sizes, based on their resolution: blocks of full resolution correspond to large blocks in the packed data. The corresponding index data in figure 4.2c is further discussed in the following section.

Meta-data

Based on the cells' references to data blocks established in the previous section, the scale factors and the positions of the packed data blocks are represented in the index map in figure 4.2c. The upper level is a coarse uniform grid covering the domain of the texture map. For each cell, the data consists of a reference to the texture data of the cell and a scaling factor specifying its resolution relative to the maximum resolution. This data is essential for retrieving the sample position in the rendering pipeline, which is presented in the following section. Gradients for shading are also precomputed as meta-data, since on-the-fly gradient computation with several texture lookups can be very expensive, even for fragment programs executed on the GPU.

Level-of-Detail Selection

The developed LOD selection technique does not implement any error measurement to minimize visual incorrectness. A simple selection is implemented that distributes blocks of different resolutions over the volume, only to avoid sharp boundaries in the image. More complex classifications based on histograms and TF design are proposed by Ljung et al. [13, 14, 15]; however, the purpose of this thesis is not to address the LOD selection problem, but to create an efficient representation of the volume data.
Therefore, no emphasis is put on this in the results and discussion chapters. Classification of block relevance is based on a histogram comparison with the TF. Binary quantized histograms with 1024 bins are stored for each block and for the TF separately. This simplification only tells us whether data is present in an interval or not; the information is thus binary. The final LOD selection is based on how many bins a block has in common with the TF. We represent a block that has many bins in common with the TF at a high resolution

Figure 4.2: Texture packing and lookup coordinates: (a) original texture, (b) packed texture, (c) scale factors and new coordinates.

and vice versa. The assumption of basing the relevance of a block on the number of bins it has in common with the TF is in many cases incorrect. Since we only have binary data, we have no information about the magnitude of a bin, and hence we will under-represent blocks that contain thin peaks that perhaps only span a few bins. On the other hand, this classification is very fast and the meta-data is kept at a minimum.

4.3 Rendering Pipeline

In this final stage, the volume data is rendered to a projection for presentation on the user's screen. The principle has already been presented: rays are generated from the viewpoint, through a pixel in the image and through the volume, as defined by the integral in equation (2.1). Several approaches exist, with varying trade-offs between performance and render quality. The following sections present the implementation of the NB sampling and II proposed by Ljung et al., which has been integrated in IDS7.

GPU-based Ray Casting

Modern GPUs can be programmed using shader programs. This programmability allows for the use of more complex sampling and shading operations in the volume rendering, and a more direct implementation of the ray integral can be made [16]. The pixel shader that performs volume rendering uses conditional breaks, requiring Shader Model 3.0, which also simplifies early ray termination. Using the linear interpolation between samples that practically comes for free on the GPU, and without regard to II, the ray casting algorithm can be expressed as follows:

    for (int i = 0; i < 255 && !terminated; i++) {
        for (int j = 0; j < 255 && !terminated; j++) {
            // Compute the sample position along the ray
            // Look up the new sample position in the index texture
            // Classify the sample as visible/invisible
            // Compute gradients, on-the-fly or by lookup
            // Composite front to back
            // Advance the position along the ray
            // Break conditionally, if the opacity is high enough
            // or the position is beyond the exit point
        }
    }
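The compositing and termination steps inside the loop, together with the opacity correction of equation (3.1), can be illustrated with a CPU-side Python sketch (single-channel color for brevity; the actual implementation is an HLSL pixel shader):

```python
def adjust_opacity(alpha_orig, d_orig, d_adj):
    """Opacity correction of eq. (3.1), applied when the sampling
    distance along the ray changes from d_orig to d_adj."""
    return 1.0 - (1.0 - alpha_orig) ** (d_adj / d_orig)

def raycast_front_to_back(samples, opacity_threshold=0.95):
    """Front-to-back compositing with early ray termination, mirroring
    the shader loop above. `samples` yields (color, alpha) pairs along
    the ray, front first."""
    acc_color, acc_alpha = 0.0, 0.0
    for color, alpha in samples:
        # The new sample is attenuated by the opacity already accumulated.
        acc_color += (1.0 - acc_alpha) * alpha * color
        acc_alpha += (1.0 - acc_alpha) * alpha
        if acc_alpha >= opacity_threshold:
            break  # early ray termination: the remaining ray is occluded
    return acc_color, acc_alpha

# Compositing two samples taken at half the original distance reproduces
# one sample at the original distance, as eq. (3.1) guarantees:
a_half = adjust_opacity(0.4, 1.0, 0.5)
_, alpha_two_steps = raycast_front_to_back([(1.0, a_half), (1.0, a_half)], 1.0)
# alpha_two_steps is (up to rounding) the original opacity 0.4
```

The `opacity_threshold` plays the role of the conditional break in the shader: once the accumulated opacity is high enough, further samples cannot visibly contribute.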

The main loop consists of two nested loops, due to the loop limit of 255 iterations in Shader Model 3, but also to be able to break the looping conditionally when the conditions for termination are fulfilled. A lookup is needed to find out where in the packed volume texture the corresponding sample is stored. How this position is defined exactly is presented in the following section. Also within the loop, the contribution of the current sample is calculated and added to the earlier samples using front-to-back compositing.

Sampling Algorithm

A texture lookup in a packed texture is performed in several steps (see figure 4.2), these are:

1. Determine which cell in the index data includes the sample point (s, t).

2. Compute the coordinates (s_0, t_0) corresponding to the cell origin.

3. Look up the index data for the cell, i.e. the scale factor m and the origin (s'_0, t'_0) of the data block in the packed data.

4. Compute the coordinates (s', t') in the packed data corresponding to (s, t) in the index data.

5. Finally, look up the actual texture data at (s', t') in the packed data.

The origin in texture coordinates (s_0, t_0) of the cell including the point (s, t) may be computed by:

    s_0 = ⌊s n_s⌋ / n_s  and  t_0 = ⌊t n_t⌋ / n_t    (4.1)

where the floor function ⌊x⌋ gives the largest integer less than or equal to x. The scale factor m and the origin (s'_0, t'_0) of the corresponding packed data block are given as functions of (s_0, t_0), and thus the texture coordinates (s', t') in the packed data may be computed by:

    s' = s'_0 + (s - s_0) m    (4.2)
    t' = t'_0 + (t - t_0) m    (4.3)

Finally, the computation of the local block texture coordinates u, v, w is shown for u in equation (4.4), ensuring that samples are not taken outside the boundary.

    u = C_δ^(1-δ)(u')    (4.4)

where C_α^β(γ) clamps the value of γ to the interval [α, β].

Interpolation Algorithm

The task of interblock interpolation is to retrieve a sample value for a position between the sample boundaries of neighboring blocks. The overall structure for doing so can be summarized in the following steps:

1. Determine the eight block neighborhood and set up a local coordinate system.

2. Take samples from each block using the intrablock sampling method described in the previous section.

3. Compute edge weights between side-facing neighbors.

4. Compute each block weight from three edge weights.

5. Compute a normalized weighted sum, yielding the final sample value.

This thesis implements the maximum distance approach of the described interblock interpolation technique presented by Ljung et al. [11]. Other variations are minimum distance and boundary split, where the differences lie in how the edge weights are computed. The maximum distance method avoids discontinuities in the derivative of the interpolated weight within the interval. The edge weight can generally be expressed as:

    e_i,j(ρ) = C_0^1((ρ + δ_i) / (δ_i + δ_j))    (4.5)

where the value is interpolated over the whole distance between the neighboring sample boundaries. When all blocks have the same resolution, this interpolation is equivalent to trilinear interpolation: letting δ_i = δ_j = δ, equation (4.5) reduces to:

    e_i,j(ρ) = C_0^1(0.5 + ρ / (2δ))    (4.6)

which is a linear interpolation kernel, so that artifacts will not occur when sampling within the block boundaries. Finally, the sample value is computed as a normalized sum of all block samples according to equation (3.5).
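Two of the computations above can be sketched in Python (2-D lookup for clarity; a name such as `index_map` is illustrative and not taken from the IDS7 code):

```python
import math

def packed_lookup_coords(s, t, n_s, n_t, index_map):
    """Steps 1-4 of the sampling algorithm, eqs. (4.1)-(4.3).
    index_map maps a cell origin (s0, t0) to (m, s0p, t0p): the scale
    factor and the block origin in the packed texture (assumed layout)."""
    s0 = math.floor(s * n_s) / n_s                 # eq. (4.1)
    t0 = math.floor(t * n_t) / n_t
    m, s0p, t0p = index_map[(s0, t0)]
    return s0p + (s - s0) * m, t0p + (t - t0) * m  # eqs. (4.2), (4.3)

def edge_weight(rho, delta_i, delta_j):
    """Maximum-distance edge weight, eq. (4.5): interpolate over the whole
    distance between the sample boundaries of neighboring blocks i and j."""
    return max(0.0, min(1.0, (rho + delta_i) / (delta_i + delta_j)))

# With equal insets, eq. (4.5) reduces to the linear kernel of eq. (4.6):
# weight 0 at one sample boundary, 0.5 midway, 1 at the other boundary.
```

This makes the trilinear-degeneration property concrete: for delta_i == delta_j the weight ramps linearly across the gap, so equal-resolution neighbors are blended exactly as ordinary trilinear interpolation would.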

Chapter 5
Results

This chapter presents the achieved results of the project, mainly in terms of frame rates, but also from a time perspective regarding how long it takes to pre-process the volume. The first section describes the circumstances under which the thesis was implemented. Furthermore, the results from each step of the system pipeline are presented in separate sections.

5.1 Data Sets and Test Environment

This thesis was implemented and tested in a 3-D module that partially constitutes the PACS software IDS7. By working in this stand-alone module the development process became more efficient, since compiling the entire solution could be avoided. At the same time, it could be asserted that the implemented methods worked in the entire solution as well, since the module is used as-is in IDS7. The test data consists of medical volumes captured with modalities in real-world examination cases, presented by courtesy of Sectra-Imtec AB and CMIV (Center for Medical Image Science and Visualization). Two major test cases have been used. The first examination is a CT-scanned data set of the upper body from a trauma case. The primary objective for this data set is to examine how much storage space can be saved by avoiding storing blocks that are invisible given the applied TF. This data set is well suited for the purpose, since it contains a lot of air. The other data set is a head scan from an accident, also produced by a CT scanner. The objective of this examination is to study how block resolutions need to be distributed to fit the data set into the GPU's texture memory.

Dataset     Size     Read volume   Meta-data   Pack   Total   Ratio
Trauma      512^3    6.7           14.9        4.9    26.5    31:1
Accident    512^3    8.6           18.9        5.8    33.3    29:1
Trauma      256^3    0.8           1.9         0.6    3.3     9:1
Accident    256^3    1.0           2.1         0.8    3.9     4:1

Table 5.1: Time performance measured in seconds, and the reduction ratio.

5.2 Volume Pre-Processing

The major disadvantage of multi-resolution representations is the processing time needed to create the meta-data, and the memory the meta-data allocates. The preparation for rendering basically consists of three steps: reading the volume, generating meta-data and packing the volume texture. Fortunately, the most time-expensive process, generating the meta-data, only needs to be done once for each volume; the second time it can be read from file. For instance, central difference gradients for shading are stored as meta-data and need to be computed for all LODs, and downsampled versions of the raw volume data are stored as well. To avoid recreating meta-data that can be read from file, a fingerprint is calculated for each volume: when reading a volume we first calculate the fingerprint and match it against the stored fingerprints, and if the volume has been processed before we can skip the meta-data generation step and load the meta-data directly from file. Table 5.1 shows the time performance. Doubling each dimension, i.e. going from 256^3 to 512^3, gives 2^3 times as much data, which means 2^3 times more blocks to process. This calls for efficient algorithms; suggestions are discussed in the next chapter.
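The fingerprint-based caching of meta-data can be sketched as follows (Python; the hash choice, file layout and function names are assumptions for illustration, not taken from the IDS7 implementation):

```python
import hashlib
import os
import pickle

def volume_fingerprint(raw_bytes):
    """Fingerprint of the raw volume data, used as the cache key
    (SHA-1 is an assumed choice of hash)."""
    return hashlib.sha1(raw_bytes).hexdigest()

def load_or_build_metadata(raw_bytes, cache_dir, build):
    """Skip the expensive meta-data generation step when the volume
    has been processed before; otherwise build and persist it."""
    fp = volume_fingerprint(raw_bytes)
    path = os.path.join(cache_dir, fp + ".meta")
    if os.path.exists(path):           # fingerprint matched: load from file
        with open(path, "rb") as f:
            return pickle.load(f)
    meta = build(raw_bytes)            # first encounter: generate meta-data
    with open(path, "wb") as f:        # (gradients, LODs, histograms, ...)
        pickle.dump(meta, f)
    return meta
```

The second load of the same volume then costs only a hash over the raw data plus a file read, which matches the observation that meta-data generation dominates table 5.1 only on the first run.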

Dataset     Size     B_0   B_1   B_2   B_3   NB    II
Trauma      512^3                            .1    7.1
Accident    512^3                            .5    6.8
Trauma      256^3                            .2    8.9
Accident    256^3                            .2    7.3

Table 5.2: FPS for II and NB on a GeForce 8800 GTX, and the block count per resolution level.

Dataset     Size     B_0   B_1   B_2   B_3   NB    II
Trauma      512^3                            .2    8.1
Accident    512^3                            .1    6.7
Trauma      256^3                            .7    8.8
Accident    256^3                            .4    7.2

Table 5.3: FPS for II and NB on an ATI X1950 Pro, and the block count per resolution level.

5.3 Nearest Block Sampling

The quality of the NB sampling technique is shown in figure 5.1. Performance varies somewhat between the NVIDIA and ATI GPUs, as shown in tables 5.2 and 5.3, respectively. The performance is measured in a viewport of pixels and at full zoom, meaning that the volume projection populates the entire viewport. A factor worth considering is that the X1950 Pro model from ATI uses the AGP slot, while NVIDIA's GeForce 8800 GTX uses the PCI-Express slot. Furthermore, level 0 corresponds to the highest resolution, with blocks of 16^3 voxels, and level 3 is the lowest resolution, with each block being of size 2^3 voxels. This means that it is possible to store 16 blocks in each of the x-, y- and z-directions at the highest resolution, and correspondingly 128 blocks at the lowest resolution, assuming that we have a volume texture of 256^3 voxels at our disposal. The image quality is highly dependent on how the LOD is selected for each block. The artifacts that occur when clamping at boundaries become more visible if the LOD is selected poorly. Therefore, it is desirable that as many blocks as possible are selected at the highest resolution.

Figure 5.1: Comparison of 512x512x512 data sets with NB sampling: (a), (c) NB sampling; (b), (d) SW based.

5.4 Interblock Interpolation

The quality of the II sampling technique is shown in figure 5.2. Comparing the results from II with the software-based version, the wood-grain artifacts appear to be fewer, almost non-existent. It is worth considering whether the software-based ray casting and the NB technique can be fully eliminated, since it seems possible to produce full quality in real time on the GPU. However, when navigating close up to the volume, the amount of computation increases rapidly and the rendering time suffers. Due to the many texture lookups and the longer fragment programs, the frame rate drops when using II. Although the empty space skipping saves a lot of unnecessary computation, the algorithm is still too inefficient to fully replace NB sampling during interaction, but the software-based version is fully replaceable. It is important to consider the difference between the error introduced by varying the LOD and the error introduced by NB sampling; this thesis focuses on the quality of the II scheme. A comparison between NB, II and SW is presented in figure 5.3. To highlight the gain in image quality from II, all block resolutions are set to level 2, creating clearly visible ringing artifacts. This extreme case is for illustrational purposes, using only 2% of the volume texture for NB and II, compared to full resolution SW rendering of the data set. Comparing the reduction ratio, 341:1 in this case, to the ratios in table 5.1 further clarifies the gain in image quality from II: artifacts are reduced to a minimum despite the extreme reduction ratio. Finally, figure 5.4 shows the image quality gain for a data set rendered with the implemented multi-resolution framework compared to the traditional volume rendering technique in IDS7. Figure 5.4a shows the quality of the traditional rendering pipeline and figure 5.4b shows the quality of the implemented methods, compared to figure 5.4c, which is the full resolution SW rendering.
It becomes obvious that multi-resolution volume rendering on the GPU is the superior technique, since there is almost no visual difference between figure 5.4b and 5.4c.

Figure 5.2: Comparison of 512x512x512 data sets with II sampling: (a), (c) II sampling; (b), (d) SW based.

Figure 5.3: 2% texture usage for II and NB, and 100% for SW: (a) NB sampling, (b) II sampling, (c) SW full resolution.

Figure 5.4: Comparison of single- and multi-resolution rendering: (a) single-resolution, (b) multi-resolution, (c) full resolution.


Automatic Test Suite for Physics Simulation System Examensarbete LITH-ITN-MT-EX--06/042--SE Automatic Test Suite for Physics Simulation System Anders-Petter Mannerfelt Alexander Schrab 2006-09-08 Department of Science and Technology Linköpings Universitet

More information

Computer-assisted fracture reduction in an orthopaedic pre-operative planning workflow

Computer-assisted fracture reduction in an orthopaedic pre-operative planning workflow LiU-ITN-TEK-A--17/003--SE Computer-assisted fracture reduction in an orthopaedic pre-operative planning workflow Ludvig Mangs 2017-01-09 Department of Science and Technology Linköping University SE-601

More information

Automatic Clustering of 3D Objects for Hierarchical Level-of-Detail

Automatic Clustering of 3D Objects for Hierarchical Level-of-Detail LiU-ITN-TEK-A--18/033--SE Automatic Clustering of 3D Objects for Hierarchical Level-of-Detail Benjamin Wiberg 2018-06-14 Department of Science and Technology Linköping University SE-601 74 Norrköping,

More information

Hybrid Particle-Grid Water Simulation using Multigrid Pressure Solver

Hybrid Particle-Grid Water Simulation using Multigrid Pressure Solver LiU-ITN-TEK-G--14/006-SE Hybrid Particle-Grid Water Simulation using Multigrid Pressure Solver Per Karlsson 2014-03-13 Department of Science and Technology Linköping University SE-601 74 Norrköping, Sweden

More information

Information visualization of consulting services statistics

Information visualization of consulting services statistics LiU-ITN-TEK-A--16/051--SE Information visualization of consulting services statistics Johan Sylvan 2016-11-09 Department of Science and Technology Linköping University SE-601 74 Norrköping, Sweden Institutionen

More information

Object Migration in a Distributed, Heterogeneous SQL Database Network

Object Migration in a Distributed, Heterogeneous SQL Database Network Linköping University Department of Computer and Information Science Master s thesis, 30 ECTS Computer Engineering (Datateknik) 2018 LIU-IDA/LITH-EX-A--18/008--SE Object Migration in a Distributed, Heterogeneous

More information

Audial Support for Visual Dense Data Display

Audial Support for Visual Dense Data Display LiU-ITN-TEK-A--17/004--SE Audial Support for Visual Dense Data Display Tobias Erlandsson Gustav Hallström 2017-01-27 Department of Science and Technology Linköping University SE-601 74 Norrköping, Sweden

More information

Study of Local Binary Patterns

Study of Local Binary Patterns Examensarbete LITH-ITN-MT-EX--07/040--SE Study of Local Binary Patterns Tobias Lindahl 2007-06- Department of Science and Technology Linköpings universitet SE-60 74 Norrköping, Sweden Institutionen för

More information

Analysis of GPU accelerated OpenCL applications on the Intel HD 4600 GPU

Analysis of GPU accelerated OpenCL applications on the Intel HD 4600 GPU Linköping University Department of Computer Science Master thesis, 30 ECTS Computer Science Spring term 2017 LIU-IDA/LITH-EX-A--17/019--SE Analysis of GPU accelerated OpenCL applications on the Intel HD

More information

Multi-Video Streaming with DASH

Multi-Video Streaming with DASH Linköping University Department of Computer Science Bachelor thesis, 16 ECTS Datateknik 217 LIU-IDA/LITH-EX-G--17/71--SE Multi-Video Streaming with DASH Multi-video streaming med DASH Sebastian Andersson

More information

Creating a Framework for Consumer-Driven Contract Testing of Java APIs

Creating a Framework for Consumer-Driven Contract Testing of Java APIs Linköping University IDA Bachelor s Degree, 16 ECTS Computer Science Spring term 2018 LIU-IDA/LITH-EX-G--18/022--SE Creating a Framework for Consumer-Driven Contract Testing of Java APIs Fredrik Selleby

More information

Department of Electrical Engineering. Division of Information Coding. Master Thesis. Free Viewpoint TV. Mudassar Hussain.

Department of Electrical Engineering. Division of Information Coding. Master Thesis. Free Viewpoint TV. Mudassar Hussain. Department of Electrical Engineering Division of Information Coding Master Thesis Free Viewpoint TV Master thesis performed in Division of Information Coding by Mudassar Hussain LiTH-ISY-EX--10/4437--SE

More information

HTTP/2, Server Push and Branched Video

HTTP/2, Server Push and Branched Video Linköping University Department of Computer Science Bachelor thesis, 16 ECTS Datateknik 2017 LIU-IDA/LITH-EX-G--17/073--SE HTTP/2, Server Push and Branched Video Evaluation of using HTTP/2 Server Push

More information

Efficient implementation of the Particle Level Set method

Efficient implementation of the Particle Level Set method LiU-ITN-TEK-A--10/050--SE Efficient implementation of the Particle Level Set method John Johansson 2010-09-02 Department of Science and Technology Linköping University SE-601 74 Norrköping, Sweden Institutionen

More information

Comparing Costs of Browser Automation Test Tools with Manual Testing

Comparing Costs of Browser Automation Test Tools with Manual Testing Linköpings universitet The Institution of Computer Science (IDA) Master Theses 30 ECTS Informationsteknologi Autumn 2016 LIU-IDA/LITH-EX-A--16/057--SE Comparing Costs of Browser Automation Test Tools with

More information

Calibration of traffic models in SIDRA

Calibration of traffic models in SIDRA LIU-ITN-TEK-A-13/006-SE Calibration of traffic models in SIDRA Anna-Karin Ekman 2013-03-20 Department of Science and Technology Linköping University SE-601 74 Norrköping, Sweden Institutionen för teknik

More information

Institutionen för datavetenskap Department of Computer and Information Science

Institutionen för datavetenskap Department of Computer and Information Science Institutionen för datavetenskap Department of Computer and Information Science Final thesis Migration process evaluation and design by Henrik Bylin LIU-IDA/LITH-EX-A--13/025--SE 2013-06-10 Linköpings universitet

More information

Statistical flow data applied to geovisual analytics

Statistical flow data applied to geovisual analytics LiU-ITN-TEK-A--11/051--SE Statistical flow data applied to geovisual analytics Phong Hai Nguyen 2011-08-31 Department of Science and Technology Linköping University SE-601 74 Norrköping, Sweden Institutionen

More information

Network optimisation and topology control of Free Space Optics

Network optimisation and topology control of Free Space Optics LiU-ITN-TEK-A-15/064--SE Network optimisation and topology control of Free Space Optics Emil Hammarström 2015-11-25 Department of Science and Technology Linköping University SE-601 74 Norrköping, Sweden

More information

Markörlös Augmented Reality för visualisering av 3D-objekt i verkliga världen

Markörlös Augmented Reality för visualisering av 3D-objekt i verkliga världen LiU-ITN-TEK-A-14/019-SE Markörlös Augmented Reality för visualisering av 3D-objekt i verkliga världen Semone Kallin Clarke 2014-06-11 Department of Science and Technology Linköping University SE-601 74

More information

Institutionen för datavetenskap Department of Computer and Information Science

Institutionen för datavetenskap Department of Computer and Information Science Institutionen för datavetenskap Department of Computer and Information Science Final thesis A systematic literature Review of Usability Inspection Methods by Ali Ahmed LIU-IDA/LITH-EX-A--13/060--SE 2013-11-01

More information

Volume Rendering. Lecture 21

Volume Rendering. Lecture 21 Volume Rendering Lecture 21 Acknowledgements These slides are collected from many sources. A particularly valuable source is the IEEE Visualization conference tutorials. Sources from: Roger Crawfis, Klaus

More information

Slow rate denial of service attacks on dedicated- versus cloud based server solutions

Slow rate denial of service attacks on dedicated- versus cloud based server solutions Linköping University Department of Computer and Information Science Bachelor thesis, 16 ECTS Information technology 2018 LIU-IDA/LITH-EX-G--18/031--SE Slow rate denial of service attacks on dedicated-

More information

11/1/13. Visualization. Scientific Visualization. Types of Data. Height Field. Contour Curves. Meshes

11/1/13. Visualization. Scientific Visualization. Types of Data. Height Field. Contour Curves. Meshes CSCI 420 Computer Graphics Lecture 26 Visualization Height Fields and Contours Scalar Fields Volume Rendering Vector Fields [Angel Ch. 2.11] Jernej Barbic University of Southern California Scientific Visualization

More information

Visualization. CSCI 420 Computer Graphics Lecture 26

Visualization. CSCI 420 Computer Graphics Lecture 26 CSCI 420 Computer Graphics Lecture 26 Visualization Height Fields and Contours Scalar Fields Volume Rendering Vector Fields [Angel Ch. 11] Jernej Barbic University of Southern California 1 Scientific Visualization

More information

Institutionen för datavetenskap Department of Computer and Information Science

Institutionen för datavetenskap Department of Computer and Information Science Institutionen för datavetenskap Department of Computer and Information Science Final thesis Towards efficient legacy test evaluations at Ericsson AB, Linköping by Karl Gustav Sterneberg LIU-IDA/LITH-EX-A--08/056--SE

More information

Functional and Security testing of a Mobile Application

Functional and Security testing of a Mobile Application Linköping University Department of Computer Science Bachelor thesis, 16 ECTS Information Technology 2017 LIU-IDA/LITH-EX-G--17/066--SE Functional and Security testing of a Mobile Application Funktionell

More information

OMSI Test Suite verifier development

OMSI Test Suite verifier development Examensarbete LITH-ITN-ED-EX--07/010--SE OMSI Test Suite verifier development Razvan Bujila Johan Kuru 2007-05-04 Department of Science and Technology Linköpings Universitet SE-601 74 Norrköping, Sweden

More information

Visualization Computer Graphics I Lecture 20

Visualization Computer Graphics I Lecture 20 15-462 Computer Graphics I Lecture 20 Visualization Height Fields and Contours Scalar Fields Volume Rendering Vector Fields [Angel Ch. 12] April 15, 2003 Frank Pfenning Carnegie Mellon University http://www.cs.cmu.edu/~fp/courses/graphics/

More information

Development of water leakage detectors

Development of water leakage detectors LiU-ITN-TEK-A--08/068--SE Development of water leakage detectors Anders Pettersson 2008-06-04 Department of Science and Technology Linköping University SE-601 74 Norrköping, Sweden Institutionen för teknik

More information

Previously... contour or image rendering in 2D

Previously... contour or image rendering in 2D Volume Rendering Visualisation Lecture 10 Taku Komura Institute for Perception, Action & Behaviour School of Informatics Volume Rendering 1 Previously... contour or image rendering in 2D 2D Contour line

More information

Intelligent boundary extraction for area and volume measurement

Intelligent boundary extraction for area and volume measurement Linköping University Department of Computer Science Master thesis, 30 ECTS Datateknik 2017 LIU-IDA/LITH-EX-A--17/009--SE Intelligent boundary extraction for area and volume measurement Using LiveWire for

More information

Debug Interface for Clone of DSP. Examensarbete utfört i Elektroniksystem av. Andreas Nilsson

Debug Interface for Clone of DSP. Examensarbete utfört i Elektroniksystem av. Andreas Nilsson Debug Interface for Clone of 56000 DSP Examensarbete utfört i Elektroniksystem av Andreas Nilsson LITH-ISY-EX-ET--07/0319--SE Linköping 2007 Debug Interface for Clone of 56000 DSP Examensarbete utfört

More information

Evaluation of a synchronous leader-based group membership

Evaluation of a synchronous leader-based group membership Linköping University Department of Computer Science Bachelor thesis, 16 ECTS Information Technology Spring 2017 LIU-IDA/LITH-EX-G--17/084--SE Evaluation of a synchronous leader-based group membership protocol

More information

Motion Capture to the People: A high quality, low budget approach to real time Motion Capture

Motion Capture to the People: A high quality, low budget approach to real time Motion Capture Examensarbete LITH-ITN-MT-EX--05/013--SE Motion Capture to the People: A high quality, low budget approach to real time Motion Capture Daniel Saidi Magnus Åsard 2005-03-07 Department of Science and Technology

More information

Semi-automatic code-to-code transformer for Java

Semi-automatic code-to-code transformer for Java Linköping University Department of Computer Science Master thesis, 30 ECTS Datateknik 2016 LIU-IDA/LITH-EX-A--16/031--SE Semi-automatic code-to-code transformer for Java Transformation of library calls

More information

Height Fields and Contours Scalar Fields Volume Rendering Vector Fields [Angel Ch. 12] April 23, 2002 Frank Pfenning Carnegie Mellon University

Height Fields and Contours Scalar Fields Volume Rendering Vector Fields [Angel Ch. 12] April 23, 2002 Frank Pfenning Carnegie Mellon University 15-462 Computer Graphics I Lecture 21 Visualization Height Fields and Contours Scalar Fields Volume Rendering Vector Fields [Angel Ch. 12] April 23, 2002 Frank Pfenning Carnegie Mellon University http://www.cs.cmu.edu/~fp/courses/graphics/

More information

Optimizing a software build system through multi-core processing

Optimizing a software build system through multi-core processing Linköping University Department of Computer Science Master thesis, 30 ECTS Datateknik 2019 LIU-IDA/LITH-EX-A--19/004--SE Optimizing a software build system through multi-core processing Robin Dahlberg

More information

Storage and Transformation for Data Analysis Using NoSQL

Storage and Transformation for Data Analysis Using NoSQL Linköping University Department of Computer Science Master thesis, 30 ECTS Information Technology 2017 LIU-IDA/LITH-EX-A--17/049--SE Storage and Transformation for Data Analysis Using NoSQL Lagring och

More information

Design and Proof-of-Concept Implementation of Interactive Video Streaming with DASH.js

Design and Proof-of-Concept Implementation of Interactive Video Streaming with DASH.js Linköping University Department of Computer and Information Science Bachelor thesis, 16 ECTS Datateknik 2017 LIU-IDA/LITH-EX-G--17/081--SE Design and Proof-of-Concept Implementation of Interactive Video

More information

Design of video players for branched videos

Design of video players for branched videos Linköping University Department of Computer and Information Science Bachelor thesis, 16 ECTS Computer Science 2018 LIU-IDA/LITH-EX-G--18/053--SE Design of video players for branched videos Design av videospelare

More information

Towards automatic asset management for real-time visualization of urban environments

Towards automatic asset management for real-time visualization of urban environments LiU-ITN-TEK-A--17/049--SE Towards automatic asset management for real-time visualization of urban environments Erik Olsson 2017-09-08 Department of Science and Technology Linköping University SE-601 74

More information

Point Cloud Filtering using Ray Casting by Eric Jensen 2012 The Basic Methodology

Point Cloud Filtering using Ray Casting by Eric Jensen 2012 The Basic Methodology Point Cloud Filtering using Ray Casting by Eric Jensen 01 The Basic Methodology Ray tracing in standard graphics study is a method of following the path of a photon from the light source to the camera,

More information

A Back-End for the SkePU Skeleton Programming Library targeting the Low- Power Multicore Vision Processor

A Back-End for the SkePU Skeleton Programming Library targeting the Low- Power Multicore Vision Processor Linköping University Department of Computer Science Master thesis, 30 ECTS Datateknik 2016 LIU-IDA/LITH-EX-A--16/055--SE A Back-End for the SkePU Skeleton Programming Library targeting the Low- Power Multicore

More information

Volume Rendering. Computer Animation and Visualisation Lecture 9. Taku Komura. Institute for Perception, Action & Behaviour School of Informatics

Volume Rendering. Computer Animation and Visualisation Lecture 9. Taku Komura. Institute for Perception, Action & Behaviour School of Informatics Volume Rendering Computer Animation and Visualisation Lecture 9 Taku Komura Institute for Perception, Action & Behaviour School of Informatics Volume Rendering 1 Volume Data Usually, a data uniformly distributed

More information

Terrain Rendering using Multiple Optimally Adapting Meshes (MOAM)

Terrain Rendering using Multiple Optimally Adapting Meshes (MOAM) Examensarbete LITH-ITN-MT-EX--04/018--SE Terrain Rendering using Multiple Optimally Adapting Meshes (MOAM) Mårten Larsson 2004-02-23 Department of Science and Technology Linköpings Universitet SE-601 74

More information

An Approach to Achieve DBMS Vendor Independence for Ides AB s Platform

An Approach to Achieve DBMS Vendor Independence for Ides AB s Platform Linköping University Department of Computer Science Bachelor thesis, 16 ECTS Datateknik 2017 LIU-IDA/LITH-EX-G--17/008--SE An Approach to Achieve DBMS Vendor Independence for Ides AB s Platform Niklas

More information

Institutionen för datavetenskap Department of Computer and Information Science

Institutionen för datavetenskap Department of Computer and Information Science Institutionen för datavetenskap Department of Computer and Information Science Master s Thesis An Approach on Learning Multivariate Regression Chain Graphs from Data by Babak Moghadasin LIU-IDA/LITH-EX-A--13/026

More information

Scalar Data. Visualization Torsten Möller. Weiskopf/Machiraju/Möller

Scalar Data. Visualization Torsten Möller. Weiskopf/Machiraju/Möller Scalar Data Visualization Torsten Möller Weiskopf/Machiraju/Möller Overview Basic strategies Function plots and height fields Isolines Color coding Volume visualization (overview) Classification Segmentation

More information

Visualization Computer Graphics I Lecture 20

Visualization Computer Graphics I Lecture 20 15-462 Computer Graphics I Lecture 20 Visualization Height Fields and Contours Scalar Fields Volume Rendering Vector Fields [Angel Ch. 12] November 20, 2003 Doug James Carnegie Mellon University http://www.cs.cmu.edu/~djames/15-462/fall03

More information

Automatic analysis of eye tracker data from a driving simulator

Automatic analysis of eye tracker data from a driving simulator LiU-ITN-TEK-A--08/033--SE Automatic analysis of eye tracker data from a driving simulator Martin Bergstrand 2008-02-29 Department of Science and Technology Linköping University SE-601 74 Norrköping, Sweden

More information

Institutionen för datavetenskap Department of Computer and Information Science

Institutionen för datavetenskap Department of Computer and Information Science Institutionen för datavetenskap Department of Computer and Information Science Bachelor thesis A TDMA Module for Waterborne Communication with Focus on Clock Synchronization by Anders Persson LIU-IDA-SAS

More information

Design Optimization of Soft Real-Time Applications on FlexRay Platforms

Design Optimization of Soft Real-Time Applications on FlexRay Platforms Institutionen för Datavetenskap Department of Computer and Information Science Master s thesis Design Optimization of Soft Real-Time Applications on FlexRay Platforms by Mahnaz Malekzadeh LIU-IDA/LITH-EX-A

More information

A collision framework for rigid and deformable body simulation

A collision framework for rigid and deformable body simulation LiU-ITN-TEK-A--16/049--SE A collision framework for rigid and deformable body simulation Rasmus Haapaoja 2016-11-02 Department of Science and Technology Linköping University SE-601 74 Norrköping, Sweden

More information

Institutionen för datavetenskap Department of Computer and Information Science

Institutionen för datavetenskap Department of Computer and Information Science Institutionen för datavetenskap Department of Computer and Information Science Final thesis A database solution for scientific data from driving simulator studies By Yasser Rasheed LIU-IDA/LITH-EX-A--11/017

More information

Visualization. Images are used to aid in understanding of data. Height Fields and Contours Scalar Fields Volume Rendering Vector Fields [chapter 26]

Visualization. Images are used to aid in understanding of data. Height Fields and Contours Scalar Fields Volume Rendering Vector Fields [chapter 26] Visualization Images are used to aid in understanding of data Height Fields and Contours Scalar Fields Volume Rendering Vector Fields [chapter 26] Tumor SCI, Utah Scientific Visualization Visualize large

More information

Applications of Explicit Early-Z Culling

Applications of Explicit Early-Z Culling Applications of Explicit Early-Z Culling Jason L. Mitchell ATI Research Pedro V. Sander ATI Research Introduction In past years, in the SIGGRAPH Real-Time Shading course, we have covered the details of

More information

Adapting network interactions of a rescue service mobile application for improved battery life

Adapting network interactions of a rescue service mobile application for improved battery life Linköping University Department of Computer and Information Science Bachelor thesis, 16 ECTS Information Technology Spring term 2017 LIU-IDA/LITH-EX-G--2017/068--SE Adapting network interactions of a rescue

More information

Design and Implementation of an Application Programming Interface for Volume Rendering

Design and Implementation of an Application Programming Interface for Volume Rendering LITH-ITN-MT-EX--02/06--SE Design and Implementation of an Application Programming Interface for Volume Rendering Examensarbete utfört i Medieteknik vid Linköpings Tekniska Högskola, Campus Norrköping Håkan

More information

Raspberry pi to backplane through SGMII

Raspberry pi to backplane through SGMII LiU-ITN-TEK-A--18/019--SE Raspberry pi to backplane through SGMII Petter Lundström Josef Toma 2018-06-01 Department of Science and Technology Linköping University SE-601 74 Norrköping, Sweden Institutionen

More information

Lecture 17: Recursive Ray Tracing. Where is the way where light dwelleth? Job 38:19

Lecture 17: Recursive Ray Tracing. Where is the way where light dwelleth? Job 38:19 Lecture 17: Recursive Ray Tracing Where is the way where light dwelleth? Job 38:19 1. Raster Graphics Typical graphics terminals today are raster displays. A raster display renders a picture scan line

More information

coding of various parts showing different features, the possibility of rotation or of hiding covering parts of the object's surface to gain an insight

coding of various parts showing different features, the possibility of rotation or of hiding covering parts of the object's surface to gain an insight Three-Dimensional Object Reconstruction from Layered Spatial Data Michael Dangl and Robert Sablatnig Vienna University of Technology, Institute of Computer Aided Automation, Pattern Recognition and Image

More information

Data Visualization (DSC 530/CIS )

Data Visualization (DSC 530/CIS ) Data Visualization (DSC 530/CIS 60-0) Isosurfaces & Volume Rendering Dr. David Koop Fields & Grids Fields: - Values come from a continuous domain, infinitely many values - Sampled at certain positions

More information

Real-Time Magnetohydrodynamic Space Weather Visualization

Real-Time Magnetohydrodynamic Space Weather Visualization LiU-ITN-TEK-A--17/048--SE Real-Time Magnetohydrodynamic Space Weather Visualization Oskar Carlbaum Michael Novén 2017-08-30 Department of Science and Technology Linköping University SE-601 74 Norrköping,

More information

Real-Time Ray Tracing on the Cell Processor

Real-Time Ray Tracing on the Cell Processor LiU-ITN-TEK-A--08/102--SE Real-Time Ray Tracing on the Cell Processor Filip Lars Roland Andersson 2008-09-03 Department of Science and Technology Linköping University SE-601 74 Norrköping, Sweden Institutionen

More information

Blood vessel segmentation for neck and head computed tomography angiography

Blood vessel segmentation for neck and head computed tomography angiography LiU-ITN-TEK-A-13/056-SE Blood vessel segmentation for neck and head computed tomography angiography Anders Hedblom 2013-10-11 Department of Science and Technology Linköping University SE-601 74 Norrköping,

More information

Ad-hoc Routing in Low Bandwidth Environments

Ad-hoc Routing in Low Bandwidth Environments Master of Science in Computer Science Department of Computer and Information Science, Linköping University, 2016 Ad-hoc Routing in Low Bandwidth Environments Emil Berg Master of Science in Computer Science

More information

Monte Carlo Simulation of Light Scattering in Paper

Monte Carlo Simulation of Light Scattering in Paper Examensarbete LITH-ITN-MT-EX--05/015--SE Monte Carlo Simulation of Light Scattering in Paper Ronnie Dahlgren 2005-02-14 Department of Science and Technology Linköpings Universitet SE-601 74 Norrköping,

More information

Implementation and Evaluation of Bluetooth Low Energy as a communication technology for wireless sensor networks

Implementation and Evaluation of Bluetooth Low Energy as a communication technology for wireless sensor networks Linköpings universitet/linköping University IDA HCS Bachelor 16hp Innovative programming Vårterminen/Spring term 2017 ISRN: LIU-IDA/LITH-EX-G--17/015--SE Implementation and Evaluation of Bluetooth Low

More information

Extending the Stream Reasoning in DyKnow with Spatial Reasoning in RCC-8

Extending the Stream Reasoning in DyKnow with Spatial Reasoning in RCC-8 Institutionen för Datavetenskap Department of Computer and Information Science Master s thesis Extending the Stream Reasoning in DyKnow with Spatial Reasoning in RCC-8 by Daniel Lazarovski LIU-IDA/LITH-EX-A

More information

Medical Image Processing: Image Reconstruction and 3D Renderings

Medical Image Processing: Image Reconstruction and 3D Renderings Medical Image Processing: Image Reconstruction and 3D Renderings 김보형 서울대학교컴퓨터공학부 Computer Graphics and Image Processing Lab. 2011. 3. 23 1 Computer Graphics & Image Processing Computer Graphics : Create,

More information

Efficient Simulation and Rendering of Sub-surface Scattering

Efficient Simulation and Rendering of Sub-surface Scattering LiU-ITN-TEK-A--13/065-SE Efficient Simulation and Rendering of Sub-surface Scattering Apostolia Tsirikoglou 2013-10-30 Department of Science and Technology Linköping University SE-601 74 Norrköping, Sweden

More information

CIS 467/602-01: Data Visualization

CIS 467/602-01: Data Visualization CIS 467/60-01: Data Visualization Isosurfacing and Volume Rendering Dr. David Koop Fields and Grids Fields: values come from a continuous domain, infinitely many values - Sampled at certain positions to

More information

First Steps in Hardware Two-Level Volume Rendering

First Steps in Hardware Two-Level Volume Rendering First Steps in Hardware Two-Level Volume Rendering Markus Hadwiger, Helwig Hauser Abstract We describe first steps toward implementing two-level volume rendering (abbreviated as 2lVR) on consumer PC graphics

More information

Semi-automated annotation of histology images

Semi-automated annotation of histology images Linköping University Department of Computer science Master thesis, 30 ECTS Computer science 2016 LIU-IDA/LITH-EX-A--16/030--SE Semi-automated annotation of histology images Development and evaluation of

More information

Volume Rendering with libmini Stefan Roettger, April 2007

Volume Rendering with libmini Stefan Roettger, April 2007 Volume Rendering with libmini Stefan Roettger, April 2007 www.stereofx.org 1. Introduction For the visualization of volumetric data sets, a variety of algorithms exist which are typically tailored to the

More information

Geometric Representations. Stelian Coros

Geometric Representations. Stelian Coros Geometric Representations Stelian Coros Geometric Representations Languages for describing shape Boundary representations Polygonal meshes Subdivision surfaces Implicit surfaces Volumetric models Parametric

More information