Illustrative Visualization of Anatomical Structures


LiU-ITN-TEK-A--11/045--SE
Illustrative Visualization of Anatomical Structures
Erik Jonsson
Department of Science and Technology, Linköping University, SE Norrköping, Sweden

LiU-ITN-TEK-A--11/045--SE
Illustrative Visualization of Anatomical Structures
Master's thesis carried out in Media Technology at the Institute of Technology, Linköping University
Erik Jonsson
Examiner: Karljohan Lundin Palmerius
Norrköping

Copyright

The publishers will keep this document online on the Internet - or its possible replacement - for a considerable time from the date of publication barring exceptional circumstances. The online availability of the document implies a permanent permission for anyone to read, to download, to print out single copies for your own use and to use it unchanged for any non-commercial research and educational purpose. Subsequent transfers of copyright cannot revoke this permission. All other uses of the document are conditional on the consent of the copyright owner. The publisher has taken technical and administrative measures to assure authenticity, security and accessibility. According to intellectual property law the author has the right to be mentioned when his/her work is accessed as described above and to be protected against infringement. For additional information about the Linköping University Electronic Press and its procedures for publication and for assurance of document integrity, please refer to its WWW home page.

Erik Jonsson

Abstract

Illustrative visualization is a term for visualization techniques inspired by traditional technical and medical illustration. These techniques are based on knowledge of human perception and provide effective visual abstraction to make visualizations more understandable. Within volume rendering, such expressive visualizations can be achieved using non-photorealistic rendering that combines different levels of abstraction to convey the most important information to the viewer. In this thesis I look at illustrative techniques and show how they can be used to visualize anatomical structures in medical volume data. The result of the thesis is a prototype of an anatomy education application that makes use of illustrative techniques to provide a focus+context visualization with feature enhancement, tone shading and labels describing the anatomical structures. This results in an expressive visualization and interactive exploration of the human anatomy.

Acknowledgements

I would like to thank my supervisor Karl-Johan Lundin Palmerius and Lena Tibell at the Department of Science and Technology, Linköping University, for their help and assistance throughout the thesis work. Thanks also to Daniel Forsberg at the Department of Biomedical Engineering, Linköping University, for providing the human body data set together with the segmented data.

Contents

1 Introduction
  1.1 Motivation
  1.2 Purpose & Goal
  1.3 Limitations
  1.4 Outline
2 Background
  2.1 Anatomy Education
    2.1.1 Dissections
  2.2 Volume Rendering
    2.2.1 Volume Rendering Integral
    2.2.2 Segmented Volume Data
    2.2.3 Ray Casting
    2.2.4 GPU-based Ray Casting
    2.2.5 Transfer Functions
    2.2.6 Local Illumination
  2.3 Illustrative Visualization
    2.3.1 Medical Illustrations
    2.3.2 Visual Abstraction
    2.3.3 Cut-away Views and Ghosted Views
    2.3.4 Visibility Control
    2.3.5 Textual Annotations
  2.4 Voreen
3 Theory
  3.1 The Importance-aware Composition Scheme
  3.2 The Tone Shading Model
4 Implementation
  4.1 Illustrative Ray Casting
    4.1.1 Segmentation Classification
    4.1.2 Tone Shading
    4.1.3 Importance-aware Composition
  4.2 Labeling of Segmented Data
    4.2.1 Segment Description File
    4.2.2 Layout Algorithm
    4.2.3 Rendering
  4.3 Anatomy Application
    4.3.1 Design and User Interface
    4.3.2 Focus+Context Widget
    4.3.3 Labeling Widget
5 Conclusion
  5.1 Results
    5.1.1 Result of the Importance-aware Composition
    5.1.2 Result of the Tone Shading
    5.1.3 Result of the Anatomy Application
    5.1.4 Performance
  5.2 Discussion
    5.2.1 The Illustrative Techniques
    5.2.2 The Anatomy Application
  5.3 Future work
    5.3.1 Additional Features

List of Figures

2.1 The front and back face from the bounding box of the volume
2.2 The ray casting technique via rasterization
2.3 A transfer function represented by a 1D texture
2.4 Cut-away and ghosted illustration of a sphere
2.5 Medical illustrations by Leonardo da Vinci
2.6 The standard workspace in VoreenVE
3.1 Tone shading of a red object with blue/yellow tones
4.1 1D TF textures stored in a 2D segmentation TF texture
4.2 Tone shading parameters
4.3 Importance Measurements Parameters
4.4 Convex hull: A set of points enclosed by an elastic band
4.5 The placement of labels
4.6 The network of the anatomy application
4.7 Layout of the Labeling widget
5.1 The intensity measurement
5.2 The gradient magnitude, silhouetteness and background measurement
5.3 Focus+context visualization
5.4 Comparison of Blinn-Phong shading and Tone shading
5.5 The Anatomy Application: Selection on Pericardium
5.6 The Anatomy Application: The Digestive and Urinary System

List of Tables

5.1 Performance measurements of front-to-back composition and importance-aware composition with different settings on importance measurements (IM) and early ray termination (ERT)
5.2 Performance measurement of tone shading and Blinn-Phong shading using front-to-back composition

Chapter 1
Introduction

In this Master's thesis an illustrative volume rendering system has been developed at the division for Media and Information Technology, Department of Science and Technology at Linköping University. Illustrative techniques are used in the system to achieve an expressive visualization of anatomical structures. The thesis serves as a fulfillment of a Master of Science degree in Media Technology at Linköping University, Sweden.

1.1 Motivation

The study of medicine and biology has always relied on visualizations to learn about anatomical relationships and structures. In these studies, dissections are often used to support the anatomical learning with both visual and tactile experience. However, the use of dissection is declining in schools that offer anatomy education [14]. High schools and universities increasingly use other aids such as textbooks, plastic specimens and simulators to support their anatomy education. The computerized aids offer many new possibilities, where simulators and educational software let the user explore anatomical structures in three dimensions. Often these applications use surface rendering to render pre-modeled 3D models. However, through a technique called volume rendering the structures can be rendered directly from the medical data. Volume rendering has long been considered much slower than surface rendering, but with newer GPUs it is possible to achieve interactive frame rates. With volume rendering it is possible to acquire renderings that better correspond to the real material. The density values in the medical data sets are directly mapped to RGBA values for the pixels in the rendered images. This allows for fuzzy surfaces with varying opacity, where surface and internal details can be rendered together, for example materials such as soft tissue and blood vessels.

1.2 Purpose & Goal

In this thesis an interactive volume visualization system for illustrative visualization and exploration of medical volume data is proposed. The purpose of the thesis is to develop a volume rendering application for anatomy education, which allows the user to interactively explore anatomical structures in a medical data set.

The goal of using illustrative techniques is to achieve an expressive visualization, where complex data is conveyed in an intuitive and understandable way. Otherwise, the information can quickly overwhelm the user, making it harder for the user to take in the information. The goal of the thesis is to achieve illustrative visualization of anatomical structures and to show its use in an application for anatomy education.

1.3 Limitations

The application in this thesis is based on research material and is developed as a proof-of-concept, where the potential of the methods is evaluated. This means that user satisfaction is not evaluated and no user requirements are collected. Otherwise, the needs and opinions of users of such an application would have been surveyed. The potential users are medical students and other medical experts who would be interested in an application for anatomy education.

1.4 Outline

The structure of the thesis is outlined as follows.

Chapter 1: Introduction - Describes the motivation, purpose, goal and limitations of the thesis.
Chapter 2: Background - Presents anatomy education and how it is performed at schools. Explains the theory and background behind volume rendering and illustrative visualization.
Chapter 3: Theory - Explains the theory behind the illustrative methods used in the thesis.
Chapter 4: Implementation - Explains the implementation of the illustrative methods and how they have been used in the anatomy application.
Chapter 5: Conclusion - Presents the result of the implementation and the performance of the methods. Discusses the result and the future work arising from the thesis.

Chapter 2
Background

2.1 Anatomy Education

In medicine and biology education, the anatomy of animals and humans is studied to learn about anatomical structures, functions and relationships. With this knowledge we can understand how our bodies work and how evolution has shaped us and other living creatures. Textbooks are often used as an aid in anatomy education, where illustrations give a better understanding of the anatomical structures. Another aid is the use of dissections, which add both visual and tactile experience to the anatomy education. Dissections can be traced back to the Renaissance [9], when dissections were performed on human cadavers. In modern times, dissections are often introduced in high school, where animal cadavers are studied. In veterinary and medical school, the studies are done on both animal and human cadavers. However, this has started to change and dissections are declining as an aid in anatomy education, as described by Winkelmann [14].

2.1.1 Dissections

The role of dissections as an anatomy teaching tool for medical students is described by McLachlan et al. [9] as an opportunity to study real material as opposed to textbooks and other teaching material. Dissections also give an important three-dimensional view of the anatomy, where knowledge from lectures and tutorials can be used. Moreover, McLachlan et al. [9] mention that they increase self-directed learning and team working. However, the use of dissections also has its shortcomings, where practical problems concern ethical and moral issues, cost-effectiveness and safety. Cadavers might be dealt with improperly, the preservation is expensive and there can be potential health risks. Other problems are more about the educational value, where the major consideration is whether dissections are the most suitable way for high school students to study anatomy, and also for those medical and veterinary students that will not work with real material in their future work. These students may only encounter anatomy through medical imaging, and then the knowledge from dissections would be hard to translate to the views produced by imaging, as described by McLachlan et al. [9].

2.2 Volume Rendering

Volume rendering is a technique to visualize three-dimensional data and has grown into a major field in scientific visualization. The volume data can be acquired from many different sources such as simulations of water, wind, clouds, fog, fire or other natural phenomena. However, the major application area for volume visualization is medical imaging, where the data is acquired from computed tomography (CT) or magnetic resonance imaging (MRI). These techniques use either x-ray beams or magnetic fields to extract and visualize scanned bodies or objects. With modern graphics hardware, more efficient volume rendering techniques have evolved, which make it possible to achieve volume rendering with interactive frame rates. Graphics processing units (GPUs) allow for hardware accelerated volume rendering techniques that take advantage of the parallelism of modern graphics hardware. The ray casting techniques in volume rendering benefit especially from this parallelism, where multiple rays can be processed at the same time to achieve real-time volume rendering. In this section I will explain the fundamental parts of volume rendering and how it can be produced efficiently by modern graphics hardware. Most of the volume rendering material can be referred to the book by Engel et al. [5], which presents the fundamentals of real-time volume graphics.

2.2.1 Volume Rendering Integral

The volume rendering integral is the physical description of the volume rendering technique. The integral uses an optical model to find a solution to the light transport, where the flow of light is followed to produce the virtual imagery. In the light transport the light can interact with participating media and be emitted, absorbed and scattered. However, after a number of interactions the light transport becomes very complex and the complete solution becomes a computationally intensive task. Simplified optical models are therefore often used to achieve a more efficient volume rendering. The most common models are the following.

- Absorption only (light can only be absorbed)
- Emission only (light can only be emitted)
- Emission-Absorption model (light can be absorbed and emitted)
- Single scattering and shadowing (local illumination)
- Multiple scattering (global illumination)

In the classic volume rendering integral (2.1) the emission-absorption model is used. In this model light can be emitted and absorbed, but it cannot be scattered as in other more complete illumination models.

    I(D) = I_0 \, e^{-\int_{s_0}^{D} \kappa(t)\,dt} + \int_{s_0}^{D} q(s)\, e^{-\int_{s}^{D} \kappa(t)\,dt}\, ds    (2.1)

In the volume rendering integral the light flow is followed from the background of the volume at s_0, through the volume and towards the position of the eye at D.

The result is the total outgoing intensity I(D). In equation 2.1 the optical properties of emission and absorption are described by the terms q(s) and κ(t) respectively. To simplify the integral, the term

    \tau(s_1, s_2) = \int_{s_1}^{s_2} \kappa(t)\,dt    (2.2)

is defined as the optical depth between the positions s_1 and s_2, and the corresponding transparency is defined as

    T(s_1, s_2) = e^{-\tau(s_1, s_2)} = e^{-\int_{s_1}^{s_2} \kappa(t)\,dt}    (2.3)

With these definitions of optical depth and transparency the following volume rendering integral can be obtained.

    I(D) = I_0 \, T(s_0, D) + \int_{s_0}^{D} q(s)\, T(s, D)\, ds    (2.4)

In the first term of equation 2.4 the initial intensity I_0 from the background is attenuated through the volume, where the optical depth τ controls the transparency T of the medium. For small values of τ the medium is rather transparent and for larger values the medium is more opaque. In the second term the contribution of the emission source term q(s) is attenuated by the participating media along the remaining path through the volume to the viewer. To be able to compute the volume rendering integral 2.1 it needs to be discretized. This is commonly done by partitioning the integration domain into several intervals and thus approximating the integral with a Riemann sum. The discrete volume rendering integral can then be written as

    I(D) = \sum_{i=0}^{n} c_i \prod_{j=i+1}^{n} T_j    (2.5)

with c_0 = I(s_0), where the integral is approximated from the starting point s_0 to the eye point D with n intervals. Identifying the transparency of each interval with one minus its opacity, T_j = 1 - \alpha_j, leads directly to the compositing schemes presented in section 2.2.3.

2.2.2 Segmented Volume Data

The volume data consists of a 3D scalar field and is represented on a discrete uniform grid, where each of the cubic volume elements is called a voxel. In segmented volume data, each of the voxels is tagged as belonging to a segment. The segments can be seen as individual objects that have been separated from the volume by a process called volume segmentation. This is done before the actual volume rendering and is used to distinguish individual objects. In medical visualization this can for example be used to visualize a specific organ in a human body data set.

2.2.3 Ray Casting

Ray casting is an image-based volume rendering technique, where the volume integral is evaluated along rays through the volume data. Usually the traversal order is front-to-back, where the volume data is traversed from the eye and into the volume. For each pixel in the image to be rendered, a ray is cast into the volume and data is sampled at discrete positions along the ray.

At each sample point on the ray, an interpolation is done to get the correct voxel value. Transfer functions are then used to map the scalar data values to optical properties such as color and opacity. In the last step the samples are composited together to get the resulting pixel color. With the discrete volume rendering integral in equation 2.5 the composition schemes can be obtained, where the illumination I is represented with RGBA components with the color as C and the opacity as α. The composition equations for front-to-back traversal are given as follows.

    \hat{C}_i = \hat{C}_{i-1} + (1 - \hat{\alpha}_{i-1})\, C_i
    \hat{\alpha}_i = \hat{\alpha}_{i-1} + (1 - \hat{\alpha}_{i-1})\, \alpha_i    (2.6)

The new values \hat{C}_i and \hat{\alpha}_i are calculated from the accumulated color \hat{C}_{i-1} and opacity \hat{\alpha}_{i-1} at the previous location i-1, and the color C_i and opacity \alpha_i of the sample at the current location i. With these steps the color and opacity are accumulated along the ray, which results in an RGBA value for the current pixel. In a similar way the back-to-front composition scheme is obtained as follows,

    \hat{C}_i = (1 - \alpha_i)\, \hat{C}_{i+1} + C_i    (2.7)

where the value of \hat{C}_i is calculated from the color C_i and opacity \alpha_i at the current location i, and the accumulated color \hat{C}_{i+1} from the previous location i+1. In the back-to-front composition the opacity is not updated as in the front-to-back composition 2.6. That is because the color contribution \hat{C}_i can be determined without using the accumulated opacity in a back-to-front traversal. However, a major advantage of the front-to-back traversal is that the traversal can be terminated when the accumulated opacity \hat{\alpha}_i, which lies in [0,1], approaches one. Then the most opaque material has been evaluated along the ray and further traversal is unnecessary. This technique is called early ray termination; it is an efficient way to optimize the rendering and can easily be executed in the ray casting loop. For this reason, front-to-back composition is the most commonly used composition scheme in volume rendering.
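As a minimal sketch (not taken from the thesis code), the front-to-back scheme (2.6) and early ray termination can be expressed in a GLSL fragment shader roughly as follows; the sample color is assumed to be opacity-weighted before accumulation.

    // Front-to-back compositing step, equation (2.6); 'acc' holds the
    // accumulated RGBA value and 'smpl' the classified sample (color, opacity).
    void compositeFTB(inout vec4 acc, in vec4 smpl) {
        acc.rgb += (1.0 - acc.a) * smpl.a * smpl.rgb;  // opacity-weighted color
        acc.a   += (1.0 - acc.a) * smpl.a;
    }

    // Inside the ray-casting loop, early ray termination stops the traversal
    // once the accumulated opacity is close to one:
    // if (acc.a >= 0.95) break;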

2.2.4 GPU-based Ray Casting

In GPU-based ray casting the entire volume is stored in a 3D texture. The texture is transferred to a fragment shader and the rays are cast through the volume on a per-pixel basis. In order to calculate the ray direction, different approaches can be used. The most basic solution is to compute the direction from the camera position and the screen space coordinates, but another way is to use rasterization [8]. In this technique the range of depths from where the ray enters the volume to where the ray exits the volume is computed in a ray setup prior to the ray casting. This yields the front face and the back face of the bounding box of the volume, as can be seen in figure 2.1. The front and back face coordinates can then be used to compute the direction coordinates as follows,

    D(x, y) = T_{exit}(x, y) - T_{entry}(x, y)    (2.8)

where the coordinates can be seen as the entry and exit points of the ray traversing the volume.

Figure 2.1: The front and back face from the bounding box of the volume. (a) Front face, (b) Back face

After the ray setup the ray casting is performed in a ray casting loop, where equation 2.8 is used to determine when the ray has reached the exit point of the volume. The ray casting technique via rasterization can be seen in figure 2.2, where the rays (r) are traversed from the front faces (f) to the back faces (b).

Figure 2.2: The ray casting technique via rasterization

In the ray casting loop the rays are cast through the volume, where each ray is iteratively stepped through and the 3D texture is sampled using tri-linear interpolation. The sample is then used to apply the transfer function and get the color and opacity of the given sample. Finally, a composition scheme is used to blend the samples together. When the last sample has been reached, the final RGBA value of the pixel has been computed and can be returned from the fragment shader. The expensive stage in this algorithm is the actual ray casting loop and therefore many optimization techniques have been developed to make it more efficient. Early ray termination, already presented in section 2.2.3, is one such technique.
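A rough GLSL sketch of this ray setup and loop structure is shown below (using the compositeFTB step sketched earlier); the texture and uniform names (entryPoints_, exitPoints_, volume_, transferFunc_, stepSize_) are illustrative and do not necessarily match the thesis or Voreen code.

    // Ray setup from the rasterized entry/exit point textures (equation 2.8)
    vec3 entry = texture(entryPoints_, texCoord).rgb;   // front-face position
    vec3 exit  = texture(exitPoints_,  texCoord).rgb;   // back-face position
    vec3 dir   = exit - entry;                          // D(x, y)
    float tEnd = length(dir);
    dir = normalize(dir);

    vec4 acc = vec4(0.0);
    for (float t = 0.0; t < tEnd; t += stepSize_) {
        vec3 pos   = entry + t * dir;                   // sample position along the ray
        float value = texture(volume_, pos).r;          // tri-linear volume sample
        vec4 smpl  = texture(transferFunc_, value);     // 1D transfer function lookup
        compositeFTB(acc, smpl);                        // front-to-back composition
        if (acc.a >= 0.95) break;                       // early ray termination
    }
    fragColor = acc;  // final RGBA value of the pixel (output name is illustrative)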

Another powerful technique is called empty space skipping. This technique tries to avoid sampling empty space in the volume, which occurs when the visible parts of the volume do not fill up the entire bounding box. However, if the volume is subdivided into smaller blocks we can determine for each block whether it is empty or not. To achieve this we can use the front and back faces of the smaller blocks and obtain a much tighter bounding geometry that more closely resembles the visible parts of the volume.

2.2.5 Transfer Functions

The transfer functions are applied in the ray casting process as explained in sections 2.2.3 and 2.2.4. They are used to map optical properties such as absorption and emission to the scalar values in the volume and thereby evaluate the volume rendering integral. In medical volume data the scalar values most commonly represent material density. The transfer functions classify the data and map it to color contributions, where each scalar value between 0 and 255 corresponds to a color and opacity. The transfer functions are commonly applied with the use of lookup tables, which contain discrete samples from the transfer function and are stored in a 1D or 2D texture. An example of a transfer function stored in a 1D texture can be seen in figure 2.3.

Figure 2.3: A transfer function represented by a 1D texture

2.2.6 Local Illumination

The emission-absorption model presented in section 2.2.1 does not involve local illumination. However, the volume rendering integral in equation 2.1 can be extended to handle local illumination by adding an illumination term to the emission source term q(s):

    q_{extended}(s) = q_{emission}(s) + q_{illumination}(s)    (2.9)

where q_{emission}(s) is identical to the emission source term in the emission-absorption model. The term q_{illumination}(s) describes the local reflection of light that comes directly from the light source. With this term it is possible to achieve single scattering effects using local illumination models similar to traditional methods for surface lighting. In these models the surface normal is used to calculate the light reflection. However, to use the local illumination models in volume rendering the normal is substituted by the normalized gradient vector of the volume. To do this the gradient is computed in the fragment shader using finite differencing schemes. These are based on Taylor expansion and can estimate the gradients by forward, backward or central differences. The most common approach in volume rendering is central differencing, as seen in equation 2.10, which has a higher-order (and therefore smaller) approximation error than forward and backward differences and thus gives a better estimate.

    f'(x) \approx \frac{f(x + h) - f(x - h)}{2h}    (2.10)

With the central difference formula in equation 2.10 the three components of the gradient vector \nabla f(x, y, z) are estimated and can be used in a local illumination model, such as the Blinn-Phong model. This model is the most common shading technique and computes the light reflected by an object as a combination of three terms: ambient, diffuse and specular reflection.

    I_{BlinnPhong} = I_{ambient} + I_{diffuse} + I_{specular}    (2.11)

The ambient term I_{ambient} is used to compensate for the missing indirect illumination. This is achieved by modeling a constant global ambient light that prevents the shadows from being completely black. With the diffuse and specular terms the reflected incident light is modeled to create matte and shiny surfaces. The diffuse term I_{diffuse} corresponds to the light that is scattered in all directions and the specular term I_{specular} to the light that is scattered around the direction of the perfect reflection. The local illumination model in equation 2.11 can be integrated into the emission-absorption model by adding the scattered light to the emission term as explained in equation 2.9. This means that the illumination of the volume can be determined by adding the local illumination to the emission of the volume.
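As an illustration (not the thesis' actual shader code), the central-difference gradient of equation 2.10 can be computed in GLSL along the following lines; volume_ and voxelSpacing_ are assumed names.

    // On-the-fly gradient estimation with central differences (equation 2.10).
    // The result can replace the surface normal in Blinn-Phong or tone shading.
    vec3 computeGradient(sampler3D volume_, vec3 pos, vec3 voxelSpacing_) {
        float dx = texture(volume_, pos + vec3(voxelSpacing_.x, 0.0, 0.0)).r
                 - texture(volume_, pos - vec3(voxelSpacing_.x, 0.0, 0.0)).r;
        float dy = texture(volume_, pos + vec3(0.0, voxelSpacing_.y, 0.0)).r
                 - texture(volume_, pos - vec3(0.0, voxelSpacing_.y, 0.0)).r;
        float dz = texture(volume_, pos + vec3(0.0, 0.0, voxelSpacing_.z)).r
                 - texture(volume_, pos - vec3(0.0, 0.0, voxelSpacing_.z)).r;
        return vec3(dx, dy, dz) / (2.0 * voxelSpacing_);  // component-wise division by 2h
    }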

2.3 Illustrative Visualization

Volume rendering is often concerned with photorealistic rendering, where the goal is to produce highly realistic images. This is important for many applications, but photorealism can also prohibit the effective depiction of features of interest, as described by Rautek et al. [10]. Important features may not be recognizable among the other visual content. Non-photorealistic rendering (NPR) has therefore emerged to visualize features that cannot be shown using a physically correct light transport. These techniques have been inspired by the artistic styles used in pen-and-ink drawings, hatching, stippling and water color paintings. The techniques that are inspired by technical and medical illustrations are called illustrative visualization techniques [10]. These make use of visual abstraction to effectively convey information to the viewer, and they concern both what to visualize and how to visualize the features in order to achieve an expressive visualization.

2.3.1 Medical Illustrations

Scientific illustrations are often used for educational purposes to instruct and explain complex technical information. They can illustrate the mounting of a table, a surgical procedure, the anatomy of an animal or a technical device. Medical illustrations are used extensively in the medical field to represent anatomical structures in a clear and informative way. An illustration of a heart can for example give insight into its function and relation to other organs. These illustrations are often drawn using traditional or digital techniques and used in textbooks, advertisements, presentations and many other contexts. The illustrations can also be three-dimensional and be used as material in educational applications, instructional films or medical simulators. In the educational process the illustrations have an impact on learning, where they provide insight by effectively conveying the information. The main goal of scientific illustrations is to convey information to the viewer, which is done by letting the viewer focus on the important parts instead of the parts that are not interesting. This approach is called visual abstraction and is commonly used in medical illustrations to emphasize important structures without removing them completely from their context. Visual abstraction and its use in medical illustrations are further explained in the following two sections.

2.3.2 Visual Abstraction

Visual abstraction is an important component in illustrative visualization and is inspired by the abstraction techniques from traditional illustration. With abstraction the most important information is conveyed to the viewer, where the visual overload is reduced by letting the viewer focus on what is important. This is often done by emphasizing certain structures and suppressing others to ensure the visibility of the important structures and reduce the visual clutter. The different ways to provide abstraction can be divided into low-level and high-level abstraction techniques as described by Rautek et al. [10], where the low-level techniques deal with how to visualize features of interest and the high-level techniques deal with what to visualize. The low-level techniques represent the artistic style of the illustration. Some examples of handcrafted techniques are silhouettes (or contours), hatching and stippling. The silhouette technique draws lines along the contours to enhance the shape depiction. Hatching and stippling are handcrafted shading techniques, which draw the illustration by only using strokes or small points. In computer graphics many of these techniques have been simulated to provide computerized stylized depiction; the most effective is the silhouette technique, which is often used in surface and volume rendering. The high-level abstraction techniques concern the visibility in the illustration, where the most important information is uncovered to provide visibility of the more important features. Some examples of illustration techniques that are used in technical and medical illustrations are cut-away views, ghosted views and exploded views. These change the level of abstraction or the spatial arrangement of features to reveal the important information. These techniques are also called focus+context techniques and are referred to by Viola et al. [12] as smart visibility techniques.

2.3.3 Cut-away Views and Ghosted Views

In technical and medical illustrations it is often important to visualize the interiors to understand the relation between different parts. However, without the context it is hard to see the spatial relationship and put together a mental picture of how parts are related. Different techniques have therefore been developed to be able to focus on important features while still maintaining the context.

In the area of information visualization these are often called focus+context techniques and have become a key component in illustrative visualization [10]. Cut-away views and ghosted views are techniques used in traditional illustrations to apply this sort of abstraction to the data. The techniques use different approaches to reveal the most important parts in an illustration. In cut-away views the occluding parts are simply cut away to make the important parts visible, whereas ghosted views remove the occluding parts by fading them away. Both techniques act on the occluding parts, which are either removed or faded. This results in an illustration that focuses on the important parts without removing them completely from their context. An example of a cut-away and a ghosted view can be seen in figure 2.4.

Figure 2.4: Cut-away and ghosted illustration of a sphere. (a) Whole sphere, (b) Cut-away sphere, (c) Ghosted sphere

Medical illustrations have used cut-away views, ghosted views and other similar techniques for centuries, where they helped the viewers recognize what they were looking at. They were already used by Leonardo da Vinci in the beginning of the 16th century in his drawings of anatomical structures, as shown in figure 2.5. Nowadays, these techniques are frequently used, since we still gain most information from unknown data by seeing only small portions of the data, as described by Krüger et al. [7].

Figure 2.5: Medical illustrations by Leonardo da Vinci (Courtesy of The Royal Collection © 2005, Her Majesty Queen Elizabeth II)

2.3.4 Visibility Control

High-level visual abstraction, as explained in section 2.3.2, is one of the main components in illustrative visualization. This abstraction technique reveals the most important features by controlling the visibility in illustrations. In order to achieve similar things in illustrative visualization the visibility needs to be controlled during volume rendering. Importance-driven volume rendering is introduced in the work by Viola et al. [11], where importance is defined as a visibility priority that determines the visibility of features in the rendering. In their work a high importance gives a high visibility priority in the rendering, which ensures the visibility of the important features. The rendering is based on segmented data, uses two rendering passes and consists of the following steps:

1. Importance values are assigned to the segmented data
2. The volume is traversed to estimate the level of sparseness
3. The final image is rendered with respect to the object importance

Objects occluding more important structures in the rendering are rendered more sparsely to reveal the important structures. With this feature enhancement it is possible to achieve both cut-away and ghosted views. Another approach is context-preserving volume rendering, introduced by Bruckner et al. [2]. This approach modulates the opacity based on volume illumination, where regions that receive little illumination are emphasized, for example the silhouettes of an object. The opacity is modulated using the shading intensity, gradient magnitude, distance to the eye and previously accumulated opacity. This makes it possible to explore the interiors of a data set without the need for segmentation. The context-preserving volume rendering can be implemented in a single-pass fragment shader on the GPU. This makes the approach much more efficient than importance-driven volume rendering, which requires multiple rendering passes. However, both of these approaches depend on data parameters and give only indirect control over the focus location. A different approach, called ClearView, was introduced by Krüger et al. [7]: a context-preserving hot spot visualization technique. With this approach a user has direct control over the focus and can interactively explore the data sets. This technique uses a context layer and a focus layer which are rendered separately and composed into a final image. The contents of the layers are defined by the user together with a focus point, which allows for an interactive focus+context exploration.

2.3.5 Textual Annotations

Textual annotations, labels or legends are techniques often seen in illustrations. These are used to describe the illustration and thus make the identification of different parts easier. This creates more meaningful illustrations, which the viewer can relate to and understand better. This is often used in medical illustrations, where educational material uses textual annotations to explain anatomical structures. In anatomy education this helps medical students identify structures and see their relation to other structures. In the work by Bruckner et al. [3] an illustrative volume rendering system called VolumeShop was developed with the intent to provide a system for medical illustrators. Within this system, textual annotations were implemented to match traditional illustrations and simplify the orientation in the interactive system. In this implementation an anchor point is connected with a line to a label, which is placed with the following guidelines for all objects in the data.

- Labels shall not overlap
- Lines connecting a label and its anchor point shall not cross
- A label shall be placed as close as possible to its anchor point

Moreover, the annotations are placed along the silhouette of the object, in order not to be occluded by the object. To do this the algorithm approximates the silhouette and places the labels at the closest distance to their anchor point, but outside of the silhouette. This results in rendered textual annotations that, for example, describe a medical data set with a text label for each anatomical structure.

2.4 Voreen

Voreen [13] is a volume rendering engine developed by the Visualization and Computer Graphics Research Group (VisCG) at the Department of Computer Science at the University of Münster. The software is open source and built in C++. Voreen provides a framework for rapid prototyping of ray-casting-based volume visualizations, where a data-flow network concept is used to provide flexibility and reusability. The network consists of processors, ports and properties. The nodes in the network are called processors, which have ports of different types (e.g. volume, data and geometry) to transfer data between them. The properties are used to control the processors and can be linked between different processor nodes. An example of a Voreen network is shown in figure 2.6.

Figure 2.6: The standard workspace in VoreenVE

The environment in figure 2.6 is called VoreenVE and is developed together with Voreen.

VoreenVE provides an environment to visualize the network, where processors are visualized as nodes and can interactively be added, removed or connected to other processors. The environment also simplifies changing user-defined parameters with interactive GUI widgets, for example sliders, color pickers and transfer function widgets.

Chapter 3
Theory

This chapter describes the theory behind the illustrative techniques. This involves the composition scheme, importance-aware composition, and the shading model, tone shading. These have been chosen to achieve both high- and low-level abstraction in the illustrative visualization of medical data.

3.1 The Importance-aware Composition Scheme

In order to have visibility control in the visualization, the method importance-aware composition [4] was chosen. This is a method closely related to the visibility control techniques presented in section 2.3.4. The front-to-back composition equation 2.6 is modified in the importance-aware composition method to also measure sample importance, which makes it possible to achieve importance-based visibility control in a single rendering pass. In the composition equation 2.6 the visibility (transparency) can be obtained as one minus the accumulated opacity, 1 - \hat{\alpha}_i, where the visibility is a value in [0,1]. This means that the visibility of a sample i can be controlled by modulating the accumulated opacity \hat{\alpha}_{i-1} of the previous samples. Obviously, a sample would be fully visible if all previous samples were invisible. To modulate the opacity based on the importance we thus need to control the visibility through sample importance and accumulated importance, as explained by Pinto et al. [4]. This is done with a visibility function of the sample importance I_i and the accumulated importance \hat{I}_{i-1}, which computes the minimum visibility required for a sample i.

    vis(I_i, \hat{I}_{i-1}) = 1 - \exp(\hat{I}_{i-1} - I_i)    (3.1)

Using the visibility function in equation 3.1 the opacity and color can be modulated with a scale (modulation) factor m as follows,

    m_i = \begin{cases} 1 & \text{if } I_i \le \hat{I}_{i-1} \\ 1 & \text{if } 1 - \hat{\alpha}_{i-1} \ge vis(I_i, \hat{I}_{i-1}) \\ \frac{1 - vis(I_i, \hat{I}_{i-1})}{\hat{\alpha}_{i-1}} & \text{otherwise} \end{cases}    (3.2)

which is applied for each sample i in the composition step. The scale factor m in equation 3.2 modifies the accumulated opacity and color when the sample importance is greater than the accumulated importance and the visibility given by the accumulated opacity is less than the required minimum visibility.

With this we can obtain an importance-aware composition scheme that is valid for opaque samples, as described by Pinto et al. [4],

    \hat{C}_i = m\hat{C}_{i-1} + (1 - m\hat{\alpha}_{i-1})\, C_i
    \hat{\alpha}_i = m\hat{\alpha}_{i-1} + (1 - m\hat{\alpha}_{i-1})\, \alpha_i
    \hat{I}_i = \max(\hat{I}_{i-1}, I_i)    (3.3)

where the accumulated opacity is always one for opaque samples. However, for translucent samples the accumulated importance computation also needs to involve the sample opacity. This can be seen as a measurement of sample relevance, where a sample with zero opacity should not have any influence on the visualization, as explained by Pinto et al. [4]. This leads to equation 3.4, where the accumulated importance \hat{I}_i is computed based on the previously accumulated importance \hat{I}_{i-1}, the current sample importance I_i and the sample opacity \alpha_i.

    \hat{I}_i = \max(\hat{I}_{i-1}, \ln(\alpha_i + (1 - \alpha_i)\exp(\hat{I}_{i-1} - I_i)) + I_i)    (3.4)

With this we can finally write the complete importance-aware composition scheme:

    \hat{C}_i = m\hat{C}_{i-1} + (1 - m\hat{\alpha}_{i-1})\, C_i
    \hat{\alpha}_i = m\hat{\alpha}_{i-1} + (1 - m\hat{\alpha}_{i-1})\, \alpha_i
    \bar{\alpha}_i = \bar{\alpha}_{i-1} + (1 - \bar{\alpha}_{i-1})\, \alpha_i
    \bar{C}_i = 0 \text{ if } \hat{\alpha}_i = 0, \text{ otherwise } \bar{C}_i = \frac{\bar{\alpha}_i}{\hat{\alpha}_i}\, \hat{C}_i
    \hat{I}_i = \max(\hat{I}_{i-1}, \ln(\alpha_i + (1 - \alpha_i)\exp(\hat{I}_{i-1} - I_i)) + I_i)    (3.5)

In equation 3.5 the unmodulated opacity \bar{\alpha}_i is used to scale up the accumulated color \hat{C}_i, to ensure a desired composition of a low-opacity/high-importance sample followed by a high-opacity/low-importance sample, as described by Pinto et al. [4]. To use the importance-aware composition scheme the sample importance needs to be measured for each sample in the composition step. Several importance measurements are presented in the work by Pinto et al. [4], but only some are used in this implementation. These are the measurements of intensity, gradient and silhouetteness. The implementation of these measurements together with the importance-aware composition scheme is further explained in section 4.1.3.
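To make the scheme concrete, the following is a minimal GLSL sketch of one composition step implementing equations 3.1-3.5. This is an illustration under assumed variable names, not the thesis' actual shader code, and the sample color is treated as opacity-weighted when accumulated.

    // One step of importance-aware composition (equations 3.1-3.5).
    // acc      : accumulated color (rgb) and modulated opacity (a)
    // accAlpha : unmodulated front-to-back opacity
    // accImp   : accumulated importance
    void compositeImportance(inout vec4 acc, inout float accAlpha,
                             inout float accImp, in vec4 smpl, in float smplImp) {
        // Minimum visibility required for this sample, equation (3.1)
        float vis = 1.0 - exp(accImp - smplImp);

        // Modulation factor m, equation (3.2)
        float m = 1.0;
        if (smplImp > accImp && (1.0 - acc.a) < vis)
            m = (1.0 - vis) / acc.a;

        // Modulated front-to-back composition, equation (3.5)
        acc.rgb = m * acc.rgb + (1.0 - m * acc.a) * smpl.a * smpl.rgb;
        acc.a   = m * acc.a   + (1.0 - m * acc.a) * smpl.a;

        // Unmodulated opacity, used to rescale the final color
        accAlpha = accAlpha + (1.0 - accAlpha) * smpl.a;

        // Accumulated importance for translucent samples, equation (3.4);
        // GLSL's log() is the natural logarithm
        accImp = max(accImp, log(smpl.a + (1.0 - smpl.a) * exp(accImp - smplImp)) + smplImp);
    }

After the last sample, the final pixel color would be rescaled by the ratio of the unmodulated to the modulated opacity whenever the modulated opacity is greater than zero, following equation 3.5.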

3.2 The Tone Shading Model

Tone shading, presented by Gooch et al. [6], is a non-photorealistic (NPR) shading technique that is based on technical illustrations, where surfaces are often shaded in both hue and luminance. From observation it is known that we perceive warm tones, such as red, orange or yellow, as closer to us and cool tones like blue, purple or green as farther away. Shadows are for example perceived in a bluish tone closer to the horizon, due to the scattering effect. Therefore it is possible to improve the depth perception by interpolating from a warm tone to a cool tone in the shading. By that, a clearer picture of shapes and structures can be obtained, as described by Gooch et al. [6]. Tone shading can either be used as a variant of or an extension to the existing local illumination model. Most commonly it is used to modify the diffuse term in the Blinn-Phong model, which is described in section 2.2.6. The diffuse term of the Blinn-Phong model determines the intensity of diffusely reflected light with (n · l), where n is the surface normal and l is the light vector. This gives the full range of angles [-1, 1] between the vectors, but to avoid surfaces being lit from behind, the model uses max((n · l), 0), which restricts the range to [0, 1]. However, by doing this the shape information in the dark regions is hidden, which makes the actual shape of the object harder to perceive. Unlike this, the tone shading model uses the full range [-1, 1] to interpolate from a cool color to a warm color, as shown in the following equation,

    I = \left(\frac{1 + (n \cdot l)}{2}\right) k_a + \left(1 - \frac{1 + (n \cdot l)}{2}\right) k_b    (3.6)

where l is the light vector and n is the normalized gradient of the volume. In equation 3.6 the terms k_a and k_b are derived from a linear blend between the colors k_cool and k_warm and the color of the transfer function k_t, as shown in the following equations,

    k_a = k_{cool} + \alpha k_t    (3.7)
    k_b = k_{warm} + \beta k_t    (3.8)

where the factors α and β are parameters between 0 and 1 that control the contribution of the sample color k_t. With these equations the tone shading model can be evaluated in a fragment shader, where the shading is applied to the samples on a per-fragment basis. An example of tone shading is shown in figure 3.1.

Figure 3.1: Tone shading of a red object with blue/yellow tones
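A small GLSL sketch of equations 3.6-3.8 could look as follows; the names kCool_, kWarm_, alpha_ and beta_ are assumed uniforms set from the user interface, not identifiers from the thesis code.

    // Tone shading (equations 3.6-3.8): interpolates between a cool and a warm
    // color over the full [-1, 1] range of n . l.
    vec3 toneShading(vec3 n, vec3 l, vec3 kt) {
        vec3 ka = kCool_ + alpha_ * kt;          // equation (3.7)
        vec3 kb = kWarm_ + beta_  * kt;          // equation (3.8)
        float t = (1.0 + dot(n, l)) * 0.5;       // maps [-1, 1] to [0, 1]
        return t * ka + (1.0 - t) * kb;          // equation (3.6)
    }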

Chapter 4
Implementation

In this thesis, an application for anatomy education has been developed using the Voreen volume-rendering engine and the visualization environment VoreenVE [13]. In Voreen, a module with a new set of processors has been developed to create an illustrative visualization of anatomical structures. The application uses the Qt framework for the graphical user interface, and the graphics library OpenGL and the shading language GLSL for the visualization and volume rendering implementation. The following sections describe the implementation of the chosen illustrative methods.

4.1 Illustrative Ray Casting

This section describes the ray casting process and how it was implemented to achieve an illustrative visualization. The ray caster was constructed to allow both non-segmented and pre-segmented data. The ray caster processor renders the segmented volume data by receiving entry and exit points as well as volume data and segmented volume data. The ray casting loop is performed in a single-pass fragment shader implemented in the OpenGL Shading Language (GLSL). The entry and exit points are used to utilize rasterization in the GPU ray casting process, as explained in section 2.2.4. In the fragment shader each ray is cast from the eye towards the volume, where the ray direction is computed from the entry and exit points of the volume. The ray casting loop then iteratively steps through the volume along the ray, samples the 3D volume texture using tri-linear interpolation, applies the transfer function and shading, and performs composition to achieve the final rendering. To achieve illustrative visualization of the segmented data a couple of additional methods have been implemented. These include a segmentation transfer function, tone shading and an importance-aware composition implementation. They have been implemented as described in the following sections.

4.1.1 Segmentation Classification

In the classification step the data is mapped to optical properties in the volume rendering integral. This is often done using a transfer function, which maps the samples to color and opacity. In this implementation the ray casting is done on the GPU and the transfer functions are stored as 1D textures. The textures are passed to the fragment shader, where RGBA samples are found through texture lookups as described in section 2.2.5.

However, with segmented volume data this becomes a bit different. In this case each segment can have its own transfer function, where a different color and opacity can be applied for each segment. The implementation of the segmentation classification is based on the SegmentationRaycaster processor in Voreen. In this processor the volume and the segmented volume are sent as 3D textures to the shader, whereas the multiple transfer functions are sent as a single 2D texture in which all the segments' 1D transfer functions are stored, as shown in figure 4.1.

Figure 4.1: 1D TF textures stored in a 2D segmentation TF texture

In figure 4.1, each row in the 2D texture corresponds to a segment ID. In order to find the correct 1D transfer function of a segment, the implementation determines the segment ID from the 3D texture of the segmentation volume. This is then used to look up the 1D TF texture that is stored in the 2D segmentation TF texture. The transfer function of a segment can then be found and used in the rendering. In the implementation of the importance-aware ray caster this is used to achieve rendering of segmented volume data with a different transfer function for each segment.
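A minimal sketch of such a lookup in GLSL is shown below; the sampler and uniform names, and the assumption of 8-bit segment IDs stored in a normalized texture, are illustrative rather than taken from the thesis code.

    // Look up the per-segment 1D transfer function stored as one row of a
    // 2D segmentation TF texture.
    vec4 classifySegment(vec3 pos) {
        float segId = texture(segmentation_, pos).r * 255.0;   // segment ID (8-bit assumption)
        float row   = (segId + 0.5) / float(numSegments_);     // center of that segment's row
        float value = texture(volume_, pos).r;                 // scalar sample
        return texture(segmentationTF_, vec2(value, row));     // color and opacity
    }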

4.1.2 Tone Shading

Tone shading is implemented as a shading technique in the fragment shader to achieve illustrative shading in the visualization. This technique uses a warm and a cool color to increase the perception of depth and shapes, as described in section 3.2. The technique is implemented similarly to other shading techniques in Voreen, such as the Blinn-Phong shading. The diffuse term of the Blinn-Phong model is replaced with the tone shading model in equation 3.6. This way the ambient and specular terms were kept as part of the model. To have a more flexible ray casting processor the tone shading technique was added to a drop-down box containing the existing shading techniques. The parameters of the technique were also made updatable, where the factors and the warm and cool colors were made definable in VoreenVE through sliders and color pickers. The parameter setup for the tone shading technique can be seen in figure 4.2.

Figure 4.2: Tone shading parameters

4.1.3 Importance-aware Composition

The importance-aware composition method was implemented to provide visibility control in the illustrative visualization. This was achieved by replacing the traditional composition method in the GPU ray casting loop with one that is based on sample importance, as described in section 3.1. The composition method was implemented in the fragment shader as shown in algorithm 1.

Algorithm 1 Importance-aware Composition
    Set accumulated importance to zero
    for all samples do
        Compute sample importance (I_i)
        Set scale factor (m)
        Perform composition scheme and scale the result
        Accumulate importance (\hat{I}_i)
    end for

The sample importance can be computed with several measurements, as proposed by Pinto et al. [4]. These are calculated for each sample during rendering and are used to emphasize features in different ways based on their importance. However, these measurements do not depend on segmented data, which this thesis also covers. For this reason a measurement for segmented data has also been implemented, among other measurements proposed by Pinto et al. [4], such as the intensity, gradient and silhouetteness measurements, and measurements for suppressing structures and achieving focus+context visualizations. The importance measurement is implemented in the fragment shader to compute the sample importance value, where the measurements are combined in a weighted sum to compute the final importance value. This is done together with a global weight that scales the weighted sum, I_i = W_global (W_1 I_1 + ... + W_n I_n), where a global weight of zero results in zero importance and thus a traditional composition. In the implementation every weight is passed to the shader and can be changed in VoreenVE with sliders from 0 to 1, as seen in figure 4.3. The following importance measurements are the ones that have been implemented; a combined sketch in GLSL is given after the last measurement.

Intensity: The intensity measurement was implemented as described by Pinto et al. [4]. In this measurement the visibility is ensured for samples with high intensity,

    I_S = W_{intensity} \cdot intensity    (4.1)

where W_{intensity} is the corresponding weight and can be used to control the sample importance. This was implemented by using the intensity of the samples.

Figure 4.3: Importance Measurements Parameters

Gradient Magnitude: The gradient magnitude measurement ensures the visibility of the strongest boundaries with

    I_S = W_{gradient} \cdot gradient    (4.2)

where the magnitude of the gradient is obtained from the sample in the implementation.

Silhouetteness: The silhouetteness measurement was implemented to emphasize the silhouettes in the rendering. It measures how much a sample belongs to a silhouette using the normalized view vector V, the normalized gradient N and the gradient magnitude m_G, as described by Pinto et al. [4].

    sil = m_G^{\,p} \cdot smoothstep(s_1, s_2, 1.0 - |V \cdot N|)    (4.3)
    I_S = W_{sil} \cdot sil    (4.4)

By changing the influence of the gradient magnitude (p) or the slope of the smoothstep function (s_1, s_2) the look of the silhouettes is controlled. However, this only ensures the visibility of silhouettes, so to make them more distinguishable the sample color C_i is scaled with a factor ρ that makes the silhouettes darker, exp(-ρ · sil). The silhouettes become darker as the silhouetteness importance increases. This was implemented in the composition step by using the gradient obtained from the sample and the normalized view vector obtained as the difference between the view position and the sample position.

Segment Visibility: Another measurement was implemented to be able to control the visibility of segments in segmented volume data. First a 1D texture is generated in the ray caster processor that stores the visibility values [0,1] of each segment. This is then passed to the shader, where the segment ID of the sample is used to look up the corresponding visibility value. The sample importance is then measured with the visibility value and a visibility weight, which is used to control the importance of the visible segments.

Background: The intensity, gradient and silhouetteness measurements can be used to emphasize important structures. However, to get a much clearer picture of the important structures, the unimportant structures can be de-emphasized and suppressed.

As described by Pinto et al. [4], this can be accomplished by considering the background as a layer of opaque samples that have an importance assigned to them together with an adjustable weight. This is implemented by considering the last sample in the ray traversal as the background, where the sample color is set to the background color. When a background sample is reached, the sample importance is set to the background weight. By adjusting the weight, the background is made visible through the volume as it suppresses the less important structures.

Focus+Context: The combined weighted sum is scaled with a global weight to control all the weights with the same variable, as described previously. However, we can also scale the weights using a per-ray global weight, where the weights are scaled differently for each ray. This can be used to achieve focus+context visualizations, as described by Pinto et al. [4]. In the implementation of the widget, a focus is defined by a circular area which can be interactively dragged and resized in the view. This is implemented as a global weight in the composition step, where the weight is obtained from the current ray (pixel) position P_coord, the position P_focus and the radius r_focus of the focus area, as shown in the following algorithm.

Algorithm 2 Focus+context weight
    P = P_focus \cdot D
    r = min(D_x, D_y) \cdot r_focus
    l = length(P_coord - P)
    Compute the weight W_focus = smoothstep(r - 1, r + 1, l)

In algorithm 2 the viewport dimension D is used together with the min() function to make it possible to have a tall and narrow viewport as well as a wide and low one. By using -1 and +1 to adjust the step width of the smoothstep() function, the circle is anti-aliased independently of the viewport resolution. It is also possible to achieve a softer circle by changing the step width.
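As a combined illustration of how these measurements could be put together in the fragment shader (a sketch with assumed uniform names such as wIntensity_, wGlobal_ and pFocus_, not the thesis' exact code or weighting scheme), the per-sample importance and the per-ray focus weight of algorithm 2 might look as follows:

    // Per-ray focus weight (algorithm 2); pFocus_ and rFocus_ are assumed to be
    // given in normalized viewport coordinates.
    float focusWeight(vec2 fragCoord, vec2 viewportDim) {
        vec2  P = pFocus_ * viewportDim;                 // focus centre in pixels
        float r = min(viewportDim.x, viewportDim.y) * rFocus_;
        float l = length(fragCoord - P);
        return smoothstep(r - 1.0, r + 1.0, l);          // 0 inside the circle, 1 outside
    }

    // Weighted combination of the importance measurements for one sample.
    float sampleImportance(float intensity, float gradMag, float sil,
                           float segVisibility, bool isBackground, float wFocus) {
        if (isBackground)
            return wBackground_;                         // background layer importance
        float imp = wIntensity_ * intensity              // equation (4.1)
                  + wGradient_  * gradMag                // equation (4.2)
                  + wSil_       * sil                    // equation (4.4)
                  + wSegment_   * segVisibility;         // segment visibility measurement
        return wGlobal_ * wFocus * imp;                  // global and per-ray focus scaling
    }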

4.2 Labeling of Segmented Data

Textual annotations (or labels) are implemented to add descriptions to the visualization of segmented data, as described in section 2.3.5. With this a user can more easily identify the different segments in the visualization. The labeling implementation is based on the Labeling processor in Voreen, which can be used to add illustrative labels to visualizations of segmented data. However, this processor has been extended and modified to be more interactive and to include an information panel (or labeling widget), where the segments are presented in a hierarchical list together with an information view. The implementation of this panel is further explained in section 4.3.3. The labeling process consists of the following steps: read the segmentation description file, generate the labels, position the labels and render the labels to the screen. These steps are described in the following sections.

4.2.1 Segment Description File

The labeling processor in Voreen uses an XML file to describe the segments with information about id and caption. However, this was extended with group, name and info nodes to be able to have a tree hierarchy of labels and label groups, as shown in the following example file.

    <group>
        <name>Top Level</name>
        <info>The top level item</info>
        <group>
            <name>Node</name>
            <info>The node item</info>
            <label>
                <id>0</id>
                <caption>Leaf</caption>
                <info>The leaf item</info>
            </label>
        </group>
    </group>

Listing 4.1: Example of a segment description file

Here the Top Level item is a parent of the Node item and the Node item is a parent of the Leaf item. This results in the following tree structure: Top Level → Node → Leaf. With this, a tree hierarchy list could be achieved, which is used in the labeling widget presented in section 4.3.3.

4.2.2 Layout Algorithm

The Labeling processor uses an IDRaycaster that renders an ID map used to position the labels. The IDRaycaster receives the entry and exit points of the volume, the segmented volume data and the first hit points of the volume data ray casting result. The resulting ID map is a color coded map, where the segmentation IDs are stored in the alpha channel together with the first hit positions in the three color channels. In the Labeling processor this is used to place the labels at the correct positions, where the ID map tells which segments are visible at the moment. To place the labels the processor applies a distance transform (or distance map) to the ID map, which stores the closest distance to the segment border for each pixel. This is used to place the anchor points according to the size of the segment and the distance from the particular pixel to the segment border. The labels are then placed according to the guidelines in section 2.3.5, where the labels should be placed near the anchor point but outside of the object's borders, without overlapping another label or causing an intersection with another connection line. This is done by approximating the silhouette of the object with a convex hull algorithm, which computes the convex shape of a set of points. This can be seen as an elastic band that is stretched open and released to fit the boundary of the object, as seen in figure 4.4. The convex hull is calculated in the Labeling processor, where the silhouette points of the ID map are used to give an approximation of the silhouette.

Figure 4.4: Convex hull: A set of points enclosed by an elastic band

The approximated silhouette is then used in the placement of the labels, where the labels are placed outside of the convex hull at the closest distance to their anchor point. Finally, the label positions are corrected for line intersections and label overlaps, and the labels can be rendered to the screen. The placement of labels is illustrated in figure 4.5.

Figure 4.5: The placement of labels

Rendering

In the rendering step the labels, anchor points and connection lines are rendered. This is done in two rendering passes, where the first one renders halos around the anchor points and connection lines. This is done by using thicker lines and points colored with a specified halo color. The next pass renders the anchor points and connection lines with normal thickness and colors them with the same color as the label text. After that, the pass renders quads at the label positions and maps the font texture onto them. The font texture is pre-generated for each label in the XML file: the caption of the label is rendered to a bitmap using the font rendering library FreeType [1] and bound to a texture. However, to also be able to mark certain labels, a selection color is added to the labeling processor. This is used in the font rendering to highlight the selected labels with a font color different from the specified label color. How the label selection is implemented is further explained in section.
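The two-pass halo drawing of the connection lines described above can be sketched roughly as follows, using immediate-mode OpenGL for brevity; the Line structure, line widths and color handling are illustrative assumptions and not the actual Voreen rendering code.

    #include <GL/gl.h>
    #include <vector>

    struct Line { float x0, y0, x1, y1; };

    void drawConnectionLines(const std::vector<Line>& lines,
                             const float haloColor[3], const float textColor[3]) {
        // Pass 1: halos, drawn with thicker primitives in the halo color.
        glColor3fv(haloColor);
        glLineWidth(4.0f);
        glBegin(GL_LINES);
        for (const Line& l : lines) { glVertex2f(l.x0, l.y0); glVertex2f(l.x1, l.y1); }
        glEnd();

        // Pass 2: the same lines at normal thickness in the label text color,
        // drawn on top so that only a thin halo border remains visible.
        glColor3fv(textColor);
        glLineWidth(1.5f);
        glBegin(GL_LINES);
        for (const Line& l : lines) { glVertex2f(l.x0, l.y0); glVertex2f(l.x1, l.y1); }
        glEnd();
    }

The anchor points are handled in the same way, using larger point sizes for the halo pass.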

4.3 Anatomy Application

Dissections, plastic models and textbooks are often used as aids in anatomy education, as explained in section 2.1. However, computerized technology offers new possibilities in how the teaching can be done. For this purpose a prototype of an anatomy application has been implemented. Textbooks with medical illustrations are often used as aids in anatomy education; these provide abstraction, which is crucial for effective illustrations. For this reason the prototype is based on illustrative visualization techniques to achieve abstraction in the visualization. In the prototype a pre-segmented human body data set is used, which has been provided by the Center for Medical Image Science and Visualization (CMIV) in Linköping.

Design and User Interface

In the design of the anatomy application the illustrative ray casting and labeling components are used together, where the Compositor processor is used to blend the renderings together. The network of the system can be seen in figure 4.6, which is taken from VoreenVE. The user interface is designed to allow a user to interactively explore the anatomical structures, where the illustrative rendering can be controlled and information about the segmented data can be presented. This is achieved by the implementation of a focus+context widget and a labeling widget, as described in the following sections.

Focus+Context Widget

The focus+context widget is implemented to interactively control the position and radius of the focus area. It is implemented as a geometry renderer in Voreen, which is used together with a geometry processor to be able to have multiple geometry rendering processors, as seen in figure 4.6. In the focus+context widget a draggable and resizable 2D circle is rendered on the view plane. The circle is rendered and made clickable through the methods render() and renderPicking() derived from the GeometryRendererBase class in Voreen. In the render() method the outer border of the circle is rendered using its color, position and radius, where the lines are anti-aliased using GL_LINE_SMOOTH. The renderPicking() method is used to render the pickable regions to an IDManager object, which color codes the pickable regions and stores them in a render target. The method performs the rendering similarly to the render method, but the inner part of the circle is rendered instead of the outer border lines. This allows the user to pick the circle by clicking anywhere within the circle. The circle can then be dragged or resized by checking isHit() in the IDManager object to see whether the circle has been picked or not. The circle is dragged by saving its initial position (p_x, p_y) and the mouse coordinates (x_0, y_0) when the circle is hit. The circle position P is then updated according to the new mouse coordinates (x, y) until the user releases the circle. To resize the circle, the initial radius r is saved instead. The circle radius R is then updated according to the change in the y direction: for a positive change in the y direction the circle is enlarged, and vice versa. The drag and resizing computation can be seen in algorithm 3.

Algorithm 3 Drag and resize circle with mouse
    if isClicked then
        Δx = (x - x_0) / D_x
        Δy = (y_0 - y) / D_y
        if dragCircle then
            // Update circle position P
            P_x = p_x + Δx
            P_y = p_y + Δy
        else if resizeCircle then
            // Update circle radius R
            offset = (Δx, Δy)
            resizeDir = (0, 1)
            factor = 1 / (1 + offset · resizeDir)
            R = r · factor
        end if
    end if
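A CPU-side C++ sketch of this drag and resize computation is given below; the struct and variable names are illustrative, and the sign conventions follow the reconstruction of algorithm 3 above.

    // Sketch of algorithm 3: dragging and resizing the focus circle with the
    // mouse. Struct and variable names are assumptions for illustration.
    struct FocusState {
        float px, py;   // circle position when the interaction started
        float r;        // circle radius when the interaction started
        float x0, y0;   // mouse position (pixels) when the circle was hit
    };

    struct Viewport { float dx, dy; };

    void updateCircle(bool dragCircle, bool resizeCircle,
                      float mouseX, float mouseY,
                      const FocusState& s, const Viewport& vp,
                      float& outPx, float& outPy, float& outR) {
        // Mouse movement relative to the hit point, normalized by the viewport size.
        float dx = (mouseX - s.x0) / vp.dx;
        float dy = (s.y0 - mouseY) / vp.dy;
        if (dragCircle) {
            // Translate the circle by the normalized mouse offset.
            outPx = s.px + dx;
            outPy = s.py + dy;
        } else if (resizeCircle) {
            // Only the y component affects the radius (resize direction (0, 1)).
            float factor = 1.0f / (1.0f + dy);
            outR = s.r * factor;
        }
    }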

Figure 4.6: The network of the anatomy application

Labeling Widget

A labeling widget was created to be able to interactively change which organs are important to see in the visualization. This is added as a processor widget to the labeling processor. The widget is created as an abstraction layer in the Labeling class and is implemented in VoreenQt, the Qt GUI library of Voreen. In the Qt implementation a view is set up to hold a text label, a text area, a tree view and three buttons, as seen in figure 4.7. The tree view is implemented to organize the anatomical structures according to which biological system they belong to and to group structures that belong together with other structures; for example, a group was created for the heart, and its different parts were included as children of the group.
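A sketch of the panel layout described above (a text label, a text area, a tree view and three buttons) is shown below in plain Qt; the widget types and the label caption are assumptions and do not reproduce the VoreenQt implementation.

    #include <QWidget>
    #include <QVBoxLayout>
    #include <QHBoxLayout>
    #include <QLabel>
    #include <QTextEdit>
    #include <QTreeWidget>
    #include <QPushButton>

    // Builds a panel with the widgets described in the text; the caption of the
    // text label is an assumption for this example.
    QWidget* createLabelingPanel() {
        auto* panel = new QWidget;
        auto* layout = new QVBoxLayout(panel);
        layout->addWidget(new QLabel("Selected structure"));   // text label
        layout->addWidget(new QTextEdit);                      // information text area
        layout->addWidget(new QTreeWidget);                    // hierarchical list of structures
        auto* buttons = new QHBoxLayout;
        buttons->addWidget(new QPushButton("Show all segments"));
        buttons->addWidget(new QPushButton("Show/Hide"));
        buttons->addWidget(new QPushButton("Hide others"));
        layout->addLayout(buttons);
        return panel;
    }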

The different biological systems are groups of organs that work together to achieve certain tasks, for example the circulatory, digestive and respiratory systems, where for example the heart belongs to the circulatory system. These systems were chosen for the implementation since they are often studied in human anatomy.

Figure 4.7: Layout of the Labeling widget

In the tree view a tree hierarchy view is created, which is filled with labels and label groups from the segmentation description file, as described in listing 4.1. When traversing the XML file, the labels and label groups are added to the tree view according to their parent: if an item has a parent, the parent is found in the tree and the item is added as a child to it. This way the hierarchy in the XML file is translated to the tree view. For each item a checkbox is also added. The text area and text label are used to show information about the labels. These are updated when a label or label group is selected in the tree view, which is done by linking the selection in the tree view to the text area and text label. The selection is also linked to the labels in the rendering, where a selected label is highlighted with a chosen color. These rendered labels were also made clickable through an IDManager object, similarly to the focus+context widget in section.
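How such a tree with per-item checkboxes can be populated is sketched below with Qt's QTreeWidget; the helper function and the use of Qt::UserRole for the segment ID are assumptions for illustration, not the actual VoreenQt code.

    #include <QString>
    #include <QTreeWidget>
    #include <QTreeWidgetItem>

    // Adds a group or label item with a visibility checkbox; segmentId is -1 for groups.
    QTreeWidgetItem* addItem(QTreeWidget* tree, QTreeWidgetItem* parent,
                             const QString& name, int segmentId) {
        auto* item = parent ? new QTreeWidgetItem(parent)
                            : new QTreeWidgetItem(tree);
        item->setText(0, name);
        item->setData(0, Qt::UserRole, segmentId);  // used later to toggle visibility
        item->setCheckState(0, Qt::Checked);        // visibility checkbox
        return item;
    }

    // Mirrors the hierarchy of listing 4.1: Top Level -> Node -> Leaf.
    void buildExampleTree(QTreeWidget* tree) {
        QTreeWidgetItem* top  = addItem(tree, nullptr, "Top Level", -1);
        QTreeWidgetItem* node = addItem(tree, top, "Node", -1);
        addItem(tree, node, "Leaf", 0);
    }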

A rendering target for picking is used to render the quads of the labels as color-coded regions. This is then used to determine whether a label is hit or not, using the mouse coordinates and the isHit() method of the IDManager. A picked label is highlighted in the view and set as selected in the tree view.

In order to change which organ or system should be visible, a process was implemented to toggle the visibility of segments (organs). Using the segmentation visibility measurement described in section 4.1.3, the visibility is changed by updating the 1D texture. The visibility can be changed in several ways: either by selecting the checkbox icon of one of the items in the tree view, by the button Show all segments, or by one of the buttons Show/Hide or Hide others when an item is selected. When an action is performed on a label group or label, it is propagated to the ray caster processor using property linkage. This linkage is done between two processors in VoreenVE and allows a property to be updated by another property of the same type. For example, when choosing an action on a selected label, the action and segment ID are set in the labeling processor, which automatically sets the same properties in the ray caster processor. This processor then updates the visibility texture according to the action and segment ID. For example, if the action is set to Hide, the corresponding segment ID is found in the visibility texture and set to 0, which means that the segment has no importance in the importance-aware composition and will not be rendered.
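A sketch of how the 1D visibility texture could be updated when a segment is shown or hidden is given below; the one-byte-per-segment layout, the class structure and the use of the red channel are assumptions for illustration, not the exact texture format of the thesis implementation.

    #include <GL/gl.h>
    #include <vector>

    class SegmentVisibility {
    public:
        explicit SegmentVisibility(int numSegments)
            : visibility_(numSegments, 255) {}

        void setVisible(int segmentId, bool visible) {
            // 0 means "no importance": the segment is suppressed by the
            // importance-aware composition and therefore not rendered.
            visibility_[segmentId] = visible ? 255 : 0;
        }

        void upload(GLuint textureId) const {
            // Re-upload the lookup table that the ray caster samples per segment ID.
            glBindTexture(GL_TEXTURE_1D, textureId);
            glTexSubImage1D(GL_TEXTURE_1D, 0, 0,
                            static_cast<GLsizei>(visibility_.size()),
                            GL_RED, GL_UNSIGNED_BYTE, visibility_.data());
        }

    private:
        std::vector<unsigned char> visibility_;
    };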

Chapter 5

Conclusion

5.1 Results

In this section the results of the implementation are presented. The importance-aware composition result is presented first, followed by the tone shading result. Finally, the anatomy application result is presented, which uses the two previous components together with the labeling component. A human hand data set is used as test data for both the importance-aware composition and the tone shading component, and a pre-segmented human body data set is used for the anatomy application.

Result of the Importance-aware Composition

The importance-aware composition is implemented with different sample importance measurements, as explained in section. In this method, the measurements for intensity, gradient magnitude, silhouetteness, background and focus+context are implemented together with a measurement for segmented data, which is used in the anatomy application. To combine multiple importance measurements, a weighted sum with a global weight is used in the implementation, where every measurement has its own weight to control its contribution to the visualization.

The results of the different sample importance measurements are shown in the following figures. The intensity measurement is shown in figure 5.1, where the importance weight is changed from no weight (5.1a) to a moderate (5.1b) and a high weight (5.1c). The gradient magnitude, silhouetteness and background measurements are seen in figure 5.2. In figure 5.2a the gradient magnitude is combined with the intensity measurement in a weighted sum. This increases the importance of the boundaries, which makes the shape more distinct than using only the intensity measurement. In figure 5.2b the silhouetteness and background measurements are also added to the weighted sum. The silhouetteness values s_1 = 0.4, s_2 = 3.0, p = 0.5 are used to create the emphasized contours in the image, and the background is suppressed by using a non-zero background weight. In figure 5.3 the combined weighted sum of the result in 5.2b is scaled with a per-ray global weight. With this, a focus+context visualization is produced, where the focus is defined as a circular area. The step width [radius/2, radius+1] is used to achieve the soft circle area.
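The weighted sum described above can be written roughly as in the sketch below; the measurement inputs and weight names are placeholders rather than the shader code of the thesis, and only three of the measurements are shown.

    // Sketch of combining per-sample importance measurements into a single
    // importance value via a weighted sum scaled by a global (or per-ray) weight.
    struct ImportanceWeights {
        float intensity = 1.0f;
        float gradientMagnitude = 1.0f;
        float silhouetteness = 1.0f;
        float global = 1.0f;  // scaled per ray for the focus+context effect
    };

    float combinedImportance(float intensityImp, float gradientImp,
                             float silhouetteImp, const ImportanceWeights& w) {
        // Each measurement contributes according to its own weight ...
        float sum = w.intensity * intensityImp
                  + w.gradientMagnitude * gradientImp
                  + w.silhouetteness * silhouetteImp;
        // ... and the global weight scales the combined sum.
        return w.global * sum;
    }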

Figure 5.1: The intensity measurement. (a) No intensity weight, (b) moderate intensity weight, (c) high intensity weight.

Figure 5.2: The gradient magnitude, silhouetteness and background measurement. (a) Rendered with intensity and gradient measurements, (b) rendered as in (a) but combined with a silhouetteness measurement and a background measurement.

Figure 5.3: Focus+context visualization

Result of the Tone Shading

The result of the tone shading implementation is seen in figure 5.4, where tone shading (5.4b) is compared with the traditional Blinn-Phong shading (5.4a). In the figure the tone shading is set up using an orange warm tone with factor α = 0.8 and a blue cool tone with factor β = 0.3.

Figure 5.4: Comparison of Blinn-Phong shading (a) and tone shading (b)

Result of the Anatomy Application

The implementation of the anatomy application resulted in an educational tool for anatomy education. An illustrative visualization is achieved by using the importance-aware composition, tone shading and labeling implementations; together they increase the expressiveness of the volume rendering. Within the application a user can explore a human body through a focus+context technique and view information about selected organs. The application interface consists of a 3D canvas view and an information panel. In the canvas view the user controls the volume visualization by rotating, zooming and panning the view. By using the focus+context widget the user can also control the size and position of the circular focus area. The information panel holds the list of organ structures available in the human body data set and presents them in a hierarchical list based on their biological system. Through the panel a user can hide and show specific organs or biological systems. In figure 5.5 the pericardium is selected, which contains the heart and belongs to the circulatory system. The anatomical structures that do not belong to the circulatory, digestive and respiratory systems have been hidden to give a clear view of the pericardium, for example the skin, muscle and skeleton structures. The information panel is shown to the left in figure 5.5, where information about the pericardium is presented and its place in the tree list view is shown. In the canvas view the visualization of the human anatomy is shown, where the pericardium label is highlighted to show the current selection. Another view of the application is shown in figure 5.6, where the digestive and urinary systems are visualized. In this view a user has hidden the other systems in the data set to make only the current ones visible.
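Relating to the comparison in figure 5.4, the sketch below shows a warm/cool tone shading term in the spirit of Gooch-style shading, with the warm and cool factors α and β mentioned above; the blend formula and the RGB values of the tones are assumptions for illustration and not the exact shader used in the thesis.

    struct Color { float r, g, b; };
    struct Vec3  { float x, y, z; };

    static float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
    static float mixf(float a, float b, float t)   { return a + t * (b - a); }

    // n: surface normal, l: light direction (both normalized); kd: material color.
    // alpha and beta are the warm and cool factors (e.g. 0.8 and 0.3 above).
    Color toneShade(const Vec3& n, const Vec3& l, const Color& kd,
                    float alpha, float beta) {
        const Color warm {1.0f, 0.6f, 0.0f};  // orange warm tone (assumed RGB)
        const Color cool {0.0f, 0.0f, 1.0f};  // blue cool tone (assumed RGB)
        // Tone colors blended with the material color by their factors.
        Color kWarm { mixf(kd.r, warm.r, alpha), mixf(kd.g, warm.g, alpha), mixf(kd.b, warm.b, alpha) };
        Color kCool { mixf(kd.r, cool.r, beta),  mixf(kd.g, cool.g, beta),  mixf(kd.b, cool.b, beta) };
        // Map n.l from [-1, 1] to [0, 1]: surfaces facing the light become warm,
        // surfaces facing away become cool.
        float t = (1.0f + dot(n, l)) * 0.5f;
        return { mixf(kCool.r, kWarm.r, t), mixf(kCool.g, kWarm.g, t), mixf(kCool.b, kWarm.b, t) };
    }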

Figure 5.5: The Anatomy Application: Selection on Pericardium

Performance

The performance of the composition and shading methods can be seen in tables 5.1 and 5.2, where the performance is measured for two data sets with different settings. The result is rendered in a 256x256 viewport on the following system: 2.0 GHz Intel Core 2 Duo T6400, 4 GB RAM, ATI Mobility Radeon HD.

Composition scheme            CT Human Hand       CT Human Thorax
                              244x124x…           …256x256
Front-to-back                 18.2 ms (55 fps)    29.4 ms (34 fps)
Front-to-back (no ERT)        27.8 ms (36 fps)    37.0 ms (27 fps)
Importance-aware (no IMs)     28.6 ms (35 fps)    38.5 ms (26 fps)
Importance-aware (all IMs)    38.5 ms (26 fps)    43.5 ms (23 fps)

Table 5.1: Performance measurements of front-to-back composition and importance-aware composition with different settings for importance measurements (IM) and early ray termination (ERT).

Shading method                CT Human Hand       CT Human Thorax
                              244x124x…           …256x256
Blinn-Phong shading           66.7 ms (15 fps)    76.9 ms (13 fps)
Tone shading                  71.4 ms (14 fps)    83.3 ms (12 fps)

Table 5.2: Performance measurement of tone shading and Blinn-Phong shading using front-to-back composition.

Figure 5.6: The Anatomy Application: The Digestive and Urinary System. (a) The front of the human body, (b) the back of the human body.
