3D Visualization of Cerebral Aneurysms. Sanaz Ghodousi. MSc Cognitive Systems 2006/2007


The candidate confirms that the work submitted is their own and that appropriate credit has been given where reference has been made to the work of others. I understand that failure to attribute material which is obtained from another source may be considered as plagiarism. (Signature of student) Sanaz Ghodousi

Summary

This project researched the feasibility of integrating isosurface rendering, a scientific visualization technique, with focus+context techniques from information visualization. The motivation originated in the medical domain: the need to visualize large 3D medical datasets in a way that shows the region of interest (in this project's dataset, an aneurysm) in greater detail while presenting its relation to the context (the surrounding arteries) in the same window. The project progressed towards its objectives by implementing different focus+context techniques and integrating these algorithms with different sample datasets in preparation for the final stage. During this research, 3D visualizations of the aneurysm were produced with different visualization tools, namely VTK and MATLAB. In the final stage, the integration of isosurfacing, the selected volume visualization technique, with three different distortion algorithms (Bifocal, Fisheye and Lens) was implemented. The hope was to implement a tool with which neurosurgeons could apply these focus+context algorithms to 3D visualizations of aneurysms. As a step towards this, a GUI was developed for the same approach applied to a 2D dataset; extending this GUI to the specific 3D visualization is possible given access to sufficient memory resources. The project was evaluated overall by comparing its results with the related research work of Cohen [3], which investigated the same integration of focus+context algorithms with volume rendering rather than isosurfacing.

Acknowledgment

First, I would like to acknowledge the sincere support and guidance of Professor Ken Brodlie, my supervisor. This project could not have been done without his vision and dedication. I can never be thankful enough for all his teaching efforts during the visualization module and the way he taught me all the focus+context concepts. I am really grateful for the insight he gave me into problem solving in the visualization field throughout this project. I also thank my project assessor, Dr. Hubbard, who gave me invaluable hints in the interim report and during the demonstration. I would also like to thank Professor Hogg for his efforts in teaching the vision module, which provided me with the MATLAB experience and image processing knowledge that were applied during the implementation stage of this project. Special thanks to my classmates and friends in Leeds who supported me with their encouragement and especially their enjoyable friendship. Finally, I would like to deeply thank my family for their endless love and support. Specific thanks to my brother, who introduced me to the concept of visualization in GIS for the first time many years ago. I am also deeply grateful to my mother for her constant guidance and encouragement throughout my life.

Table of Contents

1. Introduction
   1.1 3D Visualization and Medical Application
       1.1.1 Motivation
       1.1.2 Objectives
   1.2 Volume Visualization
       1.2.1 Volume Rendering
       1.2.2 Isosurfacing
   1.3 Focus and Context
       1.3.1 Distortion Classification
   1.4 Purpose for Choice of Techniques
2. Methodology
   2.1 Theory of Incremental Approach
   2.2 Fisheye
       2.2.1 Cartesian Fisheye
       2.2.2 Polar Fisheye
   2.3 Bifocal
   2.4 Lens
   2.5 Choice of Tools
       VTK
           Preparation

           Implementation
           Results
3. MATLAB and Volume Visualization Development
   2D
       Fisheye
       Bifocal
       Lens
   3D
   3D Isosurfacing
Evaluation
Conclusion and Future Work
Appendix A: Project Reflection
Appendix B: MATLAB source codes
Appendix C: Sample Evaluation form for 2D GUI of F+C application
Appendix D: MSc Interim Project Report
Bibliography

"Visualization offers a method for seeing the unseen" (Scientific Computing Workshop Report)

Chapter 1: Introduction

Scientific Visualization [1] is a discipline which provides the ability to comprehend large and complex data. This task is done by creating images which convey salient information [2]. The discipline connects traditional sciences like physics, biology and medicine to computer graphics. The medical domain has benefited most from this field due to its increasing need for the visualization of three-dimensional datasets. Enhanced treatment planning, surgery planning and better diagnosis are the goals which have driven the application of visualization techniques. Neurosurgery is the field on which this project focuses, considering the visualization of aneurysms in a way which can help neurosurgeons achieve a precise understanding of the structure of aneurysms for better surgical planning. The concept of aneurysms and the reasons why their visualization is important will be discussed in the next section.

1.1 3D Visualization and Medical Application

The sciences of visualization and medicine cooperate with each other to find new horizons for both disciplines, providing better insight into the ever increasing scale of modern data in these fields. The value of 3D visualization for better understanding 2D medical images has been widely exploited in various medical applications. In recent years, virtual environments have been used in combination with 3D visualization to provide better perception of visualized medical datasets. The aim of this project is to study the application of an innovative volume visualization approach which combines isosurfacing, an established visualization technique, with focus and context, a concept from information visualization, in order to allow better visualization of cerebral aneurysms based on the needs of neurosurgeons.
This approach is partly motivated by earlier research work which resulted in an application providing volume visualization of cerebral aneurysms. In this research work, Cohen's thesis [3], a different visualization technique, volume rendering, was combined with focus and context to study this problem. The motivation for that work emerged from the need of neurosurgeons for efficient visualization of aneurysms, particularly with the hope of applying it in the operating theatre [3].

1.1.1 Motivation

Visualization techniques are chosen mostly based on the application field, since this is where the data comes from. The motivation for this work originates in neurosurgery, a field in which visualization plays a critical role. The 3D visualization of aneurysms is of interest to neurosurgeons since it helps in better understanding traditional medical imaging datasets. An aneurysm is a dilation of an artery caused by the vessel wall yielding and stretching under the pressure of the blood; its rupture may cause death by haemorrhage [4]. There are different types of cerebral aneurysms, usually categorized as saccular or fusiform. A saccular aneurysm has a sac-like shape (Figure 1.1 (a)) and is the most common type, while a fusiform aneurysm (Figure 1.1 (b)) enlarges the vessel diameter, sometimes over a considerable length [3]. Treatment differs according to the type, so it is important for the neurosurgeon to know exactly which type is being dealt with.

Figure 1.1 Two common types of aneurysms ([3])

According to Marcelo Cohen's thesis [3], the goal is the effective visualization of aneurysms from the point of view of a neurosurgeon at Leeds General Infirmary (LGI). The most important aspect considered in that research is to provide a visualization technique which can be applied for better diagnosis and treatment planning of aneurysms. The challenge is therefore to provide a method of visualization which can focus on the aneurysm without losing the context of the surrounding arteries [5]. One of the diagnostic techniques for aneurysms in medical imaging is CT angiography (CTA). Since CTA is considered a suitable approach for the diagnosis and evaluation of aneurysms, the volume dataset which Cohen used for the visualization process consists of a CTA set of images [3]. The same dataset is used for this project too.
Some researchers note that visualization of aneurysms based on CTA has advantages such as the ability to view the internal anatomy of arteries and aneurysms, and to visualize the relationship of vascular structures to bone, all of which are important from a surgical planning viewpoint [6]. This project is largely based on the same motivation considered in Cohen's thesis, Focus and Context for Volume Visualization [3]. It aims to introduce a new technique as an aid for neurosurgeons in the visualization of aneurysms.

1.1.2 Objectives

Based on the overall aim, the effective visualization of aneurysms, and considering the characteristics of our medical visualization problem, such as rendering speed and interactive performance (according to the needs of neurosurgeons), the following objectives are defined for the current project.

The first objective is to study the visualization of aneurysms by applying the concept of focus and context [3] to surface extraction as a volume visualization technique. Based on previous research, surface extraction and reconstruction is the gold standard for the visualization of aneurysms. The reasons are the clear distinction between the vessels and the surrounding tissues, and the ability of 3D reconstruction to visualize fine details in aneurysm images [3].

The second objective is to create an application which can be used as a tool to implement and evaluate the proposed visualization technique. Due to the complexity of this objective, the chosen approach was subdivided into simpler tasks. In order to evaluate each stage of the project and the technique used in that stage, simple datasets were applied at different stages and the results were used to evaluate the approach at that stage of implementation.

The third objective is to evaluate the implemented visualization techniques in a way which can serve as guidance for future work.

1.2 Volume Visualization

There has been much research in the field of aneurysm visualization, which clearly shows the importance of its application in the medical domain. The pieces of research differ mostly in their input volume dataset, which is acquired from different imaging modalities (e.g. CTA, MRA), or in their applied visualization techniques. The common approach in all this research is based on some fundamental volume visualization concepts.
Medical visualization is mostly based on the visualization of 3D volume images, which are categorized as 3D scalar data. Based on Brodlie and Wood [7], three different techniques can be applied to such datasets:

Surface extraction and reconstruction: methods which extract surfaces of constant scalar value (isosurfaces) from the volume data [3]. Surface fitting is slow, but rendering is fast in this technique [8].

Slicing: the production of 2D slices through the domain of the 3D data [3].

Direct volume rendering: techniques which visualize the data directly, without creating underlying geometric structures [3].

1.2.1 Volume Rendering

The principle used in volume rendering algorithms is the ability to give a qualitative feel for the density changes in the data [2]. Different approaches to volume rendering include ray casting, 2D and 3D texture slicing, shear-warp rendering and splatting. The general approach in volume rendering is to approximate how the volume data affects light [2]. This principle is realised as the mapping of voxel values to colour and opacity [3]. The comprehensive research work of Cohen [3] considered the related medical applications of volume rendering based on graphics hardware programming.

1.2.2 Isosurfacing

This technique maps the volumetric data to geometric primitives [9]. Methods for fitting geometric primitives to surfaces are contour connection and voxel intersection [8]. The voxel intersection method of Lorensen and Cline [10], known as Marching Cubes, is applied in this project for surface extraction. This method considers each cell of the dataset in turn. The decision whether the isosurface intersects a cell is based on a user threshold, which is compared with the cell's vertex values. Each vertex can be greater or lower than the threshold value, which gives 2^8 = 256 possible combinations for a cell; each combination then determines which polygons should be created for that configuration [3]. Reducing similar patterns among the 256 combinations leaves the 14 distinct triangulated configurations shown in the following figure, which also presents the structure of the cube model in the marching cubes algorithm.

Figure 1.2 Marching Cubes and Triangulated Cells ([10])

The advantages of surface extraction are the quality of the image and the possibility of fast rendering. The limitation of the technique is that only the selected threshold surface is visualized. Also the

internal structures and details of the data can be lost during the visualization process, because the surface is extracted before rendering [9].

1.3 Focus and Context

According to the taxonomy of Cohen and Brodlie on focus and context [4], this idea from information visualization can be combined with scientific visualization techniques in order to overcome the problem of showing a high level of detail while retaining the overall context at the same time. The idea of focus and context, which emerged from the paper of Spence and Apperley [11], folds the information space so as to provide a magnified view of the focus and a demagnified view of the context. This is known as a bifocal display. Another concept for solving this type of problem, the fisheye view, was provided by Furnas [12]. This method suppresses irrelevant data according to a degree of interest [4]. For 3D datasets, which are our domain of interest, the first published work is the research of Carpendale [13], which studied the effects of distortion on 3D data. That research considered the problem of occlusion of the focus region, whereas the proposed research provided a clear focus region [4]. According to Cohen and Brodlie [4], distortion in the focus and context taxonomy is based on the basic distortion idea from the work of Winch [14], which uses a centre of focus and a magnification factor to map the original coordinates in the original volume to distorted voxel coordinates in the new, distorted volume. Since the volume in the medical dataset is discrete data (voxels), the inverse mapping from the distorted volume to the original volume is used, to prevent holes appearing in the image [4]. The framework used in Cohen's thesis [3] is based on Hauser's recent paper on generalizing focus and context visualization [15]. This paper presents different graphics resources which can be used to discriminate between focus and context. Graphics resources such as space, opacity, colour and frequency
can be used to incorporate the idea of focus and context in scientific visualization [15].

1.3.1 Distortion Classification

Distortion techniques, or focus+context algorithms, follow the principle of giving more space to the user-specified focus region while, at the same time, presenting the context surrounding the focus in a compressed space. The difference from detail+overview approaches is that the focus and context regions are displayed in the same window at the same time. This characteristic of focus+context techniques avoids distracting the user's attention from the focus point.
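The inverse-mapping idea mentioned above, in which each voxel of the distorted volume is mapped back to its pre-image in the original volume so that no holes appear, can be sketched in one dimension. The following is a minimal illustration, not the thesis code: the quadratic inverse transform and the nearest-neighbour lookup are hypothetical choices made only for this sketch.

```python
# 1D sketch of distortion by inverse mapping: every sample of the
# *distorted* output is filled by looking up its pre-image in the
# original data, so no holes can appear (a forward mapping could
# leave output voxels unassigned).

def inverse_map_resample(data, inverse_transform):
    """Fill each output sample from its pre-image in `data`."""
    n = len(data)
    out = []
    for i in range(n):
        x_dist = i / (n - 1)                 # distorted position in [0, 1]
        x_orig = inverse_transform(x_dist)   # where it came from
        j = min(n - 1, max(0, round(x_orig * (n - 1))))  # nearest neighbour
        out.append(data[j])
    return out

# hypothetical inverse transform: x = y^2, i.e. a forward transform
# T(x) = sqrt(x) that magnifies the region around the focus at 0
distorted = inverse_map_resample(list(range(11)), lambda y: y * y)
assert distorted[0] == 0 and distorted[-1] == 10
assert distorted == sorted(distorted)    # ordering is preserved
assert distorted.count(0) >= 2           # values near the focus are stretched
```

The repeated source values near the focus are exactly the magnification effect: several output voxels draw on the same original voxel, and every output voxel receives a value.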

In this project, three focus and context approaches, fisheye, bifocal and lens, are selected, following the previous research work of Cohen. This allows the results of each implementation stage to be compared. The first paper about focus and context, by Furnas [12], described the effect of distance from the focal point on the selection of the data items to be shown after distortion. After this work, the bifocal display was presented as a one-dimensional distortion function in Spence and Apperley's paper [11]. Considering the different techniques implemented later to enhance these basic focus and context techniques, two classes of approach can be distinguished. The magnification function is the derivative of the transformation function from the original view to the distorted one [15]:

Continuous magnification function
Piecewise magnification function

In the paper of Leung and Apperley [16], the characteristics of distortion techniques are presented based on their transformation functions.

Figure 1.3 Classification of distortion techniques based on magnification function ([16])

The following figure presents the transformation function (inverse mapping) for the three selected focus+context approaches, and illustrates the reason for the specific characteristics of each technique. These characteristics, with the detailed definition of each transformation function, will be provided in the next chapter.
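This derivative relationship can be checked numerically. The sketch below uses the fisheye transformation T(x) = (d + 1)x / (dx + 1) of Leung and Apperley and a bifocal transformation with focus boundary a mapped to screen position b; the parameter values are illustrative assumptions, not taken from the thesis. It shows the fisheye magnification varying continuously with distance from the focus, while the bifocal magnification is piecewise constant.

```python
# Numerically illustrate that magnification is the derivative of the
# transformation function: continuous for fisheye, piecewise constant
# (two flat levels) for bifocal.

def fisheye(x, d=4.0):
    # continuous transformation (Leung & Apperley form)
    return (d + 1) * x / (d * x + 1)

def bifocal(x, a=0.3, b=0.6):
    # focus [0, a] mapped to [0, b]; context compressed into (b, 1]
    if x <= a:
        return x * b / a
    return b + (x - a) * (1 - b) / (1 - a)

def magnification(T, x, h=1e-6):
    # central-difference estimate of the derivative of T at x
    return (T(x + h) - T(x - h)) / (2 * h)

# fisheye: magnification decays smoothly with distance from the focus
m = [magnification(fisheye, x) for x in (0.1, 0.3, 0.5, 0.7)]
assert all(m[i] > m[i + 1] for i in range(len(m) - 1))

# bifocal: one constant magnification inside the focus, another outside
assert abs(magnification(bifocal, 0.1) - 0.6 / 0.3) < 1e-4
assert abs(magnification(bifocal, 0.8) - (1 - 0.6) / (1 - 0.3)) < 1e-4
```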

Figure 1.4 Transformation functions for the Fisheye, Lens and Bifocal distortion techniques

1.4 Purpose for Choice of Techniques

The main reason for selecting these specific techniques for the implementation stage of this project was to follow a research work, Cohen's thesis, which had produced new findings related to our 3D visualization problem. Since the practical problem in both works is the same, this project attempts another possible solution in terms of volume visualization techniques. Implementing similar focus and context techniques presented the opportunity to assess the accuracy of our approach more comprehensibly, by comparing the results with Cohen's work. The isosurface approach itself was chosen according to the characteristics of the dataset as a scalar 3D domain. Since comprehensive research on the volume rendering technique and its integration with focus and context had been done in Cohen's work, it could serve as the ground for a new line of research which studies the possibility of this integration of scientific and information visualization using isosurfacing. To determine the usability of this new combination of techniques, the approach was applied to our medical visualization problem.

Chapter 2: Methodology

Based on the motivation presented in 1.1.1, the goal of this project is to study the feasibility and efficiency of a proposed approach, the combination of scientific and information visualization techniques, in order to respond to the specific needs in the visualization of our medical dataset. The incorporation of focus and context distortion techniques into isosurfacing as a volume visualization approach has been chosen to facilitate the visualization and perception of a specific part of the 3D medical dataset, the aneurysm. Based on previous research in this field [4], this visualization approach has been implemented in response to the needs of neurosurgeons, who hope to apply it as a tool for detailed 3D observation of the structure of the aneurysm as well as its surrounding arteries in context. Following the discussion in the previous chapter, which introduced the reasons for this specific choice of techniques, this chapter presents a comprehensive review of the incremental approach used to prepare the foundation for the development stage of our visualization technique. Integrating focus+context techniques into the 3D visualization of the provided large medical dataset is a complex task. Thus, at the first stage of development, the general task was divided into three simpler individual tasks as follows. The first task, which will be explained in detail in this chapter, is to provide an appropriate procedure for the implementation of isosurfacing on our specific dataset. The next task is to develop the preferred focus+context techniques in a way that allows the result of the algorithms to be observed in three-dimensional space. The final task is to integrate the isosurface visualization with the 3D versions of the implemented distortion algorithms.
The development of the latter two tasks will be presented in the next chapter, but the theory of the focus+context techniques is presented here. This chapter thus provides an overview of the whole approach and the tools, while the results of the development stage are provided in chapter three. Given the complexity of the preferred solution, a multidimensional approach was chosen to proceed towards the final stage. This procedure consists of extending the implementation of the focus and context distortion methods from 1D to 2D and then to 3D, in order to simplify the analysis of the results in the final stage, where these methods are combined with isosurfacing for the 3D visualization of the aneurysm. The following sections discuss this procedure for the three selected distortion techniques introduced in the previous chapter: Fisheye, Bifocal and Volume Lens. They also consider how similar techniques were implemented in the related research work on volume rendering [4]. In the second section, the tools which were used as

visualization packages in the implementation of this process are introduced, and their functionalities applied in the delivery of this solution are discussed. The application of isosurfacing, the core volume visualization technique in the implementation stage, was based on the existing isosurface algorithms in the 3D visualization software tools referred to. To make the use and implementation of these algorithms clear, each tool is described together with the implementation of its isosurface algorithm. Following the theoretical foundations of the specified incremental approach provided in this chapter, the practical results of using these algorithms, on simple datasets at the first stage and on the real medical dataset in the final implementation, will be displayed in chapter three.

2.1 Theory of Incremental Approach

After defining our choice of method for this visualization problem and presenting the reasons for it, the complexity of our 3D image dataset made clear the need for an incremental approach in the second stage of this project. The goal of incorporating focus+context techniques into the visualization of a large 3D dataset was divided into relatively simpler steps as follows. The specified focus and context distortion techniques were applied to simple prototypes, such as a colour bar, a 2D image or a 3D sphere, appropriate for the related level of dimension (1D, 2D and 3D). This procedure facilitates both the implementation and the evaluation of these techniques, by observing the results of their application on simplified datasets rather than the complex 3D medical dataset. In this approach, the overall goal is to apply the three main distortion techniques to the appropriate simplified multidimensional datasets in a way which clarifies the effect of these different algorithms on the 3D isosurface rendering of the aneurysm in the final implementation stage.
Considering the definitions of each focus and context technique presented in the first chapter, the following sections explain Fisheye, Bifocal and Volume Lens with regard to the theory and implementation structure of each technique. The development of these focus+context theories through the incremental approach will be demonstrated in chapter three.

2.2 Fisheye

As introduced in chapter 1, the first focus+context implementation to use the distortion concept [17] was introduced in Furnas' paper [12]. Furnas considers three main components to explain the implementation of the Fisheye technique. The first important component is the focal point; the second is the distance of each point in the dataset from the focal point, where the type of distance should be defined relative to the structure of the dataset. The last component is the so-called level of detail, which assumes that the importance of

each point can be defined with regard to the underlying structure of the dataset and the relation of that point to the general structure. Finally, the Degree of Interest (DOI) of each point in the dataset is given as a function of both previous values, defining the fisheye view as follows: the point x is displayed in the fisheye view if and only if its DOI is above some threshold value c [12]. This threshold selects the points to be presented on the interface after applying the fisheye. The threshold c will be high if there are not enough resources [18].

1) Focal point (.)
2) Level of detail LOD(x)
3) Distance from the user's current focus D(., x)

DOI(x | .) = F(LOD(x), D(., x))    (2.1)

For all x: x is shown if and only if DOI(x | .) > c

The function F can be any combining function that is monotonically increasing in the first argument and decreasing in the second. This means that the degree of interest in a point x increases with its global importance and decreases with its distance from the current focus. In Furnas' view, the DOI function is defined generally so that it can be implemented for different kinds of worlds [18]. From the fisheye definition in Furnas' paper it can be seen that the focal point is at the highest level of magnification or importance, and as the distance of a data point from the focal point gradually increases, its importance decreases. In the following subsections, two different approaches based on Sarkar and Brown's paper are demonstrated. The algorithm used to implement the Fisheye concept in this project is presented in chapter three; it is based on Cohen's 3D implementation of this theory for volume rendering [4].

2.2.1 Cartesian Fisheye

Based on the paper of Sarkar and Brown, the fisheye technique was implemented for graphical applications in both Cartesian and polar coordinate systems.
In this paper [19], the transformation function used to transform normal coordinates to fisheye coordinates is the same for the horizontal and vertical distortions. As mentioned in the first chapter, the fisheye distortion technique has a continuous transformation function. The following equation uses the normalised distance x of a point from the focal point and the distortion factor d. The transformation function, as given in Leung and Apperley's paper [16], is:

T(x) = (d + 1) x / (d x + 1)    (2.2)
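A small sketch of how equation (2.2) can be applied per axis in the Cartesian style of Sarkar and Brown. Normalising by the focus-to-boundary distance on each side of the focus is one common choice, assumed here for illustration rather than taken from the thesis.

```python
# Sketch of a Cartesian fisheye: equation (2.2) is applied to each
# coordinate independently, with the distance normalised by the
# distance from the focus to the boundary on the same side.

def t_fisheye(x, d):
    # transformation function T(x) = (d + 1) x / (d x + 1), x in [0, 1]
    return (d + 1) * x / (d * x + 1)

def cartesian_fisheye(p, focus, bounds, d=3.0):
    """Map point p = (x, y) to its fisheye position; bounds = (w, h)."""
    out = []
    for coord, f, extent in zip(p, focus, bounds):
        side = extent - f if coord >= f else f   # distance focus -> boundary
        if side == 0:
            out.append(coord)
            continue
        x_norm = abs(coord - f) / side           # normalised distance in [0, 1]
        sign = 1 if coord >= f else -1
        out.append(f + sign * side * t_fisheye(x_norm, d))
    return tuple(out)

focus, bounds = (0.5, 0.5), (1.0, 1.0)
assert cartesian_fisheye(focus, focus, bounds) == focus            # focus fixed
assert cartesian_fisheye((1.0, 1.0), focus, bounds) == (1.0, 1.0)  # boundary fixed
# points near the focus are pushed outwards (magnified)
px, py = cartesian_fisheye((0.6, 0.5), focus, bounds)
assert px > 0.6 and py == 0.5
```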

The following figure shows the effect of the distortion factor on the fisheye transformation function. A distortion value of zero generates no distortion, so the original and distorted data are identical, while increasing the distortion factor generates more magnification near the focal point. The figure presents the dependence of the transformation function (G(x)) on the distortion value, and hence of the magnification function. When the distortion value is high, the magnification is high for values of x near the focal point (normalised value of zero) and low for values near the boundary (normalised value of one). The slope of the graph gives the magnification value, since the magnification function is the derivative of the transformation function.

Figure 2.1 The effect of the distortion value on the transformation function

2.2.2 Polar Fisheye

The transformation function of this method is similar to the Cartesian fisheye, but the polar rather than Cartesian coordinates of the points of interest are considered, and the polar distance is measured from the focal point. The following formula is given in Sarkar and Brown's paper for the implementation of the polar fisheye [19].

1) (r_norm, θ): polar coordinates of the normalised point
2) r_max: maximum radial distance from the focus point

r_feye = r_max * ( (r_norm / r_max)(d + 1) ) / ( (r_norm / r_max) d + 1 )    (2.3)

(r_norm, θ) is transformed to the fisheye coordinates (r_feye, θ).
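Equation (2.3) in code form, as a minimal sketch; the focus at the origin and the parameter values are chosen only for illustration.

```python
import math

# Sketch of the polar fisheye: equation (2.3) applied to the radial
# distance from the focus, with the angle theta left unchanged.

def polar_fisheye(p, focus, r_max, d=3.0):
    dx, dy = p[0] - focus[0], p[1] - focus[1]
    r = math.hypot(dx, dy)
    theta = math.atan2(dy, dx)
    ratio = r / r_max                                  # normalised radial distance
    r_feye = r_max * (ratio * (d + 1)) / (ratio * d + 1)
    return (focus[0] + r_feye * math.cos(theta),
            focus[1] + r_feye * math.sin(theta))

focus = (0.0, 0.0)
assert polar_fisheye(focus, focus, r_max=1.0) == focus   # focus is fixed
x, y = polar_fisheye((0.5, 0.0), focus, r_max=1.0)
assert x > 0.5 and abs(y) < 1e-12                        # radial magnification
x, y = polar_fisheye((1.0, 0.0), focus, r_max=1.0)
assert abs(x - 1.0) < 1e-12                              # boundary is fixed
```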

2.3 Bifocal

The bifocal display effect, based on Spence and Apperley's work, was discussed in chapter one. To implement the bifocal display, a focus region, rather than the focal point of the fisheye view, must be considered. Based on Leung and Apperley's paper [16], the following transformation function can be derived for the bifocal display, with reference to the figure below.

For x <= a:  T_bifocal(x) = x b / a
For x > a:   T_bifocal(x) = b + (x - a)(1 - b) / (1 - a)

Figure 2.2 Bifocal distortion ([16])

The implementation technique used in the different dimensions will be explained in the next chapter.

2.4 Lens

The lens distortion effect is the most intuitive one, since it applies the magnification to the focus region without any change in the context, similar to a real lens. Based on the research in Cohen's thesis, the following function is used as the transformation algorithm, which transfers a distorted point back to its original position in the original dataset (inverse mapping).

1) Focus region centred at x_f
2) x' in the distorted dataset and x in the original dataset
3) Magnification factor mag

If x' is outside the focus region: x = x'
If x' is inside the focus region: x = (x' - x_f + mag x_f) / mag
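The lens inverse mapping can be sketched in one dimension; the focus half-width and parameter values below are illustrative assumptions, and the transition factor is omitted.

```python
# 1D sketch of the lens inverse mapping: a position x' inside the focus
# region maps back to x = (x' - x_f + mag * x_f) / mag, which is the
# same as x = x_f + (x' - x_f) / mag; outside the focus region the
# mapping is the identity.

def lens_inverse(x_dist, x_f, half_width, mag):
    """Original position for the distorted position x_dist."""
    if abs(x_dist - x_f) > half_width:
        return x_dist                          # context: unchanged
    return (x_dist - x_f + mag * x_f) / mag    # focus: demagnify back

x_f, half_width, mag = 0.5, 0.2, 2.0
assert lens_inverse(0.9, x_f, half_width, mag) == 0.9    # outside focus
assert lens_inverse(0.5, x_f, half_width, mag) == 0.5    # centre is fixed
# a distorted point at 0.6 originates from 0.55 in the original data
assert abs(lens_inverse(0.6, x_f, half_width, mag) - 0.55) < 1e-12
```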

Another factor, the transition factor, is included in the implementation; it will be discussed in the next chapter, where the complete implementation algorithm is provided.

2.5 Choice of Tools

This section introduces the specific software tools, VTK and MATLAB, which were used during the implementation of the solution to our 3D visualization problem. The main objective of this project is to provide a 3D visualization of our specific dataset using isosurfacing as the volume visualization technique, and, as discussed in the first chapter, isosurfacing has been chosen as the fundamental volume visualization technique for this research project. Considering the research objectives, the procedures used for the implementation of isosurfacing in VTK and MATLAB are therefore presented in the following sections. The first section reviews the capabilities of VTK that were used, as a visualization tool, in the first steps of project development to capture the aneurysm visualization without the incorporation of the focus and context concept. Following the results acquired during the implementation process in VTK, the methodology was revised to reduce implementation time and to produce more comprehensible results for the implementation of the focus and context concept. So, in the next stage of implementation, MATLAB was selected as an alternative tool which has 3D visualization capability and at the same time an easy and familiar environment for developing a Graphical User Interface (GUI). The characteristics of MATLAB which made it a suitable alternative tool for this project will be presented in section 2.2. In both sections, the general approach which can be used to implement the isosurface technique is presented.
The incremental approach in MATLAB led to the implementation of a 2D GUI which can be used as an interactive two-dimensional focus+context application to apply different distortion techniques for the manipulation of 2D images. The implementation of focus+context will be discussed in chapter three, where the 2D focus+context algorithms and their extension to 3D are presented.

VTK

The Visualization Toolkit (VTK) is an open-source, object-oriented software system which provides the required functionality for 3D data visualization [20]. One of the powerful characteristics of VTK is that its implementation in C++ does not limit its use with other languages such as Java, Tcl and Python. This toolkit was the first option in the choice of tools for this project, considering the above capabilities and the well-known visualization algorithms in the toolkit. Although the author had no previous experience of programming with VTK, familiarity with the Java language and the principles of object-oriented systems provided the ability to

understand the structure of example VTK code in Java or C++. The following sections describe the stage of project development in which VTK was used to obtain the 3D visualization of the aneurysm from the CTA dataset. This includes the preparation of the VTK platform for Java programming and the implementation of the applied VTK visualization pipeline which provides the aneurysm visualization. The implementation of isosurfacing in VTK will also be discussed. Finally, the reasons which influenced the change of tool to MATLAB are discussed in the results section.

Preparation

The first step in implementing an application based on Java and VTK is to compile the VTK source code using the Cross Platform Make (CMake) [21] environment, which is provided by Kitware Inc. [22]. Several tutorials on the internet provide sufficient information about the details of the compilation process, which prepares the new library and class files needed for Java programming [4], [5], [6]. The VTK distribution also provides a facility for the implementation of a Graphical User Interface (GUI) which can be integrated in a Java program: a Java AWT-based rendering canvas is provided in VTK as a Java class known as vtkPanel [20]. With all the components prepared as compiled Java class files, it is feasible to implement a visualization application in the Java language based on the related visualization algorithms and GUI methods.

Implementation

VTK is a toolkit with a higher-level Application Programming Interface (API) than OpenGL, since it simplifies the visualization process with direct visualization algorithms such as iso-contouring, streamlines and glyphing. Also, VTK, like many other visualization software systems such as IRIS Explorer, is based on the architecture of the visualization pipeline: it applies a data-flow principle to transform information into graphical data [20].
Based on the above characteristics, and considering the object-oriented structure of VTK, the implementation of an application with VTK can be straightforward by keeping the Object-Oriented Programming (OOP) viewpoint during the implementation process. In this stage of implementation, the VTK book [23] and its examples were quite helpful in becoming familiar with the palette of objects and the way they interact [20]. Another resource on the internet provided access to the VTK code examples (which are mostly written in Tcl or C++ in the VTK book) in the Java language [24]. These examples simplified the understanding of how to use vtkPanel in the development of our VTK-based aneurysm visualization program in Java. This class prepares the renderer window to be used with Java GUI methods. The structure of objects in VTK can be used easily, but the challenge is to learn how they interact [20]. Schroeder and Martin [20] present the visualization pipeline topology of VTK, as displayed in Figure 2.3. This pipeline describes the main components which should be included in the development of any visualization application with VTK. The two types of objects in the pipeline

include data objects, which provide access to data, and process objects, which represent the algorithms in VTK. The process objects include Source, Filter and Mapper objects, whose roles are defined by Schroeder and Martin [20] as follows. Source process objects produce data by reading files or by generating data objects procedurally. Filter objects, which can have multiple inputs, generate new data objects based on the specific functionality of the filter. Mapper objects transform data into graphics data. The process and data objects are connected together using the SetInput and GetOutput functions in VTK. The following Java code example presents a typical way to form visualization pipelines in VTK: it connects the data object output of the process anotherFilter to the input of the filter aFilter.

Java VTK code example: aFilter.SetInput(anotherFilter.GetOutput());

Figure 2.3 The VTK pipeline topology ([20]): Source, Filter and Mapper process objects connected through Data Objects

The VTK-based application for 3D visualization of the aneurysm used the above visualization pipeline, shown as a data-flow diagram in Figure 2.4. Regarding the VTK pipeline topology, the three important stages of the process are as follows. First, the data source from our CTA image dataset was prepared as a proper data object for VTK by using the source object relevant to the image type of the dataset (vtkJPEGReader). Second, the proper filter suitable for our preferred visualization method, isosurfacing, was selected from the VTK filter methods (vtkContourFilter). The last stage was to transform the data object output from the filter to graphical data by using the appropriate mapper (vtkPolyDataMapper). The graphics output of this visualization pipeline is rendered by vtkActor as a part of the graphics API in VTK.
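The data-flow principle can be illustrated with a toy pipeline in Python. This is not VTK's real API, only a sketch of the Source → Filter → Mapper pattern and the SetInput/GetOutput style of connection:

```python
# Toy demand-driven pipeline -- NOT VTK's real API, only a sketch of
# the Source -> Filter -> Mapper pattern described above.

class Source:
    def __init__(self, data):
        self.data = data
    def get_output(self):
        return self.data

class Doubler:                        # stands in for a Filter object
    def __init__(self):
        self.upstream = None
    def set_input(self, upstream):    # aFilter.SetInput(...) analogue
        self.upstream = upstream
    def get_output(self):
        # pulling the output drives the upstream stage (demand driven)
        return [v * 2 for v in self.upstream.get_output()]

class Mapper:
    def __init__(self):
        self.upstream = None
    def set_input(self, upstream):
        self.upstream = upstream
    def render(self):
        return "image of %s" % self.upstream.get_output()

source, filt, mapper = Source([1, 2, 3]), Doubler(), Mapper()
filt.set_input(source)
mapper.set_input(filt)
```

Calling mapper.render() pulls the data through the whole chain, mirroring how a VTK render request drives the pipeline upstream.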
There is a graphics subsystem in VTK which simplifies rendering tasks by introducing abstract objects such as cameras, lights and actors (based on the movie-making industry [25]). The vtkActor, as a part of this subsystem, transforms graphical data into images [20]. To render the instance of vtkActor, it is added to the vtkRenderer, which is prepared by vtkPanel to be compatible with the Java GUI, and finally it is displayed in the vtkRenderWindow.

Figure 2.4 Visualization pipeline for isosurfacing of a stack of JPEG images in Java (vtkJPEGReader → vtkContourFilter → vtkPolyDataMapper → vtkActor, rendered through vtkPanel and vtkRenderer in the VTK graphics subsystem)

The main implementation task in this project was to generate the 3D visualization of the aneurysm (as a specific part of the CTA dataset). Considering the reasons discussed in the first chapter, the preferred volume visualization technique is isosurfacing. In VTK, the isosurface of a predefined data object can be provided by two methods: vtkContourFilter and vtkMarchingCubes. The difference is in the input type of the two methods, but both of them are implemented based on the Marching Cubes technique [10]. The vtkContourFilter takes scalar values as input, while vtkMarchingCubes takes a volume data object as input. Based on the scalar values of the stack of CTA images, vtkContourFilter was chosen to implement isosurfacing in the visualization of the aneurysm. The threshold factor, which should be declared to extract the proper isosurface for the visualization of the aneurysm, is defined for the instance of vtkContourFilter by setting the iso-value of this object to 200. This intensity was chosen by a trial-and-error approach to capture the best visualization of the vessels and aneurysm. Since the intensity of bone covers the same intensity range as other tissues in CTA datasets, some parts of bone will be displayed with the vessels when using the specific intensity value of the vessels. The iso-value specifies the appropriate isosurface which visualizes the vessels, together with the structure and position of the aneurysm in relation to the extracted vessels.

Results

Following the above implementation steps, the 3D visualization of the aneurysm was obtained. The network of arteries could be observed, even though the quality of the 3D visualization was not satisfactory.
Also, the interaction (scene rotation and transformation) which was provided by the VTK renderer was not sufficient to see the exact position and shape of the aneurysm on the network of vessels.

Figure 2.5 The result of 3D visualization of the aneurysm in VTK (the aneurysm is labelled in the figure)

As described in the first chapter, the objective of the project was to integrate the concept of focus and context into the isosurface volume visualization technique. Accordingly, the coordinates of the points in the prepared data object output of the vtkJPEGReader class had to be saved for further manipulation by the previously defined distortion algorithms. Since focus+context techniques transform the original data space to a new, distorted data space, there should be a specific process object in the visualization pipeline to save the coordinates of the points before the application of vtkContourFilter, the isosurfacing module in the pipeline. Due to the large class hierarchy in VTK, it takes time to figure out the proper class for a specific functionality. Also, considering the complexity of our 3D dataset, and based on the reasons described before for the choice of an incremental, multidimensional approach, the requirement for the first stage of implementation was to apply the focus and context algorithms to simple 1D and 2D prototypes. VTK is not a simple platform in which to define basic, appropriate prototypes for our multidimensional approach, and it was not straightforward to implement the different focus and context techniques in VTK, specifically considering the implementation time and the complexity of data space manipulation. So an alternative tool, MATLAB [26], was selected with regard to its ability as a fast prototyping environment. The MATLAB environment provides efficient tools for the preparation of simple prototypes in one, two and three dimensions. Furthermore, it provides the main functions needed for volume visualization, such as the isosurface function. Another set of results, acquired during the implementation of the GUI for the 3D visualization application in VTK, is as follows.
The implementation of user interaction with the interface in focus and context applications requires the capacity for the user to select the region of interest. The first challenge in the implementation of this type of interaction in a 3D visualization environment is how to obtain the 3D coordinates of the depth points in the selection region, since the user interacts with

the 2D screen. Another level of interaction, which we tried to implement, was the ability to choose the visualization factors (such as different thresholds for the isosurface) by the use of a slider. In order to implement this type of interaction, the system needs powerful processing and memory capabilities to provide the quality of resources required by the 3D rendering algorithms. These interaction issues could not be solved in this project, either in VTK or MATLAB, since they relate to challenges regarding the complexity of graphics algorithms.

MATLAB and Volume Visualization

MATLAB is widely known as a computing environment which provides a high-level programming language and functions for data visualization [26]. The two characteristics of MATLAB which were the reasons for the selection of this software as the alternative tool for our implementation are as follows. First, MATLAB includes all the tools necessary to implement a 3D visualization application. The provided functions for volume visualization, such as isosurface, help to implement our specific 3D visualization within a fast and easy-to-code environment. Also, the need for a GUI in this project is met by the MATLAB GUI Development Environment (GUIDE), which can help to develop an interactive interface for the application. Second, the MATLAB environment facilitates computational programming with its vast number of mathematical functions. The simplification of computations, based on working with matrices in MATLAB, provides a flexible environment in which to develop prototypes of applications or to model the results of different algorithms. These characteristics were used in the development of our desired focus and context algorithms, following the predefined incremental approach.
The goal was to simplify the design of the different focus+context algorithms while at the same time providing more evidence about the accuracy of the algorithms based on results from simple datasets. In the previous section, the visualization pipeline (Fig. 2.4) presented the procedure of isosurface extraction from our 3D dataset in VTK. In this section, the same approach is presented in MATLAB. The isosurface() function can be used directly from the provided set of functions in MATLAB. Also, the preparation of the dataset, as a 3D volume for the isosurface function, can be done by simply making a three-dimensional matrix from the stack of 2D CTA images. As will be discussed in the 3D development section in the next chapter, the incorporation of the MATLAB GUI with the 3D visualization of the aneurysm could not be achieved, due to performance problems in MATLAB and the limitation of memory resources. The following diagram (Fig. 2.6) displays the order of functions which were used to provide the 3D visualization. The visualization result in MATLAB is shown in the standard figure viewer, which provides interaction facilities such as 3D rotation of the viewpoint or zooming in and out. In comparison to programming in

VTK, the similar visualization process can be done much faster and with less coding in MATLAB. In contrast to the simplicity of visualization implementation in MATLAB, the disadvantage is the restricted access to the low-level implementation of code such as the isosurface code. It is explicitly stated in VTK that the implementation of the isosurface functions is based on the Marching Cubes algorithm, but there is no such reference in the MATLAB documentation about the underlying implementation of the isosurface function. At the final step of visualization programming in MATLAB, some of the capabilities, such as lighting, became useful to provide a better interpretation of the 3D object structure. Also, compared to VTK, where there was a specific function to set the thickness of each image slice in the prepared 3D volume image, in MATLAB the scale ratio in the z direction is increased by the use of the daspect function to compensate for the 2D slice thickness. This process was done based on the similar approach in the MATLAB visualization examples for MRI data [26].

Figure 2.6 Visualization pipeline for the isosurface implementation in MATLAB (stack of 2D images → imread → 3D matrix → isosurface, with a threshold value)

The result of this visualization process is shown in the following image. The position of the aneurysm is indicated to make the figure clear. Compared to VTK, the MATLAB result shows the whole structure of the arteries and aneurysm better than the output from VTK. Also, the different levels of interaction, such as rotation of the viewpoint and zooming, can be used in the MATLAB image viewer.

Figure 2.7 The result of aneurysm visualization in MATLAB (from a low-resolution set of images)
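In outline, and leaving the actual image reading and surface extraction to MATLAB's imread and isosurface, the pipeline of Figure 2.6 can be mimicked in Python as follows (the helper names are hypothetical):

```python
# Sketch of the MATLAB pipeline in Figure 2.6: stack the 2D slices into
# a 3D volume and count the voxels reaching the iso-value (the set the
# isosurface would pass through). Real code would read the CTA slices
# with imread and extract the surface with isosurface.

def stack_slices(slices):
    """A list of 2D intensity arrays becomes one 3D volume (z, y, x)."""
    return [[row[:] for row in s] for s in slices]

def voxels_at_isovalue(volume, iso):
    """Count voxels at or above the iso-value (e.g. 200 for vessels)."""
    return sum(1 for s in volume for row in s for v in row if v >= iso)
```

With the iso-value of 200 used for the CTA data, only the voxels representing vessels (and some bone) survive the threshold and contribute to the extracted surface.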

Chapter 3: Development

As discussed in the previous chapter, the overall approach to reach the final stage of implementation was subdivided into simpler tasks which could be developed towards the complete integration of the isosurface visualization of our specific dataset with the selected focus and context algorithms. In the previous chapter, the theory of the selected focus and context techniques and the purpose of the incremental approach were discussed. In the following sections, the implementation of the predefined incremental approach is discussed in detail for each dimension of the data space. Appropriate, simple datasets were considered for each dimension in order to apply the focus+context algorithms to them. In this procedure, each dataset was manipulated by the relevant focus and context algorithm for its data space dimension. This process helped to implement the basic algorithms for the lower dimensions first, making it simpler to extend similar distortion algorithms to the higher dimensions. The following sections provide the comprehensive details of the implementation procedure for the three dimensions (1D, 2D and 3D) successively. For each dimension, the appropriate dataset is selected in order to simplify the interpretation of the distortion results. This approach was chosen with the aim of providing the proper implementation of the algorithms in 3D, which had to be integrated with the isosurface visualization. The same procedure was considered in the related research, the thesis of Cohen [3], where the 2D counterparts of the 3D distortion algorithms were implemented in the first stage. For each dimension, the results of the application of the three predefined distortion techniques (Fisheye, Bifocal and Lens) are presented, and the implementation algorithms are provided, considering the theory mentioned in the previous chapter.
In order to clarify the presentation of the above algorithms in the following sections, two concepts, interpolation (inverse mapping) and normalization, should be defined explicitly before the detailed explanation of the implemented algorithms. The following definitions of both concepts are common to the implementation of all the focus+context algorithms.

Normalized data space

In order to make the algorithms independent of the size of the data space, the first step in each algorithm is to normalize the region of the original data space which needs to be distorted. The normalization transforms the original data space region to values between 0 and 1, where zero corresponds to the focal point and 1 to the boundary of the original dataset.
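Both concepts, the normalization just described and the inverse-mapping resampling defined in the next subsection, can be sketched in Python (a hedged illustration assuming nearest-neighbour interpolation; the function names are hypothetical):

```python
# Sketch of normalization and of inverse-mapping resampling with
# nearest-neighbour interpolation; function names are hypothetical.

def normalize(x, lo, hi):
    """Map a coordinate of the original region into [0, 1], making the
    distortion algorithms independent of the data size."""
    return (x - lo) / (hi - lo)

def resample_inverse(data, transform_inverse):
    """Fill every position of the distorted array by inverse-mapping it
    into the original array and taking the nearest original sample --
    this is what prevents holes in the distorted result."""
    n = len(data)
    out = []
    for i in range(n):
        x = transform_inverse(i / (n - 1))           # normalized position
        j = min(n - 1, max(0, round(x * (n - 1))))   # nearest original index
        out.append(data[j])
    return out
```

Because every output position is filled by a lookup into the original data, the distorted result is complete by construction, whereas forward-mapping the original samples could leave unfilled positions.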

Inverse Mapping

The implementation of the focus+context algorithms in this project is mainly based on the equations applied in Cohen's thesis [3] for the integration of distortion algorithms with 3D volume visualization by the use of volume rendering. Since the objective of this project was similar to the work of Cohen, but with another type of volume visualization (isosurfacing), similar equations were used for this project. The equations in the following sections are based on the concept of inverse mapping, in order to find the value of each data point (or each voxel in 3D) in the distorted dataset from its corresponding value in the original one. The reason for this type of mapping is to prevent holes from appearing in the 3D visualization after the application of the distortion algorithms; such holes arise because of the discrete nature of the data in the 3D dataset of this research work [4]. The same situation occurs in this project, since a similar 3D dataset is used. The following figure displays the concept of inverse mapping, which finds the original coordinates from the new distorted coordinates. Based on the position computed by the inverse mapping, the corresponding value at this position is stored as the value of the new distorted point.

Figure 3.1 Inverse mapping with an image (from [3])

The principle of inverse mapping is applied in the equations of the following sections by using some scaling factors. To make the equations clear, the following figure presents those factors on a model of a 2D data space. The centre box is the focus region (original region) and the middle box is the distorted region. Since all the coordinates are normalized, the outer box shows the normalized boundary.

Figure 3.2 Scaling factors for bifocal and fisheye Cartesian distortion

In the following sections, for each dimension, the results of the implementation of the three focus+context algorithms on the simple datasets are displayed. At the last stage, the final integration results on the 3D visualization of the aneurysm are presented in the isosurfacing section.

3.1 1D

The implementation code for one dimension can be extended to 2D and 3D in the following sections. This follows the same approach used in Cohen's thesis [3]. So, in this section, the basic algorithms which present the implementation in the horizontal direction (X) are discussed. Based on Cohen's research work [3], these equations can simply be repeated in the Y and Z directions for the 2D and 3D distortions.

Fisheye

In the fisheye implementation, the algorithm is based on the transformation function mentioned in section 2.2. Considering the inverse mapping principle in the following implementation, the distorted coordinate (x') is mapped to the original coordinate (x) by the use of the inverse mapping function. By definition, the fisheye technique considers a focal point rather than a focus region (as in bifocal). So, considering the above characteristics, the following

algorithm can be used to calculate the original coordinate values from the distorted ones. Based on the figure above (in fisheye there is only a centre of focus and no source region), the following distances between the centre of focus and each dataset boundary are used in the algorithm definition:

x+ = 1 - x_f,   x- = x_f

The distortion factor (d) is defined in section 2.2. The following figure shows the algorithm structure from Cohen's thesis [3].

Figure 3.3 Fisheye Algorithm ([4])

The fisheye algorithm was implemented on a simple one-dimensional (1D) dataset, a colorbar, in MATLAB. The related MATLAB code (colormapfisheye.m) is shown in Appendix B. Figure 3.7 shows the results of applying the fisheye, bifocal and lens algorithms in MATLAB. Regarding the definitions mentioned in the previous chapter, the fisheye algorithm has two types of implementation based on the type of coordinates: 1) Cartesian and 2) Polar. The above algorithm applies Cartesian coordinates and distances in its implementation. In the implementation of the polar fisheye, the only difference is that polar coordinates replace the Cartesian ones in the transformation function. The transformation function for the polar fisheye (section 2.2.2) can be used to implement the similar approach for polar coordinates. The application of this function will be discussed in the 3D section.

Bifocal

Based on the definition of the transformation function for bifocal in section 2.3, and considering the mentioned principles of normalization and inverse mapping, the following algorithm is

implemented in Cohen's work [3] to provide the bifocal distortion for a rectangular focus region. Cohen introduces the 3D implementation of the bifocal algorithm as work adapted from Winch [14]. The following parameters are defined based on the regions in figure 3.2. The following two scaling factors determine the amount of compression needed on each side of the focus region [4]. As can be seen in figure 3.2, x+ is the distance from the right edge of the original focus region (x_max) to the boundary, and x'+ is the corresponding distance after distortion:

scale+_x = x'+ / x+,   scale-_x = x'- / x-

The algorithm in Cohen's work [4] is as follows.

Figure 3.4 Bifocal Algorithm ([4])

The same colorbar dataset is used in MATLAB to apply the bifocal algorithm, and the code which implements the above algorithm in MATLAB (colormapbifocal.m) can be seen in Appendix B. The effect of the bifocal distortion on the colorbar can be seen in figure 3.7.

Lens

As mentioned in the first chapter, this distortion acts like a magnifying glass. To implement the lens algorithm, some parameters are defined to prepare a transition region inside the distorted region, allowing a gradual movement from the highly magnified inside region to the non-magnified outside. Figure 3.6 shows the new factors needed to define the transition region. The following definitions and algorithm are also based on Cohen's work [4]:

L_x = x_f - x_min   Distance from the centre of focus to the edge of the focus region
T_x = trans . L_x   Distance from the centre of focus to the start of the transition region

R_x = L_x - T_x   Distance from the start of the transition region to the edge of the focus region

Figure 3.5 Lens Algorithm ([3])

Figure 3.6 Transition region (limits needed for computing the lens factors)

The implementation code for the lens algorithm applied to the colorbar is provided in Appendix B (colormapvollens.m). After the explanation of all three focus+context algorithms, the results of applying them to the simple colorbar dataset are presented in figure 3.7. To acquire these results, the value of the distortion or magnification factor for all techniques was set to 3, and in the lens algorithm a fixed transition factor was used. The focal point in the fisheye is 41, while in the other two algorithms the focus region is selected from 41 to 48. The bifocal result can be compared directly to figure 2.2, which shows the effect of magnification on the horizontal axis; it confirms that the structure of the algorithm works properly in one dimension. For the fisheye distortion, it can be seen that more space is given to the regions near the focal point, with the strongest demagnification at the boundary furthest from the focal point. The lens effect is somewhat similar to the result of bifocal, which should be because of the

definition of the transition region (and the trans factor) in the lens algorithm; otherwise the lens algorithm should magnify the focus region without changing the outside of the region.

Figure 3.7 The effect of the 1D focus+context algorithms on a colour bar (panels: Normal, Fisheye, Bifocal, Lens)

3.2 2D

For the 2D implementation of the previous focus+context algorithms, the best dataset was the 2D CTA images, which were the original dataset of our visualization project. The same structure of the algorithms applied along the horizontal axis in one dimension was repeated for the vertical axis to provide the 2D algorithms. Following Cohen's approach, the same equations for the X coordinate are iterated for the Y and Z coordinates to provide the 2D and 3D distortion algorithms. For this stage of implementation, a 2D GUI was implemented to allow the application of all the above distortion algorithms to 2D images. One of the characteristics of this GUI, which we also wished to implement in 3D, was the interactive selection of the focus region on the user interface. This user interface was developed using the MATLAB GUIDE environment. The implemented 2D focus+context algorithms are the extension of the 1D implementations of the techniques in the previous section. In the fisheye and bifocal algorithms, the same code is iterated for the vertical coordinate, but in the implementation of the lens algorithm the intersection of the X and Y

coordinates should be considered, in order to correctly apply the distortion inside the focus region while generating the transition region. The complete MATLAB code of the three implemented 2D techniques and the GUI development is provided in Appendix B. The GUI application environment is shown in figure 3.8. The user-selected focus region is shown as a rectangular region on the selected image.

Figure 3.8 The GUI application for the 2D focus and context algorithms
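The per-axis extension can be sketched in Python. This sketch assumes the Sarkar-Brown graphical fisheye form for the underlying 1D transfer, which may differ in detail from the exact function of chapter two, and the helper names are hypothetical:

```python
# Per-axis 2D fisheye sketch. Assumes the Sarkar-Brown graphical
# fisheye transfer t -> (d + 1) * t / (d * t + 1) applied to the
# normalized distance to the focal coordinate; coordinates lie in
# [0, 1] and the focal point must be strictly inside the unit square.

def fisheye2d(x, y, x_f, y_f, d):
    def axis(v, f):
        if v >= f:
            t = (v - f) / (1 - f)    # distance toward the upper boundary
            return f + (1 - f) * (d + 1) * t / (d * t + 1)
        t = (f - v) / f              # distance toward the lower boundary
        return f - f * (d + 1) * t / (d * t + 1)
    return axis(x, x_f), axis(y, y_f)
```

The focal point and the data boundaries are fixed points of the mapping, while points near the focus receive the most space, which matches the behaviour visible in the 2D results.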

Figure 3.9 Application of the focus and context algorithms to a 2D image (CTA slice); panels: Normal, Fisheye, Bifocal, Lens

The above results can be compared to the similar results shown in Cohen's work [3]. As observed, the effect of all the algorithms is predictable with regard to the definitions given in the first and second chapters. In the above GUI, the focal point for the fisheye implementation is the centre of the rectangular region, and, as can be seen in the above image, the highest magnification occurs at that point. Also, the bifocal implementation in 2D is similar to the

Figure 3.10 Applying the focus+context effects to the similar 2D dataset (aneurysm dataset)

3.3 3D

For the 3D implementation of the distortion techniques, the approach of the previous section, the extension of the 2D implementation to the 3D coordinates, was developed further. The aim was the integration of the focus+context algorithms with the isosurface technique. Considering this, the provided 3D dataset was used to visualize the aneurysm in MATLAB based on the approach discussed in chapter two. As mentioned before, it was desired during the design stage to have a user interface which provides the ability to interact with the 3D visualization; this could be considered a 3D extension of the 2D GUI implemented in the previous section. The overall goal was to provide the facility for the user to select the aneurysm as the focus region and apply one of the distortion algorithms to the region, in order to give more detail of the shape of the aneurysm while the network of arteries remains visible in the context. Considering our large dataset, providing the 3D visualization with the isosurface approach needed too much processing and memory resource. Due to the lack of resources, the integration of the implemented isosurface with the GUI environment in MATLAB was not achievable, since MATLAB could not operate on the dataset, and the

out-of-memory problem forced the 3D dataset to be changed to a simple prototype. This provided another simplification and helped to progress the implementation in a way in which the results of the integration of the isosurface and focus+context algorithms could be observed in MATLAB. The following section presents the process in which a sphere surface was used as a prototype on which to apply the different distortion techniques. The MATLAB code for these algorithms is provided in Appendix B (bifocal3dsphere.m, fisheye3dsphere.m and vollens3dsphere.m).

Isosurfacing (Indirect Volume Rendering)

In this section, the results acquired during the application of the different distortion techniques to the sphere isosurface prototype are demonstrated. Figure 3.11 compares the results taken after the implementation of the different techniques on the isosurface of our prototype, the sphere. For all the algorithms, a grid of 20x20 points in the region (-1, 1) was considered as the simple dataset. The normal sphere was constructed as the isosurface of the points with the iso-value from the 3D spherical equation. The specific addition to the previous 2D algorithms was the implementation of the polar fisheye; the code is in Appendix B, and the result can be compared with the result of the simple Cartesian fisheye. There is a difference in the value of distortion or magnification in the implementations of these algorithms. The performance problem of MATLAB in the integration of the isosurface with the above focus and context algorithms forced the change of approach to apply the algorithms to a simple dataset. Although the results could not prove that the integration of these techniques is efficient, the feasibility of such an approach could be demonstrated with these implementations. Also, alternatives such as reduction of the dataset resolution or size were considered to make the integration feasible.
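The sphere-prototype experiment can be sketched in Python: the distorted scalar field is obtained by sampling the sphere's implicit function at inverse-mapped positions, here with a simplified per-axis lens inverse (a sketch of the idea, not Cohen's exact equations; all names and default parameters are hypothetical):

```python
# Sketch of the sphere-prototype experiment: sample the ORIGINAL
# implicit function at inverse-mapped positions, so an isosurface
# extracted from the distorted grid shows the lens magnification
# without holes. The lens inverse below is a simplified stand-in.

def sphere(x, y, z):
    """Implicit function of the prototype sphere (radius 0.5)."""
    return x * x + y * y + z * z - 0.25

def lens_inverse(c, c_f, L, trans, mag):
    """1D inverse lens with a linear transition band between T and L."""
    r = abs(c - c_f)
    if r >= L:
        return c
    T = trans * L
    m = mag if r <= T else mag + (1 - mag) * (r - T) / (L - T)
    return c_f + (c - c_f) / m

def distorted_field(p, c_f=(0.0, 0.0, 0.0), L=2.0, trans=0.5, mag=2.0):
    """Scalar value of the distorted dataset at distorted position p."""
    q = tuple(lens_inverse(c, f, L, trans, mag) for c, f in zip(p, c_f))
    return sphere(*q)
```

With mag = 2 the zero level set of distorted_field near the focus is a sphere of radius 1, i.e. the prototype appears magnified, which is the effect shown by the distorted isosurfaces in figure 3.11.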
The results of this work were not satisfactory due to the low resolution of the images: the effect of the distortion algorithms could not be visualized properly in the acquired images.

Figure 3.11 Integration of isosurface with focus+context algorithms (panels: Normal, Fisheye, Bifocal, Vollens, Polar Fisheye)
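The polar fisheye panel of Figure 3.11 distorts only the radial distance from a focal centre. As a hedged sketch of that radial map (in Python/NumPy rather than the project's MATLAB, and using a fully spherical radius, whereas Fisheye3Dpolar.m in Appendix B works with MATLAB's cylindrical cart2pol coordinates; the function name is illustrative):

```python
import numpy as np

def polar_fisheye(points, centre, d, rmax):
    """Radial fisheye: r' = (1+d)*r / (1 + d*r) on radii normalised by rmax.

    r = 0 and r = rmax are fixed points, and d = 0 is the identity;
    for d > 0 intermediate radii move outward, magnifying the region
    around `centre` at the expense of the periphery.
    """
    p = np.asarray(points, dtype=float) - centre
    r = np.linalg.norm(p, axis=-1, keepdims=True)
    rn = r / rmax
    rn_new = (1 + d) * rn / (1 + d * rn)
    # Avoid 0/0 at the centre itself: a zero-radius point stays put.
    scale = np.where(r > 0, rn_new * rmax / np.maximum(r, 1e-12), 1.0)
    return centre + p * scale
```

The distorted point cloud can then be resampled onto a regular grid (the thesis uses griddata3) before extracting the isosurface again.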

Chapter 4: Evaluation

For evaluation purposes, Cohen's work was used as a baseline throughout the implementation stages. The following criteria were considered when comparing both approaches against the needs of neurosurgeons and the specific visualization objectives for our medical dataset.

Volume Rendering
  Strengths: Fast rendering of medical datasets (fast rendering approaches are possible using graphics hardware capacities)
  Weaknesses: Many parameters need to be set accurately to get a proper visualization

Isosurfacing
  Strengths: Easy coding (possible to get fast results); possible to visualize different isosurfaces in the same window (good for showing medical datasets without occlusion)
  Weaknesses: Slow rendering due to the large number of polygons and vertices in large medical datasets

Table 4.1 Comparison of two volume visualization techniques regarding the visualization of medical datasets

The same characteristics mentioned in the above table can be used to compare the efficiency of integrating each volume visualization technique with the focus+context algorithms. As seen in the results of this project, rendering the medical dataset based on isosurfacing (in other words, the marching cubes algorithm) needs powerful system resources. Since the results should provide sufficient quality for medical purposes, applying algorithms such as fast marching cubes may help to mitigate this weakness. Based on the goal of this project, it was hoped to evaluate the application of focus+context algorithms for visualizing the position and structure of the aneurysm by providing a user interface to be tested by professional users. Since integrating the GUI with the distortion algorithms on the 3D dataset was impossible due to the lack of resources (with regard to MATLAB performance), an evaluation form was prepared for the 2D GUI, which applies the algorithms to the 2D dataset of CTA images (aneurysm). The evaluation form is included in Appendix C. Some explanations of the three F+C approaches were added to familiarize the user with the goals of the provided application. Three persons used the program and filled in the questionnaire. The questionnaire was designed to assess two aspects of the work at the same time. First, questions 1, 2 and 3 evaluated the quality of the user interface with regard to its interactive capabilities; the answers in all three case studies were highly positive about the performance of the GUI. Questions 4, 5, 6 and 7 gathered feedback on the ability of the focus+context algorithms to present their goals explicitly. The answers for this part indicated that the three algorithms can meet the requirements for conveying the focus and context idea to the user. Question 8 could not be answered reasonably by all users, apparently because the users' professions are not related to the medical field: they all accepted the advantage of these algorithms for medical images without giving a reason. The last two questions were designed to find out whether users had a particular preference for one of the algorithms. Since the number of users was not sufficient, the answers in this section cannot be interpreted accurately, but it can be deduced that every user had a particular reason for their specific selection, related to the specific characteristics of that technique. For example, a user who selected bifocal as the preferred technique gave as the reason that this technique does not greatly distort the context region.
This indicates that the techniques can visually deliver the goals they are technically based on. To conclude, the evaluation demonstrated the capability of these algorithms to provide effective visualization of 2D images.

Chapter 5: Conclusion and Future Work

Considering the three objectives mentioned in the first chapter, all the main stages of this project tried to address them. First, the feasibility of integrating F+C techniques with isosurfacing was shown by the implementation stages described in the third chapter. The design of an interface that can be used to integrate F+C algorithms with the visualization techniques was completed as the second objective. Although this interface could not be integrated with the main 3D visualization of our medical dataset (due to performance problems), the same approach can be extended to a 3D implementation. The last objective of this project was to evaluate this integration of techniques in comparison to the related research work (Cohen's thesis). Throughout the project, techniques and results were compared against the results of Cohen's thesis. In the final stage, which was the part most desired to be evaluated against volume rendering, the results could not be compared on the same dataset, because of the problems that restricted programming on the 3D volume dataset; instead, the same techniques were evaluated on a simple dataset. To conclude, both isosurfacing and focus+context are effective techniques to apply to medical datasets in particular. On the other hand, both techniques need a large amount of system resources when applied to large medical datasets. Since the focus+context algorithms need interaction and isosurfacing needs to generate a great deal of geometry to visualize the 3D dataset, integrating both methods will require more powerful system resources, or enhanced algorithms that can render the visualization in an interactive environment.
There is a long way to go to achieve a perfect integration of both techniques, one that can serve as an interface for better visualization of a user-specified focus region. One possibility for future work is to combine different software such as VTK and MATLAB so that the strengths of each are used to provide better results. According to some online resources, it is possible to call a VTK algorithm from MATLAB; this may facilitate the implementation of the application, since the computational strengths of MATLAB can be combined with the strengths of VTK in rendering and visualization. Another possible direction for future work is to enhance the visualization with image segmentation algorithms, although according to the simple experiments performed in this work, the results of isosurface rendering were better than incorporating thresholding techniques with the isosurface: the isosurface extracted directly at the desired threshold can be visualized at higher resolution than the output of thresholding.
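The observation about thresholding can be illustrated with a toy sketch (Python, with illustrative names; not taken from the thesis code): extracting the isosurface directly from the grey-value field lets each surface vertex be interpolated to sub-voxel precision, while binarising the data first discards that information, so every vertex snaps to the middle of its cell edge and the surface becomes blocky.

```python
import numpy as np

def edge_crossing(a, b, iso):
    """Linear-interpolation position (0..1) of the isovalue between two
    voxel samples, as marching-cubes-style extraction places a vertex."""
    return (iso - a) / (b - a)

field = np.array([0.2, 0.45, 0.9])   # grey values along one edge path

# Direct isosurfacing: the 0.5-crossing lands a fraction into the edge.
direct = edge_crossing(field[1], field[2], iso=0.5)     # about 0.111

# Thresholding first: only 0/1 survive, so the crossing is always mid-edge
# regardless of the original grey values.
binary = (field >= 0.5).astype(float)
blocky = edge_crossing(binary[1], binary[2], iso=0.5)   # exactly 0.5
```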

Another possibility for this 3D visualization project is to integrate the output into a virtual environment using the VR toolbox provided with MATLAB. This may result in an environment in which the focus region can be selected interactively in 3D.

Appendix A: Project Reflection

The process of an MSc project does not start with the allocation of a specific project to an MSc student. It is a complementary part of the whole process of research and learning that follows gaining deep knowledge of different topics during MSc studies and even the first degree. The student, as project leader, is the one who should drive the project based on all the capabilities gained in reaching this stage of research work; it is therefore quite helpful to use all of that acquired knowledge for a better presentation of the final results. The following steps help in proceeding towards the goals of any research work, as they certainly did for the author:

- Start a project on a subject the researcher really wants to research.
- Follow the project's progress against specific and quantifiable milestones.
- Treat revisions of the research procedure as an opportunity to gain better results and understanding, and even to find new ways to solve the problem.
  - A revision of the procedure may also offer the chance to integrate new methods into the old procedure; although it takes time to redo the results acquired up to that stage with the new approach, it is invaluable time that can lead to new findings for the project.
  - This happened in the author's project when the VTK tool was changed entirely to MATLAB to make the progress of the work faster.
- Use the classic papers in the field of interest to understand the core research goals of the subject.
- Use search engines (such as Google) efficiently to reach the most important papers on any subject.
  - This tip helps especially in visualization research; the author could reach the most useful and related papers by using Image Search.
- Keep track of the project's progress and project management till the end.

To fulfil its goals, each project should be carried out on a proper time schedule and with all the knowledge that may help to do the work better, so it is important to lead the project with the allocated time and accessible references in mind. Although this project largely builds on another research work, the specific combination of techniques was unique to this project, given its application in medical visualization and its special dataset. This provided the opportunity to learn problem solving in a situation where there were many possible ways to take the project forward, but none of them seemed to answer the needs of our specific project directly. With the invaluable guidance of my supervisor, I found that the ideas of simplification and abstraction can be used efficiently for problem solving in such a situation: they allow the project to keep progressing while new resources and solutions are prepared along the way, and they greatly enhance the understanding of the problem. The milestones and Gantt chart of the project management for this project are included in its web log. To conclude, the procedure followed in this project was a complete research work that enhanced my knowledge in many different aspects.

Appendix B: MATLAB source codes

**************************************************
Colormapfisheye.m

figure
subplot(1,2,1);
old_values(1:8)=0;
old_values(9:16)=8/63;
old_values(17:24)=16/63;
old_values(25:32)=24/63;
old_values(33:40)=32/63;
old_values(41:48)=40/63;
old_values(49:56)=48/63;
old_values(57:64)=56/63;
imagesc(old_values);
xc=41;
xf=(xc-1)/63;  %focal point (normalised)
dist=3;
for i=1:64
    i_norm = (i-1)/63;
    if i_norm < xf
        x_norm = xf-(xf*(xf-i_norm)/(xf+dist*i_norm));
    elseif i_norm > xf
        x_norm = xf+(1-xf)*(i_norm-xf)/(1-xf+dist*(1-i_norm));
    else
        x_norm = xf;  %the focal point maps to itself
    end
    x = round(63*x_norm + 1);
    new_values(i)=old_values(x);
end
subplot(1,2,2);
title('Fisheye');
imagesc(new_values)

**************************************************
Colormapbifocal.m

old_values(1:8)=0;
old_values(9:16)=8/63;
old_values(17:24)=16/63;
old_values(25:32)=24/63;
old_values(33:40)=32/63;
old_values(41:48)=40/63;
old_values(49:56)=48/63;
old_values(57:64)=56/63;
%old_values(1:64)=[0:1/63:1];

figure
subplot(1,2,1);
imagesc(old_values);
%focus region xmin..xmax
xmin=41;
xmax=48;
%normalized xmin and xmax
xmin_norm=(xmin-1)/63;
xmax_norm=(xmax-1)/63;
xf=(xmin_norm+xmax_norm)/2;  %centre of focus
sx=xmax_norm-xf;             %half of focus region size
mag=3;
%magnified focus region xminD...xmaxD
xminD=xf-mag*sx;
xmaxD=xf+mag*sx;
scaleXR=(1-xmaxD)/(1-xmax_norm);
scaleXL=xminD/xmin_norm;
for i=1:64
    i_norm=(i-1)/63;
    if i_norm < xf-mag*sx
        x_norm = i_norm/scaleXL;
    elseif i_norm > xf+mag*sx
        x_norm = 1-(1-i_norm)/scaleXR;
    else
        x_norm = (i_norm-xf+mag*xf)/mag;
    end
    x = round(63*x_norm + 1);
    new_values(i)=old_values(x);
end
%figure
subplot(1,2,2);
title('Bifocal');
imagesc(new_values)

**************************************************
Colormaplens.m

old_values(1:8)=0;
old_values(9:16)=8/63;
old_values(17:24)=16/63;
old_values(25:32)=24/63;
old_values(33:40)=32/63;
old_values(41:48)=40/63;
old_values(49:56)=48/63;
old_values(57:64)=56/63;
%old_values(1:64)=[0:1/63:1];
figure;
subplot(1,2,1);

colormap(jet);
imagesc(old_values);
%focus region xmin..xmax
xmin=41;
xmax=48;
%normalized xmin and xmax
xmin_norm=(xmin-1)/63;
xmax_norm=(xmax-1)/63;
xf=(xmin_norm+xmax_norm)/2;  %centre of focus
sx=xmax_norm-xf;             %half of focus region size
%Magnification factor
mag=3;
%magnified focus region xminD...xmaxD
xminD=xf-mag*sx;
xmaxD=xf+mag*sx;
%Transition factor
trans=0.25;
%Calculation of transition region
Lx=xf-xminD;
Tx=trans*Lx;
Rx=Lx-Tx;
for i=1:64
    i_norm=(i-1)/63;
    % x outside focus region => x'=x
    if i_norm < xf-mag*sx
        x_norm = i_norm;
    elseif i_norm > xf+mag*sx
        x_norm = i_norm;
    % x inside focus region and transition region
    elseif i_norm < xf-Tx
        x_norm = (i_norm-xf+mag*xf)/mag;
        lens = (xf-Tx-i_norm)/Rx;
        x_norm = lens*lens*i_norm+(1-lens*lens)*x_norm;
    elseif i_norm > xf+Tx
        x_norm = (i_norm-xf+mag*xf)/mag;
        lens = (i_norm-xf-Tx)/Rx;
        x_norm = lens*lens*i_norm+(1-lens*lens)*x_norm;
    % x outside transition region
    else
        x_norm = (i_norm-xf+mag*xf)/mag;
    end
    x = round(63*x_norm + 1);
    new_values(i)=old_values(x);
end
subplot(1,2,2);
title('VolLens');
imagesc(new_values)

**************************************************

**************************************************
SimpleFC.m (2D F+C GUI)

function varargout = simplefc(varargin)
% SIMPLEFC M-file for simplefc.fig
%   SIMPLEFC, by itself, creates a new SIMPLEFC or raises the existing
%   singleton*.
%
%   H = SIMPLEFC returns the handle to a new SIMPLEFC or the handle to
%   the existing singleton*.
%
%   SIMPLEFC('CALLBACK',hObject,eventData,handles,...) calls the local
%   function named CALLBACK in SIMPLEFC.M with the given input arguments.
%
%   SIMPLEFC('Property','Value',...) creates a new SIMPLEFC or raises the
%   existing singleton*. Starting from the left, property value pairs are
%   applied to the GUI before simplefc_OpeningFcn gets called. An
%   unrecognized property name or invalid value makes property application
%   stop. All inputs are passed to simplefc_OpeningFcn via varargin.
%
%   *See GUI Options on GUIDE's Tools menu. Choose "GUI allows only one
%   instance to run (singleton)".
%
% See also: GUIDE, GUIDATA, GUIHANDLES

% Edit the above text to modify the response to help simplefc

% Last Modified by GUIDE v Aug :20:02

% Begin initialization code - DO NOT EDIT
% (the gui_OpeningFcn/gui_OutputFcn fields below are restored from the
% standard GUIDE template; the transcription lost them)
gui_Singleton = 1;
gui_State = struct('gui_Name',       mfilename, ...
                   'gui_Singleton',  gui_Singleton, ...
                   'gui_OpeningFcn', @simplefc_OpeningFcn, ...
                   'gui_OutputFcn',  @simplefc_OutputFcn, ...
                   'gui_LayoutFcn',  [], ...
                   'gui_Callback',   []);
if nargin & isstr(varargin{1})
    gui_State.gui_Callback = str2func(varargin{1});
end
if nargout
    [varargout{1:nargout}] = gui_mainfcn(gui_State, varargin{:});
else
    gui_mainfcn(gui_State, varargin{:});
end
% End initialization code - DO NOT EDIT

% --- Executes just before simplefc is made visible.
function simplefc_OpeningFcn(hObject, eventdata, handles, varargin)
% This function has no output args, see OutputFcn.

% hObject    handle to figure
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
% varargin   command line arguments to simplefc (see VARARGIN)

% Choose default command line output for simplefc
handles.output = hObject;
axes(handles.newImg);
axis off;
axes(handles.orgImg);
axis off;
% Update handles structure
guidata(hObject, handles);
% UIWAIT makes simplefc wait for user response (see UIRESUME)
% uiwait(handles.figure1);

% --- Outputs from this function are returned to the command line.
function varargout = simplefc_OutputFcn(hObject, eventdata, handles)
% varargout  cell array for returning output args (see VARARGOUT);
% hObject    handle to figure
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

% Get default command line output from handles structure
varargout{1} = handles.output;

% --- Executes on button press in FishEye.
function FishEye_Callback(hObject, eventdata, handles)
% hObject    handle to FishEye (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
off = [handles.Lens, handles.Bifocal];
mutual_exclude(off)
% Hint: get(hObject,'Value') returns toggle state of FishEye

% --- Executes on button press in Lens.
function Lens_Callback(hObject, eventdata, handles)
% hObject    handle to Lens (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
off = [handles.FishEye, handles.Bifocal];
mutual_exclude(off)
% Hint: get(hObject,'Value') returns toggle state of Lens

% --- Executes on button press in Bifocal.

function Bifocal_Callback(hObject, eventdata, handles)
% hObject    handle to Bifocal (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
off = [handles.FishEye, handles.Lens];
mutual_exclude(off)
% Hint: get(hObject,'Value') returns toggle state of Bifocal

%To make mutually exclusive radiobuttons
function mutual_exclude(off)
set(off,'Value',0)

% --- Executes during object creation, after setting all properties.
function thresholdSlider_CreateFcn(hObject, eventdata, handles)
% hObject    handle to thresholdSlider (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    empty - handles not created until after all CreateFcns called

% Hint: slider controls usually have a light gray background, change
%       'usewhitebg' to 0 to use default. See ISPC and COMPUTER.
usewhitebg = 1;
if usewhitebg
    set(hObject,'BackgroundColor',[.9 .9 .9]);
else
    set(hObject,'BackgroundColor',get(0,'defaultUicontrolBackgroundColor'));
end

% --- Executes on slider movement.
function thresholdSlider_Callback(hObject, eventdata, handles)
% hObject    handle to thresholdSlider (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
% Hints: get(hObject,'Value') returns position of slider
%        get(hObject,'Min') and get(hObject,'Max') to determine range of slider
t = get(handles.thresholdSlider,'Value');
set(handles.sliderText,'String',num2str(t));

% --- Executes on button press in Apply.
function Apply_Callback(hObject, eventdata, handles)
% hObject    handle to Apply (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

% find out radiobutton choice
if ( get(handles.FishEye,'Value'))
    choice='FishEye';
elseif ( get(handles.Lens,'Value'))
    choice='Lens';
else  % Bifocal
    choice='Bifocal';
end
axes(handles.newImg);
filename=getappdata(0,'filename');
[focusedImg,map]=imread(filename);
% imshow(focusedImg);
% colormap(map);
out=getappdata(0,'focusregion');
switch(choice)
    case 'FishEye'
        Fisheye(out(1),out(2),out(3),out(4),focusedImg,handles.newImg,handles.thresholdSlider,map);
    case 'Lens'
        Lens(out(1),out(2),out(3),out(4),focusedImg,handles.newImg,handles.thresholdSlider,map);
    case 'Bifocal'
        Bifocal(out(1),out(2),out(3),out(4),focusedImg,handles.newImg,handles.thresholdSlider,map);
end

% --- Executes on button press in OpenFile.
function OpenFile_Callback(hObject, eventdata, handles)
% hObject    handle to OpenFile (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
[filename,pathname]= uigetfile('*.jpg','Open File', 100,100);
file=strcat(pathname,filename);
setappdata(0,'filename',file);
if ~isempty(file)
    [original,map]=imread(file);
    set(handles.newImg,'HandleVisibility','on');
    set(handles.orgImg,'HandleVisibility','on');
    axes(handles.orgImg);
    colormap(map);
    imshow(original);
end

%%%%%%%%%%% FOCUS and Context Functions %%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%% Bifocal %%%%%%%%%%%%%%%%%%%%%%%
function Bifocal(xmin,xmax,ymin,ymax,oldimage,h,slider,colormap)
oxmax=size(oldimage,1);
oymax=size(oldimage,2);
xmin_norm=(xmin-1)/(oxmax-1);
xmax_norm=(xmax-1)/(oxmax-1);

ymin_norm=(ymin-1)/(oymax-1);
ymax_norm=(ymax-1)/(oymax-1);
xf=(xmin_norm+xmax_norm)/2;  %x centre of focus
yf=(ymin_norm+ymax_norm)/2;  %y centre of focus
sx=xmax_norm-xf;  %half of focus region X size
sy=ymax_norm-yf;  %half of focus region Y size
%get the value of magnification factor from slider
mag=get(slider,'Value');
%Distorted region
xminD=xf-mag*sx;
xmaxD=xf+mag*sx;
yminD=yf-mag*sy;
ymaxD=yf+mag*sy;
scaleXR=(1-xmaxD)/(1-xmax_norm);
scaleXL=xminD/xmin_norm;
scaleYR=(1-ymaxD)/(1-ymax_norm);
scaleYL=yminD/ymin_norm;
for i=1:oxmax
    for j=1:oymax
        i_norm = (i-1)/(oxmax-1);
        j_norm = (j-1)/(oymax-1);
        if i_norm < xf-mag*sx
            x_norm = i_norm/scaleXL;
        elseif i_norm > xf+mag*sx
            x_norm = 1-(1-i_norm)/scaleXR;
        else
            x_norm = (i_norm-xf+mag*xf)/mag;
        end
        if j_norm < yf-mag*sy
            y_norm = j_norm/scaleYL;
        elseif j_norm > yf+mag*sy
            y_norm = 1-(1-j_norm)/scaleYR;
        else
            y_norm = (j_norm-yf+mag*yf)/mag;
        end
        x = round((oxmax-1)*x_norm+1);
        y = round((oymax-1)*y_norm+1);
        newimage(j,i)=oldimage(y,x);
    end
end
filename=getappdata(0,'filename');
[file,map]=imread(filename);
colormap(colormap);
axes(h);
imshow(newimage);
colormap(map);
title('Bifocal');

%%%%%%%%%%%%%%%% Fisheye %%%%%%%%%%%%%%%%%%%%%%%
function Fisheye(xmin,xmax,ymin,ymax,oldimage,h,slider,colormap)

oxmax=size(oldimage,1);
oymax=size(oldimage,2);
%focal point == centre of selected rectangle
xc = (xmin+xmax)/2;
yc = (ymin+ymax)/2;
%%%%%%%%%%%%%%%%%%%%%%%%%
xf=(xc-1)/(oxmax-1);
yf=(yc-1)/(oymax-1);
%get the value of distortion from slider
dist=get(slider,'Value');
for i=1:oxmax
    for j=1:oymax
        i_norm = (i-1)/(oxmax-1);
        j_norm = (j-1)/(oymax-1);
        if i_norm < xf
            x_norm = xf-(xf*(xf-i_norm)/(xf+dist*i_norm));
        else
            x_norm = xf+(1-xf)*(i_norm-xf)/(1-xf+dist*(1-i_norm));
        end
        if j_norm < yf
            y_norm = yf-(yf*(yf-j_norm)/(yf+dist*j_norm));
        else
            y_norm = yf+(1-yf)*(j_norm-yf)/(1-yf+dist*(1-j_norm));
        end
        x = round((oxmax-1)*x_norm+1);
        y = round((oymax-1)*y_norm+1);
        newimage(j,i)=oldimage(y,x);
    end
end
filename=getappdata(0,'filename');
[file,map]=imread(filename);
colormap(map);
axes(h);
imshow(newimage);
title('Fisheye');

%%%%%%%%%%%%%%%% Volume Lens %%%%%%%%%%%%%%%%%%%%%%%
function Lens(xmin,xmax,ymin,ymax,oldimage,h,slider,colormap)
oxmax=size(oldimage,1);
oymax=size(oldimage,2);
xmin_norm=(xmin-1)/(oxmax-1);
xmax_norm=(xmax-1)/(oxmax-1);
ymin_norm=(ymin-1)/(oymax-1);
ymax_norm=(ymax-1)/(oymax-1);
xf=(xmin_norm+xmax_norm)/2;  %x centre of focus
yf=(ymin_norm+ymax_norm)/2;  %y centre of focus
sx=xmax_norm-xf;  %half of focus region X size
sy=ymax_norm-yf;  %half of focus region Y size
%get the value of magnification factor from slider
mag=get(slider,'Value');

%magnified focus region xminD...xmaxD yminD...ymaxD
xminD=xf-mag*sx;
xmaxD=xf+mag*sx;
yminD=yf-mag*sy;
ymaxD=yf+mag*sy;
%Transition factor
trans=0.5;
%Calculation of transition region
Lx=xf-xminD;
Tx=trans*Lx;
Rx=Lx-Tx;
Ly=yf-yminD;
Ty=trans*Ly;
Ry=Ly-Ty;
for i=1:oxmax
    for j=1:oymax
        i_norm = (i-1)/(oxmax-1);
        j_norm = (j-1)/(oymax-1);
        %%%%%%%%%% X axis %%%%%%%%%%%%
        % x outside focus region => x'=x
        if i_norm < xf-mag*sx || j_norm < yf-mag*sy
            x_norm = i_norm;
        elseif i_norm > xf+mag*sx || j_norm > yf+mag*sy
            x_norm = i_norm;
        % x inside focus region and transition region
        elseif i_norm < xf-Tx
            x_norm = (i_norm-xf+mag*xf)/mag;
            lens = (xf-Tx-i_norm)/Rx;
            x_norm = lens*lens*i_norm+(1-lens*lens)*x_norm;
        elseif i_norm > xf+Tx
            x_norm = (i_norm-xf+mag*xf)/mag;
            lens = (i_norm-xf-Tx)/Rx;
            x_norm = lens*lens*i_norm+(1-lens*lens)*x_norm;
        % x outside transition region
        else
            x_norm = (i_norm-xf+mag*xf)/mag;
        end
        %%%%%%%%%% Y axis %%%%%%%%%%%%
        % y outside focus region => y'=y
        if i_norm < xf-mag*sx || j_norm < yf-mag*sy
            y_norm = j_norm;
        elseif i_norm > xf+mag*sx || j_norm > yf+mag*sy
            y_norm = j_norm;
        % y inside focus region and transition region
        elseif j_norm < yf-Ty
            y_norm = (j_norm-yf+mag*yf)/mag;
            lens = (yf-Ty-j_norm)/Ry;
            y_norm = lens*lens*j_norm+(1-lens*lens)*y_norm;

        elseif j_norm > yf+Ty
            y_norm = (j_norm-yf+mag*yf)/mag;
            lens = (j_norm-yf-Ty)/Ry;
            y_norm = lens*lens*j_norm+(1-lens*lens)*y_norm;
        % y outside transition region
        else
            y_norm = (j_norm-yf+mag*yf)/mag;
        end
        x = round((oxmax-1)*x_norm+1);
        y = round((oymax-1)*y_norm+1);
        newimage(j,i)=oldimage(y,x);
    end
end
filename=getappdata(0,'filename');
[focusedImg,map]=imread(filename);
colormap(map);
axes(h);
imshow(newimage);
colormap(map);
title('Lens');

% --- Executes on button press in selection.
function selection_Callback(hObject, eventdata, handles)
% hObject    handle to selection (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
% Hint: get(hObject,'Value') returns toggle state of selection
button_state = get(hObject,'Value');
if button_state == get(hObject,'Max')
    % toggle button is pressed
    rect=getrect(gcf);
    rectangle('Position',rect,'LineStyle','-','LineWidth',1,'EdgeColor','y');
    %focus region xmin..xmax
    xmin=rect(1);
    xmax=rect(3)+xmin;
    ymin=rect(2);
    ymax=rect(4)+ymin;
    out=[xmin,xmax,ymin,ymax];
    %Transfer the value of out between functions with use of a (GUI)
    setappdata(0,'focusregion',out);
elseif button_state == get(hObject,'Min')
    % toggle button is not pressed
    clear rect,gcf;
    filename=getappdata(0,'filename');
    [file,map]=imread(filename);
    axes(handles.orgImg);
    colormap(map);
    imshow(file);
end

**************************************************
Fisheye3DSphere.m

clear all;
% 3D Grid data
x=-1:1/10:1;
y=-1:1/10:1;
z=-1:1/10:1;
xmin= min(x); xmax= max(x);
ymin= min(y); ymax= max(y);
zmin= min(z); zmax= max(z);
% xc=0.5;
% yc=0.5;
% zc=0.5;
xc=(xmin+xmax)/2;
yc=(ymin+ymax)/2;
zc=(zmin+zmax)/2;
xf=(xc-xmin)/(xmax-xmin);
yf=(yc-ymin)/(ymax-ymin);
zf=(zc-zmin)/(zmax-zmin);
dist=5;
for i=1:21
    for j=1:21
        for k=1:21
            %fold(i,j)= x(i)*x(i)+y(j)*y(j);
            fold(i,j,k)= x(i)*x(i)+y(j)*y(j)+z(k)*z(k);
            %fold(i,j,k)= y(j)*y(j)+z(k)*z(k);
        end
    end
end
grid on;
height(1)=1.0;
%contour(fold,height);
isosurface(x,y,z,fold,1);
camlight;
lighting phong;
for i=1:21
    for j=1:21
        for k=1:21
            xnorm(i)= (x(i)-xmin)/(xmax-xmin);
            ynorm(j)= (y(j)-ymin)/(ymax-ymin);
            znorm(k)= (z(k)-zmin)/(zmax-zmin);
            if xnorm(i) < xf
                xnew_norm= xf-(xf*(xf-xnorm(i))/(xf+dist*xnorm(i)));
            else

                xnew_norm= xf+(1-xf)*(xnorm(i)-xf)/(1-xf+dist*(1-xnorm(i)));
            end
            if ynorm(j) < yf
                ynew_norm= yf-(yf*(yf-ynorm(j))/(yf+dist*ynorm(j)));
            else
                ynew_norm= yf+(1-yf)*(ynorm(j)-yf)/(1-yf+dist*(1-ynorm(j)));
            end
            if znorm(k) < zf
                znew_norm= zf-(zf*(zf-znorm(k))/(zf+dist*znorm(k)));
            else
                znew_norm= zf+(1-zf)*(znorm(k)-zf)/(1-zf+dist*(1-znorm(k)));
            end
            xnew(i) = xnew_norm*(xmax-xmin)+xmin;
            ynew(j) = ynew_norm*(ymax-ymin)+ymin;
            znew(k) = znew_norm*(zmax-zmin)+zmin;
        end
    end
end
figure;
grid on
%contour(xnew,ynew,fold,height);
isosurface(xnew,ynew,znew,fold,1);
camlight;
lighting phong;

**************************************************
Fisheye3Dpolar.m

clear all;
x=-1:1/10:1;
y=-1:1/10:1;
z=-1:1/10:1;
xmin= min(x); xmax= max(x);
ymin= min(y); ymax= max(y);
zmin= min(z); zmax= max(z);
xc=0; yc=0; zc=0;
% xc=(xmin+xmax)/2;
% yc=(ymin+ymax)/2;
% zc=(zmin+zmax)/2;
% xf=(xc-xmin)/(xmax-xmin);
% yf=(yc-ymin)/(ymax-ymin);

% zf=(zc-zmin)/(zmax-zmin);
dist=0.5;
for i=1:21
    for j=1:21
        for k=1:21
            %fold(i,j)= x(i)*x(i)+y(j)*y(j);
            fold(i,j,k)= x(i)*x(i)+y(j)*y(j)+z(k)*z(k);
            %fold(i,j,k)= y(j)*y(j)+z(k)*z(k);
        end
    end
end
figure(1)
grid on;
height(1)=1.0;
%contour(fold,height);
axis vis3d;
isosurface(x,y,z,fold,1);
camlight;
lighting phong;
%distance from the focal centre to the eight grid corners
d(1) = sqrt((xmin-xc)*(xmin-xc) + (ymin-yc)*(ymin-yc) + (zmin-zc)*(zmin-zc));
d(2) = sqrt((xmin-xc)*(xmin-xc) + (ymax-yc)*(ymax-yc) + (zmin-zc)*(zmin-zc));
d(3) = sqrt((xmax-xc)*(xmax-xc) + (ymax-yc)*(ymax-yc) + (zmin-zc)*(zmin-zc));
d(4) = sqrt((xmax-xc)*(xmax-xc) + (ymin-yc)*(ymin-yc) + (zmin-zc)*(zmin-zc));
d(5) = sqrt((xmin-xc)*(xmin-xc) + (ymin-yc)*(ymin-yc) + (zmax-zc)*(zmax-zc));
d(6) = sqrt((xmin-xc)*(xmin-xc) + (ymax-yc)*(ymax-yc) + (zmax-zc)*(zmax-zc));
d(7) = sqrt((xmax-xc)*(xmax-xc) + (ymax-yc)*(ymax-yc) + (zmax-zc)*(zmax-zc));
d(8) = sqrt((xmax-xc)*(xmax-xc) + (ymin-yc)*(ymin-yc) + (zmax-zc)*(zmax-zc));
rmax = max(d);
h = 1;
for i=1:21
    for j=1:21
        for k=1:21
            % xnorm(i)= (x(i)-xmin)/(xmax-xmin);
            % ynorm(j)= (y(j)-ymin)/(ymax-ymin);
            % znorm(k)= (z(k)-zmin)/(zmax-zmin);
            [theta,r,zp]=cart2pol(x(i),y(j),z(k));
            rnorm = r/rmax;
            rnew_norm = (1+dist)*rnorm/(1 + dist*rnorm);
            rnew = rnew_norm*rmax;
            [xnew(h), ynew(h), znew(h)] = pol2cart(theta,rnew,zp);
            f(h) = fold(i,j,k);
            xold(h) = x(i);
            yold(h) = y(j);
            zold(h) = z(k);
            h = h+1;
        end
    end
end

figure(2);
grid on
plot3(xold,yold,zold,'.')
line(xold,yold,zold,'Color','r');
title('Original data');
figure(4);
grid on
plot3(xnew,ynew,znew,'.')
title('Distorted data');
% figure(5);
grid on
[xi,yi,zi] = meshgrid(-1.0:0.1:1.0);
fi = griddata3(xnew,ynew,znew,f,xi,yi,zi);
%contour(zi,height);
% contour(xnew,ynew,fold,height);
axis vis3d;
%colormap copper;
isosurface(xi,yi,zi,fi,1);
camlight;
lighting phong;

**************************************************
Bifocal3DSphere.m

x=-1:1/10:1;
y=-1:1/10:1;
z=-1:1/10:1;
%min & max of original mesh
oxmin= min(x); oxmax= max(x);
oymin= min(y); oymax= max(y);
ozmin= min(z); ozmax= max(z);
%focus region xmin..xmax; ymin..ymax; zmin..zmax
xmin=0; xmax=0.5;
ymin=0; ymax=0.5;
zmin=0; zmax=0.5;
xmin_norm=(xmin-oxmin)/(oxmax-oxmin);
xmax_norm=(xmax-oxmin)/(oxmax-oxmin);
ymin_norm=(ymin-oymin)/(oymax-oymin);
ymax_norm=(ymax-oymin)/(oymax-oymin);
zmin_norm=(zmin-ozmin)/(ozmax-ozmin);

zmax_norm=(zmax-ozmin)/(ozmax-ozmin);
xf=(xmin_norm+xmax_norm)/2;  %x centre of focus
yf=(ymin_norm+ymax_norm)/2;  %y centre of focus
zf=(zmin_norm+zmax_norm)/2;  %z centre of focus
sx=xmax_norm-xf;  %half of focus region X size
sy=ymax_norm-yf;  %half of focus region Y size
sz=zmax_norm-zf;  %half of focus region Z size
%Magnification factor
mag=1.5;
%Distorted region
xminD=xf-mag*sx;
xmaxD=xf+mag*sx;
yminD=yf-mag*sy;
ymaxD=yf+mag*sy;
zminD=zf-mag*sz;
zmaxD=zf+mag*sz;
scaleXR=(1-xmaxD)/(1-xmax_norm);
scaleXL=xminD/xmin_norm;
scaleYR=(1-ymaxD)/(1-ymax_norm);
scaleYL=yminD/ymin_norm;
scaleZR=(1-zmaxD)/(1-zmax_norm);
scaleZL=zminD/zmin_norm;
for i=1:21
    for j=1:21
        for k=1:21
            fold(i,j,k)= x(i)*x(i)+y(j)*y(j)+z(k)*z(k);
        end
    end
end
grid on;
isosurface(x,y,z,fold,0.75);
camlight;
lighting phong;
%sliceomatic(fold,x,y,z);
for i=1:21
    for j=1:21
        for k=1:21
            %normalise by the full mesh range (o-prefixed bounds), so the
            %focus centre and the coordinates are in the same [0,1] frame
            xnorm(i)= (x(i)-oxmin)/(oxmax-oxmin);
            ynorm(j)= (y(j)-oymin)/(oymax-oymin);
            znorm(k)= (z(k)-ozmin)/(ozmax-ozmin);
            if xnorm(i) < xf-mag*sx
                xnew_norm= xnorm(i)/scaleXL;
            elseif xnorm(i) > xf+mag*sx
                xnew_norm= 1-(1-xnorm(i))/scaleXR;
            else
                xnew_norm= (xnorm(i)-xf+mag*xf)/mag;
            end
            if ynorm(j) < yf-mag*sy
                ynew_norm= ynorm(j)/scaleYL;

            elseif ynorm(j) > yf+mag*sy
                ynew_norm= 1-(1-ynorm(j))/scaleYR;
            else
                ynew_norm= (ynorm(j)-yf+mag*yf)/mag;
            end
            if znorm(k) < zf-mag*sz
                znew_norm= znorm(k)/scaleZL;
            elseif znorm(k) > zf+mag*sz
                znew_norm= 1-(1-znorm(k))/scaleZR;
            else
                znew_norm= (znorm(k)-zf+mag*zf)/mag;
            end
            xnew(i) = xnew_norm*(oxmax-oxmin)+oxmin;
            ynew(j) = ynew_norm*(oymax-oymin)+oymin;
            znew(k) = znew_norm*(ozmax-ozmin)+ozmin;
        end
    end
end
figure
grid on
isosurface(xnew,ynew,znew,fold,0.75);
camlight;
lighting phong;

**************************************************
Vollens3DSphere.m

x=-1:1/10:1;
y=-1:1/10:1;
z=-1:1/10:1;
%min & max of original mesh
oxmin= min(x); oxmax= max(x);
oymin= min(y); oymax= max(y);
ozmin= min(z); ozmax= max(z);
%focus region xmin..xmax
xmin=0; xmax=0.5;
ymin=0; ymax=0.5;
zmin=0; zmax=0.5;
xmin_norm=(xmin-oxmin)/(oxmax-oxmin);
xmax_norm=(xmax-oxmin)/(oxmax-oxmin);
ymin_norm=(ymin-oymin)/(oymax-oymin);
ymax_norm=(ymax-oymin)/(oymax-oymin);
zmin_norm=(zmin-ozmin)/(ozmax-ozmin);
zmax_norm=(zmax-ozmin)/(ozmax-ozmin);
xf=(xmin_norm+xmax_norm)/2;  %x centre of focus

60 yf=(ymin_norm+ymax_norm)/2 %ycentre of focus zf=(zmin_norm+zmax_norm)/2 %zcentre of focus sx=xmax_norm-xf %half of focus region Xsize sy=ymax_norm-yf %half of focus region Ysize sz=zmax_norm-zf %half of focus region Zsize %Magnification factor mag=2; %magnified focus region xmind...xmaxd ymind...ymaxd xmind=xf-mag*sx; xmaxd=xf+mag*sx; ymind=yf-mag*sy; ymaxd=yf+mag*sy; zmind=xf-mag*sz; zmaxd=xf+mag*sz; %Transition factor trans=1; %Calculation of transition region Lx=xf-xminD; Tx=trans*Lx; Rx=Lx-Tx; Ly=yf-yminD; Ty=trans*Ly; Ry=Ly-Ty; Lz=zf-zminD; Tz=trans*Lz; Rz=Lz-Tz; %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% for i=1:21 for j=1:21 for k=1:21 fold(i,j,k)= x(i)*x(i)+y(j)*y(j)+z(k)*z(k); %fold(i,j,k)= y(j)*y(j)+z(k)*z(k); grid on; isosurface(x,y,z,fold,0.75); camlight; lighting phong; %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% for i=1:21 for j=1:21 for k=1:21 xnorm(i)= (x(i)-xmin)/(xmax-xmin); ynorm(j)= (y(j)-ymin)/(ymax-ymin); znorm(k)=(z(k)-zmin)/(zmax-zmin); %%%%%%%%%% X axis %%%%%%%%%%%% % x outside focus region => x'=x if xnorm(i) < xf-mag*sx ynorm(j) < yf-mag*sy 55

61 xnew_norm= xnorm(i); elseif xnorm(i) >xf+mag*sx ynorm(j)>yf+mag*sy xnew_norm= xnorm(i); % X inside focus region and transition region elseif xnorm(i) < xf-tx xnew_norm= (xnorm(i)-xf+mag*xf)/mag; lens= (xf-tx-xnorm(i))/rx; xnew_norm= lens*lens*xnorm(i)+(1-lens*lens)*xnorm(i); elseif xnorm(i) > xf+tx xnew_norm= (xnorm(i)-xf+mag*xf)/mag; lens= (xnorm(i)-xf-tx)/rx; xnew_norm= lens*lens*xnorm(i)+(1-lens*lens)*xnorm(i); % x outside transition region else xnew_norm= (xnorm(i)-xf+mag*xf)/mag; %%%%%%%%%% Y axis %%%%%%%%%%%% if xnorm(i) < xf-mag*sx ynorm(j) < yf-mag*sy ynew_norm= ynorm(j); elseif xnorm(i) >xf+mag*sx ynorm(j)>yf+mag*sy ynew_norm= ynorm(j); % Y inside focus region and transition region elseif ynorm(j) < yf-ty ynew_norm= (ynorm(j)-yf+mag*yf)/mag; lens= (yf-ty-ynorm(j))/ry; ynew_norm= lens*lens*ynorm(j)+(1-lens*lens)*ynorm(j); elseif ynorm(j) > yf+ty ynew_norm= (ynorm(j)-yf+mag*yf)/mag; lens= (ynorm(j)-yf-ty)/ry; ynew_norm= lens*lens*ynorm(j)+(1-lens*lens)*ynorm(j); % Y outside transition region else ynew_norm= (ynorm(j)-yf+mag*yf)/mag; %%%%%%%%%% Z axis %%%%%%%%%%%% if znorm(k) < zf-mag*sz znorm(k) < zf-mag*sz znew_norm= znorm(k); elseif znorm(k) >zf+mag*sz znorm(k)>zf+mag*sz znew_norm= znorm(k); % Z inside focus region and transition region elseif znorm(k) < zf-tz znew_norm= (znorm(k)-zf+mag*zf)/mag; lens= (zf-tz-znorm(k))/rz; znew_norm= lens*lens*znorm(k)+(1-lens*lens)*znorm(k); elseif znorm(k) > zf+tz 56

62 znew_norm= (znorm(k)-zf+mag*zf)/mag; lens= (znorm(k)-zf-tz)/rz; znew_norm= lens*lens*znorm(k)+(1-lens*lens)*znorm(k); % Z outside transition region else znew_norm= (znorm(k)-zf+mag*zf)/mag; xnew(i) = xnew_norm *(xmax-xmin)+xmin; ynew(j) = ynew_norm *(ymax-ymin)+ymin; znew(k) = znew_norm *(zmax-zmin)+zmin; figure grid on isosurface(xnew,ynew,znew,fold,0.75); camlight; lighting phong; 57
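The piecewise-linear coordinate mapping used by the bifocal listings above can be summarised in one dimension. The following is an illustrative Python sketch of the forward form of that mapping (the MATLAB listings apply the corresponding formulas per axis when warping the normalized grid coordinates); the function name and parameters are chosen here purely for illustration.

```python
def bifocal_1d(x, f_lo, f_hi, mag):
    """1D bifocal mapping of a normalized coordinate x in [0, 1].

    The focus region [f_lo, f_hi] is magnified by `mag` about its
    centre; the context on each side is compressed linearly so that
    the end points 0 and 1 stay fixed. Assumes 0 < f_lo < f_hi < 1
    and that the magnified focus still fits inside [0, 1].
    """
    xf = (f_lo + f_hi) / 2.0                  # centre of the focus region
    s = f_hi - xf                             # half-width of the focus region
    lo_d, hi_d = xf - mag * s, xf + mag * s   # magnified focus bounds
    if x < f_lo:                              # left context: [0, f_lo] -> [0, lo_d]
        return x * lo_d / f_lo
    if x > f_hi:                              # right context: [f_hi, 1] -> [hi_d, 1]
        return 1 - (1 - x) * (1 - hi_d) / (1 - f_hi)
    return xf + mag * (x - xf)                # focus: uniform magnification

# The mapping is continuous: the focus edges land exactly on the
# magnified bounds, while the end points 0 and 1 are unchanged.
print([bifocal_1d(x, 0.25, 0.75, 1.5) for x in (0.0, 0.25, 0.5, 0.75, 1.0)])
```

Applying the same mapping independently on each axis gives the 3D distortion used in the listings.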

Appendix C: Sample Evaluation Form for 2D GUI of F+C application

The aim of the following questionnaire is to evaluate the efficiency of the three distortion techniques using the interactive viewer provided in MATLAB. The specific purpose of this viewer is to evaluate the application of Fisheye, Bifocal and 2D Lens distortion for medical image analysis. The sample image provided is a CTA scan, which can be viewed in detail for the selected focus region without losing the overall context. The following definitions are provided to help in answering the questions where a comparison of these Focus and Context techniques is required. All of the techniques display the magnified focus region (the region of most interest to the user) together with the demagnified context surrounding it in the same window.

Fisheye: This technique provides the highest magnification at the focal point (the centre of the selected rectangular region); demagnification increases gradually with the distance of points from the focal point.

Bifocal: This technique magnifies the focus region while the context is uniformly demagnified in the x dimension, the y dimension, or both.

2D Lens: This technique magnifies the focus region like a magnifying glass while keeping the context unchanged.

Distortion Factor: The distortion or magnification factor can be changed with the provided slider.

The following steps should be carried out to see the effects of these techniques:
1) Select the image
2) Select the specific technique
3) Define the parameters
   a. Select the focus region
   b. Set the desired magnification (default: 2.0)
4) Apply the technique (Apply Focus+Context button)
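The fisheye behaviour described above can be illustrated with a minimal sketch. The function below uses a standard Sarkar-Brown-style 1D fisheye formula purely as an illustration of the concept; it is not necessarily the exact formulation used in this project's implementation, and the distortion parameter d here plays the role of the distortion factor set by the slider.

```python
def fisheye(x, d=2.0):
    """Sarkar-Brown style fisheye transform of a normalized distance
    x in [0, 1] from the focal point, with distortion parameter d >= 0.

    Magnification (the slope, d+1 at x=0) is highest at the focal
    point and falls off smoothly with distance, while the overall
    range [0, 1] is preserved.
    """
    return (d + 1) * x / (d * x + 1)

# Points near the focus are pushed outwards (magnified) and the far
# context is compressed; the end points stay fixed.
print(fisheye(0.0), fisheye(0.1), fisheye(0.5), fisheye(1.0))
```

With d=0 the transform reduces to the identity, so the slider can interpolate continuously between no distortion and a strong fisheye.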

1) Is it simple to understand how to use the interface?
   Yes / No. If Yes, please indicate the degree of your agreement: Fully / Mostly / Partially

2) Are you familiar with the Distortion or Focus and Context concepts?
   Yes / No. If No, does this program help you to understand the concepts? Yes / No

3) Is it easy to select the region of interest (focus region) on your selected image?
   Yes / No. If Yes, how much does it help you to identify the focused region carefully? Fully / Mostly / Partially

4) Is the program effective for viewing your focused region while keeping the context in the same window of the image? (Is the application of F+C useful?)
   Yes / No. If Yes, please indicate the degree of your agreement: Fully / Mostly / Partially

5) Do you consider these techniques more useful than simple zooming in the MATLAB image viewer?
   Yes / No. If Yes, please indicate the degree of your agreement: Fully / Mostly / Partially

6) Does the program help you, in general, to appreciate the strengths of the different Focus+Context techniques compared to each other?
   Yes / No. If Yes, please indicate the degree of your agreement: Fully / Mostly / Partially

7) Do you find the techniques useful for a better interpretation of the full image, considering your specific focused region?
   Yes / No. If Yes, please indicate the degree of your agreement: Fully / Mostly / Partially

8) Do you consider the program useful to be applied specifically to medical images? (*)
   Yes / No. If Yes, please mention your reason:

9) Which technique did you find most useful?
   Fisheye / Bifocal / Lens

10) What was the reason for selecting one technique as most useful in the above question?
   Please mention any characteristic regarding the degree of comprehensibility or clarity of the results (e.g. continuous distortion):

(*) The sample medical image is provided with this form for the experiment in question 8.

Appendix D: MSc Interim Project Report

School of Computing, University of Leeds
MSC INTERIM PROJECT REPORT

All MSc students must submit an interim report on their project to the CSO by 9am on Tuesday 12th June. Note that it may require two or three iterations to agree a suitable report with your supervisor, so you should let him/her have an initial draft well in advance of the deadline. The report should be a maximum of 10 pages long and be attached to this header sheet. It should include:
- the overall aim of the project
- the objectives of the project
- the minimum requirements of the project and further enhancements
- a list of deliverables
- resources required
- project schedule and progress report
- proposed research methods
- a draft chapter on the literature review and/or an evaluation of tools/techniques

The report will be commented upon both by the supervisor and the assessor in order to provide you with feedback on your approach and progress so far.

Student: Sanaz Ghodousi
Programme of Study: MSc Cognitive Systems
Title of project: 3D Visualization of Cerebral Aneurysms
Supervisor: Professor Ken Brodlie
External Company (if appropriate):
Web address of project log: wwwdev.comp.leeds.ac.uk/scs5sg
Signature of student:
Date: 12/06/07

Supervisor's comments on the Interim Report

Assessor's comments on the Interim Report


CIS 4930/ SCIENTIFICVISUALIZATION CIS 4930/6930-902 SCIENTIFICVISUALIZATION ISOSURFACING Paul Rosen Assistant Professor University of South Florida slides credits Tricoche and Meyer ADMINISTRATIVE Read (or watch video): Kieffer et al,

More information

Computational Medical Imaging Analysis Chapter 4: Image Visualization

Computational Medical Imaging Analysis Chapter 4: Image Visualization Computational Medical Imaging Analysis Chapter 4: Image Visualization Jun Zhang Laboratory for Computational Medical Imaging & Data Analysis Department of Computer Science University of Kentucky Lexington,

More information

Keywords Distortion-oriented presentation techniques, Information visualisation, Geographical Information Systems.

Keywords Distortion-oriented presentation techniques, Information visualisation, Geographical Information Systems. FRUSTUM : A Novel Distortion Oriented Display for Demanding Applications Paul Anderson, Ray Smith andzhongwei Thang Gippsland School of Computing and Information Technology, Monash University, Switchback

More information

Iso-surface cell search. Iso-surface Cells. Efficient Searching. Efficient search methods. Efficient iso-surface cell search. Problem statement:

Iso-surface cell search. Iso-surface Cells. Efficient Searching. Efficient search methods. Efficient iso-surface cell search. Problem statement: Iso-Contouring Advanced Issues Iso-surface cell search 1. Efficiently determining which cells to examine. 2. Using iso-contouring as a slicing mechanism 3. Iso-contouring in higher dimensions 4. Texturing

More information

CHAPTER 3 DISPARITY AND DEPTH MAP COMPUTATION

CHAPTER 3 DISPARITY AND DEPTH MAP COMPUTATION CHAPTER 3 DISPARITY AND DEPTH MAP COMPUTATION In this chapter we will discuss the process of disparity computation. It plays an important role in our caricature system because all 3D coordinates of nodes

More information

L1 - Introduction. Contents. Introduction of CAD/CAM system Components of CAD/CAM systems Basic concepts of graphics programming

L1 - Introduction. Contents. Introduction of CAD/CAM system Components of CAD/CAM systems Basic concepts of graphics programming L1 - Introduction Contents Introduction of CAD/CAM system Components of CAD/CAM systems Basic concepts of graphics programming 1 Definitions Computer-Aided Design (CAD) The technology concerned with the

More information

Surface Construction Analysis using Marching Cubes

Surface Construction Analysis using Marching Cubes Surface Construction Analysis using Marching Cubes Burak Erem Northeastern University erem.b@neu.edu Nicolas Dedual Northeastern University ndedual@ece.neu.edu Abstract This paper presents an analysis

More information

Scalar Data. Visualization Torsten Möller. Weiskopf/Machiraju/Möller

Scalar Data. Visualization Torsten Möller. Weiskopf/Machiraju/Möller Scalar Data Visualization Torsten Möller Weiskopf/Machiraju/Möller Overview Basic strategies Function plots and height fields Isolines Color coding Volume visualization (overview) Classification Segmentation

More information

Volume Illumination. Visualisation Lecture 11. Taku Komura. Institute for Perception, Action & Behaviour School of Informatics

Volume Illumination. Visualisation Lecture 11. Taku Komura. Institute for Perception, Action & Behaviour School of Informatics Volume Illumination Visualisation Lecture 11 Taku Komura Institute for Perception, Action & Behaviour School of Informatics Taku Komura Volume Illumination & Vector Vis. 1 Previously : Volume Rendering

More information

GPU-based Volume Rendering. Michal Červeňanský

GPU-based Volume Rendering. Michal Červeňanský GPU-based Volume Rendering Michal Červeňanský Outline Volume Data Volume Rendering GPU rendering Classification Speed-up techniques Other techniques 2 Volume Data Describe interior structures Liquids,

More information

From medical imaging to numerical simulations

From medical imaging to numerical simulations From medical imaging to numerical simulations Christophe Prud Homme, Vincent Chabannes, Marcela Szopos, Alexandre Ancel, Julien Jomier To cite this version: Christophe Prud Homme, Vincent Chabannes, Marcela

More information

Camera Calibration. Schedule. Jesus J Caban. Note: You have until next Monday to let me know. ! Today:! Camera calibration

Camera Calibration. Schedule. Jesus J Caban. Note: You have until next Monday to let me know. ! Today:! Camera calibration Camera Calibration Jesus J Caban Schedule! Today:! Camera calibration! Wednesday:! Lecture: Motion & Optical Flow! Monday:! Lecture: Medical Imaging! Final presentations:! Nov 29 th : W. Griffin! Dec 1

More information

What is visualization? Why is it important?

What is visualization? Why is it important? What is visualization? Why is it important? What does visualization do? What is the difference between scientific data and information data Visualization Pipeline Visualization Pipeline Overview Data acquisition

More information

INDUSTRIAL SYSTEM DEVELOPMENT FOR VOLUMETRIC INTEGRITY

INDUSTRIAL SYSTEM DEVELOPMENT FOR VOLUMETRIC INTEGRITY INDUSTRIAL SYSTEM DEVELOPMENT FOR VOLUMETRIC INTEGRITY VERIFICATION AND ANALYSIS M. L. Hsiao and J. W. Eberhard CR&D General Electric Company Schenectady, NY 12301 J. B. Ross Aircraft Engine - QTC General

More information

Computer Graphics and Visualization. What is computer graphics?

Computer Graphics and Visualization. What is computer graphics? CSCI 120 Computer Graphics and Visualization Shiaofen Fang Department of Computer and Information Science Indiana University Purdue University Indianapolis What is computer graphics? Computer graphics

More information

CS Simple Raytracer for students new to Rendering

CS Simple Raytracer for students new to Rendering CS 294-13 Simple Raytracer for students new to Rendering Ravi Ramamoorthi This assignment should be done only by those small number of students who have not yet written a raytracer. For those students

More information

https://ilearn.marist.edu/xsl-portal/tool/d4e4fd3a-a3...

https://ilearn.marist.edu/xsl-portal/tool/d4e4fd3a-a3... Assessment Preview - This is an example student view of this assessment done Exam 2 Part 1 of 5 - Modern Graphics Pipeline Question 1 of 27 Match each stage in the graphics pipeline with a description

More information

Available Online through

Available Online through Available Online through www.ijptonline.com ISSN: 0975-766X CODEN: IJPTFI Research Article ANALYSIS OF CT LIVER IMAGES FOR TUMOUR DIAGNOSIS BASED ON CLUSTERING TECHNIQUE AND TEXTURE FEATURES M.Krithika

More information

CSC 7443: Scientific Information Visualization

CSC 7443: Scientific Information Visualization Scientific Information Visualization CSC 7443, Spring 2011 9:10 am to 10:30 am, Tuesday and Thursday 104 Audubon Hall Bijaya Bahadur Karki Course Description Catalog: Study computer visualization principles,

More information

CIS 467/602-01: Data Visualization

CIS 467/602-01: Data Visualization CIS 467/602-01: Data Visualization Vector Field Visualization Dr. David Koop Fields Tables Networks & Trees Fields Geometry Clusters, Sets, Lists Items Items (nodes) Grids Items Items Attributes Links

More information

Overview. Distortion-Based Techniques. About this paper. A Review and Taxonomy of Distortion-Oriented Presentation Techniques (94 ) Wei Xu

Overview. Distortion-Based Techniques. About this paper. A Review and Taxonomy of Distortion-Oriented Presentation Techniques (94 ) Wei Xu Overview Distortion-Based Techniques Wei Xu A Review and Taxonomy of Distortion- Oriented Presentation Techniques Y.K.Leung M.D.Apperley 1994 Techniques for Non-Linear Magnification Transformations T.A.Keahey

More information

Introduction to Medical Image Processing

Introduction to Medical Image Processing Introduction to Medical Image Processing Δ Essential environments of a medical imaging system Subject Image Analysis Energy Imaging System Images Image Processing Feature Images Image processing may be

More information

3D Surface Reconstruction of the Brain based on Level Set Method

3D Surface Reconstruction of the Brain based on Level Set Method 3D Surface Reconstruction of the Brain based on Level Set Method Shijun Tang, Bill P. Buckles, and Kamesh Namuduri Department of Computer Science & Engineering Department of Electrical Engineering University

More information

Computational Foundations of Cognitive Science

Computational Foundations of Cognitive Science Computational Foundations of Cognitive Science Lecture 16: Models of Object Recognition Frank Keller School of Informatics University of Edinburgh keller@inf.ed.ac.uk February 23, 2010 Frank Keller Computational

More information

Rasterization Overview

Rasterization Overview Rendering Overview The process of generating an image given a virtual camera objects light sources Various techniques rasterization (topic of this course) raytracing (topic of the course Advanced Computer

More information