
Summary

This project has investigated methods for finding both the central axis and the diameters of blood vessels. Three kinds of methods, maximum intensity projection (MIP), shaded surface display (SSD) and reconstruction from 2D segmentation, are studied and compared. The existing approaches to 2D segmentation are classified into three categories: global thresholding, edge-based segmentation and region-based segmentation. They are compared against the requirements of blood vessel segmentation. Active contour models (or snakes) and their numeric solutions are studied in detail. Among them, the greedy snake algorithm is selected and tailored for aorta segmentation. A semi-automatic 3D blood vessel segmentation system has been designed, implemented and evaluated. It is a supervised segmentation method based on an improved greedy snake algorithm. It solves the initialisation problem in two ways: initialisation using a gradient profile and recursive snakes. The system has been tested on more than 700 CT images with satisfying results. Further study of blood vessel bifurcation, segmentation refinement and 3D model generation may lead to industrial application.

Acknowledgements

I would like to thank my supervisor, Dr. Andy Bulpitt, for his constant support and guidance throughout the project. The system would not have come into existence without his suggestion on snake initialisation. I also want to thank Mr. Bill Whyte for his suggestions on the evaluation of the system and the project.

Contents

Summary
Acknowledgements
List of figures

Chapter 1 Introduction
    1.1 Project background
    1.2 Report organization

Chapter 2 Background research
    2.1 Computed tomography
    2.2 3D blood vessel segmentation
        2.2.1 Maximum intensity projection
        2.2.2 Shaded surface display
        2.2.3 Reconstruction from CT image segmentation
    2.3 2D blood vessel segmentation
        2.3.1 Thresholding
        2.3.2 Edge-based segmentation
        2.3.3 Region-based segmentation
        2.3.4 Active contour models
    2.4 Summary

Chapter 3 Active contour models
    3.1 Active contour models (snakes)
        3.1.1 Snakes' parametrical representation
        3.1.2 Internal energy
        3.1.3 Image energy
        3.1.4 Constraint energy
    3.2 Numeric solutions
        3.2.1 Finite difference method (Kass)
        3.2.2 Dynamic programming (Amini)
        3.2.3 Greedy snake algorithm (Williams)
        3.2.4 Gradient vector flow (Xu)
    3.3 Justification of choosing greedy snakes
    3.4 Summary

Chapter 4 System analysis and design
    4.1 System development cycle
    4.2 System function modules
    4.3 Greedy snakes algorithm
        4.3.1 Image pre-processing
        4.3.2 Initialisation of snake
        4.3.3 Snake energy minimization
        4.3.4 Snake point energy minimization
        4.3.5 Automatic axis extraction from segmented object
    4.4 Initialisation of snakes
        4.4.1 Initialisation using gradient profile
        4.4.2 Recursive snake
    4.5 Analysis of greedy snakes
        4.5.1 Coefficient beta and corners
        4.5.2 Local vs. global optimal
    4.6 GUI design and MVC pattern
    4.7 Summary

Chapter 5 System implementation
    5.1 Introduction
    5.2 Implementation of greedy snakes
        5.2.1 Architecture
        5.2.2 Underlying data structure
        5.2.3 Image pre-processing
        5.2.4 Snakes initialisation
        5.2.5 Snakes energy minimisation
        5.2.6 Post-processing of snakes for evaluation
    5.3 Manual axis extraction and MVC
    5.4 Summary

Chapter 6 Testing and evaluation
    6.1 Introduction
    6.2 Segmentation of synthetic images
        6.2.1 Synthetic images
        6.2.2 Synthetic images with deteriorated quality
        6.2.3 Discussions
    6.3 Segmentation of 2D CT images
        6.3.1 Difference map
        6.3.2 Sequential snakes
        6.3.3 Segmentation of CT images
    6.4 Supervised 3D blood vessel segmentation

Chapter 7 Limitations and future work
    7.1 Limitations and improvements of the system
        7.1.1 Unfinished mission
        7.1.2 Implementation of other flavours of snake algorithms
        7.1.3 Segmentation refinement
        7.1.4 Boundary representation
        7.1.5 Implementation of full scan-line conversion algorithm
        7.1.6 3D blood vessel segmentation without supervising
        7.1.7 Evaluation method
    7.2 Future work
        7.2.1 Implementation of region growing algorithm
        7.2.2 Implementation of volume rendering, MIP and SSD
        7.2.3 Immersive virtual reality

Chapter 8 Conclusion

References
Bibliography

Appendix A Project experience
    A.1 Knowledge preparation
    A.2 Reference model
    A.3 Research methodology
    A.4 Project management
Appendix B Objectives and deliverables
Appendix C Interim report feedback
Appendix D Snake's model & view
    D.1 Class SnakeNode
    D.2 Class SnakeList
    D.3 Class GreedySnake
    D.4 Class Slithe
Appendix E 3D blood vessel segmentation
Appendix F Software usage
    F.1 Supervised 3D segmentation
    F.2 Manual segmentation and manual initialisation
    F.3 View gradient map and blurred image
    F.4 Manual axis extraction
    F.5 Evaluation tools
Appendix G Project evaluation
    G.1 Evaluation against overall objectives
    G.2 Evaluation against the minimum requirements
    G.3 Evaluation against the deliverables
    G.4 Project milestones

List of figures

Fig 2-1 CT scanner configuration
Fig 2-2 Reconstructing 2D images from 1D views
Fig 2-3 Human body CT image
Fig 2-4 Blood vessel visualization using MIP
Fig 2-5 SSD of kidneys
Fig 3-1 Snakes smooth over sharp corner
Fig 4-1 Waterfall model
Fig 4-2 Closed snakes with 5x5 searching neighbourhood
Fig 4-3 Find local minimum within 3x3 neighbourhood
Fig 4-4 Binary image with segmented blood vessel
Fig 4-5 Initialising snakes as a circle
Fig 4-6 Marching point and gradient profile
Fig 4-7 Initialising snake using ray casting
Fig 4-8 Recursive snake
Fig 4-9 Recursive snake fails
Fig 4-10 Local optimal and global optimal
Fig 4-11 Model-View-Controller architecture
Fig 5-1 GreedySnake architecture
Fig 5-2 SnakeNode
Fig 5-3 SnakeList
Fig 5-4 2D Gaussian function
Fig 5-5 Border processing and reflected index
Fig 5-6 Enlarge image using reflected index
Fig 5-7 Clamping gradient map
Fig 5-8 Bresenham algorithm
Fig 5-9 Rasterization
Fig 5-10 Scan line conversion
Fig 5-11 Manual axis extraction
Fig 6-1 Extrapolation
Fig 6-2 Synthetic images
Fig 6-3 Segmentations of synthetic images
Fig 6-4 Degraded by Gaussian noise
Fig 6-4 Degraded by salt-and-pepper noise
Fig 6-5 Degraded by artefacts
Fig 6-6 Degraded by both artefacts and Gaussian noise
Fig 6-7 Variation of segmentation error for each shape
Fig 6-8 Average segmentation error
Fig 6-9 Standard deviation of segmentation error for each shape
Fig 6-10 Influence of noise upon segmentation error
Fig 6-11 Influence of snake's initialisation
Fig 6-12 Different initialisations and different results
Fig 6-13 Different clamping thresholds
Fig 6-14 Difference map
Fig 6-15 Sequential snakes
Fig 6-16 Supervised segmentation
Fig 6-17 Only recursive snake works
Fig 7-1 Filling concave polygon

CHAPTER 1 INTRODUCTION

1.1 Project background

Cardiovascular disease is the top killer in the Western world, and early diagnosis and treatment may save many lives. Recent advances in medical science have made non-invasive diagnostic techniques, such as CT (computed tomography), widely used. These provide doctors with abundant information about the patient in the form of medical images. In a disease such as abdominal aortic aneurysm, it is necessary to assess the condition of the aorta in order to plan treatment. After the images are obtained, the location of the aorta and the location and thickness of the aorta wall must be determined. This leads to acquiring quantitative parameters of the aorta and enables qualitative 3D visualization. To achieve this goal more accurately and faster, image analysis procedures are required, of which the most challenging is image segmentation [1].

One reliable approach is manual segmentation by radiologists. However, there are usually more than a thousand CT images for one patient, so manual segmentation is laborious and time-consuming. This project attempts to address this problem by producing an automatic system for 3D blood vessel segmentation. The target is achieved by running active contour models (or snakes) in 2D space and recursive snakes across the slices of CT images.

1.2 Report organization

This report is structured into three parts. The first part is dedicated to background research: Chapter 1 gives the problem and research context of this project; Chapter 2 studies candidate methods for both 3D and 2D blood vessel segmentation; Chapter 3 studies active contour models (snakes), especially their numeric solutions. The second part develops a system for 3D blood vessel segmentation: Chapter 4 covers system design, Chapter 5 system implementation and Chapter 6 system evaluation. The last part is the conclusion: Chapter 7 points out the limitations and possible improvements of this system and suggests future work; Chapter 8 concludes the project.

CHAPTER 2 BACKGROUND RESEARCH

2.1 Computed tomography

Tomography refers to the cross-sectional imaging of an object from either transmission or reflection data collected by illuminating the object from many different directions. Computed tomography is a kind of radiography in which a three-dimensional image of a body structure is constructed by computer from a series of plane cross-sectional images made along an axis [2][3].

Fig 2-1 CT scanner configuration: (a) rotating detectors, (b) stationary detectors [4]

The most common CT technology is x-ray CT, which is performed using a CT scanner. Fig 2-1 gives the configuration of a CT scanner. The x-ray tube is where the x-rays are emitted, and the detectors record the intensities of the x-rays after they pass through the human body. The x-ray tube rotates 360 degrees around the body in fixed small angle steps [4]. In (a) both the x-ray tube and the detectors rotate; in (b) the x-ray tube rotates and the detectors are fixed.

The record of each round of x-ray tube rotation is a series of 1D intensity images (Fig 2-2 a) called views. Fundamentally, tomographic imaging deals with reconstructing an image from its views. Reconstruction of a 2D cross-section image of the human body from these 1D views is carried out by a technique known as filtered back projection. This technique projects the views back along the beam axis into the image plane; summing these back-projected images over all views generates an image showing a cross-section through the object. Fig 2-2 shows an example of how to reconstruct an ellipse from its views. [3]
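To make the back-projection step concrete, the following is a minimal sketch (not part of this project's code) of unfiltered back projection for parallel-beam views. A practical filtered back projection would first apply a ramp filter to each view; the array layout views[angle][detector] and the helper names are illustrative assumptions.

```java
/**
 * Minimal sketch of (unfiltered) back projection for parallel-beam CT,
 * assuming views[a][t] holds the 1D projection taken at angle a, with the
 * detector axis centred on the image centre. A practical filtered back
 * projection would ramp-filter each view before this summation step.
 */
public final class BackProjection {

    public static double[][] backProject(double[][] views, int size) {
        int numAngles = views.length;
        int numDetectors = views[0].length;
        double[][] image = new double[size][size];
        double centre = (size - 1) / 2.0;
        double detCentre = (numDetectors - 1) / 2.0;

        for (int a = 0; a < numAngles; a++) {
            double theta = Math.PI * a / numAngles;   // angle of this view
            double cos = Math.cos(theta), sin = Math.sin(theta);
            for (int y = 0; y < size; y++) {
                for (int x = 0; x < size; x++) {
                    // Distance of pixel (x, y) from the rotation centre,
                    // projected onto the detector axis of this view.
                    double t = (x - centre) * cos + (y - centre) * sin + detCentre;
                    int ti = (int) Math.round(t);
                    if (ti >= 0 && ti < numDetectors) {
                        // Smear the view value back along the beam direction.
                        image[y][x] += views[a][ti];
                    }
                }
            }
        }
        return image;
    }
}
```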

Fig 2-2 Reconstructing 2D images from 1D views [3]

After each round of rotation, the CT scanner moves up or down to acquire another cross-section image. In this way, a 3D image of the human body is generated as a stack of 2D slices, each separated by a small fixed distance. In this project, the starting point is such a stack of 2D cross-section images of the human body. Fig 2-3 shows one of them.

Fig 2-3 Human body CT image

2.2 3D blood vessel segmentation

In [5] Bert Verdonck summarizes the approaches to visualising and segmenting blood vessels in volume data. The simplest way to look at the data is on a slice-by-slice basis; as an extension, the slices can be oriented along different directions (a process called reformatting). A second approach gathers information from the data volume into a projection image; two popular techniques are maximum intensity projection (MIP) and shaded surface display (SSD). More accurate blood vessel segmentation, however, depends on reconstruction from segmented volume data.

2.2.1 Maximum intensity projection

The most common technique used in the visualization of blood vessels is maximum intensity projection (MIP). It is based on volume rendering using ray casting [6]. Rays are cast from each pixel of the image into the volume data, which is regarded as a composition of multiple slices. Along each ray, only the maximum intensity is chosen as the final intensity of that pixel, instead of compositing the intensities along the ray; i.e. MIP finds the brightest voxel from all the slices at each pixel location. It is less expensive in terms of computation because it only finds the maximum intensity value instead of performing an integral, so it is possible to view the volume data from different angles in real time.

The common assumption about human CT images is that the intensity of blood vessels is higher than that of other tissues and organs, so MIP is widely used in angiography, by which doctors gain a general idea of the whole blood vessel system. The main drawback of maximum intensity projection is its poor accuracy. The resulting images are hampered by the presence of non-vascular structures, since intensity is the only criterion. For instance, in CT images bony structures sometimes have a higher intensity than vessels, which reduces the ability to inspect the vessel structure. In this project we are concerned with the aorta, which is often entangled with, and therefore shielded by, tributary or small blood vessels (Fig 2-4). So MIP is not suitable for this project.

Fig 2-4 Blood vessel visualization using MIP [6]
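To illustrate the "brightest voxel along the ray" idea, here is a minimal sketch of an axial MIP, assuming the volume is simply a stack of slices and the projection is taken straight down the slice axis (a general MIP casts rays at arbitrary viewing angles); it is not part of the project's code.

```java
/**
 * Minimal sketch of an axial maximum intensity projection (MIP), assuming the
 * volume is a stack of 2D slices addressed as volume[slice][row][column].
 */
public final class Mip {

    public static int[][] axialMip(int[][][] volume) {
        int depth = volume.length;
        int rows = volume[0].length;
        int cols = volume[0][0].length;
        int[][] projection = new int[rows][cols];

        for (int r = 0; r < rows; r++) {
            for (int c = 0; c < cols; c++) {
                int max = Integer.MIN_VALUE;
                // Keep only the brightest voxel along the ray through (r, c).
                for (int z = 0; z < depth; z++) {
                    max = Math.max(max, volume[z][r][c]);
                }
                projection[r][c] = max;
            }
        }
        return projection;
    }
}
```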

2.2.2 Shaded surface display

Shaded surface display (SSD) is also known as iso-surfacing. It fits geometric primitives such as polygons or patches to constant-value contour surfaces in the volume data. The algorithm consists of three principal steps [7]: the first step is segmentation and the calculation of the normalized gradient vector at the surface points; the second step is the shading procedure, where realistic 3D effects can be obtained using Phong's shading formula; the third step is using the marching cubes algorithm to construct the 3D surface [6].

SSD can be regarded as a simplified volume rendering technique: volume rendering degenerates into iso-surfacing when the transfer function becomes a threshold, and that threshold is the value of the contour surface. So fundamentally SSD is a kind of thresholding which works on the volume data directly. Thresholding is notorious for its inaccuracy and unpredictable effects. In our CT images, non-homogeneous contrast exists over the complete volume, so measurements made on these images based on a single threshold appear to be extremely inaccurate. SSD is often used by surgeons to gain preoperative knowledge of an organ and its relationship to other important structures [7]. See Fig 2-5.

Fig 2-5 SSD of kidneys [7]

2.2.3 Reconstruction from CT image segmentation

A more accurate and reliable approach is to reconstruct the 3D blood vessel from 2D image segmentation. This subject has been researched for some time and different methods have been proposed. These methods vary in accuracy and level of automation, and all involve some degree of user interaction. They can be roughly classified into two categories: direct 3D segmentation, and reconstruction of a 3D model based on 2D segmentations. In the first category, segmentation is carried out in 3D space directly: a 3D balloon model is adopted and grown until it fits the blood vessel, the result being a 3D blood vessel model. In the second category, the 2D images are segmented first and the segmentations are used to reconstruct the 3D model.

In the second category, the common approach often consists of three major steps.

Step 1: blood vessel axis extraction. In this step, the centre point of the blood vessel is located in each CT image, manually or automatically. These centroids form a tree-like model. Automatic extraction of the blood vessel central axis is a complex problem which involves intensive research; several papers present different methods with various levels of success. Because this is the foundation of the later processing, many researchers prefer manual extraction.

Step 2: generation of the blood vessel contour from each blood vessel centre. Several methods have been proposed. In [5] the following method is used: for each CT image, rays are cast outwards from the found centre; along each ray, the point with the maximum intensity is taken as a boundary point; the blood vessel boundary is then generated through interpolation or spline fitting. In this algorithm, using maximum intensity points to delineate the boundary is inaccurate, but using spline fitting to obtain the boundary is a good idea. Other adopted methods include region growing and active contours. In this project active contours (snakes) are adopted; the comparison between them is discussed later.

Step 3: reconstruction of the 3D blood vessel model. From the boundary of the blood vessel in each CT image, the 3D model is reconstructed. Since the axis is not perpendicular to the CT images, these boundaries do not represent the real cross-sectional size of the vessel, although they can be used to construct its 3D model. For each centre point, the axis information can be used to generate an orthogonal cutting plane through the 3D vessel and obtain the true cross-section image.

The approach of this project belongs to the second category. The recursive snake algorithm is designed so that step 2 can be carried out without first completing step 1.

2.3 2D blood vessel segmentation

In the process of image segmentation, an image is divided into separate regions that are homogeneous with respect to a chosen property such as brightness, colour, reflectivity or texture. There are two problems concerning image segmentation: one is image data ambiguity, which often leads to false segmentation of the medical image; the other comes from noise and sampling artifacts, which deteriorate medical image quality and sabotage the precision of the segmentation results. Image segmentation methods can be roughly categorized as follows:
(1) Global approaches, e.g. using a histogram of image features;
(2) Edge-based approaches, e.g. border tracing, Hough transforms;
(3) Region-based approaches, e.g. region growing, region merging and splitting.

2.3.1 Thresholding

Thresholding converts an input image into a bi-level (binary) image. For example, given a grey-level image, thresholding is defined as:

g(i, j) = \begin{cases} 1 & f(i, j) \ge T \\ 0 & f(i, j) < T \end{cases}    (2-1)

where g(i, j) is the pixel grey level at position (i, j) of the output image, f(i, j) is the pixel grey level at position (i, j) of the input image, and T is the grey-level threshold.

In the case of blood vessel segmentation:
(1) the grey level within an aorta object varies;
(2) the grey level of aorta objects varies among different CT images;
(3) objects other than the blood vessel within the image have a similar or identical grey level.

So a single global threshold is not suitable for this project because of the above ambiguities. A possible improvement is to adopt a variable threshold, that is, to divide the image into sub-images and apply a different threshold to each sub-image. But this also needs some prior knowledge about the image, which is often unavailable in the case of blood vessel segmentation. This analysis does not lead to the conclusion that thresholding will not be used in this project; rather, it means that global thresholding is not suitable for segmenting the blood vessel directly. Actually, thresholding is very useful in image pre-processing and image subdivision.

2.3.2 Edge-based segmentation

Edge-based segmentations rely on edges found in an image by edge-detecting operators such as Roberts, Prewitt, Sobel and Laplacian. After the image is convolved with these operators, the resulting edges mark image locations of discontinuities in grey level, colour, texture, etc. The following processing steps combine edges into edge chains that correspond better with borders in the image. This step needs additional prior knowledge about the segmented object. For example, a commonly used method of this class in medical image processing is border detection as graph searching, where the prior knowledge includes the approximate starting and ending points of the border. Even some relatively weak additional requirements, such as smoothness or low curvature, may also be included as prior knowledge. Other methods, such as the generalized Hough transform, are also based on prior knowledge. The demand for prior knowledge prevents these methods from being candidates for this project.

2.3.3 Region-based segmentation

One of the most popular methods of this category is region growing. Homogeneity of regions is used as the main segmentation criterion in region growing. The resulting regions of the segmented image must be both homogeneous and maximal: homogeneous means all the pixels within the region comply with the homogeneity criterion; maximal means the region cannot grow any further under this criterion. Region growing may simply start from a single pixel. The homogeneity criterion is then applied to its

connected pixels. Two kinds of pixel connectivity are commonly used: 4-connectivity and 8-connectivity. In the former, the pixels above, below, left and right of the current pixel are examined; in the latter, the four diagonal pixels (northwest, northeast, southwest and southeast) are added. If a pixel meets the homogeneity criterion, it is added to the region. The region expands by absorbing more neighbouring pixels and stops growing when no more neighbouring pixels satisfy the criterion. The whole image is segmented when:
(1) every pixel is assigned to one region and only one region;
(2) each region is homogeneous with respect to the homogeneity criterion;
(3) each region is uniform;
(4) each region is maximal.

Region growing techniques are generally better than edge-based techniques in noisy images where edges are extremely difficult to detect. Region growing is widely used in blood vessel segmentation.

2.3.4 Active contour models

(1) Problems with image segmentation techniques

The aforementioned techniques are common in the field of image processing, but several demanding problems are involved with them.

Prior knowledge. Some knowledge about the object to be segmented, such as shape, intensity or texture, is a must. This is a strong precondition and sometimes it is impossible to acquire. For example, template matching has a strict demand for prior knowledge of shape and orientation, and region-based segmentation demands an unambiguous homogeneity characteristic of the object to be segmented. Although other methods such as the generalized Hough transform weaken this constraint, it remains an obstacle, or the computation involved becomes too large to be practical. Some methods which adopt edge finding need post-processing or high-level interaction to extract the border of the desired object from entangled and gapped edges.

Presence of noise and sampling artifacts. Noise and sampling artifacts are extremely common in real-world images. Their presence often deteriorates the performance of segmentation techniques and can even cause them to fail completely or to require some kind of manual post-processing to rectify the resulting object.

Restriction on object shape. Most of the classical edge detection techniques work by convolving the image with horizontal and vertical filters, although sometimes the operators can be adjusted to test diagonals. This often leads to favouring the detection of rectilinear objects.

(2) Problems concerning CT images

In the case of blood vessel segmentation, there are three problems concerning the blood vessel object in CT images, according to [1]:
(1) the aorta is in many places connected with tissue that has the same optical density;
(2) smaller vessels that branch from the aorta should not be included in the segmentation result;
(3) the aorta does not have a uniform optical density through the volume.

Global thresholding cannot segment the blood vessel because, as discussed above, intensity is its only criterion, and the vessel's intensity is ambiguous with that of the contiguous tissues. Region growing does not fit this project with respect to problems 1 and 2: if the connected tissue has the same optical density as the aorta, the algorithm cannot differentiate them and thus fails; more often than not, the region will cross the hazy dividing line between the blood vessel and tissue that appears disconnected to the human eye, so it cannot successfully segment the aorta. Because of problem 3, it is also hard to find a practical intensity criterion to segment blood vessels.

(3) Justification of choosing snakes

Because of the above problems, a simplified method [5] was proposed which delineates the boundary as a series of points where the maximum intensity occurs. Observation of the CT images shows, however, that maximum intensity cannot be used as the rule for deciding whether a point belongs to the blood vessel, since other structures such as bones may have a higher intensity than the vessel. This is the same problem as in MIP: poor accuracy. The question, then, is how snakes address the above problems.

(1) Snakes are energy-minimizing splines. Unlike edge-finding operators which favour points along horizontal, vertical or diagonal edges, they treat every edge point equally. Therefore, they can fit the boundary of an irregular shape quite well.
(2) Snakes can be roughly classified as an edge-based segmentation technique. They do not depend on the homogeneity of objects; rather, they are based on the edge gradient. So, if properly processed, the gradient information can be enhanced to overcome the obstacle of hazy dividing lines between the blood vessel and other objects.
(3) Snakes do not demand prior knowledge about the object's features in terms of shape, intensity, texture, etc. What snakes need is a proper initialization, which is easier to obtain than those features.
(4) With snakes, the user is at liberty to segment only the aorta instead of tributary vessels, or may specify only those blood vessels larger than a certain size.

Incidentally, compared with its peer techniques, snakes are not widely used in blood vessel segmentation; according to the literature review, region growing is the predominant method.

2.4 Summary

This chapter starts with the volume data in the form of stacks of CT images. Three methods, iso-surfacing (SSD), volume rendering (MIP) and reconstruction from 2D segmentation, are introduced and compared, and the conclusion is reached that the third method suits this project best. Next, how to segment objects in 2D images is discussed. Three classes of segmentation techniques, global thresholding, edge-based segmentation and region-based segmentation, are discussed with a focus on their problems; these are the common problems of 2D image segmentation, and in real CT images there are further problems that degrade the quality of the segmentation. Finally, the justification for choosing snakes is stated.

CHAPTER 3 ACTIVE CONTOUR MODELS

3.1 Active contour models (snakes)

Active contour models (snakes) [8] were introduced by Kass, Witkin and Terzopoulos in 1987 as a solution to finding salient contours, such as edges and object boundaries, in digital images. Active contour models can be used in image segmentation and understanding, and are also suitable for the analysis of dynamic image data, such as motion tracking, and of 3D image data. [8] gives the foundation of active contour models and outlines the underlying equations of snakes. Other flavours of snakes are variations of this snake, which is also called the traditional or classical snake.

3.1.1 Snakes' parametrical representation

Snakes are defined as energy-minimizing splines whose energy depends on their shape and their location within the image. A snake starts from some arbitrary initial shape and location within the image, then deforms into the desired shape and moves into the desired location to align with the object's boundary or edge. The deformation and movement of the snake are driven by its intrinsic tendency to minimize its energy. The energy functional of the snake is defined such that the snake's energy reaches a local minimum when the snake aligns with the boundary of the object of interest in the image. In [8], snakes are defined parametrically as:

v(s) = [x(s), y(s)],  s \in [0, 1]    (3.1)

where x(s), y(s) are the x, y coordinates along the contour and s represents the arc length, with values in [0, 1]. Every point on this curve moves in order to minimize the snake energy function (3.2), which is composed of three energy components:

E = \int_0^1 [ E_{int}(v(s)) + E_{image}(v(s)) + E_{con}(v(s)) ] ds    (3.2)

where E_{int} is the internal energy, E_{image} is the image energy and E_{con} is the constraint (external) energy.

3.1.2 Internal energy

The internal energy of each point of the snake is defined as:

E_{int} = \frac{1}{2} ( α |v'(s)|^2 + β |v''(s)|^2 )    (3.3)

where:

v'(s) = dv(s)/ds defines the deformation along the tangential direction at the point;
v''(s) = d^2 v(s)/ds^2 defines the deformation of the curvature at the point;
α, β are weighting parameters that control the snake's elasticity and bending respectively: α is the elasticity coefficient and β is the stiffness coefficient.

It is easy to see the snake's physical analogy: a rubber band, which resists elongating as well as bending. The snake's internal energy is the integral of the point energy (3.3) over the whole snake; it is separated into two parts for the convenience of explanation. The first part of (3.3) is called the elastic energy and the second part the bending energy. They are integrated respectively to get the elastic energy and bending energy of the whole snake:

E_{elastic} = \frac{1}{2} \int α(s) |v'(s)|^2 ds    (3.4)

E_{bending} = \frac{1}{2} \int β(s) |v''(s)|^2 ds    (3.5)

Generally speaking, α(s) and β(s) are the same for every point of the snake; in this report they are treated as constants unless otherwise specified. The snake's deformation tendency can be interpreted from the minima of formulas (3.4) and (3.5).

(1) Elastic force. Formula (3.4) reaches its minimum when v'(s) = 0, i.e. v(s) = const, which means every point of the snake is located at the same position. In other words, the elastic force makes the snake shrink towards a single point.

(2) Bending force. Formula (3.5) reaches its minimum when v''(s) = 0, which amounts to:

v(s) = cs  (c is a constant)    (3.6)

Formula (3.6) is the equation of a line. Geometrically, v''(s) represents the curvature of the snake, so the snake tries to become a line or to smooth over sharp corners where the curvature value is high.

Fig 3-1 Snakes smooth over sharp corner (β ≠ 0)

In Fig 3-1, the sharp corner C has a higher curvature value than C'. Assume that the arc AC'B has the same radius as ACB. The curvature value at C is greater than that at C' because the length AC + CB is larger than the arc length of AC'B.
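For illustration only (this is not code from the report), here is a minimal sketch of evaluating the discrete elastic and bending energies of formulas (3.4) and (3.5) for a closed snake, using the standard finite-difference approximations (see section 3.2) with unit step h = 1; the coordinate arrays and the constant α, β are assumed inputs.

```java
/**
 * Minimal sketch: discrete elastic and bending energy of a closed snake,
 * using v'(i) ~ v(i) - v(i-1) and v''(i) ~ v(i-1) - 2 v(i) + v(i+1).
 * The snake is given as arrays of x and y coordinates; alpha and beta are
 * treated as constants.
 */
public final class InternalEnergy {

    public static double elastic(double[] x, double[] y, double alpha) {
        int n = x.length;
        double e = 0.0;
        for (int i = 0; i < n; i++) {
            int p = (i - 1 + n) % n;                 // previous point (closed contour)
            double dx = x[i] - x[p], dy = y[i] - y[p];
            e += dx * dx + dy * dy;                  // |v'(i)|^2
        }
        return 0.5 * alpha * e;
    }

    public static double bending(double[] x, double[] y, double beta) {
        int n = x.length;
        double e = 0.0;
        for (int i = 0; i < n; i++) {
            int p = (i - 1 + n) % n, q = (i + 1) % n;
            double cx = x[p] - 2 * x[i] + x[q];      // second difference of x
            double cy = y[p] - 2 * y[i] + y[q];      // second difference of y
            e += cx * cx + cy * cy;                  // |v''(i)|^2
        }
        return 0.5 * beta * e;
    }
}
```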

3.1.3 Image energy

The image energy E_{image}(v(s)) is derived from the image so that it takes smaller values at the features of interest, such as the boundary. For example, given a grey-level image I(x, y), the snake is attracted to the object's boundary, which has large image gradient strength and small image energy, if we set:

E_{image} = -|\nabla I(x, y)|^2    (3.7)

More often than not, however, the image energy is defined as:

E_{image} = -|\nabla ( G_σ(x, y) * I(x, y) )|^2    (3.8)

where G_σ(x, y) is a 2D Gaussian function with standard deviation σ and \nabla is the gradient operator. The Gaussian function is introduced at the cost of the boundary being blurred and expanded. This reduces the precision with which the snake converges on the local minimum, but the benefit is that the snake can be attracted to this minimum from further away, and a larger standard deviation σ results in a larger capture range. In [8], (3.7) is called the edge functional and (3.8) the scale-space functional.

3.1.4 Constraint energy

The constraint energy E_{con}(v(s)) is an external force for high-level interaction. These forces are typically based on higher-order constraints relating to more global strategies, such as the relation to other objects in the image, or coercive forces pushing the snake towards or away from particular areas local to the snake. It provides a means of applying external constraints, either from higher-level shape information or from user-applied energy via interaction.

3.2 Numeric solutions

3.2.1 Finite difference method (Kass)

[8] gives a numeric solution to the snakes using the finite difference method. The snake is discretized into N points and the snake energy (3.2) becomes the sum of those N points' energies:

E^*_{snake} = \sum_{i=1}^{N} [ E_{int}(i) + E_{ext}(i) ]    (3.9)

In formula (3.1), the parameter s is replaced by i·h, where h is a fixed step:

v(s) = v(i) = (x_i, y_i) = ( x(i·h), y(i·h) ),  i = 1 \ldots N    (3.10)

The derivatives v'(s) and v''(s) within the internal energy are then approximated with finite differences:

dv(s)/ds \approx (v_i - v_{i-1}) / h    (3.11)

d^2 v(s)/ds^2 \approx (v_{i-1} - 2 v_i + v_{i+1}) / h^2    (3.12)

Let f_x(i) = \partial E_{ext} / \partial x_i and f_y(i) = \partial E_{ext} / \partial y_i. Minimizing formula (3.9) now amounts to solving the following Euler equations:

α_i (v_i - v_{i-1}) - α_{i+1} (v_{i+1} - v_i) + β_{i-1} [v_{i-2} - 2 v_{i-1} + v_i] - 2 β_i [v_{i-1} - 2 v_i + v_{i+1}] + β_{i+1} [v_i - 2 v_{i+1} + v_{i+2}] + ( f_x(i), f_y(i) ) = 0    (3.13)

[8] solves (3.9), which is now composed of N Euler equations of the form (3.13), using a matrix method. Formula (3.13) is rewritten in matrix form as:

A X + f_x(X, Y) = 0
A Y + f_y(X, Y) = 0    (3.14)

where A is a pentadiagonal banded matrix. To solve (3.14), the right-hand side of each equation is set equal to the product of a step size γ and the negative time derivative of the left-hand side:

A X_t + f_x(X_{t-1}, Y_{t-1}) = -γ (X_t - X_{t-1})
A Y_t + f_y(X_{t-1}, Y_{t-1}) = -γ (Y_t - Y_{t-1})    (3.15)

Equations (3.15) can then be solved iteratively by matrix inversion:

X_t = (A + γ I)^{-1} ( γ X_{t-1} - f_x(X_{t-1}, Y_{t-1}) )
Y_t = (A + γ I)^{-1} ( γ Y_{t-1} - f_y(X_{t-1}, Y_{t-1}) )    (3.16)

3.2.2 Dynamic programming (Amini)

In [9], some problems concerning the finite difference method are identified and a new solution based on dynamic programming is proposed. It adopts the same discretization and reaches the same Euler equation (3.13) as [8]. The snake is then made dynamic by introducing a time parameter t, so that v(s) becomes v(s, t), giving equation (3.17):

v_t(s, t) = α v''(s, t) - β v''''(s, t) - \nabla E_{ext}    (3.17)

When the solution v(s, t) stabilizes, the term v_t(s, t) vanishes and we obtain a solution of the Euler equation (3.13).

3.2.3 Greedy snake algorithm (Williams)

[10] suggests a greedy snake algorithm which has been widely adopted. This snake is implemented by minimizing the following energy:

E = \int [ α(s) E_{cont} + β(s) E_{curv} + γ(s) E_{image} ] ds    (3.18)

Equation (3.18) is similar to the traditional snake (3.2), but it simplifies the external energy by excluding E_{con}, and the parameter γ is added to scale the edge strength. The first two terms correspond to E_{int}(v(s)) and the third to E_{image}(v(s)) in equation (3.2); the factor 1/2 is constant and is absorbed into the coefficients.

At the stage of discretization, [10] proposes new approximations for v'(s) and v''(s).

(1) v'(s) approximation:

|dv(s)/ds| \approx \bar{d} - |v_i - v_{i-1}|    (3.19)

where \bar{d} is the average distance between adjacent snake points. Compared with the approximation of equation (3.11), formula (3.19) prevents the snake from shrinking into a single point and also keeps the snake points approximately equidistant.

(2) v''(s) approximation. In [10], five ways of estimating the curvature are compared and two of them are selected:

d^2 v(s)/ds^2 \approx v_{i-1} - 2 v_i + v_{i+1}    (3.20)

d^2 v(s)/ds^2 \approx u_i / |u_i| - u_{i+1} / |u_{i+1}|    (3.21)

where u_i = (x_i - x_{i-1}, y_i - y_{i-1}) and u_{i+1} = (x_{i+1} - x_i, y_{i+1} - y_i).

Formula (3.20) is used for the curvature energy computation; it is actually the same as equation (3.12), since h is constant. Formula (3.21) is used to judge whether a corner occurs; β is then relaxed to 0 at that point to allow the snake to align with the corner.

Unlike the above methods, the greedy snake does not try to solve the equivalent Euler equation (3.13) to obtain the final snake. It adopts a point-based method: the snake reaches its final shape and location when all the points stop moving or the number of moved points falls below a threshold. Each point of the snake moves to the minimum-energy position within a search neighbourhood (e.g. 3x3 or 5x5). The energy of each position in the neighbourhood is computed according to formulas (3.19), (3.20) and (3.8). The details of the greedy snake can be found in Chapter 4.

3.2.4 Gradient vector flow (Xu)

Gradient vector flow (GVF) was originally introduced by Xu and Prince for segmenting the cerebral cortex. It improves the traditional snake in two respects: limited capture range and convergence to boundary concavities [11][12][13]. GVF is a static external force field F^g_{ext} = w(x, y) = [u(x, y), v(x, y)] that minimizes the energy functional:

ε = \int\int [ µ ( u_x^2 + u_y^2 + v_x^2 + v_y^2 ) + |\nabla f|^2 |w - \nabla f|^2 ] dx dy    (3.22)

where µ is set according to the amount of noise present in the image and f(x, y) = |\nabla ( G_σ(x, y) * I(x, y) )|^2 is the edge map. The GVF snake is obtained by replacing the potential force (the gradient term) in the dynamic equation (3.17) with w:

v_t(s, t) = α v''(s, t) - β v''''(s, t) + w    (3.23)

The equivalent Euler equations are:

µ \nabla^2 u - (u - f_x)(f_x^2 + f_y^2) = 0
µ \nabla^2 v - (v - f_y)(f_x^2 + f_y^2) = 0    (3.24)

The method used to solve these equations is essentially the same as the one used to solve the traditional snake.

3.3 Justification of choosing greedy snakes

The characteristics of each of the above numeric methods can be found in the relevant papers. [9] concludes that the finite difference method lacks numeric stability and has a tendency for points to bunch up on strong portions of an edge contour, and that dynamic programming is more stable and allows the inclusion of hard constraints in addition to the soft constraints inherent in the formulation of the functional. From the perspective of computing cost, the finite difference method is the most expensive because it has to solve a sequence of linear equations. The method for solving the GVF snake is essentially the same as dynamic programming. [10] points out that dynamic programming has complexity O(nm^3), where n is the number of snake points and m is the size of the search neighbourhood; greedy snakes retain the stability of dynamic programming and lower the complexity to O(nm).

Given the scenario of our CT images, the question is whether it is worthwhile to use GVF, which has the same complexity as dynamic programming, instead of adopting greedy snakes. The answer is no. The major concern is whether the snake should move across small gaps in the boundary. Xu introduced GVF with the intention of making snakes move into the concavities which frequently occur in the cerebral cortex. In our case, a small gap in the boundary is often the blurred part of the boundary between the aorta and other tissues, which the snake should not cross. Therefore it is desirable to choose greedy snakes as the means of 2D blood vessel segmentation.

3.4 Summary

In this chapter, we examined the parametric representation of snakes, with each energy component interpreted mathematically and physically. Four kinds of numeric solutions were then briefly discussed and compared. Finally, we reached the conclusion of choosing greedy snakes.

CHAPTER 4 SYSTEM ANALYSIS AND DESIGN

4.1 System development cycle

It is necessary to plan before undertaking a large project like this. The waterfall model [14] is used as the system development guideline for this project; it is shown in Fig 4-1.

Fig 4-1 Waterfall model

In [14], the waterfall model is a well-defined development process in which one phase has to be finished before the next begins. The model can be used when the requirements are well understood and defined. It enables an individual to break a large, complicated project into small component steps whose successes contribute to the success of the whole project. It is justifiable to choose the waterfall model for this project because:
(1) the requirement is clearly stated at the beginning: 3D blood vessel segmentation;
(2) the requirement does not change often: it is still 3D blood vessel segmentation at the end.

4.2 System function modules

As stated in the agreed minimum requirements (Appendix B), this project has two major function modules: manual axis extraction and greedy snakes. Both utilise the Model-View-Controller (MVC) software architecture to implement a graphical user interface, since both have to support user interaction. The former is an application of Java Swing programming and Java Advanced Imaging; although it involves a lot of code, it is trivial compared with greedy snakes, and in hindsight manual axis extraction is circumvented by the recursive snake. The majority of this chapter is about greedy snake design and analysis. A brief introduction of

the MVC architecture is also given to pave the way for the next chapter.

4.3 Greedy snakes algorithm

With reference to [10], the algorithm is divided into three steps: image pre-processing, snake initialisation and energy minimization. The first two steps are almost the same for all snake algorithms; what distinguishes this algorithm from other flavours of snakes lies in step 3. For the sake of clarity, step 3 is subsequently divided into two parts: the snake's energy minimization and the snake point's energy minimization.

4.3.1 Image pre-processing

First, the original image is processed in order to obtain the image energy. In this step, a Gaussian filter is applied to blur the image and the Sobel operator is applied to generate the gradient map of the image. Usually, the Sobel operator is applied before Gaussian blurring, following the idea of first generating edges and then blurring them to acquire the scale space within which the snake may be attracted to the boundary. In this project the reverse sequence is adopted, based on the idea that Gaussian blurring first removes noise as well as enlarging the edges; otherwise, when noise is present, Gaussian blurring has to be used twice, once to remove noise and once to blur the edges (although two Gaussian blurs can be combined into one). Gaussian blurring also weakens the edges, however, and therefore increases the segmentation error. The details are discussed in a later chapter.

4.3.2 Initialisation of snake

The initialisation of the snake consists of two aspects:
(1) initialisation of the snake's geometry: the points of the snake need to be placed near the boundary of the object to be segmented, with a contour similar to the boundary;
(2) initialisation of the snake's controlling parameters: for each point of the snake, the three controlling coefficients (α, β, γ) are assigned initial values.

Initialisation of the snake's geometry is one of the most important steps and deserves particular attention. If the points are located outside the scale space of the boundary, the snake cannot be attracted to fit the desired boundary; even if only part of the snake is outside the scale space of the boundary, the final contour cannot properly represent the desired boundary. A lot of research on snakes has been done, but most of it focuses on the energy minimisation process. The initialisation of snakes is usually done manually, i.e. snake points are placed on the image by hand, which is tedious and time-consuming. In our case, it is impossible to manually initialise snakes for more than a thousand CT images. This project addresses this problem with two useful and practical methods: initialisation using the gradient profile and initialisation using the previous snake (called the recursive snake in this project). They are described in section 4.4.

Initialisation of the snake's control parameters gives users a means of high-level interaction with the snake. By assigning different values to the three coefficients (α, β, γ), users are able to balance the relative influence of the three energy terms. [10] concludes that their

relative sizes, rather than absolute sizes, are significant. This project intends to find their optimal values for the segmentation; they are discussed in a later chapter.

4.3.3 Snake energy minimization

Fig 4-2 Closed snakes with 5x5 searching neighbourhood [23]

The greedy snake algorithm adopts a snake-point-based minimizing method. In Fig 4-2, the snake is composed of a series of points, also called control points. The basic idea of greedy snake energy minimization is that the snake's total energy reaches its minimum when every control point's energy reaches its minimum. The minimizing process goes as follows:
i) start the iteration from snake point 1 of the initial snake;
ii) call the snake point energy minimizing algorithm (section 4.3.4) for all points of the current snake;
iii) update the current snake with the result of (ii);
iv) continue with another iteration (go to ii) until
   a) no snake point moves in this iteration, or
   b) the ratio between the number of moved points and the total number of points falls below a threshold, or
   c) the number of iterations exceeds a threshold.

It is worth mentioning here that the resulting minimum of the snake is not globally optimal. This is because, in each iteration, the computation of the current snake point v(i) is based on the previous point's current-iteration value and the next point's previous-iteration value; in Fig 4-2 these are denoted as v(i-1) of round (t+1) and v(i+1) of round (t) respectively. Therefore, the minimum reached by the snake in iteration (t+1) is not necessarily globally optimal.

4.3.4 Snake point energy minimization

Fig 4-3 Find local minimum within 3x3 neighbourhood [10]

In iteration t, snake point v(i) moves to the position within the search neighbourhood where it has the minimum energy. The snake point energy minimization takes the following steps:
i) set the first snake point as the current snake point;
ii) compute the average distance between adjacent snake points;
iii) given the current snake point and its neighbourhood size, locate each pixel position within this neighbourhood;
iv) for each pixel position (i.e. its coordinates) within this neighbourhood (including the current snake point), compute its:
   (a) continuity energy using formula (3.19),
   (b) curvature energy using formula (3.20),
   (c) image energy using formula (4.2) (see below);
v) scale the continuity and curvature energies into the range [0, 1];
vi) compute each pixel's total energy using formula (4.1):

E = α · E_{continuity} + β · E_{curvature} + γ · E_{image}    (4.1)

vii) find the coordinates of the minimum energy and compare them with the current snake point's coordinates;
viii) if they are different:
   (a) increment the number of moved points,
   (b) update the current snake point's coordinates with the minimum-energy coordinates,
   (c) update the average distance between adjacent snake points;
ix) set the next snake point as the current snake point;
x) go to iii.

Given a search neighbourhood, the image energy of each pixel position is obtained through formula (4.2):

E_{image} = (min - mag) / (max - min)    (4.2)

where min is the smallest gradient strength within the search neighbourhood, max is the greatest gradient strength within the search neighbourhood, and mag is the gradient strength at the computed position. The value of the gradient magnitude (or strength) is obtained by looking up the gradient map of the original image. NB: the output of formula (4.2) is already normalised within the range [-1, 0].
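The following is a minimal, self-contained sketch of one greedy iteration as described above; it is not the system's actual GreedySnake implementation. It uses a fixed 3x3 neighbourhood, computes the continuity term of (3.19), the curvature term of (3.20) and the image term of (4.2), normalises the first two into [0, 1], and moves each point to its lowest-energy position. The array names, the coefficient handling and the assumption that the snake stays at least one pixel inside the image are illustrative.

```java
/**
 * Minimal sketch of one greedy iteration over a closed snake.
 * snakeX/snakeY hold the point coordinates, grad is the gradient-magnitude
 * map indexed as grad[y][x], and alpha/beta/gamma weight the three terms.
 * Assumes every candidate position stays inside the image.
 * Returns the number of points that moved in this iteration.
 */
public final class GreedyStep {

    public static int iterate(int[] snakeX, int[] snakeY, double[][] grad,
                              double alpha, double beta, double gamma) {
        int n = snakeX.length, moved = 0;
        double avgDist = averageDistance(snakeX, snakeY);

        for (int i = 0; i < n; i++) {
            int prev = (i - 1 + n) % n, next = (i + 1) % n;
            double[] cont = new double[9], curv = new double[9], mag = new double[9];
            double gMin = Double.MAX_VALUE, gMax = -Double.MAX_VALUE;
            int k = 0;
            for (int dy = -1; dy <= 1; dy++) {
                for (int dx = -1; dx <= 1; dx++, k++) {
                    int x = snakeX[i] + dx, y = snakeY[i] + dy;
                    // Continuity: |d_avg - |v_i - v_{i-1}||   (formula 3.19)
                    cont[k] = Math.abs(avgDist - Math.hypot(x - snakeX[prev], y - snakeY[prev]));
                    // Curvature: |v_{i-1} - 2 v_i + v_{i+1}|^2   (formula 3.20)
                    double cx = snakeX[prev] - 2.0 * x + snakeX[next];
                    double cy = snakeY[prev] - 2.0 * y + snakeY[next];
                    curv[k] = cx * cx + cy * cy;
                    mag[k] = grad[y][x];
                    gMin = Math.min(gMin, mag[k]);
                    gMax = Math.max(gMax, mag[k]);
                }
            }
            normalise(cont);                           // scale into [0, 1]
            normalise(curv);
            double[] total = new double[9];
            for (k = 0; k < 9; k++) {
                // Image energy (min - mag)/(max - min) already lies in [-1, 0]   (formula 4.2)
                double img = gMax > gMin ? (gMin - mag[k]) / (gMax - gMin) : 0.0;
                total[k] = alpha * cont[k] + beta * curv[k] + gamma * img;   // formula (4.1)
            }
            int bestK = 4;                             // index 4 = current position
            for (k = 0; k < 9; k++) {
                if (total[k] < total[bestK]) bestK = k;
            }
            if (bestK != 4) {
                snakeX[i] += bestK % 3 - 1;
                snakeY[i] += bestK / 3 - 1;
                moved++;
            }
        }
        return moved;
    }

    private static void normalise(double[] v) {
        double min = Double.MAX_VALUE, max = -Double.MAX_VALUE;
        for (double d : v) { min = Math.min(min, d); max = Math.max(max, d); }
        for (int i = 0; i < v.length; i++) v[i] = max > min ? (v[i] - min) / (max - min) : 0.0;
    }

    private static double averageDistance(int[] x, int[] y) {
        double sum = 0.0;
        int n = x.length;
        for (int i = 0; i < n; i++) {
            int p = (i - 1 + n) % n;
            sum += Math.hypot(x[i] - x[p], y[i] - y[p]);
        }
        return sum / n;
    }
}
```

The outer loop of section 4.3.3 would simply call iterate(...) repeatedly until it returns 0, the moved ratio falls below a threshold, or a maximum number of iterations is reached.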

4.3.5 Automatic axis extraction from segmented object

Since the blood vessel is segmented automatically from each CT image using the above method, the centre of mass of the resulting binary image (Fig 4-4) can be extracted as its centroid using formula (4.3) [15].

Fig 4-4 Binary image with segmented blood vessel

\bar{m} = \frac{1}{N} \sum_{(m,n) \in R} m,    \bar{n} = \frac{1}{N} \sum_{(m,n) \in R} n    (4.3)

where N is the total number of non-zero pixels, R is the set of non-zero pixels, m is a non-zero pixel's x coordinate and n is a non-zero pixel's y coordinate.

This step is taken for two purposes: the centre of mass can be used as the starting point of the gradient profile initialisation when the recursive snake fails, and it may be used for the refinement of the segmentation (refer to the future work in Chapter 7). A more precise method of locating the centroid of the blood vessel is based on the shape feature extraction analysis studied at Iowa State University [7]. (A minimal code sketch of this centroid computation is given below.)

4.4 Initialisation of snakes

The significance of proper snake initialisation can hardly be overestimated. If the snake is not within the range of the scale space, it will not be able to move to the boundary; if the snake is initialised near a strong part of the boundary, the whole snake may be attracted onto that part of the edge. Here we discuss how to place snake points evenly near the desired boundary. How to avoid being attracted to strong edges is partially solved by means of clamping, which is discussed in the next chapter.
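Returning to the centroid extraction of formula (4.3), here is the minimal sketch referred to above, assuming the segmented result is available as a binary mask in which non-zero pixels belong to the vessel; it is illustrative rather than the project's actual implementation.

```java
/** Minimal sketch of formula (4.3): centroid of the non-zero pixels of a binary mask. */
public final class Centroid {

    /** Returns {xBar, yBar}, or null if the mask contains no foreground pixel. */
    public static double[] of(int[][] mask) {
        long count = 0, sumX = 0, sumY = 0;
        for (int y = 0; y < mask.length; y++) {
            for (int x = 0; x < mask[y].length; x++) {
                if (mask[y][x] != 0) {      // foreground (segmented vessel) pixel
                    sumX += x;
                    sumY += y;
                    count++;
                }
            }
        }
        return count == 0 ? null
                          : new double[] { (double) sumX / count, (double) sumY / count };
    }
}
```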

4.4.1 Initialisation using gradient profile

The most reliable method may be the manual one. As for automatically generating the initial snake, the intuitive idea is a circle; in fact, the circle has been widely adopted although it has an apparent defect.

Fig 4-5 Initialising snakes as a circle

In Fig 4-5, neither initialisation 1 nor initialisation 2 is acceptable. The boundary of an object varies because of its irregularity, so a rigid method of initialisation is not suitable for all occasions. The initialisation proposed in this project is a combination of ray casting, a marching point and the gradient profile, summarized in two algorithms.

(1) Marching point algorithm

Fig 4-6 Marching point and gradient profile

In Fig 4-6(a), a marching point sets out from point A and travels until it reaches point C. Along its path the gradient strength is recorded, and a profile is generated as in Fig 4-6(b). Since the edge is blurred, the gradient profile looks like a wedge across the boundary. A properly initialised snake point should be located within the range (B1, B3), excluding B1 and B3. In this algorithm, the initial snake point is placed on the slope B1B2, close to B2.
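A minimal sketch of the marching point idea along a single ray (not the project's actual RayCasting/SnakePoint code): it samples the gradient magnitude from a start point along a direction, locates the profile peak (B2 in Fig 4-6) and returns a sample point on the rising slope just before the peak. It assumes the ray is cut off at a point C just beyond the expected boundary, so the profile contains a single wedge; the ray casting algorithm described next would call it once per ray.

```java
/**
 * Minimal sketch: march from (startX, startY) along direction (dirX, dirY),
 * record the gradient-magnitude profile, locate its peak (B2 in Fig 4-6) and
 * return a point on the rising slope close to the peak. Returns null if no
 * usable peak is found. grad is indexed as grad[y][x].
 */
public final class GradientProfileInit {

    public static int[] initialPoint(double[][] grad, double startX, double startY,
                                     double dirX, double dirY, int maxSteps) {
        double len = Math.hypot(dirX, dirY);
        double sx = dirX / len, sy = dirY / len;        // unit step along the ray

        int peakStep = -1;
        double peakMag = -1.0;
        for (int step = 0; step < maxSteps; step++) {
            int x = (int) Math.round(startX + step * sx);
            int y = (int) Math.round(startY + step * sy);
            if (y < 0 || y >= grad.length || x < 0 || x >= grad[0].length) break;
            if (grad[y][x] > peakMag) {                 // remember the profile peak (B2)
                peakMag = grad[y][x];
                peakStep = step;
            }
        }
        if (peakStep <= 0) return null;

        // Place the snake point on the rising slope, one step before the peak.
        int step = peakStep - 1;
        return new int[] {
            (int) Math.round(startX + step * sx),
            (int) Math.round(startY + step * sy)
        };
    }
}
```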

(2) Ray casting algorithm

In Fig 4-7, a series of rays is cast from point A with a fixed angle step. Along the trace of each ray, the marching point algorithm above is applied to obtain one initial snake point. The results from all the rays together form the initial snake.

Fig 4-7 Initialising snake using ray casting

4.4.2 Recursive snake

Fig 4-8 Recursive snake

For stacks of CT images, it is also natural to use the resulting snake of the previous image as the initial snake of the current image, for two consecutive CT images. This is called the recursive snake because the output of one snake is used as the input of the next. In Fig 4-8, the segmentation, in the form of a series of snake points (Fig 4-8 a), is used as the initial snake in the next image (Fig 4-8 b); Fig 4-8 c shows the segmentation result for Fig 4-8 b.

The recursive snake works only if the location and diameter of the blood vessel do not change dramatically between two consecutive images. This is not always true; for example, the diameter of the blood vessel increases dramatically where an aneurysm occurs. In Fig 4-9, the snake that aligns with the boundary of the blood vessel in (a) is not a proper initialisation for (b) because of the aneurysm.

Fig 4-9 Recursive snake fails

4.5 Analysis of greedy snakes

4.5.1 Coefficient beta and corners

In Williams' greedy snake algorithm, the coefficient beta of each snake point may be set to zero after each iteration. The criteria for such an adjustment of beta are:
i) the point has a higher curvature than its neighbouring points;
ii) its curvature is above a specified threshold;
iii) the gradient strength at this point is above a threshold.

The intention of adjusting beta is to allow the snake point to move to a corner by relaxing its curvature constraint. In fact, the boundary of a blood vessel always changes smoothly, so no corners or sharp turning points exist to justify such an adjustment of beta; compared with the original greedy snake, this part is not necessary. It is nevertheless included in the programme in order to evaluate some synthesized images that have sharp corners.

4.5.2 Local vs. global optimal

We concluded above that the snake does not necessarily reach a global optimum in each iteration, because the value of v(i+1) used to compute v(i) in round (t+1) is its value from the last round (t). It is not difficult to see that each snake point does not reach its global optimum either, when viewed over all iterations (NB: from the perspective of the whole snake, this is the local optimum). This can be illustrated by the familiar downhill (steepest descent) scenario from optimisation theory. Suppose a snake point is at the top of a hill and wants to find the shortest path to the foot of the hill. It moves from its current position to a new position following the steepest path within its sight, and this process iterates until it reaches the foot of the hill. In Fig 4-10, it can be seen that although each segment of this path reaches a local optimum,

the sum of the local optima does not lead to a global optimum; i.e. the length of the straight line is smaller than the sum of the zigzag line segments. In our case, the foot of the hill is the boundary, where the energy is minimal. Within each search neighbourhood, the minimum-energy point corresponds to the steepest path if we set both alpha and beta to zero. One reason why snakes are sometimes attracted to one side of a strong edge, or collapse into one point, is that each point cannot find the global optimum.

Fig 4-10 Local optimal and global optimal

Although the greedy snake algorithm guarantees neither a global nor a local optimum, that does not prevent it from being a good and practical algorithm. In common situations, it does reach the desired boundary quickly and precisely. Sometimes it cannot terminate because the snake does not know that it has actually reached the minimum after a certain number of iterations; this is the phenomenon of snake points oscillating on the boundary. There are also situations where the snake fails to reach the minimum, i.e. the algorithm fails to converge. So, compared with Williams' original greedy snake algorithm, a maximum number of iterations is added as a termination condition in this system.

4.6 GUI design and MVC pattern

A graphical user interface is not included in the minimum requirements of this project. It is the author's viewpoint that image processing is a stage in a pipeline of flowing data (refer to the reference model in Appendix A). It is necessary to implement WYSIWYG behaviour in the project to:
(1) synchronize the presentation and its underlying data model;
(2) support interaction with users or higher-level programs; after all, the snake is a top-down method of image understanding by which a model is imposed upon the image.

Here is the analysis of the scenario. Several data models are maintained by the greedy snake, derived from the original CT image, the blurred CT image, the gradient map image and the snake. They change frequently, either procedurally (such as the snake) or upon user interaction, for example when the user decides to show the gradient map or chooses another CT image. The system has to maintain their views to keep up with the ever-changing data models. There are therefore three basic elements interacting with each other in this system: data models, views and controllers (NB: control may come either from the procedure or from the user's actions). Since this system is designed using an object-oriented method (which means the data

and operations are dispersed in different classes), it is practically impossible to implement it without resorting to an idea from software architecture. Based upon the above analysis, it is natural to arrive at the Model-View-Controller (MVC) architecture. The MVC architecture separates the application data (contained in the model) from the graphical presentation components (the view) and the input processing logic (the controller). This architecture maps the traditional input, processing and output onto the GUI's controller, model and view. The inter-relationships among model, view and controller are shown in Fig 4-11 [16].

Fig 4-11 Model-View-Controller architecture

The model maintains the underlying data or business logic; the view, or presentation, creates the proper or desired visual representation of the model; the controller, which deals with user interaction, modifies the model and/or changes the view in response to user actions. MVC originally appeared in Smalltalk-80 and is also supported by other object-oriented programming languages such as Java, where model, view and controller are encapsulated in different classes, with their relationships supported by a message-generating and event-handling mechanism. The implementation details of MVC are covered in Chapter 5.

4.7 Summary

In this chapter, the waterfall model is adopted to facilitate analysis and design. Simply put, the strategy is top-down analysis, with abstract high-level functionality divided into manageable modules with corresponding algorithms. The majority of the chapter is devoted to the design of the greedy snake. It also analyses two interesting features concerning greedy snakes: corners and optimality. Manual axis extraction, graphical user interface design and the MVC architecture are only mentioned or briefly described where necessary.

CHAPTER 5 SYSTEM IMPLEMENTATION

5.1 Introduction

The previous chapter was all about image understanding. In this chapter, the underlying digital image processing techniques are exploited to support that mission. The focal point of this chapter is the implementation of the greedy snakes algorithm. In order to compare the system's results against the manually segmented blood vessels, some post-processing is also included: boundary representation using piecewise line-fitting and polygon filling to obtain the blood vessel object.

5.2 Implementation of greedy snakes

5.2.1 Architecture

Fig 5-1 GreedySnake architecture

Here is an outline of how the greedy snake works. The top-level class GreedySnake is the entry point. It reads in the original CT image and also implements the manual initialisation and the initialisation using the previous image's result. The rest of the system is divided into four groups, each dedicated to one task. The image pre-processing group, composed of the classes GradientMap, Gradient and GaussianKernel, is responsible for generating the gradient map (i.e. the image energy for the snakes). The initialisation group consists of RayCasting, SnakePoint and SnakeList; it

creates a snake list using the gradient profile algorithm. The energy-minimising group is the core; it runs on the data produced by image pre-processing and initialisation. The fourth group, presentation, shows both the snakes (i.e. the SnakeList) and the raster image, which can be the original CT image, the blurred image or the gradient map.

5.2.2 Underlying data structure

(1) Data structure for snake

The data of a snake is encapsulated in two classes: SnakeList and SnakeNode. The fundamental data structure of a snake is a closed, dual-directional (doubly linked) list, implemented in the class SnakeList. Every snake point is one node of the SnakeList and is implemented in the class SnakeNode.

Class SnakeNode:

Fig 5-2 SnakeNode

In order to form a linked list in Java one has to resort to references, since no pointers are available. Two attributes of the class are themselves instances of this class; they point to the previous and next nodes respectively in Fig 5-2. The reference variables, which implicitly act as pointers, are what construct the list. The dashed arrows belong to the previous and next nodes respectively. (NB: the author does not use the class List provided by the JDK, because implementing our closed dual-directional linked list using class List and its methods would be much more cumbersome.) Its attributes and methods are summarised in Appendix D.

Class SnakeList:

Fig 5-3 SnakeList

The boundary of a blood vessel is a closed contour. Therefore, the greedy snake list needs to be designed as a closed linked list. 'Closed' is realised by making the last node and the first node point to each other (NB: the first node also points back to the last node to keep the list dual-directional). In this list there are three pointers:

The first node pointer is created when a snake list is initialised and is used as the identifier of this snake list;

The last node pointer always points to the end node of the snake list; it moves on when a new node is appended to the end of the list;

The current node pointer is used to traverse the whole list; it travels from the position of the first node pointer to the position of the last node pointer.

So the first node pointer and the last node pointer are the two landmarks of this list and are therefore set as attributes of this class. The current node pointer is a temporary pointer created by the methods that need it. In fact, a single-directional linked list would meet the need; the dual-directional linked list is adopted for convenience of implementation. Its attributes and methods are also summarised in Appendix D.

(2) Presentation of image data and snakes data

In this section, the focal point is to maintain the image data in buffered image models and to update its presentation upon the user's actions. The buffered image model is implemented in the class GreedySnake. The presentation of snake and image is implemented in the class Scroller, which also responds to the user's actions. The presentation of the snake data is implemented in class Scroller by calling the method paint, and its update is implemented by calling the method repaint. (NB: refer to Appendix D for the detailed implementation of the snake and image models and views.)

5.2.3 Image pre-processing

First, the original image is processed in order to obtain the image energy. In this step, a Gaussian filter is applied to blur the image and the Sobel operator is applied to generate the gradient map of this image.

(1) Gaussian filtering

Gaussian filtering is a kind of low-pass filtering with a non-uniform kernel generated by formula (5.1) in [4]:

    h(x, y) = exp[ -(x^2 + y^2) / (2σ^2) ]        (5.1)

where σ is the standard deviation.

Fig 5-4 2D Gaussian function [17]
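As an illustration of how a kernel of this form might be generated, the following is a minimal sketch in Java. The class name, the choice of kernel radius (3σ) and the normalisation step are assumptions made for illustration; they are not necessarily identical to the project's GaussianKernel class.

import java.awt.image.Kernel;

// Illustrative sketch only: builds a 2D Gaussian kernel following formula (5.1).
// The radius rule (3 * sigma) and the normalisation are assumptions, not necessarily
// identical to the project's GaussianKernel class.
public class GaussianKernelSketch {

    public static Kernel create(float sigma) {
        int radius = (int) Math.ceil(3 * sigma);   // assumed kernel radius
        int size = 2 * radius + 1;
        float[] data = new float[size * size];
        float sum = 0f;
        for (int y = -radius; y <= radius; y++) {
            for (int x = -radius; x <= radius; x++) {
                float value = (float) Math.exp(-(x * x + y * y) / (2.0 * sigma * sigma));
                data[(y + radius) * size + (x + radius)] = value;
                sum += value;
            }
        }
        // Normalise so the coefficients sum to 1 (keeps the overall brightness unchanged).
        for (int i = 0; i < data.length; i++) {
            data[i] /= sum;
        }
        return new Kernel(size, size, data);
    }
}

The resulting Kernel object can be passed to java.awt.image.ConvolveOp to blur the image, as described in the implementation steps below.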

Actually, the two-dimensional Gaussian function can be viewed as the product of two one-dimensional Gaussian functions. Geometrically it is the bell-shaped surface shown in Fig 5-4, whose spread is determined solely by the value of sigma. The kernel becomes wider as sigma increases, and a greater sigma results in greater blurring; for this reason Gaussian filtering is also called Gaussian blurring. In this project, sigma is the parameter that controls the size of the scale space. The implementation of Gaussian blurring has three parts:

(i) The value of sigma is set to 2.0f in the class GradientMap;

(ii) Generate the Gaussian kernel according to formula (5.1). The class GaussianKernel, which extends Kernel, is responsible for generating a Gaussian kernel given the standard deviation sigma. It starts by computing the kernel size and then fills in the coefficients of the kernel by evaluating formula (5.1);

(iii) Convolve the image with the Gaussian kernel. The class GradientMap has the method gaussianBlur, which convolves the image with the Gaussian kernel. This convolution uses Java's ConvolveOp; it is fast because this class is implemented by the JDK using native code.

(2) Border processing using reflected index

Given any kernel, convolution can be carried out on all pixels except those in a border whose width is determined by the radius of the kernel. In Fig 5-5(a), the shaded pixels cannot be convolved with a 3x3 kernel. There are two common ways out. First, truncate the image, so that the resulting image is reduced to the non-shaded part of Fig 5-5(a). Second, let the pixels on the border either keep their original values or be set to zero. Neither approach meets the demands of this project.

(a) (b) Fig 5-5 Border processing and reflected index [4]

In order to scale up the capture range of the object boundary, a larger sigma is preferred. To keep the kernel's Gaussian shape, the radius of the kernel then has to increase. In consequence, a larger part of the image would be truncated, which is unaffordable for our CT images. So method 1 fails.

It is also improper simply to follow method 2 above. The reason lies in the next step, applying the Sobel operator to obtain edges. In either variant of method 2 (keep the original values or set them to zero), four strong false edges are generated along the border. Since the grey level of the gradient map must lie within (0, 255), these strong edges compress the real but weaker edges, i.e. the blood vessel boundaries, into a very small range of the gradient map's histogram. The useful information would therefore be greatly compromised, which is highly undesirable. The border-processing strategy of this project follows these steps:

(i) Enlarge the original image in all four directions by the radius of the kernel;

(ii) Set the values of the added pixels using a reflected index [4]. Given an MxN image, Fig 5-5(b), the index x is reflected according to:

    x' = -x - 1          (x < 0)
    x' = 2M - x - 1      (x >= M)        (5.2)

This operation enlarges the image in the x direction. Similarly, the image can be enlarged in the y direction; the formula for y-index reflection is obtained by replacing M with N and x with y in formula (5.2). The order in which the x and y indices are reflected does not matter.

(iii) Convolve, with the border pixels of the resulting image set to zero;

(iv) Truncate the convolved image back to the size of the original image.

Compared with the method proposed in [4], this method is much faster because it uses the JDK's convolution operator, which is implemented in native code. These operations are implemented in the class GradientMap; the corresponding methods are enlargeImage (implements i and ii), gaussianBlur (implements iii) and truncateImage (implements iv). Fig 5-6 demonstrates the intermediate result of an enlarged image (the radius of the Gaussian kernel is 40).

Fig 5-6 Enlarged image using reflected index
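A minimal sketch of the enlargement step using the reflected index of formula (5.2) is given below. The class and method names, the use of BufferedImage pixel access and the greyscale output type are assumptions for illustration, not the exact code of GradientMap.enlargeImage.

import java.awt.image.BufferedImage;

// Illustrative sketch of border enlargement using a reflected index, formula (5.2).
// Assumes greyscale CT images; not the project's actual enlargeImage implementation.
public class ReflectedBorder {

    // Maps an index that may fall outside [0, size) back inside by reflection.
    static int reflect(int index, int size) {
        if (index < 0) {
            return -index - 1;            // x' = -x - 1        (x < 0)
        }
        if (index >= size) {
            return 2 * size - index - 1;  // x' = 2M - x - 1    (x >= M)
        }
        return index;
    }

    // Returns a copy of the image enlarged by 'radius' pixels on every side.
    static BufferedImage enlarge(BufferedImage src, int radius) {
        int w = src.getWidth(), h = src.getHeight();
        BufferedImage out = new BufferedImage(w + 2 * radius, h + 2 * radius,
                                              BufferedImage.TYPE_BYTE_GRAY);
        for (int y = -radius; y < h + radius; y++) {
            for (int x = -radius; x < w + radius; x++) {
                int rgb = src.getRGB(reflect(x, w), reflect(y, h));
                out.setRGB(x + radius, y + radius, rgb);
            }
        }
        return out;
    }
}

Note that this simple reflection only remains valid while the kernel radius does not exceed the image dimensions, which is the case for our CT images.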

(3) Edge detection and gradient map

In this step, the blurred image is convolved with the following Sobel operators [4]:

    h_x = [ -1  0  1 ]        h_y = [ -1 -2 -1 ]
          [ -2  0  2 ]              [  0  0  0 ]
          [ -1  0  1 ]              [  1  2  1 ]        (5.3)

The two gradients computed at each pixel are the x and y components of a gradient vector g for that pixel:

    g = ( g_x, g_y )^T        (5.4)

The gradient strength at the position of each pixel is computed by:

    g = sqrt( g_x^2 + g_y^2 )        (5.5)

Then g is scaled into the range (0, 255) by:

    greylevel = 255 * ( g - g_min ) / ( g_max - g_min )        (5.6)

where g is the current gradient value of the gradient map, and g_max and g_min are the maximum and minimum values of the map. The resulting gradient map is the image whose grey level is computed by (5.6). In the class GradientMap, the method myConvolve implements the convolution and the method GradientMap implements formulas (5.3) to (5.6).
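To make the pipeline of formulas (5.3) to (5.6) concrete, the following is a minimal sketch that computes a scaled gradient map from a greyscale image held as a 2D array. The array layout and method name are illustrative assumptions and this is not the project's myConvolve implementation.

// Illustrative sketch of formulas (5.3)-(5.6): compute a scaled Sobel gradient map
// from a greyscale image held as a 2D int array. Not the project's actual code.
public class SobelSketch {

    static int[][] gradientMap(int[][] img) {
        int h = img.length, w = img[0].length;
        double[][] g = new double[h][w];
        // Sobel responses h_x and h_y over the interior pixels, formulas (5.3)-(5.5)
        for (int y = 1; y < h - 1; y++) {
            for (int x = 1; x < w - 1; x++) {
                double gx = -img[y-1][x-1] + img[y-1][x+1]
                          - 2*img[y][x-1] + 2*img[y][x+1]
                          - img[y+1][x-1] + img[y+1][x+1];
                double gy = -img[y-1][x-1] - 2*img[y-1][x] - img[y-1][x+1]
                          + img[y+1][x-1] + 2*img[y+1][x] + img[y+1][x+1];
                g[y][x] = Math.sqrt(gx * gx + gy * gy);
            }
        }
        // Find the range of gradient strengths (border pixels remain 0)
        double gMax = 0.0, gMin = Double.MAX_VALUE;
        for (double[] row : g)
            for (double v : row) { gMax = Math.max(gMax, v); gMin = Math.min(gMin, v); }
        // Scale into (0, 255), formula (5.6)
        double range = Math.max(gMax - gMin, 1e-9);   // guard against division by zero
        int[][] map = new int[h][w];
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
                map[y][x] = (int) Math.round(255 * (g[y][x] - gMin) / range);
        return map;
    }
}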

(4) Gradient map clamping

(a) original image (b) before clamping (c) after clamping

Fig 5-7 Clamping the gradient map

The importance of gradient map clamping in this project can hardly be overestimated. The gradient map derived by formula (5.6) is not directly suitable for the following step, initialisation using the gradient profile. In Fig 5-7, (b) and (c) are the gradient maps of (a) before and after clamping; in order to show them clearly, black and white have been inverted. In Fig 5-7(a), the central object is the aorta, with a seemingly homogeneous intensity in its interior. Fig 5-7(b) shows that the intensity within it actually varies, which results in small objects appearing in its gradient map. Those small objects are local minima that deceive the marching point and make it stop there; they therefore undermine the greedy snakes algorithm by sabotaging the initialisation. Although mis-initialised snake points can still move and deform to fit the boundary through the snake's constraints during the iterations, this only holds if the number of misplaced initial snake points is limited and they are not consecutive, which is not the case in Fig 5-7(b).

The solution to this problem is clamping of the gradient strength: an additional thresholding operation is applied before formula (5.6). There are two benefits. First, only strong edges, such as boundaries between objects, are kept; the weak interior edges are cleared, as in Fig 5-7(c). Second, since the lower range of the histogram is vacated for the remaining values, the contrast of the real boundaries is enhanced, which results in better convergence of the snakes. This function is encapsulated in the method GradientMap of the class GradientMap.

5.2.4 Snakes initialisation

In this project there are three initialisation methods in total. From the perspective of automation, initialisation using the gradient profile is semi-automatic, since it needs the user to specify the centre point; initialisation using the previous image's result is automatic, since no user action is needed; manual initialisation demands that the user place all the snake points. Manual initialisation is implemented mainly to facilitate evaluation.

(1) Initialisation using gradient profile

The principle of initialisation using the gradient profile is detailed in Chapter 4. The class RayCasting encapsulates the two functions involved. The method findSnakeNode implements the marching point algorithm (refer to 4.4.1) with these termination conditions: the current sample is greater than a predefined threshold, and the current sample is larger than both the previous and the next samples. The method RayCasting implements the ray casting algorithm (refer to 4.4.1).

(2) Initialisation using the result of the previous image

There is no special class to implement this. Initialisation using the previous image's result is achieved through the data structure and parameter passing based on the class attribute SnakeList: the value set by one instance can be used by another instance. It is handled in the top-level class GreedySnake.

(3) Manual initialisation

This is implemented in the inner class Scroller within class GreedySnake. It records each mouse click as an initial snake point and saves it into the snake list.
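The gradient profile initialisation lends itself to a short sketch. The code below illustrates the idea of casting rays from a user-selected centre point and marching outwards along each ray until a gradient sample above a threshold that is also a local maximum is found. The class name, the number of rays and the sampling step are assumptions made for illustration; this is not the project's RayCasting code.

import java.awt.Point;
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of initialisation using the gradient profile (section 4.4.1):
// cast rays from a seed point and stop each marching point at the first gradient
// sample that exceeds a threshold and is a local maximum along the ray.
public class GradientProfileInit {

    static List<Point> initialise(int[][] gradientMap, Point centre, int rays, int threshold) {
        List<Point> snake = new ArrayList<>();
        int h = gradientMap.length, w = gradientMap[0].length;
        for (int r = 0; r < rays; r++) {
            double angle = 2 * Math.PI * r / rays;
            int prev = 0;
            for (int step = 1; ; step++) {
                int x = centre.x + (int) Math.round(step * Math.cos(angle));
                int y = centre.y + (int) Math.round(step * Math.sin(angle));
                if (x < 1 || y < 1 || x >= w - 1 || y >= h - 1) break;   // ran off the image
                int cur = gradientMap[y][x];
                int next = gradientMap[y + (int) Math.round(Math.sin(angle))]
                                      [x + (int) Math.round(Math.cos(angle))];
                // Termination: sample above the threshold and a local maximum along the ray.
                if (cur > threshold && cur >= prev && cur >= next) {
                    snake.add(new Point(x, y));
                    break;
                }
                prev = cur;
            }
        }
        return snake;
    }
}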

5.2.5 Snakes energy minimisation

The snake energy minimisation (refer to 4.3.3) and the snake point energy minimisation (refer to 4.3.4) are implemented in the class Slithe. GreedySnake runs a greedy snake by creating an instance of Slithe and calling its method go. The implementation details are covered in Appendix D.

5.2.6 Post-processing of snakes for evaluation

The result of the greedy snake is a series of points. The segmentation is not finished until the blood vessel object has been generated from them. This problem amounts to mapping geometry (points) into pixels (blood vessel images); in computer graphics, this is called rasterisation. Rasterisation is implemented in two steps.

(1) Line segment rasterisation using the Bresenham algorithm

In this step, the boundary is formed by piecewise line-fitting between adjacent snake points. The adopted algorithm is Bresenham's line drawing algorithm.

Fig 5-8 Bresenham algorithm [18]

The basic idea of this algorithm is to decide whether to fill the pixel below or above the real line. Given end points (x1, y1) and (x2, y2), it starts by deciding whether the line is x-oriented or y-oriented by comparing |Δx| and |Δy|; in Fig 5-8 it is x-oriented since |Δx| > |Δy|. Then, starting from pixel (x1, y1), the next pixel to fill is chosen by the criterion of minimum error: in Fig 5-8, pixel (Xk+1, Yk+1) is filled since d1 < d2. In total there are four cases discussed in this algorithm; the details can be found in [19]. It is implemented by the method drawLine in class Slithe. Fig 5-9(a) shows a contour of snake points and (b) shows the contour of piecewise fitted lines.
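For reference, a compact sketch of the x-oriented case of Bresenham's algorithm is given below; the other three cases are handled analogously. This is a generic textbook version for illustration, not the project's drawLine method.

import java.awt.Point;
import java.util.ArrayList;
import java.util.List;

// Generic integer Bresenham line rasterisation, x-oriented case (|dx| >= |dy|, x1 <= x2).
// A textbook sketch, not the project's Slithe.drawLine implementation.
public class BresenhamSketch {

    static List<Point> line(int x1, int y1, int x2, int y2) {
        List<Point> pixels = new ArrayList<>();
        int dx = x2 - x1;
        int dy = Math.abs(y2 - y1);
        int yStep = (y2 > y1) ? 1 : -1;
        int error = 2 * dy - dx;          // decision variable
        int y = y1;
        for (int x = x1; x <= x2; x++) {
            pixels.add(new Point(x, y));
            if (error > 0) {              // the line has drifted past the pixel centre
                y += yStep;
                error -= 2 * dx;
            }
            error += 2 * dy;
        }
        return pixels;
    }
}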

(a) points contour (b) line rasterisation (c) polygon rasterisation

Fig 5-9 Rasterisation

(2) Polygon filling by scan line conversion

Scan line conversion was developed in computer graphics for polygon rasterisation. Given vertices A1 to A5, rasterising the polygon amounts to filling the interior between pairs of intersection points. In Fig 5-10, what needs to be filled on this scan line is the part between the point pairs (P0, P1) and (P2, P3).

Fig 5-10 Scan line conversion [18]

The author simplifies this algorithm for convex polygons, for which there is only one point pair on each scan line: each scan line is traversed from both sides of the image and stops at the first non-zero pixel, and the interior between that pair of points is then filled. It is implemented by the method fillSnake in class Slithe.
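A sketch of this simplified convex fill is shown below, operating on a binary boundary image in which the rasterised boundary pixels are non-zero. The array representation and method name are illustrative assumptions, not the project's fillSnake implementation.

// Illustrative sketch of the simplified scan-line fill for convex contours:
// on each row, find the leftmost and rightmost boundary pixels and fill between them.
// Assumes a binary image (0 = background, non-zero = rasterised boundary).
public class ConvexFillSketch {

    static void fill(int[][] boundary) {
        for (int y = 0; y < boundary.length; y++) {
            int w = boundary[y].length;
            int left = -1, right = -1;
            for (int x = 0; x < w; x++)           // scan from the left edge
                if (boundary[y][x] != 0) { left = x; break; }
            for (int x = w - 1; x >= 0; x--)      // scan from the right edge
                if (boundary[y][x] != 0) { right = x; break; }
            if (left >= 0 && right > left)        // fill between the point pair
                for (int x = left; x <= right; x++)
                    boundary[y][x] = 255;
        }
    }
}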

5.3 Manual axis extraction and MVC

Fig 5-11 Manual axis extraction

The graphical user interface for manual axis extraction is implemented using Java Swing; the components are inherited from the JFC [20][21]. Fig 5-11 is a snapshot of this interface. The classes that hold the data models are:

Class record holds the information about the centre point of the blood vessel, which is manually selected by the user;

Class Image holds the CT image.

The MVC architecture is implemented using the class Observable and the interface Observer. Class Observable provides the method addObserver, which takes a java.util.Observer argument. The interface Observer represents the view in MVC; an instance of class Observable is the data model to be observed. The details can be found in [16]. In this case the class Image, which holds the current CT image, is observable. It therefore has the methods inherited from class Observable: setChanged and notifyObservers. Whenever the buffered image is updated with a new CT image, it notifies the observers that the model has changed. Any class whose view wants to observe this model must hold a reference to this observable instance and call addObserver during its initialisation. In our case, the class ImageView has an attribute referencing the image; therefore, any view instantiated from class ImageView is able to observe changes to the image.
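A minimal sketch of this observer wiring is shown below. The field names and the update behaviour are simplified assumptions rather than the project's Image and ImageView classes.

import java.awt.image.BufferedImage;
import java.util.Observable;
import java.util.Observer;
import javax.swing.ImageIcon;
import javax.swing.JLabel;

// Minimal sketch of the Observable/Observer wiring described above.
// A simplified assumption of how the image model and its view cooperate.
class ImageModel extends Observable {
    private BufferedImage current;

    void setImage(BufferedImage newImage) {
        this.current = newImage;
        setChanged();              // mark the model as changed...
        notifyObservers(current);  // ...and push the new image to all registered views
    }
}

class ImageViewSketch extends JLabel implements Observer {
    ImageViewSketch(ImageModel model) {
        model.addObserver(this);   // register this view with the observable model
    }

    @Override
    public void update(Observable model, Object newImage) {
        setIcon(new ImageIcon((BufferedImage) newImage));  // refresh the presentation
        repaint();
    }
}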

5.4 Summary

This chapter has covered the implementation of the snake as well as manual axis extraction. The implementation of the greedy snake is described in detail, following the sequence designed in the previous chapter: image pre-processing, snake initialisation and energy minimisation. Two important problems are raised, analysed and solved by means of the reflected index and gradient map clamping. The underlying data structure is also introduced as part of the implementation. In order to obtain the blood vessel object, post-processing is needed: first the closed contour is created from the snake points, then the interior of that contour is filled. Two well-known algorithms from computer graphics, Bresenham's line drawing algorithm and the scan line conversion algorithm, are adopted.

The implementation of manual axis extraction is briefly introduced. It uses Java Swing components (based on the JFC) to build the interface for user interaction. The implementation detail amounts to illustrating how to use JFC components; since this can be found in any Java book, there is no need to cover it here. MVC is implemented using the class Observable and the interface Observer; the purpose is to keep the view (i.e. presentation) and the model (i.e. data) of the image synchronised.

CHAPTER 6 TESTING AND EVALUATION

6.1 Introduction

It is stipulated in the minimum requirements that the evaluation needs to be made against manual segmentations. Before we carry on, one logical problem needs to be addressed. We know that the true blood vessels exist, but we cannot obtain them; otherwise there would be no need to develop this system. What are taken to be the true blood vessel objects are those segmented manually by an experienced radiologist. Although they are reliable and accurate enough to be accepted as the real ones, they are not the real ones. If the segmentations produced by this system do not agree with the manual segmentations, one might justifiably argue that this does not prove the system does not work. Similarly, if the results do agree with the manual segmentations, that alone does not prove it works either. This is a logical paradox. In mathematics, this kind of problem can be addressed through extrapolation.

Fig 6-1 Extrapolation

In Fig 6-1, suppose there is a logical statement that holds true under condition A. We then test the statement while gradually changing the condition from A to B and derive the trend. If we can show that the statement holds true under condition B, then, based on condition B and the trend observed from A to B, we can argue that it also holds under condition C. Therefore, the evaluation of the system is designed as follows:

First, evaluate against segmentations of synthetic images. In this step we construct artificial objects and insert them into black images to generate synthetic images. We therefore possess both the synthetic images and the true objects, and the segmentation results can be compared with these true objects. We need to prove the system works here: this puts us at point A in Fig 6-1.

Second, add noise or artefacts to deteriorate the synthetic images. Either noise or artefacts can degrade real CT images. After the system is tested under both conditions separately, we stand somewhere between A and B.

Third, add noise and artefacts together to deteriorate the synthetic images. The synthetic images are degraded by both noise and artefacts. After the system is tested under this condition, we stand at point B.

Last, test on real CT images and evaluate against manual segmentations.

In this step we no longer need to prove whether the system works: based on the previous deductions and extrapolation, we conclude that it does. The evaluation against manual segmentation therefore does not need to prove whether the system works but to test how well it works.

6.2 Segmentation of synthetic images

6.2.1 Synthetic images

The synthetic images are created with artificial objects of different shapes. The shapes are determined by two factors: the shape should be such that testing the influence of the greedy snake's controlling parameters, such as alpha and beta, is applicable; and the shapes should resemble the blood vessels in real CT images. The images have a fixed size in pixels and 256 grey-levels. In each image the object is white (grey-level 255) and the background is black (grey-level 0), or vice versa; both cases have the same gradient map. This arrangement is intended to show that the absolute grey-level does not matter: it is the change of grey-level at boundaries that affects the snakes. Fig 6-2 shows the six synthetic images used in the evaluation.

(a) (b) (c) (d) (e) (f) Fig 6-2 Synthetic images

Among these six images, the objects in Fig 6-2 (a) and (b) are used to test the controlling parameters of the snakes, while the objects in Fig 6-2 (c) to (f) are created to resemble the blood vessels in real CT images. Fig 6-3 shows the segmentations produced by this system; they match the true boundaries closely.

(a) (b) (c) (d) (e) (f) Fig 6-3 Segmentations of synthetic images

6.2.2 Synthetic images with deteriorated quality

We need to degrade the quality of the synthetic images to make them resemble real-world images. There are two major causes of deteriorated quality in real CT images: one is noise and the other is artefacts.

(1) Image noise

(a) (b) (c) (d) (e) (f) Fig 6-4 Degraded by Gaussian noise (20%)

Gaussian noise is the most common noise in digital images. In this case, 20% Gaussian noise is added to degrade the images. (NB: since the images are shrunk to save

space, this is less obvious.) Compared with the results in Fig 6-3, the segmentation precision decreases; corners and the parts where the curvature changes are most affected.

(a) (b) (c) (d) (e) (f) Fig 6-4 Degraded by salt-and-pepper noise

Salt-and-pepper is another common kind of noise, although it is not common in our CT images. It is introduced here to simulate the uneven intensity within blood vessels. Fig 6-4 shows that the effect of salt-and-pepper noise is greater than that of Gaussian noise.

(2) Image artefacts

(a) (b) (c) (d) (e) (f) Fig 6-5 Degraded by artefacts

The artefacts introduce local geometric distortions of the objects (see Fig 6-5). These artefacts are used to simulate real blood vessel boundaries. Some of them are exaggerated to make sure that the snake can capture common blood vessel boundaries.

(3) Noise and artefacts

(a) (b) (c) (d) (e) (f) Fig 6-6 Degraded by both artefacts and Gaussian noise

It is natural to surmise that segmentations of images degraded by both artefacts and noise would be less accurate than segmentations of images degraded by only one of them. The tests reveal otherwise, which suggests that the two factors are not correlated: the accuracy is affected independently by the change of curvature and by the value of sigma used in the Gaussian filtering.

6.2.3 Discussions

(1) Segmentation error

The segmentation error is estimated in terms of the pixel difference, or pixel errors. Two binary images can be compared by a pixel-wise XOR operation. The segmentation error is estimated by [22]:

    δ(T, S) = |V_T ⊕ V_S| / |V_T|        (6-1)

where V_T is the true object, V_S is the segmented object, and ⊕ denotes the pixel-wise XOR (symmetric difference).
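A direct sketch of formula (6-1) applied to binary masks is given below; the boolean-array representation is an assumption made for illustration.

// Illustrative sketch of the pixel-error measure (6-1): XOR the true and segmented
// binary masks and divide the number of differing pixels by the size of the true object.
public class SegmentationError {

    static double pixelError(boolean[][] truth, boolean[][] segmented) {
        long xorCount = 0, truthCount = 0;
        for (int y = 0; y < truth.length; y++) {
            for (int x = 0; x < truth[y].length; x++) {
                if (truth[y][x]) truthCount++;
                if (truth[y][x] ^ segmented[y][x]) xorCount++;   // pixel-wise XOR
            }
        }
        return (double) xorCount / truthCount;   // delta(T, S) of formula (6-1)
    }
}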

(2) Influence of noise and artefacts

The segmentation error is computed for each synthetic image; the average segmentation error and its standard deviation are also computed. See Table 6-1.

Table 6-1 Segmentation errors of synthetic images (sigma = 2.0); columns (a) to (f) correspond to the shapes in Fig 6-2; rows: 1 true objects, 2 Gaussian noise, 3 salt-and-pepper noise, 4 artefacts, 5 artefacts and Gaussian noise, 6 average error, 7 standard deviation.

Fig 6-7 Variation of segmentation error for each shape

Fig 6-7 compares the variation of the segmentation error for each shape. It shows that salt-and-pepper noise (row 3) causes a dramatic increase in segmentation error for all shapes except the circle (column b). The combination of artefacts and Gaussian noise even slightly decreases the segmentation error for all shapes (row 5).

Fig 6-8 Average segmentation error

Fig 6-8 shows that the average segmentation errors of the simple shapes (a) to (d) are similar, which means the snake can find their boundaries well. The irregularity of the boundary, in terms of changes of curvature, increases for (e) and (f); this leads to an increase in segmentation error because the snake smooths over the twists and turns of the irregular boundary.

Fig 6-9 Standard deviation of segmentation error for each shape

Fig 6-9 shows the standard deviation of the segmentation errors for each shape. The circle (b) has the smallest value, which means the snake can segment circles well under all conditions. This is because snakes tend to deform towards smoothly changing boundaries. Based on the above analysis, we can infer two major factors that may increase the segmentation error:

Shape. This includes the global silhouette as well as the local distortions caused by artefacts. The error is introduced because the snake has to smooth over

these dramatic geometric changes. The global shape is more important than the local distortions.

Noise. There are two reasons: first, in order to remove noise, the sigma of the Gaussian blurring is increased, which also blurs the boundary and leads to a less accurate contour (Fig 6-10 b); second, noise such as salt-and-pepper introduces local minima that prevent some snake points from reaching the boundary (Fig 6-10 c).

(3) Influence of initialisation

(a) (b) (c) Fig 6-10 Influence of noise upon segmentation error

(a) (b) (c) (d) (e) (f) Fig 6-11 Influence of snake's initialisation

The initialisation using the gradient profile is not deterministic: different starting points result in different initialisations, as shown in Fig 6-11 (a and b, d and e). Snakes have a certain amount of robustness to the initialisation. Fig 6-11 shows that the snakes capture the same boundaries of the circle and the ellipse (c and f) despite different initialisations.

(a) (b) (c) (d) Fig 6-12 Different initialisations and different results

For objects with more complex boundaries in degraded images, the snake may fail to capture the boundary, as shown in Fig 6-12. The snake fails to capture the boundary in (b) because of its initialisation in (a), while the initialisation in (c) leads to a successful segmentation. The remedy is snake refinement, which is discussed in Chapter 7.

(4) Influence of the clamping threshold

It has been found that the interior homogeneity of blood vessels is not perfect; in the gradient map this produces local minima that trap snake points. Gradient map clamping is introduced to overcome them, and this is done through thresholding. The value of the threshold in our case was arrived at by trial and error. In this system it is set to 8, which works well, but the value may need to change if the properties of the CT images change. The uneven homogeneity is simulated by salt-and-pepper noise. In Fig 6-13, the clamping threshold decides the success or failure of the segmentation.

(a) Threshold = 15 (b) Threshold = 8 Fig 6-13 Different clamping thresholds

(5) Influence of the controlling parameters

There are four controlling parameters of a snake: α, β, γ and σ. The influence of σ upon the segmentation error has been discussed in (2). The generic influence of α, β and γ upon snakes is beyond the scope of this project; the goal is to find their optimal values for our CT images. It is only necessary to find their optimal values for one CT image: they can then be used for all CT images, because all the CT images have similar properties and the aorta boundaries have similar shapes. This assumption proved correct after testing on the CT images. The controlling parameters are set as:

    α = 1.0, β = 0.8, γ = 1.2, σ = 2.0        (6-2)

The size of the search neighbourhood also affects the convergence of the snake algorithm and the segmentation error. In this system it is a 5x5 neighbourhood.

6.3 Segmentation of 2D CT images

6.3.1 Difference map

In order to show the comparison between the manual segmentation and the automatic segmentation by the snake clearly, the difference map is introduced, shown in Fig 6-14.

(a) manual segmentation (b) automatic segmentation (c) difference map Fig 6-14 Difference map

In Fig 6-14, the Boolean XOR operation is applied to each corresponding pixel pair of the manual segmentation (a) and the automatic segmentation (b). The result is then mapped into the range [0, 255], just like the gradient map. I call these images maps because they are generated by mapping data into grey levels.

6.3.2 Sequential snakes

One problem occurs during the evaluation: after the bifurcation of the aorta, two snakes need to be run to obtain the boundaries of both branches. In Fig 6-15, the snake is run for the first time and the segmentation is saved in an image (Fig 6-15 c). The snake is then run a second time and its result is saved in a separate image. The two segmentations are merged to give the image in Fig 6-15 (e). In this case the segmentation error of the first segmentation (Fig 6-15 d) is 24.7%, because the second branch in this image is still to be segmented. The segmentation error after running the snake twice (Fig 6-15 f) is 13.8%; the segmentation is now successfully finished.

(a) original image (b) manual segmentation (c) run snake once (d) difference map 1 (e) run snake twice (f) difference map 2 Fig 6-15 Sequential snakes

It is not reliable to use the segmentation error as the only criterion; the distribution of the error pixels also matters. A segmentation can be regarded as successful if the error pixels are evenly distributed along the boundary. An improved evaluation method is discussed in the next chapter.

6.3.3 Segmentation of CT images

In total, 703 CT images have been segmented; the analysis of them is given in Appendix E.

6.4 Supervised 3D blood vessel segmentation

The major goal of this project is 3D segmentation, so the system needs to segment from one CT slice to the next using recursive snakes. The current system is not yet mature enough to run fully automatically. Two barriers cannot be overcome at present: one is the bifurcation of blood vessels; the other is the occurrence of aneurysms, where the blood vessel diameter changes dramatically. Therefore, 3D blood vessel segmentation can only be carried out under the user's supervision, i.e. the segmentation of each CT image is checked by the user before the system proceeds to the next CT image. If a segmentation is unsuccessful, the user initialises a new snake using the gradient profile.

(a) previous result (b) recursive initialisation (c) failed segmentation (d) initialisation using gradient profile (e) successful segmentation Fig 6-16 Supervised segmentation

Fig 6-16 shows the procedure of supervised segmentation: (a) shows the resulting snake on the previous image; this snake is used as the initial snake in (b); (c) shows that the recursive snake fails; in (d) the user reinitialises a snake using the gradient profile; (e) shows the successful segmentation. Although initialisation using the gradient profile is more likely to lead to a successful segmentation, there are situations where only the recursive snake works (see Fig 6-17). The reason is that, for a tapering aorta, the recursive snake approaches the boundary from the outside, where the homogeneity is better than on the inside.

(a) using gradient profile (b) recursive snake Fig 6-17 Only the recursive snake works

CHAPTER 7 LIMITATIONS AND FUTURE WORK

7.1 Limitations and improvements of the system

7.1.1 Unfinished mission

In the interim report, I mentioned the orthogonal intersection of the 3D blood vessel model. This function is not implemented in the current system. I proposed two methods: one is to run the snake on a 3D plane generated through tri-linear interpolation of the CT images; the other is to construct the 3D blood vessel model first and then generate an intersection plane to obtain the orthogonal image. The ultimate target is the real diameter of the blood vessel. The second method is more feasible, but it requires the 3D model to be constructed first, which has not been finished in this project.

7.1.2 Implementation of other flavours of snake algorithms

This snake cannot segment all of the CT images, and its flexibility is not good enough to segment shapes with certain irregular boundaries. As stated in Chapter 4, the greedy snake is the proper snake for this project, in which the aorta is the target. But sometimes both the aorta and tributary blood vessels are needed. For example, if we want to segment the aorta together with the two connected tributary vessels leading to the kidneys, this greedy snake is not applicable: geometrically, the region is large with two long, narrow concavities on either side. GVF may be more suitable than the greedy snake in such cases.

7.1.3 Segmentation refinement

The initialisation of the snake determines not only the success of the greedy snake but also the accuracy of the result. The segmentation error of this system is large in several cases. The errors may come from the post-processing stage, which represents the boundary, or from the snake itself, which finds the boundary. For the latter, refinement can be achieved by running the snake twice: the first time the snake is initialised recursively, and the second time it is initialised using the gradient profile, with the weight centre of the first result used as the starting point of the gradient profile. A shape description of the blood vessel could be the criterion for deciding which of the two results is better.

7.1.4 Boundary representation

In order to obtain the segmented blood vessel object, the snake points are used to generate the boundary. This step is a major contributor to the segmentation error. There are two ways of improving its accuracy.

Add or remove snake points adaptively. The method adopted in this project is piecewise line-fitting using the Bresenham algorithm. Precise reconstruction of the curved boundary can be achieved when the

distance between two adjacent snake points is below a certain value. The diameter of the blood vessel varies: for example, the blood vessel bulges where an aneurysm occurs and tapers after the bifurcation. In order to keep a relatively small distance between snake points, the total number of snake points needs to adapt to the blood vessel diameter. In this project it is a fixed number of 40, which provides acceptable precision in all cases. It is also necessary to remove snake points where the blood vessel becomes slim; otherwise the contour is over-sampled.

Boundary representation using spline-fitting. Given a certain number of snake points, we can use them as control points for spline fitting instead of connecting them directly. The most popular spline adopted with snakes is the Bezier spline.

7.1.5 Implementation of the full scan-line conversion algorithm

In this project, the polygon filling using the scan line conversion algorithm is simplified under the assumption that the blood vessel is always a convex polygon. Although this is true in most cases, some concave polygons (Fig 7-1) do occur and undermine this algorithm (refer to 6.2). Manual post-processing is needed to obtain Fig 7-1(c). The most popular implementation of the full algorithm is through a linked list of linked lists [19].

(a) original image (b) segmentation (c) manual post-processed Fig 7-1 Filling a concave polygon

7.1.6 3D blood vessel segmentation without supervision

So far, the 3D blood vessel segmentation needs human supervision to overcome the barrier where an aneurysm occurs, since the snake then needs to be initialised using the gradient profile, which demands that the starting point of the gradient profile algorithm be selected manually. The weight centre of the previous image's resulting snake could be a candidate starting point. This question is complex and deserves further research.

7.1.7 Evaluation method

In the evaluation of the greedy snake, the pixel error is used as the only criterion of segmentation

error. A more reliable evaluation of segmentation also takes the distribution of the pixel error into consideration. My proposed method is to compare the variation of the weight centres of the two segmented blood vessels. A small variation of the weight centre means the pixel error is evenly distributed along the boundary; so if a large pixel error occurs in that case, the boundary needs to be refined, using the methods of 7.1.3. If the variation of the weight centre is dramatic, it means the snake algorithm has failed to capture the boundary of the blood vessel; then we need to check 7.1.3, 7.1.4 and 7.1.5, which are the three major contributors to the segmentation error.

7.2 Future work

7.2.1 Implementation of a region growing algorithm

The two competing technologies for 2D blood vessel segmentation are region growing and snakes. Implementing a region growing algorithm and comparing its results with the snakes may give rise to a combination of the two. My suggestion is to apply the snake twice, then region growing. The snakes can deform from both inside and outside the blood vessel: the inside snake helps to overcome concavities within the blood vessel, while the outside snake helps to limit the leakage of region growing. The refinement of the boundary then relies on region growing.

7.2.2 Implementation of volume rendering, MIP and SSD

Other competing technologies in 3D blood vessel segmentation are MIP, SSD and volume rendering. In order to show either the relationship between the aorta and other organs or the whole blood vessel tree, we need to see through some blood vessels or parts of them. My suggestion is to implement volume rendering first. MIP can then be achieved by degenerating the integration along each cast ray into finding the maximum value. SSD can be achieved by using a single threshold in the transfer function before volume rendering; of course, SSD can also be achieved using the marching cubes algorithm.

7.2.3 Immersive virtual reality

The segmented blood vessels can be further exploited in this direction:

Generate a VRML model from the segmented blood vessels; this facilitates interaction between the user and the geometric model;

Impose computational fluid dynamics constraints upon the above blood vessel model and let blood flow within the vessels; this makes the blood vessel model dynamic and vivid;

Apply haptic feedback using a stylus, data gloves and other immersive virtual reality facilities to support virtual surgery on this model.

CHAPTER 8 CONCLUSION

The system developed during this project provides a method for 3D blood vessel segmentation using recursive snakes. The segmentations produced by this system show that reconstructing a 3D blood vessel model through 2D image segmentation is feasible, and the comparison between the system's segmentations and the manual segmentations shows that the system is reliable, although it needs to be improved as suggested in the previous chapter.

The system provides an improved snake algorithm based on the greedy snake, a graphical user interface that facilitates segmentation, and an automatic initialisation function. It provides a convenient and intuitive environment that can be used to:

Examine the properties of snakes;

Manually segment blood vessels (by radiologists; refer to Appendix E);

Segment 3D blood vessels, in the form of stacks of 2D CT images, using recursive snakes.

The evaluation of the 2D greedy snake is carried out on both synthetic and real CT images. The testing results demonstrate that the greedy snake is well suited to aorta segmentation: it runs fast with relatively high segmentation precision.

This project systematically applied knowledge of multimedia technologies (refer to Appendix A). The research work was meaningful and rewarding, and the preliminary testing results are encouraging. The system developed for this project provides a basis for further research on subjects such as the detection of blood vessel bifurcations and orthogonal intersection of 3D blood vessel models.

References

[1] Sven Loncaric & Domagoj Kovacevic, Semi-automatic active contour approach to segmentation of computed tomography volumes. Franzes University, Austria.

[2] A. C. Kak & Malcolm Slaney, Principles of Computerized Tomographic Imaging, Society of Industrial and Applied Mathematics, 2001.

[3] A. C. Kak & Malcolm Slaney, Principles of Computerized Tomographic Imaging, IEEE Press.

[4] Nick Efford, Digital Image Processing: A Practical Introduction Using Java, Addison-Wesley, 2001.

[5] Bert Verdonck, Blood vessel analysis for 3D spiral CT and MR angiography, PhD thesis, École Nationale Supérieure des Télécommunications, Paris.

[6]

[7]

[8] M. Kass, A. Witkin & D. Terzopoulos, Snakes: Active contour models, International Journal of Computer Vision, 1988.

[9] A. A. Amini, S. Tehrani & T. E. Weymouth, Using dynamic programming for minimizing the energy of active contours in the presence of hard constraints, Proceedings, Second International Conference on Computer Vision, 1988.

[10] D. J. Williams & M. Shah, A fast algorithm for active contours and curvature estimation, CVGIP: Image Understanding, vol. 55, no. 1.

[11] C. Xu & J. L. Prince, Gradient Vector Flow: A New External Force for Snakes, Proceedings, IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Los Alamitos: IEEE Computer Society Press, June.

[12] C. Xu & J. L. Prince, Snakes, Shapes, and Gradient Vector Flow, IEEE Transactions on Image Processing, March.

[13] C. Xu, Deformable models with application to human cerebral cortex reconstruction from MRI, PhD thesis, Johns Hopkins University, April 2000.

[14] Ian Sommerville, Software Engineering, 6th Edition, Addison-Wesley.

[15] A. K. Jain, Fundamentals of Digital Image Processing, Prentice Hall, NJ, 1989.

[16] Deitel & Deitel, Advanced Java: How to Program, Prentice Hall, 2002.

[17]

[18] Donald Hearn & M. Pauline Baker, Computer Graphics, C Version, Prentice Hall, 1997.

[19] F. S. Hill, Jr., Computer Graphics Using OpenGL, Prentice Hall, 2001.

[20] Jamie Jaworski, Java 2 Platform Unleashed, Sams, 1999.

[21] Deitel & Deitel, Java: How to Program, Prentice Hall, 2001.

[22] Daniel Rueckert, Segmentation and Tracking in Cardiovascular MR Images using Geometrically Deformable Models and Templates, PhD thesis, Imperial College of Science, Technology and Medicine, 1997.

[23] D. H. Davis, The application of active contour models to MR and CT images, project report, University of Birmingham.

Bibliography

The following materials are not directly cited in the report, but they contributed to the success of this project.

1. Paul Quin, The 3D segmentation of blood vessels by the active contour model, BSc project report, 2001, School of Computing, University of Leeds.

2. Bjarne Stroustrup, The C++ Programming Language, Addison-Wesley.

3. Will Schroeder, Ken Martin & Bill Lorensen, The Visualization Toolkit, Prentice Hall.

4. Randy Crane, A Simplified Approach to Image Processing, Prentice Hall.

5. Michael Seul, Lawrence O'Gorman & Michael J. Sammon, Practical Algorithms for Image Analysis: Descriptions, Examples and Code, Cambridge University Press.

6. Dzung L. Pham, A survey of current methods in medical image segmentation, Department of Electrical and Computer Engineering, The Johns Hopkins University.

7. Bruno Santos Pimentel et al., On active contour models and their application on medical image segmentation, Departamento de Ciencia da Computacao, UFMG, Brazil.

8. Programming in Java Advanced Imaging, Sun Microsystems.

9. Sha He, Medial Axis Reformation: A new visualization method for CT angiography, Acad Radiol, 2001.

Appendix A Project Experience

A.1 Knowledge preparation

Here is a summary of the knowledge involved in this project and where it appears in the report. Anyone who would like to read this report or undertake a similar project is recommended to revisit these subjects.

1. PSS: active contour models (snakes), low-pass filtering, edge detection, point processes (pixel XOR), digital image I/O (using JAI).
2. VIS: volume rendering, iso-surfacing (shaded surface display), angiography (maximum intensity projection) (Chapter 2).
3. AGR: rasterisation of lines (Bresenham algorithm), rasterisation of polygons (scan-line conversion algorithm) (Chapter 5).
4. VWE: model-view-controller architecture.
5. OOP: object-oriented analysis and design, Java Swing programming, Java file I/O (Chapter 5).
6. Data structures: dual-directional linked list (Chapter 5).
7. Numerical analysis: standard deviation, extrapolation (Chapter 6).
8. Optimisation theory: local and global optimum analysis (Newton downhill algorithm) (Chapter 4).

NB: 1-5 are taught modules of the DMS programme.

A.2 Reference model

A reference model has been derived and used as the guideline of this project. The method of using a reference model to help solve the problem is inspired by Ken Brodlie's course on data

visualization, where he adopts the reference model of data enrichment, mapping and rendering to implement the visualization workflow. In this project, the conceptual reference model (Fig A-1) shows the relationship among computer graphics, data visualisation and image processing.

Fig A-1 Reference model

It is a cycle in which each of them fits one stage. Data flow from one stage to another, and information is processed at each stage with a different presentation. In this project, the stacks of CT images are raster images and they are the input to image processing. The segmented images can be interpreted as volume data, so they can be processed in the data visualization stage. The output of data visualization is geometric models, which are ready to be rendered in the computer graphics stage. The output of computer graphics is a raster image, which is again the input to image processing. In fact, a deformable model is a geometric model imposed upon digital images; in this project the resulting snake is sent directly to the computer graphics stage for rasterisation. The reference model prevented me from getting lost in technical details (see Table A-1).

A.3 Research methodology

I summarise the research methodology I learned from this project as follows:

It starts with a literature review. I would like to thank Dr. Andy Bulpitt, who gave me several papers related to this project, which contributed to the write-up of my interim report. Their references provided clues for finding more papers.

One thing I would like to emphasise is to address one problem at a time. First I tested the dual-directional list inside the greedy snake, and it failed. I then extracted it, tested it alone, and it was finished within an hour. Then I brought it back into the greedy snake, and it worked.

At the beginning, I told Dr. Andy Bulpitt that I chose this project not for research in image processing but for practising Java programming. In retrospect, I think that programming is not about a particular language; it is about how to study and practise programming skills such as algorithm design, data structures and logic. Another achievement concerning programming is OOAD. In this project the fundamental logic is implemented by procedural programming, while OOAD is used for system design. I had to hold a detached attitude towards programming: I had to take a step back from solving the problem directly and analyse the problem in order to identify its attributes and operations. OOAD makes the total length of the

programs smaller. In fact, I inherited one class, RayCasting, from my PSS coursework and two classes, GaussianKernel and ImageView, from [4]. A well-designed system takes very little time to implement.

A.4 Project management

It would be impossible to do a project like this without good planning and self-motivation. The timetable set up in the interim report is the foundation of the time management. This timetable is absolutely necessary even though its execution differed from it in many ways; it reminds me of what has been done and what is yet to be done, and keeps me from idling.

Throughout the project I was always trying to find shortcuts, and I found two. I originally intended to manually find the blood vessel axis before implementing the snake, but changed my mind and implemented the snake first in order to gain some solid achievements. I then received the suggestion from Dr. Andy Bulpitt to use the result of the previous snake as the initialisation, and tried it successfully, so there was no need to manually find the centroid; this saved two weeks. Although the manual axis extraction was also circumvented, I learned MVC to design the GUI for the snake. A severe logic error was found thanks to this GUI, and four weeks of work on researching the snake and evaluating it on CT images were finished within a week because of it. The total time saved reached almost a month; sadly, it was spent on the abominable writing up. So time management is important, but it should not be rigid in its detailed steps: I frequently remind myself not of what the timetable asks me to do but of what I should achieve at this stage. New ideas are the key to success, and programming makes them a reality; thinking hard is a prerequisite of the project.

Self-motivation is another important factor. I was preoccupied with this project because I am interested in technical details and like to implement them through programming. I would not suggest that anyone who does not like programming, or does not care about technical details, do this or a similar project.

What went badly concerns evaluation. Testing and evaluation should be well recorded: I wasted a lot of time repeating tests when the results were needed for this report. Originally I left three weeks for writing up; after being told that only the report matters, I started a week earlier, on August 5. My suggestion to those who are not native English speakers is that even four weeks may not be enough.

All in all, I enjoyed doing this project and working with my supervisor, who apparently knows what I need most. I also learned how to show respect and practise tolerance towards people with different viewpoints. The achievements and experience I gained from this project have prepared me well for my forthcoming PhD research.

Appendix B Objectives and Deliverables


Appendix C Interim Report Feedback

Appendix D Snake's Model & View

The greedy snake algorithm maintains two data models: the snake and the image. The data model of the snake is maintained in the classes SnakeNode and SnakeList; its view is maintained in the class GreedySnake. The data model and the view of the image are both maintained in the class GreedySnake.

D.1 Class SnakeNode

Attributes:

SnakeNode nextnode: for node i, the next node is i+1;
SnakeNode prevnode: for node i, the previous node is i-1.

Methods:

Constructors snakenode (three of them): one is empty; one sets the position; one sets the snake point position and makes it point to the previous and succeeding nodes;
setcurrentposition: updates the current position of the snake point;
setprevsnakenode and setnextsnakenode: make a snake node point to its previous and succeeding snake nodes.

D.2 Class SnakeList

Attributes:

SnakeNode firstnode: first node pointer (see Fig 5-3);
SnakeNode lastnode: last node pointer (see Fig 5-3).

Methods:

Constructor SnakeList: creates a snake list given a node; both the first node pointer and the last node pointer then point to this node;
InsertAtBack: adds a new snake node at the end;
InsertBeforeNode and insertAfterNode: insert a snake node before or after a given snake node of the snake list;
getlength and averagedist: return the average distance between two snake points.

D.3 Class GreedySnake

Class GreedySnake maintains four attributes holding image data:

image: the buffered image that holds the image currently being shown;
origim: the buffered image that stores the original CT image;
blurredim: the buffered image that holds the result of Gaussian blurring;
gradim: the buffered image that holds the gradient map image.

The presentation of the buffered image is implemented in the class Scroller, which extends the class JScrollPane. In this pane, both the image and the pixel information under the cursor are shown. The class ImageView implements the function of showing a buffered image in a scrollable label; this class is rewritten based on [4]. The pixel information under the cursor is implemented in the class PixelInfoPane, which is also from [4].

D.4 Class Slithe

Class Slithe maintains the data model of a snake during the process of energy minimisation.

(1) The inner class Cell implements the search neighbourhood. This class keeps the information of a search neighbourhood: given a snake point, the search neighbourhood is created accordingly, and each instance of class Cell corresponds to one cell of that neighbourhood, with these attributes:

Position: records the coordinates of that cell;
TotalE: keeps the total energy of the snake point if it moves there;
CurveE: keeps the curvature energy of the snake point if it moves there;
ContE: keeps the continuity energy of the snake point if it moves there.

(2) The attributes of class Slithe keep the data necessary to support the snake. The major attributes correspond to the underlying data:

the gradient map and the snake list, including its geometry and coefficients, are the two basic data items;
a search neighbourhood is generated for each snake point of the snake list;
the average distance between two adjacent snake points is updated each time a snake point moves.

(3) Methods of Slithe

The constructor Slithe only provides the necessary initial values for each instance of Slithe;
Method go runs the snake for one iteration by calling the method updateposition for each snake point and returns the number of moved snake points; it corresponds to the snake energy minimisation of 4.3.3;
Method updateposition computes the snake point's energy within the search neighbourhood and checks whether the snake point needs to move by calling the method moveto; it returns 1 if so and 0 if not; it corresponds to the snake point energy minimisation of 4.3.4;
Method moveto checks whether the minimal-energy point is the snake point's current position;
Method normalise scales the curvature and continuity energies into (0, 1) and the image energy into (-1, 0);
Method updatebeta sets beta to zero to relax the curvature constraint and allow corners. (NB: because of the continuity constraint this does not produce a precise corner, but a position closer to the corner than when beta is non-zero.)
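To make the structure described in D.1 and D.2 concrete, the following is a minimal sketch of a closed dual-directional linked list of snake points. The field and method names follow the descriptions above, but the bodies are simplified assumptions rather than the project's exact code.

import java.awt.Point;

// Minimal sketch of the closed dual-directional (circular doubly linked) snake list
// described in D.1 and D.2. Simplified for illustration; not the project's exact code.
class SnakeNodeSketch {
    Point position;
    SnakeNodeSketch prevNode;
    SnakeNodeSketch nextNode;

    SnakeNodeSketch(Point position) {
        this.position = position;
    }
}

class SnakeListSketch {
    SnakeNodeSketch firstNode;   // identifier of the list
    SnakeNodeSketch lastNode;    // always points to the current end node

    SnakeListSketch(SnakeNodeSketch node) {
        firstNode = node;
        lastNode = node;
        node.nextNode = node;    // a single node closes on itself
        node.prevNode = node;
    }

    // Appends a node at the back while keeping the list closed in both directions.
    void insertAtBack(SnakeNodeSketch node) {
        node.prevNode = lastNode;
        node.nextNode = firstNode;
        lastNode.nextNode = node;
        firstNode.prevNode = node;
        lastNode = node;
    }

    // Average distance between adjacent snake points, traversed via a temporary pointer.
    double averageDist() {
        double total = 0;
        int count = 0;
        SnakeNodeSketch current = firstNode;
        do {
            total += current.position.distance(current.nextNode.position);
            count++;
            current = current.nextNode;
        } while (current != firstNode);
        return total / count;
    }
}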

Appendix E: 3D Blood Vessel Segmentation

In total, 703 CT images from 7 stacks were tested and handed to Dr Andy Bulpitt for further evaluation. The segmentation images are stored in raw byte format and can be viewed with MIV, as shown below (MIV is provided by Dr. Andy Bulpitt). Stack00 is used as an example. This is the original stack of CT images:

The following are my segmentations of images No. 38 to No. 97 of stack00.

The segmentation errors are saved in a text file named stack00.txt. They are visualised in the following chart of segmentation error against image number for stack00 (errors below 20% indicate a successful snake; below 10%, a good snake).

In order to show the differences, I also saved the difference maps in raw byte format; they are viewed as follows.

73 Appendix F Software Usage F.1.Supervised 3D segmentation Step1: Start with this command C:\java GreedySnake image1.jpg (image1.jpg is the input image and it must be specified). Step2: For the first time running, you have to use gradient profile to initialise a snake. Click one point within blood vessel and you get: Step 3: Run greedy snake by clicking Run snake button to get the result snake; 67

74 Step 4: The result is saved by clicking save snake image button, you get file snake50.jpg which saves the segmentation of image50.jpg. The name is automatically generated. Step 5: Then click open next image for image51.jpg. Initialise snake recursively without doing anything. Or you may initialise using gradient profile as step2. Thus, you are able to supervise the running of this snake. Step 6: Read a non-sequential image, say, image80. Click open image button. You get this dialogue window that lets you select the image. 68

F.2 Manual segmentation and manual initialisation

Step 1: change into the manual directory and use the same command line as step 1 of F.1;

Step 2: click the mouse's left button to place a snake point along the blood vessel's boundary, and click the right button to finish the snake;

Step 3: run the snake, or save the snake without running it, which amounts to a manual segmentation.

F.3 View gradient map and blurred image

Changing the image view is bound to the mouse's middle button. If your mouse has a roller instead of a middle button, or has only two buttons, press ALT + left button to simulate the middle button. The three views (original image, blurred image and gradient map) are shown in a round-robin manner in response to the middle button.

F.4 Manual axis extraction
Step 1: Enter the command line c:\java proj
Step 2: Click the mouse at any position on the right image to save the coordinates into the data file. NB: the mouse coordinates are converted into pixel coordinates. For images from stack00, the coordinates are saved into the file stack00.dat, one record per click, in this format:
Image number/x-coordinate/y-coordinate
For example, the record 1/32/136 means the centroid of the blood vessel in image1 of stack00 is (32,136).
Step 3: Choose images using the tree on the left.
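To make the record format concrete, the small sketch below reads a stackNN.dat file of "image number/x/y" records and prints the stored centroids. The reader class and file handling are assumptions for illustration only; the format itself is the one stated above.

```java
// Sketch (assumed names): read "imageNumber/x/y" axis records, e.g. "1/32/136".
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

class AxisRecordReader {
    public static void main(String[] args) throws IOException {
        try (BufferedReader in = new BufferedReader(new FileReader("stack00.dat"))) {
            String line;
            while ((line = in.readLine()) != null) {
                String[] f = line.split("/");          // image number, x, y
                int image = Integer.parseInt(f[0].trim());
                int x = Integer.parseInt(f[1].trim());
                int y = Integer.parseInt(f[2].trim());
                System.out.printf("centroid of blood vessel in image%d is (%d,%d)%n", image, x, y);
            }
        }
    }
}
```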
