Institutionen för systemteknik
Department of Electrical Engineering

Master's thesis

Design, evaluation and implementation of a pipeline for semi-automatic lung nodule segmentation

Thesis carried out in Computer Vision at the Institute of Technology, Linköping University
by Lukas Berglin

LiTH-ISY-EX--16/4925--SE
Linköping 2016

Department of Electrical Engineering, Linköpings universitet, SE Linköping, Sweden


Design, evaluation and implementation of a pipeline for semi-automatic lung nodule segmentation

Thesis carried out in Computer Vision at the Institute of Technology, Linköping University
by Lukas Berglin

LiTH-ISY-EX--16/4925--SE

Supervisors: Marcus Wallenberg, ISY, Linköpings universitet; Toms Vulfs, Sectra Imaging IT AB
Examiner: Maria Magnusson, ISY, Linköpings universitet

Linköping, 3 March 2016


Abstract

Lung cancer is the most common type of cancer in the world and always manifests as lung nodules. Nodules are small tumors that consist of lung tissue. They are usually spherical in shape and their cores can be either solid or subsolid. Nodules are common in lungs, but not all of them are malignant. To determine if a nodule is malignant or benign, attributes like nodule size and volume growth are commonly used. The procedure to obtain these attributes is time consuming, and therefore calls for tools to simplify the process. The purpose of this thesis work was to investigate the feasibility of a semi-automatic lung nodule segmentation pipeline including volume estimation. This was done by implementing, tuning and evaluating image processing algorithms with different characteristics to create pipeline candidates. These candidates were compared using a similarity index between their segmentation results and ground truth markings to determine the most promising one. The best performing pipeline consisted of a fixed region of interest together with a level set segmentation algorithm. Its segmentation accuracy was not consistent for all nodules evaluated, but the pipeline showed great potential when dynamically adapting its parameters for each nodule. The use of dynamic parameters was only briefly explored, and further research would be necessary to determine its feasibility.


Acknowledgements

I would like to thank Sectra Imaging IT AB for giving me the opportunity to do my master's thesis in such a challenging but interesting field. Thank you, everyone at Sectra, for the amazing work environment and positive energy. A special thank you to my supervisors Toms Vulfs and Marcus Wallenberg and my examiner Maria Magnusson for the continuous help and feedback. I would also like to thank Malin Bergqvist for always supporting and inspiring me to do my best.


Contents

1 Introduction
   Motivation
   Purpose
   Problem statements
   Limitations
   Outline of report
2 Background theory
   Computed tomography
   Lung nodules
   Hounsfield units
   DICOM
   Jaccard index
3 Algorithms
   Region of interest: Fixed size, Derivative search
   Pre-processing: Gaussian filtering, Edge filtering, Adaptive filtering, Median filtering, Cylinder filtering, Local contrast enhancement filtering, Anti-geometric diffusion
   Segmentation: Region growing, Standard deviation thresholding, Lassen threshold, Ensemble segmentation, Level set, Otsu's method
   Post-processing
4 Method
   Solution layout: Seed point, Region of interest, Isotropic voxels, Pre-processing, Segmentation, Post-processing, Volumetric analysis
   Tuning process
   Pipeline evaluation process
5 Result
   Test data
   Tuning
   Pipeline evaluation
6 Discussion
   Pipeline: Region of interest, Pre-processing, Segmentation, Post-processing
   Tuning process
   Pipeline evaluation process
7 Conclusions
A Datasets
Bibliography

List of Figures

2.1 Radiography, anatomy planes and CT
2.2 Examples of nodules handled in this thesis
3.1 All fourteen directions used in the derivative search
3.2 Examples to illustrate the effect of selected pre-processing algorithms
3.3 Visualization of level set contours
3.4 Examples of Otsu's method for determining thresholds
4.1 Layout of the final solution for the nodule analysis
5.1 Axial slices of the selected tuning nodules
6.1 Example of different complexity for subsolid nodules
6.2 Example of increased performance using dynamic parameters
6.3 Example of decreased performance using dynamic parameters
6.4 Example of intensity leakage for small nodules close to chest walls
6.5 Example of low performance for a small nodule

List of Tables

5.1 List of abbreviations and section references
5.2 Results for the tuning process
5.3 Results for the first run with the centered seed point
5.4 Results for the second run with the seed point slightly off center
5.5 Results for the third run with the seed point slightly off center

Abbreviations

AGD    Anti-Geometric Diffusion
CT     Computed Tomography
DICOM  Digital Imaging and Communications in Medicine
HU     Hounsfield Unit
IDRI   Image Database Resource Initiative
LCE    Local Contrast Enhancement
LIDC   Lung Image Database Consortium
PACS   Picture Archiving and Communication System

Chapter 1
Introduction

This document is written as a Master of Science thesis at the Department of Electrical Engineering at Linköping University, with Maria Magnusson as examiner and Marcus Wallenberg as supervisor. The work was performed at Sectra Imaging IT AB, with Fredrik Häll as requirement specifier and Toms Vulfs as supervisor. Sectra Imaging IT AB is a subsidiary of Sectra AB, which was founded in 1978 and specializes in both medical IT and secure communication.

1.1 Motivation

Lung cancer is the leading cause of cancer death in the world. In 2012, over 1.8 million new cases were diagnosed as lung cancer, which corresponded to 13.0% of all new cancer cases that year. Lung cancer manifests as small tumors called lung nodules [1]. Manually assessing these lung nodules is a time consuming task, taking 5-10 minutes for each lung scan [2]. This is why efficient tools for diagnosing lung nodules in lung volumes are of the highest importance.

1.2 Purpose

The aim of this thesis is to investigate the feasibility of a semi-automatic lung nodule segmentation system. This was done by evaluating the accuracy and the computational cost of multiple existing pre-processing and nodule segmentation algorithms. The best performing algorithm combination was implemented and optimized for Sectra's proprietary picture archiving and communication system (PACS).

1.3 Problem statements

The work consisted of two main parts: a design part and an implementation part. For the design part, the goal was to find the most feasible pipeline for lung nodule segmentation. The following two questions were therefore explored:

- What pipeline of existing pre-processing, segmentation and post-processing algorithms is the most promising one?
- What factors limit the pipelines not chosen among those evaluated?

For the implementation part, the goal was to implement and optimize the selected pipeline in Sectra's proprietary PACS. When doing this, the layout and limitations of the existing software had to be taken into account.

1.4 Limitations

The limitations for this work were:

- The end user input is limited to a single click on a nodule.
- The click is assumed to be approximately in the center of mass of the nodule.
- The computational time for the whole pipeline has to be below two seconds per nodule.

1.5 Outline of report

Following this introductory chapter, the report includes six more chapters. Chapter 2 introduces related theory and concepts, followed by chapter 3, which presents all algorithms used. Chapter 4 covers the method and describes the pipeline layout and all tests performed for the pipeline evaluation. Chapter 5 presents the results from the evaluation. Chapter 6 discusses the results and motivates the final pipeline chosen for implementation. Finally, chapter 7 contains conclusions and suggestions for future work.

Chapter 2
Background theory

This chapter introduces the necessary background theory. This includes computed tomography, lung nodules, the medical image standard used and the Jaccard index.

2.1 Computed tomography

There exist multiple techniques in medical diagnostic imaging: computed tomography (CT), magnetic resonance imaging (MRI), ultrasound and radiography, to mention a few. The most common technique for lung scanning today is CT, which is based on X-ray scanning. In classical X-ray radiography, X-rays are emitted from an X-ray tube and sent through the patient. Radiation not absorbed by the body exposes a digital flat panel detector to create the image. The image intensities represent the inside of the body, due to the different radiodensity factors of different anatomical structures. X-ray images are projections that show overlaid structures, see figure 2.1a. CT is also based on X-rays, but the flat panel detector is replaced with an arc-shaped detector. The X-ray tube and the detector rotate around the patient in a helical pattern. This way, readings are obtained from all angles around the patient, which enables 3D reconstruction of the body. The 3D reconstruction is usually visualized as sagittal, coronal and axial slices, see figure 2.1b. In this work, a slice or an image refers to an axial slice, see figure 2.1c.

Figure 2.1: Example of radiography, anatomy planes and CT. (A) Radiography image. (B) Anatomy planes: A = axial, B = coronal, C = sagittal [3]. (C) CT axial slice.

2.2 Lung nodules

Lung nodules are lung tissue abnormalities. They are overall round or oval-shaped and 3-30 mm in diameter [4]. As mentioned in section 1.1, lung cancer always manifests as lung nodules.

However, the opposite is not true, as most lung nodules are not cancerous. Most lung nodules are benign, and may instead be the result of inflammations and scars from fungal or bacterial infections in the lung. They are very common and exist in at least 50% of people by the age of 50. Despite most nodules being benign, they are still a potential manifestation of lung cancer, and therefore there is a big challenge in determining whether or not they are cancerous. Two attributes commonly used to determine this are nodule size and volume growth.

For this work, both solid nodules and subsolid nodules are handled, see figure 2.2. Solid nodules are well-defined, bright and round in shape. They have a uniform intensity distribution with high contrast to air, which makes them relatively easy to detect in CT scans. Subsolid nodules do not have uniform shapes and/or intensity distributions, and their intensity distributions are closer to the air in the lung than those of solid nodules [5], which makes them harder to separate from the background. The nodules can be well-circumscribed or connected to blood vessels, but they are assumed to not be connected to the chest walls.

2.3 Hounsfield units

The Hounsfield unit (HU) value is a quantity used for describing the radiodensity in CT. It is a linear transformation of the linear attenuation coefficient µ and is used to present the attenuation in a standardised and convenient form. The conversion from µ to H is

H = \frac{\mu - \mu_{water}}{\mu_{water}} \cdot 1000,    (2.1)

where µ_water and µ_air are the linear attenuation coefficients of water and air, respectively. Note that H_water = 0 HU and H_air = -1000 HU.

Figure 2.2: Examples of nodules handled in this thesis: (a) a solid, well-circumscribed nodule; (b) a subsolid, well-circumscribed nodule. Images from the LIDC-IDRI database [6].

2.4 DICOM

Digital Imaging and Communications in Medicine (DICOM) is an international standard in medical imaging. It is used for handling, storing, distributing and viewing any kind of medical images, regardless of their origin. There are hundreds of attributes stored in each DICOM image, but only a few are interesting for the scope of this thesis, namely:

- Slice location: defines where the slice is located.
- Slice thickness: defines the thickness of the slice in millimeters.
- Pixel spacing: defines the width and height of a pixel in millimeters.
- Rescale slope and intercept: two variables used for the linear transformation from voxel intensity to HU.

Voxel intensities in CT slices vary between different image acquisition techniques. This means that equal voxel intensities in two sets of images could correspond to different radiodensities and vice versa. To enable algorithms to include radiodensity characteristics, HU are used instead of raw voxel intensities. The conversion from voxel intensity to HU is

H(x) = I(x) \cdot R_s + R_i,    (2.2)

where I(x) is the voxel intensity at image coordinate x, R_s is the rescale slope and R_i is the rescale intercept.

2.5 Jaccard index

The Jaccard index is used to compare similarities and diversities between two sets, see equation (2.3). It is calculated for a pair of sets by dividing their intersection by their union,

J(A, B) = \frac{|A \cap B|}{|A \cup B|},    (2.3)

where J(A, B) is the similarity of set A and set B. Note that 0 \leq J \leq 1, where A = B gives J = 1 and A \cap B = \emptyset gives J = 0.
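To make equations (2.2) and (2.3) concrete, here is a minimal NumPy sketch. The thesis prototype was written in Matlab and the final version inside Sectra's PACS, so the Python code and the function names below are purely illustrative assumptions, not the thesis implementation.

```python
import numpy as np

def to_hounsfield(voxels, rescale_slope, rescale_intercept):
    """Linear transform from stored voxel intensity to HU, H = I * R_s + R_i (equation 2.2)."""
    return voxels.astype(np.float32) * rescale_slope + rescale_intercept

def jaccard_index(segmentation, ground_truth):
    """Jaccard index |A n B| / |A u B| (equation 2.3) for two binary masks of equal shape."""
    a = segmentation.astype(bool)
    b = ground_truth.astype(bool)
    union = np.count_nonzero(a | b)
    if union == 0:
        return 1.0  # both masks empty; treat the sets as identical
    return np.count_nonzero(a & b) / union
```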

Chapter 3
Algorithms

In this chapter, all algorithms used in the thesis are presented. They are categorized according to which step of the pipeline they are part of, and a detailed description of each algorithm is given. There is a multitude of algorithms within this field, and evaluating all of them would not be feasible. The algorithm selection is based on two reviews [7, 8] of papers in the field together with recommendations from supervisors.

3.1 Region of interest

Listed below are the two region of interest algorithms selected for this work.

3.1.1 Fixed size

The simplest and most intuitive algorithm for choosing a region of interest is to use a fixed size cube. The cube is centered around the user seed point with an edge length of 45 mm, 1.5 times larger than the largest size of a nodule [4]. The additional 50% is to ensure that any nodule fits inside the region even if the seed point is chosen slightly off-center.

Fixed size is based on the region of interest algorithm used by Lassen et al. [5]. Lassen et al. use a user-defined stroke to define a cubic region with side length 1.6 times the stroke length. Since this solution is limited to a seed point as its only user input, the algorithm was modified to fulfill this requirement.
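A minimal sketch of the fixed-size region of interest, assuming NumPy arrays with a (z, y, x) axis order and a voxel spacing given in millimetres; all names are illustrative, not taken from the thesis implementation:

```python
import numpy as np

def fixed_roi(volume, seed, spacing_mm, edge_mm=45.0):
    """Crop a cube with side edge_mm (in millimetres) centred on the seed voxel."""
    half = np.ceil(0.5 * edge_mm / np.asarray(spacing_mm, dtype=float)).astype(int)
    seed = np.asarray(seed)
    lo = np.maximum(seed - half, 0)                    # clamp to the scan borders
    hi = np.minimum(seed + half + 1, volume.shape)
    return volume[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]], lo
```

The returned corner index makes it possible to map coordinates in the cropped region back to the full scan.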

Figure 3.1: All fourteen directions used in the derivative search.

3.1.2 Derivative search

The second region of interest algorithm is a low-pass filter together with a derivative search. The low-pass filter smooths the nodule body to remove small inhomogeneities in the nodule. After that, intensity derivatives are calculated in fourteen different directions starting from the seed point, see figure 3.1. Expansion is performed in all these directions until, for each of them, a point is encountered where:

- The derivative is non-negative.
- The derivative has been negative at least once.

The first condition stops the progression for a direction when it approaches nearby structures outside the nodule. The second condition handles seed points chosen slightly off-center. An off-center seed point traveling towards the center point of the nodule is likely to initially have a non-negative derivative, which otherwise would stop the progression. When all derivatives have stopped, the region of interest is selected as the largest bounding box surrounding all end points.

3.2 Pre-processing

Listed below are all pre-processing algorithms selected for this work. For a visualization of the effect of each algorithm, see figure 3.2.

3.2.1 Gaussian filtering

The Gaussian filter is a weighted low-pass filter. The filter is constructed by sampling a 3D Gaussian function

G(x, y, z) = \frac{1}{2\pi\sigma^2} e^{-\frac{x^2 + y^2 + z^2}{2\sigma^2}},    (3.1)

where x, y and z are the distances from the center of the filter and σ is the standard deviation of the filter. The result of Gaussian filtering is a smoothed image, see figure 3.2, column B.

Gaussian filters were used by multiple authors according to [8], and Matlab's existing implementation has been used during the design phase of this work. The tunable parameters are the filter size and σ.

Figure 3.2: Four different examples to illustrate the effect of all selected pre-processing algorithms. The first row uses a test case, the following three rows use real nodule data. A = original image, B = Gaussian filtering, C = edge filtering, D = adaptive filtering, E = median filtering, F = cylinder filtering, G = local contrast enhancement, H = anti-geometric diffusion.

3.2.2 Edge filtering

Edge filters are used to enhance image features, such as edges of objects. The edge filter uses an unsharp masking method, which creates a low-pass filtered version of the image and subtracts this from the original image. The resulting image then contains the high frequency structures. These are amplified by an amplification parameter and added to the original image. The result of edge filtering is an image with enhanced contrast near edges and borders, see figure 3.2, column C.

Edge filtering is a common technique for enhancing edges. The tunable parameters are the low-pass filter size and the amplification parameter for the high frequencies.
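The unsharp masking scheme described above can be sketched as follows; the use of SciPy's Gaussian filter as the low-pass filter and the parameter names sigma and amount are assumptions made for illustration, not the tuned thesis setup:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(image, sigma=1.0, amount=1.0):
    """Edge filtering by unsharp masking: amplify the high-frequency residual."""
    img = image.astype(np.float32)
    lowpass = gaussian_filter(img, sigma)   # low-pass filtered version of the image
    highpass = img - lowpass                # high-frequency structures
    return img + amount * highpass          # enhanced edges and borders
```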

3.2.3 Adaptive filtering

The adaptive filter is the sum of a position invariant low-pass filter and a position variant band-pass filter [9]. The coefficients of the band-pass filter are controlled by a local control tensor, which is constructed using a local structure tensor and its eigenvectors and eigenvalues. This way the band-pass filter can adapt its coefficients to structures in the data. The result of adaptive filtering is an image with sharper edges and smoothed areas where there is little to no structure, see figure 3.2, column D.

The adaptive filter used is based on Signal Processing for Computer Vision by Granlund et al. [9]. Tunable parameters are the filter size and the standard deviation σ for the averaging filter used when calculating the local structure tensor. It is also possible to choose whether the structure tensor should be averaged and/or normalized.

3.2.4 Median filtering

The median filter sets each pixel to the median value in a surrounding neighbourhood. It is used to remove salt and pepper noise and for edge preserving smoothing. The result of median filtering is a smoothed image where small vessels and structures have been removed, see figure 3.2, column E.

Median filters were used by multiple authors according to [8], and Matlab's existing implementation has been used during the design phase of this work. The tunable parameter is the mask size.

3.2.5 Cylinder filtering

Cylinder filters are used to suppress vessel-like structures in the lung. The filter uses template matching with cylinder templates to find vessel-like structures, where higher correlation between data and templates creates stronger filter responses. The strongest filter response is then subtracted from the original image to suppress vessels. The implemented algorithm uses seven templates, each with a length of seven pixels, expanding in the same directions as the derivative search, see section 3.1.2. The result of cylinder filtering is an image with reduced intensities for cylinder shaped structures, see figure 3.2, column F.

Cylinder filters are used by Chang et al. [10]. Chang et al. define the filter response F_cyl(x) as

F_{cyl}(x) = \max_{\theta} \left( \min_{y \in \Omega_x^{\theta}} I(y) \right),    (3.2)

where \Omega_x^{\theta} is the domain of the cylinder filter centered around coordinate x with orientation θ and I(y) is the pixel intensity at coordinate y. Although the implemented filter is not equivalent to that of Chang et al., it uses the same principle of suppressing vessel-like structures in the data. The tunable parameters are the kernel size and the cylinder radius.

3.2.6 Local contrast enhancement filtering

Local contrast enhancement (LCE) is used to improve the details and the local contrast. For each pixel, the operation subtracts a local mean and divides the result by the local standard deviation,

O(x, y) = \frac{I(x, y) - \mu(x, y)}{\sigma(x, y)},    (3.3)

where O(x, y) is the output image, I(x, y) is the input image, (x, y) are the image coordinates, µ is the local mean and σ is the local standard deviation. µ is given by

\mu(x, y) = I(x, y) * h(x, y),    (3.4)

where h(x, y) is a Gaussian low-pass filter and * denotes convolution. σ is given by

\sigma(x, y) = \sqrt{I^2(x, y) * h(x, y) - \mu^2(x, y)}.    (3.5)

The result of LCE is an image with high contrast between fast and slow varying structures, see figure 3.2, column G.

This LCE filter is used by Messay et al. [11]. Messay et al. use two different window sizes depending on the size of the nodule. The algorithm implemented for this work only uses a single window size regardless of the nodule size. The operation is done in 2D for each slice, and the tunable parameters are the window size and the standard deviation σ of the Gaussian low-pass filter.
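A minimal 2D sketch of equations (3.3)-(3.5), applied to one axial slice at a time; the Gaussian width and the small eps added to avoid division by zero are illustrative choices, not the tuned thesis parameters:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def local_contrast_enhancement(slice_2d, sigma=5.0, eps=1e-6):
    """LCE: subtract the local mean, divide by the local standard deviation."""
    img = slice_2d.astype(np.float32)
    mu = gaussian_filter(img, sigma)                    # local mean, equation (3.4)
    var = gaussian_filter(img * img, sigma) - mu * mu   # E[I^2] - mu^2, equation (3.5) before sqrt
    sigma_local = np.sqrt(np.maximum(var, 0.0))
    return (img - mu) / (sigma_local + eps)             # equation (3.3)
```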

3.2.7 Anti-geometric diffusion

Anti-geometric diffusion uses first and second derivatives along both the x and y axes for each point in the image to calculate a new image

I_{AD} = \frac{I_x^2 I_{xx} + 2 I_x I_y I_{xy} + I_y^2 I_{yy}}{I_x^2 + I_y^2},    (3.6)

where I_x, I_y are the first order derivatives and I_{xx}, I_{xy} and I_{yy} are the second order derivatives of the image. 3 × 3 Sobel operators were used to calculate the derivatives. The result of anti-geometric diffusion is an image only highlighting edges, see figure 3.2, column H.

Anti-geometric diffusion is used by Ye et al. [12]. Ye et al. use it as a pre-step to calculate a geometry feature called the shape index. Different shape index values correspond to different shapes, which is used to identify sphere-like structures in the data. The shape index was excluded from this work due to insufficient information on how to calculate the principal curvature, which is necessary for calculating the shape index. There are no tunable parameters for this algorithm. Other choices of derivative operators are possible, but this was not explored further.

3.3 Segmentation

Listed below are all segmentation algorithms selected for this work. After the image data has been pre-processed, the segmentation algorithms are used to find the nodule.

3.3.1 Region growing

Region growing is a threshold-based segmentation algorithm. It classifies voxels that lie within set thresholds and are connected to a specified seed point as part of the segmented object. Starting from the seed point, it checks whether adjacent voxels in a 6-connective neighbourhood are within specified upper and lower thresholds. Voxels within the thresholds are classified as part of the object, and the procedure is repeated for all newly classified voxels. The algorithm stops when no additional voxels are added. This is the baseline for the threshold-determining segmentation algorithms below.
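A simple queue-based sketch of 6-connected region growing; the (z, y, x) seed tuple and HU-valued input volume are assumptions for illustration:

```python
from collections import deque
import numpy as np

def region_grow(volume_hu, seed, t_lower, t_upper):
    """Grow a 6-connected region from the seed, keeping voxels within [t_lower, t_upper]."""
    mask = np.zeros(volume_hu.shape, dtype=bool)
    seed = tuple(seed)
    if not (t_lower <= volume_hu[seed] <= t_upper):
        return mask
    mask[seed] = True
    queue = deque([seed])
    neighbours = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in neighbours:
            n = (z + dz, y + dy, x + dx)
            if all(0 <= n[i] < volume_hu.shape[i] for i in range(3)) \
                    and not mask[n] and t_lower <= volume_hu[n] <= t_upper:
                mask[n] = True            # newly classified voxel, repeat from here
                queue.append(n)
    return mask
```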

3.3.2 Standard deviation thresholding

Standard deviation thresholding is a threshold-determining algorithm. It uses intensity characteristics in a local neighbourhood of the seed point to calculate thresholds for the region growing algorithm. The thresholds are defined as

T_{lower} = I(x) - a \cdot \sigma_{cube},    (3.7)
T_{upper} = I(x) + a \cdot \sigma_{cube},    (3.8)

where I(x) is the intensity at the seed point x, a is a constant and \sigma_{cube} is the standard deviation of the local neighbourhood surrounding the seed point. Tunable parameters are the window size of the local neighbourhood and the constant a.

3.3.3 Lassen threshold

This algorithm is a threshold-determining algorithm. It uses intensity characteristics of the selected nodule along with general nodule characteristics. The upper threshold is set as the sum of the mean and standard deviation of the seed point and its local neighbourhood. The lower threshold is determined by

T_{lower} = \begin{cases} \frac{P + L}{2}, & \text{if } \frac{P + L}{2} < -600 \\ -600, & \text{otherwise} \end{cases}    (3.9)

where L is the typical nodule intensity and P is the typical background intensity. Both are determined from histogram analysis, where L is set as the highest peak in the histogram generated from the seed point's local neighbourhood, and P is set as the highest peak in the region of interest excluding the region used to calculate L. When determining P, voxel intensities above -200 HU are excluded from the histogram to ignore large vessels and chest walls.

This approach is based on the threshold-determining algorithm used by Lassen et al. [5]. As previously mentioned in section 3.1, Lassen et al. use a user-defined stroke as input to mark the nodule and create a region of interest instead of a seed point. This stroke is also used for specifying the volumes necessary to determine L and P, where L analyses the volume generated by dilating the stroke by one voxel in each direction and P excludes the volume generated by dilating the stroke by four voxels. The tunable parameter for this algorithm is the radius of the small cubic volume used for the calculation of L and P.
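Equations (3.7) and (3.8) can be sketched as follows; the default values of the constant a and the window size are arbitrary placeholders, not the tuned values from the thesis:

```python
import numpy as np

def stdev_thresholds(volume_hu, seed, a=2.5, window=7):
    """Lower/upper region growing thresholds from a cubic neighbourhood of the seed."""
    half = window // 2
    z, y, x = seed
    cube = volume_hu[max(z - half, 0):z + half + 1,
                     max(y - half, 0):y + half + 1,
                     max(x - half, 0):x + half + 1]
    sigma_cube = float(cube.std())
    centre = float(volume_hu[z, y, x])
    return centre - a * sigma_cube, centre + a * sigma_cube   # equations (3.7) and (3.8)
```

The returned pair can be passed directly to the region growing sketch above.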

3.3.4 Ensemble segmentation

Ensemble segmentation combines multiple runs of region growing. Starting from a seed point, region growing is performed with upper and lower thresholds set as a multiple of the standard deviation of a small cubic region around the seed point, which is the same as standard deviation thresholding, see section 3.3.2. The initial segmentation region is eroded to ensure that the region lies inside the nodule. Ten new seed points are selected at random from inside this region and the same region growing procedure is applied to each of these seed points, creating ten new regions. The intersection of all of these defines the nodule core.

From the nodule core, eight regions are specified using the sagittal, coronal and axial planes all intersecting at the nodule core center. A random seed point is selected in each region. These eight seed points, together with the nodule core center and a randomly selected seed point within the nodule core, form ten parent seeds. For each parent seed point, eight child seed points per slice are selected in a three slice neighbourhood around the parent seed point, giving a total of 24 child seed points. For each of these seed points, the region growing procedure is applied, creating 24 child regions. A child region is included in the parent region if it satisfies the following conditions:

if mean_child < mean_parent - 3 * stdev_parent then
    exclude child region;
else if intersection_child,parent / area_child > 0.2 then
    include child region;
else
    exclude child region;
end

Algorithm 1: Conditions for keeping a child region.

The parent and the included child regions together form a parent tumor segmentation. All ten parent tumor segmentations are added together, and a voxel is assigned to the final nodule segmentation if at least half of the voxels in its neighborhood were included in at least half of the parent masks.

Ensemble segmentation is used by Gu et al. [13]. Gu et al. use a different region growing algorithm called Click & Grow, which differs somewhat from the one implemented in this thesis. Not every aspect of their Click & Grow implementation is presented in the paper, but it appears to be similar to the previously mentioned standard deviation thresholding algorithm, which is therefore substituted for it. Gu et al. also use an additional condition for including or excluding child regions called the roundness feature. This feature is only mentioned and not explained, and is therefore excluded. Tunable parameters for this algorithm are the same as for standard deviation thresholding, see section 3.3.2.
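The child-region test of Algorithm 1 could look roughly as follows; computing the means and standard deviation over the HU volume under the binary masks is an assumption about details the description leaves open:

```python
import numpy as np

def keep_child_region(child_mask, parent_mask, volume_hu, overlap_ratio=0.2):
    """Decide whether a child region is merged into its parent region (Algorithm 1)."""
    if not child_mask.any():
        return False
    mean_child = float(volume_hu[child_mask].mean())
    mean_parent = float(volume_hu[parent_mask].mean())
    stdev_parent = float(volume_hu[parent_mask].std())
    if mean_child < mean_parent - 3.0 * stdev_parent:
        return False                                     # child region too dark: exclude
    intersection = np.count_nonzero(child_mask & parent_mask)
    area_child = max(np.count_nonzero(child_mask), 1)
    return intersection / area_child > overlap_ratio     # sufficient overlap: include
```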

3.3.5 Level set

Figure 3.3: Visualization of level set contours in 2D [14]. The intersection of the surface φ with the zero level set creates the contour.

Level set is a segmentation algorithm that estimates the contour of an evolving surface φ(x(t), t) over time t [14]. The contour is defined as all points intersecting the plane where the surface has no height, i.e. the zero level set φ = 0, see figure 3.3. Any point x changes over time with φ, and φ can be any function as long as its zero level set gives a contour. A surface height map is calculated as the distance d to the surface,

\phi(x, t = 0) = \pm d,    (3.10)

where d is positive outside the contour and negative inside it. Given an initial φ_{t_0}, φ can be calculated for any t using the motion equation, obtained by differentiating φ(x(t), t) with respect to t,

\frac{d\phi(x(t), t)}{dt} = \frac{\partial\phi(x(t), t)}{\partial x(t)} \cdot \frac{\partial x}{\partial t} + \frac{\partial\phi}{\partial t}.    (3.11)

Here, the gradient \frac{\partial\phi(x(t), t)}{\partial x(t)} = \nabla\phi, and the speed \frac{\partial x}{\partial t} = F(x(t))\,\mathbf{n}, where F is a force and \mathbf{n} = \frac{\nabla\phi}{|\nabla\phi|} is the normalized gradient. With this, the motion equation (3.11) can be rewritten as

\frac{\partial\phi}{\partial t} + \nabla\phi \cdot \frac{\partial x}{\partial t} = \frac{\partial\phi}{\partial t} + \nabla\phi \cdot F\mathbf{n} = \frac{\partial\phi}{\partial t} + F|\nabla\phi|.    (3.12)

To adjust the shape and the contour to the object, the force F should be derived from the image data. This is achieved by using gradients from the image. The contour should stop at edges of the object (F ≈ 0) and expand inside the object (F > 0), which corresponds to the inverse gradient information. Also, with φ comes the possibility to compute the surface curvature κ, which is useful when controlling the surface smoothness,

\kappa = \nabla \cdot \frac{\nabla\phi}{|\nabla\phi|}.    (3.13)

The implementation used is based on Wang et al. [15, 16]. Presented in their papers are segmentation results for much larger objects such as a brain, an aorta and a liver, and the implementation is not adjusted specifically for nodule segmentation. The algorithm is initialized using a sphere as the initial φ_{t_0} together with a vector for intensity analysis. This intensity analysis is used for an additional stop criterion. Tunable parameters are the sphere radius, κ and the vector used for initialization.

3.3.6 Otsu's method

Otsu's method is a threshold-determining algorithm. It divides all pixels in an image into a specified number of classes using thresholds, see figure 3.4. The thresholds are determined by minimizing the variance within each class or, equivalently, maximizing the variance between classes. To minimize the effect of high intensity chest walls when determining the thresholds, an upper pixel intensity limit is set, excluding pixels above the limit when calculating the thresholds. When using more than two classes, it is necessary to specify which classes should be considered as selected classes. Their corresponding thresholds are then used in the region growing algorithm.

Otsu's method is a common method for automatic thresholding [17]. Tunable parameters are the number of classes to find thresholds for, which classes to include in the segmentation and the upper limit for chest wall exclusion.

Figure 3.4: Examples of Otsu's method for determining thresholds. (A) Original image, (B) Otsu's method with two classes, (C) Otsu's method with four classes.
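A rough sketch of how a two-class Otsu threshold with chest wall exclusion could feed the region growing step, using scikit-image's threshold_otsu; the wall limit of 300 HU and the choice to return the wall limit as the upper threshold are illustrative assumptions, not the tuned thesis parameters:

```python
import numpy as np
from skimage.filters import threshold_otsu

def otsu_region_growing_thresholds(roi_hu, wall_limit_hu=300.0):
    """Two-class Otsu threshold on the region of interest, ignoring bright chest wall voxels."""
    samples = roi_hu[roi_hu <= wall_limit_hu]     # exclude voxels above the wall limit
    t = threshold_otsu(samples)                   # threshold maximizing between-class variance
    return float(t), float(wall_limit_hu)         # lower / upper thresholds for region growing
```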

3.4 Post-processing

All region growing based algorithms are susceptible to leakage, since they only consider intensity based characteristics and do not take the shape of the segmented object into account. To minimize this leakage effect, all region growing based segmentation algorithms are followed by some morphological operations as post-processing. This includes three steps:

1. Erode the object to remove small structures such as vessels.
2. Remove objects not connected to the core object.
3. Dilate the object to return it to its original size.

Morphological operations are used by Messay et al. [11]. Both erosion and dilation are performed multiple times with 6-connective neighbourhoods.
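A minimal sketch of the three post-processing steps with SciPy's binary morphology; identifying the core object as the connected component containing the seed point and using two erosion/dilation iterations are assumptions made for illustration:

```python
import numpy as np
from scipy.ndimage import binary_erosion, binary_dilation, generate_binary_structure, label

def morphological_cleanup(mask, seed, iterations=2):
    """Erode, keep the component containing the seed, then dilate back (steps 1-3 above)."""
    struct = generate_binary_structure(3, 1)          # 6-connected neighbourhood in 3D
    eroded = binary_erosion(mask, structure=struct, iterations=iterations)
    labelled, _ = label(eroded, structure=struct)     # connected components after erosion
    core_label = labelled[tuple(seed)]
    if core_label == 0:                               # seed region eroded away; keep the input mask
        return mask
    core = labelled == core_label
    return binary_dilation(core, structure=struct, iterations=iterations)
```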

Chapter 4
Method

This chapter describes the methods used to achieve the results of the thesis. First, an overview of the evaluated solution layout is provided. After that, the algorithm tuning process and the automated test descriptions follow.

4.1 Solution layout

According to the reviews by Dhara et al. and Lee et al. [7, 8], most automatic analysis solutions include a set of common operations: pre-processing, lung field segmentation, nodule detection, false positive reduction, nodule classification and volumetric analysis.

- Pre-processing enhances the contrast between foreground and background.
- Lung field segmentation locates the lung region in the scans.
- Nodule detection finds a set of nodule candidates.
- False positive reduction removes candidates not considered to be nodules.
- Nodule classification determines if a nodule is benign or malignant.
- Volumetric analysis estimates the volume and growth rate of the nodule.

An adjusted solution layout is used in this work, see figure 4.1. The solution is semi-automatic, which lets the user specify a seed point inside the nodule where the region of interest should be centered. This enables the algorithms to work only within a local neighbourhood of the nodule. Lung field segmentation will not be included since it is used for separating wall-attached nodules from surrounding wall tissue, which is an

optional task for the work. False positive reduction will not be included since the seed point is assumed to be inside a nodule. Nodule classification and volumetric analysis will not be included, except for the volume estimate, because they lie beyond the scope of this thesis.

Figure 4.1: Layout of the final solution for the nodule analysis.

In summary, the solution contains the following steps:

1. A seed point is provided by the user.
2. A region of interest is defined around the seed point.
3. The image data is resampled to consist of isotropic voxels.
4. A pre-processing algorithm is applied to the region of interest.
5. A segmentation algorithm is applied to the processed region of interest.
6. Morphological operations are performed on the segmented volume.
7. Volumetric analysis is performed on the final volume.

4.1.1 Seed point

As a first step in the solution, a seed point is provided by the user. Since the solution is semi-automatic, it includes end user interaction. For the solution to work, the seed point is assumed to be close to the center of the nodule.
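Steps 3 and 7 are small, self-contained operations; a minimal sketch of both, assuming a (z, y, x) axis order and voxel spacing in millimetres, is given below (illustrative only, not the PACS implementation). The remaining steps are described in the following subsections.

```python
import numpy as np
from scipy.ndimage import zoom

def resample_isotropic(volume, spacing_mm):
    """Resample so the slice spacing equals the in-plane pixel spacing (step 3)."""
    spacing = np.asarray(spacing_mm, dtype=float)     # voxel size along (z, y, x) in mm
    target = spacing[1]                               # in-plane pixel spacing as target size
    factors = spacing / target                        # > 1 along z when slices are thicker
    iso = zoom(volume.astype(np.float32), factors, order=1)   # linear interpolation
    return iso, np.full(3, target)

def nodule_volume_mm3(mask, spacing_mm):
    """Volumetric analysis (step 7): voxel count times voxel volume."""
    return np.count_nonzero(mask) * float(np.prod(spacing_mm))
```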

4.1.2 Region of interest

Working on a small region of interest has two major advantages. Firstly, it reduces the workload of the algorithms. Instead of applying the algorithms to every pixel of every slice, they can be applied to a small fraction of the voxels. Secondly, it places an upper bound on the segmentation error, as the segmented region is limited by the region of interest. For instance, errors caused by leakage into surrounding tissue are limited by the region-of-interest boundaries.

4.1.3 Isotropic voxels

CT scans usually have different voxel resolutions in different image dimensions. The pixel spacing in a data set is equal for both the x- and y-directions, but differs from the distance between slices. The slice thickness is usually larger than the pixel distance. Due to the anisotropy of the voxels, 3D operations are harder to implement. To simplify the use of 3D operations on the CT scans, isotropic voxels are preferable. The image data is therefore resampled through linear interpolation to get a slice thickness equal to the pixel spacing, ensuring isotropic voxels.

4.1.4 Pre-processing

Prior to the segmentation, the image data is pre-processed. There are multiple reasons for this:

- To make diffuse nodules more distinct.
- To suppress blood vessels connected to the nodule.
- To enhance nodule edges.

Diffuse subsolid nodules usually have intensity inhomogeneities in their cores. These irregularities introduce problems in intensity- or derivative-based segmentation algorithms. To make the nodule core more homogeneous, it can be smoothed using Gaussian filtering and median filtering. Blood vessels connected to the nodules are the main source of segmentation leakage. The HU intensity values of vessels are similar to those of the nodules, which causes region growing based algorithms to continuously expand outside the nodule. One way to reduce this vessel leakage is to suppress vessel-like structures using median filtering and cylinder filtering. Nodules usually show gradually decreasing HU intensities close to their borders, which complicates the process of estimating where

the actual contour lies. To simplify this, one wants to enhance the edges to get a clearer nodule border, using edge filtering, adaptive filtering, local contrast enhancement and anti-geometric diffusion.

4.1.5 Segmentation

Segmentation is the main task of determining which voxels belong to the nodule and which voxels do not. The process uses the pre-processed image data as input and returns a 3D region of the segmented volume. Most algorithms today use intensity based segmentation. To contrast that, a method using surface and shape characteristics is also included in the evaluation.

4.1.6 Post-processing

Post-processing is performed on the segmented volume. The purpose of this process is to fill holes in the nodule body and remove blood vessels and other surrounding structures connected to the nodule. The result of this step is the final segmentation.

4.1.7 Volumetric analysis

As an additional step in the solution, the volume is estimated by counting the segmented voxels and scaling with the pixel spacing and slice thickness.

4.2 Tuning process

Before the algorithm evaluation, all algorithms underwent a tuning process. For the tuning process, four nodules were selected. The nodules were chosen to give a good representation of different nodule characteristics, and to show some of the challenging aspects of the segmentation process. The tuning was done as an empirical and qualitative study, and the procedure was as follows.

1. Tunable parameters were identified.
2. Multiple parameter setups were created.
3. All setups for each pre-processing algorithm were evaluated together with all segmentation algorithms for all nodules. Equivalently, the setups for each segmentation algorithm were evaluated together with all pre-processing algorithms.

4. The setup with the highest mean Jaccard index over all four nodules was chosen as the optimal setup for that specific algorithm.
5. Algorithm combinations performing under 0.2 in Jaccard index for any of the four nodules were excluded from the evaluation.

As a first step, tunable parameters were identified for each algorithm. Combinations of different parameter values formed multiple parameter setups. Each of these setups was then evaluated separately for all nodules. Parameter value ranges were chosen within reasonable limits to reduce the number of runs performed per algorithm. The performance of each setup was measured using the Jaccard index. For the pre-processing algorithms, the parameter setup that resulted in the highest mean Jaccard index over all four nodules, in combination with any segmentation algorithm, was selected. Equally, the best performing setup for each segmentation algorithm, in combination with any pre-processing algorithm, was selected.

In addition to selecting parameter setups, the tuning process indicates whether a pre-processing/segmentation algorithm combination is compatible or not. A combination with a Jaccard index less than 0.2 for any nodule was excluded from the final evaluation. The post-processing algorithm was not included in the tuning process. It had been tuned during the implementation phase of the pre-processing and segmentation algorithms.

4.3 Pipeline evaluation process

For the pipeline evaluation process, the selected parameters were used together with a larger set of nodules. The performance of each algorithm combination for each nodule was measured using the Jaccard index and the computation time. The mean and standard deviation of both measurements were used when comparing different algorithm combinations against each other. The evaluation process was run three times on the same data set. For the first run, the seed point for each nodule was selected as the nodule center. For the two following runs, noise was added to the seed point location to simulate a seed point chosen a few pixels off center.

Chapter 5
Result

This chapter presents the results achieved with the methods presented in the previous chapter. First, the database used for both the tuning and the evaluation is presented, followed by the results from both the tuning process and the pipeline evaluation. Abbreviations and section references for all algorithms are presented in table 5.1.

Table 5.1: List of abbreviations and section references.

Algorithm                    Abbreviation  Section
No filter                    NF            -
Gaussian filtering           GF            3.2.1
Edge filtering               EF            3.2.2
Adaptive filtering           AF            3.2.3
Median filtering             MF            3.2.4
Cylinder filtering           CF            3.2.5
Local contrast enhancement   LCE           3.2.6
Anti-geometric diffusion     AGD           3.2.7
Standard deviation           Stdev         3.3.2
Lassen threshold filtering   Lassen        3.3.3
Ensemble segmentation        Ensemble      3.3.4
Level set                    Level-set     3.3.5
Otsu's method                Otsu          3.3.6

5.1 Test data

The test data have been selected from the Lung Image Database Consortium (LIDC) and Image Database Resource Initiative (IDRI). LIDC-IDRI is a public database of CT scans [6]. It includes 1018 cases with DICOM images from thoracic CT scans. Each case includes at least one nodule, and provides ground truth borders for all of

them. The ground truth is produced through an annotation process performed by four radiologists, where each radiologist individually does readings of the data. The radiologists' markings do not always correspond, which results in each nodule having a set of one to four slightly different ground truth borders.

The database was used for the evaluation of the algorithms. The DICOM images were used as image input to the system, and the ground truth markings were used both to extract a seed point and to calculate the ground truth volume. This volume, together with the results from the segmentation, was used to calculate the Jaccard index, see section 2.5.

For the tuning process, a set of four nodules, with one reading each from a single patient case, was used, see figure 5.1. They were chosen because of their different characteristics. Nodule 1 is a small, solid and spherical nodule close to but not connected to the chest wall. Nodule 2 is a large solid nodule with large vessels connected to it. Nodule 3 is a large solid nodule with small vessels connected to it. Nodule 4 is a small subsolid nodule close to, but not connected to, the chest wall.

For the pipeline evaluation process, a larger set of 13 patient cases was used. These cases are a subset of the cases used by Lassen et al. [5], which all include at least one subsolid nodule. Each case was inspected before the evaluation to exclude nodules that did not satisfy the conditions of this study, i.e. nodules connected to a chest wall. Each scan includes between one and twenty nodules, with one to four markings per nodule. See appendix A for a specification of the cases used in the evaluation data set.

5.2 Tuning

Presented in table 5.2 are the results from the tuning process. Algorithm combinations selected for the pipeline evaluation are indicated with X; the rows are the segmentation algorithms and the columns are the pre-processing algorithms.

Table 5.2: Results for the tuning process. (Columns: NF, GF, EF, AF, MF, CF, LCE, AGD.)
Stdev: X X
Lassen: X X X
Ensemble: X X X X X
Level set: X X X X X X X
Otsu:

Figure 5.1: Axial slices of the selected tuning nodules: (A) Nodule 1, small separated nodule. (B) Nodule 2, large nodule with vessels. (C) Nodule 3, large nodule with small vessels. (D) Nodule 4, small subsolid nodule.

5.3 Pipeline evaluation

Presented in tables 5.3, 5.4 and 5.5 are the means of all Jaccard index results for the selected algorithm combinations from the pipeline evaluation process. Table 5.3 shows the results from the first run, where the seed point for each nodule was selected as the nodule center. Tables 5.4 and 5.5 show the results from the second and third runs, where the seed point was chosen a few pixels off center.

Table 5.3: Results for the first run with the centered seed point. (Columns: Stdev, Lassen, Ensemble, Level set; rows: pre-processing algorithms.)
NF:
GF: 0.404, 0.361, 0.397
EF: 0.185, 0.275
AF:
MF: 0.379, 0.405, 0.401
CF: 0.152, 0.311, 0.402
LCE: 0.347

Table 5.4: Results for the second run with the seed point slightly off center. (Columns: NF, GF, EF, AF, MF, CF, LCE.)
Stdev: 0.138, 0.185
Lassen: 0.250, 0.190, 0.234
Ensemble:
Level set: 0.337, 0.315, 0.316, 0.310, 0.312, 0.332, 0.254

Table 5.5: Results for the third run with the seed point slightly off center. (Columns: NF, GF, EF, AF, MF, CF, LCE.)
Stdev: 0.132, 0.176
Lassen: 0.244, 0.147, 0.226
Ensemble:
Level set: 0.327, 0.322, 0.289, 0.316, 0.319, 0.328, 0.259

Chapter 6
Discussion

This chapter presents thoughts and reflections on the results and the methods used to achieve them. All segmentation results presented in this chapter's figures are results from the No filter/level set pipeline.

6.1 Pipeline

6.1.1 Region of interest

After the implementation of both region of interest algorithms, I noticed that for a lot of nodule cases the region of interest did not include the whole nodule when using the derivative search. Nodules, especially large and subsolid nodules, usually include multiple local minimum and maximum points inside their cores, which would stop the derivative search before it had reached the borders of the nodule. Because of its sensitivity to local minima in the data, I considered the algorithm too unreliable to guarantee that the whole nodule was always inside the region of interest, and since region of interest selection is a small and less important part of the pipeline compared to pre-processing and segmentation, the fixed size algorithm was chosen for its reliability.

6.1.2 Pre-processing

Cylinder filtering has the best results out of all the pre-processing filters when combining all three runs, but its results are only slightly higher than those of Gaussian, adaptive and median filtering. Also, its results were slightly lower than using no filter at all. To use cylinder filtering to its full potential, the templates would have to come in multiple sizes and additional orientations. This would be very computationally expensive, and not feasible for this

work. An alternative could be to find vessel-like structures using gradients [18] and the shape index [12] instead of template matching.

Both local contrast enhancement and anti-geometric diffusion have quite large drawbacks. While enhancing edges and structures, their results no longer represent HU values. Anti-geometric diffusion only takes derivatives into account, which resulted in static areas, such as nodule cores and background, having identical output. The same applies to the local contrast enhancement filter, but since the Gaussian filter used to remove the local mean was quite big, only large nodules were affected. Also, Ye et al. [12] used the anti-geometric diffusion algorithm as the first part of their pre-processing. The second part, the shape index, was not included in this work. The fact that anti-geometric diffusion was not intended to pre-process the data exclusively could explain its low performance results.

6.1.3 Segmentation

Ensemble segmentation was excluded due to its poor computational time. A single segmentation using this algorithm occasionally took a few minutes per nodule, which was not feasible for this work. Its computational time is very susceptible to leakage through vessels and other structures connected to the nodules, since it performs the region growing algorithm multiple times. Its performance results were similar to both standard deviation thresholding and Lassen thresholding for the tuning process, and it would have been a feasible algorithm to evaluate with a faster region growing algorithm or better vessel suppression.

An interesting alternative approach to the segmentation part of the pipeline would be to segment the blood vessels and other structures in the lung and exclude that content, instead of trying to segment the nodules directly. The level set algorithm shows potential for this operation in the study by Wang et al. [19] and the review done by Kirbas et al. [20]. It would also have been interesting to explore machine learning systems and their potential as segmentation algorithms. There is plenty of research done in the field [21-23] and there are multiple large databases to use as training data, so the strategy would have been viable. This was excluded from the work due to time limitations.

Figure 6.1: Example of different complexity for subsolid nodules: (a) the nodule used for tuning; (b) example of a larger, more diffuse nodule.

6.1.4 Post-processing

Very little time was spent on evaluating post-processing algorithms in comparison to both the pre-processing and the segmentation. This reflects the small focus on this area in the two reviews [7, 8] used.

6.2 Tuning process

When identifying nodules that were hard to segment, I found that diffuse nodules in general were both larger and more inhomogeneous than the ones used for the tuning process, see figure 6.1. This led me to the conclusion that I should have used better representatives for subsolid nodules in the tuning process. It would also have been beneficial to use a larger test data set, so that more nodule characteristics could have been more accurately represented. It would then have been easier to tune the algorithms to handle a larger variety of nodules, which most likely would have increased the performance. With only a single seed point as input, no additional information such as size, sphericity, orientation or texture could be provided by the user. All of these features, or just rough estimates of them, are useful for the tuning of the algorithms.

During my implementation of the selected pipeline in Sectra's PACS, I used visual inspection to compare its results to the results from the evaluation. For some nodules, the difference was substantial. The inspection led to the conclusion that two data dimensions were flipped between the two systems, which impacted my parameter setting. Manually adjusting the parameters to fit the nodules provided a significantly better result.

Figure 6.2: Example of increased performance using dynamic parameters: (a) static parameters; (b) dynamic parameters. Green = ground truth, blue = segmentation.

Figure 6.3: Example of decreased performance using dynamic parameters: (a) static parameters; (b) dynamic parameters. Green = ground truth, blue = segmentation.

The tuning process always determined static parameters that were the same for all nodules. The inspection indicated that dynamic parameters, adjusted for each nodule, could improve the performance. To try this idea, I modified my derivative search, see section 3.1.2, to include conditions based on HU characteristics. These nodule expansion estimates could then be used for setting parameters such as the initial radius and the initialization vector. This procedure was tested on a handful of the nodules and resulted in both better and worse performance, see figures 6.2 and 6.3 for examples.

6.3 Pipeline evaluation process

The best performing pipelines, comparing all the runs separately, were:

- No filter/level set
- Cylinder filtering/level set
- Adaptive filtering/level set
- Median filtering/level set

No filter had the highest total performance over all three runs. Cylinder filtering had a slightly lower variance in Jaccard index than the other pipelines, but a much higher computational cost. Adaptive filtering had the highest performance when no noise was included, but also the highest Jaccard index variance of all selected pipelines. Median filtering never out-performed all other pipelines in a single run, but performed just as well in general. Even though the mean results for the highest performing pipelines were quite even, the performance per nodule sometimes fluctuated. This indicates that selecting the pre-processing algorithm for each nodule would increase performance.

Comparing table 5.3 with tables 5.4 and 5.5 gives an indication that the evaluated pipelines are not very robust. Noise added as small shifts in seed point position resulted in a notable decrease in performance. The decrease was more apparent for small nodules, since the shift represented a larger portion of the nodule radius. I think that the same idea presented in section 6.2, meaning estimating some nodule characteristics as part of determining the parameters for the pipeline, would make it more robust.

The performance of all pipelines was pretty low. Even the best performing pipelines produced low Jaccard indices for 15-20% of the nodules, which is very unreliable. Presented below are some examples of difficult nodules with low segmentation performance.

Nodules close to chest walls often included intensity leakage. This blended the nodule and the wall together, see figure 6.4. This often caused the segmentation to include wall tissue outside the nodule, which resulted in really low test results.

Really small nodules had low test results in general, see figure 6.5. Since the parameters for the segmentation were static for all nodules, the initial φ_{t_0} was sometimes larger than the nodule that the algorithm was meant to segment. Since φ_{t_0} is supposed to start inside the object and then expand, the segmentation operated under conditions it was not constructed for. I think dynamic parameters would have helped a lot in this case. Another problem with small nodules was that the level set function did not have time to adapt to the object before hitting its stop conditions. This led to the segmentation

of small nodules always being close to spherical, due to the sphere used as the initial level set φ_{t_0}. I think this is a drawback of level set in general.

Figure 6.4: Example of intensity leakage for small nodules close to chest walls. (A) Ground truth for the nodule. (B) Segmentation result for the nodule using level set.

Figure 6.5: Example of low performance for a small nodule. Green = ground truth, blue = segmentation.


More information

Morphological Image Processing

Morphological Image Processing Morphological Image Processing Binary image processing In binary images, we conventionally take background as black (0) and foreground objects as white (1 or 255) Morphology Figure 4.1 objects on a conveyor

More information

Chapter 3: Intensity Transformations and Spatial Filtering

Chapter 3: Intensity Transformations and Spatial Filtering Chapter 3: Intensity Transformations and Spatial Filtering 3.1 Background 3.2 Some basic intensity transformation functions 3.3 Histogram processing 3.4 Fundamentals of spatial filtering 3.5 Smoothing

More information

EXAM SOLUTIONS. Image Processing and Computer Vision Course 2D1421 Monday, 13 th of March 2006,

EXAM SOLUTIONS. Image Processing and Computer Vision Course 2D1421 Monday, 13 th of March 2006, School of Computer Science and Communication, KTH Danica Kragic EXAM SOLUTIONS Image Processing and Computer Vision Course 2D1421 Monday, 13 th of March 2006, 14.00 19.00 Grade table 0-25 U 26-35 3 36-45

More information

Automated Lesion Detection Methods for 2D and 3D Chest X-Ray Images

Automated Lesion Detection Methods for 2D and 3D Chest X-Ray Images Automated Lesion Detection Methods for 2D and 3D Chest X-Ray Images Takeshi Hara, Hiroshi Fujita,Yongbum Lee, Hitoshi Yoshimura* and Shoji Kido** Department of Information Science, Gifu University Yanagido

More information

Biomedical Image Analysis. Mathematical Morphology

Biomedical Image Analysis. Mathematical Morphology Biomedical Image Analysis Mathematical Morphology Contents: Foundation of Mathematical Morphology Structuring Elements Applications BMIA 15 V. Roth & P. Cattin 265 Foundations of Mathematical Morphology

More information

Modern Medical Image Analysis 8DC00 Exam

Modern Medical Image Analysis 8DC00 Exam Parts of answers are inside square brackets [... ]. These parts are optional. Answers can be written in Dutch or in English, as you prefer. You can use drawings and diagrams to support your textual answers.

More information

Available Online through

Available Online through Available Online through www.ijptonline.com ISSN: 0975-766X CODEN: IJPTFI Research Article ANALYSIS OF CT LIVER IMAGES FOR TUMOUR DIAGNOSIS BASED ON CLUSTERING TECHNIQUE AND TEXTURE FEATURES M.Krithika

More information

A Study of Medical Image Analysis System

A Study of Medical Image Analysis System Indian Journal of Science and Technology, Vol 8(25), DOI: 10.17485/ijst/2015/v8i25/80492, October 2015 ISSN (Print) : 0974-6846 ISSN (Online) : 0974-5645 A Study of Medical Image Analysis System Kim Tae-Eun

More information

Biomedical Image Analysis. Point, Edge and Line Detection

Biomedical Image Analysis. Point, Edge and Line Detection Biomedical Image Analysis Point, Edge and Line Detection Contents: Point and line detection Advanced edge detection: Canny Local/regional edge processing Global processing: Hough transform BMIA 15 V. Roth

More information

ECG782: Multidimensional Digital Signal Processing

ECG782: Multidimensional Digital Signal Processing Professor Brendan Morris, SEB 3216, brendan.morris@unlv.edu ECG782: Multidimensional Digital Signal Processing Spring 2014 TTh 14:30-15:45 CBC C313 Lecture 10 Segmentation 14/02/27 http://www.ee.unlv.edu/~b1morris/ecg782/

More information

UNIVERSITY OF OSLO. Faculty of Mathematics and Natural Sciences

UNIVERSITY OF OSLO. Faculty of Mathematics and Natural Sciences UNIVERSITY OF OSLO Faculty of Mathematics and Natural Sciences Exam: INF 4300 / INF 9305 Digital image analysis Date: Thursday December 21, 2017 Exam hours: 09.00-13.00 (4 hours) Number of pages: 8 pages

More information

Chapter - 2 : IMAGE ENHANCEMENT

Chapter - 2 : IMAGE ENHANCEMENT Chapter - : IMAGE ENHANCEMENT The principal objective of enhancement technique is to process a given image so that the result is more suitable than the original image for a specific application Image Enhancement

More information

Lecture 7: Most Common Edge Detectors

Lecture 7: Most Common Edge Detectors #1 Lecture 7: Most Common Edge Detectors Saad Bedros sbedros@umn.edu Edge Detection Goal: Identify sudden changes (discontinuities) in an image Intuitively, most semantic and shape information from the

More information

SIFT - scale-invariant feature transform Konrad Schindler

SIFT - scale-invariant feature transform Konrad Schindler SIFT - scale-invariant feature transform Konrad Schindler Institute of Geodesy and Photogrammetry Invariant interest points Goal match points between images with very different scale, orientation, projective

More information

Interpolation is a basic tool used extensively in tasks such as zooming, shrinking, rotating, and geometric corrections.

Interpolation is a basic tool used extensively in tasks such as zooming, shrinking, rotating, and geometric corrections. Image Interpolation 48 Interpolation is a basic tool used extensively in tasks such as zooming, shrinking, rotating, and geometric corrections. Fundamentally, interpolation is the process of using known

More information

C E N T E R A T H O U S T O N S C H O O L of H E A L T H I N F O R M A T I O N S C I E N C E S. Image Operations II

C E N T E R A T H O U S T O N S C H O O L of H E A L T H I N F O R M A T I O N S C I E N C E S. Image Operations II T H E U N I V E R S I T Y of T E X A S H E A L T H S C I E N C E C E N T E R A T H O U S T O N S C H O O L of H E A L T H I N F O R M A T I O N S C I E N C E S Image Operations II For students of HI 5323

More information

Computer Vision I. Announcements. Fourier Tansform. Efficient Implementation. Edge and Corner Detection. CSE252A Lecture 13.

Computer Vision I. Announcements. Fourier Tansform. Efficient Implementation. Edge and Corner Detection. CSE252A Lecture 13. Announcements Edge and Corner Detection HW3 assigned CSE252A Lecture 13 Efficient Implementation Both, the Box filter and the Gaussian filter are separable: First convolve each row of input image I with

More information

Image Segmentation and Registration

Image Segmentation and Registration Image Segmentation and Registration Dr. Christine Tanner (tanner@vision.ee.ethz.ch) Computer Vision Laboratory, ETH Zürich Dr. Verena Kaynig, Machine Learning Laboratory, ETH Zürich Outline Segmentation

More information

Babu Madhav Institute of Information Technology Years Integrated M.Sc.(IT)(Semester - 7)

Babu Madhav Institute of Information Technology Years Integrated M.Sc.(IT)(Semester - 7) 5 Years Integrated M.Sc.(IT)(Semester - 7) 060010707 Digital Image Processing UNIT 1 Introduction to Image Processing Q: 1 Answer in short. 1. What is digital image? 1. Define pixel or picture element?

More information

Interactive Differential Segmentation of the Prostate using Graph-Cuts with a Feature Detector-based Boundary Term

Interactive Differential Segmentation of the Prostate using Graph-Cuts with a Feature Detector-based Boundary Term MOSCHIDIS, GRAHAM: GRAPH-CUTS WITH FEATURE DETECTORS 1 Interactive Differential Segmentation of the Prostate using Graph-Cuts with a Feature Detector-based Boundary Term Emmanouil Moschidis emmanouil.moschidis@postgrad.manchester.ac.uk

More information

MR IMAGE SEGMENTATION

MR IMAGE SEGMENTATION MR IMAGE SEGMENTATION Prepared by : Monil Shah What is Segmentation? Partitioning a region or regions of interest in images such that each region corresponds to one or more anatomic structures Classification

More information

Spiral CT. Protocol Optimization & Quality Assurance. Ge Wang, Ph.D. Department of Radiology University of Iowa Iowa City, Iowa 52242, USA

Spiral CT. Protocol Optimization & Quality Assurance. Ge Wang, Ph.D. Department of Radiology University of Iowa Iowa City, Iowa 52242, USA Spiral CT Protocol Optimization & Quality Assurance Ge Wang, Ph.D. Department of Radiology University of Iowa Iowa City, Iowa 52242, USA Spiral CT Protocol Optimization & Quality Assurance Protocol optimization

More information

Global Journal of Engineering Science and Research Management

Global Journal of Engineering Science and Research Management ADVANCED K-MEANS ALGORITHM FOR BRAIN TUMOR DETECTION USING NAIVE BAYES CLASSIFIER Veena Bai K*, Dr. Niharika Kumar * MTech CSE, Department of Computer Science and Engineering, B.N.M. Institute of Technology,

More information

Image segmentation. Stefano Ferrari. Università degli Studi di Milano Methods for Image Processing. academic year

Image segmentation. Stefano Ferrari. Università degli Studi di Milano Methods for Image Processing. academic year Image segmentation Stefano Ferrari Università degli Studi di Milano stefano.ferrari@unimi.it Methods for Image Processing academic year 2017 2018 Segmentation by thresholding Thresholding is the simplest

More information

Image features. Image Features

Image features. Image Features Image features Image features, such as edges and interest points, provide rich information on the image content. They correspond to local regions in the image and are fundamental in many applications in

More information

EE368 Project Report CD Cover Recognition Using Modified SIFT Algorithm

EE368 Project Report CD Cover Recognition Using Modified SIFT Algorithm EE368 Project Report CD Cover Recognition Using Modified SIFT Algorithm Group 1: Mina A. Makar Stanford University mamakar@stanford.edu Abstract In this report, we investigate the application of the Scale-Invariant

More information

Topic 4 Image Segmentation

Topic 4 Image Segmentation Topic 4 Image Segmentation What is Segmentation? Why? Segmentation important contributing factor to the success of an automated image analysis process What is Image Analysis: Processing images to derive

More information

09/11/2017. Morphological image processing. Morphological image processing. Morphological image processing. Morphological image processing (binary)

09/11/2017. Morphological image processing. Morphological image processing. Morphological image processing. Morphological image processing (binary) Towards image analysis Goal: Describe the contents of an image, distinguishing meaningful information from irrelevant one. Perform suitable transformations of images so as to make explicit particular shape

More information

Classification of Hyperspectral Breast Images for Cancer Detection. Sander Parawira December 4, 2009

Classification of Hyperspectral Breast Images for Cancer Detection. Sander Parawira December 4, 2009 1 Introduction Classification of Hyperspectral Breast Images for Cancer Detection Sander Parawira December 4, 2009 parawira@stanford.edu In 2009 approximately one out of eight women has breast cancer.

More information

Lecture 6: Edge Detection

Lecture 6: Edge Detection #1 Lecture 6: Edge Detection Saad J Bedros sbedros@umn.edu Review From Last Lecture Options for Image Representation Introduced the concept of different representation or transformation Fourier Transform

More information

Diagnostic imaging techniques. Krasznai Zoltán. University of Debrecen Medical and Health Science Centre Department of Biophysics and Cell Biology

Diagnostic imaging techniques. Krasznai Zoltán. University of Debrecen Medical and Health Science Centre Department of Biophysics and Cell Biology Diagnostic imaging techniques Krasznai Zoltán University of Debrecen Medical and Health Science Centre Department of Biophysics and Cell Biology 1. Computer tomography (CT) 2. Gamma camera 3. Single Photon

More information

n o r d i c B r a i n E x Tutorial DTI Module

n o r d i c B r a i n E x Tutorial DTI Module m a k i n g f u n c t i o n a l M R I e a s y n o r d i c B r a i n E x Tutorial DTI Module Please note that this tutorial is for the latest released nordicbrainex. If you are using an older version please

More information

3D VISUALIZATION OF SEGMENTED CRUCIATE LIGAMENTS 1. INTRODUCTION

3D VISUALIZATION OF SEGMENTED CRUCIATE LIGAMENTS 1. INTRODUCTION JOURNAL OF MEDICAL INFORMATICS & TECHNOLOGIES Vol. 10/006, ISSN 164-6037 Paweł BADURA * cruciate ligament, segmentation, fuzzy connectedness,3d visualization 3D VISUALIZATION OF SEGMENTED CRUCIATE LIGAMENTS

More information

Review on Different Segmentation Techniques For Lung Cancer CT Images

Review on Different Segmentation Techniques For Lung Cancer CT Images Review on Different Segmentation Techniques For Lung Cancer CT Images Arathi 1, Anusha Shetty 1, Madhushree 1, Chandini Udyavar 1, Akhilraj.V.Gadagkar 2 1 UG student, Dept. Of CSE, Srinivas school of engineering,

More information

Copyright 2017 Medical IP - Tutorial Medip v /2018, Revision

Copyright 2017 Medical IP - Tutorial Medip v /2018, Revision Copyright 2017 Medical IP - Tutorial Medip v.1.0.0.9 01/2018, Revision 1.0.0.2 List of Contents 1. Introduction......................................................... 2 2. Overview..............................................................

More information

REGION & EDGE BASED SEGMENTATION

REGION & EDGE BASED SEGMENTATION INF 4300 Digital Image Analysis REGION & EDGE BASED SEGMENTATION Today We go through sections 10.1, 10.2.7 (briefly), 10.4, 10.5, 10.6.1 We cover the following segmentation approaches: 1. Edge-based segmentation

More information

Lecture: Segmentation I FMAN30: Medical Image Analysis. Anders Heyden

Lecture: Segmentation I FMAN30: Medical Image Analysis. Anders Heyden Lecture: Segmentation I FMAN30: Medical Image Analysis Anders Heyden 2017-11-13 Content What is segmentation? Motivation Segmentation methods Contour-based Voxel/pixel-based Discussion What is segmentation?

More information

BME I5000: Biomedical Imaging

BME I5000: Biomedical Imaging 1 Lucas Parra, CCNY BME I5000: Biomedical Imaging Lecture 4 Computed Tomography Lucas C. Parra, parra@ccny.cuny.edu some slides inspired by lecture notes of Andreas H. Hilscher at Columbia University.

More information

CHAPTER-4 LOCALIZATION AND CONTOUR DETECTION OF OPTIC DISK

CHAPTER-4 LOCALIZATION AND CONTOUR DETECTION OF OPTIC DISK CHAPTER-4 LOCALIZATION AND CONTOUR DETECTION OF OPTIC DISK Ocular fundus images can provide information about ophthalmic, retinal and even systemic diseases such as hypertension, diabetes, macular degeneration

More information

REAL-TIME ADAPTIVITY IN HEAD-AND-NECK AND LUNG CANCER RADIOTHERAPY IN A GPU ENVIRONMENT

REAL-TIME ADAPTIVITY IN HEAD-AND-NECK AND LUNG CANCER RADIOTHERAPY IN A GPU ENVIRONMENT REAL-TIME ADAPTIVITY IN HEAD-AND-NECK AND LUNG CANCER RADIOTHERAPY IN A GPU ENVIRONMENT Anand P Santhanam Assistant Professor, Department of Radiation Oncology OUTLINE Adaptive radiotherapy for head and

More information

doi: /

doi: / Yiting Xie ; Anthony P. Reeves; Single 3D cell segmentation from optical CT microscope images. Proc. SPIE 934, Medical Imaging 214: Image Processing, 9343B (March 21, 214); doi:1.1117/12.243852. (214)

More information

Image Restoration and Reconstruction

Image Restoration and Reconstruction Image Restoration and Reconstruction Image restoration Objective process to improve an image Recover an image by using a priori knowledge of degradation phenomenon Exemplified by removal of blur by deblurring

More information

SUPPLEMENTARY FILE S1: 3D AIRWAY TUBE RECONSTRUCTION AND CELL-BASED MECHANICAL MODEL. RELATED TO FIGURE 1, FIGURE 7, AND STAR METHODS.

SUPPLEMENTARY FILE S1: 3D AIRWAY TUBE RECONSTRUCTION AND CELL-BASED MECHANICAL MODEL. RELATED TO FIGURE 1, FIGURE 7, AND STAR METHODS. SUPPLEMENTARY FILE S1: 3D AIRWAY TUBE RECONSTRUCTION AND CELL-BASED MECHANICAL MODEL. RELATED TO FIGURE 1, FIGURE 7, AND STAR METHODS. 1. 3D AIRWAY TUBE RECONSTRUCTION. RELATED TO FIGURE 1 AND STAR METHODS

More information

Image Enhancement: To improve the quality of images

Image Enhancement: To improve the quality of images Image Enhancement: To improve the quality of images Examples: Noise reduction (to improve SNR or subjective quality) Change contrast, brightness, color etc. Image smoothing Image sharpening Modify image

More information

Segmentation and Modeling of the Spinal Cord for Reality-based Surgical Simulator

Segmentation and Modeling of the Spinal Cord for Reality-based Surgical Simulator Segmentation and Modeling of the Spinal Cord for Reality-based Surgical Simulator Li X.C.,, Chui C. K.,, and Ong S. H.,* Dept. of Electrical and Computer Engineering Dept. of Mechanical Engineering, National

More information

Tomographic Reconstruction

Tomographic Reconstruction Tomographic Reconstruction 3D Image Processing Torsten Möller Reading Gonzales + Woods, Chapter 5.11 2 Overview Physics History Reconstruction basic idea Radon transform Fourier-Slice theorem (Parallel-beam)

More information

LUNG NODULES SEGMENTATION IN CHEST CT BY LEVEL SETS APPROACH

LUNG NODULES SEGMENTATION IN CHEST CT BY LEVEL SETS APPROACH LUNG NODULES SEGMENTATION IN CHEST CT BY LEVEL SETS APPROACH Archana A 1, Amutha S 2 1 Student, Dept. of CNE (CSE), DSCE, Bangalore, India 2 Professor, Dept. of CSE, DSCE, Bangalore, India Abstract Segmenting

More information

Ch. 4 Physical Principles of CT

Ch. 4 Physical Principles of CT Ch. 4 Physical Principles of CT CLRS 408: Intro to CT Department of Radiation Sciences Review: Why CT? Solution for radiography/tomography limitations Superimposition of structures Distinguishing between

More information

CS334: Digital Imaging and Multimedia Edges and Contours. Ahmed Elgammal Dept. of Computer Science Rutgers University

CS334: Digital Imaging and Multimedia Edges and Contours. Ahmed Elgammal Dept. of Computer Science Rutgers University CS334: Digital Imaging and Multimedia Edges and Contours Ahmed Elgammal Dept. of Computer Science Rutgers University Outlines What makes an edge? Gradient-based edge detection Edge Operators From Edges

More information

Morphological Image Processing

Morphological Image Processing Morphological Image Processing Morphology Identification, analysis, and description of the structure of the smallest unit of words Theory and technique for the analysis and processing of geometric structures

More information

Level-set MCMC Curve Sampling and Geometric Conditional Simulation

Level-set MCMC Curve Sampling and Geometric Conditional Simulation Level-set MCMC Curve Sampling and Geometric Conditional Simulation Ayres Fan John W. Fisher III Alan S. Willsky February 16, 2007 Outline 1. Overview 2. Curve evolution 3. Markov chain Monte Carlo 4. Curve

More information

Slide 1. Technical Aspects of Quality Control in Magnetic Resonance Imaging. Slide 2. Annual Compliance Testing. of MRI Systems.

Slide 1. Technical Aspects of Quality Control in Magnetic Resonance Imaging. Slide 2. Annual Compliance Testing. of MRI Systems. Slide 1 Technical Aspects of Quality Control in Magnetic Resonance Imaging Slide 2 Compliance Testing of MRI Systems, Ph.D. Department of Radiology Henry Ford Hospital, Detroit, MI Slide 3 Compliance Testing

More information

Today INF How did Andy Warhol get his inspiration? Edge linking (very briefly) Segmentation approaches

Today INF How did Andy Warhol get his inspiration? Edge linking (very briefly) Segmentation approaches INF 4300 14.10.09 Image segmentation How did Andy Warhol get his inspiration? Sections 10.11 Edge linking 10.2.7 (very briefly) 10.4 10.5 10.6.1 Anne S. Solberg Today Segmentation approaches 1. Region

More information

Detection & Classification of Lung Nodules Using multi resolution MTANN in Chest Radiography Images

Detection & Classification of Lung Nodules Using multi resolution MTANN in Chest Radiography Images The International Journal Of Engineering And Science (IJES) ISSN (e): 2319 1813 ISSN (p): 2319 1805 Pages 98-104 March - 2015 Detection & Classification of Lung Nodules Using multi resolution MTANN in

More information

Digital Image Processing COSC 6380/4393

Digital Image Processing COSC 6380/4393 Digital Image Processing COSC 6380/4393 Lecture 21 Nov 16 th, 2017 Pranav Mantini Ack: Shah. M Image Processing Geometric Transformation Point Operations Filtering (spatial, Frequency) Input Restoration/

More information

MEDICAL IMAGE COMPUTING (CAP 5937) LECTURE 9: Medical Image Segmentation (III) (Fuzzy Connected Image Segmentation)

MEDICAL IMAGE COMPUTING (CAP 5937) LECTURE 9: Medical Image Segmentation (III) (Fuzzy Connected Image Segmentation) SPRING 2017 1 MEDICAL IMAGE COMPUTING (CAP 5937) LECTURE 9: Medical Image Segmentation (III) (Fuzzy Connected Image Segmentation) Dr. Ulas Bagci HEC 221, Center for Research in Computer Vision (CRCV),

More information

PROCESS > SPATIAL FILTERS

PROCESS > SPATIAL FILTERS 83 Spatial Filters There are 19 different spatial filters that can be applied to a data set. These are described in the table below. A filter can be applied to the entire volume or to selected objects

More information

Image Restoration and Reconstruction

Image Restoration and Reconstruction Image Restoration and Reconstruction Image restoration Objective process to improve an image, as opposed to the subjective process of image enhancement Enhancement uses heuristics to improve the image

More information

Biometrics Technology: Image Processing & Pattern Recognition (by Dr. Dickson Tong)

Biometrics Technology: Image Processing & Pattern Recognition (by Dr. Dickson Tong) Biometrics Technology: Image Processing & Pattern Recognition (by Dr. Dickson Tong) References: [1] http://homepages.inf.ed.ac.uk/rbf/hipr2/index.htm [2] http://www.cs.wisc.edu/~dyer/cs540/notes/vision.html

More information

White Pixel Artifact. Caused by a noise spike during acquisition Spike in K-space <--> sinusoid in image space

White Pixel Artifact. Caused by a noise spike during acquisition Spike in K-space <--> sinusoid in image space White Pixel Artifact Caused by a noise spike during acquisition Spike in K-space sinusoid in image space Susceptibility Artifacts Off-resonance artifacts caused by adjacent regions with different

More information

A Model-Independent, Multi-Image Approach to MR Inhomogeneity Correction

A Model-Independent, Multi-Image Approach to MR Inhomogeneity Correction Tina Memo No. 2007-003 Published in Proc. MIUA 2007 A Model-Independent, Multi-Image Approach to MR Inhomogeneity Correction P. A. Bromiley and N.A. Thacker Last updated 13 / 4 / 2007 Imaging Science and

More information

MEDICAL IMAGE ANALYSIS

MEDICAL IMAGE ANALYSIS SECOND EDITION MEDICAL IMAGE ANALYSIS ATAM P. DHAWAN g, A B IEEE Engineering in Medicine and Biology Society, Sponsor IEEE Press Series in Biomedical Engineering Metin Akay, Series Editor +IEEE IEEE PRESS

More information

Knowledge-Based Organ Identification from CT Images. Masahara Kobashi and Linda Shapiro Best-Paper Prize in Pattern Recognition Vol. 28, No.

Knowledge-Based Organ Identification from CT Images. Masahara Kobashi and Linda Shapiro Best-Paper Prize in Pattern Recognition Vol. 28, No. Knowledge-Based Organ Identification from CT Images Masahara Kobashi and Linda Shapiro Best-Paper Prize in Pattern Recognition Vol. 28, No. 4 1995 1 Motivation The extraction of structure from CT volumes

More information

THE SEGMENTATION OF NONSOLID PULMONARY NODULES IN CT IMAGES

THE SEGMENTATION OF NONSOLID PULMONARY NODULES IN CT IMAGES THE SEGMENTATION OF NONSOLID PULMONARY NODULES IN CT IMAGES A Thesis Presented to the Faculty of the Graduate School of Cornell University in Partial Fulfillment of the Requirements for the Degree of Master

More information

CHAPTER 3 RETINAL OPTIC DISC SEGMENTATION

CHAPTER 3 RETINAL OPTIC DISC SEGMENTATION 60 CHAPTER 3 RETINAL OPTIC DISC SEGMENTATION 3.1 IMPORTANCE OF OPTIC DISC Ocular fundus images provide information about ophthalmic, retinal and even systemic diseases such as hypertension, diabetes, macular

More information

Blood vessel tracking in retinal images

Blood vessel tracking in retinal images Y. Jiang, A. Bainbridge-Smith, A. B. Morris, Blood Vessel Tracking in Retinal Images, Proceedings of Image and Vision Computing New Zealand 2007, pp. 126 131, Hamilton, New Zealand, December 2007. Blood

More information

What will we learn? Neighborhood processing. Convolution and correlation. Neighborhood processing. Chapter 10 Neighborhood Processing

What will we learn? Neighborhood processing. Convolution and correlation. Neighborhood processing. Chapter 10 Neighborhood Processing What will we learn? Lecture Slides ME 4060 Machine Vision and Vision-based Control Chapter 10 Neighborhood Processing By Dr. Debao Zhou 1 What is neighborhood processing and how does it differ from point

More information

Fundamentals of Digital Image Processing

Fundamentals of Digital Image Processing \L\.6 Gw.i Fundamentals of Digital Image Processing A Practical Approach with Examples in Matlab Chris Solomon School of Physical Sciences, University of Kent, Canterbury, UK Toby Breckon School of Engineering,

More information

MEDICAL EQUIPMENT: COMPUTED TOMOGRAPHY. Prof. Yasser Mostafa Kadah

MEDICAL EQUIPMENT: COMPUTED TOMOGRAPHY. Prof. Yasser Mostafa Kadah MEDICAL EQUIPMENT: COMPUTED TOMOGRAPHY Prof. Yasser Mostafa Kadah www.k-space.org Recommended Textbook X-Ray Computed Tomography in Biomedical Engineering, by Robert Cierniak, Springer, 211 Computed Tomography

More information

DUE to beam polychromacity in CT and the energy dependence

DUE to beam polychromacity in CT and the energy dependence 1 Empirical Water Precorrection for Cone-Beam Computed Tomography Katia Sourbelle, Marc Kachelrieß, Member, IEEE, and Willi A. Kalender Abstract We propose an algorithm to correct for the cupping artifact

More information

Computer Vision for HCI. Topics of This Lecture

Computer Vision for HCI. Topics of This Lecture Computer Vision for HCI Interest Points Topics of This Lecture Local Invariant Features Motivation Requirements, Invariances Keypoint Localization Features from Accelerated Segment Test (FAST) Harris Shi-Tomasi

More information

CHAPTER VIII SEGMENTATION USING REGION GROWING AND THRESHOLDING ALGORITHM

CHAPTER VIII SEGMENTATION USING REGION GROWING AND THRESHOLDING ALGORITHM CHAPTER VIII SEGMENTATION USING REGION GROWING AND THRESHOLDING ALGORITHM 8.1 Algorithm Requirement The analysis of medical images often requires segmentation prior to visualization or quantification.

More information

Digital Image Processing

Digital Image Processing Digital Image Processing SPECIAL TOPICS CT IMAGES Hamid R. Rabiee Fall 2015 What is an image? 2 Are images only about visual concepts? We ve already seen that there are other kinds of image. In this lecture

More information

First CT Scanner. How it Works. Contemporary CT. Before and After CT. Computer Tomography: How It Works. Medical Imaging and Pattern Recognition

First CT Scanner. How it Works. Contemporary CT. Before and After CT. Computer Tomography: How It Works. Medical Imaging and Pattern Recognition Computer Tomography: How t Works Medical maging and Pattern Recognition Lecture 7 Computed Tomography Oleh Tretiak Only one plane is illuminated. Source-subject motion provides added information. 2 How

More information

Computer Vision I - Filtering and Feature detection

Computer Vision I - Filtering and Feature detection Computer Vision I - Filtering and Feature detection Carsten Rother 30/10/2015 Computer Vision I: Basics of Image Processing Roadmap: Basics of Digital Image Processing Computer Vision I: Basics of Image

More information

EE795: Computer Vision and Intelligent Systems

EE795: Computer Vision and Intelligent Systems EE795: Computer Vision and Intelligent Systems Spring 2012 TTh 17:30-18:45 FDH 204 Lecture 10 130221 http://www.ee.unlv.edu/~b1morris/ecg795/ 2 Outline Review Canny Edge Detector Hough Transform Feature-Based

More information

NIH Public Access Author Manuscript Proc Soc Photo Opt Instrum Eng. Author manuscript; available in PMC 2014 October 07.

NIH Public Access Author Manuscript Proc Soc Photo Opt Instrum Eng. Author manuscript; available in PMC 2014 October 07. NIH Public Access Author Manuscript Published in final edited form as: Proc Soc Photo Opt Instrum Eng. 2014 March 21; 9034: 903442. doi:10.1117/12.2042915. MRI Brain Tumor Segmentation and Necrosis Detection

More information

Lecture 4: Spatial Domain Transformations

Lecture 4: Spatial Domain Transformations # Lecture 4: Spatial Domain Transformations Saad J Bedros sbedros@umn.edu Reminder 2 nd Quiz on the manipulator Part is this Fri, April 7 205, :5 AM to :0 PM Open Book, Open Notes, Focus on the material

More information

Image Processing: Final Exam November 10, :30 10:30

Image Processing: Final Exam November 10, :30 10:30 Image Processing: Final Exam November 10, 2017-8:30 10:30 Student name: Student number: Put your name and student number on all of the papers you hand in (if you take out the staple). There are always

More information