Comparison of Segmentation Methods for Accurate Dental Caries Detection


Jackson B.T. Pedzisai 210549604
School of Mathematics, Statistics and Computer Science, University of KwaZulu-Natal

Dr Serestina Viriri
School of Mathematics, Statistics and Computer Science, University of KwaZulu-Natal

Abstract

Dental X-ray image analysis applications are important in assisting dentists' procedures and diagnoses, and in post-mortem identification. The primary step in such applications is the estimation of the tooth contour to permit efficient feature extraction (segmentation). The growing need for teeth segmentation, for example as a result of the increasing volume of cases that must be investigated by forensic specialists, calls for automation of this process. To obtain proper results, an accurate and efficient segmentation approach, proven in the context of X-ray image segmentation, is required. In this paper we investigate different tooth contour extraction techniques: the Canny edge detector, the Laplacian of Gaussian, image differences, and thresholding.

Keywords: X-ray images

1 Introduction

Ante-mortem (AM) identification, that is identification prior to death, is usually possible through comparison of many biometric identifiers. Post-mortem (PM) identification, that is identification after death, is impossible using behavioural biometrics (e.g. speech, gait), especially under severe circumstances such as those encountered in mass disasters (e.g. airplane crashes), or if identification is attempted more than a couple of weeks post-mortem. Under such circumstances, most physiological biometrics cannot be employed for identification, because the soft tissues of the body decay to unidentifiable states. Teeth, being the hardest and most impregnable part of the human body, do not easily decay.
Dental features are biometric identifiers that resist early decay; their survivability and diversity make them the best candidates for post-mortem biometric identification. With the evolution of information technology and the huge volume of cases that forensic specialists must investigate, it has become important to automate forensic identification systems.

Dental caries is an infectious, communicable disease resulting in the destruction of tooth structure by acid-forming bacteria found in dental plaque, an intra-oral biofilm, in the presence of sugar. The infection results in the loss of tooth minerals, beginning with the outer surface of the tooth, and can progress through the dentine to the pulp, ultimately compromising the vitality of the tooth. Early detection and characterisation of caries lesions is very important because it can reduce the need for surgical restorative procedures. If caries is detected at an early stage, dentists and dental professionals can implement measures to reverse and control it, such as identifying patients in need of preventive care, applying fluoride treatments, implementing plaque-control measures, and identifying patients at high risk of developing dental caries. Classification of dental caries is also important for the diagnosis and treatment planning of the disease.

Dental imaging allows dentists to be proactive, seeing problems before they become visible. It is also helpful for detailed study and investigation of the nature of dental disease. Automating the analysis of dental images is an important application: it assists dentists' procedures and diagnoses. Tooth segmentation from the radiographic images and feature extraction are essential steps.

2 Related work

Considerable progress has been made on teeth segmentation in the past few years.
Jain and Chen [1] proposed a semi-automatic contour extraction method for tooth segmentation using integral projection and Bayes' rule, in which an initial valley gap point is required before the integral projection can be applied for tooth isolation. Keshtkar and Gueaieb [4] introduced a swarm-intelligence-based and a cellular-automata model approach. Zhou and Abdel-Mottaleb [2] used active contour models (based on snakes) to extract the contour of the teeth. Edge-based active contour models are driven by the gradient of the image intensities; in many dental images the gradient between the teeth and the background is not prominent, and hence such techniques cannot accurately extract the tooth contour. Snake-based schemes also rely on a parametric representation of the contour, and parametric representations typically fail to evolve in noisy conditions [6]. Further, parametric contours may not be able to split and merge upon encountering local minima in the region of interest ([7], [8], [9], [10]).

ISBN: 978-1-61804-247-7

Most of these segmentation algorithms are strongly affected by noise embedded in the images, which leads to poor segmentation performance; it is therefore crucial to apply some form of image enhancement before the segmentation process. Also, because dental caries manifests differently in different types of teeth, it is important to classify each tooth.

3 Methodology

In this paper we thoroughly review dental X-ray image segmentation techniques and implement the segmentation techniques and feature extraction methods. The comparison of the segmentation methods is split into a number of processes: input of an image file, pre-processing (cropping the image), enhancement, and teeth segmentation using the different segmentation techniques. Figure 1 below gives a pictorial view of the methodology together with the proposed techniques and algorithms.

Figure 1: A diagram of the methodology of the teeth segmentation and dental caries detection.

3.1 Cropping

The first step in segmentation is cropping. We created an interface that displays the image; by simply highlighting the part of the image that is needed, that part is extracted. Figures 2 and 3 below show an example image and the two cropped images, respectively. These images are used throughout this paper.

Figure 2: X-ray image
Figure 3: Cropped images

3.2 Enhancement

Images are often corrupted by random variations in intensity values, such as salt-and-pepper noise, impulse noise and Gaussian noise. There is a trade-off between edge

strength and noise reduction: more filtering to reduce noise results in a loss of edge strength. In this paper a median filter is used, in which each pixel is replaced by the median of its neighbourhood [140].

3.3 Detection

Many points in an image have a nonzero gradient value, yet not all of these points are edges for a particular application. Some method is therefore needed to determine which points are edge points.

3.3.1 Point Detector

Point detection is achieved by applying a point-detecting mask to the image. Each output pixel g(x, y) is given by

    g(x, y) = Σ_s Σ_t w(s, t) f(x + s, y + t)    (1)

where f is the initial image, w is the point-detecting mask of size n × m, and g is the resultant image. The mask used is

    w = [ -1  -1  -1
          -1   8  -1
          -1  -1  -1 ]

Clearly, any pixel on an edge has a value significantly different from the average of its neighbours; if the value is approximately the same, the pixel is not on an edge. Figure 4 below shows the result of applying the point-detector filter. A median filter is then applied to the image to remove noise so that the edges are clear; Figure 5 below shows the result after applying the median filter.

Figure 4: Results of point detector filter
Figure 5: Point detector after applying median filter

3.3.2 Laplacian of Gaussian

The Laplacian is generally not used in its original form for edge detection because of its unacceptable sensitivity to noise. Consider instead the Gaussian function

    h(r) = -exp(-r² / (2σ²))    (2)

where r² = x² + y² and σ is the standard deviation. Convolving this function with an image blurs the image, with the degree of blurring determined by the value of σ. The Laplacian of h (the second derivative of h with respect to r) is

    ∇²h(r) = -((r² - σ²) / σ⁴) exp(-r² / (2σ²))    (3)

This is the Laplacian of a Gaussian (LoG). The purpose of the Gaussian function in the LoG formulation is to smooth the image, and the purpose of the Laplacian operator is to provide an image whose zero crossings establish the location of edges. Smoothing reduces the effect of noise and, in principle, counters the increased noise caused by taking the second derivative. The Laplacian of Gaussian therefore first smooths the image with a low-pass filter to reduce noise; a discrete Laplacian of Gaussian kernel is used for this. Figure 6 below shows the Laplacian of Gaussian filter, and Figure 7 shows the results of applying it to the filtered images.

Figure 6: Laplacian of Gaussian filter
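As a concrete illustration of the point-detection step of Section 3.3.1, the following is a minimal pure-Python sketch of equation (1); the 8/−1 mask and the tiny test image are illustrative choices, not the paper's implementation.

```python
# Sketch of the point-detecting mask of equation (1), in pure Python.
# The mask W and the 5x5 test images below are illustrative only.

W = [[-1, -1, -1],
     [-1,  8, -1],
     [-1, -1, -1]]

def point_detect(image, mask=W):
    """Apply `mask` at every interior pixel; border pixels are left at 0."""
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            acc = 0
            for s in (-1, 0, 1):
                for t in (-1, 0, 1):
                    acc += mask[s + 1][t + 1] * image[y + s][x + t]
            out[y][x] = acc
    return out

# A flat region responds with 0; an isolated bright point responds strongly,
# because its value differs sharply from the average of its neighbours.
flat = [[10] * 5 for _ in range(5)]
spot = [row[:] for row in flat]
spot[2][2] = 200
print(point_detect(flat)[2][2])  # 0
print(point_detect(spot)[2][2])  # 8*200 - 8*10 = 1520
```

A threshold on the absolute response then separates candidate points from the smooth background.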

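A discrete Laplacian-of-Gaussian kernel can be sampled directly from equation (3). This is a minimal sketch under assumed parameters (a 5 × 5 kernel and σ = 1.4); the paper does not specify the exact kernel it used.

```python
import math

# Sample equation (3) on an integer grid to obtain a discrete LoG kernel.
# Kernel size 5 and sigma = 1.4 are illustrative assumptions.

def log_value(x, y, sigma):
    r2 = x * x + y * y
    return -((r2 - sigma ** 2) / sigma ** 4) * math.exp(-r2 / (2 * sigma ** 2))

def log_kernel(size, sigma):
    half = size // 2
    return [[log_value(x, y, sigma) for x in range(-half, half + 1)]
            for y in range(-half, half + 1)]

k = log_kernel(5, 1.4)
# The centre of the kernel is positive and the surround negative; the sign
# change (zero crossing) sits near r = sigma, which is what locates edges
# after the kernel is convolved with the image.
```

Convolving this kernel with the image and marking the zero crossings of the response gives the LoG edge map described above.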
Figure 7: Results of Laplacian of Gaussian
Figure 8: Laplacian of Gaussian after filtering noise

3.3.3 Double Thresholding

This is one of the simplest segmentation techniques; it involves thresholding the image intensities [5]. In this paper the technique is carried out in two steps: thresholding, and then computing the gradient to determine the edges.

Thresholding

Two threshold values, T_high and T_low, are chosen. Edge pixels stronger than the high threshold are marked as strong (white); edge pixels weaker than the low threshold are suppressed; and edge pixels between the two are marked as weak (grey). In this paper a threshold value (T_high or T_low) is taken as the i-th pixel value after all pixel values have been sorted in ascending order, where i is given by

    i = αn    (4)

with n the total number of pixels. Here α is 0.4 for T_high and 0.26 for T_low. This means that, for T_high, the threshold groups 40 percent of the pixels below T_high and the other 60 percent above it; the same is done for T_low. These threshold values group the pixels into three subsets: those with values above T_high, those with values below T_low, and those with values between T_low and T_high. The three groups are given different pixel values: 0 for those below T_low, 255 for those above T_high, and a value between 0 and 255 for the remaining set. Figure 9 below shows the result of double thresholding.

Figure 9: Results of double thresholding

Gradient

From the result shown above, it is clear that the boundaries (edges) can be easily extracted. To determine the edges, the gradient magnitude is calculated for each pixel. All pixels will have the same gradient value of zero except those at the edges, so any pixel with a gradient greater than 0 is categorised as a boundary (given the value 1). This draws edges two pixels thick. The gradient is computed for every pixel; the gradient of the image f(x, y) at location (x, y) is defined as the vector

    ∇f = [G_x, G_y]ᵀ = [∂f/∂x, ∂f/∂y]ᵀ    (5)

It is well known from vector analysis that the gradient vector points in the direction of the maximum rate of change of f at coordinates (x, y). The gradient magnitude is the magnitude of this vector,

    |∇f| = sqrt(G_x² + G_y²)    (6)

In this paper we implemented the Sobel operator to find the first-order partial derivatives. The Sobel operator computes an approximation of the gradients along the horizontal (x) and vertical (y) directions for each pixel [6]. Any pixel with a gradient magnitude greater than 0 is classified as an edge.
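The two-step double-thresholding scheme above can be sketched in a few lines. The α values follow equation (4); the grey value 128 and the toy intensity list are illustrative choices (the paper only says the weak set gets a value between 0 and 255).

```python
# Sketch of double thresholding: thresholds from equation (4)
# (i = alpha * n over the sorted intensities), then a three-level relabelling.

def pick_threshold(pixels, alpha):
    """Return the i-th smallest value, i = alpha * n (equation (4))."""
    s = sorted(pixels)
    return s[int(alpha * len(s))]

def double_threshold(pixels, a_high=0.4, a_low=0.26, grey=128):
    t_high = pick_threshold(pixels, a_high)
    t_low = pick_threshold(pixels, a_low)
    # strong (white) above T_high, suppressed below T_low, weak (grey) between
    return [255 if p > t_high else 0 if p < t_low else grey for p in pixels]

labels = double_threshold(list(range(100)))
print(labels[10], labels[30], labels[90])  # 0 128 255
```

On the relabelled image, the gradient is zero inside each constant region, so any pixel with nonzero gradient magnitude marks a boundary, as described above.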

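The Sobel gradient of equations (5) and (6) can be sketched as follows, using the standard Sobel mask pair on a small synthetic image with a vertical step edge; the test image and the interior-only evaluation are illustrative.

```python
import math

# Sketch of the Sobel gradient magnitude of equations (5)-(6).
# GX and GY are the standard Sobel masks; the 5x5 step-edge image is a toy.

GX = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
GY = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def conv_at(img, mask, y, x):
    """Apply a 3x3 mask centred on interior pixel (y, x)."""
    return sum(mask[s + 1][t + 1] * img[y + s][x + t]
               for s in (-1, 0, 1) for t in (-1, 0, 1))

def gradient_magnitude(img, y, x):
    gx = conv_at(img, GX, y, x)
    gy = conv_at(img, GY, y, x)
    return math.hypot(gx, gy)  # sqrt(gx^2 + gy^2), equation (6)

# Vertical step edge: columns 0-1 dark, columns 2-4 bright.
img = [[0, 0, 100, 100, 100] for _ in range(5)]
print(gradient_magnitude(img, 2, 2))  # strong response on the edge: 400.0
print(gradient_magnitude(img, 2, 3))  # flat region to the right: 0.0
```

Classifying every pixel with nonzero magnitude as an edge reproduces the boundary map used by the double-thresholding technique.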
The Sobel masks used are

    G_x = [ -1  0  1          G_y = [ -1  -2  -1
            -2  0  2                   0   0   0
            -1  0  1 ]                 1   2   1 ]    (7)

Figure 10: Result of using gradient to determine edges

3.3.4 Image Difference

Gaussian filter. The image is first smoothed by applying a Gaussian filter with standard deviation σ = 1.4. The filter shown in Figure 11 below is a Gaussian filter with σ = 1.4.

Figure 11: Gaussian filter with standard deviation of 1.4

Original image minus filtered image. The smoothed image is subtracted from the original image:

    g(x, y) = f(x, y) - h(x, y)    (8)

where g(x, y) is the resultant image, f(x, y) is the original image, and h(x, y) is the filtered image. The results are shown in Figure 12 below.

Figure 12: Results of subtracting the filtered image from the original image

Applying an averaging filter. The resultant image above has too much noise, and hence requires filtering. A lowpass (averaging) filter is applied to the image: each pixel value is replaced by the average of the grey levels in the neighbourhood defined by the filter mask. Because random noise typically consists of sharp transitions in grey values, these are reduced, together with false contours. On the other hand, the filter has the undesirable effect of blurring edges; in this case, however, the edges are not much affected because they are already clearly defined. The lowpass filter used in this paper is a weighted averaging filter: pixels are multiplied by different coefficients, giving more importance to some pixels at the expense of others, with the weights assigned according to distance from the centre of the mask. A median filter is then applied to further clean the image. Figure 13 below shows the result of applying the weighted averaging filter.

Figure 13: Results of applying the average weighted filter

Erosion/edge thinning. Because the resultant image has thick contours, the thickness of the edges must be reduced. This is achieved by applying erosion to the image. For sets A and B in Z², the erosion of A by B is defined as

    A ⊖ B = { z | (B)_z ⊆ A }    (9)

This indicates that the erosion of A by B is the set of all points z such that B, translated by z, is contained in A. A minimum filter was used to erode the edges. Figure 14 below shows the final result after erosion; it can also be seen that erosion has removed the noise in the image.

Figure 14: Final image after eroding

3.3.5 Canny Detector

This is a multi-step process, implemented as a sequence of filters. The algorithm involves the following steps.

Noise filtering. The image is blurred to remove noise: it is first smoothed by applying a Gaussian filter. In this paper we used a Gaussian filter with a standard deviation of 1.4, shown in Figure 11.

Gradient. Edges should be marked where the gradients of the image have large magnitudes (where the greyscale intensity changes most). The gradient is first determined by applying the Sobel operator in the x and y directions, and the law of Pythagoras then gives the resultant gradient magnitude. However, the resulting edges are typically broad and do not indicate exactly where the edges lie. To make this possible, the direction of each edge must also be determined and stored. The direction is given by

    θ = arctan(G_y / G_x)    (10)

Non-maximum suppression. Only local maxima should be marked as edges. The purpose of this step is to convert the blurred edges in the gradient-magnitude image into sharp edges, by preserving all local maxima in the gradient image and deleting everything else. First, the gradient direction of each pixel is rounded to the nearest 45°. The edge strength of each pixel is then compared with the edge strengths of the pixels in the positive and negative gradient directions; if the pixel's edge strength is the largest, it is preserved, otherwise it is removed.

Double thresholding. Potential edges are determined by thresholding. Many of the resulting edges will probably be true edges in the image, but some may be due to noise. Edge pixels stronger than a chosen high threshold (T_high) are marked as strong edges (white); edge pixels weaker than the low threshold (T_low) are suppressed; and edge pixels between the two are marked as weak edges (grey). The result is shown below.

Figure 15: Results of double thresholding

Edge tracking by hysteresis. Final edges are determined by suppressing all edges that are not connected to a very certain (strong) edge. Strong edges are interpreted as certain edges; weak edges are included if and only if they are connected to strong edges. The logic is that noise and other small variations are unlikely to produce a strong edge, so weak edges can be due either to true edges or to noise; edges due to noise will probably be distributed independently of true edges across the image, and thus only a small number will lie adjacent to strong edges. The thresholding above may have labelled valid edges as weak (grey), leaving disconnected edges, usually in regions where the gradient fluctuates above and below the high threshold T_high. Hysteresis solves this problem: all pixels with magnitude less than T_low are discarded (set to 0), and pixels with magnitude greater than or equal to T_high are retained (set to 1). If a pixel has magnitude between T_low and T_high, it is retained if any of its neighbours in the 3 × 3 region around it has a gradient magnitude greater than T_high. If none of the pixel's neighbours has a

magnitude greater than T_high but at least one falls between T_low and T_high, a search is made in the 5 × 5 region; if any of these pixels has a magnitude greater than T_high, the pixel is retained, otherwise it is discarded. Figure 16 below shows the results of hysteresis.

Figure 16: Results of edge tracking by hysteresis

4 Results and comparison

Canny provides the better contour in terms of accuracy. It is, however, affected by illumination variance, as shown in Figure 17 below. The contour drawn by this technique is one pixel thick, but it sometimes draws lines that are not part of the tooth.

Figure 17: Effects of illumination variance. The first image is the original image whose contour is to be detected; the second and third images show the results of contour detection using the Canny detector and double thresholding, respectively.

Most teeth have uneven surfaces, and since double thresholding has only two threshold values, it can detect at most two ridges on the surface of a tooth, leaving out the others. It is not much affected by illumination variance, as shown in Figure 17 above. For teeth that are too close to each other, this technique is in most cases unable to detect the contour separating the teeth. It is also not very accurate, although it is able to detect the shape of the contour. The Laplacian of Gaussian and the point detector have very high failure rates. The table below shows the percentage failure for each technique we implemented. We define percentage failure as the ratio of the number of images for which the technique did not capture any true segmentation region (tooth) to the total number of images used in the experiments.

Figure 18: Percentage failure

5 Future work

Implement a new algorithm to accurately detect dental caries.

References