CHAPTER 4 EDGE DETECTION TECHNIQUE


The main aim of edge detection is to significantly reduce the amount of data in an image while preserving the structural properties needed for further image processing. A number of algorithms already exist. The present chapter focuses on two existing methodologies, the Canny edge detection and the Marr-Hildreth edge detection techniques, and proposes a new edge detection technique.

4.1 INTRODUCTION

Edge detection is an important area of image processing. Edges define and differentiate the boundaries between the objects of an image and the background region, and they also help in segmentation and object recognition. The quality of the detected edges depends on the following factors:

i. Lighting conditions
ii. The presence of objects of similar intensities
iii. Scene image densities and
iv. Noise

Any of the above problems can be handled by adjusting the threshold values. However, no good method has been proposed to set the threshold values automatically; manual adjustment is the only way of changing them. An ideal algorithm is therefore needed to make good use of edge detection, and to achieve good performance the behavior of the previous methods must be known (www.csc.noaa.gov/crs/lca/faq_gen.html#wirs). In the present research, two different edge detectors have been tested under a variety of situations and compared with the current system. A new Kodi-Edge Detection Method is used to analyze the scene sensed by the sensor. The two factors considered here are:

i. Intensity
ii. Color composite information

1) Intensity: Intensity is a measure of the light incident on a sensor or photosensitive device. Normally a color is specified using three gray levels (Red, Green, and Blue) at each pixel. One useful alternative scheme is HSI, in which I stands for Intensity or brightness, measured as the average of the R, G, and B gray-level values (Pratt et al 1991). The intensity value specifies the overall brightness of a pixel, regardless of its color. The other two parameters are H (Hue) and S (Saturation). Hue is expressed as an angle and corresponds to the spectral wavelength; saturation is the radial distance of the color point from the origin. Conventionally, a hue of 0° is red, 120° is green, and 240° is blue (Castleman et al 2010). The non-spectral (purple) colors that the eye can perceive fall between 240° and 360°.

2) Color composite information: From a multi-band image such as LANDSAT imagery, any three bands can be selected and merged to generate a color image. Additive and subtractive are the two methods of color compositing. An additive color composite uses the three primary colors of light (RGB), as in a color graphics display (Castleman et al 2010); a subtractive color composite uses the three pigment colors (cyan, magenta and yellow). Multispectral images contain more than three spectral bands, but outside the RGB range the human eye cannot detect any region of the electromagnetic spectrum (or color); the visible range of a multispectral image is roughly 0.4 to 0.7 µm for the human eye. The common wavelength bands of the visible spectrum are:

0.400 - 0.446 µm : Violet
0.447 - 0.500 µm : Blue
0.501 - 0.578 µm : Green
0.579 - 0.592 µm : Yellow
0.593 - 0.620 µm : Orange
0.621 - 0.700 µm : Red

The invisible bands can be viewed by combining the colors. When the blue gun of a display device passes the blue band, the green gun the green band, and the red gun the red band, the combination is called a true color combination [ref.3]; any other combination is called a false color combination. Change detection is also done with multispectral images. When a multispectral image of 6 bands is used, the number of color

composites may be calculated as 6!/(6−3)! = 6 × 5 × 4 = 120. A single band can be passed through the blue and green guns to understand the changes between two images.

4.2 THE PREVIOUS METHODS - AN OVERVIEW

Edges define the image boundaries that help segmentation and object recognition. Edge detection is the basis of low-level image processing, and good edges (edges without noise) are necessary for higher-level processing. General edge detectors often behave poorly: their behavior may fall within tolerance in specific situations, but they have difficulty adapting to different situations. Hence, an edge detector with better performance has to be developed, and it is necessary first to discuss the previous edge detectors. In the present research, two different methodologies are considered.

4.2.1 The Marr-Hildreth Edge Detector

The Marr-Hildreth edge detector (1979) was very popular before Canny (1986). It is gradient based and uses the Laplacian to take the second derivative of an image (Marr and Hildreth 1980). There are three steps in the Marr-Hildreth edge detector algorithm:

i. Smooth the image using a Gaussian, to reduce the error due to noise.
ii. Apply the 2D Laplacian,

∇²f = ∂²f/∂x² + ∂²f/∂y² (4.1)

iii. Loop over the result and check for a sign change. If there is a sign change and the slope is greater than the threshold, mark the pixel as an edge; otherwise move on to the next pixel.

There is only one Gaussian distribution that optimizes the relations expressed in Equations 4.2 to 4.6. In one dimension,

G(x) = exp(−x²/2σ²) (4.2)

and, with the help of the Fourier transform,

G(ω) ∝ exp(−σ²ω²/2) (4.3)

In 2D it can be represented as

G(r) = exp(−r²/2σ²) (4.4)

where r² = x² + y². There are two parts in the edge detection analysis in the Marr-Hildreth presentation:

i. The changes in intensity values of natural images are detected at different scales. Using the Gaussian second-derivative filter, some simple conditions are satisfied. The intensity changes are detected by finding the zero values of ∇²G(x, y) * I(x, y) for an image I, where G(x, y) is the 2D Gaussian distribution and ∇² is the Laplacian. These detected intensity changes are referred to as zero-crossing segments, and the evidence for them is given in the Marr-Hildreth theory.

ii. Intensity changes arise from surface discontinuities. Owing to this, the zero-crossing pixels are not independent, and a description of the image called the raw primal sketch is formed, from which rules are deduced. Many psychophysical findings are explained by this description.

4.2.1.1 Detecting Intensity Changes

Wherever a change occurs in the intensity values, there is a corresponding peak in the first directional derivative, or equivalently a zero-crossing in the second directional derivative, of the intensity values. Hence the zero-crossings of

f(x, y) = D²(G(r) * I(x, y)) (4.5)

are sought, where I(x, y) is the image and * is the convolution operator. By the derivative rule for convolution,

f(x, y) = (D²G) * I(x, y) (4.6)

These arguments establish the intensity changes using Equation (4.6) at one scale. An image is an array of pixels, and these array values may be affected by noise; hence smoothing is done to reduce the noise. The brightness of the light impinges on the sensor, and the direction of edges can be measured by partial derivatives. Abrupt changes of image brightness occur along curves in the image plane, and blurring of the image is done by smoothing the pixel values; all these operations are convolutions (Basudeb Bhatta et al 2011). As edges are abrupt image intensity changes, the differentiation may be started in the horizontal direction.
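The three steps of the Marr-Hildreth algorithm can be sketched in Python (a minimal NumPy illustration; the value of σ, the kernel radius, the synthetic test image and the slope threshold are arbitrary choices, not values from this chapter):

```python
import numpy as np

def gaussian_kernel(sigma=1.0, radius=3):
    """1D Gaussian G(x) = exp(-x^2 / 2 sigma^2), normalized to sum to 1."""
    x = np.arange(-radius, radius + 1)
    g = np.exp(-x**2 / (2.0 * sigma**2))
    return g / g.sum()

def smooth(img, sigma=1.0):
    """Step i: separable Gaussian smoothing (rows, then columns)."""
    k = gaussian_kernel(sigma)
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, tmp)

def laplacian(f):
    """Step ii: discrete 2D Laplacian of Eq. (4.1) on the interior pixels."""
    lap = np.zeros_like(f)
    lap[1:-1, 1:-1] = (f[1:-1, 2:] + f[1:-1, :-2] +
                       f[2:, 1:-1] + f[:-2, 1:-1] - 4.0 * f[1:-1, 1:-1])
    return lap

def zero_crossings(lap, slope_thresh=0.0):
    """Step iii: mark sign changes of the Laplacian whose slope is large enough."""
    edges = np.zeros(lap.shape, dtype=bool)
    sx = (lap[:, :-1] * lap[:, 1:]) < 0          # sign change along x
    mx = np.abs(lap[:, :-1] - lap[:, 1:])        # local slope estimate
    edges[:, :-1] |= sx & (mx > slope_thresh)
    sy = (lap[:-1, :] * lap[1:, :]) < 0          # sign change along y
    my = np.abs(lap[:-1, :] - lap[1:, :])
    edges[:-1, :] |= sy & (my > slope_thresh)
    return edges

img = np.zeros((16, 16)); img[:, 8:] = 1.0       # synthetic vertical step edge
edges = zero_crossings(laplacian(smooth(img)))   # crossings cluster near column 8
```

On the synthetic step, the smoothed Laplacian is positive on one side of the edge and negative on the other, so the zero-crossings line up along the true edge column.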

The partial derivative D of the continuous function f(x, y) can be calculated with respect to x (the horizontal variable) as the slope of the function:

D f(x, y) = ( f(x + Δx, y) − f(x, y) ) / Δx (4.7)

where Δx is the discrete step, which has the value 1. The zero-crossings may be detected by:

i. Convolving the image with D²G, and
ii. Looking for zero-crossings in its output.

4.2.1.2 Issues

i. The orientation associated with D² is a concern.
ii. It is not enough to choose zero-crossings of the second derivative in an arbitrary direction of the image representation.

Marr and Ullman presented a theory of edge detection. The analysis proceeds in two parts, namely (1) intensity changes and (2) spatial localization.
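With Δx = 1, Equation (4.7) reduces to a simple difference of neighboring pixels, which can be checked numerically (a small illustration on a synthetic ramp, not code from the chapter):

```python
import numpy as np

# f(x, y) sampled on a grid: a ramp f = 3x, so the true df/dx is 3 everywhere.
x = np.arange(8)
f = np.tile(3.0 * x, (4, 1))          # 4 rows, each row is 0, 3, 6, ..., 21

# Forward difference of Eq. (4.7) with the discrete step delta-x = 1.
dfdx = f[:, 1:] - f[:, :-1]           # every entry equals the true slope, 3.0
```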

The intensity values are detected at different scales. The second-derivative Gaussian filter is adopted under certain conditions, so that the primary filter need not be orientation-dependent. The 2D Gaussian distribution is considered at a given scale of intensity values. However, according to Marr, the edge concept has a partly visual and partly physical meaning. Spatial localization is considered a simplification of the detection of intensity changes, and the detection process can be based on finding zero-crossings using the second derivative. This representation is complete and invertible. Physical edges produce roughly coincident zero-crossings in channels of nearby sizes, and these are sufficient evidence for the existence of a real physical edge. No assurance is given on the performance of a linear convolution equivalent to a directional derivative. It is the symbolic descriptions provided by zero-crossing segments that need to be matched between images, not the raw convolution values; these descriptions need to be formed and kept separate. Hence Canny (1986) proposed a different edge detection technique.

4.2.2 The Canny Edge Detector

With Marr-Hildreth it is possible to set the slope threshold and the sigma and size of the Gaussian. An edge detector gives connected edges if hysteresis is used, but not with a single threshold; it then usually produces spotty and thick edges. The Marr-Hildreth (1979) technique does not use any fixed threshold; an adaptive scheme is used instead. With Marr-Hildreth zero-crossings, the edge strength is averaged over the length of each segment: if the calculated average is above the threshold, the entire segment is marked as true edge pixels; if not, no part of the contour appears in the output. The contour is, however, segmented by breaking it at maxima of curvature.

The standard edge detection algorithm developed by John Canny (1986) is still the main reference for detecting the edges of images, and it outperforms many newer algorithms. Two ideas emerge from it. First, the detection of intensity changes can be simplified by working at different resolutions: detection is then based on finding the zero-crossings of the second derivative, the Laplacian, and the representation consists of zero-crossing segments and their slopes. The second main idea is that information from the different channels is combined into a single description. Canny formulated edge detection as a signal-processing optimization problem (Jifen Liu and Maoting Gao 2007); the solution is a combination of exponential functions. However, in the computational-approach algorithm, Canny made no attempt to pre-segment contours. Instead, thresholding is done with hysteresis: points above the high threshold are immediately output. The steps of the Canny edge detector are as follows:

i. Smooth the image using a 2D Gaussian filter.
ii. Find the gradient, which shows the changes in intensity; the presence of edges is indicated by the gradient in the X and Y directions.
iii. Non-maximal suppression: only edges at a local maximum of the gradient are kept as true edges.
iv. Edge thresholding and tracking by hysteresis, using a high and a low threshold.
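The four steps can be sketched end to end in NumPy (a minimal illustration, not Canny's exact derivation: Sobel masks stand in for the derivative-of-Gaussian filter, and the fractional thresholds `low`/`high` are arbitrary choices):

```python
import numpy as np

def conv2(img, k):
    """'Same'-size 2D convolution with zero padding (small helper, not optimized)."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    p = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros(img.shape, dtype=float)
    kf = k[::-1, ::-1]                       # flip the kernel: true convolution
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(p[i:i+kh, j:j+kw] * kf)
    return out

def canny_sketch(img, sigma=1.0, low=0.1, high=0.3):
    # i. Gaussian smoothing (separable 1D kernels)
    ax = np.arange(-3, 4)
    g = np.exp(-ax**2 / (2 * sigma**2)); g /= g.sum()
    sm = conv2(conv2(img, g[None, :]), g[:, None])
    # ii. Gradients in X and Y
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    gx, gy = conv2(sm, kx), conv2(sm, kx.T)
    mag = np.hypot(gx, gy)
    # iii. Crude non-maximal suppression along the dominant gradient axis
    nms = np.zeros_like(mag)
    for i in range(1, mag.shape[0] - 1):
        for j in range(1, mag.shape[1] - 1):
            if abs(gx[i, j]) >= abs(gy[i, j]):
                nbrs = (mag[i, j-1], mag[i, j+1])
            else:
                nbrs = (mag[i-1, j], mag[i+1, j])
            if mag[i, j] >= max(nbrs):
                nms[i, j] = mag[i, j]
    # iv. Double threshold plus hysteresis: weak pixels survive only when
    # they touch a strong pixel (4-neighborhood here, for brevity)
    m = nms.max()
    strong, weak = nms > high * m, nms > low * m
    edges = strong.copy()
    while True:
        grown = edges.copy()
        grown[1:, :] |= edges[:-1, :] & weak[1:, :]
        grown[:-1, :] |= edges[1:, :] & weak[:-1, :]
        grown[:, 1:] |= edges[:, :-1] & weak[:, 1:]
        grown[:, :-1] |= edges[:, 1:] & weak[:, :-1]
        if (grown == edges).all():
            return edges
        edges = grown

img = np.zeros((16, 16)); img[:, 8:] = 1.0   # synthetic vertical step edge
edges = canny_sketch(img)                    # True pixels cluster near column 8
```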

The determination of the edge starts from a step edge in white Gaussian noise. This edge is convolved with a filter whose impulse response is the first derivative of a Gaussian operator; the local maximum of the response locates the center of the edge. The design problem then becomes one of finding the filter that may be expected to give the best performance against three criteria:

i. Detection: there should be a low probability of failing to mark real edge points and a low probability of marking false, non-edge points. Both error probabilities decrease as the signal-to-noise ratio increases.
ii. Localization: the points that the operator marks as edge points should be as close as possible to the center of the true edge.
iii. Single response: there should be only one response to a single edge. When there are two responses to the same edge, one of them must be false. Multiple responses are not captured by the mathematical form of the first two criteria alone.

The signal-to-noise ratio of the filter is

SNR = | ∫₋w^w G(−x) f(x) dx | / ( n₀ √( ∫₋w^w f(x)² dx ) ) (4.8)

where n₀ is the root-mean-squared noise amplitude per unit length, f(x) is the impulse response of the filter, G(x) is the edge, and w is the finite impulse response limit.

The first derivative of the response is zero at a local maximum, so the local maxima must be determined in order to mark the true edges.

4.2.2.1 Issues

i. It is computationally expensive.
ii. Gradients are obtained in the x and y directions only.
iii. During non-maxima suppression, only the edges above the high threshold are shown.
iv. There is no indication of the false edges.
v. More time is required for calculation.

4.3 PROPOSED KODI-EDGE DETECTION TECHNIQUE

In the present research, a new algorithm is proposed and tested on various images. The algorithm, illustrated here using the ERDAS implementation, proceeds as follows:

i. Preprocessing steps
a. Determine the AOI (using ERDAS)
b. Convert to gray scale to limit the computational requirements

ii. Smoothing the image
Blur the image to remove noise. When a remotely sensed image is taken for edge extraction, the image is blurred to remove the noise; for this purpose a Gaussian filter is used. The signal-to-noise ratio is

considered for good detection of false positive edges (something marked as an edge which is not actually an edge) and false negative edges (failing to mark an existing edge); both error rates are monotonically decreasing functions of the SNR.

iii. The signal-to-noise ratio and localization may be defined as follows:

a. Let f(x) be the impulse response of the Gaussian filter.
b. Let G(x) denote the edge. There are two gradients, in the X and Y directions, and the edge is centered at x = 0.

The root-mean-square (RMS) noise response is given by

RMS = √( (1/n) Σᵢ xᵢ² ) (4.9)

where xᵢ represents an individual noise pixel and n is the number of noisy pixels in the image. For calculating the SNR, the signal represents the desired output and the noise represents the undesired output. This quantity is frequently calculated to assess how well the system works (David Landgrobe 2002, Li Jia-cun et al 2003), that is, how high the desired output is with respect to the undesired noise level; the higher the SNR, the better the system performance. Calculating the SNR requires knowledge of the average signal and noise levels. The SNR measures the amount of noise present in any image acquisition and takes into account all the different sources of noise in an image.

SNR = µ / σ (4.10)

where µ is the signal mean and σ is the standard deviation of the noise.

iv. The reciprocal of the root mean square determines the localization used to find the true edge:

Localization = 1 / RMS (4.11)

v. Finding gradients
The image gradient shows the change of colors. If f(x, y) is a scalar function and i, j are the unit vectors in the x and y directions, the gradient vector function is given by

∇f(x, y) = i · ∂f(x, y)/∂x + j · ∂f(x, y)/∂y (4.12)

where ∇ is the vector gradient operator. ∇f(x, y) points in the uphill direction of the intensity surface, and the magnitude of the vector gives the value of the slope. The corresponding scalar function is

|∇f(x, y)| = √( (∂f(x, y)/∂x)² + (∂f(x, y)/∂y)² ) (4.13)
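The mean-over-noise ratio of Equation (4.10) can be checked on synthetic data (a toy illustration; the flat patch, noise level and random seed are arbitrary choices):

```python
import numpy as np

def snr(signal, noisy):
    """Eq. (4.10): signal mean divided by the standard deviation of the noise."""
    noise = noisy - signal
    return signal.mean() / noise.std()

# A flat 100-valued patch plus Gaussian noise of sigma 5 should give an
# SNR close to 100 / 5 = 20.
rng = np.random.default_rng(0)
clean = np.full((64, 64), 100.0)
noisy = clean + rng.normal(0.0, 5.0, clean.shape)
ratio = snr(clean, noisy)   # roughly 20; higher means better system performance
```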

From this, the steepness of the slope is obtained for each point, but without directional information. The gradient may therefore be approximated over a mask of pixels as (Castleman et al 2010)

|∇f(x, y)| ≈ max( |f(x, y) − f(x + 1, y)|, |f(x, y + 1) − f(x + 1, y + 1)| ) (4.14)

Equation (4.14) computes the vertical and horizontal pixel differences (Castleman et al 2010). To enhance the appearance, a filtering function is used. To obtain sharp and fine detail in an image, the high-pass edge detection called the Kodi method is proposed in the present work. No division is performed, because division is not defined when all the input values are equal, in which case the output value is zero; hence the input values of low spatial frequency are smoothed. Finally the image mask contains only edges and zeros, as represented by the matrices below:

Kodi (horizontal):        Kodi (vertical):
-1 -2 -1                  -1  0  1
 0  0  0                  -2  0  2
 1  2  1                  -1  0  1

Linear features such as roads or residential boundaries are highlighted using these 3×3 masks. The magnitudes of the gradients, also known as edge strengths, should be large; they can be determined as a Euclidean distance measurement. Hence, by the Pythagorean relation, the gradient magnitude G is derived as (Castleman et al 2010)

G = √(Gx² + Gy²) (4.15)
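Applying the two masks and combining the responses with Equation (4.15) can be sketched as follows (a minimal illustration; the Sobel-like sign pattern of the masks is an assumption, since the scanned matrices show only the magnitudes, and the step-edge test image is arbitrary):

```python
import numpy as np

# The two 3x3 Kodi masks as printed above, with the minus signs restored
# (assumed sign pattern; the scan dropped the signs).
KODI_H = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], float)
KODI_V = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)

def apply_mask(img, mask):
    """Correlate a 3x3 mask over the image interior (border left at zero)."""
    out = np.zeros(img.shape)
    for i in range(1, img.shape[0] - 1):
        for j in range(1, img.shape[1] - 1):
            out[i, j] = np.sum(img[i-1:i+2, j-1:j+2] * mask)
    return out

img = np.zeros((10, 10)); img[:, 5:] = 10.0   # vertical step edge at column 5
gx = apply_mask(img, KODI_V)                  # responds to vertical features
gy = apply_mask(img, KODI_H)                  # responds to horizontal features
strength = np.sqrt(gx**2 + gy**2)             # edge strength, Eq. (4.15)
theta = np.arctan2(gy, gx)                    # edge direction, cf. Eq. (4.16)
```

Along the vertical step, gy is zero and gx carries the entire edge strength, so theta is 0 there, as the direction formula predicts.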

where Gx = ∂f/∂x and Gy = ∂f/∂y are the gradients found in the x and y directions respectively. Normally, as the raw edges are broad, they cannot be indicated at their exact locations. To determine the edge direction, the following expression is used:

θ = tan⁻¹(Gy / Gx) (4.16)

vi. Sharpening of edges
Non-maximal suppression is the technique Canny used to thin the blurred image edges. In Canny, the eight-connected neighborhood is used: the strength of the current pixel is compared along the positive and negative gradient directions and preserved if it is the largest; otherwise the value is suppressed (i.e., removed). In the present research, a new algorithm for strengthening the true edges is proposed: a 2D Non-Maxima Suppression (2D-NMS) over blocks of the image, given below. Since a 2D NMS is in general non-separable, an efficient solution is needed (David Landgrobe 2002); therefore the region algorithm is discussed here.

4.4 REGION ALGORITHM FOR NON MAXIMA SUPPRESSION

Two local maxima can be observed only if they are at least n + 1 pixels apart, so there can be at most one local maximum in each region of size (n+1) × (n+1). Hence this algorithm partitions the entire input image into a number of regions. Within each partitioned image block, it searches for the

greatest pixel element, which is known as the maximum candidate (www.csc.noaa.gov/crs/lca/faq_gen.html#wirs). With the help of this local maximum candidate, the full neighborhood is tested. The pseudo code is:

for all (i, j) in {0, n+1, 2(n+1), ...} × {0, n+1, 2(n+1), ...} do
    (mi, mj) ← (i, j)
    for all (i2, j2) in [i, i+n] × [j, j+n] do
        if img(i2, j2) > img(mi, mj) then (mi, mj) ← (i2, j2)
    for all (i2, j2) in [mi−n, mi+n] × [mj−n, mj+n] \ [i, i+n] × [j, j+n] do
        if img(i2, j2) > img(mi, mj) then goto failed
    MaxAt(mi, mj)
    failed: continue with the next block

If the block candidate is not a true local maximum, the worst case occurs: the algorithm tests up to (2n+1)² − 1 neighbors of the candidate per region. Hence the number of comparisons per pixel is limited to

Compare ≤ ( (2n+1)² − 1 ) / (n+1)² (4.17)
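The region algorithm above can be sketched in Python (a minimal illustration; the random test image, seed and n = 2 are arbitrary choices, and ties on plateaus are resolved permissively):

```python
import numpy as np

def region_nms(img, n):
    """Region-algorithm sketch: partition the image into (n+1)x(n+1) blocks,
    take each block's greatest pixel as the maximum candidate, and keep it
    only if nothing in its full (2n+1)x(2n+1) neighborhood is greater."""
    h, w = img.shape
    maxima = []
    for i in range(0, h, n + 1):
        for j in range(0, w, n + 1):
            # greatest pixel inside the block = the maximum candidate
            block = img[i:i+n+1, j:j+n+1]
            bi, bj = np.unravel_index(np.argmax(block), block.shape)
            mi, mj = i + bi, j + bj
            # verify the candidate against its full neighborhood (clipped at borders)
            lo_i, hi_i = max(mi - n, 0), min(mi + n + 1, h)
            lo_j, hi_j = max(mj - n, 0), min(mj + n + 1, w)
            if img[mi, mj] >= img[lo_i:hi_i, lo_j:hi_j].max():
                maxima.append((mi, mj))
    return maxima

rng = np.random.default_rng(1)
img = rng.random((12, 12))
peaks = region_nms(img, n=2)   # each peak is the max of its 5x5 neighborhood
```

Because each block is processed independently, the blocks could also be distributed across threads, which is what makes the scheme attractive for real-time pre-processing.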

An average-case analysis is also possible when the testing starts with the (n+1)²-th neighbor instead of the first one; the average number of comparisons per pixel (Equation 4.18) is then bounded by

Avg Compare ≤ 1 + ln 4 (4.19)

since the behavior of a pixel is independent of the size of the neighborhood. In this straightforward implementation using Equations 4.9 to 4.12, the average number of comparisons calculated using Equation 4.17 is 1.983 per pixel, and the worst case is bounded by 4. Though far from optimal, the algorithm requires no additional memory, and since each region is processed independently it can improve on the straightforward implementation in real-time image pre-processing scenarios (Ehsan et al 2008).

4.5 MULTI-THRESHOLDING AND EDGE TRACKING

The edge pixels remaining after non-maxima suppression are marked with their strength pixel by pixel. Most of them are true edges. However, bow noise and color variations may sometimes occur, owing to the rough surface of the image. To discriminate, it is better to use threshold values so that certain values are preserved to strengthen the pixels. The stronger pixels are marked with high threshold values, whereas the weaker edge pixels which are lower

than the low threshold are suppressed or marked as weak (Canny 1986). A range of 10 to 255 is taken for the threshold values, and the resulting weaker and stronger edges are illustrated in Figure 4.2. The interpreted stronger pixels are included in the final image; the weak edges are also included if they are connected to strong edges. The logic behind this is that noise and other variations are unlikely to produce a strong response under these threshold adjustments: only the true edges of the original image give rise to strong edges (John Cipar and Wood Cooley 2007). The following criteria are applied when setting the edge pixels, with high and low threshold values of 255 and 16 (2⁸ − 1 and 2⁴):

i. If a pixel is above the high threshold, it is set as an edge pixel.
ii. If a pixel is above the low threshold and is the neighbor of an edge pixel, it is also set as an edge pixel.
iii. If a pixel is above the low threshold but is not the neighbor of an edge pixel, it is not set as an edge pixel.
iv. If a pixel is below the low threshold, it is never set as an edge pixel.
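The four rules above amount to hysteresis tracking and can be sketched as follows (a minimal NumPy illustration; the grow-until-stable loop is one simple way to propagate 8-connectivity, and the tiny test array is arbitrary):

```python
import numpy as np

HIGH, LOW = 255, 16   # the high and low thresholds quoted in this section

def hysteresis(strength, high=HIGH, low=LOW):
    """Rules i-iv: pixels at the high threshold seed the edge map; pixels above
    the low threshold join only when 8-connected to an edge pixel; everything
    below the low threshold is never set."""
    edges = strength >= high
    weak = strength >= low
    while True:
        # neighbors: dilate the current edge map by one pixel in 8 directions
        nb = np.zeros_like(edges)
        nb[1:, :] |= edges[:-1, :];  nb[:-1, :] |= edges[1:, :]
        nb[:, 1:] |= edges[:, :-1];  nb[:, :-1] |= edges[:, 1:]
        nb[1:, 1:] |= edges[:-1, :-1];  nb[:-1, :-1] |= edges[1:, 1:]
        nb[1:, :-1] |= edges[:-1, 1:];  nb[:-1, 1:] |= edges[1:, :-1]
        grown = edges | (weak & nb)
        if (grown == edges).all():
            return grown
        edges = grown

s = np.zeros((5, 5))
s[2, 1], s[2, 2], s[2, 3] = 255, 20, 20   # one strong pixel, two weak neighbors
s[0, 4] = 20                               # isolated weak pixel: stays out
out = hysteresis(s)                        # marks exactly the three pixels in row 2
```

The two weak pixels survive because a chain of neighbors connects them to the strong seed, while the isolated weak pixel is discarded, exactly as rules ii and iii require.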

Figure 4.1 (a), (b) The original images for the Kodi edge detection method

Figure 4.2 The outputs of the proposed Kodi edge detection technique: (a) multi-thresholding, (b) edge tracking, (c) non-maxima suppression

Figure 4.3 The Canny outputs: (a) double thresholding, (b) edge tracking, (c) final output, (d) edges after non-maxima suppression

In the specification of the edge detection problem, edges are marked at local maxima of the response of a linear filter applied to the image. Detection is done by discriminating between signal and noise at the center of an edge. Comparing Figures 4.2 and 4.3, a great variation is seen when edge tracking is implemented. Using ERDAS, the edges are tracked accurately, and hence the maximum number of edge pixels is identified. As shown in Figure 4.3(d), with Canny's non-maxima suppression the edges are tracked far less accurately, since the nearby pixels could not be identified.

Table 4.1 Comparison of the edge detection techniques

S.No | Criteria | Marr | Canny | Ours
1 | *False positive | High | High | Reduced
2 | **False negative | Wrong direction measurement | Wrong direction measurement | Reduced
3 | Mean square distance | Spotty and not continuous | Spotty and not continuous | Spotty and not continuous
4 | Algorithm tolerance of corners and functions | Too spotty; noisy and too wide to identify middle feature arrangements | Noisy outlines; features distorted | Good outlines of land-covered features; feature arrangements recovered with color conditions
5 | CPU performance | 3.12 ms | 2.67 ms | 1.79 ms

From Table 4.1, the false positives and false negatives of the pixel setting are reduced. The mean square distance measured is as in the Marr and Canny methods; however, the algorithm tolerance is very much improved (86%) in the proposed method. In the Canny method the detected edges are noisy and the features are distorted; in the Marr algorithm the image contains even more noise and the identified features are too spotty. Hence the proposed method is more accurate, and its performance is tabulated below.

Table 4.2 Performance comparison of the three methodologies

S.No | Criteria | Marr | Canny | Ours
1 | *False positive | 64% | 56% | 32%
2 | **False negative | 73% | 61.20% | 44.34%
3 | Mean square distance | 70% | 45% | 30%
4 | Algorithm tolerance of finding pixels | 20% | 50% | 86%
5 | CPU performance | 68.1% | 76% | 84%

From the comparison chart shown below, it is noted that, compared with the Marr-Hildreth and Canny algorithms, the false positives and false negatives are reduced to 32% and 44% respectively. This is achieved by identifying the nearest pixel and its boundary.

Table 4.3 Performance comparison of Canny vs. Kodi edge detector

S.No | Canny edge detector | Kodi edge detector
1 | No analytic solution has been found. | The false positive and false negative edges are found by calculating the SNR.
2 | A variational approach has been developed. | A method for finding the true edges is developed (using Equation 4.3).
3 | Localization is less accurate. | The true edges are strengthened using the region algorithm for sharpening edges.
4 | Edge gradients are computed in two orthogonal directions only, i.e. row-wise and column-wise. | Using the local maximum (mi, mj), the full neighborhood is tested (Section 4.4).

Table 4.3 (Continued)

S.No | Canny edge detector | Kodi edge detector
5 | The impulse response of the optimal step-edge function was shown to be the first derivative of a Gaussian. | The second derivative of a Gaussian is used to test the impulse response.
6 | Thresholds were set according to the amount of noise in the image (low threshold = 40% of the high threshold, i.e. the range 1 to 255 = 2⁰ to 2⁸). | Using ERDAS 9.3, blurring the image removes the noise (the threshold was set from 64 to 255).
7 | When edge contours are locally straight, the Canny operator gives better results but with a maximum of false negative edges. | By setting the threshold values to the maximum of 255, the true edges are bright and easily measurable (using Equation 4.4).
8 | An unsolved problem in Canny is the integration of different edge detector outputs into a single description (Figure 4.2 c, g). | The average pixel comparison over the neighborhood is determined, so that the maximum number of true edges results from Kodi (Figure 4.2 e).
9 | The edge and ridge detector outputs were implemented but the results were inconclusive; there is no clear reason to prefer one edge type over another. | Results are conclusive because of the reduction in false positive edges (Table 4.1).
- | Reference: "Edge Detection - A Computational Approach", Canny (1986). | Reference: the implementation using ERDAS 9.3.

Chart 4.1 Comparison chart of the three different edge detection methods (criteria 1-4: *false positive, **false negative, mean square distance, and algorithm tolerance of corners and functions; series: Marr, Canny, Ours; vertical axis 0-100%)

4.6 CONCLUSION

The Marr and Canny methods still produce thick, spotty edges rather than continuous single-pixel edges, and finding the optimal way to combine the three colors remains challenging; in these methods the fine particle details are missing. In the present research, the proposed non-maximum suppression improves the granularity of the output for the identified edges. It is also notable that the average-case complexity may be below 1 comparison per pixel for small neighborhood sizes. The column-wise maxima are used in the left or right region of the sensed image. During the computation of CPU performance, it was observed that the Kodi edge detection takes only about 5.3 milliseconds. The comparison is also listed in Table 4.1.