A NOVEL METHOD FOR EDGE DRAWING OR LINKING USING SOBEL GRADIENT


R. PRIYAKANTH (1)
Associate Professor, Electronics & Communication Department, Devineni Venkata Ramana & Dr. Hima Sekhar MIC College of Technology, Kanchikacherla, Krishna District, Andhra Pradesh 521180, INDIA
priyakanthr@gmail.com

M. V. BHAVANI SANKAR (2)
Professor, Electronics & Communication Department, Devineni Venkata Ramana & Dr. Hima Sekhar MIC College of Technology, Kanchikacherla, Krishna District, Andhra Pradesh 521180, INDIA
bhavanisankar_mv@yahoo.com

Abstract

Edge detection is one of the most commonly used operations in computer vision and image analysis, because it underpins the identification and classification of objects in an image. In this paper we propose a new edge drawing algorithm that works by finding a set of edge points in an image and then linking these edge points by drawing edges between them. We believe that our edge detector is a novel step in edge detection and is well suited to the next generation of real-time image processing and computer vision applications.

Keywords: Edge Detection; Sobel Gradient; Raster Scan; Edge Linking.

1. Introduction

Image processing, in general, is a form of signal processing for which the input is an image, such as a photograph or a video frame; the output may be either another image or a set of characteristics or parameters related to the image under test. Image-processing algorithms mostly treat the image as a two-dimensional signal and apply standard signal-processing techniques to it. The acquisition of images (producing the input image) is referred to as imaging, and it can be done by an image-capturing device such as a digital camera. Digital image processing is the use of computer algorithms to perform processing on digital images.
As a subcategory or field of digital signal processing, digital image processing has many advantages over analog image processing. It allows a much wider range of algorithms to be applied to the input data and can avoid problems such as the build-up of noise and signal distortion during processing. Since images are defined over two dimensions, digital image processing may be modeled in the form of multidimensional systems.

Edge detection is a fundamental tool in image processing and computer vision, particularly in the areas of feature detection and feature extraction, which aim at identifying points in a digital image at which the image brightness changes sharply or, more formally, has discontinuities. The same problem of finding discontinuities in 1-D signals is known as step detection.

2. Edge Linking or Drawing

This algorithm runs in five successive steps:
1) Image Smoothing
2) RGB to Gray Conversion
3) Determination of Sobel Gradient
4) Global Thresholding
5) Edge Linking
To illustrate the output of each step of the algorithm, we have used a 512x512 RGB image.

2.1. Smoothing or Blurring the Image

The goal here is to reduce noise, and we do so by blurring each pixel with its neighbouring pixels within the image. Most smoothing methods are based on low-pass filters. We achieve this with a standard 5x5 rotationally symmetric (isotropic) Gaussian kernel with standard deviation σ = 1, as shown in Fig. 2.1. In 2-D, an isotropic (i.e. circularly symmetric) Gaussian has the form

ISSN : 0975-5462 Vol. 4 No.12 December 2012 4766
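The smoothing step (2.1) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names `gaussian_kernel` and `smooth` are ours, and edge-replicating padding at the borders is an assumption (the paper does not specify border handling).

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    """Build a rotationally symmetric 2-D Gaussian kernel, normalized to sum to 1."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    g = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    return g / g.sum()

def smooth(image, kernel):
    """Convolve a 2-D grayscale image with the kernel.

    Border handling (edge-replicating pad) is an assumption for illustration.
    """
    half = kernel.shape[0] // 2
    padded = np.pad(image.astype(float), half, mode="edge")
    out = np.empty(image.shape, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            # Weighted average of the neighbourhood: heaviest weight at the center.
            out[i, j] = np.sum(padded[i:i + kernel.shape[0],
                                      j:j + kernel.shape[1]] * kernel)
    return out

k = gaussian_kernel(5, 1.0)
```

Because the kernel is normalized to sum to 1, smoothing a constant image leaves it unchanged, and the center pixel always carries the largest weight.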

G(x, y) = (1 / (2πσ²)) · e^(−(x² + y²) / (2σ²))

where x and y are the distances from the origin along the horizontal and vertical axes respectively, and σ is the standard deviation of the Gaussian distribution. In two dimensions this formula produces a surface whose contours are concentric circles following a Gaussian distribution about the center point. Values from this distribution are used to build a convolution matrix which is applied to the original image. Each pixel's new value is set to a weighted average of that pixel's neighborhood. The original pixel's value receives the heaviest weight (having the highest Gaussian value), and neighboring pixels receive smaller weights as their distance to the original pixel increases. This results in a blur that preserves boundaries and edges better than other, more uniform blurring filters.

Fig. 2.1. 2-D Gaussian distribution with mean (0, 0) and σ = 1

2.1.1. Spatial Filtering

In the spatial domain, smoothing (Gaussian filtering) is done by convolving each point in the input array with a Gaussian kernel and summing the results to produce the output array. One of the principal justifications for using the Gaussian as a smoothing filter is its frequency response: most convolution-based smoothing filters act as low-pass frequency filters, so their effect is to remove high spatial frequency components from an image.

2.2. RGB to Gray Conversion

The second step in edge detection, after smoothing, is to convert the raw data to a grayscale image by eliminating the hue and saturation information while retaining the luminance (brightness). With an 8-bit wide pixel, each pixel in the image indicates the level of brightness, from 0 representing black to 255 representing white.

3. Determination of Sobel Gradient [3]

The Sobel operator performs a 2-D spatial gradient measurement on an image and so emphasizes regions of high spatial frequency that correspond to edges.
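The grayscale conversion (Sec. 2.2) can be sketched as below. The paper only states that hue and saturation are dropped while luminance is retained; the specific ITU-R BT.601 luminance weights (0.299, 0.587, 0.114) are a common choice and an assumption here, and the function name `rgb_to_gray` is ours.

```python
import numpy as np

def rgb_to_gray(rgb):
    """Convert an HxWx3 uint8 RGB image to an 8-bit grayscale image.

    Uses the common ITU-R BT.601 luminance weights (an assumption; the paper
    only says hue/saturation are discarded and brightness is kept).
    """
    weights = np.array([0.299, 0.587, 0.114])
    gray = rgb.astype(float) @ weights   # per-pixel weighted sum of R, G, B
    return np.clip(np.round(gray), 0, 255).astype(np.uint8)
```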
Typically it is used to find the approximate absolute gradient magnitude at each point in an input grayscale image. In theory at least, the operator consists of a pair of 3x3 convolution kernels as shown in Fig. 3.1; one kernel is simply the other rotated by 90°. This is very similar to the Roberts Cross operator. These kernels are designed to respond maximally to edges running vertically and horizontally relative to the pixel grid, one kernel for each of the two perpendicular orientations. The kernels can be applied separately to the input image, to produce separate measurements of the gradient component in each orientation (call these Gx and Gy). These can then be combined to find the absolute magnitude of the gradient at each point and the orientation of that gradient. The gradient magnitude is given by:

|G| = √(Gx² + Gy²)

Typically, an approximate magnitude is computed using:
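A minimal sketch of the Sobel gradient computation described above. The function name `sobel_gradient` is ours; as in the paper, perimeter pixels are skipped because the 3x3 table cannot be centered on them.

```python
import numpy as np

# Sobel convolution kernels (Fig. 3.1): KX responds to vertical edges,
# KY (KX rotated by 90 degrees) to horizontal edges.
KX = np.array([[-1, 0, 1],
               [-2, 0, 2],
               [-1, 0, 1]])
KY = np.array([[ 1,  2,  1],
               [ 0,  0,  0],
               [-1, -2, -1]])

def sobel_gradient(gray):
    """Return Gx, Gy and the approximate magnitude |Gx| + |Gy|.

    Border pixels are left at zero, since the derivative cannot be
    computed on the image perimeter.
    """
    h, w = gray.shape
    gx = np.zeros((h, w), dtype=float)
    gy = np.zeros((h, w), dtype=float)
    img = gray.astype(float)
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            patch = img[i - 1:i + 2, j - 1:j + 2]
            gx[i, j] = np.sum(patch * KX)
            gy[i, j] = np.sum(patch * KY)
    return gx, gy, np.abs(gx) + np.abs(gy)
```

On a vertical step edge, Gx peaks at the boundary while Gy stays zero, matching the orientation-selective behaviour described above.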

|G| ≈ |Gx| + |Gy|

The Sobel gradient of the grayscale image is obtained by first applying the Sobel masks for the x and y directions; the resulting gradient image is then thresholded to create a clear gradient. Edge information for a particular pixel is obtained by exploring the brightness of the pixels in its neighborhood. If all of the pixels in the neighborhood have the same brightness, there is no edge in the area; however, if some of the neighbors are much brighter than the others, an edge is present. Measuring the relative brightness of pixels in a neighborhood is mathematically analogous to calculating the derivative of brightness. The Sobel edge detection algorithm uses a 3x3 convolution table to store a pixel and its neighbors and calculate the derivatives. The table is moved across the image, pixel by pixel. For a 640x480 image, the convolution table moves through 304964 (638x478) different locations, because we cannot calculate the derivative for pixels on the perimeter of the image. The Sobel edge detection algorithm identifies both the presence of an edge and the direction of the edge. There are eight possible directions: north, northeast, east, southeast, south, southwest, west, and northwest.

Fig. 3.1. Sobel convolution kernels:

        Gx                  Gy
  -1   0   1          1   2   1
  -2   0   2          0   0   0
  -1   0   1         -1  -2  -1

4. Global Thresholding (Rafael C. Gonzalez, 2004)

Thresholding is the simplest method of image segmentation. Thresholding is a non-linear operation that converts a gray-scale image into a binary image, where the two levels are assigned to pixels that are below or above the specified threshold value. One method that is relatively simple, does not require much specific knowledge of the image, and is robust against image noise, is the following iterative method:
a) An initial threshold (T) is chosen; this can be done randomly or according to any other method desired.
b) The image is segmented into object and background pixels as described above, creating two sets:
   i. G1 = {f(m,n) : f(m,n) > T} (object pixels)
   ii. G2 = {f(m,n) : f(m,n) ≤ T} (background pixels)
   (note: f(m,n) is the value of the pixel located in the m-th column, n-th row)
c) The average of each set is computed:
   i. m1 = average value of G1
   ii. m2 = average value of G2
d) A new threshold is created that is the average of m1 and m2:
   i. T' = (m1 + m2)/2
e) Go back to step (b), now using the new threshold computed in step (d); keep repeating until the new threshold matches the one before it (i.e. until convergence has been reached).
This iterative algorithm is a special one-dimensional case of the k-means clustering algorithm, which has been proven to converge at a local minimum, meaning that a different initial threshold may give a different final result.

5. Edge Linking

Considering the thresholded image obtained above and setting an optimum edge length, we perform the operation of image morphing by removing isolated pixels, i.e. lone 1s and 0s. The image is then thinned by skeletonization [5], which removes the pixels on the boundaries of objects in the image without allowing objects to break apart. We then find the junctions in the edge information by testing whether the center pixel within a 3x3 neighbourhood is a junction or not. The procedure followed here is that the center pixel must be set and the number of transitions or crossings between 0 and 1 as one traverses the perimeter of the 3x3 region must be six or eight.
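The iterative global thresholding of Section 4 can be sketched as follows. The function name `iterative_threshold` is ours; starting from the image mean and stopping on a small tolerance `eps` are illustrative assumptions (the paper allows any initial T and iterates until the threshold stops changing).

```python
import numpy as np

def iterative_threshold(gray, t0=None, eps=0.5):
    """Iterative global threshold of Sec. 4 (a special 1-D case of k-means).

    t0 defaults to the image mean (an assumption: the paper allows any
    initial threshold); eps is an illustrative convergence tolerance.
    """
    img = gray.astype(float)
    t = img.mean() if t0 is None else float(t0)
    while True:
        g1 = img[img > t]            # object pixels
        g2 = img[img <= t]           # background pixels
        m1 = g1.mean() if g1.size else t
        m2 = g2.mean() if g2.size else t
        t_new = (m1 + m2) / 2.0      # step (d): new threshold
        if abs(t_new - t) <= eps:    # step (e): stop at convergence
            return t_new
        t = t_new
```

On a cleanly bimodal image the procedure converges in a few iterations to a threshold midway between the two modes.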

5.1. Raster Scan for Computation of Edge Lists [2]

In a raster scan, an image is subdivided into a sequence of (usually horizontal) strips known as "scan lines". Each scan line can be transmitted in the form of an analog signal as it is read from the video source, as in television systems, or can be further divided into discrete pixels for processing in a computer system. This ordering of pixels by rows is known as raster order, or raster scan order, as shown in Fig. 5.1.1.

Fig. 5.1.1. Raster Scan

We perform a raster scan through the image looking for edge points. The edge points are found one by one, based on eight-connectivity, and are encoded by a negative edge number within the image bounds. Whenever a junction point is hit we stop tracking; otherwise we continue tracking. We then track from the original point in the opposite direction, but only if the starting point was not a junction point. Finally, the edge image is negated to make the edge encodings positive.

5.2. Elimination of Isolated Edges and Spurs

After calculating the edge lists through the raster scan, we eliminate isolated edges and spurs that are below the minimum length. Each edge list has two end nodes: the starting point and the ending point. We build an adjacency/connection matrix for each node so that we can determine which edge lists, if any, are connected to a node. We also maintain an adjacency matrix for the edges themselves.

5.3. Edge Drawing [1] by Linking Points in Edge Lists

In this step, the goal is to link real edges by starting at an edge-list point and tracing a pathway to the next edge-list point along the same edge region. Red points in Fig. 5.3.1 illustrate actual edge points, and arrows indicate the pathway direction. Assume we start at the edge point in the middle. Since this is a horizontal edge, we first link pixels to the west by going over the pixels having the maximum gradient values. Fig. 5.3.1 gives the details of the linking process.
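The junction test from Section 5 (center pixel set, with six or eight 0<->1 transitions around the 3x3 perimeter), which the tracking above stops at, can be sketched as below. The function name `is_junction` and the clockwise perimeter ordering are ours.

```python
def is_junction(nbhd):
    """Junction test from Sec. 5: the center pixel of a binary 3x3
    neighbourhood is a junction if it is set and the number of 0<->1
    transitions around the perimeter is six or eight."""
    if nbhd[1][1] != 1:
        return False
    # Perimeter pixels in clockwise order, starting at the top-left corner.
    perim = [nbhd[0][0], nbhd[0][1], nbhd[0][2], nbhd[1][2],
             nbhd[2][2], nbhd[2][1], nbhd[2][0], nbhd[1][0]]
    crossings = sum(perim[k] != perim[(k + 1) % 8] for k in range(8))
    return crossings in (6, 8)
```

A T-shaped neighbourhood has six perimeter crossings (a junction); a straight line through the center has only four (not a junction).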
The numbers inside the green boxes indicate the gradient value at that pixel, i.e. |Gx| + |Gy|. The three edge-list points are marked with green boxes. Assume that we start at the edge-list point in the middle. As we move sideways, we simply look at the three neighboring pixels to the west and go to the one having the maximum value. Specifically, if we are at pixel (i, j) moving west and the edge passing through (i, j) is a horizontal edge, then we look at pixels (i-1, j-1), (i, j-1) and (i+1, j-1) and simply go to the one having the maximum value. This linking process makes us go over the real edge, marked with yellow circles, until we hit the next edge-list point. The real path drawn during the linking process is shown with yellow circles. This linking process draws a perfect contiguous one-pixel-wide edge. If the edge were vertical, we would instead trace a path up (north) and down (south) from the edge-list point. Specifically, going up from (i, j), we would look at pixels (i-1, j-1), (i-1, j) and (i-1, j+1) and go to the one having the maximum value. Similarly, going down from (i, j), we would look at pixels (i+1, j-1), (i+1, j) and (i+1, j+1) and go to the one having the maximum value.
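The westward walk described above can be sketched as below: from (i, j) we repeatedly step to whichever of the three western neighbours has the maximum gradient, stopping when the next edge-list point is reached. This is an illustrative sketch (the function name `walk_west` and the stopping set are ours), not the authors' implementation.

```python
import numpy as np

def walk_west(grad, i, j, stop):
    """Walk west over a gradient-magnitude map `grad` from (i, j),
    stepping each time to the maximum of the three western neighbours
    (i-1, j-1), (i, j-1), (i+1, j-1), until a pixel in `stop`
    (the set of edge-list points) is reached or the border is hit.
    Returns the traced path as a list of (row, col) pairs."""
    path = [(i, j)]
    while j > 0:
        # Candidate rows in the column to the west, clipped to the image.
        rows = [r for r in (i - 1, i, i + 1) if 0 <= r < grad.shape[0]]
        i = max(rows, key=lambda r: grad[r, j - 1])
        j -= 1
        path.append((i, j))
        if (i, j) in stop:
            break
    return path
```

Because each step moves exactly one column west and at most one row up or down, the traced path is a contiguous, one-pixel-wide chain, as the text describes.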

Fig. 5.3.1. Edge Linking Illustration

6. Results

The results of edge detection and linking are compared for the Canny and Sobel edge detectors. Fig. 6.1 is the input RGB image of size 512x512. The key observation is that some of the edges missed by the Canny edge detector with edge linking become visible with the Sobel edge detector with edge linking, at the expense of some false positives in the latter. The edges missing from the Canny edge-linking result in Fig. 6.2 are indicated with the numbers one, two and three in its Sobel edge-linking counterpart. Note that the linked edges shown in Fig. 6.2 are one pixel wide. The limitation, in the form of some false positives, can be observed in the bottom-right corner of the Sobel output compared with the Canny result.

Fig. 6.1. Input RGB image of size 512x512
Fig. 6.2. Edge-linked images for Sobel and Canny edge detection and linking

7. Conclusion and Future Scope

With this proposed work we show that edges missing from the edge-detected output can be recovered through the process of edge linking. This process of edge linking can be associated not only with the Canny or Sobel edge detection algorithms but also with other existing edge detection algorithms. The future scope for this work is to develop algorithms that eliminate the false edges in the result.

8. References

[1] Cihan Topal, Cüneyt Akınlar, Yakup Genç, "Edge Drawing: A Heuristic Approach to Robust Real-Time Edge Detection", International Conference on Pattern Recognition, 2010, pp. 2424-2427.
[2] Peter Kovesi, "Edges Are Not Just Steps", Proceedings of ACCV2002, The Fifth Asian Conference on Computer Vision, Melbourne, Jan. 22-25, 2002, pp. 822-827.
[3] I. Sobel, "Camera Models and Machine Perception", PhD thesis, Stanford Univ., 1970.
[4] V. S. Nalwa and T. O. Binford, "On Detecting Edges", IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 8, no. 6, pp. 699-714, Nov. 1986.
[5] L. A. Iverson and S. W. Zucker, "Logical/Linear Operators for Image Curves", IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 17, Oct. 1995.

[6] Frank Y. Shih, "Image Representation and Description", in A Comprehensive Guide to the Essential Principles of Image Processing and Pattern Recognition, John Wiley & Sons, 2010, pp. 219-268.

Author's Biography:

R. Priyakanth received the B.Tech degree in Electronics & Control Engineering from PVP Siddhartha Institute of Technology, Vijayawada, A.P., India, and the M.Tech degree in Communication and Radar Systems from KL University, Vaddeswaram, Guntur Dist., A.P., India, in 2002 and 2005 respectively. Since 2005 he has been working in the Department of Electronics and Communication Engineering at Devineni Venkata Ramana & Dr. Hima Sekhar MIC College of Technology, Kanchikacherla, A.P., India. He is presently pursuing a Ph.D. at JNTUK, Kakinada, A.P., India. His research interests include Multimodal Signal Processing and Biomedical Image Processing.

M. V. Bhavani Sankar received the B.Tech degree in Electronics & Communication Engineering from V.R. Siddhartha Engineering College, Vijayawada, A.P., India, and the M.Tech degree in Instrumentation and Control Systems from SVU College of Engineering, Tirupathi, A.P., India, in 1990 and 2002 respectively. Since 2009 he has been working in the Department of Electronics and Communication Engineering at Devineni Venkata Ramana & Dr. Hima Sekhar MIC College of Technology, Kanchikacherla, A.P., India. His research interests include Image Processing and Embedded Systems.