NEW WAVELET-BASED TECHNIQUES FOR EDGE DETECTION

YAHIA S. AL-HALABI, HESHAM JONDI ABD
Computer Science Department, Princess Sumaya University for Technology, Amman, AL-Jubaiha, Jordan
E-mail: yahia_1111@yahoo.com

ABSTRACT

Image segmentation and edge detection are of great interest in image processing prior to the image recognition step. Segmentation has important applications in military, biomedical, space and environmental fields. In this paper, we apply the spatial domain methods: thresholding and edge-based methods (the Roberts, Sobel, Prewitt and Laplacian operators). In this research, we found that using the Fourier transform in edge detection applications yields poor results and has serious drawbacks. The wavelet transform plays a very important role in image processing analysis because of its fine results when used in multi-resolution, multi-scale modeling. Unlike the discrete cosine transform or the Fourier transform, the wavelet transform offers a natural decomposition of images at multiple resolutions. The resulting representation provides an attractive tradeoff between spatial and frequency resolution, where the human visual system can be better exploited. The wavelet transform also reveals an important feature not found in the conventional transforms: its basis function can be designed to fit a given problem exactly. Two new wavelet-based edge detection techniques are presented in this paper. The first is called the RC-Algorithm and the second the RCD-Algorithm. These two techniques have produced better results than existing techniques. Edges extracted using the RC-Algorithm and the RCD-Algorithm are sharper than edges extracted using other techniques, and the RCD-Algorithm gives better results than the RC-Algorithm in most cases. Both algorithms respond well even to weak transitions, and the RCD-Algorithm can handle noisier images better than other techniques.

Key-Words: Thresholding, Segmentation, Wavelet, Roberts operator, Sobel operator, Prewitt operator, Laplacian operator, RC-Algorithm, RCD-Algorithm, Fourier transform

1 Introduction

The term image refers to a two-dimensional light intensity function f(x, y), where x and y denote spatial coordinates and the value of f at any point (x, y) is proportional to the brightness (or gray level) of the image at that point [4]. A digital image is an image f(x, y) that has been discretized both in spatial coordinates and in brightness. A digital image can be considered as a matrix whose row and column indices identify a point in the image and whose corresponding element value identifies the gray level at that point. The elements of such a digital array are called image elements, picture elements, pixels or pels, the last two being commonly used abbreviations of picture elements [4].

Computer imaging is a fascinating and exciting field to be involved in today. The advent of the information superhighway, with its ease of use via the World Wide Web, combined with advances in computer power, has brought the world into our offices and into our homes. One of the most interesting aspects of this information revolution is the ability to send and receive complex data that transcend ordinary written text. Visual information, transmitted in the form of digital images, is becoming a major method of communication in the modern age. Computer imaging can be defined as the acquisition and processing of visual information by computer [1].
The importance of computer imaging derives from the fact that our primary sense is our visual sense. The information that can be conveyed in images has been known throughout

the centuries to be extraordinary: one picture is worth a thousand words. In image processing, the input is an image and the output is also an image, while in computer vision the input is an image and the output consists of information about the image itself that may be used by another process; thus, the output is no longer an image. Fig. 1 illustrates a block diagram of the computer vision process [2]. It is quite clear that image processing techniques are always the first stage of computer vision; moreover, image processing algorithms are sometimes used as feature extraction algorithms in the first step of image analysis procedures.

Fig. 1. Computer vision process.

This paper discusses different methods of segmentation and edge detection. The spatial domain methods used for image segmentation and edge detection are described in Section 2. The two new edge detection techniques using the wavelet transform are presented in Section 3. A comparison between the new techniques and other known techniques, together with an evaluation of the results, is presented in Section 4.

2 SPATIAL DOMAIN METHODS

Thresholding is a particularly useful region approach for scenes containing solid objects resting upon a contrasting background [2],[3],[5]. All methods in this approach depend on the gray-level histogram, which can be defined as a function showing, for each gray level, the number of pixels in the image that have that gray level. The abscissa is the gray level and the ordinate is the frequency of occurrence (number of pixels) [4],[6],[9].

In fixed thresholding, one assumes that the objects have pixel values which are generally different from the background. A binary output image, containing only 0s (background) and 1s (objects), may be created by applying the threshold [8],[9]:

g(x, y) = 1 if f(x, y) > T, and g(x, y) = 0 if f(x, y) <= T    (1)

where T is a specified threshold value. Automatic or semi-automatic methods for finding thresholds are available in [4],[6]. Thresholds may be defined between peaks in the one-dimensional histogram, for example by combining peaks that are close together, or combining peaks for which the ratio of the minimum between the two peaks is sufficiently small.
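The following is a minimal sketch (not the authors' code) of the fixed thresholding rule in Eq. (1), assuming the image is held in a NumPy array; the example threshold T = 127 is purely illustrative.

```python
import numpy as np

def fixed_threshold(f, T):
    """Eq. (1): g(x, y) = 1 where f(x, y) > T, and 0 otherwise."""
    f = np.asarray(f, dtype=float)
    return (f > T).astype(np.uint8)

# Illustrative use on a small synthetic gray-level image.
f = np.array([[ 10, 200,  50],
              [130, 120, 255],
              [  0,  90, 180]])
g = fixed_threshold(f, T=127)   # 1s mark object pixels, 0s mark the background
```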

2.1 Edge-based Methods

An edge is the boundary between two regions with relatively distinct gray-level properties. Several methods for edge detection are known, as described below.

The Sobel edge detection operation [4] extracts all edges in an image, regardless of direction. The Sobel operation has the advantage of providing both a differencing and a smoothing effect. The Sobel masks are defined as shown in Fig. 2.

Fig. 2. Sobel operators. The 3x3 image region is labeled z1, z2, z3 (top row), z4, z5, z6 (middle row), z7, z8, z9 (bottom row). Mask used to compute Gx: [-1 -2 -1; 0 0 0; 1 2 1]. Mask used to compute Gy: [-1 0 1; -2 0 2; -1 0 1].

It is implemented as the sum of two directional edge enhancement operations. The resulting image appears as an omnidirectional outline of the objects in the original image: constant brightness regions become black, while changing brightness regions become highlighted. Derivatives may be implemented in digital form in several ways; however, the Sobel operators have the advantage of providing both a differencing and a smoothing effect. Because derivatives enhance noise, the smoothing effect is a particularly attractive feature of the Sobel operators (Gonzalez et al., 1993). From Fig. 2, the derivatives based on the Sobel operator masks are given in Equations (2) and (3) below:

Gx = (z7 + 2z8 + z9) - (z1 + 2z2 + z3)    (2)
Gy = (z3 + 2z6 + z9) - (z1 + 2z4 + z7)    (3)

The z's are the gray levels of the pixels overlapped by the masks at any location in the image. Computation of the gradient at the location of the center of the masks then uses Equation (4) or (5), which gives one value of the gradient:

∇f = mag(∇F) = [Gx^2 + Gy^2]^(1/2)    (4)

which is equivalent to ∇f = [(∂f/∂x)^2 + (∂f/∂y)^2]^(1/2) and is commonly approximated by

∇f ≈ |Gx| + |Gy|    (5)

To get the next value, the masks are moved to the next pixel location and the procedure is repeated; after the procedure has been completed for all possible locations, the result is a gradient image of the same size as the original. These steps form the main features of Sobel edge detection. As usual, mask operations on the border of an image are implemented by using the appropriate partial neighborhoods. The result of applying these masks to the Lena image is shown in Figs. 3 to 5 below. Fig. 3 shows the result of computing Gx with the mask; the strongest response produced by Gx is expected on edges perpendicular to the x axis. This is evident in Fig. 4, which shows strong responses along horizontal edges and a relative lack of response along vertical edges. The reverse situation occurs upon computation of Gy, as shown in Fig. 4. Combining these results using Eq. (5) yields the gradient image shown in Fig. 5.

Fig. 3 - Fig. 5. Lena test image: the original image; the result of applying the mask of Eq. (2) to obtain Gx; the result of applying the mask of Eq. (3) to obtain Gy (Case-1); and the complete gradient image obtained using Eq. (5) (Case-2).

The Prewitt operator extracts the north, northeast, east, southeast, south, southwest, west or northwest edges in an image. The resulting image appears as a directional outline of the objects in the original image: constant brightness regions become black and changing brightness regions become highlighted. The Prewitt gradient operator is more immune to image noise than the shift-and-difference operation, and provides stronger directional edge discrimination. The mask coefficients add to 0. The eight directional masks are:

N mask:  [ 1  1  1;  1 -2  1; -1 -1 -1]
NE mask: [ 1  1  1; -1 -2  1; -1 -1  1]
E mask:  [-1  1  1; -1 -2  1; -1  1  1]
SE mask: [-1 -1  1; -1 -2  1;  1  1  1]
S mask:  [-1 -1 -1;  1 -2  1;  1  1  1]
SW mask: [ 1 -1 -1;  1 -2 -1;  1  1  1]
W mask:  [ 1  1 -1;  1 -2 -1;  1  1 -1]
NW mask: [ 1  1  1;  1 -2 -1;  1 -1 -1]

The results of applying these masks to the Lena test image are shown in Fig. 6.

Fig. 6. Application of the eight orientation masks of the Prewitt operator to the Lena test image: (a) northern, (b) northeastern, (c) eastern, (d) southeastern, (e) southern, (f) southwestern, (g) western and (h) northwestern gradients.
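The following is a minimal sketch of the Sobel computation in Eqs. (2), (3) and (5), not the paper's implementation: the masks follow Fig. 2, SciPy's correlate applies them as sums of products at every pixel, and the absolute-value approximation of Eq. (5) combines the two responses. The border handling (edge replication via mode="nearest") is an assumption standing in for the partial neighborhoods mentioned above.

```python
import numpy as np
from scipy.ndimage import correlate

# Sobel masks from Fig. 2 (the 3x3 neighborhood z1..z9 is taken row by row).
SOBEL_GX = np.array([[-1, -2, -1],
                     [ 0,  0,  0],
                     [ 1,  2,  1]])   # Eq. (2): (z7 + 2z8 + z9) - (z1 + 2z2 + z3)
SOBEL_GY = np.array([[-1,  0,  1],
                     [-2,  0,  2],
                     [-1,  0,  1]])   # Eq. (3): (z3 + 2z6 + z9) - (z1 + 2z4 + z7)

def sobel_gradient(image):
    """Gradient image of the same size as the input, using the |Gx| + |Gy| form of Eq. (5)."""
    f = np.asarray(image, dtype=float)
    gx = correlate(f, SOBEL_GX, mode="nearest")   # borders handled by replicating edge pixels
    gy = correlate(f, SOBEL_GY, mode="nearest")
    return np.abs(gx) + np.abs(gy)
```

The eight Prewitt compass masks listed above can be applied with the same correlate call; keeping each directional response separately gives the directional outlines of Fig. 6.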

3 THE NEW WAVELET-BASED EDGE DETECTION TECHNIQUES

In this section, we present two new wavelet-based edge detection techniques. Both techniques depend on the following observations. The high-pass filter is used to represent the differences between points, also called the details; edges can therefore be extracted from the difference between each pixel and its neighbors. In both techniques, we apply a convolution between the signal (the image data) and the high-pass filter, and the results of the convolution are used without downsampling.
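As a small sketch of this core idea (not the authors' code), PyWavelets exposes the high-pass decomposition filter of a wavelet as Wavelet.dec_hi; convolving an image row with it, without downsampling, produces one detail value per pixel that is large only where the intensity changes. The choice of the Haar wavelet and the synthetic row are illustrative.

```python
import numpy as np
import pywt

hp = np.array(pywt.Wavelet("haar").dec_hi)   # high-pass decomposition filter (for Haar, a +-1/sqrt(2) pair)

row = np.array([10.0, 10.0, 10.0, 200.0, 200.0, 200.0])   # one image row containing a single step edge
detail = np.convolve(row, hp, mode="same")                 # same length as the row, i.e. no downsampling

print(np.round(detail, 2))   # near zero in the flat regions, a large response at the step
```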

3.1 Row and Column (RC) Convolution Algorithm

Input:
  WT, the wavelet type to be used.
  IM, a two-dimensional gray-scale image.
  R, the number of rows in IM.
  C, the number of columns in IM.
Output:
  EM, a two-dimensional image in which the edges are extracted.
The Process:
Begin
1) HI_D = HPD_Filter(WT);
   % HI_D holds the high-pass decomposition filter for the specific wavelet type WT. %
2) For each row i in IM (1 <= i <= R), compute RDiff(i, :) = Convolution(HI_D, IM(i, :));
   % RDiff is a two-dimensional matrix that represents the differences between pixels in each row. IM(i, :) denotes row i; the ":" sign means all columns in that row. Convolution is a function that returns the convolution of the high-pass filter with the row image data. %
3) For each column j in IM (1 <= j <= C), compute CDiff(:, j) = Convolution(HI_D, IM(:, j));
   % CDiff is a two-dimensional matrix that represents the differences between pixels in each column. IM(:, j) denotes column j; the ":" sign means all rows in that column. %
4) EM(i, j) = RDiff(i, j) + CDiff(i, j), for 1 <= i <= R and 1 <= j <= C;
   % EM holds the magnitude of the vector formed by the RDiff and CDiff values. %
5) EM = Scale(EM);
   % Scale is a function that returns the EM matrix with each value scaled to lie between 0 and 1. %
6) Threshold(EM);
   % Thresholding (Section 2) can be used to extract the most significant pixels, which represent the edges. %
End of Algorithm.

3.2 Row, Column and Diagonal Convolution (RCD) Algorithm

Input:
  WT, the wavelet type to be used.
  IM, a two-dimensional gray-scale image.
  R, the number of rows in IM.
  C, the number of columns in IM.
Output:
  EM, a two-dimensional image in which the edges are extracted.
The Process:
Begin
1) HI_D = HPD_Filter(WT);
   % HI_D holds the high-pass decomposition filter for the specific wavelet type WT. %
2) For each row i in IM (1 <= i <= R), compute RDiff(i, :) = Convolution(HI_D, IM(i, :));
   % RDiff is a two-dimensional matrix that represents the differences between pixels in each row. %
3) For each column j in IM (1 <= j <= C), compute CDiff(:, j) = Convolution(HI_D, IM(:, j));
   % CDiff is a two-dimensional matrix that represents the differences between pixels in each column. %
4) EM(i, j) = RDiff(i, j) + CDiff(i, j), for 1 <= i <= R and 1 <= j <= C;
   % EM holds the magnitude of the vector formed by the RDiff and CDiff values. %
5) EM = Scale(EM);
   % Scale is a function that returns the EM matrix with each value scaled to lie between 0 and 1. %
6) Threshold(EM);
   % Thresholding (Section 2) can be used to extract the most significant pixels, which represent the edges. %
End of Algorithm.
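A minimal Python sketch of the RC procedure above, under stated assumptions: PyWavelets supplies the high-pass decomposition filter (step 1), NumPy's convolve with mode="same" provides the row and column convolutions without downsampling (steps 2 and 3), absolute row and column differences are summed as one reading of the magnitude in step 4, and the final threshold value is illustrative, not taken from the paper.

```python
import numpy as np
import pywt

def rc_edges(im, wavelet_type="haar", t=0.2):
    """Sketch of the RC (Row and Column) convolution algorithm.

    im           : two-dimensional gray-scale image (NumPy array), the IM of the pseudocode.
    wavelet_type : WT in the pseudocode; 'haar' is the paper's preferred choice.
    t            : illustrative threshold applied to the scaled edge map EM.
    """
    im = np.asarray(im, dtype=float)
    hi_d = np.array(pywt.Wavelet(wavelet_type).dec_hi)          # step 1: high-pass decomposition filter

    # Steps 2 and 3: convolve every row and every column with the filter,
    # keeping the full resolution (no downsampling).
    rdiff = np.apply_along_axis(lambda r: np.convolve(r, hi_d, mode="same"), 1, im)
    cdiff = np.apply_along_axis(lambda c: np.convolve(c, hi_d, mode="same"), 0, im)

    # Step 4: combine the row and column differences into the edge map EM.
    em = np.abs(rdiff) + np.abs(cdiff)

    # Step 5: scale EM so that every value lies between 0 and 1.
    em = (em - em.min()) / (em.max() - em.min() + 1e-12)

    # Step 6: threshold EM to keep only the most significant pixels as edges.
    return (em > t).astype(np.uint8)
```

Calling rc_edges(image, "haar") corresponds to the WT = Haar case discussed in Section 4; "db2", "db3" and "db4" are the other wavelet types the paper tests.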

4 RESULTS AND CONCLUSION

In this section, we show the results obtained using different edge detection techniques: the Roberts, Prewitt, Sobel and Laplacian operators, the RC-Algorithm and the RCD-Algorithm, together with an evaluation of the results.

Fig. 7. Sample of the results obtained using different techniques for edge detection.

Fig. 8 shows the result of applying the RC-Algorithm to the Lena image using (a) WT = Haar wavelet, (b) WT = DB2 wavelet, (c) WT = DB3 wavelet and (d) WT = DB4 wavelet. It is clear that the most efficient results are obtained when WT = Haar wavelet; this is because the Haar wavelet is stricter on breakdown points [3], and for edge detection, breakdown points represent edges.

Fig. 8. The results of applying the RC-Algorithm to the Lena image with the Haar, DB2, DB3 and DB4 wavelets.

Fig. 9 shows the results of applying the RC-Algorithm (with WT = Haar wavelet) to different images.

Fig. 9. The results of applying the RC-Algorithm to different images (wheel and blackboard) with WT = Haar wavelet.

Fig. 10. The results of applying the RCD-Algorithm to different images (Lena, wheel and blackboard) with WT = Haar wavelet.

Objective metrics are often of limited use in practical applications, so we take a subjective look at the results of the edge detectors. The human visual system is still superior, by far, to any computer vision system that has yet been devised and is

often used as the final judge in application development. From the previous results, we conclude the following:

1) Edges extracted using the RC-Algorithm and the RCD-Algorithm are sharper than edges extracted using other techniques, and the RCD-Algorithm clearly gives better results than the RC-Algorithm.
2) Using the RC-Algorithm and the RCD-Algorithm, objects in the image can be distinguished more easily.
3) The RC-Algorithm and the RCD-Algorithm respond well even to weak transitions.
4) The RCD-Algorithm can handle noisier images better than other techniques.

REFERENCES:

[1] Abdou, I. E. and Pratt, W. K., Quantitative Design and Evaluation of Enhancement/Thresholding Edge Detectors, Proc. IEEE, 67(5), 1979, pp. 753-763.
[2] Aydin, T., Yemez, Y., Anarim, E. and Sankur, B., Multidirectional and Multiscale Edge Detection via M-Band Wavelet Transform, IEEE Trans. on Image Processing, 5(9), 1996, pp. 1370-1377.
[3] Biajaoui, A. and Rue, E. S., Wavelet Study of the Distant Universe, Proc. IEEE, 84(4), 1996, pp. 670-679.
[4] Gonzalez, R. C. and Woods, R. E., Digital Image Processing, 3rd Edition, Addison-Wesley Publishing Company, USA, 2008.
[5] Ryan, T. W., Sanders, L. D., Fisher, H. D. and Iverson, A. E., Image Compression by Texture Modeling in the Wavelet Domain, IEEE Trans. on Image Processing, 5(1), 1996, pp. 26-36.
[6] Wilson, R., Multiresolution Image Modeling, Electronics & Communication Engineering Journal, 1997, pp. 90-96.
[7] Buenaposada Biencinto, J. M. and Baumela Molina, L., Real-Time Tracking and Estimation of Plane Pose, Proc. of the International Conference on Pattern Recognition, IEEE, Quebec, August 2002, Vol. II, pp. 697-700.
[8] Vetterli, M., Reproducible Research in Signal Processing: What, Why, and How, IEEE Signal Processing Magazine, 26(3), 2009, pp. 37-47.
[9] Han et al., Detecting Objects in Image Sequences Using Rule-Based Control in an Active Contour Model, IEEE Trans. Biomed. Eng., Vol. 5, 2003, pp. 705-710.

ABOUT THE AUTHOR: Professor Yahia AL-Halabi received his Ph.D. degree in Computer Science in 1982 from Clarkson University, Potsdam, NY, USA, and his M.Sc. degree in Numerical Simulation from the University of Jordan, Hashemite Kingdom of Jordan. Since 1982, he has served as a professor of Computer Science, Chairman of the Computer Science Department and Director of the Computer Center at the University of Jordan, Dean of the Faculty of Arts and Sciences at Amman Private University, and Dean of the Faculty of Computer Studies at the Arab Open University, Kuwait headquarters office. He joined Princess Sumaya University for Technology in 2005 and continues his research in image processing and simulation. From September 2008 to 2010, he was appointed Dean of the King Hussein School for Information Technology at Princess Sumaya University for Technology. He is involved in many scientific organizations, locally and internationally, serves as an editor for many international research journals, and is a member of the accreditation and validation group for higher education in Jordan and abroad.