Edges Are Not Just Steps


ACCV2002: The 5th Asian Conference on Computer Vision, 23-25 January 2002, Melbourne, Australia.

Peter Kovesi
Department of Computer Science & Software Engineering
The University of Western Australia
Crawley, W.A. 6009
pk@cs.uwa.edu.au

Abstract

Images contain step edges, line features, and many feature types that lie somewhere between the two. Traditional gradient based edge operators are tuned to detect step edges, and hence are unable to properly detect and localize other feature types. The Phase Congruency detector is used as a tool to identify the different feature types found in images. It is shown that there is a continuum of feature types between step edges and lines, and that most images have a broad distribution of all these feature types. It is concluded that in typical images gradient based operators detect and localize only a small fraction of features correctly.

1. Introduction

In general the edge detection literature has concentrated on the detection of step edges, the principal criteria usually being good detection and localization of features in the presence of noise. This is typified by the work of Sobel [18], Marr and Hildreth [12], Canny [3, 4], and many others. A very limited amount of work has been done on the detection of other kinds of features. Some exceptions to this are the line detection work of Canny [3], the detection of peaks and roofs by Perona and Malik [17], the detection of edges and bars by Wang and Jenkin [20], and the catalogue of feature types developed by Aw, Owens and Ross [1, 2].

The emphasis on the detection of step edges is misplaced. Images contain a wide variety of edge types, many of which are somewhere between a step and a line. This paper shows that one can describe a continuum of feature types between step edges and lines, and that most images have a broad distribution of all these feature types.

The Fourier series of the square and triangular waveforms shown in Figure 1 are

    s(x) = \sum_{n=0}^{\infty} \frac{1}{2n+1} \sin[(2n+1)x]                 (square wave)

    s(x) = \sum_{n=0}^{\infty} \frac{1}{(2n+1)^2} \sin[(2n+1)x + \pi/2]     (triangular wave)
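These two series are easy to evaluate numerically. The following Python sketch (my own illustration, not code from the paper) sums truncated versions of them and confirms the familiar limits: the square wave takes the value \pi/4 on (0, \pi), and the triangular wave peaks at \pi^2/8.

```python
import math

def series(x, phi=0.0, p=1.0, terms=200):
    """Partial sum of s(x) = sum_{n>=0} sin((2n+1)x + phi) / (2n+1)**p.

    phi = 0, p = 1 gives the square wave; phi = pi/2, p = 2 the triangular wave.
    """
    return sum(math.sin((2 * n + 1) * x + phi) / (2 * n + 1) ** p
               for n in range(terms))

# For the square wave the sum converges to pi/4 on (0, pi) and -pi/4 on
# (-pi, 0): a step at x = 0, which is exactly where every component
# sin((2n+1)x) crosses zero rising, i.e. where the phases are congruent.
val = series(math.pi / 2)   # close to pi/4
```

At the step itself every harmonic is at the same point of its cycle; that congruence of phase, rather than any gradient property, is what the rest of the paper builds on.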
The emphasis of computer vision research on the detection of step edges has resulted in edge detectors that can fail to find, and/or incorrectly localize, valid features that are readily recognized by the human eye.

Figure 1. Fourier series of square and triangular waveforms, and the sum of the first four terms of each.

2. What is a feature?

The classical approach to edge detection has been to think of edges as points of high intensity gradient. Rather than thinking of features in differential terms, an alternative approach is to think of features in the frequency domain. Image profiles can be thought of as being formed by a Fourier series, as shown in Figure 1. Notice how the Fourier components are all in phase at the point of the step in the square wave, and at the peaks and troughs of the triangular wave. Congruency of phase at any angle produces a clearly perceived feature. We can generalize our Fourier series expression to generate a wide range of waveforms with the equation

    s(x) = \sum_{n=0}^{\infty} \frac{1}{(2n+1)^p} \sin[(2n+1)x + \phi]        (1)

where \phi is the phase offset defining the angle at which phase congruency occurs at features, and p is the exponent that describes the rate of amplitude decay with frequency in the series.

Figure 2 shows three gratings constructed using equation (1) for amplitude decay exponents of 0.5, 1, and 1.5. In each grating \phi, the offset at which congruence of phase occurs, is varied from 0 at the top of the grating to \pi/2 at the bottom. Visually this corresponds to a pattern in which we perceive a step edge at the top changing to a line feature at the bottom. This interpretation of the feature types remains the same for all three gratings. This indicates that changing the amplitude decay exponent, while varying the sharpness of the features, does not change our visual classification of feature type from step at the top of each grating to line at the bottom. This is despite the large variation in profiles. It would appear that the perceived feature type is purely a function of the angle at which phase congruence occurs.

A grating of this kind reveals the limitations of gradient based edge detectors. Figure 3 shows the response of the Canny detector. At the top of the grating pattern, where the feature is a pure step edge, one gets a single response. However, at all other points we obtain a double response, one on each side of the feature point. The Canny detector marks points of maximal intensity gradient (as it was designed to do). This example shows that points of maximal gradient do not necessarily correspond to the locations where we perceive features.

3. The Phase Congruency detector

So, what is a feature? Rather than assume a feature is a point of maximal intensity gradient, the Local Energy Model postulates that features are perceived at points in an image where the Fourier components are maximally in phase. This model was developed by Morrone et al. [15] and Morrone and Owens [14]. Other work on this model of feature perception can be found in Morrone and Burr [13], Owens et al. [16], Venkatesh and Owens [19], and Kovesi [6, 7, 8, 10, 11].
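The test gratings described above are simple to reproduce. The following Python sketch (my own code, not the paper's MATLAB implementation; function names are hypothetical) synthesizes a grating from equation (1), with the phase offset varying from 0 (step) at the top row to \pi/2 (line) at the bottom row:

```python
import math

def grating_row(phi, p, width=256, terms=8):
    """One profile of the test grating: equation (1) truncated to `terms`
    odd harmonics, sampled over one period [0, 2*pi)."""
    return [
        sum(math.sin((2 * n + 1) * (2 * math.pi * i / width) + phi) / (2 * n + 1) ** p
            for n in range(terms))
        for i in range(width)
    ]

def grating(p, height=64, width=256):
    """Full grating: phase offset phi varies from 0 (step edge, top row)
    to pi/2 (line feature, bottom row)."""
    return [grating_row(math.pi / 2 * r / (height - 1), p, width)
            for r in range(height)]

img = grating(p=1.0)   # top row: square-wave-like step; bottom row: line
```

Running any gradient based detector over such an array reproduces the behaviour shown in Figure 3: a single response on the top rows and a double response everywhere else.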
The work of Morrone and Burr [13] has shown that this model successfully explains a number of psychophysical effects in human feature perception. The measurement of phase congruency at a point in a signal can be seen geometrically in Figure 4. The local Fourier components at a location x in the signal each have an amplitude A_n(x) and a phase angle \phi_n(x). Figure 4 plots these local Fourier components as complex vectors adding head to tail. The sum of these components projected onto the real axis represents F(x), the original signal. The magnitude of the vector from the origin to the end point is the Local Energy, E(x).

Figure 2. Interpolation from a step feature to a line feature by continuously varying the angle of congruence of phase from 0 at the top to \pi/2 at the bottom. Three amplitude decay exponents, 0.5, 1.0 and 1.5, are shown in subplots (a), (b) and (c) respectively. Profiles of the gratings corresponding to congruence of phase at 0, \pi/6, \pi/3 and \pi/2 are shown on the right.

Figure 3. Raw Canny edge strength response on the grating shown in Figure 2(b).

Figure 4. Polar diagram showing the Fourier components at a location in the signal plotted head to tail. The weighted mean phase angle is given by \bar{\phi}(x). The noise circle represents the level of E(x) one can expect just from the noise in the signal.

The measure of phase congruency developed by Morrone et al. [15] is

    PC_1(x) = \frac{E(x)}{\sum_n A_n(x)}.        (2)

Under this definition phase congruency is the ratio of E(x) to the overall path length taken by the local Fourier components in reaching the end point. If all the Fourier components are in phase, all the complex vectors are aligned and the ratio E(x) / \sum_n A_n(x) is one. If there is no coherence of phase the ratio falls to a minimum of 0. Phase congruency provides a measure that is independent of the overall magnitude of the signal, making it invariant to variations in image illumination and/or contrast. Fixed threshold values of feature significance can then be used over wide classes of images.

It can be shown that this measure of phase congruency is a function of the cosine of the deviation of each phase component from the mean:

    PC_1(x) = \frac{\sum_n A_n(x) \cos(\phi_n(x) - \bar{\phi}(x))}{\sum_n A_n(x)}.        (3)

This measure of phase congruency does not provide good localization, as it is a function of the cosine of the phase deviation; it is also sensitive to noise. Kovesi [8, 10] developed a modified measure consisting of the cosine minus the magnitude of the sine of the phase deviation; this produces a more localized response. The new measure also incorporates noise compensation:

    PC_2(x) = \frac{\sum_n W(x) \lfloor A_n(x)(\cos(\phi_n(x) - \bar{\phi}(x)) - |\sin(\phi_n(x) - \bar{\phi}(x))|) - T \rfloor}{\sum_n A_n(x) + \varepsilon}.        (4)

The term W(x) is a factor that weights for frequency spread (congruency over many frequencies is more significant than congruency over a few frequencies). A small constant \varepsilon is incorporated to avoid division by zero. Only energy values that exceed T, the estimated noise influence, are counted in the result.
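As a sketch of how these two measures behave at a single signal location, the following Python fragment (my own illustration, not Kovesi's implementation; in practice the amplitudes and phases come from quadrature filter banks, not an explicit list) computes PC_1 and PC_2 from given local Fourier amplitudes and phases:

```python
import math

def pc_measures(amps, phases, W=1.0, T=0.0, eps=1e-4):
    """Phase congruency PC1 (equations 2/3) and PC2 (equation 4) at one
    signal location, given local Fourier amplitudes A_n and phases phi_n."""
    # Head-to-tail vector sum of the components: its magnitude is the
    # Local Energy E(x), its angle the weighted mean phase phi_bar(x).
    re = sum(a * math.cos(p) for a, p in zip(amps, phases))
    im = sum(a * math.sin(p) for a, p in zip(amps, phases))
    energy = math.hypot(re, im)
    phi_bar = math.atan2(im, re)
    total = sum(amps)
    pc1 = energy / (total + eps)                      # equation (2)
    # Equation (4): cosine minus |sine| of each phase deviation, with the
    # noise threshold T subtracted; max(..., 0) is the "positive part".
    num = sum(W * max(a * (math.cos(p - phi_bar) - abs(math.sin(p - phi_bar))) - T, 0.0)
              for a, p in zip(amps, phases))
    pc2 = num / (total + eps)
    return pc1, pc2
```

With all components in phase both measures approach 1; as the phases spread, PC_2 falls off much faster than PC_1, which is the improved localization referred to above.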
The symbols \lfloor \rfloor denote that the enclosed quantity is equal to itself when its value is positive, and zero otherwise. In practice local frequency information is obtained via banks of filters in quadrature tuned to different spatial frequencies, rather than via the Fourier transform. The current implementation uses oriented 2D log Gabor filters [5]. These filters allow arbitrarily large bandwidth filters to be constructed while still maintaining a zero DC component in the even-symmetric filter. For details of this phase congruency measure and its implementation see [10, 11, 9].

4. What feature types do images contain?

Having a feature detector that finds features at all angles of phase congruence allows us to interrogate images to determine what feature types are present, and their relative frequency. As a by-product of the phase congruency calculation one can record the weighted mean phase angle, \bar{\phi}(x), at each point in the image. It should be noted that the weighted mean phase angle will vary with orientation at each image point. Here I have chosen to record the mean phase angle corresponding to the orientation having maximum local energy.

The weighted mean phase angle lies in the range -\pi to \pi. As one moves around the phase circle, an angle of 0 indicates an upward going step, \pi/2 indicates a bright line feature, \pi indicates a downward going step, and 3\pi/2 (equivalently -\pi/2) indicates a dark line feature. Given that it makes no sense to differentiate between upward and downward going steps, the phase data is folded back on itself, mapping angles greater than \pi/2 and less than -\pi/2 back into the range \pm\pi/2. While one can sensibly differentiate between bright and dark line features, here I have chosen not to make a distinction between the two. Accordingly the phase data is further folded to map angles in the range -\pi/2 to 0 back into the range 0 to \pi/2. This simplifies the range of feature types to a scale that varies from step, through step/line, to finally line.
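The two folding steps just described can be sketched as follows (my own Python illustration of the mapping; the paper's published code is in MATLAB):

```python
import math

def fold_phase(phi):
    """Map a weighted mean phase angle phi in (-pi, pi] onto the
    step..line scale [0, pi/2]: 0 = step feature, pi/2 = line feature."""
    # Fold 1: upward and downward going steps are equivalent, so map
    # angles outside [-pi/2, pi/2] back into that range.
    if phi > math.pi / 2:
        phi = math.pi - phi
    elif phi < -math.pi / 2:
        phi = -math.pi - phi
    # Fold 2: bright and dark lines are equivalent, so map [-pi/2, 0)
    # onto (0, pi/2].
    return abs(phi)
```

Under this mapping an upward step (0) and a downward step (\pi) both classify as 0, while bright (\pi/2) and dark (-\pi/2) lines both classify as \pi/2, matching the step-through-line scale used in the figures below.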
Figure 5 shows the output of the phase congruency detector on the test grating, along with the feature classification determined by the weighted mean phase angle at the feature point.

Figure 5. (a) Output of the phase congruency operator on the grating shown in Figure 2(b); compare this to Figure 3. (b) Feature classification given by weighted mean phase angle.

Even on a simple idealized blocks world type of graphics image one finds a very rich range of feature types present. Figure 6 shows the raw phase congruency output, the Canny edge strength, the feature classification, and a histogram of feature type occurrence. Note the doubled response of the Canny operator on the sphere and how it gets lost at some points on the torus. The occlusion boundaries of curved surfaces typically produce hybrid features that are somewhere between a step and a line; two such sections through the image are shown. At these points a gradient based operator will produce a double response at the points of high gradient that occur on each side of the feature; one or both of these responses will be incorrectly localized.

Figures 7 and 8 show a similar set of results on two more natural images. The histograms indicate that in each case the distribution of feature types present is very broad, with a bias towards a higher frequency of step-like features within the images. MATLAB code for the calculation of phase congruency and feature classification is available for those wishing to replicate the results presented here [9].

5. Conclusion

This paper has argued that it is useful to think of features in terms of their Fourier components, rather than in terms of intensity gradients. This allows us to describe a wide range of feature types within the framework of a single model. Features are assumed to lie at points of high phase congruency, and the angle at which the congruency occurs describes the feature type. Experiments indicate that images contain feature types of all phase angles, with a broad distribution. Accordingly it can be concluded that gradient based operators, which look for points of maximum intensity gradient, will fail to correctly detect and localize a large proportion of features within images. Attempts at producing sub-pixel localization of features with gradient based detectors are, literally, misplaced.

References

[1] Y. K. Aw, R. Owens, and J. Ross. Image compression and reconstruction by a feature catalogue. In ECCV92, pages 749-756. Springer-Verlag Lecture Notes in Computer Science, May 1992. Santa Margherita Ligure, Italy.
[2] Y. K. Aw, R. Owens, and J. Ross. A catalogue of 1-D features in natural images. CVGIP: Graphical Models and Image Processing, 56(2):173-181, March 1994.
[3] J. F. Canny. Finding edges and lines in images. Master's thesis, MIT AI Lab, TR-720, 1983.
[4] J. F. Canny. A computational approach to edge detection. IEEE Trans. Pattern Analysis and Machine Intelligence, 8(6):679-698, 1986.
[5] D. J. Field. Relations between the statistics of natural images and the response properties of cortical cells. Journal of the Optical Society of America A, 4(12):2379-2394, December 1987.
[6] P. D. Kovesi. A dimensionless measure of edge significance. In The Australian Pattern Recognition Society, Conference on Digital Image Computing: Techniques and Applications, pages 281-288, 4-6 December 1991. Melbourne.
[7] P. D. Kovesi. A dimensionless measure of edge significance from phase congruency calculated via wavelets. In First New Zealand Conference on Image and Vision Computing, pages 87-94, August 1993. Auckland.
[8] P. D. Kovesi. Invariant Measures of Image Features From Phase Information. PhD thesis, The University of Western Australia, May 1996.
[9] P. D. Kovesi. MATLAB code for calculating phase congruency and phase symmetry/asymmetry, April 1996. http://www.cs.uwa.edu.au/~pk/research/matlabfns/.
[10] P. D. Kovesi. Image features from phase congruency. Videre: Journal of Computer Vision Research, 1(3):1-26, Summer 1999. http://mitpress.mit.edu/e-journals/videre/.
[11] P. D. Kovesi. Phase congruency: A low-level image invariant. Psychological Research, 64(2):136-148, 2000.
[12] D. Marr and E. C. Hildreth. Theory of edge detection. Proceedings of the Royal Society, London B, 207:187-217, 1980.
[13] M. C. Morrone and D. C. Burr. Feature detection in human vision: A phase-dependent energy model. Proc. R. Soc. Lond. B, 235:221-245, 1988.
[14] M. C. Morrone and R. A. Owens. Feature detection from local energy. Pattern Recognition Letters, 6:303-313, 1987.
[15] M. C. Morrone, J. R. Ross, D. C. Burr, and R. A. Owens. Mach bands are phase dependent. Nature, 324(6094):250-253, November 1986.
[16] R. A. Owens, S. Venkatesh, and J. Ross. Edge detection is a projection. Pattern Recognition Letters, 9:223-244, 1989.

Figure 6. Features on a synthetic blocks world image. [Panels: blocks world scene; Phase Congruency edge map; Canny edge map; sections A-A and B-B through the image; feature classifications; histogram of feature type occurrence against phase angle, 0 to \pi/2.]

Figure 7. Features on a building image. [Panels: Phase Congruency edge map; Canny edge map; feature classifications; histogram of feature type occurrence against phase angle, 0 to \pi/2.]

Figure 8. Features on Lena. [Panels: Phase Congruency edge map; Canny edge map; feature classifications; histogram of feature type occurrence against phase angle, 0 to \pi/2.]

[17] P. Perona and J. Malik. Detecting and localizing edges composed of steps, peaks and roofs. In Proceedings of the 3rd International Conference on Computer Vision, pages 52-57, 1990. Osaka.
[18] K. K. Pringle. Visual perception by a computer. In A. Grasselli, editor, Automatic Interpretation and Classification of Images, pages 277-284. Academic Press, New York, 1969.
[19] S. Venkatesh and R. Owens. On the classification of image features. Pattern Recognition Letters, 11:339-349, 1990.
[20] Z. Wang and M. Jenkin. Using complex Gabor filters to detect and localize edges and bars. In C. Archibald and E. Petriu, editors, Advances in Machine Vision: Strategies and Applications, World Scientific Series in Computer Science, volume 32, pages 151-170. World Scientific Press, Singapore, 1992.