Line, edge, blob and corner detection


Line, edge, blob and corner detection
Dmitri Melnikov
MTAT.03.260 Pattern Recognition and Image Analysis
April 5, 2011

Outline
1. Introduction
2. Line detection
3. Edge detection
4. Blob detection
5. Corner detection

Introduction
Detecting edges is fundamental in computer vision and image processing. Segmentation is a process that subdivides an image into its constituent regions or objects. Segmentation accuracy determines the quality of all subsequent image analysis procedures. Edge and line detection techniques are used as a base for more complicated segmentation algorithms.

Detection of discontinuities
The idea is to find points in a digital image where the brightness changes sharply, i.e. has discontinuities. To look for discontinuities, run a mask through the image, computing the sum of products of the mask coefficients with the gray levels under the mask.

Mask:           Image region:
w1 w2 w3        z1 z2 z3
w4 w5 w6        z4 z5 z6
w7 w8 w9        z7 z8 z9

The response of the mask at any point in the image is

R = w1 z1 + w2 z2 + ... + w9 z9 = Σ_{i=1}^{9} wi zi
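As a minimal illustration of this sum of products, assuming the image is a NumPy array (the names image, mask, row, col are hypothetical):

```python
import numpy as np

def mask_response(image, mask, row, col):
    """R = sum of mask coefficients times the gray levels under the 3x3 mask."""
    # Assumes (row, col) is at least one pixel away from the image border.
    patch = image[row - 1:row + 2, col - 1:col + 2].astype(float)
    return float(np.sum(mask * patch))
```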

Line detection

Horizontal      +45°            Vertical        -45°
-1 -1 -1        -1 -1  2        -1  2 -1         2 -1 -1
 2  2  2        -1  2 -1        -1  2 -1        -1  2 -1
-1 -1 -1         2 -1 -1        -1  2 -1        -1 -1  2

Let R1, R2, R3 and R4 denote the responses of these four masks. If, after running the four masks through the image, |Ri| > |Rj| for all j ≠ i, then the point is more likely to be associated with a line in the direction of mask i. Alternatively, just use a single mask when looking for lines of one direction.
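A sketch of the four-mask procedure with SciPy; the coefficients are those listed above, and per pixel the winner is simply the mask with the largest absolute response (a simplification of the |Ri| > |Rj| test):

```python
import numpy as np
from scipy import ndimage

# Line-detection masks in the order: horizontal, +45 degrees, vertical, -45 degrees.
MASKS = [
    np.array([[-1, -1, -1], [ 2,  2,  2], [-1, -1, -1]]),  # horizontal
    np.array([[-1, -1,  2], [-1,  2, -1], [ 2, -1, -1]]),  # +45
    np.array([[-1,  2, -1], [-1,  2, -1], [-1,  2, -1]]),  # vertical
    np.array([[ 2, -1, -1], [-1,  2, -1], [-1, -1,  2]]),  # -45
]

def dominant_line_direction(image):
    """Return, per pixel, the index (0..3) of the mask with the strongest |response|."""
    responses = np.stack([ndimage.correlate(image.astype(float), m.astype(float))
                          for m in MASKS])
    return np.argmax(np.abs(responses), axis=0)
```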

Edge detection
An edge is a set of connected pixels that lie on the boundary between two regions. Edge detection is based on a measure of gray-level discontinuity at a point. An edge is a local concept. Unlike an edge, a boundary forms a closed path; it is a global concept.

First derivative
The first derivative is positive at the points of transition into and out of the ramp. It is constant for points in the ramp. It is zero in areas of constant gray level. The magnitude of the first derivative can be used to detect the presence of an edge at a point in an image.

Second derivative
The second derivative is positive at the transition associated with the dark side of the edge. It is negative at the transition associated with the light side of the edge. It is zero along the ramp and in areas of constant gray level. The second derivative therefore produces two values for every edge in an image. Zero-crossing: the midpoint of the imaginary straight line joining the extreme positive and negative values. It can be used for finding the centers of thick edges.
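A small worked example on a hypothetical 1D profile with a two-pixel ramp illustrates both derivatives:

profile:          5  5  5  7  9  9  9
1st difference:      0  0  2  2  0  0
2nd difference:         0  2  0 -2  0

The first difference is nonzero (and constant) along the ramp; the second difference is positive at the dark end of the ramp, negative at the light end, and crosses zero between the two extremes.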

Edge detection
We define a point in an image as an edge point if the magnitude of its first-order derivative is greater than some threshold. A set of such connected points is an edge. First-order derivatives in an image are computed using the gradient. Second-order derivatives are computed using the Laplacian.

Gradient
The gradient of an image f(x, y) at point (x, y) is defined as the vector

∇f = [Gx, Gy]ᵀ = [∂f/∂x, ∂f/∂y]ᵀ

It points in the direction of the maximum rate of change of f at (x, y). The magnitude of this vector is

∇f = mag(∇f) = (Gx² + Gy²)^(1/2)

To simplify computation, it can be approximated as

∇f ≈ |Gx| + |Gy|

The direction angle of the gradient vector at (x, y) is

α(x, y) = tan⁻¹(Gy / Gx)
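A quick numerical check with hypothetical values: if Gx = 3 and Gy = 4 at some pixel, then ∇f = (3² + 4²)^(1/2) = 5, the simplified approximation gives |Gx| + |Gy| = 7, and the direction is α = tan⁻¹(4/3) ≈ 53°.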

Roberts operators
To obtain the partial derivatives, the Roberts, Prewitt or Sobel operators can be used.

z1 z2 z3
z4 z5 z6
z7 z8 z9

Roberts first-order derivative approximation at point z5:

Gx = z9 - z5,   Gy = z8 - z6

It can be implemented with 2×2 masks:

-1  0        0 -1
 0  1        1  0

Prewitt operators

z1 z2 z3
z4 z5 z6
z7 z8 z9

Gx = (z7 + z8 + z9) - (z1 + z2 + z3)
Gy = (z3 + z6 + z9) - (z1 + z4 + z7)

Gx mask:        Gy mask:
-1 -1 -1        -1  0  1
 0  0  0        -1  0  1
 1  1  1        -1  0  1

Sobel operators

z1 z2 z3
z4 z5 z6
z7 z8 z9

Gx = (z7 + 2z8 + z9) - (z1 + 2z2 + z3)
Gy = (z3 + 2z6 + z9) - (z1 + 2z4 + z7)

Gx mask:        Gy mask:
-1 -2 -1        -1  0  1
 0  0  0        -2  0  2
 1  2  1        -1  0  1
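A sketch of gradient-based edge detection with the Sobel masks above, using NumPy and SciPy (the threshold value is an assumption and depends on the image range):

```python
import numpy as np
from scipy import ndimage

# Sobel masks as on the slide: Gx differentiates along the rows, Gy along the columns.
SOBEL_GX = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], dtype=float)
SOBEL_GY = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)

def sobel_gradient(image):
    """Return the gradient magnitude and direction (radians) of a grayscale image."""
    gx = ndimage.correlate(image.astype(float), SOBEL_GX)
    gy = ndimage.correlate(image.astype(float), SOBEL_GY)
    magnitude = np.hypot(gx, gy)      # sqrt(Gx^2 + Gy^2)
    direction = np.arctan2(gy, gx)    # alpha(x, y)
    return magnitude, direction

# Edge points: pixels whose gradient magnitude exceeds a chosen threshold.
# magnitude, _ = sobel_gradient(img); edges = magnitude > 100
```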

Laplacian
The Laplacian of f(x, y) is a second-order derivative defined as

∇²f = ∂²f/∂x² + ∂²f/∂y²

The two most common digital approximations in practice are

∇²f = 4z5 - (z2 + z4 + z6 + z8)
∇²f = 8z5 - (z1 + z2 + z3 + z4 + z6 + z7 + z8 + z9)

 0 -1  0        -1 -1 -1
-1  4 -1        -1  8 -1
 0 -1  0        -1 -1 -1

The Laplacian is usually not used in its original form for edge detection, as it is very sensitive to noise.

Laplacian of a Gaussian (LoG)
The Laplacian combined with smoothing can find edges using zero-crossings. The Gaussian function blurs the image and σ determines the degree of blurring.

h(r) = exp(-r² / 2σ²),   where r² = x² + y²

The Laplacian of h (the second-order derivative of h with respect to r) is

∇²h(r) = [(r² - σ²) / σ⁴] exp(-r² / 2σ²)
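A sketch of LoG edge detection through zero-crossings, using SciPy's Gaussian-Laplace filter (sigma is an assumed parameter; a stricter implementation would also require a minimum slope at the crossing):

```python
import numpy as np
from scipy import ndimage

def log_zero_crossings(image, sigma=2.0):
    """Mark pixels where the LoG response changes sign in a 3x3 neighborhood."""
    response = ndimage.gaussian_laplace(image.astype(float), sigma=sigma)
    local_min = ndimage.minimum_filter(response, size=3)
    local_max = ndimage.maximum_filter(response, size=3)
    # A zero-crossing: both negative and positive responses occur around the pixel.
    return (local_min < 0) & (local_max > 0)
```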

Laplacian of a Gaussian (LoG): [figure]

Canny Edge Detection Algorithm
1. Smoothing: blur the image to remove noise.
2. Finding gradients: edges consist of points with large gradient magnitudes.
3. Non-maximum suppression: only local maxima should be marked as edges.
4. Double thresholding: potential edges are determined by thresholding.
5. Edge tracking by hysteresis: final edges are determined by suppressing all weak edges that are not connected to strong edges.
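These five steps are bundled in most library implementations; a minimal usage sketch with scikit-image (the filename, sigma and thresholds are assumptions):

```python
from skimage import io, feature

image = io.imread("input.png", as_gray=True)  # hypothetical input image, values in [0, 1]
edges = feature.canny(image, sigma=1.5, low_threshold=0.05, high_threshold=0.15)
# `edges` is a boolean map: True for strong edges and for weak edges that
# hysteresis tracking found connected to a strong edge.
```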

Canny Edge Detection Algorithm: [example figures]

Blob detection - LoG
The most common blob detector is based on the Laplacian of the Gaussian (LoG). However, with a fixed Gaussian kernel the response depends on the size of the blob. The magnitude of the Laplacian response achieves a maximum at the center of the blob, provided the scale of the Laplacian is matched to the scale of the blob.

Blob detection - LoG: [figure]

Blob detection - LoG
Scale-normalized Laplacian operator:

∇²_norm g = σ² (∂²g/∂x² + ∂²g/∂y²)

The 2D Laplacian of the Gaussian is given by

[(x² + y² - 2σ²) / σ⁴] exp(-(x² + y²) / 2σ²)

For a binary circle of radius r, the Laplacian response is maximal at σ = r / √2.

Blob detection - LoG
Scale-space blob detector:
1. Convolve the image with the scale-normalized Laplacian at several scales.
2. Find local maxima/minima of the Laplacian response in scale-space.
3. Apply a threshold.
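A sketch of these three steps with SciPy (the scales and the threshold are assumptions; maxima of |response| are kept, which covers both bright and dark blobs):

```python
import numpy as np
from scipy import ndimage

def log_blobs(image, sigmas=(2, 4, 8, 16), threshold=0.1):
    """Return (row, col, sigma) triples for scale-space maxima of the normalized LoG."""
    image = image.astype(float)
    # 1. Convolve with the scale-normalized Laplacian (sigma^2 * LoG) at several scales.
    stack = np.abs(np.stack([(s ** 2) * ndimage.gaussian_laplace(image, s) for s in sigmas]))
    # 2. Local maxima over (scale, row, col).
    peaks = stack == ndimage.maximum_filter(stack, size=3)
    # 3. Threshold.
    scale_idx, rows, cols = np.nonzero(peaks & (stack > threshold))
    return [(r, c, sigmas[i]) for i, r, c in zip(scale_idx, rows, cols)]
```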

Blob detection - DoH
Determinant of the Hessian. As before, given the scale-space representation

L(x, y; t) = g(x, y; t) * f(x, y)

and the Hessian matrix

HL(x, y; t) = [ Lxx  Lxy ]
              [ Lyx  Lyy ]

we can consider the scale-normalized determinant of the Hessian

det H_norm L(x, y; t) = t² (Lxx Lyy - Lxy²)

Looking for scale-space local maxima of this operator, we obtain the blob points.
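A sketch of the scale-normalized DoH response at a single scale, using Gaussian derivative filters from SciPy (here t = σ²; the search for scale-space maxima proceeds as in the LoG case):

```python
import numpy as np
from scipy import ndimage

def doh_response(image, sigma):
    """Scale-normalized determinant of the Hessian, t^2 * (Lxx*Lyy - Lxy^2), with t = sigma^2."""
    image = image.astype(float)
    # Second-order Gaussian derivatives of the smoothed image L = g * f.
    lxx = ndimage.gaussian_filter(image, sigma, order=(0, 2))  # d^2/dx^2 (columns)
    lyy = ndimage.gaussian_filter(image, sigma, order=(2, 0))  # d^2/dy^2 (rows)
    lxy = ndimage.gaussian_filter(image, sigma, order=(1, 1))
    t = sigma ** 2
    return (t ** 2) * (lxx * lyy - lxy ** 2)
```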

Corner detection
An interest point is a point that has a well-defined position and can be robustly detected. A corner is one such interest point: it is an intersection of two edges. In the region around a corner, the image gradient has two or more dominant directions. Shifting a small window in any direction gives a large change in intensity at corner points.

Harris corner detector
The change of intensity for the shift [x, y] is

S(x, y) = Σ_{u,v} w(u, v) [I(u + x, v + y) - I(u, v)]²

where w is a window function.

Harris corner detector
I(u + x, v + y) can be approximated using a Taylor expansion:

I(u + x, v + y) ≈ I(u, v) + Ix(u, v) x + Iy(u, v) y

so that

S(x, y) ≈ Σ_{u,v} w(u, v) [Ix(u, v) x + Iy(u, v) y]²

Now, S(x, y) can be written in matrix form as

S(x, y) ≈ [x y] A [x]
                  [y]

where

A = Σ_{u,v} w(u, v) [ Ix²    Ix Iy ]
                    [ Ix Iy  Iy²   ]

Let λ1, λ2 be the eigenvalues of matrix A.

Harris corner detector
We classify image points based on the values of λ1 and λ2 as follows:
1. If λ1 ≈ 0 and λ2 ≈ 0, the point lies in a flat region.
2. If λ1 ≈ 0 and λ2 is large, an edge is found.
3. If λ1 and λ2 are both large, a corner is found.
However, eigenvalues are expensive to compute, so instead we use the function

R = λ1 λ2 - α (λ1 + λ2)² = det(A) - α trace²(A)

where α is around 0.05.

Harris corner detector
Algorithm:
1. Compute the partial derivatives Ix and Iy.
2. Compute the matrix A in a window around each pixel.
3. Compute the function R at each pixel.
4. Find local maxima of R (using non-maximum suppression).
5. Apply a threshold.
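A sketch of this pipeline with NumPy and SciPy; the window sigma, α and the response threshold are assumptions that need tuning per image:

```python
import numpy as np
from scipy import ndimage

def harris_corners(image, sigma=1.5, alpha=0.05, threshold=1e6, nms_size=5):
    """Detect corners with the Harris response R = det(A) - alpha * trace(A)^2."""
    image = image.astype(float)
    # 1. Partial derivatives.
    ix = ndimage.sobel(image, axis=1)
    iy = ndimage.sobel(image, axis=0)
    # 2. Entries of A, with a Gaussian as the window function w.
    axx = ndimage.gaussian_filter(ix * ix, sigma)
    ayy = ndimage.gaussian_filter(iy * iy, sigma)
    axy = ndimage.gaussian_filter(ix * iy, sigma)
    # 3. Corner response at each pixel.
    r = (axx * ayy - axy ** 2) - alpha * (axx + ayy) ** 2
    # 4.-5. Non-maximum suppression and thresholding.
    peaks = (r == ndimage.maximum_filter(r, size=nms_size)) & (r > threshold)
    return np.argwhere(peaks)  # array of (row, col) corner coordinates
```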

SUSAN
SUSAN - Smallest Univalue Segment Assimilating Nucleus. A method for edge and corner detection. No image derivatives are used, and the detector is not sensitive to noise.

SUSAN: [example figures]

SUSAN
1. A mask M is placed around the center pixel (the nucleus). Usually |M| = 37 pixels.
2. Calculate the USAN area: the pixels within the mask which have brightness similar to the nucleus:

   c(m) = exp(-((I(m) - I(m0)) / t)⁶),   m ∈ M
   n(M) = Σ_{m ∈ M} c(m)

3. The response of the SUSAN operator is

   R(M) = g - n(M)   if n(M) < g
          0          otherwise

   where g = n_max / 2.
4. Test for false positives by finding the USAN's centroid.
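A rough sketch of steps 1-3 for corner-style detection (the brightness threshold t and the mask radius are assumptions; step 4, the centroid test, is omitted):

```python
import numpy as np

def susan_response(image, t=25.0, radius=3):
    """SUSAN response map: g - n(M) wherever the USAN area n(M) is below g = n_max / 2."""
    image = image.astype(float)
    # 1. Circular mask of 37 pixels (radius 3) around each nucleus.
    yy, xx = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    offsets = np.argwhere(xx ** 2 + yy ** 2 <= radius ** 2 + 1) - radius  # 37 offsets
    n_max = float(len(offsets))
    g = n_max / 2.0
    response = np.zeros_like(image)
    h, w = image.shape
    for r in range(radius, h - radius):
        for c in range(radius, w - radius):
            # 2. USAN area: pixels in the mask with brightness similar to the nucleus.
            vals = image[r + offsets[:, 0], c + offsets[:, 1]]
            n = np.sum(np.exp(-(((vals - image[r, c]) / t) ** 6)))
            # 3. SUSAN response.
            if n < g:
                response[r, c] = g - n
    return response
```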