Computer Vision I. Announcements. Fourier Transform. Efficient Implementation. Edge and Corner Detection. CSE252A Lecture 13.

Edge and Corner Detection
CSE252A Lecture 13

Announcements
HW3 assigned.

Efficient Implementation
Both the box filter and the Gaussian filter are separable:
First convolve each row of the input image I with a 1-D row filter R to produce an intermediate image J: J = R * I
Then convolve each column of J with a 1-D column filter C: O = C * J = C * (R * I) = (C * R) * I

Fourier Transform
Discrete Fourier Transform (DFT) of an N by N image I[x,y], and its inverse (inverse DFT). Here x, y index the spatial domain and u, v the frequency domain. Implemented via the Fast Fourier Transform (FFT) algorithm.

The Fourier Transform and Convolution
If H and G are images, and F(.) represents the Fourier transform, then
F(H*G) = F(H) F(G), or equivalently H*G = F^-1( F(H) F(G) ).
This is referred to as the Convolution Theorem.
The Fast Fourier Transform has complexity O(n log n), so the complexity of convolution becomes O(n log n) rather than O(n^2).
Thus, one way of thinking about the properties of a convolution is to consider how it modifies the frequencies of the image to which it is applied. In particular, looking at the power spectrum, convolving image H by G attenuates frequencies where G has low power and emphasizes those where G has high power.
(Figure: median filters, example; the example filters have width 5.)
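To make the separability and Convolution Theorem points concrete, here is a minimal NumPy/SciPy sketch (the toy image, kernel size, and helper name gaussian_1d are illustrative, not from the lecture). It blurs an image with a 2-D Gaussian three ways: as a single 2-D convolution, as a row pass followed by a column pass, and by multiplying DFTs, then checks that all three agree.

```python
import numpy as np
from scipy.ndimage import convolve, convolve1d

def gaussian_1d(sigma, radius=None):
    """1-D Gaussian kernel, normalized to sum to 1."""
    if radius is None:
        radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    g = np.exp(-x**2 / (2 * sigma**2))
    return g / g.sum()

rng = np.random.default_rng(0)
I = rng.random((128, 128))              # toy image
g = gaussian_1d(sigma=2.0)
G = np.outer(g, g)                      # separable: 2-D Gaussian = outer product of two 1-D Gaussians

# 1) Direct 2-D convolution (circular boundary so it matches the FFT version exactly)
out_2d = convolve(I, G, mode='wrap')

# 2) Separable implementation: row pass J = R * I, then column pass O = C * J
J = convolve1d(I, g, axis=1, mode='wrap')        # convolve each row
out_sep = convolve1d(J, g, axis=0, mode='wrap')  # convolve each column

# 3) Convolution Theorem: H*G = F^-1( F(H) F(G) ), with the kernel centred at the origin
r = len(g) // 2
Gpad = np.zeros_like(I)
Gpad[:2 * r + 1, :2 * r + 1] = G
Gpad = np.roll(Gpad, (-r, -r), axis=(0, 1))
out_fft = np.real(np.fft.ifft2(np.fft.fft2(I) * np.fft.fft2(Gpad)))

print(np.allclose(out_2d, out_sep), np.allclose(out_2d, out_fft))   # True True
```

For a k x k kernel, the separable version does two passes of O(N^2 k) work instead of O(N^2 k^2), which is why box and Gaussian filters are usually implemented this way.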

Edge Detection and Corner Detection

Physical causes of edges
1. Object boundaries
2. Surface normal discontinuities
3. Reflectance (albedo) discontinuities
4. Lighting discontinuities

Noisy Step Edge
The derivative is high everywhere; we must smooth before taking the gradient.

Edge is Where Change Occurs: 1-D
Change is measured by the derivative in 1D. (Figures: ideal edge, smoothed edge, first derivative, second derivative.) The biggest change is where the first derivative has maximum magnitude, or where the second derivative is zero.

Numerical Derivatives
Consider f(x) sampled at x0-h, x0, x0+h. Take the Taylor series expansion of f(x) about x0:
  f(x) = f(x0) + f'(x0)(x - x0) + 1/2 f''(x0)(x - x0)^2 + ...
For samples taken at increments of h, keeping terms through second order, we have
  f(x0+h) = f(x0) + f'(x0) h + 1/2 f''(x0) h^2
  f(x0-h) = f(x0) - f'(x0) h + 1/2 f''(x0) h^2
Subtracting and adding f(x0+h) and f(x0-h) respectively yields
  f'(x0) ≈ ( f(x0+h) - f(x0-h) ) / (2h)
  f''(x0) ≈ ( f(x0+h) - 2 f(x0) + f(x0-h) ) / h^2

Implementing 1-D Edge Detection
1. Filter out noise: convolve with a Gaussian.
2. Take a derivative: convolve with [-1 0 1]. (We can combine 1 and 2.)
3. Find the peak. Two issues: it should be a local maximum, and it should be sufficiently high.
Convolve with the first-derivative kernel [-1 0 1] or the second-derivative kernel [1 -2 1].
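A minimal 1-D sketch of steps 1 to 3, assuming NumPy (the signal, Gaussian width, and threshold are made up for illustration):

```python
import numpy as np

# A made-up noisy step edge
rng = np.random.default_rng(1)
f = np.concatenate([np.zeros(50), np.ones(50)]) + 0.05 * rng.standard_normal(100)

# 1. Filter out noise: convolve with a 1-D Gaussian
x = np.arange(-6, 7)
g = np.exp(-x**2 / (2 * 2.0**2))
g /= g.sum()
smoothed = np.convolve(f, g, mode='same')

# 2. Take a derivative: apply the central-difference kernel [-1 0 1]
#    (np.convolve flips its kernel, so pass [1, 0, -1] to correlate with [-1, 0, 1])
deriv = np.convolve(smoothed, [1, 0, -1], mode='same')

# 3. Find the peak: a local maximum of |derivative| that is sufficiently high
mag = np.abs(deriv)
tau = 0.5 * mag.max()
edges = [i for i in range(1, len(mag) - 1)
         if mag[i] > mag[i - 1] and mag[i] >= mag[i + 1] and mag[i] > tau]
print(edges)   # expect an index near the true step at position 50
```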

2D Edge Detection: Canny
1. Filter out noise: use a 2D Gaussian filter. J = I * G
2. Take a derivative: compute the magnitude of the gradient, |∇J| = sqrt( (∂J/∂x)^2 + (∂J/∂y)^2 )
Can the next step be the same for the 2-D edge detector as in the 1-D detector? NO!

Smoothing and Differentiation
We need two derivatives, in the x and y directions. Either filter with a Gaussian and then compute the gradient, or use a derivative-of-Gaussian filter, because differentiation is convolution and convolution is associative.
(Figures: directional derivative at angle θ; derivative images at σ = 1 and σ = 2, with the question "is this dI/dx or dI/dy?"; gradient magnitude.)

There are three major issues:
1. The gradient magnitude at different scales is different; which scale should we choose?
2. The gradient magnitude is large along a thick trail; how do we identify the significant points?
3. How do we link the relevant points up into curves?
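A minimal sketch of the derivative-of-Gaussian approach, assuming SciPy's ndimage (the helper name, toy image, and sigmas are illustrative, not from the lecture):

```python
import numpy as np
from scipy import ndimage

def gradient_magnitude(I, sigma):
    """Gradient magnitude using derivative-of-Gaussian filters: because
    differentiation is a convolution and convolution is associative,
    smoothing and differentiation are done in a single filter."""
    Ix = ndimage.gaussian_filter(I, sigma, order=(0, 1))   # dI/dx (along columns)
    Iy = ndimage.gaussian_filter(I, sigma, order=(1, 0))   # dI/dy (along rows)
    return np.hypot(Ix, Iy), Ix, Iy

# Toy image with a vertical step edge; the magnitude forms a ridge along the boundary
I = np.zeros((64, 64))
I[:, 32:] = 1.0
mag1, Ix1, Iy1 = gradient_magnitude(I, sigma=1.0)
mag2, Ix2, Iy2 = gradient_magnitude(I, sigma=2.0)
print(mag1.max(), mag2.max())   # the peak response differs with scale (issue 1 above)
```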

There is ALWAYS a tradeoff between smoothing and good edge localization!
(Figures: image with an edge and the detected edge location for smoothing filters of width 1, 3, and 7 pixels; the scale of the smoothing filter affects the derivative estimates. Image + noise: derivatives detect the edge and the noise; a smoothed derivative removes the noise but blurs the edge. Magnitude of the gradient.)

Non-maximum suppression
For every pixel in the image (e.g., q) we have an estimate of the edge direction and the edge normal (shown at q); the normal is in the direction of the gradient, and the tangent is perpendicular to it. We wish to mark points along the curve where the magnitude is biggest. We can do this by looking for a maximum along a slice normal to the curve (non-maximum suppression). These points should form a curve. There are then two algorithmic issues: which point is the maximum, and where is the next point on the curve?
Using the normal at q, find two points p and r on adjacent rows (or columns). We have a maximum if the value at q is larger than the values at both p and r. Interpolate to get those values.

Predicting the next edge point
Assume the marked point is an edge point. Then we construct the tangent to the edge curve (which is normal to the gradient at that point) and use it to predict the next points (here either r or s). Link the points together to create an image curve.
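The lecture's non-maximum suppression interpolates the magnitude at p and r along the normal; the sketch below, assuming NumPy, takes a common shortcut and quantizes the gradient direction to one of four neighbour pairs instead (the function name and all values are illustrative):

```python
import numpy as np

def non_max_suppression(mag, Ix, Iy):
    """Simplified non-maximum suppression: keep a pixel only if its gradient
    magnitude is at least as large as its two neighbours along the gradient
    (edge-normal) direction.  The lecture interpolates the values at p and r;
    here the direction is just quantized to one of four neighbour pairs."""
    H, W = mag.shape
    out = np.zeros_like(mag)
    angle = np.rad2deg(np.arctan2(Iy, Ix)) % 180.0   # gradient direction, folded to [0, 180)
    for y in range(1, H - 1):
        for x in range(1, W - 1):
            a = angle[y, x]
            if a < 22.5 or a >= 157.5:               # gradient roughly horizontal
                p, r = mag[y, x - 1], mag[y, x + 1]
            elif a < 67.5:                           # gradient toward lower-right / upper-left
                p, r = mag[y + 1, x + 1], mag[y - 1, x - 1]
            elif a < 112.5:                          # gradient roughly vertical
                p, r = mag[y - 1, x], mag[y + 1, x]
            else:                                    # gradient toward lower-left / upper-right
                p, r = mag[y + 1, x - 1], mag[y - 1, x + 1]
            if mag[y, x] >= p and mag[y, x] >= r:
                out[y, x] = mag[y, x]
    return out
```

Applied to the gradient magnitude and gradients from the earlier derivative-of-Gaussian sketch, this thins the blurred ridge along the edge to a curve roughly one pixel wide.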

Hysteresis Thresholding
Start tracking an edge chain at a pixel location that is a local maximum of the gradient magnitude where the gradient magnitude > τ_high. Follow the edge in the direction orthogonal to the gradient. Stop when the gradient magnitude < τ_low. That is, use a high threshold to start edge curves and a low threshold to continue them.
(Figure: gradient magnitude I plotted against position along the edge curve, with the τ_high and τ_low levels marked.)
(Figures: input image; single threshold with T = 15 and with T = 5 versus hysteresis with T_h = 15, T_l = 5; hysteresis thresholding results at fine scale with a high threshold, at coarse scale with a high threshold, and at coarse scale with a low threshold.)
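A hedged sketch of hysteresis thresholding, assuming SciPy (the function name and threshold values are illustrative). It uses connected components as a shortcut for the chain tracking described above: a weak pixel survives if it is connected, through other weak pixels, to a strong one.

```python
import numpy as np
from scipy import ndimage

def hysteresis_threshold(mag, tau_low, tau_high):
    """Keep pixels above tau_high, plus any pixels above tau_low that are
    connected (through other above-tau_low pixels) to a pixel above tau_high.
    This is a connected-components shortcut for the edge-chain tracking the
    lecture describes, not the exact chain-following algorithm."""
    weak = mag >= tau_low
    strong = mag >= tau_high
    labels, _ = ndimage.label(weak)            # connected regions of weak pixels
    keep = np.unique(labels[strong])           # region labels that contain a strong pixel
    keep = keep[keep != 0]                     # label 0 is background
    return np.isin(labels, keep)

# Illustrative usage on a thinned gradient-magnitude image from the earlier sketches:
# edges = hysteresis_threshold(nms_mag, tau_low=0.05, tau_high=0.15)
```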

Why is Canny so Dominant?
Still widely used after 20 years.
1. Theory is nice (but the end result is the same).
2. Details good (magnitude of gradient, non-max suppression).
3. Hysteresis is an important heuristic.
4. Code was distributed.

Boundary Detection
Cues: brightness, color, texture.
Precision is the fraction of detections that are true positives rather than false positives, while recall is the fraction of true positives that are detected rather than missed.
From Contours to Regions: An Empirical Evaluation, P. Arbelaez, M. Maire, C. Fowlkes, and J. Malik, CVPR 2009.

Feature extraction: Corners and blobs
Corner Detection
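As a tiny numeric illustration of these precision and recall definitions (the counts are invented, not from the evaluation cited above):

```python
# Invented counts from matching detected boundary pixels to ground truth
true_positives = 80      # detections that match a ground-truth boundary
false_positives = 20     # detections with no ground-truth match
false_negatives = 40     # ground-truth boundary pixels that were missed

precision = true_positives / (true_positives + false_positives)   # 80 / 100 = 0.80
recall = true_positives / (true_positives + false_negatives)      # 80 / 120 ≈ 0.67
print(precision, recall)
```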

Why extract features?
Motivation: panorama stitching. We have two images; how do we combine them?
Step 1: extract features. Step 2: match features. Step 3: align images.
Corners contain more information than lines: a point on a line is hard to match, while a corner is easier to match. (Figures: Image 1, Image 2.)

The Basic Idea
We should easily recognize the point by looking through a small window, and shifting the window in any direction should give a large change in intensity. (Source: A. Efros)
Flat region: no change in any direction.
Edge: no change along the edge direction.
Corner: significant change in all directions.

Edge Detectors Tend to Fail at Corners

Finding Corners
Intuition: right at a corner, the gradient is ill-defined; near a corner, the gradient has two different values. (Figure: distribution of gradients for different image patches.)

Formula for Finding Corners: Shi-Tomasi Detector
We look at the matrix
  C(x, y) = [ Σ I_x^2     Σ I_x I_y
              Σ I_x I_y   Σ I_y^2  ]
where the sums are taken over a small region, I_x is the gradient with respect to x, and I_y is the gradient with respect to y. The matrix is symmetric. WHY THIS?

General Case
Because C is a symmetric positive semi-definite matrix, it can be factored as C = R^-1 diag(λ_1, λ_2) R, where R is a 2x2 rotation matrix and each λ_i is non-negative.
What is the region like if:
1. λ_1 = 0?
2. λ_2 = 0?
3. λ_1 = 0 and λ_2 = 0?
4. λ_1 > 0 and λ_2 > 0?
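A minimal sketch of computing the entries of C(x, y) at every pixel, assuming SciPy's ndimage (the helper name structure_tensor and its parameters are illustrative): derivative-of-Gaussian filters give I_x and I_y, and a box filter performs the sum over a small region.

```python
import numpy as np
from scipy import ndimage

def structure_tensor(I, sigma_grad=1.0, window=5):
    """Entries of the Shi-Tomasi matrix C(x, y) at every pixel.
    Each entry is the corresponding product of gradients summed
    (box-filtered) over a small window around the pixel."""
    Ix = ndimage.gaussian_filter(I, sigma_grad, order=(0, 1))   # gradient w.r.t. x
    Iy = ndimage.gaussian_filter(I, sigma_grad, order=(1, 0))   # gradient w.r.t. y
    box = np.ones((window, window))
    Sxx = ndimage.convolve(Ix * Ix, box)    # Σ I_x^2 over the window
    Sxy = ndimage.convolve(Ix * Iy, box)    # Σ I_x I_y
    Syy = ndimage.convolve(Iy * Iy, box)    # Σ I_y^2
    return Sxx, Sxy, Syy
```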

General Case
Since C is symmetric, we have
  C = R^-1 [ λ_1   0
             0     λ_2 ] R
We can visualize C as an ellipse with axis lengths determined by the eigenvalues and orientation determined by R.
Ellipse equation: [u v] C [u v]^T = const. (Figure: axis length (λ_max)^-1/2 along the direction of fastest change, and (λ_min)^-1/2 along the direction of slowest change.)

So, to detect corners
Filter the image with a Gaussian. Compute the gradient everywhere. Move a window over the image and construct C over the window. Use linear algebra to find λ_1 and λ_2. If they are both big, we have a corner.
1. Let e(x, y) = min( λ_1(x, y), λ_2(x, y) ).
2. (x, y) is a corner if it is a local maximum of e(x, y) and e(x, y) > τ.
Parameters: Gaussian std. dev., window size, threshold.

Corner Detection Sample Results
(Figures: detections at threshold = 25,000, threshold = 10,000, and threshold = 5,000.)
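Putting the recipe together, here is a hedged sketch of the Shi-Tomasi procedure in NumPy/SciPy (the function name, window size, threshold, and toy image are illustrative choices, not the lecture's): the smaller eigenvalue of the 2x2 matrix C is computed in closed form at every pixel, and corners are local maxima of that score above a threshold τ.

```python
import numpy as np
from scipy import ndimage

def shi_tomasi_corners(I, sigma_grad=1.0, window=5, tau=0.1):
    """Corners as local maxima of e(x, y) = min(lambda1, lambda2) of C."""
    # Gradients via derivative-of-Gaussian filters
    Ix = ndimage.gaussian_filter(I, sigma_grad, order=(0, 1))
    Iy = ndimage.gaussian_filter(I, sigma_grad, order=(1, 0))
    # Entries of C, summed over a small window
    box = np.ones((window, window))
    Sxx = ndimage.convolve(Ix * Ix, box)
    Sxy = ndimage.convolve(Ix * Iy, box)
    Syy = ndimage.convolve(Iy * Iy, box)
    # Smaller eigenvalue of the symmetric 2x2 matrix, in closed form
    e = 0.5 * (Sxx + Syy) - np.sqrt((0.5 * (Sxx - Syy)) ** 2 + Sxy ** 2)
    # (x, y) is a corner if it is a local maximum of e and e > tau
    local_max = (e == ndimage.maximum_filter(e, size=3))
    return np.argwhere(local_max & (e > tau))   # (row, col) corner locations

# Toy usage: a bright square on a dark background should yield detections near its four corners
I = np.zeros((64, 64))
I[16:48, 16:48] = 1.0
print(shi_tomasi_corners(I))
```

Along a straight edge one eigenvalue is near zero, so e stays below the threshold; only near corners are both eigenvalues large, which is exactly the criterion on the slide.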