Local Features Tutorial: Nov. 8, 04


References:
- Matlab SIFT tutorial (from the course webpage)
- Lowe, David G., "Distinctive Image Features from Scale-Invariant Keypoints," International Journal of Computer Vision, Vol. 60, No. 2, 2004, pp. 91-110.

Today: local features for object recognition.
- The problem: obtain a representation that allows us to find a particular object we've encountered before (i.e. "find Paco's mug," as opposed to "find a mug").
- Local features are based on the appearance of the object at particular interest points.
- Features should be reasonably invariant to illumination changes and, ideally, also to scaling, rotation, and minor changes in viewing direction.
- In addition, we can use local features for matching; this is useful for tracking and 3D scene reconstruction.

Key properties of a good local feature:
- It must be highly distinctive: a good feature should allow correct object identification with a low probability of mismatch. Question: how do we identify image locations that are distinctive enough?
- It should be easy to extract.
- Invariance: a good local feature should be tolerant to image noise, changes in illumination, uniform scaling, rotation, and minor changes in viewing direction. Question: how do we construct the local feature to achieve invariance to the above?
- It should be easy to match against a (large) database of local features.

SIFT features

The Scale Invariant Feature Transform (SIFT) is an approach for detecting and extracting local feature descriptors that are reasonably invariant to changes in illumination, image noise, rotation, scaling, and small changes in viewpoint.

Detection stages for SIFT features:
- Scale-space extrema detection
- Keypoint localization
- Orientation assignment
- Generation of keypoint descriptors

In the following pages we'll examine these stages in detail.

Scale-space extrema detection

Interest points for SIFT features correspond to local extrema of difference-of-Gaussian filters at different scales. Given a Gaussian-blurred image

    L(x, y, σ) = G(x, y, σ) * I(x, y),

where

    G(x, y, σ) = (1 / (2πσ²)) exp(−(x² + y²) / (2σ²))

is a variable-scale Gaussian, the result of convolving an image with a difference-of-Gaussian filter is given by

    D(x, y, σ) = (G(x, y, kσ) − G(x, y, σ)) * I(x, y)
               = L(x, y, kσ) − L(x, y, σ),        (1)

which is just the difference of the Gaussian-blurred images at scales σ and kσ.

Figure 1: Diagram showing the blurred images at different scales, and the computation of the difference-of-Gaussian images (from Lowe, 2004; see the reference at the beginning of the tutorial).

The first step toward the detection of interest points is therefore the convolution of the image with Gaussian filters at different scales, and the generation of difference-of-Gaussian images from the differences of adjacent blurred images.

The convolved images are grouped by octave (an octave corresponds to a doubling of σ), and the value of k is selected so that we obtain a fixed number of blurred images per octave. This also ensures that we obtain the same number of difference-of-Gaussian images per octave.

Note: the difference-of-Gaussian filter provides an approximation to the scale-normalized Laplacian of Gaussian, σ²∇²G. The difference-of-Gaussian filter is in effect a tunable bandpass filter.
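The construction described above can be sketched in a few lines of NumPy. This is an illustrative version only: the function names, the base σ = 1.6, and k = √2 are choices made for the example, not values prescribed by the tutorial.

```python
import numpy as np

def gaussian_kernel(sigma):
    """1-D Gaussian kernel, truncated at 3 sigma and normalized to sum 1."""
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1)
    g = np.exp(-(x ** 2) / (2 * sigma ** 2))
    return g / g.sum()

def blur(image, sigma):
    """Separable Gaussian blur: convolve rows, then columns."""
    g = gaussian_kernel(sigma)
    rows = np.apply_along_axis(np.convolve, 1, image, g, mode="same")
    return np.apply_along_axis(np.convolve, 0, rows, g, mode="same")

def dog_pyramid(image, sigma=1.6, k=2 ** 0.5, n_scales=4):
    """Blur at successive scales sigma * k^i and subtract adjacent
    blurred images: D_i = L(k^(i+1) * sigma) - L(k^i * sigma)."""
    L = [blur(image, sigma * k ** i) for i in range(n_scales)]
    return [L[i + 1] - L[i] for i in range(n_scales - 1)]

img = np.random.default_rng(0).random((32, 32))
dogs = dog_pyramid(img)
print(len(dogs), dogs[0].shape)  # 3 (32, 32)
```

Note that n blurred images yield n − 1 difference-of-Gaussian images, matching the fixed count per octave described above.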

Figure 2: Local extrema detection: the marked pixel is compared against its 26 neighbors in a 3×3×3 neighborhood that spans adjacent DoG images (from Lowe, 2004).

Interest points (called keypoints in the SIFT framework) are identified as local maxima or minima of the DoG images across scales. Each pixel in a DoG image is compared to its 8 neighbors at the same scale, plus the 9 corresponding neighbors at each of the two neighboring scales. If the pixel is a local maximum or minimum, it is selected as a candidate keypoint.
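The 26-neighbor comparison can be sketched as a minimal NumPy check; the helper name and the toy data are illustrative:

```python
import numpy as np

def is_extremum(dog, s, y, x):
    """True if DoG pixel (y, x) at scale level s is the maximum or the
    minimum of the 3x3x3 cube spanning the adjacent DoG images
    (i.e. compared against its 26 neighbors)."""
    cube = dog[s - 1:s + 2, y - 1:y + 2, x - 1:x + 2]
    value = dog[s, y, x]
    return value == cube.max() or value == cube.min()

# Tiny demo: a stack of 3 DoG images with a planted maximum.
rng = np.random.default_rng(1)
dog = rng.random((3, 5, 5))
dog[1, 2, 2] = 2.0  # larger than anything else in the stack
print(is_extremum(dog, 1, 2, 2))  # True
```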

For each candidate keypoint:
- Interpolation of nearby data is used to accurately determine its position.
- Keypoints with low contrast are removed.
- Responses along edges are eliminated.
- The keypoint is assigned an orientation.

To determine the keypoint orientation, a gradient orientation histogram is computed in the neighborhood of the keypoint (using the Gaussian-blurred image at the scale closest to the keypoint's scale). The contribution of each neighboring pixel is weighted by its gradient magnitude and by a Gaussian window with a σ that is 1.5 times the scale of the keypoint.

Peaks in the histogram correspond to dominant orientations. A separate keypoint is created for the direction corresponding to the histogram maximum, and for any other direction within 80% of the maximum value. All the properties of the keypoint are measured relative to the keypoint orientation; this provides invariance to rotation.
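The orientation-histogram step above can be sketched as follows, assuming a 36-bin histogram as in Lowe's paper; the function name and the demo patch are illustrative:

```python
import numpy as np

def orientation_peaks(patch, sigma, n_bins=36):
    """Histogram of gradient orientations over a patch, each pixel weighted
    by its gradient magnitude and by a centered Gaussian window. Returns
    the orientations (degrees) of all bins within 80% of the maximum."""
    gy, gx = np.gradient(patch)
    mag = np.hypot(gx, gy)
    ang = np.degrees(np.arctan2(gy, gx)) % 360.0
    h, w = patch.shape
    yy, xx = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    window = np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma ** 2))
    hist = np.zeros(n_bins)
    bins = (ang * n_bins / 360.0).astype(int) % n_bins
    np.add.at(hist, bins.ravel(), (mag * window).ravel())
    # keep the maximum bin plus any bin within 80% of it
    return np.flatnonzero(hist >= 0.8 * hist.max()) * (360.0 / n_bins)

# A pure horizontal ramp: every gradient points along +x (0 degrees).
patch = np.tile(np.arange(8.0), (8, 1))
print(orientation_peaks(patch, sigma=1.5))  # [0.]
```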

SIFT feature representation

Once a keypoint orientation has been selected, the feature descriptor is computed as a set of orientation histograms over 4×4 pixel neighborhoods. The orientation histograms are relative to the keypoint orientation, and the orientation data come from the Gaussian-blurred image closest in scale to the keypoint's scale. Just as before, the contribution of each pixel is weighted by the gradient magnitude and by a Gaussian with σ 1.5 times the scale of the keypoint.

Figure 3: SIFT feature descriptor (from Lowe, 2004).

Each histogram contains 8 bins, and each descriptor contains a 4×4 array of such histograms around the keypoint. This leads to a SIFT feature vector with 4×4×8 = 128 elements. The vector is normalized to enhance invariance to changes in illumination.
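The normalization step might look like the following sketch. Note that the clipping at 0.2, which reduces the influence of single large gradient magnitudes, comes from Lowe's paper rather than from this tutorial:

```python
import numpy as np

def normalize_descriptor(vec, clip=0.2):
    """Normalize a 128-element descriptor to unit length, clip large
    components (Lowe clips at 0.2), then renormalize. The unit-length
    normalization gives invariance to affine changes in illumination."""
    v = vec / np.linalg.norm(vec)
    v = np.minimum(v, clip)
    return v / np.linalg.norm(v)

d = np.abs(np.random.default_rng(2).normal(size=128))
d = normalize_descriptor(d)
print(round(float(np.linalg.norm(d)), 6))  # 1.0
```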

SIFT feature matching

- Find the nearest neighbor in a database of SIFT features from training images.
- For robustness, use the ratio of the distance to the nearest neighbor over the distance to the second-nearest neighbor, and reject matches whose ratio is too high.
- Finding the neighbor with minimum Euclidean distance is an expensive search.
- Use an approximate, fast method to find the nearest neighbor with high probability.
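The distance-ratio test can be sketched as follows; the 0.8 threshold is Lowe's suggested value, and the names and toy descriptors are illustrative:

```python
import numpy as np

def match_ratio(query, db, ratio=0.8):
    """Return the index of the nearest database descriptor, but only if
    the nearest distance is below `ratio` times the second-nearest
    distance; otherwise reject the match and return None."""
    dist = np.linalg.norm(db - query, axis=1)
    order = np.argsort(dist)
    if dist[order[0]] < ratio * dist[order[1]]:
        return int(order[0])
    return None

db = np.eye(4)         # 4 toy descriptors
query = 0.9 * db[2]    # clearly closest to descriptor 2
print(match_ratio(query, db))  # 2
```

An ambiguous query (roughly equidistant to two descriptors) fails the ratio test and is rejected, which is the point of using the ratio rather than the raw nearest distance.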

Recognition using SIFT features

- Compute SIFT features on the input image.
- Match these features against the SIFT feature database.
- Each keypoint specifies 4 parameters: 2D location, scale, and orientation.
- To increase recognition robustness, a Hough transform is used to identify clusters of matches that vote for the same object pose.
- Each keypoint votes for the set of object poses that are consistent with the keypoint's location, scale, and orientation.
- Locations in the Hough accumulator that receive at least 3 votes are selected as candidate object/pose matches.
- A verification step matches the training image for the hypothesized object/pose to the image, using a least-squares fit to the hypothesized location, scale, and orientation of the object.
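The Hough voting step can be illustrated with a toy accumulator. The bin widths below are invented for the example and are not Lowe's actual choices:

```python
from collections import Counter

def hough_candidates(matches, loc_bin=32.0, ori_bin=30.0):
    """Each match votes for a coarse (x, y, scale, orientation) pose bin;
    bins receiving at least 3 votes become candidate object poses.
    Bin widths here are illustrative only."""
    votes = Counter()
    for x, y, scale, orientation in matches:
        key = (int(x // loc_bin), int(y // loc_bin),
               round(scale), int(orientation // ori_bin))
        votes[key] += 1
    return [key for key, n in votes.items() if n >= 3]

# Three consistent matches and one outlier.
matches = [(10, 12, 1.0, 15), (14, 20, 1.2, 20),
           (5, 30, 0.9, 10),  (200, 5, 3.0, 100)]
print(hough_candidates(matches))  # [(0, 0, 1, 0)]
```

The outlier lands in a bin of its own and never reaches the 3-vote threshold, which is how the clustering discards spurious matches.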

SIFT matlab tutorial

Gaussian-blurred images and difference-of-Gaussian images.

Figure 4: Gaussian and DoG images grouped by octave.

Keypoint detection.

Figure 5: a) Maxima of the DoG across scales. b) Remaining keypoints after removal of low-contrast points. c) Remaining keypoints after removal of edge responses.

Final keypoints with selected orientation and scale.

Figure 6: Extracted keypoints; arrows indicate scale and orientation.

Warped image and extracted keypoints.

Figure 7: Warped image and extracted keypoints.

The Hough transform of matched SIFT features yields the transformation that aligns the original and warped images.

Computed affine transformation from rotated image to original image:

    >> disp(aff);
        0.7060   -0.7052   128.4230
        0.7057    0.7100  -128.9491
        0         0          1.0000

Actual transformation from rotated image to original image:

    >> disp(a);
        0.7071   -0.7071   128.6934
        0.7071    0.7071  -128.6934
        0         0          1.0000
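The linear part of the actual transformation is a pure rotation by 45 degrees, which is where the repeated 0.7071 entries come from. A quick sanity check, assuming NumPy is available:

```python
import numpy as np

theta = np.radians(45)
c, s = np.cos(theta), np.sin(theta)
# Linear (rotation) part of the affine transform; the translation
# column depends on the rotation center and is not reconstructed here.
R = np.array([[c, -s],
              [s,  c]])
print(np.round(R, 4))  # entries match the 0.7071 values above
```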

Matching and alignment of different views using local features: original view, reference view, aligned view, and reference minus aligned view.

Figure 8: Two views of Wadham College and the affine transformation used for alignment.

Object recognition with SIFT: each panel shows the input image, the model, and the recovered location.

Figure 9: Cellphone examples with different poses and occlusion.

Figure 10: Book example: what happens when we match similar features outside the object?

Closing Comments

- SIFT features are reasonably invariant to rotation, scaling, and illumination changes.
- We can use them for matching and object recognition, among other things.
- They are robust to occlusion: as long as we can see at least 3 features from the object, we can compute its location and pose.
- Matching is efficient on-line; recognition can be performed in close-to-real time (at least for small object databases).

Questions:
- Do local features solve the object recognition problem?
- How about distinctiveness? How do we deal with false positives outside the object of interest? (See Figure 10.)
- Can we learn new object models without photographing them under special conditions?
- How does this approach compare to the object recognition method proposed by Murase and Nayar? Recall that their model consists of a PCA basis for each object, generated from images taken under diverse illumination and viewing directions, and a representation of the manifold described by the training images in this eigenspace (see the tutorial on Eigen Eyes).