Feature Based Registration - Image Alignment

Image Registration
Image registration is the process of estimating an optimal transformation between two or more images.
Many slides from Alexei Efros (http://graphics.cs.cmu.edu/courses/15-463/2007_fall/463.html) and from D. Frolova and D. Simakov (http://www.wisdom.weizmann.ac.il/~deniss/2004_03_invariant_features/InvariantFeatures.ppt).

Image Registration
Motion Estimation and Optical Flow. Example: Mosaicing (Panorama).
Assumes: constant brightness, small motion, spatial coherence.
Affected by: window size, image noise, numerical convergence.
M. Brown and D. G. Lowe. Recognising Panoramas. ICCV 2003.

Example: 3D Reconstruction
Source: http://www.photogrammetry.ethz.ch/general/persons/fabio/fabio_spie0102.pdf

Image Registration
How do we align two images automatically? Two broad approaches:
- Feature-based registration: find a few matching features in both images, then compute the alignment.
- Direct (appearance-based) registration: search for the alignment where most pixels agree.

Direct Method (brute force)
The simplest approach is a brute-force search. We need to define an image distance function (SSD, normalized correlation, mutual information, etc.) and search over all parameters within a reasonable range, e.g. for translation:

    for dx = x0:step:x1
        for dy = y0:step:y1
            calculate Dist(image1(x,y), image2(x+dx, y+dy))
        end
    end

What if we want to search for a more complicated transformation, e.g. projective?

    \begin{bmatrix} x' \\ y' \\ 1 \end{bmatrix} = \begin{bmatrix} a & b & c \\ d & e & f \\ g & h & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix}

    for a = a0:astep:a1
     for b = b0:bstep:b1
      for c = c0:cstep:c1
       for d = d0:dstep:d1
        for e = e0:estep:e1
         for f = f0:fstep:f1
          for g = g0:gstep:g1
           for h = h0:hstep:h1
            calculate Dist(image1, H(image2))
    end; end; end; end; end; end; end; end;
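As a concrete illustration of the brute-force translation search above, here is a minimal Python sketch (mine, not from the slides) that minimizes the SSD over integer shifts; the shift range and step are illustrative choices, and image1, image2 are assumed to be same-sized grayscale NumPy arrays.

    import numpy as np

    def brute_force_translation(image1, image2, max_shift=20, step=1):
        """Return the integer shift (dx, dy) that minimizes the SSD between the images."""
        best_cost, best_shift = np.inf, (0, 0)
        m = max_shift  # crop margin so wrapped-around borders are ignored
        for dy in range(-max_shift, max_shift + 1, step):
            for dx in range(-max_shift, max_shift + 1, step):
                shifted = np.roll(np.roll(image2, dy, axis=0), dx, axis=1)
                diff = image1[m:-m, m:-m].astype(np.float64) - shifted[m:-m, m:-m]
                cost = np.sum(diff ** 2)  # SSD as the image distance function
                if cost < best_cost:
                    best_cost, best_shift = cost, (dx, dy)
        return best_shift

The quadratic growth of the search space with each added parameter is exactly the problem noted next.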

Problems with brute force
Not realistic for a large number of parameters. Alternatives:
- Reduce the parameter range using pyramids.
- Gradient descent on the error function.

Direct vs. Feature-Based Registration
Direct methods: can be applied locally (optical flow, video coding); can handle complicated motions; but gradient-based methods need a good initial guess.
Feature-based methods: fast; no need for an initial guess; but can handle only global motion models.

Feature-based registration
Detect feature points in both images.
Find corresponding pairs.

Feature-based registration
Detect feature points in both images.
Find corresponding pairs.
Compute the image transformation.

Features: issues to be addressed
What are good features to extract? They should be distinctive and invariant to different acquisition conditions (different viewpoints, different illuminations, different cameras, etc.).
How can we find corresponding features in both images? (In some regions there is no chance to match!)

Feature Correspondence - Example
Given an image point in the left image, what is the corresponding point in the right image, i.e. the projection of the same 3-D point?

Image Features
Feature detectors: where. Feature descriptors: what.
Methods: Harris Corner Detector (multi-scale Harris), SIFT (Scale Invariant Feature Transform).

Invariant Feature Descriptors
Schmid & Mohr 1997, Lowe 1999, Baumberg 2000, Tuytelaars & Van Gool 2000, Mikolajczyk & Schmid 2001, Brown & Lowe 2002, Matas et al. 2002, Schaffalitzky & Zisserman 2002.

Harris Corner Detector
C. Harris, M. Stephens. A Combined Corner and Edge Detector. 1988.

Harris Detector: Basic Idea
We should easily recognize a corner by looking through a small window: shifting the window in any direction should give a large change in intensity.
- Flat region: no change in any direction.
- Edge: no change along the edge direction.
- Corner: significant change in all directions.

Harris Detector: Mathematics
Intensity change for a shift [u, v]:

    E(u,v) = \sum_{x,y} w(x,y) \, [\, I(x+u, y+v) - I(x,y) \,]^2

Here I(x+u, y+v) - I(x,y) is the shifted intensity minus the original intensity, and w(x,y) is the window function: either 1 inside the window and 0 outside, or a Gaussian.

Harris Detector: Mathematics
For small shifts [u, v], the first-order Taylor expansion I(x+u, y+v) \approx I(x,y) + u I_x + v I_y gives the bilinear approximation

    E(u,v) \approx \sum_{x,y} w(x,y) \, (u I_x + v I_y)^2 = [u \; v] \, M \begin{bmatrix} u \\ v \end{bmatrix}

where M is a 2x2 matrix computed from the image derivatives:

    M = \sum_{x,y} w(x,y) \begin{bmatrix} I_x^2 & I_x I_y \\ I_x I_y & I_y^2 \end{bmatrix}

Harris Detector: Mathematics
Denote by e_i the i-th eigenvector of M, with eigenvalue \lambda_i, so that e_i^T M e_i = \lambda_i > 0.
Intensity change in the shifting window: eigenvalue analysis. With \lambda_1, \lambda_2 the eigenvalues of M:
- \arg\max_{u,v} E(u,v) = e_{max}, and E(e_{max}) = \lambda_{max}.
- The ellipse E(u,v) = const has its short axis, of length (\lambda_{max})^{-1/2}, along the direction of fastest change, and its long axis, of length (\lambda_{min})^{-1/2}, along the direction of slowest change.
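To make the definition of M concrete, here is a short sketch (mine, not from the slides) that computes the per-pixel entries of M with SciPy, using Sobel derivatives and a Gaussian window function w(x, y):

    from scipy.ndimage import gaussian_filter, sobel

    def structure_tensor(img, sigma=1.5):
        """Per-pixel entries of the Harris matrix M for a grayscale float image."""
        Ix = sobel(img, axis=1)  # derivative along x (columns)
        Iy = sobel(img, axis=0)  # derivative along y (rows)
        # Gaussian window w(x, y) applied to the products of derivatives.
        Sxx = gaussian_filter(Ix * Ix, sigma)
        Sxy = gaussian_filter(Ix * Iy, sigma)
        Syy = gaussian_filter(Iy * Iy, sigma)
        return Sxx, Sxy, Syy  # M = [[Sxx, Sxy], [Sxy, Syy]] at each pixel

The eigenvalues \lambda_1, \lambda_2 at any pixel can then be obtained from the 2x2 matrix [[Sxx, Sxy], [Sxy, Syy]], e.g. with np.linalg.eigvalsh.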

Harris Detector: Mathematics
Classification of image points using the eigenvalues of M:
- Corner: \lambda_1 and \lambda_2 are both large and \lambda_1 ~ \lambda_2; E increases in all directions.
- Edge: \lambda_1 >> \lambda_2 (or \lambda_2 >> \lambda_1); E changes only across the edge.
- Flat region: \lambda_1 and \lambda_2 are small; E is almost constant in all directions.

Harris Detector: Mathematics
Measure of corner response (without explicitly calculating the eigenvalues):

    R = \frac{\det M}{\operatorname{trace} M}, \qquad \det M = \lambda_1 \lambda_2, \qquad \operatorname{trace} M = \lambda_1 + \lambda_2

R is associated with the smallest eigenvalue (why?). [Figure: contour plot of R as a function of \lambda_1 and \lambda_2.]

Harris Corner Detector - The Algorithm
Find points with a large corner response function R (R > threshold), and take the points that are local maxima of R.

Harris Detector: Workflow
1. Compute the corner response R.
2. Find points with a large corner response: R > threshold.
3. Take only the points that are local maxima of R.
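The three workflow steps map directly onto a few lines of OpenCV. This is a hedged sketch: the threshold fraction, the 3x3 neighborhood, the Harris parameters and the file name are illustrative choices, not values from the slides.

    import cv2
    import numpy as np

    img = cv2.imread("left.jpg", cv2.IMREAD_GRAYSCALE)           # hypothetical input image
    R = cv2.cornerHarris(np.float32(img), blockSize=2, ksize=3, k=0.04)

    strong = R > 0.01 * R.max()                                   # large corner response
    local_max = R == cv2.dilate(R, np.ones((3, 3), np.uint8))     # local maxima of R
    corners = np.argwhere(strong & local_max)                     # (row, col) corner points

Note that OpenCV's cornerHarris computes the response R = det M - k (trace M)^2 rather than the det/trace measure above, but the thresholding and non-maximum suppression steps are the same.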

Harris Detector: Some Properties
- Rotation invariance: the ellipse rotates but its shape (i.e. the eigenvalues) remains the same, so the corner response R is invariant to image rotation.
- Partial invariance to affine intensity changes: only derivatives are used, so there is invariance to an intensity shift I -> I + b; under an intensity scaling I -> a I the response is scaled too, so a fixed threshold may gain or lose points.
- But: non-invariant to spatial scale! A structure that is a corner at one scale may look like a collection of edges when viewed through the same window at a finer scale, so all its points get classified as edges.

Scale Invariant Detection
Consider regions (e.g. circles) of different sizes around a point: regions of corresponding sizes will look the same in both images. The problem: how do we choose corresponding circles independently in each image? Solution: choose the scale of the best corner.
Design a function on the region (circle). Example: average intensity. For corresponding regions (even of different sizes) it will be the same. For a point in an image, we can consider this function as a function of region size (circle radius).
Common approach: take a local maximum of this function. Observation: the region size at which the maximum is achieved should co-vary with the image scale (if image 2 is a half-size version of image 1, the maximizing region sizes satisfy s_2 = s_1 / 2). Important: this scale-invariant region size is found in each image independently!
[Plot: f as a function of region size for Image 1 and Image 2 (scale = 1/2), with maxima at region sizes s_1 and s_2.]
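A rough sketch of this idea (mine, not from the slides), using the scale-normalized Laplacian of Gaussian as the function f of region size; the sigma range and the probed point are illustrative, and the code recomputes the full LoG per scale only for simplicity:

    import numpy as np
    from scipy.ndimage import gaussian_laplace

    def characteristic_scale(img, row, col, sigmas=np.linspace(1.0, 8.0, 15)):
        """Return the sigma that maximizes the scale-normalized |LoG| response at (row, col)."""
        responses = [s ** 2 * abs(gaussian_laplace(img, s)[row, col]) for s in sigmas]
        return sigmas[int(np.argmax(responses))]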

Scale Invariant Detection
A good function for scale detection has one stable, sharp peak over region size; functions with flat or noisy profiles are bad. For usual images a good function is one that responds to contrast (sharp local intensity change).

Harris-Laplacian Point Detector
Harris-Laplacian: find points that are local maxima of the Harris corner measure in space (x, y) and of the Laplacian across scale, computed over a set of Laplacian images.
K. Mikolajczyk, C. Schmid. Indexing Based on Scale Invariant Interest Points. ICCV 2001.

Memo: Gaussian / Laplacian Pyramids
Gaussian pyramid: repeatedly smooth and subsample the image. Laplacian pyramid: each level is the difference between a Gaussian level and the expanded (upsampled) version of the next coarser level.
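A minimal sketch of the pyramid memo above with OpenCV (the number of levels and the use of a float grayscale image are my assumptions): each Laplacian level is a Gaussian level minus the expanded next-coarser level.

    import cv2

    def build_pyramids(img, levels=4):
        gaussian = [img]
        for _ in range(levels - 1):
            gaussian.append(cv2.pyrDown(gaussian[-1]))              # blur + subsample
        laplacian = []
        for i in range(levels - 1):
            h, w = gaussian[i].shape[:2]
            expanded = cv2.pyrUp(gaussian[i + 1], dstsize=(w, h))   # expand coarser level
            laplacian.append(cv2.subtract(gaussian[i], expanded))   # level - expand(next)
        laplacian.append(gaussian[-1])                              # keep the coarsest level
        return gaussian, laplacian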

SIFT - Scale Invariant Feature Transform
Find local maxima of the Laplacian in space and scale.
David G. Lowe, Distinctive image features from scale-invariant keypoints, International Journal of Computer Vision, 60, 2 (2004), pp. 91-110.

SIFT - Point Detection
Construct a scale space with increasing \sigma:
- First octave: G(\sigma)*I, G(k\sigma)*I, G(k^2\sigma)*I, ...
- Second octave: G(2\sigma)*I, G(2k\sigma)*I, G(2k^2\sigma)*I, ...

SIFT point detection
Scale-space extrema detection: form the difference-of-Gaussians images D(\sigma), D(k\sigma), D(k^2\sigma), ... and choose all extrema within a 3x3x3 neighborhood (8 neighbors at the same scale plus 9 at the scale above and 9 at the scale below).
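One octave of this construction can be sketched as follows (sigma = 1.6 and three scales per octave are commonly cited choices, not values taken from the slide; img is assumed to be a grayscale float array):

    import cv2

    def dog_octave(img, sigma=1.6, scales_per_octave=3):
        """Return the difference-of-Gaussians images of one octave."""
        k = 2.0 ** (1.0 / scales_per_octave)
        blurred = [cv2.GaussianBlur(img, (0, 0), sigma * (k ** i))
                   for i in range(scales_per_octave + 3)]
        # D(k^i * sigma) = G(k^(i+1) * sigma) * I - G(k^i * sigma) * I
        return [blurred[i + 1] - blurred[i] for i in range(len(blurred) - 1)]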

SIFT point detection
Example: a 233x189 image yields 832 SIFT (DoG) extrema.

Scale Invariant Detection: Summary
Given: two images of the same scene with a large scale difference between them.
Goal: find the same interest points independently in each image.
Solution: search for maxima of suitable functions in scale and in space (over the image).
Methods:
1. Harris-Laplacian [Mikolajczyk, Schmid]: maximize the Laplacian over scale and the Harris measure of corner response over the image.
2. SIFT [Lowe]: maximize the Laplacian over scale and space.

One more thing: feature selection
We want the feature points to be distributed approximately evenly over the image.

Adaptive Non-maximal Suppression
Desired: a fixed number of features per image, evenly distributed spatially. Sort the points by non-maximal suppression radius [Brown, Szeliski, Winder, CVPR 05].

Feature descriptors
We know how to detect points; the next question is how to match them. Solution: generate point descriptors. A point descriptor should be:
1. Invariant
2. Distinctive

Invariant Point Descriptors
Image content is transformed into local feature coordinates that are invariant to translation, rotation, scale, and other imaging parameters.

Descriptors Invariant to Scale + Rotation + Contrast
- Local scale is given by the feature detection procedure.
- Local orientation: find the dominant direction of the gradient.
- Compute image descriptors/features relative to this orientation (e.g. derivatives), normalized with respect to scale, orientation, and patch contrast.
References: [1] K. Mikolajczyk, C. Schmid. Indexing Based on Scale Invariant Interest Points. ICCV 2001. [2] D. Lowe. Distinctive Image Features from Scale-Invariant Keypoints. IJCV 2004.

Descriptors Invariant to Rotation, Scale, Intensity
SIFT = Scale Invariant Feature Transform. Each SIFT feature point is associated with a location, an orientation, and a scale; the SIFT descriptor is a vector of 128 values, each in [0, 1].
David G. Lowe, Distinctive image features from scale-invariant keypoints, International Journal of Computer Vision, 60, 2 (2004), pp. 91-110.

SIFT - Step 1: Interest Point Detection
DoG (difference of Gaussians, a Laplacian-pyramid-like construction): take differences of adjacent levels of the Gaussian scale space, giving D(\sigma), D(k\sigma), D(k^2\sigma), ...
Scale-space extrema detection: choose all extrema within a 3x3x3 neighborhood.
Detections are made at multiple scales; some of the detected SIFT frames are shown at http://www.vlfeat.org/overview/sift.html.

SIFT - Step 2: Interest Point Localization & Filtering
1) Improve the localization of interest points. The detected extrema lie on the sampling grid, while the true extrema generally fall between samples (for the 233x189 example image, 832 DoG extrema are detected). For each maximum, fit a quadratic function and compute its center with sub-pixel accuracy by setting the first derivative to zero.
2) Remove bad interest points:
a) Remove points with low contrast.
b) Remove edge points (both eigenvalues of the Hessian matrix must be large).
For the example image: 729 points are left after the peak value threshold (from 832), and 536 after testing the ratio of principal curvatures.
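The edge rejection in step 2b can be sketched as the classic test on the ratio of principal curvatures of the DoG image D at a detected extremum (row r0, column c0). The threshold r_thresh = 10 is the commonly cited value; the finite-difference Hessian below is my assumption about the discretization.

    def passes_edge_test(D, r0, c0, r_thresh=10.0):
        """Reject edge-like keypoints: both eigenvalues of the 2x2 Hessian must be large."""
        Dxx = D[r0, c0 + 1] - 2.0 * D[r0, c0] + D[r0, c0 - 1]
        Dyy = D[r0 + 1, c0] - 2.0 * D[r0, c0] + D[r0 - 1, c0]
        Dxy = (D[r0 + 1, c0 + 1] - D[r0 + 1, c0 - 1]
               - D[r0 - 1, c0 + 1] + D[r0 - 1, c0 - 1]) / 4.0
        tr, det = Dxx + Dyy, Dxx * Dyy - Dxy * Dxy
        if det <= 0:
            return False                      # curvatures of opposite sign: reject
        # tr^2 / det grows as the two eigenvalues become more unequal (edge-like).
        return tr * tr / det < (r_thresh + 1.0) ** 2 / r_thresh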

SIFT Descriptor Vector - Step 3: Select a canonical orientation
Each SIFT interest point is associated with a location (x, y) and a scale (\sigma). Compute the gradient magnitude and orientation (m, \theta) at each SIFT point, create a histogram over [0, 2\pi) of the local gradient directions computed at the selected scale, and assign the canonical orientation at the peak of the smoothed histogram.

SIFT Descriptor Vector - Step 4: Compute the feature vector
Gradients are determined in a 16x16 window around the SIFT point in scale space. For each 4x4 sub-window, a histogram of the gradients over 8 relative directions is computed, producing a 4x4x8 = 128-dimensional feature vector. (Image from: Jonas Hurrelmann.)
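In practice the whole detector-plus-descriptor pipeline described above is available in OpenCV (version 4.4 or later, where SIFT lives in the main module). A minimal sketch, with a hypothetical input file name:

    import cv2

    img = cv2.imread("left.jpg", cv2.IMREAD_GRAYSCALE)    # hypothetical input file
    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.detectAndCompute(img, None)
    # Each element of keypoints carries a location (.pt), scale (.size) and orientation (.angle);
    # descriptors is an N x 128 array, one row per keypoint.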

SIFT - Scale Invariant Feature Transform
Empirically found [2] to show very good performance: invariant to image rotation, scale and intensity change, and robust to moderate affine transformations (e.g. scale = 2.5, rotation = 45 degrees).
[1] D. Lowe. Distinctive Image Features from Scale-Invariant Keypoints. IJCV 2004.
[2] K. Mikolajczyk, C. Schmid. A Performance Evaluation of Local Descriptors. CVPR 2003.

3D Object Recognition
Only 3 keys are needed for recognition, so extra keys provide robustness, e.g. for recognition under occlusion.

Test of Illumination Robustness
Same image under differing illumination: 273 keys verified in the final match.

Cases where SIFT didn't work
Same object under differing illumination: 43 keypoints in the left image and the corresponding closest keypoints on the right (one for each).

Feature matching
- Exhaustive search: for each feature in one image, look at all the features in the other image(s).
- Hashing: compute a short descriptor from each feature vector, or hash longer descriptors (randomly).
- Nearest-neighbor techniques: kd-trees and their variants.
What about outliers?
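A hedged sketch of nearest-neighbor matching with a kd-tree (OpenCV's FLANN matcher) plus Lowe's ratio test, assuming des1 and des2 are the SIFT descriptor arrays of the two images; the 0.75 ratio and the index/search parameters are common choices, not values from the slides.

    import cv2

    FLANN_INDEX_KDTREE = 1
    flann = cv2.FlannBasedMatcher(dict(algorithm=FLANN_INDEX_KDTREE, trees=5),
                                  dict(checks=50))
    knn = flann.knnMatch(des1, des2, k=2)                 # two nearest neighbors per feature
    good = [m for m, n in knn if m.distance < 0.75 * n.distance]   # Lowe's ratio test

Even after the ratio test some matches are still outliers, which is what RANSAC addresses next.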

Matching features
All red points are outliers. What do we do about the bad matches?

RANSAC
RANSAC = RANdom SAmple Consensus: an algorithm for robust fitting of models in the presence of many data outliers. Given N data points {x_i}, assume that the majority of them are generated from a model with parameters \Theta, and try to recover \Theta.

Example: line fitting
Model fitting: draw n = 2 points at random and fit a line through them. Measure the distances of all other points to the line and count the inliers: here c = 3.

Another trial gives c = 3 again; the best model found has c = 15 inliers.

Matching features with RANSAC
Select one match, count the inliers that agree with it, and compare against the ground truth.

Least squares fit
For the translation example: find the average translation vector over the inliers.

RANSAC algorithm
Run k times (how many times? see below):
(1) Draw n samples randomly (how big should n be? smaller is better; see below).
(2) Fit the parameters \Theta with these n samples.
(3) For each of the other N - n points, calculate its distance to the fitted model (how to define the distance? it depends on the problem) and count the number of inliers c.
Output the \Theta with the largest c.

How to determine n? The minimum n depends on the motion model:
- Translation, 2 dof: \begin{bmatrix} 1 & 0 & t_x \\ 0 & 1 & t_y \\ 0 & 0 & 1 \end{bmatrix}, n = 1
- Rigid, 3 dof: \begin{bmatrix} r_{11} & r_{12} & t_x \\ r_{21} & r_{22} & t_y \\ 0 & 0 & 1 \end{bmatrix}, n = 2
- Affine, 6 dof: \begin{bmatrix} a_{11} & a_{12} & t_x \\ a_{21} & a_{22} & t_y \\ 0 & 0 & 1 \end{bmatrix}, n = 3
- Projective, 8 dof: \begin{bmatrix} h_{11} & h_{12} & h_{13} \\ h_{21} & h_{22} & h_{23} \\ h_{31} & h_{32} & h_{33} \end{bmatrix}, n = 4

How to determine k? Let p be the probability that a point is a real inlier and P the desired probability of success after k trials. Then p^n is the probability that all n samples are inliers, 1 - p^n is the probability that a trial fails, and (1 - p^n)^k is the probability of failure after k trials, so

    P = 1 - (1 - p^n)^k  \quad\Rightarrow\quad  k = \frac{\log(1 - P)}{\log(1 - p^n)}

For P = 0.99:

    n    3     6     6
    p    0.5   0.6   0.5
    k    35    97    293
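A tiny helper (my own, for illustration) that evaluates the formula for k and reproduces the table above:

    import math

    def ransac_trials(p, n, P=0.99):
        """Trials k so that, with probability P, at least one n-point sample is all inliers."""
        return math.ceil(math.log(1.0 - P) / math.log(1.0 - p ** n))

    print(ransac_trials(0.5, 3), ransac_trials(0.6, 6), ransac_trials(0.5, 6))
    # -> 35 97 293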

RANSAC for estimating a homography
RANSAC loop:
1. Select four feature pairs (at random).
2. Compute the homography H (exact).
3. Compute the inliers: pairs for which SSD(p_i', H p_i) < \varepsilon.
4. Keep the largest set of inliers.
5. Re-compute the least-squares estimate of H on all of the inliers.
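OpenCV bundles this loop, including the final least-squares refit on the inliers, in cv2.findHomography. A hedged sketch, assuming kp1, kp2 and good come from the matching step earlier; the 3-pixel reprojection threshold is an illustrative choice.

    import cv2
    import numpy as np

    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    # H maps points of image 1 into image 2; inlier_mask flags the matches consistent with H.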

Example Application - Mosaicing
1D rotations (\theta): ordering the matching images.
2D rotations (\theta, \phi): ordering the matching images.
Finding the panoramas.

Finding the panoramas: results.

Next Lectures
How to build panoramas, and how to compose image parts seamlessly.

Feature Based Image Registration - Summary
Detect feature points in both images (Harris corner detection, SIFT feature detector).
Find corresponding pairs.
Compute the image transformation (RANSAC - Random Sample Consensus).