CS5670: Computer Vision

CS5670: Computer Vision Noah Snavely Lecture 4: Harris corner detection

Reading: Szeliski 4.1

Announcements Project 1 (Hybrid Images) code due next Wednesday, Feb 14, by 11:59pm Artifacts due Friday, Feb 16, by 11:59pm Office hour rooms coming soon Quiz this Wednesday, 2/7 at the beginning of class (10 minutes)

Last time: Sampling & interpolation. Key points: Downsampling an image can cause aliasing. Better is to blur ("pre-filter") to remove high frequencies, then downsample. If you repeatedly blur and downsample by 2x, you get a Gaussian pyramid. Upsampling an image requires interpolation; this can be posed as convolution with a reconstruction kernel.
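A minimal sketch of both operations with NumPy/SciPy, assuming a grayscale image array; the level count and sigma below are illustrative defaults, not values from the lecture.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def gaussian_pyramid(image, levels=4, sigma=1.0):
    """Repeatedly pre-filter (blur) and downsample by 2x."""
    pyramid = [image.astype(float)]
    for _ in range(levels - 1):
        blurred = gaussian_filter(pyramid[-1], sigma)  # remove high frequencies first
        pyramid.append(blurred[::2, ::2])              # then downsample by 2x
    return pyramid

def upsample(image, factor=2, order=1):
    """Upsample by interpolation: order 0 = nearest-neighbor, 1 = bilinear, 3 = bicubic."""
    return zoom(image.astype(float), factor, order=order)
```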

Image interpolation. Original image (shown at 10x). Nearest-neighbor interpolation. Bilinear interpolation. Bicubic interpolation.

Image interpolation Also used for resampling

Raster-to-vector graphics

Depixelating Pixel Art

Modern methods. From Romano et al., RAISR: Rapid and Accurate Image Super Resolution, https://arxiv.org/abs/1606.01299

Questions?

Feature extraction: Corners and blobs

Motivation: Automatic panoramas Credit: Matt Brown

Motivation: Automatic panoramas. GigaPan: http://gigapan.com/ Also see Google Zoom Views: https://www.google.com/culturalinstitute/beta/project/gigapixels

Why extract features? Motivation: panorama stitching. We have two images; how do we combine them?

Why extract features? Motivation: panorama stitching. We have two images; how do we combine them? Step 1: extract features. Step 2: match features.

Why extract features? Motivation: panorama stitching. We have two images; how do we combine them? Step 1: extract features. Step 2: match features. Step 3: align images.

Application: Visual SLAM (aka Simultaneous Localization and Mapping)

Image matching by Diva Sian by swashford

Harder case by Diva Sian by scgbt

Harder still?

Answer below (look for tiny colored squares...). NASA Mars Rover images with SIFT feature matches.

Feature Matching

Feature Matching

Invariant local features. Find features that are invariant to transformations. Geometric invariance: translation, rotation, scale. Photometric invariance: brightness, exposure, ... Feature Descriptors.

Advantages of local features. Locality: features are local, so robust to occlusion and clutter. Quantity: hundreds or thousands in a single image. Distinctiveness: can differentiate a large database of objects. Efficiency: real-time performance achievable.

More motivation Feature points are used for: Image alignment (e.g., mosaics) 3D reconstruction Motion tracking (e.g. for AR) Object recognition Image retrieval Robot navigation other

Approach. 1. Feature detection: find it. 2. Feature descriptor: represent it. 3. Feature matching: match it. Feature tracking: track it over time, when we have motion (e.g., video).

Kristen Grauman. Local features: main components. 1) Detection: Identify the interest points. 2) Description: Extract a vector feature descriptor surrounding each interest point, x^(1) = [x_1^(1), ..., x_d^(1)]. 3) Matching: Determine correspondence between descriptors in two views, x^(2) = [x_1^(2), ..., x_d^(2)].
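As a toy illustration of the matching step, here is a hedged sketch assuming each view's descriptors are already stacked row-wise in NumPy arrays (the detection and description steps are not shown):

```python
import numpy as np

def match_descriptors(desc1, desc2):
    """Nearest-neighbor matching by SSD distance.

    desc1: (n1, d) descriptors from view 1; desc2: (n2, d) descriptors from view 2.
    Returns, for each descriptor in view 1, the index of its best match in view 2."""
    # Pairwise squared distances between every descriptor pair
    dists = ((desc1[:, None, :] - desc2[None, :, :]) ** 2).sum(axis=2)
    matches = dists.argmin(axis=1)
    scores = dists[np.arange(len(desc1)), matches]
    return matches, scores
```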

What makes a good feature? Snoop demo

Want uniqueness. Look for image regions that are unusual; they lead to unambiguous matches in other images. How to define "unusual"?

Local measures of uniqueness Suppose we only consider a small window of pixels What defines whether a feature is a good or bad candidate? Credit: S. Seitz, D. Frolova, D. Simakov

Local measures of uniqueness. How does the window change when you shift it? A good feature is one where shifting the window in any direction causes a big change. "Flat" region: no change in all directions. "Edge": no change along the edge direction. "Corner": significant change in all directions. Credit: S. Seitz, D. Frolova, D. Simakov

Harris corner detection: the math. Consider shifting the window W by (u,v): how do the pixels in W change? Compare each pixel before and after by summing up the squared differences (SSD). This defines an SSD error E(u,v): E(u,v) = Σ_{(x,y) in W} [I(x+u, y+v) - I(x,y)]². We are happy if this error is high. Slow to compute exactly for each pixel and each offset (u,v).
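A brute-force sketch of E(u,v) for a single window and offset, assuming a grayscale NumPy image and ignoring border handling; evaluating this for every pixel and every offset makes the cost mentioned above apparent.

```python
import numpy as np

def ssd_error(I, x, y, u, v, half=3):
    """E(u,v) for the window W centered at (x, y):
    sum of squared differences between W and W shifted by (u, v)."""
    I = I.astype(float)
    W       = I[y - half:y + half + 1, x - half:x + half + 1]
    W_shift = I[y + v - half:y + v + half + 1, x + u - half:x + u + half + 1]
    return ((W_shift - W) ** 2).sum()
```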

Small motion assumption. Taylor series expansion of I: I(x+u, y+v) ≈ I(x,y) + I_x u + I_y v, where I_x and I_y are the partial derivatives of I. If the motion (u,v) is small, then the first-order approximation is good. Plugging this into the formula on the previous slide:

Corner detection: the math. Consider shifting the window W by (u,v); define an SSD error E(u,v): E(u,v) = Σ_{(x,y) in W} [I(x+u, y+v) - I(x,y)]² ≈ Σ_{(x,y) in W} [I_x u + I_y v]².

Corner detection: the math. Consider shifting the window W by (u,v); define an SSD error E(u,v): E(u,v) ≈ Σ_{(x,y) in W} [I_x u + I_y v]² = [u v] H [u v]^T, where H = Σ_{(x,y) in W} [[I_x², I_x I_y], [I_x I_y, I_y²]]. Thus, E(u,v) is locally approximated as a quadratic error function.

The second moment matrix The surface E(u,v) is locally approximated by a quadratic form. Let s try to understand its shape.

E(u,v) for a horizontal edge (surface plot over shifts (u, v)).

E(u,v) for a vertical edge (surface plot over shifts (u, v)).

General case. We can visualize H as an ellipse with axis lengths determined by the eigenvalues of H and orientation determined by the eigenvectors of H. Ellipse equation: [u v] H [u v]^T = const. λ_max, λ_min: eigenvalues of H. The eigenvector of λ_max points in the direction of fastest change and that of λ_min in the direction of slowest change; the corresponding ellipse axis lengths are (λ_max)^(-1/2) and (λ_min)^(-1/2).

Quick eigenvalue/eigenvector review. The eigenvectors of a matrix A are the vectors x that satisfy Ax = λx. The scalar λ is the eigenvalue corresponding to x. The eigenvalues are found by solving det(A - λI) = 0. In our case, A = H is a 2x2 matrix, so we have det([[h_11 - λ, h_12], [h_21, h_22 - λ]]) = 0. The solution: λ_± = (1/2)[(h_11 + h_22) ± sqrt(4 h_12 h_21 + (h_11 - h_22)²)]. Once you know λ, you find x by solving (A - λI) x = 0.
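A quick numerical check of the 2x2 case, assuming H is symmetric (as the second moment matrix is); it compares the closed-form roots of det(H - λI) = 0 above against NumPy's eigensolver. The example matrix is arbitrary.

```python
import numpy as np

def eig2x2_sym(H):
    """Closed-form eigenvalues of a symmetric 2x2 matrix [[a, b], [b, c]]."""
    a, b, c = H[0, 0], H[0, 1], H[1, 1]
    mean = 0.5 * (a + c)
    radius = np.sqrt(0.25 * (a - c) ** 2 + b ** 2)
    return mean - radius, mean + radius   # (lambda_min, lambda_max)

H = np.array([[2.0, 1.0], [1.0, 3.0]])   # example symmetric matrix
print(eig2x2_sym(H))                      # closed form
print(np.linalg.eigvalsh(H))              # NumPy, eigenvalues in ascending order
```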

Corner detection: the math. Eigenvalues and eigenvectors of H define the shift directions with the smallest and largest change in error: x_max = direction of largest increase in E; λ_max = amount of increase in direction x_max; x_min = direction of smallest increase in E; λ_min = amount of increase in direction x_min.

Corner detection: the math. How are λ_max, x_max, λ_min, and x_min relevant for feature detection? What's our feature scoring function?

Corner detection: the math. How are λ_max, x_max, λ_min, and x_min relevant for feature detection? What's our feature scoring function? Want E(u,v) to be large for small shifts in all directions: the minimum of E(u,v) should be large, over all unit vectors [u v]; this minimum is given by the smaller eigenvalue (λ_min) of H.

Interpreting the eigenvalues. Classification of image points using the eigenvalues of M: Corner: λ_1 and λ_2 are large, λ_1 ~ λ_2; E increases in all directions. Edge: λ_1 >> λ_2 (or λ_2 >> λ_1). Flat region: λ_1 and λ_2 are small; E is almost constant in all directions.

Corner detection summary. Here's what you do: Compute the gradient at each point in the image. Create the H matrix from the entries in the gradient. Compute the eigenvalues. Find points with large response (λ_min > threshold). Choose those points where λ_min is a local maximum as features.

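A compact sketch of these steps with NumPy/SciPy, scoring by λ_min as in the summary above; the window sigma, relative threshold, and non-maximum-suppression neighborhood size are placeholder choices, not the assignment's settings.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel, maximum_filter

def corners_lambda_min(I, sigma=1.5, rel_thresh=0.01, nms_size=5):
    """Detect corners by thresholding the smaller eigenvalue of H at each pixel."""
    I = I.astype(float)
    # 1. Gradient at each point in the image
    Ix = sobel(I, axis=1)
    Iy = sobel(I, axis=0)
    # 2. Entries of H, accumulated over a Gaussian-weighted window
    Sxx = gaussian_filter(Ix * Ix, sigma)
    Syy = gaussian_filter(Iy * Iy, sigma)
    Sxy = gaussian_filter(Ix * Iy, sigma)
    # 3. Smaller eigenvalue of [[Sxx, Sxy], [Sxy, Syy]], in closed form
    lam_min = 0.5 * (Sxx + Syy) - np.sqrt(0.25 * (Sxx - Syy) ** 2 + Sxy ** 2)
    # 4. Large response, and 5. local maximum (non-maximum suppression)
    keep = (lam_min > rel_thresh * lam_min.max()) & \
           (lam_min == maximum_filter(lam_min, size=nms_size))
    return np.argwhere(keep), lam_min   # (row, col) corner locations and the response map
```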
The Harris operator. λ_min is a variant of the "Harris operator" for feature detection: f = (λ_1 λ_2) / (λ_1 + λ_2) = det(H) / trace(H). The trace is the sum of the diagonals, i.e., trace(H) = h_11 + h_22. Very similar to λ_min but less expensive (no square root). Called the Harris Corner Detector or Harris Operator. Lots of other detectors; this is one of the most popular.

The Harris operator (example response image).

Harris detector example

f value (red = high, blue = low)

Threshold (f > value)

Find local maxima of f

Harris features (in red)

Weighting the derivatives. In practice, using a simple window W doesn't work too well. Instead, we'll weight each derivative value based on its distance from the center pixel.

Harris Detector [Harris88]. Second moment matrix: μ(σ_I, σ_D) = g(σ_I) * [[I_x²(σ_D), I_x I_y(σ_D)], [I_x I_y(σ_D), I_y²(σ_D)]], with det M = λ_1 λ_2 and trace M = λ_1 + λ_2. 1. Image derivatives I_x, I_y (optionally, blur first). 2. Square of derivatives: I_x², I_y², I_x I_y. 3. Gaussian filter g(σ_I): g(I_x²), g(I_y²), g(I_x I_y). 4. Cornerness function (both eigenvalues are strong): har = det[μ(σ_I, σ_D)] - α [trace(μ(σ_I, σ_D))]² = g(I_x²) g(I_y²) - [g(I_x I_y)]² - α [g(I_x²) + g(I_y²)]². 5. Non-maxima suppression.
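Step 4 of the pipeline above as code, continuing from the Gaussian-smoothed products g(I_x²), g(I_y²), g(I_x I_y) computed in the earlier sketch; α = 0.04-0.06 is the commonly cited range, and the default below is an assumption rather than a value from this slide.

```python
def harris_cornerness(Sxx, Syy, Sxy, alpha=0.05):
    """har = det(mu) - alpha * trace(mu)^2, where mu is the second moment matrix
    built from the Gaussian-smoothed derivative products at each pixel."""
    det_mu = Sxx * Syy - Sxy ** 2
    trace_mu = Sxx + Syy
    return det_mu - alpha * trace_mu ** 2
```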

Harris Corners. Why so complicated? Can't we just check for regions with lots of gradients in the x and y directions? No! A diagonal line would satisfy that criterion. (Figure: the current window centered on a diagonal line.)

Questions?