AK Computer Vision Feature Point Detectors and Descriptors
1 AK Computer Vision Feature Point Detectors and Descriptors 1
2 Feature Point Detectors and Descriptors: Motivation 2
3 Step 1: Detect local features; detection should be invariant to scale and rotation, or to perspective transformation 3
4 Step 2: Rectify patch 4
5 Step 3: Build a description vector ("descriptor") 5
6 Step 4: Match the description vectors aka descriptors 6
7 7
8 Motivation Global image representations are difficult to handle Alternative: describe and match only local regions around interest points Increased robustness to: Occlusions Geometric transformations: non-rigid deformation, perspective, etc. Intra-category variations 9
10 Covariant vs. Invariant When a transformation is applied to an image, an invariant measure remains unchanged. a covariant measure changes in a way consistent with the image transformation IP Detector: Covariant detectors => Invariant descriptors 10
11 An Application Image Mosaicing: How to combine several, overlapping images 11
12 Robust Feature-Based Alignment Extract interest points and descriptors Compute putative matches Loop (RANSAC): Hypothesize transformation T, Verify transformation Source: L. Lazebnik 17
18 Structure from Motion 22
19 Detection of interesting image parts INTEREST POINTS 24
20 History of Interest Points First interest point detector in 1977, the Moravec operator 26
21 Harris corner detector Hessian detector Laplace variants Affine variants MSER SURF Edge Foci FAST... Interest Points 27
22 Interest Points HARRIS CORNERS 28
23 Harris Detector Introduced by Harris and Stephens in 1988 Basic assumption: Shifting a local patch in any direction should give a large change in intensity References: A combined corner and edge detector, Harris and Stephens, Alvey Vision Conference, 1988 29
24 Small Motion Assumption Taylor series expansion of I(x + u, y + v): I(x + u, y + v) ≈ I(x, y) + u I_x(x, y) + v I_y(x, y) If the motion (u, v) is small, then the first-order approximation is good 30
25 Local Feature Detection: The Math 31
26 Local Feature Detection: The Math E(u, v) = Σ_{(x,y)∈w} [I(x + u, y + v) − I(x, y)]² ≈ Σ_{(x,y)∈w} ( [u v] · [I_x, I_y]ᵀ )² = [u v] ( Σ_{(x,y)∈w} [ I_x²  I_xI_y ; I_xI_y  I_y² ] ) [u v]ᵀ 32
27 Local Feature Detection: The Math E(u, v) ≈ [u v] Q [u v]ᵀ, with Q = Σ_{(x,y)∈w} [ I_x²  I_xI_y ; I_xI_y  I_y² ] We are looking for image locations (x, y) such that E(u, v) is large for all directions [u, v] Eigenvalues of Q reveal the amount of intensity change in the two principal orthogonal gradient directions within the patch 33
28 Geometric Interpretation of Q: the iso-contour E(u, v) = const is an ellipse with semi-axis lengths (λmax)^(−1/2) and (λmin)^(−1/2) 34
29 Recall: Corners as distinctive interest points Edge: λ1 >> λ2 or λ2 >> λ1 Corner: λ1 and λ2 are large, λ1 ~ λ2 Flat region: λ1 and λ2 are small One way to score the cornerness: R = det(Q) − k·trace(Q)² = λ1λ2 − k(λ1 + λ2)² 35
30 Harris corner detector 1) Compute matrix Q for each pixel to get its cornerness score 2) Find points with large corner response (f > threshold) 3) Take the points of local maxima, i.e., perform non-maximum suppression 36
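The three steps above can be sketched in a few lines of Python (a minimal illustration, not the lecture's implementation: plain finite differences, a box window instead of a Gaussian, and the common choice k = 0.04):

```python
import numpy as np

def harris_response(img, k=0.04, win=1):
    # 1) image gradients (np.gradient returns derivatives along axis 0, then 1)
    Iy, Ix = np.gradient(img.astype(float))
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy

    def window_sum(a):  # sum over a (2*win+1)^2 box window around each pixel
        out = np.zeros_like(a)
        for dy in range(-win, win + 1):
            for dx in range(-win, win + 1):
                out += np.roll(np.roll(a, dy, axis=0), dx, axis=1)
        return out

    # 2) entries of the second-moment matrix Q, summed over the window
    Sxx, Syy, Sxy = window_sum(Ixx), window_sum(Iyy), window_sum(Ixy)
    # 3) Harris & Stephens score: det(Q) - k * trace(Q)^2
    return Sxx * Syy - Sxy ** 2 - k * (Sxx + Syy) ** 2

# Toy image: a bright square; corners score positive, edges negative, flat ~0
img = np.zeros((20, 20))
img[5:15, 5:15] = 1.0
R = harris_response(img)
```

Thresholding R and applying non-maximum suppression then yields the corner locations.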
31 Harris Detector: Steps 37
36 Harris Detector Example 42
37 Harris Properties Rotation invariant? Yes Scale invariant? No: at a fine scale, all points along a rounded corner are classified as edges; only at a coarser scale is the corner detected as such 43
38 Interest Points HESSIAN DETECTOR Hessian(I) = [ I_xx  I_xy ; I_xy  I_yy ] 44
39 Hessian Detector Select points where the Hessian determinant det(Hessian(I)) = I_xx I_yy − I_xy² is a local maximum, with Hessian(I) = [ I_xx  I_xy ; I_xy  I_yy ] 46
40 Problem with Harris/Hessian Detector Not scale invariant To overcome this problem: Automatic scale space detection required Finding characteristic scale 47
41 Automatic scale selection Intuition: Find scale that gives local maxima of some function f in both position and scale (Figure: f plotted against region size for Image 1 and Image 2; the maxima identify the corresponding scales s1 and s2) 49
42 Scale Invariant Detection Functions for determining scale: f = Kernel ∗ Image Kernels: L = σ² (G_xx(x, y, σ) + G_yy(x, y, σ)) (Laplacian of Gaussian) DoG = G(x, y, kσ) − G(x, y, σ) (Difference of Gaussians) where the Gaussian is G(x, y, σ) = 1/(2πσ²) · e^(−(x²+y²)/(2σ²)) Note: both kernels are invariant to scale and rotation 51
43 Affine Invariance Need to generalize uniform scale changes Local Estimation of structure à Second moment matrix 55
44 Affine Transformation Estimation Warp by affine transformation Q^(1/2), where Q is the auto-correlation (second moment) matrix 56
45 Interest Points DIFFERENCE OF GAUSSIANS 59
46 Laplacian of Gaussian (G_xx + G_yy) for feature point detection Laplacian operator 60
47 DoG Efficient Computation Computation in Gaussian scale pyramid (successive smoothings of the original image with increasing σ) Maxima selection in 3x3x3 neighborhood 62
48 Results: Lowe's DoG 64
49 Select Canonical Orientation Create histogram of local gradient directions (over 0 to 2π) computed over the image patch; Each gradient votes with its magnitude, weighted by its distance to the patch center; Assign canonical orientation at peak of smoothed histogram 66
50 Interest Points MAXIMALLY STABLE EXTREMAL REGIONS 68
51 MSER 1. Threshold an image at every intensity level 2. Find connected components 3. Build tree structure (nested components) 4. Find regions that are maximally stable w.r.t. their size, analyzing a stability criterion (MSER+ vs. MSER-) References: Robust wide baseline stereo from maximally stable extremal regions, Matas et al., BMVC 2002
53 Example Results: MSER 75
54 Affine Covariant Fit Ellipses to each region 76
55 Interest Points SURF 78
56 Efficient IP detector SURF Based on Hessian, approximated by box filters References: SURF: Speeded Up Robust Features, Bay et al., CVIU 2008
57 Methodology Using integral images for major speed up An integral image (summed area table) is an intermediate representation that contains, at each location, the sum of the gray-scale pixel values above and to the left of it The sum of values within any rectangle is then calculated using only three operations, independent of the rectangle's location and size 80
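The three-operation rectangle sum can be sketched as follows (a minimal numpy illustration; function names are ad hoc):

```python
import numpy as np

def integral_image(img):
    # S[y, x] holds the sum of img[:y, :x]; zero padding keeps indexing simple
    S = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
    S[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    return S

def rect_sum(S, y0, x0, y1, x1):
    # Sum over img[y0:y1, x0:x1] with exactly three additions/subtractions,
    # regardless of how large the rectangle is
    return S[y1, x1] - S[y0, x1] - S[y1, x0] + S[y0, x0]

img = np.arange(16).reshape(4, 4)
S = integral_image(img)
```

Box-filter responses of any size thus cost the same, which is what makes SURF's Hessian approximation fast.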
58 Interest Points FAST 83
59 FAST Use heuristic for identifying corner points: Compare intensities of 16 surrounding pixels to center pixel intensity Relies on tests I(p) > I(center) + t or I(p) < I(center) - t for each surrounding pixel p 84
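The segment test can be sketched as follows (a toy illustration: the offsets are the standard radius-3 Bresenham circle, and n = 9 contiguous pixels as in FAST-9):

```python
import numpy as np

# 16 offsets of the radius-3 Bresenham circle used by FAST
OFFSETS = [(-3, 0), (-3, 1), (-2, 2), (-1, 3), (0, 3), (1, 3), (2, 2), (3, 1),
           (3, 0), (3, -1), (2, -2), (1, -3), (0, -3), (-1, -3), (-2, -2), (-3, -1)]

def is_fast_corner(img, y, x, t=20, n=9):
    """Corner test: n contiguous ring pixels all brighter than center + t
    or all darker than center - t."""
    c = int(img[y, x])
    ring = [int(img[y + dy, x + dx]) for dy, dx in OFFSETS]
    labels = [1 if p > c + t else (-1 if p < c - t else 0) for p in ring]
    for sign in (1, -1):            # look for a run of n equal labels;
        run = 0                     # doubling the list handles the wrap-around
        for v in labels + labels:
            run = run + 1 if v == sign else 0
            if run >= n:
                return True
    return False

# Toy image (assumed example): a bright square on a dark background
img = np.zeros((16, 16), dtype=np.uint8)
img[8:, 8:] = 200
```

On this image the square's corner passes the test, while interior and straight-edge pixels do not.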
60 FAST Use machine learning to select the tests for faster rejection of non-corner pixels References: Machine learning for high-speed corner detection, Rosten and Drummond, ECCV 2006
61 Learning Decision Tree (ID3) to learn appearance of corner from training data ID3: supervised, deterministic (no randomness) machine learning method that learns from training data (Figure: the classic "Play Tennis?" decision-tree example) 86
62 Decision Tree Training data: Corner points detected by heuristic Splitting criterion: compare neighboring pixels to center pixel and make 3 splits (brighter, similar, darker) Select split that has lowest entropy Repeat until entropy = 0 in each leaf node n = 9 worked best Different non-maximum suppression 87
63 FAST 88
64 Interest Point Detectors Harris corner detector: Q = Σ_{(x,y)∈w} [ I_x²  I_xI_y ; I_xI_y  I_y² ] Hessian detector: Hessian(I) = [ I_xx  I_xy ; I_xy  I_yy ] MSER SURF FAST... 89
65 TILDE 90
66 Interest Points EVALUATION CRITERIA 91
67 Evaluation Repeatability: average number of corresponding regions detected in images under different geometric and photometric transformations 92
68 Institute for Computer Graphics and Vision Repeatability 93
69 Analyze Overlap 94
70 Overlap Criterion 95
71 Evaluation Repeatability: average number of corresponding regions detected in images under different geometric and photometric transformations Matching score: ratio between the number of correct matches and the smaller number of detected regions 96
72 Matching Score 97
73 Flat scenes Mikolajczyk & Schmid (2004), Mikolajczyk et al. (2004) MSER has highest repeatability (but lowest number) Harris and Hessian provide the most correspondences 3D objects Moreels & Perona (2006) Features on 3D objects are much more unstable All detectors perform poorly for large viewpoint changes 98
74 Evaluation Repeatability: average number of corresponding regions detected in images under different geometric and photometric transformations Matching score: ratio between the number of correct matches and the smaller number of detected regions Number of detected points 99
75 Evaluation Viewpoint change Zoom+Rotation 100
76 Evaluation Zoom+Rotation Blur JPEG Illumination 101
77 102
78 Code in Matlab VLFEAT 103
79 104
80 vl_covdet 1. load an image: impath = fullfile('oxford.jpg'); im = imread(impath); 2. convert it to single precision gray scale: imgs = im2single(rgb2gray(im)); 3. run SIFT: [frames, descrs] = vl_sift(imgs); 4. visualise keypoints: imagesc(im); colormap gray; hold on; vl_plotframe(frames); 105
81 Invariant local patch description DESCRIPTORS 106
82 SIFT SURF HOG BRIEF BRISK LBP Shape Context Self Similarity... Local Descriptors 107
83 Step 3: Build a description vector ("descriptor") 108
84 Step 4: Match the description vectors aka descriptors 109
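This matching step is commonly implemented as brute-force nearest neighbor search with Lowe's ratio test (a minimal sketch on toy 2D descriptors; real descriptors are e.g. 128-dimensional):

```python
import numpy as np

def match_ratio(desc1, desc2, ratio=0.8):
    # Brute-force nearest neighbor in L2; keep a match only when the nearest
    # neighbor is clearly better than the second nearest (Lowe's ratio test)
    matches = []
    for i, d in enumerate(desc1):
        dists = np.linalg.norm(desc2 - d, axis=1)
        j, j2 = np.argsort(dists)[:2]
        if dists[j] < ratio * dists[j2]:
            matches.append((i, int(j)))
    return matches

# Toy descriptors (assumed values): the first is unambiguous, the second has
# two near-identical neighbors and is rejected by the ratio test
d1 = np.array([[1.0, 0.0], [0.0, 1.0]])
d2 = np.array([[1.0, 0.1], [0.0, 0.95], [0.0, 1.05]])
m = match_ratio(d1, d2)
```

Rejecting ambiguous matches this way is what keeps the putative-match set clean enough for RANSAC in the alignment pipeline above.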
85 Interest Point Descriptors SIFT 113
86 SIFT Description Vector Made of local histograms of gradients: In practice: 8 orientations x 4 x 4 histograms = 128-dimensional vector 115
87 Primary Visual Cortex 116
88 Handling Lighting Changes Gains do not affect gradients; Normalization to unit length removes contrast; Saturation affects magnitudes much more than orientation: magnitudes are thresholded. 117
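The normalize-clamp-renormalize scheme can be sketched directly (the 0.2 clamp is the value used in Lowe's SIFT paper; the toy vector is an assumed example):

```python
import numpy as np

def sift_normalize(v, thresh=0.2):
    # 1) unit length cancels multiplicative gain (contrast) changes
    v = v / np.linalg.norm(v)
    # 2) clamp large entries (saturation mostly affects magnitudes) ...
    v = np.minimum(v, thresh)
    # 3) ... and renormalize
    return v / np.linalg.norm(v)

raw = np.array([10.0, 1.0, 1.0, 1.0])   # toy descriptor (assumed values)
a = sift_normalize(raw)
b = sift_normalize(3.0 * raw)           # same patch under a 3x gain change
```

The gain-changed version yields exactly the same descriptor, which is the point of the normalization.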
89 SIFT Feature Representation Descriptor contains histograms of a 4x4 spatial grid around the keypoint Each histogram has 8 orientation bins SIFT vector contains 4 x 4 x 8 = 128 values Normalized to enhance invariance to illumination changes 118
90 Extraordinarily robust matching technique Can handle changes in viewpoint Up to about 60 degree out of plane rotation Can handle significant changes in illumination Sometimes even day vs. night (below) Lots of code available SIFT descriptor 119
91 Example NASA Mars Rover images 120
92 Example NASA Mars Rover images with SIFT feature matches 121
93 Interest Point Descriptors SURF 123
94 Local Descriptors: SURF Fast approximation of the SIFT idea Efficient computation by 2D box filters & integral images: 6 times faster than SIFT GPU implementation available: feature extraction at ~100 Hz (detector + descriptor) References: SURF: Speeded Up Robust Features, Bay et al., CVIU 2008
95 Description Haar Wavelets: efficient calculation by integral images 125
96 Description Split the interest region up into 4 x 4 square sub-regions Calculate Haar wavelet responses d_x and d_y Weight the responses with a Gaussian kernel centered at the interest point Sum the responses over each sub-region for d_x and d_y separately → feature vector of length 32 To bring in information about the polarity of the intensity changes, also extract the sums of the absolute values of the responses → feature vector of length 64 Normalize the vector to unit length 126
97 Interest Point Descriptors HISTOGRAM OF GRADIENTS 127
98 HOG Is an adaptation of SIFT Based on gradient magnitudes and orientations Cells vs. Blocks Normalization is important References: Histograms of Oriented Gradients for Human Detection, Dalal and Triggs, CVPR 2005
99 HoG Designed for a specific task: detect upright(!) category instances in images Instead of describing neighborhood around interest point, HoG describes an entire window around object Use ideas of SIFT descriptor: Gradient orientation histograms Binning into cells Normalization 129
100 HOG Gradient calculation First step (as in SIFT): Estimate image gradients in the horizontal and vertical directions: [-1 0 1] and [-1 0 1] T 130
101 HOG Orientation binning Second step (as in SIFT): Window is partitioned into a grid of cells Cells can be rectangular or radial Each pixel casts a weighted vote (based on the gradient magnitude) to an orientation-based histogram for the corresponding cells A histogram with 8 bins between 0° and 360° yields best results 131
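The magnitude-weighted voting for a single cell can be sketched as follows (a toy example without the soft bin interpolation real implementations use):

```python
import numpy as np

def cell_histogram(mag, ang, n_bins=8):
    # Each pixel votes with its gradient magnitude into one of n_bins
    # orientation bins covering 0..360 degrees (hard assignment here)
    bins = (ang // (360.0 / n_bins)).astype(int) % n_bins
    hist = np.zeros(n_bins)
    np.add.at(hist, bins.ravel(), mag.ravel())
    return hist

# Toy 2x2 cell (assumed values): magnitudes and orientations in degrees
mag = np.array([[1.0, 2.0], [3.0, 4.0]])
ang = np.array([[10.0, 100.0], [190.0, 350.0]])
h = cell_histogram(mag, ang)
```

Concatenating such cell histograms over blocks, with per-block normalization, gives the full HOG vector.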
102 HOG Descriptor blocks Third step (different normalization): Cells are grouped into larger, spatially connected blocks Blocks are overlapping and are used for normalization HOG feature vector is a concatenation of normalized cell histograms for all blocks Very high-dimensional: ~4000 dimensions 132
103 Histogram of Oriented Gradients (HoG) 135
104 Descriptors in VL Feat VLFEAT SIFT AND HOG 136
105 Interest Point Descriptors BRIEF 137
106 BRIEF: A Fast Binary Local Descriptor BRIEF descriptor 138
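The core of BRIEF is a vector of pairwise intensity comparisons, matched with Hamming distance. A toy sketch (the 9x9 patch size and uniform pair sampling are simplifications; BRIEF samples pairs from a Gaussian over a smoothed patch):

```python
import numpy as np

rng = np.random.default_rng(0)
# 128 random point-pair tests inside a 9x9 patch (assumed layout)
pairs = rng.integers(0, 9, size=(128, 4))

def brief(patch):
    # bit i is 1 iff the first sample point is darker than the second
    return np.array([patch[y1, x1] < patch[y2, x2]
                     for y1, x1, y2, x2 in pairs], dtype=np.uint8)

def hamming(a, b):
    # descriptor distance = number of differing bits
    return int(np.count_nonzero(a != b))

patch = rng.random((9, 9))
d1 = brief(patch)
d2 = brief(patch + 0.3)   # additive brightness change preserves comparisons
```

Because only relative intensities enter the tests, an additive brightness change leaves the descriptor unchanged, and Hamming distance over bits is what makes matching so fast.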
107 Evaluation 139
108 Evaluation 140
109 Computation Speed For BRIEF, most of the time is spent in Gaussian smoothing. 141
110 Matching Speed 142
111 Rotation and Scale Invariance 143
112 Rotation and Scale Invariance Duplicate the Descriptors: 18 rotations x 3 scales
113 145
114 Interest Point Descriptors BRISK 146
115 BRISK Idea: make BRIEF scale and rotation invariant IP detector = FAST + scale space search Descriptor: based on fixed sampling pattern References: Binary Robust Invariant Scalable Keypoints, Leutenegger et al., ICCV 2011
116 Descriptor Given the set of all sampling-point pairs, we define: a long-range set, used for orientation estimation via normalized intensity differences, and a short-range set, used for the binary intensity comparisons 148
117 Interest Point Descriptors LOCAL BINARY PATTERNS 150
118 Local Binary Patterns Encodes texture Binary pixel test (center pixel vs. neighborhood) 8 bits → value between 0 and 255 (LBP code) Histogram of LBP codes (frequency vs. LBP index) within region of interest References: A Comparative Study of Texture Measures with Classification Based on Feature Distributions, Ojala et al., Pattern Recognition, 1996
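The 8-bit code for one pixel can be sketched as follows (the neighbor ordering is one common convention; implementations vary):

```python
import numpy as np

def lbp_code(patch):
    # 8-bit LBP code of the center of a 3x3 patch: each neighbor sets one
    # bit when it is >= the center pixel
    c = patch[1, 1]
    neighbors = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
                 patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    return sum(1 << i for i, p in enumerate(neighbors) if p >= c)

patch = np.array([[5, 9, 1],
                  [3, 4, 7],
                  [2, 8, 6]])   # toy intensities (assumed values)
code = lbp_code(patch)
```

Histogramming these codes over a region gives the texture descriptor; since only the sign of the comparison matters, the code is invariant to monotonic gray-scale changes.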
119 LBP Extensions exist for: Multiple scales Rotation invariance Gray-scale variance as contrast measure Uniform patterns Arbitrary circular neighborhoods Invariance with respect to monotonic transformations can be achieved 152
120 Interest Point Descriptors SHAPE CONTEXT 153
121 Shape Context Shape based description Descriptor calculated on edge map Local neighborhood description 154
122 Descriptor Edge pixels are assigned to histogram bins for varying distances and angles in log-polar manner (defines pooling) Usually 12 orientations and 5 distance bins 60-dimensional descriptor 155
123 Shape context descriptor 156
124 Learning a Descriptor Training data: patch pairs labeled similar (correct) or different (incorrect) Training sets incorporate various transformations, e.g.: intensity change, affine transformation 157
125 Interest Point Descriptors EVALUATION CRITERIA 158
126 Evaluation Measure matching quality: Precision vs. Recall Depends on IP detector 159
127 Descriptors in VL Feat VLFEAT DESCRIPTORS 160
128 Gradient Magnitude and Orientation LOCAL IMAGE GRADIENTS 161
129 The gradient of an image: ∇I = (∂I/∂x, ∂I/∂y) The gradient points in the direction of most rapid change in intensity The gradient direction (orientation of edge normal) is given by θ = atan2(∂I/∂y, ∂I/∂x) The edge strength is given by the gradient magnitude ||∇I|| = sqrt((∂I/∂x)² + (∂I/∂y)²) 162
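These quantities are straightforward to compute with finite differences (a minimal numpy sketch on a vertical step edge):

```python
import numpy as np

# A vertical step edge: intensity jumps from 0 to 1 at column 3
img = np.zeros((5, 5))
img[:, 3:] = 1.0

Iy, Ix = np.gradient(img)   # finite-difference partial derivatives
mag = np.hypot(Ix, Iy)      # edge strength (gradient magnitude)
theta = np.arctan2(Iy, Ix)  # gradient direction (normal to the edge)
```

On the edge the magnitude is nonzero and the direction points horizontally (across the edge); in the flat regions the magnitude is zero.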
130 The discrete gradient How can we differentiate a digital image f[x,y]? Take discrete derivative (finite difference) (a): Roberts cross operator (b): 3x3 Prewitt operator (c): Sobel operator (d) 4x4 Prewitt operator 163
131 Gradient to Edges Still state-of-the-art: Canny detector 164
132 What causes an edge? Reflectance change: appearance information, texture object boundary Cast shadows Change in surface orientation: shape 165
133 Machine Learning LEARNING GRADIENT DETECTORS 166
134 Goal: Learn from human segmentations what good contours are! Human-marked segment boundaries 167
135 Why learning? 1. Modeling assumptions: minimal 2. Parameters: none 3. Multiple sources of information: automatically incorporated 4. Real world conditions: captured by training data References: Learning to Detect Natural Image Boundaries Using Brightness and Texture, Martin et al., NIPS 2002
136 Low-level edges vs. perceived contours image human segmentation gradient magnitudes Training data: Berkeley segmentation database: Source: L. Lazebnik 169
137 Individual Features 1976 CIE L*a*b* colorspace Brightness Gradient BG(x,y,r,θ): difference of L* distributions Color Gradient CG(x,y,r,θ): difference of a*b* distributions Texture Gradient TG(x,y,r,θ): difference of distributions of V1-like filter responses Each gradient is computed on a disc at location (x, y) with radius r and orientation θ All features together define our vector representation x_i for each pixel 170
138 Feature comparison Oriented Edges Brightness Gradient Color Gradient Texture Gradient No Boundary Boundary 171
139 Machine Learning for Contour detection Given label data y_i from Berkeley: contour pixels of human segmentations Corresponding features x_i: D-dimensional vectors (per pixel) Supervised machine learning problem Regression: y in real numbers (gradient magnitude) Any regression method is applicable 172
140 Regression Output densely evaluated on image 173
141 Contour detection ~
142 Contour detection ~
143 Contour detection ~2004 Machine Learning Gap 176
144 Contour detection ~2008 (gray) 177
145 Contour detection ~2008(color) 178
146 Today 179
More informationAutomatic Image Alignment (feature-based)
Automatic Image Alignment (feature-based) Mike Nese with a lot of slides stolen from Steve Seitz and Rick Szeliski 15-463: Computational Photography Alexei Efros, CMU, Fall 2006 Today s lecture Feature
More informationPatch Descriptors. CSE 455 Linda Shapiro
Patch Descriptors CSE 455 Linda Shapiro How can we find corresponding points? How can we find correspondences? How do we describe an image patch? How do we describe an image patch? Patches with similar
More informationSIFT: Scale Invariant Feature Transform
1 / 25 SIFT: Scale Invariant Feature Transform Ahmed Othman Systems Design Department University of Waterloo, Canada October, 23, 2012 2 / 25 1 SIFT Introduction Scale-space extrema detection Keypoint
More informationObtaining Feature Correspondences
Obtaining Feature Correspondences Neill Campbell May 9, 2008 A state-of-the-art system for finding objects in images has recently been developed by David Lowe. The algorithm is termed the Scale-Invariant
More informationSegmentation and Grouping
Segmentation and Grouping How and what do we see? Fundamental Problems ' Focus of attention, or grouping ' What subsets of pixels do we consider as possible objects? ' All connected subsets? ' Representation
More informationImage Features: Local Descriptors. Sanja Fidler CSC420: Intro to Image Understanding 1/ 58
Image Features: Local Descriptors Sanja Fidler CSC420: Intro to Image Understanding 1/ 58 [Source: K. Grauman] Sanja Fidler CSC420: Intro to Image Understanding 2/ 58 Local Features Detection: Identify
More informationLecture: RANSAC and feature detectors
Lecture: RANSAC and feature detectors Juan Carlos Niebles and Ranjay Krishna Stanford Vision and Learning Lab 1 What we will learn today? A model fitting method for edge detection RANSAC Local invariant
More informationFiltering Images. Contents
Image Processing and Data Visualization with MATLAB Filtering Images Hansrudi Noser June 8-9, 010 UZH, Multimedia and Robotics Summer School Noise Smoothing Filters Sigmoid Filters Gradient Filters Contents
More informationCorner Detection. GV12/3072 Image Processing.
Corner Detection 1 Last Week 2 Outline Corners and point features Moravec operator Image structure tensor Harris corner detector Sub-pixel accuracy SUSAN FAST Example descriptor: SIFT 3 Point Features
More informationEECS150 - Digital Design Lecture 14 FIFO 2 and SIFT. Recap and Outline
EECS150 - Digital Design Lecture 14 FIFO 2 and SIFT Oct. 15, 2013 Prof. Ronald Fearing Electrical Engineering and Computer Sciences University of California, Berkeley (slides courtesy of Prof. John Wawrzynek)
More informationAnno accademico 2006/2007. Davide Migliore
Robotica Anno accademico 6/7 Davide Migliore migliore@elet.polimi.it Today What is a feature? Some useful information The world of features: Detectors Edges detection Corners/Points detection Descriptors?!?!?
More informationComputer vision: models, learning and inference. Chapter 13 Image preprocessing and feature extraction
Computer vision: models, learning and inference Chapter 13 Image preprocessing and feature extraction Preprocessing The goal of pre-processing is to try to reduce unwanted variation in image due to lighting,
More informationCategory vs. instance recognition
Category vs. instance recognition Category: Find all the people Find all the buildings Often within a single image Often sliding window Instance: Is this face James? Find this specific famous building
More informationObject Recognition with Invariant Features
Object Recognition with Invariant Features Definition: Identify objects or scenes and determine their pose and model parameters Applications Industrial automation and inspection Mobile robots, toys, user
More informationRequirements for region detection
Region detectors Requirements for region detection For region detection invariance transformations that should be considered are illumination changes, translation, rotation, scale and full affine transform
More informationA NEW FEATURE BASED IMAGE REGISTRATION ALGORITHM INTRODUCTION
A NEW FEATURE BASED IMAGE REGISTRATION ALGORITHM Karthik Krish Stuart Heinrich Wesley E. Snyder Halil Cakir Siamak Khorram North Carolina State University Raleigh, 27695 kkrish@ncsu.edu sbheinri@ncsu.edu
More informationCS5670: Computer Vision
CS5670: Computer Vision Noah Snavely Lecture 4: Harris corner detection Szeliski: 4.1 Reading Announcements Project 1 (Hybrid Images) code due next Wednesday, Feb 14, by 11:59pm Artifacts due Friday, Feb
More informationFeature descriptors and matching
Feature descriptors and matching Detections at multiple scales Invariance of MOPS Intensity Scale Rotation Color and Lighting Out-of-plane rotation Out-of-plane rotation Better representation than color:
More informationChapter 3 Image Registration. Chapter 3 Image Registration
Chapter 3 Image Registration Distributed Algorithms for Introduction (1) Definition: Image Registration Input: 2 images of the same scene but taken from different perspectives Goal: Identify transformation
More information3D Photography. Marc Pollefeys, Torsten Sattler. Spring 2015
3D Photography Marc Pollefeys, Torsten Sattler Spring 2015 Schedule (tentative) Feb 16 Feb 23 Mar 2 Mar 9 Mar 16 Mar 23 Mar 30 Apr 6 Apr 13 Apr 20 Apr 27 May 4 May 11 May 18 May 25 Introduction Geometry,
More informationPerformance Evaluation of Scale-Interpolated Hessian-Laplace and Haar Descriptors for Feature Matching
Performance Evaluation of Scale-Interpolated Hessian-Laplace and Haar Descriptors for Feature Matching Akshay Bhatia, Robert Laganière School of Information Technology and Engineering University of Ottawa
More informationEE368 Project Report CD Cover Recognition Using Modified SIFT Algorithm
EE368 Project Report CD Cover Recognition Using Modified SIFT Algorithm Group 1: Mina A. Makar Stanford University mamakar@stanford.edu Abstract In this report, we investigate the application of the Scale-Invariant
More informationImage Features: Detection, Description, and Matching and their Applications
Image Features: Detection, Description, and Matching and their Applications Image Representation: Global Versus Local Features Features/ keypoints/ interset points are interesting locations in the image.
More informationCS534: Introduction to Computer Vision Edges and Contours. Ahmed Elgammal Dept. of Computer Science Rutgers University
CS534: Introduction to Computer Vision Edges and Contours Ahmed Elgammal Dept. of Computer Science Rutgers University Outlines What makes an edge? Gradient-based edge detection Edge Operators Laplacian
More informationComparison of Local Feature Descriptors
Department of EECS, University of California, Berkeley. December 13, 26 1 Local Features 2 Mikolajczyk s Dataset Caltech 11 Dataset 3 Evaluation of Feature Detectors Evaluation of Feature Deriptors 4 Applications
More informationImplementation and Comparison of Feature Detection Methods in Image Mosaicing
IOSR Journal of Electronics and Communication Engineering (IOSR-JECE) e-issn: 2278-2834,p-ISSN: 2278-8735 PP 07-11 www.iosrjournals.org Implementation and Comparison of Feature Detection Methods in Image
More informationComputer Vision. Recap: Smoothing with a Gaussian. Recap: Effect of σ on derivatives. Computer Science Tripos Part II. Dr Christopher Town
Recap: Smoothing with a Gaussian Computer Vision Computer Science Tripos Part II Dr Christopher Town Recall: parameter σ is the scale / width / spread of the Gaussian kernel, and controls the amount of
More informationA Comparison of SIFT, PCA-SIFT and SURF
A Comparison of SIFT, PCA-SIFT and SURF Luo Juan Computer Graphics Lab, Chonbuk National University, Jeonju 561-756, South Korea qiuhehappy@hotmail.com Oubong Gwun Computer Graphics Lab, Chonbuk National
More informationLocal Features Tutorial: Nov. 8, 04
Local Features Tutorial: Nov. 8, 04 Local Features Tutorial References: Matlab SIFT tutorial (from course webpage) Lowe, David G. Distinctive Image Features from Scale Invariant Features, International
More informationCS334: Digital Imaging and Multimedia Edges and Contours. Ahmed Elgammal Dept. of Computer Science Rutgers University
CS334: Digital Imaging and Multimedia Edges and Contours Ahmed Elgammal Dept. of Computer Science Rutgers University Outlines What makes an edge? Gradient-based edge detection Edge Operators From Edges
More informationImage Features Detection, Description and Matching
Image Features Detection, Description and Matching M. Hassaballah, Aly Amin Abdelmgeid and Hammam A. Alshazly Abstract Feature detection, description and matching are essential components of various computer
More informationHISTOGRAMS OF ORIENTATIO N GRADIENTS
HISTOGRAMS OF ORIENTATIO N GRADIENTS Histograms of Orientation Gradients Objective: object recognition Basic idea Local shape information often well described by the distribution of intensity gradients
More informationLecture 6: Finding Features (part 1/2)
Lecture 6: Finding Features (part 1/2) Dr. Juan Carlos Niebles Stanford AI Lab Professor Stanford Vision Lab 1 What we will learn today? Local invariant features MoOvaOon Requirements, invariances Keypoint
More informationImage processing and features
Image processing and features Gabriele Bleser gabriele.bleser@dfki.de Thanks to Harald Wuest, Folker Wientapper and Marc Pollefeys Introduction Previous lectures: geometry Pose estimation Epipolar geometry
More informationCS4670: Computer Vision
CS4670: Computer Vision Noah Snavely Lecture 6: Feature matching and alignment Szeliski: Chapter 6.1 Reading Last time: Corners and blobs Scale-space blob detector: Example Feature descriptors We know
More informationScott Smith Advanced Image Processing March 15, Speeded-Up Robust Features SURF
Scott Smith Advanced Image Processing March 15, 2011 Speeded-Up Robust Features SURF Overview Why SURF? How SURF works Feature detection Scale Space Rotational invariance Feature vectors SURF vs Sift Assumptions
More informationCS 4495 Computer Vision. Linear Filtering 2: Templates, Edges. Aaron Bobick. School of Interactive Computing. Templates/Edges
CS 4495 Computer Vision Linear Filtering 2: Templates, Edges Aaron Bobick School of Interactive Computing Last time: Convolution Convolution: Flip the filter in both dimensions (right to left, bottom to
More informationFeature Matching and Robust Fitting
Feature Matching and Robust Fitting Computer Vision CS 143, Brown Read Szeliski 4.1 James Hays Acknowledgment: Many slides from Derek Hoiem and Grauman&Leibe 2008 AAAI Tutorial Project 2 questions? This
More informationVisual Tracking (1) Pixel-intensity-based methods
Intelligent Control Systems Visual Tracking (1) Pixel-intensity-based methods Shingo Kagami Graduate School of Information Sciences, Tohoku University swk(at)ic.is.tohoku.ac.jp http://www.ic.is.tohoku.ac.jp/ja/swk/
More informationScale Invariant Feature Transform by David Lowe
Scale Invariant Feature Transform by David Lowe Presented by: Jerry Chen Achal Dave Vaishaal Shankar Some slides from Jason Clemons Motivation Image Matching Correspondence Problem Desirable Feature Characteristics
More informationEdge and Texture. CS 554 Computer Vision Pinar Duygulu Bilkent University
Edge and Texture CS 554 Computer Vision Pinar Duygulu Bilkent University Filters for features Previously, thinking of filtering as a way to remove or reduce noise Now, consider how filters will allow us
More information