Computer Vision Feature Extraction
1 Feature Extraction
2 PART 1: OUVERTURE
3 Feature matching Goal: efficient matching for registration, correspondences for 3D, tracking, recognition
4 Feature matching
5 Feature matching
6 Feature matching Large scale change, heavy occlusion
7 Feature matching Deformation, illumination change, occlusion
8 Feature matching Large scale change, perspective deformation, extensive clutter
9 Feature matching Extensive clutter, scale, occlusion, blur
10 Feature matching
11 Matching: a challenging task features need to deal with large variations in Viewpoint
12 Matching: a challenging task features need to deal with large variations in Viewpoint Illumination
13 Matching: a challenging task features need to deal with large variations in Viewpoint Illumination Background
14 Matching: a challenging task features need to deal with large variations in Viewpoint Illumination Background And with occlusions
15 Features à la carte considerations: 1. Completeness 2. Robustness of extraction 3. Ease of extraction 4. Global vs. local
16 THUS:
17 THUS: GO PATCHY!
18 PART 2: INTEREST POINTS
19 Additional requirement: Discriminative power + good localization
20 Additional requirement: A feature should have something typical and well localised about the pattern within its local patch. Shifting the patch a bit should make a big difference in terms of the underlying pattern
21 Uniqueness of a patch consider the pixel pattern within the green patches: Slide adapted from Darya Frolova, Denis Simakov, Deva Ramanan
22 Uniqueness of a patch How do the patterns change upon a shift? Flat region: no change in any direction. Edge: no change along the edge direction. Corner: significant change in all directions. Slide adapted from Darya Frolova, Denis Simakov, Deva Ramanan
23 Uniqueness of a patch Consider shifting the patch or 'window' W by (u,v): how do the pixels in W change? Compare each pixel before and after by summing up the squared differences (SSD). This defines the SSD error E(u,v) = Σ_{(x,y)∈W} [I(x+u, y+v) − I(x,y)]². Slide adapted from Darya Frolova, Denis Simakov, Deva Ramanan
24 Uniqueness of a patch Taylor series expansion of I: I(x+u, y+v) ≈ I(x,y) + I_x(x,y) u + I_y(x,y) v. If the motion (u,v) is small, then this first-order approximation is good. Plugging this into the formula at the top gives E(u,v) ≈ Σ_{(x,y)∈W} [I_x u + I_y v]². Slide adapted from Darya Frolova, Denis Simakov, Deva Ramanan
25 Uniqueness of a patch Then E(u,v) ≈ [u v] H [u v]ᵀ, with H = Σ_{(x,y)∈W} [I_x², I_x I_y; I_x I_y, I_y²]. Slide adapted from Darya Frolova, Denis Simakov, Deva Ramanan
26 Uniqueness of a patch This can be rewritten further as E(u,v) ≈ [u v] H [u v]ᵀ. Which directions [u v] will result in the largest and smallest E values? We can find these directions by looking at the eigenvectors of H. Slide adapted from Darya Frolova, Denis Simakov, Deva Ramanan
27 The eigenvectors of a matrix A are the vectors x that satisfy A x = λx, x ≠ 0. The scalar λ is the eigenvalue corresponding to x. The eigenvalues are found by solving det(A − λI) = 0. In our case, A = H is a 2x2 matrix, so we have det([h11 − λ, h12; h21, h22 − λ]) = 0. The solution: λ± = ½ [(h11 + h22) ± √(4 h12 h21 + (h11 − h22)²)]
28 Uniqueness of a patch This can be rewritten further as: Eigenvalues and eigenvectors of H define shifts with the smallest and largest change (E value). x+ = direction of largest increase in E, λ+ = amount of increase in direction x+. x− = direction of smallest increase in E, λ− = amount of increase in direction x−. Slide adapted from Darya Frolova, Denis Simakov, Deva Ramanan
29 Uniqueness of a patch Want E(u,v) to be large for small shifts in all directions: the minimum of E(u,v) should be large over all unit vectors [u v]; this minimum is given by the smaller eigenvalue (λ−) of H. Slide adapted from Darya Frolova, Denis Simakov, Deva Ramanan
30 Uniqueness of a patch Slide adapted from Darya Frolova, Denis Simakov, Deva Ramanan
31 Uniqueness of a patch IN SUMMARY: Compute the gradient at each point in the image. Create the H matrix from the entries in the gradient. Compute the eigenvalues. Find points with large response (λ− > threshold). Choose those points where λ− is a local maximum as features. Slide adapted from Darya Frolova, Denis Simakov, Deva Ramanan
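The summary above can be sketched directly in NumPy. A minimal, unoptimized version, assuming a float grayscale image; the 3x3 box window, threshold value, and function name are illustrative choices, not from the slides:

```python
import numpy as np

def harris_lambda_minus(img, threshold=0.1):
    """Gradients -> H per pixel -> smaller eigenvalue lambda_minus ->
    threshold + local-maximum test, as in the summary slide."""
    Iy, Ix = np.gradient(img.astype(float))

    def box(a):
        # Sum over a 3x3 window around each pixel (zero padding at borders)
        p = np.pad(a, 1)
        return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
                   for i in range(3) for j in range(3))

    Sxx, Syy, Sxy = box(Ix * Ix), box(Iy * Iy), box(Ix * Iy)
    # Closed-form smaller eigenvalue of H = [[Sxx, Sxy], [Sxy, Syy]]
    lam_minus = 0.5 * (Sxx + Syy) - 0.5 * np.sqrt((Sxx - Syy) ** 2 + 4 * Sxy ** 2)
    # Keep points above threshold that are maxima in a 3x3 neighbourhood
    p = np.pad(lam_minus, 1, constant_values=-np.inf)
    neigh = np.stack([p[i:i + img.shape[0], j:j + img.shape[1]]
                      for i in range(3) for j in range(3)])
    is_max = lam_minus >= neigh.max(axis=0)
    return np.argwhere((lam_minus > threshold) & is_max)

# A white square on black: corners respond, flat regions and edge midpoints do not
img = np.zeros((20, 20))
img[5:15, 5:15] = 1.0
pts = harris_lambda_minus(img)
```

On this toy image, every detected point lies near one of the four square corners; edge midpoints give λ− ≈ 0 because the gradient varies in only one direction there.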
32 The Harris corner detector λ− is a variant of the Harris operator for feature detection: f = det(H) − k (trace(H))². The trace is the sum of the diagonal entries, i.e. trace(H) = h11 + h22. det(H) = product of the eigenvalues, trace(H) = their sum. Very similar to λ− but less expensive (no square root). Called the Harris Corner Detector or Harris Operator. Lots of other detectors exist; this is one of the most popular. Slide adapted from Darya Frolova, Denis Simakov, Deva Ramanan
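The det/trace identity is easy to verify numerically: since det(H) = λ+λ− and trace(H) = λ+ + λ−, the Harris response needs no eigen-decomposition. A quick NumPy sanity check on a random symmetric 2x2 matrix (k = 0.04 is a common choice in the literature, not taken from the slides):

```python
import numpy as np

rng = np.random.default_rng(0)
a, b, c = rng.normal(size=3)
H = np.array([[a, c], [c, b]])          # symmetric, like the gradient matrix
k = 0.04
harris = np.linalg.det(H) - k * np.trace(H) ** 2
lam = np.linalg.eigvalsh(H)             # eigenvalues, ascending
via_eigs = lam[0] * lam[1] - k * (lam[0] + lam[1]) ** 2
assert np.isclose(harris, via_eigs)     # det - k*trace^2 == eigenvalue form
```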
33 The Harris corner detector Slide adapted from Darya Frolova, Denis Simakov, Deva Ramanan
34 The Harris corner detector Two views of an object: are the corners stable?
35 The Harris corner detector
36 The Harris corner detector
37 The Harris corner detector
38 The Harris corner detector
39 Interest points Corners are the most prominent example of so-called 'interest points', i.e. points that can be well localised in different views of a scene. 'Blobs' are another, as we will see: a blob is a region with intensity changes in multiple directions
40 PART 3: A TALE OF INVARIANCE
41 Need for a descriptor: There are many corners coming out of our DETECTOR, but they still cannot be told apart. We need to describe their surrounding patch in a way that lets us discriminate between them, i.e. we need to build a feature vector for the patch, a so-called DESCRIPTOR. During a MATCHING step, the descriptors can then be compared
42 Additional requirement on the descriptor: Invariance under geom./phot. change
43 Geometric Invariance: A local patch is small, hence probably rather planar But how do planar patches deform when looking at their image projection?
44 Patterns to be compared
45 Projection: a camera model. Reversed center-image pinhole model:
46 Projection: equations x = f X/Z, y = f Y/Z. An approximation for special cases: 1. Z constant, or 2. object small compared to its distance to the camera: x = kX, y = kY with k = f/Z = pseudo-perspective
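The two equations can be sketched in a few lines of NumPy; the point, focal length, and function name here are illustrative:

```python
import numpy as np

def project(P, f=1.0):
    """Pinhole projection x = f X / Z, y = f Y / Z."""
    X, Y, Z = P
    return np.array([f * X / Z, f * Y / Z])

# Pseudo-perspective: for roughly constant depth Z0, the same point maps to
# x = k X, y = k Y with k = f / Z0 (a pure scaling of the world coordinates).
P = np.array([2.0, 1.0, 4.0])
p = project(P, f=2.0)      # -> [1.0, 0.5]
```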
47 Deformations under projection Viewed from arbitrary directions, how will the contour projections be deformed? Suppose the contours are planar. We study 3 cases: 1. viewed from a perpendicular direction; 2. viewed from any direction but at sufficient distance to use pseudo-perspective; 3. viewed from any direction and from any distance
48 Deformations under projection In particular, how do different views of the same planar contour differ? We study 3 cases: 1. viewed from a perpendicular direction; 2. viewed from any direction but at sufficient distance to use pseudo-perspective; 3. viewed from any direction and from any distance
49 Deformations: case 1 X′ = cosθ X − sinθ Y + t1, Y′ = sinθ X + cosθ Y + t2, Z′ = Z + t3. With x = kX, y = kY and x′ = k′X′, y′ = k′Y′ (k = f/Z, k′ = f/Z′): [x′; y′] = (k′/k) [cosθ, −sinθ; sinθ, cosθ] [x; y] + k′ [t1; t2]. The image projection undergoes a similarity transformation; Euclidean motions if k′ = k (i.e. Z′ = Z)
50 Summary case 1 Case of fronto-parallel viewing: SIMILARITY x′ = s (cosθ x − sinθ y) + t1, y′ = s (sinθ x + cosθ y) + t2
51 Deformations: case 2 [X′; Y′; Z′] = [r11 r12 r13; r21 r22 r23; r31 r32 r33] [X; Y; Z] + [t1; t2; t3]. With x = kX, y = kY and x′ = k′X′, y′ = k′Y′ we easily derive x′ = (k′/k) r11 x + (k′/k) r12 y + k′ r13 Z + k′ t1, and we should eliminate Z
52 Deformations: case 2 cont'd We use the planarity restriction aX + bY + cZ + d = 0. Hence (a/k) x + (b/k) y + cZ + d = 0, and thus Z = −(a/(kc)) x − (b/(kc)) y − d/c. Substituting, [x′; y′] = [a11 a12; a21 a22] [x; y] + [b1; b2]: we find a 2D affine transformation
53 Summary case 2 Viewing from a distance: AFFINITY x′ = a11 x + a12 y + b1, y′ = a21 x + a22 y + b2
54 Deformations: case 3 3D Euclidean motion, but perspective projection: x = f X/Z, y = f Y/Z and x′ = f X′/Z′, y′ = f Y′/Z′, with [X′; Y′; Z′] = R [X; Y; Z] + t. Multiplying numerator and denominator of x′ = f X′/Z′ by f/Z expresses them in terms of x and y
55 Deformations: case 3 Thus x′ = f (r11 x + r12 y + r13 f + (f t1)/Z) / (r31 x + r32 y + r33 f + (f t3)/Z), y′ = f (r21 x + r22 y + r23 f + (f t2)/Z) / (r31 x + r32 y + r33 f + (f t3)/Z)
56 Deformations: case 3 cont'd Using the planarity restriction aX + bY + cZ + d = 0, i.e. (a x + b y + c f) Z/f + d = 0, we get f/Z = −(a x + b y + c f)/d. Substituting, x′ = (p11 x + p12 y + p13)/(p31 x + p32 y + p33) and y′ = (p21 x + p22 y + p23)/(p31 x + p32 y + p33): we find a 2D projective transformation = homography
57 Summary case 3 General viewing conditions: PROJECTIVITY x′ = (p11 x + p12 y + p13)/(p31 x + p32 y + p33), y′ = (p21 x + p22 y + p23)/(p31 x + p32 y + p33)
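A projectivity is most easily applied in homogeneous coordinates: stack the nine coefficients p11...p33 into a 3x3 matrix, multiply, and divide by the third component. A minimal sketch; the matrix values below are made up for illustration:

```python
import numpy as np

def apply_homography(P, pts):
    """Apply a 3x3 homography P to an (N, 2) array of points."""
    ones = np.ones((len(pts), 1))
    h = np.hstack([pts, ones]) @ P.T     # homogeneous image points
    return h[:, :2] / h[:, 2:3]          # x' = (p11 x + p12 y + p13) / (p31 x + ...)

P = np.array([[1.0,   0.2,   3.0],
              [0.1,   1.1,  -2.0],
              [0.001, 0.002, 1.0]])
pts = np.array([[10.0, 20.0]])
out = apply_homography(P, pts)
```

When the last row of P is [0, 0, 1] the division is by 1 and the map reduces to the affinity of case 2; similarities are the further special case with a scaled rotation in the upper-left block.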
58 Invariance and groups Invariance w.r.t. group actions: T1 maps image 1 to image 2, T2 maps image 2 to image 3, so T2∘T1 maps image 1 to image 3; an invariant satisfies I1 = I2 = I3. Invariance under transformations implies invariance under the smallest encompassing group
59 Remarks There are more groups, but the ones described seem the most relevant for us
60 Remarks Complexity of the groups goes up with the generality of the viewing conditions, and so does the complexity of the groups' invariants. Fewer invariants are found going from left to right: similarities, affinities, projectivities; projective invariants are a subset of affine invariants, which are a subset of similarity invariants
61 Remarks Many types of invariants can be generated, e.g. for points, for lines, for moments, etc. (The slide shows example point configurations for similarities, affinities, and projectivities.)
62 Thus: As our local patches are by definition small, we expect that invariance under affine or similarity transformations will do
63 photometric changes
64 photometric changes A reasonable model is given by [R′; G′; B′] = [sR 0 0; 0 sG 0; 0 0 sB] [R; G; B] + [oR; oG; oB]. The linear part caters for changes of color and/or intensity of the illumination; the 3 scale factors are the same if the illumination changes intensity only. The offset (non-linear) part caters for specularities
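The diagonal-plus-offset model is a one-liner in NumPy. A sketch with made-up scale and offset values; it also checks the key property used later, namely that channel-wise differences (contrasts) cancel the offsets:

```python
import numpy as np

s = np.diag([1.2, 1.1, 0.9])        # sR, sG, sB (illustrative values)
o = np.array([10.0, 5.0, -3.0])     # oR, oG, oB (illustrative values)

rgb = np.array([100.0, 120.0, 80.0])
rgb_new = s @ rgb + o               # the modelled photometric change

# The difference between two pixels transforms without the offset o:
d = np.array([40.0, 10.0, 5.0])
lhs = (s @ (rgb + d) + o) - (s @ rgb + o)
assert np.allclose(lhs, s @ d)      # contrast is invariant to the offsets
```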
65 Thus: As our patches cover part of a surface containing corresponding surface texture, we will also have to be invariant under photometric changes
66 photometric changes Contrasts (intensity differences) let the non-linear part cancel; gradients are good! Moreover, the orientation of gradients in the color bands is invariant under their linear changes, as is the intensity gradient in case the scale factors are identical; this is indeed relevant if the illumination changes its intensity but not its color
67 PART 4: EXAMPLES
68 Our goal Define good interest points, i.e. DETECTOR + DESCRIPTOR
69 Our goal We have glossed over an important issue: the shape of the patch should change with the viewpoint
70 The need for variable patch shape The important thing is to achieve such change in patch shape without having to compare the images, i.e. this should happen on the basis of information in one image only!
71 The need for variable patch shape A test case we will use Note the global perspective/projective distortion!
72 Example 1: edge corners + affine moments
73 Example 1: edge corners + affine moments 1. Harris corner detection
74 Example 1: edge corners + affine moments 2. Canny edge detection
75 Example 1: edge corners + affine moments 3. Evaluation of a relative affine invariant parameter along the two edges: l_i = ∫ abs( p_i^(1)(s_i) × (p − p_i(s_i)) ) ds_i
76 Example 1: edge corners + affine moments 4. Construct 1-dimensional family of parallelogram shaped regions
77 Example 1: edge corners + affine moments 5. Select parallelograms based on invariant extrema of a function, for instance extrema of the average value of a color band within the patch Ω(l)
78 Example 1: edge corners + affine moments 5. Select parallelograms based on local extrema of invariant function
79 Example 1: edge corners + affine moments
80 Example 1: edge corners + affine moments 6. Describe the pattern within the parallelogram with affine invariant moments. Geometric/photometric moment invariants based on generalised colour moments: M_pq^abc = ∫∫_Ω x^p y^q [R(x,y)]^a [G(x,y)]^b [B(x,y)]^c dx dy. The M_pq^abc are not invariant themselves; they need to be combined
81 Example 1: edge corners + affine moments 6. Describe the pattern within the parallelogram with affine invariant moments. Example among several moment invariants:
82 Example 1: edge corners + affine moments AR application where an interest point's patch is filled with the texture 'BIWI'. The stability shows how well the patch adapts to the viewpoint
83 Ex. 2: intensity extrema + affine moments 1. Search intensity extrema 2. Observe the intensity profile along rays 3. Search the maximum of an invariant function f(t) along each ray 4. Connect local maxima 5. Fit ellipse 6. Double ellipse size 7. Describe elliptical patch with moment invariants. f(t) = abs(I(t) − I0) / max( (1/t) ∫_0^t abs(I(τ) − I0) dτ, d )
84 Ex. 2: intensity extrema + affine moments
85 Ex. 2: intensity extrema + affine moments
86 Remark In practice, different types of interest points are often combined. Wide baseline stereo matching example, based on ex.1 and ex.2 interest points
87 MSER MSER = Maximally Stable Extremal Regions. Similar to the Intensity-Based Regions we saw already; came later, but is more often used. Start with an intensity extremum. Then move the intensity threshold away from its value and watch the super-/sub-threshold region grow. Take regions at thresholds where the growth is slowest
88 MSER
89 MSER
90 MSER
91 MSER
92 MSER
93 MSER
94 MSER
95 MSER
96 MSER
97 MSER
98 MSER
99 MSER
100 MSER
101 MSER
102 MSER
103 MSER
104 MSER Extremal region: a region such that all intensities inside are higher (or all lower) than the intensities on its boundary. Order the nested regions Q_1 ⊂ ... ⊂ Q_i ⊂ Q_{i+1} ⊂ ... ⊂ Q_n. Maximally Stable Extremal Region: local minimum of q(i) = |Q_{i+Δ} \ Q_{i−Δ}| / |Q_i|
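A toy version of the stability criterion can be computed with NumPy alone: threshold a synthetic image at every level, track the size of the (nested) sub-threshold region, and find levels where the relative growth q is smallest. The image, the Δ value, and the single-blob simplification are all illustrative assumptions:

```python
import numpy as np

img = np.full((9, 9), 200)
img[2:7, 2:7] = 50                   # one dark, high-contrast blob: very stable
# |Q_i| for each threshold level i (here Q_i = {pixels <= i}, a single blob)
sizes = np.array([(img <= t).sum() for t in range(256)])

D = 5                                # the Delta of the slide
valid = sizes[D:-D] > 0              # only consider non-empty regions
growth = (sizes[2 * D:] - sizes[:-2 * D]) / np.maximum(sizes[D:-D], 1)
q = np.where(valid, growth, np.inf)  # q(i) = |Q_{i+D} \ Q_{i-D}| / |Q_i|
best = int(np.argmin(q)) + D         # most stable threshold level
```

Because the blob's intensity (50) is far from the background (200), the region size is constant over a long run of thresholds, so q drops to zero somewhere between those two levels.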
105 MSER
106 SIFT SIFT = Scale-Invariant Feature Transform. SIFT, developed by David Lowe (University of British Columbia, Canada), is a carefully crafted interest point detector + descriptor, based on intensity gradients (cf. our comment on photometric invariance) and invariant under similarities, not affine as so far. Our summary is a simplified account!
107 SIFT The detector is based on blob detection at several scales, i.e. local extrema of the Laplacian-of-Gaussian (LoG), L_xx(σ) + L_yy(σ), over a set of scales σ1 < σ2 < σ3 < σ4 < σ5; the output is a list of (x, y, σ)
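The scale selection behaviour can be demonstrated with SciPy's `gaussian_laplace` filter. A sketch under illustrative assumptions (a synthetic bright disc, a hand-picked scale list, scale normalization by σ², and a sign flip so bright blobs give maxima); for a disc of radius r the strongest response is expected near σ = r/√2:

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

# A bright disc of radius 6 centred in a 40x40 image
yy, xx = np.mgrid[:40, :40]
img = ((yy - 20) ** 2 + (xx - 20) ** 2 <= 6 ** 2).astype(float)

sigmas = [1, 2, 3, 4, 5, 6, 7]
# Scale-normalized LoG response at the disc centre for each scale
center_responses = [-(s ** 2) * gaussian_laplace(img, s)[20, 20] for s in sigmas]
best_sigma = sigmas[int(np.argmax(center_responses))]   # characteristic scale
```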
108 SIFT
109 SIFT Dominant orientation selection Compute image gradients Build orientation histogram Find maximum 0 2π
110 SIFT Dominant orientation selection Compute image gradients Build orientation histogram Find maximum 0 2π
111 SIFT Thresholded image gradients are sampled over a grid. Create an array of orientation histograms within blocks: 8 orientations x 4x4 histogram array = 128 dimensions. Apply weighting with a Gaussian located at the center. Normalize to unit vector
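The 4x4x8 = 128 layout can be sketched as follows. This is only the histogram skeleton: it omits the Gaussian weighting, trilinear interpolation, and the clipping/renormalization pass of real SIFT, and the function name is illustrative:

```python
import numpy as np

def sift_like_descriptor(mag, ori):
    """mag, ori: 16x16 arrays of gradient magnitude and orientation in [0, 2pi)."""
    desc = []
    for by in range(4):                      # 4x4 grid of 4x4-pixel cells
        for bx in range(4):
            m = mag[4 * by:4 * by + 4, 4 * bx:4 * bx + 4].ravel()
            o = ori[4 * by:4 * by + 4, 4 * bx:4 * bx + 4].ravel()
            # 8-bin orientation histogram, weighted by gradient magnitude
            hist, _ = np.histogram(o, bins=8, range=(0, 2 * np.pi), weights=m)
            desc.append(hist)
    v = np.concatenate(desc)                 # 4 * 4 * 8 = 128 dimensions
    return v / (np.linalg.norm(v) + 1e-12)   # normalize to unit vector

rng = np.random.default_rng(1)
d = sift_like_descriptor(rng.random((16, 16)), rng.random((16, 16)) * 2 * np.pi)
```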
112 SIFT Carefully crafted e.g. why 4 x 4 x 8?
113 SIFT Sensitivity to affine changes quite good!!!
114 SURF SURF = Speeded Up Robust Features. Given the advantage of simplification, SURF was introduced as a faster alternative; on a GPU, SURF can be extracted from video-size images at 100 Hz with ease
115 SURF detector A Hessian-based detector: blobs are detected not by the Laplacian, but by the determinant of the Hessian matrix H = [L_xx, L_xy; L_xy, L_yy]. Because of less response on edges, better scale selection. LREC 2004, 26 May 2004, Lisbon
116 SURF detector Approximate Gaussian derivatives with block filters
117 SURF detector Scale analysis with constant image size, but variable filter sizes
118 SURF Fast due to calculations with integral images (Viola & Jones): the sum over a rectangle is S = A − B + D − C, four lookups in the integral image
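An integral image stores, at each position, the sum of all pixels above and to the left; any rectangular sum then costs four lookups regardless of its size, which is what makes the block filters above cheap. A minimal NumPy sketch:

```python
import numpy as np

img = np.arange(36.0).reshape(6, 6)
# Pad a zero row/column so ii[r, c] = sum of img[:r, :c]
ii = np.pad(img, ((1, 0), (1, 0))).cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] via four lookups (the slide's A - B + D - C)."""
    return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]

total = rect_sum(ii, 1, 2, 4, 5)
assert total == img[1:4, 2:5].sum()   # same value, constant-time lookup
```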
119 SURF detector results
120 SURF detector Computation time: about 10 times faster than DoG (~100 msec)
121 SURF descriptor Orientation assignment: summed x and y responses in a wedge
122 SURF descriptor Distribution-based descriptor
123 SURF descriptor Haar responses dx and dy sampled over the patch
124 SURF descriptor Instead of gradient histograms, summed responses of the Haar masks per subregion: Σdx, Σ|dx|, Σdy, Σ|dy|. Extended descriptor: the sums are additionally split according to the sign of the other response (dy > 0 vs. dy < 0, dx > 0 vs. dx < 0). Rotation invariance is optional (U-SURF)
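The per-subregion sums can be sketched in NumPy. An illustrative simplification: plain image differences (`np.gradient`) stand in for the Haar wavelet responses, the patch is 20x20 split into a 4x4 grid of 5x5 subregions, and the extended sign-split variant is omitted:

```python
import numpy as np

def surf_like_descriptor(patch):
    """4 values (sum dx, sum |dx|, sum dy, sum |dy|) per subregion,
    4 * 4 * 4 = 64 dimensions in total."""
    dy, dx = np.gradient(patch.astype(float))   # stand-in for Haar responses
    feats = []
    for by in range(4):
        for bx in range(4):
            sl = np.s_[5 * by:5 * by + 5, 5 * bx:5 * bx + 5]
            feats += [dx[sl].sum(), np.abs(dx[sl]).sum(),
                      dy[sl].sum(), np.abs(dy[sl]).sum()]
    v = np.array(feats)
    return v / (np.linalg.norm(v) + 1e-12)      # normalize to unit vector

rng = np.random.default_rng(2)
d = surf_like_descriptor(rng.random((20, 20)))
```

Keeping both the signed sums and the absolute sums is what lets the descriptor distinguish, e.g., a uniform gradient from an oscillating pattern with the same net response, which is the role of the absolute values discussed on the next slide.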
125 SURF descriptor The role of the additional absolute values:
126 SURF Application: robot navigation
127 SURF Robot equipped with omnidirectional camera
128 SURF
129 SURF Robot automatically builds a topological map from SURF features
130 SURF Robot automatically builds a topological map, incl. revisited places as link intersections. Robot can localize itself in the map and navigate to a goal position
131 SURF Visual targets: navigation becomes an issue of going to positions with an image that matches the one taken at the next target position during teaching
132 SURF Landmark recognition available as the kooaba app for the iPhone, Android, and Symbian platforms. Also underpins the kooaba Shooting Star personal photo organizer
133 Notes on matching Interest points are matched on the basis of their descriptors. E.g. nearest neighbour, based on some distance like Euclidean or Mahalanobis; good to compare against the 2nd nearest neighbour: OK if the difference is big; or fuzzy matching with multiple neighbours. Speed-ups by using a lower-dimensional descriptor space (PCA) or through some coarse-to-fine scheme (very fast schemes exist to date!). Matching of individual points is typically followed by some consistency check, e.g. epipolar geometry, homography, or topological
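The nearest-neighbour test with the 2nd-neighbour comparison (Lowe's ratio test) can be sketched in a few lines; the 0.8 ratio and the toy descriptors are illustrative choices:

```python
import numpy as np

def match(desc1, desc2, ratio=0.8):
    """Match each row of desc1 to its nearest row in desc2, keeping the match
    only if the best distance clearly beats the 2nd best (ratio test)."""
    matches = []
    for i, d in enumerate(desc1):
        dists = np.linalg.norm(desc2 - d, axis=1)   # Euclidean distances
        j, k = np.argsort(dists)[:2]                # nearest, 2nd nearest
        if dists[j] < ratio * dists[k]:             # unambiguous -> accept
            matches.append((i, int(j)))
    return matches

a = np.array([[1.0, 0.0], [0.0, 1.0]])
b = np.array([[0.95, 0.05], [0.0, 0.9], [5.0, 5.0]])
m = match(a, b)   # both descriptors find an unambiguous nearest neighbour
```

The accepted pairs would then go through a consistency check as described above (e.g. a RANSAC fit of an epipolar geometry or homography).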
134 PART 5: EXTENSIONS
135 Examples: extensions of SURF to 3D - To show that there is more to think about than 2D images - To give a feel for applications, which also exist in the 2D (image) case, but will be discussed in more detail later
136 Video: spatio-temporal features Video as a stack of frames: a volume. E.g. use of a 3D version of SURF. Different scales for spatial and temporal dimensions (i.e. look at different frequencies), but aligned with the image and time axes
137 Video: spatio-temporal features Can be used for several apps: Near-duplicate video copy detection Action recognition in video Action localisation in video Video synchronisation
138 Video: spatio-temporal features Copy detection Link news stories together via reused shots Detect copyright violations Not a trivial problem: chyrons (added labels), cropping, scaling, changes in color
139 Video: spatio-temporal features Copy detection Look for matching spatio-temporal SURF
140 Video: spatio-temporal features Copy detection in a mash-up
141 Video: spatio-temporal features Exemplar-based action recognition & localisation, with 'Bag-of-Words'. Here we trained the system to detect drinking. Training done on several movies. Tested on 2 short movies
142 Video: spatio-temporal features Action recognition With so-called Bag-of-Words technique
143 Video: spatio-temporal features Exemplar-based action recognition & localisation. The top 15 detected drinking actions. Picking up a jar looks like drinking. Smoke floating upwards confuses the system
144 3D shape as the set of features 3D SURF, now with fully free orientation in 3D
145 3D model description Can be used for several apps: Find other models of the same objects Categorize a model as belonging to a shape class Find other models of the same class
146 3D shape as the set of features 3D SURF, now with fully free orientation in 3D
147 Similarity between 3D shapes As in 2D, descriptors can be directly compared, or first be clustered into 'visual words'. Features belonging to the same visual word Correspondences between same shapes Correspondences between shapes of the same class
148 3D shape retrieval Given the query 3D shape, we want to find similar (relevant) shapes in a large collection of 3D shapes (BoW). Query Retrieved similar shapes
149 3D shape categorisation We want to recognize the class (cat, woman, bike, ...) of a given 3D shape, e.g. using ISM (Implicit Shape Model). A labelled database of shapes (elephant, horse, person, tiger) is used to create a recognition model that then recognizes the class of a query shape
150 3D shape categorization Examples of recognized classes based on 3D obtained from structure-from-motion (i.e. from images): bike, person, plant
Image Features: Detection, Description, and Matching and their Applications Image Representation: Global Versus Local Features Features/ keypoints/ interset points are interesting locations in the image.
More informationImage features. Image Features
Image features Image features, such as edges and interest points, provide rich information on the image content. They correspond to local regions in the image and are fundamental in many applications in
More informationVisual Tracking (1) Tracking of Feature Points and Planar Rigid Objects
Intelligent Control Systems Visual Tracking (1) Tracking of Feature Points and Planar Rigid Objects Shingo Kagami Graduate School of Information Sciences, Tohoku University swk(at)ic.is.tohoku.ac.jp http://www.ic.is.tohoku.ac.jp/ja/swk/
More informationCS 378: Autonomous Intelligent Robotics. Instructor: Jivko Sinapov
CS 378: Autonomous Intelligent Robotics Instructor: Jivko Sinapov http://www.cs.utexas.edu/~jsinapov/teaching/cs378/ Visual Registration and Recognition Announcements Homework 6 is out, due 4/5 4/7 Installing
More informationLocal Image preprocessing (cont d)
Local Image preprocessing (cont d) 1 Outline - Edge detectors - Corner detectors - Reading: textbook 5.3.1-5.3.5 and 5.3.10 2 What are edges? Edges correspond to relevant features in the image. An edge
More informationFeatures Points. Andrea Torsello DAIS Università Ca Foscari via Torino 155, Mestre (VE)
Features Points Andrea Torsello DAIS Università Ca Foscari via Torino 155, 30172 Mestre (VE) Finding Corners Edge detectors perform poorly at corners. Corners provide repeatable points for matching, so
More informationFeature Detectors and Descriptors: Corners, Lines, etc.
Feature Detectors and Descriptors: Corners, Lines, etc. Edges vs. Corners Edges = maxima in intensity gradient Edges vs. Corners Corners = lots of variation in direction of gradient in a small neighborhood
More informationFeature descriptors and matching
Feature descriptors and matching Detections at multiple scales Invariance of MOPS Intensity Scale Rotation Color and Lighting Out-of-plane rotation Out-of-plane rotation Better representation than color:
More informationMidterm Wed. Local features: detection and description. Today. Last time. Local features: main components. Goal: interest operator repeatability
Midterm Wed. Local features: detection and description Monday March 7 Prof. UT Austin Covers material up until 3/1 Solutions to practice eam handed out today Bring a 8.5 11 sheet of notes if you want Review
More informationComparison of Feature Detection and Matching Approaches: SIFT and SURF
GRD Journals- Global Research and Development Journal for Engineering Volume 2 Issue 4 March 2017 ISSN: 2455-5703 Comparison of Detection and Matching Approaches: SIFT and SURF Darshana Mistry PhD student
More informationComputer vision: models, learning and inference. Chapter 13 Image preprocessing and feature extraction
Computer vision: models, learning and inference Chapter 13 Image preprocessing and feature extraction Preprocessing The goal of pre-processing is to try to reduce unwanted variation in image due to lighting,
More informationFeature Tracking and Optical Flow
Feature Tracking and Optical Flow Prof. D. Stricker Doz. G. Bleser Many slides adapted from James Hays, Derek Hoeim, Lana Lazebnik, Silvio Saverse, who in turn adapted slides from Steve Seitz, Rick Szeliski,
More informationAutomatic Image Alignment
Automatic Image Alignment with a lot of slides stolen from Steve Seitz and Rick Szeliski Mike Nese CS194: Image Manipulation & Computational Photography Alexei Efros, UC Berkeley, Fall 2018 Live Homography
More informationEE368 Project Report CD Cover Recognition Using Modified SIFT Algorithm
EE368 Project Report CD Cover Recognition Using Modified SIFT Algorithm Group 1: Mina A. Makar Stanford University mamakar@stanford.edu Abstract In this report, we investigate the application of the Scale-Invariant
More informationCorner Detection. GV12/3072 Image Processing.
Corner Detection 1 Last Week 2 Outline Corners and point features Moravec operator Image structure tensor Harris corner detector Sub-pixel accuracy SUSAN FAST Example descriptor: SIFT 3 Point Features
More informationEvaluation and comparison of interest points/regions
Introduction Evaluation and comparison of interest points/regions Quantitative evaluation of interest point/region detectors points / regions at the same relative location and area Repeatability rate :
More informationComputer Vision. Recap: Smoothing with a Gaussian. Recap: Effect of σ on derivatives. Computer Science Tripos Part II. Dr Christopher Town
Recap: Smoothing with a Gaussian Computer Vision Computer Science Tripos Part II Dr Christopher Town Recall: parameter σ is the scale / width / spread of the Gaussian kernel, and controls the amount of
More informationObject Detection by Point Feature Matching using Matlab
Object Detection by Point Feature Matching using Matlab 1 Faishal Badsha, 2 Rafiqul Islam, 3,* Mohammad Farhad Bulbul 1 Department of Mathematics and Statistics, Bangladesh University of Business and Technology,
More informationPatch-based Object Recognition. Basic Idea
Patch-based Object Recognition 1! Basic Idea Determine interest points in image Determine local image properties around interest points Use local image properties for object classification Example: Interest
More informationCS 556: Computer Vision. Lecture 3
CS 556: Computer Vision Lecture 3 Prof. Sinisa Todorovic sinisa@eecs.oregonstate.edu 1 Outline Matlab Image features -- Interest points Point descriptors Homework 1 2 Basic MATLAB Commands 3 Basic MATLAB
More informationLocal Image Features
Local Image Features Computer Vision CS 143, Brown Read Szeliski 4.1 James Hays Acknowledgment: Many slides from Derek Hoiem and Grauman&Leibe 2008 AAAI Tutorial This section: correspondence and alignment
More informationComputer Vision I - Appearance-based Matching and Projective Geometry
Computer Vision I - Appearance-based Matching and Projective Geometry Carsten Rother 05/11/2015 Computer Vision I: Image Formation Process Roadmap for next four lectures Computer Vision I: Image Formation
More informationFeature-based methods for image matching
Feature-based methods for image matching Bag of Visual Words approach Feature descriptors SIFT descriptor SURF descriptor Geometric consistency check Vocabulary tree Digital Image Processing: Bernd Girod,
More informationImplementation and Comparison of Feature Detection Methods in Image Mosaicing
IOSR Journal of Electronics and Communication Engineering (IOSR-JECE) e-issn: 2278-2834,p-ISSN: 2278-8735 PP 07-11 www.iosrjournals.org Implementation and Comparison of Feature Detection Methods in Image
More informationDiscovering Visual Hierarchy through Unsupervised Learning Haider Razvi
Discovering Visual Hierarchy through Unsupervised Learning Haider Razvi hrazvi@stanford.edu 1 Introduction: We present a method for discovering visual hierarchy in a set of images. Automatically grouping
More informationVisual Tracking (1) Pixel-intensity-based methods
Intelligent Control Systems Visual Tracking (1) Pixel-intensity-based methods Shingo Kagami Graduate School of Information Sciences, Tohoku University swk(at)ic.is.tohoku.ac.jp http://www.ic.is.tohoku.ac.jp/ja/swk/
More information3D Vision. Viktor Larsson. Spring 2019
3D Vision Viktor Larsson Spring 2019 Schedule Feb 18 Feb 25 Mar 4 Mar 11 Mar 18 Mar 25 Apr 1 Apr 8 Apr 15 Apr 22 Apr 29 May 6 May 13 May 20 May 27 Introduction Geometry, Camera Model, Calibration Features,
More informationApplication questions. Theoretical questions
The oral exam will last 30 minutes and will consist of one application question followed by two theoretical questions. Please find below a non exhaustive list of possible application questions. The list
More informationObtaining Feature Correspondences
Obtaining Feature Correspondences Neill Campbell May 9, 2008 A state-of-the-art system for finding objects in images has recently been developed by David Lowe. The algorithm is termed the Scale-Invariant
More informationLocal Image Features
Local Image Features Computer Vision Read Szeliski 4.1 James Hays Acknowledgment: Many slides from Derek Hoiem and Grauman&Leibe 2008 AAAI Tutorial Flashed Face Distortion 2nd Place in the 8th Annual Best
More informationAutomatic Image Alignment
Automatic Image Alignment Mike Nese with a lot of slides stolen from Steve Seitz and Rick Szeliski 15-463: Computational Photography Alexei Efros, CMU, Fall 2010 Live Homography DEMO Check out panoramio.com
More informationEE795: Computer Vision and Intelligent Systems
EE795: Computer Vision and Intelligent Systems Spring 2012 TTh 17:30-18:45 FDH 204 Lecture 14 130307 http://www.ee.unlv.edu/~b1morris/ecg795/ 2 Outline Review Stereo Dense Motion Estimation Translational
More informationFeature Descriptors. CS 510 Lecture #21 April 29 th, 2013
Feature Descriptors CS 510 Lecture #21 April 29 th, 2013 Programming Assignment #4 Due two weeks from today Any questions? How is it going? Where are we? We have two umbrella schemes for object recognition
More informationSIFT: Scale Invariant Feature Transform
1 / 25 SIFT: Scale Invariant Feature Transform Ahmed Othman Systems Design Department University of Waterloo, Canada October, 23, 2012 2 / 25 1 SIFT Introduction Scale-space extrema detection Keypoint
More information3D Photography. Marc Pollefeys, Torsten Sattler. Spring 2015
3D Photography Marc Pollefeys, Torsten Sattler Spring 2015 Schedule (tentative) Feb 16 Feb 23 Mar 2 Mar 9 Mar 16 Mar 23 Mar 30 Apr 6 Apr 13 Apr 20 Apr 27 May 4 May 11 May 18 May 25 Introduction Geometry,
More informationImplementing the Scale Invariant Feature Transform(SIFT) Method
Implementing the Scale Invariant Feature Transform(SIFT) Method YU MENG and Dr. Bernard Tiddeman(supervisor) Department of Computer Science University of St. Andrews yumeng@dcs.st-and.ac.uk Abstract The
More informationLecture: RANSAC and feature detectors
Lecture: RANSAC and feature detectors Juan Carlos Niebles and Ranjay Krishna Stanford Vision and Learning Lab 1 What we will learn today? A model fitting method for edge detection RANSAC Local invariant
More informationChapter 3 Image Registration. Chapter 3 Image Registration
Chapter 3 Image Registration Distributed Algorithms for Introduction (1) Definition: Image Registration Input: 2 images of the same scene but taken from different perspectives Goal: Identify transformation
More informationThe bits the whirl-wind left out..
The bits the whirl-wind left out.. Reading: Szeliski Background: 1 Edge Features Reading: Szeliski - 1.1 + 4.2.1 (Background: Ch 2 + 3.1 3.3) Background: 2 Edges as Gradients Edge detection = differential
More informationEECS150 - Digital Design Lecture 14 FIFO 2 and SIFT. Recap and Outline
EECS150 - Digital Design Lecture 14 FIFO 2 and SIFT Oct. 15, 2013 Prof. Ronald Fearing Electrical Engineering and Computer Sciences University of California, Berkeley (slides courtesy of Prof. John Wawrzynek)
More informationImage Features: Local Descriptors. Sanja Fidler CSC420: Intro to Image Understanding 1/ 58
Image Features: Local Descriptors Sanja Fidler CSC420: Intro to Image Understanding 1/ 58 [Source: K. Grauman] Sanja Fidler CSC420: Intro to Image Understanding 2/ 58 Local Features Detection: Identify
More informationCS 558: Computer Vision 4 th Set of Notes
1 CS 558: Computer Vision 4 th Set of Notes Instructor: Philippos Mordohai Webpage: www.cs.stevens.edu/~mordohai E-mail: Philippos.Mordohai@stevens.edu Office: Lieb 215 Overview Keypoint matching Hessian
More informationOptical flow and tracking
EECS 442 Computer vision Optical flow and tracking Intro Optical flow and feature tracking Lucas-Kanade algorithm Motion segmentation Segments of this lectures are courtesy of Profs S. Lazebnik S. Seitz,
More informationMultiple-Choice Questionnaire Group C
Family name: Vision and Machine-Learning Given name: 1/28/2011 Multiple-Choice naire Group C No documents authorized. There can be several right answers to a question. Marking-scheme: 2 points if all right
More informationAutonomous Navigation for Flying Robots
Computer Vision Group Prof. Daniel Cremers Autonomous Navigation for Flying Robots Lecture 7.1: 2D Motion Estimation in Images Jürgen Sturm Technische Universität München 3D to 2D Perspective Projections
More informationA Comparison of SIFT and SURF
A Comparison of SIFT and SURF P M Panchal 1, S R Panchal 2, S K Shah 3 PG Student, Department of Electronics & Communication Engineering, SVIT, Vasad-388306, India 1 Research Scholar, Department of Electronics
More informationBias-Variance Trade-off (cont d) + Image Representations
CS 275: Machine Learning Bias-Variance Trade-off (cont d) + Image Representations Prof. Adriana Kovashka University of Pittsburgh January 2, 26 Announcement Homework now due Feb. Generalization Training
More informationVideo Google: A Text Retrieval Approach to Object Matching in Videos
Video Google: A Text Retrieval Approach to Object Matching in Videos Josef Sivic, Frederik Schaffalitzky, Andrew Zisserman Visual Geometry Group University of Oxford The vision Enable video, e.g. a feature
More informationLecture 6: Finding Features (part 1/2)
Lecture 6: Finding Features (part 1/2) Dr. Juan Carlos Niebles Stanford AI Lab Professor Stanford Vision Lab 1 What we will learn today? Local invariant features MoOvaOon Requirements, invariances Keypoint
More informationComputer Vision I. Announcement. Corners. Edges. Numerical Derivatives f(x) Edge and Corner Detection. CSE252A Lecture 11
Announcement Edge and Corner Detection Slides are posted HW due Friday CSE5A Lecture 11 Edges Corners Edge is Where Change Occurs: 1-D Change is measured by derivative in 1D Numerical Derivatives f(x)
More informationSCALE INVARIANT FEATURE TRANSFORM (SIFT)
1 SCALE INVARIANT FEATURE TRANSFORM (SIFT) OUTLINE SIFT Background SIFT Extraction Application in Content Based Image Search Conclusion 2 SIFT BACKGROUND Scale-invariant feature transform SIFT: to detect
More information
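The SSD error E(u,v) described above — comparing each pixel of a window W against the same window shifted by (u,v) and summing the squared differences — can be sketched as follows. This is a minimal illustration, assuming a grayscale NumPy image; the function name `ssd_error` and the window convention are mine, not the slides'.

```python
import numpy as np

def ssd_error(image, window, u, v):
    """SSD error E(u, v): sum of squared differences between the pixels
    of window W and the same window shifted by (u, v).
    `window` is (row0, col0, row1, col1) with exclusive end indices."""
    r0, c0, r1, c1 = window
    patch = image[r0:r1, c0:c1].astype(float)
    shifted = image[r0 + u:r1 + u, c0 + v:c1 + v].astype(float)
    return float(np.sum((shifted - patch) ** 2))

# Toy image: a bright quadrant whose upper-left corner sits at (10, 10).
img = np.zeros((20, 20))
img[10:, 10:] = 255.0

# Flat region: shifting changes nothing, so E(u, v) is zero.
flat = ssd_error(img, (2, 2, 7, 7), 1, 1)

# Patch straddling the corner: shifting in any direction changes the
# pattern, so E(u, v) is large — the behavior that makes corners unique.
corner = ssd_error(img, (8, 8, 13, 13), 1, 1)
```

Running this, `flat` comes out zero while `corner` is large, matching the flat/edge/corner distinction drawn earlier: a patch is a good interest point exactly when E(u,v) is significant for shifts in all directions.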