CS 1674: Intro to Computer Vision. Midterm Review. Prof. Adriana Kovashka University of Pittsburgh October 10, 2016

1 CS 1674: Intro to Computer Vision Midterm Review Prof. Adriana Kovashka University of Pittsburgh October 10, 2016

2 Reminders. The midterm exam is in class this coming Wednesday. There will be no make-up exams unless you or a close relative is seriously ill!

3 Review requests I received (number of requests in parentheses):
- Textures and texture representations, image responses to size and orientation of Gaussian filter banks, comparisons (4)
- Corner detection algorithm, Harris (4)
- Invariance vs covariance, affine intensity change, and applications to know (3)
- Scale-invariant detection, blob detection, Harris automatic scale selection (3)
- SIFT and feature description (3)
- Keypoint matching algorithm, feature matching (2)
- Examples of how to compute and apply homography, epipolar geometry (2)
- Why it makes sense to use the ratio (distance to best match / distance to second best match) when matching features across images
- Summary of equations students need to know
- Pyramids
- Convolution practical use
- Filters for transforming the image

4 Transformations, Homographies, Epipolar Geometry

5 2D Linear Transformations: $\begin{bmatrix} x' \\ y' \end{bmatrix} = \begin{bmatrix} a & b \\ c & d \end{bmatrix}\begin{bmatrix} x \\ y \end{bmatrix}$. Only linear 2D transformations can be represented with a 2x2 matrix. Linear transformations are combinations of Scale, Rotation, Shear, and Mirror. Alyosha Efros

6 2D Affine Transformations. Affine transformations are combinations of linear transformations and translations. They map lines to lines, and parallel lines remain parallel: $\begin{bmatrix} x' \\ y' \\ w' \end{bmatrix} = \begin{bmatrix} a & b & c \\ d & e & f \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} x \\ y \\ w \end{bmatrix}$. Adapted from Alyosha Efros

7 Projective Transformations. Projective transformations include affine transformations and projective warps. Parallel lines do not necessarily remain parallel: $\begin{bmatrix} x' \\ y' \\ w' \end{bmatrix} = \begin{bmatrix} a & b & c \\ d & e & f \\ g & h & i \end{bmatrix}\begin{bmatrix} x \\ y \\ w \end{bmatrix}$. Kristen Grauman

8 How to stitch together a panorama (a.k.a. mosaic)? Basic procedure: take a sequence of images from the same position (rotate the camera about its optical center); compute the homography (transformation) between the second image and the first; transform the second image to overlap with the first; blend the two together to create a mosaic; if there are more images, repeat. Modified from Steve Seitz

9 Computing the homography. Given pairs of corresponding points $(x_1, y_1) \leftrightarrow (x_1', y_1')$, $(x_2, y_2) \leftrightarrow (x_2', y_2')$, ..., $(x_n, y_n) \leftrightarrow (x_n', y_n')$ in the two images, we need to set up an equation where the parameters of H are the unknowns. Kristen Grauman

10 Computing the homography. $p' = Hp$: $\begin{bmatrix} wx' \\ wy' \\ w \end{bmatrix} = \begin{bmatrix} a & b & c \\ d & e & f \\ g & h & i \end{bmatrix}\begin{bmatrix} x \\ y \\ 1 \end{bmatrix}$. Can set the scale factor i = 1, so there are 8 unknowns. Set up a system of linear equations $Ah = b$, where the vector of unknowns is $h = [a, b, c, d, e, f, g, h]^T$. Need at least 8 equations, but the more the better. Solve for h; if overconstrained, solve using least-squares: $\min \|Ah - b\|^2$. Kristen Grauman

11 Computing the homography. Assume we have four matched points: how do we compute the homography H? Write $p' = Hp$ with H containing the nine unknown entries, and rearrange each correspondence $(x, y) \leftrightarrow (x', y')$ into two linear equations in those entries (the terms involve products such as $xx'$, $yx'$, $xy'$, $yy'$). Stacking all equations gives $Ah = 0$, where h is the 9-vector of the entries of H. Apply SVD: $A = UDV^T$ (in Matlab, [U, S, V] = svd(A)); the solution h is the column of V corresponding to the smallest singular value. Derek Hoiem
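
A minimal NumPy sketch of this recipe (not from the slides; the two-rows-per-correspondence layout of A follows the standard direct linear transform, and the function and variable names are mine):

```python
import numpy as np

def fit_homography(pts1, pts2):
    """Estimate H such that pts2 ~ H * pts1, from >= 4 point correspondences (DLT sketch)."""
    A = []
    for (x, y), (xp, yp) in zip(pts1, pts2):
        # Each correspondence contributes two rows to A (standard DLT layout).
        A.append([-x, -y, -1,  0,  0,  0, x * xp, y * xp, xp])
        A.append([ 0,  0,  0, -x, -y, -1, x * yp, y * yp, yp])
    A = np.array(A)
    # h = column of V associated with the smallest singular value of A.
    _, _, Vt = np.linalg.svd(A)
    h = Vt[-1]
    return (h / h[-1]).reshape(3, 3)   # normalize so that the last entry is 1
```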

12 Transforming the second image. To apply a given homography H to a test point (x, y): compute $p' = Hp$ (a regular matrix multiply) with $p = [x\; y\; 1]^T$, giving $p' = [wx'\; wy'\; w]^T$; then convert p' from homogeneous to image coordinates, $(x', y') = (wx'/w,\; wy'/w)$, to find where the point of Image 2 lands on the Image 1 canvas. Modified from Kristen Grauman
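
A small sketch of that step, assuming H and a test point are given (hypothetical helper, NumPy):

```python
import numpy as np

def apply_homography(H, x, y):
    """Map (x, y) through H and convert back from homogeneous coordinates."""
    wx, wy, w = H @ np.array([x, y, 1.0])   # p' = Hp (regular matrix multiply)
    return wx / w, wy / w                   # divide by w to get image coordinates
```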

13 Transforming the second image. Forward warping: send each pixel f(x, y) to its corresponding location $(x', y') = H(x, y)$ in the right image g(x', y') (Image 2 is warped onto the Image 1 canvas). Modified from Alyosha Efros

14 Depth from disparity. We have two images taken from cameras with different intrinsic and extrinsic parameters. How do we match a point in the first image to a point in the second? (Figures: image I(x, y), disparity map D(x, y), image I'(x', y').) So if we could find the corresponding points in two images, we could estimate relative depth. Kristen Grauman

15 Epipolar geometry: notation. Baseline = the line connecting the two camera centers. Epipoles = the intersections of the baseline with the image planes = the projections of the other camera center. Epipolar plane = the plane containing the baseline. Epipolar lines = the intersections of the epipolar plane with the image planes (they always come in corresponding pairs). Note: all epipolar lines intersect at the epipole. Derek Hoiem

16 Epipolar constraint The epipolar constraint is useful because it reduces the correspondence problem to a 1D search along an epipolar line. Kristen Grauman, image from Andrew Zisserman

17 Essential matrix. $X' \cdot (T \times RX) = 0 \;\Rightarrow\; X'^T [T_\times] R X = 0$. Let $E = [T_\times] R$; then $X'^T E X = 0$. E is called the essential matrix, and it relates corresponding image points between both cameras, given the rotation and translation. Before we said: if we observe a point in one image, its position in the other image is constrained to lie on the line defined by the above. It turns out $EX$ is the epipolar line, in the other image, corresponding to X. Note: these points are in camera coordinate systems. Kristen Grauman

18 Basic stereo matching algorithm. For each pixel in the first image: find the corresponding epipolar scanline in the right image, search along the epipolar line and pick the best match x', compute the disparity x − x', and set depth(x) = f*T/(x − x'). Derek Hoiem

19 Correspondence search. Slide a window along the right scanline and compare the contents of that window with the reference window in the left image; the matching cost (e.g., Euclidean distance) as a function of disparity determines the best match. (Figure: left/right scanlines and the matching-cost-vs-disparity curve.) Derek Hoiem
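
A sketch of this scanline search, assuming rectified images so the epipolar lines are horizontal rows; the window size, disparity range, and SSD cost below are my choices, not the slides':

```python
import numpy as np

def disparity_for_pixel(left, right, y, x, half=5, max_disp=64):
    """Best disparity for left-image pixel (x, y) by sliding a window along the scanline."""
    ref = left[y - half:y + half + 1, x - half:x + half + 1].astype(float)
    best_d, best_cost = 0, np.inf
    for d in range(max_disp):
        xr = x - d                               # candidate column in the right image
        if xr - half < 0:
            break
        cand = right[y - half:y + half + 1, xr - half:xr + half + 1].astype(float)
        cost = np.sum((ref - cand) ** 2)         # matching cost: SSD (squared Euclidean)
        if cost < best_cost:
            best_cost, best_d = cost, d
    return best_d
```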

20 Geometry for a simple stereo system. Assume parallel optical axes and known camera parameters (i.e., calibrated cameras). What is the expression for Z? Similar triangles $(p_l, P, p_r)$ and $(O_l, P, O_r)$: $\frac{T + x_l - x_r}{Z - f} = \frac{T}{Z}$, which gives depth $Z = f\,\frac{T}{x_r - x_l}$, i.e. depth is inversely proportional to disparity. Kristen Grauman
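
Turning a disparity map into depth with that expression is one element-wise operation; a sketch, assuming f, T, and the disparity map are given:

```python
import numpy as np

def depth_from_disparity(disparity, f, T):
    """Z = f * T / disparity, with zero-disparity pixels mapped to infinity."""
    disparity = np.asarray(disparity, dtype=float)
    return np.where(disparity > 0, f * T / np.maximum(disparity, 1e-9), np.inf)
```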

21 Results with window search (figures: left image, right image, window-based matching result, ground truth). Derek Hoiem

22 How can we improve? Uniqueness: for any point in one image, there should be at most one matching point in the other image. Ordering: corresponding points should be in the same order in both views. Smoothness: we expect disparity values to change slowly (for the most part). Derek Hoiem

23 Many of these constraints can be encoded in an energy function and solved using graph cuts (figures: window-based result before, graph-cuts result, ground truth). Y. Boykov, O. Veksler, and R. Zabih, Fast Approximate Energy Minimization via Graph Cuts, PAMI 2001. For the latest and greatest: Derek Hoiem

24 Projective structure from motion. Given: m images of n fixed 3D points, $x_{ij} = P_i X_j$, $i = 1, \dots, m$, $j = 1, \dots, n$. Problem: estimate the m projection matrices $P_i$ and the n 3D points $X_j$ from the mn corresponding 2D points $x_{ij}$. Svetlana Lazebnik

25 Photosynth. Noah Snavely, Steven M. Seitz, Richard Szeliski, "Photo tourism: Exploring photo collections in 3D," SIGGRAPH 2006

26 3D from multiple images Building Rome in a Day: Agarwal et al. 2009

27 Recap: Epipoles. Point x in the left image corresponds to epipolar line l' in the right image. The epipolar line passes through the epipole (the intersection of the cameras' baseline with the image plane). Derek Hoiem

28 Recap: Essential, Fundamental Matrices. The fundamental matrix maps from a point in one image to a line in the other. If x and x' correspond to the same 3D point X: $x'^T F x = 0$. The essential matrix is like the fundamental matrix but more constrained. Adapted from Derek Hoiem

29 Recap: stereo with calibrated cameras. Given an image pair and R, T: detect some features, compute the essential matrix E, match features using the epipolar and other constraints, then triangulate for 3D structure and get depth. Kristen Grauman

30 Texture representations

31 Correlation filtering. Say the averaging window size is 2k+1 x 2k+1: attribute uniform weight to each pixel and loop over all pixels in the neighborhood around image pixel F[i,j]: $G[i,j] = \frac{1}{(2k+1)^2}\sum_{u=-k}^{k}\sum_{v=-k}^{k} F[i+u, j+v]$. Now generalize to allow different weights depending on the neighboring pixel's relative position (non-uniform weights): $G[i,j] = \sum_{u=-k}^{k}\sum_{v=-k}^{k} H[u,v]\, F[i+u, j+v]$. Kristen Grauman
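
A direct, unoptimized sketch of the generalized (non-uniform weight) case, ignoring border handling; the loop structure and names are mine:

```python
import numpy as np

def correlate(F, H):
    """Cross-correlation G[i,j] = sum_{u,v} H[u,v] * F[i+u, j+v], valid region only."""
    k = H.shape[0] // 2                      # H is (2k+1) x (2k+1)
    rows, cols = F.shape
    G = np.zeros_like(F, dtype=float)
    for i in range(k, rows - k):
        for j in range(k, cols - k):
            patch = F[i - k:i + k + 1, j - k:j + k + 1]
            G[i, j] = np.sum(H * patch)      # weighted sum over the neighborhood
    return G

# Uniform averaging corresponds to H = np.ones((2*k+1, 2*k+1)) / (2*k+1)**2.
```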

32 Convolution vs. correlation. Cross-correlation: $G[i,j] = \sum_{u=-k}^{k}\sum_{v=-k}^{k} H[u,v]\, F[i+u, j+v]$. Convolution: $G[i,j] = \sum_{u=-k}^{k}\sum_{v=-k}^{k} H[u,v]\, F[i-u, j-v]$, i.e. the same operation but with the filter H flipped in both dimensions about its center (0, 0).

33 Filters for computing gradients: convolve the image with a derivative filter. Slide credit: Derek Hoiem

34 Texture representation: example. Original image → derivative filter responses, squared → statistics to summarize patterns in small windows (a table with mean d/dx value and mean d/dy value per window, one row per window #). Kristen Grauman

35 Filter banks. What filters to put in the bank? Typically we want a combination of scales and orientations and different types of patterns (edges, bars, spots). Matlab code available for these examples: Kristen Grauman

36 Matching with filters. Goal: find the template (an eye patch) in the image. Method 0: filter the image with the eye patch: $g[m,n] = \sum_{k,l} h[k,l]\, f[m+k, n+l]$, where f = image, h = filter (the eye patch), g = response. What went wrong? The response is large wherever the image is bright, not just at the eye. (Figure: input and filtered image.) Derek Hoiem

37 Matching with filters. Goal: find the template in the image. Method 1: filter the image with a zero-mean eye patch: $g[m,n] = \sum_{k,l} \big(h[k,l] - \overline{h}\big)\, f[m+k, n+l]$, where $\overline{h} = \mathrm{mean}(h)$. This likes bright pixels where the filter weights are above average and dark pixels where the filter weights are below average. (Figure: input, filtered image (scaled), thresholded image, showing true and false detections.) Derek Hoiem
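
A sketch of Method 1, correlating the image with the zero-mean template; the loop structure and names are mine:

```python
import numpy as np

def zero_mean_match(image, template):
    """Response map: correlate the image with (template - mean(template))."""
    h = template.astype(float) - template.mean()   # zero-mean filter
    k0, k1 = h.shape[0] // 2, h.shape[1] // 2
    rows, cols = image.shape
    R = np.zeros((rows, cols), dtype=float)
    for i in range(k0, rows - k0):
        for j in range(k1, cols - k1):
            patch = image[i - k0:i + k0 + 1, j - k1:j + k1 + 1].astype(float)
            R[i, j] = np.sum(h * patch)   # large where bright pixels sit under above-average weights
    return R
```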

38 Showing magnitude of responses. Kristen Grauman

39 Kristen Grauman

40 Kristen Grauman

41 Representing texture by mean abs response (figure: the filters and their mean absolute responses). Derek Hoiem

42 Computing distances using texture. Treat each window's vector of filter-response statistics (Dimension 1, Dimension 2, ..., #dim) as a point, e.g. a and b, and compare them with a distance such as the Euclidean distance $D(a,b) = \sqrt{\sum_{i=1}^{\#dim} (a(i) - b(i))^2}$. Kristen Grauman
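
A tiny sketch of that distance between two texture descriptors (e.g., vectors of mean absolute filter responses):

```python
import numpy as np

def texture_distance(a, b):
    """Euclidean distance D(a, b) = sqrt(sum_i (a_i - b_i)^2)."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return np.sqrt(np.sum((a - b) ** 2))
```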

43 Feature detection: Harris

44 Corners as distinctive interest points. We should easily recognize the keypoint by looking through a small window; shifting the window in any direction should give a large change in intensity. Flat region: no change in all directions. Edge: no change along the edge direction. Corner: significant change in all directions. A. Efros, D. Frolova, D. Simakov

45 Harris Detector: Mathematics. Window-averaged squared change of intensity induced by shifting the image data by [u, v]: $E(u,v) = \sum_{x,y} w(x,y)\,\big[I(x+u, y+v) - I(x,y)\big]^2$, i.e. window function times (shifted intensity minus intensity) squared. The window function w(x, y) is either 1 in the window and 0 outside, or a Gaussian. D. Frolova, D. Simakov

46 Harris Detector: Mathematics. Expanding I(x, y) in a Taylor series, we have, for small shifts [u, v], a quadratic approximation to the error surface between a patch and itself shifted by [u, v]: $E(u,v) \approx [u\;\; v]\, M \begin{bmatrix} u \\ v \end{bmatrix}$, where M is a 2x2 matrix computed from image derivatives. D. Frolova, D. Simakov

47 Harris Detector: Mathematics. $M = \sum_{x,y} w(x,y) \begin{bmatrix} I_x I_x & I_x I_y \\ I_x I_y & I_y I_y \end{bmatrix}$. Notation: $I_x = \frac{\partial I}{\partial x}$, $I_y = \frac{\partial I}{\partial y}$, $I_x I_y = \frac{\partial I}{\partial x}\frac{\partial I}{\partial y}$. K. Grauman

48 What does the matrix M reveal? Since M is symmetric, we have $M = X \begin{bmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{bmatrix} X^T$, i.e. $M x_i = \lambda_i x_i$. The eigenvalues of M reveal the amount of intensity change in the two principal orthogonal gradient directions in the window. K. Grauman

49 Corner response function: $R = \lambda_1 \lambda_2 - k(\lambda_1 + \lambda_2)^2 = \det(M) - k\,\mathrm{trace}(M)^2$. Edge: $\lambda_1 \gg \lambda_2$ or $\lambda_2 \gg \lambda_1$. Corner: $\lambda_1$ and $\lambda_2$ are large, $\lambda_1 \sim \lambda_2$. Flat region: $\lambda_1$ and $\lambda_2$ are small. Adapted from A. Efros, D. Frolova, D. Simakov, K. Grauman

50 Harris Detector: Algorithm. Compute image gradients Ix and Iy for all pixels. For each pixel: compute M by looping over its neighbors (x, y), then compute the corner response $R = \det(M) - k\,\mathrm{trace}(M)^2$ (k: empirical constant, typically 0.04-0.06). Find points with large corner response function R (R > threshold). Take the points of locally maximum R as the detected feature points (i.e., pixels where R is bigger than that of all 4 or 8 neighbors). D. Frolova, D. Simakov
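
A compact sketch of the whole pipeline in NumPy/SciPy; the window size, k, the threshold, and the use of uniform_filter for the window sums are my assumptions, not values prescribed by the slides:

```python
import numpy as np
from scipy.ndimage import uniform_filter, maximum_filter

def harris_response(I, win=5, k=0.05):
    """Corner response R = det(M) - k*trace(M)^2 at every pixel."""
    I = I.astype(float)
    Iy, Ix = np.gradient(I)                      # image gradients
    # Entries of M, averaged over a win x win window at each pixel.
    Sxx = uniform_filter(Ix * Ix, size=win)
    Syy = uniform_filter(Iy * Iy, size=win)
    Sxy = uniform_filter(Ix * Iy, size=win)
    det_M = Sxx * Syy - Sxy ** 2
    trace_M = Sxx + Syy
    return det_M - k * trace_M ** 2

def harris_corners(I, thresh=1e-4):
    """Keep pixels where R exceeds a threshold and is a local maximum over its 8 neighbors."""
    R = harris_response(I)
    local_max = (R == maximum_filter(R, size=3))
    ys, xs = np.nonzero((R > thresh * R.max()) & local_max)
    return list(zip(xs, ys))
```

The 3x3 maximum filter implements the 8-neighbor non-maximum suppression mentioned in the last step of the algorithm above.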

51 Example of Harris application. K. Grauman

52 Feature detection: Scale-invariance

53 Invariance vs covariance. A function is invariant under a certain family of transformations if its value does not change when a transformation from this family is applied to its argument. A function is covariant when it commutes with the transformation, i.e., applying the transformation to the argument of the function has the same effect as applying the transformation to the output of the function. [...] [For example,] the area of a 2D surface is invariant under 2D rotations, since rotating a 2D surface does not make it any smaller or bigger. But the orientation of the major axis of inertia of the surface is covariant under the same family of transformations, since rotating a 2D surface will affect the orientation of its major axis in exactly the same way. Local Invariant Feature Detectors: A Survey by Tinne Tuytelaars and Krystian Mikolajczyk, in Foundations and Trends in Computer Graphics and Vision Vol. 3, No. 3 (2007). Chapter 1, 3.2, 7

54 What happens if: Affine intensity change? Only derivatives are used => invariance to the intensity shift I → I + b. Intensity scaling I → a I scales the corner response, so points near a fixed threshold can change (figure: R vs. x (image coordinate) with a threshold). Therefore the detector is only partially invariant to an affine intensity change I → a I + b. L. Lazebnik

55 What happens if: Image translation Derivatives and window function are shift-invariant Corner location is covariant w.r.t. translation L. Lazebnik

56 What happens if: Image rotation Second moment ellipse rotates but its shape (i.e. eigenvalues) remains the same Corner location is covariant w.r.t. rotation L. Lazebnik

57 What happens if: Scaling? (Figure: a corner, zoomed in.) All points will be classified as edges. Corner location is not covariant to scaling! L. Lazebnik

58 Scale Invariant Detection Problem: How do we choose corresponding circles independently in each image? Do objects in the image have a characteristic scale that we can identify? D. Frolova, D. Simakov

59 Scale Invariant Detection. Solution: design a function on the region which is scale invariant (has the same shape even if the image is resized), and take a local maximum of this function. (Figure: f vs. region size for Image 1 and for Image 2 at scale 1/2; the maxima occur at region sizes s1 and s2.) Adapted from A. Torralba

60 Automatic Scale Selection. Function responses for increasing scale (scale signature): $f(I_{i_1 \ldots i_m}(x, \sigma))$ for one image and $f(I_{i_1 \ldots i_m}(x', \sigma'))$ for the other. K. Grauman, B. Leibe

61 Automatic Scale Selection. Function responses for increasing scale (scale signature), continued. K. Grauman, B. Leibe

62 Automatic Scale Selection. Function responses for increasing scale (scale signature), continued. K. Grauman, B. Leibe

63 What Is A Useful Signature Function? Laplacian of Gaussian = blob detector K. Grauman, B. Leibe

64 Difference of Gaussian ≈ Laplacian. We can approximate the Laplacian with a difference of Gaussians; it is more efficient to implement. Laplacian: $L = \sigma^2\,\big(G_{xx}(x,y,\sigma) + G_{yy}(x,y,\sigma)\big)$. Difference of Gaussians: $DoG = G(x,y,k\sigma) - G(x,y,\sigma)$.
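
A sketch of computing a DoG response at a single scale with SciPy's Gaussian filter; σ and the factor k = 1.6 are free parameters here, not values fixed by the slides:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog(image, sigma, k=1.6):
    """Difference of Gaussians: G(x, y, k*sigma) - G(x, y, sigma), applied to the image."""
    image = image.astype(float)
    return gaussian_filter(image, k * sigma) - gaussian_filter(image, sigma)

# Blob candidates are local extrema of dog(image, sigma) over position and sigma.
```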

65 Difference of Gaussian: Efficient computation. Computation in a Gaussian scale pyramid, sampling with step $\sigma^4 = 2$; the original image sits at the base of the pyramid. K. Grauman, B. Leibe

66 Find local maxima in position-scale space of the Difference-of-Gaussian. Position-scale space: find places where X is greater than all of its neighbors (shown in green), and output a list of (x, y, s). Adapted from K. Grauman, B. Leibe

67 Laplacian pyramid example Allows detection of increasingly coarse detail

68 Results: Difference-of-Gaussian K. Grauman, B. Leibe

69 Feature description

70 Gradients. Gradient magnitude and orientation at each pixel: $m(x,y) = \sqrt{g_x^2 + g_y^2}$, $\Theta(x,y) = \mathrm{atan}(g_y / g_x)$. Example with $g_x = 1$, $g_y = 0$: m(x, y) = sqrt(1 + 0) = 1, Θ(x, y) = atan(0/1) = 0.
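
The same computation over a whole image, as a sketch (np.arctan2 handles all quadrants; the gradient filters are left to np.gradient here):

```python
import numpy as np

def gradient_mag_ori(L):
    """Per-pixel gradient magnitude m(x,y) and orientation theta(x,y) of image L."""
    gy, gx = np.gradient(L.astype(float))     # gy = dL/dy, gx = dL/dx
    m = np.sqrt(gx ** 2 + gy ** 2)
    theta = np.arctan2(gy, gx)                # e.g. gx=1, gy=0 gives m=1, theta=0
    return m, theta
```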

71 Scale Invariant Feature Transform. Full version: divide the 16x16 window into a 4x4 grid of cells (2x2 case shown in the figure); quantize the gradient orientations, i.e. snap each gradient to one of 8 angles; each gradient contributes not just 1, but magnitude(gradient), to the histogram, i.e. stronger gradients contribute more; 16 cells * 8 orientations = 128-dimensional descriptor for each detected feature. Adapted from L. Zitnick, D. Lowe

72 Scale Invariant Feature Transform. Full version (same construction as the previous slide), then: normalize the descriptor to unit length, clip each entry at 0.2 (threshold), and renormalize. After normalizing, we have $\|d\| = 1$ such that each entry satisfies $d_i \le 0.2$. Adapted from L. Zitnick, D. Lowe
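
A sketch of the normalize-clip-renormalize step on a descriptor; the 0.2 clip value is the one quoted above, everything else is mine:

```python
import numpy as np

def normalize_sift(d, clip=0.2):
    """Normalize to unit length, clip entries at `clip`, then renormalize."""
    d = np.asarray(d, dtype=float)
    d = d / (np.linalg.norm(d) + 1e-12)   # first normalization
    d = np.minimum(d, clip)               # clip large entries (robustness to lighting changes)
    return d / (np.linalg.norm(d) + 1e-12)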

73 Making descriptor rotation invariant. Rotate each patch according to its dominant gradient orientation; this puts the patches into a canonical orientation. K. Grauman; image from Matthew Brown

74 Keypoint matching

75 Matching local features. To generate candidate matches between Image 1 and Image 2, find patches that have the most similar appearance (e.g., lowest feature Euclidean distance). Simplest approach: compare them all and take the closest (or the closest k, or all within a thresholded distance). K. Grauman

76 Robust matching. At what Euclidean distance value do we have a good match? To add robustness to matching, we can consider the ratio: distance to best match / distance to second best match. If it is low, the first match looks good; if it is high, the match could be ambiguous. K. Grauman

77 Ratio: example Let q be the query from the first image, d1 be the closest match in the second image, and d2 be the second closest match Let dist(q, d1) and dist(q, d2) be the distances Let r = dist(q, d1) / dist(q, d2) What is the largest that r can be? What is the lowest that r can be? If r is 1, what do we know about the two distances? What about when r is 0.1?
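
A sketch of matching with this ratio test: for each query descriptor, find the two closest database descriptors and keep the match only if r falls below a threshold (the 0.8 value is my assumption, in the spirit of Lowe's suggestion):

```python
import numpy as np

def ratio_test_matches(desc1, desc2, ratio_thresh=0.8):
    """Return (i, j) index pairs where desc1[i] matches desc2[j] and passes the ratio test."""
    matches = []
    for i, q in enumerate(desc1):
        dists = np.linalg.norm(desc2 - q, axis=1)   # Euclidean distance to every candidate
        j1, j2 = np.argsort(dists)[:2]              # closest and second-closest matches
        r = dists[j1] / (dists[j2] + 1e-12)         # ratio: best / second best
        if r < ratio_thresh:
            matches.append((i, j1))
    return matches
```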

78 Indexing local features: Setup. When we see close points in the descriptor's feature space, we have similar descriptors, which indicates similar local content (figure: a query image and database images mapped into the feature space). K. Grauman

79 Image matching

80 Describing images w/ visual words. Summarize the entire image based on its distribution (histogram) of visual word occurrences (the number of times each word appears). Analogous to the bag of words representation commonly used for documents. (Figure: feature patches mapped to visual words.) K. Grauman

81 Bag of visual words: Two uses 1. Represent the image 2. Using that representation, look for similar images 3. Can also use BOW to compute an inverted index, to simplify application #2

82 Visual words: main idea. Extract some local features from a number of images, e.g., SIFT. In the descriptor space, each point is 128-dimensional. D. Nister, CVPR 2006

83 Visual words: main idea (continued; figure of features in descriptor space). D. Nister, CVPR 2006

84 Quantize the space by grouping (clustering) the features. Note: for now, we'll treat clustering as a black box. D. Nister, CVPR 2006

85 Inverted file index and bag-of-words similarity. 1. (offline) Extract features in the database images, cluster them to find words, and build the index. 2. Extract words in the query (extract features and map each to the closest cluster center). 3. Use the inverted file index to find frames relevant to the query. 4. For each relevant frame, rank them by comparing the word counts (BOW) of the query and the frame. Adapted from K. Grauman
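
A toy sketch of steps 1-4 with a small inverted index, assuming the visual-word cluster centers have already been learned (e.g., with k-means); all function names, the cosine scoring, and the data layout are my assumptions:

```python
import numpy as np
from collections import defaultdict

def quantize(descriptors, centers):
    """Map each descriptor to the index of its closest cluster center (visual word)."""
    d2 = ((descriptors[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return d2.argmin(axis=1)

def build_index(db_descriptors, centers):
    """Offline: bag-of-words histogram per database image plus an inverted file index."""
    bows, inverted = [], defaultdict(set)
    for img_id, desc in enumerate(db_descriptors):
        words = quantize(desc, centers)
        bows.append(np.bincount(words, minlength=len(centers)))
        for w in set(words.tolist()):
            inverted[w].add(img_id)           # word -> images containing that word
    return bows, inverted

def retrieve(query_desc, centers, bows, inverted):
    """Find candidate images via the inverted index, then rank them by BOW similarity."""
    q_words = quantize(query_desc, centers)
    q_bow = np.bincount(q_words, minlength=len(centers))
    candidates = set().union(*(inverted[w] for w in set(q_words.tolist())))
    score = lambda i: np.dot(q_bow, bows[i]) / (np.linalg.norm(q_bow) * np.linalg.norm(bows[i]) + 1e-12)
    return sorted(candidates, key=score, reverse=True)
```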

86 Scoring retrieval quality. Query; database size: 10 images; relevant (total): 5 images (e.g. images of the Golden Gate); results are ordered. precision = # returned relevant / # returned; recall = # returned relevant / # total relevant. (Figure: precision-recall curve.) Ondrej Chum
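
A small sketch computing precision and recall from an ordered result list, given the set of ground-truth relevant image ids:

```python
def precision_recall(returned, relevant):
    """precision = # returned relevant / # returned; recall = # returned relevant / # total relevant."""
    relevant = set(relevant)
    hits = sum(1 for r in returned if r in relevant)
    precision = hits / len(returned) if returned else 0.0
    recall = hits / len(relevant)
    return precision, recall

# Hypothetical example matching the slide's setup (10-image database, 5 relevant):
# returning 4 images of which 3 are relevant gives precision 3/4 = 0.75 and recall 3/5 = 0.6.
```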
