Dense Correspondence and Its Applications
1 Dense Correspondence and Its Applications NTHU 28 May 2015
2 Outline Dense Correspondence Optical flow Epipolar Geometry Stereo Correspondence
3 Why dense correspondence? Features are sparse Some applications of dense correspondence: frame-rate conversion; view morphing, view synthesis; stereo Definition: compute a vector field $(u(x, y), v(x, y))$ over the pixels of image $I_1$ so that the pixels $I_1(x, y)$ and $I_2(x + u(x, y),\, y + v(x, y))$ correspond.
4 Related topics Motion vector Optical flow estimation Registration Stereo correspondence, epipolar geometry, rectification Video matching Image morphing, view synthesis
5 Affine transformation $I_1(x, y)$ and $I_2(x', y')$
$$x' = a_{11}x + a_{12}y + b_1 \qquad y' = a_{21}x + a_{22}y + b_2 \quad (1)$$
Rigid motions (translations and rotations) Similarity transformations (rigid motions plus a uniform change in scale) Shape-changing transformations (shears)
6 Projective transformation Homography
$$x' = \frac{h_{11}x + h_{12}y + h_{13}}{h_{31}x + h_{32}y + h_{33}} \qquad y' = \frac{h_{21}x + h_{22}y + h_{23}}{h_{31}x + h_{32}y + h_{33}} \quad (2)$$
Related to perspective projection: arises when the camera undergoes pure rotation (panning, tilting, and zooming only), or when the scene is entirely planar Image registration Estimating the parameters
7 Homogeneous coordinates Representing an image location $(x, y)$ as a triple $(x, y, 1)$; any triple $(x, y, z)$ can be converted back to an image coordinate $\left(\frac{x}{z}, \frac{y}{z}\right)$
$$\begin{bmatrix} x' \\ y' \\ 1 \end{bmatrix} \cong \begin{bmatrix} h_{11} & h_{12} & h_{13} \\ h_{21} & h_{22} & h_{23} \\ h_{31} & h_{32} & h_{33} \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} \quad (3)$$
The symbol $\cong$ means that the two vectors are equivalent up to a scalar multiple
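To make the homogeneous-coordinate mechanics concrete, here is a minimal NumPy sketch; the helper name `apply_homography` and the translation-only $H$ are my own illustration, not from the slides:

```python
import numpy as np

def apply_homography(H, pts):
    """Map (n, 2) image points through a 3x3 homography via homogeneous coords."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])  # (x, y) -> (x, y, 1)
    mapped = pts_h @ H.T                              # x' ~ H x, up to scale
    return mapped[:, :2] / mapped[:, 2:3]             # (x, y, z) -> (x/z, y/z)

# A pure translation by (+5, -2) written as a homography.
H = np.array([[1.0, 0.0,  5.0],
              [0.0, 1.0, -2.0],
              [0.0, 0.0,  1.0]])
print(apply_homography(H, np.array([[10.0, 10.0]])))      # maps (10, 10) to (15, 8)
print(apply_homography(3 * H, np.array([[10.0, 10.0]])))  # same result: H is defined up to scale
```

The second call illustrates the $\cong$ relation: scaling $H$ by any nonzero factor leaves the mapped image point unchanged.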
8 Normalized direct linear transform (Hartley and Zisserman) Initial estimation of the parameters of a projective transformation given a set of feature matches Two sets of features $\{(x_1, y_1), \ldots, (x_n, y_n)\}$ and $\{(x'_1, y'_1), \ldots, (x'_n, y'_n)\}$ Normalize each set to have zero mean and average distance $\sqrt{2}$ from the origin Construct a $2n \times 9$ matrix $A$, where each feature match generates two rows of $A$:
$$A_i = \begin{bmatrix} 0 & 0 & 0 & -x_i & -y_i & -1 & y'_i x_i & y'_i y_i & y'_i \\ x_i & y_i & 1 & 0 & 0 & 0 & -x'_i x_i & -x'_i y_i & -x'_i \end{bmatrix} \quad (4)$$
so that $A_i\, [h_{11}\; h_{12}\; h_{13}\; h_{21}\; h_{22}\; h_{23}\; h_{31}\; h_{32}\; h_{33}]^T = 0$ Singular value decomposition $A = UDV^T$; let $h$ be the last column of $V$ Reshape $h$ into a $3 \times 3$ matrix $\hat{H}$ Recover the final projective transformation estimate as $H = T'^{-1} \hat{H} T$
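The steps above can be sketched directly in NumPy. This is a hedged sketch, not a reference implementation: the helper names are mine, and the sign convention for the two rows of $A_i$ follows the common Hartley-Zisserman form.

```python
import numpy as np

def normalize_points(pts):
    """Similarity T giving zero mean and average distance sqrt(2) from the origin."""
    c = pts.mean(axis=0)
    s = np.sqrt(2) / np.linalg.norm(pts - c, axis=1).mean()
    T = np.array([[s, 0.0, -s * c[0]],
                  [0.0, s, -s * c[1]],
                  [0.0, 0.0, 1.0]])
    pts_n = (np.hstack([pts, np.ones((len(pts), 1))]) @ T.T)[:, :2]
    return pts_n, T

def dlt_homography(p1, p2):
    """Normalized DLT: estimate H with p2 ~ H p1 from n >= 4 matches."""
    q1, T = normalize_points(p1)
    q2, Tp = normalize_points(p2)
    rows = []
    for (x, y), (xp, yp) in zip(q1, q2):
        rows.append([0, 0, 0, -x, -y, -1, yp * x, yp * y, yp])
        rows.append([x, y, 1, 0, 0, 0, -xp * x, -xp * y, -xp])
    A = np.array(rows)                 # the 2n x 9 system A h = 0
    _, _, Vt = np.linalg.svd(A)
    Hn = Vt[-1].reshape(3, 3)          # h = last right-singular vector
    H = np.linalg.inv(Tp) @ Hn @ T     # undo the normalization
    return H / H[2, 2]
```

With exact (noise-free) matches this recovers the homography up to numerical precision; with noisy matches it gives the usual algebraic least-squares estimate.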
9 Scattered data interpolation Find a continuous function $f(x, y)$ defined over the first image plane so that $f(x_i, y_i) = (x'_i, y'_i)$, $i = 1, \ldots, n$ Given a set of feature matches unevenly distributed in each image, the goal is to generate a dense correspondence for every point in the first image plane.
10 Thin-plate spline interpolation
$$f(x, y) = \begin{bmatrix} \sum_{i=1}^{n} w_{1i}\,\varphi(r_i) + a_{11}x + a_{12}y + b_1 \\ \sum_{i=1}^{n} w_{2i}\,\varphi(r_i) + a_{21}x + a_{22}y + b_2 \end{bmatrix} \quad (5)$$
where $\varphi(r)$ is called a radial basis function and
$$r_i = \left\| \begin{bmatrix} x \\ y \end{bmatrix} - \begin{bmatrix} x_i \\ y_i \end{bmatrix} \right\|_2 \quad (6)$$
11 Solving a linear system where
$$r_{ij} = \left\| \begin{bmatrix} x_i \\ y_i \end{bmatrix} - \begin{bmatrix} x_j \\ y_j \end{bmatrix} \right\|_2 \quad (7)$$
12 Thin-plate spline interpolation $n$ correspondences, $2(n + 3)$ parameters, including 6 global affine motion parameters and $2n$ coefficients for RBFs
13 Choice of RBF Smallest bending energy Minimizing the integral
$$\iint \left( \frac{\partial^2 f}{\partial x^2} \right)^2 + 2\left( \frac{\partial^2 f}{\partial x\, \partial y} \right)^2 + \left( \frac{\partial^2 f}{\partial y^2} \right)^2 \, dx\, dy \quad (8)$$
$$\varphi(r) = r^2 \log r \quad (9)$$
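Putting slides 10-13 together, fitting the spline is one $(n+3)\times(n+3)$ linear solve per output coordinate: the top block interpolates the matches, and the bottom rows constrain the RBF weights to be orthogonal to affine functions. A sketch under those assumptions (helper names are mine), using $\varphi(r) = r^2 \log r$:

```python
import numpy as np

def tps_phi(d):
    """phi(r) = r^2 log r, with phi(0) = 0."""
    return d**2 * np.log(np.where(d > 0, d, 1.0))

def tps_fit(src, dst):
    """Solve for n RBF weights plus 3 affine terms per output coordinate."""
    n = len(src)
    K = tps_phi(np.linalg.norm(src[:, None, :] - src[None, :, :], axis=2))
    P = np.hstack([np.ones((n, 1)), src])   # [1, x_i, y_i] per row
    A = np.zeros((n + 3, n + 3))
    A[:n, :n] = K
    A[:n, n:] = P
    A[n:, :n] = P.T                         # side conditions on the weights
    b = np.zeros((n + 3, 2))
    b[:n] = dst
    return np.linalg.solve(A, b)            # rows: n weights, then affine part

def tps_eval(src, sol, pts):
    """Evaluate the fitted spline at new (m, 2) points."""
    U = tps_phi(np.linalg.norm(pts[:, None, :] - src[None, :, :], axis=2))
    P = np.hstack([np.ones((len(pts), 1)), pts])
    return U @ sol[:len(src)] + P @ sol[len(src):]
```

By construction the fitted spline passes exactly through every feature match, which is the interpolation property slide 9 asks for.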
14 B-spline interpolation Basis spline Weights are defined not at the feature point locations but at control points on a lattice overlaid on the first image plane
$$f(x, y) = \sum_{k=0}^{3} \sum_{l=0}^{3} w_{kl}(x, y)\, \psi_{kl}(x, y) \quad (10)$$
$\psi$: cubic polynomials
15 Diffeomorphisms `Flow' the first image $I_1$ to the second image $I_2$ over the time interval $t \in [0, 1]$ The flow is represented by $(u(x, y, t), v(x, y, t))$, with a mapping $S(x, y, t)$ specifying the flowed location of each point $(x, y)$ at time $t$; $t = 0$ for $I_1$ and $t = 1$ for $I_2$:
$$\frac{\partial S(x, y, t)}{\partial t} = \begin{bmatrix} u(x, y, t) \\ v(x, y, t) \end{bmatrix}, \quad t \in [0, 1] \quad (11)$$
16 Optimization The result is a diffeomorphism that does not suffer from the grid-line self-intersection problem
17 As-rigid-as-possible deformation $f(x, y) = T_{x,y}(x, y)$, where $T_{x,y}$ is a rigid transformation defined at $(x, y)$; each $T_{x,y}$ is estimated by minimizing
$$\sum_{i=1}^{n} \frac{\left\| T_{x,y}(x_i, y_i) - (x'_i, y'_i) \right\|_2^2}{\left\| (x, y) - (x_i, y_i) \right\|_2^{2\alpha}} \quad (12)$$
18 Problems with scattered data interpolation techniques Hard to control (e.g. to keep straight lines straight) Independent of the underlying image intensities Might require significant user interaction
19 Optical flow Estimating a motion vector $(u, v)$ at every point $(x, y)$ such that $I_1(x, y)$ and $I_2(x + u, y + v)$ correspond: Horn-Schunck method Lucas-Kanade method
20 The Horn-Schunck method Brightness constancy assumption
$$I(x + u, y + v, t + 1) = I(x, y, t) \quad (13)$$
Lambertian assumption: same brightness regardless of viewing direction Ideal image formation process: no vignetting
21 The Horn-Schunck method Taylor expansion
$$I_x u + I_y v + I_t = 0 \quad (14)$$
$$\nabla I \cdot \begin{bmatrix} u \\ v \end{bmatrix} = -I_t \quad (15)$$
The component of the flow vector in the direction of the gradient is
$$-\frac{I_t}{\|\nabla I\|_2^2}\, \nabla I \quad (16)$$
22 The Horn-Schunck method Considering smoothness
$$E_{HS}(u, v) = E_{data}(u, v) + \lambda E_{smoothness}(u, v) \quad (17)$$
$\lambda$ is a regularization parameter
$$E_{data}(u, v) = \sum_{x,y} \left( I_x u + I_y v + I_t \right)^2 \quad (18)$$
$$E_{smoothness}(u, v) = \sum_{x,y} \left( \frac{\partial u}{\partial x} \right)^2 + \left( \frac{\partial u}{\partial y} \right)^2 + \left( \frac{\partial v}{\partial x} \right)^2 + \left( \frac{\partial v}{\partial y} \right)^2 \quad (19)$$
23 Euler-Lagrange
$$\lambda \nabla^2 u = I_x^2\, u + I_x I_y\, v + I_x I_t \quad (20)$$
$$\lambda \nabla^2 v = I_x I_y\, u + I_y^2\, v + I_y I_t \quad (21)$$
24 Finite differences
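With finite differences, the Laplacian at a pixel is proportional to the difference between the local average of the flow and the flow itself, which turns the Euler-Lagrange equations (20)-(21) into a Jacobi-style fixed-point iteration. A small sketch, assuming grayscale float images; the function name, derivative scheme, and parameter defaults are my own choices:

```python
import numpy as np

def horn_schunck(I1, I2, lam=1.0, n_iter=300):
    """Horn-Schunck flow via Jacobi iterations on the Euler-Lagrange equations."""
    I1, I2 = I1.astype(float), I2.astype(float)
    # Spatial derivatives averaged over both frames; forward temporal difference.
    Ix = 0.5 * (np.gradient(I1, axis=1) + np.gradient(I2, axis=1))
    Iy = 0.5 * (np.gradient(I1, axis=0) + np.gradient(I2, axis=0))
    It = I2 - I1

    def local_avg(f):  # 4-neighbour average (edge-replicated)
        p = np.pad(f, 1, mode='edge')
        return 0.25 * (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:])

    u = np.zeros_like(I1)
    v = np.zeros_like(I1)
    denom = lam + Ix**2 + Iy**2
    for _ in range(n_iter):
        ub, vb = local_avg(u), local_avg(v)
        t = (Ix * ub + Iy * vb + It) / denom  # shared data-term residual
        u = ub - Ix * t                       # smoothness pulls toward the average,
        v = vb - Iy * t                       # the data term corrects along the gradient
    return u, v
```

Each iteration mixes the smoothness term (via the local average) with the data term (via the shared residual), exactly the trade-off that equations (20)-(21) express.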
25 Strength and weakness Advantage: the diffusion equation creates good estimates of the dense correspondence field even when the local gradient is nearly zero, smoothly filling in reasonable flow vectors based on nearby locations Disadvantage: the Taylor series approximation is only valid when $(u, v)$ is close to zero, i.e. when $I_1$ and $I_2$ are already very similar Hierarchical or multiresolution approach
26 Hierarchical approach Laplacian pyramid Flow is computed for the coarsest level of the pyramid Warp the second image plane to the first using the estimated flow; the flow between the first image and the warped second image is expected to be nearly zero
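The hierarchical loop is independent of which small-displacement solver runs at each level, so it can be sketched with the flow estimator passed in as a function. Everything here is a simplified illustration of my own: a box-filter pyramid instead of a true Laplacian pyramid, nearest-neighbour warping, and image sides assumed divisible by 2^(levels-1).

```python
import numpy as np

def downsample(im):
    """2x2 box-filter downsampling (a crude stand-in for a pyramid level)."""
    h, w = (im.shape[0] // 2) * 2, (im.shape[1] // 2) * 2
    im = im[:h, :w]
    return 0.25 * (im[0::2, 0::2] + im[1::2, 0::2] + im[0::2, 1::2] + im[1::2, 1::2])

def warp(im, u, v):
    """Backward-warp im by the flow (u, v), nearest-neighbour sampling."""
    H, W = im.shape
    ys, xs = np.mgrid[0:H, 0:W]
    xw = np.clip(np.round(xs + u).astype(int), 0, W - 1)
    yw = np.clip(np.round(ys + v).astype(int), 0, H - 1)
    return im[yw, xw]

def coarse_to_fine(I1, I2, estimate_flow, levels=3):
    """Solve at the coarsest level, then upsample, warp, and refine per level."""
    pyr1, pyr2 = [I1], [I2]
    for _ in range(levels - 1):
        pyr1.append(downsample(pyr1[-1]))
        pyr2.append(downsample(pyr2[-1]))
    u = np.zeros_like(pyr1[-1])
    v = np.zeros_like(pyr1[-1])
    for lvl in range(levels - 1, -1, -1):
        a, b = pyr1[lvl], pyr2[lvl]
        if u.shape != a.shape:  # upsample the flow and double its magnitude
            u = (2.0 * np.kron(u, np.ones((2, 2))))[:a.shape[0], :a.shape[1]]
            v = (2.0 * np.kron(v, np.ones((2, 2))))[:a.shape[0], :a.shape[1]]
        du, dv = estimate_flow(a, warp(b, u, v))  # residual flow is small here
        u, v = u + du, v + dv
    return u, v
```

The key invariant is the one the slide states: after warping with the current estimate, the residual flow passed to `estimate_flow` is nearly zero, so the Taylor approximation holds at every level.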
27 Improvements Applying a median filter to the incremental flow field during the warping process D. Sun, S. Roth, and M. Black. Secrets of optical flow estimation and their principles. CVPR 2010.
28 The Lucas-Kanade method The optical flow vector at a point $(x_0, y_0)$ is defined as the minimizer $(u, v)$ of
$$E_{LK}(u, v) = \sum_{(x,y)} w(x, y)\, \left( I(x + u, y + v, t + 1) - I(x, y, t) \right)^2 \quad (22)$$
where $w(x, y)$ is a window function centered at $(x_0, y_0)$.
29 Solution of the linear system
$$H = \begin{bmatrix} \sum_{(x,y)} w(x, y)\, I_x^2 & \sum_{(x,y)} w(x, y)\, I_x I_y \\ \sum_{(x,y)} w(x, y)\, I_x I_y & \sum_{(x,y)} w(x, y)\, I_y^2 \end{bmatrix} \quad (23)$$
$$H \begin{bmatrix} u \\ v \end{bmatrix} = -\begin{bmatrix} \sum_{(x,y)} w(x, y)\, I_x I_t \\ \sum_{(x,y)} w(x, y)\, I_y I_t \end{bmatrix} \quad (24)$$
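For a single window the system above is just a 2x2 solve. A sketch with uniform window weights $w(x, y) = 1$; the function name and defaults are mine:

```python
import numpy as np

def lucas_kanade(I1, I2, x0, y0, win=9):
    """Estimate (u, v) at (x0, y0) from a (2*win+1)^2 window around it."""
    I1, I2 = I1.astype(float), I2.astype(float)
    Ix = 0.5 * (np.gradient(I1, axis=1) + np.gradient(I2, axis=1))
    Iy = 0.5 * (np.gradient(I1, axis=0) + np.gradient(I2, axis=0))
    It = I2 - I1
    sl = (slice(y0 - win, y0 + win + 1), slice(x0 - win, x0 + win + 1))
    ix, iy, it = Ix[sl].ravel(), Iy[sl].ravel(), It[sl].ravel()
    H = np.array([[ix @ ix, ix @ iy],
                  [ix @ iy, iy @ iy]])   # the 2x2 matrix of eq. (23)
    rhs = -np.array([ix @ it, iy @ it])  # right-hand side of eq. (24)
    return np.linalg.solve(H, rhs)       # (u, v)
```

When $H$ is (nearly) singular the window sits on an edge or in a flat region, and the solve fails or is unstable: this is exactly the aperture problem.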
30 Comparisons The Lucas-Kanade algorithm is local: the flow vector can be computed at each pixel independently, easier to solve The Horn-Schunck algorithm is global: all the flow vectors depend on each other through the differential equation Both assume that the flow vectors are small: pyramidal implementation
31 Refinements and extensions Changing the data term Changing the smoothness term Changing the form of cost functions
32 Changing the data term Horn-Schunck: brightness constancy assumption Uras et al.: gradient constancy assumption
$$\nabla I(x + u, y + v, t + 1) = \nabla I(x, y, t) \quad (25)$$
Uras et al. A computational approach to motion perception. Biological Cybernetics, 60(2):79-87, Dec. 1988. Robust to some local illumination changes
33 Changing the data term Brox et al. assume both brightness and gradient constancy
$$E_{data}(u, v) = \sum_{x,y} \left( I_2(x+u, y+v) - I_1(x, y) \right)^2 + \gamma \left\| \nabla I_2(x+u, y+v) - \nabla I_1(x, y) \right\|_2^2 \quad (26)$$
Brox et al. High accuracy optical flow estimation based on a theory for warping. ECCV 2004. $\gamma$ weights the contribution of the terms (typically $\gamma \approx 100$) Directly expresses the deviation from the constancy assumptions, instead of using the Taylor approximation that is only valid for small $u$ and $v$ Xu et al. introduce a binary switch variable choosing either the brightness constancy or the gradient constancy assumption
34 Changing the data term LK + HS
$$E_{data}(u, v) = \sum_{(x,y)} w(x, y)\left( I_2(x + u, y + v) - I_1(x, y) \right)^2 \quad (27)$$
A. Bruhn, J. Weickert, and C. Schnörr. Lucas/Kanade meets Horn/Schunck: Combining local and global optic flow methods. IJCV, 61(3), Feb. 2005. Small window size; local spatial smoothing, which makes the data term robust to noise; global regularization, which makes the flow field smooth and dense Probabilistic data term (Sun et al.): learned distribution of $I_2(x + u, y + v) - I_1(x, y)$, a mixture of Gaussians
35 Changing the smoothness term The smoothness term should be downweighted perpendicular to image edges, which correspond to depth discontinuities Nagel and Enkelmann. An investigation of smoothness constraints for the estimation of displacement vector fields from image sequences. IEEE Transactions on Pattern Analysis and Machine Intelligence, PAMI-8(5), Sept. 1986.
$$E_{smoothness}(u, v) = \operatorname{trace}\left( \begin{bmatrix} u_x & v_x \\ u_y & v_y \end{bmatrix}^T D(\nabla I_1(x, y)) \begin{bmatrix} u_x & v_x \\ u_y & v_y \end{bmatrix} \right) \quad (28)$$
$D : \mathbb{R}^2 \to \mathbb{R}^{2 \times 2}$ is the anisotropic diffusion tensor
$$D(g) = \frac{1}{\|g\|_2^2 + 2\beta^2} \left( g_\perp g_\perp^T + \beta^2 I \right) \quad (29)$$
36 Anisotropic diffusion and eigenvalues The major and minor axes of each ellipse are aligned with the tensor's eigenvectors and weighted by the corresponding eigenvalues
37 Changing the smoothness term As a prior term on the optical flow field that reflects the assumptions about `good' flows S. Roth and M. Black. On the spatial statistics of optical flow. IJCV, 74(1):33-50, Aug. 2007. Fields-of-Experts model
38 Changing the cost functions Horn and Schunck: quadratic functions Robust estimation M. J. Black and P. Anandan. The robust estimation of multiple motions: Parametric and piecewise-smooth flow fields. CVIU, 63(1), Jan. 1996. Robust data term
$$E_{data}(u, v) = \sum_{x,y} \rho\left( I_x u + I_y v + I_t \right) \quad (30)$$
Robust smoothness term
$$E_{smoothness}(u, v) = \sum_{x,y} \rho\left( \left\| \left[ u_x,\, u_y,\, v_x,\, v_y \right] \right\|_2 \right) \quad (31)$$
$\rho(z) = z^2$: Horn-Schunck; other choices
39 Robust penalty functions typically used for optical flow
40 Occlusion Implicit assumption: every pixel in $I_2$ corresponds to some pixel in $I_1$. This assumption is violated at occlusions. Cross checking, or left-right checking: the forward flow should be the opposite of the backward flow evaluated at the corresponding point in $I_2$,
$$(u, v)_{fwd}(x, y) = -(u, v)_{bwd}\left( x + u_{fwd}(x, y),\; y + v_{fwd}(x, y) \right) \quad (32)$$
Robust cost functions can partially alleviate the occlusion problem
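A sketch of the left-right check: compose the forward flow with the backward flow at the target location and flag pixels where the round trip does not cancel. The function name, tolerance, and nearest-neighbour lookup are my own simplifications:

```python
import numpy as np

def occlusion_mask(u_fwd, v_fwd, u_bwd, v_bwd, tol=0.5):
    """True where forward and backward flows are inconsistent (likely occluded)."""
    H, W = u_fwd.shape
    ys, xs = np.mgrid[0:H, 0:W]
    # Target pixel of the forward flow, rounded to the nearest grid location.
    xt = np.clip(np.round(xs + u_fwd).astype(int), 0, W - 1)
    yt = np.clip(np.round(ys + v_fwd).astype(int), 0, H - 1)
    # Eq. (32): forward + backward-at-target should cancel where flows agree.
    du = u_fwd + u_bwd[yt, xt]
    dv = v_fwd + v_bwd[yt, xt]
    return np.hypot(du, dv) > tol
```

Pixels flagged by the mask are typically excluded from the data term or filled in from their neighbours.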
41 Layered Flow Layered motion for video matting Post-converting a monocular film into stereo M. J. Black and P. Anandan. The robust estimation of multiple motions: Parametric and piecewise-smooth flow fields. CVIU, 63(1), Jan. 1996. (using robust penalty functions to estimate a dominant motion in the scene, classify inlier pixels into a solved layer, and re-apply the process to outlier pixels) M. Black and A. Jepson. Estimating optical flow in segmented images using variable-order parametric models with local deformations. PAMI, 18(10), Oct. 1996. (the flow field in each region can be represented by a low-dimensional parametric transformation (e.g. affine), coarse to fine) Y. Weiss. Smoothness in layers: Motion segmentation using nonparametric mixture estimation. CVPR 1997. (the non-parametric flow field is smooth, using EM to estimate layer memberships and estimate the flow field in each layer)
42 Large-displacement optical flow T. Brox, C. Bregler, and J. Malik. Large displacement optical flow. CVPR 2009. Small structure, fast motion, e.g. hand waving; starts with the segmentation of each image into roughly constant-texture patches, each of which is described by a SIFT-like descriptor; matching descriptors serve as additional constraints C. Liu, J. Yuen, and A. Torralba. SIFT flow: Dense correspondence across scenes and its applications. PAMI, 33(5), May 2011. Replaces the brightness constancy assumption with a SIFT-descriptor constancy assumption, $S_2(x + u, y + v) = S_1(x, y)$ (dense SIFT); designed to match between different scenes
43 Human-assisted motion annotation C. Liu, W. Freeman, E. Adelson, and Y. Weiss. Human-assisted motion annotation. CVPR 2008. Semi-automatically created layers
44 Evaluation
45 Epipolar Geometry Two images taken at the same time, in different positions, by different cameras We don't need to search across the entire image to estimate the motion vector $(u, v)$: there is only one degree of freedom for the possible correspondence
46 Epipolar line and epipole All of the epipolar lines in one image intersect at the epipole, the projection of the camera center of the other image Every point in the scene has to lie on one of the epipolar planes
47 The fundamental matrix The epipolar line in one image corresponding to a point in the other image can be computed from the fundamental matrix, a $3 \times 3$ matrix $F$ that concisely expresses the relationship between any two matching points Any correspondence $\{(x, y) \in I_1,\; (x', y') \in I_2\}$ must satisfy
$$\begin{bmatrix} x' & y' & 1 \end{bmatrix} F \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = 0 \quad (33)$$
The fundamental matrix $F$ is defined up to scale $F$ has rank 2 What are the coefficients on $x$, $y$, and $1$?
48 Geometric argument to see why F has rank 2 All the epipolar lines in $I_1$ intersect at the epipole $e = (x_e, y_e)$ For any $(x', y') \in I_2$, $e$ lies on the corresponding epipolar line:
$$\begin{bmatrix} x' & y' & 1 \end{bmatrix} F \begin{bmatrix} x_e \\ y_e \\ 1 \end{bmatrix} = 0 \quad (34)$$
Since this holds for every $(x', y')$, it means that
$$F \begin{bmatrix} x_e \\ y_e \\ 1 \end{bmatrix} = 0 \quad (35)$$
so $[x_e, y_e, 1]^T$ is an eigenvector of $F$ with eigenvalue 0. Similarly, $[x'_e, y'_e, 1]^T$ is an eigenvector of $F^T$ with eigenvalue 0.
49 Representing F as a factorization Based on the epipole in the second image,
$$F = [e']_\times M \quad (36)$$
where $e' = [x'_e, y'_e, 1]^T$ and $M$ is a full-rank $3 \times 3$ matrix. The notation $[e]_\times$:
$$[e]_\times = \begin{bmatrix} 0 & -e_3 & e_2 \\ e_3 & 0 & -e_1 \\ -e_2 & e_1 & 0 \end{bmatrix} \quad (37)$$
Rank? The fundamental matrix is not defined for an image pair that shares the same camera center: a projective transformation directly specifies each pair of corresponding points; the same type of relationship holds when the scene contains only a single plane The fundamental matrix for an image pair taken by a pair of separated cameras of a real-world scene is defined and unique (up to scale)
50 Estimating the fundamental matrix Like a projective transformation, the fundamental matrix can be estimated from a set of feature matches:
$$\left[ x'_i x_i,\; x'_i y_i,\; x'_i,\; y'_i x_i,\; y'_i y_i,\; y'_i,\; x_i,\; y_i,\; 1 \right] \left[ f_{11}\; f_{12}\; f_{13}\; f_{21}\; f_{22}\; f_{23}\; f_{31}\; f_{32}\; f_{33} \right]^T = 0 \quad (38)$$
Collecting the linear equations for each point yields an $n \times 9$ linear system $Af = 0$ Basic algorithm: normalized eight-point algorithm
51 Normalized eight-point algorithm 1. The input is two sets of features $\{(x_1, y_1), \ldots, (x_n, y_n)\}$ and $\{(x'_1, y'_1), \ldots, (x'_n, y'_n)\}$. Normalize each set of feature matches to have zero mean and average distance $\sqrt{2}$ from the origin, done by a pair of similarity transformations $S$ and $S'$. 2. Construct the $n \times 9$ matrix $A$, where each feature match generates a row. 3. Compute the singular value decomposition of $A$, $A = UDV^T$. Let $f$ be the last column of $V$ (corresponding to the smallest singular value). 4. Reshape $f$ into a $3 \times 3$ matrix $\hat{F}$, filling in the elements from left to right in each row. 5. Compute the singular value decomposition of $\hat{F}$, $\hat{F} = UDV^T$. Set the lower-right $(3, 3)$ entry of $D$ to zero to create $\hat{D}$ and replace $\hat{F}$ with $U\hat{D}V^T$. 6. Recover the final fundamental matrix estimate as $F = S'^T \hat{F} S$.
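The six steps translate almost line-for-line into NumPy. A sketch under the stated algorithm; the helper names are mine, and the row layout of A follows eq. (38):

```python
import numpy as np

def eight_point(p1, p2):
    """Normalized eight-point estimate of F from n >= 8 matches, (n, 2) each."""
    def normalize(p):  # step 1: zero mean, average distance sqrt(2)
        c = p.mean(axis=0)
        s = np.sqrt(2) / np.linalg.norm(p - c, axis=1).mean()
        T = np.array([[s, 0.0, -s * c[0]], [0.0, s, -s * c[1]], [0.0, 0.0, 1.0]])
        return (np.hstack([p, np.ones((len(p), 1))]) @ T.T)[:, :2], T

    q1, S = normalize(p1)
    q2, Sp = normalize(p2)
    x, y = q1[:, 0], q1[:, 1]
    xp, yp = q2[:, 0], q2[:, 1]
    A = np.stack([xp * x, xp * y, xp, yp * x, yp * y, yp, x, y, np.ones_like(x)],
                 axis=1)                      # step 2: one row per match, eq. (38)
    _, _, Vt = np.linalg.svd(A)               # step 3
    F = Vt[-1].reshape(3, 3)                  # step 4
    U, D, Vt2 = np.linalg.svd(F)              # step 5: enforce rank 2
    F = U @ np.diag([D[0], D[1], 0.0]) @ Vt2
    F = Sp.T @ F @ S                          # step 6: undo the normalization
    return F / np.linalg.norm(F)              # fix the arbitrary scale
```

With exact synthetic correspondences the epipolar residuals $x'^T F x$ vanish to numerical precision; with real, noisy matches this estimate is usually the starting point for RANSAC and non-linear refinement.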
52 Example
53 Extensions Maximum likelihood estimate under the assumption that the measurement errors in each feature location are Gaussian Non-linear optimization and RANSAC When each camera's location, orientation, and internal configuration are known, the fundamental matrix can be computed directly
54 Image rectification Epipolar lines are typically slanted, which can make estimating correspondences along conjugate epipolar lines complicated due to repeated image resampling operations Rectification: making conjugate epipolar lines coincide with scanlines Epipolar lines are parallel and the epipoles are `at infinity', represented by the homogeneous coordinate $[1, 0, 0]$ The fundamental matrix for a rectified image pair is given (up to scale) by
$$F = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & -1 \\ 0 & 1 & 0 \end{bmatrix} \quad (39)$$
For a correspondence in the rectified pair of images, $y = y'$, corresponding to the definition of rectification.
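With this F the epipolar constraint reduces to $y - y' = 0$, which is easy to verify numerically (the residual values in the comments assume the sample points shown):

```python
import numpy as np

# Fundamental matrix of a rectified pair (up to scale).
F = np.array([[0.0, 0.0,  0.0],
              [0.0, 0.0, -1.0],
              [0.0, 1.0,  0.0]])

def epipolar_residual(F, p1, p2):
    """x'^T F x for homogeneous points built from (x, y) pixel coordinates."""
    x1 = np.append(np.asarray(p1, float), 1.0)
    x2 = np.append(np.asarray(p2, float), 1.0)
    return x2 @ F @ x1

print(epipolar_residual(F, (10, 5), (37, 5)))  # same scanline: 0.0
print(epipolar_residual(F, (10, 5), (37, 8)))  # different scanlines: y - y' = -3.0
```

Note that the x-coordinates are unconstrained: that remaining freedom is exactly the disparity to be estimated by stereo correspondence.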
55 Rectification method proposed by Hartley The idea is to estimate a projective transformation $H_2$ for the second image that moves the epipole to the homogeneous coordinate $[1, 0, 0]$ while resembling a rigid transformation as much as possible 1. Estimate the fundamental matrix $F$ from the feature matches using the eight-point algorithm. 2. Factor $F$ in the form $F = [e']_\times M$, where $e' = [x'_e, y'_e, 1]^T$ is the homogeneous coordinate of the epipole in the second image. 3. Choose a location $(x_0, y_0)$ in the second image, e.g., the center of the image, and determine a $3 \times 3$ homogeneous translation matrix $T$ that moves $(x_0, y_0)$ to the origin. 4. Determine a rotation matrix $R$ that moves the epipole onto the $x$-axis; let its new location be $(\hat{x}, 0)$. 5. Compute $H_2$ as
$$H_2 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ -1/\hat{x} & 0 & 1 \end{bmatrix} R\, T \quad (40)$$
56 Rectification method proposed by Hartley (cont.) 5. (cont.) The first matrix moves the epipole to infinity along the $x$-axis. 6. Apply the projective transformation $H_2 M$ to the features in $I_1$ and the projective transformation $H_2$ to the features in $I_2$ to get a transformed set of feature matches. 7. At this point the two images are rectified, but applying $H_2 M$ to $I_1$ may result in an unacceptably distorted image. The next step is to find a horizontal shear and translation that bring the feature matches as close together as possible:
$$\sum_{i=1}^{n} \left( a\hat{x}_i + b\hat{y}_i + c - \hat{x}'_i \right)^2 \quad (41)$$
8. Compute $H_1$ as
$$H_1 = \begin{bmatrix} a & b & c \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} H_2 M \quad (42)$$
57 Example
58 Stereo correspondence Estimating the disparity map Quantized disparity values Occlusions are explicitly modeled Monotonicity assumption Calibration is assumed known Images are acquired simultaneously
More informationCorrespondence and Stereopsis. Original notes by W. Correa. Figures from [Forsyth & Ponce] and [Trucco & Verri]
Correspondence and Stereopsis Original notes by W. Correa. Figures from [Forsyth & Ponce] and [Trucco & Verri] Introduction Disparity: Informally: difference between two pictures Allows us to gain a strong
More informationRuch (Motion) Rozpoznawanie Obrazów Krzysztof Krawiec Instytut Informatyki, Politechnika Poznańska. Krzysztof Krawiec IDSS
Ruch (Motion) Rozpoznawanie Obrazów Krzysztof Krawiec Instytut Informatyki, Politechnika Poznańska 1 Krzysztof Krawiec IDSS 2 The importance of visual motion Adds entirely new (temporal) dimension to visual
More informationLecture 16: Computer Vision
CS4442/9542b: Artificial Intelligence II Prof. Olga Veksler Lecture 16: Computer Vision Motion Slides are from Steve Seitz (UW), David Jacobs (UMD) Outline Motion Estimation Motion Field Optical Flow Field
More information3D Geometry and Camera Calibration
3D Geometry and Camera Calibration 3D Coordinate Systems Right-handed vs. left-handed x x y z z y 2D Coordinate Systems 3D Geometry Basics y axis up vs. y axis down Origin at center vs. corner Will often
More informationOptic Flow and Basics Towards Horn-Schunck 1
Optic Flow and Basics Towards Horn-Schunck 1 Lecture 7 See Section 4.1 and Beginning of 4.2 in Reinhard Klette: Concise Computer Vision Springer-Verlag, London, 2014 1 See last slide for copyright information.
More informationOverview. Video. Overview 4/7/2008. Optical flow. Why estimate motion? Motion estimation: Optical flow. Motion Magnification Colorization.
Overview Video Optical flow Motion Magnification Colorization Lecture 9 Optical flow Motion Magnification Colorization Overview Optical flow Combination of slides from Rick Szeliski, Steve Seitz, Alyosha
More informationEpipolar Geometry CSE P576. Dr. Matthew Brown
Epipolar Geometry CSE P576 Dr. Matthew Brown Epipolar Geometry Epipolar Lines, Plane Constraint Fundamental Matrix, Linear solution + RANSAC Applications: Structure from Motion, Stereo [ Szeliski 11] 2
More informationStereo Wrap + Motion. Computer Vision I. CSE252A Lecture 17
Stereo Wrap + Motion CSE252A Lecture 17 Some Issues Ambiguity Window size Window shape Lighting Half occluded regions Problem of Occlusion Stereo Constraints CONSTRAINT BRIEF DESCRIPTION 1-D Epipolar Search
More informationStereo CSE 576. Ali Farhadi. Several slides from Larry Zitnick and Steve Seitz
Stereo CSE 576 Ali Farhadi Several slides from Larry Zitnick and Steve Seitz Why do we perceive depth? What do humans use as depth cues? Motion Convergence When watching an object close to us, our eyes
More informationMotion Analysis. Motion analysis. Now we will talk about. Differential Motion Analysis. Motion analysis. Difference Pictures
Now we will talk about Motion Analysis Motion analysis Motion analysis is dealing with three main groups of motionrelated problems: Motion detection Moving object detection and location. Derivation of
More informationStereo. 11/02/2012 CS129, Brown James Hays. Slides by Kristen Grauman
Stereo 11/02/2012 CS129, Brown James Hays Slides by Kristen Grauman Multiple views Multi-view geometry, matching, invariant features, stereo vision Lowe Hartley and Zisserman Why multiple views? Structure
More informationMosaics. Today s Readings
Mosaics VR Seattle: http://www.vrseattle.com/ Full screen panoramas (cubic): http://www.panoramas.dk/ Mars: http://www.panoramas.dk/fullscreen3/f2_mars97.html Today s Readings Szeliski and Shum paper (sections
More informationCS 4495 Computer Vision A. Bobick. Motion and Optic Flow. Stereo Matching
Stereo Matching Fundamental matrix Let p be a point in left image, p in right image l l Epipolar relation p maps to epipolar line l p maps to epipolar line l p p Epipolar mapping described by a 3x3 matrix
More informationOptical Flow-Based Motion Estimation. Thanks to Steve Seitz, Simon Baker, Takeo Kanade, and anyone else who helped develop these slides.
Optical Flow-Based Motion Estimation Thanks to Steve Seitz, Simon Baker, Takeo Kanade, and anyone else who helped develop these slides. 1 Why estimate motion? We live in a 4-D world Wide applications Object
More informationLecture 19: Motion. Effect of window size 11/20/2007. Sources of error in correspondences. Review Problem set 3. Tuesday, Nov 20
Lecture 19: Motion Review Problem set 3 Dense stereo matching Sparse stereo matching Indexing scenes Tuesda, Nov 0 Effect of window size W = 3 W = 0 Want window large enough to have sufficient intensit
More informationAugmented Reality VU. Computer Vision 3D Registration (2) Prof. Vincent Lepetit
Augmented Reality VU Computer Vision 3D Registration (2) Prof. Vincent Lepetit Feature Point-Based 3D Tracking Feature Points for 3D Tracking Much less ambiguous than edges; Point-to-point reprojection
More informationLecture 14: Basic Multi-View Geometry
Lecture 14: Basic Multi-View Geometry Stereo If I needed to find out how far point is away from me, I could use triangulation and two views scene point image plane optical center (Graphic from Khurram
More informationToday. Stereo (two view) reconstruction. Multiview geometry. Today. Multiview geometry. Computational Photography
Computational Photography Matthias Zwicker University of Bern Fall 2009 Today From 2D to 3D using multiple views Introduction Geometry of two views Stereo matching Other applications Multiview geometry
More informationLucas-Kanade Motion Estimation. Thanks to Steve Seitz, Simon Baker, Takeo Kanade, and anyone else who helped develop these slides.
Lucas-Kanade Motion Estimation Thanks to Steve Seitz, Simon Baker, Takeo Kanade, and anyone else who helped develop these slides. 1 Why estimate motion? We live in a 4-D world Wide applications Object
More informationLecture 16: Computer Vision
CS442/542b: Artificial ntelligence Prof. Olga Veksler Lecture 16: Computer Vision Motion Slides are from Steve Seitz (UW), David Jacobs (UMD) Outline Motion Estimation Motion Field Optical Flow Field Methods
More informationThe 2D/3D Differential Optical Flow
The 2D/3D Differential Optical Flow Prof. John Barron Dept. of Computer Science University of Western Ontario London, Ontario, Canada, N6A 5B7 Email: barron@csd.uwo.ca Phone: 519-661-2111 x86896 Canadian
More informationImage Stitching. Slides from Rick Szeliski, Steve Seitz, Derek Hoiem, Ira Kemelmacher, Ali Farhadi
Image Stitching Slides from Rick Szeliski, Steve Seitz, Derek Hoiem, Ira Kemelmacher, Ali Farhadi Combine two or more overlapping images to make one larger image Add example Slide credit: Vaibhav Vaish
More informationMotion Tracking and Event Understanding in Video Sequences
Motion Tracking and Event Understanding in Video Sequences Isaac Cohen Elaine Kang, Jinman Kang Institute for Robotics and Intelligent Systems University of Southern California Los Angeles, CA Objectives!
More informationMidterm Examination CS 534: Computational Photography
Midterm Examination CS 534: Computational Photography November 3, 2016 NAME: Problem Score Max Score 1 6 2 8 3 9 4 12 5 4 6 13 7 7 8 6 9 9 10 6 11 14 12 6 Total 100 1 of 8 1. [6] (a) [3] What camera setting(s)
More informationMultiple View Geometry in Computer Vision
Multiple View Geometry in Computer Vision Prasanna Sahoo Department of Mathematics University of Louisville 1 Structure Computation Lecture 18 March 22, 2005 2 3D Reconstruction The goal of 3D reconstruction
More informationIssues with Curve Detection Grouping (e.g., the Canny hysteresis thresholding procedure) Model tting They can be performed sequentially or simultaneou
an edge image, nd line or curve segments present Given the image. in Line and Curves Detection 1 Issues with Curve Detection Grouping (e.g., the Canny hysteresis thresholding procedure) Model tting They
More informationCapturing, Modeling, Rendering 3D Structures
Computer Vision Approach Capturing, Modeling, Rendering 3D Structures Calculate pixel correspondences and extract geometry Not robust Difficult to acquire illumination effects, e.g. specular highlights
More informationSIFT: SCALE INVARIANT FEATURE TRANSFORM SURF: SPEEDED UP ROBUST FEATURES BASHAR ALSADIK EOS DEPT. TOPMAP M13 3D GEOINFORMATION FROM IMAGES 2014
SIFT: SCALE INVARIANT FEATURE TRANSFORM SURF: SPEEDED UP ROBUST FEATURES BASHAR ALSADIK EOS DEPT. TOPMAP M13 3D GEOINFORMATION FROM IMAGES 2014 SIFT SIFT: Scale Invariant Feature Transform; transform image
More informationOptical flow and tracking
EECS 442 Computer vision Optical flow and tracking Intro Optical flow and feature tracking Lucas-Kanade algorithm Motion segmentation Segments of this lectures are courtesy of Profs S. Lazebnik S. Seitz,
More informationLearning Two-View Stereo Matching
Learning Two-View Stereo Matching Jianxiong Xiao Jingni Chen Dit-Yan Yeung Long Quan Department of Computer Science and Engineering The Hong Kong University of Science and Technology The 10th European
More informationWeek 2: Two-View Geometry. Padua Summer 08 Frank Dellaert
Week 2: Two-View Geometry Padua Summer 08 Frank Dellaert Mosaicking Outline 2D Transformation Hierarchy RANSAC Triangulation of 3D Points Cameras Triangulation via SVD Automatic Correspondence Essential
More informationLaser sensors. Transmitter. Receiver. Basilio Bona ROBOTICA 03CFIOR
Mobile & Service Robotics Sensors for Robotics 3 Laser sensors Rays are transmitted and received coaxially The target is illuminated by collimated rays The receiver measures the time of flight (back and
More informationLast lecture. Passive Stereo Spacetime Stereo
Last lecture Passive Stereo Spacetime Stereo Today Structure from Motion: Given pixel correspondences, how to compute 3D structure and camera motion? Slides stolen from Prof Yungyu Chuang Epipolar geometry
More informationVision par ordinateur
Epipolar geometry π Vision par ordinateur Underlying structure in set of matches for rigid scenes l T 1 l 2 C1 m1 l1 e1 M L2 L1 e2 Géométrie épipolaire Fundamental matrix (x rank 2 matrix) m2 C2 l2 Frédéric
More informationStereo. Outline. Multiple views 3/29/2017. Thurs Mar 30 Kristen Grauman UT Austin. Multi-view geometry, matching, invariant features, stereo vision
Stereo Thurs Mar 30 Kristen Grauman UT Austin Outline Last time: Human stereopsis Epipolar geometry and the epipolar constraint Case example with parallel optical axes General case with calibrated cameras
More informationApplication questions. Theoretical questions
The oral exam will last 30 minutes and will consist of one application question followed by two theoretical questions. Please find below a non exhaustive list of possible application questions. The list
More informationRobust Model-Free Tracking of Non-Rigid Shape. Abstract
Robust Model-Free Tracking of Non-Rigid Shape Lorenzo Torresani Stanford University ltorresa@cs.stanford.edu Christoph Bregler New York University chris.bregler@nyu.edu New York University CS TR2003-840
More informationComputer Vision Lecture 20
Computer Vision Lecture 2 Motion and Optical Flow Bastian Leibe RWTH Aachen http://www.vision.rwth-aachen.de leibe@vision.rwth-aachen.de 28.1.216 Man slides adapted from K. Grauman, S. Seitz, R. Szeliski,
More information3D Motion Analysis Based on 2D Point Displacements
3D Motion Analysis Based on 2D Point Displacements 2D displacements of points observed on an unknown moving rigid body may provide information about - the 3D structure of the points - the 3D motion parameters
More information