Structure from Motion


Outline: Bundle Adjustment; Ambiguities in Reconstruction; Affine Factorization; Extensions

Structure from motion: recover both 3D scene geometry and camera positions.

SLAM: Simultaneous Localization and Mapping

Recall: 2-view stereo

An annoying detail: what to do when rays don't intersect?

Minimize reprojection error:

min_X f(X) = ||x_1 - Proj(X, M_1)||^2 + ||x_2 - Proj(X, M_2)||^2

(perspective projection equations), where M_i is the 3x4 matrix of extrinsics (R, t) and intrinsics (f) of camera i. For this equation, it is easier to parameterize camera positions in absolute coordinates instead of relative ones.

Generalize triangulation to multiple points seen from multiple cameras:

min_{X_1, X_2, ...} Σ_{i=1..m} Σ_{j=1..n} ||x_ij - Proj(X_j, M_i)||^2

As written, could this minimization be done independently for each 3D point?
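Before any nonlinear minimization, a single point seen by two known cameras can be triangulated linearly. A minimal numpy sketch of the standard DLT approach (function and variable names here are illustrative, not from the lecture):

```python
import numpy as np

def triangulate_dlt(x1, x2, M1, M2):
    """Linear (DLT) triangulation: each observation x ~ M X yields two
    homogeneous linear equations in X; stack them and take the null
    space of the resulting 4x4 system via SVD."""
    A = np.vstack([
        x1[0] * M1[2] - M1[0],
        x1[1] * M1[2] - M1[1],
        x2[0] * M2[2] - M2[0],
        x2[1] * M2[2] - M2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]          # dehomogenize

# Tiny synthetic check: two cameras observing a known point
M1 = np.hstack([np.eye(3), np.zeros((3, 1))])            # M1 = [I | 0]
M2 = np.hstack([np.eye(3), np.array([[-1.0, 0, 0]]).T])  # translated camera
X_true = np.array([0.5, -0.2, 4.0])
proj = lambda M, X: (M @ np.append(X, 1))[:2] / (M @ np.append(X, 1))[2]
X_hat = triangulate_dlt(proj(M1, X_true), proj(M2, X_true), M1, M2)
```

With noisy observations, the rays do not intersect and this linear solution is typically refined by minimizing the reprojection error above.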

Bundle adjustment: minimize reprojection error over multiple 3D points and cameras:

min_{X_1, X_2, ..., M_1, M_2, ...} Σ_{i=1..m} Σ_{j=1..n} ||x_ij - Proj(X_j, M_i)||^2
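This objective can be prototyped directly with a generic nonlinear least-squares solver. A toy sketch, not the lecture's code: rotations and intrinsics are held fixed (R = I, f = 1) so only camera translations and 3D points are refined, and all names are illustrative:

```python
import numpy as np
from scipy.optimize import least_squares

def make_residuals(obs, n_cams, n_pts):
    """Reprojection residuals x_ij - Proj(X_j, M_i) for a simplified
    bundle adjustment (full BA would also parameterize rotations,
    e.g. with axis-angle, and intrinsics)."""
    def residuals(params):
        t = params[:3 * n_cams].reshape(n_cams, 3)   # camera translations
        X = params[3 * n_cams:].reshape(n_pts, 3)    # 3D points
        res = []
        for i, j, x_obs in obs:
            Xc = X[j] + t[i]                         # point in camera i's frame
            res.extend(Xc[:2] / Xc[2] - x_obs)       # perspective projection error
        return np.array(res)
    return residuals

# Synthetic scene: 2 cameras, 5 points, exact observations
rng = np.random.default_rng(0)
X_true = rng.normal(size=(5, 3)) + np.array([0.0, 0.0, 6.0])
t_true = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
obs = [(i, j, (X_true[j] + t_true[i])[:2] / (X_true[j] + t_true[i])[2])
       for i in range(2) for j in range(5)]
x0 = np.concatenate([t_true.ravel(), X_true.ravel()]) + 0.05  # perturbed start
fit = least_squares(make_residuals(obs, 2, 5), x0)
```

Note that the solver converges to *a* zero-residual solution, not a unique one: the reconstruction ambiguities discussed below mean many (cameras, points) configurations explain the same images.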

Basic SFM pipeline: 1. Find candidate correspondences (interest points + descriptor matches). 2. Select the subset consistent with epipolar constraints on pairs of images (RANSAC + fundamental matrix). 3. Solve for the 3D points and cameras that minimize reprojection error (lots of variants; e.g., iteratively build a map of 3D points and cameras as new images arrive).
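Step 2 can be illustrated with the eight-point algorithm for the fundamental matrix. This is a bare sketch on exact synthetic data (unnormalized; a real pipeline would apply Hartley normalization and wrap the solver in RANSAC to reject outlier matches):

```python
import numpy as np

def eight_point(x1, x2):
    """Fundamental matrix from >= 8 correspondences: each pair gives one
    linear equation x2^T F x1 = 0 in the entries of F; solve by SVD and
    enforce the rank-2 constraint."""
    A = np.array([[u2*u1, u2*v1, u2, v2*u1, v2*v1, v2, u1, v1, 1.0]
                  for (u1, v1), (u2, v2) in zip(x1, x2)])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    U, s, Vt = np.linalg.svd(F)            # project to rank 2
    return U @ np.diag([s[0], s[1], 0.0]) @ Vt

# Synthetic check: project random points into two views, recover F
rng = np.random.default_rng(1)
X = rng.normal(size=(12, 3)) + np.array([0.0, 0.0, 5.0])
th = 0.1
R = np.array([[np.cos(th), 0, np.sin(th)],
              [0, 1, 0],
              [-np.sin(th), 0, np.cos(th)]])
t = np.array([1.0, 0.2, 0.1])
x1 = X[:, :2] / X[:, 2:]
x2 = (X @ R.T + t)[:, :2] / (X @ R.T + t)[:, 2:]
F = eight_point(x1, x2)
```

The recovered F should satisfy the epipolar constraint x2^T F x1 = 0 for every inlier correspondence.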

min_{X_1, ..., X_n, M_1, ..., M_m} Σ_{i=1..m} Σ_{j=1..n} ||x_ij - Proj(X_j, M_i)||^2   (unknowns: cameras M_1, ..., M_m and points X_1, ..., X_n)

Bundle adjustment is a nonlinear least-squares problem. Encode a visibility graph of which points are seen in which images (e.g., features A, B, C, D, E across images 1, 2, 3). This implies the Jacobian of the error function is sparse: each residual depends only on one camera and one point.
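The visibility graph translates directly into a Jacobian sparsity pattern that solvers can exploit (e.g., it can be passed as `jac_sparsity` to `scipy.optimize.least_squares`). A sketch with illustrative parameter sizes (6 dof per camera, 3 per point):

```python
import numpy as np
from scipy.sparse import lil_matrix

def ba_sparsity(cam_idx, pt_idx, n_cams, n_pts, cam_dof=6, pt_dof=3):
    """Sparsity pattern of the bundle-adjustment Jacobian: the two
    residual rows of observation k depend only on the parameters of
    camera cam_idx[k] and point pt_idx[k]."""
    n_obs = len(cam_idx)
    J = lil_matrix((2 * n_obs, cam_dof * n_cams + pt_dof * n_pts), dtype=int)
    for k, (i, j) in enumerate(zip(cam_idx, pt_idx)):
        J[2*k:2*k+2, cam_dof*i:cam_dof*(i+1)] = 1                       # camera block
        off = cam_dof * n_cams
        J[2*k:2*k+2, off + pt_dof*j:off + pt_dof*(j+1)] = 1             # point block
    return J

# One point seen by two cameras: 4 residual rows, 15 parameters
J = ba_sparsity([0, 1], [0, 0], n_cams=2, n_pts=1)
```

For large problems this sparsity is what makes bundle adjustment tractable: the normal equations have an arrow-shaped structure that the Schur complement trick exploits.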

Excellent reference: Bill Triggs, Philip McLauchlan, Richard Hartley, and Andrew Fitzgibbon, "Bundle Adjustment: A Modern Synthesis" (INRIA Rhône-Alpes; University of Surrey; General Electric CRD; University of Oxford).

Outline: Bundle Adjustment; Ambiguities in Reconstruction; Affine Factorization; Extensions

Recall camera projection:

x ≅ K_{3x3} [R_{3x3} | T_{3x1}] [X, Y, Z, 1]^T = M_{3x4} [X, Y, Z, 1]^T

with intrinsics K = [[f s_x, f s_θ, o_x], [0, f s_y, o_y], [0, 0, 1]] and extrinsics [R | T] = [[r_11, r_12, r_13, t_x], [r_21, r_22, r_23, t_y], [r_31, r_32, r_33, t_z]]. Writing the rows of M as m_1^T, m_2^T, m_3^T, the image coordinates are x = (m_1^T X)/(m_3^T X), y = (m_2^T X)/(m_3^T X).

Structure from motion ambiguity: if we scale the entire scene by some factor k and, at the same time, scale the camera matrices by a factor of 1/k, the 2D homogeneous projection remains exactly the same:

x ≅ MX = (M/k)(kX)

It is impossible to recover the absolute scale of the scene!

More generally: if we transform the scene using a transformation Q and apply the inverse transformation to the camera matrices, nothing changes:

x ≅ MX = (MQ^{-1})(QX)
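This invariance is easy to verify numerically. A minimal sketch (arbitrary camera, point, and transformation; all values illustrative):

```python
import numpy as np

# Numerical check of the reconstruction ambiguity: projecting QX with
# camera MQ^{-1} gives the same image point as projecting X with M.
rng = np.random.default_rng(0)
M = rng.normal(size=(3, 4))            # arbitrary 3x4 camera matrix
X = np.array([1.0, 2.0, 5.0, 1.0])     # homogeneous 3D point
Q = rng.normal(size=(4, 4))            # arbitrary (generically invertible) 4x4

def dehom(x):
    """Convert a homogeneous 2D point to inhomogeneous coordinates."""
    return x[:2] / x[2]

x_a = dehom(M @ X)
x_b = dehom((M @ np.linalg.inv(Q)) @ (Q @ X))
```

Because an unconstrained Q has 15 degrees of freedom, uncalibrated reconstruction is only defined up to this projective family, as the hierarchy below makes precise.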

Recall: transformations in 3D. Make use of 3D homogeneous coordinates [x, y, z, 1]^T.


Recall: transformations in 3D. Normalize by the last coordinate to recover 3D points: [x, y, z, w]^T → (x/w, y/w, z/w).

Back to ambiguities for 3D reconstruction: x = PX = (PQ^{-1})(QX)

Projective, Q = [A, t; v^T, v]: preserves intersection
Affine, Q = [A, t; 0^T, 1]: preserves parallelism, volume ratios
Similarity, Q = [sR, t; 0^T, 1]: preserves angles, ratios of lengths
Euclidean, Q = [R, t; 0^T, 1]: preserves angles, lengths

With no constraints on the camera calibration matrix or on the scene, we get a projective reconstruction. We need additional information to upgrade the reconstruction to affine, similarity, or Euclidean.

Projective ambiguity: x ≅ MX = (MQ^{-1})(QX), with Q_P = [A, t; v^T, v]

Projective ambiguity (straight lines are preserved)

Affine ambiguity: x ≅ MX = (MQ^{-1})(QX), with Q_A = [A, t; 0^T, 1]

Affine ambiguity (parallel lines are preserved)

Similarity ambiguity: x ≅ MX = (MQ^{-1})(QX), with Q_S = [sR, t; 0^T, 1]

Similarity ambiguity: angles and ratios of lengths preserved (but we can't recover the world coordinate system or absolute scale)

Outline: Bundle Adjustment; Ambiguities in Reconstruction; Affine Factorization; Large-scale SFM

Structure from motion: let's start with affine cameras (the math is easier). The projection center is at infinity.

Recall: orthographic projection. Special case of perspective projection. Another common notation for the projection matrix:

[x, y, 1]^T ≅ [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1]] [X, Y, Z, 1]^T  ⇒  (x, y) = (X, Y)

Sometimes notationally convenient because 2D homogeneous coordinates allow us to write 2D transformations as matrix multiplication.

Recall: affine cameras. Model as 3D affine transformation + orthographic projection + 2D affine transformation:

[x; y] = [[1, 0, 0, 0], [0, 1, 0, 0]] (3D affine) [X, Y, Z, 1]^T = [[a_11, a_12, a_13, b_1], [a_21, a_22, a_23, b_2]] [X, Y, Z, 1]^T

An affine camera is defined by 8 parameters.

Affine cameras. Model as 3D affine transformation + orthographic projection + 2D affine transformation. The 3D and 2D affine maps on either side of the orthographic projection can be absorbed into a single 3D affine transformation Q_a (12 parameters) while maintaining the structure of the affine projection matrix, so the projection collapses to:

[x; y] = [[a_11, a_12, a_13, b_1], [a_21, a_22, a_23, b_2]] [X, Y, Z, 1]^T,  i.e.  x = AX + b  with  A = [[a_11, a_12, a_13], [a_21, a_22, a_23]],  b = [b_1; b_2]

2D points = linear transformation of 3D points + 2D translation.

Affine structure from motion (my favorite vision algorithm). Given: m images of n fixed 3D points: x_ij = A_i X_j + b_i, i = 1, ..., m, j = 1, ..., n. Problem: use the mn correspondences x_ij to estimate the m projection matrices A_i and translation vectors b_i, and the n points X_j. The data form a (# of images) x (# of points) table of observations [x_11, x_12, ...; x_21, x_22, ...; ...].

Affine structure from motion. The reconstruction is defined up to an arbitrary 3D affine transformation Q (12 degrees of freedom): [A_i, b_i; 0, 1] → [A_i, b_i; 0, 1] Q^{-1}, [X_j; 1] → Q [X_j; 1]. How many knowns are there? How many unknowns? We have 2mn knowns and 8m + 3n unknowns (minus 12 dof for the affine ambiguity). Thus we must have 2mn >= 8m + 3n - 12. For m = 2 views, we need n = 4 point correspondences.

Affine structure from motion. Let's try centering the points in each 2D image: x̂_ij = x_ij - (1/n) Σ_k x_ik. Plug x_ij = A_i X_j + b_i into this expression. [on board]

Affine structure from motion. Centering: subtract the centroid of the image points:

x̂_ij = x_ij - (1/n) Σ_k x_ik = A_i X_j + b_i - (1/n) Σ_k (A_i X_k + b_i) = A_i (X_j - (1/n) Σ_k X_k) = A_i X̂_j

After centering (and defining the 3D points relative to their centroid X̂_j), each normalized point x̂_ij is related to the 3D point by x̂_ij = A_i X̂_j. Given a set of 2D correspondences, simply center them in each image; the affine image projection now becomes linear (A_i is a 2x3 matrix and x̂_ij a 2-vector).

Affine structure from motion. Let's create a 2m x n data (measurement) matrix:

D = [x̂_11, x̂_12, ..., x̂_1n; x̂_21, x̂_22, ..., x̂_2n; ...; x̂_m1, x̂_m2, ..., x̂_mn]   (rows: cameras, 2m; columns: points, n)

C. Tomasi and T. Kanade. Shape and motion from image streams under orthography: A factorization method. IJCV, 9(2):137-154, November 1992.

Affine structure from motion. The data matrix factors as

D = [x̂_11, ..., x̂_1n; ...; x̂_m1, ..., x̂_mn] = [A_1; A_2; ...; A_m] [X_1, X_2, ..., X_n] = MS

with cameras stacked in M (2m x 3) and points in S (3 x n). The measurement matrix D = MS must have rank 3! C. Tomasi and T. Kanade. Shape and motion from image streams under orthography: A factorization method. IJCV, 9(2):137-154, November 1992.

Fundamental decomposition: measurements D (2m x n) = motion M (2m x 3) x shape S (3 x n), i.e. D = MS.

Take the SVD of the data matrix: D = U Σ V^T. To reduce D to rank 3, we just need to set all the singular values to 0 except for the first 3: D ≈ U_3 Σ_3 V_3^T, with U_3 (2m x 3), Σ_3 (3 x 3), and V_3^T (3 x n).

Possible decomposition: M = U_3, S = Σ_3 V_3^T, so D ≈ MS. This decomposition minimizes ||D - MS||^2.

Affine ambiguity. The decomposition is not unique: we get the same D by using any invertible 3 x 3 matrix C and applying the transformations M → MC, S → C^{-1}S. That is because we have only an affine reconstruction and have not enforced any Euclidean constraints (like forcing the image axes to be perpendicular, for example). Source: M. Hebert

Eliminating the affine ambiguity. Enforce the image axes to be orthogonal and of unit length: a_1 · a_2 = 0 and |a_1|^2 = |a_2|^2 = 1, where a_1^T, a_2^T are the rows of each A_i in M = [A_1; A_2; ...]. We want C such that A_i (C C^T) A_i^T = Id. Write this out as the optimization min_L Σ_i ||A_i L A_i^T - Id||^2 over L = C C^T.

Eliminating the affine ambiguity. Orthographic assumption: image axes are perpendicular and the scale is 1, i.e. a_1 · a_2 = 0, |a_1|^2 = |a_2|^2 = 1. This translates into 3m equations in L = C C^T: A_i L A_i^T = Id, i = 1, ..., m. Solve for L, then recover C from L by Cholesky decomposition L = C C^T (in practice, easy to do). Update M and S: M → MC, S → C^{-1}S. Source: M. Hebert
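The metric-upgrade step can be sketched concretely: each camera contributes 3 linear equations in the 6 unknowns of the symmetric matrix L, and C comes out of a Cholesky factorization. A numpy sketch on synthetic distorted cameras (names and the test setup are illustrative):

```python
import numpy as np

def metric_upgrade(As):
    """Solve A_i L A_i^T = I for the symmetric 3x3 matrix L = C C^T
    (3 equations per camera, 6 unknowns), then recover C by Cholesky."""
    idx = [(0, 0), (1, 1), (2, 2), (0, 1), (0, 2), (1, 2)]
    rows, rhs = [], []
    for A in As:                          # each A is a 2x3 affine camera block
        for r, c in [(0, 0), (1, 1), (0, 1)]:
            row = []
            for p, q in idx:              # basis matrix for symmetric L
                E = np.zeros((3, 3))
                E[p, q] = E[q, p] = 1.0
                row.append(A[r] @ E @ A[c])
            rows.append(row)
            rhs.append(1.0 if r == c else 0.0)
    l, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    L = np.zeros((3, 3))
    for v, (p, q) in zip(l, idx):
        L[p, q] = L[q, p] = v
    return np.linalg.cholesky(L)          # C with C C^T = L

# Synthetic check: orthographic cameras distorted by an unknown C_true
rng = np.random.default_rng(0)
C_true = np.eye(3) + 0.3 * rng.normal(size=(3, 3))
As = []
for _ in range(3):
    R, _ = np.linalg.qr(rng.normal(size=(3, 3)))  # random rotation
    As.append(R[:2] @ C_true)                     # distorted 2x3 camera
C = metric_upgrade(As)
```

After updating each camera to A_i C, its two rows are again orthonormal, as the orthographic model requires.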

Algorithm summary. Given m images and n features x_ij: 1. For each image i, center the feature coordinates. 2. Construct a 2m x n measurement matrix D: column j contains the projection of point j in all views; each row contains one coordinate of the projections of all n points in one image. 3. Factorize D: compute the SVD D = U W V^T; create U_3 from the first 3 columns of U, V_3 from the first 3 columns of V, and W_3 from the upper-left 3 x 3 block of W; create the motion and shape matrices M = U_3 and S = W_3 V_3^T. 4. Eliminate the affine ambiguity. Source: M. Hebert
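The factorization part of this summary fits in a few lines of numpy. A sketch on synthetic affine cameras (the metric-upgrade step that eliminates the affine ambiguity is omitted; all names are illustrative):

```python
import numpy as np

def affine_factorization(D):
    """Tomasi-Kanade factorization: center each row of the 2m x n
    measurement matrix, then split it into motion M (2m x 3) and
    shape S (3 x n) via a rank-3 SVD. The affine ambiguity
    M -> MC, S -> C^{-1}S remains to be fixed afterwards."""
    Dc = D - D.mean(axis=1, keepdims=True)       # center in each image
    U, W, Vt = np.linalg.svd(Dc, full_matrices=False)
    M = U[:, :3]                                 # U_3
    S = np.diag(W[:3]) @ Vt[:3]                  # W_3 V_3^T
    return M, S

# Synthetic data: m=4 affine cameras observing n=10 points
rng = np.random.default_rng(0)
m, n = 4, 10
X = rng.normal(size=(3, n))
D = np.vstack([rng.normal(size=(2, 3)) @ X + rng.normal(size=(2, 1))
               for _ in range(m)])
M, S = affine_factorization(D)
Dc = D - D.mean(axis=1, keepdims=True)
```

With noise-free data the centered measurement matrix has rank exactly 3, so the product MS reproduces it to machine precision; with noise, the rank-3 SVD gives the best least-squares fit.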

Results C. Tomasi and T. Kanade. Shape and motion from image streams under orthography: A factorization method.

Results

Outline: Bundle Adjustment; Ambiguities in Reconstruction; Affine Factorization; Extensions

Dealing with missing data. So far, we have assumed that all points are visible in all views. In reality, the measurement matrix (cameras x points) typically has many missing entries.

Sequential structure from motion. Intuition: exploit the low-rank redundancy of the matrix. Possible solution: decompose the matrix into dense sub-blocks, factorize each sub-block, and fuse the results. However, finding dense maximal sub-blocks of the matrix is NP-complete (equivalent to finding maximal cliques in a graph). Incremental bilinear refinement: (1) perform factorization on a dense sub-block; (2) solve for a new 3D point visible in at least two known cameras (linear least squares, min_S ||D - MS||^2); (3) solve for a new camera that sees at least three known 3D points (linear least squares, min_M ||D - MS||^2). F. Rothganger, S. Lazebnik, C. Schmid, and J. Ponce. Segmenting, Modeling, and Matching Video Clips Containing Multiple Moving Objects. PAMI 2007.

Nonrigid structure from motion. Assume the 3D shapes are a linear combination of K basis shapes: S = Σ_{k=1..K} c_k S_k. For each image, we need to solve for the camera matrix and the scaling coefficients. Simply relax the rank constraint on the measurement matrix from 3 to 3K!

Projective structure from motion. Given: m images of n fixed 3D points, x_ij ≅ M_i X_j, i = 1, ..., m, j = 1, ..., n. Problem: estimate the m projection matrices M_i and n 3D points X_j from the mn correspondences x_ij. With no calibration info, cameras and points can only be recovered up to a 4x4 projective transformation Q: X → QX, M → MQ^{-1}. We can solve for structure and motion when 2mn >= 11m + 3n - 15. For two cameras, at least 7 points are needed.

Projective SFM: two-camera case. Compute the fundamental matrix F between the two views (from 7 or more correspondences). First camera matrix: M = [I | 0]. Second camera matrix: M' = [A_{3x3} | b_{3x1}]. Then one can compute A and b from F as follows: b is the epipole, the null vector of F^T (F^T b = 0), and A = [b]_x F (where [·]_x denotes the skew-symmetric cross-product matrix). For a proof, see Forsyth & Ponce, Sec. 8.3. For more than 2 images, things get more complicated.
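The recipe above is mechanical once F is known. A minimal numpy sketch, using a skew-symmetric F as the test case (the pure-translation case F = [t]_x; function names are illustrative):

```python
import numpy as np

def camera_from_F(F):
    """Second camera [A | b] from a fundamental matrix, with the first
    camera fixed to [I | 0]: b is the epipole satisfying F^T b = 0,
    found as the null vector of F^T via SVD, and A = [b]_x F."""
    _, _, Vt = np.linalg.svd(F.T)
    b = Vt[-1]                               # unit null vector of F^T
    bx = np.array([[0.0, -b[2], b[1]],
                   [b[2], 0.0, -b[0]],
                   [-b[1], b[0], 0.0]])      # skew-symmetric [b]_x
    return np.hstack([bx @ F, b.reshape(3, 1)])

# Example: F = [t]_x for t = (0.5, 0.2, 1.0), i.e. a translating camera
F = np.array([[0.0, -1.0, 0.2],
              [1.0, 0.0, -0.5],
              [-0.2, 0.5, 0.0]])
M2 = camera_from_F(F)
```

Note this yields only one member of the projective family of valid second cameras; any M' = [[b]_x F + b v^T | λb] is equally consistent with F.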

Back to bundle adjustment: minimize reprojection error over multiple 3D points and cameras:

min_{X_1, X_2, ..., M_1, M_2, ...} Σ_{i=1..m} Σ_{j=1..n} ||x_ij - Proj(X_j, M_i)||^2

Self-calibration. Self-calibration (auto-calibration) is the process of determining intrinsic camera parameters directly from uncalibrated images. For example, when the images are acquired by a single moving camera, we can use the constraint that the intrinsic parameter matrix remains fixed for all the images: compute an initial projective reconstruction and find the 3D projective transformation matrix Q such that all camera matrices take the form M_i = K [R_i | t_i]. We can also use constraints on the form of the calibration matrix (e.g., orthogonal image axes) or vanishing points.

Review: structure from motion. Ambiguity; affine structure from motion; factorization; dealing with missing data; incremental structure from motion; projective structure from motion; bundle adjustment; self-calibration.

Summary: 3D geometric vision. Single-view geometry: the pinhole camera model; variation: orthographic projection; the perspective projection matrix; intrinsic parameters; extrinsic parameters; calibration. Multiple-view geometry: triangulation; the epipolar constraint; essential matrix and fundamental matrix; stereo (binocular, multi-view); structure from motion: reconstruction ambiguity, affine SFM, projective SFM.