Structure from Motion

Structure from Motion. Computer Vision CS 143, Brown, James Hays. Many slides adapted from Derek Hoiem, Lana Lazebnik, Silvio Savarese, Steve Seitz, and Martial Hebert

This class: structure from motion. Recap of epipolar geometry. Depth from two views. Affine structure from motion.

Recap: Epipoles. A point x in the left image corresponds to an epipolar line l' in the right image. The epipolar line passes through the epipole (the intersection of the cameras' baseline with the image plane).

Recap: Fundamental matrix. The fundamental matrix maps a point in one image to a line in the other. If x and x' correspond to the same 3D point X: x'^T F x = 0.
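As a quick numerical sanity check (not from the slides), the NumPy sketch below builds F for two synthetic cameras using the standard construction F = [e']_x P' P^+ and verifies the epipolar constraint; the cameras and the 3D point are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

P1 = np.hstack([np.eye(3), np.zeros((3, 1))])   # first camera, P1 = [I | 0]
P2 = rng.normal(size=(3, 4))                     # an arbitrary second camera
X = np.array([0.5, -0.2, 3.0, 1.0])              # a homogeneous 3D point

x1, x2 = P1 @ X, P2 @ X                          # corresponding homogeneous image points

C1 = np.array([0.0, 0.0, 0.0, 1.0])              # center of P1
e2 = P2 @ C1                                     # epipole in the second image
e2_cross = np.array([[0, -e2[2], e2[1]],
                     [e2[2], 0, -e2[0]],
                     [-e2[1], e2[0], 0]])
F = e2_cross @ P2 @ np.linalg.pinv(P1)           # F = [e2]_x P2 P1^+

print(np.isclose(x2 @ F @ x1, 0.0))              # True: epipolar constraint x2^T F x1 = 0
print(F @ x1)                                    # epipolar line of x1 in image 2 (passes through x2)
```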

Structure from motion. Given a set of corresponding points in two or more images, compute the camera parameters (R_1, t_1), (R_2, t_2), (R_3, t_3), ... and the 3D point coordinates. Slide credit: Noah Snavely

Structure from motion ambiguity. If we scale the entire scene by some factor k and, at the same time, scale the camera matrices by a factor of 1/k, the projections of the scene points in the image remain exactly the same: x = PX = (1/k P)(kX). It is impossible to recover the absolute scale of the scene!

How do we know the scale of image content?

Structure from motion ambiguity. If we scale the entire scene by some factor k and, at the same time, scale the camera matrices by a factor of 1/k, the projections of the scene points in the image remain exactly the same. More generally: if we transform the scene using a transformation Q and apply the inverse transformation to the camera matrices, then the images do not change: x = PX = (PQ^-1)(QX).
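A minimal NumPy sketch (my own, not from the slides) that verifies this numerically: projecting the original scene with P gives exactly the same image points as projecting the transformed scene QX with the compensated camera PQ^-1; the camera, points, and Q are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

P = rng.normal(size=(3, 4))                                   # an arbitrary 3x4 camera
X = np.vstack([rng.normal(size=(3, 10)), np.ones((1, 10))])   # 10 homogeneous 3D points
Q = rng.normal(size=(4, 4)) + 4 * np.eye(4)                   # an arbitrary (invertible) 4x4 transform

def project(P, X):
    """Project homogeneous 3D points and return inhomogeneous image coordinates."""
    x = P @ X
    return x[:2] / x[2]

# Transforming the scene by Q and the camera by Q^-1 leaves the images unchanged.
print(np.allclose(project(P, X), project(P @ np.linalg.inv(Q), Q @ X)))   # True
```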

Projective structure from motion. Given: m images of n fixed 3D points, x_ij = P_i X_j, i = 1, ..., m, j = 1, ..., n. Problem: estimate the m projection matrices P_i and the n 3D points X_j from the mn corresponding points x_ij. Slides from Lana Lazebnik

Projective structure from motion. Given: m images of n fixed 3D points, x_ij = P_i X_j, i = 1, ..., m, j = 1, ..., n. Problem: estimate the m projection matrices P_i and the n 3D points X_j from the mn corresponding points x_ij. With no calibration info, cameras and points can only be recovered up to a 4x4 projective transformation Q: X -> QX, P -> PQ^-1. We can solve for structure and motion when 2mn >= 11m + 3n - 15. For two cameras, at least 7 points are needed.
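The counting argument can be checked with a small hypothetical helper (not part of the slides): each observed point contributes 2 equations, each camera has 11 unknowns, each point 3, and 15 degrees of freedom are absorbed by the projective ambiguity.

```python
def projective_sfm_well_posed(m, n):
    """2 equations per observation vs 11 unknowns per camera + 3 per point - 15 gauge freedoms."""
    return 2 * m * n >= 11 * m + 3 * n - 15

print(projective_sfm_well_posed(2, 7))   # True  (28 equations, 28 unknowns)
print(projective_sfm_well_posed(2, 6))   # False (24 equations, 25 unknowns)
```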

Types of ambiguity.
Projective (15 dof): [A t; v^T v]. Preserves intersection and tangency.
Affine (12 dof): [A t; 0^T 1]. Preserves parallelism, volume ratios.
Similarity (7 dof): [sR t; 0^T 1]. Preserves angles, ratios of length.
Euclidean (6 dof): [R t; 0^T 1]. Preserves angles, lengths.
With no constraints on the camera calibration matrix or on the scene, we get a projective reconstruction. Need additional information to upgrade the reconstruction to affine, similarity, or Euclidean.

Projective ambiguity: Q_P = [A t; v^T v], so x = PX = (P Q_P^-1)(Q_P X).

Projective ambiguity

Affine ambiguity: Q_A = [A t; 0^T 1], so x = PX = (P Q_A^-1)(Q_A X).

Affine ambiguity

Similarity ambiguity Q s sr T 0 t PX PQ -Q X S S

Similarity ambiguity

Bundle adjustment. Non-linear method for refining structure and motion by minimizing the reprojection error: E(P, X) = sum_{i=1..m} sum_{j=1..n} D(x_ij, P_i X_j)^2, where D is the image distance between the observed point x_ij and the projection P_i X_j.
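A bare-bones sketch of bundle adjustment using scipy.optimize.least_squares, assuming each camera is parameterized by the 12 raw entries of its 3x4 projection matrix and each point by its 3 coordinates; real implementations add gauge fixing, robust losses, and sparse Jacobians, none of which are shown here.

```python
import numpy as np
from scipy.optimize import least_squares

def project(P, X):
    """Project inhomogeneous 3D points X (3 x n) with a 3x4 camera P."""
    Xh = np.vstack([X, np.ones((1, X.shape[1]))])
    x = P @ Xh
    return x[:2] / x[2]

def reprojection_residuals(params, m, n, observations):
    """Stacked residuals x_ij - P_i X_j over all cameras i and points j."""
    P = params[:12 * m].reshape(m, 3, 4)
    X = params[12 * m:].reshape(3, n)
    return np.concatenate([(project(P[i], X) - observations[i]).ravel() for i in range(m)])

def bundle_adjust(P_init, X_init, observations):
    """Refine cameras and points; observations[i] is the 2 x n array of points seen by camera i."""
    m, n = len(P_init), X_init.shape[1]
    x0 = np.concatenate([np.asarray(P_init).ravel(), X_init.ravel()])
    sol = least_squares(reprojection_residuals, x0, args=(m, n, observations))
    return sol.x[:12 * m].reshape(m, 3, 4), sol.x[12 * m:].reshape(3, n)
```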

Photosynth. Noah Snavely, Steven M. Seitz, Richard Szeliski, "Photo tourism: Exploring photo collections in 3D," SIGGRAPH 2006. http://photosynth.net/

Structure from motion. Let's start with affine cameras (the math is easier): the center of projection is at infinity.

Affine structure from motion. Affine projection is a linear mapping plus a translation in inhomogeneous coordinates: (x, y)^T = [a_11 a_12 a_13; a_21 a_22 a_23] (X, Y, Z)^T + (t_x, t_y)^T, i.e. x = AX + t, where t is the projection of the world origin. 1. We are given corresponding 2D points (x) in several frames. 2. We want to estimate the 3D points (X) and the affine parameters of each camera (A).
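A tiny numeric example of the affine camera model, with a made-up A and t:

```python
import numpy as np

A = np.array([[0.9, 0.1, 0.0],   # made-up 2x3 affine camera
              [0.0, 1.0, 0.2]])
t = np.array([5.0, -2.0])        # projection of the world origin

X = np.array([1.0, 2.0, 3.0])    # a 3D point
x = A @ X + t                    # its 2D affine projection
print(x)                         # [6.1  0.6]
```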

Affine structure from motion. Centering: subtract the centroid of the image points, x̂_ij = x_ij - (1/n) sum_k x_ik. For simplicity, assume that the origin of the world coordinate system is at the centroid of the 3D points. After centering, each normalized point x̂_ij is related to the 3D point X_j by x̂_ij = A_i X_j.
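The effect of centering can be verified with a short NumPy sketch (not from the slides): once the world origin is at the 3D centroid, subtracting the image centroid cancels the translation t, leaving x̂_ij = A_i X_j.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 12

X = rng.normal(size=(3, n))
X -= X.mean(axis=1, keepdims=True)          # world origin at the centroid of the 3D points

A = rng.normal(size=(2, 3))                 # one affine camera
t = rng.normal(size=(2, 1))                 # its translation

x = A @ X + t                               # observed 2D points in this image
x_hat = x - x.mean(axis=1, keepdims=True)   # centering: subtract the centroid of the image points

print(np.allclose(x_hat, A @ X))            # True: the translation drops out
```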

Suppose we know the 3D points and the affine camera parameters; then we can compute the observed 2D positions of each point: [x̂_11 x̂_12 ... x̂_1n; x̂_21 x̂_22 ... x̂_2n; ...; x̂_m1 x̂_m2 ... x̂_mn] = [A_1; A_2; ...; A_m] [X_1 X_2 ... X_n]. Camera parameters (2m x 3) times 3D points (3 x n) gives the 2D image points (2m x n).

What if we instead observe corresponding 2D image points? Can we recover the camera parameters and 3D points? D = [x̂_11 ... x̂_1n; ...; x̂_m1 ... x̂_mn] = ? (2m rows from the cameras, n columns from the points). What rank is the matrix of 2D points?
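Because D is the product of a 2m x 3 motion matrix and a 3 x n shape matrix, its rank is at most 3. A quick NumPy check with synthetic cameras and points:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 4, 20                                    # 4 affine cameras, 20 points

A = rng.normal(size=(m, 2, 3))                  # stacked 2x3 affine cameras
X = rng.normal(size=(3, n))                     # (centered) 3D points

D = np.vstack([A[i] @ X for i in range(m)])     # 2m x n measurement matrix of centered image points
print(D.shape)                                  # (8, 20)
print(np.linalg.matrix_rank(D))                 # 3: product of a 2m x 3 and a 3 x n matrix
```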

Factorizing the measurement matrix: D = AX. Source: M. Hebert

Factorizing the measurement matrix. Singular value decomposition of D: D = U W V^T. Source: M. Hebert

Factorizing the measurement matrix. Obtaining a factorization from SVD: keep the first 3 singular values and split them between the factors, M = U_3 W_3^(1/2) (motion) and S = W_3^(1/2) V_3^T (structure), so that D ≈ MS. Source: M. Hebert

Factorizing the measurement matrix. Obtaining a factorization from SVD: this decomposition minimizes ||D - MS||^2. Source: M. Hebert
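A sketch of the rank-3 factorization step in NumPy, following the recipe above; by the Eckart-Young theorem, truncating the SVD to the top three singular values gives the best rank-3 approximation of D in the Frobenius norm.

```python
import numpy as np

def factorize(D):
    """Rank-3 factorization of the centered 2m x n measurement matrix D."""
    U, w, Vt = np.linalg.svd(D, full_matrices=False)
    W3_sqrt = np.diag(np.sqrt(w[:3]))
    M = U[:, :3] @ W3_sqrt        # motion: stacked affine cameras, 2m x 3
    S = W3_sqrt @ Vt[:3, :]       # structure: 3D points, 3 x n
    return M, S
```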

Affine ambiguity. The decomposition is not unique: we get the same D by using any invertible 3x3 matrix C and applying the transformations A -> AC, X -> C^-1 X. That is because we have only an affine transformation and we have not enforced any Euclidean constraints (like forcing the image axes to be perpendicular, for example). Source: M. Hebert

Eliminating the affine ambiguity. Orthographic projection: the image axes are perpendicular and the scale is 1, i.e. a_1 · a_2 = 0 and |a_1|^2 = |a_2|^2 = 1, where a_1 and a_2 are the rows of A_i. This translates into 3m equations in L = CC^T: A_i L A_i^T = Id, i = 1, ..., m. Solve for L. Recover C from L by Cholesky decomposition: L = CC^T. Update M and S: M -> MC, S -> C^-1 S. Source: M. Hebert
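One way to implement the metric upgrade, sketched under the slide's orthographic assumptions: each camera contributes three linear equations in the six unique entries of the symmetric matrix L, which are solved by least squares before recovering C by Cholesky (this assumes the estimated L comes out positive definite).

```python
import numpy as np

def metric_upgrade(M, S):
    """Solve A_i L A_i^T = Id for symmetric L = C C^T, then remove the affine ambiguity."""
    m = M.shape[0] // 2

    def coeffs(a, b):
        # Coefficients of the 6 unique entries of symmetric L in the bilinear form a^T L b,
        # with L = [[l0, l1, l2], [l1, l3, l4], [l2, l4, l5]].
        return np.array([a[0]*b[0],
                         a[0]*b[1] + a[1]*b[0],
                         a[0]*b[2] + a[2]*b[0],
                         a[1]*b[1],
                         a[1]*b[2] + a[2]*b[1],
                         a[2]*b[2]])

    G, h = [], []
    for i in range(m):
        a1, a2 = M[2*i], M[2*i + 1]             # the two rows of A_i
        G += [coeffs(a1, a1), coeffs(a2, a2), coeffs(a1, a2)]
        h += [1.0, 1.0, 0.0]                    # |a1|^2 = |a2|^2 = 1, a1 . a2 = 0

    l = np.linalg.lstsq(np.array(G), np.array(h), rcond=None)[0]
    L = np.array([[l[0], l[1], l[2]],
                  [l[1], l[3], l[4]],
                  [l[2], l[4], l[5]]])

    C = np.linalg.cholesky(L)                   # assumes L is positive definite
    return M @ C, np.linalg.inv(C) @ S
```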

Algorithm summary. Given: m images and n tracked features x_ij.
1. For each image i, center the feature coordinates.
2. Construct a 2m x n measurement matrix D: column j contains the projections of point j in all views; each row contains one coordinate of the projections of all n points in one image.
3. Factorize D: compute the SVD D = U W V^T; create U_3 from the first 3 columns of U, V_3 from the first 3 columns of V, and W_3 from the upper-left 3x3 block of W.
4. Create the motion (affine) and shape (3D) matrices: A = U_3 W_3^(1/2) and X = W_3^(1/2) V_3^T.
5. Eliminate the affine ambiguity.
Source: M. Hebert
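Putting the steps together, a compact NumPy sketch of the whole factorization pipeline (my own arrangement of the summary above; the metric-upgrade step from the previous sketch is left as a final call):

```python
import numpy as np

def affine_sfm(x):
    """Affine structure from motion by factorization.
    x: observed points of shape (m, 2, n); x[i, :, j] is point j in image i."""
    m, _, n = x.shape

    # 1. Center the feature coordinates in each image.
    centered = x - x.mean(axis=2, keepdims=True)

    # 2. Stack into the 2m x n measurement matrix D.
    D = centered.reshape(2 * m, n)

    # 3. Factorize D via the SVD, keeping the top 3 singular values.
    U, w, Vt = np.linalg.svd(D, full_matrices=False)
    W3_sqrt = np.diag(np.sqrt(w[:3]))
    A = U[:, :3] @ W3_sqrt        # motion (affine cameras), 2m x 3
    X = W3_sqrt @ Vt[:3, :]       # shape (3D points), 3 x n

    # 4. Eliminate the affine ambiguity, e.g. with metric_upgrade(A, X) from the sketch above.
    return A, X
```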

Dealing with missing data. So far, we have assumed that all points are visible in all views. In reality, the measurement matrix typically has many missing entries, since each point is tracked through only some of the cameras. One solution: solve using a dense submatrix of visible points, then iteratively add new cameras.
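A minimal illustration of the first idea, assuming a visibility mask is available: keep only the points observed in every view to obtain a dense block that can be factorized, then grow the reconstruction from there.

```python
import numpy as np

def dense_block(D, visible):
    """Columns (points) observed in every view of the 2m x n measurement matrix D.
    visible: m x n boolean mask; unobserved entries of D are arbitrary."""
    cols = visible.all(axis=0)
    return D[:, cols], cols
```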

A nice short explanation: class notes from Lischinski and Gruber, http://www.cs.huji.ac.il/~csip/sfm.pdf