Structure from motion


Structure from motion

Structure from motion Given a set of corresponding points in two or more images, compute the camera parameters (the pose R_i, t_i of each camera) and the 3D point coordinates. Slide credit: Noah Snavely

Structure from motion Given: m images of n fixed 3D points, x_ij = P_i X_j, i = 1, ..., m, j = 1, ..., n. Problem: estimate the m projection matrices P_i and the n 3D points X_j from the mn correspondences x_ij.

Structure from motion ambiguity If we scale the entire scene by some factor k and, at the same time, scale the camera matrices by the factor 1/k, the projections of the scene points in the images remain exactly the same: x = PX = (1/k P)(k X). It is impossible to recover the absolute scale of the scene!

Structure from motion ambiguity If we scale the entire scene by some factor k and, at the same time, scale the camera matrices by the factor 1/k, the projections of the scene points in the images remain exactly the same. More generally: if we transform the scene using a transformation Q and apply the inverse transformation to the camera matrices, then the images do not change: x = PX = (P Q^-1)(Q X).
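This invariance is easy to check numerically. The sketch below (a minimal NumPy example with a made-up scaling transformation Q) verifies that applying Q to the scene and Q^-1 to the camera leaves the projection unchanged:

```python
import numpy as np

rng = np.random.default_rng(0)
P = rng.standard_normal((3, 4))              # an arbitrary 3x4 camera matrix
X = np.append(rng.standard_normal(3), 1.0)   # a homogeneous 3D point

# Any invertible 4x4 Q works; here Q scales the scene by k = 5
k = 5.0
Q = np.diag([k, k, k, 1.0])

x_before = P @ X                             # original projection
x_after = (P @ np.linalg.inv(Q)) @ (Q @ X)   # scene transformed, camera compensated

print(np.allclose(x_before, x_after))        # the projections are identical
```

The same check works for any invertible Q, which is exactly why an uncalibrated reconstruction is only defined up to such a transformation.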

Types of ambiguity
Projective (15 dof): Q = [A t; v^T v]. Preserves intersection and tangency.
Affine (12 dof): Q = [A t; 0^T 1]. Preserves parallelism and ratios of volumes.
Similarity (7 dof): Q = [s R t; 0^T 1]. Preserves angles and ratios of lengths.
Euclidean (6 dof): Q = [R t; 0^T 1]. Preserves angles and lengths.
With no constraints on the camera calibration matrix or on the scene, we get a projective reconstruction. We need additional information to upgrade the reconstruction to affine, similarity, or Euclidean.

Projective ambiguity Q_P = [A t; v^T v]: x = PX = (P Q_P^-1)(Q_P X)

Projective ambiguity

Affine ambiguity Q_A = [A t; 0^T 1]: x = PX = (P Q_A^-1)(Q_A X)

Affine ambiguity

Similarity ambiguity Q_S = [s R t; 0^T 1]: x = PX = (P Q_S^-1)(Q_S X)

Similarity ambiguity

Structure from motion Let's start with affine cameras (the math is easier); for these cameras the center of projection is at infinity.

Recall: Orthographic projection Special case of perspective projection: the distance from the center of projection to the image plane is infinite. Projection matrix: [1 0 0 0; 0 1 0 0; 0 0 0 1]. Slide by Steve Seitz

Affine cameras Two special cases: orthographic projection and parallel projection.

Affine cameras A general affine camera combines the effects of an affine transformation of the 3D space, orthographic projection, and an affine transformation of the image:
P = [3x3 affine] [1 0 0 0; 0 1 0 0; 0 0 0 1] [4x4 affine] = [a11 a12 a13 b1; a21 a22 a23 b2; 0 0 0 1] = [A b; 0 1]
Affine projection is a linear mapping plus a translation in inhomogeneous coordinates:
x = [x; y] = [a11 a12 a13; a21 a22 a23] [X; Y; Z] + [b1; b2] = A X + b
where b is the projection of the world origin.
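As a concrete illustration, here is a minimal sketch of affine projection in NumPy; the particular A and b are made-up values, and the world origin indeed maps to b:

```python
import numpy as np

# A hypothetical 2x3 affine camera A and translation b
A = np.array([[1.0, 0.0, 0.2],
              [0.0, 1.0, 0.1]])
b = np.array([3.0, -2.0])

def affine_project(A, b, X):
    """Affine projection: a linear map plus a translation, x = A X + b."""
    return A @ X + b

origin_image = affine_project(A, b, np.zeros(3))
print(origin_image)   # equals b, the projection of the world origin
```

Note there is no perspective division here: affine projection is linear in the point coordinates, which is what makes the factorization below possible.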

Affine structure from motion Given: m images of n fixed 3D points: x_ij = A_i X_j + b_i, i = 1, ..., m, j = 1, ..., n. Problem: use the mn correspondences x_ij to estimate the m projection matrices A_i and translation vectors b_i, and the n points X_j. The reconstruction is defined up to an arbitrary affine transformation Q (12 degrees of freedom): [A_i b_i; 0 1] → [A_i b_i; 0 1] Q^-1, [X_j; 1] → Q [X_j; 1]. We have 2mn knowns and 8m + 3n unknowns (minus 12 dof for the affine ambiguity). Thus, we must have 2mn >= 8m + 3n - 12. For two views, we need four point correspondences.

Affine structure from motion Centering: subtract the centroid of the image points in each view:
x̂_ij = x_ij - (1/n) Σ_k x_ik = A_i X_j + b_i - (1/n) Σ_k (A_i X_k + b_i) = A_i (X_j - (1/n) Σ_k X_k) = A_i X̂_j
For simplicity, assume that the origin of the world coordinate system is at the centroid of the 3D points. After centering, each normalized point x̂_ij is related to the 3D point X_j by x̂_ij = A_i X_j.
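The effect of centering can be verified numerically. In this sketch (synthetic data, with the 3D centroid moved to the origin as assumed above), subtracting the image centroid eliminates the translation b exactly:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
X = rng.standard_normal((3, n))
X -= X.mean(axis=1, keepdims=True)          # world origin at the 3D centroid

A = rng.standard_normal((2, 3))             # one affine camera
b = rng.standard_normal(2)
x = A @ X + b[:, None]                      # observed image points

x_hat = x - x.mean(axis=1, keepdims=True)   # subtract the image centroid
print(np.allclose(x_hat, A @ X))            # the translation b is gone
```

After this step, only the linear parts A_i remain to be estimated, which is what allows the purely multiplicative factorization D = MS.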

Affine structure from motion Let's create a 2m x n data (measurement) matrix, with one row pair per camera (2m rows) and one column per point (n columns):
D = [x̂_11 x̂_12 ... x̂_1n; x̂_21 x̂_22 ... x̂_2n; ...; x̂_m1 x̂_m2 ... x̂_mn]
C. Tomasi and T. Kanade. Shape and motion from image streams under orthography: A factorization method. IJCV, 9(2):137-154, November 1992.

Affine structure from motion The data matrix factors into motion and structure:
D = [x̂_11 x̂_12 ... x̂_1n; x̂_21 x̂_22 ... x̂_2n; ...; x̂_m1 x̂_m2 ... x̂_mn] = [A_1; A_2; ...; A_m] [X_1 X_2 ... X_n] = M S
where M is the 2m x 3 camera (motion) matrix and S is the 3 x n point (structure) matrix. The measurement matrix D = MS must have rank 3! C. Tomasi and T. Kanade. Shape and motion from image streams under orthography: A factorization method. IJCV, 9(2):137-154, November 1992.
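The rank-3 property is easy to confirm on synthetic data. The following sketch builds a centered measurement matrix from random affine cameras and points and checks its rank:

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 4, 10                                 # 4 cameras, 10 points
X = rng.standard_normal((3, n))
X -= X.mean(axis=1, keepdims=True)           # centered 3D points

# Stack the centered projections A_i X of each camera into D (2m x n)
D = np.vstack([rng.standard_normal((2, 3)) @ X for _ in range(m)])

print(np.linalg.matrix_rank(D))              # rank 3, even though D is 8 x 10
```

With noisy tracks the rank is only approximately 3, which is why the factorization below uses a truncated SVD rather than an exact decomposition.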

Factorizing the measurement matrix Singular value decomposition of D: D = U W V^T. Source: M. Hebert

Factorizing the measurement matrix Obtaining a factorization from SVD: keep only the three largest singular values. This decomposition minimizes ‖D - MS‖^2. Source: M. Hebert

Affine ambiguity The decomposition is not unique: we get the same D by using any invertible 3x3 matrix C and applying the transformations M → MC, S → C^-1 S. That is because we have only an affine reconstruction and we have not enforced any Euclidean constraints (like forcing the image axes to be perpendicular, for example). Source: M. Hebert

Eliminating the affine ambiguity Orthographic cameras: the image axes are perpendicular and the scale is 1, i.e. a_1 · a_2 = 0 and |a_1|^2 = |a_2|^2 = 1, where a_1 and a_2 are the two rows of A_i. This translates into 3m equations in L = CC^T: A_i L A_i^T = Id, i = 1, ..., m. Solve for L, recover C from L by Cholesky decomposition (L = CC^T), and update M and S: M → MC, S → C^-1 S. Source: M. Hebert
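A sketch of this metric upgrade, assuming orthographic cameras: the function below sets up the linear system in the six unknowns of the symmetric matrix L = CC^T and recovers C by Cholesky decomposition. The synthetic check mixes orthonormal camera rows with a random invertible matrix and verifies that the upgrade restores them:

```python
import numpy as np

def metric_upgrade(M):
    """Solve A_i L A_i^T = Id for the symmetric L = C C^T, return C (Cholesky)."""
    def row(u, v):   # coefficients of (L11, L12, L13, L22, L23, L33) in u L v^T
        return [u[0]*v[0], u[0]*v[1] + u[1]*v[0], u[0]*v[2] + u[2]*v[0],
                u[1]*v[1], u[1]*v[2] + u[2]*v[1], u[2]*v[2]]

    G, h = [], []
    for i in range(M.shape[0] // 2):
        a1, a2 = M[2*i], M[2*i + 1]
        G += [row(a1, a1), row(a2, a2), row(a1, a2)]   # unit scale + orthogonality
        h += [1.0, 1.0, 0.0]
    l = np.linalg.lstsq(np.asarray(G), np.asarray(h), rcond=None)[0]
    L = np.array([[l[0], l[1], l[2]],
                  [l[1], l[3], l[4]],
                  [l[2], l[4], l[5]]])
    return np.linalg.cholesky(L)

# Synthetic check: orthonormal camera rows mixed by a random invertible matrix
rng = np.random.default_rng(3)
M_true = np.vstack([np.linalg.qr(rng.standard_normal((3, 3)))[0][:2]
                    for _ in range(3)])
M = M_true @ rng.standard_normal((3, 3))     # affine-distorted motion matrix
C = metric_upgrade(M)
B = M @ C                                    # each camera's rows orthonormal again
```

At least three views are needed for the system to determine L; with noisy data the least-squares solution may not be exactly positive definite, in which case a projection onto the positive definite cone is needed before the Cholesky step.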

Algorithm summary Given: m images and n tracked features x_ij.
For each image i, center the feature coordinates.
Construct a 2m x n measurement matrix D: column j contains the projection of point j in all views; row i contains one coordinate of the projections of all n points in image i.
Factorize D: compute the SVD D = U W V^T; create U_3 from the first 3 columns of U, V_3 from the first 3 columns of V, and W_3 from the upper-left 3x3 block of W; create the motion and shape matrices M = U_3 W_3^(1/2) and S = W_3^(1/2) V_3^T (or M = U_3 and S = W_3 V_3^T).
Eliminate the affine ambiguity. Source: M. Hebert
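The factorization step of the algorithm above can be sketched in a few lines of NumPy (synthetic rank-3 data here; in practice D comes from tracked, centered feature points):

```python
import numpy as np

def factorize(D):
    """Tomasi-Kanade factorization of a centered 2m x n measurement matrix."""
    U, w, Vt = np.linalg.svd(D, full_matrices=False)
    W3_sqrt = np.diag(np.sqrt(w[:3]))        # square root of the top 3x3 of W
    M = U[:, :3] @ W3_sqrt                   # 2m x 3 motion matrix
    S = W3_sqrt @ Vt[:3]                     # 3 x n shape matrix
    return M, S

# Synthetic check: a rank-3 measurement matrix is recovered exactly
rng = np.random.default_rng(4)
D = rng.standard_normal((8, 3)) @ rng.standard_normal((3, 10))
M, S = factorize(D)
```

The returned M and S reproduce D but are still only defined up to the affine ambiguity, which must then be eliminated as described above.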

Reconstruction results C. Tomasi and T. Kanade. Shape and motion from image streams under orthography: A factorization method. IJCV, 9(2):137-154, November 1992.

Dealing with missing data So far, we have assumed that all points are visible in all views. In reality, the measurement matrix typically has a large number of missing entries, since each point is tracked only through a subset of the views.

Dealing with missing data Possible solution: decompose the matrix into dense sub-blocks, factorize each sub-block, and fuse the results. Finding dense maximal sub-blocks of the matrix is NP-complete (equivalent to finding maximal cliques in a graph). Incremental bilinear refinement: (1) perform factorization on a dense sub-block; (2) solve for a new 3D point visible in at least two known cameras (linear least squares); (3) solve for a new camera that sees at least three known 3D points (linear least squares). F. Rothganger, S. Lazebnik, C. Schmid, and J. Ponce. Segmenting, Modeling, and Matching Video Clips Containing Multiple Moving Objects. PAMI 2007.

Projective structure from motion Given: m images of n fixed 3D points: z_ij x_ij = P_i X_j, i = 1, ..., m, j = 1, ..., n. Problem: estimate the m projection matrices P_i and the n 3D points X_j from the mn correspondences x_ij.

Projective structure from motion With no calibration information, the cameras and points can only be recovered up to a 4x4 projective transformation Q: X → QX, P → PQ^-1. We can solve for structure and motion when 2mn >= 11m + 3n - 15. For two cameras, at least 7 points are needed.

Projective SFM: two-camera case Compute the fundamental matrix F between the two views. First camera matrix: [I | 0]. Second camera matrix: [A | b], where b is the epipole (F^T b = 0) and A = [b_x] F. F&P sec. 13.3.1
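A sketch of this construction: the epipole b is taken as the null vector of F^T, and [b]_x denotes the cross-product matrix of b. The synthetic check verifies that points projected by the recovered pair satisfy the epipolar constraint x2^T F x1 = 0:

```python
import numpy as np

def cameras_from_F(F):
    """Canonical projective camera pair: P1 = [I | 0], P2 = [[b]x F | b]."""
    b = np.linalg.svd(F.T)[2][-1]            # epipole: null vector of F^T
    bx = np.array([[0, -b[2], b[1]],
                   [b[2], 0, -b[0]],
                   [-b[1], b[0], 0]])        # cross-product matrix [b]x
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = np.hstack([bx @ F, b[:, None]])
    return P1, P2

# Check on a synthetic rank-2 fundamental matrix
rng = np.random.default_rng(5)
U, _, Vt = np.linalg.svd(rng.standard_normal((3, 3)))
F = U @ np.diag([1.0, 0.5, 0.0]) @ Vt        # rank 2 by construction
P1, P2 = cameras_from_F(F)
X = rng.standard_normal((4, 5))              # random homogeneous 3D points
x1, x2 = P1 @ X, P2 @ X
residual = np.sum(x2 * (F @ x1), axis=0)     # epipolar constraint per point
```

The resulting reconstruction is one representative of the projective equivalence class; any choice P2' = [[b]x F + b v^T | s b] is equally valid.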

Sequential structure from motion Initialize the motion from two images using the fundamental matrix. Initialize the structure by triangulation. For each additional view: determine the projection matrix of the new camera using all the known 3D points that are visible in its image (calibration); refine and extend the structure by computing new 3D points and re-optimizing existing points that are also seen by this camera (triangulation). Finally, refine structure and motion together: bundle adjustment.

Bundle adjustment Non-linear method for refining structure and motion simultaneously by minimizing the reprojection error:
E(P, X) = Σ_{i=1..m} Σ_{j=1..n} d(x_ij, P_i X_j)^2
where d is the distance between the observed image point x_ij and the projection of X_j into image i.
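The objective being minimized can be written down directly. The sketch below evaluates the reprojection error for a set of perspective cameras and homogeneous 3D points; a real bundle adjuster would then minimize it over all cameras and points with a nonlinear least-squares solver (e.g. Levenberg-Marquardt):

```python
import numpy as np

def reprojection_error(cameras, points, observations):
    """E(P, X) = sum_ij d(x_ij, P_i X_j)^2 for perspective cameras.

    cameras: list of 3x4 matrices; points: 4 x n homogeneous 3D points;
    observations[i]: 2 x n observed image points in view i.
    """
    E = 0.0
    for i, P in enumerate(cameras):
        proj = P @ points                    # homogeneous projections, 3 x n
        proj = proj[:2] / proj[2]            # perspective division
        E += np.sum((observations[i] - proj) ** 2)
    return E

# Synthetic check: exact observations give zero error
rng = np.random.default_rng(6)
n = 8
X = np.vstack([rng.standard_normal((2, n)),
               rng.uniform(4.0, 6.0, (1, n)),   # keep points in front of cameras
               np.ones((1, n))])
cameras = [np.hstack([np.eye(3), np.zeros((3, 1))]),
           np.hstack([np.eye(3), np.array([[0.5], [0.0], [0.0]])])]
obs = np.stack([(P @ X)[:2] / (P @ X)[2] for P in cameras])
```

Because of the perspective division, E is nonlinear in both the cameras and the points, which is why bundle adjustment requires an iterative solver and a good initialization from the sequential pipeline above.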

Self-calibration Self-calibration (auto-calibration) is the process of determining the intrinsic camera parameters directly from uncalibrated images. For example, when the images are acquired by a single moving camera, we can use the constraint that the intrinsic parameter matrix remains fixed for all the images: compute an initial projective reconstruction and find a 3D projective transformation matrix Q such that all camera matrices take the form P_i = K [R_i | t_i]. We can also use constraints on the form of the calibration matrix (e.g., zero skew) and vanishing points.

Review: Structure from motion
Ambiguity
Affine structure from motion
Factorization
Dealing with missing data
Incremental structure from motion
Projective structure from motion
Bundle adjustment
Self-calibration

Summary: 3D geometric vision
Single-view geometry: the pinhole camera model (variation: orthographic projection), the perspective projection matrix, intrinsic parameters, extrinsic parameters, calibration
Multiple-view geometry: triangulation, the epipolar constraint, essential matrix and fundamental matrix, stereo (binocular and multi-view), structure from motion (reconstruction ambiguity, affine SFM, projective SFM)