BIL Computer Vision Apr 16, 2014


BIL 719 - Computer Vision, Apr 16, 2014. Binocular Stereo (cont'd.), Structure from Motion. Aykut Erdem, Dept. of Computer Engineering, Hacettepe University

Slide credit: S. Lazebnik. Basic stereo matching algorithm: for each pixel in the first image, find the corresponding epipolar line in the right image, examine all pixels on the epipolar line and pick the best match, then triangulate the matches to get depth information. Simplest case: the epipolar lines are scanlines. When does this happen?
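The basic algorithm above can be sketched for the rectified case, where the epipolar lines are scanlines. This is a minimal brute-force sketch, not an optimized implementation; the window size and SSD cost are the choices discussed on the following slides.

```python
import numpy as np

def block_matching_disparity(left, right, window=5, max_disp=16):
    """For each pixel in the left image, slide a window along the same
    scanline of the right image and keep the disparity with the lowest
    SSD matching cost. Assumes rectified grayscale images."""
    h, w = left.shape
    half = window // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half, w - half):
            ref = left[y - half:y + half + 1, x - half:x + half + 1]
            best_d, best_cost = 0, np.inf
            for d in range(0, min(max_disp, x - half) + 1):
                cand = right[y - half:y + half + 1,
                             x - d - half:x - d + half + 1]
                cost = np.sum((ref.astype(float) - cand.astype(float)) ** 2)
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp
```

On a synthetic pair where the right image is the left image shifted by a known amount, the interior disparities come out exactly at that shift.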

Slide credit: S. Lazebnik Simplest Case: Parallel images Image planes of cameras are parallel to each other and to the baseline Camera centers are at same height Focal lengths are the same

Slide credit: S. Lazebnik Simplest Case: Parallel images Image planes of cameras are parallel to each other and to the baseline Camera centers are at same height Focal lengths are the same Then, epipolar lines fall along the horizontal scan lines of the images

Essential matrix for parallel images. Epipolar constraint: $x'^T E x = 0$ with $E = [t_\times]R$, where the cross-product matrix is
$$[a_\times] = \begin{pmatrix} 0 & -a_z & a_y \\ a_z & 0 & -a_x \\ -a_y & a_x & 0 \end{pmatrix}.$$
For parallel images, $R = I$ and $t = (T, 0, 0)$, so
$$E = [t_\times]R = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & -T \\ 0 & T & 0 \end{pmatrix}.$$
Slide credit: S. Lazebnik

Essential matrix for parallel images. With $R = I$, $t = (T, 0, 0)$, and $E = [t_\times]R$, the epipolar constraint $x'^T E x = 0$ becomes
$$(u', v', 1)\begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & -T \\ 0 & T & 0 \end{pmatrix}\begin{pmatrix} u \\ v \\ 1 \end{pmatrix} = (u', v', 1)\begin{pmatrix} 0 \\ -T \\ Tv \end{pmatrix} = Tv - Tv' = 0,$$
so $v = v'$: the y-coordinates of corresponding points are the same. Slide credit: S. Lazebnik
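The derivation above can be checked numerically: with $t = (T,0,0)$ and $R = I$, the constraint $x'^T E x = T(v - v')$ vanishes exactly when the y-coordinates agree. A small sketch (the specific point coordinates below are made up for illustration):

```python
import numpy as np

def cross_matrix(a):
    """Skew-symmetric matrix [a]_x such that cross_matrix(a) @ b == a x b."""
    return np.array([[0.0, -a[2], a[1]],
                     [a[2], 0.0, -a[0]],
                     [-a[1], a[0], 0.0]])

# Parallel cameras: R = I, t = (T, 0, 0), so E = [t]_x R = [t]_x.
T = 2.0
E = cross_matrix(np.array([T, 0.0, 0.0]))

x = np.array([0.4, 0.7, 1.0])        # point in the left image (u, v, 1)
x_same = np.array([0.1, 0.7, 1.0])   # right-image point, same y-coordinate
x_diff = np.array([0.1, 0.9, 1.0])   # right-image point, different y-coordinate
```

The epipolar constraint holds for `x_same` and fails for `x_diff`, matching the conclusion $v = v'$.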

Stereo image rectification Slide credit: S. Lazebnik. S. Seitz

Slide credit: S. Lazebnik, S. Seitz. Stereo image rectification: reproject the image planes onto a common plane parallel to the line between the optical centers; pixel motion is horizontal after this transformation. The reprojection uses two homographies (3x3 transforms), one for each input image. C. Loop and Z. Zhang. Computing Rectifying Homographies for Stereo Vision. IEEE Conf. Computer Vision and Pattern Recognition, 1999.

Rectification example Slide credit: S. Lazebnik. S. Seitz

Why is rectification useful? It makes the correspondence problem easier and makes triangulation easy. Slide credit: S. Savarese

Slide credit: S. Lazebnik, S. Seitz. Depth from disparity: for cameras with optical centers $O$ and $O'$ separated by baseline $B$, with focal length $f$, a scene point $X$ at depth $z$ projects to $x$ and $x'$ with
$$\text{disparity} = x - x' = \frac{Bf}{z}.$$
Disparity is inversely proportional to depth.
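Inverting the relation above gives $z = fB/d$. A minimal helper (the function name and the zero-disparity convention are choices made here, not part of the slides):

```python
import numpy as np

def depth_from_disparity(disparity, focal_length, baseline):
    """Convert a disparity map to a depth map via z = f * B / d.
    Pixels with disparity <= 0 (no match / infinitely far) map to inf."""
    disparity = np.asarray(disparity, dtype=float)
    with np.errstate(divide="ignore"):
        return np.where(disparity > 0,
                        focal_length * baseline / disparity,
                        np.inf)
```

For example, with $f = 700$ pixels, $B = 0.1$ m, a disparity of 4 pixels corresponds to a depth of 17.5 m; the inverse relation also explains why depth resolution degrades for far-away points.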

Disparity maps: stereo pair, disparity map, and disparity map with occlusions. Slide credit: S. Savarese

Slide credit: S. Lazebnik Correspondence search Left Right scanline Matching cost disparity Slide a window along the right scanline and compare contents of that window with the reference window in the left image Matching cost: SSD or normalized correlation

Correspondence search Left Right scanline SSD Slide credit: S. Lazebnik

Correlation method: pick a window around $p(u, v)$ and build a vector $w$ from it. Slide the window along the corresponding line in image 2 and compute $w'$; keep sliding until $w \cdot w'$ is maximized. Slide credit: S. Savarese

Slide credit: S. Savarese Correlation Method Normalized Correlation; minimize:

Correspondence search Left Right scanline Norm. corr Slide credit: S. Lazebnik

Slide credit: S. Lazebnik, S. Seitz. Effect of window size: a smaller window (e.g. W = 3) gives more detail but more noise; a larger window (e.g. W = 20) gives smoother disparity maps but less detail.

Failures of correspondence search Textureless surfaces Occlusions, repetition Non-Lambertian surfaces, specularities Slide credit: S. Lazebnik. S. Seitz

Slide credit: S. Seitz Stereo results Data from University of Tsukuba Similar results on other images without ground truth Scene Ground truth

Slide credit: S. Lazebnik. S. Seitz Results with window search Data Window-based matching (best window size) Ground truth

Better methods exist... Graph cuts Ground truth Y. Boykov, O. Veksler, and R. Zabih, Fast Approximate Energy Minimization via Graph Cuts, PAMI 2001 For the latest and greatest: http://www.middlebury.edu/stereo/ Slide credit: S. Lazebnik. S. Seitz

Slide credit: S. Lazebnik. S. Seitz How can we improve window-based matching? The similarity constraint is local (each reference window is matched independently) Need to enforce non-local correspondence constraints

Slide credit: S. Lazebnik. S. Seitz Non-local constraints Uniqueness For any point in one image, there should be at most one matching point in the other image

Slide credit: S. Lazebnik. S. Seitz Non-local constraints Uniqueness For any point in one image, there should be at most one matching point in the other image Ordering Corresponding points should be in the same order in both views

Slide credit: S. Lazebnik, S. Seitz. Non-local constraints. Uniqueness: for any point in one image, there should be at most one matching point in the other image. Ordering: corresponding points should be in the same order in both views; however, the ordering constraint doesn't always hold.

Slide credit: S. Lazebnik. S. Seitz Non-local constraints Uniqueness For any point in one image, there should be at most one matching point in the other image Ordering Corresponding points should be in the same order in both views Smoothness We expect disparity values to change slowly (for the most part)

Slide credit: S. Lazebnik. S. Seitz Scanline stereo Try to coherently match pixels on the entire scanline Different scanlines are still optimized independently Left image Right image

Shortest paths for scan-line stereo: treat matching the left and right scanlines as a shortest-path problem, with a correspondence cost $C_{corr}$ for matched pixels and an occlusion cost $C_{occl}$ for pixels visible in only one image (left or right occlusion). This can be implemented with dynamic programming (Ohta & Kanade '85, Cox et al. '96). Slide credit: Y. Boykov; S. Lazebnik, S. Seitz
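The dynamic program above can be sketched for a single pair of scanlines. This is a sketch in the spirit of Ohta & Kanade '85 / Cox et al. '96, not their exact formulation: it aligns the two rows with a per-pixel match cost and a fixed occlusion penalty (the cost values are assumptions), and returns only the minimal total cost; a real system would also backtrack to extract disparities.

```python
import numpy as np

def scanline_dp(left_row, right_row, occ=0.5):
    """Minimal-cost alignment of one left/right scanline pair.
    D[i, j] = cheapest way to explain the first i left pixels and
    first j right pixels via matches and occlusions."""
    n, m = len(left_row), len(right_row)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    D[1:, 0] = occ * np.arange(1, n + 1)   # leading left occlusions
    D[0, 1:] = occ * np.arange(1, m + 1)   # leading right occlusions
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            match = (left_row[i - 1] - right_row[j - 1]) ** 2
            D[i, j] = min(D[i - 1, j - 1] + match,  # correspondence
                          D[i - 1, j] + occ,        # left pixel occluded
                          D[i, j - 1] + occ)        # right pixel occluded
    return D[n, m]
```

Identical scanlines align with zero cost; a scanline with one extra pixel on each end is explained by two occlusions.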

Stereo as energy minimization What defines a good stereo correspondence? 1. Match quality Want each pixel to find a good match in the other image 2. Smoothness If two pixels are adjacent, they should (usually) move about the same amount Slide credit: S. Seitz

Smoothness Slide credit: S. Savarese

Stereo matching as energy minimization. Given images $I_1$, $I_2$ and a disparity field $D$, compare windows $W_1(i)$ and $W_2(i + D(i))$:
$$E(D) = \underbrace{\sum_i \big(W_1(i) - W_2(i + D(i))\big)^2}_{\text{data term}} + \underbrace{\lambda \sum_{\text{neighbors } i,j} \rho\big(D(i) - D(j)\big)}_{\text{smoothness term}}$$
Energy functions of this form can be minimized using graph cuts. Y. Boykov, O. Veksler, and R. Zabih, Fast Approximate Energy Minimization via Graph Cuts, PAMI 2001. Slide credit: S. Lazebnik
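Evaluating this energy for a candidate labeling is straightforward; it is the minimization over all labelings that requires graph cuts or dynamic programming. A sketch for a single scanline, following the slide's $W_2(i + D(i))$ indexing convention and taking $\rho$ to be the squared difference (an assumption for simplicity):

```python
import numpy as np

def stereo_energy(left, right, disp, lam=1.0):
    """E(D) for one scanline: squared-difference data term comparing
    left[i] with right[i + D(i)], plus lam times a squared-difference
    smoothness term over neighboring pixels."""
    data = sum((left[i] - right[i + d]) ** 2 for i, d in enumerate(disp))
    smooth = sum((disp[i] - disp[i + 1]) ** 2 for i in range(len(disp) - 1))
    return data + lam * smooth
```

A labeling that matches perfectly and is constant has zero energy; perturbing one disparity adds both a data penalty and a smoothness penalty.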

Stereo matching as energy minimization

L. Zhang, B. Curless, and S. M. Seitz. Rapid Shape Acquisition Using Color Structured Light and Multi-pass Dynamic Programming. 3DPVT 2002 Slide credit: S. Lazebnik Active stereo with structured light Project structured light patterns onto the object Simplifies the correspondence problem Allows us to use only one camera camera projector

Active stereo with structured light L. Zhang, B. Curless, and S. M. Seitz. Rapid Shape Acquisition Using Color Structured Light and Multi-pass Dynamic Programming. 3DPVT 2002 Slide credit: S. Lazebnik

Active stereo with structured light http://en.wikipedia.org/wiki/structured-light_3d_scanner Slide credit: S. Lazebnik

Kinect: Structured infrared light http://bbzippo.wordpress.com/2010/11/28/kinect-in-infrared/ Slide credit: S. Lazebnik. S. Seitz

Slide credit: S. Lazebnik. S. Seitz Laser scanning Digital Michelangelo Project Levoy et al. http://graphics.stanford.edu/projects/mich/ Optical triangulation Project a single stripe of laser light Scan it across the surface of the object This is a very precise version of structured light scanning

Slide credit: S. Seitz Laser scanned models The Digital Michelangelo Project, Levoy et al.

Laser scanned models The Digital Michelangelo Project, Levoy et al. Slide credit: S. Seitz

Slide credit: S. Seitz Laser scanned models The Digital Michelangelo Project, Levoy et al.

Beyond two-view stereo Slide credit: S. Lazebnik

Slide credit: B. Freeman and A. Torralba. Structure from motion: multiple views of a static scene from different cameras, or one camera observing a moving object.

Slide credit: N. Snavely Volumetric stereo Scene Volume V Input Images (Calibrated) Goal: Determine occupancy, color of points in V

Discrete formulation: Voxel Coloring Discretized Scene Volume Input Images (Calibrated) Goal: Assign RGBA values to voxels in V photo-consistent with images Slide credit: N. Snavely

Complexity and computability: a discretized scene volume has $N^3$ voxels, each taking one of $C$ colors, so there are $C^{N^3}$ possible scenes. The photo-consistent scenes form a subset of all scenes, and the true scene is one of the photo-consistent ones. Slide credit: N. Snavely

K.N. Kutulakos and S.M. Seitz, A Theory of Shape and Space Carving, ICCV 1999. Slide credit: S. Lazebnik. Space Carving algorithm: initialize to a volume V containing the true scene; choose a voxel on the outside of the volume; project it into the visible input images; carve it away if it is not photo-consistent; repeat until convergence.

Space Carving results: African Violet. Input image (1 of 45) and reconstructions. Slide credit: S. Seitz

Structure from motion: given a set of corresponding points in two or more images, compute the camera parameters ($R_1, t_1$; $R_2, t_2$; $R_3, t_3$; all unknown) and the 3D point coordinates. Slide credit: N. Snavely

Slide credit: S. Lazebnik. Structure from motion. Given: $m$ images of $n$ fixed 3D points, $x_{ij} = P_i X_j$, $i = 1, \ldots, m$, $j = 1, \ldots, n$. Problem: estimate the $m$ projection matrices $P_i$ and the $n$ 3D points $X_j$ from the $mn$ correspondences $x_{ij}$.

Slide credit: S. Lazebnik. Structure from motion ambiguity: if we scale the entire scene by some factor $k$ and, at the same time, scale the camera matrices by the factor $1/k$, the projections of the scene points in the image remain exactly the same:
$$x = PX = \left(\tfrac{1}{k}P\right)(kX)$$
It is impossible to recover the absolute scale of the scene.
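The scale ambiguity can be demonstrated numerically: scale the 3D coordinates of the scene by $k$ and absorb the factor $1/k$ into the camera matrix, and the projections are unchanged. The random camera and point below are purely illustrative.

```python
import numpy as np

def project(P, X):
    """Pinhole projection of a homogeneous 3D point to inhomogeneous 2D."""
    x = P @ X
    return x[:2] / x[2]

rng = np.random.default_rng(1)
P = rng.random((3, 4))                # an arbitrary camera matrix
X = np.append(rng.random(3), 1.0)     # an arbitrary 3D point, homogeneous

k = 5.0
X_scaled = np.append(k * X[:3], 1.0)                 # scene scaled by k
P_scaled = P @ np.diag([1 / k, 1 / k, 1 / k, 1.0])   # camera absorbs 1/k
```

The two projections coincide exactly, so no image measurement can distinguish the scaled scene from the original.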

Slide credit: S. Lazebnik. Structure from motion ambiguity: if we scale the entire scene by some factor $k$ and, at the same time, scale the camera matrices by the factor $1/k$, the projections of the scene points in the image remain exactly the same. More generally: if we transform the scene using a transformation $Q$ and apply the inverse transformation to the camera matrices, then the images do not change:
$$x = PX = (PQ^{-1})(QX)$$

With no constraints on the camera calibration matrix or on the scene, we get a projective reconstruction; additional information is needed to upgrade the reconstruction to affine, similarity, or Euclidean. Slide credit: S. Lazebnik. Types of ambiguity:
- Projective, 15 dof: $\begin{pmatrix} A & t \\ v^T & v \end{pmatrix}$; preserves intersection and tangency.
- Affine, 12 dof: $\begin{pmatrix} A & t \\ 0^T & 1 \end{pmatrix}$; preserves parallelism and volume ratios.
- Similarity, 7 dof: $\begin{pmatrix} sR & t \\ 0^T & 1 \end{pmatrix}$; preserves angles and ratios of lengths.
- Euclidean, 6 dof: $\begin{pmatrix} R & t \\ 0^T & 1 \end{pmatrix}$; preserves angles and lengths.

Slide credit: S. Lazebnik. Projective ambiguity: $Q_P = \begin{pmatrix} A & t \\ v^T & v \end{pmatrix}$, with $x = PX = (PQ_P^{-1})(Q_P X)$.

Projective ambiguity Slide credit: S. Lazebnik

Slide credit: S. Lazebnik. Affine ambiguity: $Q_A = \begin{pmatrix} A & t \\ 0^T & 1 \end{pmatrix}$, with $x = PX = (PQ_A^{-1})(Q_A X)$.

Affine ambiguity Slide credit: S. Lazebnik

Slide credit: S. Lazebnik. Similarity ambiguity: $Q_S = \begin{pmatrix} sR & t \\ 0^T & 1 \end{pmatrix}$, with $x = PX = (PQ_S^{-1})(Q_S X)$.

Similarity ambiguity Slide credit: S. Lazebnik

Slide credit: S. Lazebnik. Structure from motion: let's start with affine cameras (the math is easier), for which the projection center is at infinity.

Recall: orthographic projection, a special case of perspective projection in which the distance from the center of projection to the image plane is infinite. Projection matrix (image from world coordinates):
$$\begin{pmatrix} x \\ y \\ 1 \end{pmatrix} \sim \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}\begin{pmatrix} X \\ Y \\ Z \\ 1 \end{pmatrix}$$
Slide credit: S. Seitz

Slide credit: S. Lazebnik Affine cameras Orthographic Projection Parallel Projection

Affine cameras. A general affine camera combines the effects of an affine transformation of the 3D space, orthographic projection, and an affine transformation of the image:
$$P = [3\times3\ \text{affine}]\begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}[4\times4\ \text{affine}] = \begin{pmatrix} a_{11} & a_{12} & a_{13} & b_1 \\ a_{21} & a_{22} & a_{23} & b_2 \\ 0 & 0 & 0 & 1 \end{pmatrix} = \begin{pmatrix} A & b \\ 0 & 1 \end{pmatrix}$$
Affine projection is a linear mapping plus a translation in inhomogeneous coordinates:
$$x = \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \end{pmatrix}\begin{pmatrix} X \\ Y \\ Z \end{pmatrix} + \begin{pmatrix} b_1 \\ b_2 \end{pmatrix} = AX + b,$$
where $b$ is the projection of the world origin. Slide credit: S. Lazebnik

Affine structure from motion. Given: $m$ images of $n$ fixed 3D points, $x_{ij} = A_i X_j + b_i$, $i = 1, \ldots, m$, $j = 1, \ldots, n$. Problem: use the $mn$ correspondences $x_{ij}$ to estimate the $m$ projection matrices $A_i$ and translation vectors $b_i$, and the $n$ points $X_j$. The reconstruction is defined up to an arbitrary affine transformation $Q$ (12 degrees of freedom):
$$\begin{pmatrix} A & b \\ 0 & 1 \end{pmatrix} \rightarrow \begin{pmatrix} A & b \\ 0 & 1 \end{pmatrix} Q^{-1}, \qquad \begin{pmatrix} X \\ 1 \end{pmatrix} \rightarrow Q\begin{pmatrix} X \\ 1 \end{pmatrix}$$
We have $2mn$ knowns and $8m + 3n$ unknowns (minus 12 dof for the affine ambiguity), so we must have $2mn \geq 8m + 3n - 12$. For two views, we need four point correspondences. Slide credit: S. Lazebnik

Affine structure from motion. Centering: subtract the centroid of the image points in each view,
$$\hat{x}_{ij} = x_{ij} - \frac{1}{n}\sum_{k=1}^{n} x_{ik} = A_i X_j + b_i - \frac{1}{n}\sum_{k=1}^{n}\big(A_i X_k + b_i\big) = A_i\Big(X_j - \frac{1}{n}\sum_{k=1}^{n} X_k\Big) = A_i \hat{X}_j.$$
For simplicity, assume that the origin of the world coordinate system is at the centroid of the 3D points. After centering, each normalized point $\hat{x}_{ij}$ is related to the 3D point $X_j$ by $\hat{x}_{ij} = A_i X_j$. Slide credit: S. Lazebnik

C. Tomasi and T. Kanade. Shape and motion from image streams under orthography: A factorization method. IJCV, 9(2):137-154, November 1992. Slide credit: S. Lazebnik. Affine structure from motion: create a $2m \times n$ data (measurement) matrix
$$D = \begin{pmatrix} \hat{x}_{11} & \hat{x}_{12} & \cdots & \hat{x}_{1n} \\ \hat{x}_{21} & \hat{x}_{22} & \cdots & \hat{x}_{2n} \\ \vdots & \vdots & & \vdots \\ \hat{x}_{m1} & \hat{x}_{m2} & \cdots & \hat{x}_{mn} \end{pmatrix}$$
with one row pair per camera ($2m$ rows) and one column per point ($n$ columns).

Affine structure from motion. The $2m \times n$ measurement matrix factors as
$$D = \begin{pmatrix} \hat{x}_{11} & \cdots & \hat{x}_{1n} \\ \vdots & & \vdots \\ \hat{x}_{m1} & \cdots & \hat{x}_{mn} \end{pmatrix} = \begin{pmatrix} A_1 \\ A_2 \\ \vdots \\ A_m \end{pmatrix}\begin{pmatrix} X_1 & X_2 & \cdots & X_n \end{pmatrix} = MS,$$
with cameras $M$ ($2m \times 3$) and points $S$ ($3 \times n$). The measurement matrix $D = MS$ must have rank 3, because the motion matrix $M$ and the structure matrix $S$ each have rank at most 3. C. Tomasi and T. Kanade. Shape and motion from image streams under orthography: A factorization method. IJCV, 9(2):137-154, November 1992. Slide credit: S. Lazebnik
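The rank-3 claim is easy to verify on synthetic data: build a noise-free centered measurement matrix as a product of random motion and structure matrices, and all singular values beyond the third vanish.

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 4, 10                    # 4 affine cameras, 10 points (arbitrary sizes)

M = rng.random((2 * m, 3))      # motion matrix: stacked 2x3 camera blocks A_i
S = rng.random((3, n))          # structure matrix: centered 3D points
D = M @ S                       # noise-free centered measurements (2m x n)

# Singular values beyond the third are (numerically) zero: rank(D) = 3.
sv = np.linalg.svd(D, compute_uv=False)
```

With measurement noise, $D$ becomes full rank but stays close to a rank-3 matrix, which is why the factorization method truncates the SVD at rank 3.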

Factorizing the measurement matrix Slide credit: M. Hebert

Slide credit: M. Hebert Factorizing the measurement matrix Singular value decomposition of D:

Slide credit: M. Hebert Factorizing the measurement matrix Singular value decomposition of D:

Factorizing the measurement matrix Obtaining a factorization from SVD: Slide credit: M. Hebert

Factorizing the measurement matrix. Obtaining a factorization from the SVD: this decomposition minimizes $\|D - MS\|^2$. Slide credit: M. Hebert

Affine ambiguity: the decomposition is not unique. We get the same $D$ by using any invertible $3\times3$ matrix $C$ and applying the transformations $M \rightarrow MC$, $S \rightarrow C^{-1}S$. That is because we have only an affine reconstruction, and we have not enforced any Euclidean constraints (like forcing the image axes to be perpendicular, for example). Slide credit: M. Hebert

Eliminating the affine ambiguity. Orthographic cameras have perpendicular image axes with scale 1: $a_1 \cdot a_2 = 0$ and $|a_1|^2 = |a_2|^2 = 1$. This translates into $3m$ equations in $L = CC^T$:
$$A_i L A_i^T = \mathrm{Id}, \quad i = 1, \ldots, m.$$
Solve for $L$, recover $C$ from $L$ by Cholesky decomposition ($L = CC^T$), and update $M$ and $S$: $M \rightarrow MC$, $S \rightarrow C^{-1}S$. Slide credit: M. Hebert

Algorithm summary. Given $m$ images and $n$ features $x_{ij}$:
- For each image $i$, center the feature coordinates.
- Construct a $2m \times n$ measurement matrix $D$: column $j$ contains the projection of point $j$ in all views; row $i$ contains one coordinate of the projections of all $n$ points in image $i$.
- Factorize $D$: compute the SVD $D = UWV^T$; create $U_3$ from the first 3 columns of $U$, $V_3$ from the first 3 columns of $V$, and $W_3$ from the upper-left $3\times3$ block of $W$.
- Create the motion and shape matrices: $M = U_3 W_3^{1/2}$ and $S = W_3^{1/2} V_3^T$ (or $M = U_3$ and $S = W_3 V_3^T$).
- Eliminate the affine ambiguity.
Slide credit: M. Hebert
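The algorithm summary above can be sketched with NumPy. This is a minimal sketch of the centering and factorization steps; the final metric upgrade (eliminating the affine ambiguity) is omitted, so the result is defined only up to an affine transformation.

```python
import numpy as np

def tomasi_kanade(points):
    """Affine structure from motion by factorization (Tomasi & Kanade '92).
    points: array of shape (m, n, 2), point j tracked in image i.
    Returns motion M (2m x 3) and shape S (3 x n), up to an affine
    ambiguity M -> MC, S -> C^-1 S."""
    m, n, _ = points.shape
    # Center: subtract each image's centroid from its measurements.
    centered = points - points.mean(axis=1, keepdims=True)
    # Build the 2m x n measurement matrix D (x-row then y-row per camera).
    D = centered.transpose(0, 2, 1).reshape(2 * m, n)
    # Rank-3 factorization via SVD: D ~ U3 W3 V3^T, M = U3 W3^(1/2), etc.
    U, w, Vt = np.linalg.svd(D, full_matrices=False)
    U3, W3, V3t = U[:, :3], np.diag(w[:3]), Vt[:3, :]
    M = U3 @ np.sqrt(W3)
    S = np.sqrt(W3) @ V3t
    return M, S
```

On noise-free synthetic affine projections, the product $MS$ reproduces the centered measurement matrix exactly.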

Reconstruction results C. Tomasi and T. Kanade. Shape and motion from image streams under orthography: A factorization method. IJCV, 9(2):137-154, November 1992. Slide credit: S. Lazebnik

Slide credit: S. Lazebnik Dealing with missing data So far, we have assumed that all points are visible in all views In reality, the measurement matrix typically looks something like this: points cameras

Dealing with missing data Possible solution: decompose matrix into dense sub-blocks, factorize each sub-block, and fuse the results Finding dense maximal sub-blocks of the matrix is NP-complete (equivalent to finding maximal cliques in a graph) Incremental bilinear refinement (1) Perform factorization on a dense sub-block (2) Solve for a new 3D point visible by at least two known cameras (linear least squares) (3) Solve for a new camera that sees at least three known 3D points (linear least squares) F. Rothganger, S. Lazebnik, C. Schmid, and J. Ponce. Segmenting, Modeling, and Matching Video Clips Containing Multiple Moving Objects. IEEE PAMI 2007. Slide credit: S. Lazebnik

Projective structure from motion. Given: $m$ images of $n$ fixed 3D points, $z_{ij} x_{ij} = P_i X_j$, $i = 1, \ldots, m$, $j = 1, \ldots, n$. Problem: estimate the $m$ projection matrices $P_i$ and the $n$ 3D points $X_j$ from the $mn$ correspondences $x_{ij}$. Slide credit: S. Lazebnik

Slide credit: S. Lazebnik. Projective structure from motion. Given: $m$ images of $n$ fixed 3D points, $z_{ij} x_{ij} = P_i X_j$, $i = 1, \ldots, m$, $j = 1, \ldots, n$. Problem: estimate the $m$ projection matrices $P_i$ and the $n$ 3D points $X_j$ from the $mn$ correspondences $x_{ij}$. With no calibration info, cameras and points can only be recovered up to a 4x4 projective transformation $Q$: $X \rightarrow QX$, $P \rightarrow PQ^{-1}$. We can solve for structure and motion when $2mn \geq 11m + 3n - 15$. For two cameras, at least 7 points are needed.

Projective SfM: two-camera case. Compute the fundamental matrix $F$ between the two views. First camera matrix: $[I \mid 0]$. Second camera matrix: $[A \mid b]$, where $b$ is the epipole ($F^T b = 0$) and $A = [b_\times]F$. (F&P sec. 13.3.1.) Slide credit: S. Lazebnik
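The canonical camera pair above can be computed directly from $F$. A sketch (the epipole is found as the null vector of $F^T$ via SVD; the recovered pair is one representative of the projective equivalence class):

```python
import numpy as np

def cameras_from_F(F):
    """Canonical projective camera pair from a fundamental matrix:
    P1 = [I | 0], P2 = [[b]_x F | b], with b the epipole (F^T b = 0)."""
    # b spans the null space of F^T: right singular vector for sigma = 0.
    _, _, Vt = np.linalg.svd(F.T)
    b = Vt[-1]
    bx = np.array([[0.0, -b[2], b[1]],
                   [b[2], 0.0, -b[0]],
                   [-b[1], b[0], 0.0]])
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = np.hstack([bx @ F, b.reshape(3, 1)])
    return P1, P2
```

Projections of any 3D point by the returned pair satisfy the epipolar constraint $x_2^T F x_1 = 0$, confirming that the pair is consistent with $F$.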

Sequential structure from motion. Initialize motion from two images using the fundamental matrix; initialize structure by triangulation. For each additional view, determine the projection matrix of the new camera using all the known 3D points that are visible in its image (a calibration problem). Slide credit: S. Lazebnik

Sequential structure from motion. Initialize motion from two images using the fundamental matrix; initialize structure by triangulation. For each additional view, determine the projection matrix of the new camera using all the known 3D points that are visible in its image (calibration), then refine and extend the structure: compute new 3D points and re-optimize existing points that are also seen by this camera (triangulation). Slide credit: S. Lazebnik

Slide credit: S. Lazebnik. Sequential structure from motion. Initialize motion from two images using the fundamental matrix; initialize structure by triangulation. For each additional view, determine the projection matrix of the new camera using all the known 3D points that are visible in its image (calibration), then refine and extend the structure: compute new 3D points and re-optimize existing points that are also seen by this camera (triangulation). Finally, refine structure and motion together: bundle adjustment.

Bundle adjustment: a non-linear method for refining structure and motion by minimizing the reprojection error
$$E(P, X) = \sum_{i=1}^{m}\sum_{j=1}^{n} D\big(x_{ij}, P_i X_j\big)^2$$
Slide credit: S. Lazebnik
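The cost being minimized can be written out directly. This sketch only evaluates $E(P, X)$ for given cameras, points, and observations; an actual bundle adjuster would minimize it over all parameters with a non-linear least-squares solver such as Levenberg-Marquardt.

```python
import numpy as np

def reprojection_error(cameras, points3d, observations):
    """E(P, X) = sum_ij D(x_ij, P_i X_j)^2: total squared image distance
    between each observed point x_ij and the projection of X_j by P_i.
    observations[i][j] is the observed 2D point of X_j in image i."""
    err = 0.0
    for i, P in enumerate(cameras):
        for j, X in enumerate(points3d):
            proj = P @ np.append(X, 1.0)   # project, homogeneous
            proj = proj[:2] / proj[2]      # perspective division
            err += np.sum((observations[i][j] - proj) ** 2)
    return err
```

Exact observations give zero error; perturbing one observation by 0.1 pixel adds 0.01 to the cost, which is the quantity bundle adjustment drives down.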

Slide credit: S. Lazebnik. Self-calibration (auto-calibration) is the process of determining intrinsic camera parameters directly from uncalibrated images. For example, when the images are acquired by a single moving camera, we can use the constraint that the intrinsic parameter matrix remains fixed for all the images: compute an initial projective reconstruction and find a 3D projective transformation matrix $Q$ such that all camera matrices take the form $P_i = K[R_i \mid t_i]$. We can also use constraints on the form of the calibration matrix (e.g. zero skew) or use vanishing points.

Summary: 3D geometric vision Single-view geometry The pinhole camera model Variation: orthographic projection The perspective projection matrix Intrinsic parameters Extrinsic parameters Calibration Multiple-view geometry Triangulation The epipolar constraint Essential matrix and fundamental matrix Stereo Binocular, multi-view Structure from motion Reconstruction ambiguity Affine SFM Projective SFM Slide credit: S. Lazebnik