Epipolar Geometry and Stereo Vision


Epipolar Geometry and Stereo Vision Computer Vision Jia-Bin Huang, Virginia Tech Many slides from S. Seitz and D. Hoiem

Last class: Image Stitching. Two images with rotation/zoom but no translation.

This class: Two-View Geometry. Epipolar geometry: relates cameras from two positions. Stereo depth estimation: recover depth from two images.

Depth from Stereo. Goal: recover depth by finding the image coordinate x' that corresponds to x. (Figure: scene point X at depth z projects to x and x' in two cameras with centers C and C', focal length f, and baseline B.)

Depth from Stereo. Goal: recover depth by finding the image coordinate x' that corresponds to x. Sub-problems: 1. Calibration: what is the relation of the two cameras? 2. Correspondence: where is the matching point x'?

Correspondence Problem. Two images taken from cameras with different intrinsic and extrinsic parameters. How do we match a point in the first image to a point in the second? How can we constrain our search?

Key idea: Epipolar constraint

Key idea: Epipolar constraint. Potential matches for x have to lie on the corresponding epipolar line l'. Potential matches for x' have to lie on the corresponding epipolar line l.

Epipolar geometry: notation. Baseline: line connecting the two camera centers. Epipoles: intersections of the baseline with the image planes (equivalently, projections of the other camera center). Epipolar plane: a plane containing the baseline (a 1D family).

Epipolar geometry: notation. Baseline: line connecting the two camera centers. Epipoles: intersections of the baseline with the image planes (equivalently, projections of the other camera center). Epipolar plane: a plane containing the baseline (a 1D family). Epipolar lines: intersections of an epipolar plane with the two image planes (they always come in corresponding pairs).

Example: Converging cameras

Example: Motion parallel to image plane

Example: Forward motion What would the epipolar lines look like if the camera moves directly forward?

Example: Forward motion. The epipole has the same coordinates in both images. Points move along lines radiating from e: the focus of expansion.

Epipolar constraint: Calibrated case. Given the intrinsic parameters of the cameras: 1. Convert to normalized coordinates by pre-multiplying all points with the inverse of the calibration matrix, and set the first camera's coordinate system to world coordinates: x̂ = K^{-1} x and x̂' = K'^{-1} x', where x, x' are the 2D pixel coordinates (homogeneous), x̂, x̂' are homogeneous 2D points (the 3D rays towards X), and X is the 3D scene point (X' is the same point in the second camera's 3D coordinates).
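As a tiny sketch of step 1 (the intrinsics and pixel values below are made up for illustration, not taken from the slides):

% Convert a homogeneous pixel coordinate x = [u; v; 1] to normalized coordinates
K = [800 0 320; 0 800 240; 0 0 1];   % hypothetical intrinsic matrix
x = [350; 260; 1];                   % hypothetical pixel coordinate (homogeneous)
xhat = K \ x;                        % same as inv(K)*x: the ray towards X in camera coordinates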

Epipolar constraint: Calibrated case. Given the intrinsic parameters of the cameras: 1. Convert to normalized coordinates by pre-multiplying all points with the inverse of the calibration matrix, and set the first camera's coordinate system to world coordinates: x̂ = K^{-1} x, x̂' = K'^{-1} x'. 2. Define some R and t that relate X to X', so that, up to a scale factor, x̂' ≅ R x̂ + t.

Epipolar constraint: Calibrated case. With x̂ = K^{-1} x and x̂' = K'^{-1} x' ≅ R x̂ + t:
t × x̂' = t × (R x̂ + t) = t × (R x̂)
x̂' · (t × x̂') = 0, hence x̂' · [t × (R x̂)] = 0 (because x̂', R x̂, and t are coplanar).

Matrix representation of the cross product: t × v can be written as [t]_× v, where [t]_× is a skew-symmetric 3×3 matrix. The epipolar constraint x̂' · [t × (R x̂)] = 0 then becomes x̂'^T E x̂ = 0 with E = [t]_× R, the Essential Matrix (Longuet-Higgins, 1981).

Properties of the Essential matrix (x̂'^T E x̂ = 0 with E = [t]_× R, where [t]_× is the skew-symmetric cross-product matrix). Dropping the hats below to simplify notation:
E x is the epipolar line associated with x' (l' = E x)
E^T x' is the epipolar line associated with x (l = E^T x')
E e = 0 and E^T e' = 0
E is singular (rank two)
E has five degrees of freedom (3 for R, 2 for t, because it is defined only up to scale).

The Fundamental Matrix. Without knowing K and K', we can define a similar relation using the unknown normalized coordinates: x̂'^T E x̂ = 0 with x̂ = K^{-1} x and x̂' = K'^{-1} x', which gives x'^T F x = 0 with F = K'^{-T} E K^{-1}, the Fundamental Matrix (Faugeras and Luong, 1992).

Properties of the Fundamental matrix (x'^T F x = 0 with F = K'^{-T} E K^{-1}):
F x is the epipolar line associated with x' (l' = F x)
F^T x' is the epipolar line associated with x (l = F^T x')
F e = 0 and F^T e' = 0
F is singular (rank two): det(F) = 0
F has seven degrees of freedom: 9 entries, but defined only up to scale and subject to det(F) = 0.
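A rough, self-contained sketch of these relations (all numbers are hypothetical, chosen only for illustration): build E = [t]_× R from an assumed relative pose, then F from assumed intrinsics.

skew = @(t) [0 -t(3) t(2); t(3) 0 -t(1); -t(2) t(1) 0];   % [t]_x, so skew(t)*v == cross(t, v)
R  = eye(3);                          % hypothetical rotation between the cameras
t  = [0.5; 0; 0.1];                   % hypothetical translation (baseline direction)
E  = skew(t) * R;                     % essential matrix; rank(E) == 2
K  = [800 0 320; 0 800 240; 0 0 1];   % hypothetical intrinsics, camera 1
Kp = [750 0 310; 0 750 250; 0 0 1];   % hypothetical intrinsics, camera 2
F  = inv(Kp)' * E * inv(K);           % fundamental matrix, F = K'^-T E K^-1
% For a true correspondence x <-> x' (homogeneous pixels), x'^T * F * x is ~0,
% and F*x is the epipolar line l' in the second image.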

Estimating the Fundamental Matrix. 8-point algorithm: least-squares solution using SVD on equations from 8 pairs of correspondences; then enforce the det(F) = 0 constraint using SVD on F. 7-point algorithm: use least squares to solve for the null space (two vectors) using SVD and 7 pairs of correspondences; then solve for the linear combination of null-space vectors that satisfies det(F) = 0. Minimize reprojection error: non-linear least squares. Note: estimation of F (or E) is degenerate for a planar scene.

8-point algorithm. 1. Solve a system of homogeneous linear equations. a. Write down the system of equations: x'^T F x = 0 gives, for each correspondence (u, v) ↔ (u', v'),
u'u f11 + u'v f12 + u' f13 + v'u f21 + v'v f22 + v' f23 + u f31 + v f32 + f33 = 0.
Stacking one row per correspondence gives A f = 0, where row i of A is [u'_i u_i, u'_i v_i, u'_i, v'_i u_i, v'_i v_i, v'_i, u_i, v_i, 1] and f = (f11, f12, f13, f21, ..., f33)^T.

8-point algorithm. 1. Solve a system of homogeneous linear equations. a. Write down the system of equations. b. Solve for f from A f = 0 using SVD. Matlab: [U, S, V] = svd(A); f = V(:, end); F = reshape(f, [3 3])';

Need to enforce singularity constraint

8-point algorithm. 1. Solve a system of homogeneous linear equations. a. Write down the system of equations. b. Solve for f from A f = 0 using SVD. Matlab: [U, S, V] = svd(A); f = V(:, end); F = reshape(f, [3 3])'; 2. Resolve the det(F) = 0 constraint using SVD. Matlab: [U, S, V] = svd(F); S(3,3) = 0; F = U*S*V';

8-point algorithm. 1. Solve a system of homogeneous linear equations. a. Write down the system of equations. b. Solve for f from A f = 0 using SVD. 2. Resolve the det(F) = 0 constraint by SVD. Notes: use RANSAC to deal with outliers (sample 8 points at a time). How to test for outliers? For example, check whether |x'^T F x| is below a threshold. Solve in normalized coordinates: mean = 0, standard deviation ≈ 1 in each coordinate, just like when estimating the homography for stitching.
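A compact sketch of the normalized 8-point algorithm following the steps above (the function and variable names are mine; pts1 and pts2 are assumed to be N×2 arrays of matched pixel coordinates with N ≥ 8, and in practice this would be wrapped in a RANSAC loop as noted):

function F = eight_point(pts1, pts2)
  % Normalized 8-point algorithm (sketch). pts1, pts2: N x 2 matched points.
  [x1, T1] = normalize_pts(pts1);          % translate/scale: mean 0, avg distance sqrt(2)
  [x2, T2] = normalize_pts(pts2);
  N  = size(x1, 2);
  u  = x1(1,:)';  v  = x1(2,:)';
  up = x2(1,:)';  vp = x2(2,:)';
  A = [up.*u, up.*v, up, vp.*u, vp.*v, vp, u, v, ones(N,1)];
  [~, ~, V] = svd(A);                      % solve A f = 0
  F = reshape(V(:, end), [3 3])';
  [U, S, V] = svd(F);  S(3,3) = 0;         % enforce rank 2 (det(F) = 0)
  F = U * S * V';
  F = T2' * F * T1;                        % de-normalize back to pixel coordinates
end

function [xn, T] = normalize_pts(pts)
  % Zero-mean, scaled so the average distance from the origin is sqrt(2).
  c = mean(pts, 1);
  d = mean(sqrt(sum((pts - c).^2, 2)));
  s = sqrt(2) / d;
  T = [s 0 -s*c(1); 0 s -s*c(2); 0 0 1];
  xn = T * [pts'; ones(1, size(pts, 1))];  % 3 x N normalized homogeneous points
end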

Comparison of homography estimation and the 8-point algorithm. Assume we have matched points x ↔ x' with outliers.
Homography (no translation between cameras), correspondence relation x' ≅ H x:
1. Normalize image coordinates: x̃ = T x, x̃' = T' x'.
2. RANSAC with 4 points; solution via SVD.
3. De-normalize: H = T'^{-1} H̃ T.
Fundamental matrix (translation present), correspondence relation x'^T F x = 0:
1. Normalize image coordinates: x̃ = T x, x̃' = T' x'.
2. RANSAC with 8 points; initial solution via SVD; enforce det(F̃) = 0 by SVD.
3. De-normalize: F = T'^T F̃ T.

7-point algorithm: faster (it needs fewer points) and can be more robust (fewer points per sample), but you also need to check for degenerate cases.

Gold standard algorithm. Use the 8-point algorithm to get an initial value of F. Use F to solve for P and P' (discussed later). Jointly solve for the 3D points X and for F to minimize the squared re-projection error. See Algorithm 11.2 and Algorithm 11.3 in HZ (pages 284-285) for details.

Comparison of estimation algorithms:
             8-point      Normalized 8-point   Nonlinear least squares
Av. Dist. 1  2.33 pixels  0.92 pixels          0.86 pixels
Av. Dist. 2  2.18 pixels  0.85 pixels          0.80 pixels

We can get projection matrices P and P' up to a projective ambiguity (in the calibrated case each would factor as K times a rotation and translation):
P = [I | 0],  P' = [ [e']_× F | e' ],  where e' is the epipole in the second image (e'^T F = 0). See HZ pp. 255-256.
Code: function P = vgg_P_from_F(F); [U,S,V] = svd(F); e = U(:,3); P = [-vgg_contreps(e)*F, e];
If we know the intrinsic matrices (K and K'), we can resolve the ambiguity.

Let's recap: Fundamental matrix song

Moving on to stereo: fuse a calibrated binocular stereo pair to produce a depth image. (Figure: image 1, image 2, dense depth map.) Many of these slides adapted from Steve Seitz and Lana Lazebnik.

Basic stereo matching algorithm. For each pixel in the first image: find the corresponding epipolar line in the right image, search along the epipolar line and pick the best match, then triangulate the matches to get depth information. Simplest case: epipolar lines are scanlines. When does this happen?

Simplest Case: Parallel images Image planes of cameras are parallel to each other and to the baseline Camera centers are at same height Focal lengths are the same

Simplest Case: Parallel images Image planes of cameras are parallel to each other and to the baseline Camera centers are at same height Focal lengths are the same Then, epipolar lines fall along the horizontal scan lines of the images

Simplest Case: Parallel images. Here R = I and t = (T, 0, 0), so the essential matrix is
E = [t]_× R = [0 0 0; 0 0 -T; 0 T 0].
The epipolar constraint x'^T E x = 0, with x = (u, v, 1) and x' = (u', v', 1), reduces to T v = T v', i.e. the y-coordinates of corresponding points are the same and the epipolar lines are horizontal scanlines.

Depth from disparity. For parallel cameras with camera centers O and O', focal length f, baseline B, and a scene point X at depth z, similar triangles give
disparity = x - x' = B f / z,
so disparity is inversely proportional to depth.
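As a tiny illustration of this relation (the focal length, baseline, and disparity values below are made up, not data from the slides):

% depth = f * B / disparity, applied elementwise to a disparity map
f = 700;                              % focal length in pixels (hypothetical)
B = 0.12;                             % baseline in meters (hypothetical)
disparity = [40 42; 35 28];           % toy disparity map in pixels
depth = f * B ./ max(disparity, eps); % guard against division by zero
% e.g. a disparity of 40 px gives depth = 700 * 0.12 / 40 = 2.1 m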

Stereo image rectification

Stereo image rectification. Reproject the image planes onto a common plane parallel to the line between camera centers; pixel motion is horizontal after this transformation. Two homographies (3×3 transforms), one for each input image. C. Loop and Z. Zhang. Computing Rectifying Homographies for Stereo Vision. IEEE Conf. Computer Vision and Pattern Recognition, 1999.

Rectification example

Basic stereo matching algorithm. If necessary, rectify the two stereo images to transform epipolar lines into scanlines. For each pixel x in the first image: find the corresponding epipolar scanline in the right image, search the scanline and pick the best match x', compute the disparity x - x', and set depth(x) = f B / (x - x').

Correspondence search. (Figure: left and right images with a scanline; matching cost plotted against disparity.) Slide a window along the right scanline and compare the contents of that window with the reference window in the left image. Matching cost: SSD or normalized correlation.

Correspondence search. (Figure: matching cost along the right scanline using SSD.)

Correspondence search. (Figure: matching cost along the right scanline using normalized correlation.)
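A minimal SSD block-matching sketch for rectified grayscale images (the function name, window size, and disparity range are my own assumptions for illustration, not code from the slides):

function D = block_match(I1, I2, half_w, max_d)
  % SSD block matching on rectified images: I1 = left, I2 = right.
  % half_w: half window size; max_d: maximum disparity searched.
  I1 = double(I1);  I2 = double(I2);
  [h, w] = size(I1);
  D = zeros(h, w);
  for y = 1+half_w : h-half_w
    for x = 1+half_w : w-half_w
      ref = I1(y-half_w:y+half_w, x-half_w:x+half_w);     % reference window (left image)
      best_cost = inf;  best_d = 0;
      for d = 0 : min(max_d, x-half_w-1)                  % stay inside the right image
        cand = I2(y-half_w:y+half_w, x-d-half_w:x-d+half_w);
        cost = sum((ref(:) - cand(:)).^2);                % SSD matching cost
        if cost < best_cost
          best_cost = cost;  best_d = d;
        end
      end
      D(y, x) = best_d;                                   % disparity with lowest cost
    end
  end
end

The candidate window is shifted left (column x - d) because, for parallel cameras, the matching point in the right image lies on the same row at a smaller column.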

Effect of window size (example disparity maps with W = 3 and W = 20). Smaller window: more detail, but more noise. Larger window: smoother disparity maps, but less detail and failures near boundaries.

Failures of correspondence search Textureless surfaces Occlusions, repetition Non-Lambertian surfaces, specularities

Results with window search. (Figure: input data, window-based matching result, ground truth.)

How can we improve window-based matching? So far, matches are independent for each point What constraints or priors can we add?

Stereo constraints/priors Uniqueness For any point in one image, there should be at most one matching point in the other image

Stereo constraints/priors Uniqueness For any point in one image, there should be at most one matching point in the other image Ordering Corresponding points should be in the same order in both views

Stereo constraints/priors. Uniqueness: for any point in one image, there should be at most one matching point in the other image. Ordering: corresponding points should be in the same order in both views; however, the ordering constraint doesn't always hold.

Priors and constraints Uniqueness For any point in one image, there should be at most one matching point in the other image Ordering Corresponding points should be in the same order in both views Smoothness We expect disparity values to change slowly (for the most part)

Stereo matching as energy minimization. Given images I_1, I_2 and a disparity map D, with a window W_1(i) around pixel i in I_1 and W_2(i + D(i)) around its match in I_2:
E(D) = E_data(D; I_1, I_2) + E_smooth(D)
E_data = Σ_i ( W_1(i) - W_2(i + D(i)) )^2
E_smooth = Σ_{neighbors i,j} ρ( D(i) - D(j) ), where ρ penalizes disparity differences between neighboring pixels.
Energy functions of this form can be minimized using graph cuts. Y. Boykov, O. Veksler, and R. Zabih, Fast Approximate Energy Minimization via Graph Cuts, PAMI 2001.
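A rough sketch of evaluating an energy of this shape for a candidate disparity map (my own simplifications: a pixelwise squared-difference data term instead of the window-based one above, squared differences over 4-connected neighbors for smoothness, and an arbitrary weight lambda); a real solver such as graph cuts would minimize this over all D rather than just evaluate it:

function E = stereo_energy(I1, I2, D, lambda)
  % E(D) = E_data + lambda * E_smooth for left image I1, right image I2,
  % and an integer disparity map D of the same size as the images.
  I1 = double(I1);  I2 = double(I2);
  [h, w] = size(I1);
  E_data = 0;
  for y = 1:h
    for x = 1:w
      xs = max(1, x - D(y, x));                  % matching column in the right image
      E_data = E_data + (I1(y, x) - I2(y, xs))^2;
    end
  end
  E_smooth = sum(sum((D(:, 1:end-1) - D(:, 2:end)).^2)) + ...   % horizontal neighbors
             sum(sum((D(1:end-1, :) - D(2:end, :)).^2));        % vertical neighbors
  E = E_data + lambda * E_smooth;
end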

Many of these constraints can be encoded in an energy function and solved using graph cuts. (Figure: result before, with graph cuts, and ground truth.) Y. Boykov, O. Veksler, and R. Zabih, Fast Approximate Energy Minimization via Graph Cuts, PAMI 2001. For the latest and greatest: http://www.middlebury.edu/stereo/

Things to remember. Epipolar geometry: epipoles are the intersections of the baseline with the image planes; the matching point in the second image lies on a line passing through its epipole; the fundamental matrix maps a point in one image to a line (its epipolar line) in the other; we can solve for F given corresponding points (e.g., interest points); we can recover canonical camera matrices from F (up to a projective ambiguity). Stereo depth estimation: estimate disparity by finding corresponding points along scanlines; depth is inversely proportional to disparity.

Next class: structure from motion