Stereo II CSE 576. Ali Farhadi. Several slides from Larry Zitnick and Steve Seitz

Camera parameters. A camera is described by several parameters: the translation T of the optical center from the origin of world coordinates, the rotation R of the image plane, the focal length f, the principal point (x_c, y_c), and the pixel size (s_x, s_y). The first two (blue on the slide) are the extrinsics; the rest (red) are the intrinsics. Projection equation: the projection matrix models the cumulative effect of all of these parameters, and it is useful to decompose it into a series of operations: intrinsics, canonical projection (an identity matrix padded with a zero column), rotation, and translation. The definitions of these parameters are not completely standardized; the intrinsics especially vary from one book to another.
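As a concrete (hypothetical) illustration of this decomposition, here is a minimal numpy sketch that builds a projection matrix from made-up intrinsics and extrinsics and projects one homogeneous 3D point; none of the numbers come from the slides.

```python
import numpy as np

# Hypothetical example values; any consistent intrinsics/extrinsics would do.
f, xc, yc = 800.0, 320.0, 240.0           # focal length and principal point (pixels)
K = np.array([[f, 0, xc],
              [0, f, yc],
              [0, 0,  1]])                # intrinsics (square pixels, no skew)

R = np.eye(3)                             # rotation of the image plane
c = np.array([0.1, 0.0, 0.0])             # optical center in world coordinates
t = -R @ c                                # translate by -c, then rotate by R

P = K @ np.hstack([R, t.reshape(3, 1)])   # projection matrix: intrinsics * [R | t]

X = np.array([0.5, 0.2, 4.0, 1.0])        # homogeneous world point
x = P @ X
u, v = x[:2] / x[2]                       # pixel coordinates after perspective divide
print(u, v)
```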

Extrinsics. How do we get the camera to canonical form? (Center of projection at the origin, x-axis points right, y-axis points up, z-axis points backwards.) Step 1: Translate by -c; in homogeneous coordinates this translation can be represented as a matrix multiplication. Step 2: Rotate by R, a 3x3 rotation matrix.

Perspective projection (intrinsics): converts from 3D rays in the camera coordinate system to pixel coordinates. In general the intrinsics form an upper triangular matrix whose entries encode the aspect ratio (1 unless pixels are not square), the skew (0 unless pixels are shaped like rhombi/parallelograms), and the principal point ((0, 0) unless the optical axis doesn't intersect the projection plane at the origin).
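For reference, a common way to write the upper triangular intrinsics matrix described here (a standard form, not reproduced from the slide's own figure), with focal length f, skew s, aspect ratio a, and principal point (c_x, c_y):

```latex
K = \begin{bmatrix} f & s & c_x \\ 0 & a f & c_y \\ 0 & 0 & 1 \end{bmatrix}
```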

Projection matrix: intrinsics x projection x rotation x translation.

Projection matrix: x = K [R | t] X (in homogeneous image coordinates).

Epipolar constraint: calibrated case. Assume that the intrinsic and extrinsic parameters of the cameras are known. We can multiply the projection matrix of each camera (and the image points) by the inverse of the calibration matrix to get normalized image coordinates. We can also set the global coordinate system to the coordinate system of the first camera; then the projection matrices of the two cameras can be written as [I | 0] and [R | t].

Epipolar constraint: calibrated case. With X = (x, 1)^T and x' = Rx + t, the vectors Rx, t, and x' are coplanar.

Epipolar constraint: calibrated case. Coplanarity of Rx, t, and x' gives x' . [t x (Rx)] = 0, i.e., x'^T E x = 0 with E = [t]_x R, the essential matrix (Longuet-Higgins, 1981).

Epipolar constraint: calibrated case. From x'^T E x = 0 with E = [t]_x R: E x is the epipolar line associated with x (l' = E x); E^T x' is the epipolar line associated with x' (l = E^T x'); E e = 0 and E^T e' = 0; E is singular (rank two); E has five degrees of freedom.
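A small numpy sketch (illustrative only, with an arbitrary R and t) of building E = [t]_x R and checking the properties listed above:

```python
import numpy as np

def skew(t):
    """Cross-product matrix [t]_x such that skew(t) @ v == np.cross(t, v)."""
    return np.array([[0, -t[2], t[1]],
                     [t[2], 0, -t[0]],
                     [-t[1], t[0], 0]])

# Hypothetical relative pose between the two cameras.
R = np.eye(3)
t = np.array([1.0, 0.0, 0.0])
E = skew(t) @ R                      # essential matrix E = [t]_x R

X = np.array([0.3, -0.2, 5.0])       # a 3D point in the first camera's frame
x = X / X[2]                         # normalized image coords in camera 1
xp = R @ X + t; xp = xp / xp[2]      # normalized image coords in camera 2

print(xp @ E @ x)                    # epipolar constraint x'^T E x = 0 (up to round-off)
l_prime = E @ x                      # epipolar line of x in the second image
print(xp @ l_prime)                  # x' lies on that line
print(np.linalg.matrix_rank(E))      # E is singular: rank 2
```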

Epipolar constraint: uncalibrated case. The calibration matrices K and K' of the two cameras are unknown. We can write the epipolar constraint in terms of unknown normalized coordinates: x̂'^T E x̂ = 0, where x = K x̂ and x' = K' x̂'.

Epipolar constraint: uncalibrated case. x̂'^T E x̂ = 0 becomes x'^T F x = 0 with F = K'^{-T} E K^{-1}, where x̂ = K^{-1} x and x̂' = K'^{-1} x'. F is the fundamental matrix (Faugeras and Luong, 1992).

Epipolar constraint: uncalibrated case. From x'^T F x = 0 with F = K'^{-T} E K^{-1}: F x is the epipolar line associated with x (l' = F x); F^T x' is the epipolar line associated with x' (l = F^T x'); F e = 0 and F^T e' = 0; F is singular (rank two); F has seven degrees of freedom.

The eight-point algorithm. With x = (u, v, 1)^T and x' = (u', v', 1)^T, the constraint x'^T F x = 0 expands to [u'u, u'v, u', v'u, v'v, v', u, v, 1] . (f11, f12, f13, f21, f22, f23, f31, f32, f33)^T = 0. Stacking one such row per correspondence gives a homogeneous system A f = 0. Minimize the sum over i = 1..N of (x'_i^T F x_i)^2 under the constraint ||F||^2 = 1; the solution f is the eigenvector of A^T A with the smallest eigenvalue.

The eight-point algorithm. Meaning of the error sum over i of (x'_i^T F x_i)^2: the sum of squared algebraic distances between points x'_i and epipolar lines F x_i (or points x_i and epipolar lines F^T x'_i). Nonlinear approach: minimize the sum of squared geometric distances, sum over i of [d(x'_i, F x_i)^2 + d(x_i, F^T x'_i)^2].

Problem with the eight-point algorithm. Fixing f33 = 1 and moving the constant term to the right-hand side gives, for each correspondence, [u'u, u'v, u', v'u, v'v, v', u, v] . (f11, f12, f13, f21, f22, f23, f31, f32)^T = -1. The entries of each row differ by several orders of magnitude (products of pixel coordinates next to raw coordinates), so the system has poor numerical conditioning; this can be fixed by rescaling the data.

The normalized eight-point algorithm (Hartley, 1995). Center the image data at the origin, and scale it so that the mean squared distance between the origin and the data points is 2 pixels. Use the eight-point algorithm to compute F from the normalized points. Enforce the rank-2 constraint (for example, take the SVD of F and zero out the smallest singular value). Transform the fundamental matrix back to original units: if T and T' are the normalizing transformations in the two images, then the fundamental matrix in original coordinates is T'^T F T.
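A compact numpy sketch of the normalized eight-point procedure just described (an illustrative implementation, not the course's reference code; pts1 and pts2 are assumed to be N x 2 arrays of corresponding pixel coordinates with N >= 8):

```python
import numpy as np

def normalize(pts):
    """Translate points to their centroid and scale so the mean squared distance is 2."""
    centroid = pts.mean(axis=0)
    d2 = ((pts - centroid) ** 2).sum(axis=1).mean()
    s = np.sqrt(2.0 / d2)
    T = np.array([[s, 0, -s * centroid[0]],
                  [0, s, -s * centroid[1]],
                  [0, 0, 1]])
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    return (T @ pts_h.T).T, T

def fundamental_8point(pts1, pts2):
    x1, T1 = normalize(pts1)           # points in image 1
    x2, T2 = normalize(pts2)           # corresponding points in image 2
    # Each correspondence gives one row of A in A f = 0.
    A = np.column_stack([
        x2[:, 0] * x1[:, 0], x2[:, 0] * x1[:, 1], x2[:, 0],
        x2[:, 1] * x1[:, 0], x2[:, 1] * x1[:, 1], x2[:, 1],
        x1[:, 0], x1[:, 1], np.ones(len(x1))])
    # f is the right singular vector of A with the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    # Enforce rank 2 by zeroing the smallest singular value of F.
    U, S, Vt = np.linalg.svd(F)
    F = U @ np.diag([S[0], S[1], 0.0]) @ Vt
    # Undo the normalization: F in original coordinates is T'^T F_hat T.
    return T2.T @ F @ T1
```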

Comparison of estimation algorithms:
  8-point:                  Av. Dist. 1 = 2.33 pixels, Av. Dist. 2 = 2.18 pixels
  Normalized 8-point:       Av. Dist. 1 = 0.92 pixel,  Av. Dist. 2 = 0.85 pixel
  Nonlinear least squares:  Av. Dist. 1 = 0.86 pixel,  Av. Dist. 2 = 0.80 pixel

Moving on to stereo. Fuse a calibrated binocular stereo pair to produce a depth image (figure: image 1, image 2, dense depth map). Many of these slides adapted from Steve Seitz and Lana Lazebnik.

Depth from disparity. With camera centers O and O' separated by baseline B and focal length f, similar triangles give disparity = x - x' = B f / z. Disparity is inversely proportional to depth.
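In code this relation is a one-liner; the following hypothetical snippet converts a disparity map in pixels to depth, using example values for the focal length f (in pixels) and baseline B (in meters) that are not from the slides:

```python
import numpy as np

f = 700.0          # focal length in pixels (hypothetical)
B = 0.12           # baseline in meters (hypothetical)

disparity = np.array([[35.0, 70.0],
                      [14.0,  7.0]])    # x - x' in pixels

depth = f * B / disparity               # z = f B / (x - x')
print(depth)                            # larger disparity -> smaller depth
```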

Basic stereo matching algorithm. If necessary, rectify the two stereo images to transform epipolar lines into scanlines. For each pixel x in the first image: find the corresponding epipolar scanline in the right image, search the scanline and pick the best match x', compute the disparity x - x', and set depth(x) = f B / (x - x').

Basic stereo matching algorithm. For each pixel in the first image: find the corresponding epipolar line in the right image, search along the epipolar line and pick the best match, then triangulate the matches to get depth information. Simplest case: the epipolar lines are scanlines. When does this happen?

Simplest case: parallel images. Here R = I and t = (T, 0, 0), so E = [t]_x R = [[0, 0, 0], [0, 0, -T], [0, T, 0]]. The epipolar constraint x'^T E x = 0 with x = (u, v, 1)^T and x' = (u', v', 1)^T reduces to T v = T v', i.e., v = v': the y-coordinates of corresponding points are the same.

Stereo image rectification

Stereo image rectification. Reproject the image planes onto a common plane parallel to the line between the camera centers; pixel motion is horizontal after this transformation. This requires two homographies (3x3 transforms), one for each input image reprojection. C. Loop and Z. Zhang. Computing Rectifying Homographies for Stereo Vision. IEEE Conf. Computer Vision and Pattern Recognition, 1999.

Example. (Figure: unrectified vs. rectified image pair.)

Correspondence search. Slide a window along the right scanline and compare the contents of that window with the reference window in the left image; the matching cost is plotted as a function of disparity. Matching cost: SSD or normalized correlation.
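A minimal sketch of this window search for a single pixel (illustrative only; it assumes rectified grayscale images stored as float numpy arrays, and the window size and disparity range are arbitrary example values):

```python
import numpy as np

def disparity_ssd(left, right, x, y, half=3, max_disp=64):
    """Best disparity for left[y, x] by SSD over a (2*half+1)^2 window."""
    ref = left[y - half:y + half + 1, x - half:x + half + 1]
    best_d, best_cost = 0, np.inf
    for d in range(0, max_disp + 1):
        xr = x - d                          # candidate position on the right scanline
        if xr - half < 0:
            break
        cand = right[y - half:y + half + 1, xr - half:xr + half + 1]
        cost = np.sum((ref - cand) ** 2)    # SSD matching cost
        if cost < best_cost:
            best_cost, best_d = cost, d
    return best_d
```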

Correspondence search. (Figure: SSD matching cost along the right scanline.)

Correspondence search. (Figure: normalized correlation along the right scanline.)

Effect of window size (W = 3 vs. W = 20). Smaller window: more detail, but more noise. Larger window: smoother disparity maps, but less detail and failures near boundaries.

Failures of correspondence search: textureless surfaces; occlusions and repetition; non-Lambertian surfaces and specularities.

Results with window search Data Window-based matching Ground truth

How can we improve window-based matching? So far, matches are independent for each point What constraints or priors can we add?

Stereo constraints/priors. Uniqueness: for any point in one image, there should be at most one matching point in the other image. Ordering: corresponding points should be in the same order in both views (the ordering constraint doesn't always hold, e.g. for a thin object in front of a surface). Smoothness: we expect disparity values to change slowly (for the most part).

Stereo as energy minimization. What defines a good stereo correspondence? 1. Match quality: we want each pixel to find a good match in the other image. 2. Smoothness: if two pixels are adjacent, they should (usually) move about the same amount.

Stereo as energy minimization. A better objective function combines a match cost (each pixel should find a good match in the other image) with a smoothness cost (adjacent pixels should usually move about the same amount).

Stereo as energy minimization. Match cost: SSD distance between windows I(x, y) and J(x + d(x, y), y). Smoothness cost: summed over the set of neighboring pixels (a 4-connected or 8-connected neighborhood).

Smoothness cost: L1 distance, or the Potts model.

Dynamic programming. The energy can be minimized independently per scanline using dynamic programming (DP), by tabulating the minimum cost of a solution for the scanline up to column x such that d(x, y) = d.
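A sketch of this per-scanline DP, under the assumption of a precomputed unary cost C(x, d) (for example window SSD) plus an L1 smoothness penalty between neighboring columns; the L1 penalty is one common choice, not necessarily the exact cost used on the slide:

```python
import numpy as np

def dp_scanline(C, lam=1.0):
    """C: (width, ndisp) cost for each column/disparity of one scanline.
    Returns the disparity per column minimizing sum of C plus lam * |d_x - d_{x-1}|."""
    W, D = C.shape
    disp = np.arange(D)
    pairwise = lam * np.abs(disp[:, None] - disp[None, :])  # smoothness cost table
    M = np.empty((W, D))                  # M[x, d]: min cost of a solution with d(x) = d
    back = np.zeros((W, D), dtype=int)
    M[0] = C[0]
    for x in range(1, W):
        total = M[x - 1][:, None] + pairwise      # transition cost from d_prev to d
        back[x] = np.argmin(total, axis=0)
        M[x] = C[x] + np.min(total, axis=0)
    # Backtrack the optimal disparity sequence.
    d = np.empty(W, dtype=int)
    d[-1] = np.argmin(M[-1])
    for x in range(W - 1, 0, -1):
        d[x - 1] = back[x, d[x]]
    return d
```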

Energy minimization via graph cuts. (Figure: label nodes for the disparities d1, d2, d3 connected to pixel nodes, with edge weights.)

Energy minimization via graph cuts. Graph cut: delete enough edges so that each pixel is connected to exactly one label node. The cost of a cut is the sum of the deleted edge weights; finding the minimum-cost cut is equivalent to finding the global minimum of the energy function.

Stereo as energy minimization. C(x, y, d): the disparity space image (DSI), shown as a slice over x and d for scanline y = 141 of images I(x, y) and J(x, y) (figure).

Stereo as energy minimization. Simple pixel/window matching: choose the minimum of each column in the DSI independently, d(x, y) = argmin over d' of C(x, y, d').
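Given a precomputed DSI for one scanline (for instance from the window SSD sketch earlier), this independent per-column choice is a single numpy call; a tiny sketch with a stand-in cost array:

```python
import numpy as np

C = np.random.rand(640, 65)        # stand-in DSI: (width, ndisp) costs for one scanline
d_wta = np.argmin(C, axis=1)       # winner-take-all: minimum of each DSI column, independently
```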

Matching windows. Common similarity measures: sum of absolute differences (SAD), sum of squared differences (SSD), zero-mean SAD, locally scaled SAD, and normalized cross correlation (NCC). (Figure: SAD, SSD, and NCC results compared with ground truth.) http://siddhantahuja.wordpress.com/category/stereo-vision/

Before & after. (Figure: before, after graph cuts, and ground truth.) Y. Boykov, O. Veksler, and R. Zabih, Fast Approximate Energy Minimization via Graph Cuts, PAMI 2001. For the latest and greatest: http://www.middlebury.edu/stereo/

Real-time stereo. The Nomad robot searches for meteorites in Antarctica (http://www.frc.ri.cmu.edu/projects/meteorobot/index.html). Stereo is used for robot navigation (and other tasks); several software-based real-time stereo techniques have been developed, most based on simple discrete search.

Why does stereo fail? Fronto-parallel surfaces: window matching assumes that depth is constant within the region of local support.

Why does stereo fail? Monotonic ordering: points along an epipolar scanline are assumed to appear in the same order in both stereo images. Occlusion: all points are assumed to be visible in each image.

Why does stereo fail? Image brightness constancy: assuming Lambertian surfaces, the brightness of corresponding points in the stereo images is the same.

Why does stereo fail? Match Uniqueness: For every point in one stereo image, there is at most one corresponding point in the other image.

Stereo reconstruction pipeline. Steps: calibrate cameras, rectify images, compute disparity, estimate depth. What will cause errors? Camera calibration errors, poor image resolution, occlusions, violations of brightness constancy (specular reflections), large motions, and low-contrast image regions.

Choosing the stereo baseline. (Figure: within the width of one pixel, all of these 3D points project to the same pair of pixels; compare a large baseline with a small baseline.) What's the optimal baseline? Too small: large depth error. Too large: difficult search problem.

Multi-view stereo?

Beyond two-view stereo The third view can be used for verification

Using more than two images. Multi-View Stereo for Community Photo Collections, M. Goesele, N. Snavely, B. Curless, H. Hoppe, S. Seitz, Proceedings of ICCV 2007.