Stereo Vision: A simple system. Dr. Gerhard Roth, Winter 2012


Stereo. Stereo is the ability to infer the 3-D structure and distance of a scene from two or more images taken from different viewpoints. Humans use only two eyes/images (try the thumb trick). There are two important problems in stereo: correspondence and reconstruction. Correspondence: which parts of the left and right images are projections of the same object? Reconstruction: given correspondences in the left and right images, and possibly information on the stereo geometry, compute the 3D location and structure of the observed objects.

What is stereo vision? Narrower formulation: given a calibrated binocular stereo pair, fuse it to produce a depth image. [left image, right image -> dense depth map]

What is stereo vision? Narrower formulation: given a calibrated binocular stereo pair, fuse it to produce a depth image. Humans can do it. Stereograms: invented by Sir Charles Wheatstone, 1838.

What is stereo vision? Narrower formulation: given a calibrated binocular stereo pair, fuse it to produce a depth image Autostereograms: www.magiceye.com


Anaglyphs: one way to view stereo. Many slides adapted from Steve Seitz.

Nintendo 3DS: 3D vision for gaming. An autostereogram with a parallax barrier: each eye gets a different view, but you must be at the proper location (right in the middle) to see 3D. The first mass-produced handheld 3D autostereogram gaming system.

Application of stereo: robotic exploration. The Nomad robot searches for meteorites in Antarctica. Real-time stereo on Mars.

Application: View Interpolation Right Image

Application: View Interpolation Left Image

Application: View Interpolation

Stereo [diagram: scene point, image planes, optical centers]

Stereo Basic Principle: Triangulation. Gives the reconstruction as the intersection of two rays. Requires camera calibration and point correspondence.
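As a sketch of the triangulation step: with noisy correspondences the two back-projected rays rarely intersect exactly, so one common reconstruction is the midpoint of the shortest segment between them. The function below is an illustration of that idea (the name and the two-ray setup are made up, not from the slides):

```python
def closest_point_to_rays(o1, d1, o2, d2):
    """Midpoint of the shortest segment between two 3D rays.

    Each ray is origin o + t * direction d. Solves the 2x2 normal
    equations for the parameters t1, t2 minimizing |p1(t1) - p2(t2)|.
    """
    def dot(a, b): return sum(x * y for x, y in zip(a, b))
    def sub(a, b): return [x - y for x, y in zip(a, b)]
    def walk(o, t, d): return [x + t * y for x, y in zip(o, d)]

    r = sub(o1, o2)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, r), dot(d2, r)
    denom = a * c - b * b          # zero only for parallel rays
    t1 = (b * e - c * d) / denom
    t2 = (a * e - b * d) / denom
    p1, p2 = walk(o1, t1, d1), walk(o2, t2, d2)
    return [(x + y) / 2 for x, y in zip(p1, p2)]
```

For the calibrated parallel-camera case treated in the following slides, this general construction collapses to a closed-form depth formula.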

Simplest Case: Parallel images. The image planes of the cameras are parallel to each other and to the baseline, the camera centers are at the same height, and the focal lengths are the same.

Simple Stereo Camera

Simple Stereo System. The left and right image planes are coplanar, represented by I_L and I_R. This means that all matching features lie on the same horizontal line, so the search for matches can proceed along the same scanline in the left and right images.
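Since matches lie on the same scanline, correspondence reduces to a 1D search along each row. A minimal sum-of-absolute-differences (SAD) sketch, assuming grayscale images stored as lists of rows (the function name and the window/search parameters are illustrative, not from the slides):

```python
def match_scanline(left, right, row, x_left, half_win=2, max_disp=16):
    """Find the disparity of pixel (row, x_left) by searching the same
    scanline of the right image, using sum of absolute differences.

    Returns the disparity d >= 0 minimizing the SAD over a
    (2*half_win+1)^2 window. In the simple stereo setup the match in
    the right image is at x_left - d.
    """
    def sad(xl, xr):
        total = 0
        for dy in range(-half_win, half_win + 1):
            for dx in range(-half_win, half_win + 1):
                total += abs(left[row + dy][xl + dx] - right[row + dy][xr + dx])
        return total

    best_d, best_cost = 0, float("inf")
    for d in range(0, max_disp + 1):
        xr = x_left - d
        if xr - half_win < 0:      # window would leave the right image
            break
        cost = sad(x_left, xr)
        if cost < best_cost:
            best_d, best_cost = d, cost
    return best_d
```

Real-time systems implement this with sliding-window tricks or dedicated hardware; the brute-force version above is only meant to show the scanline-restricted search.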

Simple Stereo System (2D). The distance between the centers of projection is called the baseline T. The centers of projection of the cameras are C_L and C_R. A point P in 3D space projects to P_L and P_R. X_L and X_R are the coordinates of P_L and P_R with respect to the principal points C_L and C_R; these are camera coordinates, in millimeters. Z is the distance between the point P and the baseline; Z is called the depth.

Simple Stereo System


Basic Stereo Derivations. Derive an expression for Z as a function of x_l, x_r, f and the baseline T. Here x_r is the projection on the right camera, looking out from the cameras.

Stereo Derivations (camera coords). From the similar triangles (P_L, P, P_R) and (O_L, P, O_R):

    (T + x_r - x_l) / (Z - f) = T / Z

Define the disparity d = x_l - x_r. Solving for Z gives

    Z = f T / d

Since f and T are constant, Z is inversely proportional to d. Note that x_r is always to the left of x_l, so the disparity d >= 0 in all cases.
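The resulting relation is easy to sketch in code (the focal length and baseline values below are made-up illustrative numbers):

```python
def depth_from_disparity(f, T, d):
    """Depth Z = f * T / d for simple (parallel) stereo.

    f: focal length (pixels), T: baseline (meters), d: disparity (pixels).
    Returns None for d == 0 (point at infinity).
    """
    if d == 0:
        return None
    return f * T / d
```

For example, with f = 700 pixels and T = 0.12 m, a disparity of 14 pixels gives a depth of 6 m.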

Range Versus Disparity [plot: range Z against disparity d, for fixed f and T]

Range field (in Z) of Stereo. b, f, d_min and d_max define the range in Z: the nearest measurable depth is Z_min = f b / d_max and the farthest is Z_max = f b / d_min.

Implicit Iso-Disparity Lines for Simple Stereo. These are the lines along which the disparity has the same value (see stereo-res.jpg). Notice the rapid increase in spacing between them with distance. The implication is that the resolution of stereo depth depends on the disparity.

Discrete Depth according to Disparity. Depth is discretized into parallel planes, one for each possible disparity value; more accuracy requires sub-pixel precision, which is costly.
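A small sketch (with made-up focal length and baseline) tabulates the discrete depth planes and shows how the gap between consecutive planes grows rapidly with distance, which is exactly the iso-disparity spacing effect noted above:

```python
def depth_planes(f, T, d_max):
    """Depth of each discrete disparity plane d = 1 .. d_max (d = 0 is
    infinity), for focal length f (pixels) and baseline T (meters)."""
    return [f * T / d for d in range(1, d_max + 1)]

def plane_spacing(planes):
    """Gap between consecutive depth planes; grows rapidly with depth."""
    return [planes[i] - planes[i + 1] for i in range(len(planes) - 1)]
```

With f = 700 pixels and T = 0.12 m, the d = 1 and d = 2 planes are 42 m apart, while the d = 7 and d = 8 planes are only 1.5 m apart.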

Disparity Map. Disparity d was in mm (camera coordinates). If x_l and x_r are in pixels (origin at the top left), then d = x_l - x_r > 0, but only for simple stereo. Usually disparity is stated as a number of pixels, d = x_l - x_r. For points at infinity the disparity d is zero. The maximum possible disparity depends on how close we can get to the camera (there is a maximum value). Depth is still inversely proportional to disparity. If we compute the disparity for the entire image then we have a disparity map (computed relative to the left image). Disparity is often shown in image form: bright points have the highest disparity (closest), dark points have the lowest disparity (farthest).
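Displaying a disparity map in the bright-is-close image form described above is a simple rescaling; a sketch assuming the map is a list of rows of per-pixel disparities (the function name is illustrative):

```python
def disparity_to_gray(disp_map):
    """Scale a disparity map to 0..255 gray values for display:
    brightest = largest disparity (closest), darkest = smallest
    disparity (farthest)."""
    lo = min(min(row) for row in disp_map)
    hi = max(max(row) for row in disp_map)
    span = (hi - lo) or 1                 # avoid divide-by-zero on flat maps
    return [[round(255 * (d - lo) / span) for d in row] for row in disp_map]
```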

Disparity Map

Example Pentagon Stereo Pair

Example Pentagon Disparity Map

Pentagon example [left image, right image, disparity map]

Example Tsukuba Stereo Pair

Example Tsukuba Disparity Map

Characteristics of Simple Stereo. FOV is the field of view of the cameras; the common FOV is the overlap of the two cameras. The baseline is a system parameter, and it is also a tradeoff. If B is the baseline, then depth error is proportional to 1/B. PRO of a longer baseline: better depth estimation. CONS: a smaller common FOV, and correspondence is harder due to an increased chance of occlusion. Occlusion means that a feature is visible in one image but not in the other because something occludes it. Occlusion is more likely as the baseline increases.

Baseline Tradeoff. Short baseline: better matches are likely (similarity constraint), fewer occlusions are likely (in general), but less precision (depth is not computed as accurately). Larger baseline: poorer matches are likely (similarity constraint), more occlusions are likely (in general), but more precision (depth is computed more accurately). Use the smallest baseline that gives the depth precision you want!
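The precision half of the tradeoff can be made concrete: differentiating Z = f T / d gives |dZ| ≈ Z^2 * Δd / (f T), so for a fixed disparity error the depth error shrinks in proportion to the baseline. A sketch with made-up numbers:

```python
def depth_error(Z, f, T, disp_err=1.0):
    """Approximate depth uncertainty at range Z (meters) for a disparity
    error of disp_err pixels, from |dZ/dd| = Z^2 / (f * T)."""
    return Z * Z * disp_err / (f * T)
```

Doubling the baseline halves the predicted depth error at a given range, which is the "depth error proportional to 1/B" claim from the previous slide.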

Real-Time Stereo Systems. There are a number of systems that can compute disparity maps in real time. In practice, such systems only work if there is texture in the regions that must be matched. Often they return sparse depth: a few thousand points in regions where there is texture, with some interpolation where there is no texture. Point Grey Research, a successful Canadian company, makes such cameras and produces a variety of stereo models.

BumbleBee

Example image from BumbleBee

Passive stereo systems. If a system only receives light rays and does not emit any radiation, it is a passive system. Your visual system is passive. Minus: more likely to get bad matches and false depth; cannot work in regions without any texture; results depend on the ambient lighting. Plus: requires only two calibrated cameras and is simple; if the cameras are in the simple configuration (mechanically aligned) then there are well-known algorithms to do the matching and compute a disparity map in real time.

Active stereo systems. If a system emits radiation, it is an active system; with two cameras it is an active stereo system. The simplest example is just to use a laser pointer together with stereo! You can also have active systems with just one camera: an example is the Kinect, which uses self-identifying projected patterns. Minus: more complex hardware, and an active sensor can be dangerous; a way is needed to spread the active radiation over the scene quickly (sweep a laser beam, or use a grating to spread the laser out); usually more complex calibration than a passive system. Plus: can work well in many different lighting conditions; when it returns depth we can be very confident that the depth is correct (fewer false matches).