Introduction to 3D Imaging: Perceiving 3D from 2D Images


How can we derive 3D information from one or more 2D images? There have been two main approaches: 1. intrinsic images: a 2D representation that stores some 3D properties of the scene; 2. 3D shape from X: methods of inferring 3D depth information from various sources.

What can you determine about 1. the sizes of objects and 2. the distances of objects from the camera? What knowledge do you use to analyze this image?

What objects are shown in this image? How can you estimate distance from the camera? What feature changes with distance?

Intrinsic Images: 2.5D The idea of intrinsic images is to label features of a 2D image with information that tells us something about the 3D structure of the scene, e.g. occluding edges and convex edges.

Contour Labels for Intrinsic Images: convex crease (+), concave crease (-), blade (>), limb (>>), shadow (S), illumination boundary (I), reflectance boundary (M).

Labeling Simple Line Drawings Huffman and Clowes showed that blocks-world drawings could be labeled (with +, -, >) based on real-world constraints. Labeling a simple blocks-world image is a consistent labeling problem! Waltz extended the work to cracks and shadows and developed one of the first discrete relaxation algorithms, known as Waltz filtering.

Problems with this Approach Research on how to do these labelings was confined to perfect blocks-world images. There was no way to extend it to real images with missing segments, broken segments, nonconnected junctions, etc. It led some groups down the wrong path for a while.

3D Shape from X: shading, silhouette, and texture (mainly research); stereo, light striping, and motion (used in practice).

Perspective Imaging Model: 1D O is the center of projection and f is the focal length. The real image plane lies behind the lens; the front image plane, which we use, lies in front of it. A 3D object point B at depth zc with offset xc projects to image point E at offset xi, where xi = (f/zc) xc.

Perspective in 2D (Simplified) A 3D object point P = (xc,yc,zc) = (xw,yw,zw) projects along a ray through the center of projection to the image point P' = (xi,yi,f). Here camera coordinates equal world coordinates, and the optical axis is Zc. The projection equations are xi/f = xc/zc and yi/f = yc/zc, i.e. xi = (f/zc)xc and yi = (f/zc)yc.
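The projection equations above can be sketched directly in code. This is a minimal illustration, assuming (as on the slide) that camera coordinates equal world coordinates; the function name `project` is my own.

```python
# Minimal sketch of the perspective projection equations on the slide,
# assuming camera coordinates equal world coordinates.

def project(xc, yc, zc, f):
    """Project 3D camera-frame point (xc, yc, zc) with focal length f."""
    xi = (f / zc) * xc   # xi = (f/zc) xc
    yi = (f / zc) * yc   # yi = (f/zc) yc
    return xi, yi

# A point 10 units away, offset 2 right and 1 up, with f = 5:
print(project(2.0, 1.0, 10.0, 5.0))  # -> (1.0, 0.5)
```

Note that the image coordinates shrink as zc grows, which is exactly the "feature that changes with distance" asked about earlier.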

3D from Stereo A 3D point projects into a left image and a right image. Disparity is the difference in image location of the same 3D point when projected under perspective to two different cameras: d = xleft - xright.

Depth Perception from Stereo Simple Model: Parallel Optic Axes. Cameras L and R, each with focal length f, are separated by baseline b along the x-axis; the y-axis is perpendicular to the page. A point P = (x,z) projects to xl in the left image and xr in the right image. By similar triangles: x/z = xl/f, (x-b)/z = xr/f, and y/z = yl/f = yr/f.

Resultant Depth Calculation For stereo cameras with parallel optical axes, focal length f, baseline b, and corresponding image points (xl,yl) and (xr,yr) with disparity d: z = f*b / (xl - xr) = f*b/d, x = xl*z/f (or b + xr*z/f), y = yl*z/f (or yr*z/f). This method of determining depth from disparity is called triangulation.
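The triangulation formulas above translate into a few lines of code. A minimal sketch, with the function name `triangulate` my own choice:

```python
def triangulate(xl, yl, xr, f, b):
    """Depth from disparity for parallel-axis stereo.
    f: focal length, b: baseline,
    (xl, yl) and (xr, yr): corresponding image points (yl == yr)."""
    d = xl - xr        # disparity
    z = f * b / d      # z = f*b/d
    x = xl * z / f     # equivalently b + xr*z/f
    y = yl * z / f
    return x, y, z

# f = 1, baseline 0.1; a point on the optical axis region:
print(triangulate(0.2, 0.05, 0.1, 1.0, 0.1))  # -> (0.2, 0.05, 1.0)
```

Note how depth is inversely proportional to disparity: halving d doubles z, which is why distant points show almost no disparity.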

Finding Correspondences If the correspondence is correct, triangulation works VERY well. But correspondence finding is not perfectly solved for the general stereo problem. For some very specific applications it can be solved for those specific kinds of images, e.g. looking through the windshield of a car.

3 Main Matching Methods 1. Cross correlation using small windows (dense). 2. Symbolic feature matching, usually using segments/corners (sparse). 3. Newer interest operators, e.g. SIFT (sparse).

Epipolar Geometry Constraint: 1. Normal Pair of Images The epipolar plane through the 3D point P and the two camera centers C1 and C2 (separated by baseline b) cuts through the image planes, forming 2 epipolar lines. The match for P1 (or P2) in the other image must lie on the corresponding epipolar line.

Epipolar Geometry: General Case The point P projects to P1 and P2 in the two images; each image's epipolar lines pass through its epipole (e1, e2), the image of the other camera's center.

Constraints 1. Epipolar Constraint: Matching points lie on corresponding epipolar lines. 2. Ordering Constraint: Points usually appear in the same order along corresponding epipolar lines.
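The epipolar constraint is often written algebraically as x2ᵀ F x1 = 0 for a fundamental matrix F (not introduced on these slides, so this is an assumption). As an illustration, for the normal (parallel-axis) pair above with identical calibration, F takes a simple skew-symmetric form and the constraint just says matches lie on the same scanline:

```python
import numpy as np

# Hypothetical fundamental matrix for pure horizontal translation
# (parallel-axis stereo, unit baseline): F = [t]_x with t = (1, 0, 0).
F = np.array([[0.0, 0.0,  0.0],
              [0.0, 0.0, -1.0],
              [0.0, 1.0,  0.0]])

x1 = np.array([0.2, 0.3, 1.0])   # homogeneous point in image 1
x2 = np.array([0.1, 0.3, 1.0])   # candidate match: same y (same scanline)

print(float(x2 @ F @ x1))  # -> 0.0, so the epipolar constraint holds
```

Expanding x2ᵀ F x1 for this F gives y1 − y2, so the constraint is satisfied exactly when the two points share a scanline, matching the "normal pair" geometry.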

Structured Light 3D data can also be derived using a single camera and a light source that can produce stripe(s) on the 3D object.

Structured Light 3D Computation With the camera at the origin (0,0,0), baseline b between camera and light source, focal length f, stripe-plane angle θ, and image point (x, y, f), the 3D point is: [x y z] = (b / (f cot θ - x)) [x y f].
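The structured-light formula above is another triangulation and can be sketched as follows; the function name `stripe_point` is my own.

```python
import math

def stripe_point(x, y, f, b, theta):
    """3D point from a light-stripe measurement:
    [X Y Z] = (b / (f*cot(theta) - x)) * [x y f],
    where (x, y) is the image point, f the focal length,
    b the camera/light-source baseline, theta the stripe angle."""
    s = b / (f / math.tan(theta) - x)   # common scale factor
    return s * x, s * y, s * f

# theta = 45 deg (cot = 1), f = 1, b = 1, image point (0.5, 0):
print(stripe_point(0.5, 0.0, 1.0, 1.0, math.pi / 4))  # -> (1.0, 0.0, 2.0)
```

The camera ray through (x, y, f) and the known stripe plane intersect in exactly one point, which is why a single camera suffices when the light source's geometry is calibrated.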

Depth from Multiple Light Stripes What are these objects?

Our (former) System 4-camera light-striping stereo: cameras, a projector, a rotation table, and the 3D object.