3D Photography: Active Ranging, Structured Light, ICP

3D Photography: Active Ranging, Structured Light, ICP Kalin Kolev, Marc Pollefeys Spring 2013 http://cvg.ethz.ch/teaching/2013spring/3dphoto/

Schedule (tentative)
Feb 18: Introduction
Feb 25: Lecture: Geometry, Camera Model, Calibration
Mar 4: Lecture: Features, Tracking/Matching
Mar 11: Project Proposals by Students
Mar 18: Lecture: Epipolar Geometry
Mar 25: Lecture: Stereo Vision
Apr 1: Easter
Apr 8: Short lecture Stereo Vision (2) + 2 papers
Apr 15: Project Updates
Apr 22: Short lecture Active Ranging, Structured Light + 2 papers
Apr 29: Short lecture Volumetric Modeling + 2 papers
May 6: Short lecture Mesh-based Modeling + 2 papers
May 13: Short lecture SfM, SLAM + 2 papers
May 20: Pentecost / Whit Monday
May 27: Final Demos

Structured light and active ranging techniques

Today's class: obtaining depth maps / range images (unstructured light, structured light, time-of-flight); registering range images. (Some slides from Szymon Rusinkiewicz, Brian Curless.)

Taxonomy of 3D modeling: passive techniques (stereo, shape from silhouettes) vs. active techniques (structured/unstructured light, photometric stereo, laser scanning).

Unstructured light: project a texture onto the scene to disambiguate stereo matching.

Space-time stereo Davis, Ramamoorthi, Rusinkiewicz, CVPR 03

Space-time stereo Davis, Ramamoorthi, Rusinkiewicz, CVPR 03

Space-time stereo Zhang, Curless and Seitz, CVPR 03

Space-time stereo results Zhang, Curless and Seitz, CVPR 03

Light Transport Constancy Davis, Yang, Wang, ICCV05

Triangulation: a light/laser source and a camera; the peak position of the illuminated spot in the image reveals depth.
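
To make the geometry concrete, depth follows from intersecting the camera ray through the detected peak pixel with the known laser plane. The sketch below is only an illustration (a pinhole camera with assumed intrinsics fx, fy, cx, cy and a laser plane n . X = d calibrated in camera coordinates), not code from the course:

    import numpy as np

    def depth_from_laser_peak(u, v, fx, fy, cx, cy, plane_n, plane_d):
        # Ray through pixel (u, v) for a pinhole camera centered at the origin.
        ray = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])
        # Intersect with the laser plane n . X = d: solve n . (t * ray) = d for t.
        t = plane_d / np.dot(plane_n, ray)
        return t * ray  # 3D point on the surface; its z component is the depth

Shifting the detected peak left or right in the image moves the intersection nearer or farther along the ray, which is exactly how the peak position encodes depth.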

Triangulation: Moving the Camera and Illumination. Moving them independently leads to problems with focus and resolution. Most scanners mount the camera and light source rigidly and move them as a unit, which also allows (partial) pre-calibration.

Triangulation: Moving the Camera and Illumination

Triangulation: Extending to 3D. Alternatives: project dot(s) or stripe(s). (Figure: object, laser, camera.)

Triangulation Scanner Issues: Accuracy is proportional to the working volume (typically ~1000:1). Scales down to small working volumes (e.g. 5 cm working volume, 50 µm accuracy). Does not scale up (the baseline becomes too large). Two-line-of-sight problem (shadowing from either the camera or the laser). Triangulation angle: non-uniform resolution if too small, shadowing if too big (useful range: 15-30°).

Triangulation Scanner Issues: Material properties (dark, specular), subsurface scattering, laser speckle, edge curl, texture embossing. Where is the exact (subpixel) spot position?
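
One common answer to the subpixel question, sketched here under the assumption of a single, roughly symmetric stripe profile (not taken from the slides), is a parabolic fit through the brightest pixel and its two neighbours along the scanline:

    import numpy as np

    def subpixel_peak(intensity_row):
        # Index of the brightest pixel in this row.
        i = int(np.argmax(intensity_row))
        if i == 0 or i == len(intensity_row) - 1:
            return float(i)  # peak at the border, nothing to fit
        l, c, r = intensity_row[i - 1], intensity_row[i], intensity_row[i + 1]
        denom = l - 2.0 * c + r
        if denom == 0:
            return float(i)
        # Vertex of the parabola through (-1, l), (0, c), (+1, r).
        return i + 0.5 * (l - r) / denom

Space-time analysis (next slides) attacks the same question by looking at how the intensity of each pixel evolves over time as the stripe sweeps past it.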

Space-time analysis Curless 95

Space-time analysis Curless 95

Projector as camera

Multi-Stripe Triangulation: To go faster, project multiple stripes. But which stripe is which? Answer #1: assume surface continuity (e.g. Eyetronics ShapeCam).

Kinect: an infrared projector plus an infrared camera. Works indoors (no IR distraction) and the pattern is invisible to humans. Depth map: note the stereo shadows! Also captures a color image (unused for depth) and the IR image.

Kinect projector pattern: strong texture everywhere, so correlation-based stereo between the IR image and the projected pattern is possible. Failure modes: stereo shadow, bad SNR / too close, and homogeneous regions that are ambiguous without the pattern.
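
A toy sketch of what correlation-based matching against the projected pattern could look like for one rectified scanline; the window size, search range, and names are illustrative assumptions, not the Kinect's actual algorithm:

    import numpy as np

    def ncc_disparity(ir_row, pattern_row, patch=9, max_disp=64):
        # ir_row, pattern_row: 1D numpy arrays for one rectified scanline.
        # Per-pixel disparity via normalized cross-correlation.
        half = patch // 2
        disparities = np.zeros(len(ir_row), dtype=int)
        for x in range(half, len(ir_row) - half):
            win = ir_row[x - half:x + half + 1].astype(float)
            win = (win - win.mean()) / (win.std() + 1e-6)
            best_score, best_d = -np.inf, 0
            for d in range(min(max_disp, x - half) + 1):
                ref = pattern_row[x - d - half:x - d + half + 1].astype(float)
                ref = (ref - ref.mean()) / (ref.std() + 1e-6)
                score = float(np.dot(win, ref)) / patch
                if score > best_score:
                    best_score, best_d = score, d
            disparities[x] = best_d
        return disparities

Strong texture from the dot pattern is exactly what keeps such a correlation score unambiguous; in homogeneous or occluded regions the score is flat and the match fails, which is where the stereo shadows and holes in the depth map come from.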

Multi-Stripe Triangulation: To go faster, project multiple stripes. But which stripe is which? Answer #2: colored stripes (or dots).

Multi-Stripe Triangulation: To go faster, project multiple stripes. But which stripe is which? Answer #3: time-coded stripes.

Time-Coded Light Patterns: Assign each stripe a unique illumination code over time [Posdamer 82]. (Figure axes: space vs. time.)

Better codes: Gray code, where neighboring codes differ in only one bit => structured light paper presented afterwards.
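
As a minimal sketch (illustrative only, not the course code) of how time-coded stripes with a Gray code can be generated and decoded:

    def binary_to_gray(n):
        # Gray code: neighboring stripe indices differ in exactly one bit.
        return n ^ (n >> 1)

    def gray_to_binary(g):
        # Invert the Gray code back to the stripe index.
        n = 0
        while g:
            n ^= g
            g >>= 1
        return n

    def stripe_patterns(num_stripes, num_bits):
        # Pattern k turns stripe s on iff bit k of its Gray code is set.
        return [[(binary_to_gray(s) >> k) & 1 for s in range(num_stripes)]
                for k in range(num_bits)]

    # A pixel that observed bits b_0..b_{k-1} over the pattern sequence recovers
    # its stripe index as gray_to_binary(sum(b << k for k, b in enumerate(bits))).

Because neighboring codewords differ in only one bit, a decoding error at a stripe boundary shifts the index by at most one stripe instead of producing an arbitrary code.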

Poor man's scanner Bouguet and Perona, ICCV 98

Pulsed Time of Flight: Basic idea: send out a pulse of light (usually laser) and time how long it takes to return: d = c·t / 2.

Pulsed Time of Flight. Advantages: large working volume (up to 100 m). Disadvantages: not-so-great accuracy (at best ~5 mm); requires getting timing to ~30 picoseconds; does not scale with working volume. Often used for scanning buildings, rooms, archaeological sites, etc.
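
The ~30 picosecond figure follows directly from d = c·t / 2: a depth error of 5 mm corresponds to a round-trip timing error of 2 · 0.005 m / c. A quick check (illustrative numbers only):

    c = 3.0e8                    # speed of light in m/s
    depth_accuracy = 0.005       # 5 mm target accuracy
    dt = 2 * depth_accuracy / c  # required round-trip timing accuracy
    print(dt)                    # ~3.3e-11 s, i.e. roughly 33 picoseconds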

Depth cameras: 2D array of time-of-flight sensors, e.g. Canesta's CMOS 3D sensor. Jitter is too big on a single measurement but averages out over many (10,000 measurements => 100x improvement, since the noise falls roughly as 1/sqrt(N)).

3D modeling: Aligning range images, pairwise and globally. (Some slides from S. Rusinkiewicz, J. Ponce.)

Aligning 3D Data: If correct correspondences are known (from feature matches, colors, ...), it is possible to find the correct relative rotation/translation.

Aligning 3D Data: two point sets X1 and X2 are related by a transform T (e.g. the Kinect's motion between frames): X2,i = T X1,i. For T as a general 4x4 matrix: linear solution from 5 correspondences. If T is a Euclidean transform: 3 correspondences (using quaternions) [Horn 87], "Closed-form solution of absolute orientation using unit quaternions".
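
The slide refers to Horn's quaternion-based closed form; an equivalent SVD-based recipe (Kabsch/Umeyama) is sketched below for two Nx3 arrays of already-corresponding points. This is an illustration of the closed-form idea, not the course implementation:

    import numpy as np

    def rigid_transform(X, Y):
        # Least-squares R, t such that R @ X[i] + t ~ Y[i] for corresponding rows.
        cx, cy = X.mean(axis=0), Y.mean(axis=0)
        H = (X - cx).T @ (Y - cy)          # 3x3 cross-covariance matrix
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:           # guard against a reflection
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = cy - R @ cx
        return R, t

With three or more non-collinear correspondences this recovers the Euclidean transform directly, matching the 3-correspondence count quoted on the slide.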

Aligning 3D Data How to find corresponding points? Previous systems based on user input, feature matching, surface signatures, etc.

Spin Images [Johnson and Hebert 97]: a signature that captures local shape. Similar shapes => similar spin images.

Computing Spin Images Start with a point on a 3D model Find (averaged) surface normal at that point Define coordinate system centered at this point, oriented according to surface normal and two (arbitrary) tangents Express other points (within some distance) in terms of the new coordinates

Computing Spin Images: Compute a histogram of the locations of other points, in the new coordinate system, ignoring rotation around the normal: α = ‖p × n̂‖ (radial dist.), β = p · n̂ (elevation), where p is a neighboring point expressed relative to the basis point and n̂ is the surface normal there.

Computing Spin Images: the resulting 2D histogram is indexed by radial distance and elevation.
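
A compact, assumed implementation of one spin image as a 2D histogram, following the α/β definition above (parameter names and bin choices are illustrative):

    import numpy as np

    def spin_image(p, n_hat, points, max_dist=0.1, bins=16):
        # p: basis point (3,), n_hat: its unit normal (3,), points: Nx3 surface points.
        d = points - p
        beta = d @ n_hat                                           # elevation
        alpha = np.linalg.norm(d - np.outer(beta, n_hat), axis=1)  # radial distance
        keep = np.linalg.norm(d, axis=1) < max_dist                # local support only
        hist, _, _ = np.histogram2d(alpha[keep], beta[keep], bins=bins,
                                    range=[[0, max_dist], [-max_dist, max_dist]])
        return hist  # rotation about the normal is ignored by construction

Comparing two such histograms (e.g. by correlation) gives the similarity score used to propose correspondences.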

Spin Image Parameters Size of neighborhood Determines whether local or global shape is captured Big neighborhood: more discriminative power Small neighborhood: resilience to clutter Size of bins in histogram: Big bins: less sensitive to noise Small bins: captures more detail

Alignment with Spin Image Compute spin image for each point / subset of points in both sets Find similar spin images => potential correspondences Compute alignment from correspondences Same problems as with image matching: - Robustness of descriptor vs. discriminative power - Mismatches => robust estimation required

Solving 3D puzzles with VIPs: SIFT features are extracted from 2D images and vary with viewpoint; VIP features are extracted from the 3D model and are viewpoint invariant (Wu et al., CVPR 08).

Aligning 3D Data Alternative: assume closest points correspond to each other, compute the best transform

Aligning 3D Data: ...and iterate to find the alignment: Iterative Closest Point (ICP) [Besl & McKay 92]. Converges if the starting position is close enough.
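
For concreteness, a minimal point-to-point ICP loop, sketched with scipy's k-d tree for the closest-point step (an illustration of the idea, not the reference implementation):

    import numpy as np
    from scipy.spatial import cKDTree

    def best_fit(X, Y):
        # Closed-form least-squares rigid transform (same SVD recipe as above).
        cx, cy = X.mean(axis=0), Y.mean(axis=0)
        U, _, Vt = np.linalg.svd((X - cx).T @ (Y - cy))
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:
            Vt[-1] *= -1
            R = Vt.T @ U.T
        return R, cy - R @ cx

    def icp(source, target, iterations=30):
        # Align Nx3 source to Mx3 target; needs a reasonable starting alignment.
        tree = cKDTree(target)                 # fast closest-point queries
        moved = source.copy()
        R_total, t_total = np.eye(3), np.zeros(3)
        for _ in range(iterations):
            _, idx = tree.query(moved)         # assume closest points correspond
            R, t = best_fit(moved, target[idx])
            moved = moved @ R.T + t
            R_total, t_total = R @ R_total, R @ t_total + t
        return R_total, t_total, moved

Typical refinements (discussed in the paper presented afterwards) add outlier rejection, a point-to-plane error metric, and early termination once the alignment stops improving.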

ICP Variant, Point-to-Plane Error Metric: Using point-to-plane distance instead of point-to-point lets flat regions slide along each other more easily [Chen & Medioni 92].
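
In symbols, and assuming a normal n_i is available for each destination point q_i (notation introduced here, not on the slide), each point-to-plane iteration minimizes

    E(R, t) = sum_i ( (R p_i + t - q_i) . n_i )^2

so only the residual component along the surface normal is penalized; motion tangential to a flat region costs nothing, which is why such regions can slide over each other during alignment.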

Finding Corresponding Points: Finding the closest point is the most expensive stage of ICP. Brute-force search: O(n). Spatial data structure (e.g. k-d tree): O(log n). Voxel grid: O(1), but with a large constant and slow preprocessing.

Finding Corresponding Points: For range images, simply project the point [Blais/Levine 95]. Constant-time and fast; does not require precomputing a spatial data structure.

Efficient ICP Efficient Variants of the ICP algorithm [Rusinkiewicz & Levoy, 3DIM 2001] => Presented afterwards

Presentations: [Scharstein/Szeliski 03] High-Accuracy Stereo Depth Maps Using Structured Light; [Rusinkiewicz/Levoy 01] Efficient Variants of the ICP Algorithm.