3D Scanning. Qixing Huang, Feb. 9th 2017. Slide Credit: Yasutaka Furukawa

Geometry Reconstruction Pipeline

This Lecture: Depth Sensing; ICP for Pairwise Alignment

Next Lecture: Global Alignment (Pairwise, Multiple)

Depth Sensing
  Contact: Mechanical (CMM, jointed arm), Inertial (gyroscope, accelerometer), Ultrasonic trackers, Magnetic trackers
  Range acquisition:
    Transmissive: Industrial CT, Ultrasound, MRI
    Reflective:
      Non-optical: Radar, Sonar
      Optical
Slide credit: Szymon Rusinkiewicz

Touch Probes Jointed arms with angular encoders Return position, orientation of tip Faro Arm Faro Technologies, Inc.

Depth Sensing: Optical methods
  Passive: Shape from X (stereo, motion, shading, texture, focus, defocus)
  Active: active variants of passive methods (stereo with projected texture, active depth from defocus, photometric stereo), Time of flight, Triangulation

Pulsed Time of Flight. Basic idea: send out a pulse of light (usually laser), time how long it takes to return: d = (1/2) c t

Pulsed Time of Flight. Advantages: large working volume (up to 100 m). Disadvantages: not-so-great accuracy (at best ~5 mm); requires getting timing to ~30 picoseconds. Often used for scanning buildings, rooms, archeological sites, etc.
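The numbers above follow directly from d = (1/2) c t; a minimal sketch in plain Python (no scanner specifics assumed):

```python
C = 299_792_458.0  # speed of light, m/s

def pulse_distance(round_trip_s):
    """One-way distance from a measured round-trip time: d = (1/2) c t."""
    return 0.5 * C * round_trip_s

# A 30 ps timing error corresponds to ~4.5 mm of range error, which is
# why picosecond timing is needed for millimeter-level accuracy.
print(round(pulse_distance(30e-12) * 1000, 2), "mm")  # -> 4.5 mm
```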

AM Modulation Time of Flight. Modulate a laser at frequency ν_m; it returns with a phase shift Δφ. Note the ambiguity in the measured phase! Range ambiguity of c / (2 ν_m): d = (c / (2 ν_m)) (Δφ / 2π + n), with n an unknown integer.

AM Modulation Time of Flight. Accuracy / working-volume tradeoff (e.g., noise ~ 1/500 of the working volume). In practice, often used for room-sized environments (cheaper and more accurate than pulsed time of flight).
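The phase-to-distance step can be sketched as follows, assuming the standard continuous-wave relation d = (c / (2 ν_m)) (Δφ / 2π + n); the argument names are illustrative:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def am_distance(phase_shift_rad, mod_freq_hz, n=0):
    """Distance from the measured phase shift; n is the unknown integer
    number of whole ambiguity intervals, which a single modulation
    frequency cannot recover -- this is the phase ambiguity."""
    ambiguity = C / (2.0 * mod_freq_hz)  # range ambiguity interval, m
    return ambiguity * (phase_shift_rad / (2.0 * math.pi) + n)

# At 10 MHz modulation the ambiguity interval is ~15 m; a phase shift
# of pi (with n = 0) corresponds to half of it, ~7.5 m.
print(am_distance(math.pi, 10e6))
```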

Triangulation

Triangulation: Moving the Camera and Illumination. Moving them independently leads to problems with focus and resolution. Most scanners mount camera and light source rigidly and move them as a unit.

Triangulation: Extending to 3D. Possibility #1: add another mirror (flying spot). Possibility #2: project a stripe, not a dot.
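With a projected stripe, each image pixel defines a camera ray and the projector defines a known laser plane; the 3D point is their intersection. A minimal sketch (hypothetical function name, camera center assumed at the origin):

```python
import numpy as np

def triangulate_stripe(ray_dir, plane_normal, plane_point):
    """Intersect a camera ray through the origin (the camera center)
    with the laser plane; assumes the ray is not parallel to the plane."""
    ray_dir = np.asarray(ray_dir, dtype=float)
    n = np.asarray(plane_normal, dtype=float)
    p0 = np.asarray(plane_point, dtype=float)
    t = np.dot(n, p0) / np.dot(n, ray_dir)  # ray parameter at the plane
    return t * ray_dir

# Laser plane x = 1 (normal along x), pixel ray toward (1, 0, 2):
print(triangulate_stripe([1.0, 0.0, 2.0], [1.0, 0.0, 0.0], [1.0, 0.0, 0.0]))
```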

Pattern Design

Structured Light: General Principle [Zhang et al., 3DPVT 2002]. Project a known pattern onto the scene and infer depth from the deformation of that pattern.

Time-Coded Light Patterns Assign each stripe a unique illumination code over time [Posdamer 82]

Binary Encoded Light Stripes. A set of light planes is projected into the scene. Individual light planes are indexed by an encoding scheme for the light patterns. The obtained images are used to uniquely identify the light plane corresponding to every image point.

Binary Coding Example Codeword of this pixel: 1010010 -> Identifies the corresponding pattern stripe
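Decoding the codeword at a pixel amounts to reading the black/white observations over the frame sequence as a binary number. A sketch, using the example codeword from the slide above:

```python
def decode_stripe(bits):
    """Read the black/white observations at one pixel over the frame
    sequence (most significant bit first) as a plain binary stripe index."""
    index = 0
    for b in bits:
        index = (index << 1) | int(b)
    return index

# The pixel's codeword 1010010 identifies its stripe:
print(decode_stripe([1, 0, 1, 0, 0, 1, 0]))  # -> 82
```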

Accounting for Reflectance. Because of surface reflectance and ambient light, distinguishing between black and white is not always easy. Solution: project an all-white (and sometimes an all-black) frame. Permits multiple shades of gray.
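The all-white/all-black frames give a per-pixel threshold that cancels out reflectance and ambient light. A sketch with illustrative data and a hypothetical function name:

```python
import numpy as np

def classify_pixels(pattern_img, white_img, black_img):
    """A pixel counts as 'illuminated' if its value under the projected
    pattern exceeds the midpoint of its all-white and all-black
    appearances, i.e. a per-pixel rather than global threshold."""
    threshold = (white_img.astype(float) + black_img.astype(float)) / 2.0
    return pattern_img.astype(float) > threshold

# Illustrative 2x2 images; only the top-left pixel exceeds its
# per-pixel threshold.
pattern = np.array([[200, 30], [120, 90]])
white = np.array([[220, 210], [200, 100]])
black = np.array([[20, 10], [40, 80]])
print(classify_pixels(pattern, white, black))
```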

Multiple Shades of Gray

Temporal vs. Spatial Continuity. Structured-light systems make certain assumptions about the scene. Temporal continuity assumption: assume the scene is static; assign stripes a code over time. Spatial continuity assumption: assume the scene is one object; project a grid, a pattern of dots, etc.

Grid Methods Assume exactly one continuous surface Count dots or grid lines Occlusions cause problems Some methods use dynamic programming

Codes Assuming Local Spatial Continuity Codeword

Codes Assuming Local Spatial Continuity [Zhang et al. 02] [Ozturk et al. 03]

Homework: How to handle moving objects?

Kinect Kinect combines structured light with two classic computer vision techniques: depth from focus, and depth from stereo Figure copied from Kinect for Windows SDK Quickstarts

The Kinect uses infrared laser light, with a speckle pattern Shpunt et al, PrimeSense patent application US 2008/0106746

Depth from Stereo Uses Parallax The Kinect analyzes the shift of the speckle pattern by projecting from one location and observing from another
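The parallax-to-depth step is the standard rectified-stereo relation Z = f B / d. A sketch with assumed, approximate values for the focal length and projector-camera baseline (illustration only, not figures from the slides):

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Rectified-stereo relation Z = f * B / d: depth is inversely
    proportional to the observed shift (disparity) of the pattern."""
    return focal_px * baseline_m / disparity_px

# Assumed values: ~580 px focal length, ~7.5 cm baseline.
print(depth_from_disparity(580.0, 0.075, 29.0))  # -> 1.5 (meters)
```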

ICP for Pairwise Alignment

Pairwise Alignment ICP [Besl and McKay 92]

ICP Formulation
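The classic point-to-point formulation alternates closest-point matching with a closed-form rigid-motion solve. A minimal sketch with brute-force correspondences and none of the sampling or robustness refinements (the SVD solve is the standard Kabsch solution, not specific to these slides):

```python
import numpy as np

def best_rigid_transform(P, Q):
    """Closed-form least-squares R, t mapping matched points P onto Q
    (the SVD / Kabsch solution on the centered point sets)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:  # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cq - R @ cp

def icp(P, Q, iters=30):
    """Minimal point-to-point ICP: match every point of P to its
    nearest neighbor in Q, solve for R, t, apply, repeat."""
    P = np.asarray(P, dtype=float).copy()
    Q = np.asarray(Q, dtype=float)
    for _ in range(iters):
        # brute-force nearest neighbors (fine for small clouds)
        d2 = ((P[:, None, :] - Q[None, :, :]) ** 2).sum(-1)
        R, t = best_rigid_transform(P, Q[d2.argmin(axis=1)])
        P = P @ R.T + t
    return P
```

Given a scan P that is a slightly rotated and translated copy of Q, `icp(P, Q)` pulls it back onto Q; the variants on the next slide swap in different error metrics and sampling schemes around this same loop.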

ICP Variants Point-to-plane distance [Chen and Medioni 91] Stable sampling [Gelfand et al. 03] Robust norms