Structured Light. Tobias Nöll Thanks to Marc Pollefeys, David Nister and David Lowe


Structured Light. Tobias Nöll (tobias.noell@dfki.de). Thanks to Marc Pollefeys, David Nister and David Lowe.

Introduction. Previous lecture: dense reconstruction (dense matching of non-feature pixels, patch-based multiple view reconstruction). Today: structured light (use of active devices such as lasers and projectors), correspondence generation, mesh alignment. 1/10/2012 Lecture 3D Computer Vision 2

Motivation. Last lecture: patch-based dense matching of pixels with NCC. This works well only for objects with distinctive texture features (high NCC within a textured quad vs. low NCC within a uniform one). It is thus impossible to reconstruct objects of constant color or monotonous texture. (Figure: object and its multiple view reconstruction.)

Motivation. While the vase yielded some geometry in the colored parts, unicolor objects cannot be reconstructed by multiple view reconstruction.

Motivation. Solution: if objects don't have a texture, give them one, e.g. with point/line lasers or video projectors.

Active Reconstruction: Concept

Concept: Active Reconstruction. Traditional stereo (images I and J).

Concept: Active Reconstruction. Active stereo (images I and J).

Concept: Active Reconstruction. Structured light (images I and J).

Calibration: Extrinsics and Intrinsics

Calibration. Relative extrinsics [R t] between camera and projector; intrinsics of the camera (K_C) and of the projector (K_P); 3D position via midpoint triangulation.
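The midpoint triangulation mentioned on this slide can be sketched in a few lines of numpy. This is a minimal illustration, not the lecture's actual implementation: given one ray per device, find the closest points on the two rays and return their midpoint.

```python
import numpy as np

def midpoint_triangulation(o1, d1, o2, d2):
    """Midpoint of the shortest segment between two rays o + t * d."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    # The closest points p1 = o1 + t1*d1 and p2 = o2 + t2*d2 satisfy
    # (p1 - p2) . d1 = 0 and (p1 - p2) . d2 = 0; solve for t1, t2.
    A = np.array([[d1 @ d1, -d1 @ d2],
                  [d1 @ d2, -d2 @ d2]])
    b = np.array([(o2 - o1) @ d1, (o2 - o1) @ d2])
    t1, t2 = np.linalg.solve(A, b)
    return 0.5 * ((o1 + t1 * d1) + (o2 + t2 * d2))
```

With noisy correspondences the two rays rarely intersect exactly; the midpoint is a simple compromise between them.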

Calibration: intrinsics of the camera. Usual procedure: capture a calibration sequence with a known calibration pattern (here: a chessboard), find the chessboard in the images, and use the captured correspondences to calibrate K_C (see lecture 3: Calibration). We assume that the camera moves around the board and the board stays static.

Calibration: intrinsics of the projector. How to model the projector? Use the same pinhole model as for the camera (see lecture 1: Camera). But how to generate correspondences between the calibration board and the projector? Problem: the projector cannot "see" like the camera.

Calibration. Recall the stereo camera case (two cameras): the same chessboard is seen by both cameras, so correspondences to both cameras can be established using only one chessboard. The 3D points of the chessboard corners stay the same; what changes are their 2D projections in the images. We virtually move the cameras around the calibration board.

Calibration. Now: projector/camera system. The projector cannot see anything, so no correspondences between the calibration board and the projector can be established directly. We need two chessboards: one to be seen by the camera and one to be projected by the projector.

Calibration. Now: projector/camera system. The image plane of the projector (i.e. its "seen image") is the image to be projected and is static, independent of the board position, so the 2D points stay the same. The 3D positions where the chessboard corners are projected change with the board position. The printed chessboard can be seen by the camera, and thereby the camera pose w.r.t. the board can be calculated since K_C is known (see lecture 4: Camera pose estimation).

Calibration summary:
Camera: 3D chessboard corners fixed, 2D chessboard corners variable.
Projector: 3D chessboard corners variable, 2D chessboard corners fixed.
So how to get the variable values? Camera's 2D points: use a chessboard detection algorithm. Projector's 3D points: use the calibrated camera to compute the positions.

Calibration. The projector's 3D points are obtained by intersecting camera rays with the calibration plane (extrinsics [R t] of the printed chessboard).

Calibration. With the 3D <-> 2D correspondences of the projector and the camera, compute independently K_C and K_P, and [R_Ci t_Ci] and [R_Pi t_Pi] for i = 1..n, where n is the number of board positions. However, we need one relative extrinsics [R t]. Simple method: choose some i and compute [R t] = [R_Ci (R_Pi)^T | t_Ci - R_Ci (R_Pi)^T t_Pi], which maps projector coordinates to camera coordinates. Better method: formulate it as an optimization problem, use the simple method's output as the initial guess, and optimize K_C, K_P and [R t] in parallel.
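The simple method above is a direct pose composition; a minimal numpy sketch (function name is mine, not from the lecture):

```python
import numpy as np

def relative_extrinsics(R_c, t_c, R_p, t_p):
    """Relative pose [R t] mapping projector coordinates to camera coordinates.

    Conventions (world -> device):
      x_cam  = R_c @ X + t_c
      x_proj = R_p @ X + t_p
    """
    # Substitute X = R_p^T (x_proj - t_p) into the camera equation:
    R = R_c @ R_p.T
    t = t_c - R_c @ R_p.T @ t_p
    return R, t
```

Since the same world point is observed by both devices at each board position, any board position i yields an estimate of [R t]; the optimization then refines this initial guess over all positions.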

Calibration. Initial setup: a static calibration board (for calibration we assumed that we moved the camera/projector around the board); camera poses C_1, C_2, C_3 and projector poses P_1, P_2, P_3.

Calibration. Equivalent setup: in reality we moved the board while the camera/projector remained static. It doesn't matter whether we move the camera/projector or the world.

Calibration. We search for one optimal pose that best approximates the input correspondences. This is done by minimizing the reprojection error using Levenberg-Marquardt (see lecture 5: Parameter estimation).

Calibration. Camera: non-uniform coverage => additional camera constraints. Projector: iteratively optimize K_C, K_P and [R t] in parallel; the effect can be seen as the red lines in the projector image.

Correspondence generation

Correspondence generation. Correspondence generation is essential for 3D reconstruction: correspondences => triangulation => 3D information. We now focus on correspondence generation between cameras and light-emitting devices. Remark: there exist several classes of laser scanners; we only consider those based on triangulation.

Structured light principle. A light source projects a known pattern onto the measured scene. The captured and projected patterns are related to each other to establish correspondences. (Figure: the pattern projecting system casts a pattern detail (i, j) onto the 3D scene object; the image sensor images the projected detail.) Slide from UdG

Correspondence methods:
Single dot: no correspondence problem; scanning along both axes.
Stripe patterns: correspondence problem among slits; no scanning.
Single stripe: correspondence problem among points of the same slit; scanning along the axis orthogonal to the stripe.
Grid, multiple dots: correspondence problem among all the imaged segments; no scanning.
Slide from UdG

Correspondence methods. A spectrum ranges from multi-stripe / multi-frame (slow, robust) to single-stripe / single-frame (fast, fragile).

Single dot. The correspondence in this case is unique. (Laser and camera; image: P. Hurbain, www.electronics-lab.com)

Single stripe. Because the stripe spans a plane in 3D, camera rays can be intersected with the light plane Ax + By + Cz + D = 0 spanned by the emitted light rays. (Laser/projector, image point (x', y'), camera.)
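The ray-plane intersection used here is a one-liner; a minimal numpy sketch, with the light plane given as coefficients (A, B, C, D) such that Ax + By + Cz + D = 0:

```python
import numpy as np

def intersect_ray_plane(origin, direction, plane):
    """Intersect the camera ray origin + t * direction with Ax + By + Cz + D = 0."""
    n, D = np.asarray(plane[:3], dtype=float), float(plane[3])
    denom = n @ direction
    if abs(denom) < 1e-12:
        raise ValueError("ray is parallel to the light plane")
    # n . (origin + t * direction) + D = 0  =>  solve for t
    t = -(n @ origin + D) / denom
    return origin + t * direction
```

Each illuminated pixel defines one camera ray, so a single captured image of the stripe yields one 3D profile of the object.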

Multiple stripes / grid. When multiple stripes are used at the same time, they must be encoded in order to be identifiable (similarly for a grid): the stripes have to be made distinguishable. Solution: 1D or 2D encoding (one encoded axis vs. two encoded axes). (Camera and projector; J. Salvi)

Encoding. A pattern is called encoded when, after projecting it onto a surface, a set of regions of the observed projection can be easily matched with the original pattern (example: encoding by color). Decoding a projected pattern allows a large set of correspondences to be found easily thanks to the a priori knowledge of the pattern. (Jung, Computer Vision (EEE6503), Fall 2009, Yonsei Univ.)

Multiple stripes: color. The simplest way: a unique color for each stripe (direct codification). Problems: colors are altered during light transport, colors interfere with surface colors, and robustness drops as the number of stripes grows.

Multiple stripes: color. Solution: encode the color assignment itself, such that colors can repeat. Each point is then encoded by its surrounding intensities (spatial codification).

Multiple stripes: binary coding. A more robust way to distinguish a large number of stripes are binary codes: n stripe patterns, projected over time, can encode 2^n stripes. Example: 3 binary-encoded patterns allow the measured surface to be divided into 8 sub-regions. (Jung, Computer Vision (EEE6503), Fall 2009, Yonsei Univ.)

Multiple stripes: binary coding. Assign each stripe a unique illumination code over time [Posdamer 82]. (Axes: time vs. space; example codeword 0110.)

Multiple stripes: binary coding. Example: 7 binary patterns proposed by Posdamer & Altschuler, projected over time (pattern 1, pattern 2, pattern 3, ...). The codeword of a pixel, e.g. 1010010, identifies the corresponding pattern stripe. (Jung, Computer Vision (EEE6503), Fall 2009, Yonsei Univ.)

Multiple stripes: binary coding. More robust, but requires many images: one image for each bit. Instead of plain binary codes, Gray codes are often used in practice: adjacent code words differ in only one bit, which allows some errors to be corrected.

Multiple stripes: binary coding. Problem: a large resolution requires many images to be projected, e.g. 1024x768 => 10 images (2^10 = 1024). In practice it is not possible to distinguish projected stripes with a width of only one pixel; consequently the full resolution of the projector cannot be exploited.

Conclusion so far. Scanning with a single dot yields a 1:1 correspondence. The correspondences for a scanline were implicitly included in a ray-plane intersection; this is fine for a laser (no lens), but for a camera/projector pair, how should the lens distortion of the projector's model be regarded? Stripe encoding cannot exploit the full resolution of the projector. Is it possible to go down to pixel level?

Phase shifting. A widely used method to achieve these goals is phase-shifted structured light. Consider a function φ_ref(x,y) = x. Encoding this function into grayscales and projecting it could be used as a direct codification. Problem again: the camera cannot precisely distinguish between the grayscales. (Camera; projector showing φ_ref(x,y).)

Phase shifting. Solution: phase-shifted structured light can be used to encode the function φ_ref(x,y) more efficiently. Let x ∈ [0, 2nπ]. Then g(x,y) = cos(φ_ref(x,y)) = cos(x) yields an image with n vertical fringes in the horizontal direction. (Camera; projector showing g(x,y).)

Phase shifting. The captured fringe images can be described as I(x,y) = A(x,y) + B(x,y)·cos(φ_obj(x,y)), where A(x,y) is the background or ambient light intensity, B(x,y) the cosine amplitude, and φ_obj(x,y) the object phase. The latter is what we want to compute: the 2D reference phase on the object as seen by the camera.

Phase shifting. To compute φ_obj(x,y), the initial phase φ_ref(x,y) is shifted. We present the 3-step phase shifting algorithm by Zhang et al. We project the following shifted fringe images: g_i(x,y) = cos(φ_ref(x,y) + θ_i) for i = 1, 2, 3, with shifts θ_1 = -2π/3, θ_2 = 0, θ_3 = +2π/3.

Phase shifting. The 3 captured images are described by I_i(x,y) = A(x,y) + B(x,y)·cos(φ_obj(x,y) + θ_i). We thus have 3 equations with 3 unknowns (A, B, φ_obj) for each pixel. Solving them with the known shifts yields φ_obj(x,y) = tan^{-1}( √3·(I_1 - I_3) / (2·I_2 - I_1 - I_3) ).
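The closed-form solution above can be checked numerically; a minimal numpy sketch of the 3-step formula, evaluated on synthetic fringe images rather than real captures:

```python
import numpy as np

def wrapped_phase(I1, I2, I3):
    """Wrapped object phase from three fringe images shifted by -2pi/3, 0, +2pi/3.

    With I_i = A + B cos(phi + theta_i):
      sqrt(3) * (I1 - I3) = 3 B sin(phi)
      2 I2 - I1 - I3      = 3 B cos(phi)
    so arctan2 of the two recovers phi wrapped to (-pi, pi].
    """
    return np.arctan2(np.sqrt(3.0) * (I1 - I3), 2.0 * I2 - I1 - I3)
```

Note that A and B cancel out, which is why the method is insensitive to ambient light and surface albedo as long as B stays above the noise floor.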

Phase shifting. Problem: due to the tan^{-1}, the resulting phase is wrapped in 2π steps when using more than one stripe. With 1 stripe (x ∈ [0, 2π]) there is no wrapping problem (unique correspondences), but the 3D reconstruction is imprecise. With multiple stripes there is a wrapping problem (ambiguous correspondences), but the reconstruction is precise.

Phase shifting. The phase must therefore be unwrapped (elimination of the 2π discontinuities). This is challenging and not robust when using only a single wrapped phase, especially at discontinuities.

Phase shifting. A robust solution is the level-based unwrapping algorithm by Wang et al.: capture multiple fringe levels, e.g. level 0 => 1 stripe => no wrapping; level 1 => e.g. 5 stripes => wrapping => use information from the previous level for unwrapping; level 2 => e.g. 20 stripes => ...
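The coarse-to-fine idea behind level-based unwrapping can be illustrated with a toy unwrap step. This is a generic sketch of the principle, not Wang et al.'s exact algorithm: the previous, coarser level provides an absolute phase estimate that selects the correct 2π period of the current wrapped phase.

```python
import numpy as np

def unwrap_with_reference(phi_wrapped, phi_coarse, n_stripes):
    """Unwrap phi_wrapped (in (-pi, pi]) using an absolute coarse phase.

    phi_coarse: absolute phase from the previous level in [0, 2pi);
    after scaling by this level's stripe count it only needs to be
    accurate to within +-pi to pick the right period.
    """
    ref = phi_coarse * n_stripes                        # expected absolute phase
    k = np.round((ref - phi_wrapped) / (2.0 * np.pi))   # integer period index
    return phi_wrapped + 2.0 * np.pi * k
```

Each level tolerates a bounded error in the previous one, which is why stepping through several levels (1 stripe, 5 stripes, 20 stripes, ...) is far more robust than unwrapping a single dense wrapped phase directly.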

Phase shifting. (Result images.)

Phase shifting. Once the phase has been computed, every pixel in the image yields a point-to-stripe correspondence (either horizontal or vertical). (Camera, projector.)

Phase shifting. Finding the pixel correspondences is easy using epipolar geometry; however, lens distortion is not yet included.

Phase Shifting. Correspondence generation including lens distortion.

Phase Shifting. Correspondence generation including lens distortion: 1. undistort; 2. compute the epipolar curve.

Phase Shifting: Summary