1 Structured Light Tobias Nöll Thanks to Marc Pollefeys, David Nister and David Lowe
2 Introduction. Previous lecture: dense reconstruction (dense matching of non-feature pixels, patch-based multiple view reconstruction). Today: structured light (usage of active devices such as lasers/projectors, correspondence generation, mesh alignment). 1/10/2012 Lecture 3D Computer Vision 2
3 Motivation. Last lecture: patch-based dense matching of pixels with NCC. This works well only for objects with distinctive texture features (high NCC within a quad) and fails in uniform regions (low NCC within a quad). It is thus impossible to reconstruct objects of constant color or monotonous texture.
4 Motivation. While the vase yielded some geometry in the colored parts, unicolor objects cannot be reconstructed by multiple view reconstruction.
5 Motivation. Solution: if objects don't have a texture, give them one, e.g. with point/line lasers or video projectors.
6 Active Reconstruction: Concept
7 Concept: Active Reconstruction. Traditional stereo.
8 Concept: Active Reconstruction. Active stereo.
9 Concept: Active Reconstruction. Structured light.
10 Calibration: Extrinsics and Intrinsics
11 Calibration. Relative extrinsics [R t] between camera and projector, camera intrinsics K_C, projector intrinsics K_P => 3D position by midpoint triangulation.
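The midpoint triangulation step can be sketched in a few lines of NumPy. This is an illustrative helper (the function name and interface are not from the lecture): it returns the point halfway between the closest points of the camera ray and the projector ray.

```python
import numpy as np

def midpoint_triangulation(o1, d1, o2, d2):
    """Midpoint between the closest points of two 3D rays.

    o1, o2: ray origins (camera/projector centers); d1, d2: ray directions.
    """
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    # Solve for ray parameters s, t minimizing |(o1 + s*d1) - (o2 + t*d2)|
    b = o2 - o1
    A = np.stack([d1, -d2], axis=1)      # 3x2 system: A @ [s, t] = b
    (s, t), *_ = np.linalg.lstsq(A, b, rcond=None)
    p1 = o1 + s * d1                     # closest point on ray 1
    p2 = o2 + t * d2                     # closest point on ray 2
    return 0.5 * (p1 + p2)
```

With noise-free, perfectly calibrated correspondences the two rays intersect and the midpoint is the exact 3D position; with noise it is a reasonable compromise between the two rays.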
12 Calibration. Intrinsics camera: usual procedure: capture a calibration sequence with a known calibration pattern (here: chessboard), find the chessboard in the images, and use the captured correspondences to calibrate K_C => see lecture 3: Calibration. We assume that the camera moves around the board and the board stays static.
13 Calibration. Intrinsics projector: how to model the projector? => Use the same pinhole model as for the camera (see lecture 1: Camera). But: how to generate correspondences between the calibration board and the projector? Problem: the projector can't see like a camera.
14 Calibration. Recall the stereo camera case (2 cameras): the same chessboard is seen by both cameras, thus correspondences to both cameras can be established using only one chessboard. The 3D points of the chessboard corners stay the same; what changes are their 2D projections in the images. We virtually move the cameras around the calibration board.
15 Calibration. Now: projector/camera system. The projector cannot see anything, thus no correspondences between the calibration board and the projector can be established this way. We need 2 chessboards: one seen by the camera and one projected by the projector.
16 Calibration. Now: projector/camera system. The image plane of the projector (i.e. its "seen image") is the image to be projected and is static, independent of the board position => the 2D points stay the same. The 3D positions where the chessboard corners are projected change with the board position. The printed chessboard can be seen by the camera, and thereby the camera pose w.r.t. the board can be calculated since K_C is known => see lecture 4: Camera pose estimation.
17 Calibration Summary. Camera: 3D chessboard corners fixed, 2D chessboard corners variable. Projector: 3D chessboard corners variable, 2D chessboard corners fixed. So how to get the variable values? Camera's 2D points: use a chessboard detection algorithm. Projector's 3D points: use the calibrated camera to compute the positions.
18 Calibration. The projector's 3D points are obtained by intersecting camera rays with the calibration plane of the printed chessboard (extrinsics [R t]).
19 Calibration. With the 3D <-> 2D correspondences of the projector and the camera, compute independently K_C and K_P, and [R_Ci t_Ci] and [R_Pi t_Pi] for i = 1..n, with n the number of board positions. However, we need one relative extrinsics [R t]. Simple method: choose some i and compute [R t] = [R_i t_i] = [R_Ci (R_Pi)^T | t_Ci - R_Ci (R_Pi)^T t_Pi], which maps projector coordinates to camera coordinates. Better method: formulate it as an optimization problem, use the simple method's output as initial guess, and optimize K_C, K_P and [R t] in parallel.
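The simple method translates directly into NumPy. A minimal sketch (the helper name is illustrative), assuming both input poses map board (world) coordinates into the respective device frame, i.e. X_cam = R_C X + t_C and X_proj = R_P X + t_P:

```python
import numpy as np

def relative_extrinsics(R_c, t_c, R_p, t_p):
    """Pose [R t] mapping projector coordinates to camera coordinates."""
    R = R_c @ R_p.T
    t = t_c - R_c @ R_p.T @ t_p
    return R, t
```

Deriving it: X = R_P^T (X_proj - t_P), so X_cam = R_C R_P^T X_proj + (t_C - R_C R_P^T t_P), which is exactly the formula above.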
20 Calibration. Initial setup: static calibration board (for calibration we assumed that we moved the camera/projector around the board).
21 Calibration. Equivalent setup: in reality, however, we moved the board while the camera/projector remained static. It doesn't matter whether we moved the camera/projector or the world.
22 Calibration. We search for the one optimal pose which best approximates the input correspondences. This is done by minimizing the reprojection error using Levenberg-Marquardt => see lecture 5: Parameter estimation.
23 Calibration. Camera: non-uniform coverage => additional camera constraints. Projector: iteratively optimize K_C, K_P and [R t] in parallel => the effect can be seen as the red lines in the projector image.
24 Correspondence Generation
25 Correspondence generation. Correspondence generation is essential for 3D reconstruction: correspondences => triangulation => 3D information. We focus now on correspondence generation between cameras and light-emitting devices. Remark: there exist several classes of laser scanners; we only consider those based on triangulation.
26 Structured light principle. The light source (pattern projecting system) projects a known pattern onto the measured scene. The captured and projected patterns are related to each other => establish correspondences between the image sensor and the projected pattern. Slide from UdG.
27 Correspondence methods. Single dot: no correspondence problem; scanning along both axes. Stripe patterns: correspondence problem among slits; no scanning. Single stripe: correspondence problem among points of the same slit; scanning along the axis orthogonal to the stripe. Grid, multiple dots: correspondence problem among all the imaged segments; no scanning. Slide from UdG.
28 Correspondence methods. Trade-off spectrum: multi-stripe/multi-frame methods are slow but robust; single-stripe/single-frame methods are fast but fragile.
29 Single dot. The correspondence between laser and camera is unique in this case. P. Hurbain
30 Single stripe. Because the lines span planes in 3D, it is possible to intersect camera rays with the light plane Ax + By + Cz + D = 0 spanned by the emitted light rays: the image point (x', y') defines a camera ray whose intersection with the light plane gives the 3D object point.
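The ray-plane intersection behind single-stripe scanning can be sketched as follows (a minimal NumPy version; names are illustrative). The camera ray is written as origin + s*direction, the light plane as Ax + By + Cz + D = 0:

```python
import numpy as np

def intersect_ray_plane(origin, direction, plane):
    """Intersect the ray origin + s*direction with the plane Ax+By+Cz+D=0."""
    A, B, C, D = plane
    n = np.array([A, B, C], dtype=float)   # plane normal
    denom = n @ direction
    if abs(denom) < 1e-12:
        raise ValueError("ray is parallel to the light plane")
    s = -(n @ origin + D) / denom          # from n.(o + s*d) + D = 0
    return origin + s * direction
```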
31 Multiple stripes / Grid. When multiple stripes are used at the same time, they must be encoded in order to be identifiable (similar: grid). Stripes have to be made distinguishable. Solution: 1D or 2D encoding (one or both axes encoded). J. Salvi
32 Encoding. A pattern is called encoded if, after projecting it onto a surface, a set of regions of the observed projection can be easily matched with the original pattern. Example: encoding by color. Decoding a projected pattern allows a large set of correspondences to be easily found thanks to the a priori knowledge of the pattern. Jung, Computer Vision (EEE6503) Fall 2009, Yonsei Univ.
33 Multiple stripes - Color. The simplest way: a unique color for each stripe (direct codification). Problem: colors are altered during light transport, colors interfere with surface colors, and robustness decreases with a larger number of stripes.
34 Multiple stripes - Color. Solution: encode the color assignment itself, such that colors can repeat. Each point is encoded by its surrounding intensities (spatial codification).
35 Multiple stripes - Binary coding. A more robust way to distinguish a large number of stripes is binary codes: n stripe patterns projected over time can encode 2^n stripes. Example: 3 binary-encoded patterns allow the measuring surface to be divided into 8 sub-regions. Jung, Computer Vision (EEE6503) Fall 2009, Yonsei Univ.
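A minimal sketch of binary stripe coding in NumPy (function names are illustrative): `binary_patterns` builds the n time-multiplexed patterns, and `decode` recovers each column's stripe index from its observed bit sequence, showing that n patterns distinguish 2^n stripes.

```python
import numpy as np

def binary_patterns(n_bits, width):
    """n_bits patterns of given width; pattern k holds bit k (MSB first)
    of each column's stripe index, so 2**n_bits stripes are encoded."""
    cols = np.arange(width)
    stripe = cols * (2 ** n_bits) // width                 # stripe index per column
    pats = [(stripe >> (n_bits - 1 - k)) & 1 for k in range(n_bits)]
    return np.stack(pats)                                  # shape (n_bits, width)

def decode(pats):
    """Recover the stripe index per column from the observed bit patterns."""
    n_bits = pats.shape[0]
    weights = 2 ** np.arange(n_bits - 1, -1, -1)           # MSB-first weights
    return weights @ pats
```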
36 Multiple stripes - Binary coding. Assign each stripe a unique illumination code over time [Posdamer 82], e.g. the codeword 0110 read across the patterns (time axis vs. space axis).
37 Multiple stripes - Binary coding. Example: 7 binary patterns proposed by Posdamer & Altschuler, projected over time. The codeword of a pixel, read across the patterns, identifies the corresponding pattern stripe. Jung, Computer Vision (EEE6503) Fall 2009, Yonsei Univ.
38 Multiple stripes - Binary coding. More robust, but requires a lot of images: one image for each bit. Instead of plain binary codes, Gray codes are often used in practice: adjacent codewords differ in only one bit, which allows some errors to be corrected.
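Gray codes are a standard bit manipulation; a small pure-Python sketch demonstrates the property used here, namely that adjacent codewords differ in exactly one bit:

```python
def to_gray(n):
    """Binary index -> Gray code; adjacent indices map to codes
    differing in exactly one bit."""
    return n ^ (n >> 1)

def from_gray(g):
    """Gray code -> binary index (XOR-fold all higher bits down)."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n
```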
39 Multiple stripes - Binary coding. Problem: a large resolution requires a lot of images to be projected, e.g. 1024x768 => 10 images (2^10 = 1024). In practice it is not possible to distinguish projected stripes with a width of only one pixel. Consequently the full resolution of the projector cannot be exploited.
40 Conclusion so far. Scanning with a single dot yields a 1:1 correspondence. The correspondences for a scanline were implicitly given by a ray-plane intersection: OK for a laser (no lens), but for a camera/projector pair, how should the lens distortion of the projector's model be regarded? Stripe encoding cannot exploit the full resolution of the projector. Is it possible to go down to pixel level?
41 Phase shifting. A widely used method to achieve these goals is phase-shifted structured light. Consider a function φ_ref(x,y) = x. Encoding this function into grayscales and projecting it could be used as direct codification. Problem again: the camera cannot precisely distinguish between the grayscales.
42 Phase shifting. Solution: phase-shifted structured light can be used to encode the function φ_ref(x,y) more efficiently. Let x ∈ [0, 2nπ]. Then g(x,y) = cos(φ_ref(x,y)) = cos(x) yields an image with n vertical fringes in the horizontal direction.
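Generating such a fringe image is straightforward; a small NumPy sketch (names illustrative) that maps the cosine from [-1, 1] to intensities in [0, 1]:

```python
import numpy as np

def fringe_image(width, height, n_stripes):
    """Vertical cosine fringes: phi_ref(x,y) = 2*pi*n_stripes*x/width,
    g(x,y) = cos(phi_ref), remapped to [0, 1] intensity."""
    x = np.arange(width)
    phi_ref = 2 * np.pi * n_stripes * x / width
    row = 0.5 + 0.5 * np.cos(phi_ref)     # one row; fringes vary along x only
    return np.tile(row, (height, 1))      # shape (height, width)
```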
43 Phase shifting. The captured fringe images can be described as I(x,y) = A(x,y) + B(x,y) cos(φ_obj(x,y)), where A(x,y) is the background or ambient light intensity, B(x,y) is the cosine amplitude, and φ_obj(x,y) is the object phase. The latter is what we want to compute: the 2D reference phase on the object as seen by the camera.
44 Phase shifting. To compute φ_obj(x,y), the initial phase φ_ref(x,y) is shifted. We present the 3-step phase shifting algorithm by Zhang et al.: we project the shifted fringe images g_i(x,y) = cos(φ_ref(x,y) + θ_i), with θ_i ∈ {-2π/3, 0, 2π/3}.
45 Phase shifting. The 3 captured images are described by I_i(x,y) = A(x,y) + B(x,y) cos(φ_obj(x,y) + θ_i). We thus have 3 equations with 3 unknowns (A, B, φ_obj) for each pixel. Solving with the known shifts yields φ_obj(x,y) = tan^{-1}( √3 (I_1 - I_3) / (2 I_2 - I_1 - I_3) ).
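The closed-form solution can be computed per pixel with `arctan2`, which also resolves the quadrant and returns the wrapped phase in (-π, π]. A minimal NumPy sketch under the shift convention θ_i ∈ {-2π/3, 0, 2π/3}:

```python
import numpy as np

def wrapped_phase(I1, I2, I3):
    """Wrapped phase from 3 fringe images with shifts -2pi/3, 0, +2pi/3.

    I1 - I3 = sqrt(3)*B*sin(phi) and 2*I2 - I1 - I3 = 3*B*cos(phi),
    so atan2(sqrt(3)*(I1 - I3), 2*I2 - I1 - I3) recovers phi."""
    return np.arctan2(np.sqrt(3.0) * (I1 - I3), 2.0 * I2 - I1 - I3)
```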
46 Phase shifting. Problem: due to tan^{-1}, the resulting phase is wrapped in 2π steps when more than 1 stripe is used. φ_obj using only 1 stripe (x ∈ [0, 2π]): no wrapping problem (=> unique correspondences) but imprecise 3D reconstruction. φ_obj using multiple stripes: wrapping problem (=> ambiguous correspondences) but precise reconstruction.
47 Phase shifting. The phase thus must be unwrapped (elimination of the 2π discontinuities). This is challenging and not robust if using only a single wrapped phase, especially at discontinuities.
48 Phase shifting. A robust solution is the level-based unwrapping algorithm by Wang et al.: capture multiple fringe levels, e.g. level 0 => 1 stripe => no wrapping; level 1 => e.g. 5 stripes => wrapping => use the information from the previous level for unwrapping; level 2 => e.g. 20 stripes => ...
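The key step of level-based unwrapping, determining the 2π fringe order of a fine wrapped phase from a coarser, unambiguous phase, can be sketched as follows (a simplified, noise-free illustration; the function name and this exact scheme are not taken from the lecture's description of Wang et al.):

```python
import numpy as np

def unwrap_with_reference(phi_wrapped, phi_ref, n):
    """Unwrap a fine wrapped phase (n stripes) using the unambiguous
    1-stripe reference phase phi_ref: the true fine phase is approximately
    n * phi_ref, which pins down the integer 2*pi fringe order k."""
    k = np.round((n * phi_ref - phi_wrapped) / (2 * np.pi))
    return phi_wrapped + 2 * np.pi * k
```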
49 Phase shifting
50 Phase shifting. Once the phase has been computed, every pixel in the image yields a point-to-stripe correspondence (either horizontal or vertical) between camera and projector.
51 Phase shifting. Finding the pixel correspondences is easy using epipolar geometry. However, lens distortion is not yet accounted for.
52 Phase Shifting. Correspondence generation including lens distortion.
53 Phase Shifting. Correspondence generation including lens distortion: 1. undistort, 2. compute the epipolar curve.
54 Phase Shifting - Summary
3D Computer Vision. Structured Light I. Prof. Didier Stricker. Kaiserlautern University.
3D Computer Vision Structured Light I Prof. Didier Stricker Kaiserlautern University http://ags.cs.uni-kl.de/ DFKI Deutsches Forschungszentrum für Künstliche Intelligenz http://av.dfki.de 1 Introduction
More informationStructured Light II. Guido Gerig CS 6320, Spring (thanks: slides Prof. S. Narasimhan, CMU, Marc Pollefeys, UNC)
Structured Light II Guido Gerig CS 6320, Spring 2013 (thanks: slides Prof. S. Narasimhan, CMU, Marc Pollefeys, UNC) http://www.cs.cmu.edu/afs/cs/academic/class/15385- s06/lectures/ppts/lec-17.ppt Variant
More informationStereo and structured light
Stereo and structured light http://graphics.cs.cmu.edu/courses/15-463 15-463, 15-663, 15-862 Computational Photography Fall 2018, Lecture 20 Course announcements Homework 5 is still ongoing. - Make sure
More informationL2 Data Acquisition. Mechanical measurement (CMM) Structured light Range images Shape from shading Other methods
L2 Data Acquisition Mechanical measurement (CMM) Structured light Range images Shape from shading Other methods 1 Coordinate Measurement Machine Touch based Slow Sparse Data Complex planning Accurate 2
More informationStructured light , , Computational Photography Fall 2017, Lecture 27
Structured light http://graphics.cs.cmu.edu/courses/15-463 15-463, 15-663, 15-862 Computational Photography Fall 2017, Lecture 27 Course announcements Homework 5 has been graded. - Mean: 129. - Median:
More informationThe main problem of photogrammetry
Structured Light Structured Light The main problem of photogrammetry to recover shape from multiple views of a scene, we need to find correspondences between the images the matching/correspondence problem
More information3D Scanning. Qixing Huang Feb. 9 th Slide Credit: Yasutaka Furukawa
3D Scanning Qixing Huang Feb. 9 th 2017 Slide Credit: Yasutaka Furukawa Geometry Reconstruction Pipeline This Lecture Depth Sensing ICP for Pair-wise Alignment Next Lecture Global Alignment Pairwise Multiple
More informationMultiple View Geometry
Multiple View Geometry CS 6320, Spring 2013 Guest Lecture Marcel Prastawa adapted from Pollefeys, Shah, and Zisserman Single view computer vision Projective actions of cameras Camera callibration Photometric
More informationComputer Vision Lecture 17
Computer Vision Lecture 17 Epipolar Geometry & Stereo Basics 13.01.2015 Bastian Leibe RWTH Aachen http://www.vision.rwth-aachen.de leibe@vision.rwth-aachen.de Announcements Seminar in the summer semester
More informationComputer Vision Lecture 17
Announcements Computer Vision Lecture 17 Epipolar Geometry & Stereo Basics Seminar in the summer semester Current Topics in Computer Vision and Machine Learning Block seminar, presentations in 1 st week
More informationFlexible Calibration of a Portable Structured Light System through Surface Plane
Vol. 34, No. 11 ACTA AUTOMATICA SINICA November, 2008 Flexible Calibration of a Portable Structured Light System through Surface Plane GAO Wei 1 WANG Liang 1 HU Zhan-Yi 1 Abstract For a portable structured
More informationStereo. 11/02/2012 CS129, Brown James Hays. Slides by Kristen Grauman
Stereo 11/02/2012 CS129, Brown James Hays Slides by Kristen Grauman Multiple views Multi-view geometry, matching, invariant features, stereo vision Lowe Hartley and Zisserman Why multiple views? Structure
More information3D Computer Vision. Depth Cameras. Prof. Didier Stricker. Oliver Wasenmüller
3D Computer Vision Depth Cameras Prof. Didier Stricker Oliver Wasenmüller Kaiserlautern University http://ags.cs.uni-kl.de/ DFKI Deutsches Forschungszentrum für Künstliche Intelligenz http://av.dfki.de
More informationComputer Vision. 3D acquisition
è Computer 3D acquisition Acknowledgement Courtesy of Prof. Luc Van Gool 3D acquisition taxonomy s image cannot currently be displayed. 3D acquisition methods Thi passive active uni-directional multi-directional
More informationMulti-View Stereo for Community Photo Collections Michael Goesele, et al, ICCV Venus de Milo
Vision Sensing Multi-View Stereo for Community Photo Collections Michael Goesele, et al, ICCV 2007 Venus de Milo The Digital Michelangelo Project, Stanford How to sense 3D very accurately? How to sense
More information3D Sensing and Reconstruction Readings: Ch 12: , Ch 13: ,
3D Sensing and Reconstruction Readings: Ch 12: 12.5-6, Ch 13: 13.1-3, 13.9.4 Perspective Geometry Camera Model Stereo Triangulation 3D Reconstruction by Space Carving 3D Shape from X means getting 3D coordinates
More informationStructured Light II. Thanks to Ronen Gvili, Szymon Rusinkiewicz and Maks Ovsjanikov
Structured Light II Johannes Köhler Johannes.koehler@dfki.de Thanks to Ronen Gvili, Szymon Rusinkiewicz and Maks Ovsjanikov Introduction Previous lecture: Structured Light I Active Scanning Camera/emitter
More informationDense 3D Reconstruction. Christiano Gava
Dense 3D Reconstruction Christiano Gava christiano.gava@dfki.de Outline Previous lecture: structure and motion II Structure and motion loop Triangulation Today: dense 3D reconstruction The matching problem
More informationFundamentals of Stereo Vision Michael Bleyer LVA Stereo Vision
Fundamentals of Stereo Vision Michael Bleyer LVA Stereo Vision What Happened Last Time? Human 3D perception (3D cinema) Computational stereo Intuitive explanation of what is meant by disparity Stereo matching
More informationStereo Vision. MAN-522 Computer Vision
Stereo Vision MAN-522 Computer Vision What is the goal of stereo vision? The recovery of the 3D structure of a scene using two or more images of the 3D scene, each acquired from a different viewpoint in
More informationSensing Deforming and Moving Objects with Commercial Off the Shelf Hardware
Sensing Deforming and Moving Objects with Commercial Off the Shelf Hardware This work supported by: Philip Fong Florian Buron Stanford University Motivational Applications Human tissue modeling for surgical
More informationAgenda. DLP 3D scanning Introduction DLP 3D scanning SDK Introduction Advance features for existing SDK
Agenda DLP 3D scanning Introduction DLP 3D scanning SDK Introduction Advance features for existing SDK Increasing scanning speed from 20Hz to 400Hz Improve the lost point cloud 3D Machine Vision Applications:
More informationToday. Stereo (two view) reconstruction. Multiview geometry. Today. Multiview geometry. Computational Photography
Computational Photography Matthias Zwicker University of Bern Fall 2009 Today From 2D to 3D using multiple views Introduction Geometry of two views Stereo matching Other applications Multiview geometry
More informationActive Stereo Vision. COMP 4900D Winter 2012 Gerhard Roth
Active Stereo Vision COMP 4900D Winter 2012 Gerhard Roth Why active sensors? Project our own texture using light (usually laser) This simplifies correspondence problem (much easier) Pluses Can handle different
More informationDense 3D Reconstruction. Christiano Gava
Dense 3D Reconstruction Christiano Gava christiano.gava@dfki.de Outline Previous lecture: structure and motion II Structure and motion loop Triangulation Wide baseline matching (SIFT) Today: dense 3D reconstruction
More informationThere are many cues in monocular vision which suggests that vision in stereo starts very early from two similar 2D images. Lets see a few...
STEREO VISION The slides are from several sources through James Hays (Brown); Srinivasa Narasimhan (CMU); Silvio Savarese (U. of Michigan); Bill Freeman and Antonio Torralba (MIT), including their own
More informationIntroduction to 3D Machine Vision
Introduction to 3D Machine Vision 1 Many methods for 3D machine vision Use Triangulation (Geometry) to Determine the Depth of an Object By Different Methods: Single Line Laser Scan Stereo Triangulation
More information3D Computer Vision. Structured Light II. Prof. Didier Stricker. Kaiserlautern University.
3D Computer Vision Structured Light II Prof. Didier Stricker Kaiserlautern University http://ags.cs.uni-kl.de/ DFKI Deutsches Forschungszentrum für Künstliche Intelligenz http://av.dfki.de 1 Introduction
More information3D Photography: Active Ranging, Structured Light, ICP
3D Photography: Active Ranging, Structured Light, ICP Kalin Kolev, Marc Pollefeys Spring 2013 http://cvg.ethz.ch/teaching/2013spring/3dphoto/ Schedule (tentative) Feb 18 Feb 25 Mar 4 Mar 11 Mar 18 Mar
More informationDepth. Common Classification Tasks. Example: AlexNet. Another Example: Inception. Another Example: Inception. Depth
Common Classification Tasks Recognition of individual objects/faces Analyze object-specific features (e.g., key points) Train with images from different viewing angles Recognition of object classes Analyze
More informationCamera Calibration. Schedule. Jesus J Caban. Note: You have until next Monday to let me know. ! Today:! Camera calibration
Camera Calibration Jesus J Caban Schedule! Today:! Camera calibration! Wednesday:! Lecture: Motion & Optical Flow! Monday:! Lecture: Medical Imaging! Final presentations:! Nov 29 th : W. Griffin! Dec 1
More informationMultiple View Geometry
Multiple View Geometry Martin Quinn with a lot of slides stolen from Steve Seitz and Jianbo Shi 15-463: Computational Photography Alexei Efros, CMU, Fall 2007 Our Goal The Plenoptic Function P(θ,φ,λ,t,V
More informationENGN D Photography / Spring 2018 / SYLLABUS
ENGN 2502 3D Photography / Spring 2018 / SYLLABUS Description of the proposed course Over the last decade digital photography has entered the mainstream with inexpensive, miniaturized cameras routinely
More informationComputer Vision cmput 428/615
Computer Vision cmput 428/615 Basic 2D and 3D geometry and Camera models Martin Jagersand The equation of projection Intuitively: How do we develop a consistent mathematical framework for projection calculations?
More informationRecap from Previous Lecture
Recap from Previous Lecture Tone Mapping Preserve local contrast or detail at the expense of large scale contrast. Changing the brightness within objects or surfaces unequally leads to halos. We are now
More informationEpipolar Geometry Prof. D. Stricker. With slides from A. Zisserman, S. Lazebnik, Seitz
Epipolar Geometry Prof. D. Stricker With slides from A. Zisserman, S. Lazebnik, Seitz 1 Outline 1. Short introduction: points and lines 2. Two views geometry: Epipolar geometry Relation point/line in two
More informationTwo-view geometry Computer Vision Spring 2018, Lecture 10
Two-view geometry http://www.cs.cmu.edu/~16385/ 16-385 Computer Vision Spring 2018, Lecture 10 Course announcements Homework 2 is due on February 23 rd. - Any questions about the homework? - How many of
More informationRange Imaging Through Triangulation. Range Imaging Through Triangulation. Range Imaging Through Triangulation. Range Imaging Through Triangulation
Obviously, this is a very slow process and not suitable for dynamic scenes. To speed things up, we can use a laser that projects a vertical line of light onto the scene. This laser rotates around its vertical
More informationSurround Structured Lighting for Full Object Scanning
Surround Structured Lighting for Full Object Scanning Douglas Lanman, Daniel Crispell, and Gabriel Taubin Brown University, Dept. of Engineering August 21, 2007 1 Outline Introduction and Related Work
More informationA 3D Pattern for Post Estimation for Object Capture
A 3D Pattern for Post Estimation for Object Capture Lei Wang, Cindy Grimm, and Robert Pless Department of Computer Science and Engineering Washington University One Brookings Drive, St. Louis, MO, 63130
More informationBIL Computer Vision Apr 16, 2014
BIL 719 - Computer Vision Apr 16, 2014 Binocular Stereo (cont d.), Structure from Motion Aykut Erdem Dept. of Computer Engineering Hacettepe University Slide credit: S. Lazebnik Basic stereo matching algorithm
More informationGeometric camera models and calibration
Geometric camera models and calibration http://graphics.cs.cmu.edu/courses/15-463 15-463, 15-663, 15-862 Computational Photography Fall 2018, Lecture 13 Course announcements Homework 3 is out. - Due October
More informationStep-by-Step Model Buidling
Step-by-Step Model Buidling Review Feature selection Feature selection Feature correspondence Camera Calibration Euclidean Reconstruction Landing Augmented Reality Vision Based Control Sparse Structure
More informationMetrology and Sensing
Metrology and Sensing Lecture 4: Fringe projection 2016-11-08 Herbert Gross Winter term 2016 www.iap.uni-jena.de 2 Preliminary Schedule No Date Subject Detailed Content 1 18.10. Introduction Introduction,
More informationENGN 2911 I: 3D Photography and Geometry Processing Assignment 2: Structured Light for 3D Scanning
ENGN 2911 I: 3D Photography and Geometry Processing Assignment 2: Structured Light for 3D Scanning Instructor: Gabriel Taubin Assignment written by: Douglas Lanman 26 February 2009 Figure 1: Structured
More informationMERGING POINT CLOUDS FROM MULTIPLE KINECTS. Nishant Rai 13th July, 2016 CARIS Lab University of British Columbia
MERGING POINT CLOUDS FROM MULTIPLE KINECTS Nishant Rai 13th July, 2016 CARIS Lab University of British Columbia Introduction What do we want to do? : Use information (point clouds) from multiple (2+) Kinects
More informationMetrology and Sensing
Metrology and Sensing Lecture 4: Fringe projection 2018-11-09 Herbert Gross Winter term 2018 www.iap.uni-jena.de 2 Schedule Optical Metrology and Sensing 2018 No Date Subject Detailed Content 1 16.10.
More informationMultiview Stereo COSC450. Lecture 8
Multiview Stereo COSC450 Lecture 8 Stereo Vision So Far Stereo and epipolar geometry Fundamental matrix captures geometry 8-point algorithm Essential matrix with calibrated cameras 5-point algorithm Intersect
More informationStereo. Outline. Multiple views 3/29/2017. Thurs Mar 30 Kristen Grauman UT Austin. Multi-view geometry, matching, invariant features, stereo vision
Stereo Thurs Mar 30 Kristen Grauman UT Austin Outline Last time: Human stereopsis Epipolar geometry and the epipolar constraint Case example with parallel optical axes General case with calibrated cameras
More informationProjector Calibration for Pattern Projection Systems
Projector Calibration for Pattern Projection Systems I. Din *1, H. Anwar 2, I. Syed 1, H. Zafar 3, L. Hasan 3 1 Department of Electronics Engineering, Incheon National University, Incheon, South Korea.
More informationImage Guided Phase Unwrapping for Real Time 3D Scanning
Image Guided Phase Unwrapping for Real Time 3D Scanning Thilo Borgmann, Michael Tok and Thomas Sikora Communication Systems Group Technische Universität Berlin Abstract 3D reconstructions produced by active
More informationMulti-Projector Display with Continuous Self-Calibration
Multi-Projector Display with Continuous Self-Calibration Jin Zhou Liang Wang Amir Akbarzadeh Ruigang Yang Graphics and Vision Technology Lab (GRAVITY Lab) Center for Visualization and Virtual Environments,
More information3D Photography: Stereo
3D Photography: Stereo Marc Pollefeys, Torsten Sattler Spring 2016 http://www.cvg.ethz.ch/teaching/3dvision/ 3D Modeling with Depth Sensors Today s class Obtaining depth maps / range images unstructured
More informationarxiv: v1 [cs.cv] 28 Sep 2018
Camera Pose Estimation from Sequence of Calibrated Images arxiv:1809.11066v1 [cs.cv] 28 Sep 2018 Jacek Komorowski 1 and Przemyslaw Rokita 2 1 Maria Curie-Sklodowska University, Institute of Computer Science,
More informationStereo Wrap + Motion. Computer Vision I. CSE252A Lecture 17
Stereo Wrap + Motion CSE252A Lecture 17 Some Issues Ambiguity Window size Window shape Lighting Half occluded regions Problem of Occlusion Stereo Constraints CONSTRAINT BRIEF DESCRIPTION 1-D Epipolar Search
More informationMosaics. Today s Readings
Mosaics VR Seattle: http://www.vrseattle.com/ Full screen panoramas (cubic): http://www.panoramas.dk/ Mars: http://www.panoramas.dk/fullscreen3/f2_mars97.html Today s Readings Szeliski and Shum paper (sections
More informationBinocular stereo. Given a calibrated binocular stereo pair, fuse it to produce a depth image. Where does the depth information come from?