Visual Perception Sensors


G. Glaser Visual Perception Sensors 1 / 27 MIN Faculty, Department of Informatics. Visual Perception Sensors: Depth Determination. Gerrit Glaser, University of Hamburg, Faculty of Mathematics, Informatics and Natural Sciences, Department of Informatics. Technical Aspects of Multimodal Systems, November 13, 2017

G. Glaser Visual Perception Sensors 2 / 27 Table of Contents 1. Motivation: Camera 2. Triangulation Approaches: Stereoscopic Cameras, Binary Projection, Microsoft Kinect 3. Time of Flight Approaches: Depth Camera, Kinect V2, LIDAR 4. Conclusion

G. Glaser Visual Perception Sensors 3 / 27 Motivation for Visual Perception in Robotics basic question for mobile robotics: Where am I? autonomous movement through unknown terrain; scan the environment for obstacles; measure distances to the surroundings. Possible solution: add visual perception sensors to allow robots to see their environment.

G. Glaser Visual Perception Sensors 4 / 27 Camera an image is a projection of the 3D world, which leads to a loss of depth data; depth can be estimated through the known size of an object and the size of the object in the image (see the sketch below); error-prone, even in human visual perception; not applicable outside of known surroundings; passive approach. Stump in Sequoia National Park. [1, p. 529, fig. 2]
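A minimal sketch of this size-based depth estimate under the pinhole camera model; the focal length and object sizes below are illustrative assumptions, not values from the talk:

```python
# Pinhole model: apparent size h (px) of an object of real height H (m)
# at depth Z satisfies h / f = H / Z, so Z = f * H / h.

def depth_from_known_size(focal_length_px: float,
                          real_height_m: float,
                          image_height_px: float) -> float:
    """Estimate depth from the known real size of an object."""
    return focal_length_px * real_height_m / image_height_px

# Example: a 1.8 m tall person appearing 300 px tall with f = 1000 px
# is estimated at 6 m; any error in the assumed real height propagates
# proportionally into the depth, which is why this is error-prone.
print(depth_from_known_size(1000.0, 1.8, 300.0))  # -> 6.0
```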

G. Glaser Visual Perception Sensors 5 / 27 Triangulation Approaches compute a point's position from a known distance (the baseline) and measured angles. Triangulation. [7, p. 19, fig. 1] Triangulation Calculation. [7, p. 20, fig. 1]
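A short worked version of this calculation, assuming the two angles are measured between the baseline and the lines of sight at either end of it (law of sines); the baseline and angles are illustrative:

```python
import math

def triangulate_range(baseline_m: float, alpha: float, beta: float) -> float:
    """Perpendicular distance from the baseline to the observed point."""
    # The triangle's angles sum to pi, so the apex angle is pi - alpha - beta;
    # the height of the triangle over the baseline follows from the sine rule.
    return baseline_m * math.sin(alpha) * math.sin(beta) / math.sin(alpha + beta)

# Example with an assumed 0.5 m baseline and two 60-degree sightings:
print(triangulate_range(0.5, math.radians(60), math.radians(60)))  # ~0.433 m
```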

G. Glaser Visual Perception Sensors 6 / 27 Stereoscopic Cameras one camera is not sufficient for meaningful depth measurements; use a second camera to recover the lost dimension; triangulate the distance from the known baseline between the cameras, corresponding points, and the measured angles; passive approach. Rotated stereo-camera rig and a Kinect. [6, p. 5, fig. 1.2]
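For rectified stereo images, this triangulation reduces to the standard relation Z = f * b / d between depth, focal length, baseline, and disparity; a minimal sketch with assumed, illustrative values:

```python
def stereo_depth(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth of a point from its disparity between two rectified images."""
    return focal_px * baseline_m / disparity_px

# A 10 px disparity with f = 700 px and a 12 cm baseline puts the point
# at 8.4 m; far points yield small disparities, so their depth is noisy.
print(stereo_depth(700.0, 0.12, 10.0))  # -> 8.4
```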

G. Glaser Visual Perception Sensors 7 / 27 Stereoscopic Cameras Problems identification of corresponding points in both images (see the matching sketch below); occlusion; computationally expensive; depends on illumination; cameras need to be synchronized. Stereo-Camera example. [5, p. 38, fig. 2.1]
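A sketch of why correspondence search is expensive: a naive sum-of-absolute-differences (SAD) block matcher, shown here under the assumption of rectified grayscale images, compares every candidate disparity for every pixel:

```python
import numpy as np

def sad_disparity(left: np.ndarray, right: np.ndarray,
                  max_disp: int = 32, win: int = 5) -> np.ndarray:
    """Naive SAD block matching over two rectified grayscale images."""
    h, w = left.shape
    r = win // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(r, h - r):
        for x in range(r + max_disp, w - r):
            patch = left[y - r:y + r + 1, x - r:x + r + 1]
            costs = [np.abs(patch - right[y - r:y + r + 1,
                                          x - d - r:x - d + r + 1]).sum()
                     for d in range(max_disp)]
            disp[y, x] = int(np.argmin(costs))  # best-matching disparity
    return disp

# Each pixel evaluates max_disp windows of win*win pixels, i.e. roughly
# O(h * w * max_disp * win^2) operations per image pair.
```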

G. Glaser Visual Perception Sensors 8 / 27 Structured-Light project additional information onto the object to allow recovery of the lost depth dimension; several different approaches: time multiplexing, spatial multiplexing, wavelength multiplexing

G. Glaser Visual Perception Sensors 9 / 27 Binary Projection one camera, one projector; several passes required; deformation of the projected lines as a measure of depth; time multiplexing; active approach. Binary projection. [7, p. 30, fig. 1] Binary projection at different times t. [7, p. 33, fig. 1]

G. Glaser Visual Perception Sensors 10 / 27 Binary Projection Problems frames are taken at different points in time, so time multiplexing is not applicable to moving objects; points directly on stripe edges are uncertain; solution: Gray code patterns (see the decoding sketch below). Gray code projection at different times t. [7, p. 33, fig. 1]
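A minimal sketch of the Gray code idea: each projector column is encoded by the bit sequence it shows over the projected frames, and neighbouring columns differ in only one bit, so a pixel straddling a stripe edge is off by at most one column. The bit sequence in the example is an assumed observation, not data from the talk:

```python
def binary_to_gray(n: int) -> int:
    """Gray code of n: adjacent integers differ in exactly one bit."""
    return n ^ (n >> 1)

def gray_to_binary(g: int) -> int:
    """Invert the Gray code to recover the column index."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

# A camera pixel that observed the bits [1, 0, 1, 1] over four frames
# (most significant bit first) decodes to projector column 13:
bits = [1, 0, 1, 1]
gray_value = int("".join(map(str, bits)), 2)
print(gray_to_binary(gray_value))  # -> 13
```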

G. Glaser Visual Perception Sensors 11 / 27 Microsoft Kinect spatial multiplexing; USB 2.0; RGB camera: 30 fps @ 640x480 px; depth image: 30 fps @ 320x240 px; practical range [4]: 0.8-4.5 m in default mode, 0.4-3 m in near mode. Microsoft Kinect. [3, p. 2, fig. 1-1]

G. Glaser Visual Perception Sensors 12 / 27 Microsoft Kinect IR Laser Emitter projects a pseudo-random, noise-like pattern; 830 nm wavelength; the laser is heated/cooled to maintain this wavelength; 70 mW output power; eye safety through scattering. Projected IR pattern. [3, p. 12, fig. 2-2]

G. Glaser Visual Perception Sensors 13 / 27 Microsoft Kinect Depth Image the IR camera image is compared to the known pattern; disturbances can be used to calculate distances; distances are visualized as depth images: red areas are close, blue areas are further away, black areas carry no depth information. Depth image and corresponding RGB image. [3, p. 9, fig. 1-3]

G. Glaser Visual Perception Sensors 14 / 27 Microsoft Kinect Problems overexposure of the IR camera by sunlight (only usable indoors) or by reflecting surfaces; only close range, distances limited by the laser output; translucent objects not measurable; latency of ~100 ms [4]; active approach, not easy to scale out: multiple devices interfere through their projected patterns

G. Glaser Visual Perception Sensors 15 / 27 Triangulation Approaches Conclusion Stereo Cameras good for calculating depths of distinct markers, otherwise computationally expensive; works indoors and outdoors; completely passive, scaling out is possible without problems

G. Glaser Visual Perception Sensors 16 / 27 Triangulation Approaches Conclusion Structured-Light Cameras all approaches have trouble measuring reflecting or transparent objects; time multiplexing: depth calculation for the whole field of vision, but only for stationary objects; spatial multiplexing (Kinect): computation done by hardware, fairly complete depth map, but occluded areas and too-close or too-far points are missing; wavelength multiplexing: depth calculation from a single photo, but only low spatial resolution achievable

G. Glaser Visual Perception Sensors 17 / 27 Time of Flight Approaches actively send out a signal; measure the time t until the reflection returns; for light: d = c · t / 2, with c = 299,792,458 m/s. Simple ToF measurement. [8, p. 28, fig. 1.14]
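A worked instance of this round-trip relation; the echo time is an illustrative assumption:

```python
C = 299_792_458.0  # speed of light in m/s

def tof_distance(round_trip_s: float) -> float:
    """d = c * t / 2: the light travels out and back, hence the halving."""
    return C * round_trip_s / 2.0

# An echo after 20 ns corresponds to roughly 3 m; centimetre accuracy
# therefore requires timing resolution well below a nanosecond.
print(tof_distance(20e-9))  # ~2.998 m
```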

G. Glaser Visual Perception Sensors 18 / 27 Depth Camera active approach; TX illuminates the whole scene with an array of IR emitters; RX is a grid of ToF receivers; commonly used: sinusoidal modulation of the emitted light; measure at which point in time (phase) the emitted signal returns and calculate the distance through ToF (see the sketch below). MESA Imaging SR4000, IR emitters. [8, p. 32, fig. 1.16]
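A sketch of how a sinusoidally modulated ToF pixel turns phase into distance, assuming the common four-sample ("four-bucket") demodulation at 0°, 90°, 180° and 270° phase offsets; the modulation frequency, sample values, and sign convention are assumptions for illustration:

```python
import math

C = 299_792_458.0  # speed of light in m/s

def cw_tof_distance(a0: float, a1: float, a2: float, a3: float,
                    f_mod_hz: float) -> float:
    """Distance from four phase-offset samples of the returned signal."""
    phase = math.atan2(a3 - a1, a0 - a2) % (2 * math.pi)  # echo phase shift
    return C * phase / (4 * math.pi * f_mod_hz)           # d = c*phi/(4*pi*f)

# With 30 MHz modulation the unambiguous range is c / (2 * f_mod) ~ 5 m;
# larger distances wrap around, one reason such cameras have a bounded range.
print(cw_tof_distance(1.0, 0.5, 0.0, 0.5, 30e6))  # phase 0 -> 0 m
```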

G. Glaser Visual Perception Sensors 19 / 27 Depth Camera Problems hardware restrictions: IR emitters and ToF receivers sit at different positions, so a central emitter is simulated to avoid occlusion effects; falsification of measurements through multipath propagation: point B will measure a combination of two distances; accurate time measurement required. Pattern of IR emitters to avoid occlusion. [8, p. 34, fig. 1.17] Multipath phenomenon. [8, p. 104, fig. 3.16]

G. Glaser Visual Perception Sensors 20 / 27 Kinect V2 depth image: 50 fps @ 512x424 px; range 0.5-8 m [4]; latency of ~50 ms [4]; square wave modulation; differential pixel array: each pixel switches with the square wave and stores the returned light, and the difference is used to compute distances; high volume of data requires USB 3.0. Kinect V2. [4, p. 6, fig. 1-5]

G. Glaser Visual Perception Sensors 21 / 27 LIDAR Light Detection And Ranging sends out a single laser beam; ToF is used to calculate the distance; single point sampling; mirrors rotate the laser beam to scan a line of points; an additional rotation makes it possible to scan an area instead of a line (see the conversion sketch below). Simple ToF measurement. [8, p. 28, fig. 1.14] Point clouds created by rotated line scanners. [2, p. 46, fig. 2.21]
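A minimal sketch of how a rotating line scanner's raw readings (one range plus the two rotation angles) become 3D points: a plain spherical-to-Cartesian conversion, with illustrative sample values:

```python
import math

def lidar_point(range_m: float, yaw: float, pitch: float):
    """Convert one range reading and the mirror/head angles to (x, y, z)."""
    x = range_m * math.cos(pitch) * math.cos(yaw)
    y = range_m * math.cos(pitch) * math.sin(yaw)
    z = range_m * math.sin(pitch)
    return (x, y, z)

# One scan line sweeps the yaw angle; rotating the whole scanner sweeps
# pitch, which extends a line of points to a full area scan.
print(lidar_point(10.0, math.radians(45), math.radians(10)))
```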

G. Glaser Visual Perception Sensors 22 / 27 LIDAR Problems loss of spatial resolution with increasing measurement distance; transparent objects cannot be measured; mechanical moving parts

G. Glaser Visual Perception Sensors 23 / 27 Time-of-Flight Conclusion high laser outputs possible; high measurement range; sunlight can be compensated; high sampling rates possible; dynamic measurement range: short and long distances can be measured together

G. Glaser Visual Perception Sensors 24 / 27 Conclusion Required Ambient Lighting structured-light approaches require dark surroundings; they are often used for optical measurement and inspection in industrial robotics, where very precise measurements are possible; LIDAR can be built for outdoor usage; other active approaches are falsified or annulled by direct sunlight

G. Glaser Visual Perception Sensors 25 / 27 Conclusion Computational Costs active approaches: distance calculation mostly handled by hardware; stereoscopic cameras are expensive: matching points have to be calculated across both images

G. Glaser Visual Perception Sensors 26 / 27 Conclusion Moving Objects depth cameras and spatial-multiplexing structured light are well suited: they record the whole scene at a single point in time; binary projection is not usable, since its time encoding spans several frames; LIDAR suitability depends on the sampling rate and the object's movement. Comparison table: Stereo Camera, Binary Projection and Kinect (Triangulation) versus Kinect V2 and LIDAR (Time-of-Flight), rated on outdoor usability, complete depth map, passive, scale out, moving parts, and cheap.

G. Glaser Visual Perception Sensors 27 / 27 References
[1] Leonie Dreschler-Fischer. Interactive Visual Computing, lecture WS 2012. Handouts, 2012.
[2] Joachim Hertzberg, Kai Lingemann, and Andreas Nüchter. Mobile Roboter. Springer, Berlin, 2009. ISBN 978-3-642-01726-1.
[3] Jeff Kramer, Matt Parker, Daniel Herrera C., Nicolas Burrus, and Florian Echtler. Hacking the Kinect. Apress, New York, 2012. ISBN 978-1-4302-3868-3.
[4] Mansib Rahman. Beginning Microsoft Kinect for Windows SDK 2.0: Motion and Depth Sensing for Natural User Interfaces. Apress, Berkeley, CA, 2017. ISBN 978-1-4842-2316-1.
[5] Stephen Se and Nick Pears. Passive 3D Imaging, pages 35-94. Springer London, London, 2012. ISBN 978-1-4471-4063-4. DOI: 10.1007/978-1-4471-4063-4_2. URL https://doi.org/10.1007/978-1-4471-4063-4_2.
[6] Jan Smisek, Michal Jancosek, and Tomas Pajdla. 3D with Kinect, pages 3-25. Springer London, London, 2013. ISBN 978-1-4471-4640-7. DOI: 10.1007/978-1-4471-4640-7_1. URL https://doi.org/10.1007/978-1-4471-4640-7_1.
[7] Frank Steinicke. User Interface Software & Technology, lecture WS 2014. Chapter 5: 3D Capturing Systems (slides by Susanne Schmidt), 2014.
[8] Pietro Zanuttigh, Giulio Marin, Carlo Dal Mutto, Fabio Dominio, Ludovico Minto, and Guido Maria Cortelazzo. Time-of-Flight and Structured Light Depth Cameras: Technology and Applications. Springer, Switzerland, 2016. ISBN 978-3-319-30973-6.