Automatic 3-D Model Acquisition from Range Images. Michael K. Reed and Peter K. Allen, Computer Science Department, Columbia University


Introduction. Objective: given an arbitrary object or scene, construct a representative CAD model. Method: rangefinding sensor, solid modeling, and view planning.

Motivation: Applications. Virtual environment generation: acquire a model for use in VRML, entertainment, etc. Reverse engineering: acquire a model from a part for copying or modification. Part inspection: compare the acquired model to an acceptable model. 3D FAX: transmit the acquired model to a remote rapid-prototyping (RP) machine. Architectural site modeling.

Talk Outline: introduction, motivation, & background; solid modeling of range images; sensor viewpoint planning; conclusions & future research.

An Introduction to a Solution. How do people solve this problem? An algorithm: acquire data from a viewpoint; model the data (one viewpoint); integrate with the previous model; check whether the model is done; if not, plan the next view and repeat.
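A minimal sketch of this loop in Python, assuming hypothetical helpers (acquire_range_image, build_solid_from_range_image, intersect_solids, model_is_complete, plan_next_view) that stand in for the sensing, modeling, integration, and planning stages described on the later slides:

```python
# Sketch of the incremental acquire/model/integrate/plan loop.
def acquire_model(first_viewpoint, max_views=12):
    composite = None                      # None = "entire workspace", no data yet
    viewpoint = first_viewpoint
    for _ in range(max_views):
        image = acquire_range_image(viewpoint)            # acquire data from viewpoint
        solid = build_solid_from_range_image(image)       # model data (one viewpoint)
        composite = solid if composite is None else \
            intersect_solids(composite, solid)            # integrate with previous model
        if model_is_complete(composite):                  # done?
            break
        viewpoint = plan_next_view(composite)             # plan next view
    return composite
```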

Previous Work. Computer Vision: oriented towards the recognition task. Computer Graphics: oriented towards the recovery task. [Dickinson et al. 1994] [Wu & Levine 1994] [Thompson et al. 1996] [Hoppe 1994] [Fua 1992] [Connolly 1989] [Tarbox & Gottschlich 1995]. Meshes & zippering: [Turk 1994] [Rutishauser 1994]. Meshes & iso-surface extraction: [Curless 1996].

Previous Work: Issues. Final model is not closed; single object only; shape and topological restrictions; assumed complete surface sampling; no modeling of scene occlusion.

Modeling Desiderata: no topological constraints on the object or scene; resulting model is watertight; incremental model improvement; model acquisition in few views; ability to support sensor viewpoint planning. Our method attempts to deal with each of these issues.

Modeling with Mesh Surfaces. Advantages: able to accurately model many shapes; effective at varying resolutions; simple!

Modeling with Mesh Surfaces. Integration: find the overlap between two meshes, then clip & retriangulate.

Modeling with Mesh Surfaces. Issues: not closed until the surface is completely imaged; no representation of space occluded from the sensor; integration of non-manifold surfaces is not well-defined. Our solution: use both mesh surface & volume! Build a solid model from each range image, represent both the imaged surface & the occluded volume, and integrate using set intersection.

Sensor Model. The rangefinder acts as a point light source in a plane; motion of the rangefinder is linear, perpendicular to the scanning plane.

Sensor Imaging Characteristics. Range image: rectangular sampling. The image has structure: surface elements and occlusion elements.

Constructing the Solid. Sweep the mesh surface M to form a solid S by: $S = \bigcup_{m \in M} \mathrm{extrude}(m)$.
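As a rough illustration of the sweep, the sketch below extrudes each mesh element along the sensing direction and unions the resulting prisms; extrude_prism and union_solids are assumed solid-modeling helpers, not an actual API:

```python
# Sketch: each mesh element m is extruded away from the sensor and the
# resulting prisms are unioned into the solid S.
def sweep_to_solid(mesh_triangles, sweep_dir, sweep_length):
    prisms = []
    for tri in mesh_triangles:            # tri = three (x, y, z) vertices
        far = [tuple(v[i] + sweep_dir[i] * sweep_length for i in range(3))
               for v in tri]
        prisms.append(extrude_prism(tri, far))   # solid bounded by near and far faces
    return union_solids(prisms)                  # S = union of extrude(m) over m in M
```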

Surface Type Attributes. Surfaces in the model are tagged: "imaged" for those directly imaged by the sensor, "occlusion" for those due to scene occlusion.

Merging Models from Different Views. Each model is a solid, therefore we may use regularized set intersection to integrate: initialize the composite model to the entire workspace; acquire a new solid model; update the composite model by set intersection with the new model. Assumption: the set described by each model must be a superset of the actual imaged object.
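A minimal sketch of that merging step, assuming a hypothetical regularized_intersection operation on solids:

```python
# Integrate per-view solids by regularized set intersection; valid because
# each per-view solid is a superset of the actual object.
def merge_views(view_solids, workspace):
    composite = workspace                          # initialize to the entire workspace
    for solid in view_solids:                      # each newly acquired solid model
        composite = regularized_intersection(composite, solid)
    return composite
```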

Experimental Setup. Sensor: Servo-Robot laser rangefinder. Motion: IBM 7575 SCARA robot and turntable.

Movie!!! cvpr.lnk

Example: Hip Replacement

Example: Hip Replacement

Example: Video Game Controller

Example: Video Game Controller

Example: Propeller Blade

Example: Propeller Blade

Example: Toy Bear in 4 Views

Example: Toy Bear in 4 Views

Effects of Sampling on Integration. Violation of the sampling theorem: mesh elements do not necessarily correspond to object surfaces! (Figure: rangefinder, object, and background.)

Effects of Sampling on Integration Surfaces requiring dilation may be identified by their surface type attribute
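One possible reading of this step as a sketch (the grow_surface helper and the choice to dilate only occlusion-tagged elements before the sweep are assumptions, not the exact procedure from the talk):

```python
# Grow only the surfaces whose type attribute marks them as occlusion elements,
# so the per-view solid remains a superset of the object despite undersampling.
# grow_surface is a hypothetical geometric dilation helper.
def dilate_suspect_surfaces(surfaces, amount):
    return [grow_surface(s, amount) if s["type"] == "occlusion" else s
            for s in surfaces]
```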

Effects of Sampling on Integration. Results: (figures) without dilation vs. with dilation.

Integration of View Planning. Why plan sensor viewpoints? To ensure adequate model acquisition and to reduce the number of sensing operations. How does one determine the next view? One must represent what has not been sensed and determine a viewpoint that will improve the model.

Previous Work. Planning in unknown environments: [Connolly 1989] [Maver & Bajcsy 1990] [Whaite & Ferrie 1992] [Kutulakos 1994] [Pito 1997]. Planning in known environments: [Sedas-Gersey 1993] [Abrams 1997] [Tarabanis et al. 1995].

Strategy in Viewpoint Planning. At what point should planning be done? At the outset? After some initial model? Never? (Plot: area of "occlusion" surface vs. number of views.)

Integration of View Planning. Caveats for methods in unknown environments: scene self-occlusion is not considered; they use brute-force raycasting; discrete solutions only. Our approach: acquire a preliminary model first, then plan; plan to acquire occlusion surfaces (targets); apply static sensor planning methods; solutions (plans) are in continuous space.

Sensor Viewpoint Planning. Our approach is target driven: it computes visibility for a target using the composite model; it operates in continuous space, so solutions are 3-D volumes; targets are selected from model surfaces labeled "occlusion".

Viewpoint Planning: Method. Determine constraints on the sensor position: sensor imaging constraints, model occlusion constraints, and sensor placement constraints. Integrate the constraints to form a plan. Discretize as a final step (if necessary).

Sensor Imaging Constraints. Describe the limits of the sensor's ability to acquire data. Rangefinder example: breakdown angle α and standoff.
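A hedged sketch of how such a constraint might be tested for a single target point; the specific form (range within a standoff interval, grazing angle below the breakdown angle) is an illustrative assumption:

```python
import math

# Test a candidate sensor position against illustrative rangefinder limits:
# range must lie within the standoff interval and the angle between the unit
# surface normal and the direction to the sensor must stay below the
# breakdown angle.
def satisfies_imaging_constraint(sensor_pos, target_pt, unit_normal,
                                 breakdown_deg, min_standoff, max_standoff):
    to_sensor = [s - t for s, t in zip(sensor_pos, target_pt)]
    dist = math.sqrt(sum(c * c for c in to_sensor))
    if not (min_standoff <= dist <= max_standoff):
        return False
    cos_angle = sum(n * c for n, c in zip(unit_normal, to_sensor)) / dist
    return cos_angle >= math.cos(math.radians(breakdown_deg))
```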

Model Occlusion Constraints. Describe the space occluded from the target. Must be calculated for all occluding model surfaces! (Figure: occlusion constraint volume, occluding surface, target surface.)

Sensor Placement Constraints Describe workspace positions available to sensor Usually a function of manipulator type

Constraint Integration. Constraints are represented using solid modeling primitives. The plan is calculated by: (imaging constraints − occlusion constraints) ∩ placement constraints.
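A sketch of that calculation with hypothetical regularized set operations on solids (subtract_solids, intersect_solids):

```python
# plan = (imaging constraints - occlusion constraints) ∩ placement constraints
def compute_plan(imaging_volume, occlusion_volumes, placement_volume):
    visibility = imaging_volume
    for occ in occlusion_volumes:                  # space blocked by each occluding surface
        visibility = subtract_solids(visibility, occ)
    return intersect_solids(visibility, placement_volume)
```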

Planning Example Target is green, other model surfaces in red:

Planning Example Compute sensor imaging constraint:

Planning Example Compute model occlusion constraints:

Planning Example Compute visibility volume

Planning Example Compute sensor placement constraint:

Planning Example Intersect placement constraint with visibility volume:

Planning Example Discretize sensor space and test for inclusion:
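A minimal sketch of this discretization step, assuming a hypothetical point_in_solid containment test on the plan volume:

```python
# Sample candidate sensor positions and keep those contained in the plan volume.
def discretize_plan(plan_volume, candidate_positions):
    return [p for p in candidate_positions if point_in_solid(plan_volume, p)]
```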

Example of View Planning: Strut Target: Occluded Surface of Max. Area
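As a sketch of how such a target could be chosen from the composite model (the 'type' and 'area' fields on surfaces are assumptions):

```python
# Pick the occlusion-tagged model surface with the largest area as the target.
def select_target(model_surfaces):
    occluded = [s for s in model_surfaces if s["type"] == "occlusion"]
    return max(occluded, key=lambda s: s["area"]) if occluded else None
```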

Example of View Planning: Strut

Example of View Planning: Strut

Strut: Final Model

Considering Multiple Targets

Including Sensor Placement Constraints

Example: Model City 3 buildings with very high occlusion:

City Model: After 4 Views

City Model: Target Visibility for 30 Largest Occluded Surfaces by Area

City Model: Placement Constraint

City Model: Total Plan

City Model: Final Reconstruction

City Model: Texture Mapped

City Model: Analysis

View   Volume   Surface Area   Occ. Area   Plan Area   % Planned
  1      4712          1571        1571          -           -
  2      1840          1317         942          -           -
  3      1052          1151         590          -           -
  4       432           658         140          -           -
  5       416           656         121         61         50%
  6       404           659         104         28         27%
  7       391           657          90         12         13%
  8       386           647          84          8         10%
  9       382           644          75         15         20%
 10       380           651          62          7         11%
 11       374           622          53         16         30%
 12       370           604          36          9         25%

Efficiency Considerations. Modeling: restricted set union; mesh sweep operation; problem subdivision is set union. Planning: target surface decimation; occlusion constraints calculated only for model surfaces within the imaging constraints volume.

City Model: Analysis

Limitations. Requires surrounding views; no consideration of overlapping sampling in distinct images; planning does not consider partial visibility; requires a calibrated system.

Conclusion. The system presented has the following attributes: it is incremental; it uses a hybrid of both mesh and volume representations; it represents both imaged and unimaged surfaces for planning the next view; and it determines a solid model at each step.

Future Research. Outdoor scene modeling. Reduce the number of full-model intersections: subtraction for some modeling operations; extension to other environments. Optimal viewpoint and target selection: partial visibility. Reduction of the model to canonical form: use symmetry, features, and heuristics; consider common design principles.

Range Scanning Outdoor Structures

Mesh Surface

Scan of Rodin's The Thinker

MOBILE ROBOT FOR SITE MODELING. Outdoor mobile base; centimeter-accuracy GPS system; omni-directional 360° color camera; 80-meter laser scanner; integrated sonar units; wireless Ethernet connection.