Building Reliable 2D Maps from 3D Features

Dipl.-Technoinform. Jens Wettach, Prof. Dr. rer. nat. Karsten Berns
TU Kaiserslautern, Robotics Research Lab (http://rrlab.informatik.uni-kl.de), Geb. 48, Gottlieb-Daimler-Str. 1, 67663 Kaiserslautern; Tel. 0631-2052621; wettach@informatik.uni-kl.de

Abstract

This paper presents a novel approach for creating a reliable map of an arbitrary indoor environment with a mobile robot. The map consists of a topological component representing rooms and doors and a grid map for obstacles within each room. Both maps are built via a SLAM (simultaneous localization and mapping) approach based on distance measurements of two planar laser scanners. In order to get a proper overview of room and furniture primitives even in highly cluttered areas, plane patches are extracted from 3D information provided by a third, rotating laser scanner. These features are used to detect room walls that are invisible to the planar scanners and to find interesting places such as table tops and shelves. Thus the 2D maps are enhanced by 3D features and guide the robot during its main task: environmental exploration and detection of objects of daily use.

Keywords/Relevant Areas: 3D Mapping, Environmental Modeling, Indoor Robotics

1. Introduction

In order to perform reasonable tasks, a mobile robot has to use some kind of environmental representation of its working area for navigation and user interaction. Ideally the robot extracts this map autonomously during exploration of its a priori unknown environment. As the robot needs the map for reliable self-localization, while the continuous map expansion in turn depends on an accurate robot pose, this challenging SLAM problem has been in the focus of research in recent years (see e.g. [1] for an overview). Thus solutions exist for creating 2D and 3D maps ([2], [3]) at different levels of abstraction. On the lower level, grid maps are used to mark occupied and traversable areas and are therefore best suited for local (room-level) path planning and navigation. On the higher level, topological maps represent interesting places as nodes and connections between them as global (building-level) navigation hints.

Consequently, combining both approaches in a hybrid map seems promising as environmental representation for a mobile robot.

Due to their reliability, distance measurements from 2D laser scanners are widely used for robot mapping. As they provide distances to the nearest objects in one plane, this input is used either to fill a 2D grid of cells directly or to extract line features for a geometric map. The corresponding SLAM approaches are very efficient [2], but such maps only represent a sectional view of the complete environment. On the other hand, 3D mapping provides a robust world representation [3], but at the expense of a more complex sensor system and increased consumption of computation time and memory.

In this paper a new approach to environmental representation is described that relies on 2D maps at different levels of abstraction, enhanced by 3D features taken from a rotating laser scanner. First, the applied SLAM approaches and the strategies for extracting grid maps and topological maps are introduced in sections 2 and 3. Section 4 describes the extraction of 3D features for improving the existing maps. The explained strategies are then evaluated in a realistic 3D simulation scenario and in a real office environment (section 5). Section 6 provides a summary of the achieved results and an overview of future research.

2. 2D SLAM Based on Grid Maps

The application scenario for the mapping strategies is the exploration of an indoor environment with the mobile robot Marvin (http://agrosy.informatik.uni-kl.de/roboter/marvin/) in order to detect and record objects of daily use (see [4]). Entities such as books, folders and cups a human might be interested in are localized and remembered by the robot so that it can provide position information on demand or start an active search. Two planar laser scanners with a scan area of 180° are mounted on the robot 10 cm above ground, at its front and rear end respectively. Another scanner, mounted on a rotating unit at a height of about 1 m, provides 3D distance measurements (see fig. 1).

For reliable robot localization a state-of-the-art SLAM strategy is applied that uses a particle filter for multi-hypothesis position tracking based on a 2D grid map [2]. As an efficient implementation of this approach is freely available for research purposes [5], the corresponding code has been integrated with minor adaptations into the existing robot control software. It continuously receives distance measurements from the front scanner and odometry-based robot pose estimates as input, updates a grid map as environmental representation and uses this map to estimate a set of most probable robot poses based on the actual sensor readings. The best pose estimate is then used to correct the initial odometry-based estimate; a minimal sketch of such an update step is given below.
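For illustration only (the paper contains no code), the following Python sketch shows one motion and one sensor update of a particle filter over an occupancy grid. All names, the simple endpoint-counting weight and the noise model are assumptions made for this sketch and do not reproduce the DP-SLAM implementation [5] actually used on the robot.

import numpy as np

def motion_update(particles, odom_delta, noise=(0.02, 0.02, 0.01)):
    """Propagate each particle (x, y, theta) by the odometry increment plus noise."""
    dx, dy, dth = odom_delta
    for p in particles:                      # particles: (N, 3) array, rows are poses
        c, s = np.cos(p[2]), np.sin(p[2])
        p[0] += c * dx - s * dy + np.random.normal(0.0, noise[0])
        p[1] += s * dx + c * dy + np.random.normal(0.0, noise[1])
        p[2] += dth + np.random.normal(0.0, noise[2])

def scan_weight(grid, resolution, pose, ranges):
    """Score a pose by counting scan endpoints that fall onto occupied grid cells."""
    x, y, th = pose
    angles = np.linspace(-np.pi / 2, np.pi / 2, len(ranges)) + th   # 180 deg scanner
    ix = ((x + ranges * np.cos(angles)) / resolution).astype(int)
    iy = ((y + ranges * np.sin(angles)) / resolution).astype(int)
    ok = (ix >= 0) & (ix < grid.shape[0]) & (iy >= 0) & (iy < grid.shape[1])
    return 1e-9 + np.sum(grid[ix[ok], iy[ok]] > 0.5)

def sensor_update(particles, grid, resolution, ranges):
    """Weight particles by the scan match, resample, and return the best pose."""
    w = np.array([scan_weight(grid, resolution, p, ranges) for p in particles])
    w /= w.sum()
    best = particles[int(np.argmax(w))].copy()
    particles[:] = particles[np.random.choice(len(particles), len(particles), p=w)]
    return best                              # used to correct the odometry-based estimate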

The grid map is used in combination with a grid-map-based path planning and navigation system developed at the RRLAB [6].

Figure 1: Robot Marvin (left); typical indoor scenario with furniture in front of walls (right). Annotated features: rotating scanner, kitchenette, fridge, workbench.

3. 2D SLAM Based on Topological Maps

For global exploration and building-level navigation purposes a topological mapping system has been developed [7]. It uses 2D measurements from the front and rear scanner to extract walls and openings (doors) as major features. Based on these, rooms are extracted as rectangular compositions of walls and represent the nodes of the map; connections between rooms form its edges (a sketch of this structure is given at the end of this section). Exploration strategies on different levels enable the robot to extract this map autonomously and to explore all reachable areas. Pose correction during mapping is based on estimating the distances to and orientations of walls and is fused with the grid-map-based pose estimate.

The main advantage of this representation is that the map can directly be used as a navigation graph for guiding the robot from one end of its working space to the other (relying on basic obstacle avoidance capabilities). The main drawback is that the mapping fails when the 2D scanners do not see enough features to extract the room walls, e.g. in highly cluttered areas with many objects (PCs, chairs, tables, dustbins, cabinets) standing on the ground.
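Purely as an illustration of the resulting hybrid data structure (rooms as nodes carrying a local grid map, doors as edges), a minimal sketch follows; all class and field names are hypothetical and not taken from the implementation in [7].

from dataclasses import dataclass, field

@dataclass
class Room:
    name: str
    rect: tuple                 # (x_min, y_min, x_max, y_max) of the rectangular room
    grid: object = None         # per-room occupancy grid for local navigation

@dataclass
class TopologicalMap:
    rooms: dict = field(default_factory=dict)   # room name -> Room
    doors: list = field(default_factory=list)   # (room_a, room_b, door_pose)

    def add_door(self, a, b, pose):
        self.doors.append((a, b, pose))

    def neighbours(self, name):
        """Rooms directly reachable through a door; basis of the navigation graph."""
        return [b if a == name else a for a, b, _ in self.doors if name in (a, b)]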

4. 3D Feature Extraction

In order to improve the topological mapping approach in real-world scenarios, a 3D feature extraction strategy has been developed that is based on the distance measurements of the rotating laser scanner [4]. 3D point clouds are collected via one scan sweep from start to end position (±65°) and assigned to a 3D grid of cells. After scanning, a best-fitting plane is calculated within each cell that contains sufficient samples. The plane fitting uses RANSAC-based parameter estimation [8] and principal component analysis for least-squares error minimization. In order to extract the main environmental features, planes of neighboring cells are fused in a final region-growing step, and planes supported by only a small number of cells are discarded. To get a complete overview of a room, four 3D scans with 90° orientation displacement are used as input for the plane extraction. This way floor, ceiling, walls and table tops are reliably extracted. Even in cluttered areas walls can be detected, because above human height the environmental complexity decreases and the intersections of walls and ceiling can be reliably determined (see fig. 1, right).

Consequently, the calculated planes are grouped into floor, ceiling, walls and others using their orientation and position relative to each other. In detail, the following steps are performed (a code sketch of these steps is given below):

1. Group planes into horizontal ones, vertical ones and others. Criterion: direction of the plane normal.
2. The lowest horizontal plane is the floor, the highest horizontal plane the ceiling (the biggest one is chosen if there are several distinct plane patches with similar distance from the origin). Criterion: plane distance from the origin.
3. Group vertical planes into walls, wall candidates and others. Criterion: distance of the vertical plane from floor and ceiling:
   - vertical planes connected to floor and ceiling are regarded as walls;
   - vertical planes connected to floor or ceiling are regarded as wall candidates;
   - vertical planes without connection to floor or ceiling are parts of chairs, screens etc.
   Connectivity is determined via a distance threshold.

The connection of vertical planes to the ceiling is the strongest criterion for wall detection, as usually no occlusions are present at this height level. Indeed, a large cabinet reaching from floor to ceiling could hide a wall completely, but in this case the corresponding plane would still be interpreted as a wall feature.
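As a minimal Python sketch of this procedure (assuming a plane is stored as a dict with its unit normal and the z-range of its supporting points; thresholds and names are illustrative, not the actual implementation), the per-cell PCA fit and the grouping of steps 1-3 could look as follows.

import numpy as np

def fit_plane_pca(points):
    """Least-squares plane (unit normal n, offset d with n.x + d = 0) through 3D points."""
    centroid = points.mean(axis=0)
    cov = np.cov((points - centroid).T)
    eigval, eigvec = np.linalg.eigh(cov)      # eigenvalues in ascending order
    normal = eigvec[:, 0]                     # direction of smallest variance = plane normal
    return normal, -normal.dot(centroid)

def classify_planes(planes, angle_tol_deg=15.0, connect_tol=0.20):
    """Group planes into floor, ceiling, walls, wall candidates and others (steps 1-3)."""
    cos_tol = np.cos(np.radians(angle_tol_deg))
    sin_tol = np.sin(np.radians(angle_tol_deg))
    # Step 1: split by the direction of the plane normal (z component of the unit normal).
    horizontal = [p for p in planes if abs(p["normal"][2]) > cos_tol]
    vertical   = [p for p in planes if abs(p["normal"][2]) < sin_tol]
    # Step 2: lowest horizontal plane = floor, highest = ceiling
    # (simplified: no tie-breaking by patch size as described in the paper).
    floor   = min(horizontal, key=lambda p: p["zmin"])
    ceiling = max(horizontal, key=lambda p: p["zmax"])
    # Step 3: connectivity of vertical planes to floor/ceiling via a distance threshold.
    walls, candidates, others = [], [], []
    for p in vertical:
        at_floor   = p["zmin"] <= floor["zmax"] + connect_tol
        at_ceiling = p["zmax"] >= ceiling["zmin"] - connect_tol
        if at_floor and at_ceiling:
            walls.append(p)
        elif at_floor or at_ceiling:
            candidates.append(p)
        else:
            others.append(p)                  # e.g. parts of chairs or screens
    return floor, ceiling, walls, candidates, others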

Thus, apart from identifying interesting places for object search, these 3D features are used for augmenting the topological map (section 3). As only the final planes are stored for further processing, the memory consumption remains reasonable.

5. Experiments and Results

The grid-map-based SLAM system, the topological mapping and the 3D plane extraction approaches have been tested within a realistic 3D simulation of an office environment [9] as well as on the real robot. Using grid cells of 3.5 cm size, the particle-filter SLAM algorithm keeps the robot pose error below 10 cm in translation and 10° in orientation. In less complex areas (e.g. hallways), where the topological SLAM system works directly, the position estimate is even better. Fig. 2 shows an example grid map of the simulated lab and a topological map of the complete application area. The lab simulation shown in fig. 3 is used to produce realistic, noisy sensor data.

In the grid map, occupied cells are marked as red circles. Here the map is filled by distance measurements from the planar scanners; thus only the legs of tables and chairs are detected. As the number of furniture objects that hide parts of the walls is limited, the extraction of room walls from the 2D scans succeeds, and the lab is represented by two rectangular rooms connected by an opening (see fig. 2, right). The regular structure of the map is also caused by a reduction of cluttered areas in the simulation.

Figure 2: Grid map of the lab (left); complete topological map with the lab area marked by black dots (right)

The main planes that have been extracted from four 3D scans in this scenario are shown in fig. 3 (left). The collected point cloud consists of about 240,000 samples; the extraction time was less than 3 seconds. Ceiling, walls, cabinet and table tops are correctly detected.

The floor is represented by several distinct planes because furniture objects hide major parts of it.

Figure 3: Simulated indoor scenario (left); extracted planes from four 3D scans (right)

To evaluate the plane and wall extraction strategy in highly cluttered areas, a test run has been performed in the right part of the real lab (consisting of kitchenette, workbench and cabinets, see fig. 1, right). The environment has again been scanned from one position in four directions, resulting in a point cloud of about 169,000 samples (see fig. 4, left). For better understanding, three main features annotated in fig. 1 have been marked both in the point cloud and in the extracted planes (see fig. 4, right). The planes represent the main features: floor, ceiling, parts of walls and cabinets. The workbench is only partially detected as a plane due to the numerous objects located on its top (cf. fig. 1).

Figure 4: 3D samples (left) and extracted planes (right) from the real indoor scenario (see fig. 1, right); annotated features: kitchenette, fridge, workbench

The planes are used as input for the room wall detection described in section 4. Thus planes connected to both floor and ceiling are analyzed as walls and those connected to either floor or ceiling as wall candidates. From both types of features a 2D projection as line segments is calculated and completed by virtual lines (gaps) into a closed sequence of line segments (see fig. 5, left).

Figure 5: Left: detected walls (yellow), gaps (blue) and extracted room rectangle (red); right: grid map filled with 3D samples (red = occupied) and marked walls (blue). Annotated features: kitchenette, fridge, workbench; arrows 1 and 2 mark the irregularities discussed below.

This sequence is given to the topological mapping system, which extracts room walls and openings and constructs a rectangular room. The result is shown as the red rectangle in fig. 5; the circle marks the actual robot position from which the sensor data has been collected. Of course, in this situation evaluating only one 3D scan is not sufficient: the gap on the left side is much bigger than in reality. Consequently, a refinement of the calculated room has to be executed from different scanning positions.

To clarify the situation, the corresponding grid map filled by the 3D point cloud (projected onto 2D) is shown in fig. 5 (right); there the grid cells representing walls are colorized blue (a sketch of this projection is given at the end of this section). On the left side of the room rectangle two irregularities are obvious: the detected wall is not long enough, because the plane extraction has failed to generate a plane for this area (marked by arrow 1), and some artifacts are detected as occupied cells (arrow 2) which are not present as stable features in reality (perhaps caused by a person walking through the 3D scan). Therefore, moving the robot to new viewing positions and scanning again is necessary for a dependable exploration of complete areas.
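To illustrate how such a grid map can be filled from the projected 3D point cloud (fig. 5, right), the following sketch marks occupied cells and colorizes cells that are supported by points of extracted wall planes. Resolution, the hit threshold and all names are assumptions made for this sketch, not the actual implementation.

import numpy as np

FREE, OCCUPIED, WALL = 0, 1, 2

def fill_grid_from_cloud(points, wall_mask, resolution=0.035, min_hits=3):
    """points: (N, 3) array of 3D samples; wall_mask: (N,) bool, True for wall-plane inliers."""
    xy = points[:, :2]
    origin = xy.min(axis=0)                              # grid anchored at the cloud extent
    idx = np.floor((xy - origin) / resolution).astype(int)
    shape = tuple(idx.max(axis=0) + 1)
    hits = np.zeros(shape, dtype=int)
    wall_hits = np.zeros(shape, dtype=int)
    np.add.at(hits, (idx[:, 0], idx[:, 1]), 1)           # count projected samples per cell
    np.add.at(wall_hits, (idx[:, 0], idx[:, 1]), wall_mask.astype(int))
    grid = np.full(shape, FREE, dtype=np.uint8)
    grid[hits >= min_hits] = OCCUPIED                    # "red" cells in fig. 5 (right)
    grid[wall_hits >= min_hits] = WALL                   # "blue" wall cells in fig. 5 (right)
    return grid, origin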

6. Conclusion and Future Work

In this paper the combination of a grid-map-based SLAM approach and a higher-level topological mapping strategy has been presented. In addition, the benefit of using plane features detected from 3D scans for obtaining a clear overview of a room has been demonstrated. Experiments performed in a 3D simulation scenario as well as in a real indoor scene have been discussed.

Future work concentrates on increasing the performance of the 3D feature extraction approach (e.g. automatically assigning semantic information such as "table top" or "screen" to planes) and on the combination of global (topological-map-based) and local (grid-map-based) navigation. That means the topological information will be used as a navigation graph for guiding the robot from room to room, whereas the grid map information is well suited for local navigation within one room (reaching a target position exactly). Furthermore, a consistent update of the grid map from planar and 3D scanner data will be realized, because using only 2D information is not feasible in real-world scenarios (as a comparison of figs. 2 and 5 suggests). Finally, the tests have shown that a single 360° scan contains a lot of occlusions, which makes an extraction of the complete room structure impossible. Thus an exploration strategy for continuously updating the topological map via next-best-view calculation and re-scanning will be implemented.

References:

[1] Thrun, S.: Robotic Mapping: A Survey. Technical Report CMU-CS-02-111, Carnegie Mellon University, 2002
[2] Eliazar, A.: Hierarchical Linear/Constant Time SLAM Using Particle Filters for Dense Maps. NIPS, 2005
[3] Weingarten, J.: Feature-Based 3D SLAM. PhD thesis, EPFL, 2006
[4] Wettach, J.: 3D Reconstruction for Exploration of Indoor Environments. AMS 2007, pp. 57-63
[5] Eliazar, A. and Parr, R.: DP-SLAM, http://www.cs.duke.edu/~parr/dpslam/
[6] Armbrust, C.: Mobile Robot Navigation Using Dynamic Maps and a Behaviour-Based Anti-Collision System. Diplomarbeit, AG Robotersysteme, TU Kaiserslautern, 2007
[7] Schmidt, D.: Autonomous Behavior-Based Exploration of Office Environments. ICINCO 2006
[8] Fischler, M.: Random Sample Consensus. Commun. ACM 24, 1981
[9] Braun, T.: A Customizable, Multi-Host Simulation and Visualization Framework for Robot Applications. ICAR 2007