Quantitative and qualitative comparison of three laser-range mapping algorithms using two types of laser scanner data

Alexander Scott (DePauw University, Greencastle, IN), Lynne E. Parker (Oak Ridge National Laboratory, Oak Ridge, TN), Claude Touzet (University of Provence, Marseille, France)

Abstract

This paper presents our initial results in comparing three algorithms for autonomous robotic mapping using two types of laser scanner data. The algorithms compared are the Markov localization approach of Thrun, the Lu and Milios iterative dual correspondence algorithm, and the Touzet model-free landmark extraction algorithm. The two types of laser scanner data utilized are from the AccuRange laser scanner from Acuity and the SICK laser scanner. We compare these algorithms in terms of the quality of the mapped results, computational requirements, and sensitivity to data and odometry error. While the complete comparison of these algorithms on all these measures is not yet accomplished, our results to date indicate that laser mapping algorithms are not immediately transferable from one type of laser scanner data to another. Instead, algorithms appear to make implicit assumptions about the quality or content of laser data that play a strong role in the quality of the mapping results.

1 Introduction

Assembling 2D laser or sonar scans into a coherent map is an area of interest to many robotics researchers. Many promising approaches have been developed and demonstrated to operate successfully within certain constraints. Some algorithms, for example, yield highly precise maps but are computationally intensive. Other algorithms may operate quickly, but with reduced quality in the resulting map. Some algorithms require reasonably accurate odometric data and scans with relatively low noise, whereas other approaches are more robust to uncertainties. Nearly all algorithms, however, are demonstrated in operation on one specific type of laser scanner data.
This paper reports the results of our initial investigations of the strengths and weaknesses of alternative techniques for laser scanner mapping, developing quantitative and qualitative measures of their similarities and differences when applied to data generated from two different types of laser scanners: the Acuity and the SICK. Our approach focuses on three very different algorithms for laser scanner mapping: (1) Thrun's mapping based on Markov localization [4], (2) Lu and Milios' iterative dual correspondence algorithm [1], and (3) Touzet's model-free landmark extraction algorithm [5]. We present the initial results of our experimentation with these algorithms on this data, comparing them in terms of the quality of the resulting map, computational requirements, and sensitivity to odometry error and sensor noise. We have not yet completed a comparison of all the algorithms on all the data sets for all of the identified issues of sensory and odometry noise. However, the results developed to date are instructive and begin to show the characteristics and contrasts of the capabilities of the various mapping algorithms. In the next section, we provide some background on the three approaches and the two types of laser scanners. We then present the results of our comparisons in the remaining sections.

2 Background

Assembling laser or sonar scans of a 2D indoor environment into a coherent map is an area that has been extensively studied in robotics research. Recent developments in robotic scan map assembly are aimed at improvements in the areas of accuracy of scan assembly and computational efficiency (e.g., [3]). Achieving both can be very difficult; some highly accurate algorithms require the comparison of each point in a scan to every point in the preceding scan, which is computationally intensive. Conversely, achieving real-time map production often requires some shortcuts, resulting in scan matching falling into local minima instead of the true orientation.
Ideally, an algorithm for real-time map scan assembly would be an effective compromise between accuracy and speed. By fitting only some points in each scan against selected points from the map, some scan orientation accuracy is sacrificed in order to realize a drastically reduced run time.

Three interesting approaches to mapping are the iterative dual correspondence (IDC) approach developed by Lu and Milios [1], the Markov localization approach of Thrun [4], and the model-free landmark matching algorithm of Touzet [5]. The IDC algorithm works by comparing two scans: it initially reorients the second scan into the coordinates of the first using odometric readings, then finds the best fit by translating and rotating the second scan until the two are optimally placed. The key idea in the Thrun algorithm is to compute a discrete approximation of a probability distribution over all possible poses in the environment, and then base the mapping upon this accurate localization. The Touzet approach is based upon the use of simple but robust primitives to extract implicit landmarks, which makes the scan matching independent of the number of laser range measurements and thus makes real-time on-board mapping possible.

Two types of laser scanners were used to gather data to test these algorithms: the SICK and the AccuRange, shown in Figure 2. These scanners return a list of points corresponding to the intersection points of a laser beam with objects in the robot's environment. The laser beam rotates in a horizontal plane and emanates from the sensor mounted on the robot, so a range scan is a 2D slice of the environment. Points in the range scan are given in a polar coordinate system whose origin is the position of the laser scanner, and the direction of each range measurement may be provided by the scanner. The AccuRange scanner is a more lightweight, less power-demanding sensor than the SICK.
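The scan-to-scan alignment step at the heart of IDC can be sketched in a minimal form. This is not the authors' implementation: it assumes point correspondences are already fixed by index, whereas the full algorithm re-pairs points and iterates; the closed-form 2D least-squares rotation is a standard result.

```python
import math

def polar_to_cartesian(scan):
    """Convert a range scan [(angle_rad, range), ...] to (x, y) points."""
    return [(r * math.cos(a), r * math.sin(a)) for a, r in scan]

def align(ref, cur):
    """Find the rotation theta and translation (tx, ty) that best map the
    points in `cur` onto their index-paired points in `ref`, in the
    least-squares sense (one alignment step of an IDC/ICP-style matcher)."""
    n = len(ref)
    rx = sum(p[0] for p in ref) / n; ry = sum(p[1] for p in ref) / n
    cx = sum(p[0] for p in cur) / n; cy = sum(p[1] for p in cur) / n
    # Cross-covariance terms of the centred point sets.
    sxx = sxy = syx = syy = 0.0
    for (ax, ay), (bx, by) in zip(ref, cur):
        ax -= rx; ay -= ry; bx -= cx; by -= cy
        sxx += bx * ax; sxy += bx * ay
        syx += by * ax; syy += by * ay
    theta = math.atan2(sxy - syx, sxx + syy)   # optimal 2D rotation
    c, s = math.cos(theta), math.sin(theta)
    tx = rx - (c * cx - s * cy)                # translation after rotating
    ty = ry - (s * cx + c * cy)
    return theta, tx, ty
```

Given a noiseless rigid motion between two scans, this recovers the exact rotation and translation; with noisy scans and re-estimated correspondences it becomes the inner loop of an iterative matcher.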
The AccuRange's accuracy, however, is lower than the SICK's, and its data is less consistent (e.g., missing data points are common). These inconsistencies in the AccuRange data lead to distortions of flat surfaces that make coherent mapmaking quite difficult.

The Thrun algorithm was originally implemented and tested using SICK laser scanner data; we obtained this implementation and data directly from Thrun. The Lu and Milios algorithm was implemented and tested by our colleagues [2], initially using AccuRange laser data. The Touzet algorithm was implemented by us and was tested using AccuRange laser data.

Figure 1: The Urban Robot, whose tracks provide only minimal odometry information.

The noise associated with a range scan is important. Such noise is usually reduced by statistical methods, i.e., by averaging repeated measurements. However, the energy consumption of laser scanner operation makes this solution prohibitive for mobile robots: the scan rate is kept at the bare minimum, orders of magnitude below what could be achieved. Range scans are therefore inherently noisy, and scan matching methods must account for it.

For a long time, odometry, i.e., the ability to compute the robot position by monitoring the motor encoders, was the only tool available to locate a robot in its environment; it presupposes that a model of the map has been made available to the robot. However, even in controlled environments, small errors due to friction and slippage tend to accumulate until the position estimate is of no use. Today, odometry is mostly used in combination with sensors such as sonar, stereo vision, and laser range finders. Local odometry can be very helpful for scan matching, since the distance traveled between two consecutive scans is usually small and the odometry is reset after each match. More recent robots, however, such as the Urban Robot shown in Figure 1, are equipped with tracks to enable navigation in more challenging terrains.
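The accumulation of odometric error just described can be illustrated with a toy dead-reckoning loop. The bias and noise magnitudes below are illustrative assumptions, not measurements from any robot in this study.

```python
import math
import random

def dead_reckon(steps, step_len=0.1, heading_bias=0.002, seed=1):
    """Integrate encoder odometry for a robot commanded to drive straight.
    Each step adds a small systematic heading bias plus random noise, so
    the integrated position drifts ever further from the true straight-line
    position. Returns the final position error in meters."""
    rng = random.Random(seed)
    x = y = heading = 0.0
    for _ in range(steps):
        heading += heading_bias + rng.gauss(0.0, 0.001)
        x += step_len * math.cos(heading)
        y += step_len * math.sin(heading)
    true_x, true_y = steps * step_len, 0.0   # where the robot actually is
    return math.hypot(x - true_x, y - true_y)
```

Running this for longer and longer trajectories shows the unbounded growth that makes raw odometry useless for global positioning, and why resetting it after every scan match keeps the error small.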
Tracks, however, as opposed to wheels, do not allow for even minimal odometric performance. Mapping methods for robots such as the Urban Robot must therefore be based primarily on range scan matching.

Figure 2: The SICK and AccuRange laser scanners. The SICK has a range of about 50 meters, coverage of 180 degrees, and a resolution of ±50 mm. The AccuRange 4000 laser has a range of about 25 meters, coverage of 360 degrees (minus the support structure for the mirror), and a resolution of ±50 mm.

3 Approach

In our experiments to date, we have collected comparative data on both Thrun's algorithm and the Lu and Milios algorithm when running the SICK laser data as well as two sets of AccuRange data. We have also investigated the sensitivity of the Thrun approach to data error rates and odometry error rates when processing SICK laser data. We have run the Touzet approach using AccuRange data, and have compared these results with the Lu and Milios approach for AccuRange data.

We tested these algorithms both qualitatively and quantitatively. We subjectively observed the quality of the resulting maps to ensure their coherence, and we collected quantitative data on the computational requirements of the approaches, as well as on the sensitivity of the Thrun algorithm to data and odometry error rates. The computational time comparisons were calculated while running the algorithms on a 60 MHz Sparc-20 computer with 64 megabytes of RAM and a 36 kilobyte cache. The number of scans processed and the total length of time to finish were recorded. The experiments were run using data collected from the same environment, the complete floorplan of which is shown in Figure 3.

Figure 3: The complete floorplan of the area in which the SICK and AccuRange laser data was collected.

4 Results

Our most important result shows the dependence of the mapping algorithms on the type of laser scanner data used. The algorithms vary considerably in all the aspects we studied: map quality, computational requirements, and sensitivity to odometric and sensor noise. No single algorithm is best in all factors for both types of laser data.
Our results illustrate the importance of fully understanding the requirements of the application and the characteristics of the sensory data when selecting an algorithm for autonomous robot indoor map building.

Figure 4 shows the qualitative comparisons of the Thrun approach using SICK data, the Lu and Milios approach using AccuRange data, and the Touzet approach using AccuRange data. These results show the algorithms in their "best case," running the laser scanner data for which they were designed.

Figure 4: Qualitative comparisons of the Thrun, Lu and Milios, and Touzet approaches to mapping.

Table 1 shows the runtime comparisons of these approaches. For the Lu and Milios algorithm, Acuity data set #1 took 75.8 seconds per scan and Acuity data set #2 took 116.1 seconds per scan, while the SICK data required only 6.04 seconds per scan. This is not surprising; given that the SICK data has 180 points per scan and the Acuity data has between 1200 and 1500 points per scan, an O(n^2) algorithm should process the SICK data roughly 52 times as fast. Since the SICK data was only processed twelve times as fast as Acuity data set #1 and nineteen times as fast as Acuity data set #2, this indicates that, in addition to the O(n^2) part, a constant amount of time per scan is required for building the map.

Table 1 also shows the results of the two sets of Acuity laser data and the SICK data using Thrun's algorithm. The first Acuity data set took 3.48 seconds per scan, the second Acuity data set took 0.795 seconds per scan, and the SICK data took only 0.473 seconds per scan. Once again, the differences between the run times of the data sets are to be expected: a larger set of points per scan requires more processor time for the points to be correlated with each other. The second Acuity data set has an anomalously low run time; we believe this is due to the display function. The first Acuity data set had a scaling factor applied to each data point to increase the size of the displayed map without changing its proportions. The second Acuity data set had data values too large for a scaling factor to be applied, but produced a smaller dimensioned map than either the first Acuity data set or the SICK data, and thus ran faster during the display function.

The interesting comparison is between the run times for Thrun's program and those for Lu and Milios. While the SICK data ran only 12.8 times faster on Thrun's algorithm than on Lu and Milios, Acuity set #1 ran 21.8 times as fast. This is a significant reduction in time and allows a map to be assembled as the robot is taking the data. From this perspective, the Thrun algorithm is very time efficient. Comparing the Lu and Milios map with the Touzet map, as shown in Figure 4, shows that they are of similar quality. Similar difficulties, caused by the accumulation of errors that bend the corridor, are present in both. The major difference lies in the computation time: the Touzet fast indoor mapping algorithm requires an average of only 0.3 seconds per scan, which is a significant savings.
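The O(n^2) argument above can be checked with a little arithmetic. The per-scan point counts (180 for SICK, 1200-1500 for Acuity) and runtimes come from the text and Table 1; the per-scan cost model t = a + b*n^2 used to estimate the constant overhead is our own assumption.

```python
# Per-scan runtimes for the Lu and Milios algorithm (from Table 1).
sick_n, sick_t = 180, 6.04
acuity_t1, acuity_t2 = 75.8, 116.1

# Pure O(n^2) scaling would predict a slowdown of (n_acuity / n_sick)^2.
predicted_low = (1200 / sick_n) ** 2    # ~44x for 1200-point scans
predicted_high = (1500 / sick_n) ** 2   # ~69x for 1500-point scans
observed_1 = acuity_t1 / sick_t         # ~12.5x actually observed
observed_2 = acuity_t2 / sick_t         # ~19.2x

# The observed slowdowns fall far below the prediction, which is
# consistent with a constant per-scan cost a in t = a + b*n^2. Solving
# with the SICK time and Acuity set #1 (assuming ~1200 points per scan):
b = (acuity_t1 - sick_t) / (1200 ** 2 - sick_n ** 2)
a = sick_t - b * sick_n ** 2            # constant seconds per scan
```

Under these assumptions the constant term comes out at several seconds per scan, i.e., a substantial share of the SICK per-scan time, which is why the observed speedups are so much smaller than the quadratic prediction.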
Testing for map assembly accuracy was more involved than testing for time efficiency. Thrun's algorithm was unable to consistently piece together many scans from the Acuity laser data, whether or not odometry information was used. This seems to imply that one of the following is true: Thrun's program is not general enough to accurately assemble other sets of data, the Acuity laser data is not accurate to within the requirements of the program, or the simulated odometry is not accurate to within the required bounds of the program. We chose to operate on the assumption that the laser data and odometry data had exceeded the acceptable amount of error.

               Acuity data #1   Acuity data #2   SICK data
  Lu & Milios
    Scans used:      1032             1082            101
    Runtime:    75.8 sec/scan   116.1 sec/scan   6.04 sec/scan
  Thrun
    Scans used:       600              450           1559
    Runtime:    3.48 sec/scan   0.795 sec/scan   0.473 sec/scan
  Touzet
    Scans used:      1032             1082
    Runtime:     0.3 sec/scan    0.3 sec/scan

Table 1: Comparative results of two sets of Acuity laser data and one set of SICK data, when processed by the mapping algorithms.

To test these assumptions, error was introduced into the SICK data, which was then run through Thrun's algorithm. The assumptions were that the data in the SICK package realistically reflected the environment, i.e., that the odometry had a negligible amount of error and the laser data was a true indication of distance, and that the introduced error would fluctuate randomly around the original value while being held within constraints. Thus, we would be modeling a laser and odometry device that guaranteed values within a certain error percentage of the "true value." Error percentages based on the amount of movement of the robot were used. Twenty trials were run for each set of values for odometry error and data value error.

Our results, shown in Figure 5, were somewhat surprising. We had expected that measurements made by the lower quality AccuRange laser were the main reason that Thrun's program was unable to consistently piece together the Acuity scans. However, the tests show that a 15% fluctuation in data values above or below the "true value" still resulted in success over 60% of the time. The map outlines are significantly blurred beyond what would be reasonable to expect from a functioning laser, but the program can still assemble the scans. Thus, it is not likely that low laser quality is the cause of map incoherence when odometry is utilized. However, a mere 4% fluctuation above or below the "true value" for the odometry, with no data error, resulted in only a 35% success rate, with an average of 0.75 failures per trial. Adding data error did affect the failure rate somewhat, but the failures seemed mostly linked to the odometry.
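The bounded-fluctuation error model described above can be sketched as follows. The uniform distribution is our assumption; the text states only that the introduced error fluctuates randomly around the original value within constraints.

```python
import random

def inject_error(values, pct, seed=0):
    """Perturb each measurement by a random amount bounded by +/- pct of
    its true value, modeling a sensor that is guaranteed to report readings
    within a fixed error percentage of the "true value"."""
    rng = random.Random(seed)
    return [v * (1.0 + rng.uniform(-pct, pct)) for v in values]
```

For example, `inject_error(ranges, 0.15)` models the 15% laser-data fluctuation condition, and the same routine with `pct=0.04` applied to odometry increments models the 4% odometry condition.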
With 4% fluctuation in odometry and 17.5% in data, the success rate fell to 20% and there was an average of 1.2 failures per trial. At 4% odometry fluctuation, there were twenty-five occurrences of two failures per trial and two more occurrences of three failures per trial. Out of 112 failures, 34 occurred between scans 500 and 600, and another 37 occurred between scans 1000 and 1100. These are both spots where the robot executed a turn and there was a strong opportunity for scans to become misaligned. At 2.5% fluctuation, there were only thirteen occurrences of two failures per trial, and none with three failures. Out of 74 failures, 40 occurred between scans 500 and 600, and another 22 occurred between scans 1000 and 1100.

Figure 5: Failure rates versus data error rates, for odometry error rates of 5% and 8%, for the Thrun algorithm applied to SICK laser scanner data.

5 Conclusions and Future Work

It seems clear that Thrun's and Touzet's algorithms are superior to the Lu and Milios algorithm in time efficiency. Thrun's algorithm provides high map accuracy when measured by error tolerances. It is certainly robust to data measurements with random error, although consistent errors or a bias might cause different behavior. However, only a little odometric error is required to destroy map coherence. Possible future studies might propose a more realistic model of odometric error, such as accumulating error over time or adding error only when the robot turns through an angle. Other studies might test the performance of Thrun's algorithm at more commonly experienced error levels, such as 2-5% laser error.

This paper has presented our initial studies comparing three very different approaches to autonomous indoor mapping based upon two types of laser range data. While the results are still incomplete, they indicate the importance of identifying the requirements of the application and the characteristics of the available sensor in determining the most appropriate algorithm for autonomous indoor mapping. Our results should be useful for future applications of robot mapping, by providing guidance for making the appropriate algorithm selection given the constraints of the current application and sensors.

Acknowledgements

The authors thank Prof. Sebastian Thrun and Dr. David Jung for their assistance in this project. This project was funded in part by the Great Lakes Colleges Association, in part by the Defense Advanced Research Projects Agency, and in part by the Engineering Research Program of the Office of Basic Energy Sciences, U.S. Department of Energy. The U.S. Government retains a nonexclusive, royalty-free license to publish or reproduce the published form of this contribution, or allow others to do so, for U.S. Government purposes. Oak Ridge National Laboratory is managed by UT-Battelle, LLC for the U.S. Dept. of Energy under contract DE-AC05-00OR22725.

References

[1] Feng Lu and Evangelos Milios. Robot pose estimation in unknown environments by matching 2D range scans. Journal of Intelligent and Robotic Systems, 18:249-275, 1997.

[2] Pete Richert. Map making using the iterative dual correspondence approach in an urban robot. Engineering Research Undergraduate Laboratory Fellowship final report, Oak Ridge National Laboratory, Oak Ridge, Tennessee, August 1999.

[3] S. Thrun, W. Burgard, and D. Fox. A real-time algorithm for mobile robot mapping with applications to multi-robot and 3D mapping. In Proceedings of the IEEE International Conference on Robotics and Automation, 2000.

[4] S. Thrun, D. Fox, and W. Burgard. A probabilistic approach to concurrent mapping and localization for mobile robots. Autonomous Robots, 5:253-271, 1998.

[5] C. Touzet. Fast indoor map building using laser scans. Forthcoming, 2000.