Robot Mapping for Rescue Robots

Nagesh Adluru, Temple University, Philadelphia, PA, USA (nagesh@temple.edu); Longin Jan Latecki, Temple University, Philadelphia, PA, USA (latecki@temple.edu); Rolf Lakaemper, Temple University, Philadelphia, PA, USA (lakaemper@temple.edu); Raj Madhavan, NIST, Gaithersburg, MD, USA (raj.madhavan@nist.gov)

Abstract. Rescue robots operate in disaster areas where odometric information is known to be highly unreliable. This paper presents a new approach to odometry-independent 2D robot mapping. Scans obtained from different robot positions are assembled into a single global map using alignment based only on scan-intrinsic information, namely shape features. The approach detects geometric structures in different scans that are similar in shape and aligns the scans accordingly. The detection of shape similarity is the key to generating global maps even of highly cluttered environments, and thus the proposed approach is particularly suitable for applications in the field of rescue robots.

I. INTRODUCTION

Rescue robots are mobile, autonomous robots deployed in disaster areas, mainly for the search and rescue of victims. Although in practice rescue robots are remotely controlled by human operators, their navigation in such areas is usually extremely difficult. The only information available to a human operator is a stream of images transmitted from one or more cameras mounted on the robot. Due to the limited field of view of the cameras, it is extremely hard for the operator to keep track of the robot's location and to navigate it. A global map of the robot's environment with the robot's path marked can be very helpful to the operator. The overview knowledge in the form of a global map is particularly important for localizing victims in catastrophe scenarios (e.g., in collapsed buildings) and for ensuring that the whole target region has been searched [1].
Moreover, when a victim is found, the map gives a responder team much better situation awareness than the manually drawn sketches currently in use. However, the highly cluttered environments (e.g., due to piles of rubble) and scans from sliding and tilted robots make mapping for rescue robots an especially challenging task. In particular, the odometry information is very unreliable, and external sensor data such as GPS is not available to assist in this task. Currently, laser range scanners are the most popular sensing devices for generating maps of robot environments, but the existing robot mapping approaches are based on the assumption that consecutive laser scans overlap substantially. However, when mapping is done in cluttered environments, frequent tilts of the robot and occlusions cause consecutive scans to be very different from each other. Robot mapping, i.e., constructing the internal map autonomously, and self-localization, i.e., localizing the robot within its internal map, are of high importance to the field of mobile robotics [2]. Coping with unknown or changing environments requires carrying out both tasks simultaneously; this problem has been termed the SLAM problem: Simultaneous Localization and Mapping [3]. It has received considerable attention [2], [4]. Successful stochastic approaches, mostly based on particle filters, have been developed to tackle the representation and handling of uncertain data, which is one of the key problems in SLAM [5]. Because current stochastic models are powerful, even linking them to a simple geometric representation, such as reflection points measured by a range sensor, already yields impressive results. Although advances in stochastic methods have improved the overall performance, they leave the basic spatial (geometric) representation untouched. Since the internal geometric representation is the foundation for these sophisticated stochastic techniques, shortcomings at the level of geometric representation affect the overall performance.
The highly cluttered environment, possible robot tilt, and unreliable odometry information cause the existing SLAM approaches to fail, since they assume large overlaps between scans as well as good estimates of the robot poses. Current SLAM approaches also fail in conditions where the environment changes rapidly, e.g., in a crowded shopping mall, due to occlusions. This paper presents an approach that makes it possible to successfully complete the task of robot mapping in highly cluttered environments without any odometry information. We use shape information extracted from the range data of robot scans to find and align similar structures among different scans. Shape information abstracts from the actual positions of scan points in local scan maps. It allows us to identify similar structures under the assumption of only minimal overlap between consecutive scans. The only requirement is the presence of similar structures, so that it is possible to identify common parts of the robot environment. The rest of the paper is organized as follows: Sec. II describes the shape information and similarities used; Sec. III describes how we estimate odometry after extracting similar structures; Sec. IV describes global map construction using the estimated odometry. Finally, Sec. V presents our experimental results. Our future work directions are presented in

the last section. The values on the axes in all figures are in centimeters.

II. MATCHING STRUCTURES USING SHAPE SIMILARITY

The input data is a set of points obtained by a laser range scanner. This data contains the locations of laser beam reflection points, hence it consists of a set of sample points of the environment. We abstract from the positions of these scan points by extracting shape information based on tangent directions and angular histograms. Preserving the scan order and the geometric properties of the scan allows us to match similarities between scans.

A. Tangent Directions

The points of each scan are approximated by polylines using Expectation Maximization (EM) and perceptual grouping, as discussed in [6]. For each point we find the closest polyline; its tangent direction (an angle ranging from 0° to 360°) is stored as the direction property of the point. Since we maintain the order of scan points, we obtain a sequence of tangent directions for each scan. Approximation of point sets with polylines using EM is a robust technique, and hence the sequence of tangent directions represents robust shape information of a scan. Fig. 1 shows a polyline approximation of points. Two points in different scans are considered similar if their associated tangent directions are within a 10° shift.

B. Angular Histograms

For each point in a scan we construct an angular histogram with 36 bins by counting the number of scan points in each bin within a radius of 5 cm. The circular frame and a sample angular histogram are shown in Fig. 1. This representation is similar to the shape histograms used in [7] and to the shape context used in [8]. In our approach a different similarity measure is used. The χ² statistic used in [8] is not applicable in our case, because most bins are empty. Instead, we use the number of differing bins as the distance between two histograms. Two points in different scans are similar if their histograms differ in at most 2 bins.
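As a rough illustration of this descriptor, the following sketch builds a 36-bin angular histogram around a scan point and compares two histograms by counting differing normalized bins (the 0.25 normalized-bin threshold is the one stated in this section; the function names and the exact normalization are our own assumptions, not the paper's implementation):

```python
import math

NUM_BINS = 36        # one bin per 10 degrees, as in the text
RADIUS_CM = 5.0      # neighborhood radius from the text (in cm)

def angular_histogram(center, points, num_bins=NUM_BINS, radius=RADIUS_CM):
    """Count, per angular bin, the scan points within `radius` of `center`."""
    cx, cy = center
    hist = [0] * num_bins
    for x, y in points:
        dx, dy = x - cx, y - cy
        d = math.hypot(dx, dy)
        if d == 0.0 or d > radius:
            continue  # skip the center itself and points outside the frame
        angle = math.atan2(dy, dx) % (2.0 * math.pi)   # in [0, 2*pi)
        hist[int(angle / (2.0 * math.pi) * num_bins)] += 1
    return hist

def differing_bins(h1, h2, threshold=0.25):
    """Number of bins whose normalized values differ by more than `threshold`."""
    n1, n2 = max(sum(h1), 1), max(sum(h2), 1)
    return sum(1 for a, b in zip(h1, h2) if abs(a / n1 - b / n2) > threshold)

def histograms_similar(h1, h2, max_diff=2):
    """Two points are histogram-similar if at most `max_diff` bins differ."""
    return differing_bins(h1, h2) <= max_diff
```

Counting differing bins stays meaningful when most bins are empty, which is the situation that rules out the χ² statistic here.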
We consider two bins b1, b2 as different if |b1 - b2| > 0.25, where bi denotes the normalized value of the i-th bin.

C. Longest Common Subsequence

Once we obtain the sequences of tangent directions and angular histograms for two scans, we use a well-established algorithmic technique to obtain their longest common subsequence (LCS). The LCS algorithm [9] considers two points to be similar based on the similarity of both their tangent directions and their angular histograms, as explained in Sections II-A and II-B. This technique finds corresponding points between two scans, but we face two problems here: first, there may be no similar shape structures between two scans, as illustrated in Fig. 2; second, there may be false positives in shape matching. The first problem can be solved by aligning the query scan to a partial global map composed of a few recent scans instead of just the previous scan. To reduce computational demands we choose to align the query scan to a map composed of the three previous scans. As shown in Fig. 3, this approach makes it possible to find similar structures. To understand the second problem, we can view the matchings generated by LCS as hypotheses of correspondences; we have to eliminate the false hypotheses. The false positives are eliminated using two deterministic filters, explained in Section II-D. To eliminate the problem of scan rotation, we perform the shape matching for several different rotations of the query scan. We choose the rotation that gives the largest true overlap as the best rotation, as defined in Section III.

D. Eliminating False Positive Correspondences

We eliminate false positives using two filters, namely the collinear filter and the segment filter.

Collinear filter: Collinear structures, e.g., those obtained from subsequent scans of the walls of long hallways, do not contain a highly distinctive amount of information. We therefore eliminate collinear matches from the match candidates extracted by LCS.
If the direction of a correspondence vector (the translation vector between two corresponding points) is within a 10° shift of the tangent direction of either of the corresponding points, then the correspondence is considered collinear and removed.

Segment filter: We first divide the corresponding points detected by LCS into parts, called segments, and eliminate bad segments. The division of the LCS into segments is based on distance: if there is a distance of 3 cm or more between one correspondence and the previous correspondence in either sequence, we create a new segment. For each segment we compute a quality measure. Let the segment for which we compute the quality measure be called the main segment. We first translate the query scan to the target scan based on the average translation vector calculated from the correspondences in the main segment. After putting the query scan onto the target scan, we compute the average distance between the corresponding points in the main segment. We then compute the average distances for all other segments and eliminate those that exceed twice the average distance of the main segment by 2 cm; thus we eliminate only very bad segments. The quality measure of the main segment is the number of corresponding points left over. We then pick the segment with the highest quality measure, which leaves us with LCS correspondences that are good after translation. The effect of eliminating false positive matches obtained by LCS can be seen in Fig. 4.
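The LCS step described in Section II-C can be sketched with the textbook dynamic program below (the paper cites Myers' O(ND) algorithm [9], which computes the same kind of correspondence more efficiently; the `similar` predicate stands in for the combined tangent-direction and histogram test, and all names here are ours):

```python
def lcs_matches(scan_a, scan_b, similar):
    """Longest common subsequence of two ordered scans under a point-similarity
    predicate; returns the matched index pairs in scan order.

    Classic O(n*m) dynamic program over point sequences.
    """
    n, m = len(scan_a), len(scan_b)
    L = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n):
        for j in range(m):
            if similar(scan_a[i], scan_b[j]):
                L[i + 1][j + 1] = L[i][j] + 1
            else:
                L[i + 1][j + 1] = max(L[i][j + 1], L[i + 1][j])
    # Backtrack to recover the matched index pairs.
    pairs, i, j = [], n, m
    while i > 0 and j > 0:
        if similar(scan_a[i - 1], scan_b[j - 1]) and L[i][j] == L[i - 1][j - 1] + 1:
            pairs.append((i - 1, j - 1))
            i, j = i - 1, j - 1
        elif L[i - 1][j] >= L[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return list(reversed(pairs))
```

Because both scans are kept in acquisition order, the returned pairs are the correspondence hypotheses that the collinear and segment filters subsequently prune.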

Fig. 1. A piecewise linear approximation of points in part of a sample scan, obtained using EM; each point is assigned the tangent direction of the closest line segment. Also shown is the circular frame used for calculating the angular histograms of points.

Fig. 2. Two consecutive scans, 6 and 7, in our test data set that do not contain any similar structures.

Fig. 3. The corresponding points computed using shape similarity between scans 6 (red) and 7 (cyan) are all false positives. The rectangular region shows the true positive correspondences between a partial global map composed of scans 4, 5, 6 (red) and scan 7 (cyan). The blue lines link the corresponding points.

III. ROBOT POSE ESTIMATION

We compute the optimal translation and rotation for robot pose estimation. For each rotation, we pick the best segment-wise translation and score the rotation using the scan overlap measure described below. Then we select the best rotation and the corresponding translation.

Fig. 4. The effect of the segment filter. (a) The corresponding points between scans 13 and 14 extracted using LCS. (b) The actual correspondences after passing through our filters; only 3 good segments are left. (c) The 13th and 14th scans aligned using the segment-wise transformation described in Section III, pre-aligned with the closest point correspondences used for ICP.

Segment-wise translation: After eliminating false positive correspondences, the best translation and rotation of query scan points onto the model scan points are chosen, using the segments computed in segment filtering. For each of these segments, the translated query scan is superimposed onto the target scan to compute the average distance between the correspondences in the segment used for translation. We eliminate all other point correspondences that are farther apart than this average distance, and we pick the translation that gives us the largest number of point correspondences. The main difference between the segment filter and this step is that the segment filter eliminates false positives segment-wise, whereas here the elimination is done on individual point pairs.

Scan overlap measure: As mentioned in Section II-C, we perform the process for several rotations. For these experiments we rotated the query scans from -20° to 20° in 1° intervals. After we translate the query scan to the target scan based on the best translation, we compute the number of points in the query scan that are close (within a distance threshold) to the points in the target scan. The distance threshold is computed dynamically as follows: we compute the average distance between correspondences in the segment that was picked by the segment-wise translation process; the distance threshold is this average distance plus 4 cm. We consider only those point correspondences for which the tangent directions of the corresponding points are within a 10° shift.
The size of this correspondence set is the scan overlap measure.

IV. GLOBAL MAP CONSTRUCTION

Currently the best methods for mapping robot scans use particle filters [5]. Particle filtering is a powerful tool for mapping robot scans, but it has two major constraints. One is that there must be large overlaps between consecutive scans and that the environment must be neither very cluttered nor very dynamic. The other is that the pose estimates must be good. We compute very good pose estimates, but since the overlaps between consecutive scans are small and the environment is cluttered, particle filters fail in our

case. Therefore, we use a deterministic technique to construct the global map and correct the errors in the robot poses: Iterative Closest Point (ICP) [10], [11]. We align the maps by the translation and rotation obtained using our shape similarity approach. The correspondences used for ICP are selected using distance and tangent thresholds: points from two scans must be below a distance threshold and within a tangent threshold in order to correspond. The distance threshold is calculated dynamically using the average distance between the scan points aligned with the computed pose estimates. We run the ICP algorithm iteratively as long as the quality of the alignment increases, with an upper bound of 20 iterations. The quality is measured using the scan overlap measure and the average distance between the corresponding points in the scan overlap (the overlap distance). If in an iteration the scan overlap measure decreases and the overlap distance increases, we roll back the iteration and break out of the ICP loop.

V. EXPERIMENTAL RESULTS

We use benchmark data from the National Institute of Standards and Technology (NIST). NIST collected robot scans in a simulated urban search and rescue arena at 16 different positions. Four scans are taken at each position, at four different orientations that differ by 90°. Each complete scan is composed of 1604 readings, 401 in each direction, with an angle span of 100° and a scan interval of 0.25°. We performed our experiments on the data without any odometry information. Fig. 5(c) shows the global map generated by our approach. For comparison, we show in Fig. 5(d) the map based on manually measured odometry. The global map generated using the currently best SLAM approach (based on particle filtering) is shown in Fig. 5(b). It was obtained using the CARMEN software described in Section V-B with the input shown in Fig. 5(a). Fig. 6(a) shows the estimated and the manually measured robot poses. Fig. 6(b) shows the error plot of the estimated robot poses.
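The iterate-until-quality-drops policy above can be sketched as follows; `icp_step` and `quality` are placeholders of ours for the actual correspondence search and alignment step, and the 20-iteration cap is our reading of the stated upper bound:

```python
def refine_alignment(pose, query, target, icp_step, quality, max_iters=20):
    """Run ICP-style refinement while alignment quality improves.

    icp_step(pose, query, target) -> refined pose (one ICP iteration);
    quality(pose, query, target)  -> (overlap, overlap_distance).
    If the overlap decreases while the overlap distance increases, the
    last step is rolled back and the loop stops.
    """
    best_overlap, best_dist = quality(pose, query, target)
    for _ in range(max_iters):  # iteration cap (assumed 20 here)
        candidate = icp_step(pose, query, target)
        overlap, dist = quality(candidate, query, target)
        if overlap < best_overlap and dist > best_dist:
            break  # quality degraded: keep the previous pose
        pose, best_overlap, best_dist = candidate, overlap, dist
    return pose
```

Rolling back only when both quality signals degrade keeps the loop from terminating on the small non-monotonic fluctuations that ICP typically exhibits.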
As can be seen, although the error in the pose estimates increases at positions 10-14, our approach is able to recover and significantly reduce the error at position 15.

A. Decoupled ICP

Since each scan is composed of four subscans taken after 90° rotations, we perform a decoupled ICP after the ICP on the complete scan. The measurements between different regions have overlaps and errors introduced manually during the data acquisition process. The ICP applied to the complete scan cannot correct these local errors without sacrificing the quality of the global alignment. Hence we run the ICP algorithm on the four regions separately, iterating on each region with the quality check explained in Section IV.

B. CARMEN

CARMEN is a robot navigation software package developed by CMU. It has various modules useful for autonomous robots. The scan matching module of CARMEN, called VASCO, is used to generate global maps from robot scans. We input the NIST data, together with the odometry estimated by the proposed approach, to VASCO. Our input to VASCO is shown in Fig. 5(a). As can be seen in Fig. 5(b), VASCO was not able to improve the input map (shown in Fig. 5(a)).

VI. CONCLUSION AND FUTURE WORK

We have demonstrated that shape features of scans can be very powerful for scan matching in cluttered environments such as rescue situations or dynamic environments. Our shape similarity technique is very useful for extracting overlaps among scans even when consecutive maps do not have large overlaps. Once we obtain the overlaps using shape, we can generate a prior distribution of the particles and use the particle filtering approach. We are working on using particle filters with the prior generated by our approach to compute the actual robot pose.

ACKNOWLEDGMENT

We would like to acknowledge the support from the National Institute of Standards and Technology, award Nr. NIST 7NANB5H11119, and discussions with Elena Messina and Adam Jacoff. We thank Dirk Haehnel for his prompt help in using CARMEN's scan matching module.
REFERENCES

[1] A. Jacoff, E. Messina, and J. Evans, Performance evaluation of autonomous mobile robots, Industrial Robot: An Int. Journal, vol. 29, no. 3, 2002.
[2] S. Thrun, Robotic mapping: A survey, in Exploring Artificial Intelligence in the New Millennium, G. Lakemeyer and B. Nebel, Eds. Morgan Kaufmann, 2002.
[3] G. Dissanayake, P. Newman, S. Clark, H. Durrant-Whyte, and M. Csorba, A solution to the simultaneous localization and map building (SLAM) problem, IEEE Trans. on Robotics and Automation, 2001.
[4] D. Hähnel, D. Schulz, and W. Burgard, Map building with mobile robots in populated environments, in Proc. of the Int. Conf. on Intelligent Robots and Systems (IROS), 2002.
[5] S. Thrun, W. Burgard, and D. Fox, Probabilistic Robotics. The MIT Press, Cambridge, 2005.
[6] L. J. Latecki and R. Lakaemper, Polygonal approximation of laser range data based on perceptual grouping and EM, in IEEE Int. Conf. on Robotics and Automation (ICRA), 2006.
[7] M. Ankerst, G. Kastenmüller, H.-P. Kriegel, and T. Seidl, 3D shape histograms for similarity search and classification in spatial databases, in Advances in Spatial Databases, 6th Int. Symposium, SSD'99, R. Güting, D. Papadias, and F. Lochovsky, Eds., vol. 1651. Hong Kong, China: Springer, 1999, pp. 207-228.
[8] S. Belongie, J. Malik, and J. Puzicha, Shape matching and object recognition using shape contexts, IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 24, pp. 509-522, 2002.
[9] E. W. Myers, An O(ND) difference algorithm and its variations, Algorithmica, vol. 1, no. 2, pp. 251-266, 1986.
[10] P. Besl and N. McKay, A method for registration of 3-D shapes, IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 14, no. 2, pp. 239-256, 1992.

Fig. 5. (a) The input global map of the NIST log data with the odometry estimated by the proposed approach. (b) The global map generated by the VASCO module of CARMEN. (c) The global map generated by our approach. (d) The global map created by NIST based on manually measured odometry. (Panel titles from the plots: "Cumulative Cartesian plot for scans 61-64 for position #16"; "Globally and locally aligned maps 1 through 16".)

Fig. 6. (a) The manually measured (blue +) and shape-based estimated (red o) robot poses. (b) The error plot: distances in cm between the actual and estimated poses.

[11] A. Fitzgibbon, Robust registration of 2D and 3D point sets, in Proc. British Machine Vision Conference, vol. II, Manchester, UK, 2001, pp. 411-420.