Where's the Boss?: Monte Carlo Localization for an Autonomous Ground Vehicle using an Aerial Lidar Map


Where's the Boss?: Monte Carlo Localization for an Autonomous Ground Vehicle using an Aerial Lidar Map

Sebastian Scherer, Young-Woo Seo, and Prasanna Velagapudi

October 16, 2007

Robotics Institute, Carnegie Mellon University, Pittsburgh, Pennsylvania 15213

© Carnegie Mellon University

1 Introduction

Current outdoor localization methods depend heavily on accurate pose estimation in the form of GPS and inertial measurement. However, GPS technology is limited in accuracy and requires unobstructed views of the sky, and inertial measurement systems tolerant of outdoor driving conditions are very expensive. We attempt to localize the pose of a robotic ground vehicle using only unreliable vehicle speed estimates and a high-speed 3D laser scanner. Furthermore, while most localization systems use a priori maps generated by other ground sensors, we use a map generated from aerial lidar.

1.1 Dataset

The primary vehicle data set consists of the sensor measurements collected from a Velodyne lidar. This lidar unit generates 2.5 million range measurements per second, sampling a full 360 degrees around the system. The data is provided as a sequence of log files containing timestamped point sets in 3D Cartesian coordinates, referenced from an origin at the Velodyne sensor. We collected this data using the autonomous ground vehicle Boss, which carries a single roof-mounted Velodyne lidar as part of a wide array of navigational sensors, as seen in Figure 1. Most importantly, its sensor suite includes differential GPS and inertial measurement, providing us with accurate ground truth data.

Figure 1: Boss, the autonomous ground vehicle

Figure 2 shows a part of the Robot City road network. We drove Boss around the edge of this network, following the trail depicted by the thick white arrows in the figure. Our data set consists of 165.337 seconds of Velodyne measurements and ground truth vehicle state logged during this run.

We also obtained a map of Robot City in the form of a registered point set taken from aerial lidar, using the aerial platform in Figure 3(b). The map, as seen in Figure 4, consists of over 11 million 3D Cartesian points scattered across the site, covering it with a density approximating 10 cm spatial resolution.
2 Monte Carlo Localization

2.1 Monte Carlo Localization

Monte Carlo Localization (MCL) is a sampling-based localization method for approximating probability density distributions [1, 2]. In this method, the variable of interest, x_t, is represented by a probability density. In localization problems, the variable of interest is often referred to as the state; in our case, the state is the expected pose of the robotic vehicle, x_t = <x_t, y_t, θ_t>^T. In MCL, an individual state is considered complete because the Markov assumption asserts that all previous experience (e.g., past sensor measurements) is independent of future experience given the current state [2].

Since we assume that the pose information is not measurable directly and accurately, a localization process must involve some form of inference. We need to represent our belief over possible poses as conditional probability distributions. MCL represents this belief distribution as a population of particles that is propagated at each timestep; the density of the particles approximates the belief distribution over the state space.

Figure 2: Road network at Robot City

Figure 3: Scanning platform used to generate the aerial lidar map: (a) scanning equipment; (b) equipment mounted on the helicopter.

Given a motion model with control input u_t, the belief over the previous state, bel(x_{t-1}), can be used to predict a possible new state belief, bel_bar(x_t):

    bel_bar(x_t) = ∫ p(x_t | u_t, x_{t-1}) bel(x_{t-1}) dx_{t-1}

These predicted states, bel_bar(x_t), are then evaluated by a sensor measurement model for z_t:

    bel(x_t) = η p(z_t | x_t) bel_bar(x_t)

This is simply a Bayes filter. MCL integrates this Bayes filter with a particle filter to form a localization process. Algorithm 1 gives the basic MCL process in pseudocode [1].

Algorithm 1 MCL(X_{t-1}, u_t, z_t, m)
    X_bar_t = X_t = ∅
    for m = 1 to M do
        x_t[m] = MOTIONMODEL(u_t, x_{t-1}[m])
        w_t[m] = MEASUREMENTMODEL(z_t, x_t[m], m)
        X_bar_t = X_bar_t + <x_t[m], w_t[m]>
    end for
    for m = 1 to M do
        draw i with probability ∝ w_t[i]
        add x_t[i] to X_t
    end for
    return X_t

Figure 4: Lidar point set of Robot City

2.2 Implementation and Results

Our implementation of MCL follows the basic pseudocode outline fairly closely. For the motion model, we used our knowledge of the vehicle to create a reasonable planar unicycle approximation. The control input was the vehicle speed estimate from our ground truth data with an added Gaussian noise term, and the steering curvature was sampled uniformly between the vehicle's steering limits. This adequately represented the vehicle's movement in our dataset. The measurement model, however, required significant design decisions.

The first issue we faced in implementing MCL in this system was the dimensionality of the data. Both the Velodyne data and the map are 3D point sets, so a 6-DOF transform is needed to convert from the Velodyne reference frame to the world frame. However, we examined the vehicle state log files and determined that the vast majority of the time, the vehicle was close to vertical. We could therefore simplify the state to the vector <x_t, y_t, θ_t>^T and use constant offsets based on the vehicle configuration to build an appropriate transformation matrix.

The primary issue, however, turned out to be the computational complexity of the scan matching problem. The Velodyne log files for our dataset alone consume 2.5 GB of disk space; the map consumes 173 MB in its compressed form.
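A minimal sketch of the planar unicycle motion model described above, assuming a hypothetical curvature limit and speed-noise level (these constants are illustrative, not values from our implementation):

```python
import math
import random

# Sketch of the planar unicycle motion model: each particle's speed is the
# measured speed plus Gaussian noise, and its steering curvature is drawn
# uniformly between the vehicle's steering limits. Both constants below are
# assumed values for illustration.
MAX_CURVATURE = 0.2   # 1/m, hypothetical steering limit
SPEED_SIGMA = 0.3     # m/s, hypothetical speed-noise std dev

def motion_model(pose, speed_meas, dt):
    x, y, theta = pose
    v = speed_meas + random.gauss(0.0, SPEED_SIGMA)
    kappa = random.uniform(-MAX_CURVATURE, MAX_CURVATURE)
    if abs(kappa) < 1e-9:
        # straight-line limit of the unicycle equations
        x += v * dt * math.cos(theta)
        y += v * dt * math.sin(theta)
    else:
        # follow an arc of curvature kappa for arc length v * dt
        dtheta = v * dt * kappa
        r = 1.0 / kappa
        x += r * (math.sin(theta + dtheta) - math.sin(theta))
        y += r * (math.cos(theta) - math.cos(theta + dtheta))
        theta += dtheta
    return (x, y, theta)
```

Sampling the curvature uniformly (rather than estimating it) spreads the particles across all steering choices the vehicle could have made, which the weighting step then prunes.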
In addition, because our state space would be evaluated by matching points in the higher-dimensional world space, we hypothesized that a large number of particles would be necessary to achieve sufficient sampling density. This limited us to approaches that could be executed very quickly on very large point sets. We settled upon two: a nearest-neighbor distance metric and a height-based distance metric.
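The per-step structure of Algorithm 1 can be sketched as a single predict/weight/resample function; the names and the toy 1-D usage below are illustrative assumptions, not our actual implementation:

```python
import random

# Sketch of one step of Algorithm 1 (MCL), with the motion and measurement
# models passed in as callables.
def mcl_step(particles, u, z, motion_model, measurement_model):
    # predict: propagate each particle through the motion model
    predicted = [motion_model(x, u) for x in particles]
    # correct: weight each predicted particle with the measurement model
    weights = [measurement_model(z, x) for x in predicted]
    # resample: draw M particles with probability proportional to weight
    return random.choices(predicted, weights=weights, k=len(particles))

# Toy 1-D usage: the vehicle advances by u each step and the sensor
# observes its position directly.
random.seed(0)
particles = [random.uniform(-5.0, 5.0) for _ in range(1000)]
motion = lambda x, u: x + u + random.gauss(0.0, 0.2)
measure = lambda z, x: 1.0 / (1e-6 + abs(z - x))  # reciprocal-of-error weight
for step in range(10):
    particles = mcl_step(particles, 1.0, float(step + 1), motion, measure)
estimate = sum(particles) / len(particles)
```

Here `random.choices` samples with replacement in proportion to the unnormalized weights, which is exactly the "draw i with probability ∝ w_t[i]" step of Algorithm 1.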

Min     Max     Mean    σ
0.0841  6.1849  2.7278  1.7947

Table 1: Error metrics for sample run

2.2.1 KD-Tree

Our first measurement model transformed the Velodyne data to world coordinates, then searched for each point's nearest neighbor in the map using a free Matlab kd-tree package.¹ The error for each point was the L2-norm distance from the point to its nearest neighbor; the error for each particle was simply the sum of these point errors.

2.2.2 Height Map

Our second measurement model similarly transformed the Velodyne data, but then matched each point against the map height at its <x_t, y_t> location. The error for each point was the squared difference in height between the point and the height map; the error for each particle was once again the sum of these point errors.

For both methods, the reciprocal of the error was used as the particle weight. As specified in the MCL algorithm, it was used as an unnormalized mass function to sample the next generation of particles. A sample of this matching can be seen in Figure 5.

Figure 5: Matching Velodyne scans to the map

Much of our original dataset was not usable due to a lack of visible obstacles in the Velodyne data. We selected a subset of the data and conducted sample runs along it using both distance estimation methods. The results can be seen in Figure 6, and the error information is shown in Figure 7 and Table 1.

3 Discussion

Initially, we had hoped to do global localization by initializing the particle filter with particles sampled from locations along the road network. We implemented this using rejection sampling over a set of polygons encompassing the road network. However, when we began testing, it became clear that the point density required to find and track the vehicle would require us to initialize the particle filter with an exorbitant particle count, almost certainly in the several thousands.
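The two distance metrics and the reciprocal weighting can be sketched as follows; brute-force nearest neighbor stands in for the kd-tree package, and the map structures and cell size are illustrative assumptions:

```python
import math

# Sketch of the two measurement models. map_points is a list of 3D map
# points; height_map maps (x, y) grid cells to terrain heights. Both are
# toy stand-ins for the real aerial-lidar data.

def nn_error(scan_world, map_points):
    # kd-tree model: sum of L2 distances to each point's nearest map neighbor
    return sum(min(math.dist(p, q) for q in map_points) for p in scan_world)

def height_error(scan_world, height_map, cell=1.0):
    # height-map model: sum of squared height differences at each (x, y) cell
    total = 0.0
    for x, y, z in scan_world:
        key = (int(x // cell), int(y // cell))
        if key in height_map:
            total += (z - height_map[key]) ** 2
    return total

def particle_weight(error):
    # the reciprocal of the error serves as the unnormalized particle weight
    return 1.0 / max(error, 1e-9)
```

A real implementation would replace the inner `min` over all map points with a kd-tree query, since the map holds over 11 million points.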
This would have been far too expensive to compute, so we ended up seeding the filter with the initial state recorded by the ground truth sensors.

We also found one unexpected effect that hindered the performance of our MCL implementation. The Velodyne data was sampled uniformly in angular units, but the wide elevation range of the scan translated to very uneven sampling in Cartesian units. Points were heavily concentrated near and under the vehicle, biasing the error function toward particles that placed the vehicle on flat ground rather than particles that matched more distant features. We tried uniformly resampling the points using interpolation, and masking out nearby points from being evaluated by the error function, but neither was ideal, and neither completely resolved the issue.

In comparing the performance of the two distance-finding strategies, kd-tree and height map, we found that while the height map ran much faster and could handle 20 times more particles than the kd-tree search, the kd-tree was significantly more accurate in both locating and tracking the vehicle pose. This is evident in Figure 6(a). It is likely that the 2.5D volumetric representation used by the height map did not match well against individual points, though it might still perform well in conjunction with true ray-casting, which we did not attempt here. The height map was also significantly lower resolution than the original map, so it is possible that details present in the Velodyne point sets were not represented properly in the height map distance.

Figure 6: Results of sample run using kd-tree and height map MCL: (a) kd-tree MCL, best particles (red points) and ground truth (blue line); (b) kd-tree MCL, all particles (points) and ground truth (blue line); (c) height map MCL, all particles (points) and ground truth (blue line).

Full-size images and videos of our MCL in action are available at: http://www.cs.cmu.edu/~pkv/movies

¹ Michael, Steven; MIT Lincoln Labs; KD Tree Nearest Neighbor and Range Search
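The near-field masking we tried above can be sketched as a simple range filter; the 5 m default radius is an assumed value, not one from our experiments:

```python
import math

# Sketch of near-field masking: drop points whose planar range from the
# sensor is below min_range, so the dense returns near and under the
# vehicle do not dominate the error sum. The default radius is assumed.
def mask_near_points(scan_sensor_frame, min_range=5.0):
    return [p for p in scan_sensor_frame
            if math.hypot(p[0], p[1]) > min_range]
```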

Figure 7: MCL error (m) versus step during the sample run

References

[1] D. Fox, W. Burgard, F. Dellaert, and S. Thrun. Monte Carlo localization: Efficient position estimation for mobile robots. In Proceedings of the Sixteenth National Conference on Artificial Intelligence, pages 343-349, 1999.

[2] S. Thrun, W. Burgard, and D. Fox. Probabilistic Robotics. MIT Press, 2005.