RoboCupRescue - Simulation League Team RescueRobots Freiburg (Germany)

Published in: Proc. 4th International Workshop on Synthetic Simulation and Robotics to Mitigate Earthquake Disaster


Alexander Kleiner and Vittorio Amos Ziparo
Institut für Informatik, Foundations of AI, Universität Freiburg, 79110 Freiburg, Germany
http://www.informatik.uni-freiburg.de/rescue/virtual

Abstract. This paper describes the approach of the RescueRobots Freiburg Virtual League team. Our simulated robots are based on the two real robot types Lurker, a robot capable of climbing stairs and random stepfields, and Zerg, a lightweight and agile robot capable of autonomously distributing RFID tags. Our approach covers a novel method for RFID-technology-based SLAM and exploration, allowing the fast and efficient coordination of a team of robots. Furthermore, we utilize Petri nets for team coordination.

1 Introduction

This paper describes the approach of the RescueRobots Freiburg Virtual League team. In general, our research focuses on the implementation of a fully autonomous team of robots that quickly explores a large terrain while mapping the environment. The simulated robots are based on the two real robot types Lurker, a robot capable of climbing stairs and random stepfields, and Zerg, a lightweight and agile robot capable of autonomously distributing RFID tags. Our approach covers a novel method for RFID-technology-based SLAM and exploration, allowing the fast and efficient coordination of a team of robots.

The motivation behind RFID-technology-based SLAM and exploration is the simplification of the 2D mapping problem by RFID tags, which the robots autonomously distribute with a tag-deploy device. RFID tags provide a world-wide unique number that can be read from distances of up to one meter. The detection of these tags, and thus the unique identification of locations, is computationally much cheaper and less error-prone than identifying locations from camera images and range data (note that even for humans the unique identification of a location is hard when, for example, exploring a large office building or a collapsed building structure).

RFID-technology-based SLAM and exploration has advantages for Urban Search and Rescue (USAR): the system generates from the RFID tags a topological map, which can be augmented with structural and victim-specific information. If human task forces are also equipped with RFID readers, they can directly localize themselves within this map, rather than locating themselves in a 2D or 3D metric map. Travel routes to victims can directly be passed to them as complete plans that consist of RFID tag locations and directions to follow.

In fact, tags can be considered as signposts, since the topological map provides for each tag the direction to the next tag. Furthermore, it is possible to store data, e.g. concerning nearby rooms or victims, directly in the tags. This information can then be utilized by other teams that are out of communication range. The idea of labeling locations with information that is important to the rescue task has already been applied in practice: during the disaster relief in New Orleans in 2005, rescue task forces marked buildings with information concerning, for example, hazardous materials or victims inside the buildings. Our RFID-technology-based marking of locations is a straightforward extension of this concept. Furthermore, we utilize Petri nets for team coordination.

Fig. 1. Robots built by our team: (a), (b) the real Lurker robot and the real Zerg robot at the RoboCup competition in Osaka; (c), (d) the simulated Lurker robot and the simulated Zerg robot. Picture (a) was taken by Adam Jacoff.

2 RFID Technology-based SLAM

The RescueRobots Freiburg real robot team successfully performed SLAM during the final of the Best in Class autonomy competition at RoboCup 2005 in Osaka. The map shown in Figure 2 was autonomously generated by the system, i.e. directly printed out after the mission without any manual adjustment by the operator.

Fig. 2. Zerg robot during the final of the Best in Class autonomy competition at RoboCupRescue 2005 in Osaka: (a) slipping on newspapers, and (b) the autonomously generated map. Red crosses mark locations of victims which have been found by the robot.

Our overall system for SLAM is based on three levels: slippage-sensitive odometry, scan matching, and RFID-based localization. Of these three levels, the latter two are applied within the Virtual Robot competition. We tackle the loop-closing problem by actively distributing unique RFID tags in the environment, i.e. placing them automatically on the ground, and by utilizing the tag correspondences found on the robot's trajectory for calculating globally consistent maps following the method introduced by Lu and Milios [1]. This method requires reliable estimates of the local displacement between two RFID tags. Therefore, a Kalman filter is utilized, which estimates the robot's pose from both scan matching and odometry-based dead reckoning.

Generally, the robot's pose can be modeled by a Gaussian distribution N(l, Σ_l), where l = (x̂, ŷ, θ̂)^T is the mean and Σ_l is a 3×3 covariance matrix expressing the uncertainty of the pose [2]. Given the measurement of the robot's motion by the normal distribution N(u, Σ_u), where u = (d, α) is the input of traveled distance d and angle α, respectively, and Σ_u is a 2×2 covariance matrix expressing odometry errors, the robot's pose at time t can be updated as follows:

    l_t = F(l_{t-1}, d, \alpha)
        = \begin{pmatrix}
              \hat{x}_{t-1} + \cos(\hat{\theta}_{t-1})\, d \\
              \hat{y}_{t-1} + \sin(\hat{\theta}_{t-1})\, d \\
              \hat{\theta}_{t-1} + \alpha
          \end{pmatrix},                                                      (1)

    \Sigma_{l_t} = \nabla F_l\, \Sigma_{l_{t-1}}\, \nabla F_l^T
                 + \nabla F_u\, \Sigma_{u_{t-1}}\, \nabla F_u^T,              (2)

where F describes the update formula, and ∇F_l and ∇F_u are the Jacobians of F with respect to the pose l and the input u, respectively.
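For illustration only (not the team's actual code), the following Python sketch shows how the prediction step of equations (1) and (2) could be implemented; it assumes numpy, and the function and variable names are ours.

    import numpy as np

    def predict_pose(l, Sigma_l, d, alpha, Sigma_u):
        """Propagate the pose mean and covariance by one odometry step,
        following equations (1) and (2); l = (x, y, theta), u = (d, alpha)."""
        x, y, theta = l

        # Equation (1): motion model F(l_{t-1}, d, alpha)
        l_new = np.array([x + np.cos(theta) * d,
                          y + np.sin(theta) * d,
                          theta + alpha])

        # Jacobian of F with respect to the pose l
        F_l = np.array([[1.0, 0.0, -np.sin(theta) * d],
                        [0.0, 1.0,  np.cos(theta) * d],
                        [0.0, 0.0,  1.0]])

        # Jacobian of F with respect to the odometry input u = (d, alpha)
        F_u = np.array([[np.cos(theta), 0.0],
                        [np.sin(theta), 0.0],
                        [0.0,           1.0]])

        # Equation (2): covariance propagation
        Sigma_new = F_l @ Sigma_l @ F_l.T + F_u @ Sigma_u @ F_u.T
        return l_new, Sigma_new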

Suppose the robot distributes n RFID tags at unknown locations l_1, l_2, ..., l_n, with displacement d_ij = (Δx_ij, Δy_ij, Δθ_ij)^T between l_i and l_j. In order to determine the estimated displacement d̂_ij with corresponding covariance matrix Σ_ij between l_i and l_j, the previously described Kalman filter is utilized. If the robot passes a tag l_i, we reset the Kalman filter in order to estimate the relative displacement d̂_ij to the subsequent tag l_j on the robot's trajectory. Our goal is to estimate locations l_i that best explain the measured displacements d̂_ij and covariances Σ_ij. This can be achieved with the maximum likelihood concept by minimizing the following Mahalanobis distance:

    W = \sum_{ij} (d_{ij} - \hat{d}_{ij})^T\, \Sigma_{ij}^{-1}\, (d_{ij} - \hat{d}_{ij}),              (3)

where the summation goes over all measured displacements and d_ij is the true displacement between l_i and l_j. Note that if we assume the robot's orientation to be measured by the IMU (whose error does not accumulate), we do not need to consider the orientation θ within the d_ij in Equation 3, and hence the optimization problem can be solved linearly by setting d_ij = l_i - l_j. However, if we also want to improve the estimate of the orientation, the d_ij have to be linearized. It can easily be shown that the optimization problem in Equation 3 can be solved as long as the covariances Σ_ij are invertible [1]. For distributing the tags in the environment, we constructed a special apparatus, which is also simulated on the virtual Zerg robot.

Fig. 3. Result from applying the non-linear mapper to data generated in the simulation: (a) map with odometry noise, and (b) the corrected map.
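As a sketch of the linear case described above (orientations taken from the IMU, so that d_ij = l_i - l_j), the following Python code estimates the tag locations from the measured displacements and covariances by weighted least squares. It is illustrative only: it assumes numpy, fixes the first tag at the origin to remove the global offset, and all names are of our own choosing.

    import numpy as np

    def estimate_tag_locations(n, measurements):
        """Estimate 2D tag locations l_1..l_{n-1} (l_0 fixed at the origin) from
        relative measurements. `measurements` is a list of (i, j, d_hat, Sigma),
        meaning l_j - l_i ≈ d_hat with 2x2 covariance Sigma. This minimizes the
        Mahalanobis distance of equation (3) for the linear (known-orientation) case."""
        dim = 2 * (n - 1)            # l_0 is anchored, so it is not a free variable
        H = np.zeros((dim, dim))     # information matrix of the normal equations
        b = np.zeros(dim)

        def blk(k):
            return slice(2 * (k - 1), 2 * k)   # variable block of tag k (k >= 1)

        for i, j, d_hat, Sigma in measurements:
            d_hat = np.asarray(d_hat, dtype=float)
            W = np.linalg.inv(Sigma)           # weight = inverse covariance
            # Jacobian of (l_j - l_i) is +I for tag j and -I for tag i.
            if j > 0:
                H[blk(j), blk(j)] += W
                b[blk(j)] += W @ d_hat
            if i > 0:
                H[blk(i), blk(i)] += W
                b[blk(i)] -= W @ d_hat
            if i > 0 and j > 0:
                H[blk(i), blk(j)] -= W
                H[blk(j), blk(i)] -= W

        # Solvable as long as every tag is connected to the anchored tag.
        x = np.linalg.solve(H, b)
        return np.vstack([np.zeros(2), x.reshape(-1, 2)])   # row k = location of tag k

Anchoring one tag removes the otherwise unobservable global translation; the full method in the text additionally refines orientations, which requires the linearization noted above.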

3 RFID Technology-based Exploration

The efficiency of multi-robot exploration is usually measured by the ratio between the explored area and the distance traveled by the robots [5]. This efficiency can only be maximized if robots know about the past and future exploration targets of the other robots. The proposed method enables an exchange of this information via programmable RFID tags. Note that due to the unconstrained communication in the Virtual Competition, information concerning RFID tags can also be communicated directly by the robots without passing the tags.

We assume that a single robot explores the environment based on the concept of frontier cell exploration [4]. A cell is considered a frontier cell if it has already been explored but also neighbors an unexplored cell. Each robot maintains a set of frontier cells with respect to its observations, e.g. by removing cells that come into the field of view of its sensors and by adding cells that are at the border, respectively. Frontier cells are generated by integrating laser range finder readings into an occupancy grid. In our approach, we restrict the size of this grid to the local vicinity of the robot.

The basic idea of RFID-based exploration is to leave behind information via RFID tags, which helps other robots to reduce the overlap of exploration targets. Therefore, we store on the RFID tags the relative locations of visited cells V_tag = (v_1, v_2, ..., v_m), i.e. cells that have been visited by other robots. Tags are intended to provide information for a local area of the exploration space; thus their influence radius, i.e. the maximal distance of relative locations, is limited by the distance τ. Robots subsequently synchronize the data related to a tag with their list of visited locations within the range τ of the tag. We assume that the robots' IMU is based on a compass and thus the local coordinate frames of the robots are equally aligned to magnetic north. Each robot maintains a collection V_r containing time-stamped locations l_t = (x, y). During each cycle t, the robot adds its current pose l_t. If the robot passes an RFID tag, V_r and V_tag are synchronized following Algorithm 1.

Algorithm 1 Synchronization of V_r and V_tag
  for all v_i ∈ V_tag do
    add absolute location (v_i + l_0) to V_r
  end for
  for all v_j ∈ V_r do
    if |v_j − l_0| < τ then
      add relative location (v_j − l_0) to V_tag
    end if
  end for
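A minimal Python sketch of this synchronization step is given below. It is illustrative only: the names are ours, timestamps are omitted for brevity, and we assume l_0 denotes the location of the tag currently being passed, expressed in the robots' shared compass-aligned frame.

    import numpy as np

    def synchronize(V_r, V_tag, l_0, tau):
        """Synchronize the robot's visited-location list V_r with the visited-cell
        list V_tag stored on an RFID tag, following Algorithm 1.
        V_r holds absolute 2D locations, V_tag holds locations relative to l_0."""
        l_0 = np.asarray(l_0, dtype=float)

        # Import the tag's relative entries as absolute locations into V_r.
        for v_i in V_tag:
            V_r.append(np.asarray(v_i, dtype=float) + l_0)

        # Export the robot's visited locations within range tau as relative entries.
        for v_j in list(V_r):
            v_j = np.asarray(v_j, dtype=float)
            if np.linalg.norm(v_j - l_0) < tau:
                V_tag.append(v_j - l_0)

        return V_r, V_tag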

Furthermore, we maintain for each RFID tag location a local evidence grid that integrates the observations from the victim sensor. Each observation is updated according to the robot's pose and the sensor's field of view. From the victim evidence grid a second set of frontier cells is calculated. According to the two sets of frontier cells and the locations visited by other robots, exploration targets are selected with different priorities, whereby infrequently explored areas are preferred. Figure 4 shows some results of the RFID-technology-based exploration. Note that these results were obtained within a free-space exploration, i.e. within an arena without victims.

Fig. 4. Averaged results from various exploration runs in the simulation on small, normal, and large maps with no communication, RFID communication, and full communication: (a) the explored area [m²], and (b) the exploration score, i.e. area divided by travelled distance.

4 Coordination

In order to successfully explore and navigate in the arena, robots need to coordinate. In particular, we rely on a centralized approach in which an agent is responsible for collecting relevant information and producing a synchronized multiagent plan. The coordinator agent will build and maintain a dynamic model of the world based on Petri nets [3]. The current knowledge of the environment and the state of the multiagent system will thus be represented as a Petri net <P, T, F, W, M_0>. The model will be dynamically updated as information is gathered and will be used to compute moves for each agent which guarantee safe multiagent path planning.

Fig. 5. Basic structures: (a) the RFID location structure, (b) the frontier structure, and (c) the passage structure.

The following structures <P, T, F> and their possible combinations define the syntactic structure of our model:

1. RFID Location: This structure represents a unique RFID location. Figure 5(a) shows a graphical representation. Formally: <{p_rfid_i}, ∅, ∅>.

2. Frontier: This structure represents the open frontiers in the vicinity of an RFID location. Figure 5(b) shows a graphical representation. Formally: <{p_front}, {t_front}, {(t_front, p_front)}>.

3. Passage: This structure represents a connection between RFID locations. Figure 5(c) shows a graphical representation. The marking of p_constr denotes the maximum number of robots allowed in the passage simultaneously. More formally, a passage is: <{p_state, p_constr}, {t_entr, t_exit}, {(t_entr, p_state), (p_state, t_exit), (t_exit, p_constr), (p_constr, t_entr)}>.

We combine these structures to obtain new models:

1. An RFID location <P_l, T_l, F_l> can be combined with a frontier <P_f, T_f, F_f>, resulting in <P_l ∪ P_f, T_l ∪ T_f, F_l ∪ F_f ∪ {(p_rfid, t_front)}>.

2. An RFID location <P_l, T_l, F_l> can be combined with a passage <P_p, T_p, F_p>, resulting in <P_l ∪ P_p, T_l ∪ T_p, F_l ∪ F_p ∪ {(p_rfid, t_entr)}>.

3. A passage <P_p, T_p, F_p> can be combined with an RFID location <P_l, T_l, F_l>, resulting in <P_l ∪ P_p, T_l ∪ T_p, F_l ∪ F_p ∪ {(t_exit, p_rfid)}>.

Algorithm 2 Coordination Agent
  while True do
    for all Task ∈ AccomplishedTasks do
      if Task.isInitialLocation() then
        Model.addRFIDLocation(Task.RFID)
      else if Task.isFrontier() then
        Model.addRFIDLocation(Task.RFID)
        Model.removeFrontier(Task)
        Model.combine(Task.prevRFID, Task.passage)
      end if
      Model.updateMarking(Task)
      for all Neighbor ∈ Task.frontiersList() do
        if Neighbor ∉ ExploredList then
          Model.combine(Task.RFID, Neighbor)
        end if
      end for
    end for
    Plan = calculatePlan(Model)
    assignGoals(Plan.nextStep)
    Model.updateMarking(Plan)
  end while
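To make the model representation concrete, here is a small Python sketch (illustrative only; the function names and string conventions are ours, not the team's implementation) of the three basic structures as <P, T, F> triples and of the combination operation used in the rules above and inside Algorithm 2.

    def rfid_location(tag_id):
        """RFID location structure: a single place, no transitions, no arcs."""
        return ({f"p_rfid_{tag_id}"}, set(), set())

    def frontier(name):
        """Frontier structure: one place fed by one transition."""
        p, t = f"p_front_{name}", f"t_front_{name}"
        return ({p}, {t}, {(t, p)})

    def passage(name):
        """Passage structure: a state place plus a constraint place; the initial
        marking of p_constr (not part of <P, T, F>) limits how many robots may
        travel through the passage simultaneously."""
        p_state, p_constr = f"p_state_{name}", f"p_constr_{name}"
        t_entr, t_exit = f"t_entr_{name}", f"t_exit_{name}"
        arcs = {(t_entr, p_state), (p_state, t_exit),
                (t_exit, p_constr), (p_constr, t_entr)}
        return ({p_state, p_constr}, {t_entr, t_exit}, arcs)

    def combine(net_a, net_b, extra_arcs):
        """Union of two <P, T, F> structures plus the connecting arcs,
        e.g. {(p_rfid, t_entr)} when attaching a passage to an RFID location."""
        P_a, T_a, F_a = net_a
        P_b, T_b, F_b = net_b
        return (P_a | P_b, T_a | T_b, F_a | F_b | set(extra_arcs))

For example, combination rule 2 would correspond to combine(rfid_location(7), passage("7_8"), {("p_rfid_7", "t_entr_7_8")}).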

Assuming that robots deploy an RFID tag at their starting location, Algorithm 2 correctly maintains the model and assigns tasks to the agents. In particular, the model is built as follows:

1. Whenever an RFID tag is deployed by an agent, a new place is added to the graph. The marking of this node represents the number of agents in the proximity of the tag. In particular, an agent is associated with the nearest RFID tag among those within a distance of τ (Section 3).

2. The RFID locations are combined with the open frontiers perceived by each agent.

3. When moving from a location to a frontier, robots will identify passages. Passages cannot be longer than τ, which is the maximum distance between two RFID locations. A passage is thus characterized by a distance d < τ and a constraint on how many agents can simultaneously travel through it (i.e. the marking of the constraint place p_constr).

Given the current model, we can search for the sequence of state transitions that maximizes the overall performance, i.e. find a multiagent plan that maximizes the number of frontier places with marking one and minimizes the travelled distance of the agent who travels the most. At each timestep, given the current model, we can assign to each robot a neighboring RFID location or frontier. The overall result of this continuous planning process will be a synchronized multiagent path plan which guarantees that the assignments are safe (in the sense that passage capacities are not exceeded) and optimal according to the current knowledge.

5 Conclusion

Our approach offers a solution to the problem of deploying a large group of robots while utilizing only as many computational resources as necessary. This is achieved by decomposing the generally computationally hard problem of SLAM, exploration, and team coordination into two levels: firstly, the grid-based level, which is locally restricted to the close vicinity of a robot or an RFID tag; secondly, the topological level, which is less fine-grained than the local level but allows globally consistent solutions to be computed efficiently. We believe that this decomposition leads to an efficient solution to the problem of multi-robot search and rescue.

References

1. F. Lu and E. Milios. Globally consistent range scan alignment for environment mapping. Autonomous Robots, 4:333-349, 1997.
2. P. S. Maybeck. Stochastic models, estimation, and control, volume 141 of Mathematics in Science and Engineering. 1979.
3. T. Murata. Petri nets: Properties, analysis and applications. Proceedings of the IEEE, 77(4):541-580, April 1989.
4. B. Yamauchi. A frontier-based approach for autonomous exploration. In IEEE International Symposium on Computational Intelligence in Robotics and Automation (CIRA '97), 1997.
5. R. Zlot, A. Stentz, M. Dias, and S. Thayer. Multi-robot exploration controlled by a market economy. In Proceedings of the IEEE International Conference on Robotics and Automation, 2002.