Planning for Simultaneous Localization and Mapping using Topological Information


Rafael Gonçalves Colares and Luiz Chaimowicz

Abstract — This paper presents a novel approach for augmenting simultaneous localization and mapping (SLAM) with planning. We use dynamically generated topological maps in conjunction with a utility function to decide which actions the robot should perform in order to improve mapping efficiency. We execute a series of simulated and real experiments to study the performance of the proposed approach, and the results show a significant improvement in mapping efficiency.

I. INTRODUCTION

Simultaneous Localization and Mapping (SLAM) is a very active topic in robotics research. Despite the huge number of works in this area, few approaches are explicitly concerned with how the exploration of the environment is performed. Sometimes called SPLAM (Simultaneous Planning, Localization and Mapping) [1] or Active SLAM [2], these approaches try to use sensor data not only to map the environment and localize the robot but also to plan its actions. The planning component in the SLAM system should be capable of analyzing several features, such as the map built so far, the current sensor information, and the state of the robot, and, based on that, decide which action the robot should take. This action can have different goals, such as improving map quality, exploring the environment, or reducing localization uncertainty.

In this paper, we propose a novel SPLAM approach for indoor, non-prepared environments. The main contribution is the use of a topological map as the basis for planning. This topological map is dynamically generated on top of DP-SLAM [3], an efficient SLAM algorithm that does not depend on explicit landmarks. Based on this topological map, a utility function is used to decide which actions should be taken by the robot. It balances the exploration of new areas with the improvement of map quality in order to maximize the efficiency of the mapping process.

This paper is organized as follows. Section II brings a brief literature review related to SPLAM. Section III gives an overview of our approach, with the planning, mapping and navigation components detailed in Sections IV, V, and VI, respectively. Experimental results with both simulated and real robots are presented in Section VII, while Section VIII brings the conclusion and directions for future work.

This work was partially supported by CNPq and Fapemig. The authors are with the Vision and Robotics Laboratory (VeRLab), Computer Science Department, Federal University of Minas Gerais, Brazil. E-mails: {rcolares, chaimo}@dcc.ufmg.br

II. RELATED WORK

One of the first works to consider the use of SLAM with planning was proposed by Feder [4], in which a utility function was used to determine the robot's next action. This utility function chose the next action from those that could improve the system information, essentially improving the exploration. Bourgault [5] proposed a new utility function that tried to improve the exploration but also the robot's localization, as exploration alone could end up introducing errors into the system. Makarenko's work [6] added new information to the utility function to keep the robot from performing long trajectories, since the environment is unknown. Huang [7] noted that determining a series of actions for the robot is better than deciding only the next one.
In spite of that, the robot's trajectory had to be updated at every step, since new information about the robot's position and the map arrived in the system. He called this strategy Model Predictive Control. Although it presented better results than the strategies cited before, it still had the problem of exploring already visited areas. Leung [8], [9] presented a new approach to this problem using the idea of behaviors for the robots. Using this strategy, the robot chooses among exploring (navigating to unexplored regions), improving localization (navigating through areas with well localized landmarks) or improving the map (navigating through areas where the map quality is poor).

Another type of planning involves the coordination of several robots performing SLAM. According to Fox [10], when more than one robot is used for the SLAM problem, the system needs some coordination that involves planning the robots' next actions. They used a frontier-based exploration, which drives the robots to unexplored areas and continuously tries to make the robots meet and exchange information. Frontier-based exploration is also used in [11] and [12]. In [13] each robot decides its next action based on a global map and what the authors call an objective function. This work was extended in [14] with the development of two approaches: one with a central unit that decides all the actions, and a distributed approach in which each robot is responsible for its own decisions.

III. METHODOLOGY

As mentioned in Section I, our main objective is to propose a navigation policy that works together with the SLAM algorithm, allowing a more efficient exploration of the environment. For this, we developed a system composed of three main components: a planning component responsible for analyzing the robot's and the map's status and deciding which action to take next, a mapping component that executes the SLAM while the robot is exploring, and a navigation component that takes care of the robot navigation according to the generated plans. Figure 1 illustrates how these components are connected.

Fig. 1: The system's main components.

The basic idea behind the system is to generate exploration plans for the robot while the SLAM is performed. This is done through the use of a topological map, which abstracts the metric map, organizing and storing important information, and a utility function, which decides how the environment should be explored. The utility function selects a series of waypoints through which the robot has to navigate. These waypoints are chosen dynamically, taking into consideration the following characteristics:

1) Waypoints should not be far from the robot's current location: this is based on the fact that we do not know the entire map yet; in fact, we are trying to build the map, so we have to constantly re-evaluate our decisions. If we choose to follow long paths we may end up performing bad actions. Thus, we decided to only choose goal points that are relatively close to the robot, so we can frequently choose the next goals, allowing us to avoid bad decisions or to recover from them more easily and quickly.

2) Going to these waypoints should improve the map: the waypoints we choose should add some information to the map, either by expanding the borders of the map or by improving a previously mapped area that returned too much uncertainty. We implemented mechanisms that analyze these two features for each goal point and choose the best one, favoring exploration while making sure that we also generate a good quality map.

3) The waypoints should, preferably, be on a straightforward path: the robot should prefer to navigate in a straight line, avoiding turns, especially sharp turns such as 180°, as these increase the odometric errors.

The system works as follows. Initially the robot stays still and collects information to generate a first map. This map is analyzed and a topological map is built on top of it. The utility function receives this topological map, decides which point should be visited next and sends this information to the navigation system. The navigation system tries to safely drive the robot to the goal point and, when the robot arrives there, this process is repeated. The next sections will detail the components of the system.
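As a rough illustration of this loop, the sketch below shows one possible structure, with the planning, mapping and navigation components passed in as callables. The names (build_topo_map, select_region, navigate_to, slam) are illustrative placeholders for the components described in Sections IV to VI, not the authors' implementation.

# Minimal sketch of the plan-map-navigate loop described above.
# All callables are hypothetical stand-ins for the components detailed in
# Sections IV (planning), V (mapping) and VI (navigation).

def splam_loop(slam, robot_pose, build_topo_map, select_region, navigate_to,
               max_cycles=200):
    """Repeatedly plan a goal region on the current map and navigate to it."""
    for _ in range(max_cycles):
        grid = slam.current_map()                       # occupancy grid maintained by SLAM
        topo_map = build_topo_map(grid)                 # Section IV-A: skeleton + regions
        goal = select_region(topo_map, robot_pose())    # Section IV-B: utility function
        if goal is None:                                # nothing left worth visiting
            break
        navigate_to(goal)                               # Section VI: ND navigator drives the robot
    return slam.current_map()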
IV. PLANNING

The planning component is composed of two main subcomponents: the topological map and the utility function. They are explained next.

A. Topological Map

In order to reason about the environment and generate plans for the robot, we build a topological map on top of the SLAM map. The process of building the topological map is divided into three steps: first, a Voronoi-like skeleton is generated on top of the SLAM map; we then divide the map into regions and create nodes that store information about these regions. The skeletonization is based on the work of [15], in which an image processing algorithm known as the thinning method is used to detect the skeleton of a map. The objective of this thinning step is to obtain results similar to a Generalized Voronoi Diagram, i.e., a set of points in the environment that lie as far as possible from the walls and obstacles.

To obtain the skeleton, we make a copy of the map obtained from the SLAM and apply the thinning algorithm on it. The algorithm runs iteratively, eliminating grid cells until every cell left on the map is part of the skeleton. When the thinning is finished, we project the points that compose the skeleton back onto the SLAM map. The skeleton points are used in Algorithm 1 to generate a topological map, dividing the SLAM map into regions, each one represented by a node. Each skeleton point is a potential candidate to become a node of the topological map, and these nodes are the centers of the created regions. Each node contains three pieces of information: (i) an ID that serves as an identifier, distinguishing the nodes from each other; (ii) a variable that evaluates the map returned by the SLAM algorithm and tries to quantify the uncertainty in that region; and (iii) the coordinates of the center of the region, i.e., the coordinates of the skeleton point that generated that node. Regions are defined by circles centered on the node coordinates, with a radius that can be tuned to better represent the map. These node coordinates will be considered as goal points by the navigator, as explained in the next sections.

During robot navigation, a new topological map is constructed every time the robot reaches a goal point. This is necessary because every piece of new information added to the SLAM map can substantially modify it, causing the topological map to become outdated. Figure 2a illustrates this process: the skeleton is shown in red and the first region created is represented by a blue circle with its center marked by a blue square. Fig. 2b shows the complete topological map after the algorithm execution.
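As a rough illustration of the skeletonization step, the sketch below thins a small synthetic occupancy grid. It uses skimage.morphology.skeletonize as a stand-in for the thinning method of [15]; the grid, its size and the library choice are assumptions made for the example, not the authors' implementation.

import numpy as np
from skimage.morphology import skeletonize

# Synthetic occupancy grid: True = free space, False = obstacle/unknown.
# A 40x40 map with two corridors meeting at a corner.
free = np.zeros((40, 40), dtype=bool)
free[5:12, 5:35] = True     # horizontal corridor
free[5:35, 28:35] = True    # vertical corridor

# Thin the free space down to a one-cell-wide skeleton, which approximates
# a Generalized Voronoi Diagram (points far from walls and obstacles).
skeleton = skeletonize(free)

# Project the skeleton back to map coordinates: these (row, col) cells are
# the candidate node centers handed to Algorithm 1.
candidate_points = np.argwhere(skeleton)
print(f"{len(candidate_points)} skeleton cells, e.g. {candidate_points[:3].tolist()}")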

Algorithm 1 Topological Map creation
Require: skeleton[] {the points of the skeleton}
  for i ← 1 to skeleton.size() do
    if topological_map.is_empty() then
      topological_map.insert(skeleton[i])
    else
      insert ← true
      for e ← 1 to topological_map.size() do
        D1 ← skeleton[i]; D2 ← topological_map[e]
        if distance(D1, D2) < DIST_MIN then
          insert ← false
          break
        end if
      end for
      if insert = true then
        topological_map.insert(skeleton[i])
      end if
    end if
  end for

Fig. 2: Generation of the topological map: (a) skeleton and first region; (b) complete topological map after the algorithm execution.

B. Utility Function

Based on the information provided by the topological map, the system decides which region, represented by a node of the topological map, the robot should visit next. To do so, we define a utility function that aims for an efficient exploration of the environment. This function is defined as:

U(x) = (D(x) + Err(x) − V(x)) · Q(x),   (1)

where, for each region x, U(x) is the utility value, D(x) is the distance from the robot's current pose to region x, Err(x) is a factor related to the uncertainty of the map in that region, V(x) is a function that indicates whether the robot has already visited that region, and Q(x) is related to the angle the robot is currently facing. The region chosen to be visited by the robot is the one with the highest utility value.

The D(x) factor is the Euclidean distance from the robot's current position to each region's center point on the topological map. This factor favors regions that are distant from the robot and penalizes regions that are too close to the robot's current pose. This is done to improve the exploration of the environment. On the other hand, we do not want to include regions that are very far away, because the map is constantly being updated with new data and this can change the utility function. So we only consider regions within a maximum distance δ.

The Err(x) factor is related to the uncertainty of the map returned by the SLAM process in that region. It is computed by analyzing the probability returned by the SLAM algorithm at each point inside the region. Despite being an important factor to consider if we want to build a good and trustworthy map, it should be used with caution. This information alone can result in a loss of efficiency in the mapping process, as it could lead the robot to choose regions that are difficult to reach or that are far away from its current position, causing the robot to spend time navigating through already visited parts of the map.

In order to keep encouraging the robot to explore the environment and to avoid already mapped areas, the V(x) factor keeps track of every cell the robot has navigated. Every time we compute the utility of a region, we check whether the robot has passed nearby and penalize that region. This factor is especially useful when the robot reaches a crossroad that it has already visited: in this situation, the function favors the path that was not selected previously.

One last factor considered in the computation of the utility function is related to the robot orientation, Q(x). Basically, we would like the robot to navigate in a straight line, avoiding long and sharp turns. This is an important factor to consider since, in general, robot turns cause more odometric errors than forward movements. Another reason for using this factor is to encourage the robot to keep moving toward new areas, penalizing regions that would make it turn around and navigate through places it has just passed.
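As an illustration, the sketch below evaluates Equation 1 for a set of candidate regions under simplifying assumptions: regions are dictionaries with a center and an uncertainty score, V(x) is a fixed penalty for regions the robot has already passed near, and Q(x) is a simplified heading-based weighting that stands in for the sector scheme described next. All parameter values are made up for the example and are not the ones used in the paper.

import math

# Illustrative sketch of U(x) = (D(x) + Err(x) - V(x)) * Q(x).
DELTA = 8.0            # maximum distance delta: farther regions are ignored
VISITED_PENALTY = 3.0  # assumed V(x) value for regions already passed near

def q_factor(robot_theta, robot_xy, region_xy):
    """Favor regions roughly ahead of the robot, penalize those behind it."""
    bearing = math.atan2(region_xy[1] - robot_xy[1], region_xy[0] - robot_xy[0])
    turn = abs((bearing - robot_theta + math.pi) % (2 * math.pi) - math.pi)
    if turn < math.pi / 4:       # roughly straight ahead
        return 1.0
    if turn < 3 * math.pi / 4:   # to the side
        return 0.5
    return 0.25                  # behind the robot

def best_region(regions, robot_xy, robot_theta, visited):
    best, best_u = None, -math.inf
    for r in regions:
        d = math.dist(robot_xy, r["center"])                 # D(x)
        if d > DELTA:                                        # outside the distance limit
            continue
        err = r["uncertainty"]                               # Err(x), from the SLAM map
        v = VISITED_PENALTY if r["id"] in visited else 0.0   # V(x)
        u = (d + err - v) * q_factor(robot_theta, robot_xy, r["center"])
        if u > best_u:
            best, best_u = r, u
    return best

regions = [
    {"id": 0, "center": (2.0, 0.0), "uncertainty": 0.2},
    {"id": 1, "center": (5.0, 1.0), "uncertainty": 1.5},
    {"id": 2, "center": (-4.0, 0.0), "uncertainty": 0.1},
]
print(best_region(regions, robot_xy=(0.0, 0.0), robot_theta=0.0, visited={0}))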
As this factor is a critical part of the system, both for the odometric errors and for the exploration, it multiplies the other three factors by a number that varies according to the region's position relative to the robot orientation. To determine this number, the map is divided into 4 sectors and we determine which sector the robot is facing at the moment and in which sector the region is located. The multiplier is then given by the robot's facing angle and the region's sector. Figure 3 shows a robot facing sector II and the corresponding Q values for each sector.

Fig. 3: Map divided into 4 sectors. The robot is represented by the green square and the red lines are the sector borders. The Q values for regions in each of the sectors are shown.

V. MAPPING

To perform the simultaneous localization and mapping, we used the Distributed Particle SLAM algorithm (DP-SLAM), proposed by Eliazar and Parr [3], [16], [17], which uses particle filters to maintain a joint probability distribution over maps and robot positions. DP-SLAM uses an efficient map representation, reducing the time and the memory necessary to copy and store maps. Hence, DP-SLAM is able to maintain several different maps in memory until it can be sure which one is most likely to represent the environment.

To keep all these particles and maps in memory and access them easily, DP-SLAM uses the concept of an ancestry tree, where each particle has a pointer indicating which particle was re-sampled to generate it. As can be noted, this tree can grow as large as the number of particles and generations. Thus, DP-SLAM keeps the tree small by pruning it over time. As a rule, a particle that does not get re-sampled is pruned from the tree, based on the fact that it did not score high when its probability was analyzed. Another pruning happens when a particle has only one child: in this case, the child is merged with its parent.

While the pruning can reduce the ancestry tree to an acceptable size, keeping a map for each particle can still waste a huge amount of memory. Thus, DP-SLAM has only one physical map and several virtual maps. The map is an occupancy grid, but each cell does not simply represent free or occupied as usual. Instead, each cell holds an observation tree with the IDs of every particle that has made an observation on that cell, and each particle has a list that holds every cell it has observed. At this point, it is easy to understand how the map is actually generated: to find out the status of a certain cell, a particle just has to look for the most recent observation that one of its ancestors made on that cell.

DP-SLAM has another feature that the authors call hierarchical SLAM. Basically, when a robot is moving through an environment it accumulates drift errors. To reduce the influence of these errors on mapping and localization, DP-SLAM implements a drift model in its algorithm. First, a regular SLAM, called low SLAM, is executed, generating a low map, which is then used in what is called high SLAM. The high SLAM simulates drift errors in the robot's position and samples the robot's trajectory with the information from the low map. With the trajectory information, DP-SLAM tries to match the robot's observations with the trajectory and the drift errors and updates the high map. We call the computation of the low and the high maps a DP-SLAM cycle. It is interesting to note that while the high map has all the information mapped so far, the low map only has the information of the current cycle.

VI. NAVIGATION

To navigate the robot between regions, we use the Nearness Diagram (ND) Navigation [18], a reactive collision avoidance algorithm for robots navigating through complex scenarios. The algorithm tries to identify entities in the environment, such as obstacles and free space areas, and decides the best way to go from the current pose to the goal pose. These entities are used to define a set of situations and to implement the actions the robot should take in each situation. The sensory information retrieved by the robot is used to select a situation and the action associated with it.

After our utility function plans which region to visit, this information is passed to the navigation system as an (x, y) coordinate in the SLAM map. This coordinate is converted to the robot coordinate frame and handed to the ND navigator, which is responsible for safely guiding the robot to the next goal point.
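As a small illustration of this hand-off, the sketch below converts a goal point from SLAM map coordinates into the robot frame, given the robot pose estimated by the SLAM. The pose representation and function name are assumptions for the example, not the system's actual interface.

import math

# Convert a goal point expressed in the SLAM map frame into the robot's local
# frame, given the robot pose (x, y, theta) in the map frame. The resulting
# local coordinates are what a reactive navigator such as ND would consume.
def map_to_robot_frame(goal_xy, robot_pose):
    gx, gy = goal_xy
    rx, ry, rtheta = robot_pose
    dx, dy = gx - rx, gy - ry
    # Rotate the displacement by -theta to express it in the robot frame.
    x_local = math.cos(-rtheta) * dx - math.sin(-rtheta) * dy
    y_local = math.sin(-rtheta) * dx + math.cos(-rtheta) * dy
    return x_local, y_local

# Example: robot at (2, 1) facing 90 degrees; a goal at (2, 4) in the map frame
# ends up roughly 3 m straight ahead of the robot, i.e. about (3, 0).
print(map_to_robot_frame((2.0, 4.0), (2.0, 1.0, math.pi / 2)))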
VII. EXPERIMENTS

In order to test the efficiency of the proposed approach, we performed a series of experiments using both simulated and real robots. For the simulations, we used the Player/Stage platform [19]. The main objective was to evaluate the benefits of using our navigation policy in the mapping of a generic environment. We used three different environments, shown in Figure 4, and executed 20 runs of DP-SLAM with and without the navigation policy. Mapping without the navigation policy, called random navigation here, was performed by moving the robot in the forward direction and randomly choosing a new orientation whenever an obstacle or wall was encountered. The simulated robot was a Pioneer 2DX with a SICK laser. In all runs, the robot was initially placed at the center of the environment, with a 90° orientation.

Fig. 4: Three environments used in the simulations: (a), (b) and (c).
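As a rough sketch of the random-navigation baseline described above (simple wandering, not the authors' code), one control step over a laser scan could look like the following; the threshold and speed values are made-up examples.

import random

# Drive forward and pick a random new heading whenever the laser detects an
# obstacle ahead. Threshold and speed values are illustrative only.
OBSTACLE_DIST = 0.6   # metres: closest allowed range straight ahead
FORWARD_SPEED = 0.3   # m/s

def random_wander_step(scan_ranges):
    """Return a (linear_velocity, turn_angle) command for one control step.

    scan_ranges: list of range readings; the middle third is treated as
    'ahead of the robot'.
    """
    n = len(scan_ranges)
    ahead = scan_ranges[n // 3: 2 * n // 3]
    if min(ahead) < OBSTACLE_DIST:
        # Blocked: stop and choose a new random orientation.
        return 0.0, random.uniform(-3.14, 3.14)
    return FORWARD_SPEED, 0.0

print(random_wander_step([2.0] * 60))                              # clear path: keep going
print(random_wander_step([2.0] * 25 + [0.3] * 10 + [2.0] * 25))    # blocked: turn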

We first evaluated the ability to perform a complete mapping of the environment within a time limit (in this case, 200 DP-SLAM cycles) and the average number of DP-SLAM cycles necessary to perform this task. The results are shown in Table I. For each environment, it can be observed that the use of the navigation policy greatly improves the performance of the mapping task, reducing the average number of DP-SLAM cycles necessary to complete the map. Also, using the policy, the robot was able to complete a larger number of maps in comparison to the random navigation. The good results can be explained by the use of the utility function: it was capable of analyzing the generated map and deciding where to go to better explore the environment. Without the utility function, the robot kept navigating through already visited parts of the map, most of the time not completing the map. It is important to mention that, as we do not have a path planning algorithm to guide the robot between waypoints, sometimes the ND navigation system was not able to provide a good path for the robot. That is the main reason why we did not reach 100% map completion using our navigation policy.

TABLE I: Performance results.

                      SLAM Cycles              Completed Maps
Environment           a       b       c        a     b     c
Navigation Policy     91.85   92.1    132.1    18    19    13
Random Navigation     186.3   170.5   196.45   5     7     1

We also tested each of the utility function factors individually and in different combinations to analyze the behavior and the contribution of each factor to the utility function. Table II shows the results of these tests on environment a.

TABLE II: Performance results for different utility functions on environment a.

                                  SLAM Cycles   Completed Maps
Navigation Policy                 91.85         18
U(x) = D(x)                       169.2         3
U(x) = Err(x)                     200.0         0
U(x) = V(x)                       180.8         3
U(x) = Q(x)                       138.05        11
U(x) = D(x) + Err(x)              121.6         13
U(x) = D(x) − V(x)                122.45        12
U(x) = Err(x) − V(x)              182.3         3
U(x) = D(x) + Err(x) − V(x)       154.25        5

As can be seen in the table, the factors do not, individually, provide good mapping efficiency. For example, the function U(x) = Err(x) could not complete any map, because it kept trying to remove the uncertainties from the map it already had instead of exploring the environment. The function U(x) = D(x) implies that the robot should always visit faraway regions, thus improving the exploration of the environment, but this is not confirmed by the small number of completed maps. This is a result of choosing to visit a region that is far from the robot and then having to navigate for a long period through already visited areas to reach that chosen region. Even the function for already visited areas, U(x) = V(x), could only complete three maps, for the same reason as the previous factor: it may choose to visit regions far from the robot, thus spending long periods navigating through visited areas.

The importance of the Q(x) factor can be seen in the last line of Table II. The function U(x) = D(x) + Err(x) − V(x) is the same as our navigation policy without the Q(x) factor. As can be seen, the results of our navigation policy are better than those of U(x) = D(x) + Err(x) − V(x), both for SLAM cycles and for completed maps. The good results obtained for this factor can be explained by the topology of environment a, which favors straightforward navigation.

Some of these functions can be considered adaptations of approaches found in the literature. The function U(x) = Err(x), for example, uses the same idea proposed in [4]. This function tries to minimize the uncertainty of the map, thus making the robot navigate through already visited areas, reducing exploration and completing far fewer maps within the allowed time. This same behavior was reported in [4]. When we combine U(x) = D(x) + Err(x), we have an adaptation of the approach used in [5], which includes an exploration factor in the utility function. And if we apply the distance limit to the regions in the previous function, we are adapting the idea of [6] to our system. In our experiments, the original function (Equation 1) presented better results than both adaptations.

We also tested our system on a real robot navigating through some corridors. The robot is a Pioneer P3-AT with a SICK laser (Fig. 5), equipped with a Pentium M processor (1.73 GHz) and 512 MB of memory, running Ubuntu. Some of the robot parameters, such as the maximum speed, had to be adjusted, but the system parameters are the same as in the simulations. During the tests, the robot behaved as expected, matching the results achieved in the simulations. Fig. 6 shows three different runs performed with the real robot and Table III presents the results obtained for each run. The number of plans is the number of times our system computed a new region to visit. Fig. 7 illustrates the final maps obtained in each run.

Fig. 5: Pioneer 3AT used in the experiment.

TABLE III: Experimental results using a P3-AT robot.

        SLAM Cycles   Duration (min)   Distance (m)   No. of Plans
Run 1   18            18               15             3
Run 2   48            50               30             8
Run 3   68            60               35             9

Fig. 6: The robot's trajectories: Run 1 is in red, Run 2 is in blue and Run 3 is in green.

Fig. 7: Maps generated in Runs (a) 1, (b) 2, and (c) 3.

VIII. CONCLUSION

This paper presented the development of a navigation policy to improve the mapping and exploration of unknown environments. We propose the use of a utility function together with the DP-SLAM algorithm to dynamically generate plans and guide robot navigation. As shown in Section VII, our approach improved the performance of DP-SLAM in terms of time efficiency when compared to a random navigation approach.

Despite the good results, there is still room for improvement. As mentioned, sometimes the navigation system is not able to provide a good path between the waypoints. To solve this, we intend to use a path planning algorithm together with the ND navigator to improve the navigation part of our system. We also want to measure whether the map quality is improved by using our utility function during the exploration, and to determine whether the map quality can be improved by better parameters for the DP-SLAM algorithm. For example, the number of particles used by the DP-SLAM algorithm is an important factor for map quality but also influences the algorithm's processing time, so it is necessary to find a good trade-off between these factors in order to generate good quality maps in real time. Another point to analyze is that we assume that the SLAM algorithm provides the robot position without uncertainty when evaluating the utility function; an improved utility function would consider this uncertainty when evaluating the best area to visit. Finally, we would like to explore the use of multiple robots in this task and use the utility function to guide the cooperative mapping.

REFERENCES

[1] C. Leung, S. Huang, N. M. Kwok, and G. Dissanayake, "Planning under uncertainty using model predictive control for information gathering," Robotics and Autonomous Systems, pp. 898-910, 2006.
[2] C. Stachniss, D. Hahnel, and W. Burgard, "Exploration with active loop-closing for FastSLAM," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), vol. 2, 2004, pp. 1505-1510.
[3] A. Eliazar and R. Parr, "DP-SLAM: Fast, robust simultaneous localization and mapping without predetermined landmarks," in Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI), 2003, pp. 1135-1142.
[4] H. J. S. Feder, J. J. Leonard, and C. M. Smith, "Adaptive mobile robot navigation and mapping," International Journal of Robotics Research, vol. 18, no. 7, pp. 650-668, 1999.
[5] F. Bourgault, A. Makarenko, S. Williams, B. Grocholsky, and H. Durrant-Whyte, "Information based adaptive robotic exploration," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), vol. 1, 2002, pp. 540-545.
[6] A. A. Makarenko, S. Williams, F. Bourgault, and H. Durrant-Whyte, "An experiment in integrated exploration," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), vol. 1, 2002.
[7] S. Huang, N. Kwok, G. Dissanayake, Q. Ha, and G. Fang, "Multi-step look-ahead trajectory planning in SLAM: Possibility and necessity," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), 2005, pp. 1091-1096.
[8] C. Leung, S. Huang, and G. Dissanayake, "Active SLAM using model predictive control and attractor based exploration," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2006, pp. 5026-5031.
[9] C. Leung, S. Huang, and G. Dissanayake, "Active SLAM for structured environments," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), 2008, pp. 1898-1903.
[10] D. Fox, J. Ko, K. Konolige, B. Limketkai, D. Schulz, and B. Stewart, "Distributed multirobot exploration and mapping," Proceedings of the IEEE, vol. 94, no. 7, pp. 1325-1339, 2006.
[11] W. Burgard, M. Moors, D. Fox, R. Simmons, and S. Thrun, "Collaborative multi-robot exploration," in IEEE International Conference on Robotics and Automation (ICRA), 2000.
[12] B. Yamauchi, "Frontier-based exploration using multiple robots," in Second International Conference on Autonomous Agents, 1998.
[13] M. Bryson and S. Sukkarieh, "Co-operative localisation and mapping for multiple UAVs in unknown environments," in 2007 IEEE Aerospace Conference, 2007, pp. 1-12.
[14] M. Bryson and S. Sukkarieh, "Architectures for cooperative airborne simultaneous localisation and mapping," Journal of Intelligent and Robotic Systems, vol. 55, no. 4-5, pp. 267-297, 2009.
[15] B.-Y. Ko, J.-B. Song, and S. Lee, "Real-time building of a thinning-based topological map with metric features," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), vol. 2, 2004, pp. 1524-1529.
[16] A. Eliazar and R. Parr, "DP-SLAM 2.0," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), vol. 2, 2004, pp. 1314-1320.
[17] A. Eliazar, "DP-SLAM," Ph.D. dissertation, Department of Computer Science, Duke University, 2005.
[18] J. Minguez and L. Montano, "Nearness diagram (ND) navigation: Collision avoidance in troublesome scenarios," IEEE Transactions on Robotics and Automation, vol. 20, 2004.
[19] B. P. Gerkey, R. Vaughan, and A. Howard, "The Player/Stage project: Tools for multi-robot and distributed sensor systems," in 11th International Conference on Advanced Robotics, 2003, pp. 317-323.