Ball tracking with velocity based on Monte-Carlo localization

Book Title, Book Editors, IOS Press

Jun Inoue a,1, Akira Ishino b, and Ayumi Shinohara c
a Department of Informatics, Kyushu University
b Office for Information of University Evaluation, Kyushu University
c Graduate School of Information Science, Tohoku University

Abstract. This work presents methods for tracking a ball from noisy data taken by robots. In robotic soccer, obtaining a good estimate of the ball's position is critical, and accurate ball tracking enables robots to play well: passing efficiently to other robots, fine saves by the goalie, good team play, and so on. In this paper, we present a new ball-tracking technique based on Monte-Carlo localization. The key idea of our technique is to consider two kinds of samples, one for position and the other for velocity. This enables us to estimate the trajectory of the ball even when it goes out of view.

Keywords. Monte-Carlo Localization, Object Tracking, Object Recognition, RoboCup

1. Introduction

In RoboCup, estimating the position of the ball accurately is one of the most important tasks. Most robots in the Middle-size League have omnidirectional cameras, and in the Small-size League, ceiling cameras are used to recognize objects. In the Four-legged League, however, robots have only a low-resolution camera with a narrow field of vision on their noses, while the soccer field is rather large. These facts make it difficult for the robots to estimate the position of the ball correctly, especially when the ball is far away or very near. Moreover, since the shutter speed of the camera is not especially fast and its frame rate is not very high, tracking a moving object is a demanding task for these robots. Several techniques have been proposed to tackle these problems [11,9].
The Kalman filter [7] is one of the most popular techniques for estimating the position of objects in a noisy environment, and it has also been used to track a moving ball [2]. Methods using particle filters have been applied to object tracking [5]; in the Four-legged League in particular, a method using Rao-Blackwellised particle filters has had great success [9].

In this paper, we consider how to estimate the trajectory of a moving ball, as well as the position of a static ball, based on the Monte-Carlo localization method [1]. Monte-Carlo localization has usually been used for self-localization in the Four-legged League; we extend it to handle the localization of the ball. We have already mentioned a primitive method based on this idea in [6], where we had ignored the velocity of the ball.

1 Correspondence to: Jun Inoue, E-mail: j-inoue@i.kyushu-u.ac.jp.
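Although the paper does not describe its Kalman filter configuration, the baseline it compares against is typically a constant-velocity filter of the kind sketched below. This is an illustrative sketch only: the state layout, noise covariances, and the `ConstantVelocityKF` name are assumptions, not details from the paper.

```python
import numpy as np

class ConstantVelocityKF:
    """Constant-velocity Kalman filter; only the position is observed."""
    def __init__(self, dt=1.0, q=1e-3, r=25.0):
        self.x = np.zeros(4)                      # state [px, py, vx, vy]
        self.P = np.eye(4) * 1e3                  # large initial uncertainty
        self.F = np.eye(4)                        # constant-velocity transition
        self.F[0, 2] = self.F[1, 3] = dt
        self.H = np.zeros((2, 4))                 # observation picks out position
        self.H[0, 0] = self.H[1, 1] = 1.0
        self.Q = np.eye(4) * q                    # process noise
        self.R = np.eye(2) * r                    # measurement noise

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]

    def update(self, z):
        y = np.asarray(z) - self.H @ self.x       # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)  # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P

# Ball rolling at +10 per frame in x; then 5 frames with no observation.
kf = ConstantVelocityKF()
for t in range(20):
    kf.predict()
    kf.update([10.0 * t, 100.0])
for _ in range(5):
    p = kf.predict()                              # coast on prediction alone
```

Like the Monte-Carlo variants proposed in this paper, the filter keeps predicting through frames with no observation, which is how a trajectory can be extrapolated once the ball leaves the view.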

Figure 1. Cases of ball recognition. Dots on the edge show the vertices of an inscribed triangle; the diameter of the ball is calculated from these vertices.

We propose three extended methods to deal with the velocity of the ball and report some experimental results. The rest of the paper is organized as follows. In Section 2, we first present our algorithm for recognizing the ball in the images captured by the camera. Section 3 describes the details of Ball Monte-Carlo localization, and our experimental results and conclusions are given in Section 4.

2. Ball Recognition from the Image Data

As we mentioned in the previous section, the camera of a Four-legged League robot is mounted on its nose. The field of vision is narrow (56.9° horizontal and 45.2° vertical), and the resolution is also low (412×320 at the maximum). Moreover, stereo vision is usually impossible, although some attempts to emulate it by using plural robots have been reported [8,12]. Therefore, first of all, accurate recognition of the ball from a single image captured by the camera is indispensable for estimating the position of the ball accurately. In particular, the estimation of the distance from the robot to the ball is critical to establishing a stable method for estimating the position of the (possibly moving) ball. A naive algorithm that counts the orange pixels in the image and uses the count to estimate the distance does not work well, since the ball is often hidden by other robots, and the ball may be in the corner of the robot's view, as shown in Figure 1. Alternatively, the projection of the line of sight onto the ground can be used to calculate the distance to the object, but it depends on the pose of the robot and is affected by the robot's locomotion. In the Four-legged League, the robot's view is very shaky; therefore, we do not use this method either.
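As the caption of Figure 1 notes, the diameter can be recovered from three points on the ball's edge, because any three non-collinear points determine a unique circle. A minimal sketch of that circumcircle computation follows; the pinhole distance conversion at the end uses made-up constants (`FOCAL_PX`, `BALL_DIAMETER_MM`) purely for illustration and is not taken from the paper.

```python
def circle_from_three_points(a, b, c):
    """Return (center, diameter) of the unique circle through points a, b, c."""
    (x1, y1), (x2, y2), (x3, y3) = a, b, c
    # standard circumcenter formula from the perpendicular bisectors
    d = 2.0 * (x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2))
    if d == 0:
        raise ValueError("points are collinear")
    s1, s2, s3 = x1 * x1 + y1 * y1, x2 * x2 + y2 * y2, x3 * x3 + y3 * y3
    ux = (s1 * (y2 - y3) + s2 * (y3 - y1) + s3 * (y1 - y2)) / d
    uy = (s1 * (x3 - x2) + s2 * (x1 - x3) + s3 * (x2 - x1)) / d
    r = ((x1 - ux) ** 2 + (y1 - uy) ** 2) ** 0.5
    return (ux, uy), 2.0 * r

FOCAL_PX = 200.0          # hypothetical focal length in pixels
BALL_DIAMETER_MM = 80.0   # hypothetical real ball diameter in mm

def distance_mm(diameter_px):
    # pinhole model: apparent size is inversely proportional to distance
    return FOCAL_PX * BALL_DIAMETER_MM / diameter_px
```

The circle fit works even when only an arc of the ball is visible at the image border, which is exactly the situation where bounding-box and pixel-count estimates break down.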
In this section, we show our algorithm's ability to recognize the ball and estimate its position relative to the robot from a single image. This forms the basis for estimation from multiple images, as described in the following sections. In the image, the biggest orange-colored component satisfying the following heuristic conditions is recognized as the ball. (1) The edge length of the component must be more than 10 pixels, which excludes components that are too small. (2) The ratio of orange pixels to the area of the bounding box must exceed 40%. (3) If the component touches the edge of the image, the longer side of the bounding box must be over 20 pixels.

We use the diameter of the ball to estimate the distance to it. However, the ball is often partially hidden by other robots, and when the robot approaches the ball, only a part of the ball is visible at the corner of the camera view. Thus the size of the bounding box and the total number of pixels are not enough to estimate the distance accurately. Figure 1 shows two cases of diameter estimation. When the ball is completely inside the view (left image), we regard the longer side of the bounding box as the diameter of the ball. When the bounding box touches the edge of the image (right image), we use three points of the component, since any three points on the edge of a circle uniquely determine its center and diameter.

3. Ball Localization using the Monte-Carlo Method

In this section, we propose a technique that estimates the position and velocity of a moving ball based on Monte-Carlo localization. This technique was introduced for self-localization in [4] and utilized in [10]. The aim is to calculate an accurate position and velocity of the ball from a series of input images. In principle, we use the differences of positions between two frames; however, these data may contain errors. The Monte-Carlo method absorbs these errors so that the estimate of the velocity becomes more accurate. In this method, instead of describing the probability density function itself, the density is represented as a set of samples randomly drawn from it, and this representation is updated at every time step based on information from the sensors. We consider the following three variations of the method, in order to estimate both the position and velocity of the moving ball. (1) Each sample holds both a position and a velocity, but is updated according to the position information only. (2) Each sample holds both a position and a velocity, and is updated according to both the position and velocity information. (3) Two kinds of samples are considered, one for the position and the other for the velocity, which are updated separately. To evaluate the effectiveness of our ball tracking system, we experimented in both simulated and real-world environments. The details of the algorithms and experimental results are given below.

3.1. Ball Monte-Carlo localization updated by only position information

First, we introduce a simple extension to our ball localization technique in [6].
In this method, a sample is a triple a_i = (p_i, v_i, s_i), where p_i is a position of the ball, v_i is its velocity, and s_i (1 ≤ i ≤ n) is a score representing how well p_i and v_i fit the observations. Figure 2 shows the update procedure. When the robot recognizes a ball in the image, each score s_i is updated in steps 6–10, depending on the distance between p_i and the observed position p_o. When the ball is not found, we let p_o = ε. The constant τ > 0 defines the neighborhood of a sample. The constants maxup and maxdown control the conservativeness of the update: if the ratio maxup/maxdown is small, the response to rapid changes of the ball's position becomes fast, but the influence of errors becomes strong. The threshold value t controls the effect of misrecognitions. The function randomize resets p_i and v_i to random values and sets s_i = 0. The function random() returns a real number chosen uniformly between 0 and 1. The return values p_e and v_e are the weighted means of the samples whose scores are at least t.

Algorithm BallMonteCarloLocalizationUpdatedByOnlyPositionInformation
Input. A set of samples Q = {(p_i, v_i, s_i)} and an observed ball position p_o.
Output. Estimated ball position p_e and velocity v_e.
 1  for i := 1 to n do begin
 2    p_i := p_i + v_i;
 3    if p_i is out of the field then
 4      randomize(p_i, v_i, s_i)
 5  end;
 6  if p_o ≠ ε then
 7    for i := 1 to n do begin
 8      score := exp(-τ ||p_i - p_o||);
 9      s_i := max(s_i - maxdown, min(s_i + maxup, score))
10    end;
11  avgscore := (1/n) Σ_{i=1..n} s_i;
12  p_e := 0; v_e := 0; w := 0;
13  for i := 1 to n do begin
14    if s_i < avgscore · random() then
15      randomize(p_i, v_i, s_i)
16    else if s_i > t then begin
17      p_e := p_e + s_i · p_i;
18      v_e := v_e + s_i · v_i;
19      w := w + s_i
20    end;
21  end;
22  p_e := p_e / w; v_e := v_e / w;
23  output p_e, v_e

Figure 2. Procedure Ball Monte-Carlo localization updated by only position information.

First, we show simulation results for this method. The simulation is performed as follows. We choose 20 positions from (8, 13) to (-8, 13) at regular intervals as observed positions, which emulates a robot standing at the center of the field without any movement and watching the ball roll from left to right. Since in practice the distance estimate to the ball is less accurate than the direction estimate, we added strong noise to the y-direction and weak noise to the x-direction of the observed positions. These positions are shown as diamond-shaped points in Fig. 3 and Fig. 4. We examined the above method and a simple Kalman filter method to predict the positions of the ball from these noisy data. The lines in Fig. 3 and Fig. 4 show their trajectories over 40 frames (in the first 20 frames, the observed positions are given; in the last 20 frames, no observations are given). After some trials, we set the parameters as follows: the number of samples n = 500, maxup = 0.1, maxdown = 0.5, t = 0.15, and τ = 0.8. Compared to the Kalman filter, our method failed to predict the trajectory of the positions after the ball left the view, because the velocities of the samples did not converge well.

3.2. Ball Monte-Carlo localization updated by position and velocity information

In the previous method, each sample holds its own velocity, and in principle a good sample, holding a good position and velocity, would obtain a high score. In the previous experiment, however, the velocity did not converge well.
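For concreteness, the position-only update procedure of Figure 2 might be implemented along the following lines. The sample representation, field size, and all parameter values here are illustrative assumptions, not the paper's exact settings.

```python
import math
import random

# A sample is [p, v, s]: position, velocity, score. Values are illustrative.
N = 500
FIELD = 1000.0                               # assumed half-size of the field
TAU, MAXUP, MAXDOWN, T = 0.01, 0.1, 0.05, 0.15

def randomize(sample):
    # reset a sample to a random position/velocity with score 0
    sample[0] = [random.uniform(-FIELD, FIELD), random.uniform(-FIELD, FIELD)]
    sample[1] = [random.uniform(-20.0, 20.0), random.uniform(-20.0, 20.0)]
    sample[2] = 0.0

def update(samples, p_obs):
    # motion update: move each sample by its own velocity,
    # re-randomizing samples that leave the field
    for s in samples:
        s[0][0] += s[1][0]
        s[0][1] += s[1][1]
        if abs(s[0][0]) > FIELD or abs(s[0][1]) > FIELD:
            randomize(s)
    # score update: pull scores toward exp(-tau * distance), clamped per frame
    if p_obs is not None:
        for s in samples:
            d = math.hypot(s[0][0] - p_obs[0], s[0][1] - p_obs[1])
            s[2] = max(s[2] - MAXDOWN, min(s[2] + MAXUP, math.exp(-TAU * d)))
    # probabilistically re-randomize below-average samples, then return the
    # score-weighted mean of samples scoring above the threshold
    avg = sum(s[2] for s in samples) / len(samples)
    pe = [0.0, 0.0]; ve = [0.0, 0.0]; w = 0.0
    for s in samples:
        if s[2] < avg * random.random():
            randomize(s)
        elif s[2] > T:
            pe[0] += s[2] * s[0][0]; pe[1] += s[2] * s[0][1]
            ve[0] += s[2] * s[1][0]; ve[1] += s[2] * s[1][1]
            w += s[2]
    if w == 0.0:
        return None, None
    return [pe[0] / w, pe[1] / w], [ve[0] / w, ve[1] / w]
```

Feeding the update a stream of observed positions concentrates high-score samples around the ball, and the weighted mean of those samples gives the estimate; as reported above, however, the per-sample velocities converge poorly in this variant.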
Therefore, we tried to reflect the velocity explicitly in the score, intending to accelerate the convergence. In the second method, we add the procedure shown in Fig. 5 to the previous method, between Step 10 and Step 11 in Fig. 2. Since the observed velocity v_o is not given explicitly, it is substituted by (p_o - p_last)/dt, where p_last is the last observed position of the ball and dt is the difference between the two frame numbers.

Figure 3. The result of the simulation with the Kalman filter and MonteCarlo(pos only).
Figure 4. The result of the simulation with MonteCarlo(pos and vel) and MonteCarlo(two sets).

for i := 1 to n do begin
  score := exp(-τ_v ||v_i - v_o||);
  s_i := max(s_i - maxdown_v, min(s_i + maxup_v, score))
end

Figure 5. Additional procedure to update the score, reflecting the velocity explicitly, in the second method.

The line MonteCarlo(pos and vel) in Fig. 4 shows the result, with the parameters τ_v = 0.8, maxup_v = 1.0, maxdown_v = 0.1. Unfortunately, this method also failed to predict the trajectory, because the convergence rate was not improved.

3.3. Ball Monte-Carlo localization with two sets of samples

In the third method, we took another approach to reflect the goodness of the velocity in the score. The idea is to split the samples into two categories, one for the positions (p_i, s_i) and the other for the velocities (v_i, s_i). The score of p is updated in steps 8–11 of PositionUpdate, while that of v is updated independently in steps 2–5 of VelocityUpdate (Fig. 6). The line MonteCarlo(two sets) in Fig. 4 shows the results. The estimated positions fit the path of the rolling ball, and the predicted trajectory while the ball was out of sight is also as we expected. Its effectiveness is comparable to that of the Kalman filter method.

3.4. On the number of samples

We verified the relationship between the accuracy of the predictions and the number of samples. Fig. 7 shows some of these experiments for the proposed three methods.
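The two-set idea of Section 3.3 can be sketched as follows: velocity samples are scored against the observed velocity, and the resulting velocity estimate drives the motion model of the position samples. All parameter values are illustrative, and the re-randomization of low-score samples is omitted for brevity.

```python
import math

# Illustrative parameters; a sample is [value, score], where value is a
# position for the POS set and a velocity for the VEL set.
TAU_P, TAU_V = 0.01, 0.05
UP, DOWN = 0.1, 0.05
T_P, T_V = 0.15, 0.15

def score_toward(sample, target, tau):
    # move the score toward exp(-tau * distance), clamped by UP/DOWN per frame
    d = math.hypot(sample[0][0] - target[0], sample[0][1] - target[1])
    return max(sample[1] - DOWN, min(sample[1] + UP, math.exp(-tau * d)))

def weighted_mean(samples, threshold):
    # score-weighted mean of sample values scoring above the threshold
    acc = [0.0, 0.0]; w = 0.0
    for value, score in samples:
        if score > threshold:
            acc[0] += score * value[0]; acc[1] += score * value[1]
            w += score
    return [acc[0] / w, acc[1] / w] if w > 0.0 else None

def two_set_update(pos, vel, p_obs, v_obs):
    # VelocityUpdate: score velocity samples against the observed velocity
    if v_obs is not None:
        for s in vel:
            s[1] = score_toward(s, v_obs, TAU_V)
    v_e = weighted_mean(vel, T_V) or [0.0, 0.0]
    # PositionUpdate: move every position sample by the shared velocity
    # estimate, then score against the observed position
    for s in pos:
        s[0][0] += v_e[0]; s[0][1] += v_e[1]
    if p_obs is not None:
        for s in pos:
            s[1] = score_toward(s, p_obs, TAU_P)
    return weighted_mean(pos, T_P), v_e
```

Because each set searches a two-dimensional space instead of a joint four-dimensional one, both estimates can converge with far fewer samples, which matches the paper's explanation of why this variant performs best.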
The x-axis is the number of samples, ranging from 100 to 500, and the y-axis is the average error over five trials, measured as the difference between the real position and the predicted position. In the left figure, the situation was almost the same as in the previous experiments. In the right figure, we put a small obstacle in front of the robot, which hides the ball in the center of the view. Because of this obstacle, some of the observed positions were missing, and the average errors increased. In general, the error decreases as the number of samples increases, for all three methods. Among them, the third method, which separates the samples into two categories, performed the best. When the number of samples exceeds 150, the accuracy did not change in practice. Recall that the observed positions given as input themselves contained errors: the average error of the observed positions was 52.6 mm in the experiment without the obstacle, whereas the average error of the third method was less than 40 mm when we took at least 150 samples. This verifies that the proposed method succeeds in stabilizing the predicted ball positions.

Algorithm BallMonteCarloLocalizationWithTwoSets
Input. Two sets of samples POS = {(p_i, s_i)} and VEL = {(v_i, s_i)}, an observed ball position p_o, and a calculated ball velocity v_o.
Output. Estimated ball position p_e and velocity v_e.

PositionUpdate(POS, VEL, p_o, v_o)
 1  v_e := VelocityUpdate(VEL, p_o, v_o);
 2  for i := 1 to n do begin
 3    p_i := p_i + v_e;
 4    if p_i is out of the field then
 5      randomize(p_i, s_i)
 6  end;
 7  if p_o ≠ ε then
 8    for i := 1 to n do begin
 9      score := exp(-τ_p ||p_i - p_o||);
10      s_i := max(s_i - maxdown_p, min(s_i + maxup_p, score))
11    end;
12  p_e := 0; w := 0;
13  avgscore := (1/n) Σ_{i=1..n} s_i;
14  for i := 1 to n do begin
15    if s_i < avgscore · random() then
16      randomize(p_i, s_i)
17    else if s_i > t_p then begin
18      p_e := p_e + s_i · p_i;
19      w := w + s_i
20    end;
21  end;
22  p_e := p_e / w;
23  output p_e, v_e

VelocityUpdate(VEL, p_o, v_o)
 1  if p_o ≠ ε then
 2    for i := 1 to m do begin
 3      score := exp(-τ_v ||v_i - v_o||);
 4      s_i := max(s_i - maxdown_v, min(s_i + maxup_v, score))
 5    end;
 6  v_e := 0; w := 0;
 7  avgscore := (1/m) Σ_{i=1..m} s_i;
 8  for i := 1 to m do begin
 9    if s_i < avgscore · random() then
10      randomize(v_i, s_i)
11    else if s_i > t_v then begin
12      v_e := v_e + s_i · v_i;
13      w := w + s_i
14    end;
15  end;
16  v_e := v_e / w;
17  output v_e

Figure 6. Procedure Ball Monte-Carlo localization with two sets of samples.
Figure 7. Relationship between the number of samples and accuracy, in two situations: without an obstacle (left) and with an obstacle (right).

3.5. Real-world Experiments

We also performed many experiments in real-world environments, using a real robot on a soccer field for RoboCup competitions. We show some of them in Fig. 8, where we compare the proposed third method with the Kalman filter method. The purpose was to evaluate the robustness of the methods against an obstacle and against changes in the direction of the ball's movement. In the left figure, the ball rolled from right to left while a small obstacle in front of the robot hid it for some moments. In the right figure, the ball was kicked at (4, 9) and went left, then rebounded twice, at (-22, 14) and (-5, 5), and disappeared from the view to the right. The meshed parts of the figures illustrate the visible area of the robot. From these experiments, we verified that the proposed method is robust against frequent changes of direction, which are common in real play.

Figure 8. Real-world experiments. In the left situation, the ball rolled from right to left behind a small obstacle. In the right situation, the ball started at (4, 9), rebounded twice at (-22, 14) and (-5, 5), and disappeared from the view to the right.

4. Conclusions

We proposed applications of the Monte-Carlo localization technique to tracking a moving ball in a noisy environment, and showed some experimental results. We tried three variations, differing in how they treat the velocity of the ball. The first two methods, in which each sample holds its own velocity value together with its position value, did not work as well as we expected. The third method, which treats positions and velocities separately, worked the best. We think the reason is as follows: if we treat positions and velocities together, the search space has four dimensions, whereas if we treat them separately, the search space is divided into two spaces of two dimensions each, and the latter converge to the correct values more easily. The convergence rate is quite critical in RoboCup, because the situation changes very quickly in real games.

We compared the proposed method with the widely used Kalman filter method. The experiments were not very comprehensive: for example, the observing robot did not move. Nevertheless, we verified that the proposed method is attractive, especially when the ball rebounds frequently, as in real situations. In the RoboCup domain, various techniques to improve the ball-tracking ability have been proposed; for example, cooperative estimation with other robots is proposed in [3,8]. We plan to integrate these ideas into our method and to perform more comprehensive experiments in the near future.

References

[1] F. Dellaert, D. Fox, W. Burgard, and S. Thrun. Monte Carlo Localization for Mobile Robots. In Proc. of the IEEE International Conference on Robotics & Automation, 1998.
[2] M. Dietl, J.-S. Gutmann, and B. Nebel. Cooperative sensing in dynamic environments. In Proc. of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2001), 2001.
[3] A. Ferrein, G. Lakemeyer, and L. Hermanns. Comparing Sensor Fusion Techniques for Ball Position Estimation. In Proc. of the RoboCup 2005 Symposium, 2005. To appear.
[4] D. Fox, W. Burgard, F. Dellaert, and S. Thrun. Monte Carlo Localization: Efficient Position Estimation for Mobile Robots. In Proc. of the National Conference on Artificial Intelligence, 1999.
[5] C. Hue, J.-P. Le Cadre, and P. Perez. Tracking multiple objects with particle filtering. Technical Report 1361, IRISA, 2000.
[6] J. Inoue, H. Kobayashi, A. Ishino, and A. Shinohara. Jolly Pochie 2005 in the Four Legged Robot League, 2005.
[7] R. Kalman. A New Approach to Linear Filtering and Prediction Problems. Journal of Basic Engineering, pages 35–45, 1960.
[8] A. Karol and M.-A. Williams. Distributed Sensor Fusion for Object Tracking. In Proc. of the RoboCup 2005 Symposium, 2005. To appear.
[9] C. Kwok and D. Fox. Map-based Multiple Model Tracking of a Moving Object. In Proc. of the RoboCup 2004 Symposium, 2004.
[10] T. Röfer, T. Laue, and D. Thomas. Particle-Filter-Based Self-Localization Using Landmarks and Directed Lines. In RoboCup 2005: Robot Soccer World Cup IX, Lecture Notes in Artificial Intelligence, Springer, 2005. To appear.
[11] D. Schulz, W. Burgard, D. Fox, and A.B. Cremers. Tracking Multiple Moving Objects with a Mobile Robot. In Proc. of the IEEE Computer Society Conference on Robotics and Automation (ICRA), 2001.
[12] A. Stroupe, M.-C. Martin, and T. Balch. Distributed Sensor Fusion for Object Position Estimation by Multi-Robot Systems. In Proc. of the IEEE International Conference on Robotics and Automation, 2001.