Tracking Multiple Moving Objects with a Mobile Robot
Dirk Schulz (1), Wolfram Burgard (2), Dieter Fox (3), Armin B. Cremers (1)
(1) University of Bonn, Computer Science Department, Germany
(2) University of Freiburg, Department of Computer Science, Germany
(3) University of Washington, Dept. of Computer Science & Engineering, Seattle, WA, USA

Abstract

One of the goals in the field of mobile robotics is the development of mobile platforms which operate in populated environments. For many tasks it is therefore highly desirable that a robot can determine the positions of the humans in its surroundings. In this paper we introduce sample-based joint probabilistic data association filters to track multiple moving objects with a mobile robot. Our technique uses the robot's sensors and a motion model of the objects being tracked. A Bayesian filtering technique is applied to adapt the tracking process to the number of objects in the sensor range of the robot. Our approach to tracking multiple moving objects has been implemented and tested on a real robot. We present experiments illustrating that our approach is able to robustly keep track of multiple persons even in situations in which people are temporarily occluded. The experiments furthermore show that the approach outperforms other techniques developed so far.

1. Introduction

Estimating the positions of moving objects is an important problem in mobile robotics. Knowledge about the positions of moving objects can be used to improve the behavior of the system, especially if the robot is deployed in populated environments. For example, this ability allows a robot to adapt its velocity to the speed of people in the environment, and it enables the robot to improve its collision avoidance behavior in situations in which its trajectory crosses the path of a human. In this paper we present a method for tracking multiple moving objects with a mobile robot. The technique uses the robot's sensors and a motion model of the objects being tracked in order to estimate their positions and velocities.
We introduce sample-based Joint Probabilistic Data Association Filters (SJPDAFs). JPDAFs [1, 3] are a popular approach to tracking multiple moving objects. They compute a Bayesian estimate of the correspondence between features detected in the sensor data and the different objects to be tracked. Like virtually all existing approaches to tracking multiple targets, they apply Kalman filters to estimate the states of the individual objects. While Kalman filters have been shown to provide highly efficient state estimates, they are restricted to Gaussian distributions over the state to be estimated. More recently, particle filters have been introduced to estimate non-Gaussian, non-linear dynamic processes [7, 15]. They have been applied with great success to different state estimation problems, including visual tracking [2, 9], mobile robot localization [5], and dynamic probabilistic networks [10]. The key idea of particle filters is to represent the state by sets of samples (or particles). The major advantage of this technique is its ability to represent multi-modal state densities, a property which has been shown to increase the robustness of the underlying state estimation process [8]. However, most existing applications estimate the state of a single object only. One way to apply particle filters to the problem of tracking multiple objects is to estimate the combined state space, as proposed in [12]. Unfortunately, the complexity of this approach grows exponentially in the number of objects to be tracked. Our approach combines the advantages of particle filters with the efficiency of existing approaches to multi-target tracking: it uses particle filters to track the states of the objects and applies JPDAFs to assign the measurements to the individual objects.
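To make the particle-filter idea concrete, the following is a minimal bootstrap filter for a single object moving in one dimension. The constant-velocity motion model, the Gaussian measurement model, and all noise parameters are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def predict(particles, dt=1.0, vel_noise=0.1):
    # Sample from the motion model p(x^k | x^{k-1}): constant velocity + noise.
    pos, vel = particles[:, 0], particles[:, 1]
    vel = vel + rng.normal(0.0, vel_noise, size=vel.shape)
    pos = pos + vel * dt
    return np.column_stack([pos, vel])

def correct(particles, z, meas_std=0.5):
    # Weight by the measurement likelihood p(z | x), then bootstrap-resample.
    w = np.exp(-0.5 * ((particles[:, 0] - z) / meas_std) ** 2)
    w /= w.sum()
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx]

# 1000 particles, initial position uncertain, velocity roughly 1 unit/step.
particles = np.column_stack([rng.normal(0, 1, 1000), np.full(1000, 1.0)])
for z in [1.0, 2.1, 2.9, 4.2]:
    particles = predict(particles)
    particles = correct(particles, z)
print(bool(3.0 < particles[:, 0].mean() < 5.5))  # posterior tracks the object
```

Because the posterior is a set of samples rather than a single Gaussian, the same machinery can carry multi-modal densities, which is the property exploited later in Section 5.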
Instead of relying on Gaussian distributions extracted from the sample sets, as proposed in [6], our approach applies the idea of JPDAFs directly to the sample sets of the individual particle filters. To adapt the SJPDAF to the varying number of objects in the robot's sensor range, our approach maintains a probability distribution over the number of objects being tracked. The robustness of the overall approach is increased by applying a motion model of the objects being tracked. Furthermore, it uses different features extracted from consecutive sensor measurements to explicitly deal with occlusions. This way, our robot can reliably keep track of multiple persons even if they temporarily occlude each other. This paper is organized as follows. After introducing a
general framework for JPDAFs, we introduce SJPDAFs in Section 2. Section 3 explains how we add and remove particle filters based on an estimate of the current number of objects. In Section 4, we describe how to extract features from the proximity information provided by a robot's laser range finders; furthermore, we show how to deal with occlusions. Section 5 describes several experiments carried out on a real robot and in simulations, illustrating the capabilities and the robustness of our approach.

2. Sample-based Joint Probabilistic Data Association Filters (SJPDAFs)

To keep track of multiple moving objects, one generally has to estimate the joint probability distribution of the states of all objects. In practice, however, this is intractable already for a small number of objects, since the size of the state space grows exponentially in the number of objects. To overcome this problem, a common approach is to track the different objects independently, using factorial representations for the individual states. A general problem in this context is to determine which measurement is caused by which object. In this paper we apply Joint Probabilistic Data Association Filters (JPDAFs) [3] for this purpose. In what follows we first describe a general version of JPDAFs and then a sample-based implementation.

2.1 Joint Probabilistic Data Association Filters

Consider the problem of tracking $T$ objects. $X^k = \{x_1^k, \dots, x_T^k\}$ denotes the state of these objects at time $k$. Note that each $x_i^k$ is a random variable ranging over the state space of a single object. Furthermore, let $Z(k) = \{z_1(k), \dots, z_{m_k}(k)\}$ denote a measurement at time $k$, where $z_j(k)$ is one feature of such a measurement. $Z^k$ is the sequence of all measurements up to time $k$. The key question when tracking multiple objects is how to assign the observed features to the individual objects. In the JPDAF framework, a joint association event $\theta$ is a set of pairs $(j, i) \in \{0, \dots, m_k\} \times \{1, \dots, T\}$.
Each $\theta$ uniquely determines which feature is assigned to which object. Note that in the JPDAF framework, the feature $z_0(k)$ is used to model situations in which an object has not been detected, i.e. no feature has been found for object $i$. Let $\Theta_{ji}$ denote the set of all valid joint association events which assign feature $j$ to object $i$. At time $k$, the JPDAF computes the posterior probability that feature $j$ is caused by object $i$ according to

$$\beta_{ji} = \sum_{\theta \in \Theta_{ji}} P(\theta \mid Z^k). \quad (1)$$

Using Bayes' rule and the assumption that the estimation problem is Markovian, we can compute the probability $P(\theta \mid Z^k)$ of an individual joint association event as

$$P(\theta \mid Z^k) = P(\theta \mid Z(k), Z^{k-1}) \quad (2)$$
$$\stackrel{\text{Markov}}{=} P(\theta \mid Z(k), X^k) \quad (3)$$
$$\stackrel{\text{Bayes}}{=} \alpha \, p(Z(k) \mid \theta, X^k) \, P(\theta \mid X^k). \quad (4)$$

Here $\alpha$ is a normalizer ensuring that $P(\theta \mid Z^k)$ sums up to one over all $\theta$. The term $P(\theta \mid X^k)$ corresponds to the probability of the assignment $\theta$ given the current states of the objects. Throughout this paper we make the assumption that all assignments have the same likelihood, so that this term can be approximated by a constant. Throughout our experiments we did not observe evidence that this approximation is too crude; however, we refer to [4] for a better approximation of this quantity. The term $p(Z(k) \mid \theta, X^k)$ denotes the likelihood of making an observation given the states of the objects and a specific assignment between the observed features and the objects. In order to determine this quantity, we have to consider the case that a feature is not caused by any of the objects. We call these features false alarms. Let $\gamma$ denote the probability that an observed feature is a false alarm. The number of false alarms contained in an association event $\theta$ is $(m_k - |\theta|)$, so $\gamma^{(m_k - |\theta|)}$ is the probability assigned to all false alarms in $Z(k)$ given $\theta$. All other features are uniquely assigned to an object.
Making the assumption that the features are detected independently of each other, we get

$$p(Z(k) \mid \theta, X^k) = \gamma^{(m_k - |\theta|)} \prod_{(j,i) \in \theta} p(z_j(k) \mid x_i^k). \quad (5)$$

By inserting Eq. (5) into Eq. (4), using the assumption that $P(\theta \mid X^k)$ is constant, and inserting the result into Eq. (1), we obtain

$$\beta_{ji} = \alpha \sum_{\theta \in \Theta_{ji}} \gamma^{(m_k - |\theta|)} \prod_{(j,i) \in \theta} p(z_j(k) \mid x_i^k). \quad (6)$$

It remains to describe how the beliefs $p(x_i^k)$ about the states of the individual objects are updated. In the standard JPDAF it is generally assumed that the underlying densities are Gaussians, and Kalman filtering is applied to update these densities. In the more general framework of Bayesian filtering, the update equation for the prediction of the new state of an object is

$$p(x_i^k \mid Z^{k-1}) = \int p(x_i^k \mid x_i^{k-1}, t) \, p(x_i^{k-1} \mid Z^{k-1}) \, dx_i^{k-1}. \quad (7)$$

Whenever new sensory input arrives, the state is corrected according to

$$p(x_i^k \mid Z^k) = \alpha \, p(Z(k) \mid x_i^k) \, p(x_i^k \mid Z^{k-1}), \quad (8)$$

where, again, $\alpha$ is a normalization factor. Since we do not know which of the features in $Z(k)$ is caused by object $i$, we integrate the single features according to the assignment probabilities $\beta_{ji}$:

$$p(x_i^k \mid Z^k) = \alpha \sum_{j=0}^{m_k} \beta_{ji} \, p(z_j(k) \mid x_i^k) \, p(x_i^k \mid Z^{k-1}). \quad (9)$$
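Equations (1)-(6) can be illustrated by brute-force enumeration of the joint association events. This enumeration is exponential in the number of objects $T$, so the following is only a didactic sketch for small object counts; the false-alarm probability $\gamma$ and the per-feature likelihoods are assumed given.

```python
from itertools import product
import numpy as np

def assoc_probs(likelihood, gamma=0.1):
    """likelihood[j-1][i] = p(z_j(k) | x_i^k) for features j = 1..m, objects i.
    Returns beta[j][i] for j = 0..m (row 0 = object not detected), Eq. (6)."""
    m, T = likelihood.shape
    beta = np.zeros((m + 1, T))
    # A joint event assigns each object a feature index (0 = not detected);
    # real features (>= 1) may be claimed by at most one object.
    for event in product(range(m + 1), repeat=T):
        used = [j for j in event if j > 0]
        if len(used) != len(set(used)):
            continue  # a feature cannot be shared by two objects
        p = gamma ** (m - len(used))  # unassigned features are false alarms
        for i, j in enumerate(event):
            if j > 0:
                p *= likelihood[j - 1, i]
        for i, j in enumerate(event):
            beta[j, i] += p
    # Normalizing over j per object implements the normalizer alpha of Eq. (4).
    return beta / beta.sum(axis=0)

# Two objects, two features: object 0 matches feature 1, object 1 matches feature 2.
like = np.array([[0.9, 0.1],
                 [0.1, 0.9]])
beta = assoc_probs(like)
print(bool(beta[1, 0] > beta[2, 0]))  # feature 1 most likely caused by object 0
```

Practical JPDAF implementations prune or gate this enumeration; the sketch only shows the semantics of the joint events.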
Thus, all we need to know are the models $p(x_i^k \mid x_i^{k-1}, t)$ and $p(z_j(k) \mid x_i^k)$. Both depend on the properties of the objects being tracked and the sensors used. One additional important aspect is how the distributions $p(x_i^k)$ over the state spaces of the individual objects are represented.

2.2 The Sample-based Approach

In most applications of target tracking, Kalman filters, and hence Gaussian distributions, are used to track the individual objects. In our approach, we use sample-based representations of the individual beliefs about the states of the objects, which allows us to represent arbitrary densities. The key idea underlying all particle filters is to represent the density $p(x_i^k \mid Z^k)$ by a set $S_i^k$ of $N$ weighted, random samples, or particles, $s_{i,n}^k$ ($n = 1, \dots, N$). A sample set constitutes a discrete approximation of a probability distribution. Each sample is a tuple $(x_{i,n}^k, w_{i,n}^k)$ consisting of a state $x_{i,n}^k$ and an importance factor $w_{i,n}^k$. The prediction step of Bayesian filtering is realized by drawing samples from the set computed in the previous iteration and updating their states according to the prediction model $p(x_i^k \mid x_i^{k-1}, t)$. In the correction step, a measurement $Z(k)$ is integrated into the samples obtained in the prediction step. According to Eq. (9) we have to consider the assignment probabilities $\beta_{ji}$ in this step. To determine $\beta_{ji}$, however, we have to compute $p(z_j(k) \mid x_i^k)$. In our particle filter-based version this quantity is obtained by averaging over all samples:

$$p(z_j(k) \mid x_i^k) = \frac{1}{N} \sum_{n=1}^{N} p(z_j(k) \mid x_{i,n}^k). \quad (10)$$

Given the assignment probabilities, we can now compute the weights of the samples:

$$w_{i,n}^k = \alpha \sum_{j=0}^{m_k} \beta_{ji} \, p(z_j(k) \mid x_{i,n}^k), \quad (11)$$

where $\alpha$ is a normalizer ensuring that the weights sum up to one over all samples. Finally, we obtain $N$ new samples from the current samples by bootstrap resampling. For this purpose we select every sample $x_{i,n}^k$ with probability $w_{i,n}^k$.
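The sample-based correction step of Eqs. (10) and (11) for a single object can be sketched as follows; the assignment probabilities $\beta_{ji}$ are assumed to be already computed, and the function and variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def sjpdaf_correct(samples, feature_like, beta_i, p_missed=0.05):
    """One SJPDAF correction step for a single object's sample set.
    samples: (N, d) array of states x_{i,n};
    feature_like: list of length-N arrays, feature_like[j-1][n] = p(z_j(k) | x_{i,n});
    beta_i: assignment probabilities beta_{ji} for j = 0..m (index 0 = missed).
    Note: averaging feature_like[j-1] over n gives p(z_j(k) | x_i^k), Eq. (10)."""
    N = len(samples)
    # Eq. (11): mix per-feature likelihoods with the assignment probabilities;
    # p_missed is an assumed constant likelihood for the "not detected" case.
    w = beta_i[0] * p_missed * np.ones(N)
    for j, like in enumerate(feature_like):
        w += beta_i[j + 1] * like
    w /= w.sum()  # the normalizer alpha
    # Bootstrap resampling: draw N new samples with probabilities w.
    return samples[rng.choice(N, size=N, p=w)]

# Usage: samples split between two locations, one feature observed near x = 0.
samples = np.array([[0.0]] * 500 + [[10.0]] * 500)
like = np.exp(-0.5 * samples[:, 0] ** 2)  # Gaussian feature likelihood at x = 0
new = sjpdaf_correct(samples, [like], beta_i=np.array([0.1, 0.9]))
print(bool((new[:, 0] == 0.0).mean() > 0.9))  # mass concentrates on the feature
```

Keeping the $j = 0$ term in the mixture is what later lets an occluded object retain probability mass instead of being starved by resampling.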
3. Estimating the Number of Objects

The Joint Probabilistic Data Association Filter assumes that the number of objects to be tracked is known. In practical applications, however, the number of objects often varies over time. For example, when a mobile robot is moving in a populated environment, the number of people in the perceptual field of the robot changes frequently. We deal with this problem by additionally maintaining a density $P(N^k \mid M^k)$ over the number of objects $N^k$ at time $k$, where $M^k = m_0, \dots, m_k$ is the sequence of the numbers of features observed so far. Using Bayes' rule, we have

$$P(N^k \mid M^k) = \alpha \, P(m_k \mid N^k, M^{k-1}) \, P(N^k \mid M^{k-1}).$$

[Figure 1. The sensor model $P(m_k \mid N_k)$, specifying the probability of observing $m_k$ features if $N_k$ objects are in the perceptual range of the sensor.]

Under the assumption that, given the current number of objects, the number of observed features is independent of the number of previously observed features, we obtain

$$P(N^k \mid M^k) = \alpha \, P(m_k \mid N^k) \, P(N^k \mid M^{k-1}).$$

Using the law of total probability, and given the assumption that $N^k$ is independent of $M^{k-1}$ given $N^{k-1}$, we have

$$P(N^k \mid M^k) = \alpha \, P(m_k \mid N^k) \sum_n P(N^k \mid N^{k-1} = n) \, P(N^{k-1} = n \mid M^{k-1}). \quad (12)$$

As a result, we obtain a recursive update procedure for $P(N^k \mid M^k)$, where all we need to know are the quantities $P(m_k \mid N^k)$ and $P(N^k \mid N^{k-1})$. The term $P(m_k \mid N^k)$ represents the probability of observing $m_k$ features if $N^k$ objects are in the perceptual field of the sensor. In our current implementation, which uses laser range sensors, we learned this quantity from a series of 5100 range scans, recorded during a simulation experiment in which up to 10 objects were placed at random locations within the robot's surroundings. The resulting density is depicted in Fig. 1. Please note that the probability of an occlusion increases with the number of objects.
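The recursive update of Eq. (12) is a small discrete Bayes filter over the object count. A sketch follows, with an assumed (not learned) sensor-model table and a simple birth/death transition standing in for the paper's Poisson arrival/departure model:

```python
import numpy as np

def update_count_belief(belief, m_obs, p_obs, p_change=0.1):
    """One step of Eq. (12). belief[n] = P(N^{k-1} = n | M^{k-1});
    p_obs[m][n] = P(m_k = m | N^k = n). Arrivals and departures are
    approximated by moving probability mass to n +/- 1 with p_change each,
    a simplification of the paper's Poisson processes."""
    nmax = len(belief) - 1
    # Prediction: sum_n P(N^k | N^{k-1} = n) P(N^{k-1} = n | M^{k-1}).
    pred = np.zeros_like(belief)
    for n, b in enumerate(belief):
        pred[n] += (1 - 2 * p_change) * b
        pred[min(n + 1, nmax)] += p_change * b
        pred[max(n - 1, 0)] += p_change * b
    # Correction with the sensor model P(m_k | N^k), then normalize (alpha).
    post = p_obs[m_obs] * pred
    return post / post.sum()

# Assumed sensor model for N in {0, 1, 2}: occlusions make it likely that
# fewer features than objects are observed (rows: m_k, columns: N^k).
p_obs = np.array([[0.9, 0.3, 0.1],
                  [0.1, 0.6, 0.4],
                  [0.0, 0.1, 0.5]])
belief = np.array([1.0, 0.0, 0.0])  # initially certain that no object is present
for m in [1, 1, 1]:                 # one feature observed three times in a row
    belief = update_count_belief(belief, m, p_obs)
print(int(belief.argmax()))  # maximum likelihood estimate -> 1
```

As in the paper, the maximum of this belief is what drives the addition or removal of particle filters.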
Accordingly, the more objects are in the perceptual field of the robot, the more likely it becomes that a feature is missing. Therefore, the resulting distribution shown in Fig. 1 does not correspond to a straight line lying on the diagonal. The term $P(N^k \mid N^{k-1})$ specifies how the number of objects changes over time. In our system we model arrivals of new objects and departures of known objects as Poisson processes. To adapt the number of objects in the SJPDAF, our system uses the maximum likelihood estimate of $P(N^k \mid M^k)$. If the number of particle filters in the SJPDAF is smaller than the new estimate for $N^k$, new filters need to be initialized. Since we do not know which of the features originate from new objects, we initialize the new filters using a uniform distribution. We then rely on the SJPDAF to disambiguate this distribution during subsequent filter updates. In the alternative case, in which the new estimate for $N^k$ is smaller than the current number of particle filters, some of
the filters need to be removed. This, however, requires that we know which filter no longer tracks an object. To estimate the tracking performance of a sample set, we accumulate a discounted average $\hat{W}_i^k$ of the sum $W_i^k$ of the sample weights before the normalization step:

$$\hat{W}_i^k = (1 - \delta) \hat{W}_i^{k-1} + \delta W_i^k. \quad (13)$$

Since the sum of sample weights decreases significantly whenever a filter is not tracking any feature contained in the measurement, we use this value as an indicator that the corresponding object has left the perceptual field of the robot. Whenever we have to remove a filter, we choose the one with the smallest discounted average $\hat{W}_i^k$.

[Figure 2. Typical laser range finder scan. Two of the local minima are caused by people walking by the robot (left image). Feature grid extracted from the scan, representing the corresponding probability densities $P(\mathit{legs}_{x,y}^k)$ (center). Occlusion grid representing $P(\mathit{occluded}_{x,y}^k)$ (right).]

[Figure 3. From left to right, top-down: the current occupancy map $P(\mathit{occ}_{x,y} \mid Z(k))$, the previous occupancy map $P(\mathit{occ}_{x,y} \mid Z(k-1))$, the resulting difference map $P(\mathit{new}_{x,y}^k)$, and the fusion of the difference map with the feature maps for the scan depicted in Figure 2.]

4. Application to Laser-based People Tracking with a Mobile Robot

In this section we describe the application of the SJPDAF to the task of tracking people using the range data obtained with a mobile robot. Our mobile platform is equipped with two laser range scanners mounted at a height of 40 cm. Each scan of these two sensors covers the whole surroundings of the robot at an angular resolution of 1 degree. To represent the state of a person we use a quadruple $\langle x, y, \phi, v \rangle$, where $x$ and $y$ represent the position relative to the robot, $\phi$ is the orientation, and $v$ is the walking speed of the person. To robustly identify and keep track of persons, our system uses two different patterns in range scans which are typically caused by humans walking through a building.
Consider as an example the situation shown in the left image of Figure 2. In this situation two people pass the robot in a corridor, causing two local minima in the range profile of the laser range scan. The third minimum is caused by a trash bin placed in the corridor. Given these local minima, we compute a set of two-dimensional position probability grids containing in each cell the probability $P(\mathit{legs}_{x,y}^k)$ that a person's legs are at position $\langle x, y \rangle$ relative to the robot. We generate one such map for each feature in the range scan. An overlay of all three maps representing the candidates found in our example is shown in the center of Figure 2. Unfortunately, there are other objects in typical office environments which produce patterns similar to people. To distinguish these static objects from moving objects, our system additionally considers the changes in consecutive scans. This is achieved by computing local occupancy grid maps [14] for each pair of consecutive scans. Based on these occupancy grids we compute the probability $P(\mathit{new}_{x,y}^k)$ that something moved to location $\langle x, y \rangle$:

$$P(\mathit{new}_{x,y}^k) = P(\mathit{occ}_{x,y} \mid Z(k)) \, (1 - P(\mathit{occ}_{x,y} \mid Z(k-1))).$$

Because the local maps $P(\mathit{occ}_{x,y} \mid Z(k))$ are built while the robot is moving, we first align the maps using the scan matching technique presented in [11]. Figure 3 shows two subsequent and aligned local grids and the resulting grid representing $P(\mathit{new}_{x,y}^k)$. In order not to lose track of a moving object when it stops, our system takes an inductive approach. A local feature is assumed to be caused by a moving object either if it is supported by $P(\mathit{new}_{x,y}^k)$ or if a moving object was already detected at the same position in the previous scan:

$$P(\mathit{support}_{x,y}^k) = 1 - (1 - P(\mathit{new}_{x,y}^k)) \, (1 - P(\mathit{object}_{x,y}^{k-1})).$$

Finally, $P(\mathit{object}_{x,y}^k)$ is a probability grid specifying the probability that a moving object is currently located at position $\langle x, y \rangle$. This grid is computed by combining $P(\mathit{support}_{x,y}^k)$ with the position probability grid of the feature.
Under the assumption that the probabilities are independent, we have

$$P(\mathit{object}_{x,y}^k) = P(\mathit{legs}_{x,y}^k) \, P(\mathit{support}_{x,y}^k).$$

In the correction steps of the particle filters, we now use the grid $P(\mathit{object}_{x,y}^k)$ of each feature $z_j(k)$, $1 \le j \le m_k$, to compute the sample weights:

$$p(z_j(k) \mid x_{i,n}^k) = P(\mathit{object}_{x_{i,n}}^k). \quad (14)$$
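The grid computations above (difference map, support, and the fused object grid) amount to a few element-wise array operations. A sketch with tiny illustrative 2x2 grids, assuming the occupancy maps are already aligned:

```python
import numpy as np

def object_grid(occ_now, occ_prev, legs, object_prev):
    """Fuse the grids described above (all arrays of per-cell probabilities).
    P(new) from two aligned occupancy maps, P(support) keeps objects that
    stopped moving, P(object) combines support with the leg-feature grid."""
    p_new = occ_now * (1.0 - occ_prev)
    p_support = 1.0 - (1.0 - p_new) * (1.0 - object_prev)
    return legs * p_support  # P(object^k_{x,y}), assuming independence

occ_prev = np.array([[0.9, 0.1], [0.1, 0.1]])
occ_now  = np.array([[0.1, 0.9], [0.1, 0.1]])  # occupancy moved one cell right
legs     = np.array([[0.2, 0.8], [0.0, 0.0]])  # leg-like feature near the new cell
obj = object_grid(occ_now, occ_prev, legs, object_prev=np.zeros((2, 2)))
print(int(obj.argmax()))  # -> 1: the cell the object moved into dominates
```

In the real system these grids are local maps in the robot's frame, and Eq. (14) simply reads the fused grid at each sample's position.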
Additionally, we have to compute the likelihood $p(z_0(k) \mid x_{i,n}(k))$ that the person indicated by a sample $x_{i,n}(k)$ has not been detected in the current scan. This can be due to a failure of the feature extraction or due to occlusions. The first case is modeled by a small constant value in all cells of the position and difference grids. To deal with possible occlusions, we compute a so-called occlusion map containing, for each position in the surroundings of the robot, the probability $P(\mathit{occluded}_{x,y}^k)$ that the corresponding position is not visible given the features extracted in the first step. The resulting occlusion map for the scan shown on the left of Figure 2 is depicted in the right image of Figure 2. To determine $p(z_0(k) \mid x_{i,n}(k))$, we use this occlusion map and a fixed feature detection failure probability $P(\neg\mathit{Detect})$:

$$p(z_0(k) \mid x_{i,n}(k)) = P(\mathit{occluded}_{x_{i,n}}^k \vee \neg\mathit{Detect}). \quad (15)$$

To predict the samples estimating the motions of persons, we apply a probabilistic motion model. We assume that a person changes its walking direction and walking speed according to Gaussian distributions; the translational velocity is assumed to lie between 0 and 150 cm/s. Additionally, we use the static objects, also extracted from consecutive scans, to prevent samples from ending up inside of objects or moving through objects.

5. Experimental Results

Our approach has been implemented and tested extensively with our mobile robot as well as in simulation runs. Throughout the experiments the robot was moving with speeds of up to 40 cm/s. The current implementation is able to integrate laser range scans at a rate of 5 Hz.

5.1 Performance of SJPDAF-Tracking

In the first experiment our robot was moving in the corridor of the department building. Simultaneously, up to six persons were walking in the corridor, frequently entering the robot's perceptual range of 8 m. Figure 4 shows a short sequence of pairs of images taken during the experiment.
Each pair consists of an image recorded with a ceiling-mounted camera (left image) as well as a 3D visualization of the current estimate of our system at the same point in time (right image). The time delay between consecutive images is 1.25 seconds, and the robot moves with a speed of 40 cm/s. As can be seen from the figure, the robot is able to accurately estimate the positions of the persons. A typical situation is shown in Figure 5. Here the robot is standing and three people are walking along the corridor. The upper image shows the ground truth extracted from the data recorded during the experiment. The lower image depicts the trajectories estimated by our algorithm.

[Figure 4. Tracking sequence showing a real photo and a 3D visualization of the state estimate at the same point in time. The time delay between consecutive images is 1.25 seconds.]

Please note that there is a certain delay in the initialization of the sample set for the person marked a. This delay is due to the estimate of the number of objects and the occlusion by person b. To estimate the accuracy of our approach, we manually determined the positions of the persons for each individual scan and measured the distance to the estimated positions. It turned out that the average displacement between the ground truth position and the estimated position was 19 cm, and the maximum displacement was 37 cm. In order to evaluate the accuracy of the estimation of the number of objects, we manually determined the correct number of people in each scan and compared it to the estimate of the system. Note that the estimation process is a filter, which smoothes out the effect of random feature detection failures and false alarms, but also introduces some delay in the detection of changes in the number of objects. We found an average detection delay of 1 s. Neglecting this delay effect, the estimator correctly determined the number of objects in 90% of all scans.
To analyze the advantage of the explicit occlusion handling, we systematically analyzed a data set recorded with
our robot while it was moving at a speed of 40 cm/s. In this experiment two persons were walking in the corridor of our department with speeds of 60 cm/s and 80 cm/s, respectively. Figure 6 shows a typical situation which occurred during this experiment, in which one person temporarily occludes the other.

[Figure 5. Typical situation in which the robot had to track three persons. Whereas the ground truth is shown in the upper image, the estimated trajectories are depicted in the lower image.]

[Figure 6. Estimated trajectories of two persons in a situation in which one person temporarily occludes the other. The arrow indicates the point in time when the occlusion occurred.]

From the data we created 40 different sequences which were used for the quantitative analysis. For all these sequences we evaluated the performance of our tracking algorithm with and without occlusion handling. We regarded it as a tracking failure if one of the two sample sets was removed or if one sample set tracked the wrong person after the occlusion took place. Without occlusion handling, the system failed in seven cases (17.5%). With occlusion handling, the robot was able to reliably keep track of both persons and failed in only one of the 40 cases (2.5%). These experiments demonstrate that our system is able to reliably and accurately keep track of several moving objects, even in cases in which occlusions occur.

5.2 Comparison to Standard JPDAFs

As pointed out above, the main advantage of particle filters compared to Kalman filters is that particle filters can in principle represent arbitrary densities, whereas Kalman filters are restricted to Gaussian distributions. Accordingly, our SJPDAF approach produces more accurate densities than the standard JPDAF using Kalman filters.

[Figure 7. Tracking one object approaching a static obstacle, using a particle filter (left) and using a Kalman filter (right). The arrow indicates the trajectory taken by the object.]
The Kalman filter erroneously predicts that the object moves into the obstacle. Figure 7 shows a typical example in which the restricted representation of Kalman filters leads to a wrong prediction. Here the robot tracks a person who walks straight towards an obstacle and then passes it on the right-hand side (see solid line). The left image of Figure 7 also contains the probability densities of the particle filters computed by the SJPDAF at three different points in time. The grey-shaded contours indicate the corresponding distributions of the particles (the darker, the higher the likelihood), which were obtained by computing a histogram over a discrete grid of poses. The Mahalanobis distances of the Gaussians computed at the same points in time by the standard JPDAF using a Kalman filter are shown in the right image of Figure 7. As can be seen from the figure, both filters correctly predict the position of the person in the first two steps. However, in the third step the Kalman filter predicts that the person will move into the obstacle. Our SJPDAF, in contrast, correctly predicts that the person will pass the obstacle either to the left or to the right. This is indicated by the bimodal distribution shown in the left image of Figure 7.

5.3 Advantage of the SJPDAF over Standard Particle Filters

In the past, single-state particle filters have also been used for tracking multiple objects [13, 9]. This approach, which rests on the assumption that each mode in the density corresponds to an object, is only feasible if all objects can be sensed at every point in time and if the measurement errors are small. If, for example, one object is occluded, the samples tracking this object obtain significantly smaller importance factors than the samples tracking the other objects. As a result, all samples quickly focus on the un-occluded object. Figure 8 shows an example situation in which the robot is tracking two persons. Whereas one person is visible all the time, the second one is temporarily occluded by a static obstacle.
The upper row of the figure shows the evolution of the samples using such a single-state particle filter. This filter was initialized with a bimodal distribution using 1000 particles for each object. As soon as the upper object is occluded, all samples move to the un-occluded person, and the filter loses track of the occluded object. The lower row of the figure shows the sample sets resulting from our SJPDAF.
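The failure mode described above can be reproduced in a few lines: a single sample set initialized bimodally collapses onto the un-occluded object once one measurement stream disappears, whereas separate per-object sample sets (the SJPDAF arrangement) retain the occluded mode. All models and parameters here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

def resample(samples, weights):
    weights = weights / weights.sum()
    return samples[rng.choice(len(samples), size=len(samples), p=weights)]

def likelihood(samples, z, std=0.5):
    return np.exp(-0.5 * ((samples - z) / std) ** 2)

# Two objects at x = 0 and x = 10; the one at x = 10 becomes occluded.
single = np.concatenate([np.zeros(1000), np.full(1000, 10.0)])
per_object = [np.zeros(1000), np.full(1000, 10.0)]

for step in range(3):
    zs = [0.0, 10.0] if step == 0 else [0.0]  # object 2 occluded after step 0
    # Single sample set: every sample competes for the remaining measurement.
    w = sum(likelihood(single, z) for z in zs) + 1e-12
    single = resample(single, w)
    # Per-object sets: the visible object is corrected normally ...
    per_object[0] = resample(per_object[0], likelihood(per_object[0], zs[0]))
    # ... while the occluded object's set is only predicted (noise added),
    # so its mode survives, with growing uncertainty.
    per_object[1] = per_object[1] + rng.normal(0, 0.2, size=1000)

print(bool((single > 5).mean() < 0.01))          # single filter lost the mode
print(bool(abs(per_object[1].mean() - 10) < 1))  # per-object set kept it
```

In the full SJPDAF, the "only predicted" branch corresponds to the occluded object receiving almost all of its assignment probability via $\beta_{0i}$ and Eq. (15), rather than being starved by resampling.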
[Figure 8. Tracking two persons with one sample set (top row) and with two sample sets (bottom row). The arrows indicate the positions of the persons and their movement directions.]

Although the uncertainty about the position of the occluded object increases, the filter is able to reliably keep track of both objects.

6. Summary and Conclusions

In this paper we presented a technique for keeping track of multiple moving objects with a mobile robot. In order to avoid the exponential complexity of joint state spaces, each object is tracked using a particle filter, and Joint Probabilistic Data Association Filters are applied to solve the problem of assigning measurements to the individual objects. By integrating particle filters with JPDAFs, our SJPDAF inherits the advantages of both: it can represent arbitrary densities over the state spaces of the individual objects while still being able to efficiently solve the data association problem. Our approach uses a probabilistic method to deal with varying numbers of objects. Furthermore, it extracts appropriate features from proximity data to deal with possible occlusions. The technique has been implemented and evaluated on a real robot as well as in simulation runs. The experiments carried out in a typical office environment demonstrate that our approach is able to reliably keep track of multiple persons. They furthermore illustrate that our approach outperforms other techniques developed so far.

7. Acknowledgments

This work has partly been supported by the Information Societies Technology Programme of the European Commission under contract number IST.

References

[1] Y. Bar-Shalom and T. Fortmann. Tracking and Data Association. Mathematics in Science and Engineering. Academic Press.
[2] M. Black and A. Jepson. A probabilistic framework for matching temporal trajectories: Condensation-based recognition of gestures and expressions. In ECCV.
[3] I. Cox.
A review of statistical data association techniques for motion correspondence. International Journal of Computer Vision, 10(1):53-66.
[4] I. Cox and S. Hingorani. An efficient implementation of Reid's multiple hypothesis tracking algorithm and its evaluation for the purpose of visual tracking. IEEE Transactions on PAMI, 18(2), February.
[5] D. Fox, W. Burgard, F. Dellaert, and S. Thrun. Monte Carlo localization: Efficient position estimation for mobile robots. In Proc. of the National Conference on Artificial Intelligence (AAAI).
[6] N. Gordon. A hybrid bootstrap filter for target tracking in clutter. IEEE Transactions on Aerospace and Electronic Systems, 33(1), January.
[7] N. Gordon, D. Salmond, and A. Smith. A novel approach to nonlinear/non-Gaussian Bayesian state estimation. IEE Proceedings F, 140(2).
[8] J.-S. Gutmann, W. Burgard, D. Fox, and K. Konolige. An experimental comparison of localization methods. In Proc. of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).
[9] M. Isard and A. Blake. Contour tracking by stochastic propagation of conditional density. In Proc. of the European Conference on Computer Vision.
[10] K. Kanazawa, D. Koller, and S. Russell. Stochastic simulation algorithms for dynamic probabilistic networks. In Proc. of the 11th Annual Conference on Uncertainty in AI (UAI), Montreal, Canada.
[11] F. Lu and E. Milios. Robot pose estimation in unknown environments by matching 2d range scans. In IEEE Computer Vision and Pattern Recognition Conference (CVPR).
[12] J. MacCormick and A. Blake. A probabilistic exclusion principle for tracking multiple objects. In Proc. of the 7th International Conference on Computer Vision (ICCV).
[13] E. Meier and F. Ade. Using the condensation algorithm to implement tracking for mobile robots. In Proc. of the Third European Workshop on Advanced Mobile Robots (Eurobot '99). IEEE.
[14] H. Moravec and A. Elfes. High resolution maps from wide angle sonar. In Proc.
of the IEEE International Conference on Robotics & Automation (ICRA).
[15] M. Pitt and N. Shephard. Filtering via simulation: auxiliary particle filters. Journal of the American Statistical Association, 94(446).
Visual Bearing-Only Simultaneous Localization and Mapping with Improved Feature Matching Hauke Strasdat, Cyrill Stachniss, Maren Bennewitz, and Wolfram Burgard Computer Science Institute, University of
More informationMonte Carlo Localization using Dynamically Expanding Occupancy Grids. Karan M. Gupta
1 Monte Carlo Localization using Dynamically Expanding Occupancy Grids Karan M. Gupta Agenda Introduction Occupancy Grids Sonar Sensor Model Dynamically Expanding Occupancy Grids Monte Carlo Localization
More informationRobust Localization of Persons Based on Learned Motion Patterns
Robust Localization of Persons Based on Learned Motion Patterns Grzegorz Cielniak Maren Bennewitz Wolfram Burgard Department of Computer Science, University of Freiburg, 791 Freiburg, Germany Department
More informationObject Tracking with an Adaptive Color-Based Particle Filter
Object Tracking with an Adaptive Color-Based Particle Filter Katja Nummiaro 1, Esther Koller-Meier 2, and Luc Van Gool 1,2 1 Katholieke Universiteit Leuven, ESAT/VISICS, Belgium {knummiar,vangool}@esat.kuleuven.ac.be
More informationEfficient Particle Filter-Based Tracking of Multiple Interacting Targets Using an MRF-based Motion Model
Efficient Particle Filter-Based Tracking of Multiple Interacting Targets Using an MRF-based Motion Model Zia Khan, Tucker Balch, and Frank Dellaert {zkhan,tucker,dellaert}@cc.gatech.edu College of Computing,
More informationAdapting the Sample Size in Particle Filters Through KLD-Sampling
Adapting the Sample Size in Particle Filters Through KLD-Sampling Dieter Fox Department of Computer Science & Engineering University of Washington Seattle, WA 98195 Email: fox@cs.washington.edu Abstract
More informationMapping and Exploration with Mobile Robots using Coverage Maps
Mapping and Exploration with Mobile Robots using Coverage Maps Cyrill Stachniss Wolfram Burgard University of Freiburg Department of Computer Science D-79110 Freiburg, Germany {stachnis burgard}@informatik.uni-freiburg.de
More informationRevising Stereo Vision Maps in Particle Filter Based SLAM using Localisation Confidence and Sample History
Revising Stereo Vision Maps in Particle Filter Based SLAM using Localisation Confidence and Sample History Simon Thompson and Satoshi Kagami Digital Human Research Center National Institute of Advanced
More informationIROS 05 Tutorial. MCL: Global Localization (Sonar) Monte-Carlo Localization. Particle Filters. Rao-Blackwellized Particle Filters and Loop Closing
IROS 05 Tutorial SLAM - Getting it Working in Real World Applications Rao-Blackwellized Particle Filters and Loop Closing Cyrill Stachniss and Wolfram Burgard University of Freiburg, Dept. of Computer
More informationL10. PARTICLE FILTERING CONTINUED. NA568 Mobile Robotics: Methods & Algorithms
L10. PARTICLE FILTERING CONTINUED NA568 Mobile Robotics: Methods & Algorithms Gaussian Filters The Kalman filter and its variants can only model (unimodal) Gaussian distributions Courtesy: K. Arras Motivation
More informationMarkov Localization for Mobile Robots in Dynaic Environments
Markov Localization for Mobile Robots in Dynaic Environments http://www.cs.cmu.edu/afs/cs/project/jair/pub/volume11/fox99a-html/jair-localize.html Next: Introduction Journal of Artificial Intelligence
More informationVisual Motion Analysis and Tracking Part II
Visual Motion Analysis and Tracking Part II David J Fleet and Allan D Jepson CIAR NCAP Summer School July 12-16, 16, 2005 Outline Optical Flow and Tracking: Optical flow estimation (robust, iterative refinement,
More informationTracking Algorithms. Lecture16: Visual Tracking I. Probabilistic Tracking. Joint Probability and Graphical Model. Deterministic methods
Tracking Algorithms CSED441:Introduction to Computer Vision (2017F) Lecture16: Visual Tracking I Bohyung Han CSE, POSTECH bhhan@postech.ac.kr Deterministic methods Given input video and current state,
More informationPractical Course WS12/13 Introduction to Monte Carlo Localization
Practical Course WS12/13 Introduction to Monte Carlo Localization Cyrill Stachniss and Luciano Spinello 1 State Estimation Estimate the state of a system given observations and controls Goal: 2 Bayes Filter
More informationProbabilistic Robotics. FastSLAM
Probabilistic Robotics FastSLAM The SLAM Problem SLAM stands for simultaneous localization and mapping The task of building a map while estimating the pose of the robot relative to this map Why is SLAM
More informationAdapting Navigation Strategies Using Motions Patterns of People
Adapting Navigation Strategies Using Motions Patterns of People Maren Bennewitz and Wolfram Burgard Department of Computer Science University of Freiburg, Germany Sebastian Thrun School of Computer Science
More informationMonte Carlo Localization for Mobile Robots
Monte Carlo Localization for Mobile Robots Frank Dellaert y Dieter Fox y Wolfram Burgard z Sebastian Thrun y y Computer Science Department, Carnegie Mellon University, Pittsburgh PA 15213 z Institute of
More informationAUTONOMOUS SYSTEMS. PROBABILISTIC LOCALIZATION Monte Carlo Localization
AUTONOMOUS SYSTEMS PROBABILISTIC LOCALIZATION Monte Carlo Localization Maria Isabel Ribeiro Pedro Lima With revisions introduced by Rodrigo Ventura Instituto Superior Técnico/Instituto de Sistemas e Robótica
More informationRobot Mapping. A Short Introduction to the Bayes Filter and Related Models. Gian Diego Tipaldi, Wolfram Burgard
Robot Mapping A Short Introduction to the Bayes Filter and Related Models Gian Diego Tipaldi, Wolfram Burgard 1 State Estimation Estimate the state of a system given observations and controls Goal: 2 Recursive
More informationParticle Filtering. CS6240 Multimedia Analysis. Leow Wee Kheng. Department of Computer Science School of Computing National University of Singapore
Particle Filtering CS6240 Multimedia Analysis Leow Wee Kheng Department of Computer Science School of Computing National University of Singapore (CS6240) Particle Filtering 1 / 28 Introduction Introduction
More informationL17. OCCUPANCY MAPS. NA568 Mobile Robotics: Methods & Algorithms
L17. OCCUPANCY MAPS NA568 Mobile Robotics: Methods & Algorithms Today s Topic Why Occupancy Maps? Bayes Binary Filters Log-odds Occupancy Maps Inverse sensor model Learning inverse sensor model ML map
More informationPeople Tracking with Anonymous and ID-Sensors Using Rao-Blackwellised Particle Filters
People Tracking with Anonymous and ID-Sensors Using Rao-Blackwellised Particle Filters Dirk Schulz, Dieter Fox, and Jeffrey Hightower Dcpt. of Computer Science & Engineering University of Washington Seattle,
More informationAn Experimental Comparison of Localization Methods Continued
An Experimental Comparison of Localization Methods Continued Jens-Steffen Gutmann Digital Creatures Laboratory Sony Corporation, Tokyo, Japan Email: gutmann@ieee.org Dieter Fox Department of Computer Science
More informationA New Omnidirectional Vision Sensor for Monte-Carlo Localization
A New Omnidirectional Vision Sensor for Monte-Carlo Localization E. Menegatti 1, A. Pretto 1, and E. Pagello 12 1 Intelligent Autonomous Systems Laboratory Department of Information Engineering The University
More informationAdapting the Sample Size in Particle Filters Through KLD-Sampling
Adapting the Sample Size in Particle Filters Through KLD-Sampling Dieter Fox Department of Computer Science & Engineering University of Washington Seattle, WA 98195 Email: fox@cs.washington.edu Abstract
More informationExploring Unknown Environments with Mobile Robots using Coverage Maps
Exploring Unknown Environments with Mobile Robots using Coverage Maps Cyrill Stachniss Wolfram Burgard University of Freiburg Department of Computer Science D-79110 Freiburg, Germany * Abstract In this
More informationComputer Vision 2 Lecture 8
Computer Vision 2 Lecture 8 Multi-Object Tracking (30.05.2016) leibe@vision.rwth-aachen.de, stueckler@vision.rwth-aachen.de RWTH Aachen University, Computer Vision Group http://www.vision.rwth-aachen.de
More informationReal-Time Tracking of Moving Objects Using Particle Filters
Real-Time Tracing of Moving Objects Using Particle Filters António Almeida, Jorge Almeida and Rui Araújo ISR - Institute for Systems and Robotics, Department of Electrical and Computer Engineering, University
More informationMobile Robot Map Learning from Range Data in Dynamic Environments
1 Mobile Robot Map Learning from Range Data in Dynamic Environments Wolfram Burgard 1 Cyrill Stachniss 1,2 Dirk Hähnel 1,3 1 University of Freiburg, Dept. of Computer Science, 79110 Freiburg, Germany 2
More informationFox, Burgard & Thrun problem they attack. Tracking or local techniques aim at compensating odometric errors occurring during robot navigation. They re
Journal of Artificial Intelligence Research 11 (1999) 391-427 Submitted 1/99; published 11/99 Markov Localization for Mobile Robots in Dynamic Environments Dieter Fox Computer Science Department and Robotics
More informationSwitching Hypothesized Measurements: A Dynamic Model with Applications to Occlusion Adaptive Joint Tracking
Switching Hypothesized Measurements: A Dynamic Model with Applications to Occlusion Adaptive Joint Tracking Yang Wang Tele Tan Institute for Infocomm Research, Singapore {ywang, telctan}@i2r.a-star.edu.sg
More informationIntroduction to Mobile Robotics Path Planning and Collision Avoidance
Introduction to Mobile Robotics Path Planning and Collision Avoidance Wolfram Burgard, Cyrill Stachniss, Maren Bennewitz, Giorgio Grisetti, Kai Arras 1 Motion Planning Latombe (1991): eminently necessary
More informationUSING 3D DATA FOR MONTE CARLO LOCALIZATION IN COMPLEX INDOOR ENVIRONMENTS. Oliver Wulf, Bernardo Wagner
USING 3D DATA FOR MONTE CARLO LOCALIZATION IN COMPLEX INDOOR ENVIRONMENTS Oliver Wulf, Bernardo Wagner Institute for Systems Engineering (RTS/ISE), University of Hannover, Germany Mohamed Khalaf-Allah
More informationProbabilistic Robotics
Probabilistic Robotics Sebastian Thrun Wolfram Burgard Dieter Fox The MIT Press Cambridge, Massachusetts London, England Preface xvii Acknowledgments xix I Basics 1 1 Introduction 3 1.1 Uncertainty in
More informationWhere is...? Learning and Utilizing Motion Patterns of Persons with Mobile Robots
Where is...? Learning and Utilizing Motion Patterns of Persons with Mobile Robots Grzegorz Cielniak Maren Bennewitz Wolfram Burgard Department of Computer Science, University of Freiburg, 79110 Freiburg,
More informationArtificial Intelligence for Robotics: A Brief Summary
Artificial Intelligence for Robotics: A Brief Summary This document provides a summary of the course, Artificial Intelligence for Robotics, and highlights main concepts. Lesson 1: Localization (using Histogram
More informationWhere s the Boss? : Monte Carlo Localization for an Autonomous Ground Vehicle using an Aerial Lidar Map
Where s the Boss? : Monte Carlo Localization for an Autonomous Ground Vehicle using an Aerial Lidar Map Sebastian Scherer, Young-Woo Seo, and Prasanna Velagapudi October 16, 2007 Robotics Institute Carnegie
More informationMatching Evaluation of 2D Laser Scan Points using Observed Probability in Unstable Measurement Environment
Matching Evaluation of D Laser Scan Points using Observed Probability in Unstable Measurement Environment Taichi Yamada, and Akihisa Ohya Abstract In the real environment such as urban areas sidewalk,
More informationProbabilistic Robotics
Probabilistic Robotics FastSLAM Sebastian Thrun (abridged and adapted by Rodrigo Ventura in Oct-2008) The SLAM Problem SLAM stands for simultaneous localization and mapping The task of building a map while
More informationProbabilistic Robotics
Probabilistic Robotics Probabilistic Motion and Sensor Models Some slides adopted from: Wolfram Burgard, Cyrill Stachniss, Maren Bennewitz, Kai Arras and Probabilistic Robotics Book SA-1 Sensors for Mobile
More informationIntroduction to Mobile Robotics Path Planning and Collision Avoidance. Wolfram Burgard, Maren Bennewitz, Diego Tipaldi, Luciano Spinello
Introduction to Mobile Robotics Path Planning and Collision Avoidance Wolfram Burgard, Maren Bennewitz, Diego Tipaldi, Luciano Spinello 1 Motion Planning Latombe (1991): is eminently necessary since, by
More informationEfficient Failure Detection for Mobile Robots Using Mixed-Abstraction Particle Filters
Efficient Failure Detection for Mobile Robots Using Mixed-Abstraction Particle Filters Christian Plagemann, Cyrill Stachniss, and Wolfram Burgard University of Freiburg Georges-Koehler-Allee 7911 Freiburg,
More informationIntegrating Global Position Estimation and Position Tracking for Mobile Robots: The Dynamic Markov Localization Approach
Integrating Global Position Estimation and Position Tracking for Mobile Robots: The Dynamic Markov Localization Approach Wolfram urgard Andreas Derr Dieter Fox Armin. Cremers Institute of Computer Science
More informationOptimizing Schedules for Prioritized Path Planning of Multi-Robot Systems
Proceedings of the 20 IEEE International Conference on Robotics & Automation Seoul, Korea May 21-26, 20 Optimizing Schedules for Prioritized Path Planning of Multi-Robot Systems Maren Bennewitz y Wolfram
More informationSLAM: Robotic Simultaneous Location and Mapping
SLAM: Robotic Simultaneous Location and Mapping William Regli Department of Computer Science (and Departments of ECE and MEM) Drexel University Acknowledgments to Sebastian Thrun & others SLAM Lecture
More informationMultiple Target Tracking For Mobile Robots Using the JPDAF Algorithm
Multiple Target Tracking For Mobile Robots Using the JPDAF Algorithm Aliakbar Gorji, Saeed Shiry and Mohammad Bagher Menhaj Abstract Mobile robot localization is taken into account as one of the most important
More informationActive Monte Carlo Localization in Outdoor Terrains using Multi-Level Surface Maps
Active Monte Carlo Localization in Outdoor Terrains using Multi-Level Surface Maps Rainer Kümmerle 1, Patrick Pfaff 1, Rudolph Triebel 2, and Wolfram Burgard 1 1 Department of Computer Science, University
More informationOverview. EECS 124, UC Berkeley, Spring 2008 Lecture 23: Localization and Mapping. Statistical Models
Introduction ti to Embedded dsystems EECS 124, UC Berkeley, Spring 2008 Lecture 23: Localization and Mapping Gabe Hoffmann Ph.D. Candidate, Aero/Astro Engineering Stanford University Statistical Models
More informationIntroduction to Mobile Robotics
Introduction to Mobile Robotics Gaussian Processes Wolfram Burgard Cyrill Stachniss Giorgio Grisetti Maren Bennewitz Christian Plagemann SS08, University of Freiburg, Department for Computer Science Announcement
More informationRobotics. Lecture 5: Monte Carlo Localisation. See course website for up to date information.
Robotics Lecture 5: Monte Carlo Localisation See course website http://www.doc.ic.ac.uk/~ajd/robotics/ for up to date information. Andrew Davison Department of Computing Imperial College London Review:
More informationVoronoi Tracking: Location Estimation Using Sparse and Noisy Sensor Data
Voronoi Tracking: Location Estimation Using Sparse and Noisy Sensor Data Lin Liao, Dieter Fox, Jeffrey Hightower, Henry Kautz, and Dirk Schulz Deptartment of Computer Science & Engineering University of
More informationRobust Vision-based Localization by Combining an Image Retrieval System with Monte Carlo Localization
IEEE Transactions on Robotics, 21(2), 2005. Robust Vision-based Localization by Combining an Image Retrieval System with Monte Carlo Localization Jürgen Wolf Wolfram Burgard Hans Burkhardt Department of
More informationMultiple Pedestrian Tracking using Viterbi Data Association
Multiple Pedestrian Tracking using Viterbi Data Association Asma Azim and Olivier Aycard Abstract To address perception problems we must be able to track dynamic objects of the environment. An important
More informationCAMERA POSE ESTIMATION OF RGB-D SENSORS USING PARTICLE FILTERING
CAMERA POSE ESTIMATION OF RGB-D SENSORS USING PARTICLE FILTERING By Michael Lowney Senior Thesis in Electrical Engineering University of Illinois at Urbana-Champaign Advisor: Professor Minh Do May 2015
More informationRobótica 2005 Actas do Encontro Científico Coimbra, 29 de Abril de 2005
Robótica 2005 Actas do Encontro Científico Coimbra, 29 de Abril de 2005 REAL-TIME TRACKING USING PARTICLE FILTERS AND SAMPLE-BASED JOINT PROBABILISTIC DATA ASSOCIATION FILTERS Jorge Almeida António Almeida
More informationLocalization and Map Building
Localization and Map Building Noise and aliasing; odometric position estimation To localize or not to localize Belief representation Map representation Probabilistic map-based localization Other examples
More informationEKF Localization and EKF SLAM incorporating prior information
EKF Localization and EKF SLAM incorporating prior information Final Report ME- Samuel Castaneda ID: 113155 1. Abstract In the context of mobile robotics, before any motion planning or navigation algorithm
More informationRobot Mapping. Grid Maps. Gian Diego Tipaldi, Wolfram Burgard
Robot Mapping Grid Maps Gian Diego Tipaldi, Wolfram Burgard 1 Features vs. Volumetric Maps Courtesy: D. Hähnel Courtesy: E. Nebot 2 Features So far, we only used feature maps Natural choice for Kalman
More informationMonte Carlo Localization
Monte Carlo Localization P. Hiemstra & A. Nederveen August 24, 2007 Abstract In this paper we investigate robot localization with the Augmented Monte Carlo Localization (amcl) algorithm. The goal of the
More informationRobust Monte Carlo Localization for Mobile Robots
To appear in Artificial Intelligence, Summer 2001 Robust Monte Carlo Localization for Mobile Robots Sebastian Thrun, Dieter Fox y, Wolfram Burgard z, and Frank Dellaert School of Computer Science, Carnegie
More informationAn Adaptive Fusion Architecture for Target Tracking
An Adaptive Fusion Architecture for Target Tracking Gareth Loy, Luke Fletcher, Nicholas Apostoloff and Alexander Zelinsky Department of Systems Engineering Research School of Information Sciences and Engineering
More informationIntroduction to Mobile Robotics SLAM Landmark-based FastSLAM
Introduction to Mobile Robotics SLAM Landmark-based FastSLAM Wolfram Burgard, Cyrill Stachniss, Maren Bennewitz, Diego Tipaldi, Luciano Spinello Partial slide courtesy of Mike Montemerlo 1 The SLAM Problem
More informationLocalization of Multiple Robots with Simple Sensors
Proceedings of the IEEE International Conference on Mechatronics & Automation Niagara Falls, Canada July 2005 Localization of Multiple Robots with Simple Sensors Mike Peasgood and Christopher Clark Lab
More informationVisually Augmented POMDP for Indoor Robot Navigation
Visually Augmented POMDP for Indoor obot Navigation LÓPEZ M.E., BAEA., BEGASA L.M., ESCUDEO M.S. Electronics Department University of Alcalá Campus Universitario. 28871 Alcalá de Henares (Madrid) SPAIN
More informationSpring Localization II. Roland Siegwart, Margarita Chli, Martin Rufli. ASL Autonomous Systems Lab. Autonomous Mobile Robots
Spring 2016 Localization II Localization I 25.04.2016 1 knowledge, data base mission commands Localization Map Building environment model local map position global map Cognition Path Planning path Perception
More informationA Sample of Monte Carlo Methods in Robotics and Vision. Credits. Outline. Structure from Motion. without Correspondences
A Sample of Monte Carlo Methods in Robotics and Vision Frank Dellaert College of Computing Georgia Institute of Technology Credits Zia Khan Tucker Balch Michael Kaess Rafal Zboinski Ananth Ranganathan
More informationEnvironment Identification by Comparing Maps of Landmarks
Environment Identification by Comparing Maps of Landmarks Jens-Steffen Gutmann Masaki Fukuchi Kohtaro Sabe Digital Creatures Laboratory Sony Corporation -- Kitashinagawa, Shinagawa-ku Tokyo, 4- Japan Email:
More informationEvaluation of Moving Object Tracking Techniques for Video Surveillance Applications
International Journal of Current Engineering and Technology E-ISSN 2277 4106, P-ISSN 2347 5161 2015INPRESSCO, All Rights Reserved Available at http://inpressco.com/category/ijcet Research Article Evaluation
More informationSupervised Learning of Places
Supervised Learning of Places from Range Data using AdaBoost Óscar Martínez Mozos Cyrill Stachniss Wolfram Burgard University of Freiburg, Department of Computer Science, D-79110 Freiburg, Germany Abstract
More informationSpring Localization II. Roland Siegwart, Margarita Chli, Juan Nieto, Nick Lawrance. ASL Autonomous Systems Lab. Autonomous Mobile Robots
Spring 2018 Localization II Localization I 16.04.2018 1 knowledge, data base mission commands Localization Map Building environment model local map position global map Cognition Path Planning path Perception
More informationECE276A: Sensing & Estimation in Robotics Lecture 11: Simultaneous Localization and Mapping using a Particle Filter
ECE276A: Sensing & Estimation in Robotics Lecture 11: Simultaneous Localization and Mapping using a Particle Filter Lecturer: Nikolay Atanasov: natanasov@ucsd.edu Teaching Assistants: Siwei Guo: s9guo@eng.ucsd.edu
More informationObstacle Avoidance (Local Path Planning)
Obstacle Avoidance (Local Path Planning) The goal of the obstacle avoidance algorithms is to avoid collisions with obstacles It is usually based on local map Often implemented as a more or less independent
More informationUsing EM to Learn Motion Behaviors of Persons with Mobile Robots
Using EM to Learn Motion Behaviors of Persons with Mobile Robots Maren Bennewitz Wolfram Burgard Sebastian Thrun Department of Computer Science, University of Freiburg, 790 Freiburg, Germany School of
More informationWatch their Moves: Applying Probabilistic Multiple Object Tracking to Autonomous Robot Soccer
From: AAAI-0 Proceedings. Copyright 00, AAAI (www.aaai.org). All rights reserved. Watch their Moves: Applying Probabilistic Multiple Object Tracking to Autonomous Robot Soccer Thorsten Schmitt, Michael
More informationReducing Particle Filtering Complexity for 3D Motion Capture using Dynamic Bayesian Networks
Proceedings of the Twenty-Third AAAI Conference on Artificial Intelligence (2008) Reducing Particle Filtering Complexity for 3D Motion Capture using Dynamic Bayesian Networks Cédric Rose Diatélic SA INRIA
More informationIntroduction to Mobile Robotics SLAM Grid-based FastSLAM. Wolfram Burgard, Cyrill Stachniss, Maren Bennewitz, Diego Tipaldi, Luciano Spinello
Introduction to Mobile Robotics SLAM Grid-based FastSLAM Wolfram Burgard, Cyrill Stachniss, Maren Bennewitz, Diego Tipaldi, Luciano Spinello 1 The SLAM Problem SLAM stands for simultaneous localization
More informationTracking of Human Body using Multiple Predictors
Tracking of Human Body using Multiple Predictors Rui M Jesus 1, Arnaldo J Abrantes 1, and Jorge S Marques 2 1 Instituto Superior de Engenharia de Lisboa, Postfach 351-218317001, Rua Conselheiro Emído Navarro,
More informationA Genetic Algorithm for Simultaneous Localization and Mapping
A Genetic Algorithm for Simultaneous Localization and Mapping Tom Duckett Centre for Applied Autonomous Sensor Systems Dept. of Technology, Örebro University SE-70182 Örebro, Sweden http://www.aass.oru.se
More informationComputer Vision Group Prof. Daniel Cremers. 11. Sampling Methods
Prof. Daniel Cremers 11. Sampling Methods Sampling Methods Sampling Methods are widely used in Computer Science as an approximation of a deterministic algorithm to represent uncertainty without a parametric
More informationVoronoi Tracking: Location Estimation Using Sparse and Noisy Sensor Data
Voronoi Tracking: Location Estimation Using Sparse and Noisy Sensor Data Lin Liao, Dieter Fox, Jeffrey Hightower, Henry Kautz, and Dirk Schulz Deptartment of Computer Science & Engineering University of
More informationPedestrian Tracking Using Random Finite Sets
1th International Conference on Information Fusion Chicago, Illinois, USA, July -8, 2011 Pedestrian Tracking Using Random Finite Sets Stephan Reuter Institute of Measurement, Control, and Microtechnology
More informationIndoor Positioning System Based on Distributed Camera Sensor Networks for Mobile Robot
Indoor Positioning System Based on Distributed Camera Sensor Networks for Mobile Robot Yonghoon Ji 1, Atsushi Yamashita 1, and Hajime Asama 1 School of Engineering, The University of Tokyo, Japan, t{ji,
More informationEE565:Mobile Robotics Lecture 3
EE565:Mobile Robotics Lecture 3 Welcome Dr. Ahmad Kamal Nasir Today s Objectives Motion Models Velocity based model (Dead-Reckoning) Odometry based model (Wheel Encoders) Sensor Models Beam model of range
More informationParticle Filters for Visual Tracking
Particle Filters for Visual Tracking T. Chateau, Pascal Institute, Clermont-Ferrand 1 Content Particle filtering: a probabilistic framework SIR particle filter MCMC particle filter RJMCMC particle filter
More informationECSE-626 Project: An Adaptive Color-Based Particle Filter
ECSE-626 Project: An Adaptive Color-Based Particle Filter Fabian Kaelin McGill University Montreal, Canada fabian.kaelin@mail.mcgill.ca Abstract The goal of this project was to discuss and implement a
More informationHuman Upper Body Pose Estimation in Static Images
1. Research Team Human Upper Body Pose Estimation in Static Images Project Leader: Graduate Students: Prof. Isaac Cohen, Computer Science Mun Wai Lee 2. Statement of Project Goals This goal of this project
More informationPeople Tracking with Anonymous and ID-Sensors Using Rao-Blackwellised Particle Filters
In Proc. of the International Joint Conference on Artificial Intelligence (IJCAI), 2003. Note, this is a revised version that fixes a bug in the derivation of the algorithm in Section 3.1 (revision made
More information3D Model Acquisition by Tracking 2D Wireframes
3D Model Acquisition by Tracking 2D Wireframes M. Brown, T. Drummond and R. Cipolla {96mab twd20 cipolla}@eng.cam.ac.uk Department of Engineering University of Cambridge Cambridge CB2 1PZ, UK Abstract
More informationProbabilistic Matching for 3D Scan Registration
Probabilistic Matching for 3D Scan Registration Dirk Hähnel Wolfram Burgard Department of Computer Science, University of Freiburg, 79110 Freiburg, Germany Abstract In this paper we consider the problem
More informationAppearance-based Concurrent Map Building and Localization
Appearance-based Concurrent Map Building and Localization J.M. Porta B.J.A. Kröse IAS Group, Informatics Institute, University of Amsterdam Kruislaan 403, 1098SJ, Amsterdam, The Netherlands {porta,krose}@science.uva.nl
More information