L10. PARTICLE FILTERING CONTINUED NA568 Mobile Robotics: Methods & Algorithms
Gaussian Filters The Kalman filter and its variants can only model (unimodal) Gaussian distributions Courtesy: K. Arras
Motivation Goal: approach for dealing with arbitrary distributions Courtesy: Cyrill Stachniss
Key Idea: Samples Use multiple samples to represent arbitrary distributions Courtesy: Cyrill Stachniss
Particle Set Set of weighted samples $\mathcal{X} = \{\langle x^{[j]}, w^{[j]}\rangle\}_{j=1,\dots,J}$, where $x^{[j]}$ is a state hypothesis and $w^{[j]}$ its importance weight. The samples represent the posterior. Courtesy: Cyrill Stachniss
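As a concrete illustration (not from the slides), a particle set in code is just paired arrays of state hypotheses and importance weights; the sizes and distributions below are arbitrary example choices:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1000
states = rng.normal(0.0, 1.0, size=N)   # x^[j]: state hypotheses
weights = np.full(N, 1.0 / N)           # w^[j]: importance weights (sum to 1)

# The weighted samples represent the posterior; e.g. its mean:
posterior_mean = np.sum(weights * states)
```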
Particles for Function Approximation The more particles fall into a region, the higher the probability of the region. How do we obtain such samples? Courtesy: Cyrill Stachniss
Closed Form Sampling is Only Possible for a Few Distributions Example: Gaussian How to sample from other distributions? Courtesy: Cyrill Stachniss
Importance Sampling Principle We can use a different distribution q to generate samples from p. Account for the differences between q and p using a weight w = p(x) / q(x), where p is the target and q the proposal. Pre-condition: $p(x) > 0 \Rightarrow q(x) > 0$. Courtesy: Cyrill Stachniss
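A minimal numerical sketch of this principle (the target and proposal densities below are arbitrary example choices, not from the slides): samples are drawn from a Gaussian proposal q and reweighted by w = p/q so that the weighted set represents a bimodal target p.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Target p: a bimodal mixture we cannot sample from directly (example choice).
p = lambda x: 0.5 * norm.pdf(x, -2.0, 0.5) + 0.5 * norm.pdf(x, 2.0, 0.5)
# Proposal q: a wide Gaussian covering the support of p (q > 0 wherever p > 0).
q = norm(0.0, 3.0)

xs = q.rvs(size=10_000, random_state=rng)   # sample from the proposal
w = p(xs) / q.pdf(xs)                       # importance weights w = p / q
w /= w.sum()                                # normalize

# The weighted samples now represent p, e.g. estimate E_p[x^2]:
print(np.sum(w * xs**2))
```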
Particle Filter Recursive Bayes filter Non-parametric approach Models the distribution by samples Prediction: draw from the proposal Correction: weighting by the ratio of target and proposal The more samples we use, the better the estimate! Courtesy: Cyrill Stachniss
Sequential Importance Sampling (SIS) Algorithm
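A minimal 1-D sketch of one SIS step (the toy motion and measurement models and their noise parameters are illustrative assumptions): each particle is propagated through the proposal and its weight is multiplied by the measurement likelihood; there is no resampling yet.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def sis_step(particles, log_weights, u, z, motion_std=0.5, meas_std=1.0):
    """One SIS iteration: propagate through the proposal, then update
    the weights multiplicatively -- no resampling.

    Toy 1-D models (illustrative assumptions):
      proposal:    x_t = x_{t-1} + u + N(0, motion_std^2)
      likelihood:  z_t = x_t + N(0, meas_std^2)
    """
    N = len(particles)
    particles = particles + u + rng.normal(0.0, motion_std, size=N)
    # w_t^i ∝ w_{t-1}^i * p(z_t | x_t^i); log-space for numerical stability
    log_weights = log_weights + norm.logpdf(z, loc=particles, scale=meas_std)
    log_weights -= np.max(log_weights)   # rescale (weights known up to a constant)
    return particles, log_weights
```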
Degeneracy Problem Ideally, the importance density q(·) would be the posterior itself, i.e., $q(x_{0:k} \mid z_{1:k}) = p(x_{0:k} \mid z_{1:k})$. For the assumed factored form, it has been shown that the variance of the importance weights can only increase over time. In practical terms, all but one particle will have negligible weight after a few time steps. Effectively, a large computational effort is devoted to updating particles whose contribution to the approximation of $p(x_k \mid z_{1:k})$ is almost zero. A. Doucet, S. Godsill, and C. Andrieu, "On sequential Monte Carlo sampling methods for Bayesian filtering," Statistics and Computing, vol. 10, no. 3, pp. 197-208, 2000.
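A common degeneracy diagnostic from the SMC literature (e.g. Doucet et al., though not written out on this slide) is the effective sample size $N_{eff} = 1 / \sum_i (w_k^i)^2$; resampling is typically triggered when it falls below a threshold. A minimal sketch:

```python
import numpy as np

def effective_sample_size(weights):
    """N_eff = 1 / sum_i w_i^2 for normalized weights.

    N_eff ≈ N for uniform weights, and ≈ 1 when a single particle
    carries almost all the weight (severe degeneracy).
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()              # ensure normalization
    return 1.0 / np.sum(w**2)

w_uniform = np.full(100, 1 / 100)
w_degenerate = np.r_[0.99, np.full(99, 0.01 / 99)]
print(effective_sample_size(w_uniform))     # ~100
print(effective_sample_size(w_degenerate))  # ~1
```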
Resampling Resampling eliminates particles with low weights and multiplies particles with high weights. It maps the random measure $\{x_k^i, w_k^i\}$ to $\{x_k^{i*}, 1/N\}$ (uniform weights). Sample with replacement such that $P(x_k^{i*} = x_k^i) = w_k^i$.
Resampling [Figure: weights $w^1, w^2, \dots, w^N$ drawn with a roulette wheel vs. with equally spaced pointers] Roulette wheel: binary search, $O(N \log N)$. Stochastic universal sampling (systematic resampling): linear time complexity $O(N)$, easy to implement, low variance.
Low-Variance Resampling Algorithm
Algorithm systematic_resampling(S, N):
1. S' = ∅, c_1 = w^1
2. For i = 2 … N                          Generate CDF
3.     c_i = c_{i-1} + w^i
4. u_1 ~ U[0, 1/N], i = 1                 Initialize threshold
5. For j = 1 … N                          Draw samples
6.     While (u_j > c_i)                  Skip until next threshold reached
7.         i = i + 1
8.     S' = S' ∪ {<x^i, 1/N>}             Insert
9.     u_{j+1} = u_j + 1/N                Increment threshold
10. Return S'
Also called stochastic universal sampling
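A runnable Python version of the same procedure (a sketch; the variable names are mine). Compared with independent multinomial draws, the single random offset and fixed 1/N spacing give low variance: every particle with weight at least 1/N is guaranteed to survive.

```python
import numpy as np

def systematic_resampling(particles, weights, rng=None):
    """Low-variance (systematic / stochastic universal) resampling.

    Draws a single offset u_1 ~ U[0, 1/N], then walks the weight CDF
    in fixed steps of 1/N. Runs in O(N) time and returns resampled
    particles with uniform weights 1/N.
    """
    rng = np.random.default_rng() if rng is None else rng
    particles = np.asarray(particles)
    N = len(weights)
    c = np.cumsum(weights)          # steps 2-3: generate the CDF
    c[-1] = 1.0                     # guard against floating-point round-off
    u = rng.uniform(0.0, 1.0 / N)   # step 4: initialize threshold
    new_particles = np.empty_like(particles)
    i = 0
    for j in range(N):              # step 5: draw N samples
        while u > c[i]:             # steps 6-7: skip to next threshold
            i += 1
        new_particles[j] = particles[i]   # step 8: insert
        u += 1.0 / N                # step 9: increment threshold
    return new_particles, np.full(N, 1.0 / N)
```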
Idea behind low-variance sampling [Figure: N equally spaced pointers, 1/N apart, starting at the random offset $u_1$, laid over the cumulative weights]
Comments on Resampling Pros Reduces the effects of degeneracy Cons Limits the opportunity to parallelize the implementation, since all particles must be combined Particles with high importance weights are replicated, which leads to loss of diversity among particles, a.k.a. sample impoverishment Diversity of particle paths is reduced; any smoothed estimate based on particle paths degenerates
Trajectory Degeneracy Due to Resampling Implicitly, each particle represents a guess at the realization of the state sequence $x_{0:k}$. [Figure: particle lineages A, B, C, D traced over time steps k = 0, …, 8] The resampling step causes some particle lineages to die out, and the surviving trajectories can eventually collapse to a single source node.
Selection of Importance Density Selection of the importance density q(·) is the most critical issue in the design of a PF. The optimal importance density, the one that minimizes the variance of the importance weights conditioned upon $x_{k-1}^i$ and $z_k$, is $q(x_k \mid x_{k-1}^i, z_k)_{opt} = p(x_k \mid x_{k-1}^i, z_k)$. This yields the optimal weight update $w_k^i \propto w_{k-1}^i \, p(z_k \mid x_{k-1}^i)$. A. Doucet, S. Godsill, and C. Andrieu, "On sequential Monte Carlo sampling methods for Bayesian filtering," Statistics and Computing, vol. 10, no. 3, pp. 197-208, 2000.
Comments on Optimal Importance Density In order to use the optimal importance density, one has to be able to: i) sample from $p(x_k \mid x_{k-1}^i, z_k)$; ii) evaluate, up to a normalizing constant, $p(z_k \mid x_{k-1}^i) = \int p(z_k \mid x_k)\, p(x_k \mid x_{k-1}^i)\, dx_k$. In general, neither i) nor ii) is straightforward.
Suboptimal Importance Density The most popular suboptimal choice is the transitional prior $q(x_k \mid x_{k-1}^i, z_k) = p(x_k \mid x_{k-1}^i)$. This yields the suboptimal weight update $w_k^i \propto w_{k-1}^i \, p(z_k \mid x_k^i)$.
Comments on Suboptimal Importance Density Pros Importance weights can be easily evaluated Importance density can be easily sampled Cons With $q_{opt}$, weights are computed before the particles are propagated to time k; this is not possible with the suboptimal transitional prior. The state space is explored without any knowledge of the observation, hence the filter can be inefficient.
Particle Filter Algorithm (Sub-optimal version)
Algorithm particle_filter(S_{t-1}, u_{t-1}, z_t):
1. S_t = ∅, η = 0
2. For i = 1 … N                                      Generate new samples
3.     Sample x_t^i from p(x_t | x_{t-1}^i, u_{t-1}) using x_{t-1}^i and u_{t-1}
4.     w_t^i = p(z_t | x_t^i)                         Compute importance weight
5.     η = η + w_t^i                                  Update normalization factor
6. For i = 1 … N
7.     w_t^i = w_t^i / η                              Normalize weights
8. Resample x_t^{i*} from the discrete distribution given by w_t
9. Return S_t
Particle Filter Algorithm (Sub-optimal version) Importance factor for $x_t^i$: draw $x_{t-1}^i$ from $Bel(x_{t-1})$, then draw $x_t^i$ from $p(x_t \mid x_{t-1}^i, u_{t-1})$; the weight is $w_t^i = \dfrac{\text{target}}{\text{proposal}} = \dfrac{\eta\, p(z_t \mid x_t)\, p(x_t \mid x_{t-1}, u_{t-1})\, Bel(x_{t-1})}{p(x_t \mid x_{t-1}, u_{t-1})\, Bel(x_{t-1})} \propto p(z_t \mid x_t)$
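A minimal runnable sketch of one iteration of this sub-optimal (bootstrap/SIR) filter on a 1-D random walk; the motion and measurement models and all noise parameters are illustrative assumptions, not from the slides:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

def particle_filter_step(particles, u, z, motion_std=0.5, meas_std=1.0):
    """One SIR iteration with the transition prior as proposal.

    Toy 1-D models (illustrative assumptions):
      motion:      x_t = x_{t-1} + u + N(0, motion_std^2)
      measurement: z_t = x_t     +     N(0, meas_std^2)
    """
    N = len(particles)
    # Prediction: sample from the proposal p(x_t | x_{t-1}^i, u_{t-1})
    particles = particles + u + rng.normal(0.0, motion_std, size=N)
    # Correction: weight by the likelihood p(z_t | x_t^i)
    w = norm.pdf(z, loc=particles, scale=meas_std)
    w /= w.sum()                              # normalize
    # Resample (multinomial here; systematic_resampling above is preferred)
    idx = rng.choice(N, size=N, p=w)
    return particles[idx]

particles = rng.uniform(-10, 10, size=1000)   # initial belief
particles = particle_filter_step(particles, u=1.0, z=3.2)
print(particles.mean())                       # posterior mean estimate
```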
One Iteration of SIR PF [Figure: unweighted prediction samples, the measurement likelihood, and the resulting weighted sample set]
Monte Carlo Localization Each particle is a pose (trajectory) hypothesis Proposal is the motion model Correction via the observation model Courtesy: Cyrill Stachniss
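A compact sketch of one MCL iteration on a planar pose (x, y, θ). The unicycle-style motion model, the range-to-beacon measurement model, and every noise value here are hypothetical choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 500
particles = np.column_stack([
    rng.uniform(-5, 5, N),             # x
    rng.uniform(-5, 5, N),             # y
    rng.uniform(-np.pi, np.pi, N),     # theta
])
weights = np.full(N, 1.0 / N)

def mcl_step(particles, weights, u, z, beacon,
             trans_std=0.1, rot_std=0.05, range_std=0.3):
    """One MCL iteration: odometry-style proposal + beacon-range weighting.

    u = (v, w): forward and angular displacement for this step
    z:          measured range to a beacon at known position `beacon`
    (All models and noise values are illustrative assumptions.)
    """
    v, w = u
    n = len(particles)
    # Proposal: propagate each pose through a noisy unicycle motion model
    theta = particles[:, 2] + w + rng.normal(0, rot_std, n)
    particles[:, 0] += (v + rng.normal(0, trans_std, n)) * np.cos(theta)
    particles[:, 1] += (v + rng.normal(0, trans_std, n)) * np.sin(theta)
    particles[:, 2] = theta
    # Correction: Gaussian likelihood of the measured beacon range
    pred_range = np.hypot(particles[:, 0] - beacon[0],
                          particles[:, 1] - beacon[1])
    weights = weights * np.exp(-0.5 * ((z - pred_range) / range_std) ** 2)
    weights /= weights.sum()
    return particles, weights

particles, weights = mcl_step(particles, weights, u=(1.0, 0.1),
                              z=4.2, beacon=(3.0, 4.0))
```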
Particle Filter for Localization Courtesy: Cyrill Stachniss
Odometry Motion Model
Beacon Measurement Model
Particle Filters
Sensor Information: Importance Sampling
Robot Motion
Sensor Information: Importance Sampling
Robot Motion
Global Localization Example
Motion Model Reminder [Figure: sampled poses spreading from the start position]
Proximity Sensor Model Reminder [Figures: measurement likelihoods for a laser sensor and a sonar sensor]
initialization Courtesy: Thrun, Burgard, Fox
observation Courtesy: Thrun, Burgard, Fox
resampling Courtesy: Thrun, Burgard, Fox
motion update Courtesy: Thrun, Burgard, Fox
measurement Courtesy: Thrun, Burgard, Fox
weight update Courtesy: Thrun, Burgard, Fox
resampling Courtesy: Thrun, Burgard, Fox
motion update Courtesy: Thrun, Burgard, Fox
measurement Courtesy: Thrun, Burgard, Fox
weight update Courtesy: Thrun, Burgard, Fox
resampling Courtesy: Thrun, Burgard, Fox
motion update Courtesy: Thrun, Burgard, Fox
measurement Courtesy: Thrun, Burgard, Fox
weight update Courtesy: Thrun, Burgard, Fox
resampling Courtesy: Thrun, Burgard, Fox
motion update Courtesy: Thrun, Burgard, Fox
measurement Courtesy: Thrun, Burgard, Fox
Importance Sampling with Resampling: Landmark Detection Example
Summary Particle Filters Particle filters are non-parametric, recursive Bayes filters. The posterior is represented by a set of weighted samples. The proposal is used to draw the samples for t+1, and the weights account for the differences between the proposal and the target. Particle filters work well in low-dimensional spaces.
Summary PF Localization Particles are propagated according to the motion model. They are weighted according to the likelihood of the observation. This approach is called Monte Carlo localization (MCL). MCL is the gold standard for mobile robot localization today.
Next Lecture EKF Feature-Based SLAM. Read ProbRob Chapter 10. Smith, Self, Cheeseman: the paper that started it all. Dissanayake: convergence properties of EKF SLAM. SLAM Tutorial I. SLAM Tutorial II.