Robotics. Lecture 5: Monte Carlo Localisation. See course website for up-to-date information.


Robotics Lecture 5: Monte Carlo Localisation. See the course website http://www.doc.ic.ac.uk/~ajd/robotics/ for up-to-date information. Andrew Davison, Department of Computing, Imperial College London.

Review: Probabilistic Motion and Sensing (Practical 4). The preliminaries for Monte Carlo Localisation: updating a probability distribution, represented by particles, to capture motion uncertainty. We want to see the particles spreading out in a realistic manner to represent uncertainty over a square trajectory, and to hear how you found the parameter settings that achieve this. Also: waypoint-based navigation, and investigating the performance of the sonar sensor probabilistically.

Motion Prediction We know that uncertainty grows during blind motion. So when the robot makes a movement, the particle distribution needs to shift its mean position but also spread out. We achieve this by passing the state part of each particle through a function which has a deterministic component and a random component.

Motion Prediction. During a straight-line period of motion of distance D:

x_new = x + (D + e) cos θ
y_new = y + (D + e) sin θ
θ_new = θ + f

During a pure rotation of angle α:

x_new = x
y_new = y
θ_new = θ + α + g

Here e, f and g are zero-mean noise terms, i.e. random numbers sampled from a Gaussian distribution. The spreading out comes from sampling a different set of random numbers for each particle.
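These prediction equations can be sketched in Python; the noise standard deviations used here are placeholder values, not prescribed ones, and would be tuned in the practical.

```python
import math
import random

def predict_straight(particle, D, sigma_e=0.5, sigma_f=0.02):
    # particle = (x, y, theta); D = commanded straight-line distance.
    # e and f are zero-mean Gaussian noise terms, sampled per particle.
    x, y, theta = particle
    e = random.gauss(0.0, sigma_e)   # error in distance travelled
    f = random.gauss(0.0, sigma_f)   # heading drift during the move
    return (x + (D + e) * math.cos(theta),
            y + (D + e) * math.sin(theta),
            theta + f)

def predict_rotation(particle, alpha, sigma_g=0.02):
    # Pure rotation by alpha; g is zero-mean Gaussian noise on the turn.
    x, y, theta = particle
    g = random.gauss(0.0, sigma_g)
    return (x, y, theta + alpha + g)
```

Calling one of these once per particle, with fresh noise samples each time, is what makes the cloud spread out.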

Review: Probabilistic Motion and Sensing (Practical 4). Although your robot may have been very precise in the square experiment we did in week 2, it is best to be conservative with the uncertainty values you use in a probabilistic filter such as MCL. Especially because the robots are now driving on variable carpet, odometry precision is likely to be lower, and you should make sure your particles spread out enough to represent this. The same goes for the sonar sensor: in a real navigation setting there may be more variation (e.g. in how it responds to different surfaces, angles, etc.) than in your controlled experiments. In this practical we explicitly asked you to use fixed-size steps (10 cm and 90°), so you just need to find single values for the variances of each of e, f and g in the particle motion prediction equations. If you later want to use varying distance and angle step sizes for particle filter updates, you can introduce scale constants representing the growth in uncertainty per cm or per degree. Be careful, though: it is variance, not standard deviation, that should vary linearly with distance or angle. This ensures that you reach the same position uncertainty after one long movement as after two short movements of half the distance.

Monte Carlo Localisation (MCL). In Practical 5 we will aim at repeatable localisation within an environment like this, which we will set up with boards in the lab. [Map figure: an arena with boundary segments labelled A to H and points a to h, plus the robot's world coordinate frame O with axes x, y and heading θ.]

Monte Carlo Localisation (MCL). In MCL, a cloud of weighted particles represents the uncertain position of a robot. There are two ways of thinking about how MCL works: as a Bayesian probabilistic filter, or as survival of the fittest, like a genetic algorithm. After motion prediction, the particles are in a set of random positions which should span the possible positions into which the robot might really have moved. When a measurement is made from sonar, each particle is assigned a likelihood according to how well it fits the measurement; this likelihood acts as a fitness score. Normalising and resampling then allow strong particles to reproduce (multiple copies are made of them), while weak particles die out.

MCL: Successful Large-Scale Implementation (Dieter Fox et al., 1999, using sonar. See the animated gif at http://www.doc.ic.ac.uk/~ajd/robotics/roboticsresources/montecarlolocalization.gif.)

The Particle Distribution. A particle is a point estimate x_i of the state (position) of the robot, together with a weight w_i:

x_i = (x_i, y_i, θ_i)

The full particle set is {x_i, w_i} for i = 1 to N. A typical value might be N = 100. All weights should add up to 1; if so, the distribution is said to be normalised:

Σ_{i=1}^N w_i = 1

Interpretation: the probability that the robot is within any region (multi-dimensional volume) of the state space is the sum of the weights of all particles within that region.

Continuous vs. Global Localisation. Continuous localisation is a tracking problem: given a good estimate of where the robot was at the last time-step and some new measurements, estimate its new position. Global localisation is often called the "kidnapped robot problem": the environment is known, but the robot's position within it is completely uncertain. MCL can be used to tackle both of these problems; the only difference is in how we initialise the particle set.

Continuous vs. Global Localisation. In continuous localisation we usually assume that we start from a perfectly known robot position: set the state of all particles to the same value, and set all weights to be equal. The result is a spike: a completely certain point estimate of the robot's location.

x_1 = x_2 = ... = x_N = x_init;  w_1 = w_2 = ... = w_N = 1/N

In global localisation we start from the knowledge only that the robot is somewhere within a certain region. The state of each particle should be sampled randomly from all of the possible positions within that region, and all weights set to be equal.

x_i = random pose within the region;  w_1 = w_2 = ... = w_N = 1/N

With only one sonar sensor, global localisation is very difficult, so in the practical we will attempt continuous localisation, initialising all particles to the same location.
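The two initialisations can be sketched as follows; N, the start pose and the 200 cm × 200 cm region bounds are illustrative assumptions, not values prescribed by the course.

```python
import math
import random

N = 100

# Continuous localisation: every particle starts at the known pose.
x_init = (0.0, 0.0, 0.0)                     # (x, y, theta)
particles = [x_init for _ in range(N)]
weights = [1.0 / N] * N                      # equal weights, summing to 1

# Global localisation: sample poses uniformly over the known region,
# here a hypothetical 200cm x 200cm area with arbitrary heading.
particles_global = [(random.uniform(0.0, 200.0),
                     random.uniform(0.0, 200.0),
                     random.uniform(-math.pi, math.pi))
                    for _ in range(N)]
```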

Inferring an Estimate and Position-Based Navigation. At any point, our uncertain estimate of the location of the robot is represented by the whole particle set. If we need to, we can make a point estimate of the current position and orientation of the robot by taking the weighted mean of all of the particles:

x̄ = Σ_{i=1}^N w_i x_i

i.e. the means of the x, y and θ components are calculated individually and stored in x̄. The robot could use this, for instance, to plan A-to-B waypoint navigation.
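A sketch of this point estimate: x and y are averaged directly, while θ is averaged via unit vectors (an atan2 trick, one common way to keep angular wrap-around from corrupting the mean).

```python
import math

def point_estimate(particles, weights):
    # Weighted mean of the particle set. particles are (x, y, theta)
    # tuples; weights are assumed normalised.
    x_mean = sum(w * p[0] for p, w in zip(particles, weights))
    y_mean = sum(w * p[1] for p, w in zip(particles, weights))
    # Average theta as a direction: mean of sin and cos, then atan2.
    s = sum(w * math.sin(p[2]) for p, w in zip(particles, weights))
    c = sum(w * math.cos(p[2]) for p, w in zip(particles, weights))
    return (x_mean, y_mean, math.atan2(s, c))
```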

Steps in MCL/Particle Filter. These steps are repeated every time the robot moves a little and makes measurements:
1. Motion prediction based on odometry
2. Measurement update based on outward-looking sensors (sonar)
3. Normalisation
4. Resampling

Motion Prediction. See details from the last lecture. Watch out for angular wrap-around, i.e. make sure that θ values are always in the range −π to π. Solve this by simply adding or subtracting 2π for any particles which go out of this range. You may also need to take special care before taking a mean of particles, ensuring that the wrap-around point does not fall between nearby particle angles.
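A minimal helper for the wrap-around, mapping any angle back into [−π, π):

```python
import math

def wrap_angle(theta):
    # Equivalent to repeatedly adding or subtracting 2*pi until the
    # angle lies in [-pi, pi).
    return (theta + math.pi) % (2.0 * math.pi) - math.pi
```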

Displaying a Particle Set. We can visualise the particle set by plotting the x and y coordinates as a set of dots; it is more difficult to visualise the θ angular distribution (perhaps with arrows?), but we can get the main idea just from the linear components. This example shows a robot using only repeated motion prediction: the particles get more and more spread out.

Measurement Updates. A measurement update consists of applying Bayes' rule to each particle. Remember:

P(X|Z) = P(Z|X) P(X) / P(Z)

So when we obtain a measurement z, we update the weight of each particle as follows:

w_i(new) = P(z|x_i) × w_i

remembering that the denominator in Bayes' rule is a constant factor which we do not need to calculate, because it will later be removed by normalisation. P(z|x_i) is the likelihood of particle i: the probability of getting measurement z given that x_i represents the true state.
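The per-particle update is then a single multiplication. In this sketch, likelihood is any function you supply that returns P(z | x_i); the toy likelihood in the usage below is purely illustrative.

```python
def measurement_update(particles, weights, z, likelihood):
    # Multiply each weight by the particle's likelihood p(z | x_i).
    # The constant denominator P(z) is omitted; normalisation later
    # removes any overall scale anyway.
    return [w * likelihood(z, p) for p, w in zip(particles, weights)]
```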

Likelihood Functions p(z|v). A likelihood function fully describes a sensor's performance. Its form can be determined by repeated experiments with the sensor: make many measurements at each of a set of precisely measured ground truth values, and calculate the statistics. p(z|v) is a function of both the measurement variable z and the ground truth v, and can be plotted as a probability surface, e.g. for a depth sensor. [Plots of p(z|v) for three cases: constant uncertainty about the line z = v; uncertainty growing with v; and systematic error (biased), e.g. peaked along z = 1.1v.]

Measurement Update: How do we get the Ground Truth Value? [Figure: robot at pose (x, y, θ) with forward distance m to a wall through (Ax, Ay) and (Bx, By).] If the robot is at pose (x, y, θ), then its forward distance to an infinite wall passing through (Ax, Ay) and (Bx, By) is:

m = [(By − Ay)(Ax − x) − (Bx − Ax)(Ay − y)] / [(By − Ay) cos θ − (Bx − Ax) sin θ]
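The formula translates directly into code. A sketch: a zero denominator means the ray is parallel to the wall, and a negative m means the wall line is behind the robot, so callers would check for both.

```python
import math

def wall_distance(x, y, theta, Ax, Ay, Bx, By):
    # Forward distance from pose (x, y, theta) to the infinite line
    # through the wall endpoints (Ax, Ay) and (Bx, By).
    num = (By - Ay) * (Ax - x) - (Bx - Ax) * (Ay - y)
    den = (By - Ay) * math.cos(theta) - (Bx - Ax) * math.sin(theta)
    return num / den
```

For example, a robot at the origin facing along x, with a vertical wall at x = 2, should see m = 2.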

Measurement Update: Sonar. [Figure: the robot's forward ray of length m meeting the wall from (Ax, Ay) to (Bx, By).] The world coordinates at which the forward vector from the robot will meet the wall are:

(x + m cos θ, y + m sin θ)

Using this you can check whether the sonar ray hits between the endpoint limits of the wall. If the sonar ray would independently hit several of these walls, obviously the closest is the one it will actually respond to.
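A sketch of the endpoint check (the function names are mine, not from the course code); of all walls that pass it, a ground-truth predictor would keep the smallest positive m.

```python
import math

def hit_point(x, y, theta, m):
    # World coordinates where the robot's forward ray meets the wall line.
    return (x + m * math.cos(theta), y + m * math.sin(theta))

def hits_segment(px, py, Ax, Ay, Bx, By, tol=1e-9):
    # The intersection lies on the actual wall only if it is between
    # the two endpoints (checked with a small tolerance).
    return (min(Ax, Bx) - tol <= px <= max(Ax, Bx) + tol and
            min(Ay, By) - tol <= py <= max(Ay, By) + tol)
```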

Likelihood for Sonar Update. The likelihood should depend on the difference z − m: if this is small, the measurement validates a particle; if it is big, it weakens the particle. The further the measurement is from the prediction, the less likely it is to occur. A Gaussian function is usually a good model for the mathematical form of this. The standard deviation σ_s is based on our model of how uncertain the sensor is, and may depend on z or may be constant:

p(z|m) ∝ e^(−(z − m)² / (2σ_s²))

Note the difference between this and the motion prediction step. There we sampled randomly from a Gaussian distribution to move each particle by a slightly different amount. Here, in the measurement update, we just read off a value from a Gaussian function to obtain a likelihood for each particle.

Robust Likelihood for Sonar Update. A robust likelihood function models the fact that real sensors sometimes report garbage values which are not close to ground truth. Robust functions have heavy tails. This can be achieved most simply by adding a constant K to the likelihood function:

p(z|m) ∝ e^(−(z − m)² / (2σ_s²)) + K

The meaning of this is that there is some constant probability that the sensor will return a garbage value, uniformly distributed across the range of the sensor. [Plot: a Gaussian peak at z = m raised above a constant floor K.] The effect of a robust likelihood function in MCL is that the filter is less aggressive in killing off particles which are far from agreeing with measurements. An occasional garbage measurement will not lead to the sudden death of all of the particles in good positions.
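Both the plain and robust likelihoods can be read off from one function; σ_s and K here are placeholder values to be tuned, and K = 0 recovers the plain Gaussian.

```python
import math

def sonar_likelihood(z, m, sigma_s=2.5, K=0.0):
    # Unnormalised Gaussian in (z - m), plus a constant tail K that
    # models occasional garbage readings anywhere in the sonar's range.
    return math.exp(-((z - m) ** 2) / (2.0 * sigma_s ** 2)) + K
```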

Likelihood for Sonar Update. [Figure: the sonar ray at angle β to the normal of the map line from (Ax, Ay) to (Bx, By).] Also relevant may be the angle β between the sonar direction and the normal to the wall. If this is too great, the sonar will probably not give a sensible reading and you should ignore it:

β = cos⁻¹[ (cos θ (Ay − By) + sin θ (Bx − Ax)) / √((Ay − By)² + (Bx − Ax)²) ]
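A sketch of the incidence-angle test. The 30° threshold is a hypothetical choice of mine; and since β as computed depends on the ordering of the wall endpoints, this sketch compares min(β, π − β) against the threshold.

```python
import math

def incidence_angle(theta, Ax, Ay, Bx, By):
    # Angle between the sonar (robot forward) direction and the wall
    # normal, clamped before acos to guard against rounding.
    num = math.cos(theta) * (Ay - By) + math.sin(theta) * (Bx - Ax)
    den = math.hypot(Ay - By, Bx - Ax)
    return math.acos(max(-1.0, min(1.0, num / den)))

def sonar_reading_valid(beta, max_beta=math.radians(30.0)):
    # Discard readings that strike the wall too obliquely.
    return min(beta, math.pi - beta) <= max_beta
```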

Normalisation. The weights of all particles should be scaled so that they again add up to 1. Calculate the total Σ_{i=1}^N w_i and divide all existing weights by it:

w_i(new) = w_i / Σ_{j=1}^N w_j

Resampling. Resampling consists of generating a new set of N particles which all have equal weights 1/N, but whose spatial distribution now reflects the probability density. To generate each of the N new particles, we copy the state of one of the previous particles, with probability given by that particle's weight. This is best achieved by generating the cumulative probability distribution of the particles, drawing a random number between 0 and 1, and picking the particle whose segment of the cumulative distribution this number falls into. Note that for efficiency it is possible to skip the normalisation step and resample directly from an unnormalised distribution, drawing the random number between 0 and the total weight instead.
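A sketch of normalisation and cumulative-distribution resampling, using bisect for the lookup into the cumulative weights:

```python
import bisect
import random

def normalise(weights):
    # Scale weights so they sum to 1 again.
    total = sum(weights)
    return [w / total for w in weights]

def resample(particles, weights):
    # Build the cumulative distribution, then draw each new particle by
    # seeing where a uniform random number falls in it. Works with an
    # unnormalised distribution too, since we draw from [0, total].
    N = len(particles)
    cum = []
    running = 0.0
    for w in weights:
        running += w
        cum.append(running)
    new_particles = []
    for _ in range(N):
        r = random.uniform(0.0, cum[-1])
        # First particle whose cumulative weight reaches r.
        new_particles.append(particles[bisect.bisect_left(cum, r)])
    return new_particles, [1.0 / N] * N
```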

Another Measurement Possibility: Compass Sensor. A digital compass gives a robot the ability to estimate its rotation without drift. Having a digital compass and a single sonar sensor puts us in a position close to that of having a full sonar ring. In principle, the robot could stop and rotate on the spot every so often to simulate a sonar ring. We could simply monitor the compass as the robot continuously moves and feed this into our filter. But we won't try it today, because the NXT compasses don't work very well in the lab.

Measurement Update: Compass Sensor. The compass measures a bearing β relative to north. What is P(β|x_i)? Clearly the likelihood depends only on the θ part of x_i. If the compass is working perfectly, then β = γ − θ, where γ is the magnetic bearing of the x coordinate axis of the world frame W. So we should assess the uncertainty in the compass, and set a likelihood which depends on the difference between β and γ − θ, e.g. a Gaussian:

p(β|x_i) ∝ e^(−(β − (γ − θ))² / (2σ_c²))

Particles far from the right orientation will get low weights.
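A sketch of this compass likelihood; σ_c is a placeholder value, and the bearing difference is wrapped into [−π, π) before squaring so that nearly-equal bearings on either side of the wrap point still score highly.

```python
import math

def compass_likelihood(beta, theta, gamma, sigma_c=0.1):
    # Gaussian likelihood on the difference between the measured
    # bearing beta and the predicted bearing gamma - theta.
    d = beta - (gamma - theta)
    d = (d + math.pi) % (2.0 * math.pi) - math.pi   # wrap to [-pi, pi)
    return math.exp(-(d ** 2) / (2.0 * sigma_c ** 2))
```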