Mini-Project II. Monte Carlo Localisation for mobile robots


The University of Birmingham
School of Computer Science
MSc in Advanced Computer Science

Mini-Project II
Monte Carlo Localisation for mobile robots

Marek Kopicki
Supervisor: Dr Jeremy Wyatt
April 26, 2004

Abstract

Localisation is a non-trivial problem in mobile robotics: the robot must localise itself given only a map of the environment and its sensor readings. This knowledge is the basis the robot needs before it can complete higher-level tasks such as path planning. Monte Carlo Localisation (MCL) is a probabilistic algorithm that has been successfully applied in many mobile robot systems. The core of the system implementing the MCL algorithm is T1000, a Java-written robot application platform which cooperates with the Saphira robot control system developed at SRI International's Artificial Intelligence Center. T1000 is a client application communicating over a TCP/IP network with a Windows/Linux server for Saphira, which makes it possible to run and test several robot applications at the same time. Thanks to an affine transformation of the robot's sensor measurements, the algorithm does not have to be synchronised with incoming sensor data and can process several measurements at a time. Moreover, measurements can be processed at any convenient time, which simplifies the use of any preselection algorithm. In addition, an improved particle initialisation can reduce the number of particles without decreasing the effectiveness of the basic MCL algorithm. Even though the system has not yet been tested on real robots, the experiments with the Pioneer robot simulator are promising, and the system appears comparable to the original MCL software shipped with Saphira.

Keywords: Localisation, Monte Carlo Localisation, particle filters, position estimation, robot navigation, motion model, sensor model.

Acknowledgements

I would like to take this opportunity to thank my project supervisor Dr Jeremy Wyatt, who painstakingly guided me during this mini-project. Without his insightful suggestions and the enlightening discussions I had with him, I would not have been able to complete it.

Contents

Abstract
Keywords
Acknowledgements
Table of contents
List of Figures
List of Tables

1 Preface

2 Monte Carlo Localisation
  2.1 Localisation problem
  2.2 Bayesian filtering
  2.3 Belief implementations
    2.3.1 Kalman Filters
    2.3.2 Multi-hypothesis Tracking (MHT)
    2.3.3 Grid-based approaches (Markov Localisation)
    2.3.4 Topological approaches
    2.3.5 Particle filters
  2.4 How does it work?
    2.4.1 Sampling importance resampling (SIR)

3 Probabilistic Models for Bayesian Filters
  3.1 Motion model
  3.2 Sensor model
    3.2.1 Modelling of sensor noise
    3.2.2 Integration of sensor measurements
  3.3 World model

4 Conclusions

A Robot motion
  A.1 Position estimation
  A.2 Position integration
  A.3 Extended robot motion

B A few notions from statistics
  B.1 Measurements and errors
  B.2 Random variables
  B.3 Movement Errors

C The software platform
  C.1 Overview
    C.1.1 The server architecture
    C.1.2 The client architecture
  C.2 Installation and usage instructions
    C.2.1 Files description
    C.2.2 General requirements
    C.2.3 The server
    C.2.4 The client

D Mini-project declaration

E Statement of information search strategy
  1 Parameters of literature search
    1.1 Forms of literature
    1.2 Geographical and language coverage
    1.3 Age-range of literature
  2 Appropriate search tools
    2.1 Engineering Index
    2.2 Science Citation Index (SCI)
    2.3 Dissertations Abstracts International (DAI) and Index to Theses
  3 Search statements
  4 Brief evaluation of the search

Bibliography

List of Figures

2.1 Monte Carlo localisation algorithm successfully localises the robot
3.1 The probabilistic evolution of the robot pose
3.2 Sensor readings of a moving robot
3.3 Sensor model geometry
3.4 Sensor noise models
3.5 Affine transformation of sensor measurements
3.6 The world map
3.7 The improved initialisation of particles
4.1 The Monte Carlo localisation on a large map
A.1 Robot displacement
A.2 Realistic robot motion

List of Tables

2.1 Continuous implementations of belief
2.2 Discrete implementations of belief
C.1 The T1000 applications
C.2 The properties of the T1000 applications

Chapter 1

Preface

Before futurology eventually lost its former status in our century, most of its representatives foresaw a bright future for robots in our lives. The role of futurology was eventually taken over by science, which unveiled, unfortunately, that the discrepancies between fantasy and technical feasibility, and often even theory, are very difficult to overcome. The field of science is now so wide that making substantial progress in any discipline has become a real challenge. Even though building robots with human-like capabilities is out of reach at the moment, one can point to some successful application areas for robots, with the famous Mars rover, a semi-autonomous mobile robot, at the forefront. But is the era of intelligent home servant robots really approaching?

The main subject of this report is the robot localisation problem, which is the problem of estimating the robot's location given only a map of the environment and sensor readings. The robot's location is fundamental knowledge that the robot must possess in order to complete higher-level tasks such as path planning. The report analyses Monte Carlo Localisation (MCL), a probabilistic algorithm successfully applied in many mobile robot systems [13][5][3]. Because of its probabilistic nature, the algorithm performs well even in dynamically changing environments, where moving obstacles permanently disturb sensor readings.

The core of the system that implements the MCL algorithm is the T1000 Java-written robot application platform (see appendix C), cooperating with Saphira and Aria, robot control systems developed at SRI International's Artificial Intelligence Center and ActivMedia Robotics [1]. T1000 is intended to be a universal and easy-to-use platform for developing miscellaneous algorithms for intelligent robots. T1000 is a client application that communicates over a TCP/IP network with a multi-client server, which makes it possible to run and test several robot applications at the same time. The server is implemented as a Windows/Linux dynamic library loadable by Saphira's Colbert environment.

Unfortunately, because I have not yet been able to test the software on real robots, it is difficult to report any experimental results. The only exception is the Pioneer robot simulator [1], where the results are promising and the system appears comparable to the original MCL software shipped with Saphira. The whole project, however, is only a part of the summer project, in which I intend to extend the system, probably with mapping algorithms, and do much more experimental work.

Chapter 2

Monte Carlo Localisation

2.1 Localisation problem

The localisation problem is a general class comprising several problem instances, depending on the techniques used [13][5]. The simplest and best known is the position tracking problem. Here an estimate of the initial robot pose is known, and the problem amounts to compensating for the accumulating errors in the robot's odometry readings. Because position tracking plays an important role in the MCL algorithm, I discuss this topic separately in appendix A.

More difficult is the global localisation problem, or the wake-up robot problem, where the robot's initial pose is unknown. The robot must be able to deal with multiple hypotheses about its initial pose [13]. The methods that solve this kind of problem are called global techniques [5].

Finally, the most difficult is the kidnapped robot problem, where an already localised robot is suddenly transferred to some other place without being told. The kidnapped robot initially believes itself to be somewhere other than where it in fact is. Algorithms solving the kidnapped robot problem should also be able to solve the global localisation problem, which in turn can be considered a sub-class of the former.

Most existing algorithms deal only with the position tracking problem. The most efficient ones are Kalman Filters, which however cannot cope with the global localisation problem. Markov Localisation is an extension of Kalman Filters and solves this problem by a piecewise constant approximation over possible robot poses. Unfortunately, this method is computationally expensive, even though many improvements have been proposed. Monte Carlo Localisation appears to be the best solution to the global localisation problem so far. Moreover, the MCL algorithm can solve the kidnapped robot problem as well, while being more efficient and accurate than Markov Localisation.

2.2 Bayesian filtering

The majority of approaches to the localisation problem are based on so-called Bayes Filters. A Bayes Filter recursively estimates the posterior distribution over the pose of the robot, based entirely on the collected sensor data. Typically, the pose of the robot is a two-dimensional Cartesian position plus the robot's heading direction. Bayes filters assume that the Markov assumption holds, i.e. the current state contains all the information necessary to predict a future state. The posterior or conditional distribution, also called the belief Bel(x_t), represents the robot's pose x_t (or, more generally, the state of a dynamical system) at time t:

$$Bel(x_t) = p(x_t \mid d_t, d_{t-1}, \ldots, d_0) \quad (2.1)$$

where $d_t, d_{t-1}, \ldots, d_0$ denote the sensor data collected from time 0 up to time t. In particular, picking any pose $x_t$ (at time t), the belief answers the question: "what is the probability that the robot is at the pose $x_t$ if the collected data is $d_t, d_{t-1}, \ldots, d_0$?". Assuming that the data consists of perceptual data (sonar or laser readings) denoted by o (observations), and odometry data denoted by a (actions), from equation 2.1 we have [13]:

$$Bel(x_t) = p(x_t \mid o_t, a_{t-1}, o_{t-1}, \ldots, a_0, o_0) \quad (2.2)$$

Equation 2.2 can be further transformed. According to Bayes' rule we obtain:

$$Bel(x_t) = \frac{p(o_t \mid x_t, a_{t-1}, \ldots, o_0)\, p(x_t \mid a_{t-1}, \ldots, o_0)}{p(o_t \mid a_{t-1}, \ldots, o_0)} \quad (2.3)$$

Note that the denominator remains constant when evaluating the belief $Bel(x_t)$ for various poses $x_t$, therefore we can rewrite equation 2.3 as:

$$Bel(x_t) = \eta\, p(o_t \mid x_t, a_{t-1}, \ldots, o_0)\, p(x_t \mid a_{t-1}, \ldots, o_0) \quad (2.4)$$

where

$$\eta = p(o_t \mid a_{t-1}, \ldots, o_0)^{-1} \quad (2.5)$$

Since the Markov assumption implies that the future state is independent of the past state if the current state is known, from equation 2.4 we obtain:

$$Bel(x_t) = \eta\, p(o_t \mid x_t)\, p(x_t \mid a_{t-1}, \ldots, o_0) \quad (2.6)$$

where

$$p(o_t \mid x_t, a_{t-1}, \ldots, o_0) = p(o_t \mid x_t) \quad (2.7)$$

The total probability theorem (see [6]) implies

$$Bel(x_t) = \eta\, p(o_t \mid x_t) \int p(x_t \mid x_{t-1}, a_{t-1}, \ldots, o_0)\, p(x_{t-1} \mid a_{t-1}, \ldots, o_0)\, dx_{t-1} \quad (2.8)$$

where we integrate over all states at time t-1. Exploiting the Markov assumption again, we can simplify the first term in the integral 2.8:

$$p(x_t \mid x_{t-1}, a_{t-1}, o_{t-1}, a_{t-2}, \ldots, o_0) = p(x_t \mid x_{t-1}, a_{t-1}) \quad (2.9)$$

From the belief definition 2.2, we find that the second term in the integral 2.8 equals:

$$Bel(x_{t-1}) = p(x_{t-1} \mid o_{t-1}, a_{t-2}, o_{t-2}, \ldots, a_0, o_0) = p(x_{t-1} \mid a_{t-1}, o_{t-1}, a_{t-2}, o_{t-2}, \ldots, a_0, o_0) \quad (2.10)$$

since the action $a_{t-1}$ taken at time t-1 does not influence the current state $x_{t-1}$, only the future state $x_t$. Finally, we obtain the recursive update equation:

$$Bel(x_t) = \eta\, p(o_t \mid x_t) \int p(x_t \mid x_{t-1}, a_{t-1})\, Bel(x_{t-1})\, dx_{t-1} \quad (2.11)$$
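To make the abstract recursion concrete, the following minimal sketch (in Java, the language of the T1000 platform) implements equation 2.11 over a discretised one-dimensional state space, i.e. a histogram filter. The motion kernel and observation likelihood passed in are illustrative placeholders, not the models developed in chapter 3.

```java
// Minimal histogram-filter sketch of the recursive update (equation 2.11)
// over a discretised 1-D state space. The motion and sensor inputs are
// illustrative placeholders, not the models of chapter 3.
public final class BayesFilter {

    /** One step: Bel(x_t) = eta * p(o_t|x_t) * sum_{x'} p(x_t|x',a) Bel(x'). */
    static double[] update(double[] belief, double[][] motion, double[] likelihood) {
        int n = belief.length;
        double[] predicted = new double[n];
        for (int xt = 0; xt < n; xt++)           // the integral over x_{t-1} becomes a sum
            for (int prev = 0; prev < n; prev++)
                predicted[xt] += motion[prev][xt] * belief[prev];

        double eta = 0.0;                        // normaliser eta (equation 2.5)
        double[] posterior = new double[n];
        for (int xt = 0; xt < n; xt++) {
            posterior[xt] = likelihood[xt] * predicted[xt];
            eta += posterior[xt];
        }
        for (int xt = 0; xt < n; xt++) posterior[xt] /= eta;
        return posterior;
    }
}
```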

Table 2.1: An overview of continuous implementations of belief Bel(x) in Bayesian Filters.

Kalman Filter
  Features: unimodal Gaussian distribution; first and second moment of the PDF; (non)linear dynamics and observations
  Pros: accuracy; efficiency; polynomial complexity in the state dimension
  Cons: sensor variety; position tracking problem only

Multi-hypothesis Tracking
  Features: multimodal Gaussian distribution; (non)linear dynamics and observations
  Pros: accuracy; robustness; polynomial complexity in the state dimension; global localisation problem
  Cons: sensor variety; implementation

The probability $p(x_t \mid x_{t-1}, a_{t-1})$ will further be referred to as the motion model, and the probability $p(o_t \mid x_t)$ as the sensor model. Since both models are usually assumed to be stationary, we can write:

$$p(x' \mid x, a) \;\; \text{(motion model)}, \qquad p(o \mid x) \;\; \text{(sensor model)} \quad (2.12)$$

The motion model, the sensor model and the belief representation are the subjects of the next chapters.

2.3 Belief implementations

Bayesian Filters are a very general abstract concept for recursive state estimation. The belief representation 2.1 is the key part of the update equation 2.11 and varies depending on the approach used. We can divide all representations into two groups: continuous representations (table 2.1) and discrete representations (table 2.2).

2.3.1 Kalman Filters

Kalman Filters are the most widely used variants of Bayesian Filters, especially for solving the pose tracking problem. The belief distribution is approximated only by its first and second moments (see definitions B.2 and B.4), which makes the Gaussian distribution an optimal choice (since all moments of order higher than two are determined by the first two; see [2]). Though Kalman Filters are very efficient (linear algebra operations), they can represent only unimodal beliefs with limited initial uncertainty. In practice, however, they work fine even in highly nonlinear systems, despite the underlying linear assumptions about the system's dynamics. (This assumption is relaxed in the extended version of Kalman Filters, which uses first-order Taylor series expansions to linearise the system.)

2.3.2 Multi-hypothesis Tracking (MHT)

In Multi-hypothesis Tracking the belief is represented by a linear combination of unimodal Gaussian distributions:

$$Bel(x_t) = \sum_i w_t^{(i)} \frac{1}{\sqrt{(2\pi)^n |\Lambda^{(i)}|}} \exp\left(-\frac{1}{2}\,(x_t - \mu^{(i)})^T (\Lambda^{(i)})^{-1} (x_t - \mu^{(i)})\right) \quad (2.13)$$

MHT tracks each single Gaussian hypothesis about possible robot poses (weighted by the $w^{(i)}$), overcoming in this way the main limitation of the Kalman Filter. Unfortunately, MHT needs sophisticated techniques and heuristics to solve the sensor data association problem and to decide when to add or remove hypotheses.

Table 2.2: An overview of discrete implementations of belief Bel(x) in Bayesian Filters.

Grid-based
  Features: discrete; piecewise constant approximation; non-linear dynamics and observations
  Pros: robustness; sensor variety; global localisation problem
  Cons: efficiency; exponential complexity in the state dimension

Topological
  Features: discrete; abstract state space; abstract dynamics and observations; one-dimensional graph
  Pros: robustness; global localisation problem
  Cons: accuracy

Particle filter
  Features: discrete; sample-based approximation; non-linear dynamics and observations
  Pros: robustness; accuracy; sensor variety; global localisation problem
  Cons: exponential complexity in the state dimension

2.3.3 Grid-based approaches (Markov Localisation)

In grid-based approaches the state space is divided into a regular network of adjacent cells, and the belief is represented by a value assigned to each cell. This allows grid approaches to model arbitrary distributions and, as a result, to solve the global localisation problem. Grid approaches are also relatively accurate and very robust against sensor noise. Their obvious disadvantage is the space and computational complexity, which grows exponentially with the number of space dimensions and may rule out applications of the algorithm in higher-dimensional spaces.

2.3.4 Topological approaches

The exponentially growing complexity of the grid-based approaches can be avoided by using a non-metric representation of space: graphs. Each node of the graph represents a certain location in the environment, while the edges are physical connections between them. (The topological approach is undoubtedly the most natural one of all.) Whereas topological approaches can be quite efficient, they are not very precise about spatial locations, as the number of states is limited.

2.3.5 Particle filters

The belief in particle filters is represented by a weighted set of n particles:

$$Bel(x) \approx \{x^{(i)}, w^{(i)}\}_{i=1,\ldots,n} \quad (2.14)$$

where the $x^{(i)}$ are poses, and the $w^{(i)}$ are weights (corresponding to PDF values), also called importance factors. In order to imitate a regular PDF, a particle set must satisfy the same conditions as a regular PDF does (see condition B.1; strictly speaking, the normalisation condition is not required, but it can be used to avoid over/underflow). Of course, it is not possible to sample at an arbitrary pose x from a distribution defined this way; instead we use the procedure called sampling importance resampling (the SIR algorithm) described in section 2.4.1.

The particle approximation is the core part of the MCL algorithm, which itself has seen a few improvements [13], and has proved to be the best algorithm known overall:

- the modelled system does not have to be linear
- particles can approximate virtually all possible distributions
- it can tackle both the global localisation and the pose tracking problem
- it copes well with noisy sensor data
- particles can concentrate only in interesting regions of the space
- CPU workload and accuracy are adjustable by changing the number of particles
- it is an easily applicable anytime algorithm

2.4 How does it work?

Implementing the MCL algorithm is not very difficult once we know how to realise a motion model (see 3.1) and a sensor model (see 3.2). Due to the discrete representation of the belief Bel(x), the MCL recursive equation 2.11 can be understood without detailed knowledge of the motion and sensor models. As we learnt in section 2.3.5, the belief Bel(x) is a PDF which returns the probability that the robot pose is x. In particle filters we cannot evaluate Bel(x) for an arbitrary x (this is, in fact, required in the Mixture-MCL algorithm [13], and can be attained by using kd-trees [12]); instead we have a set of poses $x^{(i)}$ and weights $w^{(i)}$, which are simply the values of Bel(x) at $x = x^{(i)}$. The set of particles $\{x_0^{(i)}, w_0^{(i)}\}$ for t = 0 represents the knowledge about the system at the start of the iteration of the recursive equation 2.11. If we do not have any information about the possible pose of the robot, all n particles are uniformly distributed on the map (see figure 2.1), and all weights have the same value 1/n. In the MCL algorithm, instead of integrating over the continuous space (equation 2.11), we repeat the following steps exactly n times:

1. Sample a particle $x_{t-1}^{(i)}$ from the belief distribution $Bel(x_{t-1})$ using the sampling importance resampling algorithm of section 2.4.1. In other words, the higher the weight $w_{t-1}^{(i)}$ of a particle, the larger the probability of picking that particle when sampling.

2. Apply the motion model $p(x_t^{(i)} \mid x_{t-1}^{(i)}, a_{t-1})$ to the sampled particle $x_{t-1}^{(i)}$. Intuitively, if the robot has moved by a distance d in a direction θ (as encoded in the action $a_{t-1}$), the particle $x_{t-1}^{(i)}$ is translated by the same distance and in the same direction (relative to the particle's own location and heading) to the new pose $x_t^{(i)}$, but with some uncertainty (see equation 3.9).

3. Apply the sensor model $p(o_t \mid x_t^{(i)})$ to the new particle pose $x_t^{(i)}$. In other words, calculate the probability of observing $o_t$ (what the robot observes at time t) at the pose $x_t^{(i)}$. (Note that because each pose consists of a location and a heading direction, the world may look completely different in different directions even at the same location.) Assign a new weight value to the i-th particle:

$$w_t^{(i)} = p(o_t \mid x_t^{(i)}) \quad (2.15)$$

After repeating the above steps n times, we obtain a new set of n particles $\{x_t^{(i)}, w_t^{(i)}\}$ comprising the new belief $Bel(x_t)$. Some of them will be quickly eliminated (small weights), whereas others will concentrate in regions where the probability of observing $o_t$ is higher (large weights). The effect of updating the belief Bel(x) according to the above procedure is shown in figure 2.1: the MCL algorithm quickly localises the robot on the map.

2.4.1 Sampling importance resampling (SIR)

In step 1 of the recursive procedure, we draw a single particle $x^{(i)}$ from the belief distribution Bel(x) with probability proportional to the particle's weight $w^{(i)}$. In the simplest approach we construct a cumulative density function CDF (a table of length n) from the particle set $\{x^{(i)}, w^{(i)}\}$ as below:

$$CDF = \begin{pmatrix} x^{(1)} & w^{(1)} \\ x^{(2)} & w^{(1)} + w^{(2)} \\ \vdots & \vdots \\ x^{(n)} & w^{(1)} + w^{(2)} + \ldots + w^{(n)} \end{pmatrix} \quad (2.16)$$

To draw a particle from the CDF defined this way, we:

- generate a random number $y \in [0, w^{(1)} + w^{(2)} + \ldots + w^{(n)})$ (one can normalise the particle set before constructing the CDF table, in which case $y \in [0, 1)$)
- use binary search to find the index i of a particle in the CDF table such that $c^{(i-1)} \le y < c^{(i)}$, where $c^{(i)}$ denotes the cumulative weight and $c^{(0)} = 0$
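In code, the whole update loop of section 2.4, together with the CDF-based drawing above, can be summarised in a short Java sketch. The MotionModel and SensorModel interfaces are assumed stand-ins for the concrete models of chapter 3; this is an illustration of the procedure, not the actual T1000 implementation.

```java
import java.util.Random;

// Sketch of one MCL update (section 2.4) with SIR drawing (section 2.4.1).
// A pose is a double[]{x, y, theta}; MotionModel and SensorModel stand in
// for the concrete models of chapter 3.
public final class MclStep {
    interface MotionModel { double[] sample(double[] previous); } // draws from p(x'|x,a)
    interface SensorModel { double weight(double[] pose); }       // evaluates p(o|x)

    static final Random rng = new Random();

    /** One update of the particle set {poses[i], weights[i]}, done in place. */
    static void update(double[][] poses, double[] weights,
                       MotionModel motion, SensorModel sensor) {
        int n = poses.length;
        double[] cdf = new double[n];                  // cumulative weights, table 2.16
        double sum = 0.0;
        for (int i = 0; i < n; i++) { sum += weights[i]; cdf[i] = sum; }

        double[][] next = new double[n][];
        double[] nextW = new double[n];
        for (int i = 0; i < n; i++) {
            int j = draw(cdf, rng.nextDouble() * sum); // step 1: resample by weight
            next[i] = motion.sample(poses[j]);         // step 2: apply motion model
            nextW[i] = sensor.weight(next[i]);         // step 3: new weight, eq. 2.15
        }
        for (int i = 0; i < n; i++) { poses[i] = next[i]; weights[i] = nextW[i]; }
    }

    /** Binary search for the first index whose cumulative weight exceeds y. */
    static int draw(double[] cdf, double y) {
        int lo = 0, hi = cdf.length - 1;
        while (lo < hi) {
            int mid = (lo + hi) >>> 1;
            if (cdf[mid] <= y) lo = mid + 1; else hi = mid;
        }
        return lo;
    }
}
```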

Figure 2.1: The effects of updating the belief Bel(x) according to the recursive MCL equation 2.11 (four snapshots, each marking the robot location). In the topmost figure all the particles are uniformly distributed on the map, since initially the robot pose is unknown. After a few updates the particles concentrate in the regions indicated by the last observations as the most probable locations of the robot. Finally, the number of suspected regions drops to one, which indicates that the robot has been successfully localised.

Chapter 3

Probabilistic Models for Bayesian Filters

3.1 Motion model

The motion model $p(x' \mid x, a)$ answers the question: "what is the probability of finding the robot at pose x' after performing the action a, if the robot's initial pose was x?". The model is characterised by deterministic kinematic equations describing the robot's movement, and it must also contain information about the movement uncertainty. As shown in appendix A, the uncertainty of the robot's movement can be described by three types of possible errors which accumulate during motion. We can distinguish two mechanisms producing errors [7]:

- the evolution of already accumulated errors during motion
- the addition of new integration errors [7]

Let p denote the pose of the robot:

$$p = \begin{pmatrix} x \\ y \\ \Theta \end{pmatrix} \quad (3.1)$$

Then the uncertainty of the robot pose is expressed by a three-dimensional covariance matrix (see also definition B.12):

$$\Lambda(p) = \begin{pmatrix} \sigma^2_{xx} & \sigma^2_{xy} & \sigma^2_{x\Theta} \\ \sigma^2_{yx} & \sigma^2_{yy} & \sigma^2_{y\Theta} \\ \sigma^2_{\Theta x} & \sigma^2_{\Theta y} & \sigma^2_{\Theta\Theta} \end{pmatrix} \quad (3.2)$$

If all components of the robot pose vector 3.1 were independent, all off-diagonal elements would equal zero. Unluckily this is not the case: for example, if the robot moves forward, uncertainty in the initial heading of the robot will result in uncertainty in its location. Thus, the covariance matrix should evolve as the robot's pose does. In chapter A.3 we derived the iterative formula of robot motion:

$$p' = f(p) \quad (3.3)$$

where

$$f_x = x + \Delta s\, \cos(\Theta + \alpha), \qquad f_y = y + \Delta s\, \sin(\Theta + \alpha), \qquad f_\Theta = \Theta + \Delta\Theta \quad (3.4)$$

and each step consists of two parts:

1. a translation Δs in the direction Θ_n
2. a rotation by the angle ΔΘ

The covariance matrix evolves according to the formula (for details see [7]):

$$\Lambda(p') = F(\hat{p})\, \Lambda(p)\, F(\hat{p})^T + R(\Theta)\, E\, R(\Theta)^T \quad (3.5)$$

where $F(\hat{p})$ is the Jacobian

$$F(\hat{p}) = \begin{pmatrix} \frac{\partial f_x}{\partial x} & \frac{\partial f_x}{\partial y} & \frac{\partial f_x}{\partial \Theta} \\ \frac{\partial f_y}{\partial x} & \frac{\partial f_y}{\partial y} & \frac{\partial f_y}{\partial \Theta} \\ \frac{\partial f_\Theta}{\partial x} & \frac{\partial f_\Theta}{\partial y} & \frac{\partial f_\Theta}{\partial \Theta} \end{pmatrix}_{\hat{p}} \quad (3.6)$$

describing the evolution of an initial error, $R(\Theta)$ is the ordinary rotation operator

$$R(\Theta) = \begin{pmatrix} \cos\Theta & -\sin\Theta & 0 \\ \sin\Theta & \cos\Theta & 0 \\ 0 & 0 & 1 \end{pmatrix} \quad (3.7)$$

and E is the error matrix

$$E = \begin{pmatrix} k_T\, \Delta s & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & k_\Theta\, \Delta\Theta + k_D\, \Delta s \end{pmatrix} \quad (3.8)$$

The error matrix introduces a new integration error into the resulting covariance matrix $\Lambda(p')$ at every iteration step of equation 3.5 (see chapter B.3). In the matrix 3.8 we have also assumed that the initial robot heading is along the X axis, so there is no pose uncertainty along the Y axis.

Equation 3.5 is an elegant way of expressing the evolution of the covariance matrix. On the other hand, in the motion model we are only interested in the probabilistic evolution of particle poses (this is true only for particle filters), not in the explicit form of the covariance matrix. If we assume that the initial uncertainty is zero, and that translation and rotation are independent variables, we can use formula 3.4 directly, simply adding zero-mean Gaussian noise with the variances of B.16. The robot motion formula takes the form:

$$\begin{aligned} x_{n+1} &= x_n + (\Delta s + N_{trans}) \cos(\Theta_n + \alpha + N_{rotation} + N_{drift}) \\ y_{n+1} &= y_n + (\Delta s + N_{trans}) \sin(\Theta_n + \alpha + N_{rotation} + N_{drift}) \\ \Theta_{n+1} &= \Theta_n + \Delta\Theta + N_{rotation} + N_{drift} \end{aligned} \quad (3.9)$$

where the noise is modelled by Gaussian distributions:

$$N_{trans} \sim N(0, k_T\, \Delta s), \qquad N_{rotation} \sim N(0, k_\Theta\, \Delta\Theta), \qquad N_{drift} \sim N(0, k_D\, \Delta s) \quad (3.10)$$
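A direct transcription of equations 3.9 and 3.10 into Java might look as follows. Reading $N(0, k_T \Delta s)$ as a variance, the standard deviations are the square roots of the listed terms; the constants kT, kTheta and kD are assumed calibration parameters of the robot, and the values given are purely illustrative.

```java
import java.util.Random;

// Sketch of sampling from the motion model p(x'|x,a) by applying equation
// 3.9 with the zero-mean Gaussian noise terms of equation 3.10. The error
// constants below are assumed, illustrative calibration values.
public final class MotionSample {
    static final Random rng = new Random();
    static double kT = 0.01, kTheta = 0.05, kD = 0.005; // illustrative only

    /** pose = {x, y, theta}; ds, dTheta, alpha come from odometry (appendix A). */
    static double[] sample(double[] pose, double ds, double dTheta, double alpha) {
        double nTrans = rng.nextGaussian() * Math.sqrt(kT * Math.abs(ds));
        double nRot   = rng.nextGaussian() * Math.sqrt(kTheta * Math.abs(dTheta));
        double nDrift = rng.nextGaussian() * Math.sqrt(kD * Math.abs(ds));
        double theta  = pose[2];
        return new double[] {
            pose[0] + (ds + nTrans) * Math.cos(theta + alpha + nRot + nDrift),
            pose[1] + (ds + nTrans) * Math.sin(theta + alpha + nRot + nDrift),
            theta + dTheta + nRot + nDrift
        };
    }
}
```

Applying this sampler repeatedly to a set of particles produces exactly the spreading clouds shown in figure 3.1.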

Figure 3.1: The probabilistic evolution of the robot pose represented by a particle cloud. The cloud spreads and takes a characteristic banana shape after a few turns. The solid line represents the robot's trajectory as measured by odometry sensors; the clouds of particles approximate the density p(x' | x, a).

Figure 3.1 depicts the effect of adding the noise factors 3.10 to the deterministic motion equation 3.4. The solid line represents the robot's trajectory as measured by odometry sensors (we used here the Pioneer simulator); the clouds of particles approximate the density p(x' | x, a): the more crowded the particles are, the more likely the robot's pose. Starting from the leftmost figure, we can observe the evolution of the particle cloud: it spreads and takes on a characteristic banana shape after a few turns.

3.2 Sensor model

The sensor model p(o | x) answers the question: "what is the probability of observing o if the robot pose is x?". To simplify the problem, we restrict our attention to simple range finders, in particular sonars. For range finders, to calculate the value of p(o | x) we have to perform the following steps (compare to [13]):

1. compute the average (most probable) locations of observed targets (see figure 3.2)
2. model the sensor noise
3. for each single measurement, compute the probability of the measurement knowing both the robot pose and the target location
4. integrate the calculated probabilities into a single result

Since the first step is usually the task of low-level sensor drivers, we can leave out many tedious calculations related to sensor design, and focus solely on the probabilistic and geometrical aspects of the problem.

Figure 3.2: The long continuous line (black) represents the robot moving along the walls of a small room. Sensor readings are denoted by radial beams (green lines), in most cases ending at the room walls (orange spots).

Figure 3.3: Sensor model geometry: l is the distance to the nearest target; the flat surface of the target is at an angle φ to the perpendicular [7].

3.2.1 Modelling of sensor noise

While the Bayesian Filter equations are quite general, an implementation of the sensor model depends to a large extent not only on the sensor design (approach and implementation) but also on the characteristics of the environment. The most popular way of modelling measurement uncertainties requires only the knowledge of the actual (l) and measured (d) distance to the target (figure 3.3); all objects are usually assumed to be flat surfaces that can be modelled by lines in a two-dimensional world. The sensor noise can then be approximated by a Gaussian distribution centred at l (see [5]). For sonars, however, we need a third parameter: the incidence angle φ of the sonar signal (figure 3.3). In the approach presented in [7], the sensor noise is modelled by two distributions:

1. the on-axis distribution, where φ = 0 (single reflections from the target)
2. the off-axis distribution, where φ is too large to detect a direct reflection of the sonar signal (multi-reflections)

An example of the on-axis distribution is shown in figure 3.4. The on-axis distribution models the following phenomena:

Figure 3.4: Sensor noise models. From the left: on-axis distribution for φ = 0; interpolation of the on-axis and off-axis distributions for φ = 10°; off-axis distribution for φ ≥ 15°.

- a true sensor return, described by a Gaussian distribution with mean d = l and variance σ²_sensor
- a random sensor return (also called a false positive measurement) up to the distance l to a target, caused by unmodelled dynamic objects, and described by a Poisson distribution (see [6]) with false positive rate λ_F
- a sensor max-range return with finite probability P_range (object detection failure), modelled as a narrow uniform density at d ≈ r, where r is the sensor range (compare to the approach proposed in [7])

Furthermore, the Gaussian distribution is attenuated by a linear factor β(l), as the chance of detecting any object decreases with increasing distance. The complete mathematical formula of the on-axis distribution can be expressed as [7]:

$$p(d \mid l) = \begin{cases} \lambda_F\, e^{-\lambda_F d} + \beta(l)\, N_{(l,\sigma^2_{sensor})}(d)\, e^{-\lambda_F d} & d < l \\ \beta(l)\, N_{(l,\sigma^2_{sensor})}(d)\, e^{-\lambda_F l} & l \le d < r \\ P_{range} & d \ge r \end{cases} \quad (3.11)$$

The off-axis distribution (figure 3.4) models multi-reflections of the sonar echo before it returns, i.e.:

- a random sensor return up to the distance l to a target (as in the on-axis distribution), described by a Poisson distribution with false positive rate λ_F
- a multi-reflection return, also described by a Poisson distribution but with a higher rate parameter λ_R (we expect to detect something eventually)
- a sensor max-range return, the same as for the on-axis distribution

Finally, the formula of the off-axis distribution is [7]:

$$p(d \mid l) = \begin{cases} \lambda_F\, e^{-\lambda_F d} & d < l \\ \lambda_R\, e^{-\lambda_R d}\, e^{-\lambda_F l} & l \le d < r \\ P_{range} & d \ge r \end{cases} \quad (3.12)$$

If the incidence angle φ is between 0 and a certain threshold value, the PDF of the modelled sensor noise can be obtained as an interpolation of the on-axis and off-axis distributions (both densities should be normalised before interpolating), so that the resultant PDF is a function of three variables: the measured distance d to a target, the real distance l to a target, and the incidence angle φ (see figure 3.4). Note that the PDF must also be defined for l > r, to give correct densities for d < r in case a target is just beyond the sensor range.
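As a sketch, the two piecewise densities 3.11 and 3.12 and their interpolation can be coded directly. The decay constants, the attenuation factor β(l) and the linear interpolation weight are illustrative assumptions, and the exponents are written with explicit minus signs as reconstructed above.

```java
// Sketch of the piecewise sensor-noise densities (equations 3.11 and 3.12)
// and their interpolation. lambdaF, lambdaR, pRange, sigma and beta(l) are
// assumed, illustrative model parameters.
public final class SonarNoise {
    static double lambdaF = 0.05, lambdaR = 0.5, pRange = 0.1, sigma = 0.05;

    static double gaussian(double d, double l) {       // N(l, sigma^2) evaluated at d
        double z = (d - l) / sigma;
        return Math.exp(-0.5 * z * z) / (Math.sqrt(2.0 * Math.PI) * sigma);
    }
    static double beta(double l) { return Math.max(0.0, 1.0 - 0.01 * l); } // linear attenuation

    /** On-axis density (equation 3.11): d measured, l true distance, r sensor range. */
    static double onAxis(double d, double l, double r) {
        if (d < l) return lambdaF * Math.exp(-lambdaF * d)
                        + beta(l) * gaussian(d, l) * Math.exp(-lambdaF * d);
        if (d < r) return beta(l) * gaussian(d, l) * Math.exp(-lambdaF * l);
        return pRange;
    }

    /** Off-axis density (equation 3.12): false positives and multi-reflections only. */
    static double offAxis(double d, double l, double r) {
        if (d < l) return lambdaF * Math.exp(-lambdaF * d);
        if (d < r) return lambdaR * Math.exp(-lambdaR * d) * Math.exp(-lambdaF * l);
        return pRange;
    }

    /** Interpolation between the two for intermediate incidence angles phi. */
    static double density(double d, double l, double r, double phi, double phiMax) {
        double w = Math.min(1.0, Math.abs(phi) / phiMax);
        return (1.0 - w) * onAxis(d, l, r) + w * offAxis(d, l, r);
    }
}
```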

Figure 3.5: Transformation of sensor measurements. The affine transform is entirely defined by two pairs of locations: {x_t, x_{t-1}} and {x'_t, x'_{t-1}}. The transformation does not change the angles α_1 and α_2. Because the scale of the transform |x'_t − x'_{t-1}| / |x_t − x_{t-1}| is usually different from 1, in order to preserve the distances d_1 and d_2 to the targets, their locations must be transformed separately. We also do not assume that the robot moves along a straight line between the poses x_{t-1} and x_t.

3.2.2 Integration of sensor measurements

In order to calculate the final value of the sensor model p(o_t | x_t) for an arbitrary observation o_t, given that the previous robot pose was x_{t-1}, we perform the following steps (see figure 3.5):

1. Collect all sensor measurements m_i contributing to the observation o_t and recorded between times t-1 and t, where each measurement m_i contains the target location and the robot location.

2. For each measurement m_i: calculate the distance d_i between the robot and the target, and calculate the angle α_i between the reading direction and the robot's heading direction (see figure 3.5).

3. Transform the robot locations at which the measurements m_i were recorded into the new coordinates (see figure 3.5). The affine transform is entirely defined by two pairs of locations: {x_t, x_{t-1}} and {x'_t, x'_{t-1}}. (In the MCL algorithm the first pair {x_t, x_{t-1}} describes the robot's move from time t-1 to t, whereas the second one {x'_t, x'_{t-1}} describes the move of a particle within the same time period.)

4. In the new coordinates, restore the locations of the targets from d_i and α_i; this step is necessary in order to keep the originally measured distances d_i. (We do not assume the Doppler effect! The affine transform changes the scale of objects, while it does not change their proportions.)

5. For each measurement m_i (in the new coordinates): extract from the world model (see chapter 3.3) the actual distance l_i to the target, and the incidence angle φ_i.

6. For each measurement m_i: calculate the probability p_i of observing m_i, using the sensor noise model of section 3.2.1 and the three parameters d_i, l_i and φ_i.

7. Finally, assuming conditional independence of the individual measurements m_i, calculate the probability p(o_t | x_t):

$$p(o_t \mid x_t) = \prod_i p_i \quad (3.13)$$

where p_i is the probability of observing m_i (after transforming to the new coordinates).

Because of the use of affine transforms, the above procedure differs in some points from what is known in the literature. Its most important features are:

- It enables the MCL algorithm to process several measurements at a time, each of which can be recorded at any time between t-1 and t.
- The MCL algorithm does not have to be synchronised with sensor measurements. The evaluations of possible robot poses (particles) can be performed at any time.
- The CPU workload can be smoothly adjusted by changing the length of the period [t-1, t] and the number of measurements used (in addition to the number of particles).
- The additional CPU time needed can be neglected, as the real bottleneck of the MCL algorithm is the procedure for extracting the parameters l_i and φ_i (see step 5).

3.3 World model

The conception of the world model arose out of the idea of separating the MCL algorithm (including the motion and sensor models) from the implementation of the robot's world. Though an absolute separation is not possible (the model of the sensor noise makes many assumptions about the shape of the world's objects, and of course the number of world dimensions is two, though this can be changed relatively easily), the only point at which the world model interacts with the MCL algorithm was reduced to the extraction of the distance l_i to a target and the incidence angle φ_i, given the measurement m_i (see step 5). The main features of the current implementation of the world model are:

- the world is two-dimensional
- the world is bounded by a rectangle
- all objects have a floating-point vector representation
- all objects are lines
- the world is stored in files in two formats compatible with those from Saphira (unfortunately Saphira uses more formats, but the two supported ones are the most common)

Figure 3.6: The world map is divided into a set of adjacent square cells. Each cell contains pointers to all objects which can be visible to the robot from within the cell (as defined by the robot's sonar range). The map consists of about 1200 cells.

The world map is divided into a set of adjacent square cells (see figure 3.6), where each cell i contains two things:

1. pointers to all local objects which can be visible to the robot from within the cell (as defined by the robot's sonar range)
2. the minimal $c^{(i)}_{min}$ and maximal $c^{(i)}_{max}$ distance to all local objects

The additional resource overhead needed to maintain all the data structures is not large and is approximately proportional to the number of cells. The overall gain in the efficiency of the MCL algorithm is especially visible for large world maps. For example, for the world map of figure 3.6 the MCL algorithm works about 20 times faster, and on a Pentium IV 2.6 GHz it is possible to update up to 100000 particles every second.

The procedure for extracting the parameters l_i and φ_i, given the measurement m_i, involves basic geometrical transformations and consists of:

1. identifying the cell on the basis of the coordinates of the measurement m_i
2. calculating the reading direction α_i from the target and robot locations
3. finding the nearest target in the direction α_i among the objects visible from the cell
4. calculating l_i and φ_i

Each cell also contains information about the minimal and maximal distance to the local objects, which can be used for a smart initialisation of particles (see figure 3.7). The whole procedure is only slightly slower than the regular one (also O(n)), and needs at least one measurement:

1. Calculate the minimal distance min and the maximal distance max from the robot to the recorded target(s), given the group of measurements m_i.
2. Collect all the cells {i} such that $c^{(i)}_{min} < max$ and $c^{(i)}_{max} > min$. (One must be careful about using the minimal distance, since some close objects might not be recorded by the robot.)
3. Generate n random particles by repeating the following steps: pick one of the collected cells at random, then uniformly generate a single particle within the chosen cell.

This procedure has further potential: for example, one could generate particles which are already orientated towards the objects. One must add that the current version of the procedure is not resistant to false measurements, and does not take their uncertainties into account.
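The smart initialisation procedure might be sketched in Java as follows. The Cell structure with its precomputed bounds cMin and cMax is an assumed data layout, and the overlap test mirrors step 2 above.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// Sketch of the improved particle initialisation of section 3.3. Cell is an
// assumed structure holding a square cell's bounds and its precomputed
// minimal/maximal distances to the local objects.
public final class SmartInit {
    static final class Cell {
        double x0, y0, size;   // lower-left corner and side length of the cell
        double cMin, cMax;     // precomputed distance bounds to local objects
    }
    static final Random rng = new Random();

    /** min/max are the measured distance bounds; assumes at least one cell matches. */
    static double[][] initialise(List<Cell> map, double min, double max, int n) {
        List<Cell> candidates = new ArrayList<>();
        for (Cell c : map)                        // step 2: keep compatible cells
            if (c.cMin < max && c.cMax > min) candidates.add(c);

        double[][] particles = new double[n][3];  // step 3: draw uniformly in cells
        for (int i = 0; i < n; i++) {
            Cell c = candidates.get(rng.nextInt(candidates.size()));
            particles[i][0] = c.x0 + rng.nextDouble() * c.size;
            particles[i][1] = c.y0 + rng.nextDouble() * c.size;
            particles[i][2] = rng.nextDouble() * 2.0 * Math.PI; // heading still uniform
        }
        return particles;
    }
}
```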

Figure 3.7: The improved initialisation of particles. Based on sensor measurements, all the particles are generated only within the restricted regions.

Chapter 4

Conclusions

Although the system has not yet been tested on real robots, I experimented with the Pioneer robot simulator. No matter how good the simulator is, it remains only a simulator. Therefore, the main purpose of the experiments was not to obtain concrete numerical results, but rather to develop, test and optimise the software. The experiments verified most of the strong and weak points of the basic MCL algorithm. I mention here only a few major problems.

The tradeoff between the number of particles and efficiency. Since the belief distribution Bel(x) can contain only a finite number of particles, it is possible that no particles are initialised close to the actual pose of the robot (in the three-dimensional pose space). To make matters worse, even supposing there are particles there, because the algorithm is probabilistic, those particles may not survive. This scenario is especially probable if the sensors are not very accurate, in which case the particles' weights will not be significantly higher than the average. A trivial remedy is to increase the number of particles, which is not satisfying, though.

Movement approximation. A solution to the aforementioned problem might be to add some more uncertainty to the motion model. The particles would then penetrate a larger space and finally localise the robot. This approach works well in combination with noisy sensors: even if a group of particles is initially not very precisely localised, it can gradually adjust, or synchronise with the true robot pose, while moving. Unluckily, such a group of already localised particles tends to spread very quickly, which is particularly visible in the absence of (non-collinear) sensor measurements.

The sensor noise model. Paradoxically, accurate sensors (with a correspondingly narrow sensor distribution) make the MCL algorithm converge too quickly; this gives false results, especially if the sensor accuracy does not correspond to its noise model. In practice, the modelled sensor noise should always overestimate the actual noise, which gives the particles more chance to synchronise with the optimal pose (at least optimal in the context of the sensor readings).

Summarising, having perfectly modelled motion and sensor noise is not enough for the MCL algorithm to be optimal, while an excessive increase in the number of particles makes the system inefficient. Are there any solutions to these problems? The two improvements proposed in this report could be the basis for more sophisticated algorithms. Moreover, they can easily be combined with, for example, the Mixture-MCL algorithm [13], described briefly at the end.

The affine transformation of sensor measurements (figure 3.5) would allow the use of a procedure which selects out the measurements that sketch characteristic shapes or objects in the environment. Obviously, one cannot have too much influence on the sensor measurements, but such a smart selection, even together with controlling the robot itself (to explore the regions that seem to be characteristic; going further, one could even create an optimal exploration strategy based on the map), may help with the problem of premature convergence of the algorithm to false solutions.

With the improved particle initialisation (figure 3.7), the conventional MCL algorithm needs approximately two times (the map of figure 2.1) or even four times (the map of figure 4.1) fewer particles to achieve the same effectiveness as with the standard uniform initialisation. These results vary depending on the map, the sensor readings, the number of cells used, etc. The main strengths of the initialisation procedure are its simplicity, its speed (all time-consuming calculations are done off-line; otherwise it would be better simply to increase the number of particles), and its modest memory requirements (number of cells × two floating-point numbers).

The Mixture-MCL algorithm [13] is an interesting algorithm which combines the regular MCL and the so-called dual MCL algorithm according to the scheme:

$$\text{Mixture MCL} = (1 - \Phi) \cdot \text{Regular MCL} + \Phi \cdot \text{Dual MCL}, \qquad 0 \le \Phi \le 1$$

In the regular MCL (section 2.4) samples are drawn from $p(x' \mid x, a)\, Bel(x)$, and then the weights are calculated from the sensor model $p(o \mid x)$. The dual MCL inverts this process: the proposal distribution is $p(x \mid o)$, and the weights are now directly proportional to the belief Bel(x). The dual MCL requires sampling from $p(x \mid o)$, which can be realised by using kernel-density trees [12], a piecewise continuous probability distribution built from the particle set. The Mixture-MCL algorithm has been shown to be superior to the regular MCL [13]; in particular, the localisation error can be an order of magnitude smaller than in the conventional MCL.
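The mixing scheme itself is a one-line decision per particle, as the following sketch shows. The regularStep and dualStep callbacks are assumptions standing in for the two sampling procedures, since a real dual step would need the kd-tree machinery of [12].

```java
import java.util.Random;

// Sketch of the Mixture-MCL mixing scheme quoted above: with probability
// (1 - phi) draw a particle by a regular MCL step, with probability phi
// by a dual step. Both steps are assumed callbacks, not implemented here.
public final class MixtureMcl {
    interface Step { void sampleOneParticle(); }
    static final Random rng = new Random();

    static void update(Step regularStep, Step dualStep, double phi, int n) {
        for (int i = 0; i < n; i++) {
            if (rng.nextDouble() < phi) dualStep.sampleOneParticle();
            else regularStep.sampleOneParticle();
        }
    }
}
```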


Figure 4.1: The Monte Carlo localisation on a large map. All particles are initialised only in the vicinity of objects (the topmost figure). After about 15 seconds the MCL algorithm successfully localises the robot's pose (the bottom figure).

Appendix A

Robot motion

Modern mobile robots can walk, jump, swim and fly, but even the simplest way of moving, rolling in two-dimensional space, can be surprisingly complex, and the exact robot motion can only be approximated. Even though in this report we discuss only the motion of rolling robots, the MCL algorithm remains the same, with the exception of the so-called motion model (chapter 3.1).

A.1 Position estimation

Position estimation is an inherent part of all navigation systems. One can distinguish two general groups of position estimation methods: internal and external [7]. Internal methods (odometry) are based on open-loop sensors, i.e. they do not rely on any feedback from the environment. External methods use sensor measurements taken directly from the environment to record and correct the actual robot pose. The most popular internal methods use wheel encoders registering the number of counts of a revolving shaft while the robot is moving. External methods can employ the Global Positioning System (GPS), external beacons such as sonar or laser, and also the not very accurate compass. Whereas Saphira can support external methods, in the motion model (see chapter 3.1) it is usually sufficient to use only odometry readings.

A.2 Position integration

Odometric sensors give information about the distance the robot moves, its speed, and relative changes of the robot's heading direction. It is necessary to integrate this information over time to obtain an estimate of the robot pose. Assuming that we know the correspondence between the number of encoder counts and the travelled distance, and that the robot has two wheels (left and right), it is clearly visible from figure A.1 that within a time period Δt the robot changes its pose according to the following rule:

$$\Delta\Theta = \frac{2\pi(\Delta r - \Delta l)}{2\pi d} = \frac{\Delta r - \Delta l}{d}, \qquad \Delta s = \frac{\Delta r + \Delta l}{2}, \qquad v = \frac{\Delta s}{\Delta t} \quad (A.1)$$

Figure A.1: Robot displacement within the time period Δt.

where the angle ΔΘ is given in radians, Δs is the overall movement of the robot and v is its speed. If at time t_n the global pose of the robot P_n (more precisely, the pose relative to the robot's starting point) is given by (x_n, y_n, Θ_n), then the pose P_{n+1} at time t_{n+1} can be calculated from the simple rule:

$$\begin{aligned} x_{n+1} &= x_n + \Delta s\, \cos\Theta_n \\ y_{n+1} &= y_n + \Delta s\, \sin\Theta_n \\ \Theta_{n+1} &= \Theta_n + \Delta\Theta \end{aligned} \quad (A.2)$$

Both equation A.1 and equation A.2 are no more than a crude simplification of the real physical motion; however, they can be shown to converge to the ideal solution as Δt → 0. While equation A.1 is usually precise enough, as the sensor update interval can be as small as 10 ms, equation A.2 can be a source of large integration errors. This is especially the case for external control systems (the Saphira control system and the T1000 platform are good examples), where the time period Δt is typically larger and depends on the communication channel used. (In fact, in the Saphira system all sensor readings and the pose of the robot are updated every 100 ms, at least as accessible to the Saphira user, so the current pose is updated internally. However, to simulate particle motion, the relative movement Δs and the angle ΔΘ can be extracted directly from the current pose of the robot. Note that in this case there will not be any integration errors, but only the instantaneous robot heading error, which can be entirely removed; see the next chapter A.3.)

A.3 Extended robot motion

Equation A.2 is valid only for small values of Δs and ΔΘ, ideally when the robot moves exactly in the direction given at the beginning of each update step.

Figure A.2: Realistic robot motion. The initial robot heading direction Θ is usually different from the resultant motion direction.

Figure A.2 depicts a more realistic case of robot motion [7]: the initial robot heading direction is generally different from the resultant direction of motion. If we denote this difference by the angle α, we can write new equations for extended robot motion:

$$\begin{aligned} x_{n+1} &= x_n + \Delta s\, \cos(\Theta_n + \alpha) \\ y_{n+1} &= y_n + \Delta s\, \sin(\Theta_n + \alpha) \\ \Theta_{n+1} &= \Theta_n + \Delta\Theta \end{aligned} \quad (A.3)$$

The only problem here is to find the angle α. Assuming that the angular velocity of the robot is constant during the update time period Δt, we can write:

$$\alpha \approx \frac{\Theta_{n+1} - \Theta_n}{2} \quad (A.4)$$

On the other hand, both the initial pose P_n and the resultant pose P_{n+1} are often known, and we only need the value of the angle α (to simulate exact particle motion, for instance). In this case the exact value of α can be obtained directly:

$$\alpha = \arctan\left(\frac{\Delta y}{\Delta x}\right) - \Theta_n \quad (A.5)$$

where Δx = x_{n+1} − x_n and Δy = y_{n+1} − y_n are the relative robot displacements.
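The extended motion update A.3 and the recovery of α from two consecutive poses (equation A.5) translate directly into code. In the sketch below, Math.atan2 is used in place of arctan(Δy/Δx) so that all four quadrants are handled correctly; this is an illustration, not the T1000 implementation.

```java
// Sketch of the extended motion update (equation A.3) and of recovering
// the angle alpha from two known consecutive poses (equation A.5).
public final class ExtendedMotion {

    /** One integration step of equation A.3; pose = {x, y, theta}. */
    static double[] step(double[] pose, double ds, double dTheta, double alpha) {
        return new double[] {
            pose[0] + ds * Math.cos(pose[2] + alpha),
            pose[1] + ds * Math.sin(pose[2] + alpha),
            pose[2] + dTheta
        };
    }

    /** alpha from the poses P_n and P_{n+1} (equation A.5). */
    static double alphaFromPoses(double[] pN, double[] pN1) {
        return Math.atan2(pN1[1] - pN[1], pN1[0] - pN[0]) - pN[2];
    }
}
```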

Appendix B

A few notions from statistics

B.1 Measurements and errors

Measurement is not only a source of information but also of various errors. The process of measurement consists in comparing the measured value with a suitable measurement standard. Basically, the result of a measurement gives two pieces of information: the unit (metre, second, kilogram, etc.) and the numerical value. Because the value obtained this way is never absolutely error-free, the result must contain additional information describing its accuracy. One can distinguish three types of errors:

1. Blunders or mistakes. This type of error can be caused, for example, by a wrongly chosen algorithm.

2. Systematic errors are a consequence of imperfect sensors or an inaccurate measurement method. The calibration procedure usually eliminates most errors of this type, but they still remain subject to sensor drift (which itself depends on environmental conditions) and imprecise linearisation methods.

3. Random errors cannot be eliminated during calibration; they are the effect of underlying physical processes such as noise in electronic circuits, or unknown (impossible to model) interactions with the environment.

The errors whose elimination is not possible are frequently called measurement uncertainties.

B.2 Random variables

It is advantageous to consider a random error as a random variable, which allows us to use powerful statistical methods. A continuous random variable can be described by a probability density function (PDF) p(x), which must satisfy the following conditions:

$$p(x) \ge 0 \quad \forall x, \qquad \int_{-\infty}^{+\infty} p(x)\, dx = 1 \quad (B.1)$$

The n-th moment of a random variable X is defined as:

$$M_n = E[X^n] = \int_{-\infty}^{+\infty} x^n\, p(x)\, dx \quad (B.2)$$

The 1st moment of a random variable X is called the expected or mean value:

$$\mu = E[X] = \int_{-\infty}^{+\infty} x\, p(x)\, dx \quad (B.3)$$

The n-th moment about the mean is defined as:

$$C_n = E[(X - \mu)^n] = \int_{-\infty}^{+\infty} (x - \mu)^n\, p(x)\, dx \quad (B.4)$$

The second moment about the mean is called the variance of the variable X:

$$\sigma^2 = E[(X - \mu)(X - \mu)] = E[X^2] - E[X]^2 \quad (B.5)$$

The standard deviation is the square root of the variance and is a measure of the uncertainty or spread of the random variable X around its mean value. A random variable X is said to be Gaussian if its PDF is defined as:

$$p(x) = \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left(-\frac{(x - \mu)^2}{2\sigma^2}\right) \quad (B.6)$$

where μ and σ² are the mean and the variance of X. If a measured quantity is described by a Gaussian variable X, then 68% of all measured values of X will fall within one standard deviation of the mean. If we denote by X_1, X_2, ..., X_n a set of n continuous random variables (see [6]):

$$X = [X_1, X_2, \ldots, X_n]^T \quad (B.7)$$

then we can define the joint probability density function p(x_1, ..., x_n), which, similarly to condition B.1, must satisfy the following relations:

$$p(x_1, \ldots, x_n) \ge 0 \quad \forall x_1, \ldots, x_n, \qquad \int_{-\infty}^{+\infty} \cdots \int_{-\infty}^{+\infty} p(x_1, \ldots, x_n)\, dx_1 \ldots dx_n = 1 \quad (B.8)$$

The random variables X_1, X_2, ..., X_n are said to be independent if the joint probability density function can be factorised:

$$p(X) = p(X_1, X_2, \ldots, X_n) = p(X_1)\, p(X_2) \cdots p(X_n) \quad (B.9)$$

Let μ denote the mean vector:

$$\mu_i = E[X_i], \qquad \mu = [\mu_1, \mu_2, \ldots, \mu_n]^T \quad (B.10)$$

and define the covariance σ²_ij of the variables X_i and X_j as:

$$\sigma^2_{ij} = E[(X_i - \mu_i)(X_j - \mu_j)] \quad (B.11)$$

Note that σ²_ij = σ²_ji. We can use these covariance values to form the covariance matrix:

$$\Lambda = \begin{pmatrix} \sigma^2_{11} & \cdots & \sigma^2_{1n} \\ \vdots & \ddots & \vdots \\ \sigma^2_{n1} & \cdots & \sigma^2_{nn} \end{pmatrix} \quad (B.12)$$
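As a small worked example of definitions B.10 through B.12, the following sketch estimates the mean vector and covariance matrix from a finite sample set, as one might do for a particle cloud. It is an illustration only, not part of the report's software.

```java
// Sketch: estimating the mean vector (B.10) and covariance matrix (B.12)
// from n samples of a d-dimensional random vector, e.g. a particle cloud.
public final class SampleStats {

    static double[] mean(double[][] samples) {
        int n = samples.length, dim = samples[0].length;
        double[] mu = new double[dim];
        for (double[] s : samples)
            for (int i = 0; i < dim; i++) mu[i] += s[i] / n;
        return mu;
    }

    static double[][] covariance(double[][] samples) {
        int n = samples.length, dim = samples[0].length;
        double[] mu = mean(samples);
        double[][] cov = new double[dim][dim];
        for (double[] s : samples)           // average of outer products (B.11)
            for (int i = 0; i < dim; i++)
                for (int j = 0; j < dim; j++)
                    cov[i][j] += (s[i] - mu[i]) * (s[j] - mu[j]) / n;
        return cov;
    }
}
```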