Mini-Project II. Monte Carlo Localisation for mobile robots


The University of Birmingham
School of Computer Science
MSc in Advanced Computer Science

Mini-Project II
Monte Carlo Localisation for mobile robots

Marek Kopicki

Supervisor: Dr Jeremy Wyatt

April 26, 2004

Abstract

Localisation is a non-trivial problem in mobile robotics: the robot must localise itself given only a map of the environment and its sensor readings. This knowledge is a prerequisite for higher-level tasks such as path planning. Monte Carlo Localisation (MCL) is a probabilistic algorithm that has been successfully applied in many mobile robot systems. The core of the system implementing the MCL algorithm is T1000, a Java-written robot application platform which cooperates with the Saphira robot control system developed at SRI International's Artificial Intelligence Center. T1000 is a client application communicating over a TCP/IP network with a Windows/Linux server for Saphira, which makes it possible to run and test several robot applications at the same time. Thanks to an affine transformation of the robot's sensor measurements, the algorithm does not have to be synchronised with the incoming sensor data and can process several measurements at a time. Moreover, measurements can be processed at any convenient time, which simplifies the use of any preselection algorithm. In addition, an improved particle initialisation can reduce the number of particles without decreasing the effectiveness of the basic MCL algorithm. Although the system has not yet been tested on real robots, experiments with the Pioneer robot simulator are promising, and the system appears comparable to the original MCL software shipped with Saphira.

Keywords: Localisation, Monte Carlo Localisation, particle filters, position estimation, robot navigation, motion model, sensor model.

Acknowledgements

I would like to take this opportunity to thank my project supervisor, Dr Jeremy Wyatt, who painstakingly guided me during the mini-project. Without his insightful suggestions, as well as the enlightening discussions with him, I would not have been able to complete this mini-project.

Contents

Abstract
Keywords
Acknowledgements
Table of contents
List of Figures
List of Tables

1 Preface
2 Monte Carlo Localisation
  2.1 Localisation problem
  2.2 Bayesian filtering
  2.3 Belief implementations
    2.3.1 Kalman Filters
    2.3.2 Multi-hypothesis Tracking (MHT)
    2.3.3 Grid-based approaches (Markov Localisation)
    2.3.4 Topological approaches
    2.3.5 Particle filters
  2.4 How does it work?
    2.4.1 Sampling importance resampling (SIR)
3 Probabilistic Models for Bayesian Filters
  3.1 Motion model
  3.2 Sensor model
    3.2.1 Modelling of sensor noise
    3.2.2 Integration of sensor measurements
  3.3 World model
Conclusions

A Robot motion
  A.1 Position estimation
  A.2 Position integration
  A.3 Extended robot motion

B A few notions from statistics
  B.1 Measurements and errors
  B.2 Random variables
  B.3 Movement Errors
C The software platform
  C.1 Overview
    C.1.1 The server architecture
    C.1.2 The client architecture
  C.2 Installation and usage instructions
    C.2.1 Files description
    C.2.2 General requirements
    C.2.3 The server
    C.2.4 The client
D Mini-project declaration
E Statement of information search strategy
  Parameters of literature search
  Forms of literature
  Geographical and language coverage
  Age-range of literature
  Appropriate search tools
  Engineering Index
  Science Citation Index (SCI)
  Dissertations Abstracts International (DAI) and Index to Theses
  Search statements
  Brief evaluation of the search

Bibliography

List of Figures

2.1 Monte Carlo localisation algorithm successfully localises the robot
3.1 The probabilistic evolution of the robot pose
3.2 Sensor readings of a moving robot
3.3 Sensor model geometry
3.4 Sensor noise models
3.5 Affine transformation of sensor measurements
3.6 The world map
3.7 The improved initialisation of particles
3.8 The Monte Carlo localisation on a large map
A.1 Robot displacement
A.2 Realistic robot motion

List of Tables

2.1 Continuous implementations of belief
2.2 Discrete implementations of belief
C.1 The T1000 applications
C.2 The properties of the T1000 applications

Chapter 1

Preface

Before futurology eventually lost its former status in our century, most of its representatives foresaw a bright future for robots in our lives. The role of futurology was eventually taken over by science, which unfortunately revealed that, many a time, the discrepancies between fantasy and technical feasibility, and even theory, are very difficult to overcome. Science is currently so broad that making substantial progress in any discipline has become a real challenge. Even though building robots with human-like capabilities is unthinkable at the moment, one can point to some successful application areas for robots, with the famous Mars rover, a semi-autonomous mobile robot, at the forefront. But is the era of intelligent home servant robots really approaching?

The main subject of this report is the robot localisation problem: the problem of estimating the robot's location given only a map of the environment and sensor readings. The robot's location is basic knowledge that the robot must possess in order to complete higher-level tasks such as path planning. The report analyses Monte Carlo Localisation (MCL), a probabilistic algorithm successfully applied in many mobile robot systems [13][5][3]. Because of its probabilistic nature, the algorithm performs well even in dynamically changing environments, where moving obstacles constantly disturb the sensor readings. The core of the system that implements the MCL algorithm is the T1000 Java-written robot application platform (see appendix C), cooperating with Saphira and Aria, robot control systems developed at SRI International's Artificial Intelligence Center and ActivMedia Robotics [1]. T1000 is intended to be a universal and easy-to-use platform for developing miscellaneous algorithms for intelligent robots.

T1000 is a client application that communicates via a TCP/IP network with a multi-client server, which enables running and testing several robot applications at the same time. The server is implemented as a Windows/Linux dynamic library loadable by Saphira's Colbert environment. Unfortunately, because I have not yet been able to test the software on real robots, it is difficult to report any experimental results. The only exception is the Pioneer robot simulator [1], where the results are promising and the system seems comparable to the original MCL software shipped with Saphira. The whole project, however, is only a part of the summer project, in which I intend to extend the system, probably with mapping algorithms, and do much more experimental work.

Chapter 2

Monte Carlo Localisation

2.1 Localisation problem

The localisation problem is a general class comprising several problem instances, depending on the techniques used [13][5]. The simplest and best known one is the position tracking problem. Here an estimate of the initial robot pose is known, and the problem amounts to compensating for the accumulating errors of the robot's odometry readings. Because position tracking plays an important role in the MCL algorithm, I discuss this topic separately in appendix A. More difficult is the global localisation problem, or the wake-up robot problem, where the robot's initial pose is unknown. The robot must be able to deal with multiple hypotheses about its initial pose [13]. The methods that solve this kind of problem are called global techniques [5]. Finally, the most difficult is the kidnapped robot problem, where an already localised robot is suddenly transferred to some other place without being told. The kidnapped robot initially believes itself to be somewhere other than where it actually is. Algorithms solving the kidnapped robot problem should also be capable of solving the global localisation problem, which in turn can be considered a sub-class of the former. Most existing algorithms deal only with the position tracking problem. The most efficient ones are Kalman Filters, which however cannot cope with the global localisation problem. Markov Localisation is an extension of Kalman Filters and solves this problem by a piecewise constant approximation of the possible robot poses. Unfortunately, this method is computationally expensive, even though many improvements have been proposed. Monte Carlo Localisation appears to be the best solution to the global localisation problem so far. Moreover, the MCL algorithm can solve the kidnapped robot problem as well, while being more efficient and accurate than Markov Localisation.
2.2 Bayesian filtering

The majority of approaches to the localisation problem are based on so-called Bayes Filters. A Bayes Filter recursively estimates the posterior distribution representing the pose of the robot, based entirely on the collected sensor data. Typically, the pose of the robot is a two-dimensional Cartesian position plus the robot's heading direction. Bayes filters assume that the Markov assumption holds, i.e. that the current state contains all the information necessary to predict a future state. The posterior or conditional distribution, also called the belief Bel(x_t), represents the robot's pose x_t (or, more generally, the state of a dynamical system) at time t:

    Bel(x_t) = p(x_t | d_t, d_{t-1}, ..., d_0)        (2.1)

where d_t, d_{t-1}, ..., d_0 denote the sensor data collected from time 0 up to time t. In particular, picking any pose x_t (at time t), the belief answers the question: what is the probability that the robot is at the pose x_t if the collected data is d_t, d_{t-1}, ..., d_0? Assuming that the data consists of perceptual data (sonar or laser readings), denoted by o (observations), and odometry data, denoted by a (actions), from equation 2.1 we have [13]:

    Bel(x_t) = p(x_t | o_t, a_{t-1}, o_{t-1}, ..., a_0, o_0)        (2.2)

Equation 2.2 can be further transformed. According to Bayes' rule we obtain:

    Bel(x_t) = p(o_t | x_t, a_{t-1}, ..., o_0) p(x_t | a_{t-1}, ..., o_0) / p(o_t | a_{t-1}, ..., o_0)        (2.3)

Note that the denominator remains constant when testing the belief Bel(x_t) for various poses x_t, so we can rewrite equation 2.3 as:

    Bel(x_t) = η p(o_t | x_t, a_{t-1}, ..., o_0) p(x_t | a_{t-1}, ..., o_0)        (2.4)

where

    η = p(o_t | a_{t-1}, ..., o_0)^(-1)        (2.5)

Since the Markov assumption implies that the future state is independent of the past states if the current state is known, from equation 2.4 we obtain:

    Bel(x_t) = η p(o_t | x_t) p(x_t | a_{t-1}, ..., o_0)        (2.6)

where

    p(o_t | x_t, a_{t-1}, ..., o_0) = p(o_t | x_t)        (2.7)

The total probability theorem (see [6]) implies

    Bel(x_t) = η p(o_t | x_t) ∫ p(x_t | x_{t-1}, a_{t-1}, ..., o_0) p(x_{t-1} | a_{t-1}, ..., o_0) dx_{t-1}        (2.8)

where we integrate over all states at time t-1. Exploiting the Markov assumption again, we can simplify the first term in the integral 2.8:

    p(x_t | x_{t-1}, a_{t-1}, o_{t-1}, a_{t-2}, ..., o_0) = p(x_t | x_{t-1}, a_{t-1})        (2.9)

From the belief definition 2.2, we find that the second term in the integral 2.8 equals:

    Bel(x_{t-1}) = p(x_{t-1} | o_{t-1}, a_{t-2}, o_{t-2}, ..., a_0, o_0)
                 = p(x_{t-1} | a_{t-1}, o_{t-1}, a_{t-2}, o_{t-2}, ..., a_0, o_0)        (2.10)

since the action a_{t-1}, taken at time t-1, does not influence the current state x_{t-1} but only the future state x_t.
Finally, we obtain the recursive update equation:

    Bel(x_t) = η p(o_t | x_t) ∫ p(x_t | x_{t-1}, a_{t-1}) Bel(x_{t-1}) dx_{t-1}        (2.11)
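The recursive update 2.11 can be illustrated on a one-dimensional discrete grid, where the integral becomes a sum over cells. The corridor world, the door cells and all probability values below are invented for illustration; this sketches the update equation itself, not any implementation discussed in this report.

```python
import numpy as np

def bayes_filter_update(belief, action, observation, p_motion, p_sensor):
    """One step of the discrete Bayes filter:
    Bel(x_t) = eta * p(o_t|x_t) * sum_x' p(x_t|x', a_{t-1}) Bel(x')."""
    n = len(belief)
    # Prediction: integrate the motion model over the previous belief.
    predicted = np.array([
        sum(p_motion(x, x_prev, action) * belief[x_prev] for x_prev in range(n))
        for x in range(n)
    ])
    # Correction: weight by the sensor model, then normalise (the eta factor).
    posterior = np.array([p_sensor(observation, x) * predicted[x] for x in range(n)])
    return posterior / posterior.sum()

# Toy models on a 10-cell corridor (illustrative values only).
def p_motion(x, x_prev, a):
    # Move right by `a` cells, with some chance of under- or overshooting.
    return {a - 1: 0.1, a: 0.8, a + 1: 0.1}.get(x - x_prev, 0.0)

def p_sensor(o, x):
    # A door is observed (o = 1) only at cells 2 and 7.
    door = x in (2, 7)
    return 0.7 if (o == 1) == door else 0.3

belief = np.full(10, 0.1)      # uniform prior: the pose is unknown
belief = bayes_filter_update(belief, 1, 1, p_motion, p_sensor)
```

With a uniform prior, a single update already concentrates probability mass on the two door cells; repeating the update with further actions and observations disambiguates between them.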

Table 2.1: An overview of continuous implementations of the belief Bel(x) in Bayesian Filters.

Kalman Filter
  Features: unimodal Gaussian distribution; first and second moment of the PDF; (non)linear dynamics and observations; polynomial complexity in the state dimension; position tracking problem
  Pros: accuracy, efficiency
  Cons: sensor variety

Multi-hypothesis Tracking
  Features: multimodal (mixture of Gaussians); (non)linear dynamics and observations; polynomial complexity in the state dimension; global localisation problem
  Pros: accuracy, robustness
  Cons: sensor variety, implementation

The probability p(x_t | x_{t-1}, a_{t-1}) will from now on be referred to as the motion model, and the probability p(o_t | x_t) will be called the sensor model. Since both models are usually assumed to be stationary, we denote:

    p(x' | x, a)    motion model
    p(o | x)        sensor model        (2.12)

The motion model, the sensor model and the belief representation are the subjects of the following sections.

2.3 Belief implementations

Bayesian Filters are a very general, abstract concept for recursive state estimation. The belief representation 2.1 is the key part of the update equation 2.11 and varies depending on the approach used. We can divide all representations into two groups: continuous representations (table 2.1) and discrete representations (table 2.2).

2.3.1 Kalman Filters

Kalman Filters are the most widely used variants of Bayesian Filters, especially for solving the pose tracking problem. The belief distribution is approximated only by its first and second moments (see definitions B.2 and B.4), which makes the Gaussian distribution an optimal choice, since a Gaussian is completely determined by its first two moments (see [2]). Though Kalman Filters are very efficient (linear algebra operations), they can represent only unimodal beliefs with limited initial uncertainty.
However, in practice they work fine even in highly nonlinear systems, despite the underlying linear assumptions about the system's dynamics. (This assumption was relaxed in the extended version of Kalman Filters, which uses first-order Taylor series expansions to linearise the system.)

2.3.2 Multi-hypothesis Tracking (MHT)

In Multi-hypothesis Tracking the belief is represented by a linear combination of unimodal Gaussian distributions:

Table 2.2: An overview of discrete implementations of the belief Bel(x) in Bayesian Filters.

Grid-based
  Features: discrete; piecewise constant approximation; non-linear dynamics and observations; exponential complexity in the state dimension; global localisation problem
  Pros: robustness, sensor variety
  Cons: efficiency

Topological
  Features: discrete; abstract state-space; abstract dynamics and observations; one-dimensional graph; global localisation problem
  Pros: robustness
  Cons: accuracy

Particle filter
  Features: discrete; sample-based approximation; non-linear dynamics and observations; exponential complexity in the state dimension; global localisation problem
  Pros: robustness, accuracy, sensor variety

    Bel(x_t) = Σ_i w_t^(i) (1 / sqrt((2π)^n |Λ^(i)|)) exp(-(1/2) (x_t - µ^(i))^T (Λ^(i))^(-1) (x_t - µ^(i)))        (2.13)

MHT tracks each single Gaussian hypothesis about the possible robot poses (weighted by the w^(i)), overcoming in this way the main limitation of the Kalman Filter. Unfortunately, MHT needs sophisticated techniques and heuristics to solve the sensor data association problem and to decide when to add or remove hypotheses.

2.3.3 Grid-based approaches (Markov Localisation)

In grid-based approaches the state space is divided into a regular network of adjacent cells, and the belief is represented by a value assigned to each cell. This allows grid approaches to model arbitrary distributions and, as a result, to solve the global localisation problem. Grid approaches are also relatively accurate and very robust against sensor noise. Their obvious disadvantage is the space and computational complexity, which grows exponentially with the number of space dimensions and may prevent any application of the algorithm in higher-dimensional spaces.

2.3.4 Topological approaches

The problem of the exponentially growing complexity of the grid-based approaches can be avoided by using a non-metric representation of space: graphs.
Each node of the graph represents a certain location in the environment, while the edges represent physical connections between them (the topological approach is arguably the most natural one). Whereas topological approaches can be quite efficient, they are not very precise about spatial locations, as the number of discrete states is limited.

2.3.5 Particle filters

The belief in particle filters is represented by a weighted set of n particles:

    Bel(x) ≈ {x^(i), w^(i)}_{i=1,...,n}        (2.14)

where x^(i) are poses and w^(i) are weights, also called importance factors, which correspond to the values of the PDF. In order to imitate a regular PDF, a particle set must satisfy the same conditions as a regular PDF does (see condition B.1; strictly speaking, the normalisation condition is not required, but it can be used to avoid numerical over- and underflow). Of course, it is not possible to evaluate such a distribution at an arbitrary pose x; instead we have to use the procedure called sampling importance resampling (the SIR algorithm), described in section 2.4.1. The particle approximation is the core of the MCL algorithm, which itself has gone through a few improvements [13], and overall it has proved to be the best known algorithm:

- the modelled system does not have to be linear
- particles can approximate virtually all possible distributions
- it can tackle both the global localisation and the pose tracking problem
- it copes well with noisy sensor data
- particles can concentrate only in the interesting regions of the space
- the CPU workload and accuracy can be adjusted by changing the number of particles
- it is an easily applicable anytime algorithm

2.4 How does it work?

Implementing the MCL algorithm is not very difficult once we know how to realise a motion model (see section 3.1) and a sensor model (see section 3.2). Moreover, thanks to the discrete representation of the belief Bel(x), the MCL recursive equation 2.11 is easy to understand even without detailed knowledge of the motion and sensor models. As we learnt in section 2.3.5, the belief Bel(x) is a PDF which returns the probability that the robot pose is x. In particle filters we cannot evaluate Bel(x) for an arbitrary x (this is, in fact, required in the Mixture-MCL algorithm [13], and can be attained by using kd-trees [12]); instead we have a set of poses x^(i) and weights w^(i), which are simply the values of Bel(x) at x = x^(i).

The set of particles {x_0^(i), w_0^(i)} at t = 0 represents the knowledge about the system at the beginning of the iteration of the recursive equation 2.11. If we do not have any information about a possible pose of the robot, all n particles are uniformly distributed on the map (see figure 2.1) and all weights have the same value 1/n. In the MCL algorithm, instead of integrating over the continuous space (equation 2.11), we repeat the following steps exactly n times:

1. Sample a particle x_{t-1}^(i) from the belief distribution Bel(x_{t-1}) using the sampling importance resampling algorithm (section 2.4.1). In other words, the higher the weight w_{t-1}^(i) of a particle i, the larger the probability of picking this particle when sampling.

2. Apply the motion model p(x_t^(i) | x_{t-1}^(i), a_{t-1}) to the sampled particle x_{t-1}^(i). Intuitively, if the robot has moved by a distance d in a direction θ (both encoded in the action a_{t-1}), the particle x_{t-1}^(i) will be translated by the same distance and in the same direction (relative to the particle's current location and heading) to the new pose x_t^(i), but with some uncertainty (see equation 3.9).

3. Apply the sensor model p(o_t | x_t^(i)) to the new particle pose x_t^(i). In other words, calculate the probability of observing o_t (what the robot observes at time t) at the pose x_t^(i). Note that, because each pose x_t^(i) consists of two elements, a location and a heading direction, the world may look completely different in different directions even at the same location. Assign a new weight value to the i-th particle:

    w_t^(i) = p(o_t | x_t^(i))        (2.15)

After repeating the above steps n times, we obtain a new set of n particles {x_t^(i), w_t^(i)} comprising the new belief Bel(x_t). Some of them will be quickly eliminated (small weights), whereas others will concentrate in regions where the probability of observing o_t is higher (large weights). The effects of updating the belief Bel(x) according to the above procedure are shown in figure 2.1: the MCL algorithm quickly localises the robot on the map.

2.4.1 Sampling importance resampling (SIR)

In step 1 of the recursive procedure, we draw a single particle x^(i) from the belief distribution Bel(x) with probability proportional to the particle's weight w^(i). In the simplest approach we construct a cumulative density function CDF (a table of length n) from the particle set {x^(i), w^(i)}:

    CDF = [ (x^(1), w^(1)),
            (x^(2), w^(1) + w^(2)),
            ...,
            (x^(n), w^(1) + w^(2) + ... + w^(n)) ]        (2.16)

To draw a particle from the CDF so defined, we:

- generate a random number y ∈ [0, w^(1) + w^(2) + ... + w^(n)) (one can normalise the particle set before constructing the CDF table, in which case y ∈ [0, 1))
- use binary search to find the index i of a particle in the CDF table such that W^(i-1) ≤ y < W^(i), where W^(i) denotes the i-th cumulative sum of weights and W^(0) = 0
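The n-step loop above, together with the CDF-based resampling, can be sketched in a few lines. The toy one-dimensional world, the noise level and the triangular sensor likelihood below are invented for illustration; any motion and sensor model from chapter 3 could be plugged in instead.

```python
import bisect
import itertools
import random

def mcl_update(particles, weights, action, observation, motion_model, sensor_model):
    """One MCL iteration: resample (step 1), move (step 2), reweight (step 3)."""
    # CDF table of cumulative weights for resampling (equation 2.16).
    cdf = list(itertools.accumulate(weights))
    total = cdf[-1]
    new_particles, new_weights = [], []
    for _ in range(len(particles)):
        # Step 1: SIR - draw y in [0, total) and binary-search the CDF table.
        y = random.uniform(0.0, total)
        i = min(bisect.bisect_right(cdf, y), len(particles) - 1)
        # Step 2: sample a new pose from the motion model.
        x_new = motion_model(particles[i], action)
        # Step 3: the sensor model supplies the new importance factor.
        new_particles.append(x_new)
        new_weights.append(sensor_model(observation, x_new))
    return new_particles, new_weights

# Toy 1-D world: a pose is a single coordinate, an action is a displacement.
def motion_model(x, a):
    return x + a + random.gauss(0.0, 0.1)           # odometry with Gaussian noise

def sensor_model(o, x):
    return max(1e-12, 1.0 - min(abs(o - x), 1.0))   # likelihood peaks at x == o

random.seed(0)
particles = [random.uniform(0.0, 10.0) for _ in range(500)]   # unknown start
weights = [1.0 / 500] * 500
for _ in range(5):   # a stationary robot repeatedly sensing a landmark at 6.0
    particles, weights = mcl_update(particles, weights, 0.0, 6.0,
                                    motion_model, sensor_model)
```

After a few iterations the particle cloud collapses around the pose consistent with the observations, mirroring the behaviour shown in figure 2.1.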

Figure 2.1: The effects of updating the belief Bel(x) according to the recursive MCL equation 2.11 (each panel marks the true robot location). In the topmost figure all the particles are uniformly distributed on the map, since initially the robot pose is unknown. After a few updates the particles concentrate in the regions indicated by the last observations as the most probable locations of the robot. Finally, the number of suspected regions drops to one, which indicates that the robot has been successfully localised.

Chapter 3

Probabilistic Models for Bayesian Filters

3.1 Motion model

The motion model p(x' | x, a) answers the question: what is the probability of finding the robot at pose x' after performing the action a, if the robot's initial pose was x? The model is characterised by deterministic kinematic equations describing the robot's movement, and it must also contain information about the movement uncertainty. As shown in appendix A, the uncertainty of the robot's movement can be described by three types of errors which accumulate during motion. We can distinguish two mechanisms producing errors [7]:

- the evolution of already accumulated errors during motion
- the addition of new integration errors

Let p denote the pose of the robot:

    p = [x, y, Θ]^T        (3.1)

Then the uncertainty of the robot pose is expressed by a three-dimensional covariance matrix (see also definition B.12):

    Λ(p) = [ σ²_xx  σ²_xy  σ²_xΘ ]
           [ σ²_yx  σ²_yy  σ²_yΘ ]        (3.2)
           [ σ²_Θx  σ²_Θy  σ²_ΘΘ ]

If all components of the robot pose vector 3.1 were independent, all off-diagonal elements would equal zero. Unluckily this is not the case: for example, if the robot moves forward, uncertainty in the robot's initial heading will result in uncertainty in its location. Thus, the covariance matrix should evolve as the robot's pose does. In appendix A.3 we derived the iterative formula of robot motion:

    p' = f(p)        (3.3)

where each step consists of two parts:

1. a translation Δs in the direction Θ + α
2. a rotation by the angle ΔΘ

    f_x = x + Δs cos(Θ + α)
    f_y = y + Δs sin(Θ + α)        (3.4)
    f_Θ = Θ + ΔΘ

The covariance matrix evolves according to the formula (for details see [7]):

    Λ(p') = F(p̂) Λ(p) F(p̂)^T + R(Θ) E R(Θ)^T        (3.5)

where F(p̂) is the Jacobian of f, evaluated at p̂,

    F(p̂) = [ ∂f_x/∂x   ∂f_x/∂y   ∂f_x/∂Θ ]
           [ ∂f_y/∂x   ∂f_y/∂y   ∂f_y/∂Θ ]        (3.6)
           [ ∂f_Θ/∂x   ∂f_Θ/∂y   ∂f_Θ/∂Θ ]

describing the evolution of an initial error, R(Θ) is the ordinary rotation operator

    R(Θ) = [ cos Θ   -sin Θ   0 ]
           [ sin Θ    cos Θ   0 ]        (3.7)
           [ 0        0       1 ]

and E is the error matrix

    E = diag( k_T Δs,  0,  k_Θ ΔΘ + k_D Δs )        (3.8)

The error matrix introduces a new integration error into the resultant covariance matrix Λ(p') at every iteration step of equation 3.5 (see appendix B.3). In the matrix 3.8 we also assumed that the initial robot heading is along the X axis, so there is no pose uncertainty along the Y axis.

Equation 3.5 is an elegant way of expressing the evolution of the covariance matrix. On the other hand, in the motion model we are only interested in the probabilistic evolution of the particle poses (this is true only for particle filters), not in the explicit form of the covariance matrix. If we assume that the initial uncertainty is zero, and that translation and rotation are independent variables, we can use formula 3.4 directly, simply adding zero-mean Gaussian noise with the variances B.16. The robot motion formula takes the form:

    x_{n+1} = x_n + (Δs + N_trans) cos(Θ_n + α + N_rotation + N_drift)
    y_{n+1} = y_n + (Δs + N_trans) sin(Θ_n + α + N_rotation + N_drift)        (3.9)
    Θ_{n+1} = Θ_n + ΔΘ + N_rotation + N_drift

where the noise terms are modelled by Gaussian distributions:

    N_trans    ~ N(0, k_T Δs)
    N_rotation ~ N(0, k_Θ ΔΘ)        (3.10)
    N_drift    ~ N(0, k_D Δs)

Figure 3.1: The probabilistic evolution of the robot pose represented by a particle cloud. The cloud spreads and takes a characteristic banana shape after a few turns. The solid line represents the robot's trajectory measured by odometry sensors; the clouds of particles approximate the density p(x' | x, a).

Figure 3.1 depicts the effect of adding the noise factors 3.10 to the deterministic motion equation 3.4. The solid line represents the robot's trajectory as measured by odometry sensors (we used the Pioneer simulator here); the clouds of particles approximate the density p(x' | x, a): the more crowded the particles are, the more likely the robot's pose is. Starting from the leftmost figure, we can observe the evolution of the particle cloud, which spreads and takes a characteristic banana shape after a few turns.

3.2 Sensor model

The sensor model p(o | x) answers the question: what is the probability of observing o if the robot pose is x? To simplify the problem, we restrict our attention to simple range finders, in particular sonars. For range finders, calculating the value of p(o | x) requires the following steps (compare to [13]):

1. compute the average (most probable) locations of the observed targets (see figure 3.2)
2. model the sensor noise
3. for each single measurement, compute the probability of the measurement given both the robot pose and the target location
4. integrate the calculated probabilities into a single result

Since the first step is usually the task of low-level sensor drivers, we can leave out many tedious calculations related to sensor design, and focus solely on the probabilistic and geometrical aspects of the problem.
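Stepping back to the motion model for a moment: equations 3.9 and 3.10 translate almost directly into code. The sketch below samples from p(x' | x, a); the noise coefficients k_t, k_theta, k_d are made-up illustrative values, and the whole function is a sketch of the equations, not the T1000 implementation.

```python
import math
import random

def sample_motion_model(pose, ds, dtheta, alpha=0.0,
                        k_t=0.01, k_theta=0.05, k_d=0.002):
    """Sample a successor pose from p(x' | x, a) as in equation 3.9.

    pose = (x, y, theta); ds and dtheta are the odometry increments;
    alpha is the direction of the translation relative to the heading.
    The noise coefficients k_t, k_theta, k_d are illustrative values."""
    x, y, theta = pose
    # Equation 3.10: zero-mean Gaussians whose variances grow with the motion.
    n_trans = random.gauss(0.0, math.sqrt(k_t * abs(ds)))
    n_rot = random.gauss(0.0, math.sqrt(k_theta * abs(dtheta)))
    n_drift = random.gauss(0.0, math.sqrt(k_d * abs(ds)))
    heading = theta + alpha + n_rot + n_drift
    return (x + (ds + n_trans) * math.cos(heading),
            y + (ds + n_trans) * math.sin(heading),
            theta + dtheta + n_rot + n_drift)

# Propagating a particle cloud through repeated arc segments reproduces the
# spreading "banana" shape of figure 3.1.
random.seed(1)
cloud = [(0.0, 0.0, 0.0)] * 1000
for _ in range(10):
    cloud = [sample_motion_model(p, ds=0.5, dtheta=0.1) for p in cloud]
```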

Figure 3.2: The long continuous line (black) represents the robot moving along the walls of a small room. Sensor readings are denoted by radial beams (green lines), in most cases ending at the room walls (orange spots).

Figure 3.3: Sensor model geometry: l is the distance to the nearest target; the flat surface of the target is at an angle φ to the perpendicular [7].

3.2.1 Modelling of sensor noise

While the Bayesian Filter equations are quite general, an implementation of the sensor model depends to a large extent not only on the sensor design but also on the characteristics of the environment. The most popular way of modelling measurement uncertainty requires only the knowledge of the actual (l) and measured (d) distance to the target (figure 3.3); all objects are usually assumed to be flat surfaces that can be modelled by lines in a two-dimensional world. The sensor noise can then be approximated by a Gaussian distribution centred at l (see [5]). For sonars, however, we need a third parameter: the incidence angle φ of the sonar signal (figure 3.3). In the approach presented in [7], the sensor noise is modelled by two distributions:

1. the on-axis distribution, where φ = 0 (single reflections from the target)
2. the off-axis distribution, where φ is too large to detect a direct reflection of the sonar signal (multiple reflections)

An example of the on-axis distribution is shown in figure 3.4. The on-axis distribution models the following phenomena:

Figure 3.4: Sensor noise models. From the left: the on-axis distribution for φ = 0, an interpolation of the on-axis and off-axis distributions for φ = 10°, and the off-axis distribution for φ ≥ 15°.

- a true sensor return, described by a Gaussian distribution with mean d = l and variance σ²_sensor
- a random sensor return up to the distance to a target l (also called a false positive measurement), caused by unmodelled dynamic objects and described by a Poisson distribution (see [6]) with false positive rate λ_F
- a sensor max-range return with finite probability P_range (object detection failure), modelled as a narrow uniform density at d ≈ r, where r is the sensor range (compare to the approach proposed in [7])

Furthermore, the Gaussian distribution is attenuated by a linear factor β(l), as the chance of detecting any object decreases with increasing distance. The complete mathematical formula of the on-axis distribution can be expressed as [7]:

    p(d | l) =  λ_F exp(-λ_F d) + β(l) N(l, σ²_sensor)(d) exp(-λ_F d)    for d < l
                β(l) N(l, σ²_sensor)(d) exp(-λ_F l)                      for l ≤ d < r        (3.11)
                P_range                                                  for d ≥ r

The off-axis distribution (figure 3.4) models the multiple reflections of the sonar echo before returning, i.e.:

- a random sensor return up to the distance to a target l (as in the on-axis distribution), described by a Poisson distribution with false positive rate λ_F
- a multi-reflection return, also described by a Poisson distribution but with a higher rate parameter λ_R (we expect to detect something eventually)
- a sensor max-range return, the same as for the on-axis distribution

Finally, the formula of the off-axis distribution is [7]:

    p(d | l) =  λ_F exp(-λ_F d)                 for d < l
                λ_R exp(-λ_R d) exp(-λ_F l)     for l ≤ d < r        (3.12)
                P_range                         for d ≥ r

If the incidence angle φ is between 0 and a certain threshold value, the PDF of the modelled sensor noise can be obtained as an interpolation of the on-axis and off-axis distributions (both densities should be normalised before interpolating).
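The piecewise densities 3.11 and 3.12 are straightforward to code. All numerical parameter values below (λ_F, λ_R, σ_sensor, P_range, the range r, the attenuation β(l) and the blending threshold) are invented for illustration; only the piecewise structure follows the equations.

```python
import math

SIGMA_SENSOR = 0.05   # std of a true return (illustrative)
LAMBDA_F = 0.05       # false-positive (random return) rate
LAMBDA_R = 0.5        # multi-reflection return rate
P_RANGE = 0.1         # probability of a max-range return
R = 5.0               # sensor range r

def beta(l):
    """Linear attenuation of the true return with distance (assumed shape)."""
    return max(0.0, 1.0 - l / (2.0 * R))

def gaussian(d, mean, sigma):
    return math.exp(-0.5 * ((d - mean) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def on_axis(d, l):
    """Equation 3.11: density of measuring d given true distance l, phi = 0."""
    if d < l:
        return (LAMBDA_F + beta(l) * gaussian(d, l, SIGMA_SENSOR)) * math.exp(-LAMBDA_F * d)
    if d < R:
        return beta(l) * gaussian(d, l, SIGMA_SENSOR) * math.exp(-LAMBDA_F * l)
    return P_RANGE

def off_axis(d, l):
    """Equation 3.12: random and multi-reflection returns only, no direct echo."""
    if d < l:
        return LAMBDA_F * math.exp(-LAMBDA_F * d)
    if d < R:
        return LAMBDA_R * math.exp(-LAMBDA_R * d) * math.exp(-LAMBDA_F * l)
    return P_RANGE

def sensor_density(d, l, phi, phi_max=math.radians(15.0)):
    """Blend the two densities for intermediate incidence angles."""
    t = min(abs(phi) / phi_max, 1.0)
    return (1.0 - t) * on_axis(d, l) + t * off_axis(d, l)
```

Note that, as written, the piecewise branches are not normalised; in practice both densities would be normalised before the interpolation, as remarked above.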

Figure 3.5: Transformation of sensor measurements. The affine transform is entirely defined by two pairs of locations: {x_t, x_{t-1}} and {x'_t, x'_{t-1}}. The transformation does not change the angles α_1 and α_2. Because the scale of the transform, |x'_t - x'_{t-1}| / |x_t - x_{t-1}|, is usually different from 1, in order to preserve the distances to the targets d_1 and d_2, their locations must be transformed separately. We also do not assume that the robot moves along a straight line between the poses x_{t-1} and x_t.

The resultant interpolated PDF is thus a function of three variables: the measured distance to a target d, the real distance to a target l, and the incidence angle φ (see figure 3.4). Note that the PDF must also be determined for l > r, in order to give correct densities for d < r when a target is just beyond the sensor range.

3.2.2 Integration of sensor measurements

In order to calculate the final value of the sensor model p(o_t | x_t) for an arbitrary observation o_t, given that the previous robot pose was x_{t-1}, we perform the following steps (see figure 3.5):

1. Collect all sensor measurements m_i contributing to the observation o_t and recorded between times t-1 and t, where each measurement m_i contains the target location and the robot location.

2. For each measurement m_i: calculate the distance d_i between the robot and the target, and the angle α_i between the reading direction and the robot's heading direction (see figure 3.5).

3. Transform the robot locations at which the measurements m_i were recorded into new coordinates (see figure 3.5); the affine transform is entirely defined by two pairs of locations: {x_t, x_{t-1}} and {x'_t, x'_{t-1}}. In the MCL algorithm the first pair describes the robot's move from time t-1 up to t, whereas the second pair describes the move of a particle within the same time period.

4. In the new coordinates: restore the locations of the targets from d_i and α_i; this step is necessary so as to keep the measured distances d_i unchanged.

5. For each measurement m_i (in the new coordinates): extract from the world model (see section 3.3) the actual distance l_i to the target and the incidence angle φ_i.

6. For each measurement m_i: calculate the probability p_i of observing m_i, using the sensor noise model of section 3.2.1 and the three parameters d_i, l_i and φ_i.

7. Finally, assuming conditional independence of the individual measurements m_i, calculate the probability p(o_t | x_t):

    p(o_t | x_t) = ∏_i p_i        (3.13)

where p_i is the probability of observing m_i (after transforming to the new coordinates).

Because of the use of affine transforms, the above procedure differs in some points from what is known in the literature. Its most important features are:

- It enables the MCL algorithm to process several measurements at a time, each of which can be recorded at any time between t-1 and t.
- The MCL algorithm does not have to be synchronised with the sensor measurements; the evaluations of possible robot poses (particles) can be performed at any time.
- The CPU workload can be smoothly adjusted by changing the length of the period [t-1, t] and the number of measurements used.
- The additional CPU time needed is negligible, as the real bottleneck of the MCL algorithm is the procedure for extracting the parameters l_i and φ_i (see step 5).

3.3 World model

The concept of the world model arose out of the idea of separating the MCL algorithm (including the motion and sensor models) from the implementation of the robot's world. Though an absolute separation was not feasible, the only point at which the world model interacts with the MCL algorithm was reduced to the extraction of the distance l_i to a target and the incidence angle φ_i, given the measurement m_i (see step 5).
The main features of the current implementation of the world model are:

- the world is two-dimensional
- the world is bounded by a rectangle
- all objects have a floating-point vector representation
- all objects are lines
- the world is stored in files in two formats compatible with those from Saphira [13]

Figure 3.6: The world map is divided into a set of adjacent square cells. Each cell contains pointers to all objects which can be visible to the robot from within the cell (defined by the robot's sonar range). The map consists of about 1200 cells.

The world map is divided into a set of adjacent square cells (see figure 3.6), where each cell i contains two things:

1. pointers to all local objects which can be visible to the robot from within the cell (defined by the robot's sonar range)
2. the minimal c(i)_min and maximal c(i)_max distance to all local objects

The additional resource overhead needed to maintain all the data structures is not large and is approximately proportional to the number of cells. The overall growth in the efficiency of the MCL algorithm is especially visible for large world maps. For example, for the world map of figure 3.6 the MCL algorithm works about 20 times faster, and on a Pentium IV 2.6 GHz it is possible to update up to … particles every second.

The procedure of extraction of the parameters l_i and φ_i, given the measurement m_i, involves basic geometrical transformations and consists of:

1. identifying the cell on the basis of the coordinates of the measurement m_i
2. calculating the reading direction α_i from the target and the robot locations
3. finding the nearest target in direction α_i among the objects visible from the cell
4. calculating l_i and φ_i

Each cell also contains information about the minimal and maximal distance to the local objects, which can be used for a smart initialisation of particles (see figure 3.7). The whole procedure is only slightly slower than the regular one (also O(n)), and needs at least one measurement:

1. Calculate the minimal distance min and the maximal distance max from the robot to the recorded target(s), given the group of measurements m_i.
2. Collect all the cells {i} such that c(i)_min < max and c(i)_max > min. [14]
3. Generate n random particles by repeating the following steps: pick at random one of the collected cells, then uniformly generate a single particle within the chosen cell.

This procedure has a higher potential: for example, one can generate particles which are already orientated towards the objects.
One must add that the current version of the procedure is not resistant against false measurements, and does not take their uncertainties into account.

[10] We do not assume the Doppler effect! The affine transform changes the scale of objects, while it does not change their proportions.
[11] In addition to the number of particles.
[12] The model of the sensor noise makes many assumptions about the shape of the world's objects, and of course, the number of world dimensions is two (it can be changed relatively easily).
[13] Unfortunately Saphira uses more formats, but the two supported ones are the most common.
[14] One must be careful about using the minimal distance, since some of the close objects might not be recorded by the robot.
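The three-step initialisation above might be sketched as follows. This is a hedged Python sketch: each cell is assumed to be a square described by its corner, size and precomputed distance bounds, with hypothetical field names.

```python
import math
import random

def smart_init(cells, distances, n):
    # Step 1: minimal and maximal measured distance to the recorded targets.
    d_min, d_max = min(distances), max(distances)
    # Step 2: keep only the cells whose precomputed interval of distances to
    # local objects overlaps the measured interval [d_min, d_max].
    candidates = [c for c in cells if c['c_min'] < d_max and c['c_max'] > d_min]
    # Step 3: draw each particle uniformly inside a randomly chosen cell,
    # with a uniformly random heading.
    particles = []
    for _ in range(n):
        c = random.choice(candidates)
        particles.append((random.uniform(c['x'], c['x'] + c['size']),
                          random.uniform(c['y'], c['y'] + c['size']),
                          random.uniform(0.0, 2.0 * math.pi)))
    return particles
```

All the expensive work (the per-cell distance bounds) is precomputed off-line, which is why the on-line part remains only slightly slower than uniform initialisation.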

Figure 3.7: The improved initialisation of particles. Based on sensor measurements, all the particles are generated only within the restricted regions.

Chapter 4

Conclusions

Although the system has not yet been tested on real robots, I experimented with the Pioneer robot simulator. No matter how perfect the simulator is, it remains only a simulator. Therefore, the main purpose of the experiments was not to obtain any concrete numerical results, but rather to develop, test and optimise the software. The experiments verified most of the strong and weak points of the basic MCL algorithm. I mention here only a few major problems.

The tradeoff between the number of particles and efficiency. Since the belief distribution Bel(x) can contain only a finite number of particles, it is possible that no particles will be initialised close to the actual pose of the robot. [1] To make matters worse, even supposing there are some particles there, because the algorithm is probabilistic, those particles may not survive. This scenario is especially probable if the sensors are not very accurate, so that the particle weights will not be significantly higher than the average. A trivial remedy is to increase the number of particles, which is not satisfying, though.

Movement approximation. A solution to the aforementioned problem might be to add some more uncertainty to the motion model. The particles would then penetrate a larger space and finally localise the robot. This approach works well in combination with noisy sensors - even if initially a group of particles is not very precisely localised, it can gradually adjust or synchronise with the true robot pose while moving. Unluckily, such a group of already localised particles tends to spread very quickly, which is particularly visible in the absence of (non-collinear) sensor measurements.

The sensor noise model. Paradoxically, accurate sensors (plus a narrow sensor distribution) make the MCL algorithm converge too quickly - this gives false results, especially if the sensor accuracy does not correspond to its noise model.
In practice, the modelled sensor noise should always overestimate the actual one, which gives particles more chance to synchronise with the optimal pose (at least optimal in the context of the sensor readings).

Summarising, even perfectly modelled motion and sensor noise are not enough: the MCL algorithm will still not be optimal. On the other hand, an excessive increase in the number of particles makes the system inefficient. Are there any solutions to these problems? The two improvements proposed in this report could be the base for more sophisticated algorithms. Moreover, they can easily be combined with, for example, the mixture MCL algorithm [13] described briefly at the end.

[1] In three-dimensional space.

The affine transformation of sensor measurements (figure 3.5) would allow using a procedure which selects out the measurements that sketch characteristic shapes or objects in the environment. Obviously, one cannot have too much influence on the sensor measurements, but such a smart selection, even together with controlling the robot itself, [2] may help with the problem of premature convergence of the algorithm to false solutions.

With the improved particle initialisation (figure 3.7), the conventional MCL algorithm needs approximately two times (the map 2.1) or even four times (the map 4.1) fewer particles to achieve the same effectiveness as with the standard uniform initialisation. These results vary depending on the map, the sensor readings, the number of used cells, etc. The main strengths of the algorithm of figure 3.7 are its simplicity, its speed [3] (all time-consuming calculations are done off-line), and its moderate memory requirements (number of cells × two floating point numbers).

The mixture MCL algorithm [13] is an interesting algorithm which combines the regular MCL and the so-called dual MCL algorithm according to the scheme:

Mixture MCL = (1 - Φ) · Regular MCL + Φ · Dual MCL,  0 ≤ Φ ≤ 1

In the regular MCL (chapter 2.4) samples are drawn from p(x' | x, a) Bel(x), and then the weights are calculated from the sensor model p(o | x'). The dual MCL inverts this process - the proposal distribution is p(x' | o), and the weights are now directly proportional to the belief Bel(x). The dual MCL requires sampling from p(x' | o), which can be realised by using kernel-density trees [12] - a piecewise continuous probability distribution built out of the particle set. The mixture MCL algorithm has been shown to be superior to the regular MCL [13]; in particular, the localisation error can be an order of magnitude smaller than in the conventional MCL.

[2] To explore the regions that seem to be characteristic. Going further, based on the map, one can even create an optimal exploration strategy.
[3] Otherwise it would be better to simply increase the number of particles.
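The mixture scheme itself is simple to express; the hard part (sampling from p(x | o), e.g. with kernel-density trees) is hidden behind the `dual_step` placeholder in this hedged sketch:

```python
import random

def mixture_mcl_step(particles, phi, regular_step, dual_step):
    # With probability phi a new sample comes from the dual proposal p(x | o);
    # otherwise it comes from the regular MCL proposal p(x' | x, a) Bel(x).
    # Both steps are supplied as callables, since their internals (motion
    # model, kernel-density trees) are beyond this sketch.
    return [dual_step(p) if random.random() < phi else regular_step(p)
            for p in particles]
```

With Φ = 0 this degenerates to the regular MCL, and with Φ = 1 to the pure dual MCL; intermediate values give the mixture behaviour described above.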


Figure 4.1: The Monte Carlo localisation on a large map. All particles are initialised only in the vicinity of objects (the topmost figure). After about 15 seconds the MCL algorithm successfully localises the robot's pose (the bottom figure).

Appendix A

Robot motion

Modern mobile robots can walk, jump, swim and fly, but even the simplest way of moving - rolling in a two-dimensional space - can be surprisingly complex, and the exact robot motion can only be approximated. Even though in this report we discuss only the motion of rolling robots, the MCL algorithm remains the same, with the exception of the so-called motion model (chapter 3.1).

A.1 Position estimation

Position estimation is an inherent part of all navigation systems. One can distinguish two general groups of position estimation methods: internal and external [7]. Internal methods (odometry) are based on open-loop sensors, i.e. they do not rely on any feedback from the environment. External methods use sensor measurements taken directly from the environment to record and correct the actual robot pose. The most popular internal methods use wheel encoders registering the number of counts of the revolving shaft while the robot is moving. External methods can employ the Global Positioning System (GPS), external beacons such as sonar or laser, but also the not very accurate compass. Whereas Saphira can support external methods, in the motion model (see chapter 3.1) it is usually sufficient to use only odometry readings.

A.2 Position integration

Odometric sensors give information about the distance the robot moves, its speed, and the relative changes of the robot's heading direction. It is necessary to integrate this information over time to obtain an estimate of the robot pose. Assuming that we know the correspondence between the number of counts and the travelled distance, and that the robot has two wheels (left and right), it is clearly visible from figure A.1 that in the time period Δt the robot changes its pose according to the following rule:

Δθ = 2π(Δr - Δl) / (2πd) = (Δr - Δl) / d
Δs = (Δr + Δl) / 2
v = Δs / Δt    (A.1)

Figure A.1: Robot displacement within the time period Δt.

where the angle Δθ is given in radians, Δr and Δl are the distances travelled by the right and left wheel, d is the distance between the wheels, Δs is the overall movement of the robot and v is its speed. If at the time t_n the global pose [1] of the robot P_n is given by (x_n, y_n, Θ_n), then the pose P_{n+1} at the time t_{n+1} can be calculated from the simple rule:

x_{n+1} = x_n + Δs cos(Θ_n)
y_{n+1} = y_n + Δs sin(Θ_n)
Θ_{n+1} = Θ_n + ΔΘ    (A.2)

Both equation A.1 and equation A.2 are no more than a crude simplification of the real physical motion; however, they can be shown to converge to the ideal solution when Δt → 0. While equation A.1 is usually precise enough, as the sensor update rate can be as small as 10 ms, equation A.2 can be a source of large integration errors. This is especially the case for external control systems, [2] where the time period Δt is typically larger and depends on the communication channel used. [3]

A.3 Extended robot motion

Equation A.2 is valid only for small values of Δs and ΔΘ, ideally when the robot moves exactly in the direction given at the beginning of each update step.

[1] Or, more precisely, the pose relative to the robot's starting point.
[2] The Saphira control system and the T1000 platform are good examples.
[3] In fact, in the Saphira system all sensor readings and the pose of the robot are updated every 100 ms (at least as accessible to the Saphira user), so the current pose is updated internally. However, to simulate particle motion, the relative movement Δs and the angle ΔΘ can be extracted directly from the current pose of the robot. Note that in this case there will not be any integration errors, but only the instantaneous robot heading error, which can be entirely removed (see the next chapter A.3).
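Equations A.1 and A.2 can be turned into a small dead-reckoning routine; a Python sketch with illustrative argument names:

```python
import math

def odometry_update(pose, d_left, d_right, wheel_base):
    # Equation A.1: encoder increments of the left and right wheel give the
    # heading change and the distance travelled within one update step.
    d_theta = (d_right - d_left) / wheel_base
    d_s = (d_right + d_left) / 2.0
    # Equation A.2: integrate the pose, assuming the robot moved along the
    # heading it had at the beginning of the step.
    x, y, theta = pose
    return (x + d_s * math.cos(theta),
            y + d_s * math.sin(theta),
            theta + d_theta)
```

Because equation A.2 holds only for small increments, such a routine is meant to be called at a high update rate; coarse steps accumulate exactly the integration errors discussed above.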

Figure A.2: Realistic robot motion. The initial robot heading direction Θ is usually different from the resultant motion direction.

Figure A.2 depicts a more realistic case of robot motion [7] - the initial robot heading direction is generally different from the resultant direction of motion. If we denote this difference by the angle α, we can write new equations for the extended robot motion:

x_{n+1} = x_n + Δs cos(Θ_n + α)
y_{n+1} = y_n + Δs sin(Θ_n + α)
Θ_{n+1} = Θ_n + ΔΘ    (A.3)

The only problem here is to find the angle α. Assuming that the angular velocity of the robot is constant during the update time period Δt, we can write that:

α ≈ (Θ_{n+1} - Θ_n) / 2    (A.4)

On the other hand, often both the initial pose P_n and the resultant pose P_{n+1} are known, and we only need the value of the angle α. [4] In this case the exact value of α can be obtained directly from equation A.3:

α = arctan(Δy / Δx) - Θ_n    (A.5)

where Δx = x_{n+1} - x_n and Δy = y_{n+1} - y_n are the relative robot displacements.

[4] To simulate exact particle motion, for instance.
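A sketch of the extended motion model, together with the exact recovery of α from two known poses (equation A.5; `atan2` is used instead of a plain arctangent to keep the quadrant correct):

```python
import math

def extended_motion_update(pose, d_s, d_theta):
    x, y, theta = pose
    # Equation A.4: with constant angular velocity over the step, the drift
    # angle alpha is approximately half of the heading change.
    alpha = d_theta / 2.0
    # Equation A.3: the displacement is applied along theta + alpha instead
    # of the initial heading theta.
    return (x + d_s * math.cos(theta + alpha),
            y + d_s * math.sin(theta + alpha),
            theta + d_theta)

def drift_angle(pose_from, pose_to):
    # Equation A.5: the exact drift angle recovered from two known poses.
    dx = pose_to[0] - pose_from[0]
    dy = pose_to[1] - pose_from[1]
    return math.atan2(dy, dx) - pose_from[2]
```

Applying `extended_motion_update` and then `drift_angle` to the resulting pose pair recovers the same α that was used, which is how a particle's motion can be replayed exactly from two recorded poses.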

Appendix B

A few notions from statistics

B.1 Measurements and errors

Measurement is not only the source of information but also of various errors. The process of measurement consists in comparing the measured value with a suitable measurement standard. Basically, the result of a measurement gives two pieces of information: the unit (meter, second, kilogram, etc.) and the numerical value. Because the value so obtained is never absolutely error-free, the result must contain additional information describing its accuracy. One can distinguish three types of errors:

1. Blunders or mistakes. This type of error can be caused, for example, by a wrongly chosen algorithm.
2. Systematic errors are a consequence of imperfect sensors or an inaccurate measurement method. The calibration procedure usually eliminates most of these errors, but they still remain subject to sensor drift [1] and imprecise linearisation methods.
3. Random errors cannot be eliminated during calibration; they are the effect of underlying physical processes like noise in electronic circuits, or unknown (impossible to model) interactions with the environment.

The errors whose elimination is not possible are frequently called measurement uncertainties.

B.2 Random variables

It is advantageous to consider a random error as a random variable, which allows us to use powerful statistical methods. A continuous random variable can be described by a probability density function (PDF) p(x), which must satisfy the following conditions:

p(x) ≥ 0 for all x,  ∫_{-∞}^{+∞} p(x) dx = 1    (B.1)

The n-th moment of a random variable X is defined as:

M_n = E[X^n] = ∫_{-∞}^{+∞} x^n p(x) dx    (B.2)

[1] Which itself depends on environmental conditions.

The 1st moment of a random variable X is called the expected or mean value:

µ = E[X] = ∫_{-∞}^{+∞} x p(x) dx    (B.3)

The n-th moment about the mean is defined as:

C_n = E[(X - µ)^n] = ∫_{-∞}^{+∞} (x - µ)^n p(x) dx    (B.4)

The second moment about the mean is called the variance of the variable X:

σ² = E[(X - µ)(X - µ)] = E[X²] - E[X]²    (B.5)

The standard deviation is the square root of the variance and is a measure of the uncertainty or spread of the random variable X around its mean value. A random variable X is said to be Gaussian if its PDF is defined as:

p(x) = (1 / (√(2π) σ)) exp(-(x - µ)² / (2σ²))    (B.6)

where µ and σ² are the mean and the variance of X. If a measured quantity is described by a Gaussian variable X, then 68% of all measured values of X will fall within one standard deviation of the mean.

If we denote by X_1, X_2, ..., X_n a set of n continuous random variables (see [6]):

X = [X_1, X_2, ..., X_n]^T    (B.7)

then we can define the joint probability density function p(x_1, ..., x_n), which, similarly to condition B.1, must satisfy the following relations:

p(x_1, ..., x_n) ≥ 0 for all x_1, ..., x_n,  ∫...∫ p(x_1, ..., x_n) dx_1 ... dx_n = 1    (B.8)

The random variables X_1, X_2, ..., X_n are said to be independent if the joint probability density function can be factorised:

p(X) = p(X_1, X_2, ..., X_n) = p(X_1) p(X_2) ... p(X_n)    (B.9)

Let µ denote the mean vector:

µ_i = E[X_i],  µ = [µ_1, µ_2, ..., µ_n]^T    (B.10)

and define the covariance σ²_ij of the variables X_i and X_j as:

σ²_ij = E[(X_i - µ_i)(X_j - µ_j)]    (B.11)

Note that σ²_ij = σ²_ji. We can use these covariance values to form the covariance matrix:

Λ = ( σ²_11 ... σ²_1n
      ...        ...
      σ²_n1 ... σ²_nn )    (B.12)
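Equations B.5 and B.6 are easy to check numerically; a minimal sketch:

```python
import math

def gaussian_pdf(x, mu, sigma):
    # Equation B.6: the Gaussian probability density function.
    return (math.exp(-((x - mu) ** 2) / (2.0 * sigma ** 2))
            / (math.sqrt(2.0 * math.pi) * sigma))

def variance(samples):
    # Equation B.5: sigma^2 = E[X^2] - E[X]^2, estimated from samples by
    # replacing the expectations with sample averages.
    n = len(samples)
    e_x = sum(samples) / n
    e_x2 = sum(s * s for s in samples) / n
    return e_x2 - e_x ** 2
```

For the standard Gaussian (µ = 0, σ = 1) the density at the mean is 1/√(2π), and for any sample set the E[X²] - E[X]² form agrees with the averaged squared deviations from the mean.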


Graphbased. Kalman filter. Particle filter. Three Main SLAM Paradigms. Robot Mapping. Least Squares Approach to SLAM. Least Squares in General

Graphbased. Kalman filter. Particle filter. Three Main SLAM Paradigms. Robot Mapping. Least Squares Approach to SLAM. Least Squares in General Robot Mapping Three Main SLAM Paradigms Least Squares Approach to SLAM Kalman filter Particle filter Graphbased Cyrill Stachniss least squares approach to SLAM 1 2 Least Squares in General! Approach for

More information

Localization of Multiple Robots with Simple Sensors

Localization of Multiple Robots with Simple Sensors Proceedings of the IEEE International Conference on Mechatronics & Automation Niagara Falls, Canada July 2005 Localization of Multiple Robots with Simple Sensors Mike Peasgood and Christopher Clark Lab

More information

Probabilistic Robotics

Probabilistic Robotics Probabilistic Robotics FastSLAM Sebastian Thrun (abridged and adapted by Rodrigo Ventura in Oct-2008) The SLAM Problem SLAM stands for simultaneous localization and mapping The task of building a map while

More information

USING 3D DATA FOR MONTE CARLO LOCALIZATION IN COMPLEX INDOOR ENVIRONMENTS. Oliver Wulf, Bernardo Wagner

USING 3D DATA FOR MONTE CARLO LOCALIZATION IN COMPLEX INDOOR ENVIRONMENTS. Oliver Wulf, Bernardo Wagner USING 3D DATA FOR MONTE CARLO LOCALIZATION IN COMPLEX INDOOR ENVIRONMENTS Oliver Wulf, Bernardo Wagner Institute for Systems Engineering (RTS/ISE), University of Hannover, Germany Mohamed Khalaf-Allah

More information

Computer Vision Group Prof. Daniel Cremers. 4. Probabilistic Graphical Models Directed Models

Computer Vision Group Prof. Daniel Cremers. 4. Probabilistic Graphical Models Directed Models Prof. Daniel Cremers 4. Probabilistic Graphical Models Directed Models The Bayes Filter (Rep.) (Bayes) (Markov) (Tot. prob.) (Markov) (Markov) 2 Graphical Representation (Rep.) We can describe the overall

More information

Robotics. Chapter 25-b. Chapter 25-b 1

Robotics. Chapter 25-b. Chapter 25-b 1 Robotics Chapter 25-b Chapter 25-b 1 Particle Filtering Particle filtering uses a population of particles (each particle is a state estimate) to localize a robot s position. This is called Monte Carlo

More information

Ultrasonic Multi-Skip Tomography for Pipe Inspection

Ultrasonic Multi-Skip Tomography for Pipe Inspection 18 th World Conference on Non destructive Testing, 16-2 April 212, Durban, South Africa Ultrasonic Multi-Skip Tomography for Pipe Inspection Arno VOLKER 1, Rik VOS 1 Alan HUNTER 1 1 TNO, Stieltjesweg 1,

More information

Zürich. Roland Siegwart Margarita Chli Martin Rufli Davide Scaramuzza. ETH Master Course: L Autonomous Mobile Robots Summary

Zürich. Roland Siegwart Margarita Chli Martin Rufli Davide Scaramuzza. ETH Master Course: L Autonomous Mobile Robots Summary Roland Siegwart Margarita Chli Martin Rufli Davide Scaramuzza ETH Master Course: 151-0854-00L Autonomous Mobile Robots Summary 2 Lecture Overview Mobile Robot Control Scheme knowledge, data base mission

More information

Probabilistic Robotics

Probabilistic Robotics Probabilistic Robotics Sebastian Thrun Wolfram Burgard Dieter Fox The MIT Press Cambridge, Massachusetts London, England Preface xvii Acknowledgments xix I Basics 1 1 Introduction 3 1.1 Uncertainty in

More information

Introduction to Mobile Robotics Probabilistic Motion Models

Introduction to Mobile Robotics Probabilistic Motion Models Introduction to Mobile Robotics Probabilistic Motion Models Wolfram Burgard, Michael Ruhnke, Bastian Steder 1 Robot Motion Robot motion is inherently uncertain. How can we model this uncertainty? Dynamic

More information

CLASSIFICATION WITH RADIAL BASIS AND PROBABILISTIC NEURAL NETWORKS

CLASSIFICATION WITH RADIAL BASIS AND PROBABILISTIC NEURAL NETWORKS CLASSIFICATION WITH RADIAL BASIS AND PROBABILISTIC NEURAL NETWORKS CHAPTER 4 CLASSIFICATION WITH RADIAL BASIS AND PROBABILISTIC NEURAL NETWORKS 4.1 Introduction Optical character recognition is one of

More information

Mobile Robot Mapping and Localization in Non-Static Environments

Mobile Robot Mapping and Localization in Non-Static Environments Mobile Robot Mapping and Localization in Non-Static Environments Cyrill Stachniss Wolfram Burgard University of Freiburg, Department of Computer Science, D-790 Freiburg, Germany {stachnis burgard @informatik.uni-freiburg.de}

More information

Robot Mapping. A Short Introduction to the Bayes Filter and Related Models. Gian Diego Tipaldi, Wolfram Burgard

Robot Mapping. A Short Introduction to the Bayes Filter and Related Models. Gian Diego Tipaldi, Wolfram Burgard Robot Mapping A Short Introduction to the Bayes Filter and Related Models Gian Diego Tipaldi, Wolfram Burgard 1 State Estimation Estimate the state of a system given observations and controls Goal: 2 Recursive

More information

Apprenticeship Learning for Reinforcement Learning. with application to RC helicopter flight Ritwik Anand, Nick Haliday, Audrey Huang

Apprenticeship Learning for Reinforcement Learning. with application to RC helicopter flight Ritwik Anand, Nick Haliday, Audrey Huang Apprenticeship Learning for Reinforcement Learning with application to RC helicopter flight Ritwik Anand, Nick Haliday, Audrey Huang Table of Contents Introduction Theory Autonomous helicopter control

More information

Function approximation using RBF network. 10 basis functions and 25 data points.

Function approximation using RBF network. 10 basis functions and 25 data points. 1 Function approximation using RBF network F (x j ) = m 1 w i ϕ( x j t i ) i=1 j = 1... N, m 1 = 10, N = 25 10 basis functions and 25 data points. Basis function centers are plotted with circles and data

More information

2017 Summer Course on Optical Oceanography and Ocean Color Remote Sensing. Monte Carlo Simulation

2017 Summer Course on Optical Oceanography and Ocean Color Remote Sensing. Monte Carlo Simulation 2017 Summer Course on Optical Oceanography and Ocean Color Remote Sensing Curtis Mobley Monte Carlo Simulation Delivered at the Darling Marine Center, University of Maine July 2017 Copyright 2017 by Curtis

More information

MULTI-DIMENSIONAL MONTE CARLO INTEGRATION

MULTI-DIMENSIONAL MONTE CARLO INTEGRATION CS580: Computer Graphics KAIST School of Computing Chapter 3 MULTI-DIMENSIONAL MONTE CARLO INTEGRATION 2 1 Monte Carlo Integration This describes a simple technique for the numerical evaluation of integrals

More information

Scan Matching. Pieter Abbeel UC Berkeley EECS. Many slides adapted from Thrun, Burgard and Fox, Probabilistic Robotics

Scan Matching. Pieter Abbeel UC Berkeley EECS. Many slides adapted from Thrun, Burgard and Fox, Probabilistic Robotics Scan Matching Pieter Abbeel UC Berkeley EECS Many slides adapted from Thrun, Burgard and Fox, Probabilistic Robotics Scan Matching Overview Problem statement: Given a scan and a map, or a scan and a scan,

More information

ADAPTIVE TILE CODING METHODS FOR THE GENERALIZATION OF VALUE FUNCTIONS IN THE RL STATE SPACE A THESIS SUBMITTED TO THE FACULTY OF THE GRADUATE SCHOOL

ADAPTIVE TILE CODING METHODS FOR THE GENERALIZATION OF VALUE FUNCTIONS IN THE RL STATE SPACE A THESIS SUBMITTED TO THE FACULTY OF THE GRADUATE SCHOOL ADAPTIVE TILE CODING METHODS FOR THE GENERALIZATION OF VALUE FUNCTIONS IN THE RL STATE SPACE A THESIS SUBMITTED TO THE FACULTY OF THE GRADUATE SCHOOL OF THE UNIVERSITY OF MINNESOTA BY BHARAT SIGINAM IN

More information

Particle Filters. CSE-571 Probabilistic Robotics. Dependencies. Particle Filter Algorithm. Fast-SLAM Mapping

Particle Filters. CSE-571 Probabilistic Robotics. Dependencies. Particle Filter Algorithm. Fast-SLAM Mapping CSE-571 Probabilistic Robotics Fast-SLAM Mapping Particle Filters Represent belief by random samples Estimation of non-gaussian, nonlinear processes Sampling Importance Resampling (SIR) principle Draw

More information

Particle Filtering. CS6240 Multimedia Analysis. Leow Wee Kheng. Department of Computer Science School of Computing National University of Singapore

Particle Filtering. CS6240 Multimedia Analysis. Leow Wee Kheng. Department of Computer Science School of Computing National University of Singapore Particle Filtering CS6240 Multimedia Analysis Leow Wee Kheng Department of Computer Science School of Computing National University of Singapore (CS6240) Particle Filtering 1 / 28 Introduction Introduction

More information

ELEC Dr Reji Mathew Electrical Engineering UNSW

ELEC Dr Reji Mathew Electrical Engineering UNSW ELEC 4622 Dr Reji Mathew Electrical Engineering UNSW Review of Motion Modelling and Estimation Introduction to Motion Modelling & Estimation Forward Motion Backward Motion Block Motion Estimation Motion

More information

Geometrical Feature Extraction Using 2D Range Scanner

Geometrical Feature Extraction Using 2D Range Scanner Geometrical Feature Extraction Using 2D Range Scanner Sen Zhang Lihua Xie Martin Adams Fan Tang BLK S2, School of Electrical and Electronic Engineering Nanyang Technological University, Singapore 639798

More information

SYDE Winter 2011 Introduction to Pattern Recognition. Clustering

SYDE Winter 2011 Introduction to Pattern Recognition. Clustering SYDE 372 - Winter 2011 Introduction to Pattern Recognition Clustering Alexander Wong Department of Systems Design Engineering University of Waterloo Outline 1 2 3 4 5 All the approaches we have learned

More information

Statistical Techniques in Robotics (16-831, F12) Lecture#05 (Wednesday, September 12) Mapping

Statistical Techniques in Robotics (16-831, F12) Lecture#05 (Wednesday, September 12) Mapping Statistical Techniques in Robotics (16-831, F12) Lecture#05 (Wednesday, September 12) Mapping Lecturer: Alex Styler (in for Drew Bagnell) Scribe: Victor Hwang 1 1 Occupancy Mapping When solving the localization

More information

Computer Vision Group Prof. Daniel Cremers. 4. Probabilistic Graphical Models Directed Models

Computer Vision Group Prof. Daniel Cremers. 4. Probabilistic Graphical Models Directed Models Prof. Daniel Cremers 4. Probabilistic Graphical Models Directed Models The Bayes Filter (Rep.) (Bayes) (Markov) (Tot. prob.) (Markov) (Markov) 2 Graphical Representation (Rep.) We can describe the overall

More information

Centre for Autonomous Systems

Centre for Autonomous Systems Robot Henrik I Centre for Autonomous Systems Kungl Tekniska Högskolan hic@kth.se 27th April 2005 Outline 1 duction 2 Kinematic and Constraints 3 Mobile Robot 4 Mobile Robot 5 Beyond Basic 6 Kinematic 7

More information

Elastic Correction of Dead-Reckoning Errors in Map Building

Elastic Correction of Dead-Reckoning Errors in Map Building Elastic Correction of Dead-Reckoning Errors in Map Building Matteo Golfarelli Dario Maio Stefano Rizzi DEIS, Università di Bologna Bologna, Italy Abstract A major problem in map building is due to the

More information

10-701/15-781, Fall 2006, Final

10-701/15-781, Fall 2006, Final -7/-78, Fall 6, Final Dec, :pm-8:pm There are 9 questions in this exam ( pages including this cover sheet). If you need more room to work out your answer to a question, use the back of the page and clearly

More information

Particle Filter in Brief. Robot Mapping. FastSLAM Feature-based SLAM with Particle Filters. Particle Representation. Particle Filter Algorithm

Particle Filter in Brief. Robot Mapping. FastSLAM Feature-based SLAM with Particle Filters. Particle Representation. Particle Filter Algorithm Robot Mapping FastSLAM Feature-based SLAM with Particle Filters Cyrill Stachniss Particle Filter in Brief! Non-parametric, recursive Bayes filter! Posterior is represented by a set of weighted samples!

More information

Note Set 4: Finite Mixture Models and the EM Algorithm

Note Set 4: Finite Mixture Models and the EM Algorithm Note Set 4: Finite Mixture Models and the EM Algorithm Padhraic Smyth, Department of Computer Science University of California, Irvine Finite Mixture Models A finite mixture model with K components, for

More information

Nested Sampling: Introduction and Implementation

Nested Sampling: Introduction and Implementation UNIVERSITY OF TEXAS AT SAN ANTONIO Nested Sampling: Introduction and Implementation Liang Jing May 2009 1 1 ABSTRACT Nested Sampling is a new technique to calculate the evidence, Z = P(D M) = p(d θ, M)p(θ

More information

Tracking Multiple Moving Objects with a Mobile Robot

Tracking Multiple Moving Objects with a Mobile Robot Tracking Multiple Moving Objects with a Mobile Robot Dirk Schulz 1 Wolfram Burgard 2 Dieter Fox 3 Armin B. Cremers 1 1 University of Bonn, Computer Science Department, Germany 2 University of Freiburg,

More information

ABSTRACT ROBOT LOCALIZATION USING INERTIAL AND RF SENSORS. by Aleksandr Elesev

ABSTRACT ROBOT LOCALIZATION USING INERTIAL AND RF SENSORS. by Aleksandr Elesev ABSTRACT ROBOT LOCALIZATION USING INERTIAL AND RF SENSORS by Aleksandr Elesev A mobile robot must know its position in order to operate autonomously. The process of determining the robot s position from

More information

CSE 490R P3 - Model Learning and MPPI Due date: Sunday, Feb 28-11:59 PM

CSE 490R P3 - Model Learning and MPPI Due date: Sunday, Feb 28-11:59 PM CSE 490R P3 - Model Learning and MPPI Due date: Sunday, Feb 28-11:59 PM 1 Introduction In this homework, we revisit the concept of local control for robot navigation, as well as integrate our local controller

More information

Chapter 14 Global Search Algorithms

Chapter 14 Global Search Algorithms Chapter 14 Global Search Algorithms An Introduction to Optimization Spring, 2015 Wei-Ta Chu 1 Introduction We discuss various search methods that attempts to search throughout the entire feasible set.

More information

In the real world, light sources emit light particles, which travel in space, reflect at objects or scatter in volumetric media (potentially multiple

In the real world, light sources emit light particles, which travel in space, reflect at objects or scatter in volumetric media (potentially multiple 1 In the real world, light sources emit light particles, which travel in space, reflect at objects or scatter in volumetric media (potentially multiple times) until they are absorbed. On their way, they

More information

Robot Localization based on Geo-referenced Images and G raphic Methods

Robot Localization based on Geo-referenced Images and G raphic Methods Robot Localization based on Geo-referenced Images and G raphic Methods Sid Ahmed Berrabah Mechanical Department, Royal Military School, Belgium, sidahmed.berrabah@rma.ac.be Janusz Bedkowski, Łukasz Lubasiński,

More information

Statistical Techniques in Robotics (16-831, F10) Lecture#06(Thursday September 11) Occupancy Maps

Statistical Techniques in Robotics (16-831, F10) Lecture#06(Thursday September 11) Occupancy Maps Statistical Techniques in Robotics (16-831, F10) Lecture#06(Thursday September 11) Occupancy Maps Lecturer: Drew Bagnell Scribes: {agiri, dmcconac, kumarsha, nbhakta} 1 1 Occupancy Mapping: An Introduction

More information

MTRX4700: Experimental Robotics

MTRX4700: Experimental Robotics Stefan B. Williams April, 2013 MTR4700: Experimental Robotics Assignment 3 Note: This assignment contributes 10% towards your final mark. This assignment is due on Friday, May 10 th during Week 9 before

More information

Robotics and Autonomous Systems

Robotics and Autonomous Systems Robotics and Autonomous Systems Lecture 6: Perception/Odometry Terry Payne Department of Computer Science University of Liverpool 1 / 47 Today We ll talk about perception and motor control. 2 / 47 Perception

More information

Robotics and Autonomous Systems

Robotics and Autonomous Systems Robotics and Autonomous Systems Lecture 6: Perception/Odometry Simon Parsons Department of Computer Science University of Liverpool 1 / 47 Today We ll talk about perception and motor control. 2 / 47 Perception

More information

Statistical Techniques in Robotics (STR, S15) Lecture#05 (Monday, January 26) Lecturer: Byron Boots

Statistical Techniques in Robotics (STR, S15) Lecture#05 (Monday, January 26) Lecturer: Byron Boots Statistical Techniques in Robotics (STR, S15) Lecture#05 (Monday, January 26) Lecturer: Byron Boots Mapping 1 Occupancy Mapping When solving the localization problem, we had a map of the world and tried

More information

Exam Issued: May 29, 2017, 13:00 Hand in: May 29, 2017, 16:00

Exam Issued: May 29, 2017, 13:00 Hand in: May 29, 2017, 16:00 P. Hadjidoukas, C. Papadimitriou ETH Zentrum, CTL E 13 CH-8092 Zürich High Performance Computing for Science and Engineering II Exam Issued: May 29, 2017, 13:00 Hand in: May 29, 2017, 16:00 Spring semester

More information

10.4 Linear interpolation method Newton s method

10.4 Linear interpolation method Newton s method 10.4 Linear interpolation method The next best thing one can do is the linear interpolation method, also known as the double false position method. This method works similarly to the bisection method by

More information

Localization, Mapping and Exploration with Multiple Robots. Dr. Daisy Tang

Localization, Mapping and Exploration with Multiple Robots. Dr. Daisy Tang Localization, Mapping and Exploration with Multiple Robots Dr. Daisy Tang Two Presentations A real-time algorithm for mobile robot mapping with applications to multi-robot and 3D mapping, by Thrun, Burgard

More information

Unsupervised Learning

Unsupervised Learning Networks for Pattern Recognition, 2014 Networks for Single Linkage K-Means Soft DBSCAN PCA Networks for Kohonen Maps Linear Vector Quantization Networks for Problems/Approaches in Machine Learning Supervised

More information

Uncertainties: Representation and Propagation & Line Extraction from Range data

Uncertainties: Representation and Propagation & Line Extraction from Range data 41 Uncertainties: Representation and Propagation & Line Extraction from Range data 42 Uncertainty Representation Section 4.1.3 of the book Sensing in the real world is always uncertain How can uncertainty

More information

Empirical risk minimization (ERM) A first model of learning. The excess risk. Getting a uniform guarantee

Empirical risk minimization (ERM) A first model of learning. The excess risk. Getting a uniform guarantee A first model of learning Let s restrict our attention to binary classification our labels belong to (or ) Empirical risk minimization (ERM) Recall the definitions of risk/empirical risk We observe the

More information