EKF Localization and EKF SLAM incorporating prior information


EKF Localization and EKF SLAM incorporating prior information
Final Report ME-
Samuel Castaneda, ID: 113155

1. Abstract

In the context of mobile robotics, before any motion planning or navigation algorithm can be executed, the robot needs a globally consistent map of the environment and its own location in it. The problem of simultaneously learning the global map and the location of the robot is commonly known as the Simultaneous Localization and Mapping (SLAM) problem. One important result about most SLAM problems is that, since objects in the map have unknown positions, seeing a new object does not help the robot get a better estimate of its own position or of the other objects' poses. It is not until the robot re-observes a previously known landmark that the loop can be closed and the estimation can become truly accurate (i.e., achieve global consistency). A novel SLAM approach was presented in [1] that achieves global consistency by using information extracted from aerial images of the environment as prior information for the SLAM problem. In this work, the SLAM problem is built up step by step: starting with a robot and sensor model, generating an environment, first solving the simpler Localization problem using an Extended Kalman Filter (EKF), then expanding the solution to cover the online SLAM problem, and finally adding prior information to show that this can grant global consistency without needing to close the loop.

2. Introduction

The motivation for this project comes from the fact that aerial images of outdoor environments can be easily accessed by anyone with a web connection, together with the growing importance of robotics in the academic and commercial worlds.

o Related Work

The idea of this work comes from the approach proposed in [1]. In that approach the authors extract features from global aerial images to obtain some initial information about the landmarks in the environment. Then, as the robot moves and takes 3D laser scans of the environment, the map is built using a graph-based formulation of the SLAM problem. It is important to notice that the approach proposed in [1] includes more than just the SLAM solution: first the aerial images need to be processed to extract the desired features (this process can be seen in Figure 2.1). Then, for every step of the robot, the problem starts with extracting a 2D projection from the 3D laser scans (see Figure 2.2). After obtaining this, each measurement has to be matched with the corresponding landmark (known as the Association Problem), and only after this is done can the SLAM algorithm be applied.

Figure 2.1. Image processing: (a) Google Earth image, (b) result of edge extraction, (c) resulting likelihood field. [1]

Figure 2.2. (a) 3D laser scan of the area, (b) Google Earth image, (c) edge extraction from the aerial image, (d) bird's-eye projection of the 3D laser scan, (e) positions extracted from (d). [1]

o Scope

For the purposes of this work, the goal is to achieve the same results as in [1] but in a simpler setting, in the sense that the focus will be only on the SLAM part: correspondences and prior information are considered given, so neither the image

processing nor the association problem will be solved. Also, instead of taking 3D laser scans with the sensor and computing the 2D bird's-eye projection, the sensor is considered to measure the 2D distances directly. With respect to the SLAM problem, instead of the Graph-SLAM formulation used in [1], this work uses an EKF formulation, built up step by step: starting from a simple Localization problem, then extending it to solve the online SLAM problem, and finally adding the prior information. Table 2.1 compares the work done in [1] against the work done in this project.

Table 2.1. The proposed work for this paper against the original work in [1].

3. The Model

o Motion model

The model considered for the robot motion takes into account three parameters that locate the robot in 2D space; these parameters constitute the state {x, y, θ}. The control or input consists of two variables {f, T}, where f is a linear velocity command and T is an angular velocity command.

The model used is called a Velocity Motion Model and, as shown in [2], it can be derived as follows. Consider a robot moving along a circular trajectory generated by applying a linear velocity (f) and an angular velocity (T) to the robot (see Figure 3.1). The radius of this circle is given by:

    r = f / T    (1)

If the robot position is denoted by {x, y} and the center of the circle by {xc, yc}:

    xc = x - (f/T) sin θ    (2)
    yc = y + (f/T) cos θ    (3)

Now, if f and T remain constant over a time interval Δt, the new position of the robot will still be on the circle but shifted by an angle T·Δt. Solving from (2) and (3):

    x(k+1) = xc + (f/T) sin(θ + T·Δt) = x - (f/T) sin θ + (f/T) sin(θ + T·Δt)    (4)
    y(k+1) = yc - (f/T) cos(θ + T·Δt) = y + (f/T) cos θ - (f/T) cos(θ + T·Δt)    (5)
    θ(k+1) = θ + T·Δt    (6)

Combining the previous equations in a state-space representation we get the motion model:

    X(k+1) = g(U(k), X(k)) = [ x - (f/T) sin θ + (f/T) sin(θ + T·Δt),
                               y + (f/T) cos θ - (f/T) cos(θ + T·Δt),
                               θ + T·Δt ]    (7)

Note that this model does not work if the angular velocity command is zero (T = 0). To handle this, and since the main use of the model is to estimate the state rather than to actually generate a straight trajectory, we simply use a very small value for T when the commanded angular velocity is zero.
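To make this concrete, a minimal Python/NumPy sketch of the velocity motion model is shown below. The function name, the argument layout, and the small-angular-velocity threshold eps are illustrative choices of this sketch, not part of the original report.

```python
import numpy as np

def motion_model(state, u, dt, eps=1e-6):
    """Velocity motion model g(U, X): propagate the pose {x, y, theta}
    with a linear velocity f and an angular velocity T over a step dt."""
    x, y, theta = state
    f, T = u
    # As discussed above, replace a zero angular velocity command by a
    # very small value so the circular-arc equations remain usable.
    if abs(T) < eps:
        T = eps
    r = f / T  # radius of the circular arc, equation (1)
    x_new = x - r * np.sin(theta) + r * np.sin(theta + T * dt)
    y_new = y + r * np.cos(theta) - r * np.cos(theta + T * dt)
    theta_new = theta + T * dt
    return np.array([x_new, y_new, theta_new])
```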

Figure 3.1. Robot movement described by a circular trajectory. [2]

Linearization: The EKF that will be applied requires a linearized version of the model for certain steps. This is done by a first-order Taylor expansion. Let X(k+1) = g(U, X) from equation (7), with X = {x, y, θ} the state and U = {f, T} the input. The Jacobian of g with respect to X is

    G = ∂g/∂X = [ ∂g1/∂x  ∂g1/∂y  ∂g1/∂θ
                  ∂g2/∂x  ∂g2/∂y  ∂g2/∂θ
                  ∂g3/∂x  ∂g3/∂y  ∂g3/∂θ ]

where gi stands for the i-th row of g. Then:

    G = [ 1  0  -(f/T) cos θ + (f/T) cos(θ + T·Δt)
          0  1  -(f/T) sin θ + (f/T) sin(θ + T·Δt)
          0  0   1 ]

By evaluating G at the current state estimate, a linearized model is obtained.
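The entries of G follow from differentiating equations (4)-(6) with respect to x, y, and θ. A small sketch of how G could be evaluated at the current state estimate is given below, using the same illustrative conventions as the motion-model sketch above.

```python
import numpy as np

def motion_jacobian_G(state, u, dt, eps=1e-6):
    """Jacobian G = dg/dX of the velocity motion model, evaluated at the
    current state estimate (used later in the EKF prediction step)."""
    _, _, theta = state
    f, T = u
    if abs(T) < eps:
        T = eps
    r = f / T
    return np.array([
        [1.0, 0.0, -r * np.cos(theta) + r * np.cos(theta + T * dt)],
        [0.0, 1.0, -r * np.sin(theta) + r * np.sin(theta + T * dt)],
        [0.0, 0.0, 1.0],
    ])
```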

Motion noise: The EKF formulation requires additive white Gaussian noise (AWGN). The noise is assumed to enter in the control space, i.e., the commanded velocities f and T are perturbed by zero-mean Gaussian noise. When converted into state space, this control-space noise maps through the motion model, and the covariance matrix of the noise in state space, Rk, is obtained as

    Rk = Vk Σu Vk^T

where Vk is the Jacobian of g with respect to U and Σu is the covariance of the control-space noise.

o Sensor Model

As sensor model we use a range-bearing model: it measures the distance (rk) and angle (θk) of a landmark j with respect to the robot state at a given step k. From basic trigonometry:

    Zk(j) = h(X(k), M(j)) = [ sqrt( (mj,x - x)^2 + (mj,y - y)^2 ),
                              atan2( mj,y - y, mj,x - x ) - θ ]

Zk(j) is the measurement of the j-th landmark at a given step k of the robot motion; it is computed with the function h, which depends on the robot state at step k (Xk) and the global position of the j-th landmark (Mj). Note that for the localization problem the landmark positions are fixed; this will not be the case for the SLAM problem.
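As an illustration, a possible NumPy version of the range-bearing model is sketched below, together with its Jacobian with respect to the robot state, which is used by the linearization discussed in the next subsection. The function names and the bearing-wrapping convention are assumptions of this sketch.

```python
import numpy as np

def measurement_model(state, landmark):
    """Range-bearing model h(X(k), M(j)): expected distance and angle of
    landmark j = (m_x, m_y) as seen from the robot pose (x, y, theta)."""
    x, y, theta = state
    mx, my = landmark
    dx, dy = mx - x, my - y
    rng = np.hypot(dx, dy)                        # range r_k
    brg = np.arctan2(dy, dx) - theta              # bearing relative to heading
    brg = (brg + np.pi) % (2 * np.pi) - np.pi     # wrap to [-pi, pi)
    return np.array([rng, brg])

def measurement_jacobian_H(state, landmark):
    """Jacobian H = dh/dX of the range-bearing model with respect to the
    robot state, evaluated at the current estimate (localization case)."""
    x, y, theta = state
    mx, my = landmark
    dx, dy = mx - x, my - y
    q = dx**2 + dy**2
    sq = np.sqrt(q)
    return np.array([
        [-dx / sq, -dy / sq,  0.0],
        [ dy / q,  -dx / q,  -1.0],
    ])
```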

Linearization: Again, the EKF formulation requires a linearized version of the model, this time for the sensor model. With q = (mj,x - x)^2 + (mj,y - y)^2, the Jacobian H of h with respect to the robot state is

    H = [ -(mj,x - x)/sqrt(q)   -(mj,y - y)/sqrt(q)    0
           (mj,y - y)/q          -(mj,x - x)/q        -1 ]

Sensor noise: As before, the noise is AWGN, and we assume the range noise to be independent of the bearing noise.

4. Localization Problem

The localization problem consists of a robot moving in an environment with a known map expressed as landmarks; the goal is to give the best possible approximation of the robot's location from the sensing of the landmarks. Figure 4.1 shows the localization problem set-up.

Figure 4.1. A robot located in space with known landmarks; the measurements of the landmarks provide the information needed to locate the robot as it moves around the environment.

o EKF formulation

A Kalman Filter bases its estimation on a linear system, keeping track of the expected value and covariance of the state as it propagates through a linear process. The EKF makes the Kalman Filter track a non-linear system by using linear approximations of the model computed, at each step, around the present state (or the best available approximation of it). The estimation process consists of two main steps: Prediction, corresponding to the upper section of Figure 4.2, and Filtering, corresponding to the lower section of Figure 4.2.

Prediction
Also known as the A-priori estimation. It uses the input and the motion model to propagate the expected value and covariance of the state, giving a first estimate of the state at the next time step. The equations for the prediction step are

    μ̄(k+1) = g( U(k), μ(k) )
    Σ̄(k+1) = Gk Σ(k) Gk^T + Rk

where μ̄(k+1) and Σ̄(k+1) are the expected value of the state and the covariance matrix at time k+1; μ(k) and Σ(k) are the expected value of the state and the covariance matrix at time k; and U(k) is the input at time k. The linearizations for Gk and Rk are made around the best estimate (A-posteriori estimate) of the state at time k.

Filtering
Also known as the A-posteriori estimation. The measurement from the sensor has arrived, and this information is compared with the prediction-stage results to give a refined estimate of the state. The equations for the filtering step, applied for each observed landmark j, are

    Kj = Σ̄ Hj^T ( Hj Σ̄ Hj^T + Qj )^{-1}
    μ = μ̄ + Kj ( Zk(j) - h( μ̄, M(j) ) )
    Σ = ( I - Kj Hj ) Σ̄

where Qj is the sensor noise covariance, the filtering starts from the A-priori estimate (μ̄, Σ̄), and the corrected values (μ, Σ) refer to the same time step k+1.
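Putting the two stages together, one prediction/filtering cycle could look like the sketch below, which reuses the motion and sensor functions sketched earlier. Here R stands for the motion noise already mapped into state space and Q for the diagonal sensor-noise covariance; both symbols and the function signatures are illustrative, not the report's own code.

```python
import numpy as np

def ekf_predict(mu, Sigma, u, dt, R):
    """A-priori (prediction) step: propagate mean and covariance through
    the motion model and its Jacobian G. R is the motion noise already
    expressed in state space (R_k = V_k * Sigma_u * V_k^T)."""
    mu_bar = motion_model(mu, u, dt)
    G = motion_jacobian_G(mu, u, dt)
    Sigma_bar = G @ Sigma @ G.T + R
    return mu_bar, Sigma_bar

def ekf_update(mu_bar, Sigma_bar, z, landmark, Q):
    """A-posteriori (filtering) step for one observed landmark: correct
    the prediction with the range-bearing measurement z; Q is the 2x2
    sensor noise covariance (independent range and bearing noise)."""
    z_hat = measurement_model(mu_bar, landmark)
    H = measurement_jacobian_H(mu_bar, landmark)
    S = H @ Sigma_bar @ H.T + Q                       # innovation covariance
    K = Sigma_bar @ H.T @ np.linalg.inv(S)            # Kalman gain
    nu = z - z_hat                                    # innovation
    nu[1] = (nu[1] + np.pi) % (2 * np.pi) - np.pi     # wrap the bearing error
    mu = mu_bar + K @ nu
    Sigma = (np.eye(len(mu_bar)) - K @ H) @ Sigma_bar
    return mu, Sigma
```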

The linearization for Hj (previously referred to as H) is done around the value μ̄. This filtering stage works with one landmark j at a time: if more than one landmark is observed by the sensor, the filter runs as many times as there are landmarks in range, so that each measurement contributes to correcting the estimate. If no landmark is in range, the sensor gives no measurement, this step is skipped, and the A-priori remains the best available estimate.

Figure 4.2. Extended Kalman Filter overview.

5. SLAM Problem

The SLAM problem is in essence very similar to the Localization problem, but now the map of the environment is also unknown, and the job is to simultaneously estimate both the robot and the landmark coordinates. To include the landmarks in the estimation, the basic idea is simply to create an extended state

    Y(k) = ( X(k), M(k) )

Note that the map M now becomes M(k). Additionally, for this part, the definition of each landmark is expanded to

    M(j) = { mj,x, mj,y, mj,s }

where the parameter mj,s is called the signature and can be used to convey information about the landmark; in our simulation it will be the identification j.

o From EKF Localization to EKF SLAM

As stated before, the state is now expanded, but the idea of the EKF formulation remains. Regarding the estimation of the robot parameters, everything stays the same, except that instead of using known knowledge of the map to evaluate h(X(k), M(j)) and H, the filter now uses the best available estimate of those values (given by the A-priori estimate, or by the filtering stage for the previous landmark). It is important to notice that the matrix H has to be modified, because partial derivatives now have to be taken with respect to the coordinates of the observed landmark too, so H gains the corresponding columns ∂h/∂M(j).

For the map we assume static maps. The estimation of the map follows the same formulation as the estimation of the state, but with some important considerations:

1) Since the map is considered static, the Prediction step should not modify the map estimate, which is the same as considering the map dynamics to be just an identity matrix ( M(k+1) = I·M(k) ).

2) When a landmark is seen for the first time, its estimated position is set according to the estimate of the robot's position plus the sensor measurement. Naturally, the uncertainty (reflected by the covariance matrix) comes as a combination of the robot's and the sensor's uncertainty.

3) At the start of the simulation the landmarks have completely unknown positions, and the state vector M(0) and covariance matrix Σ(0) should be initialized to reflect this. A good way to do it, proposed in [2], is to set the coordinates of the landmarks at the origin and the corresponding block of the covariance matrix to Diag(∞) (in practice, very large values).

The full estimation works by running the robot's and the map's estimators in parallel; although each of them works on its own, they are highly coupled, because each takes the estimate of the other into its formulation.
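A minimal sketch of this initialization is shown below, assuming three components per landmark (position plus signature) and using a large finite value big as a stand-in for the unbounded initial landmark variance; the function name and default values are illustrative choices, not the report's code.

```python
import numpy as np

def init_slam_state(num_landmarks, robot_pose=(0.0, 0.0, 0.0), big=1e6):
    """Initialize the extended SLAM state Y(0) and covariance Sigma(0).
    Each landmark contributes (m_x, m_y, m_s); all landmarks start at the
    origin with a very large variance standing in for 'completely unknown'."""
    n = 3 + 3 * num_landmarks
    Y0 = np.zeros(n)
    Y0[:3] = robot_pose
    Sigma0 = np.zeros((n, n))              # robot pose taken as known at k = 0
    Sigma0[3:, 3:] = np.eye(3 * num_landmarks) * big
    return Y0, Sigma0
```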

After all the considerations discussed, the problem reduces to setting up the Prediction and Filtering formulas correctly so that both estimations work in parallel. The method proposed in [2] relies on declaring an auxiliary matrix Fx to handle this:

    Fx = [ I(3×3)   0(3×3N) ]

where N is the number of landmarks (each contributing three components to the state). By taking Fx as proposed, we can rewrite the prediction step, satisfying the conditions discussed above, as

    μ̄(k+1) = μ(k) + Fx^T g1( U(k), μ(k) )
    Σ̄(k+1) = Gk Σ(k) Gk^T + Fx^T Rk Fx,    with Gk = I + Fx^T G1 Fx

The function g1 is similar to the full model g but expresses only the pose increment:

    g1(U, X) = [ -(f/T) sin θ + (f/T) sin(θ + T·Δt),
                  (f/T) cos θ - (f/T) cos(θ + T·Δt),
                  T·Δt ]

and G1 is its Jacobian with respect to the robot pose. It is easy to see that with this formulation the prediction stage fulfills the conditions discussed. Similarly, for the Filtering step, a matrix Fxj is proposed:

    Fxj = [ I(3×3)   0(3×(3j-3))   0(3×3)   0(3×(3N-3j))
            0(3×3)   0(3×(3j-3))   I(3×3)   0(3×(3N-3j)) ]

which selects the robot pose and the j-th landmark from the full state. The filtering equations can then be written as

    Hj = hj Fxj
    Kj = Σ̄ Hj^T ( Hj Σ̄ Hj^T + Qj )^{-1}
    μ = μ̄ + Kj ( Zk(j) - h( μ̄, M̄(j) ) )
    Σ = ( I - Kj Hj ) Σ̄

where hj is the low-dimensional Jacobian of h with respect to the robot pose and the j-th landmark.
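The two auxiliary matrices can be built directly from their block structure; the sketch below (illustrative function names, 1-based landmark index j) shows one way to do it with NumPy.

```python
import numpy as np

def make_Fx(N):
    """F_x = [ I_3 | 0_(3 x 3N) ]: maps the 3-dimensional robot update
    into the full (3 + 3N)-dimensional SLAM state."""
    return np.hstack([np.eye(3), np.zeros((3, 3 * N))])

def make_Fxj(j, N):
    """F_(x,j): selects the robot pose and the j-th landmark (1-based j)
    from the full state, so the low-dimensional measurement Jacobian can
    be mapped into the (3 + 3N)-dimensional space."""
    F = np.zeros((6, 3 + 3 * N))
    F[:3, :3] = np.eye(3)                          # robot pose block
    F[3:, 3 + 3 * (j - 1):3 + 3 * j] = np.eye(3)   # landmark j block
    return F
```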

Before applying the filtering equations above, it is important to check whether the landmark has been previously observed; if not, it is initialized at the coordinates where it is observed. This is done with the following equations just before each landmark is processed in the filtering step:

    μ̄(j,x) = μ̄(x) + rk cos( θk + μ̄(θ) )
    μ̄(j,y) = μ̄(y) + rk sin( θk + μ̄(θ) )
    μ̄(j,s) = s

6. SLAM with prior information

One important result of the SLAM problem is that, in order to obtain a globally consistent map and robot estimate, the robot's trajectory needs to come back to a previously observed landmark that still has small uncertainty. This allows the global uncertainty of all landmarks, and especially of the robot's position, to converge and produce a reliable estimate; this property can be observed in the results of the SLAM problem in Section 7. Relying on the robot to describe a loop-like trajectory is a very restrictive limitation for some mobile robotics applications, so another way of achieving global consistency of the estimation is needed.

What happens in the EKF algorithm is that once a landmark is first observed, its expected value and covariance matrix are modified for the first time. In the case of the loop-closing trajectory, when the landmark is observed again, the filtering step uses the previously acquired information about that landmark to correct the estimate of the robot's position and, with that, all the other landmark estimates. But what if, instead of initializing the landmarks as completely unknown, we use some prior information about them (assuming such prior information exists)? Then the behavior the first time that landmark is seen is very similar to a loop closure at that point (some initial information mixing with new information in the filtering step). This is the idea demonstrated by using the prior in the EKF. The way to do it is actually very simple: if the prior is given, then instead of initializing Y(0) = {0, 0, ..., 0} and Σ(0) as previously described, use the prior information. Figure 6.1 shows the initialization with no prior, and Figure 6.2 shows the initialization including prior information for the third and fifth landmarks.

Figure 6.1. State and covariance matrix initialization for the SLAM problem with 5 landmarks, no prior information available.

Figure 6.2. State and covariance matrix initialization for the SLAM problem with 5 landmarks, prior information added.
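In code, this only changes the initialization described above. The sketch below reuses the init_slam_state sketch from Section 5 and overwrites the entries of the landmarks for which a prior is available; the dictionary format, the example coordinates, and the assumed prior variance prior_var are all illustrative, not values from the report.

```python
import numpy as np

def init_with_prior(num_landmarks, priors, prior_var=0.5, big=1e6):
    """Initialize the SLAM state as before, but overwrite the entries of
    the landmarks for which prior information (e.g. extracted from aerial
    images) is available. 'priors' maps a 1-based landmark index j to an
    (x, y) position; 'prior_var' is an assumed variance for that prior."""
    Y0, Sigma0 = init_slam_state(num_landmarks, big=big)
    for j, (px, py) in priors.items():
        i = 3 + 3 * (j - 1)
        Y0[i:i + 2] = (px, py)
        Sigma0[i:i + 2, i:i + 2] = np.eye(2) * prior_var
    return Y0, Sigma0

# Example (arbitrary coordinates): priors for the third and fifth landmarks,
# as in the situation illustrated in Figure 6.2.
# Y0, Sigma0 = init_with_prior(5, {3: (4.0, 2.0), 5: (7.5, 6.0)})
```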

7. Simulation Results

RESULTS FOR LOCALIZATION PROBLEM

As we can observe from Figure 7.1, the robot moves with a relatively high uncertainty in the model, shown by the A-priori estimation ellipses, but when the sensor detects some landmarks the filtering makes the A-posteriori estimate much more certain. Then, as the robot moves away from the landmarks, the uncertainty grows and only one ellipse is observed; this is because the ellipses actually overlap, since with no landmark detection the A-priori and A-posteriori estimates are the same. When the robot detects more landmarks the estimate becomes certain again, as seen in panel (d) of Figure 7.1: the A-priori had large uncertainty, but the A-posteriori (resulting from filtering) is much more certain.

Figure 7.1. EKF Localization Problem Simulation, panels (a)-(d). Legend: blue rectangle = robot; blue dots = landmarks; dotted line = sensor range; blue ellipse = A-priori uncertainty; red ellipse = A-posteriori uncertainty.

RESULTS FOR SLAM PROBLEM

Figure 7.2. EKF SLAM Problem Simulation, panels (a)-(f). Landmark legend: blue dots = real positions; red crosses = estimated positions; blue ellipses = landmark uncertainty. Robot legend: black rectangle = real position; green rectangle = noiseless trajectory; red rectangle = A-priori estimate; blue rectangle = A-posteriori estimate; blue ellipse = A-priori uncertainty; red ellipse = A-posteriori uncertainty; dotted line = sensor range.

As shown in Figure 7.2, both the landmarks and the robot position start at the origin. As the robot moves, its position becomes more and more uncertain, which in turn increases the uncertainty of the landmarks it observes on the way; the real and estimated trajectories also diverge more and more from each other. It is not until the robot closes the loop with a more certain landmark, in panel (d) of Figure 7.2, that the filtering can make significant corrections. This can be seen by comparing the A-priori and A-posteriori estimates: the first is very close to the path the noiseless trajectory would follow, while the second is very close to the actual position of the robot on the real trajectory; it can also be seen in a significant reduction of all uncertainty ellipses. Finally, after one more step in which the robot can still see the landmark (panel (f) of Figure 7.2), all estimates become even more accurate.

RESULTS FOR SLAM PROBLEM WITH PRIOR INFORMATION

The experiment again starts with the robot at the origin, but some landmarks are initialized according to prior information; this can be seen in Figure 7.3 (a), where two of the landmarks already have an estimated position and uncertainty. As in the previous case, the robot's and landmarks' uncertainty grows as the simulation advances. A slight correction in the robot's estimate can be observed when the third landmark is observed, but the interesting result comes when the fifth landmark is observed: just one step before, in Figure 7.3 (e), uncertainty is high for both the robot and the fourth landmark, yet in Figure 7.3 (f) the robot observes the fifth landmark and, because of the prior knowledge, it again obtains a very good estimate of its location, and the fourth landmark's estimate improves as well. This shows that prior knowledge about a landmark behaves like a loop closure without having to actually close the loop.

Figure 7.3. EKF SLAM with prior information, simulation panels (a)-(f).

8. Conclusions

With the conclusion of this project, the Localization and SLAM techniques are clearly understood, as is the EKF setup and operation. The EKF approach can be effectively applied to solve both the localization and SLAM problems, and it can also be extended to incorporate initial knowledge about the landmarks in the case of SLAM with prior information. When prior information about the landmarks is available, the SLAM problem can achieve global consistency without having to close the loop. This EKF approach can also be extended to applications outside the robotics field, as long as the system can be modeled, the sensing device grants an observable process, and the noise is AWGN. By comparing the results obtained in this work with those from [1], it is possible to state that the Graph-SLAM and EKF formulations can both effectively solve the SLAM problem and that, after adding prior information, both give similar results; the choice of which is better comes down to the kind of noise, the computational resources available, and the number of landmarks to be processed.

References

[1] Kümmerle, R., Kleiner, A., et al. Large Scale Graph-based SLAM using Aerial Images as Prior Information.

[2] Thrun, S.; Burgard, W.; Fox, D. Probabilistic Robotics. Cambridge, Mass.: MIT Press, 2005.