Implementation of SLAM algorithms in a small-scale vehicle using model-based development


Master of Science Thesis in Datorteknik, Fordonssystem
Department of Electrical Engineering, Linköping University, 2017

Implementation of SLAM algorithms in a small-scale vehicle using model-based development

Johan Alexandersson and Olle Nordin

Master of Science Thesis in Datorteknik, Fordonssystem
Implementation of SLAM algorithms in a small-scale vehicle using model-based development
Johan Alexandersson and Olle Nordin
LiTH-ISY-EX--17/5101--SE

Supervisors: Peter A Johansson, ISY, Linköpings universitet; Simon Tegelid, ÅF Consulting
Examiner: Mattias Krysander, ISY, Linköpings universitet

Department of Electrical Engineering, Linköping University, Linköping, Sweden
Copyright 2017 Johan Alexandersson and Olle Nordin

Abstract

As autonomous driving is rapidly becoming the next major challenge in the automotive industry, the problem of Simultaneous Localization And Mapping (SLAM) has never been more relevant than it is today. This thesis presents the idea of examining SLAM algorithms by implementing such an algorithm on a radio-controlled car which has been fitted with sensors and microcontrollers. The software architecture of this small-scale vehicle is based on the Robot Operating System (ROS), an open-source framework designed for robotic applications. This thesis covers Extended Kalman Filter (EKF)-based SLAM, FastSLAM, and GraphSLAM, examining these algorithms through theoretical investigation, simulation, and real-world experiments. The method used in this thesis is model-based development, meaning that a model of the vehicle is first implemented in order to be able to perform simulations using each algorithm. The decision of which algorithm to implement on the physical vehicle is then made based on these simulation results, as well as on a theoretical investigation of each algorithm. This thesis has resulted in a dynamic model of a small-scale vehicle which can be used for simulation of any ROS-compliant SLAM algorithm, and this model has been simulated extensively in order to provide empirical evidence of which SLAM algorithm is most suitable for this application. Out of the algorithms examined, FastSLAM proved to be the best candidate and was, in the final stage, successfully implemented on the small-scale vehicle using the ROS package gmapping.


Acknowledgments

First and foremost, we would like to thank Simon Tegelid and Magnus Carlsson at ÅF for endless support whenever needed. Additional accolades go to our supervisor Peter Johansson at the Department of Electrical Engineering, who was very helpful with technical support. We would also like to extend our gratitude to our examiner Mattias Krysander, also at the aforementioned department, who provided a great deal of insight into the subject as well as many useful hints regarding academic writing in general.

Linköping, October 2017
Johan Alexandersson and Olle Nordin


Contents

Notation

1 Introduction
  1.1 The Company
  1.2 Background
  1.3 Thesis Scope
  1.4 Limitations
  1.5 Risk analysis: General risks; Kinematic vehicle model
  1.6 Equipment
  1.7 Thesis methodology
  1.8 Related work

2 Recursive Bayesian Estimation
  2.1 State-space representation
  2.2 Bayesian Filtering
  2.3 Kalman filter
  2.4 Extended Kalman filter
  2.5 Particle filter: Sampling; Resampling; Rao-Blackwellized Particle Filter

3 Simultaneous localization and mapping
  3.1 Introduction to SLAM: Data association; Loop closure; Full SLAM and online SLAM; General models for the SLAM problem
  3.2 EKF SLAM: Algorithm; Computational complexity; Data association
  3.3 FastSLAM: Particle structure; Algorithm; FastSLAM 2.0; Computational complexity; Data association
  3.4 GraphSLAM: Algorithm; Computational complexity; Data association
  3.5 Theoretical evaluation of SLAM

4 Vehicle Dynamics
  4.1 Longitudinal dynamics: Gear ratio; Wheel torque; Forward velocity
  4.2 Lateral dynamics: Ackermann angle; Heading angle
  4.3 Coordinates

5 Sensor theory
  5.1 Gyroscope
  5.2 Wheel encoders
  5.3 Laser range finder

6 Simulink Model Implementation
  6.1 Input
  6.2 Kinematic Vehicle: Velocity; Servo; Angle & Angular Velocity; Position
  6.3 Map: Parameters; Implementation
  6.4 Odometry Sensors: Gyroscope; Wheel encoders
  LIDAR: Description; Operation; Probability of error
  ROS Interface: /sensor_data; /scan; /car_position; /clock

7 ROS Implementation: CAN Reader; Odometry; SLAM; rviz; Position; simulinkposition; ROS Control App; Joystick Input; CAN Writer; CPU Recorder

8 Simulations
  Method: Overview; Maps; Tests
  Results: Mapping accuracy; Path accuracy; Average position error; Average CPU load; Issues
  Conclusion

9 Real-world experiment: Overview; Results; Conclusion

10 Discussion: Achievements; Method; Complications; Future work

Bibliography


Notation

Abbreviations

Abbreviation  Description
ROS           Robot Operating System
LIDAR         Light detection and ranging
SLAM          Simultaneous localization and mapping
IMU           Inertial measurement unit
GUI           Graphical user interface
AI            Artificial intelligence
PDF           Probability density function
MEMS          Micro-electro-mechanical systems
LUT           Lookup table
CPU           Central processing unit
PC            Personal computer
CAN           Controller area network


1 Introduction

1.1 The Company
This thesis has been conducted at ÅF in Linköping. ÅF is a Swedish consulting company with over 8000 employees worldwide [6]. As a company, ÅF focuses heavily on technology areas such as industry, infrastructure, and energy [6]. The thesis work was conducted at ÅF's technology department.

1.2 Background
In recent times, many companies in the automotive industry have spent considerable time and funds researching the possibility of producing autonomous cars. The research has moved forward to the point that there are semi-autonomous cars available on the market today, and it is widely believed that at some point in time almost all vehicles will be autonomous. In order to achieve this, there is an obvious need for advanced sensors and functions. One of the major issues is the fact that the vehicle needs to be aware of its current position within an unknown environment. This is known as the Simultaneous Localization And Mapping problem, commonly abbreviated as SLAM. As ÅF is a company that works extensively with autonomous driving, a competence project concerning this has been started. The goal of this project is to assemble a small-scale vehicle with embedded sensors and microprocessors, and to implement functions which will enable this vehicle to drive autonomously. As implementing a SLAM algorithm is a vital part of achieving this, this thesis will focus on finding the best SLAM algorithm for this purpose and implementing it in the vehicle.

1.3 Thesis Scope
The purpose of this thesis is to implement a Simultaneous Localization And Mapping (SLAM) algorithm in a small-scale vehicle running the Robot Operating System (ROS). The choice of SLAM algorithm will be supported by a theoretical investigation. Three different algorithms will be investigated: Extended Kalman Filter-based SLAM (EKF) [12], particle filter-based SLAM (FastSLAM) [29], and graph-based SLAM (GraphSLAM) [11]. FastSLAM is implemented in the ROS packages gmapping [2] and coreslam [1], and GraphSLAM is implemented in the package slam_karto [3]. In addition, simulations will be done on a model created in Simulink representing the actual car, and further information regarding the SLAM algorithm implemented in the model will be presented. The SLAM algorithm is supposed to simultaneously create a map of the vehicle's environment as well as calculate the position of the vehicle within this map. This data is to be broadcast to another ROS node on a PC, which will show the map and the vehicle's position in real time in a GUI.

There will be two different levels of priority in this thesis. The first priority is to implement passive SLAM, and the second priority is to implement active SLAM. Another second-priority goal is to make the algorithm consider dynamic objects. In order to reach the first-priority goals, the following needs to be investigated:

- A theoretical investigation comparing different SLAM algorithms, in order to make a valid decision on which one to use.
- How to run a SLAM algorithm using limited resources (since it will be run on an embedded processor). This means first and foremost picking a SLAM algorithm which uses few resources. However, if this is not sufficient, modifications to the algorithm (or the implementation of one) may be needed.
- Development of a model in Simulink, and evaluation of the model by comparing its output to a theoretical position/direction within a map.

In order to reach the second-priority goals, the following needs to be investigated:

- A theoretical investigation comparing different pathfinding algorithms for active SLAM, as well as simulation of different algorithms in Simulink.
- Theoretical studies and simulations where dynamic objects are considered.

1.4 Limitations
The following limitations are set for this project:

- Ethical aspects of any kind will not be discussed in this thesis.
- There are no requirements on the environment in which the vehicle will operate.

1.5 Risk analysis
There are several risks that could cause parts of this thesis to be delayed or to fail. These are explained in the following sections.

1.5.1 General risks
A major part of this thesis will be conducted during the summer, when supervisors may be on vacation leave. This will limit the amount of support that is available.

1.5.2 Kinematic vehicle model
There are several risks regarding the kinematic vehicle model:

- The sensors in the kinematic vehicle model should represent the actual sensors in the car as well as possible. The modelled sensors should be validated by measurements on the real sensors as well as by theoretical values. However, this is not a guarantee that they actually represent the real sensors.
- The kinematic vehicle model should be validated using a measuring system. Currently, the measuring system is under development at Linköpings Universitet, which implies that the testing phase of the model could be delayed. An alternative solution would be to compare the difference in position within the maps by plain eyesight.

1.6 Equipment
Fig. 1.1 displays an overview of the system architecture used in this thesis.

Figure 1.1: System architecture overview

The hardware equipment and software components are further explained below:

ROS: A framework and operating system developed for robots. This thesis will use the ROS Kinetic distribution.

Lubuntu: Lightweight Ubuntu operating system, running on the imx6 processor on the Nitrogen6_MAX board.

STM32 board: A PCB containing an stm32f4 microcontroller and CAN transceivers, connected to the RPLIDAR, the magnet encoders, and the IMU.

Nitrogen6_MAX: A motherboard containing CAN transceivers, Ethernet, Wi-Fi, and an imx6 quad-core processor.

Magnet encoders: Hall effect sensors attached to the wheel axes of the physical vehicle, registering each time a magnet mounted inside the wheel passes by.

IMU 9250: Inertial measurement unit, containing a magnetometer, a gyroscope, and an accelerometer.

RPLIDAR: A 2D laser scanner capable of producing scans in 360 different angles.

1.7 Thesis methodology
Fig. 1.2 illustrates the method workflow of this thesis. The first phase is a prestudy, where the basic theory of SLAM algorithms is studied and ROS/Simulink tutorials are conducted, in order to get a good initial understanding of the subject. The next phase is to concurrently develop a model of the physical vehicle in Simulink and conduct a theoretical investigation of the SLAM algorithms chosen for this thesis. Once the model is done, some ROS implementation is needed to be able to perform simulations with the vehicle model. When the simulations are done, the results are used to choose an algorithm to be implemented on the physical vehicle.

Figure 1.2: Thesis methodology

1.8 Related work
This thesis, as stated in Section 1.3, includes a theoretical investigation of three SLAM algorithms. Given that the SLAM problem has been known for a long time, there exist papers related to each of the SLAM algorithms examined in this thesis [4, p. 6]. The book Probabilistic Robotics [28] by Sebastian Thrun, Wolfram Burgard, and Dieter Fox describes each SLAM algorithm thoroughly, and it is widely cited in this thesis. However, given the thesis scope specified in Section 1.3, the theoretical investigation is also to be backed up by simulation results. Various aspects of the GraphSLAM and FastSLAM algorithms have been compared to each other in the paper "An evaluation of 2D SLAM Techniques available in Robot Operating System" [24, p. 1-6] and in the master thesis "SmokeNav - Simultaneous Localization and Mapping in Reduced Visibility Scenarios" [25, p. 26-33]. Moreover, the master thesis [25] provides the reader with real-world results from GraphSLAM and FastSLAM implementations, and real-world results of GraphSLAM and FastSLAM implementations have also been provided in [24, p. 5-6].

2 Recursive Bayesian Estimation

A key component of what enables a robot to successfully perform navigation is sensing. In order to navigate properly, the robot needs information about its environment, and this information is supplied by sensors. A major problem, however, is that sensor data will always be corrupted by noise to some degree. If the robot were to use the raw sensor data as it is, the navigation would produce erroneous results. To cope with the fact that all measurements are made in the presence of noise, filtering is needed. One approach is Recursive Bayesian Estimation [28, p. 13], which uses probabilistic models in order to filter out the noise.

2.1 State-space representation
In order to define the states of a system, state-space representation is used. In general, a continuous-time system is said to be in state-space form [20, p. 19] if it can be defined as

    ẋ(t) = f(x(t), u(t))    (2.1a)
    y(t) = g(x(t), u(t))    (2.1b)

where x(t) is the system's state vector, and u(t) and y(t) are the input and the output of the system, respectively. The derivative of the state vector, ẋ(t), is a function of the current state x(t) and the current input u(t). Similarly, the output of the system, y(t), is a function of the current state and the current input. For discrete-time systems, the state-space equations [20, p. 19] are defined as

    x_{t+1} = f(x_t, u_t)    (2.2a)
    y_t = g(x_t, u_t)        (2.2b)

For linear continuous-time systems, the state-space model can be formulated as

    ẋ(t) = A x(t) + B u(t)    (2.3a)
    y(t) = C x(t) + D u(t)    (2.3b)

For linear discrete-time systems, the state-space model can be formulated as

    x_{t+1} = A x_t + B u_t    (2.4a)
    y_t = C x_t + D u_t        (2.4b)

2.2 Bayesian Filtering
The pose of a robot is given by its current state x_t. When the system enters a new state x_{t+1}, as a result of some control command u_t, the new state produces a measurement z_{t+1}. The relationship between controls, states, and measurements is shown in Fig. 2.1 below.

Figure 2.1: Relationship between states, controls, and measurements.

A Bayes filter is an algorithm which calculates the probability density function (PDF) of the state vector x_t. This is done in two steps, the prediction step bel̄(x_t) and the correction step bel(x_t), as

    bel̄(x_t) = ∫ p(x_t | u_t, x_{t-1}) bel(x_{t-1}) dx_{t-1}    (2.5a)
    bel(x_t) = η p(z_t | x_t) bel̄(x_t)                          (2.5b)

where in the prediction step (2.5a) a new state x_t is predicted using only the control command u_t and the previous state x_{t-1}. This prediction is altered in the correction step by using sensor data, in order to get a better estimate [28, p. 27]. A scaling factor η is introduced to scale the resulting PDF so that it integrates to 1 [28, p. 17]. Pseudo-code for the algorithm is given in Algorithm 1.

Algorithm 1 Recursive Bayes Filter
1: function bayes_filter(bel(x_{t-1}), u_t, z_t)
2:   for all x_t do
3:     bel̄(x_t) = ∫ p(x_t | u_t, x_{t-1}) bel(x_{t-1}) dx_{t-1}
4:     bel(x_t) = η p(z_t | x_t) bel̄(x_t)
5:   end for
6:   return bel(x_t)
7: end function

2.3 Kalman filter
One technique which implements a Bayes filter is the Kalman filter, which is used in linear Gaussian systems. Basically, it computes a belief for a continuous state [28, p. 40]. This state represents information regarding the vehicle, such as its odometry, and its surrounding environment, for example landmarks [28, p. 20]. The belief of the state at a specific time instance t, bel(x_t), can be expressed by its mean and its covariance [28, p. 40]. The Kalman filter needs three specific properties as prerequisites [28], and these are:

- The current state x_t needs to be a linear function of its arguments, meaning that

    x_t = A_t x_{t-1} + B_t u_t + ε_t    (2.6)

where A_t and B_t are matrices and the added noise ε_t is of Gaussian nature. The current state x_t depends on the previous state x_{t-1}, the control u_t, and the noise ε_t.

- The current measurement z_t also requires its arguments to be linear, meaning that

    z_t = C_t x_t + δ_t    (2.7)

where C_t is a matrix and the noise source δ_t is Gaussian. The current measurement z_t depends on the current state x_t and the noise δ_t.

- The last prerequisite is that the initial belief bel(x_0) has to be Gaussian distributed.

The Kalman filter algorithm itself, shown below in Algorithm 2, expresses the belief of the current state x_t at a specific time t. The belief bel(x_t) is expressed by the mean μ_t and the covariance Σ_t, which are also the output of the algorithm [28, p. 42].

Algorithm 2 Kalman filter algorithm
1: function Kalman_filter(μ_{t-1}, Σ_{t-1}, u_t, z_t)
2:   μ̄_t = A_t μ_{t-1} + B_t u_t
3:   Σ̄_t = A_t Σ_{t-1} A_t^T + R_t
4:   K_t = Σ̄_t C_t^T (C_t Σ̄_t C_t^T + Q_t)^{-1}
5:   μ_t = μ̄_t + K_t (z_t - C_t μ̄_t)
6:   Σ_t = (I - K_t C_t) Σ̄_t
7:   return μ_t, Σ_t
8: end function

The algorithm calculates predictions of the mean μ̄_t and the covariance Σ̄_t at lines 2 and 3, where A_t and B_t are the matrices which, multiplied by the state vector x_{t-1} and the control vector u_t, represent the linear state transition, one of the requirements for the Kalman filter to be applicable. These predictions are used in the correction step, lines 4-6. The first step of the correction is the calculation of the Kalman gain, denoted K_t. The mean μ_t is then computed from the predicted mean μ̄_t plus the Kalman gain times the difference between the actual measurement z_t and the predicted measurement C_t μ̄_t from (2.7). Finally, the new covariance Σ_t is calculated from the Kalman gain and the predicted covariance Σ̄_t [28, p. 43].

2.4 Extended Kalman filter
The Extended Kalman Filter (EKF) is an approach for dealing with nonlinear models, which the traditional Kalman filter cannot handle. Real-world measurements and state transitions are often not linear, so the normal Kalman filter's assumption of linearity is often inaccurate. The EKF handles this by describing the state transition probability and the measurement probability with nonlinear functions [28, p. 54]:

    x_t = g(u_t, x_{t-1}) + ε_t    (2.8a)
    z_t = h(x_t) + δ_t             (2.8b)

The output of the algorithm is an approximation in the form of a Gaussian distribution, with an estimated mean and covariance. The approximation arises from

the fact that the EKF uses nonlinear functions, which need to go through a linearization process, for example a first-order Taylor expansion around a linearization point; this can be seen in Fig. 2.2. The resulting Gaussian distribution can then be calculated in closed form, which enables the extraction of the interesting characteristics of the distribution, such as the mean μ_t and the covariance Σ_t. Given the mean and the covariance, the belief bel(x_t) can be represented [28, p. 56-58]. The EKF algorithm is shown below in Algorithm 3 [28, p. 59].

Figure 2.2: Linearization of the nonlinear function g(x). The dashed line indicates the Taylor approximation of g(x).

Algorithm 3 Extended Kalman filter algorithm
1: function Extended_Kalman_filter(μ_{t-1}, Σ_{t-1}, u_t, z_t)
2:   μ̄_t = g(u_t, μ_{t-1})
3:   Σ̄_t = G_t Σ_{t-1} G_t^T + R_t
4:   K_t = Σ̄_t H_t^T (H_t Σ̄_t H_t^T + Q_t)^{-1}
5:   μ_t = μ̄_t + K_t (z_t - h(μ̄_t))
6:   Σ_t = (I - K_t H_t) Σ̄_t
7:   return μ_t, Σ_t
8: end function

As can be seen, Algorithm 3 differs slightly from the Kalman filter in Algorithm 2. This is because the EKF uses the nonlinear functions g and h to calculate the predicted mean and the predicted measurement, with the Jacobians G_t and H_t taking the places of A_t and C_t.
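As a concrete illustration, below is a minimal NumPy sketch of one iteration of the Kalman filter in Algorithm 2. The matrix names follow the algorithm; the sketch is illustrative and not part of the thesis implementation. For the EKF in Algorithm 3, the products A_t μ and C_t μ̄ would be replaced by evaluations of g and h, and A, C by their Jacobians G_t, H_t.

import numpy as np

def kalman_filter(mu, Sigma, u, z, A, B, C, R, Q):
    # Prediction step (lines 2-3): propagate mean and covariance
    # through the linear motion model (2.6).
    mu_bar = A @ mu + B @ u
    Sigma_bar = A @ Sigma @ A.T + R
    # Correction step (lines 4-6): compute the Kalman gain and weigh
    # the measurement (2.7) against the prediction.
    K = Sigma_bar @ C.T @ np.linalg.inv(C @ Sigma_bar @ C.T + Q)
    mu_new = mu_bar + K @ (z - C @ mu_bar)
    Sigma_new = (np.eye(mu.shape[0]) - K @ C) @ Sigma_bar
    return mu_new, Sigma_new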

2.5 Particle filter
Another implementation of a Bayes filter is the particle filter. In contrast to the Extended Kalman filter, it does not use a parametric model for the probability distributions. The main idea of the particle filter is that it instead keeps a set of M samples, where each sample x_t^{[m]} (known as a "particle") is a set of potential state variable values. The probability of these state variable values is represented by how many particles contain similar values, meaning that areas with high probability will have a higher density of particles than areas where the probability is low [19, p. 27]. The particle filter is described in Algorithm 4 [28, p. 98], where X̄_t is the predicted state vector and X_t is the corrected state vector.

Algorithm 4 Particle Filter
1: function particle_filter(X_{t-1}, u_t, z_t)
2:   X̄_t = X_t = ∅
3:   for m = 1 to M do
4:     sample x_t^{[m]} ~ p(x_t | u_t, x_{t-1}^{[m]})
5:     w_t^{[m]} = p(z_t | x_t^{[m]})
6:     X̄_t = X̄_t + ⟨x_t^{[m]}, w_t^{[m]}⟩
7:   end for
8:   for m = 1 to M do
9:     draw i with probability ∝ w_t^{[i]}
10:    add x_t^{[i]} to X_t
11:  end for
12:  return X_t
13: end function

2.5.1 Sampling
During sampling (lines 3-7 in Algorithm 4), new values are generated for each particle in the filter. These values are only temporary; the particles themselves are not updated until the resampling stage. The new state of each particle is drawn based on the control u_t and the previous state of the particle x_{t-1}^{[m]}. After this, the importance factor w_t^{[m]} is calculated. The importance factor is based on the measurement z_t and is used as a likelihood measure of the particle in the forthcoming resampling step [28, p. 99].

2.5.2 Resampling
During resampling (lines 8-10 in Algorithm 4), the actual particles in the filter are updated. This is done by randomly selecting particles generated in the sampling stage, where the probability of selecting each particle is proportional to its importance factor.

The resampling is done with replacement, so each particle can be drawn several times [28, p. 99].

2.5.3 Rao-Blackwellized Particle Filter
There is also another version of the particle filter called the Rao-Blackwellized particle filter. This is an extension of the regular particle filter where some state variables are represented by particles, and some by Gaussians [28, p. 437].
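To connect Algorithm 4 to code, below is a minimal Python sketch of one particle filter iteration. The functions motion_sample and measurement_likelihood are placeholders for the model-specific densities p(x_t | u_t, x_{t-1}) and p(z_t | x_t); the sketch is illustrative only.

import numpy as np

def particle_filter(particles, u, z, motion_sample, measurement_likelihood):
    M = len(particles)
    # Sampling (lines 3-7): propose a new state for each particle and
    # compute its importance factor from the measurement likelihood.
    proposed = [motion_sample(x, u) for x in particles]
    weights = np.array([measurement_likelihood(z, x) for x in proposed])
    weights /= weights.sum()
    # Resampling (lines 8-10): draw M particles with replacement, with
    # probability proportional to the importance factors.
    idx = np.random.choice(M, size=M, replace=True, p=weights)
    return [proposed[i] for i in idx]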


3 Simultaneous localization and mapping

This chapter describes the basic features of a generic SLAM algorithm, as well as introducing a few SLAM algorithms. The key features of these algorithms are then summarized in Section 3.5, where each algorithm is also evaluated in terms of suitability for implementation in the physical vehicle of this thesis.

3.1 Introduction to SLAM
Simultaneous localization and mapping is a problem where a moving object needs to build a map of an unknown environment while simultaneously calculating its position within this map [12, p. 229]. There are several areas which could benefit from having autonomous vehicles with SLAM algorithms implemented; examples would be the mining industry, underwater exploration, and planetary exploration [12, p. 229]. The SLAM problem in general can be formulated using a probability density function denoted p(x_t, m | z_{1:t}, u_{1:t}), where x_t is the position of the vehicle, m is the map, and z_{1:t} is a vector of all measurements. u_{1:t} is a vector of the control signals of the vehicle, which are either the control commands themselves or odometry, depending on the application.

3.1.1 Data association
The concept of data association concerns the relationship between older data and newly gathered data. In a SLAM context it is necessary to relate older measurements to newer measurements. This enables the process of determining the locations of landmarks in the environment, which in turn gives information regarding the robot's position within the map [4, p. 26].

3.1.2 Loop closure
Loop closure in a SLAM context is the ability of a vehicle to recognize that a location has already been visited. By applying a loop closure algorithm, the accuracy of both the map and the vehicle's position within the map can be increased [30, p. 1]. However, this is not an easy task, since the operating environment of the vehicle may contain structures similar to those of previously visited locations. If the loop closure algorithm performs poorly, it can produce a faulty loop closure, which can corrupt both the map and the pose of the vehicle within the map [30, p. 1].

3.1.3 Full SLAM and online SLAM
There are two different types of SLAM problems. In the online SLAM problem, only the current pose x_t and the map m are estimated, given the control inputs u_{1:t} and the measurements z_{1:t}. The full SLAM problem instead estimates the entire trajectory of the robot; its PDF is denoted p(x_{0:t}, m | z_{1:t}, u_{1:t}), where all poses of the robot are considered, including the initial pose x_0 [17, p. 21].

3.1.4 General models for the SLAM problem
A motion model for the SLAM problem can be described by

    x_t = g(u_t, x_{t-1}) + δ_t    (3.1)

where the current pose x_t depends on the possibly nonlinear function g(u_t, x_{t-1}), with u_t being the control input, x_{t-1} the previous pose, and δ_t a random Gaussian variable with zero mean [17, p. 19]. A measurement model for the SLAM problem can be described by

    z_t = h(x_t, m) + ε_t    (3.2)

where the current measurement z_t depends on the possibly nonlinear function h(x_t, m), with x_t denoting the current pose, m the map, and ε_t a random Gaussian variable with zero mean [17, p. 19].
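As an illustration of what g and h in (3.1) and (3.2) may look like, below is a minimal Python sketch for a planar pose x = (x, y, ψ) with a velocity/yaw-rate control and range-bearing measurements of point landmarks. These specific model choices are assumptions made for the example, not models taken from the thesis.

import numpy as np

def motion_model(x, u, dt, noise_std):
    # g(u_t, x_{t-1}) + delta_t for x = (px, py, psi), u = (v, omega)
    px, py, psi = x
    v, omega = u
    delta = np.random.normal(0.0, noise_std, size=3)   # zero-mean Gaussian
    return np.array([px + v * np.cos(psi) * dt,
                     py + v * np.sin(psi) * dt,
                     psi + omega * dt]) + delta

def measurement_model(x, landmark, noise_std):
    # h(x_t, m) + epsilon_t: range and bearing to one landmark m = (mx, my)
    px, py, psi = x
    dx, dy = landmark[0] - px, landmark[1] - py
    eps = np.random.normal(0.0, noise_std, size=2)     # zero-mean Gaussian
    return np.array([np.hypot(dx, dy),
                     np.arctan2(dy, dx) - psi]) + eps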

3.2 EKF SLAM
The EKF SLAM (Extended Kalman Filter) approach to solving the SLAM problem uses sensor data gathered from the movement and rotation of the robot, for example from wheel encoders and a gyroscope. Furthermore, it is also necessary to gather information about the environment, for example using a laser range finder (LIDAR). With this data, the algorithm keeps track of where the robot is likely positioned within the map, as well as keeping track of specific landmarks observed [4].

3.2.1 Algorithm
When the robot is turned on, sensors such as the wheel encoders and the gyroscope gather information about the positioning of the robot. Moreover, landmarks are extracted from the environment based on new observations from the LIDAR mounted on the robot. These new observations are associated with previous observations and updated in the EKF algorithm. However, if an observation of a landmark cannot be associated with any previous observation, it is presented to the EKF algorithm as a new observation [4, p. 29]. This process is illustrated in Fig. 3.1.

Figure 3.1: EKF SLAM algorithm. [4, p. 10]

First and foremost, the robot localizes itself within the map being created, using observed landmarks and information from the sensors. The landmarks should preferably be observable from multiple angles, as well as distinguishable from other landmarks; these prerequisites give the EKF algorithm a good possibility to tell landmarks apart at a later time. Moreover, the landmarks used should be stationary [4, p. 18]. Described briefly, the EKF SLAM process consists of two stages:

Prediction step: Predict the current state, expressed by the predicted mean and the predicted covariance. These are calculated based on the control input, the system matrices, the previous mean, and the previous covariance [27].

Correction step: The idea behind the correction step is data association. The main idea behind associating data in a SLAM context is to distinguish between an earlier observed landmark and a newly observed one [27]. When a new measurement has occurred, the procedure is as follows:

- Store the locations of newly observed landmarks [27].
- Associate previously observed landmarks with newly observed ones [27].

Furthermore, the correction gain, also known as the Kalman gain, is computed. This is a correction factor needed to update the mean and covariance of the current state. The current mean and covariance do not depend on the Kalman gain alone, however; they also take the predicted mean, the predicted covariance, and the measurement into account [27, p. 37-38].

3.2.2 Computational complexity
The EKF SLAM algorithm is computationally costly in comparison to other SLAM algorithms. When new landmarks are detected they are added to the state vector of the filter, so the state grows with the number N of identified landmarks [23, p. 7765]. In terms of numbers, the memory use is O(N²), the cost of computing one step of the algorithm is approximately O(N²) [27, p. 57] [5, p. 792], and the complete cost of computing the whole algorithm is O(N³), heavily dependent on the size of the map [5, p. 792].

3.2.3 Data association
A bottleneck when using the EKF algorithm is the computational cost, which increases as new landmarks are observed. This leads to the prerequisite that the environment should not contain too many landmarks. On the other hand, when the number of landmarks in the environment is small, localization within the map and data association become increasingly difficult [28]. Moreover, the data association is made by computing the maximum likelihood, and in case of a faulty association, all future results that are based on previous associations will also be faulty [28].

3.3 FastSLAM
Another way of solving the SLAM problem is the FastSLAM algorithm, which makes use of the Rao-Blackwellized particle filter. FastSLAM takes advantage of something that most other SLAM algorithms do not, namely the fact that each observation only concerns a small number of the state variables. The pose of the robot is probabilistically dependent on its previous pose, whereas the landmark locations are probabilistically dependent on the position of the robot [19, p. 27]. This is not taken into account by, for example, the EKF SLAM algorithm, which with each update has to recalculate its covariance matrix, resulting in poor scaling in large maps [19, p. 29]. This is not a problem in FastSLAM, as it factors the problem into simpler subproblems, as shown by

    p(s^t, θ | z^t, u^t, n^t) = p(s^t | z^t, u^t, n^t) ∏_{n=1}^{N} p(θ_n | s^t, z^t, u^t, n^t)    (3.3)

which factors out the robot path s^t and each of the N landmarks θ_n. One major difference between FastSLAM and, for example, EKF SLAM is that the entire path s^t is estimated, as opposed to just the current pose x_t. This means that FastSLAM is a full SLAM algorithm whereas EKF SLAM is an online SLAM algorithm, as described in Section 3.1.3.

3.3.1 Particle structure
The particle filter used in FastSLAM is comprised of M particles, where each particle is defined by

    S_t^{[m]} = ⟨s^{t,[m]}, μ_{1,t}, Σ_{1,t}, ..., μ_{N,t}, Σ_{N,t}⟩    (3.4)

Each particle contains N + 1 entries: one robot path s^{t,[m]} and N landmark posteriors, where each landmark is represented by an EKF with a mean μ_{n,t} and a covariance Σ_{n,t}. During one iteration of the algorithm, the robot path s^{t,[m]} of every particle is updated once, as well as each landmark filter (μ_{n,t}, Σ_{n,t}).

3.3.2 Algorithm
Each iteration of the algorithm can be summarized in four steps:

1. For each of the M particles, sample a new robot pose s_t from p(s_t | s_{t-1}^{[m]}, u_t).
2. With the new measurement, update the landmark filters.
3. Give each particle an importance factor.
4. Resample the particles, with particles with a higher importance factor having a higher draw probability.

In step 1, a new robot pose is sampled for each particle. This pose is drawn from the motion model, with the robot control as input. After this, the posteriors of the landmarks are updated based on the new measurement. Each particle is then given an importance factor, which is a measure of how realistic the variables in the particle are. Finally, the particles are resampled in order to get rid of those which do not have a high probability [19, p. 10].

3.3.3 FastSLAM 2.0
The FastSLAM algorithm (and particle filters in general) performs more accurately when there is more diversity among the particles. Particles will not be diverse if, for example, the odometry sensors are noisy while the laser range finder is accurate, as this renders many of the motion hypotheses unlikely, and many particles will get a low importance factor. Because of this drawback of the original algorithm, an updated version has been developed, named FastSLAM 2.0. FastSLAM 2.0 is quite similar to its predecessor, with the exception that it also considers the range sensors when estimating the motion. This creates better guesses in general and results in more particles having high importance factors [19, p. 63].

3.3.4 Computational complexity
If FastSLAM is implemented directly as described above, it has O(MN) computational complexity, where M is the number of particles and N is the number of landmarks [19, p. 48]. Such an implementation would not be a good idea, since it would scale linearly with the number of landmarks, rendering the algorithm unsuitable for large maps. To address this issue, FastSLAM is implemented using binary trees rather than the array structure shown in (3.4). The algorithm then has O(M log N) complexity [28, p. 465]; Fig. 3.2 below illustrates such a tree in a single particle with N = 8. Using the binary tree approach also significantly reduces the amount of memory used by the algorithm [19, p. 58].
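To make the particle structure in (3.4) concrete, below is a minimal Python sketch of the flat (array/dict) form of a FastSLAM particle. It is illustrative only; an efficient implementation such as the one in Fig. 3.2 would store the landmark filters in a balanced binary tree shared between particles.

from dataclasses import dataclass, field
import numpy as np

@dataclass
class LandmarkEKF:
    mu: np.ndarray       # 2-vector: estimated landmark position (mean)
    Sigma: np.ndarray    # 2x2 matrix: covariance of the estimate

@dataclass
class Particle:
    path: list                                       # robot path s^{t,[m]}: poses (x, y, psi)
    landmarks: dict = field(default_factory=dict)    # landmark id -> LandmarkEKF
    weight: float = 1.0                              # importance factor w_t^{[m]}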

Figure 3.2: Binary tree data structure of a FastSLAM particle

3.3.5 Data association
Another advantage of the FastSLAM algorithm is the fact that its data association is made on a per-particle basis. This means that each particle has its own hypothesis of which landmark the measurement is associated with. If a measurement is associated incorrectly with a landmark in a particle, this particle will get a low importance factor, making the impact of the wrong association very small. In other algorithms, such as EKF-based SLAM, once an incorrect data association has been made, the algorithm will produce incorrect results from that point on. This is one of the main advantages of the FastSLAM algorithm in comparison to other algorithms, and contributes to its high robustness [19, p. 41].

3.4 GraphSLAM
GraphSLAM is another approach to solving the SLAM problem. The idea behind GraphSLAM is to address the SLAM problem using a graph. The graph contains nodes which represent the poses of the robot x_0, ..., x_t, as well as the landmarks of the map, denoted m_0, ..., m_t. In addition, there are constraints between the positions x_0, ..., x_t of the robot and the landmarks m_0, ..., m_t [26, p. 361]. These constraints represent the distance between adjacent locations, such as x_{t-1} and x_t, as well as the distance from the locations to the landmarks. The pose-to-pose constraints can be constructed from the information given by the odometry source u_t [26]. The graph can be converted to a sparse matrix, which, in contrast to a dense matrix, can be used more efficiently [17, p. 1]. Fig. 3.3 illustrates how the landmarks m_0, ..., m_t, the locations x_0, ..., x_t, and the constraints are linked together.

Figure 3.3: Robot poses shown as circles and landmarks as squares; dashed lines represent the motion of the robot based on the control commands given to it, and solid lines represent the distances to the landmarks given by measurements.

3.4.1 Algorithm
The PDF for the full SLAM problem is denoted p(x_{0:t}, m | z_{1:t}, u_{1:t}). For simplicity, all the poses x_{0:t} and the landmarks m can be collected in a state-space vector y, shown in Fig. 3.4.

Figure 3.4: State-space vector containing all poses and landmarks

Then, in order to obtain the most likely pose of the robot within the map, along with the positions of the landmarks, an optimization algorithm has to be executed [17, p. 11]. The PDF p(y | z_{1:t}, u_{1:t}) contains the possibly nonlinear functions g(u_t, x_{t-1}) and h(x_t, m_i), where m_i represents the i-th landmark observed at time t [17, p. 4, 11]. The optimization algorithm expects linear functions, so a necessary step is to linearize these, for example with a first-order Taylor expansion [17, p. 11].

As specified earlier, the GraphSLAM approach introduces a graph, and this graph can be converted to a sparse matrix. This sparse matrix contains all the information acquired by the robot while moving in the environment, for example the robot's movement through the map and the landmarks observed at different locations. Thus, as the robot moves through the environment, the sparse matrix grows and contains ever more information [17, p. 14]. Figs. 3.5, 3.6 and 3.7 below illustrate how robot poses (blue squares) and observed landmarks (red squares) are added to the sparse matrix. The blue squares are adjacent to each other because each new pose depends on the previous pose. The grey squares on the diagonal show that the landmark of each row m1, m2, m3, m4 and that of its corresponding column m1, m2, m3, m4 is one and the same landmark.

Figure 3.5: Robot poses and landmarks observed

Figure 3.6: Robot poses and landmarks observed

Figure 3.7: Robot poses and landmarks observed

3.4.2 Computational complexity
Interesting statistics when discussing the GraphSLAM algorithm are the computational load and the use of memory. In comparison with the EKF SLAM algorithm, whose memory use grows as O(N²) and whose computational cost grows as O(N³) when new landmarks N are introduced, GraphSLAM's memory use grows linearly with the number of landmarks. The computational cost of the GraphSLAM algorithm depends on time: if the path of the vehicle is very long time-wise, the computation can be costly. In general, however, the attributes of the GraphSLAM algorithm are very desirable when the environment contains many landmarks, where the EKF algorithm tends to fail [26, p. 362], [28, p. 330].
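To make Figs. 3.5-3.7 concrete, below is a minimal Python sketch of how pose-pose and pose-landmark constraints fill in a sparse information matrix over y. The unit increments stand in for the actual information contributions, which in a real implementation come from the linearized constraints; the sizes are invented for the example.

import numpy as np
from scipy.sparse import lil_matrix

T, N = 4, 3                      # assumed: poses x_0..x_3 and landmarks m_1..m_3
H = lil_matrix((T + N, T + N))   # information matrix over y = (x_0..x_3, m_1..m_3)

def add_constraint(H, i, j):
    # One constraint linking entries i and j of y fills in a 2x2 block pattern.
    for a in (i, j):
        for b in (i, j):
            H[a, b] += 1.0

for t in range(1, T):            # motion constraints x_{t-1} -- x_t (dashed lines)
    add_constraint(H, t - 1, t)
add_constraint(H, 0, T + 0)      # measurement constraints x_t -- m_n (solid lines)
add_constraint(H, 2, T + 1)

print(H.toarray())               # the nonzero pattern mirrors Figs. 3.5-3.7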

3.4.3 Data association
The data association part of the GraphSLAM algorithm is done using a vector containing correspondences between landmarks observed in the map [28, p. 362]. The algorithm performs a correspondence test to check the probability that different landmarks observed within the map are actually the same landmark. If the result of this correspondence test exceeds a certain value, the algorithm determines the landmarks to be the same landmark; otherwise it determines them to be two separate landmarks [28, p. 363]. In comparison with the EKF SLAM algorithm, where future data associations can be faulty due to earlier data associations, the GraphSLAM algorithm can reexamine older data associations, thus reducing the risk of introducing errors in future data associations [28, p. 339, 363]. The GraphSLAM algorithm solves the full SLAM problem [17, p. 15], in contrast to the EKF SLAM algorithm, which only solves the online SLAM problem.

3.5 Theoretical evaluation of SLAM
Given the thesis scope specified in Section 1.3, a theoretical investigation of three different SLAM algorithms, namely EKF SLAM, GraphSLAM, and FastSLAM, has been conducted. There are several parameters of interest when discussing SLAM algorithms; those investigated here are robustness, efficiency, and ROS compatibility. These parameters are evaluated and compared below.

EKF SLAM

- Robustness: Given what has been stated in Section 3.2.3, a faulty data association results in erroneous future results; thus, the robustness of the EKF algorithm is low [28].
- Efficiency: As stated in Section 3.2.2, the total computational cost of executing the algorithm is O(N³), to a great degree dependent on the size of the map [5, p. 792]. This makes the EKF SLAM algorithm more suitable for smaller maps containing fewer landmarks, due to the heavy computational load [27, p. 57]. However, as stated in Section 3.2.3, localization and data association can be difficult when the landmarks in the environment are few [28].
- ROS compatibility: Implemented in the mrpt_ekf_slam_2d package.

FastSLAM

- Robustness: Given what has been stated in Section 3.3.5, the FastSLAM algorithm is significantly more robust than the EKF SLAM algorithm, since the data association from measurements to landmarks is made per particle, which reduces the impact of a wrong data association.

- Efficiency: As stated in Section 3.3.4, the total computational cost is O(M log N), meaning that the complexity depends on the number of particles M and the number of landmarks N.
- ROS compatibility: Implemented in the gmapping package.

GraphSLAM

- Robustness: Given what has been stated in Section 3.4.3, the robustness of the GraphSLAM algorithm is higher than that of the EKF algorithm, partly because older data associations can be reexamined if a wrong data association has been made. This reduces the risk of producing erroneous future data associations, thus increasing the robustness.
- Efficiency: In Section 3.4.2 it is stated that the memory use grows linearly with the number of landmarks N, in comparison to the EKF algorithm, which uses O(N²) memory. The computational load is time dependent, so if the path is long, the algorithm can be quite costly, as stated in Section 3.4.2.
- ROS compatibility: Implemented in the slam_karto package.

From the above, the EKF SLAM algorithm is not very robust, while both the GraphSLAM and the FastSLAM algorithms have a higher robustness. Moreover, in terms of efficiency the EKF SLAM algorithm is very costly. Given that all three algorithms are implemented in ROS packages, each is a viable choice in that respect. However, as the EKF SLAM algorithm lacks both robustness and efficiency, the conclusion is that the more effective algorithms, FastSLAM and GraphSLAM, are to be preferred. These algorithms are able to deal efficiently with maps containing more landmarks, as well as with larger maps in terms of sheer size. With maps containing a couple of hundred landmarks, for example when trying to map the outside world, the computational complexity of the EKF algorithm could prove to be a huge problem [18, p. 1], [28, p. 330].


4 Vehicle Dynamics

This chapter describes some of the vehicle dynamics theory which was used to develop the dynamic model of the small-scale vehicle. Some of the theory presented in this chapter was ultimately not used in the final model, but is still included here since it describes the initial modeling approach. This chapter only concerns the movement of the vehicle; the theory concerning the sensors is presented in Chapter 5.

4.1 Longitudinal dynamics
Longitudinal dynamics describe the movement of a vehicle along the longitudinal axis, i.e. the speed of the vehicle in the forward and backward direction, as a result of the different forces acting upon the vehicle.

4.1.1 Gear ratio
In a vehicle, the motor and the wheels rotate at different speeds, since they are connected by different-sized gears. The relationship between the angular velocity ω of the motor and that of the wheels is known as the gear ratio [15, p. 43] and is defined as

    n_gear = ω_motor / ω_wheels    (4.1)

4.1.2 Wheel torque
The torque τ at the wheels of the vehicle can be computed from

    P = τ ω    (4.2)

where P is the power and ω is the angular velocity. If it is assumed that there are no losses, the power at the motor and at the wheels will be equal, meaning that

    P_motor = P_wheels    (4.3)

and together with (4.2) this implies

    τ_motor ω_motor = τ_wheels ω_wheels    (4.4)

If (4.1) is substituted into (4.4), the torque of the wheels is obtained as a function of the gear ratio and the torque of the motor:

    τ_wheels = n_gear τ_motor    (4.5)

4.1.3 Forward velocity
To obtain the forward velocity of the vehicle, the longitudinal force at the driving wheels needs to be calculated. The force produced by a torque is obtained from

    F = τ / r    (4.6)

which in terms of the wheels becomes

    F = τ_wheels / r_wheels    (4.7)

Newton's second law of motion can then be used to determine the acceleration of the vehicle as

    a = F_wheels / m_vehicle    (4.8)

where m_vehicle denotes the mass of the vehicle. The rotational inertia of the wheels and the drive shaft can be neglected or included as a mass equivalent. Finally, the velocity is obtained through integration as

    v = ∫ₜᵀ a dt    (4.9)
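Collecting (4.5)-(4.8), the longitudinal chain from motor torque to acceleration can be sketched in a few lines of Python; the velocity then follows from (4.9) by accumulating a over the time steps. The sketch is illustrative and not code from the thesis model.

def forward_acceleration(tau_motor, n_gear, r_wheels, m_vehicle):
    # Motor torque -> wheel torque (4.5) -> driving force (4.7) -> acceleration (4.8).
    # Losses and the rotational inertia of the wheels are neglected, as in the text.
    tau_wheels = n_gear * tau_motor
    F_wheels = tau_wheels / r_wheels
    return F_wheels / m_vehicle

# Integration of (4.9) by forward Euler: v += forward_acceleration(...) * dt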

4.2 Lateral dynamics
Lateral dynamics describe the movement of the vehicle along the lateral axis. Lateral movement is mainly determined by the heading angle, which needs to be calculated in order to determine the lateral velocity of the vehicle.

4.2.1 Ackermann angle
The Ackermann steering angle is defined as the wheelbase angle

    δ = l / R    (4.10)

which causes the vehicle to turn with a specific heading angle, or yaw. Here l is the length of the wheelbase and R is the turning radius, i.e. the distance between the center of gravity and the turn center, as Fig. 4.1 shows [16, p. 129].

Figure 4.1: The Ackermann angle

4.2.2 Heading angle
If the Ackermann angle, the length of the wheelbase, and the forward velocity are known, the heading angle of the vehicle can be calculated. The forward velocity is defined as

    v = R ψ̇    (4.11)

Since the turning radius R is related to the Ackermann angle, (4.10) can be substituted into (4.11) in order to get the yaw rate

    ψ̇ = (v / l) δ    (4.12)

Finally, the heading angle is obtained through integration as

    ψ = ∫ₜᵀ ψ̇ dt    (4.13)

4.3 Coordinates
The coordinates of the vehicle can then be calculated by integrating the velocities in the x and y directions over time as

    x = ∫ v_x dt    (4.14a)
    y = ∫ v_y dt    (4.14b)

If the forward velocity v, the vehicle length l, the yaw rate ω = ψ̇, and the yaw ψ of the vehicle are known, the velocities v_x and v_y can be calculated using a transformation matrix [10] as

    [v_x]   [cos(ψ)  -sin(ψ)] [   v   ]
    [v_y] = [sin(ψ)   cos(ψ)] [(l/2) ω]    (4.15)
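Equations (4.12)-(4.15) combine into a single simulation step for the planar kinematics. Below is a minimal Python sketch, with forward-Euler time discretization as an assumption; it is illustrative and not code from the thesis model.

import numpy as np

def kinematic_step(state, v, delta, l, dt):
    # state = (x, y, psi); v is forward velocity, delta the Ackermann
    # steering angle (4.10), l the wheelbase length.
    x, y, psi = state
    omega = (v / l) * delta                                  # yaw rate, (4.12)
    vx = v * np.cos(psi) - (l / 2) * omega * np.sin(psi)     # (4.15)
    vy = v * np.sin(psi) + (l / 2) * omega * np.cos(psi)
    return (x + vx * dt, y + vy * dt, psi + omega * dt)      # (4.13), (4.14)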

5 Sensor theory

The sensors used in this thesis are a gyroscope, wheel encoders, and a LIDAR. The measurements from these sensors provide the necessary input to the SLAM algorithms, which process this data in order to present a map of the surrounding environment, including the pose of the vehicle within the map. The following sections describe each of these sensors.

5.1 Gyroscope
Gyroscopes have been widely used for military and civilian purposes over the last century, not only in aeronautical but also in marine applications. Traditional gyroscopes used to be mechanical; however, what is used in many applications nowadays are MEMS (micro-electro-mechanical systems) gyroscopes, found in cell phones, robotics projects, and military systems [14, p. 171]. Gyroscopes in general can be described by the formula for the Coriolis force

    F_c = -2 m (ω × v)    (5.1)

As can be seen from (5.1), the Coriolis force depends on the mass m of an object, the angular velocity ω, and the velocity v of the object. This object is a part of the gyroscope, and its mass and velocity are known. Given that the object rotates, and by having information about its mass and velocity, the angular velocity can be detected [14, p. 173]. Given the angular velocity, the total angle that the vehicle has rotated can be calculated as

    α = ∫₀ᵗ ψ̇ dt    (5.2)

5.2 Wheel encoders
Wheel encoders are used to provide the microcontroller with information about the velocity of the vehicle. There are several types of wheel encoders based on different technologies, such as optical, mechanical, and magnetic wheel encoders, just to name a few. In this thesis a magnetic wheel encoder is used, consisting of a Hall effect sensor and a number of magnets. The Hall effect sensor outputs a high voltage level whenever the magnetic field surrounding it is sufficiently strong, i.e. when a magnet passes by [22, p. 3-4]. The pulses output by the Hall effect sensor are registered by the microcontroller. By counting the number of pulses generated since the previous sampling instance, the microcontroller is able to estimate the speed of the vehicle. Pseudo-code for calculating the vehicle velocity is given in Algorithm 5.

Algorithm 5 Vehicle car speed
1: function car_speed(Pulses_avg, T)
2:   T_new = T - T_old
3:   v_car = (Pulses_avg · (wheel_circum / 10)) / T_new
4:   T_old = T
5:   return v_car
6: end function

The algorithm calculates v_car in cm/s. The average number of high pulses registered by the two Hall effect sensors is denoted Pulses_avg. This is multiplied by one tenth of the circumference of the wheels, giving the distance traveled; the wheel circumference is divided by ten because there are ten magnets attached to each wheel. Dividing by the time period T_new, v_car follows the well-known formula for velocity, v = s/t.
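A runnable Python version of Algorithm 5 could look as follows. The wheel circumference value is an assumption for the example; timestamps are in seconds and the speed is returned in cm/s.

class WheelEncoder:
    def __init__(self, wheel_circum_cm=20.0):   # assumed circumference
        self.wheel_circum = wheel_circum_cm
        self.t_old = 0.0

    def car_speed(self, pulses_avg, t_now):
        t_new = t_now - self.t_old                               # elapsed time (s)
        v_car = pulses_avg * (self.wheel_circum / 10.0) / t_new  # ten magnets per wheel
        self.t_old = t_now                                       # store sampling instant
        return v_car                                             # cm/s, v = s/t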

5.3 Laser range finder
A laser range finder (LIDAR) can be used to measure the distance to objects in the vicinity of the vehicle. It consists of a laser emitter and a receiver, and is sometimes mounted on a rotating motor in order to measure distances in every direction around the vehicle. The basic operating principle is that a short pulse of light is emitted, reflected off an object, and then received. The time t_wait between emission and reception can then be used to calculate the distance d to the object as

    d = (1/2) c t_wait    (5.3)

where c is the speed of light [21, p. 31].


6 Simulink Model Implementation

The vehicle, its sensors, and its environment were modelled in Simulink. This chapter describes the implementation of all the subsystems of the model, which can be seen in Fig. 6.1 below. The model cannot be simulated in real time, because the computation time is too high.

Figure 6.1: Block diagram of the Simulink model

6.1 Input
The input subsystem provides the input stimuli to the system. This is done in two different modes: keyboard and file. In keyboard mode, the steering angle and the velocity of the vehicle are controlled using the arrow keys on the keyboard. In file mode, the velocity and the steering angle are defined by a .mat file. These .mat files are recorded when running the model in keyboard mode. A typical workflow with this model is to first record the instructions for the vehicle to a .mat file, and then use the same .mat file for repeated experiments.

6.2 Kinematic Vehicle
The kinematic vehicle, shown in Fig. 6.2, is comprised of five subsystems, each of which is described in the sections below.

Figure 6.2: Model of the kinematic vehicle

6.2.1 Velocity
Initial model: The initial velocity model was modelled after how velocity is created in the vehicle itself, with the motor as a torque source whose torque is converted to force, and ultimately to velocity, as described in Section 4.1. However, this model was never completed and was scrapped, because developing such a model proved to be too time-consuming and might have required a master thesis of its own.

Final model: The final model is considerably simpler than the initial model. In this model, the input signal is the desired velocity of the vehicle.

This is passed through a first-order filter in order to model the acceleration and deceleration of the vehicle. The time it takes for the vehicle to go from a stationary state to moving at full speed was measured in order to calculate the time constant τ of the first-order filter.

6.2.2 Servo
A simple model was also used for the servo. The servo control signal is a value from -100 to 100, which is passed through a look-up table ranging from -45 to 45, the maximum steering angles of the vehicle. A first-order filter is used to model the delay of the steering, and its time constant τ was calculated by measuring the time it takes for the wheels of the vehicle to turn from 0° to 45°.

6.2.3 Angle & Angular Velocity
This subsystem calculates the yaw rate and the yaw angle of the vehicle, which are determined by the steering angle and the speed. The calculation uses the Ackermann angle, as explained by Eq. (4.12) in Section 4.2.2.

6.2.4 Position
The X-position and Y-position subsystems realize the transformation matrix described in Section 4.3 in order to determine the current position of the vehicle.
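Both the velocity response (Section 6.2.1) and the servo delay (Section 6.2.2) are first-order lags with measured time constants. A minimal discrete-time Python sketch of such a filter is given below; the discretization is an assumption for the example, since the thesis realizes the filters as Simulink blocks.

import numpy as np

def first_order_filter(u, tau, dt, y0=0.0):
    # Discrete first-order lag dy/dt = (u - y) / tau applied to an input
    # sequence u, e.g. desired velocity -> modelled actual velocity.
    alpha = dt / (tau + dt)          # smoothing factor of the discrete lag
    y = np.empty(len(u))
    y_prev = y0
    for k, u_k in enumerate(u):
        y_prev = y_prev + alpha * (u_k - y_prev)
        y[k] = y_prev
    return y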

6.3 Map
6.3.1 Parameters
In order to emulate the environment of the vehicle, a map definition needed to be created. These maps are defined by two parameters: dimensions and walls. The dimensions array is a 1-by-2 matrix where the first element is the width of the map and the second is the height of the map. The walls matrix is an n-by-4 matrix which describes the walls in the map, where each row contains the coordinates of one wall. Walls can be diagonal, vertical, or horizontal. The map parameters are stored in a .mat file which can then be read by the Simulink model.

6.3.2 Implementation
The map is implemented in two S-functions called vehiclemap and drawmap. The former calculates all values necessary to display the current position and yaw of the vehicle within the map, and the latter draws this in a MATLAB plot. In this plot, the position and yaw of the vehicle are represented by a triangle, as shown in Fig. 6.3 below. The inputs and outputs of vehiclemap are shown in Table 6.1 below, and the Simulink model diagram is shown in Fig. 6.4.

Figure 6.3: Plot of map generated by the vehiclemap S-function

Table 6.1: Inputs and outputs of vehiclemap

Port             Direction  Size  Definition
Coords           In         2     Current coordinates of the vehicle
Yaw              In         1     Current yaw of the vehicle
Triangle Coords  Out        6     Coordinates of where to draw the triangle
Collision        Out        1     Boolean indicating if the vehicle has hit a wall
Init             Out        1     Boolean indicating if the map has been initialized
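A map definition as described in Section 6.3.1 can be generated programmatically; below is a Python sketch using scipy.io.savemat. The variable names and wall coordinates are assumptions for the example, since the exact .mat layout used by the model is not given here.

import numpy as np
from scipy.io import savemat

dimensions = np.array([[10.0, 8.0]])   # 1-by-2: map width and height
walls = np.array([                     # n-by-4: one wall per row [x1, y1, x2, y2]
    [0.0, 0.0, 10.0, 0.0],             # horizontal wall
    [10.0, 0.0, 10.0, 8.0],            # vertical wall
    [0.0, 8.0, 4.0, 4.0],              # diagonal wall
])
savemat("testmap.mat", {"dimensions": dimensions, "walls": walls})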

To transform this output into an angular velocity in rad/s, the average bias (count/number) is multiplied by DPS/ADC · π/180, where DPS is the degrees-per-second range specified (one of three possible alternatives: 250°/s, 500°/s, and 2000°/s) and ADC is the resolution of the analog-to-digital converter (16 bits, giving 32768 counts per half range). The division by the ADC resolution gives the angular rate in °/s, and multiplying by π/180 expresses it in rad/s. Moreover, the noise of the gyroscope has also been modelled in Simulink. The variance and the standard deviation have been derived from samples of the gyroscope left stationary.

Implementation
Given the precondition that the vehicle is standing still, samples were obtained from the physical sensor. These samples were converted into angular velocities in rad/s by Algorithm 6. Given these converted samples, figures were created in Matlab which display the Gaussian nature of the values from the gyroscope. Fig. 6.5 shows the angular velocities from the gyroscope, and how the curve resembles a Gaussian distribution. This hypothesis is also in line with the conclusions of [13, pp. 7-8, 22] on measuring sensor errors.

Figure 6.5: Bar chart displaying the noise of the gyroscope, with a curve showing a Gaussian distribution

The Gaussian noise as well as the bias offset was implemented into the Simulink model, as can be seen in Fig. 6.6. This model outputs an angular rate expressed in rad/s, corrupted by noise and bias, with the aim of resembling the actual gyroscope in the physical car.
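Putting the bias derivation of Algorithm 6 and the unit conversion above together, a minimal sketch of the raw-sample-to-rad/s computation could look as follows; the raw sample values and the choice of the 250°/s range are assumptions.

import math

def gyro_bias_rad_s(samples, dps=250):
    """Average raw z-axis bias converted to rad/s (DPS/32768 * pi/180)."""
    count = sum(int(gz) for gz in samples)
    avg_raw = count / len(samples)                   # count / number
    return avg_raw * dps / 32768.0 * math.pi / 180.0

stationary_samples = [12, 15, 9, 14, 11]             # invented raw values
print(gyro_bias_rad_s(stationary_samples))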

Figure 6.6: Gyroscope implementation in Simulink

Test phase
The model has been evaluated by implementing a test environment in Simulink: a ROS subscriber block receives the actual values from the physical car. The values of the z-axis of the gyroscope are measured and computed as an angular rate in rad/s as well as an angle in rad. The comparison between the Simulink model of the sensor and the physical sensor values can then be done in the Simulink environment. Fig. 6.7 below shows the test environment.

Figure 6.7: Gyroscope test environment in Simulink

Moreover, Fig. 6.8 shows the angular difference between the gyroscope model and real-time measurements on the physical gyroscope.

Figure 6.8: Comparison between the gyroscope model and real-time measurements on the physical gyroscope. The top display shows the model output, and the bottom display shows the total angle measured from the physical gyroscope. (Note that both are initialized to π/2 rad, 90 degrees.)

Wheel encoders

Description
A sensor model of the wheel encoders has been implemented into the model of the car in Simulink. They have been modelled with regard to measurements of the output from the wheel encoders attached to the physical car. However, the data being received is actually an estimate of the speed of the physical car, and not a number of magnets detected since the last timestamp. With that being said, the implemented sensor model uses these measurements to estimate and mimic the car speed transmitted from the physical wheel encoders attached to the car. These car speed measurements are not that precise, though, which can be explained by a bias offset and noise. The bias consists of truncation errors and the algorithm which converts magnet pulses to car speed in cm/s. The truncation error occurs when the algorithm truncates the floating-point number to a u_int8 which is to be transmitted on the CAN bus. The noise can be expressed in terms of the variance around the mean velocity.

Test phase
The test phase of the wheel encoders was executed on the floor of our office, using equipment such as a yardstick, rulers, and a mobile phone for measuring the time in which the vehicle traveled a certain distance. Each measurement conducted consisted of about 40 samples. Table 6.2 below shows the measurements from the test environment. The results on each row differ quite a bit; this is mostly due to the fact that the motor of our car restarted randomly during runtime.

Table 6.2: Results of measurements on the wheel encoders

MI | µ | σ | σ² | [s] | [cm] | [tq] | [vq]

What can be seen in Table 6.2 is the instruction MI sent to the motor, the average measured velocity µ, the standard deviation σ, the variance σ², the physical time [s] in seconds for the car to travel a distance, and the physical distance [cm] the car travelled during the measurement. Moreover, the factor [tq] denotes the portion of time during which the speed of the vehicle is above a certain threshold, thus disregarding the portion of time during which the vehicle is accelerating. Similarly, the factor [vq] corrects the actual distance the car travelled during the performed test by disregarding the distance travelled while accelerating to a steadier velocity. Fig. 6.9 shows the speed received from the physical wheel encoders.
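The µ, σ, and σ² columns of Table 6.2 can be reproduced from logged encoder speeds as sketched below; the threshold imitates the [tq]/[vq] correction by discarding the acceleration phase, and the sample values are invented.

import statistics

def encoder_stats(samples_cm_s, threshold=5.0):
    """Mean, standard deviation and variance of the steady-state speed."""
    steady = [v for v in samples_cm_s if v > threshold]  # drop acceleration
    mu = statistics.mean(steady)
    sigma = statistics.stdev(steady)
    return mu, sigma, sigma ** 2

samples = [0.0, 2.1, 6.8, 31.5, 30.9, 32.2, 31.0, 30.4]  # invented log
print(encoder_stats(samples))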

Figure 6.9: Plot of the measured speed from the wheel encoders

Implementation
The model was implemented based on the values in Table 6.2. The bias has been modelled through numerous measurements of the wheel encoder values, together with physical measurements of the actual car driving a predefined distance of 2 m. As Table 6.3 below shows, the velocity measured in the physical tests is significantly higher than the car speed which the sensors believe the car is driving at. This is implemented in Simulink by introducing a look-up table which scales the actual velocity input to the sensed value.

Table 6.3: Measured velocity compared to the sensed velocity from the physical wheel encoders

Motor instruction | Sensed velocity | Measured velocity

In Fig. 6.10 below, the car speed from the sensor implemented in Simulink is plotted over time, in cm/s. The car speed depends on user input to the model, which can be seen as the steps where a user has increased the car speed. In addition, the implemented sensor model includes a noise source which depends on the current speed of the car model; this noise can be seen in the plot. Fig. 6.11 shows the model implementation in Simulink: a conversion from m/s to cm/s, a LUT converting the velocity input to what the sensor detects, and a Matlab function which adds velocity-dependent noise.
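A minimal sketch of this sensed-velocity model is given below; the look-up-table breakpoints and the noise scaling are assumed values, not the calibrated ones behind Table 6.3.

import random

LUT_ACTUAL = [0.0, 20.0, 40.0, 60.0]   # true velocity, cm/s (assumed)
LUT_SENSED = [0.0, 14.0, 29.0, 45.0]   # sensed velocity, cm/s (assumed)

def interp(x, xs, ys):
    """Piecewise-linear interpolation, the look-up table of the model."""
    for i in range(len(xs) - 1):
        if xs[i] <= x <= xs[i + 1]:
            t = (x - xs[i]) / (xs[i + 1] - xs[i])
            return ys[i] + t * (ys[i + 1] - ys[i])
    return ys[-1]

def sensed_velocity(v_true_cm_s, noise_scale=0.05):
    """Biased sensed velocity with speed-dependent Gaussian noise."""
    v_sensed = interp(v_true_cm_s, LUT_ACTUAL, LUT_SENSED)
    return v_sensed + random.gauss(0.0, noise_scale * max(v_sensed, 1e-9))

print(sensed_velocity(35.0))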

Figure 6.10: Wheel encoder noise plotted over time in Simulink

Figure 6.11: Wheel encoder model implemented in Simulink

6.5 LIDAR

Description
The LIDAR is implemented as an S-function called lidar, and models the RPLIDAR used in this thesis. The inputs of this function are shown in Table 6.4 below.

Table 6.4: Inputs of LIDAR S-functions

Inputs       | Direction | Size | Definition
Coords       | In        | 2    | Current coordinates of the vehicle.
Yaw          | In        | 1    | Current yaw of the vehicle.
Lidar Offset | In        | 1    | Vertical displacement of the LIDAR.
Collision    | Out       | 1    | Boolean indicating if the vehicle has hit a wall.
Distances    | Out       | 360  | Distances to objects at all angles.

Operation
Before vehiclemap has initialized the map, the function waits in an idle state. After initialization, the distance is measured. Noise, range constraints, and an error PDF are then applied to this value to make it resemble the RPLIDAR, and the value is written to the state vector. Subsequently, the LIDAR is rotated one degree to prepare it for the next measurement, which takes place after a delay equal to the time it takes for the RPLIDAR to rotate one degree. If the LIDAR has reached the end of its rotating cycle, the state vector is first written to the output and then flushed. A flowchart of the function is shown in Fig. 6.13 below, and a diagram of the subsystem is shown in Fig. 6.12.

Figure 6.12: Diagram of LIDAR subsystem

Figure 6.13: Flowchart of LIDAR S-function
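The per-degree measure-rotate-flush cycle described above can be sketched as follows; ray_cast() is a hypothetical stand-in for the geometric distance computation done by the S-functions, and the range limit, noise level, and miss probability are assumed values.

import random

MAX_RANGE = 6.0     # m, assumed range constraint
NOISE_STD = 0.02    # m, assumed measurement noise

def ray_cast(angle_deg):
    """Hypothetical placeholder for the map-geometry intersection test."""
    return 3.0

def simulate_scan(p_miss=0.02):
    distances = []
    for angle in range(360):                       # one beam per degree
        d = ray_cast(angle) + random.gauss(0.0, NOISE_STD)
        if d > MAX_RANGE or random.random() < p_miss:
            d = 0.0                                # dropped measurement
        distances.append(d)
    return distances          # the flushed 360-element state vector

scan = simulate_scan()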

Probability of error
Early on in the modeling process, it was observed that the LIDAR did not quite perform as expected: when doing a full rotation, the LIDAR failed to produce measurements for all 360 degrees. During one of the early tests, the LIDAR was placed in a cardboard box, and it was observed that the missed measurements were usually distributed among the same rotational points throughout repeated experiments. This raised the hypothesis that the probability of the LIDAR missing a measurement depends on how the laser hits the object. When the LIDAR is in a box, two conditions differ with every measurement: the proximity to the wall and the angle to the wall. Thus, it was decided to investigate how each of these conditions affects the probability of LIDAR misses. A test area was set up as shown in Fig. 6.16, where a cardboard box was placed in front of the vehicle. This cardboard box was then tilted at different angles and placed at different distances in order to determine what causes the LIDAR to miss a measurement. The results of this experiment can be seen in Fig. 6.14 and Fig. 6.15, respectively. No conclusive testing was done to see if different types of surfaces affect the probability of error.

Figure 6.14: Percentage of misses due to wall incline

Figure 6.15: Percentage of misses due to distance to wall

From Fig. 6.14 it can be seen that when the tilt of the reflection surface exceeds 40°, the percentage of missed measurements increases dramatically. Hypothetically, this is due to the tilt of the measured object causing a larger reflection angle of the laser, which makes the reflected laser more likely to miss the receiver. From Fig. 6.15 it can also be concluded that the LIDAR does not operate properly when the distance to the measured object gets close to its range constraints. In the model, these two sets of error probabilities are realized as two look-up tables. When a measurement is simulated, the angle and distance to the measured object are passed to their respective look-up tables, and a probability of error is obtained. This probability is then used in a binomial random variable which is multiplied with the measured distance, meaning that the measurement will be equal to 0 if the random variable generates a miss.
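A minimal sketch of this look-up-table realization is given below; the table values are assumptions shaped after Fig. 6.14, and the binomial variable reduces to a single Bernoulli draw per measurement. The distance table of Fig. 6.15 would be handled analogously, with the two probabilities combined per beam.

import random

INCLINE_DEG = [0, 20, 40, 60, 80]               # assumed breakpoints
P_MISS      = [0.01, 0.02, 0.05, 0.40, 0.80]    # rises sharply past 40°

def p_miss_for(incline_deg):
    """Linear interpolation into the assumed incline look-up table."""
    for i in range(len(INCLINE_DEG) - 1):
        if INCLINE_DEG[i] <= incline_deg <= INCLINE_DEG[i + 1]:
            t = ((incline_deg - INCLINE_DEG[i])
                 / (INCLINE_DEG[i + 1] - INCLINE_DEG[i]))
            return P_MISS[i] + t * (P_MISS[i + 1] - P_MISS[i])
    return P_MISS[-1]

def apply_miss(distance, incline_deg):
    hit = random.random() >= p_miss_for(incline_deg)   # Bernoulli draw
    return distance * hit                              # 0.0 on a miss

print(apply_miss(2.5, incline_deg=55))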

Figure 6.16: Measuring environment for investigating LIDAR misses

6.6 ROS Interface
The ROS Interface subsystem, shown in Fig. 6.17, enables the model to communicate with ROS, and makes heavy use of the Robotics System Toolbox for Simulink. The ROS Interface takes the LIDAR and odometry data generated by the model and publishes it on two ROS topics, /scan and /sensor_data. Moreover, it publishes the current position of the car in the Simulink map on the topic /car_position. It also publishes the clock, since ROS needs to run on the same clock as Simulink if the two are to be used in the same system.

Figure 6.17: Block Diagram of ROS Interface

/sensor_data
This topic contains the speed and the angular velocity of the vehicle. The message type is custom defined, and is comprised of a Header [9] and a Float64MultiArray [8]. The current odometry values are published periodically, with a clock period of 10 ms.

/scan
This topic contains the most recent LIDAR scan as well as a set of constants required by the ROS message type sensor_msgs/LaserScan [7]. The /scan topic is published whenever the LIDAR has made a full 360° rotation. This means that the publishing frequency of the /scan message is effectively 5.5 Hz, as this is the rotational frequency of the LIDAR model.

/car_position
This topic contains the position of the vehicle in the Matlab/Simulink-generated map. It exists solely for testing purposes, to compare the x and y position of the vehicle at different time instances. The topic consists of a Float64MultiArray and a header with the current time stamp, and is published at a frequency of 10 Hz.

/clock
This topic contains the timestamp, expressed as a ROS timestamp. A ROS timestamp contains two values: how many seconds have passed, and how many nanoseconds have passed since the last measured second. For example, if the time stamp is supposed to be 5.8, the value for seconds will be 5 and the value for nanoseconds will be 8·10⁸. This topic was introduced due to the fact that the model cannot run in real time; it contains the relative time used in the model. The message type being used is rosgraph_msgs/Clock, and it is published at a frequency of 100 Hz.
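A minimal rospy sketch of such a simulated-time publisher is given below; with the 5.8 s example above, rospy.Time.from_sec splits the stamp into secs = 5 and nsecs = 8·10⁸. The node name and the pacing are illustrative.

import time
import rospy
from rosgraph_msgs.msg import Clock

rospy.init_node('sim_clock')
pub = rospy.Publisher('/clock', Clock, queue_size=10)

sim_time = 0.0
while not rospy.is_shutdown():
    msg = Clock()
    msg.clock = rospy.Time.from_sec(sim_time)  # split into secs/nsecs
    pub.publish(msg)
    sim_time += 0.01                           # 100 Hz of simulated time
    time.sleep(0.01)                           # wall-clock pacing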


7 ROS Implementation

In order to control the vehicle and obtain its sensor data, a set of ROS nodes was implemented, as shown in Fig. 7.1 below. Everything within a dotted rectangle in the figure is a ROS node, where each dotted line represents a device running the node. Each node is described in the following sections.

Figure 7.1: Diagram showing the active ROS nodes while executing a SLAM algorithm on the physical vehicle.

7.1 CAN Reader
The can_reader node is tasked with obtaining the sensor data sent by the STM32 on the CAN bus, and making this data accessible to the other ROS nodes. The CAN bus is read continuously, and depending on the CAN ID of the received data, a function handling LIDAR data or a function handling odometry data is called. These functions parse the data into a graspable format; an example of this is the conversion of the digital value of the gyroscope's z-axis to an angular velocity. The LIDAR data is published on a topic called /scan, which is of the message type sensor_msgs/LaserScan. The sensor data topic is called /sensor_data, containing, among other things, car speed, timestamp, and gyroscope values; the message type of the topic is a customized message called can/sensor_data.

7.2 Odometry
The purpose of the odometry node is to present the angular rotation and the x and y position of the vehicle model as well as the physical vehicle. This is done by subscribing to the /sensor_data topic, described in Section 7.1. The data is extracted from the topic in the form of a velocity, an angular velocity, and a time stamp. From this data, velocities in the x and y directions can be calculated. By using these velocities and the difference between the current and previous timestamps, the x and y position of the vehicle and the angle through which it has turned can be calculated. This data is presented as a topic which contains the current coordinates of the vehicle as well as the total angle rotated. Moreover, it is also presented to the TF tree, a transform tree that relates coordinate frames to each other. The coordinates of these frames depend on what kind of frame it is. For example, the base_link frame is the coordinate frame of the entire robot, so the coordinates of this frame are the actual position of the robot. The odometry node presents the transform from the odom frame to the base_link frame. This correlation shows the relative x and y position and orientation of the robot frame (base_link) with respect to the odometry frame.

7.3 SLAM
The SLAM node is the implementation of the SLAM algorithm wrapped into ROS. The SLAM algorithm requires information from the sensors, such as the LIDAR data as well as the odometry data. By acquiring this data it is able to generate a map topic which contains information about its surroundings. Moreover, it also presents a transform from the map frame to the odom coordinate frame. With this information, the coordinate frames in the map can be related to each other, and the position of the vehicle within the map can be calculated and presented. Fig. 7.2 depicts the transform tree, which shows how the coordinate frames relate to each other.
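Returning to the odometry node, its dead-reckoning update can be sketched as below, assuming each /sensor_data message yields a speed v, a yaw rate w, and a timestamp t; message parsing and the TF broadcast are omitted.

import math

class DeadReckoning:
    """Integrates speed and yaw rate into an (x, y, yaw) estimate."""

    def __init__(self):
        self.x = self.y = self.yaw = 0.0
        self.t_prev = None

    def update(self, v, w, t):
        if self.t_prev is not None:
            dt = t - self.t_prev
            self.yaw += w * dt                     # integrate yaw rate
            self.x += v * math.cos(self.yaw) * dt  # velocity in x
            self.y += v * math.sin(self.yaw) * dt  # velocity in y
        self.t_prev = t

dr = DeadReckoning()
dr.update(v=1.0, w=0.1, t=0.0)
dr.update(v=1.0, w=0.1, t=0.1)
print(dr.x, dr.y, dr.yaw)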

Figure 7.2: An overview of the transform tree. The broadcaster is the name of the node which presents the transform, and the average rate shows how often the transform is updated. The buffer length is how many seconds of data are available in the TF buffer. Transform times are seen in the most recent and oldest transform fields, where the former contains the timestamp of the latest transform of that type and the latter that of the first.

7.4 rviz
The rviz node is mainly a visualization tool which provides a live update of the map generated by the SLAM algorithm. Moreover, the trajectory of the vehicle in the map can also be visualized. An overview of the rviz window is shown in Fig. 7.3.

Figure 7.3: An overview of the rviz window. The main window shows the map generated by a SLAM algorithm. The small red square is the current pose of the vehicle.

7.5 Position
The general idea behind the position node is to save the trajectory data of the vehicle. The position of the vehicle is obtained by looking up a transform from the fixed world frame to the base_link frame, which represents the actual position of the vehicle. The estimated position is generated by the SLAM algorithm. The topic was introduced for testing purposes, in order to compare the actual position of the vehicle with an estimated position and thereby validate each SLAM algorithm. The data saved to the text file is the x and y coordinates as well as a time stamp. The values are stored into a vector at a frequency of 10 Hz, the same as the frequency of the /car_position topic described in Section 6.6.

7.6 simulinkposition
The main idea behind the simulinkposition node is to save the trajectory of the vehicle within the map generated from Simulink. The node subscribes to the /car_position topic from Simulink and saves the x and y coordinates as well as a time stamp into a text file. These coordinates represent the actual position of the vehicle in the map, and are thus not distorted by the sensors in the model. This topic is described in Section 6.6.
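A minimal sketch of the transform lookup behind the position node is given below, assuming a fixed frame named world; the output format (x, y, timestamp) follows the description above.

import rospy
import tf

rospy.init_node('position_logger')
listener = tf.TransformListener()
rate = rospy.Rate(10)                 # 10 Hz, matching /car_position

with open('positions.txt', 'w') as f:
    while not rospy.is_shutdown():
        try:
            (trans, rot) = listener.lookupTransform(
                'world', 'base_link', rospy.Time(0))
            f.write('%f %f %f\n' % (trans[0], trans[1], rospy.get_time()))
        except (tf.LookupException, tf.ConnectivityException,
                tf.ExtrapolationException):
            pass                      # transform not yet available
        rate.sleep()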

7.7 ROS Control App
The ROS Control App is an Android mobile application which is used to steer the physical vehicle. This is done by moving a joystick in the x and y directions, which publishes the angular and linear movement on a topic called /joystick.

7.8 Joystick Input
The purpose of the joystick node is to interface the data from the ROS Control App and supply it to the physical vehicle. The node subscribes to the /joystick topic from the ROS Control App. The data from the application can be seen as an instruction to rotate the vehicle, as well as an instruction to move the vehicle forwards or backwards. This data is split up into two topics, /throttle and /steering, which are both of the type std_msgs/Int8. The throttle topic contains instructions to the motor to move the vehicle either forwards or backwards, and the steering topic contains the instructions to the servo of the vehicle.

7.9 CAN Writer
The task of the can_writer node is to send data over the CAN bus to the STM32. The data being sent consists of steering instructions to the servo of the vehicle, instructions to the motor to move the car forwards or backwards, and instructions to the lights and the buzzer on the vehicle. This data is obtained from various topics in the ROS environment, such as the /throttle topic (Section 7.8), the /steering topic (Section 7.8), and the /carlights topic.

7.10 CPU Recorder
The task of the CPU recorder node is to periodically sample the CPU usage of a process and store it in a text file. It uses two different Linux commands for this, top and ps, and stores the results of each method in a separate text file. Two different methods are used as a security measure, since it would be very time-consuming to redo all simulations if results from the other method were needed.
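A minimal sketch of the ps-based sampling method is given below; the PID of the SLAM process, the output file name, and the sampling period are assumed inputs.

import subprocess
import time

def record_cpu(pid, outfile='cpu_ps.txt', period=0.5):
    """Append one %CPU sample per period for the given process."""
    with open(outfile, 'a') as f:
        while True:
            out = subprocess.check_output(
                ['ps', '-p', str(pid), '-o', '%cpu='])
            f.write(out.decode().strip() + '\n')
            f.flush()
            time.sleep(period)

# record_cpu(12345)   # hypothetical PID of the SLAM node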


8 Simulations

8.1 Method

Overview
In order to determine which algorithm was suitable to use on the vehicle, simulations were performed on the model. Three maps were created, each of a different size and containing a different amount of features in the environment, in order to check the performance of each algorithm under different circumstances. Control instructions were recorded for each map using the Input block in the Simulink model, so that an identical vehicle path could be used for each simulation. After this, the simulation was run with this path and all the simulated data was saved in a ROS bag. In this way, the real-time limitations of the model could be avoided while running the SLAM algorithms. Fig. 8.1 below shows the active ROS nodes while simulations are taking place. The player node replays the data stored in the ROS bag and publishes it on the /scan, /sensor_data, and /car_position topics. The odometry node presents odometry information to the SLAM node, the rviz node, and the position node. The simulinkposition, cpu_recorder, position, and rviz nodes were used for testing purposes. Thorough descriptions of all these nodes can be found in Chapter 7.

Figure 8.1: Diagram showing the active ROS nodes while simulating a SLAM algorithm

Maps
Three maps were designed for the simulation. The main idea was to have three maps which grow in size: since all SLAM algorithms scale with the map size, this might give an idea of how well a particular algorithm scales. The maps are shown in Figures 8.2, 8.3, and 8.4 below, where the axis tick labels show distance in meters.

Figure 8.2: Plot of small map

Figure 8.3: Plot of medium map

Figure 8.4: Plot of large map

Tests
Four different types of tests were performed on each algorithm, namely:

- Mapping accuracy
- Path accuracy
- Average position error
- Average CPU load

In the mapping accuracy test, the SLAM algorithm is used on a specific map simulation. The obtained map is saved and presented next to the actual map, in order to show how the map generated by the SLAM algorithm differs from the actual map. The path accuracy test compares the generated path of each algorithm with the actual path the kinematic vehicle has travelled. This is presented in a plot which includes the actual path and the generated path of each algorithm. The average position error is a performance metric which shows how much the estimated position from the SLAM algorithm differs from the actual position. It is calculated by computing the difference between the estimated and actual positions at each time instance, and taking the mean. In the average CPU load test, the CPU usage of each SLAM algorithm was recorded using the CPU recorder node described in Section 7.10. For these results, the top method was used, with a sampling period of 10 ms.
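The average position error metric above can thus be computed as the mean distance between matched estimated and actual positions, as in the sketch below; the sample trajectories are invented.

import math

def average_position_error(estimated, actual):
    """Mean Euclidean distance between matched position pairs."""
    errors = [math.hypot(ex - ax, ey - ay)
              for (ex, ey), (ax, ay) in zip(estimated, actual)]
    return sum(errors) / len(errors)

est = [(0.0, 0.0), (1.1, 0.1), (2.0, -0.1)]   # estimated trajectory
act = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]    # actual trajectory
print(average_position_error(est, act))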

Each test is performed for each algorithm and each map. Also, each of these tests is performed both for simulations with ideal sensors and for simulations with added sensor noise. After the theoretical investigation, thoroughly described in Section 3.5, had been conducted, it was decided to perform simulations with GraphSLAM and FastSLAM. These algorithms are implemented in the ROS packages slam_karto and gmapping, which were used for the simulations. The active ROS nodes for the slam_karto and gmapping simulations are illustrated in Fig. 8.5 and Fig. 8.6, respectively.

Figure 8.5: ROS network during simulation of the slam_karto algorithm.

Figure 8.6: ROS network during simulation of the gmapping algorithm

8.2 Results
All simulation results are presented in the sections below. The tests themselves were carried out as specified in Section 8.1.

Mapping accuracy
This section presents how well each algorithm performs in terms of generating an accurate map. Each figure shows how one map is generated by both algorithms, using both the ideal and the noisy sensor model for each algorithm.

Small map
Fig. 8.7 below shows the results for the mapping of the small map. The results of each algorithm/sensor model combination appear to be fairly similar.

Figure 8.7: Mapping accuracy results on small map. (a) gmapping with ideal sensor model; (b) slam_karto with ideal sensor model; (c) gmapping with noisy sensor model; (d) slam_karto with noisy sensor model.

Medium map
Fig. 8.8 shows the results of mapping the medium map. In this case, the noise appears to have an effect on the mapping.

Figure 8.8: Mapping accuracy results on medium map. (a) gmapping with ideal sensor model; (b) slam_karto with ideal sensor model; (c) gmapping with noisy sensor model; (d) slam_karto with noisy sensor model.

Large map
Fig. 8.9 shows the mapping of the large map. Here, the performance of both algorithms varies greatly.

Figure 8.9: Mapping accuracy results on large map. (a) gmapping with ideal sensor model; (b) slam_karto with ideal sensor model; (c) gmapping with noisy sensor model; (d) slam_karto with noisy sensor model. When the noisy sensor model is used, as shown in (d), the map produced by slam_karto is highly inaccurate. This is because the loop closing fails.

Path accuracy
The following sections contain the results of the path accuracy test for all three maps, using both the ideal and noisy sensor models. Each figure contains the theoretical path as well as the generated paths of slam_karto and gmapping.

Small map
Fig. 8.10 below shows the generated path of both algorithms compared to the theoretical path when running simulations on the small map. gmapping does not handle the noise as well as slam_karto in this case.

Figure 8.10: Estimated path on small map. (a) Ideal sensor model; (b) Noisy sensor model.

Medium map
Fig. 8.11 below shows the generated path of both algorithms compared to the theoretical path when running simulations on the medium map. With the ideal model, the generated path of neither algorithm deviates greatly from the theoretical path. Both algorithms, however, and especially slam_karto, show a significantly erroneous path with the noisy sensor model.

Figure 8.11: Estimated path on medium map. (a) Ideal sensor model; (b) Noisy sensor model.

Large map
Fig. 8.12 below shows the generated path of both algorithms compared to the theoretical path when running simulations on the large map. The loop closure problems of slam_karto, described in Section 8.3, cause its path to drift away towards the end of the simulation.

Figure 8.12: Estimated path on large map. (a) Ideal sensor model; (b) Noisy sensor model.

Average position error

Ideal sensor model

Table 8.1: Average position error of gmapping and slam_karto in different maps with ideal sensors.

           | Small Map | Medium Map | Large Map
gmapping   | …         | …          | …
slam_karto | …         | …          | …

As can be seen in Table 8.1, when an ideal sensor model is used, the algorithms produce similar results, although slam_karto slightly outperforms gmapping on the medium map. A specific cause for this remains inconclusive. What can be said in general is that, given the same environment and sensor data, slam_karto simply works better than gmapping under those very particular circumstances.

Noisy sensor model

Table 8.2: Average position error of gmapping and slam_karto in different maps with noisy sensors.

           | Small Map | Medium Map | Large Map
gmapping   | …         | …          | …
slam_karto | …         | …          | …

In both the ideal and noisy simulation sets, the clear relationship is that the average position error always seems to increase with the size of the map. This can be attributed to the fact that if the robot travels a longer distance, the measurements may drift for a longer time until the algorithm has a chance to synchronize, i.e. through a loop closure. This is illustrated well in Fig. 8.12, where, for example, gmapping drifts considerably while travelling through the long corridor until it is able to correct itself. Fig. 8.13 shows the position error over time for one of the simulations. The figure shows that the error does not have any time-related behaviour; its magnitude does not, for example, increase or decrease over time. The error is instead related to the algorithm's ability (or inability) to predict the position of the vehicle. This ability depends on the quality of the sensor data the algorithm has to work with, and on how "difficult" the current estimation problem is to solve.

Thus, the error is related more to the state of the vehicle and its environment than to the time elapsed.

Figure 8.13: Large map with ideal sensors – position error over time

Average CPU load
In the section below, the average CPU load is expressed as a percentage: the average percentage of time that the SLAM algorithm utilizes the CPU. This figure refers to the usage of one CPU core, so a number higher than 100% is possible if the algorithm uses more than one core.

Ideal sensor model

Table 8.3: Average CPU load of gmapping and slam_karto in different maps using the ideal sensor model.

           | Small Map | Medium Map | Large Map
gmapping   | 68.61%    | 99.19%     | …%
slam_karto | 16.74%    | 39.03%     | 56.94%

Noisy sensor model

Table 8.4: Average CPU load of gmapping and slam_karto in different maps using the noisy sensor model.

           | Small Map | Medium Map | Large Map
gmapping   | 90.5%     | 97.93%     | 70.45%
slam_karto | 34.85%    | 46.45%     | 21.91%

Something somewhat surprising here is the fact that for slam_karto, the CPU usage goes down significantly for the large map. Cross-examining this observation with the fact that this simulation produced a highly inaccurate map (see Fig. 8.9 (d)) and a highly inaccurate path (see Fig. 8.12 (b)) suggests that the algorithm is not doing all the computations necessary to work properly.

8.3 Issues
When doing these experiments, there were some problems with running the slam_karto node: when performing a loop closure, the node crashed almost every time. A crash itself did not produce erroneous results, but it meant that the SLAM was aborted, so no further mapping could be conducted from that point on. Also, since a crash resulted in a loop closure not being executed, the generated map was not as accurate as it would have been following a successful loop closure. The cause of the slam_karto crashes was never found. In order to still obtain results which could be used in this thesis, the slam_karto experiments had to be performed over and over again until the loop closure algorithm managed to execute without crashing.

8.4 Conclusion
The results show that both the gmapping and slam_karto algorithms are able to solve the SLAM problem. The generated maps (seen in Fig. 8.7, 8.8, and 8.9) from the two algorithms are similar in the ideal case. Moreover, the positioning error does not differ much between the two algorithms, with each algorithm outperforming the other on at least one map. However, when noise is introduced, it can clearly be seen that the loop closure of slam_karto starts failing. A good example of this can be seen in Fig. 8.8 (d), where the robot has observed the lower room twice but fails to recognize that it has been there before. A similar issue can be seen in Fig. 8.11 (b). It should also be noted that the loop closure of slam_karto mostly resulted in the node crashing, or generated a map that was completely incomprehensible; the results shown in this report simply show the best result out of many trials. The gmapping algorithm, however, produced similar results every time, and there were no issues with its loop closure. Another interesting observation can be seen in Fig. 8.12 (b). In this simulation, gmapping continuously makes small position mistakes which are quickly corrected. slam_karto, on the other hand, predicts the position really well until it reaches the lower corridor of the map, where the positioning error starts to increase. The algorithm never recovers from this error, and keeps producing false position estimates from that point on. This, along with the fact that the slam_karto node almost always crashes when performing a loop closure, indicates that it may not be as reliable as gmapping. Tables 8.3 and 8.4 show that in all simulations performed, gmapping has at least twice the CPU load of slam_karto. This large CPU load, which at times even occupies an entire core by itself, is an indicator that the algorithm might have problems being executed using limited resources. Regardless, it was still chosen to be implemented on the physical vehicle, due to the fact that slam_karto was deemed too unreliable.


9 Real-world experiment

9.1 Overview
Chapter 8 presented simulation results on three different maps generated by two different SLAM algorithms. From these simulations, it was decided that gmapping was the appropriate algorithm to use with the physical vehicle. This chapter describes a real-world experiment featuring a map generated by the physical vehicle while running gmapping. The real-world experiment of the gmapping algorithm was performed at the ÅF office building in Linköping. The room chosen was a small office measuring approximately 5.3 x 4.8 m. A sketch of this office can be seen in Fig. 9.1 below, and a photo can be seen in Fig. 9.2.

Figure 9.1: Sketch of test environment. The circles represent table legs, and the squares two large cupboards which divide the room.

Figure 9.2: Photo of test environment

9.2 Results
The experiment was conducted three times, and the generated maps are shown in Fig. 9.3 below. The computational load for each trial is shown in Table 9.1.

Figure 9.3: Map of small office generated by gmapping. (a) Trial 1; (b) Trial 2; (c) Trial 3.

During the generation of each map, the robot was controlled manually, which meant that the robot drove differently in each trial. All maps are fairly accurate.


More information

Advanced Techniques for Mobile Robotics Graph-based SLAM using Least Squares. Wolfram Burgard, Cyrill Stachniss, Kai Arras, Maren Bennewitz

Advanced Techniques for Mobile Robotics Graph-based SLAM using Least Squares. Wolfram Burgard, Cyrill Stachniss, Kai Arras, Maren Bennewitz Advanced Techniques for Mobile Robotics Graph-based SLAM using Least Squares Wolfram Burgard, Cyrill Stachniss, Kai Arras, Maren Bennewitz SLAM Constraints connect the poses of the robot while it is moving

More information

NERC Gazebo simulation implementation

NERC Gazebo simulation implementation NERC 2015 - Gazebo simulation implementation Hannan Ejaz Keen, Adil Mumtaz Department of Electrical Engineering SBA School of Science & Engineering, LUMS, Pakistan {14060016, 14060037}@lums.edu.pk ABSTRACT

More information

AN INCREMENTAL SLAM ALGORITHM FOR INDOOR AUTONOMOUS NAVIGATION

AN INCREMENTAL SLAM ALGORITHM FOR INDOOR AUTONOMOUS NAVIGATION 20th IMEKO TC4 International Symposium and 18th International Workshop on ADC Modelling and Testing Research on Electric and Electronic Measurement for the Economic Upturn Benevento, Italy, September 15-17,

More information

Introduction to robot algorithms CSE 410/510

Introduction to robot algorithms CSE 410/510 Introduction to robot algorithms CSE 410/510 Rob Platt robplatt@buffalo.edu Times: MWF, 10-10:50 Location: Clemens 322 Course web page: http://people.csail.mit.edu/rplatt/cse510.html Office Hours: 11-12

More information

SLAM: Robotic Simultaneous Location and Mapping

SLAM: Robotic Simultaneous Location and Mapping SLAM: Robotic Simultaneous Location and Mapping William Regli Department of Computer Science (and Departments of ECE and MEM) Drexel University Acknowledgments to Sebastian Thrun & others SLAM Lecture

More information

Gaussian Processes, SLAM, Fast SLAM and Rao-Blackwellization

Gaussian Processes, SLAM, Fast SLAM and Rao-Blackwellization Statistical Techniques in Robotics (16-831, F11) Lecture#20 (November 21, 2011) Gaussian Processes, SLAM, Fast SLAM and Rao-Blackwellization Lecturer: Drew Bagnell Scribes: Junier Oliva 1 1 Comments on

More information

Geometrical Feature Extraction Using 2D Range Scanner

Geometrical Feature Extraction Using 2D Range Scanner Geometrical Feature Extraction Using 2D Range Scanner Sen Zhang Lihua Xie Martin Adams Fan Tang BLK S2, School of Electrical and Electronic Engineering Nanyang Technological University, Singapore 639798

More information

Analysis, optimization, and design of a SLAM solution for an implementation on reconfigurable hardware (FPGA) using CńaSH

Analysis, optimization, and design of a SLAM solution for an implementation on reconfigurable hardware (FPGA) using CńaSH December 12, 2016 MASTER THESIS Analysis, optimization, and design of a SLAM solution for an implementation on reconfigurable hardware (FPGA) using CńaSH Authors: Robin Appel Hendrik Folmer Faculty: Faculty

More information

(W: 12:05-1:50, 50-N202)

(W: 12:05-1:50, 50-N202) 2016 School of Information Technology and Electrical Engineering at the University of Queensland Schedule of Events Week Date Lecture (W: 12:05-1:50, 50-N202) 1 27-Jul Introduction 2 Representing Position

More information

ME 597/747 Autonomous Mobile Robots. Mid Term Exam. Duration: 2 hour Total Marks: 100

ME 597/747 Autonomous Mobile Robots. Mid Term Exam. Duration: 2 hour Total Marks: 100 ME 597/747 Autonomous Mobile Robots Mid Term Exam Duration: 2 hour Total Marks: 100 Instructions: Read the exam carefully before starting. Equations are at the back, but they are NOT necessarily valid

More information

Tracking Multiple Moving Objects with a Mobile Robot

Tracking Multiple Moving Objects with a Mobile Robot Tracking Multiple Moving Objects with a Mobile Robot Dirk Schulz 1 Wolfram Burgard 2 Dieter Fox 3 Armin B. Cremers 1 1 University of Bonn, Computer Science Department, Germany 2 University of Freiburg,

More information

Augmented Reality, Advanced SLAM, Applications

Augmented Reality, Advanced SLAM, Applications Augmented Reality, Advanced SLAM, Applications Prof. Didier Stricker & Dr. Alain Pagani alain.pagani@dfki.de Lecture 3D Computer Vision AR, SLAM, Applications 1 Introduction Previous lectures: Basics (camera,

More information

ROBUST LOCALISATION OF AUTOMATED GUIDED VEHICLES FOR COMPUTER-INTEGRATED MANUFACTURING ENVIRONMENTS. R.C. Dixon 1 *, G.Bright 2 & R.

ROBUST LOCALISATION OF AUTOMATED GUIDED VEHICLES FOR COMPUTER-INTEGRATED MANUFACTURING ENVIRONMENTS. R.C. Dixon 1 *, G.Bright 2 & R. ROBUST LOCALISATION OF AUTOMATED GUIDED VEHICLES FOR COMPUTER-INTEGRATED MANUFACTURING ENVIRONMENTS R.C. Dixon 1 *, G.Bright 2 & R. Harley 3 1.2 Department of Mechanical Engineering University of KwaZulu-Natal,

More information

CHARACTERIZATION AND CALIBRATION OF MEMS INERTIAL MEASUREMENT UNITS

CHARACTERIZATION AND CALIBRATION OF MEMS INERTIAL MEASUREMENT UNITS CHARACTERIZATION AND CALIBRATION OF MEMS INERTIAL MEASUREMENT UNITS ökçen Aslan 1,2, Afşar Saranlı 2 1 Defence Research and Development Institute (SAE), TÜBİTAK 2 Dept. of Electrical and Electronics Eng.,

More information

AUTONOMOUS PLANETARY ROVER CONTROL USING INVERSE SIMULATION

AUTONOMOUS PLANETARY ROVER CONTROL USING INVERSE SIMULATION AUTONOMOUS PLANETARY ROVER CONTROL USING INVERSE SIMULATION Kevin Worrall (1), Douglas Thomson (1), Euan McGookin (1), Thaleia Flessa (1) (1)University of Glasgow, Glasgow, G12 8QQ, UK, Email: kevin.worrall@glasgow.ac.uk

More information

Vehicle Localization. Hannah Rae Kerner 21 April 2015

Vehicle Localization. Hannah Rae Kerner 21 April 2015 Vehicle Localization Hannah Rae Kerner 21 April 2015 Spotted in Mtn View: Google Car Why precision localization? in order for a robot to follow a road, it needs to know where the road is to stay in a particular

More information

High-precision, consistent EKF-based visual-inertial odometry

High-precision, consistent EKF-based visual-inertial odometry High-precision, consistent EKF-based visual-inertial odometry Mingyang Li and Anastasios I. Mourikis, IJRR 2013 Ao Li Introduction What is visual-inertial odometry (VIO)? The problem of motion tracking

More information

Monte Carlo Localization for Mobile Robots

Monte Carlo Localization for Mobile Robots Monte Carlo Localization for Mobile Robots Frank Dellaert 1, Dieter Fox 2, Wolfram Burgard 3, Sebastian Thrun 4 1 Georgia Institute of Technology 2 University of Washington 3 University of Bonn 4 Carnegie

More information

Map Representation and LIDAR-Based Vehicle

Map Representation and LIDAR-Based Vehicle Map Representation and LIDAR-Based Vehicle Localization Automatic Map Extraction from High Density Point Clouds for LIDAR Localization in Production Vehicles Master s thesis in the Signal and Systems department,

More information

Note Set 4: Finite Mixture Models and the EM Algorithm

Note Set 4: Finite Mixture Models and the EM Algorithm Note Set 4: Finite Mixture Models and the EM Algorithm Padhraic Smyth, Department of Computer Science University of California, Irvine Finite Mixture Models A finite mixture model with K components, for

More information

E80. Experimental Engineering. Lecture 9 Inertial Measurement

E80. Experimental Engineering. Lecture 9 Inertial Measurement Lecture 9 Inertial Measurement http://www.volker-doormann.org/physics.htm Feb. 19, 2013 Christopher M. Clark Where is the rocket? Outline Sensors People Accelerometers Gyroscopes Representations State

More information

LOAM: LiDAR Odometry and Mapping in Real Time

LOAM: LiDAR Odometry and Mapping in Real Time LOAM: LiDAR Odometry and Mapping in Real Time Aayush Dwivedi (14006), Akshay Sharma (14062), Mandeep Singh (14363) Indian Institute of Technology Kanpur 1 Abstract This project deals with online simultaneous

More information

Data Association for SLAM

Data Association for SLAM CALIFORNIA INSTITUTE OF TECHNOLOGY ME/CS 132a, Winter 2011 Lab #2 Due: Mar 10th, 2011 Part I Data Association for SLAM 1 Introduction For this part, you will experiment with a simulation of an EKF SLAM

More information

Optimization of the Simultaneous Localization and Map-Building Algorithm for Real-Time Implementation

Optimization of the Simultaneous Localization and Map-Building Algorithm for Real-Time Implementation 242 IEEE TRANSACTIONS ON ROBOTICS AND AUTOMATION, VOL. 17, NO. 3, JUNE 2001 Optimization of the Simultaneous Localization and Map-Building Algorithm for Real-Time Implementation José E. Guivant and Eduardo

More information

Robotics and Autonomous Systems

Robotics and Autonomous Systems Robotics and Autonomous Systems Lecture 6: Perception/Odometry Simon Parsons Department of Computer Science University of Liverpool 1 / 47 Today We ll talk about perception and motor control. 2 / 47 Perception

More information

arxiv: v1 [cs.ro] 18 Jul 2016

arxiv: v1 [cs.ro] 18 Jul 2016 GENERATIVE SIMULTANEOUS LOCALIZATION AND MAPPING (G-SLAM) NIKOS ZIKOS AND VASSILIOS PETRIDIS arxiv:1607.05217v1 [cs.ro] 18 Jul 2016 Abstract. Environment perception is a crucial ability for robot s interaction

More information

37. Simultaneous Localization and Mapping

37. Simultaneous Localization and Mapping Sebastian Thrun, John J. Leonard 37. Simultaneous Localization and Mapping 871 This chapter provides a comprehensive introduction in to the simultaneous localization and mapping problem, better known in

More information

A Relative Mapping Algorithm

A Relative Mapping Algorithm A Relative Mapping Algorithm Jay Kraut Abstract This paper introduces a Relative Mapping Algorithm. This algorithm presents a new way of looking at the SLAM problem that does not use Probability, Iterative

More information