AN INCREMENTAL SLAM ALGORITHM FOR INDOOR AUTONOMOUS NAVIGATION

M. Di Castro, A. Masi, G. Lunghi, R. Losito
CERN, Switzerland

20th IMEKO TC4 International Symposium and 18th International Workshop on ADC Modelling and Testing, "Research on Electric and Electronic Measurement for the Economic Upturn", Benevento, Italy, September 15-17, 2014

Abstract

Indoor autonomous navigation is a crucial requirement for many challenging robotic applications, such as the inspection of unstructured and harsh environments. Simultaneous Localization and Mapping (SLAM) is a key component of autonomous navigation. This paper presents a novel embedded incremental algorithm that can be used on ground or aerial robots to localize themselves and build 3D maps of an unknown, GPS-denied environment. The algorithm is computationally light, runs on board in real time, and is suitable for integration into a full framework for autonomous navigation. It has been validated using a minimal sensor set composed of a laser scanner and an inertial unit. Preliminary test results of the algorithm are presented using the CERN test facilities as a case study.

I. INTRODUCTION

Many robotic applications require the ability to acquire a 3D model of the environment in real time in order to navigate safely in it and accomplish complex tasks. Simultaneous Localization and Mapping (SLAM) algorithms incrementally build a map of the environment while estimating the pose of the robot at the same time [1]. In the absence of global positioning information, as in GPS-denied environments, the robot's state (its pose and the map) is essential for safe and reliable autonomous navigation. Building large and accurate maps is still an open problem, especially in real time.

There are different approaches to SLAM, but essentially all algorithms fall into three main categories: landmark-based SLAM, grid-based SLAM and pose-graph SLAM. Landmark-based SLAM identifies landmarks in the environment and computes the state of the robot from their relative positions. It requires some prior information about the environment, since the robot has to recognize a landmark and associate it with previously observed landmarks; this operation is called data association and is highly dependent on the environment [2][3]. Grid-based SLAM uses an occupancy grid map to represent the environment; with this approach, no assumptions about the environment are required [4][5]. Pose-graph SLAM represents the problem as a graph: every node corresponds to a pose of the robot during mapping, and every edge corresponds to a spatial constraint between poses [6].

For each of these categories, different concrete approaches have been proposed. One of them is EKF SLAM, which uses an extended Kalman filter to estimate the state of the robot; this class of algorithms is usually landmark-based. Although the EKF is optimal under linear dynamics with Gaussian noise, the SLAM problem is not linear, so some linearization must be performed, and if the non-linearities are large, EKF SLAM can diverge. Particle filter SLAM removes the Gaussian limitation by using a particle filter to represent the posterior: no parametric form of the likelihood is required, and any kind of distribution can be represented [7][8][9].

In this paper, an incremental SLAM algorithm based on particle filters is proposed.
Because a parametric form for the likelihood distribution of the robot's pose and the map cannot be defined, the proposed method samples the space around the last odometry measurement to find the most likely pose. Instead of keeping a map for every particle and computing the probability of each particle using the sensor model [10], only one particle is kept, and the probability that this particle yields the most correct map at each step is maximized. With this approach only one map is carried from step to step, which reduces the computational time and allows more particles to be used around the measured odometry point, increasing map accuracy with respect to standard particle filter approaches. If the time between two steps is short, the inconsistency of the map remains low.

SLAM algorithms process data coming from various sensors needed for autonomous navigation, and different types of sensor have been used for SLAM in the last decade. 3D time-of-flight sensors are usually heavy, expensive and have high power consumption. Stereo and monocular cameras are used for visual odometry and landmark extraction, but extracting from them the information needed to build an occupancy grid map is computationally heavy. 2D laser range finders are the most common choice and have been used intensively on robots for SLAM so far. Being compact, light, and with an update rate in the order of tens of Hz, 2D laser range finders (lidars) are a suitable sensor choice for SLAM. For the work presented in this paper, a lidar and an inertial measurement unit (IMU) have been used as positioning sensors.

The presented approach uses a 2D laser range finder and an IMU to estimate the 6-DoF state of the robot and a 3D map of the environment. Unlike standard particle filter SLAM, the presented approach is well suited to real-time execution. Other approaches, like Hector SLAM [11], provide a 6-DoF state estimate but only a 2D map; the approach proposed in this paper provides a full 3D map, which is more suitable for autonomous navigation. The proposed method is suitable both for UAVs and AGVs: AGVs can additionally provide wheel odometry, which can replace the UAV's IMU.

The next section defines the SLAM problem and is followed by experimental results. The paper ends with a brief discussion of future work and the conclusions.

II. THE PROPOSAL

The SLAM problem can be defined as the maximization of the following likelihood:

p(x_t, m_t | x_{t-1}, m_{t-1}, z_t, u_t)    (1)

where x_t is the 6-DoF pose of the robot at time t, m_t is the occupancy grid map at time t, x_{t-1} is the previous pose of the robot, m_{t-1} is the occupancy grid map built so far, z_t is the latest lidar measurement and u_t is the latest IMU measurement.

A. The development platform

The development platform is based on two sensors: an IMU and a 2D laser range finder. A full 6-DoF SLAM is implemented, making the approach completely independent of the robotic platform. For the experimental validation, the focus has been put on autonomous navigation for Unmanned Aerial Vehicles (UAVs), since they have the tightest constraints on autonomy, payload and processing power.

B. The software architecture

The software architecture, described in Fig. 1, is highly modular. The scan matching module takes data from the IMU and the laser sensor and computes the odometry. The SLAM module represents the innovative contribution to the state of the art, characterized by its alternative approach to standard particle filter SLAM: in this step an odometry error correction is performed using the map built so far. The last step updates the map.

Fig 1. The software architecture

Fig 2. Global reference frame S and robot's reference frame S_i
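To make the data flow of Fig. 1 concrete, the following minimal Python sketch shows one possible shape of a single pipeline iteration. All names here (imu, lidar, scan_match, slam_correct, update_map) are hypothetical placeholders chosen for illustration, not the authors' implementation.

```python
# Minimal per-step pipeline sketch mirroring the architecture of Fig. 1.
# All interfaces are illustrative placeholders, not the authors' code.

def slam_step(imu, lidar, prev_scan, prev_pose, occupancy_map):
    """One iteration: odometry -> scan matching -> pose correction -> map update."""
    u_t = imu.read_increment()   # u_t = [dx, dy, dz, roll, pitch, dyaw]
    z_t = lidar.read_scan()      # z_t = [(r_i, theta_i), ...]

    # Scan matching (CSM / point-to-line ICP in the paper) fuses u_t with
    # the two most recent scans and returns the odometry estimate x̂_t.
    x_hat = scan_match(prev_scan, z_t, u_t, prev_pose)

    # SLAM module: sample poses around x̂_t, weight them against the map
    # built so far, and keep only the best one (see subsection iv).
    x_t = slam_correct(x_hat, z_t, occupancy_map)

    # Mapping module: insert the scan into the 3D occupancy map
    # (OctoMap in the paper); runs on a separate thread in the real system.
    update_map(occupancy_map, x_t, z_t)
    return x_t, z_t
```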

i. The IMU module

The IMU module provides the control input

u_t = [Δx, Δy, Δz, α, β, Δγ]^T

Δx, Δy and Δz are the relative displacements between step t-1 and step t, expressed in the robot's reference frame (Fig. 2). They are computed by double integration of the IMU accelerometer data. The drift introduced by the double integration can be neglected, since this operation is only performed at initialization for the scan matching algorithm and is not taken into account during navigation. α and β are respectively the roll and pitch angles of the robot; they are provided in the global reference frame, considering the relative rotation with respect to the gravitational vector. Δγ is the variation of the yaw angle, provided by the IMU's gyroscopes. The rotation matrix of the robot is then updated using the attitude obtained from the IMU.

ii. The laser module

The laser module periodically scans the environment, providing z_t in (1):

z_t = [r_i, θ_i],  i = 0, ..., n

where r_i is the i-th distance returned by the range finder and θ_i is the angular orientation of that range. To obtain the altitude of the robot, two approaches have been tested. The first uses a downward-looking ultrasound sensor to measure the height of the robot; since the ultrasound sensor has a cone-shaped field of view, it senses the distance to the nearest obstacle under the robotic platform. The second uses two mirrors that deflect small fractions of the laser scanner's field of view upwards and downwards, respectively.

iii. The scan matching module

The scan matching module finds the rigid-body transformation (translation + rotation) that best aligns two consecutive scans of the laser range finder with the IMU data (sensor fusion [12]). It is based on the Canonical Scan Matcher (CSM) [13], a very fast variant of the Iterative Closest Point (ICP) algorithm that uses a point-to-line metric optimized for range-finder scan matching [14]. It is robust enough to be used in industrial prototypes of autonomous mobile robots. The input u_t is a rough estimate of the movement: it can be provided by an IMU if the algorithm runs on a UAV, or by wheel odometry if it runs on an AGV. Given z_t, z_{t-1} and u_t, the scan matching module returns

x̂_t = [x, y, z, α, β, γ]^T

an estimate of the pose of the robot in the global reference frame S (Fig. 2). The output of this module, x̂_t, can be considered an accurate robot odometry.

Fig 3. Lidar sensor model
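As a concrete illustration of the IMU module (i) and laser module (ii) above, the sketch below shows one way to build u_t by double integration of accelerations plus a gyro-integrated yaw increment, and to convert a polar scan into Cartesian points. The sample layout, variable names and range-filtering details are assumptions made for the example, not taken from the paper; only the 30 m maximum range is from the text.

```python
import numpy as np

def imu_increment(accels, gyro_yaw_rates, roll, pitch, dt):
    """Build u_t = [dx, dy, dz, roll, pitch, dyaw] from raw IMU samples.

    accels: (N, 3) body-frame accelerations with gravity removed (m/s^2).
    gyro_yaw_rates: (N,) yaw rates (rad/s). roll/pitch are obtained from
    the gravity vector, as described in the text. Drift from the double
    integration is tolerated because it is only used at initialization.
    """
    vel = np.cumsum(accels * dt, axis=0)     # first integration -> velocity
    disp = np.sum(vel * dt, axis=0)          # second integration -> displacement
    dyaw = np.sum(gyro_yaw_rates) * dt       # integrate yaw rate
    return np.array([disp[0], disp[1], disp[2], roll, pitch, dyaw])

def scan_to_points(ranges, angles, r_max=30.0):
    """Convert z_t = [(r_i, theta_i)] into 2D Cartesian points in the
    sensor frame, discarding out-of-range returns (UTM-30LX: 30 m)."""
    r = np.asarray(ranges)
    th = np.asarray(angles)
    keep = (r > 0.0) & (r < r_max)
    return np.stack([r[keep] * np.cos(th[keep]),
                     r[keep] * np.sin(th[keep])], axis=1)
```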
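The scan matcher used in the paper is CSM with its point-to-line metric [13][14]; as a rough illustration of the underlying idea only, here is a minimal point-to-point ICP in Python (brute-force correspondences, Kabsch alignment). This is a didactic sketch, not CSM and not the authors' module.

```python
import numpy as np

def icp_2d(src, dst, iters=20):
    """Minimal point-to-point ICP aligning scan `src` (N,2) to `dst` (M,2).
    Returns the accumulated rotation R and translation t."""
    R, t = np.eye(2), np.zeros(2)
    cur = src.copy()
    for _ in range(iters):
        # Nearest-neighbour correspondences (brute force, for clarity).
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        nn = dst[d2.argmin(axis=1)]
        # Best rigid transform for these correspondences via SVD (Kabsch).
        mu_s, mu_d = cur.mean(0), nn.mean(0)
        H = (cur - mu_s).T @ (nn - mu_d)
        U, _, Vt = np.linalg.svd(H)
        Ri = Vt.T @ U.T
        if np.linalg.det(Ri) < 0:            # guard against reflections
            Vt[-1] *= -1
            Ri = Vt.T @ U.T
        ti = mu_d - Ri @ mu_s
        cur = cur @ Ri.T + ti                # apply the incremental transform
        R, t = Ri @ R, Ri @ t + ti           # accumulate the total transform
    return R, t
```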

iv. The SLAM module

The SLAM module maximizes the probability given in equation (1) by analysing the inputs processed in the previous modules. The main idea follows particle filter SLAM, using a Monte Carlo localization based approach [15]. Initially, as illustrated in Fig. 4, given the estimated pose x̂_t and the previously computed map m_{t-1}, the algorithm performs ray casting [16], extracting for each beam k the expected distance ẑ_t^k of the first obstacle in the map m_{t-1} as seen from x̂_t. Using the laser sensor model (Fig. 3), the algorithm then weights the distance actually measured by the laser, returning the probability of that measurement given the position x̂_t and the map m_{t-1}, i.e. p(z_t | x̂_t, m_{t-1}). Next, the algorithm repeats this weighting operation on several poses x̂_t generated according to the motion model of the robot, returning a set of weighted poses. Unlike particle filter SLAM, which keeps every particle with its own weight and its own map for the whole execution, the proposed method keeps only the pose with the highest weight. This operation is performed for all ranges at the different angles of the 2D laser scanner and could also be applied to 1D or 3D laser scanners.

Standard particle filter SLAM algorithms running on embedded platforms use around 10 particles, maintaining 10 consistent maps but sampling the robot motion model with only 10 points per step. The proposed approach, running on an embedded platform, uses up to 100 samples per step: the space around x̂_t is sampled with many more points, yielding considerably more accurate maps. Moreover, standard particle filter SLAM must carry the same number of particles throughout the whole run, while the proposed approach is flexible: not being tied to previous maps, it can vary the number of particles according to the estimated map error and the current needs. The algorithm is fast enough that two consecutively computed poses are very close, which reduces the map errors that could arise from discarding all previously computed maps. At every step the algorithm can thus be seen as performing an error correction of the estimated pose x̂_t. Furthermore, by always referring to the previously computed map, the algorithm maintains global consistency, since the map gathers together all previous measurements.

Fig 4. SLAM module flow chart

v. The mapping module

The mapping module uses OctoMap [17], an efficient probabilistic 3D mapping framework based on octrees. The OctoMap library implements a 3D occupancy grid mapping approach, providing data structures and mapping algorithms in C++ particularly suited for robotics. The map implementation is designed to meet the following requirements: full 3D model, updatability, flexibility and compactness. It runs on a separate thread, so the localization algorithm does not have to stop while data are added to the map.
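One way to read the procedure in subsection iv is as approximating x_t = argmax_x p(z_t | x, m_{t-1}) p(x | x̂_t) over poses sampled from the motion model. The sketch below is a 2D simplification (poses as [x, y, yaw]) under assumed interfaces: `raycast` and the Gaussian beam width `sigma`, as well as the sampling standard deviations, are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def weight_pose(pose, scan, grid, raycast, sigma=0.05):
    """Log of p(z_t | pose, m_{t-1}): product of per-beam Gaussian
    likelihoods comparing measured ranges with ray-cast expected ranges."""
    log_w = 0.0
    for r_meas, theta in scan:
        r_exp = raycast(grid, pose, theta)   # expected range from the map
        log_w += -0.5 * ((r_meas - r_exp) / sigma) ** 2
    return log_w

def correct_pose(x_hat, scan, grid, raycast, n_samples=100,
                 trans_std=0.05, rot_std=0.02, rng=np.random.default_rng()):
    """Sample candidate poses around the odometry estimate x̂_t and keep
    only the highest-weight one (the single-particle update of iv)."""
    best_pose, best_w = x_hat, weight_pose(x_hat, scan, grid, raycast)
    for _ in range(n_samples):
        cand = x_hat + np.array([rng.normal(0, trans_std),   # x
                                 rng.normal(0, trans_std),   # y
                                 rng.normal(0, rot_std)])    # yaw
        w = weight_pose(cand, scan, grid, raycast)
        if w > best_w:
            best_pose, best_w = cand, w
    return best_pose
```

Because only the argmax pose survives each step, no per-particle maps need to be stored, which is what frees the budget for ~100 samples instead of ~10 on the same embedded hardware.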
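OctoMap itself is a C++ octree library; to stay within one example language, the following Python sketch shows only the generic log-odds occupancy update such a framework performs per beam (cells along the ray become more likely free, the endpoint cell more likely occupied), over a flat dictionary instead of an octree. The log-odds constants are assumptions; the 5 cm resolution matches the map of Fig. 5.

```python
import numpy as np
from collections import defaultdict

L_FREE, L_OCC = -0.4, 0.85   # assumed log-odds increments

def update_cells(cells, origin, endpoint, res=0.05):
    """Log-odds occupancy update for one beam. `cells` maps integer grid
    indices to accumulated log-odds evidence."""
    origin, endpoint = np.asarray(origin), np.asarray(endpoint)
    n = int(np.linalg.norm(endpoint - origin) / res) + 1
    for i in range(n):                             # walk along the ray
        p = origin + (endpoint - origin) * (i / n)
        cells[tuple((p // res).astype(int))] += L_FREE
    cells[tuple((endpoint // res).astype(int))] += L_OCC

# usage: one beam from the robot at (0, 0, 0.3) hitting a wall 2 m away
cells = defaultdict(float)
update_cells(cells, origin=(0.0, 0.0, 0.3), endpoint=(2.0, 0.0, 0.3))
```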

III. EXPERIMENTAL RESULTS

The validation platform is based on an Xsens MTi-200 IMU, which provides the robot's accelerations and angular velocities, and a Hokuyo UTM-30LX laser scanner, which provides long-range (30 meters), high-frequency (about 40 Hz) scans with a wide field of view (270 degrees). All the code runs on an Advantech MIO-2661 embedded board with an Intel Atom N2600 (dual core, 1.6 GHz with Intel Hyper-Threading), 4 GB of DDR3 RAM and a 150 GB mSATA SSD as storage. Lubuntu 12.04 with the low-latency kernel is used as the operating system. The algorithm is implemented as a multi-threaded system with explicit timing, in order to approximate soft real-time behaviour.

The sensor and control platform has been deployed and validated both on a terrestrial and on an aerial robot. Moreover, on the terrestrial platform, the proposed method has been coupled with a semi-autonomous navigation system including obstacle avoidance.

Fig. 5 shows a 5 cm resolution 3D map of a room, while Fig. 6 shows a 1 cm resolution 3D map of the same room. Fig. 7 shows a 5 cm resolution 3D map together with the path of the robot and a real picture of the mapped room. Nearby objects, and even objects outside the room, are mapped through the windows. The proposed algorithm is robust enough to map environments containing objects with low reflectivity, such as plastic windows, or strong external light sources. The true pathway is not shown because no absolute tracking system was used.

Fig 5. Five cm resolution OctoMap

Fig 6. One cm resolution OctoMap

Fig 7. a: picture of the environment. b: perspective view of the 3D map of the environment, where nearby buildings are also visible through the open windows. c: 3D map of the environment from above, with the path of the robot marked in red.

IV. FUTURE WORK

Solutions fusing the laser scanner and the IMU with an additional type of sensor, such as a monocular or stereo camera, are being tested and validated. Monocular and stereo cameras help loop-closure operations [18], can be used for visual odometry [19], and also allow building offline 3D visual maps of the environment using structure-from-motion algorithms. The integration of a loop-closure module into the proposed framework is foreseen, as well as batch map optimization.

V. CONCLUSIONS

An incremental SLAM algorithm based on Monte Carlo localization for indoor autonomous navigation has been developed and validated. Based on a minimal set of sensors, and being light, portable and fully reconfigurable, the proposed algorithm is suitable for deployment on mobile robots such as UAVs and ground vehicles with limited payload and low processing power.

VI. REFERENCES

[1] T. Bailey and H. Durrant-Whyte, "Simultaneous localisation and mapping (SLAM): Part II, state of the art", IEEE Robotics & Automation Magazine, Sep. 2006.
[2] G. Dissanayake, H. Durrant-Whyte, and T. Bailey, "A computationally efficient solution to the simultaneous localisation and map building (SLAM) problem", in Proc. IEEE Int. Conf. on Robotics and Automation (ICRA), pp. 1009-1014, San Francisco, CA, USA, 2000.
[3] K. S. Chong and L. Kleeman, "Feature-based mapping in real, large scale environments using an ultrasonic array", Int. J. Robot. Res., vol. 18, no. 1, pp. 3-19, 1999.
[4] G. Grisetti, C. Stachniss, and W. Burgard, "GMapping Algorithm" (online), http://www.openslam.org.
[5] S. Zandara and A. E. Nicholson, "Square root unscented particle filtering for grid mapping", 2009.
[6] G. Grisetti, R. Kümmerle, C. Stachniss, and W. Burgard, "A tutorial on graph-based SLAM", IEEE Intelligent Transportation Systems Magazine, vol. 2, no. 4, pp. 31-43, 2010.
[7] A. Doucet, J. F. G. de Freitas, K. Murphy, and S. Russell, "Rao-Blackwellized particle filtering for dynamic Bayesian networks", in Proc. Conf. on Uncertainty in Artificial Intelligence (UAI), pp. 176-183, Stanford, CA, USA, 2000.
[8] G. Grisetti, C. Stachniss, and W. Burgard, "Improving grid-based SLAM with Rao-Blackwellized particle filters by adaptive proposals and selective resampling", in Proc. IEEE Int. Conf. on Robotics and Automation, pp. 667-672, 2005.
[9] M. Montemerlo, S. Thrun, D. Koller, and B. Wegbreit, "FastSLAM: A factored solution to the simultaneous localization and mapping problem", in Proc. National Conference on Artificial Intelligence (AAAI), pp. 593-598, Edmonton, Canada, 2002.
[10] S. Thrun, "Probabilistic robotics", Communications of the ACM, vol. 45, no. 3, pp. 124-129, 2002.
[11] S. Kohlbrecher, J. Meyer, O. von Stryk, and U. Klingauf, "A flexible and scalable SLAM system with full 3D motion estimation", in Proc. IEEE Int. Symposium on Safety, Security and Rescue Robotics (SSRR), 2011.
[12] R. R. Brooks and S. S. Iyengar, Multi-Sensor Fusion: Fundamentals and Applications with Software, Prentice-Hall, 1998.
[13] A. Censi, "An ICP variant using a point-to-line metric", in Proc. IEEE Int. Conf. on Robotics and Automation (ICRA), Pasadena, USA, May 2008.
[14] J.-S. Gutmann and C. Schlegel, "AMOS: Comparison of scan matching approaches for self-localization in indoor environments", in Proc. First Euromicro Workshop on Advanced Mobile Robots, IEEE, 1996.
[15] S. Thrun, "Probabilistic robotics", Communications of the ACM, vol. 45, no. 3, pp. 200-212, 2002.
[16] A. Hornung, K. M. Wurm, M. Bennewitz, C. Stachniss, and W. Burgard, "OctoMap: An efficient probabilistic 3D mapping framework based on octrees", Autonomous Robots, 2013.
[17] S. Thrun, "Learning occupancy grid maps with forward sensor models", Autonomous Robots, vol. 15, no. 2, pp. 111-127, 2003.
[18] M. Cummins and P. Newman, "Probabilistic appearance based navigation and loop closing", in Proc. IEEE Int. Conf. on Robotics and Automation, Rome, Italy, Apr. 2007, pp. 2042-2048.
[19] D. Scaramuzza and F. Fraundorfer, "Visual odometry [tutorial]", IEEE Robotics & Automation Magazine, vol. 18, no. 4, pp. 80-92, 2011.