Simulation of a mobile robot with a LRF in a 2D environment and map building


Teslić L., Klančar G., and Škrjanc I.
Faculty of Electrical Engineering, University of Ljubljana, Tržaška 25, 1000 Ljubljana, Slovenia
luka.teslic@fe.uni-lj.si, gregor.klancar@fe.uni-lj.si, igor.skrjanc@fe.uni-lj.si

1 Introduction

This article deals with the modelling and simulation of a mobile robot with a laser range finder (LRF) in a 2D environment, and with map building. The simulator is built in the Matlab Simulink environment, thereby taking advantage of the powerful Matlab toolboxes for developing mapping, localization, SLAM and navigation algorithms. A map-building algorithm is developed and tested in simulation. The line segments extracted from the LRF's output in each scan are combined into polylines, which are merged with the existing global map to form a new global map. The global map of the environment is represented by unions of line segments, where each union represents an object in the environment.

Map building, localization and navigation are important issues in mobile robotics. To develop and test algorithms for such tasks, it is useful to have a simulator of a mobile robot equipped with sensors in a static environment. Since a laser range finder is often the basic means of interaction between the robot and the environment, the presented mobile-robot model also includes a model of the LRF. The problem of robotic mapping and localization has been widely studied. A robot must know its own pose (localization problem) in order to build a map, and it also needs a map of the environment (mapping problem) in order to localize itself.
The problems of mapping and localization can be handled separately if the robot's pose is supplied by a human, or by GPS and INU sensors (in outdoor environments), during map building. The resulting map of the environment can then be used to solve the localization problem. To avoid the assumption of a known robot pose, a SLAM (Simultaneous Localization and Mapping) algorithm must be built, in which the problems of localization and mapping are merged. The robot can localize itself by odometric measurements and by comparing the local map, obtained from its current view, with an already built global environment map. A comprehensive survey of the SLAM problem is given in [1].

In the literature, different approaches to map building have been proposed. A topological map [3] is composed of nodes representing topological locations and of edges between the nodes; the nodes contain information about how to reach a connected topological location. In [3] the metric and topological paradigms are integrated into a hybrid system for map building: a global topological map connects local metric maps, allowing a compact environment model that does not require global metric consistency and provides both precision and robustness. The metric approach builds a map with occupancy grids or with simple geometrical features (e.g., line segments). Occupancy grids require a huge amount of computer memory and are therefore not appropriate for modelling a large environment [4]. In this paper we chose line segments for the environment model, as they require a smaller amount of computer memory. In [5] a comparison of line-extraction algorithms using a 2D laser rangefinder is reported. In [6] the environment is represented by polygonal curves (polylines), which may contain rich shape information for matching environment scans. However, some environment objects cannot be represented by a single polyline (a chain of consecutive line segments). Therefore, in this paper each environment object is represented by a union of non-consecutive line segments.

2 Simulator

The main reason to develop a new simulator, instead of using one of the many already available, is to study navigation, localization and mapping algorithms in the Matlab Simulink environment.
Matlab and its toolboxes (e.g., Fuzzy Logic, Control, etc.) are a very powerful tool for developing all kinds of algorithms. The simulator includes the models of a mobile robot (Fig. 1 (a)), a laser range finder and the environment. The purpose of the modelling is to create a simulation model in which different mapping algorithms can be tested. We assume a two-dimensional environment and that the robot knows its own pose

p(t) = [x_rob(t), y_rob(t), ϕ_rob(t)]    (1)

at time t, in a global frame of reference (Fig. 1 (a)). The time argument t will subsequently be dropped for simplicity.
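Before detailing the component models, a minimal Python sketch shows how the pose update and the normal line form (Eqs. (2) and (3) in the following subsections) can be prototyped outside Simulink. The function names `step_unicycle` and `line_normal_form` are illustrative, and a fixed-step Euler update stands in for the ode45 integration used in the simulator.

```python
import math

def step_unicycle(pose, v, omega, dt):
    """One Euler step of the unicycle kinematics (Eq. (2)):
    x' = v cos(phi), y' = v sin(phi), phi' = omega."""
    x, y, phi = pose
    return (x + v * math.cos(phi) * dt,
            y + v * math.sin(phi) * dt,
            phi + omega * dt)

def line_normal_form(p1, p2):
    """Normal-line parameters (alpha, p) of Eq. (3) for the line
    through p1 and p2: x*cos(alpha) + y*sin(alpha) - p = 0,
    with p >= 0 and alpha in (-pi, pi]."""
    (x1, y1), (x2, y2) = p1, p2
    dx, dy = x2 - x1, y2 - y1
    alpha = math.atan2(dx, -dy)          # direction of the line normal
    p = x1 * math.cos(alpha) + y1 * math.sin(alpha)
    if p < 0:                            # flip the normal to keep p >= 0
        p = -p
        alpha = alpha + math.pi if alpha <= 0 else alpha - math.pi
    return alpha, p
```

Both points of a segment yield the same (alpha, p), which is what makes the representation convenient for the line comparisons used later in the mapping algorithm.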

2.1 Robot Model

The kinematic model of the robot is given by the following equations

ẋ_rob = v cos ϕ_rob,  ẏ_rob = v sin ϕ_rob,  ϕ̇_rob = ω,    (2)

where the input v denotes the tangential speed, the input ω denotes the angular speed, (x_rob, y_rob) denotes the position of the robot in the global coordinates (x_G, y_G) and ϕ_rob denotes the orientation of the robot with respect to the global coordinate axis x_G. The continuous model (Eq. (2)) is implemented in Matlab Simulink with a simulation scheme using the ode45 integration method. For simulation purposes it is enough to control the robot with the inputs v and ω. The pose (Eq. (1)) is the input to the S-function Animation, which simulates the environment model and the LRF model, and to the S-function implementing the map-building algorithm.

2.2 Environment Model

The two-dimensional model of the environment is built from line segments. A line segment is defined by two points on the line and by the normal line equation

x_G cos α_j + y_G sin α_j − p_j = 0,    (3)

where the parameter p_j denotes the distance of the line from the origin, the parameter α_j ∈ (−π, π] denotes the direction of the line normal passing through the origin, and x_G, y_G are the global coordinates of the points lying on the line.

2.3 Laser Range-Finder Model

At each time step the laser range finder returns a set of distances d_s = [d_s0, ..., d_s180] to obstacles (e.g., a wall) at the angles θ_s = [0°, ..., 180°]. We simulate a reflection point by calculating, with determinants, the intersection points (x_ip(i, j), y_ip(i, j)) between the i-th laser-beam line (Fig. 1 (b)) and all (j = 1, ..., N) the lines describing the environment line segments, and we calculate the distances and angles

d(i, j) = sqrt((x_ip(i, j) − x_rob)² + (y_ip(i, j) − y_rob)²),    (4)
θ(i, j) = atan2(y_ip(i, j) − y_rob, x_ip(i, j) − x_rob) − (ϕ_rob − 90°).

If there is no intersection point between the lines, this is labelled with

d(i, j) = D;  D > d_max,    (5)

Fig. 1. (a) Coordinates of the robot with respect to the global coordinates. (b) Intersection point between the laser-beam line and the environment line.

where d_max denotes the maximal range of the LRF (e.g., 80 m). In the case of coinciding lines, the point of the environment line segment nearest to the robot (x_rob, y_rob) is chosen as the reflection point. Because the intersections are calculated between infinite lines, the intersection points behind the robot, as well as those in front of the robot that do not lie on the environment line segments, must be eliminated (labelled with Eq. (5)). Furthermore, we choose the point with the minimum distance from the robot, d(i) = min(d(i, 1), ..., d(i, N)), as the reflection point. So, if there is no intersection point between the i-th laser beam and any of the environment lines, the distances d(i, :) take the value D and d(i) = D. If there are several environment lines in front of the robot, the nearest intersection point is chosen as the reflection point of the laser beam. We define the vectors (Fig. 1 (b)) a = (cos ϕ_rob, sin ϕ_rob, 0) and d = (x_ip(i, j) − x_rob, y_ip(i, j) − y_rob, 0). If the dot product a · d < 0, the intersection point lies behind the robot and is eliminated. If θ(i) equals 0° (180°) and there is an environment line on the left (right) side of the robot, an intersection point between the lines, which does not appear with a real LRF, is eliminated. This situation occurs when

c = a × d = (0, 0, c_3);  c_3 > 0 (c_3 < 0),    (6)

where × denotes the cross product. We model the LRF noise as

d_noise(i) = d(i) + (d(i)/d_max) N(0, σ),    (7)

where N(0, σ) denotes normally distributed noise with zero mean and variance σ. If the distance d(i) is large, the influence of the noise is proportionally larger. In this way we can test the robustness of mapping algorithms without knowing the real noise distribution.
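The beam simulation just described (intersection by determinants, elimination of points behind the robot, nearest-hit selection and the range-proportional noise of Eq. (7)) can be sketched as follows. This is an illustrative Python rendering, not the authors' Matlab S-function; the defaults d_max = 80 and D = 100 (any D > d_max works, per Eq. (5)) and the name `simulate_beam` are assumptions.

```python
import math
import random

def simulate_beam(pose, theta_i, segments, d_max=80.0, D=100.0, sigma=0.0):
    """Distance returned by one beam at scan angle theta_i (radians).
    The beam heading follows the text's convention:
    heading = phi_rob - 90deg + theta_i.
    segments: list of ((x1, y1), (x2, y2)) environment line segments.
    Returns D when no segment is hit (Eq. (5))."""
    x, y, phi = pose
    heading = phi - math.pi / 2 + theta_i
    bx, by = math.cos(heading), math.sin(heading)
    best = D
    for (x1, y1), (x2, y2) in segments:
        ex, ey = x2 - x1, y2 - y1
        det = ex * by - ey * bx              # 2x2 determinant of the system
        if abs(det) < 1e-12:
            continue                          # beam parallel to the segment
        dx1, dy1 = x1 - x, y1 - y
        t = (ex * dy1 - ey * dx1) / det       # distance along the beam
        u = (bx * dy1 - by * dx1) / det       # position along the segment
        # t < 0 corresponds to the dot-product test a . d < 0 (point behind
        # the robot); u outside [0, 1] means the intersection misses the
        # segment. Both cases are eliminated, as labelled by Eq. (5).
        if t >= 0.0 and 0.0 <= u <= 1.0 and t <= d_max:
            best = min(best, t)               # keep the nearest reflection
    if best < D and sigma > 0.0:
        best += best / d_max * random.gauss(0.0, sigma)   # noise of Eq. (7)
    return best
```

The parametric intersection test is algebraically equivalent to intersecting the beam line with each environment line and then discarding invalid points, but it makes the behind-the-robot and off-segment eliminations explicit through the signs of t and u.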

3 Mapping algorithm

Assuming that the robot knows its own pose (Eq. (1)), the reflection point (x_G(i), y_G(i)) in the global coordinates is calculated as

x_G(i) = x_rob + d(i) cos θ_G(i),
y_G(i) = y_rob + d(i) sin θ_G(i),    (8)
θ_G(i) = (ϕ_rob − 90°) + θ_s(i),  i = 1, ..., n,

where θ_G(i) denotes the laser-beam angle with respect to the global coordinate frame (Fig. 1 (a)). All consecutive points (Eq. (8)) at which a reflection has occurred (d(i) < D) are clustered; the other points (d(i) = D) are ignored. If there is only one point in a cluster, it is also ignored. Each cluster is then split into further clusters if the distance between two successive points exceeds a threshold. Finally, every cluster is reduced to a set of consecutive line segments, or polylines, using the split-and-merge algorithm (which is fast and accurate) [5] and least-squares line fitting. The line segments are defined by their edge points and by the line parameters p_k (the distance of the line from the origin) and α_k ∈ (−π, π] (the direction of the line normal passing through the origin).

3.1 Integrating the global map with the local map

Fig. 2. Integrating the global map G = (G1, G2) with the local map S = (S1, S2): G_NEW = S ∪ G_OLD = (S1, S2) ∪ (G1, G2) = (G1_NEW, G2, G3), with G1_NEW = G1_OLD ∪ S2 and G3 = S1.

Each local map S (Fig. 2) represents a set of polylines (S1, S2), and each polyline is composed of consecutive line segments described by line parameters and edge points. The global map G (Fig. 2, right) is composed of unions (G1, G2 and G3) of line segments, which represent objects in the environment. The local map S is united with the previously built global map G_OLD to obtain a new global map G_NEW = S ∪ G_OLD. When the robot makes its second local map S, G_OLD equals the local map S_FIRST obtained in the first environment scan.
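The first two mapping steps, the global-frame transform of Eq. (8) and the clustering rules, can be sketched as below. The gap threshold and the helper names are illustrative; the split-and-merge line fitting itself is omitted.

```python
import math

def scan_to_global(pose, d, theta_s, D):
    """Eq. (8): reflection points in global coordinates.
    Beams with d[i] = D (no reflection) yield None."""
    x_rob, y_rob, phi = pose
    pts = []
    for di, th in zip(d, theta_s):
        if di >= D:
            pts.append(None)
            continue
        th_g = (phi - math.pi / 2) + th       # theta_G(i) of Eq. (8)
        pts.append((x_rob + di * math.cos(th_g),
                    y_rob + di * math.sin(th_g)))
    return pts

def cluster(points, gap):
    """Group consecutive reflection points; a new cluster starts after a
    missed beam (None) or when two successive points are farther apart
    than `gap`. Single-point clusters are dropped, as in the text."""
    clusters, cur = [], []
    for p in points:
        if p is None:
            if len(cur) > 1:
                clusters.append(cur)
            cur = []
        elif cur and math.dist(cur[-1], p) > gap:
            if len(cur) > 1:
                clusters.append(cur)
            cur = [p]
        else:
            cur.append(p)
    if len(cur) > 1:
        clusters.append(cur)
    return clusters
```

Each resulting cluster would then be handed to split-and-merge and least-squares fitting to produce the polyline segments of the local map.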
When uniting S and G_OLD, each line segment of the set (G_OLD, S) is compared to each other line segment in one loop. We define the conditions for merging two line segments L1 and L2 (Fig. 3) that correspond to the same environment line segment, with thresholds T and R, where l(Li) denotes the length of line segment Li:

c1: a + b < l(L1) + T,  c2: c + d < l(L1) + T,
c3: a + c < l(L2) + T,  c4: b + d < l(L2) + T,
c5: r1 < R,  c6: r2 < R,  c7: r3 < R,  c8: r4 < R.

Fig. 3. The conditions for merging the line segments L1 and L2.

If at least two of the conditions c_i (i = 1, 2, 3, 4) are satisfied and all of the conditions c_i (i = 5, 6, 7, 8) are satisfied, the two line segments are merged. If the conditions c1 and c2, or the conditions c3 and c4, are satisfied, the two line segments are also merged. When merging two line segments, the new line-segment parameters are computed by uniting the edge points of both segments, as indicated in [7]. If the merged line segments belong to different unions, the unions are merged (Fig. 2, G1_NEW = G1_OLD ∪ S2). The comparison loop is then repeated; it stops as soon as a pass finds no two segments fulfilling the merging conditions.

In [2] a SLAM algorithm for a line-based environment representation is described. The global environment map is composed of a set of global environment lines (e.g., 1000), and the local environment map is likewise composed of a set (e.g., 10) of local environment lines. For localization purposes, the line segments of the global map that correspond to the same environment line segments (e.g., a wall) as the line segments of the local map must be found. The line parameters of the local map (α_Li, p_Li), given in the robot's coordinates, are recomputed to the global coordinates according to the approximately known robot pose [2] (the prediction step of the Extended Kalman Filter).
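The merging test of Fig. 3 might be sketched as follows. Since the exact assignment of the distances must be read from the figure, the pairing used here (a, b, c, d as the four cross end-point distances between L1 and L2, and r1..r4 as the perpendicular distances of each segment's end points from the other segment's infinite line) is an assumption.

```python
import math

def _dist_to_line(pt, seg):
    """Perpendicular distance from pt to the infinite line through seg."""
    (x1, y1), (x2, y2) = seg
    ex, ey = x2 - x1, y2 - y1
    return abs((pt[0] - x1) * ey - (pt[1] - y1) * ex) / math.hypot(ex, ey)

def should_merge(L1, L2, T, R):
    """Conditions c1..c8 of Fig. 3 for segments given as end-point pairs."""
    (p1, p2), (q1, q2) = L1, L2
    len1, len2 = math.dist(p1, p2), math.dist(q1, q2)
    # a, b, c, d: cross end-point distances (one reading of Fig. 3)
    a, b = math.dist(p1, q1), math.dist(p2, q2)
    c, d = math.dist(p1, q2), math.dist(p2, q1)
    c14 = [a + b < len1 + T, c + d < len1 + T,    # c1, c2
           a + c < len2 + T, b + d < len2 + T]    # c3, c4
    c58 = [_dist_to_line(q1, L1) < R, _dist_to_line(q2, L1) < R,
           _dist_to_line(p1, L2) < R, _dist_to_line(p2, L2) < R]
    # Merge if at least two of c1..c4 hold together with all of c5..c8,
    # or if c1 and c2 hold, or if c3 and c4 hold (as stated in the text).
    return (sum(c14) >= 2 and all(c58)) \
        or (c14[0] and c14[1]) or (c14[2] and c14[3])
```

The length conditions c1..c4 test that the two segments overlap along a common line, while c5..c8 test that they are nearly collinear.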
In general, the parameters (α_Gk, p_Gk) of each global line must be compared to the recomputed parameters (α_Li, p_Li) of each local line to find the corresponding pairs of lines. In large environments this can require a huge number of comparisons (e.g., 1000 × 10). If the sum of squared differences (α_Gk − α_Li)² + (p_Gk − p_Li)² is below a threshold, the lines can be chosen as a corresponding pair. In our mapping approach the map is composed of unions of lines, where each union corresponds to an environment object. It is very likely that objects seen by the robot in the previous environment scan are also seen in the current environment scan, so many line segments seen in the current scan will correspond to objects (unions Gi) seen in the previous scan. The search can therefore be made faster if candidate line pairs are first sought among all the global lines that correspond to the

environment objects seen in the previous environment scan and all the local lines (with recomputed parameters) of the current scan.

4 Results

Fig. 4. (a) Reflecting points of the LRF in a 2D environment. (b), (c) and (d) Experiments of building the map G = (G1, G2, G3) at d_max = 14 m and (b) σ = 0.07 m, (c) σ = 0.42 m and (d) σ = 0.63 m.

Fig. 4 (a) shows the simulated LRF, the mobile robot and a 2D environment. Figures 4 (b), (c) and (d) show experiments at different values of the LRF noise variance σ (0.07 m, 0.42 m and 0.63 m) and a fixed value d_max = 14 m (Eq. (7)), in which the global environment map G is built. The environment scans are taken at the poses p1, p2 and p3. The experiments show that our mapping algorithm is robust even under a large amount of LRF noise. Our mapping approach builds a global map from many sets (unions) of line segments, which correspond to the environment objects. Compared with occupancy grids [4], the environment description with line segments requires a much smaller amount of computer memory. In [6] the environment is represented by polygonal curves (polylines), possibly containing rich shape information, which

can be important when matching consecutive environment scans for localization purposes. The object represented by the union of lines G3 (6 lines) in Fig. 4 (b) could not be represented by a single polyline, but only by four polylines (chains of consecutive line segments) as seen from the left, right, lower and upper sides of the object (14 lines). An environment representation with polylines can therefore require more computer memory than our representation with unions of line segments.

5 Conclusion

In this paper we present a simulator of a mobile robot with a LRF in a 2D static environment and propose a map-building algorithm, which is tested on the simulator. The environment map describes each environment object with a union of line segments. In this way, the search for pairs of corresponding line segments for localization purposes can be faster than with an environment map composed of a single set of line segments. The mapping algorithm is fast enough for real-time applications and will in future work be integrated into a SLAM algorithm.

References

1. Thrun S (2002) Robotic Mapping: A Survey. In: Lakemeyer G, Nebel B (eds) Exploring Artificial Intelligence in the New Millennium. Morgan Kaufmann
2. Garulli A, Giannitrapani A, Rossi A, Vicino A (2005) Mobile robot SLAM for line-based environment representation. 44th IEEE Conference on Decision and Control and European Control Conference
3. Tomatis N, Nourbakhsh I, Siegwart R (2003) Hybrid simultaneous localization and map building: a natural integration of topological and metric. Robotics and Autonomous Systems 44:3-14
4. Veeck M, Burgard W (2004) Learning polyline maps from range scan data acquired with mobile robots. IEEE/RSJ International Conference on Intelligent Robots and Systems
5. Nguyen V, Martinelli A, Tomatis N, Siegwart R (2005) A Comparison of Line Extraction Algorithms using 2D Laser Rangefinder for Indoor Mobile Robotics. IEEE/RSJ International Conference on Intelligent Robots and Systems
6. Latecki L J, Lakaemper R, Sun X, Wolter D (2004) Building Polygonal Maps from Laser Range Data. ECAI International Cognitive Robotics Workshop, Valencia, Spain, August
7. Mazl R, Preucil L (2000) Building a 2D Environment Map from Laser Range-Finder Data. Proceedings of the IEEE Intelligent Vehicles Symposium