
Visual Topological Mapping

Karel Košnar, Tomáš Krajník, and Libor Přeučil
The Gerstner Laboratory for Intelligent Decision Making and Control
Department of Cybernetics, Faculty of Electrical Engineering
Czech Technical University in Prague
{kosnar,tkrajnik,preucil}@labe.felk.cvut.cz

Summary. We present an outdoor topological exploration system based on visual recognition. The robot moves through a graph-like environment and creates a topological map in which edges represent paths and vertices represent their intersections. With the help of a single marked place, the algorithm can handle indistinguishable crossings and close loops in the environment. The visual navigation system provides path traversing and crossing detection abilities. Path traversing is purely reactive and relies on color segmentation of the image taken by the on-board camera. The crossing passage algorithm reports the azimuths of paths leading out of a crossing to the topological subsystem, which decides which path to traverse next. Compass and odometry are then used to move the robot to the beginning of the chosen path. The performance of the proposed system is tested in simulated and real outdoor environments using a P3AT robotic platform.

Key words: topological mapping, visual navigation, exploration

1 Introduction

The problem of autonomous exploration and mapping of unknown terrain remains a fundamental problem in mobile robotics. The key question during exploration is where to move the robot in order to minimize the time needed to explore the environment completely. For known, graph-like environments, the mapping task reduces to finding the shortest round trip through all edges of the graph, which is in principle the well-known Chinese postman problem. This paper describes a technique to explore an unknown environment and build a proper topological map of it.

Topological maps [1],[2] mainly rely on a graph representation of the spatial properties of the environment: vertices represent places and edges denote paths between the corresponding places. Therefore, edges carry procedural knowledge of how to navigate from one place to another. The environment is sensed by a visual recognition system similar to [3], capable of navigating paths and of recognizing and traversing crossings.

It is assumed that all paths are distinguishable from non-traversable terrain by color. Identified crossings correspond to nodes and the interconnecting paths to edges. The paths forming a crossing are distinguished by the azimuth at which they lead out of it.

The paper is organized as follows. The introduction presents a brief overview of vision-based topological mapping. The next section presents the topological mapping algorithm. The following section describes how the robot navigates paths and recognizes crossings. After that, the experimental setups and results are described. The last section summarizes the results and proposes further research directions.

2 Topological exploration

The exploration and mapping problem denotes the process of finding a spatially consistent map. A topological map generally represents spatial knowledge as a graph G = (V, E), describing locations and objects of interest as vertices v ∈ V and their spatial relations as edges e ∈ E. The edges can also reflect the procedural knowledge and control laws used to navigate between vertices.

Vertices, the places of interest, are crossings of the roads. Crossings are places where the robot can take a decision and change its way. Only one vertex is marked and therefore distinguishable from the others; this vertex is called the base. All other vertices are treated as indistinguishable. Vertices are detected by the GeNav system (see Section 3); the crossing detection procedure of GeNav is described in Section 3.2.

Edges reflect roads and also store the directions of the outgoing roads as azimuths referenced to the magnetic field of the Earth. This information is used for crossing navigation. The robot uses a compass to determine the road azimuths. The exploration algorithm assumes that the azimuths are determined with a certain precision that does not vary over time. It is assumed that the angle between any two edges leading out of one vertex is not smaller than the azimuth assessment precision.

The exploration and mapping algorithm presented here is based on the multi-robot topological exploration algorithm described in detail in [4]. The main difference is the replacement of one robot with a marker (the base), which allows a single robot to be used.

Let us presume that the robot can determine whether it is located on a vertex. On the other hand, this position cannot be immediately distinguished from other, similar places, except when the robot is on the vertex marked as the base. It is assumed that if there is an edge between two vertices in the world model, the robot is able to move between these two vertices. This transition is executed by applying a single control strategy c(e). The control strategy c(e) leads the robot along the edge e: as a first step, the robot uses the compass to turn to the azimuth stored in e, and subsequently it follows the road using the GeNav path recognition (see Section 3.1).
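To make the map representation above concrete, the following Python sketch stores vertices, azimuth-labelled outgoing edges and the single marked base vertex. The class and method names (TopologicalMap, add_edge, unexplored_azimuths) are illustrative and are not part of the original system.

class TopologicalMap:
    """Minimal sketch of the topological map: vertices are crossings,
    edges store the azimuth (in degrees) at which a road leaves a vertex."""

    def __init__(self):
        self.edges = {}        # edges[u][azimuth] = v, or None while unexplored
        self.base = None       # the single marked vertex
        self._next_id = 0

    def add_vertex(self, is_base=False):
        v = self._next_id
        self._next_id += 1
        self.edges[v] = {}
        if is_base:
            self.base = v
        return v

    def add_edge(self, u, azimuth, v=None):
        # v stays None while the road leaving u at `azimuth` has not been traversed
        self.edges[u][azimuth] = v

    def unexplored_azimuths(self, u):
        return [a for a, v in self.edges[u].items() if v is None]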

It is also assumed that if the robot applies the control strategy c(e), e = (u, v), at a certain vertex u, it always arrives at the same vertex v.

2.1 Algorithm

The algorithm consists of two phases: an exploration procedure and a vertices merging procedure. In the exploration phase, the robot moves through the environment and builds its own map G_M = (V_M, E_M) of the world. As the robot cannot distinguish particular vertices from each other, it is unable to close a loop during exploration without visiting the base vertex (or interacting with some other robot). Moreover, every visited vertex must be treated as unvisited until the robot proves the contrary. The vertices merging phase starts whenever the robot detects the base vertex. This situation allows it to close the loop and merge identical vertices. During the exploration phase, one place in the environment might be represented by several vertices in the map; this inconsistency is reduced in the vertices merging phase.

The algorithm works properly only if the robot is able to follow all detected edges. The existence of complementary edges is also necessary. An edge ē ∈ E is complementary to an edge e ∈ E if and only if expression (1) holds:

\forall e \in E,\ e = (u, v) \;\Rightarrow\; \exists\, \bar{e} \in E,\ \bar{e} = (v, u) \qquad (1)

Moreover, the robot knows the complementary edge ē after passing e, which means that it is able to backtrack its movement: after the robot passes from vertex u to v, it also knows how to move from v to u even without having travelled that way back. In this implementation, the condition is fulfilled because if the robot knows the azimuth from which it entered a vertex, it also knows the way back.

Exploration Phase

During this phase, the robot moves through the environment and stores vertices and edges into the map. At the beginning, the robot has no information about the environment. The robot starts to follow the current edge until a crossing is detected; this crossing is stored in the map as the first vertex. The nearest unexplored edge is then used for further movement.

The exploration phase is based on the graph depth-first search (DFS) algorithm. The algorithm is greedy because the robot moves to the nearest vertex with an unexplored edge. If there is more than one unexplored edge at the current vertex, one is chosen randomly with uniform probability. It is also possible to use a breadth-first search (BFS) algorithm, but the robot travels greater distances with BFS.
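A minimal Python sketch of a single step of the exploration phase just described, assuming a robot interface consisting of a follow_edge(azimuth) action and a crossing_azimuths() measurement (both hypothetical names); the complementary edge is obtained by reversing the entry azimuth.

import random

def reverse_azimuth(az):
    """Complementary edge direction: the way back is the entry azimuth turned by 180 deg."""
    return (az + 180.0) % 360.0

def explore_step(graph, u, follow_edge, crossing_azimuths):
    """One exploration step at vertex u: pick an unexplored outgoing azimuth,
    traverse it, and record both the new edge and its complementary edge.
    `graph[u]` maps outgoing azimuth -> target vertex (None = unexplored)."""
    unexplored = [az for az, v in graph[u].items() if v is None]
    if not unexplored:
        return None                       # handled by the path-to-nearest-unexplored search
    az = random.choice(unexplored)        # uniform random choice among unexplored edges
    follow_edge(az)                       # turn to the azimuth and follow the road (GeNav)
    v = max(graph) + 1                    # new, so far indistinguishable vertex
    graph[v] = {a: None for a in crossing_azimuths()}
    graph[u][az] = v                      # edge e = (u, v)
    # in practice the reverse azimuth is matched to the nearest detected azimuth at v
    graph[v][reverse_azimuth(az)] = u     # complementary edge ē = (v, u)
    return v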

When the robot arrives at the next vertex (crossing), the edge between this and the previous vertex is added to the map. The complementary edge is known from the entry azimuth and is also added to the map. If the robot visits the base vertex, the vertices merging phase is executed.

Algorithm 1: exploration phase
  follow edge c(e);
  if crossing detected then
    add new vertex v to the map V_M;
  while there exists an unexplored edge in the world do
    if all edges from u have been explored then
      find the path to the nearest node with an unexplored edge;
      choose the first edge e from the path;
      use control strategy c(e);
    else
      choose randomly an unexplored edge e;
      store its azimuth into t(e);
      use control strategy c(e) to move to v;
      use the angle opposite to the entry azimuth as t(ē);
      add vertex v to the map V_M;
      add edge e = (u, v) to the map E_M;
      add edge ē = (v, u) to the map E_M;

As the robot uses only a greedy algorithm, exploration can take a long time. If the environment is tree-like with n crossings, exploration finishes in 2n steps. When cycles occur in the environment, the robot can get stuck in them for a long time, especially if a cycle does not contain the base; ensuring consistency of the map of such an environment with the greedy algorithm is also time consuming. Therefore a metric heuristic function is utilized. This heuristic function estimates the metric position of the vertex from the robot odometry. After the robot has spent a certain time in unexplored space, edges directed towards the base are preferred. Edges are still chosen randomly, but not with uniform probability: roulette-wheel selection is used, where the portion of the wheel allocated to each edge depends on its deviation from the direction to the base, and the edge with the lowest deviation receives the largest portion. The random choice must be kept because the computed base position is affected by cumulative odometry errors, and roads may not be straight but can have various shapes.
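A small Python sketch of the roulette-wheel edge selection described above. The inverse-deviation weighting is an assumption; the paper only states that the edge with the lowest deviation from the direction to the base receives the largest share of the wheel while the choice remains random.

import random

def pick_edge_roulette(edge_azimuths, bearing_to_base):
    """Roulette-wheel selection of the next edge: edges whose azimuth deviates
    less from the estimated direction to the base get a larger share of the wheel."""
    def deviation(az):
        d = abs(az - bearing_to_base) % 360.0
        return min(d, 360.0 - d)                 # angular distance in [0, 180] degrees
    weights = [1.0 / (1.0 + deviation(az)) for az in edge_azimuths]
    return random.choices(edge_azimuths, weights=weights, k=1)[0]

# Example: three unexplored edges, base roughly to the north-east (45 degrees)
print(pick_edge_roulette([10.0, 120.0, 250.0], bearing_to_base=45.0))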

Vertices Merging Phase

First, the current vertex recognized as the base is merged with the base vertex in the map. Next, the robot makes the map consistent. By assumption, there may exist exactly one edge of each type leading from every vertex. The type of an edge is denoted t(e) or t(u, v). The same edge type means that the difference between the azimuths is smaller than the azimuth recognition precision. If two or more edges of the same type lead to different vertices, these vertices necessarily represent the same place in the world and are therefore merged. This is repeated recursively.

Algorithm 2: vertices merging phase
  merge(v_actual, v_base);
  while ∃ u, v, w ∈ V_M : (u, v), (u, w) ∈ E_M ∧ t(u, v) = t(u, w) do
    merge(v, w);

Algorithm 3: merge(u, v)
  while ∃ x ∈ V_M : (v, x) ∈ E_M do
    if (u, x) ∉ E_M then
      add (u, x);
    remove (v, x);
  while ∃ x ∈ V_M : (x, v) ∈ E_M do
    if (x, u) ∉ E_M then
      add (x, u);
    remove (x, v);
  remove v;

Terminal Condition

The whole exploration procedure terminates whenever the environment is explored completely, which means that the robot has a complete map of the environment available at that time. By assumption, the map is complete if and only if no vertex of the map has an unexplored edge. If the local map acquired by the robot satisfies the completeness condition, the robot stops its movement.
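The following Python sketch illustrates the merging phase under the assumption that the map is stored as a flat list of [source, target, azimuth] edge records; the representation, the same_type tolerance and the function names are illustrative rather than the authors' implementation.

def same_type(t1, t2, tol=15.0):
    """Two edge azimuths are of the same type if they differ by less than the
    azimuth recognition precision `tol` (degrees); the value 15 is an assumption."""
    d = abs(t1 - t2) % 360.0
    return min(d, 360.0 - d) < tol

def merge(edges, u, v):
    """Algorithm 3 in spirit: move all edges incident to v over to u, removing v."""
    for e in edges:
        if e[0] == v:
            e[0] = u          # (v, x) becomes (u, x)
        if e[1] == v:
            e[1] = u          # (x, v) becomes (x, u)

def merge_phase(edges, v_actual, v_base):
    """Algorithm 2 in spirit: merge the re-detected base with the map base, then
    merge targets of same-type edges leaving a common vertex until consistent."""
    merge(edges, v_base, v_actual)
    changed = True
    while changed:
        changed = False
        for a in edges:
            for b in edges:
                if (a is not b and a[0] == b[0] and a[1] != b[1]
                        and same_type(a[2], b[2])):
                    merge(edges, a[1], b[1])
                    changed = True
    # duplicate edge records created by merging could be pruned here; omitted for brevity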

3 Visual navigation

The GeNav (Gerstner navigation) system was created for path and crossing recognition using a calibrated camera aimed at the surface in front of the robot. The viewed area spans from 1 to 5 m in the direction of the robot movement and approximately 3 m to both sides. It is supposed that the color of the path is given by another method or sensor, or is known in advance; the path color can also be entered by an operator.

A hue-saturation-value (HSV) color space is utilized for the path color specification, because a Cartesian product of HSV value ranges offers greater invariance to changing illumination than a similar description in the Red-Green-Blue color space. To prevent costly calculation of the HSV description of every evaluated pixel during the recognition procedures, an RGB lookup color table is first computed from the HSV color specification.

The system implements two behaviors: path traversing and crossing passage. In the path traversing mode, the algorithm attempts to keep the robot in the middle of the recognized path while driving it forwards. It estimates the width of the recognized path and executes the crossing recognition routines when this width changes rapidly. Once a crossing is recognized and approached, the robot switches to the crossing passage mode and sends a crossing description to the topological mapping module. The topological module either designates the pathway the robot should take when leaving the detected crossing or announces the completion of exploration. The robot moves to the designated exit path and reactivates the path traversing mode once the exit path is reached.

After start-up, GeNav checks whether it can gain access to the camera and the robot control board. If unsuccessful, it reads a predefined map and spawns a simulator. Then it attempts to connect to the topological module. In case the topological module is not running, a random number generator for crossing turn decisions is initialized.

Fig. 1. Block scheme of the GeNav system
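A minimal Python sketch of precomputing an RGB lookup table from an HSV range, in the spirit of the lookup table mentioned above; the particular HSV bounds, the quantization step and the use of the standard colorsys module are assumptions, not details taken from the paper.

import colorsys

def build_rgb_lookup(h_range, s_range, v_range, step=8):
    """Precompute the set of quantized RGB triplets whose HSV description falls
    inside the given ranges, so that per-pixel classification becomes a set lookup
    instead of an RGB->HSV conversion. Ranges are (low, high) pairs in [0, 1]."""
    table = set()
    for r in range(0, 256, step):
        for g in range(0, 256, step):
            for b in range(0, 256, step):
                h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
                if (h_range[0] <= h <= h_range[1]
                        and s_range[0] <= s <= s_range[1]
                        and v_range[0] <= v <= v_range[1]):
                    table.add((r // step, g // step, b // step))
    return table

def is_path_pixel(table, r, g, b, step=8):
    return (r // step, g // step, b // step) in table

# Example with illustrative bounds for a sandy path color
lookup = build_rgb_lookup(h_range=(0.05, 0.15), s_range=(0.2, 0.8), v_range=(0.3, 1.0))
print(is_path_pixel(lookup, 180, 140, 90))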

3.1 Path traversing

In the first step of the algorithm, the last row of the acquired image is searched for pixels of the path color and the mean value of their horizontal coordinates is computed. After that, the path boundaries on this row are identified, i.e. a sequence of pixels of other than the path color is searched for in both directions from the mean position. The path middle and width are then calculated from the detected boundaries. If the width is greater than a predefined threshold, the algorithm proceeds to a higher row, with the search start position given by the current path middle coordinate. The search is completed when the path width drops below this threshold. The robot forward velocity v and turning speed ϕ are given by

\begin{pmatrix} v \\ \varphi \end{pmatrix} =
\begin{pmatrix} \alpha (h - r) - \beta \sum_{i=r}^{h} \left( \frac{w}{2} - m_i \right) \\ \beta \sum_{i=r}^{h} \left( \frac{w}{2} - m_i \right) \end{pmatrix} \qquad (2)

where h and w are the image height and width respectively, m_i is the detected path middle of the i-th row, r is the last processed row number and α, β are constants. Because noise is usually present in the image, the middle and width values are smoothed by second-order linear adaptive filters.

Fig. 2. Detected path and crossing

3.2 Crossing recognition

Unlike path recognition, the applicability and precision of this routine require the camera to be calibrated. If the detected path width differs from the predicted one for several consecutive steps, the crossing detection routines are activated. These search for continuous regions of the path color on the periphery of the sensed image. Regions not connected by a path to the center of the detected crossing are removed. The image coordinates of the remaining region centers are converted to the robot coordinate system (the crossing is considered to be planar and coplanar with the robot undercarriage). The crossing description is then calculated from these regions, the detected crossing center and the compass measurements. This description consists of the set of bearings of the paths leading out of the crossing. Optionally, the position of the crossing center measured by odometry or GPS is added to the descriptor. Finally, the image of the crossing center is searched for a large blob of a predefined color; if such a blob is found, the crossing is designated as the base. The description is then delivered to the topological mapping module and the robot moves forward to the crossing center.
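A Python sketch of assembling the crossing description (a set of absolute path bearings) from the detected region centers, as outlined above. The pixel-to-ground mapping is reduced to a hypothetical 3x3 homography obtained from the camera calibration, and the handling of the compass heading is an assumption.

import math

def image_to_robot(px, py, homography):
    """Map image coordinates to ground-plane coordinates in the robot frame,
    assuming the crossing is planar and coplanar with the robot undercarriage.
    `homography` is a 3x3 matrix (list of lists) from the camera calibration."""
    x, y, wc = (sum(homography[r][c] * val for c, val in enumerate((px, py, 1.0)))
                for r in range(3))
    return x / wc, y / wc

def crossing_description(region_centers_px, homography, robot_heading_deg):
    """Compute the sorted set of absolute bearings (azimuths) of paths leaving the
    crossing from the image centers of the peripheral path-colored regions."""
    bearings = []
    for px, py in region_centers_px:
        x, y = image_to_robot(px, py, homography)
        relative = math.degrees(math.atan2(y, x))           # bearing in the robot frame
        bearings.append((robot_heading_deg + relative) % 360.0)
    return sorted(bearings)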

A command with the azimuth of the output path is received and the robot turns in this direction. Afterwards, the robot moves forward a short distance and the path traversing routines are activated. For a short time, the crossing detection routines are inhibited to prevent recognizing the same crossing repeatedly.

4 Experiments

4.1 Simulated world experiments

Because real-world testing is a time-costly process, the system behavior was first tested in a simulator. The robot behavior was simulated using MobileSim (http://robots.mobilerobots.com/mobilesim/). Synthetic camera images were automatically generated from a hand-drawn map of a part of the Kinsky garden in Prague. In order to improve the realism of the generated images, real-world textures were used and artificial noise was added.

Fig. 3. Generated view and textured map

The system was tested on four maps of various sizes (see Figure 4). Ten test runs were performed for each map. The exploration time, the number of failed exploration attempts and the number of crossing passages were recorded (see Table 1). The topological exploration algorithm requires the system providing the node information to be absolutely inerrant. Even though the GeNav recognition success rate is approximately 98%, the exploration success rate drops quickly with an increasing number of traversed crossings.

4.2 Real-world experiments

The outdoor experiment was performed with a Pioneer 3AT robotic platform with a TCM2 compass.

Fig. 4. Explored maps

Table 1. Simulation results

  Map size (crossings)   Crossings traversed   Failures   Exploration time (s)
  Minimal (4)                   11                 0                365
  Small (5)                     14                 0                418
  Middle (8)                    28                 2                922
  Large (14)                    63.2               4               2283

The robot was equipped with a Fire i-400 camera providing 15 color images per second at 640x480 pixel resolution. The images were processed in real time on an Intel Core 2 Duo notebook.

Fig. 5. (a) Outdoor experiment map; (b) robotic platform

A rosarium in the Kinsky garden in Prague (50°4'53.579"N, 14°23'48.846"E) was chosen because of its narrow, short paths and abundance of crossings. In order to keep the mapped area reasonably small, the crossing descriptions sent to the topological mapping system were reduced: paths leading forward or to the right were not reported, so the resulting topological map was a cycle with four nodes.

5 Conclusion

The proposed system is capable of creating topological maps of outdoor, graph-like environments. Its main disadvantage lies in the fact that the visual recognition is purely reactive and therefore not capable of recognizing larger crossings. The system has been successfully tested in small-scale environments. Exploration of larger maps may result in failure, because the topological mapping expects the vision system to be absolutely inerrant.

Our future research will be aimed at increasing the robustness of the exploration system. The topological exploration algorithm will be improved to deal with occasional errors of the vision recognition. The visual recognition will be extended by building a local map of the robot's surroundings; thus, navigation of crossings and paths larger than the viewed area will be made possible and the recognition reliability and precision will be raised. We also plan to extend the information exchange between both subsystems in order to use adaptive color segmentation. Methods allowing passed crossings to be distinguished will be tested.

6 Acknowledgements

This work was supported by a research grant CTU0706113 of the Czech Technical University in Prague and by the Research program of the Ministry of Education of the Czech Republic No. MSM 684077038.

References

1. Kuipers, B.J.: The spatial semantic hierarchy. Artificial Intelligence 119 (2000) 191-233
2. Kuipers, B.J.: Modeling spatial knowledge. Cognitive Science 2 (1978) 129-153
3. Bartel, A., Meyer, F., Sinke, C., Wiemann, T., Nüchter, A., Lingemann, K., Hertzberg, J.: Real-time outdoor trail detection on a mobile robot. In: Proceedings of the 13th IASTED International Conference on Robotics, Applications and Telematics (2007) 477-482
4. Košnar, K., Přeučil, L., Štěpán, P.: Topological multi-robot exploration. In: Proceedings of the IEEE Systems, Man and Cybernetics Society United Kingdom & Republic of Ireland Chapter 5th Conference on Advances in Cybernetic Systems, New York: IEEE Systems, Man and Cybernetics Society (2006) 137-141