Visual Topological Mapping
Karel Košnar, Tomáš Krajník, and Libor Přeučil
The Gerstner Laboratory for Intelligent Decision Making and Control, Department of Cybernetics, Faculty of Electrical Engineering, Czech Technical University in Prague

Summary. We present an outdoor topological exploration system based on visual recognition. The robot moves through a graph-like environment and creates a topological map, in which edges represent paths and vertices their intersections. The algorithm can handle indistinguishable crossings and can close loops in the environment with the help of a single marked place. The visual navigation system supplies the path-traversing and crossing-detection abilities. Path traversing is purely reactive and relies on color segmentation of images taken by the on-board camera. The crossing-passage algorithm reports the azimuths of the paths leading out of a crossing to the topological subsystem, which decides which path to traverse next. Compass and odometry are then used to move the robot to the beginning of the chosen path. The performance of the proposed system is tested in simulated and real outdoor environments using a P3AT robotic platform.

Key words: topological mapping, visual navigation, exploration

1 Introduction

The problem of autonomous exploration and mapping of an unknown terrain remains a fundamental problem in mobile robotics. The key question during exploration is where to move the robot in order to minimize the time needed to explore the environment completely. For known, graph-like environments, the mapping task reduces to finding the shortest round trip through all edges of the graph, which is, in principle, the well-known Chinese Postman Problem. This paper describes a technique to explore an unknown environment and to build up a proper topological map of it. Topological maps [1],[2] rely mainly on a graph representation of the spatial properties of the environment: vertices represent places and edges denote paths between the corresponding places.
Edges therefore represent procedural knowledge of how to navigate from one place to another. The environment is sensed by a visual recognition system similar to [3], capable of navigating paths and of recognizing and traversing crossings. It is assumed that all paths are distinguishable from non-traversable terrain by color. Identified crossings correspond to nodes, and the interconnecting paths correspond to edges. The paths forming a crossing are distinguished by the azimuth at which they lead out.

The paper is organized as follows: the introduction presents a brief overview of vision-based topological mapping. The next section presents the topological mapping algorithms. The following section describes how the robot navigates paths and recognizes crossings. After that, the experimental setups and results are described. The last section concludes the results and proposes directions for further research.

2 Topological exploration

The exploration and mapping problem denotes the process of finding a spatially consistent map. A topological map generally represents spatial knowledge as a graph G = (V, E), describing locations and objects of interest as vertices v ∈ V and their spatial relations as edges e ∈ E. The edges can also reflect the procedural knowledge and control laws used to navigate between vertices. Vertices, the places of interest, are crossings of the roads: places where the robot can make a decision and change its way. Only one vertex is marked and therefore distinguishable from the others; this vertex is called the base. All other vertices are treated as indistinguishable. Vertices are detected by the GeNav system (see Section 3); the crossing detection procedure of GeNav is described in Section 3.2. Edges reflect roads and also store the directions of the outgoing roads as azimuths referred to the magnetic field of the Earth. This information is used for crossing navigation; the robot uses a compass to determine the road azimuths. The exploration algorithm assumes that azimuths are determined with a certain precision, independently of time. It is also assumed that the angle between any two edges leading from one vertex is not smaller than the azimuth assessment precision.
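As a concrete illustration of this representation, the graph bookkeeping described above might be sketched as follows. This is a minimal sketch: the `Edge`/`TopoMap` names and the 15° default precision are illustrative assumptions, not taken from the paper.

```python
from dataclasses import dataclass, field

@dataclass
class Edge:
    u: int            # source vertex id
    v: int            # target vertex id
    azimuth: float    # compass bearing (degrees) at which the edge leaves u

@dataclass
class TopoMap:
    vertices: set = field(default_factory=set)
    edges: list = field(default_factory=list)
    base: int = None  # id of the single marked vertex

    def edges_from(self, u):
        # all outgoing edges stored for vertex u
        return [e for e in self.edges if e.u == u]

    def same_type(self, e1, e2, precision=15.0):
        # two edges are of the same "type" when their azimuths differ
        # by less than the azimuth assessment precision
        diff = abs(e1.azimuth - e2.azimuth) % 360.0
        return min(diff, 360.0 - diff) < precision
```

The angular comparison wraps around 360°, so edges at 355° and 5° are correctly treated as the same type.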
The exploration and mapping algorithm presented here is based on the multi-robot topological exploration algorithm described in detail in [4]. The main difference is the replacement of one robot by a marker (the base), which allows a single robot to be used. Let us presume that the robot can determine whether it is currently located on a vertex. On the other hand, this position cannot be immediately distinguished from other, similar places, except when the robot is on the vertex marked as the base. It is assumed that if there is an edge between two vertices in the world model, the robot is able to move between these two vertices. This transition is executed by applying a single control strategy c(e), which leads the robot along the edge e. As a first step, the robot uses compass data to turn to the azimuth stored with e. Subsequently, the robot follows the road using the GeNav path recognition (see Section 3.1).
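The two-step control strategy c(e) described above (turn to the stored azimuth, then follow the road) can be sketched as below. This is an illustrative sketch: the callback names, the 2° heading tolerance, and the proportional turn are assumptions; the paper only specifies the compass turn followed by reactive path following.

```python
def normalize_angle(a):
    """Wrap an angle difference into the interval (-180, 180] degrees."""
    return (a + 180.0) % 360.0 - 180.0

def control_strategy(edge_azimuth, read_compass, turn, follow_path):
    # Step 1: rotate in place until the compass heading matches the
    # azimuth stored with the edge, within a small tolerance.
    while abs(normalize_angle(edge_azimuth - read_compass())) > 2.0:
        turn(normalize_angle(edge_azimuth - read_compass()))
    # Step 2: hand control to the reactive path-following behavior,
    # which runs until the next crossing is detected.
    follow_path()
```

The `normalize_angle` helper ensures the robot always turns through the smaller of the two possible arcs.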
It is also assumed that if the robot applies the control strategy c(e), e = (u, v), in a certain vertex u, it always arrives at the same vertex v.

2.1 Algorithm

The algorithm consists of two phases: exploration and vertices merging. In the exploration phase, the robot moves through the environment and builds its own map G_M = (V_M, E_M) of the world. As the robot cannot distinguish particular vertices from each other, it is unable to close a loop during exploration without visiting the base vertex (or interacting with some other robot). Moreover, every visited vertex must be handled as an unvisited one until the robot proves the contrary. The vertices merging phase starts whenever the robot detects the base vertex. This situation allows it to close the loop and merge identical vertices. During the exploration phase, one place in the environment might be represented by several vertices in the map; this inconsistency is reduced in the vertices merging phase. The algorithm works properly only if the robot is able to follow all detected edges. The existence of complementary edges is also necessary. An edge ē ∈ E is complementary to an edge e ∈ E if and only if expression (1) holds:

∀e ∈ E, e = (u, v): ∃ē ∈ E, ē = (v, u)   (1)

Moreover, the robot knows the complementary edge ē after passing e. This means the robot is able to backtrack its movement: after it passes from vertex u to v, it also knows how to move from v to u, even without having traveled that way before. In this implementation, this condition is fulfilled because if the robot knows the azimuth at which it entered a vertex, it also knows the way back.

Exploration Phase

In this phase, the robot moves through the environment and stores vertices and edges into the map. At the beginning, the robot has no information about the environment. The robot starts to follow the current edge until a crossing is detected; this crossing is stored in the map as the first vertex.
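The backtracking property rests on a single computation: the azimuth of the complementary edge is the direction opposite to the entry azimuth. A minimal sketch (the function name is illustrative):

```python
def complementary_azimuth(entry_azimuth):
    """Azimuth of the way back: the direction opposite to the one at
    which the robot entered the vertex, wrapped into [0, 360)."""
    return (entry_azimuth + 180.0) % 360.0
```

For example, a robot that entered a crossing heading at 90° knows that the complementary edge leaves the crossing at 270°.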
The nearest unexplored edge is used for further movement. The exploration phase is based on the graph depth-first search (DFS) algorithm. The algorithm is greedy: the robot heads for the nearest vertex with an unexplored edge. If there is more than one unexplored edge at the current vertex, one is chosen randomly with uniform probability. It would also be possible to use the breadth-first search (BFS) algorithm, but the robot travels greater distances with BFS.
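Finding the nearest vertex with an unexplored edge is itself a graph search over the map built so far. A sketch of that step, assuming an adjacency-list map representation (the names `adj` and `unexplored` are illustrative):

```python
from collections import deque

def path_to_nearest_unexplored(adj, unexplored, start):
    """Breadth-first search over the already-known map. Returns the
    vertex sequence from `start` to the nearest vertex that still has
    an unexplored edge, or None if no such vertex exists.
    adj: dict vertex -> list of neighbour vertices (explored edges)
    unexplored: set of vertices with at least one unexplored edge"""
    prev = {start: None}
    queue = deque([start])
    while queue:
        u = queue.popleft()
        if u in unexplored:
            # reconstruct the path back to start
            path = []
            while u is not None:
                path.append(u)
                u = prev[u]
            return path[::-1]
        for v in adj.get(u, []):
            if v not in prev:
                prev[v] = u
                queue.append(v)
    return None
```

Because BFS expands vertices in order of hop distance, the first vertex found with an unexplored edge is the nearest one in the known map, which is exactly what the greedy strategy needs.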
When the robot arrives at the next vertex (crossing), the edge between this and the previous vertex is added to the map. The complementary edge is known from the entry azimuth and is also added to the map. If the robot visits the base vertex, the vertices merging phase is executed.

Algorithm 1: exploration phase
  follow edge c(e);
  if crossing detected then
    add new vertex v to the map V_M;
  while there exists an unexplored edge in the world do
    if all edges from u have been explored then
      find path to the nearest node with an unexplored edge;
      choose first edge e from the path;
      use control strategy c(e);
    else
      choose randomly an unexplored edge e;
      store azimuth into t(e);
      use control strategy c(e) to move to v;
      use the angle opposite to the entry azimuth as t(ē);
      add vertex v into the map V_M;
      add edge e = (u, v) to the map E_M;
      add edge ē = (v, u) to the map E_M;

As the robot uses only a greedy algorithm, exploration can take a long time. If the environment is tree-like with n crossings, exploration finishes in 2n steps. When cycles occur in the environment, the robot can get stuck in them for a long time, especially if a cycle does not contain the base. Ensuring the consistency of the map of such an environment with the greedy algorithm is also time consuming. Therefore, a metric heuristic function is utilized, which estimates the metric position of a vertex from the robot odometry. After the robot has spent a certain time in unexplored space, edges directed towards the base are preferred. Edges are still chosen randomly, but no longer with uniform probability: roulette-wheel selection is used, where the part of the wheel allocated to each edge depends on its deviation from the direction to the base. The edge with the lowest deviation gets the largest part of the wheel. The random choice must be kept because the computed base position is affected by cumulative odometry errors, and because roads need not be straight but can have various shapes.
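The roulette-wheel selection described above can be sketched as follows. Note that the inverse-deviation weighting used here is an illustrative assumption: the paper only states that the edge with the lowest deviation from the base direction receives the largest slice, without giving the exact weighting formula.

```python
import random

def pick_edge_roulette(edge_azimuths, base_bearing):
    """Roulette-wheel choice among unexplored edges: edges pointing
    closer to the estimated direction of the base get a larger slice,
    but every edge keeps a non-zero chance, since odometry drift makes
    the base estimate unreliable."""
    def deviation(az):
        # smallest angular difference between edge and base direction
        d = abs(az - base_bearing) % 360.0
        return min(d, 360.0 - d)
    # weight inversely proportional to the deviation; the +1 keeps
    # weights finite and every slice non-empty
    weights = [1.0 / (deviation(az) + 1.0) for az in edge_azimuths]
    r = random.uniform(0.0, sum(weights))
    acc = 0.0
    for az, w in zip(edge_azimuths, weights):
        acc += w
        if r <= acc:
            return az
    return edge_azimuths[-1]
```

Over many draws, the edge most closely aligned with the base direction is chosen most often, yet every edge is eventually sampled.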
Vertices Merging Phase

First, the current vertex, recognized as the base, is merged with the base vertex in the map. Next, the robot makes the map consistent. By assumption, exactly one edge of each type may lead from every vertex; the type of an edge is denoted t(e) or t(u, v). Two edges are of the same type if the difference between their azimuths is smaller than the azimuth recognition precision. If two or more edges of the same type lead to different vertices, these vertices necessarily represent the same place in the world and are therefore merged. This is repeated recursively.

Algorithm 2: vertices merging phase
  merge(v_actual, v_base);
  while ∃ u, v, w ∈ V_M: (u, v), (u, w) ∈ E_M ∧ t(u, v) = t(u, w) do
    merge(v, w);

Algorithm 3: merge(u, v)
  while ∃ x ∈ V_M: (v, x) ∈ E_M do
    if (u, x) ∉ E_M then add (u, x);
    remove (v, x);
  while ∃ x ∈ V_M: (x, v) ∈ E_M do
    if (x, u) ∉ E_M then add (x, u);
    remove (x, v);
  remove v;

Terminal Condition

The whole exploration procedure terminates when the environment is explored completely, i.e. when the robot has a complete map of the environment. By assumption, the map is complete if and only if no vertex of the map has an unexplored edge. If the local map acquired by the robot satisfies this completeness condition, the robot stops its movement.

3 Visual navigation

The GeNav (Gerstner navigation) system was created for path and crossing recognition with a calibrated camera aimed at the surface in front of the robot. The viewed area spans from 1 to 5 m in the direction of the robot movement and approximately 3 m to both sides. It is supposed that the color of the path is given by another method or sensor, or is known in advance; the path color can also be entered by an operator. The hue-saturation-value (HSV) color space is utilized for path color specification, because a color description given as a Cartesian product of HSV values offers
greater invariance to changing illumination than a similar description in the red-green-blue (RGB) color space. To avoid a costly HSV conversion of every evaluated pixel during the recognition procedures, an RGB lookup color table is first computed from the HSV color specification.

The system implements two behaviors: path traversing and crossing passage. In the path traversing mode, the algorithm attempts to keep the robot in the middle of the recognized path while driving it forwards. It estimates the width of the recognized path and executes the crossing recognition routines when this width changes rapidly. Once a crossing is recognized and approached, the robot switches to the crossing passage mode and sends the crossing description to the topological mapping module. The topological module either designates the pathway the robot should take when leaving the detected crossing or announces the completion of exploration. The robot moves to the designated exit path and reactivates the path traversing mode once the exit path is reached.

After start, GeNav checks whether it can access the camera and the robot control board. If unsuccessful, it reads a predefined map and spawns a simulator. It then attempts to connect to the topological module; if the topological module is not running, a random number generator for crossing-turn decisions is initialized.

Fig. 1. Block scheme of the GeNav system

3.1 Path traversing

In the first step of the algorithm, the last row of the acquired image is searched for pixels of the path color and the mean value of their horizontal coordinates is computed. After that, the path boundaries on this row are identified, i.e. a pixel sequence of other than path color is searched for in both directions from the mean position. The path middle and width are then calculated from the detected boundaries.
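The per-row boundary search described above can be sketched as follows. This is an illustrative simplification assuming the row has already been color-segmented into a boolean mask: the paper looks for a whole sequence of non-path pixels at the boundary (to tolerate noise), whereas this sketch stops at the first non-path pixel.

```python
def scan_row(mask_row, start):
    """Given one image row as a list of booleans (True = path color)
    and a start column, expand left and right to the path boundaries
    and return (middle, width). Returns (start, 0) when the start
    pixel is not path-colored."""
    if not (0 <= start < len(mask_row)) or not mask_row[start]:
        return start, 0
    left = start
    while left > 0 and mask_row[left - 1]:
        left -= 1
    right = start
    while right < len(mask_row) - 1 and mask_row[right + 1]:
        right += 1
    return (left + right) // 2, right - left + 1
```

The returned middle feeds the next row's start position, and the returned width drives both the stopping criterion and the crossing detection trigger.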
If the width is greater than a predefined threshold, the algorithm proceeds to a higher row, with the search start position given by
the current path middle coordinate. The search is completed when the path width drops below this threshold. The robot forward velocity v and turning speed ϕ are given by

  v = α(h − r) − β |Σ_{i=r}^{h} (w/2 − m_i)|
  ϕ = β Σ_{i=r}^{h} (w/2 − m_i)                                        (2)

where h and w are the image height and width respectively, m_i is the detected path middle of the i-th row, r is the last processed row number, and α, β are constants. Because noise is usually present in the image, the middle and width values are smoothed by second-order linear adaptive filters.

Fig. 2. Detected path and crossing

3.2 Crossing recognition

Unlike path recognition, the usability and precision of this routine requires the camera to be calibrated. If the detected path width differs from the predicted one over several consecutive frames, the crossing detection routines are activated. These search for continuous regions of path color on the periphery of the sensed image. Regions not connected by a path to the center of the detected crossing are removed. The image coordinates of the remaining region centers are converted to the robot coordinate system (the crossing is considered to be planar and coplanar with the robot undercarriage). The crossing description is then calculated from these regions, the detected crossing center, and the compass measurements. This description consists of the set of bearings of the paths leading out of the crossing. Optionally, the position of the crossing center measured by odometry or GPS is added to the descriptor. Finally, the image of the crossing center is searched for a large blob of a predefined color; if such a blob is found, the crossing is designated as the base. The description is then delivered to the topological mapping module and the robot moves forward to the crossing center. A command with the azimuth
of the output path is received and the robot turns in this direction. Afterwards, the robot moves forward a short distance and the path traversing routines are activated. For a short time, the crossing detection routines are inhibited, to prevent the same crossing from being recognized repeatedly.

4 Experiments

4.1 Simulated-world experiments

Because real-world testing is a time-costly process, the system behavior was first tested on a simulator. The robot behavior was simulated using MobileSim. Synthetic camera images were automatically generated from a hand-drawn map of a part of the Kinsky garden in Prague. In order to improve the realism of the generated images, real-world textures were used and artificial noise was added.

Fig. 3. Generated view and textured map

The system was tested on four maps of various sizes (see Figure 4). Ten test runs were performed for each map; the exploration time, the number of failed exploration attempts, and the number of crossing passages were recorded (see Table 1). The topological exploration algorithm requires the system providing node information to be absolutely inerrant. Even though the GeNav recognition success rate is approximately 98%, the exploration success rate drops quickly with an increasing number of passed crossings.

4.2 Real-world experiments

The outdoor experiment was performed with a Pioneer 3AT robotic platform with a TCM2 compass. The robot was equipped with a Fire-i 400 camera providing
color images at 640x480 pixel resolution. The images were processed in real time on an Intel Core 2 Duo notebook.

Fig. 4. Explored maps

Table 1. Simulation results
  Map size (crossings) | Crossings traversed | Failures | Exploration time (s)
  Minimal (4)          |                     |          |
  Small (5)            |                     |          |
  Middle (8)           |                     |          |
  Large (14)           |                     |          |

Fig. 5. (a) Outdoor experiment map; (b) robotic platform
A rosarium in the Kinsky garden in Prague was chosen because of its narrow, short paths and its abundance of crossings. In order to keep the mapped area reasonably small, the crossing descriptions sent to the topological mapping system were reduced: paths leading forward or to the right were not reported, so the resulting topological map was a cycle with four nodes.

5 Conclusion

The proposed system is capable of creating topological maps of outdoor, graph-like environments. Its main disadvantage lies in the fact that the visual recognition is purely reactive and therefore not capable of recognizing larger crossings. The system has been successfully tested in small-scale environments. Exploration of larger maps may result in failure, because the topological mapping expects the vision system to be absolutely inerrant. Our future research will aim at increasing the robustness of the exploration system. The topological exploration algorithm will be improved to deal with occasional errors of the vision recognition. The visual recognition will be extended by building a local map of the robot's surroundings; this will make it possible to navigate crossings and paths larger than the viewed area, and will raise the recognition reliability and precision. We also plan to extend the information exchange between the two subsystems in order to use adaptive color segmentation. Methods allowing passed crossings to be distinguished will also be tested.

6 Acknowledgements

This work was supported by a CTU research grant of the Czech Technical University in Prague and by the Research program funded by the Ministry of Education of the Czech Republic No. MSM.

References

1. Kuipers, B.J.: The spatial semantic hierarchy. Artificial Intelligence 119 (2000)
2. Kuipers, B.J.: Modeling spatial knowledge. Cognitive Science 2 (1978)
3. Bartel, A., Meyer, F., Sinke, C., Wiemann, T., Nüchter, A., Lingemann, K., Hertzberg, J.: Real-time outdoor trail detection on a mobile robot.
In: Proceedings of the 13th IASTED International Conference on Robotics, Applications and Telematics (2007)
4. Košnar, K., Přeučil, L., Štěpán, P.: Topological multi-robot exploration. In: Proceedings of the IEEE Systems, Man and Cybernetics Society United Kingdom & Republic of Ireland Chapter 5th Conference on Advances in Cybernetic Systems. IEEE, New York (2006)
More informationHigh Altitude Balloon Localization from Photographs
High Altitude Balloon Localization from Photographs Paul Norman and Daniel Bowman Bovine Aerospace August 27, 2013 Introduction On December 24, 2011, we launched a high altitude balloon equipped with a
More informationAcquisition of Qualitative Spatial Representation by Visual Observation
Acquisition of Qualitative Spatial Representation by Visual Observation Takushi Sogo Hiroshi Ishiguro Toru Ishida Department of Social Informatics, Kyoto University Kyoto 606-8501, Japan sogo@kuis.kyoto-u.ac.jp,
More informationAutonomous Robot Navigation: Using Multiple Semi-supervised Models for Obstacle Detection
Autonomous Robot Navigation: Using Multiple Semi-supervised Models for Obstacle Detection Adam Bates University of Colorado at Boulder Abstract: This paper proposes a novel approach to efficiently creating
More informationMulti-Camera Calibration, Object Tracking and Query Generation
MITSUBISHI ELECTRIC RESEARCH LABORATORIES http://www.merl.com Multi-Camera Calibration, Object Tracking and Query Generation Porikli, F.; Divakaran, A. TR2003-100 August 2003 Abstract An automatic object
More informationRange Imaging Through Triangulation. Range Imaging Through Triangulation. Range Imaging Through Triangulation. Range Imaging Through Triangulation
Obviously, this is a very slow process and not suitable for dynamic scenes. To speed things up, we can use a laser that projects a vertical line of light onto the scene. This laser rotates around its vertical
More informationA New Feature Local Binary Patterns (FLBP) Method
A New Feature Local Binary Patterns (FLBP) Method Jiayu Gu and Chengjun Liu The Department of Computer Science, New Jersey Institute of Technology, Newark, NJ 07102, USA Abstract - This paper presents
More informationProcessing 3D Surface Data
Processing 3D Surface Data Computer Animation and Visualisation Lecture 12 Institute for Perception, Action & Behaviour School of Informatics 3D Surfaces 1 3D surface data... where from? Iso-surfacing
More informationDr. Amotz Bar-Noy s Compendium of Algorithms Problems. Problems, Hints, and Solutions
Dr. Amotz Bar-Noy s Compendium of Algorithms Problems Problems, Hints, and Solutions Chapter 1 Searching and Sorting Problems 1 1.1 Array with One Missing 1.1.1 Problem Let A = A[1],..., A[n] be an array
More informationUninformed Search (Ch )
Uninformed Search (Ch. 3-3.4) Announcements First homework will be posted tonight (due next Wednesday at 11:55 pm) Review We use words that have a general English definition in a technical sense nd Rational=choose
More informationMotion Detection Algorithm
Volume 1, No. 12, February 2013 ISSN 2278-1080 The International Journal of Computer Science & Applications (TIJCSA) RESEARCH PAPER Available Online at http://www.journalofcomputerscience.com/ Motion Detection
More informationSINGLE IMAGE ORIENTATION USING LINEAR FEATURES AUTOMATICALLY EXTRACTED FROM DIGITAL IMAGES
SINGLE IMAGE ORIENTATION USING LINEAR FEATURES AUTOMATICALLY EXTRACTED FROM DIGITAL IMAGES Nadine Meierhold a, Armin Schmich b a Technical University of Dresden, Institute of Photogrammetry and Remote
More informationFeature Transfer and Matching in Disparate Stereo Views through the use of Plane Homographies
Feature Transfer and Matching in Disparate Stereo Views through the use of Plane Homographies M. Lourakis, S. Tzurbakis, A. Argyros, S. Orphanoudakis Computer Vision and Robotics Lab (CVRL) Institute of
More informationRobot localization method based on visual features and their geometric relationship
, pp.46-50 http://dx.doi.org/10.14257/astl.2015.85.11 Robot localization method based on visual features and their geometric relationship Sangyun Lee 1, Changkyung Eem 2, and Hyunki Hong 3 1 Department
More informationRobot Motion Planning Using Generalised Voronoi Diagrams
Robot Motion Planning Using Generalised Voronoi Diagrams MILOŠ ŠEDA, VÁCLAV PICH Institute of Automation and Computer Science Brno University of Technology Technická 2, 616 69 Brno CZECH REPUBLIC Abstract:
More informationAdvanced Robotics Path Planning & Navigation
Advanced Robotics Path Planning & Navigation 1 Agenda Motivation Basic Definitions Configuration Space Global Planning Local Planning Obstacle Avoidance ROS Navigation Stack 2 Literature Choset, Lynch,
More informationSummary of Computing Team s Activities Fall 2007 Siddharth Gauba, Toni Ivanov, Edwin Lai, Gary Soedarsono, Tanya Gupta
Summary of Computing Team s Activities Fall 2007 Siddharth Gauba, Toni Ivanov, Edwin Lai, Gary Soedarsono, Tanya Gupta 1 OVERVIEW Input Image Channel Separation Inverse Perspective Mapping The computing
More informationA threshold decision of the object image by using the smart tag
A threshold decision of the object image by using the smart tag Chang-Jun Im, Jin-Young Kim, Kwan Young Joung, Ho-Gil Lee Sensing & Perception Research Group Korea Institute of Industrial Technology (
More informationResearch on an Adaptive Terrain Reconstruction of Sequence Images in Deep Space Exploration
, pp.33-41 http://dx.doi.org/10.14257/astl.2014.52.07 Research on an Adaptive Terrain Reconstruction of Sequence Images in Deep Space Exploration Wang Wei, Zhao Wenbin, Zhao Zhengxu School of Information
More informationCS 223B Computer Vision Problem Set 3
CS 223B Computer Vision Problem Set 3 Due: Feb. 22 nd, 2011 1 Probabilistic Recursion for Tracking In this problem you will derive a method for tracking a point of interest through a sequence of images.
More informationVisually Augmented POMDP for Indoor Robot Navigation
Visually Augmented POMDP for Indoor obot Navigation LÓPEZ M.E., BAEA., BEGASA L.M., ESCUDEO M.S. Electronics Department University of Alcalá Campus Universitario. 28871 Alcalá de Henares (Madrid) SPAIN
More informationCamera Parameters Estimation from Hand-labelled Sun Sositions in Image Sequences
Camera Parameters Estimation from Hand-labelled Sun Sositions in Image Sequences Jean-François Lalonde, Srinivasa G. Narasimhan and Alexei A. Efros {jlalonde,srinivas,efros}@cs.cmu.edu CMU-RI-TR-8-32 July
More informationCollecting outdoor datasets for benchmarking vision based robot localization
Collecting outdoor datasets for benchmarking vision based robot localization Emanuele Frontoni*, Andrea Ascani, Adriano Mancini, Primo Zingaretti Department of Ingegneria Infromatica, Gestionale e dell
More informationLearning Image-Based Landmarks for Wayfinding using Neural Networks
Learning Image-Based Landmarks for Wayfinding using Neural Networks Jeremy D. Drysdale Damian M. Lyons Robotics and Computer Vision Laboratory Department of Computer Science 340 JMH Fordham University
More informationSemantics in Human Localization and Mapping
Semantics in Human Localization and Mapping Aidos Sarsembayev, Antonio Sgorbissa University of Genova, Dept. DIBRIS Via Opera Pia 13, 16145 Genova, Italy aidos.sarsembayev@edu.unige.it, antonio.sgorbissa@unige.it
More informationA Statistical Approach to Culture Colors Distribution in Video Sensors Angela D Angelo, Jean-Luc Dugelay
A Statistical Approach to Culture Colors Distribution in Video Sensors Angela D Angelo, Jean-Luc Dugelay VPQM 2010, Scottsdale, Arizona, U.S.A, January 13-15 Outline Introduction Proposed approach Colors
More informationTransactions on Information and Communications Technologies vol 16, 1996 WIT Press, ISSN
ransactions on Information and Communications echnologies vol 6, 996 WI Press, www.witpress.com, ISSN 743-357 Obstacle detection using stereo without correspondence L. X. Zhou & W. K. Gu Institute of Information
More informationCSE/EE-576, Final Project
1 CSE/EE-576, Final Project Torso tracking Ke-Yu Chen Introduction Human 3D modeling and reconstruction from 2D sequences has been researcher s interests for years. Torso is the main part of the human
More informationMulti-Robot Navigation and Coordination
Multi-Robot Navigation and Coordination Daniel Casner and Ben Willard Kurt Krebsbach, Advisor Department of Computer Science, Lawrence University, Appleton, Wisconsin 54912 daniel.t.casner@ieee.org, benjamin.h.willard@lawrence.edu
More informationAUTOMATED THRESHOLD DETECTION FOR OBJECT SEGMENTATION IN COLOUR IMAGE
AUTOMATED THRESHOLD DETECTION FOR OBJECT SEGMENTATION IN COLOUR IMAGE Md. Akhtaruzzaman, Amir A. Shafie and Md. Raisuddin Khan Department of Mechatronics Engineering, Kulliyyah of Engineering, International
More informationCOLOR FIDELITY OF CHROMATIC DISTRIBUTIONS BY TRIAD ILLUMINANT COMPARISON. Marcel P. Lucassen, Theo Gevers, Arjan Gijsenij
COLOR FIDELITY OF CHROMATIC DISTRIBUTIONS BY TRIAD ILLUMINANT COMPARISON Marcel P. Lucassen, Theo Gevers, Arjan Gijsenij Intelligent Systems Lab Amsterdam, University of Amsterdam ABSTRACT Performance
More informationGerstner Laboratory for Intelligent Decision Making. laboratory. Gerstner. SyRoTek: V User s manual for SyRoTek e-learning system
Gerstner Laboratory for Intelligent Decision Making laboratory Gerstner SyRoTek: V012.2 - User s manual for SyRoTek e-learning system Karel Košnar, Jan Faigl, Martin Saska, Miroslav Kulich, Jan Chudoba,
More informationCoverage and Search Algorithms. Chapter 10
Coverage and Search Algorithms Chapter 10 Objectives To investigate some simple algorithms for covering the area in an environment To understand how break down an environment in simple convex pieces To
More informationChapter 4 - Image. Digital Libraries and Content Management
Prof. Dr.-Ing. Stefan Deßloch AG Heterogene Informationssysteme Geb. 36, Raum 329 Tel. 0631/205 3275 dessloch@informatik.uni-kl.de Chapter 4 - Image Vector Graphics Raw data: set (!) of lines and polygons
More informationLong-term motion estimation from images
Long-term motion estimation from images Dennis Strelow 1 and Sanjiv Singh 2 1 Google, Mountain View, CA, strelow@google.com 2 Carnegie Mellon University, Pittsburgh, PA, ssingh@cmu.edu Summary. Cameras
More informationRegion Based Image Fusion Using SVM
Region Based Image Fusion Using SVM Yang Liu, Jian Cheng, Hanqing Lu National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences ABSTRACT This paper presents a novel
More informationTraining Algorithms for Robust Face Recognition using a Template-matching Approach
Training Algorithms for Robust Face Recognition using a Template-matching Approach Xiaoyan Mu, Mehmet Artiklar, Metin Artiklar, and Mohamad H. Hassoun Department of Electrical and Computer Engineering
More informationThree-Dimensional Off-Line Path Planning for Unmanned Aerial Vehicle Using Modified Particle Swarm Optimization
Three-Dimensional Off-Line Path Planning for Unmanned Aerial Vehicle Using Modified Particle Swarm Optimization Lana Dalawr Jalal Abstract This paper addresses the problem of offline path planning for
More informationData Structure. IBPS SO (IT- Officer) Exam 2017
Data Structure IBPS SO (IT- Officer) Exam 2017 Data Structure: In computer science, a data structure is a way of storing and organizing data in a computer s memory so that it can be used efficiently. Data
More informationCar tracking in tunnels
Czech Pattern Recognition Workshop 2000, Tomáš Svoboda (Ed.) Peršlák, Czech Republic, February 2 4, 2000 Czech Pattern Recognition Society Car tracking in tunnels Roman Pflugfelder and Horst Bischof Pattern
More informationStable Vision-Aided Navigation for Large-Area Augmented Reality
Stable Vision-Aided Navigation for Large-Area Augmented Reality Taragay Oskiper, Han-Pang Chiu, Zhiwei Zhu Supun Samarasekera, Rakesh Teddy Kumar Vision and Robotics Laboratory SRI-International Sarnoff,
More informationTexture Segmentation by Windowed Projection
Texture Segmentation by Windowed Projection 1, 2 Fan-Chen Tseng, 2 Ching-Chi Hsu, 2 Chiou-Shann Fuh 1 Department of Electronic Engineering National I-Lan Institute of Technology e-mail : fctseng@ccmail.ilantech.edu.tw
More informationAutonomous Mobile Robots, Chapter 6 Planning and Navigation Where am I going? How do I get there? Localization. Cognition. Real World Environment
Planning and Navigation Where am I going? How do I get there?? Localization "Position" Global Map Cognition Environment Model Local Map Perception Real World Environment Path Motion Control Competencies
More informationProf. Fanny Ficuciello Robotics for Bioengineering Visual Servoing
Visual servoing vision allows a robotic system to obtain geometrical and qualitative information on the surrounding environment high level control motion planning (look-and-move visual grasping) low level
More informationarxiv: v1 [cs.cv] 28 Sep 2018
Camera Pose Estimation from Sequence of Calibrated Images arxiv:1809.11066v1 [cs.cv] 28 Sep 2018 Jacek Komorowski 1 and Przemyslaw Rokita 2 1 Maria Curie-Sklodowska University, Institute of Computer Science,
More informationCS 280 Problem Set 10 Solutions Due May 3, Part A
CS 280 Problem Set 10 Due May 3, 2002 Part A (1) You are given a directed graph G with 1000 vertices. There is one weakly connected component of size 800 and several other weakly connected components of
More informationSegmentation and Tracking of Partial Planar Templates
Segmentation and Tracking of Partial Planar Templates Abdelsalam Masoud William Hoff Colorado School of Mines Colorado School of Mines Golden, CO 800 Golden, CO 800 amasoud@mines.edu whoff@mines.edu Abstract
More informationSELECTION OF WHEEL CHASSIS FOR MOBILE ROBOTS IN COURSE OF PRODUCTION PROCESSES AUTOMATIZATION
6th INTERNATIONAL MULTIDISCIPLINARY CONFERENCE SELECTION OF WHEEL CHASSIS FOR MOBILE ROBOTS IN COURSE OF PRODUCTION PROCESSES AUTOMATIZATION Ing. Ladislav Kárník, CSc., Technical University of Ostrava,
More informationFeature Point Extraction using 3D Separability Filter for Finger Shape Recognition
Feature Point Extraction using 3D Separability Filter for Finger Shape Recognition Ryoma Yataka, Lincon Sales de Souza, and Kazuhiro Fukui Graduate School of Systems and Information Engineering, University
More informationLocalization and Map Building
Localization and Map Building Noise and aliasing; odometric position estimation To localize or not to localize Belief representation Map representation Probabilistic map-based localization Other examples
More information