Sensor Fusion of Laser & Stereo Vision Camera for Depth Estimation and Obstacle Avoidance


Volume 1 No. 26

Sensor Fusion of Laser & Stereo Vision Camera for Depth Estimation and Obstacle Avoidance

Saurav Kumar, Computer Engg. Dept., Delhi Technological University
Daya Gupta, HOD, Computer Engg. Dept., Delhi Technological University
Sakshi Yadav, Student, Electrical & Electronics, Delhi Technological University

ABSTRACT

Laser Range Finders (LRFs) have been widely used in robotics to generate very accurate 2-D maps of the environment perceived by an autonomous mobile robot. Stereo vision devices, on the other hand, provide a 3-D view of the surroundings with a far greater range than an LRF, but at the cost of accuracy. This paper demonstrates a sensor-fusion technique that combines information from an LRF and a stereo vision camera so as to retain the accuracy of the former and the range of the latter. The 3-D point cloud obtained from the stereo vision camera is pruned for computational efficiency in a real-time environment, after which the point-cloud model is scaled down to a 2-D vision map to further reduce computational cost. The camera's 2-D map is fused with the 2-D cost map of the LRF to generate a 2-D navigation map of the surroundings, which is in turn passed as an occupancy grid to VFH+ for obstacle avoidance and path planning. The technique has been successfully tested in outdoor environments on Lakshya, an IGV platform developed at Delhi College of Engineering.

Keywords: Sensor fusion, stereo vision, laser range finder, obstacle avoidance, navigation map, 3D point cloud, robotics

1. INTRODUCTION

A large number of sensors can be used to detect obstacles in the immediate surroundings; sonar, lasers, and stereo vision cameras, for example, are all widely used for obstacle detection. Each sensor works in a different manner and has its own limitations and advantages.
Due to its inherent limitations, a single sensor cannot give an accurate reconstruction of the surroundings and hence cannot by itself be used by a mobile robot for obstacle detection and accurate path planning. This gives rise to the concept of sensor fusion, i.e. the integration of data from different sensors for successful obstacle avoidance and path planning. Distance sensors such as laser range finders have been used before to reconstruct the real-world surroundings of a robot [1]. They give very accurate and reliable output, but for obstacles such as a chair or a table, or obstacles not lying in the plane of the laser, they fail to detect the whole obstacle. Laser data is also strongly affected by the pitch and roll of the vehicle. A stereo vision camera, on the other hand, acquires images of the dynamic environment. Though it can perceive up to infinity, its field of view is narrower than that of an LRF; moreover, if the camera system alone is used for obstacle detection, the data obtained is of inferior quality and the computational burden on the system increases. Sensor fusion of a laser and a camera has been accomplished before in [2], but that method focuses on lifting 2-D laser maps to 3-D and then fusing them with the stereo vision 3-D map, which adds computational burden. [3] deals with long-range obstacle detection on roads, where the laser range finder detects and tracks the obstacle and the stereo vision camera reconfirms the laser data. Fusion of sensors such as stereo vision and lidar systems has been used widely on autonomous vehicles [4][5]. In this paper, we propose an algorithm that fuses the 2-D cost map generated from laser data with the 2-D cost map derived from the camera's 3-D real-world map, creating an occupancy grid for obstacle detection and trajectory planning.
Fusing the two is a challenging task, but the output is commendable and efficient enough to let a system move autonomously through a complex, dynamic environment with safe path planning and obstacle collision avoidance. Section II deals with the range sensors, the Hokuyo laser scanner and the BumbleBee stereo vision camera, and the generation of their respective 2-D cost maps. Section III deals with fusing the sensor data to generate an occupancy grid map and with the subsequent path planning. Section IV describes Lakshya's mechanical design and the results of experiments performed on Lakshya. Section V concludes the paper with a discussion of future work and applications in this field.

2. RANGE SENSORS

The LRF used in experimentation was Hokuyo's URG-04LX, which has a range of 20 mm to 4 m, a 240° scanning area, and 0.36° angular resolution. Laser beams reflect off an object to determine its distance and direction. The scanning time is about 100 ms per scan. Based on the positions of objects around the robot, a 2-D map is generated.

A BumbleBee stereo vision camera by Point Grey Research, with two Sony 1/3" progressive-scan CCDs and a resolution of 640x480 at 48 FPS or 1024x768 at 18 FPS, was used in conjunction with the Hokuyo laser on the Lakshya platform for stereo imaging of the surroundings.

Fig. 1: Image sensing
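The laser side of the map-building step can be sketched as follows. This is a hedged illustration rather than the paper's code: the cell size and grid extent are assumed for the example; only the URG-04LX range limits come from the text.

```cpp
#include <cassert>
#include <cmath>

// Sketch (not the paper's code) of turning one planar URG-04LX scan into
// a 2-D grid map. Cell size and grid extent are illustrative; the sensor
// itself reports ranges of 20 mm to 4 m over a 240-degree arc in
// 0.36-degree steps.
struct LaserMap {
    static constexpr int W = 80, H = 40;   // 8 m wide x 4 m deep grid
    static constexpr double CELL = 0.1;    // 10 cm cells
    int cell[H][W] = {};                   // 0 = free, 1 = occupied

    // angleDeg: 0 = straight ahead, positive = left; rangeM in metres.
    // Marks the grid cell containing the laser return as occupied.
    void addReturn(double angleDeg, double rangeM) {
        if (rangeM < 0.02 || rangeM > 4.0) return;   // outside the valid range
        double a = angleDeg * 3.14159265358979323846 / 180.0;
        double x = rangeM * std::sin(a);             // lateral offset
        double z = rangeM * std::cos(a);             // forward distance
        int c = (int)std::floor(x / CELL) + W / 2;
        int r = (int)std::floor(z / CELL);
        if (r >= 0 && r < H && c >= 0 && c < W) cell[r][c] = 1;
    }
};
```

Running all beams of a 240° sweep through addReturn yields the polar-style 2-D laser map that is later fused with the camera map.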

The drawback of the LRF is that it can detect only those objects which lie in its plane of scanning. Objects such as chairs and tables are therefore represented only by their vertical supports and not as whole objects. Objects lying below or above the scanning plane of the laser are not detected at all, and the robot may consequently plan its path through them. This difficulty is overcome with the help of the stereo vision camera, which uses a 3-D point cloud to determine the positions of obstacles in the environment.

Fig. 2: Sensor placements

For initial experimentation the LRF was placed above the camera, as shown in Fig. 2. The 3-D space in front of the robot is divided into four regions. Regions I and IV can be analyzed only by the laser and are out of range of the camera's vision, whereas regions II and III are mapped by both the laser and the camera. For regions I and IV, only the 2-D laser map is used to compute the occupancy grid; for regions II and III, the laser and camera data are fused together.

StereoVision Camera System

A. 3D Point Cloud

The stereo vision system constructs a 3-D view of the environment and then extracts the obstacles present in it. This is done either by generating disparity maps and evaluating them [6][7], or by generating a 3-D point-cloud map of the environment from the disparity values. Another way to accomplish this is the v-disparity method [8], a well-known real-time obstacle and ground detection algorithm proposed by R. Labayrade et al. and used in the DARPA Grand Challenge. Using the disparity values [9] and the intrinsic and extrinsic camera parameters, each image pixel (x, y) is assigned a real-world coordinate (X, Y, Z), where the Z-axis is in the direction of motion of the robot and the coordinate system is relative to the robot, i.e. it moves with the robot. The 3-D coordinates are assigned [10] by finding the corresponding points in the left and right images and then using the calibrated camera parameters to determine each point's 3-D position. Fig. 3 illustrates the 3-D point cloud of a table formed by the stereo vision camera. The 3-D point-cloud map thus generated for all image pixels is first pruned to reduce computation time.

Fig. 3: (clockwise from left) obstacle (a table); polar histogram of the obstacle generated by the laser; 3-D point cloud of the obstacle from the stereo vision camera.

In the left image of Fig. 3, an obstacle, a table, is depicted. The 2-D polar map generated by the laser (right image) detects only the legs of the table and marks the region between the legs as traversable. The 3-D point cloud generated by the stereo vision camera, on the other hand, detects the whole table as an obstacle.

B. Rationalizing the 3D Point Cloud

To reduce the computational burden and increase the efficiency of the algorithm, the 3-D data so obtained has to be pruned. Pruning is done in several steps:

a) Ground cancellation: Ground detection algorithms are applied to detect the ground, whose points are then deleted from the 3-D point cloud. One way to detect the ground is to assume a constant horizontal ground plane underneath the robot. Another is the RANSAC (Random Sample Consensus) technique [11], in which the inliers constitute the ground plane whereas obstacles, ditches, etc. are rejected as outliers.

b) Deletion of points lying above height H: Obstacles lying above the height H of the robot do not hinder its motion and can therefore be removed from the 3-D point cloud (Fig. 4). The method is described in [12].

c) Limiting the camera's range: The LRF has a range of 4 m. Though the range of the stereo vision camera is in principle unlimited, noise in the data increases and accuracy decreases with distance. Data beyond a range R (determined experimentally, depending on the velocity, reaction time of the robot, etc.) is eliminated; rejecting point clouds beyond R again minimizes the computational burden.

d) Removing objects of height h: Objects lying on the ground plane whose height h is much less than the clearance of the robot are removed from the 3-D point cloud, as they are easily traversable.

Fig. 4

C. Creation of 2D Map from the Rationalized 3D Point Cloud

A 2-D vision map is created from the 3-D point cloud generated from the camera data. The map cells are data structures with three integer variables and one float variable:

int x = x-coordinate of the cell;
int y = y-coordinate of the cell;
int flag = 0 (unoccupied), 1 (occupied) or 2 (unidentified);
float r_cell = range obtained from the stereo vision camera;

The flag variable is assigned the value 0, 1 or 2 according to the following algorithm. Let the ground be the x-z plane, with the z-axis in the direction of motion of the robot and the y-axis vertical. For a cell in the x-z plane, let its length and breadth be Δz and Δx, and in the pruned 3-D point cloud let n points fall in the cuboid of volume V = Δx * Δz * y standing on the cell, giving a point density σ. Then

    flag = 1,                                 if σ > β_max
    flag = flag of the cell in front of it,   if β_min < σ < β_max
    flag = 0,                                 if σ < β_min

where β_min and β_max are experimentally determined threshold values, and flag = 2 for cells which are out of range of the camera. Thus each cell of the matrix stores a value of 0, 1 or 2 corresponding to the placement of the obstacles.

Fig. 5 is a line representation of how an obstacle is projected onto the 2-D map to determine the occupancy values of the cells of the vision map.
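Sections A-C above can be sketched end to end as follows. This is a minimal illustration, not the paper's implementation: the calibration values, grid dimensions, and the stand-ins for H, R, the clearance cut, and β_min/β_max are all assumed for the example, and the raw per-cell point count is used in place of the density σ (the cell volume is fixed, so the two are proportional).

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Pt { double x, y, z; };            // y vertical, z forward

// Ideal rectified stereo: focal length f (pixels), baseline B (metres),
// principal point (cx, cy); values here are illustrative, not the
// Bumblebee's calibration. Z = f*B/d is the standard disparity relation.
struct Stereo {
    double f = 500.0, B = 0.12, cx = 320.0, cy = 240.0;
    bool reproject(double u, double v, double d, Pt& p) const {
        if (d <= 0.0) return false;       // zero disparity: point at infinity
        p.z = f * B / d;
        p.x = (u - cx) * p.z / f;
        p.y = (cy - v) * p.z / f;         // image rows grow downward
        return true;
    }
};

struct VisionMap {
    // Illustrative stand-ins for the experimentally determined values.
    static constexpr double H = 1.2;        // robot height (rule b)
    static constexpr double R = 8.0;        // trusted camera range (rule c)
    static constexpr double hClear = 0.05;  // clearance cut (rules a, d)
    static constexpr double DX = 0.2, DZ = 0.2;       // cell size Δx, Δz
    static constexpr int NX = 40, NZ = 40;
    static constexpr int BETA_MIN = 2, BETA_MAX = 5;  // count thresholds

    int count[NZ][NX] = {};
    int flag[NZ][NX] = {};   // 0 free, 1 occupied (2, out of camera
                             // range, would be set by the caller)

    void build(const std::vector<Pt>& cloud) {
        for (const Pt& p : cloud) {
            if (p.y < hClear || p.y > H) continue; // prune ground/low/overhead
            if (p.z < 0.0 || p.z > R) continue;    // prune far, noisy points
            int c = (int)std::floor(p.x / DX) + NX / 2;
            int r = (int)std::floor(p.z / DZ);
            if (r >= 0 && r < NZ && c >= 0 && c < NX) ++count[r][c];
        }
        // Sweep outward from the robot so an ambiguous cell can inherit
        // the flag of the cell in front of (nearer than) it.
        for (int c = 0; c < NX; ++c)
            for (int r = 0; r < NZ; ++r) {
                int n = count[r][c];
                if (n > BETA_MAX)      flag[r][c] = 1;
                else if (n < BETA_MIN) flag[r][c] = 0;
                else                   flag[r][c] = (r > 0) ? flag[r - 1][c] : 0;
            }
    }
};
```

reproject maps a rectified pixel (u, v) with disparity d to robot-relative coordinates; build then applies the pruning cuts and assigns each cell's flag from its point count, inheriting from the nearer cell in the ambiguous band, exactly as in the flag rule above.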

III. SENSOR FUSION TO GENERATE OCCUPANCY GRID AND PATH PLANNING

The 2-D matrix obtained after pruning and scaling down the 3-D point cloud is fused with the 2-D map generated by the laser to create an occupancy grid.

A) Assigning flag values to the occupancy grid

The occupancy grid can contain four different values, or weights: unoccupied (0), occupied (1), unidentified (2) and unsure (3). Cells are assigned these values on the basis of the following rules:

1) A cell marked as occupied in both the laser 2-D map and the stereo vision camera's 2-D map, or only in the laser 2-D map, is assigned the value 1 (occupied).
2) A cell identified as empty by both 2-D maps is given the value 0 (unoccupied).
3) Cells lying beyond cells identified as definite obstacles are marked as unidentified (2).
4) Cells identified as obstacles by the camera but detected as free space by the laser are assigned 1 (occupied).

The occupancy grid so generated is updated with each laser and camera scan, which enables the robot to move in a dynamic environment. The 2-D occupancy grid created after fusing the two sensors' data is used for obstacle avoidance and path planning. The algorithm used for this purpose is VFH+ [13], the improved version of the Vector Field Histogram (VFH). It reduces the 2-D occupancy grid to a 1-D polar histogram and labels free spaces as candidate valleys, from which the direction of motion for the robot is determined. An advantage of VFH+ is that, besides path planning, it also takes the dynamics and kinematics of the robot into account while planning an optimum path through the obstacles.

B) Assigning range values to the occupancy grid

Range values are assigned in the occupancy grid according to the pseudo-algorithm given below (Fig. 6).
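The per-cell fusion, both the flag rules 1)-4) above and the range rule of Fig. 6 below, can be sketched as a pair of small functions. This is a hedged sketch, not the paper's code: the Δr value and the negative-range encoding for "no reading" are illustrative assumptions.

```cpp
#include <cassert>
#include <cmath>

enum CellFlag { FREE = 0, OCCUPIED = 1, UNIDENTIFIED = 2 };

// fuseFlags implements rules 1)-4); behindObstacle is true for cells
// shadowed by a definite obstacle (rule 3).
int fuseFlags(int laser, int camera, bool behindObstacle) {
    if (behindObstacle) return UNIDENTIFIED;            // rule 3
    if (laser == OCCUPIED || camera == OCCUPIED)
        return OCCUPIED;                                // rules 1 and 4
    if (laser == FREE && camera == FREE) return FREE;   // rule 2
    return UNIDENTIFIED;                                // neither sensor is sure
}

const double NO_READING = -1.0;   // e.g. r_laser in regions beyond 4 m
const double DR = 0.05;           // illustrative stand-in for Δr (metres)

// fuseRange transcribes the Fig. 6 pseudo-algorithm: returns the fused
// range for a cell, or NO_READING when the cell stays unidentified.
double fuseRange(double rLaser, double rCamera) {
    bool hasLaser = rLaser >= 0.0, hasCamera = rCamera >= 0.0;
    if (hasLaser && hasCamera && std::fabs(rLaser - rCamera) < DR)
        return rLaser;                        // agreement: keep the laser value
    if (!hasLaser && hasCamera) return rCamera;
    if (hasLaser && !hasCamera) return rLaser;
    return NO_READING;                        // readings disagree: unidentified
}
```

For instance, a laser reading of 1.00 m and a camera reading of 1.01 m differ by 0.01 m, less than Δr, so the cell keeps the more accurate laser range, matching the worked example in the text.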
Pseudo-algorithm for a cell of the occupancy grid:

    if ( |r_laser - r_camera| < Δr )
        r_cell = r_laser;
    else if ( r_laser doesn't exist )
        r_cell = r_camera;
    else if ( r_camera doesn't exist )
        r_cell = r_laser;
    else
        r_cell is unidentified;

Fig. 6

where r_cell is the final range value for the cell, to be used by the obstacle avoidance algorithm; Δr is an experimentally determined value; and r_laser and r_camera are the range values given for an obstacle by the laser and the camera respectively. r_laser does not exist in regions beyond 4 m. For example, if the laser detects an object at 1 m and the camera detects it at 1.01 m, the difference of 0.01 m can be considered negligible, and the corresponding cells of the occupancy grid (which show the object at 1.0 m) are marked as occupied (1).

IV. HARDWARE SETUP AND EXPERIMENTAL RESULTS

The chassis of the robot is made of welded aluminum bars and sheet metal, and the wheels are powered by two QuickSilver servo motors. Hokuyo's URG-04LX and the BumbleBee stereo vision camera are the sensors installed for obstacle detection and avoidance. The obstacle avoidance program is written in C++ in Microsoft Visual Studio 9.0; it uses the PGR library and Intel's OpenCV library for building the navigation map. The proposed techniques and algorithms were tested successfully on Lakshya, an IGV platform conceptualized and developed in the Innovations Lab, Delhi College of

Engineering, in both indoor and outdoor environments. The robot was able to successfully identify and avoid stationary and dynamic objects of various shapes and configurations. In Fig. 7, the laser range has been limited to 2.5 m for the experiment, so the obstacle (a box) is detected only by the camera and not by the laser. In Figs. 7 and 8, only the LRF detects the helmet, kept at the right of the robot, as an obstacle, since it lies outside the FOV of the camera. In Fig. 9, the camera detects only part of the obstacle, as it lies partly outside the camera's field of view, whereas the laser detects the whole of it. In Fig. 10, the robot has turned to avoid the obstacle; the obstacle has therefore moved out of the camera's FOV and is detected only by the laser scan.

Fig. 7: (clockwise from top left) obstacle; 2-D map by the laser (with the laser range set to 2.5 m); 2-D map by the stereo vision camera; sensor fusion (occupancy grid).

Fig. 8: (clockwise from top left) obstacle; laser 2-D cost map; 2-D camera vision map; fused map.

Fig. 9: (clockwise from top left) obstacle; 2-D map by the laser; 2-D navigation map by the stereo vision camera; sensor fusion (occupancy grid).

Fig. 10: (clockwise from top left) obstacle detection; 2-D map by the laser; 2-D map by the stereo vision camera, with the obstacle outside its view; sensor fusion (occupancy grid).

V. CONCLUSION AND FUTURE WORK

In this paper we have presented a method of fusing data from an LRF and a stereo vision camera to generate 2-D occupancy grids for obstacle detection and avoidance. As the experimental results illustrate, both laser and camera data are necessary for successful path navigation: the accuracy of the laser complements the 3-D imaging capability of the stereo vision camera. Our paper focuses mainly on reducing the computational burden of the fusion process and on the accurate detection and avoidance of obstacles in the environment of a mobile robot. Future work will focus on data-fusion techniques based on Bayesian inference and probabilistic reasoning, as they cope better with the uncertainties present in the environment.

REFERENCES

[1] S. Thrun, D. Fox, and W. Burgard, "A real-time algorithm for mobile robot mapping with application to multi robot and 3D mapping," in Proceedings of the IEEE Int. Conf. on Robotics and Automation (ICRA 00), USA.
[2] Haris Baltzakis, Antonis Argyros, Panos Trahanias, "Fusion of laser and visual data for robot motion planning and collision avoidance," Machine Vision and Applications (2003) 15.
[3] Mathias Perrollaz, Raphael Labayrade, Cyril Royere, Nicolas Hautiere, Didier Aubert, "Long Range Obstacle Detection Using Laser Scanner and Stereovision," in Intelligent Vehicles Symposium 2006, June 13-15, 2006, Tokyo, Japan.
[4] C. Stiller, J. Hipp, C. Rossig, A. Ewald, "Multisensor obstacle detection and tracking," Image and Vision Computing, Volume 18, Issue 5, April 2000.
[5] Romuald Aufrere, Jay Gowdy, Christoph Mertz, Chuck Thorpe, Chieh-Chih Wang, Teruko Yata, "Perception for collision avoidance and autonomous driving," Mechatronics, Volume 13, Issue 10, December 2003.
[6] Zhuoyun Zhang, Chunping Hou, Lili Shen, Jiachen Yang, "An Objective Evaluation for Disparity Map based on the Disparity Gradient and Disparity Acceleration," 2009 International Conference on Information Technology and Computer Science.
[7] K. L. Boyer, D. M. Wuescher, and S. Sarkar, "Dynamic Edge Warping: An Experimental System for Recovering Disparity Maps in Weakly Constrained Systems," IEEE Transactions on Systems, Man, and Cybernetics, Vol. 21, No. 1, January/February 1991.
[8] R. Labayrade, D. Aubert, and J.P. Tarel, "Real time obstacle detection on non flat road geometry through V-disparity representation," in IEEE Intelligent Vehicle Symposium, Versailles, June 2002.
[9] Alper Yilmaz, "Sensor Fusion in Computer Vision," IEEE Urban Remote Sensing Joint Event, April 2007, pages 1-5.
[10] S. Nedevschi, R. Danescu, D. Frentiu, T. Marita, F. Oniga, C. Pocol, R. Schmidt, T. Graf, "High Accuracy Stereo Vision System for Far Distance Obstacle Detection," IEEE Intelligent Vehicle Symposium, Parma, Italy, 2004.
[11] M.A. Fischler and R.C. Bolles, "Random Sample Consensus: A paradigm for model fitting with applications to image analysis and automated cartography," Communications of the ACM, 1981, 24(6).
[12] Saurav Kumar, "Binocular Stereo Vision Based Obstacle Avoidance Algorithm for Autonomous Mobile Robots," IEEE Advance Computing Conference 2009.
[13] Iwan Ulrich and Johann Borenstein, "VFH+: Reliable Obstacle Avoidance for Fast Mobile Robots," in 1998 IEEE International Conference on Robotics and Automation.


More information

Robot localization method based on visual features and their geometric relationship

Robot localization method based on visual features and their geometric relationship , pp.46-50 http://dx.doi.org/10.14257/astl.2015.85.11 Robot localization method based on visual features and their geometric relationship Sangyun Lee 1, Changkyung Eem 2, and Hyunki Hong 3 1 Department

More information

Team Description Paper Team AutonOHM

Team Description Paper Team AutonOHM Team Description Paper Team AutonOHM Jon Martin, Daniel Ammon, Helmut Engelhardt, Tobias Fink, Tobias Scholz, and Marco Masannek University of Applied Science Nueremberg Georg-Simon-Ohm, Kesslerplatz 12,

More information

Neural Networks for Obstacle Avoidance

Neural Networks for Obstacle Avoidance Neural Networks for Obstacle Avoidance Joseph Djugash Robotics Institute Carnegie Mellon University Pittsburgh, PA 15213 josephad@andrew.cmu.edu Bradley Hamner Robotics Institute Carnegie Mellon University

More information

A Comparison between Active and Passive 3D Vision Sensors: BumblebeeXB3 and Microsoft Kinect

A Comparison between Active and Passive 3D Vision Sensors: BumblebeeXB3 and Microsoft Kinect A Comparison between Active and Passive 3D Vision Sensors: BumblebeeXB3 and Microsoft Kinect Diana Beltran and Luis Basañez Technical University of Catalonia, Barcelona, Spain {diana.beltran,luis.basanez}@upc.edu

More information

Spring Localization II. Roland Siegwart, Margarita Chli, Martin Rufli. ASL Autonomous Systems Lab. Autonomous Mobile Robots

Spring Localization II. Roland Siegwart, Margarita Chli, Martin Rufli. ASL Autonomous Systems Lab. Autonomous Mobile Robots Spring 2016 Localization II Localization I 25.04.2016 1 knowledge, data base mission commands Localization Map Building environment model local map position global map Cognition Path Planning path Perception

More information

Efficient Techniques for Dynamic Vehicle Detection

Efficient Techniques for Dynamic Vehicle Detection Efficient Techniques for Dynamic Vehicle Detection Anna Petrovskaya and Sebastian Thrun Computer Science Department Stanford University Stanford, California 94305, USA { anya, thrun }@cs.stanford.edu Summary.

More information

3D Digitization of Human Foot Based on Computer Stereo Vision Combined with KINECT Sensor Hai-Qing YANG a,*, Li HE b, Geng-Xin GUO c and Yong-Jun XU d

3D Digitization of Human Foot Based on Computer Stereo Vision Combined with KINECT Sensor Hai-Qing YANG a,*, Li HE b, Geng-Xin GUO c and Yong-Jun XU d 2017 International Conference on Mechanical Engineering and Control Automation (ICMECA 2017) ISBN: 978-1-60595-449-3 3D Digitization of Human Foot Based on Computer Stereo Vision Combined with KINECT Sensor

More information

VISION-BASED PERCEPTION AND SENSOR DATA INTEGRATION FOR A PLANETARY EXPLORATION ROVER

VISION-BASED PERCEPTION AND SENSOR DATA INTEGRATION FOR A PLANETARY EXPLORATION ROVER VISION-BASED PERCEPTION AND SENSOR DATA INTEGRATION FOR A PLANETARY EXPLORATION ROVER Zereik E. 1, Biggio A. 2, Merlo A. 2, and Casalino G. 1 1 DIST, University of Genoa, Via Opera Pia 13, 16145 Genoa,

More information

Real-Time Human Detection using Relational Depth Similarity Features

Real-Time Human Detection using Relational Depth Similarity Features Real-Time Human Detection using Relational Depth Similarity Features Sho Ikemura, Hironobu Fujiyoshi Dept. of Computer Science, Chubu University. Matsumoto 1200, Kasugai, Aichi, 487-8501 Japan. si@vision.cs.chubu.ac.jp,

More information

Epipolar geometry-based ego-localization using an in-vehicle monocular camera

Epipolar geometry-based ego-localization using an in-vehicle monocular camera Epipolar geometry-based ego-localization using an in-vehicle monocular camera Haruya Kyutoku 1, Yasutomo Kawanishi 1, Daisuke Deguchi 1, Ichiro Ide 1, Hiroshi Murase 1 1 : Nagoya University, Japan E-mail:

More information

AC : MEASURING AND MODELING OF A 3-D ROAD SURFACE

AC : MEASURING AND MODELING OF A 3-D ROAD SURFACE AC 2008-2782: MEASURING AND MODELING OF A 3-D ROAD SURFACE Pramod Kumar, University of Louisiana at Lafayette Pavel Ikonomov, Western Michigan University Suren Dwivedi, University of Louisiana-Lafayette

More information

Indoor Positioning System Based on Distributed Camera Sensor Networks for Mobile Robot

Indoor Positioning System Based on Distributed Camera Sensor Networks for Mobile Robot Indoor Positioning System Based on Distributed Camera Sensor Networks for Mobile Robot Yonghoon Ji 1, Atsushi Yamashita 1, and Hajime Asama 1 School of Engineering, The University of Tokyo, Japan, t{ji,

More information

COMPARATIVE STUDY OF DIFFERENT APPROACHES FOR EFFICIENT RECTIFICATION UNDER GENERAL MOTION

COMPARATIVE STUDY OF DIFFERENT APPROACHES FOR EFFICIENT RECTIFICATION UNDER GENERAL MOTION COMPARATIVE STUDY OF DIFFERENT APPROACHES FOR EFFICIENT RECTIFICATION UNDER GENERAL MOTION Mr.V.SRINIVASA RAO 1 Prof.A.SATYA KALYAN 2 DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING PRASAD V POTLURI SIDDHARTHA

More information

arxiv: v1 [cs.cv] 28 Sep 2018

arxiv: v1 [cs.cv] 28 Sep 2018 Camera Pose Estimation from Sequence of Calibrated Images arxiv:1809.11066v1 [cs.cv] 28 Sep 2018 Jacek Komorowski 1 and Przemyslaw Rokita 2 1 Maria Curie-Sklodowska University, Institute of Computer Science,

More information

High-speed Three-dimensional Mapping by Direct Estimation of a Small Motion Using Range Images

High-speed Three-dimensional Mapping by Direct Estimation of a Small Motion Using Range Images MECATRONICS - REM 2016 June 15-17, 2016 High-speed Three-dimensional Mapping by Direct Estimation of a Small Motion Using Range Images Shinta Nozaki and Masashi Kimura School of Science and Engineering

More information

INTELLIGENT INDOOR MOBILE ROBOT NAVIGATION USING STEREO VISION

INTELLIGENT INDOOR MOBILE ROBOT NAVIGATION USING STEREO VISION INTELLIGENT INDOOR MOBILE ROBOT NAVIGATION USING STEREO VISION Arjun B Krishnan 1 and Jayaram Kollipara 2 1 Electronics and Communication Dept., Amrita Vishwa Vidyapeetham, Kerala, India abkrishna39@gmail.com

More information

Transactions on Information and Communications Technologies vol 16, 1996 WIT Press, ISSN

Transactions on Information and Communications Technologies vol 16, 1996 WIT Press,   ISSN ransactions on Information and Communications echnologies vol 6, 996 WI Press, www.witpress.com, ISSN 743-357 Obstacle detection using stereo without correspondence L. X. Zhou & W. K. Gu Institute of Information

More information

Proc. 14th Int. Conf. on Intelligent Autonomous Systems (IAS-14), 2016

Proc. 14th Int. Conf. on Intelligent Autonomous Systems (IAS-14), 2016 Proc. 14th Int. Conf. on Intelligent Autonomous Systems (IAS-14), 2016 Outdoor Robot Navigation Based on View-based Global Localization and Local Navigation Yohei Inoue, Jun Miura, and Shuji Oishi Department

More information

Dominant plane detection using optical flow and Independent Component Analysis

Dominant plane detection using optical flow and Independent Component Analysis Dominant plane detection using optical flow and Independent Component Analysis Naoya OHNISHI 1 and Atsushi IMIYA 2 1 School of Science and Technology, Chiba University, Japan Yayoicho 1-33, Inage-ku, 263-8522,

More information

Stereoscopic Vision System for reconstruction of 3D objects

Stereoscopic Vision System for reconstruction of 3D objects Stereoscopic Vision System for reconstruction of 3D objects Robinson Jimenez-Moreno Professor, Department of Mechatronics Engineering, Nueva Granada Military University, Bogotá, Colombia. Javier O. Pinzón-Arenas

More information

AUTOMATIC PARKING OF SELF-DRIVING CAR BASED ON LIDAR

AUTOMATIC PARKING OF SELF-DRIVING CAR BASED ON LIDAR AUTOMATIC PARKING OF SELF-DRIVING CAR BASED ON LIDAR Bijun Lee a, Yang Wei a, I. Yuan Guo a a State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University,

More information

Detection and Motion Planning for Roadside Parked Vehicles at Long Distance

Detection and Motion Planning for Roadside Parked Vehicles at Long Distance 2015 IEEE Intelligent Vehicles Symposium (IV) June 28 - July 1, 2015. COEX, Seoul, Korea Detection and Motion Planning for Roadside Parked Vehicles at Long Distance Xue Mei, Naoki Nagasaka, Bunyo Okumura,

More information

Stereo vision. Many slides adapted from Steve Seitz

Stereo vision. Many slides adapted from Steve Seitz Stereo vision Many slides adapted from Steve Seitz What is stereo vision? Generic problem formulation: given several images of the same object or scene, compute a representation of its 3D shape What is

More information

Projector Calibration for Pattern Projection Systems

Projector Calibration for Pattern Projection Systems Projector Calibration for Pattern Projection Systems I. Din *1, H. Anwar 2, I. Syed 1, H. Zafar 3, L. Hasan 3 1 Department of Electronics Engineering, Incheon National University, Incheon, South Korea.

More information

Proceedings of the 6th Int. Conf. on Computer Analysis of Images and Patterns. Direct Obstacle Detection and Motion. from Spatio-Temporal Derivatives

Proceedings of the 6th Int. Conf. on Computer Analysis of Images and Patterns. Direct Obstacle Detection and Motion. from Spatio-Temporal Derivatives Proceedings of the 6th Int. Conf. on Computer Analysis of Images and Patterns CAIP'95, pp. 874-879, Prague, Czech Republic, Sep 1995 Direct Obstacle Detection and Motion from Spatio-Temporal Derivatives

More information

DEALING WITH SENSOR ERRORS IN SCAN MATCHING FOR SIMULTANEOUS LOCALIZATION AND MAPPING

DEALING WITH SENSOR ERRORS IN SCAN MATCHING FOR SIMULTANEOUS LOCALIZATION AND MAPPING Inženýrská MECHANIKA, roč. 15, 2008, č. 5, s. 337 344 337 DEALING WITH SENSOR ERRORS IN SCAN MATCHING FOR SIMULTANEOUS LOCALIZATION AND MAPPING Jiří Krejsa, Stanislav Věchet* The paper presents Potential-Based

More information

Overview. EECS 124, UC Berkeley, Spring 2008 Lecture 23: Localization and Mapping. Statistical Models

Overview. EECS 124, UC Berkeley, Spring 2008 Lecture 23: Localization and Mapping. Statistical Models Introduction ti to Embedded dsystems EECS 124, UC Berkeley, Spring 2008 Lecture 23: Localization and Mapping Gabe Hoffmann Ph.D. Candidate, Aero/Astro Engineering Stanford University Statistical Models

More information

The XH-map algorithm: A method to process stereo video to produce a real-time obstacle map.

The XH-map algorithm: A method to process stereo video to produce a real-time obstacle map. The XH-map algorithm: A method to process stereo video to produce a real-time obstacle map. Donald Rosselot and Ernest L. Hall Center for Robotics Research Department of Mechanical, Industrial, and Nuclear

More information

3D Environment Reconstruction

3D Environment Reconstruction 3D Environment Reconstruction Using Modified Color ICP Algorithm by Fusion of a Camera and a 3D Laser Range Finder The 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems October 11-15,

More information

Depth Estimation Using Monocular Camera

Depth Estimation Using Monocular Camera Depth Estimation Using Monocular Camera Apoorva Joglekar #, Devika Joshi #, Richa Khemani #, Smita Nair *, Shashikant Sahare # # Dept. of Electronics and Telecommunication, Cummins College of Engineering

More information

AMR 2011/2012: Final Projects

AMR 2011/2012: Final Projects AMR 2011/2012: Final Projects 0. General Information A final project includes: studying some literature (typically, 1-2 papers) on a specific subject performing some simulations or numerical tests on an

More information

Real-time Road Surface Mapping Using Stereo Matching, V-Disparity and Machine Learning

Real-time Road Surface Mapping Using Stereo Matching, V-Disparity and Machine Learning Real-time Road Surface Mapping Using Stereo Matching, V-Disparity and Machine Learning Vítor B. Azevedo, Alberto F. De Souza, Lucas P. Veronese, Claudine Badue and Mariella Berger Abstract We present and

More information

Spring 2016 :: :: Robot Autonomy :: Team 7 Motion Planning for Autonomous All-Terrain Vehicle

Spring 2016 :: :: Robot Autonomy :: Team 7 Motion Planning for Autonomous All-Terrain Vehicle Spring 2016 :: 16662 :: Robot Autonomy :: Team 7 Motion Planning for Autonomous All-Terrain Vehicle Guan-Horng Liu, Samuel Wang, Shu-Kai Lin, Chris Wang, Tiffany May Advisor : Mr. George Kantor OUTLINE

More information

2 OVERVIEW OF RELATED WORK

2 OVERVIEW OF RELATED WORK Utsushi SAKAI Jun OGATA This paper presents a pedestrian detection system based on the fusion of sensors for LIDAR and convolutional neural network based image classification. By using LIDAR our method

More information

Building Reliable 2D Maps from 3D Features

Building Reliable 2D Maps from 3D Features Building Reliable 2D Maps from 3D Features Dipl. Technoinform. Jens Wettach, Prof. Dr. rer. nat. Karsten Berns TU Kaiserslautern; Robotics Research Lab 1, Geb. 48; Gottlieb-Daimler- Str.1; 67663 Kaiserslautern;

More information

Robotics and Autonomous Systems. On-line road boundary modeling with multiple sensory features, flexible road model, and particle filter

Robotics and Autonomous Systems. On-line road boundary modeling with multiple sensory features, flexible road model, and particle filter Robotics and Autonomous Systems 59 (2011) 274 284 Contents lists available at ScienceDirect Robotics and Autonomous Systems journal homepage: www.elsevier.com/locate/robot On-line road boundary modeling

More information

Curb Detection Based on a Multi-Frame Persistence Map for Urban Driving Scenarios

Curb Detection Based on a Multi-Frame Persistence Map for Urban Driving Scenarios Curb Detection Based on a Multi-Frame Persistence Map for Urban Driving Scenarios Florin Oniga, Sergiu Nedevschi, and Marc Michael Meinecke Abstract An approach for the detection of straight and curved

More information

LUMS Mine Detector Project

LUMS Mine Detector Project LUMS Mine Detector Project Using visual information to control a robot (Hutchinson et al. 1996). Vision may or may not be used in the feedback loop. Visual (image based) features such as points, lines

More information

Sensory Augmentation for Increased Awareness of Driving Environment

Sensory Augmentation for Increased Awareness of Driving Environment Sensory Augmentation for Increased Awareness of Driving Environment Pranay Agrawal John M. Dolan Dec. 12, 2014 Technologies for Safe and Efficient Transportation (T-SET) UTC The Robotics Institute Carnegie

More information

Introduction to Mobile Robotics Path Planning and Collision Avoidance

Introduction to Mobile Robotics Path Planning and Collision Avoidance Introduction to Mobile Robotics Path Planning and Collision Avoidance Wolfram Burgard, Cyrill Stachniss, Maren Bennewitz, Giorgio Grisetti, Kai Arras 1 Motion Planning Latombe (1991): eminently necessary

More information

Multiple View Geometry

Multiple View Geometry Multiple View Geometry Martin Quinn with a lot of slides stolen from Steve Seitz and Jianbo Shi 15-463: Computational Photography Alexei Efros, CMU, Fall 2007 Our Goal The Plenoptic Function P(θ,φ,λ,t,V

More information

Fast Local Planner for Autonomous Helicopter

Fast Local Planner for Autonomous Helicopter Fast Local Planner for Autonomous Helicopter Alexander Washburn talexan@seas.upenn.edu Faculty advisor: Maxim Likhachev April 22, 2008 Abstract: One challenge of autonomous flight is creating a system

More information

A real-time Road Boundary Detection Algorithm Based on Driverless Cars Xuekui ZHU. , Meijuan GAO2, b, Shangnian LI3, c

A real-time Road Boundary Detection Algorithm Based on Driverless Cars Xuekui ZHU. , Meijuan GAO2, b, Shangnian LI3, c 4th National Conference on Electrical, Electronics and Computer Engineering (NCEECE 2015) A real-time Road Boundary Detection Algorithm Based on Driverless Cars Xuekui ZHU 1, a, Meijuan GAO2, b, Shangnian

More information

DISTANCE MEASUREMENT USING STEREO VISION

DISTANCE MEASUREMENT USING STEREO VISION DISTANCE MEASUREMENT USING STEREO VISION Sheetal Nagar 1, Jitendra Verma 2 1 Department of Electronics and Communication Engineering, IIMT, Greater Noida (India) 2 Department of computer science Engineering,

More information

Realtime Omnidirectional Stereo for Obstacle Detection and Tracking in Dynamic Environments

Realtime Omnidirectional Stereo for Obstacle Detection and Tracking in Dynamic Environments Proc. 2001 IEEE/RSJ Int. Conf. on Intelligent Robots and Systems pp. 31-36, Maui, Hawaii, Oct./Nov. 2001. Realtime Omnidirectional Stereo for Obstacle Detection and Tracking in Dynamic Environments Hiroshi

More information

A development of PSD sensor system for navigation and map building in the indoor environment

A development of PSD sensor system for navigation and map building in the indoor environment A development of PSD sensor system for navigation and map building in the indoor environment Tae Cheol Jeong*, Chang Hwan Lee **, Jea yong Park ***, Woong keun Hyun **** Department of Electronics Engineering,

More information

Mini Survey Paper (Robotic Mapping) Ryan Hamor CPRE 583 September 2011

Mini Survey Paper (Robotic Mapping) Ryan Hamor CPRE 583 September 2011 Mini Survey Paper (Robotic Mapping) Ryan Hamor CPRE 583 September 2011 Introduction The goal of this survey paper is to examine the field of robotic mapping and the use of FPGAs in various implementations.

More information

Advanced Robotics Path Planning & Navigation

Advanced Robotics Path Planning & Navigation Advanced Robotics Path Planning & Navigation 1 Agenda Motivation Basic Definitions Configuration Space Global Planning Local Planning Obstacle Avoidance ROS Navigation Stack 2 Literature Choset, Lynch,

More information

Colorado School of Mines. Computer Vision. Professor William Hoff Dept of Electrical Engineering &Computer Science.

Colorado School of Mines. Computer Vision. Professor William Hoff Dept of Electrical Engineering &Computer Science. Professor William Hoff Dept of Electrical Engineering &Computer Science http://inside.mines.edu/~whoff/ 1 Stereo Vision 2 Inferring 3D from 2D Model based pose estimation single (calibrated) camera > Can

More information

A Summary of Projective Geometry

A Summary of Projective Geometry A Summary of Projective Geometry Copyright 22 Acuity Technologies Inc. In the last years a unified approach to creating D models from multiple images has been developed by Beardsley[],Hartley[4,5,9],Torr[,6]

More information

Safe Robot Driving in Cluttered Environments Abstract 1 The Need For 360 Degree Safeguarding

Safe Robot Driving in Cluttered Environments Abstract 1 The Need For 360 Degree Safeguarding Safe Robot Driving in Cluttered Environments Chuck Thorpe, Justin Carlson, Dave Duggins, Jay Gowdy, Rob MacLachlan, Christoph Mertz, Arne Suppe, Bob Wang, The Robotics Institute, Carnegie Mellon University,

More information

Structured Light II. Thanks to Ronen Gvili, Szymon Rusinkiewicz and Maks Ovsjanikov

Structured Light II. Thanks to Ronen Gvili, Szymon Rusinkiewicz and Maks Ovsjanikov Structured Light II Johannes Köhler Johannes.koehler@dfki.de Thanks to Ronen Gvili, Szymon Rusinkiewicz and Maks Ovsjanikov Introduction Previous lecture: Structured Light I Active Scanning Camera/emitter

More information

Implementation of Odometry with EKF for Localization of Hector SLAM Method

Implementation of Odometry with EKF for Localization of Hector SLAM Method Implementation of Odometry with EKF for Localization of Hector SLAM Method Kao-Shing Hwang 1 Wei-Cheng Jiang 2 Zuo-Syuan Wang 3 Department of Electrical Engineering, National Sun Yat-sen University, Kaohsiung,

More information

STEREO-VISION SYSTEM PERFORMANCE ANALYSIS

STEREO-VISION SYSTEM PERFORMANCE ANALYSIS STEREO-VISION SYSTEM PERFORMANCE ANALYSIS M. Bertozzi, A. Broggi, G. Conte, and A. Fascioli Dipartimento di Ingegneria dell'informazione, Università di Parma Parco area delle Scienze, 181A I-43100, Parma,

More information

Range Sensors (time of flight) (1)

Range Sensors (time of flight) (1) Range Sensors (time of flight) (1) Large range distance measurement -> called range sensors Range information: key element for localization and environment modeling Ultrasonic sensors, infra-red sensors

More information

Semantic Mapping and Reasoning Approach for Mobile Robotics

Semantic Mapping and Reasoning Approach for Mobile Robotics Semantic Mapping and Reasoning Approach for Mobile Robotics Caner GUNEY, Serdar Bora SAYIN, Murat KENDİR, Turkey Key words: Semantic mapping, 3D mapping, probabilistic, robotic surveying, mine surveying

More information

Spring Localization II. Roland Siegwart, Margarita Chli, Juan Nieto, Nick Lawrance. ASL Autonomous Systems Lab. Autonomous Mobile Robots

Spring Localization II. Roland Siegwart, Margarita Chli, Juan Nieto, Nick Lawrance. ASL Autonomous Systems Lab. Autonomous Mobile Robots Spring 2018 Localization II Localization I 16.04.2018 1 knowledge, data base mission commands Localization Map Building environment model local map position global map Cognition Path Planning path Perception

More information

Construction of Semantic Maps for Personal Mobility Robots in Dynamic Outdoor Environments

Construction of Semantic Maps for Personal Mobility Robots in Dynamic Outdoor Environments Construction of Semantic Maps for Personal Mobility Robots in Dynamic Outdoor Environments Naotaka Hatao, Satoshi Kagami, Ryo Hanai, Kimitoshi Yamazaki and Masayuki Inaba Abstract In this paper, a construction

More information

On Board 6D Visual Sensors for Intersection Driving Assistance Systems

On Board 6D Visual Sensors for Intersection Driving Assistance Systems On Board 6D Visual Sensors for Intersection Driving Assistance Systems S. Nedevschi, T. Marita, R. Danescu, F. Oniga, S. Bota, I. Haller, C. Pantilie, M. Drulea, C. Golban Sergiu.Nedevschi@cs.utcluj.ro

More information