Jo-Car2 Autonomous Mode. Path Planning (Cost Matrix Algorithm)


Chapter 8.2 Jo-Car2 Autonomous Mode: Path Planning (Cost Matrix Algorithm)

Introduction:

In order to achieve its mission and reach the GPS goal safely, without crashing into obstacles or leaving the lane, Jo-Car2 uses three main sources of information: the GPS with magnetometer, the camera, and the laser range finder (LRF). By combining the GPS and magnetometer readings, we can determine the desired direction of the robot. The camera is used to detect the path lines as well as the obstacles, by applying the image processing algorithm described in Chapter 9, while the laser range finder is also used to detect obstacles.

[Block diagram: Camera -> Image Processing and LRF -> Mapping feed the Cost Matrix, which drives Path Selection and Motors Driving; GPS and Magnetometer feed Direction Detection within the Path Planning Algorithm.]

The three sensors are then fused into one source of information, the Cost Matrix, on which Jo-Car2 essentially depends for its artificial intelligence and path planning, as shown in the block diagram above.

The Cost Matrix:

The cost matrix is a 7x7 matrix whose values represent the risk at each position on the camera image plane; i.e. it has large values where obstacles or path lines exist, and low values at safe positions. Each sequence of matrix elements represents a path, which has its own cost (the sum of its element values); the path of lowest cost is the best path. For a thorough understanding of this method, a step by step explanation follows.

Step by Step Explanation:

1. Starting from the original image captured by the camera, the first step is to generate a binary image that includes only the obstacles and path lines. This is achieved by detecting and removing the green grass from the image, using an image processing algorithm based on the ratio between the red, green, and blue channels of each pixel. (For more details about the image processing algorithm refer to Chapter 9.)

[Example figure: original camera image and the resulting binary image after grass removal.]

2. The second step is to detect and remove noise. This is also done with image processing, based on the size of the white regions: small regions are considered noise and are removed. (More details can be found in the image processing chapter, Chapter 9.)

[Example figure: binary image before and after noise removal.]
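Steps 1 and 2 can be sketched as follows. This is a minimal illustration, not the actual Chapter 9 code: the green-dominance threshold and the minimum region size are assumed values.

```python
import numpy as np
from scipy import ndimage

def remove_grass(rgb, green_ratio=1.15):
    """Step 1: binary image (255 = line/obstacle, 0 = grass/safe).

    A pixel is treated as grass when its green channel clearly dominates
    the red and blue channels; green_ratio is an assumed threshold.
    """
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    grass = (g > green_ratio * r) & (g > green_ratio * b)
    return np.where(grass, 0, 255).astype(np.uint8)

def remove_noise(binary, min_area=50):
    """Step 2: drop white regions smaller than min_area pixels (noise)."""
    labels, n = ndimage.label(binary > 0)
    areas = ndimage.sum(binary > 0, labels, index=np.arange(1, n + 1))
    big = np.nonzero(areas >= min_area)[0] + 1   # labels of regions to keep
    return np.where(np.isin(labels, big), 255, 0).astype(np.uint8)
```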

Up to this point, we have the final binary image that contains only the path lines and the obstacles, without any noise.

3. Now we can generate the cost matrix, a 7x7 matrix containing the average of the pixel intensities in each cell, as described previously; see the figure below. Notice that at obstacle positions (totally white) the matrix value is 255, the highest intensity, while at safe positions (totally black) the value is zero, the lowest intensity. This matrix is the primary framework to which all other sensors will be fused, and on which the robot will depend in planning its path.

[Figure: the binary image divided into a 7x7 grid with the resulting cost matrix values.]

4. The next step is to fuse the other sensors into the cost matrix. Sensor fusion here means adding the laser range finder (LRF), GPS, and magnetometer information into the path matrix. First, the LRF is fused by mapping the laser data (points on the horizontal ground plane) onto the camera image plane. This is done using experimental calibration methods; more details can be found in the LRF Mapping chapter, Chapter 10.
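A minimal sketch of step 3, assuming the matrix cells simply tile the binary image into equal blocks (the exact grid alignment used by Jo-Car2 is not specified here):

```python
import numpy as np

def build_cost_matrix(binary, n=7):
    """Step 3: average the pixel intensities over an n x n grid of blocks.

    A cell that is totally white (obstacle/line) averages to 255; a cell
    that is totally black (safe) averages to 0.
    """
    h, w = binary.shape
    cost = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            block = binary[i * h // n:(i + 1) * h // n,
                           j * w // n:(j + 1) * w // n]
            cost[i, j] = block.mean()
    return cost
```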

After determining the positions of the LRF points on the image plane, and consequently their positions in the matrix, an extra cost of 100 is added to each such position; see the following figure.

[Figure: cost matrix after adding the LRF cost of 100 to the cells containing laser points.]

The GPS direction can then be fused into the matrix by detecting wrong-way motion, i.e. by comparing the difference between the current position and the next GPS waypoint with the history data.

5. We now reach the path evaluation step, in which pre-defined paths are evaluated in order to determine the best one. Here we show a sample of five paths; see the following figure. Adding the cost values of each cell along the forward path gives a sum of 468, which represents the total cost of following the straight forward path. Applying the same procedure to the other four paths gives the results shown in the following figures.

[Figures: the five sample paths overlaid on the cost matrix with their total costs.]
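A sketch of the LRF fusion in step 4, assuming the Chapter 10 mapping has already turned the laser points into a list of (row, col) matrix indices (the projection itself is not reproduced here):

```python
import numpy as np

LRF_COST = 100  # extra cost added for each matrix cell containing a mapped LRF point

def fuse_lrf(cost, lrf_cells):
    """Step 4: add LRF_COST to every matrix cell an LRF point maps to.

    lrf_cells is assumed to be an iterable of (row, col) indices produced
    by the LRF-to-image calibration described in Chapter 10.
    """
    fused = cost.copy()
    for row, col in set(lrf_cells):   # each occupied cell is penalized once
        fused[row, col] += LRF_COST
    return fused
```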

Notice that the straight-right path costs 498, while the straight-left path costs 159; the right-straight path costs 713, and the left-straight path costs 45.

6. The last step in our path planning algorithm is choosing the best path, which is the lowest-cost path. In our sample image, it is clear that the left-straight path is the best path, with the lowest cost of 45.
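Steps 5 and 6 amount to summing the cells along each pre-defined path and taking the minimum. The cell sequences below are illustrative stand-ins; the actual paths used by Jo-Car2 are defined on its own matrix layout and are not listed in the text.

```python
import numpy as np

# Hypothetical pre-defined paths through the 7x7 matrix, as (row, col)
# cells from the bottom row (nearest the robot) upward.
PATHS = {
    "forward":        [(r, 3) for r in range(6, -1, -1)],
    "straight-left":  [(r, 3) for r in range(6, 3, -1)] + [(r, 2) for r in range(3, -1, -1)],
    "straight-right": [(r, 3) for r in range(6, 3, -1)] + [(r, 4) for r in range(3, -1, -1)],
    "left-straight":  [(6, 3), (6, 2)] + [(r, 1) for r in range(5, -1, -1)],
    "right-straight": [(6, 3), (6, 4)] + [(r, 5) for r in range(5, -1, -1)],
}

def best_path(cost):
    """Steps 5-6: total cost per path, then the path with the lowest sum."""
    totals = {name: sum(cost[r, c] for r, c in cells) for name, cells in PATHS.items()}
    return min(totals.items(), key=lambda item: item[1])
```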

After choosing the best path, the controller must convert it into the corresponding desired speeds for the two motors, according to the differential drive model explained in the actuators chapter, Chapter 6. The PID controller then guarantees that each motor runs at its desired speed.

Summary:

Jo-Car2 uses a path planning algorithm called the cost matrix algorithm. It depends on fusing all sensors into one 7x7 matrix, then evaluating alternative paths on it by their costs. The algorithm proceeds in six steps: (1) converting the original camera image into a binary image using the image processing algorithm, (2) detecting and removing noise, (3) generating the matrix by averaging pixel intensities, (4) fusing all sensors into the matrix, (5) evaluating the pre-defined paths, and finally (6) selecting the best path, the one with the lowest cost.
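As a closing sketch, the conversion from a chosen path to motor setpoints under the standard differential drive model might look like the following. The forward speed, turn rate, and wheel base are placeholder values, and the real mapping from the selected path to (v, omega) belongs to the Chapter 6 controller.

```python
def differential_speeds(v, omega, wheel_base=0.5):
    """Standard differential drive: split (v, omega) into wheel setpoints.

    v          forward speed in m/s
    omega      turn rate in rad/s (positive = turn left)
    wheel_base distance between the wheels in m (placeholder value)
    """
    v_left = v - omega * wheel_base / 2.0
    v_right = v + omega * wheel_base / 2.0
    return v_left, v_right

# Example: a "left-straight" path might map to a brief left turn followed
# by straight driving; these numbers are purely illustrative.
turn_left = differential_speeds(v=0.5, omega=0.6)
go_straight = differential_speeds(v=0.8, omega=0.0)
```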