Obstacle Avoidance (Local Path Planning)


6.2.2 Obstacle Avoidance (Local Path Planning)
The goal of obstacle avoidance algorithms is to avoid collisions with obstacles.
- Usually based on a local map
- Often implemented as a more or less independent task
- However, efficient obstacle avoidance should be optimal with respect to:
  - the overall goal
  - the actual speed and kinematics of the robot
  - the on-board sensors
  - the actual and future risk of collision
(Figure: observed obstacle, known obstacles (map), planned path, robot velocities v(t), ω(t))

6.2.2 Obstacle Avoidance: Bug1
- Follow along the obstacle to avoid it
- Fully circle each encountered obstacle
- Move to the point along the current obstacle's boundary that is closest to the goal
- Move toward the goal and repeat for any future encountered obstacle
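The "leave point" selection above is the heart of Bug1. A minimal sketch, assuming the circled obstacle boundary has been sampled into a list of (x, y) points (a hypothetical representation, not from the slides):

```python
import math

def bug1_leave_point(boundary_points, goal):
    """After fully circling an obstacle, Bug1 leaves the boundary from
    the sampled point that is closest to the goal."""
    return min(boundary_points, key=lambda p: math.dist(p, goal))

# Example: a square obstacle contour with the goal to the right
leave = bug1_leave_point([(0, 0), (1, 0), (1, 1), (0, 1)], (3, 1))
```

The robot then resumes goal-seeking from `leave`, repeating the procedure for each new obstacle it hits.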

6.2.2 Obstacle Avoidance: Bug2
- Follow the obstacle always on the left or right side
- Leave the obstacle when the direct connection between start and goal is crossed

Practical Implementation of Bug2
Two states of robot motion:
(1) moving toward the goal (GOALSEEK)
(2) moving around the contour of an obstacle (WALLFOLLOW)
Describe robot motion as a function of sensor values and the relative direction to the goal, and decide how to switch between these two states:

    while (!atGoal) {
        if (goalDist < goalThreshold) {
            // We're at the goal! Halt.
            halt();
        } else {
            forwardVel = ComputeTranslation(&sonars);
            if (robotState == GOALSEEK) {
                rotationVel = ComputeGoalSeekRot(goalAngle);
                if (ObstaclesInWay())
                    robotState = WALLFOLLOW;
            }
            if (robotState == WALLFOLLOW) {
                rotationVel = ComputeRightWallFollowRot(&sonars);
                if (!ObstaclesInWay())
                    robotState = GOALSEEK;
            }
            robotSetVelocity(forwardVel, rotationVel);
        }
    }

Practical Implementation of Bug2 (cont'd)
- ObstaclesInWay(): true whenever any sonar range reading in the direction of the goal (i.e., within 45° of the goal) is too short
- ComputeTranslation(): proportional to the largest range reading in the robot's approximate forward direction
  // Note the similarity to the potential field approach!
  - If minSonarFront (i.e., within 45° of the front) < min_dist, return 0
  - Else return min(max_velocity, minSonarFront − min_dist)
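The translation rule above can be sketched directly. This is a minimal Python version; the parameter values `min_dist` and `max_velocity` are illustrative, not from the slides:

```python
def compute_translation(min_sonar_front, min_dist=0.3, max_velocity=0.5):
    """ComputeTranslation as described above: stop when the closest
    frontal sonar return is inside the safety distance; otherwise move
    at a speed proportional to the free space ahead, capped at
    max_velocity."""
    if min_sonar_front < min_dist:
        return 0.0
    return min(max_velocity, min_sonar_front - min_dist)
```

With plenty of free space the robot drives at `max_velocity`; as it approaches an obstacle the commanded speed tapers linearly to zero.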

Practical Implementation of Bug2 (cont'd)
For computing rotation direction and speed, a popular method is: subtract the left and right range readings. The larger the difference, the faster the robot turns in the direction of the longer range readings.

    ComputeGoalSeekRot():  // returns rotational velocity
        if (abs(angle_to_goal) < PI/10)
            return 0
        else
            return (angle_to_goal * k)  // k is a gain

    ComputeRightWallFollowRot():  // returns rotational velocity
        // this is for a right wall follower
        if (max(minRightSonar, minLeftSonar) < min_dist)
            return hard_left_turn_value
        else
            desiredTurn = (hard_left_turn_value - minRightSonar) * 2
            // clamp desiredTurn into the proper range
            return desiredTurn

Pros/Cons of Bug2
Pros:
- Simple
- Easy to understand
- Popularly used
Cons:
- Does not take into account robot kinematics
- Since it only uses the most recent sensor values, it can be negatively impacted by noise
More complex algorithms (in the following) attempt to overcome these shortcomings.

6.2.2 Obstacle Avoidance: Vector Field Histogram (VFH) (Koren & Borenstein, ICRA 1990)
- Overcomes Bug2's limitation of only using the most recent sensor data by creating a local map of the environment around the robot
- The local map is a small occupancy grid, populated only by relatively recent sensor data
- Grid cell values are equivalent to the probability that there is an obstacle in that cell

How to calculate the probability that a cell is occupied? We need a sensor model to deal with uncertainty. Let's look at the approach for a sonar sensor.

Modeling a Common Sonar Sensor
(Figure: sonar beam with maximum range R and field of view β, divided into three regions)
- Region I: probably occupied
- Region II: probably empty
- Region III: unknown

How to Convert to Numerical Values?
- Need to translate the model (previous slide) into specific numerical values for each occupancy grid cell
- These values represent the probability that a cell is occupied (or empty), given a sensor scan, i.e., P(occupied | sensing)
- Three methods: Bayesian, Dempster-Shafer theory, HIMM (Histogrammic in Motion Mapping)
- We'll cover: Bayesian. We won't cover: Dempster-Shafer, HIMM

Bayesian: Most Popular Evidential Method
Approach: convert sensor readings into probabilities, then combine probabilities using Bayes' rule:

    P(A | B) = P(B | A) P(A) / P(B)

That is: Posterior = (Likelihood × Prior) / Normalizing constant
Pioneers of the approach: Elfes and Moravec at CMU in the 1980s

Review: Basic Probability Theory
Probability function: gives values from 0 to 1 indicating whether a particular event, H (hypothesis), has occurred.
For sonar sensing:
- Experiment: sending out an acoustic wave and measuring the time of flight
- Outcome: a range reading reporting whether the region being sensed is occupied or empty
- Hypotheses: H = {Occupied, Empty}
- Probability that H has really occurred: 0 ≤ P(H) ≤ 1
- Probability that H has not occurred: 1 − P(H)

Unconditional and Conditional Probabilities
Unconditional probability P(H):
- Probability of H; provides only a priori information
- For example, could give the known distribution of rocks in the environment, e.g., x% of the environment is covered by rocks
- For robotics, unconditional probabilities are not based on sensor readings
For robotics, we want the conditional probability P(H | s):
- Probability of H given s, e.g., P(Occupied | s) or P(Empty | s)
- These are based on sensor readings, s
- Note: P(H | s) + P(¬H | s) = 1.0

Probabilities for Occupancy Grids
For each grid[i][j] covered by a sensor scan, compute P(Occupied | s) and P(Empty | s), and store the tuple of the two probabilities:

    typedef struct {
        double occupied;  /* i.e., P(occupied | s) */
        double empty;     /* i.e., P(empty | s)    */
    } P;

    P occupancy_grid[rows][columns];

Recall: Modeling a Common Sonar Sensor to Get P(s | H)
(Figure: the same sonar model — maximum range R, field of view β)
- Region I: probably occupied
- Region II: probably empty
- Region III: unknown

Converting a Sonar Reading to Probability: Region I

    P(Occupied) = ((R − r)/R + (β − α)/β) / 2 × Max_occupied
    P(Empty)    = 1.0 − P(Occupied)

where:
- r is the distance to the grid element being updated
- α is the angle to the grid element being updated
- Max_occupied is the highest probability possible (e.g., 0.98) — we never know with certainty
Intuition: the nearer the grid element is to the origin of the sonar beam, and the closer it is to the acoustic axis, the higher the belief.
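The Region I formula above can be sketched as a small function. The default values for R, β, and Max_occupied match the examples used in these slides:

```python
def p_occupied_region1(r, alpha, R=10.0, beta=15.0, max_occupied=0.98):
    """Region I belief from the formula above:
    P(Occupied) = ((R - r)/R + (beta - alpha)/beta) / 2 * Max_occupied.
    Returns (P(Occupied), P(Empty)) for a cell at range r and angle
    alpha inside the sonar cone."""
    p_occ = ((R - r) / R + (beta - alpha) / beta) / 2.0 * max_occupied
    return p_occ, 1.0 - p_occ
```

A cell directly on the acoustic axis at the beam origin (r = 0, α = 0) gets the maximum belief, Max_occupied = 0.98.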

Converting a Sonar Reading to Probability: Region II

    P(Empty)    = ((R − r)/R + (β − α)/β) / 2
    P(Occupied) = 1.0 − P(Empty)

where r is the distance and α the angle to the grid element being updated. The same intuition applies: the nearer the grid element to the origin of the sonar beam, and the closer to the acoustic axis, the higher the belief. Note that here we allow the probability of being empty to equal 1.0.

Sonar Tolerance
Sonar range readings have resolution error, so a specific reading actually indicates a range of possible values. E.g., a reading of 0.87 meters actually means "within (0.82, 0.92) meters"; the tolerance in this case is 0.05 meters. The tolerance gives the width of Region I.

Tolerance in the Sonar Model
(Figure: the same sonar model — maximum range R, field of view β, Regions I–III; the tolerance determines the width of Region I)

Example: What Is the Value of a Grid Cell? (assume tolerance = 0.5)
Given: R = 10, β = 15°, sonar reading s = 6, cell at r = 3.5, α = 0.
Which region? Since 3.5 < (6.0 − 0.5), the cell lies in Region II.

    P(Empty)    = ((10 − 3.5)/10 + (15 − 0)/15) / 2 = 0.83
    P(Occupied) = 1 − 0.83 = 0.17
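The worked example above can be checked with a few lines of Python (the exact value is 0.825, which the slides round to 0.83):

```python
def p_empty_region2(r, alpha, R=10.0, beta=15.0):
    """Region II belief: P(Empty) = ((R - r)/R + (beta - alpha)/beta)/2.
    Returns (P(Empty), P(Occupied))."""
    p_empty = ((R - r) / R + (beta - alpha) / beta) / 2.0
    return p_empty, 1.0 - p_empty

# The slide's example: r = 3.5, alpha = 0, R = 10, beta = 15
p_empty, p_occ = p_empty_region2(3.5, 0.0)
```

Because the cell sits exactly on the acoustic axis (α = 0), the angular term contributes its maximum of 1.0, pulling the empty belief up.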

But We're Not There Yet: We Need P(H | s), Not P(s | H)
Note that the previous calculations gave P(s | H), not P(H | s). Thus, use Bayes' rule:

    P(H | s) = P(s | H) P(H) / [ P(s | H) P(H) + P(s | ¬H) P(¬H) ]

For the empty hypothesis:

    P(Empty | s) = P(s | Empty) P(Empty) / [ P(s | Empty) P(Empty) + P(s | Occupied) P(Occupied) ]

- P(s | Occupied) and P(s | Empty) are known from the sensor model
- P(Occupied) and P(Empty) are unconditional, prior probabilities (which may or may not be known)
- If not known, it is okay to assume P(Occupied) = P(Empty) = 0.5

Returning to the Example
Let's assume we're on Mars, and we know that P(Occupied) = 0.75. Continuing the same example for the cell:

    P(Empty | s=6) = (0.83 × 0.25) / (0.83 × 0.25 + 0.17 × 0.75) = 0.62
    P(Occupied | s=6) = 1 − P(Empty | s=6) = 0.38

These are the values we store in our grid cell representation.
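The Mars example is a one-line application of Bayes' rule; a minimal sketch:

```python
def posterior_empty(p_s_empty, p_s_occupied, prior_empty):
    """Bayes' rule as applied above: combine the sensor-model
    likelihoods P(s|Empty), P(s|Occupied) with the prior P(Empty)
    to obtain the posterior P(Empty | s)."""
    num = p_s_empty * prior_empty
    return num / (num + p_s_occupied * (1.0 - prior_empty))

# The slide's numbers: P(s|Empty)=0.83, P(s|Occupied)=0.17, P(Empty)=0.25
p = posterior_empty(0.83, 0.17, 0.25)
```

Even though the sensor model strongly favors "empty" (0.83 vs. 0.17), the strong prior toward "occupied" drags the posterior down to about 0.62.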

Updating with Bayes' Rule
How to fuse multiple readings obtained over time?
- First time: each element of the grid is initialized with the a priori probability of being occupied or empty
- Subsequently: use Bayes' rule iteratively. The probability at time t_{n−1} becomes the prior and is combined with the current observation at t_n using the recursive version of Bayes' rule:

    P(H | s_n) = P(s_n | H) P(H | s_{n−1}) / [ P(s_n | H) P(H | s_{n−1}) + P(s_n | ¬H) P(¬H | s_{n−1}) ]
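A sketch of the recursive update above; the likelihood values 0.7/0.3 are hypothetical sensor-model outputs chosen for illustration:

```python
def update_cell(p_s_occ, p_s_emp, prior_occ):
    """One step of the recursive Bayes update: the previous posterior
    P(Occupied | s_{n-1}) is used as the prior for reading s_n."""
    num = p_s_occ * prior_occ
    return num / (num + p_s_emp * (1.0 - prior_occ))

p = 0.5                  # initial a priori belief (unknown, so 0.5)
for _ in range(3):       # three consecutive "occupied"-like readings
    p = update_cell(0.7, 0.3, p)
# Repeated consistent evidence drives the belief toward 1
```

This is why occupancy grids are robust to sensor noise: a single spurious reading only nudges the cell, while consistent evidence accumulates.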

6.2.2 Now, Back to: Vector Field Histogram (VFH) (Koren & Borenstein, ICRA 1990)
- The environment is represented in a grid (2 DOF); cell values are equivalent to the probability that there is an obstacle
- Generate a polar histogram: probability of occupancy vs. the angle at which the obstacle is found (relative to the robot's current position)

6.2.2 Now, Back to: Vector Field Histogram (VFH) (Koren & Borenstein, ICRA 1990)
From the histogram, calculate the steering direction:
- Find all openings large enough for the robot to pass through
- Apply a cost function G to each opening:

    G = a · target_direction + b · wheel_orientation + c · previous_direction

where:
- target_direction = alignment of the robot path with the goal
- wheel_orientation = difference between the new direction and the current wheel orientation
- previous_direction = difference between the previously selected direction and the new direction
- Choose the opening with the lowest cost function value
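The selection step can be sketched as follows. The dict representation of an opening and the weights a, b, c are illustrative assumptions, not values from the slides:

```python
def vfh_select_opening(openings, a=5.0, b=2.0, c=2.0):
    """VFH steering selection: score each candidate opening with
    G = a*target_direction + b*wheel_orientation + c*previous_direction
    (all three are angular differences, in radians) and pick the
    opening with the lowest cost."""
    def cost(o):
        return (a * o["target_direction"]
                + b * o["wheel_orientation"]
                + c * o["previous_direction"])
    return min(openings, key=cost)

# Two candidate openings: one nearly goal-aligned, one requiring no turn
openings = [
    {"target_direction": 0.1, "wheel_orientation": 0.5, "previous_direction": 0.5},
    {"target_direction": 1.0, "wheel_orientation": 0.0, "previous_direction": 0.0},
]
best = vfh_select_opening(openings)
```

Weighting a higher than b and c biases the robot toward the goal direction while still penalizing sharp turns and oscillation between directions.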

6.2.2 Obstacle Avoidance: VFH Video (Borenstein et al.)
Notes:
- Limitation if narrow areas (e.g., doors) have to be passed
- Local minima might not be avoided
- Reaching the goal cannot be guaranteed
- Dynamics of the robot not really considered
VIDEO: Borenstein.mpg