Statistical Techniques in Robotics (16-831, F10) Lecture #06 (Thursday, September 11) Occupancy Maps

Lecturer: Drew Bagnell                Scribes: {agiri, dmcconac, kumarsha, nbhakta}¹

¹ Content adapted from previous scribes: Victor Hwang, Nathan Brooks, Brian Coltin, and Mehmet R. Dogar. Some wording is taken from Wikipedia.

1 Occupancy Mapping: An Introduction

Occupancy grid mapping refers to a family of algorithms in probabilistic robotics for mobile robots that address the problem of generating maps from noisy and uncertain sensor measurements, under the assumption that the robot pose is known. The basic idea of the occupancy grid is to represent a map of the environment as an evenly spaced field of binary random variables, each indicating the presence of an obstacle at the corresponding location. Occupancy grid algorithms compute approximate posterior estimates for these random variables.

Earlier, when solving the localization problem, we had a map of the world and tried to find the position of the robot in it. Now we solve the opposite problem: given the position of the robot (perhaps from a sensor such as GPS), we want to find the map of the world. As with localization, we treat mapping as a filtering problem. We need two components: a state and a measurement model. There is no explicit motion model in the case of mapping; we assume the map does not change with time, so updates are driven only by measurements.

The state is the map of the world that we are trying to find. For example, it could be a pixelized grid of cells or a map of landmarks. The measurement model is $p(z_t \mid m, l_t)$, the probability of making an observation $z_t$ given a map $m$ and a location $l_t$ on the map. We do not need to concern ourselves with the robot's controls, since the position of the robot is given.

For our first attempt at a solution to the mapping problem, we partition the world into cells, each of which is in one of two states: filled or empty. The individual grid cells are labelled $m_i$, and $m$ is the vector of all grid cells. Our goal is to calculate the posterior

$$p(m \mid z_{1:t}, x_{1:t}).$$

Unfortunately, the curse of dimensionality prevents us from filtering $m$ directly, since there are $2^{|m|}$ possible maps. If we filter each cell independently, assuming that the cells are in fact independent, we can reduce the complexity to $2|m|$ by reconstructing the map from the product of the per-cell marginal probabilities:

$$p(m \mid z_{1:t}, x_{1:t}) = \prod_i p(m_i \mid z_{1:t}, x_{1:t})$$

In truth, this is a very bad assumption to make, since obstacles usually span multiple cells. However, it makes the problem tractable. In class, Drew mentioned that this is more correctly called an approximation rather than an assumption: we are not assuming that cells are independent, but rather approximating them as independent.
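To make the independence approximation concrete, here is a minimal sketch (the helper name and numbers are hypothetical, not from the lecture) that stores one marginal per cell and scores a candidate binary map as the product of its per-cell marginals.

import numpy as np

def map_posterior(candidate, marginals):
    # Independence approximation:
    #   p(m | z_{1:t}, x_{1:t}) ~ prod_i p(m_i | z_{1:t}, x_{1:t})
    # candidate: boolean array, True where the candidate map is filled
    # marginals: per-cell filled probabilities p(m_i = filled | z_{1:t}, x_{1:t})
    per_cell = np.where(candidate, marginals, 1.0 - marginals)
    return float(np.prod(per_cell))

# Four cells whose marginals were filtered independently
marginals = np.array([0.9, 0.1, 0.5, 0.7])
candidate = np.array([True, False, False, True])
print(map_posterior(candidate, marginals))  # 0.9 * 0.9 * 0.5 * 0.7 = 0.2835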

1.1 Derivation

Let $X_i$ represent the state of a grid cell $m_i$. The state is either $x$, meaning filled, or $\bar{x}$, meaning empty. We look at the probability that a cell is filled given the measurements, starting with a Bayesian filter:

$$p(x \mid z_{1:t}) \propto p(z_t \mid x, z_{1:t-1})\, p(x \mid z_{1:t-1})$$

We then make the Markov assumption that $p(z_t \mid x, z_{1:t-1}) = p(z_t \mid x)$:

$$p(x \mid z_{1:t}) \propto p(z_t \mid x)\, p(x \mid z_{1:t-1})$$

However, this assumption does not actually hold here. If the map consisted of a single cell, then given the current state of that cell, previous observations would tell us nothing new about what we should observe now. In general, however, we do not have a single-celled map, and a measurement depends on more than the one cell being updated; the reasons are described in more detail in Section 1.2.

We expand the equation once again using Bayes' rule to get

$$p(x \mid z_{1:t}) \propto \frac{p(x \mid z_t)\, p(z_t)}{p(x)}\, p(x \mid z_{1:t-1})$$

Now $p(x \mid z_{1:t})$ is based on the inverse sensor model, $p(x \mid z_t)$, instead of the familiar forward model $p(z_t \mid x)$. The inverse sensor model specifies a distribution over the (binary) state variable as a function of the measurement $z_t$. A sensor model for a laser scanner might look like Figure 1, where $z_t$ is pass-through / hit information. Using the same steps, we can derive a matching update rule for $\bar{x}$:

$$p(\bar{x} \mid z_{1:t}) \propto \frac{p(\bar{x} \mid z_t)\, p(z_t)}{p(\bar{x})}\, p(\bar{x} \mid z_{1:t-1}).$$

Next, we divide the two update rules; the $p(z_t)$ terms cancel:

$$\frac{p(x \mid z_{1:t})}{p(\bar{x} \mid z_{1:t})} = \frac{p(x \mid z_t)}{p(\bar{x} \mid z_t)} \cdot \frac{p(\bar{x})}{p(x)} \cdot \frac{p(x \mid z_{1:t-1})}{p(\bar{x} \mid z_{1:t-1})}$$
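As a sanity check on this algebra, the short sketch below (a single binary cell, with an assumed inverse sensor model and prior chosen purely for illustration) runs the odds-ratio recursion alongside a directly normalized Bayes update; the two resulting beliefs agree.

# Numerical check of the odds-ratio recursion for one binary cell.
# Assumed inverse sensor model: p(x | z=hit) = 0.7, p(x | z=miss) = 0.4; prior p(x) = 0.5.
prior = 0.5
inv_model = {"hit": 0.7, "miss": 0.4}
measurements = ["hit", "hit", "miss", "hit"]

# Odds-ratio recursion: odds_t = [p(x|z_t)/p(~x|z_t)] * [p(~x)/p(x)] * odds_{t-1}
odds = prior / (1.0 - prior)
for z in measurements:
    p_x_z = inv_model[z]
    odds *= (p_x_z / (1.0 - p_x_z)) * ((1.0 - prior) / prior)
belief_from_odds = odds / (1.0 + odds)

# Direct Bayes update of p(x | z_{1:t}), normalizing at every step (p(z_t) cancels)
belief = prior
for z in measurements:
    p_x_z = inv_model[z]
    num = (p_x_z / prior) * belief                       # proportional to p(x | z_{1:t})
    den = ((1.0 - p_x_z) / (1.0 - prior)) * (1.0 - belief)  # proportional to p(~x | z_{1:t})
    belief = num / (num + den)

print(belief_from_odds, belief)  # both print the same belief (about 0.894)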

We now have a recursive update rule. If the probability of $\bar{x}$ decreases upon observing $z_t$, then $p(\bar{x}) > p(\bar{x} \mid z_t)$, which causes the belief in $x$ to increase relative to $\bar{x}$.

Next, we take the log of this update rule to obtain the log odds of the belief that the cell is filled over the belief that it is not filled, which we label $l_t(x)$. Using log odds reduces the numerical errors that come from multiplying minuscule floating point numbers:

$$\log \frac{p(x \mid z_{1:t})}{p(\bar{x} \mid z_{1:t})} = \log \left( \frac{p(x \mid z_t)}{p(\bar{x} \mid z_t)} \cdot \frac{p(\bar{x})}{p(x)} \cdot \frac{p(x \mid z_{1:t-1})}{p(\bar{x} \mid z_{1:t-1})} \right)$$

$$l_t(x) = \log \frac{p(x \mid z_t)}{p(\bar{x} \mid z_t)} + \log \frac{p(\bar{x})}{p(x)} + l_{t-1}(x)$$

Intuitively, this tells us that if an event, say a door being open, is judged just as likely after observing a new sensor reading as it was before (so that $p(x \mid z_t) = p(x)$), then the reading is uninformative and does not change the posterior. For this update rule, we need only specify $p(x \mid z_t)$, the inverse sensor model, and $p(x)$, the prior; $p(\bar{x} \mid z_t)$ and $p(\bar{x})$ are the complements of these two terms. Note that the inverse sensor model must respond to changes in the prior: consider region (1) in Figure 1, where cells are assigned probability equal to the prior.

Figure 1: A sample sensor model for a laser scanner which gives the probability that a grid cell is occupied given a sensor reading. Grid cells in (1) would have probability equal to the prior, grid cells in (2) would have a low probability (since the beam does not return from those locations), and grid cells in (3) would have a high probability.
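The log-odds rule is straightforward to implement. Below is a minimal sketch of a one-dimensional occupancy grid updated along a single ray, in the spirit of the inverse sensor model of Figure 1; the class name, the probabilities 0.3 and 0.7, and the grid size are illustrative assumptions, not values from the lecture.

import numpy as np

def log_odds(p):
    return np.log(p / (1.0 - p))

class OccupancyGrid1D:
    """1-D occupancy grid in log-odds form:
    l_t(x) = log[p(x|z_t)/p(~x|z_t)] + log[p(~x)/p(x)] + l_{t-1}(x)."""

    def __init__(self, n_cells, prior=0.5):
        self.prior_l = log_odds(prior)
        self.l = np.full(n_cells, self.prior_l)  # every cell starts at the prior

    def update_ray(self, hit_cell, p_empty=0.3, p_occ=0.7):
        # Region (2): cells the beam passed through -> low occupancy probability.
        self.l[:hit_cell] += log_odds(p_empty) - self.prior_l
        # Region (3): the cell where the beam returned -> high occupancy probability.
        self.l[hit_cell] += log_odds(p_occ) - self.prior_l
        # Region (1): cells beyond the hit are untouched, since p(x|z_t) = p(x) there.

    def probabilities(self):
        return 1.0 / (1.0 + np.exp(-self.l))

grid = OccupancyGrid1D(n_cells=10)
grid.update_ray(hit_cell=6)
grid.update_ray(hit_cell=6)
print(np.round(grid.probabilities(), 2))  # low before the hit, high at cell 6, prior beyond it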

1.2 Problems with the Markov Assumption

At the beginning of the derivation we used the Markov assumption to claim that $p(z_t \mid x, z_{1:t-1}) = p(z_t \mid x)$. This would have been reasonable had $x$ represented the state of the complete map. However, $x$ represents the state of a single grid cell. The Markov assumption in this context doesn't make much sense for a laser beam model: we cannot say that an observation $z_t$ is independent of all prior observations given only the state of a single cell, since the beam model necessarily couples observations by virtue of the beam passing through multiple cells (states). Figure 2 shows an example using a wide laser beam where this Markov assumption fails. In that case, the odds of the conflicting cell would first increase and then later decrease. Without the Markov assumption, we could explain away this inconsistency by taking the entire map into account. Because of this inherent error, narrow-beam sensors (such as LIDAR) are very popular.

It should be noted, however, that this assumption is perfectly valid when the sensor directly observes the state, for instance a downward-looking camera used for visual SLAM.

Figure 2: The problem with the standard occupancy grid mapping algorithm, from Chapter 9.2 of Probabilistic Robotics: For the environment shown in (a), a passing robot might receive the (noise-free) measurement shown in (b). The occupancy grid mapping approach maps these beams into probabilistic maps separately for each grid cell and each beam, as shown in (c) and (d). Combining both interpretations yields the map shown in (e). Obviously, there is a conflict in the overlap region, indicated by the circles in (e). The interesting insight is that there exist maps, such as the one in diagram (f), that perfectly explain the sensor measurements without any such conflict. For a sensor reading to be explained, it suffices to assume an obstacle somewhere in the cone of a measurement, not everywhere.

1.3 Limitations of Occupancy Mapping

In occupancy grid mapping every grid cell is in one of two states: filled or empty. But in some situations it makes sense for a cell to be partially filled. This may occur when only part of the grid cell is filled and the rest is empty, or when the objects that fill the cell have special characteristics: we may want a cell filled with vegetation to be "less filled" than a cell containing solid rock.

Semi-transparent Obstacles and Mitigation

Classical occupancy grids have trouble dealing with semi-transparent obstacles such as glass and vegetation. These obstacles may return hits to the laser rangefinder about half of the time, yet the occupancy grid will eventually converge to either filled or not filled, both of which are incorrect. A possible solution is to use a more continuous measure that captures the density of the cell, or equivalently the probability that a beam is reflected rather than passing through. A straightforward approach is to treat each cell as a biased coin and keep, for each cell, a count of the number of hits and the number of pass-throughs; the only difference is that each cell now tracks two numbers. The probability of reflection is then the empirical ratio of hits to the sum of hits and pass-throughs. For the inverse sensor model, we either spread the high-probability zone (region (3) of Figure 1) or use a fractional number of hits and misses for each cell.
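A sketch of this counting idea (class and variable names are hypothetical): each cell stores a hit count and a pass-through count, and the estimated reflectance is the empirical ratio described above, so a glass cell settles near its true reflection rate instead of saturating at filled or empty.

import numpy as np

class ReflectanceGrid1D:
    """Per-cell hit / pass-through counts for semi-transparent obstacles."""

    def __init__(self, n_cells):
        self.hits = np.zeros(n_cells)
        self.passes = np.zeros(n_cells)

    def update_ray(self, hit_cell):
        self.passes[:hit_cell] += 1   # the beam passed through these cells
        self.hits[hit_cell] += 1      # the beam reflected here

    def reflectance(self):
        total = self.hits + self.passes
        # Empirical probability that a beam entering the cell reflects;
        # cells never traversed stay at an uninformative 0.5.
        return np.where(total > 0, self.hits / np.maximum(total, 1), 0.5)

grid = ReflectanceGrid1D(10)
# Glass at cell 4: beams sometimes reflect there, sometimes pass and hit the wall at cell 8.
for hit in [4, 8, 4, 8, 8, 4, 8]:
    grid.update_ray(hit)
print(np.round(grid.reflectance(), 2))  # cell 4 settles near 0.43 rather than 0 or 1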