Humanoid Robotics. Monte Carlo Localization. Maren Bennewitz

Humanoid Robotics Monte Carlo Localization Maren Bennewitz 1

Basic Probability Rules (1) If x and y are independent, the joint factorizes into the product of the marginals. Bayes' rule: often written with a normalizer; the denominator is a normalizing constant that ensures that the posterior on the left-hand side adds up to 1 over all possible values of x. In the case of additional background knowledge, Bayes' rule turns into its conditioned form. 2
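The equations on this slide appear only as images in the transcript; below is a standard reconstruction in the notation of Probabilistic Robotics (η denotes the normalizer):
\[
p(x, y) = p(x)\,p(y) \qquad \text{(independence)}
\]
\[
p(x \mid y) = \frac{p(y \mid x)\,p(x)}{p(y)} = \eta\, p(y \mid x)\,p(x)
\]
\[
p(x \mid y, z) = \frac{p(y \mid x, z)\,p(x \mid z)}{p(y \mid z)} \qquad \text{(with background knowledge } z\text{)}
\]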

Basic Probability Rules (2) Law of total probability: continuous and discrete case 3
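Again a standard reconstruction of the slide's image equations:
\[
p(x) = \sum_{y} p(x \mid y)\,p(y) \qquad \text{(discrete case)}
\]
\[
p(x) = \int p(x \mid y)\,p(y)\,\mathrm{d}y \qquad \text{(continuous case)}
\]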

Markov Assumption [Dynamic Bayes network with actions u, states x of the dynamical system, and observations z] 4
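The two independence assumptions encoded by this graphical model, written out in standard notation (a reconstruction; x_t is the state, u_t the action, z_t the observation):
\[
p(z_t \mid x_{0:t}, z_{1:t-1}, u_{1:t}) = p(z_t \mid x_t)
\]
\[
p(x_t \mid x_{0:t-1}, z_{1:t-1}, u_{1:t}) = p(x_t \mid x_{t-1}, u_t)
\]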

Motivation To perform useful service, a robot needs to know its pose in its environment Motion commands are only executed inaccurately Here: Estimate the global pose of a robot in a given model of the environment Based on its sensor data and odometry measurements Sensor data: Range image with depth measurements or laser range readings 5

Motivation Estimate the pose of a robot in a given model of the environment Based on its sensor data and odometry measurements 6

State Estimation Estimate the state x given the observations z and the odometry measurements/actions u Goal: Calculate the distribution p(x | z, u) 7

Recursive Bayes Filter 1 Definition of the belief: the posterior over the current state given all data up to time t 8
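Written out (the equation is an image in the original slide; this is the standard definition):
\[
bel(x_t) = p(x_t \mid z_{1:t}, u_{1:t})
\]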

Recursive Bayes Filter 2 Bayes rule 9

Recursive Bayes Filter 3 independence assumption 10

Recursive Bayes Filter 4 Law of total probability 11

Recursive Bayes Filter 5 Markov assumption 12

Recursive Bayes Filter 6 independence assumption 13

Recursive Bayes Filter 7 recursive term 14

Recursive Bayes Filter 7 motion model 15

Recursive Bayes Filter 7 observation model 16
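The derivation that slides 8–16 step through appears only as images in the transcript; below is the standard reconstruction, with the annotation of each step matching the slide labels (η is the normalizer from Bayes' rule):
\begin{align*}
bel(x_t) &= p(x_t \mid z_{1:t}, u_{1:t}) \\
&= \eta\, p(z_t \mid x_t, z_{1:t-1}, u_{1:t})\, p(x_t \mid z_{1:t-1}, u_{1:t}) && \text{(Bayes' rule)} \\
&= \eta\, p(z_t \mid x_t)\, p(x_t \mid z_{1:t-1}, u_{1:t}) && \text{(independence assumption)} \\
&= \eta\, p(z_t \mid x_t) \int p(x_t \mid x_{t-1}, z_{1:t-1}, u_{1:t})\, p(x_{t-1} \mid z_{1:t-1}, u_{1:t})\, \mathrm{d}x_{t-1} && \text{(law of total probability)} \\
&= \eta\, p(z_t \mid x_t) \int p(x_t \mid x_{t-1}, u_t)\, p(x_{t-1} \mid z_{1:t-1}, u_{1:t})\, \mathrm{d}x_{t-1} && \text{(Markov assumption)} \\
&= \eta\, p(z_t \mid x_t) \int p(x_t \mid x_{t-1}, u_t)\, p(x_{t-1} \mid z_{1:t-1}, u_{1:t-1})\, \mathrm{d}x_{t-1} && \text{(independence assumption)} \\
&= \eta\, p(z_t \mid x_t) \int p(x_t \mid x_{t-1}, u_t)\, bel(x_{t-1})\, \mathrm{d}x_{t-1} && \text{(recursive term)}
\end{align*}
with the motion model \(p(x_t \mid x_{t-1}, u_t)\) and the observation model \(p(z_t \mid x_t)\) as the two components highlighted on slides 15 and 16.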

Probabilistic Motion Model Robots execute motion commands only inaccurately The motion model specifies the probability that action u_t carries the robot from x_{t-1} to x_t: p(x_t | x_{t-1}, u_t) Defined individually for each type of robot 17

Model for Range Readings Sensor data consists of a set of individual distance measurements The individual measurements are independent given the robot's pose How well can the distance measurements be explained given the pose and the map? 18
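The independence of the individual beams, written out in standard notation (a reconstruction; K beams, map m):
\[
p(z_t \mid x_t, m) = \prod_{k=1}^{K} p\!\left(z_t^{k} \mid x_t, m\right)
\]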

Simplest Ray-Cast Model Ray-cast models consider the first obstacle along the ray Compare the actually measured distance with the expected distance to the first object in the map Use a Gaussian to evaluate the difference 19
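A common way to write this Gaussian evaluation (a sketch in standard notation, not necessarily the exact formula on the slide; \(\hat{z}_t^{k}(x_t, m)\) is the expected distance obtained by ray-casting and \(\sigma\) the measurement noise):
\[
p\!\left(z_t^{k} \mid x_t, m\right) \propto \exp\!\left(-\frac{\left(z_t^{k} - \hat{z}_t^{k}(x_t, m)\right)^2}{2\sigma^2}\right)
\]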

Beam-Endpoint Model Evaluate the distance of the hypothetical beam end point to the closest obstacle in the map with a Gaussian Courtesy: Thrun, Burgard, Fox 20

Summary Bayes filter is a framework for state estimation The motion and observation models are the central components of the Bayes filter 21

Particle Filter Monte Carlo Localization 22

Particle Filter One implementation of the recursive Bayes filter Non-parametric framework (not only Gaussian) Arbitrary models can be used as motion and observation models 23

Motivation Goal: Approach for dealing with arbitrary distributions 24

Key Idea: Samples Use multiple (weighted) samples to represent arbitrary distributions 25

Particle Set Set of weighted samples, each consisting of a state hypothesis and an importance weight 26
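In symbols (a standard reconstruction of the slide's notation; J denotes the number of particles):
\[
\mathcal{X}_t = \left\{ \left\langle x_t^{[j]}, w_t^{[j]} \right\rangle \right\}_{j=1,\dots,J}
\]
where \(x_t^{[j]}\) is a state hypothesis and \(w_t^{[j]}\) its importance weight.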

Particle Filter Recursive Bayes filter Non-parametric approach The set of weighted particles approximates the belief about the robot's pose Two main steps to update the belief Prediction: Draw samples from the motion model (propagate forward) Correction: Weight samples based on the observation model 27

Monte Carlo Localization Each particle is a pose hypothesis Prediction: For each particle, sample a new pose from the motion model Correction: Weight samples according to the observation model [The weight has to be chosen in exactly that way if the samples are drawn from the motion model] 28

Monte Carlo Localization Each particle is a pose hypothesis Prediction: For each particle, sample a new pose from the motion model Correction: Weight samples according to the observation model Resampling: Draw sample i with probability proportional to its weight and repeat J times (J = #particles) 29
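A minimal Python sketch of one MCL update with these three steps. The functions sample_motion_model and measurement_likelihood and the grid_map object are assumed to be supplied by the user; their names are illustrative and not taken from the lecture.

```python
import numpy as np

# Minimal sketch of one MCL update (prediction, correction, resampling)
# for a set of pose particles.

def mcl_update(particles, u, z, grid_map, sample_motion_model, measurement_likelihood):
    J = len(particles)

    # Prediction: propagate every particle by sampling from the motion model
    predicted = [sample_motion_model(x, u) for x in particles]

    # Correction: weight each particle with the observation model p(z | x, m)
    weights = np.array([measurement_likelihood(z, x, grid_map) for x in predicted])
    weights /= weights.sum()  # normalize so the weights sum to 1

    # Resampling: draw J particles with probability proportional to their weights
    indices = np.random.choice(J, size=J, p=weights)
    return [predicted[i] for i in indices]
```

Here np.random.choice corresponds to the simple roulette-wheel scheme; the low-variance variant discussed a few slides below would replace only that line.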

MCL Correction Step Image Courtesy: S. Thrun, W. Burgard, D. Fox 30

MCL Prediction Step Image Courtesy: S. Thrun, W. Burgard, D. Fox 31

MCL Correction Step Image Courtesy: S. Thrun, W. Burgard, D. Fox 32

MCL Prediction Step Image Courtesy: S. Thrun, W. Burgard, D. Fox 33

Resampling Draw sample i with probability proportional to its weight; repeat J times (J = #particles) Informally: Replace unlikely samples by more likely ones Survival-of-the-fittest principle Trick to avoid having many samples covering unlikely states Needed since we have a limited number of samples 34

Resampling Assumption: normalized weights Roulette wheel: draw randomly between 0 and 1, find the corresponding particle by binary search over the cumulative weights (w_1, w_2, ..., w_n), repeat J times; complexity O(J log J) Systematic resampling: one initial value drawn between 0 and 1/J, then step size 1/J; low variance, complexity O(J); also called stochastic universal resampling 35

Low Variance Resampling [Algorithm: initialization; cumulative sum of weights; J = #particles; step size = 1/J; decide whether or not to take particle i — see the sketch below] 36
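A minimal Python sketch of the low-variance (systematic) resampling loop outlined on this slide; variable names are illustrative.

```python
import numpy as np

# Low-variance resampling: one random initial value in [0, 1/J), a fixed
# step of 1/J, and a walk along the cumulative sum of the weights.

def low_variance_resample(particles, weights):
    J = len(particles)
    weights = np.asarray(weights, dtype=float)
    weights /= weights.sum()                # assumption: weights are normalized

    r = np.random.uniform(0.0, 1.0 / J)     # initialization: random offset in [0, 1/J)
    c = weights[0]                          # running cumulative sum of weights
    i = 0
    resampled = []
    for j in range(J):
        U = r + j / J                       # step size = 1/J
        while U > c:                        # advance until the cumulative sum reaches U
            i += 1
            c += weights[i]
        resampled.append(particles[i])      # decide to take particle i
    return resampled
```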

initialization Courtesy: Thrun, Burgard, Fox 37

observation Courtesy: Thrun, Burgard, Fox 38

weight update, resampling, motion update Courtesy: Thrun, Burgard, Fox 39

observation Courtesy: Thrun, Burgard, Fox 40

weight update, resampling Courtesy: Thrun, Burgard, Fox 41

motion update Courtesy: Thrun, Burgard, Fox 42

observation Courtesy: Thrun, Burgard, Fox 43

weight update, resampling Courtesy: Thrun, Burgard, Fox 44

motion update Courtesy: Thrun, Burgard, Fox 45

observation Courtesy: Thrun, Burgard, Fox 46

weight update, resampling Courtesy: Thrun, Burgard, Fox 47

motion update Courtesy: Thrun, Burgard, Fox 48

observation Courtesy: Thrun, Burgard, Fox 49


Summary Particle Filters Particle filters are non-parametric, recursive Bayes filters The belief about the state is represented by a set of weighted samples The motion model is used to draw the samples for the next time step The weights of the particles are computed based on the observation model Resampling is carried out based on the weights Applied to robot pose estimation, this is called Monte Carlo localization (MCL) 51

Literature Book: Probabilistic Robotics, S. Thrun, W. Burgard, and D. Fox 52

6D Localization for Humanoid Robots 53

Localization for Humanoids 3D environments require a 6D pose estimate: 2D position, height, and yaw, pitch, roll Here: estimate the 6D torso pose 54

Localization for Humanoids Recursively estimate the distribution about the robot's pose using MCL Probability distribution represented by pose hypotheses (particles) Needed: Motion model and observation model 55

Motion Model The odometry estimate u_t corresponds to the incremental motion of the torso, which is computed via forward kinematics (FK) from the current stance foot while walking 56

Kinematic Walking Odometry Keep track of the transform from the fixed odometry frame F_odom to the frame of the current stance foot F_rfoot, starting with the identity The transform to the torso frame F_torso can be computed with FK over the right leg 57

Kinematic Walking Odometry While both feet remain on the ground, compute the transform to the frame of the left foot F_lfoot with FK 58

Kinematic Walking Odometry The left leg becomes the stance leg, and its foot frame F_lfoot is the new reference for computing the transform to the torso frame F_torso 59

Kinematic Walking Odometry Using FK, the poses of all joints and sensors can be computed relative to the stance foot at each time step The transform from the fixed world frame to the stance foot is updated whenever the swing foot becomes the new stance foot Concatenate the accumulated transform to the previous stance foot and the relative transform to the new stance foot 60
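A minimal sketch of this transform bookkeeping with 4x4 homogeneous matrices. The FK transforms passed in are assumed to come from the robot's kinematic model; the class and method names are illustrative, not from the lecture.

```python
import numpy as np

class WalkingOdometry:
    """Kinematic walking odometry: accumulate odom -> stance-foot transforms."""

    def __init__(self):
        # Transform from the fixed odometry frame to the current stance foot,
        # initialized with the identity.
        self.T_odom_stance = np.eye(4)

    def torso_pose(self, T_stance_torso):
        # Pose of the torso in the odometry frame: concatenate the accumulated
        # transform with the FK transform from the stance foot to the torso.
        return self.T_odom_stance @ T_stance_torso

    def switch_stance_foot(self, T_oldstance_newstance):
        # When the swing foot becomes the new stance foot, concatenate the
        # accumulated transform to the previous stance foot with the relative
        # transform (computed by FK while both feet are on the ground).
        self.T_odom_stance = self.T_odom_stance @ T_oldstance_newstance
```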

Odometry Estimate The odometry estimate u_t is computed from the torso poses at two consecutive time steps 61

Odometry Estimate The incremental motion of the torso is computed from the kinematics of the legs Typical error sources: slippage on the ground and backlash in the joints Accordingly, the odometry estimates while walking are noisy and drift accumulates over time The particle filter has to account for that noise within the motion model 62

Motion Model Noise is modelled as a Gaussian with a systematic drift on the 2D plane The prediction step samples a new pose according to the motion composition of the previous particle pose with the odometry, using a calibration matrix and a covariance matrix learned with least-squares optimization on ground-truth data (as in Ch. 2) 63

Sampling from the Motion Model We need to draw samples from a Gaussian 65

How to Sample from a Gaussian Drawing from a 1D Gaussian can be done in closed form Example with 10^6 samples 66
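The slide does not say which closed-form method it uses; one standard choice is the Box-Muller transform, sketched here purely for illustration.

```python
import numpy as np

# Box-Muller: map two uniform samples to a sample from a standard 1D Gaussian.

def sample_gaussian(mu, sigma):
    u1 = 1.0 - np.random.uniform()          # in (0, 1], so log(u1) is finite
    u2 = np.random.uniform()
    z = np.sqrt(-2.0 * np.log(u1)) * np.cos(2.0 * np.pi * u2)  # z ~ N(0, 1)
    return mu + sigma * z
```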

How to Sample from a Gaussian Drawing from a 1D Gaussian can be done in closed form If we consider the individual dimensions of the odometry as independent, we can draw each of the dimensions independently using a 1D Gaussian 67

Sampling from the Odometry We know how to sample from a Gaussian We sample an individual motion for each particle (from the same distribution at each step) We then incorporate the sampled odometry into the pose of particle i 68
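A minimal sketch of this per-particle prediction for planar poses (x, y, theta), assuming the odometry increment u = (dx, dy, dtheta) is given in the robot frame and each dimension is drawn independently from a 1D Gaussian; all names are illustrative.

```python
import numpy as np

def sample_odometry_motion(pose, u, sigmas):
    x, y, theta = pose
    # Draw each dimension of the odometry independently from a 1D Gaussian
    dx, dy, dtheta = np.random.normal(loc=u, scale=sigmas)
    # Compose the sampled, robot-frame increment with the particle pose
    x_new = x + dx * np.cos(theta) - dy * np.sin(theta)
    y_new = y + dx * np.sin(theta) + dy * np.cos(theta)
    return (x_new, y_new, theta + dtheta)

# Every particle gets its own sampled motion from the same distribution:
# particles = [sample_odometry_motion(p, u, sigmas) for p in particles]
```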

Example for Sampled Motions in the 2D Plane [Figure: samples drawn from the Gaussian in the motion model, shown as robot poses according to the estimated odometry; one case with small angular error and larger translational error, one with large angular error and small translational error] 69

Motion Model [Figure: resulting particle distribution (2000 particles) for the uncalibrated motion model, compared to the ground truth and the (uncalibrated) odometry; Nao walking straight for 40 cm] 70

Motion Model [Figure: uncalibrated vs. calibrated motion model, each compared to the ground truth and the (uncalibrated) odometry; the calibrated model properly captures the drift and is closer to the ground truth] 71

Observation Model The observation consists of independent range, height, and attitude (roll and pitch) measurements The torso height is computed from the values of the joint encoders; roll and pitch are provided by the IMU (inertial measurement unit) 72

Observation Model Range data: Raycasting or endpoint model in 3D map 73

Recap: Simplest Ray-Cast Model Ray-cast models consider the first obstacle along the ray Compare the actually measured distance with the expected distance to the first object in the map Use a Gaussian to evaluate the difference 74

Recap: Beam-Endpoint Model Evaluate the distance of the hypothetical beam end point to the closest obstacle in the map with a Gaussian Courtesy: Thrun, Burgard, Fox 75

Observation Model Range data: Ray-casting or endpoint model in the 3D map Torso height: Compare the measured value from kinematics to the predicted height (according to the motion model) IMU data: Compare the measured roll and pitch to the predicted angles Use individual Gaussians to evaluate the differences 76

Likelihoods of Measurements Gaussian distributions: the height likelihood is computed from the height difference to the closest ground level in the map The standard deviations correspond to the noise characteristics of the joint encoders and the IMU 77
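Under the independence stated above, the observation likelihood factorizes into the individual Gaussian terms. A sketch in assumed notation (the transcript does not give the symbols; here \(r_t\), \(h_t\), \(\phi_t\), \(\theta_t\) denote the range, height, roll, and pitch measurements):
\[
p(z_t \mid x_t) = p(r_t \mid x_t)\; p(h_t \mid x_t)\; p(\phi_t \mid x_t)\; p(\theta_t \mid x_t),
\]
where, e.g., the height term is a 1D Gaussian in the difference between the measured and the predicted value, \(p(h_t \mid x_t) \propto \exp\!\left(-\frac{(h_t - \hat{h}_t)^2}{2\sigma_h^2}\right)\), with \(\sigma_h\) chosen according to the sensor noise.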

Localization Evaluation Trajectory of 5m, 10 runs each Ground truth from external motion capture system Raycasting results in significantly smaller error Calibrated motion model requires fewer particles 79

Trajectory and Error Over Time 80

Trajectory and Error Over Time 81

Summary Estimation of the 6D torso pose (global localization and local tracking) in a given 3D model Odometry estimate from kinematic walking Sample an individual motion for each particle from a Gaussian Use individual Gaussians in the observation model to evaluate the differences between measured and expected values The particle filter allows both local tracking and global estimation of the robot's pose 84

Literature Chapter 3, Humanoid Robot Navigation in Complex Indoor Environments, Armin Hornung, PhD thesis, Univ. Freiburg, 2014 85

Next Lecture Inaugural lecture: Wed, May 20 at 5 p.m., lecture hall III on the university's main campus (talk in German, slides in English) No lecture on Thu, May 21 Lecture 6 of the course: Thu, June 11 86

Acknowledgment Previous versions of parts of the slides have been created by Wolfram Burgard, Armin Hornung, and Cyrill Stachniss 87