SLAM with SIFT (aka Mobile Robot Localization and Mapping with Uncertainty using Scale-Invariant Visual Landmarks) Se, Lowe, and Little

SLAM with SIFT (aka Mobile Robot Localization and Mapping with Uncertainty using Scale-Invariant Visual Landmarks) Se, Lowe, and Little. Presented by Matt Loper, CS296-3: Robot Learning and Autonomy, Brown University, April 4th, 2007.

SLAM: Simultaneous Localization and Mapping
Objective:
- Inputs: odometry and sensor readings, over time
- Outputs: a map, and the set of locations traversed on it
Uncertainty modeling...
- ...on the inputs is essential, since sensors and odometry are noisy
- ...in the output is great if you can do it (this paper does)

Overview
- SIFT Stereo
- Ego-motion estimation: improving on odometry
- Landmark tracking: remembering what we've seen
- First results
- Heuristic improvements: tricks to improve success
- Error modeling
- More results

SIFT Stereo
Sensors: typical SLAM sensors include...
- Laser
- Sonar
- Vision, with an instrumented environment
Instead, this paper uses the Digiclops, a trinocular camera rig.

SIFT Stereo
Goal: find features in images, and compute their 3D position relative to the camera.
Three steps:
1. Detect unique features in each of the 3 cameras
2. Match features across cameras
3. Recover feature positions in 3D, relative to the camera

SIFT Stereo, step 1 of 3: detect unique features in each of the 3 cameras.
SIFT allows detection and characterization of image features; a minimal detection sketch follows.
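
By way of illustration, here is a minimal feature-detection sketch in Python using OpenCV's SIFT implementation. This is an assumption of mine for clarity, not the authors' code: the paper's system used Lowe's original SIFT on the Digiclops images, and the filename here is hypothetical.

```python
import cv2

# Detect SIFT keypoints and their 128-D descriptors in one camera image.
# (Illustrative only; "left.png" is a hypothetical input.)
img = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
sift = cv2.SIFT_create()
keypoints, descriptors = sift.detectAndCompute(img, None)

# Each keypoint carries position, scale, and orientation -- exactly the
# attributes the cross-camera matching step will constrain.
print(len(keypoints), descriptors.shape)  # e.g. 512 keypoints, (512, 128)
```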

SIFT Stereo, step 2 of 3: match features across cameras.
Given our three input images, we could just match all features in each against all the others. But we can constrain matches further (see the sketch after this list):
- Epipolar constraint
- Disparity constraint: for a rightmost camera, objects should appear more to the left
- Orientation and scale should be about the same in the two images
- There should be only one good match
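
A hedged sketch of those match checks, assuming a rectified horizontal camera pair so the epipolar constraint reduces to a same-row test; the keypoint dicts, helper name, and thresholds are all illustrative, not the paper's.

```python
def plausible_match(kp_left, kp_right, max_row_diff=1.0,
                    max_scale_ratio=1.3, max_angle_diff=20.0):
    """Check the slide's stereo-matching constraints for one candidate
    pair of SIFT keypoints from a rectified left/right camera pair.
    (Hypothetical helper; thresholds are illustrative.)"""
    (xl, yl), (xr, yr) = kp_left["pt"], kp_right["pt"]
    # Epipolar constraint: on rectified images, matches lie on the same row.
    if abs(yl - yr) > max_row_diff:
        return False
    # Disparity constraint: the right camera sees objects further left,
    # so disparity = x_left - x_right must be positive.
    if xl - xr <= 0:
        return False
    # Scale should be about the same in the two views.
    ratio = kp_left["size"] / kp_right["size"]
    if not (1 / max_scale_ratio < ratio < max_scale_ratio):
        return False
    # Orientation should be about the same, modulo 360-degree wraparound.
    if abs((kp_left["angle"] - kp_right["angle"] + 180) % 360 - 180) > max_angle_diff:
        return False
    return True
```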

SIFT Stereo: constraint demonstration. [Figure: matched SIFT features in the left camera and right camera images.]

SIFT Stereo, step 3 of 3: recover feature positions in 3D, relative to the camera.
Approach #1 (what they did):
- The size of the disparity gives the distance
- We have two disparities, so we can take the average
Approach #2:
- We could project rays through the pixels, and find their closest intersection
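
As a worked sketch of Approach #1: under the standard pinhole stereo model, depth follows from disparity as Z = f * b / d. The function below back-projects one matched feature; the focal length, baseline, and disparity values are made-up numbers for the demo, not the Digiclops calibration.

```python
import numpy as np

def point_from_disparity(x, y, disparity, f, baseline, cx, cy):
    """Back-project a matched feature into camera coordinates using the
    pinhole stereo relation Z = f * b / d.
    f: focal length in pixels; baseline: camera separation in meters;
    (cx, cy): principal point. All values here are assumptions."""
    Z = f * baseline / disparity          # depth from disparity
    X = (x - cx) * Z / f                  # back-project the pixel to 3D
    Y = (y - cy) * Z / f
    return np.array([X, Y, Z])

# The trinocular rig yields two disparity estimates (horizontal and
# vertical pairs); the slide's Approach #1 simply averages them.
d_avg = 0.5 * (12.4 + 12.8)               # illustrative disparities, in pixels
print(point_from_disparity(320, 240, d_avg, f=400.0, baseline=0.1, cx=320, cy=240))
```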

SIFT Stereo: note the vertical and horizontal disparities. [Figure: left camera and right camera views of the same scene.]

Ego-motion estimation
- Odometry gives us the change in position/orientation
- Let's refine these moment-to-moment estimates of the change in position/orientation
- Note: we will still have the problem of slowly degrading sums of differences! It will just be less extreme.

Ego-motion estimation
- Find matches across two frames
- Just as we had constraints between two cameras, we can have constraints between two frames: position, scale, orientation, disparity
- Given >= 3 matches, we can solve for the change in camera position/orientation
- Given odometry as an initializer, minimize reprojection error (could use, for example, Matlab's lsqnonlin); a sketch follows
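
A minimal sketch of that minimization with SciPy's least_squares (the Python analogue of lsqnonlin). Assumptions to flag: motion is simplified to planar (x, y, yaw) rather than the paper's full pose, the residual is 3D alignment error rather than image-plane reprojection error, and all the data here is synthetic.

```python
import numpy as np
from scipy.optimize import least_squares

def transform(pose, pts):
    """Apply a planar rigid motion (x, y, yaw) to Nx3 points."""
    x, y, yaw = pose
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    return pts @ R.T + np.array([x, y, 0.0])

def residuals(pose, landmarks, observed):
    # Alignment error between motion-compensated landmarks and observations.
    return (transform(pose, landmarks) - observed).ravel()

landmarks = np.random.rand(10, 3) * 5.0        # 3D points from the last frame
true_pose = np.array([0.30, 0.05, 0.10])       # the motion we want to recover
observed = transform(true_pose, landmarks)     # noiseless synthetic measurements

odometry_guess = np.array([0.25, 0.0, 0.08])   # odometry as the initializer
result = least_squares(residuals, odometry_guess, args=(landmarks, observed))
print(result.x)                                # close to true_pose
```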

Landmark Tracking
Let's turn SIFT features into landmarks. For each 3D SIFT feature found, let's associate some data with it...
- X, Y, Z: 3D position in the real world
- s: scale of the landmark in the real world
- o: orientation of the landmark in the real world (from top down)
- l: count of the number of times this landmark has been unseen when it should have been seen. This is like a measure of error for this landmark.

Landmark Tracking
Rules for landmarks (see the sketch after this list):
- If a landmark is expected to be seen...
- ...but is not seen, that landmark's missed count goes up by one
- ...and is seen, reset its missed count to zero
- If a new landmark is found, add it to the database and set its missed count to zero
- If the missed count goes above 20, remove the landmark
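
Putting the last two slides together, a hypothetical landmark record and the miss-count bookkeeping might look like this; the class and function are mine, with field names mirroring the slide, not the authors' data structure.

```python
from dataclasses import dataclass

@dataclass
class Landmark:
    xyz: tuple          # X, Y, Z: 3D position in the real world
    scale: float        # s: real-world scale
    orientation: float  # o: top-down orientation
    missed: int = 0     # l: times unseen when it should have been seen

MAX_MISSES = 20

def update_landmarks(db, expected_ids, seen_ids, new_landmarks):
    """db: dict id -> Landmark; new_landmarks: dict of fresh detections."""
    for lid in expected_ids:
        if lid in seen_ids:
            db[lid].missed = 0          # seen again: reset the miss count
        else:
            db[lid].missed += 1         # expected but not seen
    db.update(new_landmarks)            # new landmarks start at missed=0
    # Prune landmarks whose miss count exceeds the threshold.
    for lid in [k for k, lm in db.items() if lm.missed > MAX_MISSES]:
        del db[lid]
    return db
```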

First results
[Figure: generated map; starting point at the cross, path on the dotted line.]
Come back to the starting point and measure the error. That error is very small.

Heuristic improvements
- Only insert new landmarks once they've been seen for a few frames
- Build permanent landmarks when the environment is clear
- Associate a view vector with each landmark: if we're behind the landmark, or more than 20 degrees to the side of it, don't expect to see it (see the sketch below)
- Allow multiple features per landmark (i.e. multiple scales, orientations, and view vectors)
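
The view-vector heuristic from the third bullet, sketched under one assumption of mine: the stored view vector points from the landmark toward the pose that first observed it.

```python
import numpy as np

def expect_to_see(landmark_pos, view_vec, robot_pos, max_angle_deg=20.0):
    """Only expect a landmark to be visible when the robot is in front of
    it and within ~20 degrees of the direction it was first seen from.
    (Hypothetical helper; inputs are 3-vectors, view_vec unit length.)"""
    to_robot = robot_pos - landmark_pos
    dist = np.linalg.norm(to_robot)
    if dist == 0:
        return False
    cos_angle = np.dot(to_robot / dist, view_vec)
    if cos_angle <= 0:
        return False                    # robot is behind the landmark
    angle = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
    return angle <= max_angle_deg       # within the viewing cone
```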

Error Modeling: Robot Position
Uses a Kalman filter. During each frame, we should maintain...
- X: the state, i.e. position (x, y) and angle (yaw, pitch, roll)
- P: the uncertainty over that state, a covariance matrix (i.e. an ellipsoidal Gaussian)
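
For reference, the textbook predict/update cycle over that (X, P) pair looks like the following. This is a generic linear Kalman step, not the paper's exact filter; F, Q, H, and R are assumed motion-model and measurement-model matrices.

```python
import numpy as np

def kf_predict(X, P, F, Q, u=0.0):
    X = F @ X + u                  # propagate the state with the motion model
    P = F @ P @ F.T + Q            # uncertainty grows with process noise
    return X, P

def kf_update(X, P, z, H, R):
    S = H @ P @ H.T + R            # innovation covariance
    K = P @ H.T @ np.linalg.inv(S) # Kalman gain
    X = X + K @ (z - H @ X)        # correct the state with measurement z
    P = (np.eye(len(X)) - K @ H) @ P
    return X, P
```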

Error Modeling: Robot Position
- x_LS is the pose predicted by SIFT stereo
- P_LS is the covariance of this prediction, computed as the inverse of J^T J (where J is the Jacobian of the least-squares residuals)
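
Continuing the ego-motion sketch above: SciPy's least_squares returns the Jacobian at the solution, so P_LS is one line. This assumes unit measurement noise; in practice you would scale by the residual variance.

```python
import numpy as np

# `result` is the output of the earlier ego-motion sketch:
# result = least_squares(residuals, odometry_guess, args=(landmarks, observed))
J = result.jac                      # Jacobian of the residuals at the optimum
P_LS = np.linalg.inv(J.T @ J)       # covariance of the pose estimate
```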

Error Modeling: Landmarks
- Each landmark has a covariance matrix
- It gets updated on a frame-by-frame basis
- The math is in the paper
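
Since the slide defers the math to the paper, here is one plausible shape for that per-landmark update: a standard Gaussian fusion of the stored estimate with a new stereo measurement. This is a generic Kalman-style step, not necessarily the paper's exact formulation.

```python
import numpy as np

def fuse_landmark(mu, P, z, R):
    """Combine a stored landmark estimate (mean mu, covariance P) with a
    new stereo measurement (z, R) of the same 3D point."""
    K = P @ np.linalg.inv(P + R)       # gain weighting prior vs. measurement
    mu_new = mu + K @ (z - mu)
    P_new = (np.eye(3) - K) @ P        # the fused covariance shrinks
    return mu_new, P_new
```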

More results. [Figure: just spinning around.]

More results. [Figure: going in a path around the room.]

More results. [Figure: uncertainty over the robot position.]