Introduction to SLAM Part II Paul Robertson

Localization Review: tracking, global localization, the kidnapping problem. Kalman filter: quadratic cost, linear models (unless an EKF is used). SLAM: loop closing; scaling by partitioning the space into overlapping regions and using a rerouting algorithm. Not talked about: features, exploration.

Outline: Topological Maps, HMMs, SIFT, Vision-Based Localization.

Topological Maps. Idea: build a qualitative map in which nodes correspond to similar sensor signatures and transitions between nodes correspond to control actions.

Advantages of topological maps: they can solve the global localization problem and the kidnapping problem, they are human-like maps, they support metric localization, and they can be represented as a Hidden Markov Model (HMM).

Hidden Markov Models (HMM). Scenario: your domain is represented as a set of states, and the states define which following states are reachable from any given state. State transitions involve actions. Actions are observable, states are not. You want to be able to make sense of a sequence of actions. Examples: part-of-speech tagging, natural language parsing, speech recognition, scene analysis, location/path estimation.

Overview of HMMs: what a Hidden Markov Model is; an algorithm for finding the most likely state sequence; an algorithm for finding the probability of an action sequence (summing over all allowable state paths); an algorithm for training an HMM. HMMs only work for problems whose state structure can be characterized as an FSM in which a single action at a time is used to transition between states. They are very popular because the algorithms are linear in the length of the action sequence.

Hidden Markov Models: a finite state machine with probabilities on the arcs, written <s_1, S, W, E> where S = {s_1, ..., s_8} is the state set, W = {"Roger", ...} is the word (action) set, and E = {<transitions>} is the arc set. A transition <s_2, s_3, "had", 0.3> means $P(s_2 \xrightarrow{\text{had}} s_3) = 0.3$. The slide's figure shows an eight-state machine whose arcs carry words and probabilities (Mary, Had, A, Little, Lamb, Roger, Ordered, Hot, Big, Dog, John, Cooked, Curry, And, ...) and that generates sentences such as S1: "Mary had a little lamb and a big dog.", S2: "Roger ordered a lamb curry and a hot dog.", S3: "John cooked a hot dog curry." The probability of a sentence is the product of the arc probabilities along its path, e.g. P(S3) = 0.3*0.3*0.5*0.5*0.3*0.5 = 0.003375.
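
To make the arc-labeled machine concrete, here is a minimal Python sketch that stores transitions as (state, word) arcs and scores one state path by multiplying arc probabilities, mirroring the P(S3) computation above. The exact topology of the slide's eight-state machine is not recoverable from the transcription, so the arc fragment below is hypothetical.

```python
# Minimal sketch: an HMM as a finite state machine with probabilities on the arcs.
# The arc set below is a hypothetical fragment; the slide's full Mary/Roger/John
# machine is not completely recoverable from the transcription.

# (state, word) -> list of (next_state, probability)
ARCS = {
    ("s1", "John"):   [("s2", 0.3)],
    ("s2", "cooked"): [("s3", 0.3)],
    ("s3", "a"):      [("s4", 0.5)],
    ("s4", "hot"):    [("s5", 0.5)],
    ("s5", "dog"):    [("s6", 0.3)],
    ("s6", "curry."): [("s8", 0.5)],
}

def path_probability(arcs, start, words):
    """Multiply arc probabilities along the (single) path that emits `words`."""
    prob, state = 1.0, start
    for w in words:
        next_state, p = arcs[(state, w)][0]   # follow the first matching arc
        prob *= p
        state = next_state
    return prob

# Mirrors the slide's P(S3) = 0.3*0.3*0.5*0.5*0.3*0.5 = 0.003375
print(path_probability(ARCS, "s1", ["John", "cooked", "a", "hot", "dog", "curry."]))
```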

Finding the most likely path. Viterbi algorithm: for an action sequence of length t-1, finds $\sigma(t) = \arg\max_{s_{1,t}} P(s_{1,t} \mid w_{1,t-1})$ in linear time, by extending, for each state, the most probable state sequence that ends in that state. The slide's figure shows a two-state machine (states a and b, arcs labeled by observation symbol and probability) processing the observation sequence 1110:

  Observations seen:            ε     1     11    111    1110
  Best sequence ending in a:    a     aa    aaa   aaaa   abbba
  Probability:                  1.0   0.2   0.04  0.008  0.005
  Best sequence ending in b:    b     ab    abb   abbb   abbbb
  Probability:                  0.0   0.1   0.05  0.025  0.005
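
A compact Viterbi sketch for this two-state example follows. The arc probabilities are reconstructed from the slide's tables (an assumption, since the figure itself did not survive transcription).

```python
# Viterbi sketch for the two-state arc-emission HMM in the slide's figure.
# Arc probabilities are reconstructed from the slide's tables (an assumption,
# since the figure itself did not survive transcription).
ARCS = {  # (state, observed symbol) -> list of (next_state, probability)
    ("a", "1"): [("a", 0.2), ("b", 0.1)],
    ("a", "0"): [("a", 0.4), ("b", 0.3)],
    ("b", "1"): [("a", 0.1), ("b", 0.5)],
    ("b", "0"): [("a", 0.2), ("b", 0.2)],
}
STATES = ["a", "b"]

def viterbi(arcs, start, observations):
    """For each state, keep the most probable state sequence ending there."""
    best = {s: (1.0 if s == start else 0.0, s) for s in STATES}
    for obs in observations:
        new = {s: (0.0, "") for s in STATES}
        for prev, (p_prev, seq) in best.items():
            for nxt, p in arcs.get((prev, obs), []):
                cand = p_prev * p
                if cand > new[nxt][0]:
                    new[nxt] = (cand, seq + nxt)
        best = new
    return best

# Reproduces the slide's table for the observation sequence 1110:
# ends-in-a: 'abbba' with 0.005, ends-in-b: 'abbbb' with 0.005.
print(viterbi(ARCS, "a", "1110"))
```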

Action Sequence Probabilities. $P(w_{1,n}) = \sum_{i=1}^{\sigma} P(w_{1,n}, S_{n+1} = s_i)$. Let $\alpha_i(t)$ be the probability $P(w_{1,t-1}, S_t = s_i)$, so $P(w_{1,n}) = \sum_{i=1}^{\sigma} \alpha_i(n+1)$. $\alpha_i(1) = 1.0$ if $s_i$ is the start state and $0$ otherwise (must start in the start state). The recursion is $\alpha_j(t+1) = \sum_{i=1}^{\sigma} \alpha_i(t)\, P(s_i \xrightarrow{w_t} s_j)$.

HMM forward probabilities for the two-state machine of the previous slide and the observation sequence 1110:

  t             1     2     3     4      5
  alpha_a(t)    1.0   0.2   0.05  0.017  0.0148
  alpha_b(t)    0.0   0.1   0.07  0.04   0.0131
  P(w_{1,t})    1.0   0.3   0.12  0.057  0.0279

For example, $\alpha_b(3) = 0.2 \cdot 0.1 + 0.1 \cdot 0.5 = 0.02 + 0.05 = 0.07$.
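
The same two-state model can be used to check this table with a short forward-probability sketch (again using arc probabilities reconstructed from the slide's numbers):

```python
# Forward-probability sketch (the alpha recursion from the previous slide),
# using the same two-state HMM reconstructed from the slide's figure.
ARCS = {  # (state, observed symbol) -> list of (next_state, probability)
    ("a", "1"): [("a", 0.2), ("b", 0.1)],
    ("a", "0"): [("a", 0.4), ("b", 0.3)],
    ("b", "1"): [("a", 0.1), ("b", 0.5)],
    ("b", "0"): [("a", 0.2), ("b", 0.2)],
}
STATES = ["a", "b"]

def forward(arcs, start, observations):
    """alpha[s] = P(observations so far, current state = s); returns the table."""
    alpha = {s: (1.0 if s == start else 0.0) for s in STATES}
    table = [dict(alpha)]
    for obs in observations:
        new = {s: 0.0 for s in STATES}
        for prev, p_prev in alpha.items():
            for nxt, p in arcs.get((prev, obs), []):
                new[nxt] += p_prev * p
        alpha = new
        table.append(dict(alpha))
    return table

for t, alpha in enumerate(forward(ARCS, "a", "1110"), start=1):
    print(t, alpha, "P(w) =", sum(alpha.values()))
# Matches the slide: alpha_a = 1.0, 0.2, 0.05, 0.017, 0.0148
#                    alpha_b = 0.0, 0.1, 0.07, 0.04,  0.0131
```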

HMM Training (Baum-Welch Algorithm). Given a training sequence, adjust the HMM state transition probabilities to make the action sequence as likely as possible. Example from the slide: training on the sequence 01010210 gives symbol counts of 4, 3, and 1 (out of 8) for the symbols 0, 1, and 2, and hence probabilities 0.5, 0.375, and 0.125.
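
The slide's example appears to be the fully observable case, where training reduces to counting symbols in the training sequence and normalizing; a minimal sketch:

```python
# Sketch of the fully observable training case from the slide: training reduces
# to counting symbols in the training sequence and normalizing the counts.
from collections import Counter

training_sequence = "01010210"
counts = Counter(training_sequence)
total = sum(counts.values())
probs = {symbol: counts[symbol] / total for symbol in sorted(counts)}
print(probs)   # {'0': 0.5, '1': 0.375, '2': 0.125}, as on the slide
```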

With Hidden States. Intuitively, when counting transitions, prorate each transition by its probability. But you don't know the transition probabilities! So: 1. guess a set of transition probabilities; 2. (while (improving) (propagate-training-sequences)). "Improving" is calculated by comparing the cross-entropy after each iteration; when the cross-entropy decreases by less than ε in an iteration, we are done. The cross-entropy is $-\frac{1}{n}\sum_{w_{1,n}} P_{M_1}(w_{1,n}) \log_2 P_{M}(w_{1,n})$. (The slide's figure shows a small machine with states a, b, c and arcs on symbol 0 with probabilities 0.7 and 0.3, illustrating the prorating.)

Scale Invariant Feature Transform. David Lowe, "Distinctive Image Features from Scale-Invariant Keypoints", IJCV 2004. Stages: scale space (Witkin 83), extrema extraction, keypoint pruning and localization, orientation assignment, keypoint descriptor.

Scale space in SIFT. Motivation: objects can be recognized at many levels of detail; large distances correspond to low levels of detail, and different kinds of information are available at each level. Idea: extract the information content from an image at each level of detail. Detail reduction is done by Gaussian blurring: if I(x, y) is the input image, L(x, y, σ) is its representation at scale σ, and G(x, y, σ) is a 2D Gaussian with variance σ², then L(x, y, σ) = G(x, y, σ) * I(x, y) and D(x, y, σ) = L(x, y, kσ) − L(x, y, σ).
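
A minimal difference-of-Gaussians sketch, assuming a grayscale image and using scipy's Gaussian filter for G(x, y, σ); the scale step k is an illustrative choice:

```python
# Minimal sketch of the scale-space / difference-of-Gaussians construction.
# Uses scipy's Gaussian filter as G(x, y, sigma) * I(x, y); k is the scale step.
import numpy as np
from scipy.ndimage import gaussian_filter

def difference_of_gaussians(image, sigma, k=np.sqrt(2)):
    """D(x, y, sigma) = L(x, y, k*sigma) - L(x, y, sigma)."""
    L_sigma = gaussian_filter(image.astype(float), sigma)
    L_ksigma = gaussian_filter(image.astype(float), k * sigma)
    return L_ksigma - L_sigma

# Example on a random array (stands in for a real grayscale image).
I = np.random.rand(128, 128)
D = difference_of_gaussians(I, sigma=1.6)
print(D.shape)
```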

Features of SIFT: invariant to scale, planar rotation, contrast, and illumination; produces large numbers of features.

Difference of Gaussians.

Scale Space. Compute the local extrema of D; each extremum (x, y, σ) is a feature, and the (x, y) locations are scale and planar-rotation invariant.
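
One straightforward way to find such extrema is to compare each DoG sample against its 3×3×3 neighborhood across position and scale; the sketch below does this with scipy's rank filters (the number of levels and the value of k are illustrative choices, not from the slide):

```python
# Sketch of scale-space extrema detection: a DoG sample is a feature candidate
# if it is the maximum (or minimum) of its 3x3x3 neighborhood across x, y, sigma.
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter, minimum_filter

def dog_stack(image, sigma0=1.6, k=2 ** 0.5, levels=5):
    """Build a small stack of difference-of-Gaussians images D(x, y, sigma_i)."""
    sigmas = [sigma0 * k ** i for i in range(levels + 1)]
    L = [gaussian_filter(image.astype(float), s) for s in sigmas]
    return np.stack([L[i + 1] - L[i] for i in range(levels)]), sigmas[:levels]

def local_extrema(D):
    """Boolean mask of samples that are 3x3x3 maxima or minima in (sigma, y, x)."""
    maxima = (D == maximum_filter(D, size=3))
    minima = (D == minimum_filter(D, size=3))
    return maxima | minima

I = np.random.rand(128, 128)
D, sigmas = dog_stack(I)
print(np.argwhere(local_extrema(D)).shape)  # rows are (scale index, y, x)
```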

Pruning for Stability. Remove feature candidates that have low contrast or unstable edge responses.

Orientation Assignment. For each feature (x, y, σ): find a fixed-pixel-area patch in L(x, y, σ) around (x, y); compute a gradient orientation histogram with bins b_i; for each b_i within 80% of the maximum, make a feature (x, y, σ, b_i).
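
A sketch of the orientation-histogram step; the patch radius and 36-bin histogram are illustrative choices, since the slide does not specify them:

```python
# Sketch of orientation assignment: gradient-orientation histogram over a patch
# of L(x, y, sigma), with extra orientations created for bins within 80% of the
# peak. Patch radius and bin count are illustrative choices, not from the slide.
import numpy as np

def orientation_bins(L, x, y, radius=8, n_bins=36):
    patch = L[y - radius:y + radius, x - radius:x + radius].astype(float)
    gy, gx = np.gradient(patch)
    magnitude = np.hypot(gx, gy)
    orientation = np.arctan2(gy, gx)            # radians in (-pi, pi]
    hist, _ = np.histogram(orientation, bins=n_bins, range=(-np.pi, np.pi),
                           weights=magnitude)
    # Every bin within 80% of the maximum yields its own oriented feature.
    return np.nonzero(hist >= 0.8 * hist.max())[0]

L = np.random.rand(64, 64)                      # stands in for L(x, y, sigma)
print(orientation_bins(L, x=32, y=32))          # indices of dominant orientation bins
```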

Readings: Vision-Based SLAM. Se, S., D. Lowe, and J. Little, "Mobile Robot Localization and Mapping with Uncertainty using Scale-Invariant Visual Landmarks", The International Journal of Robotics Research, Volume 21, Issue 8. Kosecka, J., L. Zhou, P. Barber, and Z. Duric, "Qualitative Image Based Localization in Indoor Environments", CVPR 2003.

Predictive Vision-Based SLAM. 1. Compute SIFT features from the current location. 2. Use stereo to locate the features in 3D. 3. Move. 4. Predict the new location based on odometry and a Kalman filter. 5. Predict the locations of the SIFT features based on the motion of the robot. 6. Find the SIFT features and compute the 3D position of each. 7. Compute a position estimate from each matched feature.

Vision-Based Localization. Acquire a video sequence during the exploration of a new environment. Build an environment model in terms of locations and the spatial relationships between them. Topological localization is done by means of location recognition; metric localization by computing the relative pose of the current view and the representation of the most likely location.

Same Location?

Global Topology, Local Geometry. Issues: 1. representation of individual locations; 2. learning the representative location features; 3. learning neighborhood relationships between locations. Approach: each view is represented by a set of SIFT features; locations correspond to sub-sequences across which features can be matched successfully; spatial relationships between locations are captured by a location graph.

Image Matching. 10–500 features per view. For each feature, find the discriminative nearest-neighbor feature. Image distance (score): the number of successfully matched features.
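
One common reading of "discriminative nearest neighbor" is Lowe's ratio test, where a match is accepted only if the nearest descriptor is clearly closer than the second nearest; the sketch below scores an image pair that way (the 0.8 threshold is an illustrative choice, not from the slide):

```python
# Sketch of descriptor matching with a nearest-neighbour distinctiveness test
# (Lowe's ratio test), one common reading of "discriminative nearest neighbour".
import numpy as np

def match_score(desc_a, desc_b, ratio=0.8):
    """Number of features in desc_a whose nearest neighbour in desc_b is distinctive."""
    matches = 0
    for d in desc_a:
        dists = np.linalg.norm(desc_b - d, axis=1)
        nearest, second = np.sort(dists)[:2]
        if nearest < ratio * second:      # clearly better than the runner-up
            matches += 1
    return matches                        # the slide's image distance (score)

# Example with random 128-D SIFT-like descriptors.
a = np.random.rand(200, 128)
b = np.random.rand(300, 128)
print(match_score(a, b))
```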

Partitioning the Video Sequence. Transitions are determined during exploration. A location is a sub-sequence across which features can be matched successfully. Location representation: a set of representative views and their associated features.


Location Recognition. Given a single view, which location did it come from? Recognition uses a voting scheme: for each representative view selected in the exploration stage, 1. compute the number of matched features; 2. the location with the maximum number of matches is the most likely location.
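
A minimal sketch of the voting scheme, taking precomputed match counts (as in the matching sketch above) and returning the best-scoring location; the location names and scores are hypothetical:

```python
# Sketch of the location-recognition voting scheme: given the number of matched
# features against each representative view (computed as in the previous sketch),
# pick the location with the maximum number of matches.
def recognize_location(match_counts):
    """match_counts: list of (location_id, number of matched features)."""
    best = {}
    for location, count in match_counts:
        best[location] = max(best.get(location, 0), count)
    return max(best, key=best.get)

# Hypothetical scores for one query view against four representative views.
print(recognize_location([("corridor", 12), ("corridor", 30),
                          ("lab", 45), ("office", 7)]))   # -> 'lab'
```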

Markov Localization in the Topological Model. Exploit the spatial relationships between the locations. S: a discrete set of states, L × {N, W, S, E} (locations and orientations). A: a discrete set of actions (N, W, S, E). T(S, A, S'): the transition function. This is a discrete Markov model.

Markov Localization. The location posterior combines the observation likelihood with the prediction: $P(L_t = l_i \mid o_{1:t}) \propto P(o_t \mid L_t = l_i)\, P(L_t = l_i \mid o_{1:t-1})$. The observation likelihood P(image | location) is the normalized match score, $P(o_t \mid L_t = l_i) = C(i) / \sum_j C(j)$, and the prediction uses the location transition probability matrix A: $P(L_t = l_i \mid o_{1:t-1}) = \sum_j A(i,j)\, P(L_{t-1} = l_j \mid o_{1:t-1})$.
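
A sketch of this update as a discrete Bayes filter over locations; the three-location transition matrix and match scores below are hypothetical:

```python
# Sketch of the Markov-localization update on the location graph: the prediction
# uses the location transition matrix A, and the observation likelihood is the
# normalized match score C(i), as in the formulas above.
import numpy as np

def markov_localization_step(belief, A, match_scores):
    """belief: P(L_{t-1} | o_{1:t-1});  A[i, j] = P(L_t = i | L_{t-1} = j)."""
    predicted = A @ belief                          # P(L_t = l_i | o_{1:t-1})
    likelihood = match_scores / match_scores.sum()  # P(o_t | L_t = l_i) = C(i)/sum_j C(j)
    posterior = likelihood * predicted
    return posterior / posterior.sum()              # normalize to a distribution

# Three locations; mostly stay put, sometimes move to a neighbour (hypothetical).
A = np.array([[0.8, 0.1, 0.1],
              [0.1, 0.8, 0.1],
              [0.1, 0.1, 0.8]])
belief = np.array([1 / 3, 1 / 3, 1 / 3])
scores = np.array([5.0, 40.0, 8.0])                # hypothetical C(i) match counts
print(markov_localization_step(belief, A, scores))
```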

HMM Recognition. Using the HMM improved the recognition rate from 82% to 96% in experimental tests.

Metric Localization within a Location. 1. Given the closest representative view of the location, 2. establish exact correspondences between keypoints, 3. perform probabilistic matching combining (epipolar) geometry, keypoint descriptors, and intrinsic scale, and 4. compute the relative pose with respect to the reference view.

Wrap up. What we have covered: supporting methods (Kalman filter, HMM, SIFT); localization and mapping (basic SLAM, large-scale SLAM (Leonard), topological maps, vision-based localization/SLAM).