CSE 527: Introduction to Computer Vision

1 CSE 527: Introduction to Computer Vision. Week 10, Class 2: Visual Odometry. November 2nd, 2017.

2 Today: Visual Odometry (intro, algorithm), SLAM.

3 Visual Odometry. Input: images, video. Output: camera trajectory / motion path (preferably in 3D). [YouTube]

4 Ways to Measure Motion.
GPS: doesn't work indoors; inaccurate (triangulation from satellites).
Odometry (wheels): problems on slippery terrain; drift.
IMU (accelerometer, gyroscope): measures the 2nd derivative; 6DOF; drift.
WiFi / Bluetooth beacons: expensive; require instrumented environments.

5 Visual Odometry: Cameras as a Motion Sensor.
Cheap: size, price, power draw. Ubiquitous. More information than just motion. Passive sensor. Needs processing power. Follows nature. Works in GPS-denied environments (underwater, in the sky, on Mars!).
A camera is a bearing sensor: it measures an object's angle w.r.t. the optical axis (i.e. you can navigate by the stars!).
Depth (range) cameras (stereo, structured light, ToF) make odometry even easier.

6 Visual Odometry. Basic idea: estimate the 6DOF transformation (R, t) incrementally. Use only static world features; we need to be robust to moving objects. Detect, extract, track.
We can achieve this in several ways:
2D-2D: monocular, E and F matrices.
3D-3D: stereo, range cameras, LIDAR.
2D-3D: reconstruction (SfM).
(Figure: a 3D point P projects to image points p1 and p2 on the image planes of camera optical centers O1 and O2, related by [R t].)

7 Visual Odometry Algorithm:
1. Get image I_k.
2. Compute correspondences between I_{k-1} and I_k (either feature matching or tracking).
3. Find the correct correspondences and compute the essential matrix E.
4. Decompose E into R_k and t_k.
5. Compute a 3D model (triangulate points X).
6. Rescale t_k according to the relative scale r.
7. k = k + 1; repeat.
We saw: #2 in Weeks 3 and 4, #3 in Weeks 8 and 9, #4 in Weeks 8 and 9, #5 in Weeks 9 and 10 (last time). But #6??
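To make steps 1-6 concrete, here is a minimal monocular sketch using OpenCV. The intrinsics K, the ORB / brute-force matching choices, and the frame handling are assumptions of mine, not the course's reference implementation.

```python
# Minimal monocular VO sketch (hedged: K and the feature choices are placeholders).
import cv2
import numpy as np

K = np.array([[718.856, 0.0, 607.19],   # hypothetical camera intrinsics
              [0.0, 718.856, 185.22],
              [0.0, 0.0, 1.0]])

def relative_pose(img_prev, img_curr):
    """Steps 2-4: correspondences -> essential matrix -> (R, t), with t only up to scale."""
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(img_prev, None)
    kp2, des2 = orb.detectAndCompute(img_curr, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
    # RANSAC rejects wrong correspondences while estimating E (step 3)
    E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
    # Decompose E and pick the physically valid (R, t) by cheirality (step 4)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
    return R, t   # ||t|| is arbitrary: the scale must come from elsewhere (step 6)

# Step 7 is just the loop: accumulate the relative motions frame by frame,
# C_k = C_{k-1} * [R | t], to build the trajectory.
```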

8-15 (Figure slides adapted from [Lazebnik]; images not transcribed.)

16 Even if we have a good K matrix (and can get the E matrix), we can still only obtain a similarity-ambiguous reconstruction... [Lazebnik]

17 Scale Ambiguity. The epipolar constraint x_R E x_L^T = 0 determines E only up to scale: 2 (x_R E x_L^T) = x_R (2E) x_L^T = 0, and likewise 5 (x_R E x_L^T) = x_R (5E) x_L^T = 0. So E_1 (from one frame pair) can have one scale, and E_2 (from the next) a different scale. What can we do?... Hint: what can we get with E and 2D features in both images?
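One common answer, sketched below, is the relative-scale idea the hint points at (my own illustration, not code from the lecture): with E and 2D correspondences we can triangulate 3D points for each frame pair, and ratios of distances between the same points in two successive reconstructions give the relative scale r used in step 6.

```python
# Relative scale from triangulated point clouds (illustrative sketch).
import numpy as np

def relative_scale(X_prev, X_curr):
    """X_prev, X_curr: (N, 3) arrays of the SAME physical points, triangulated
    from frame pairs (k-2, k-1) and (k-1, k). Each cloud has its own arbitrary
    scale; ratios of pairwise distances recover the ratio r between them."""
    i, j = np.triu_indices(len(X_prev), k=1)
    d_prev = np.linalg.norm(X_prev[i] - X_prev[j], axis=1)
    d_curr = np.linalg.norm(X_curr[i] - X_curr[j], axis=1)
    return np.median(d_prev / d_curr)   # median for robustness to bad points

# t_k is then rescaled as r * t_k; the overall global scale remains unknown.
```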

18 (Figure slide, [Scaramuzza]; image not transcribed.)

19 Questions?

20 Visual Odometry. What other way do we know to sequentially find R_{k+1}, t_{k+1}? Known 3D points: assume we already have R_k, t_k and a set of 3D points we triangulated from them.

21 Visual Odometry. What other way do we know to sequentially find R_{k+1}, t_{k+1}? Known 3D points: assume we already have R_k, t_k and a set of 3D points we triangulated from them. We can use the PnP algorithm! Use proxy 2D-3D correspondences:
Match 2D features from P3 to P2.
Get the estimated 3D points for the features in P2.
Use those 3D points with the proxy 2D points by association.
Solve robust PnP as usual.
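A hedged sketch of this PnP step: the function below assumes OpenCV, a calibration matrix K, and arrays of already-associated 3D/2D correspondences (the names are mine, not from the slides). Note that solvePnP-style routines return the world-to-camera transform, which you invert to get the camera pose.

```python
# Robust PnP step (sketch), using OpenCV's RANSAC variant.
import cv2
import numpy as np

def pose_from_pnp(object_points, image_points, K):
    """object_points: (N, 3) triangulated 3D points; image_points: (N, 2) pixel
    observations in the new frame. Returns the world-to-camera (R, t) plus inliers."""
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        object_points.astype(np.float32),
        image_points.astype(np.float32),
        K, None,                                # assume no lens distortion
        reprojectionError=3.0, iterationsCount=100)
    R, _ = cv2.Rodrigues(rvec)                  # rotation vector -> 3x3 matrix
    return R, tvec, inliers                     # invert [R | t] for the camera pose
```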

22 PTAM (Parallel Tracking and Mapping) [Klein, Murray, 2007] [YouTube]

23 Simultaneous Localization and Mapping (SLAM). SLAM is basically VO with a few extra tricks to make it more complete: loop closing, maintaining the space map, maintaining a pose graph (motion path), and modeling uncertainty. The boundary between SLAM and VO is vague, and the terms are used interchangeably. VO is usually used directly for odometry (measuring distance, speed) and navigation. SLAM has more applications: mapping or charting a space, and (re)localizing within a known space. SLAM is a more probabilistic approach to VO. [Lv]

24-28 (Figure slides adapted from [Siegwart]; images not transcribed.)

29 Modeling Uncertainty: MonoSLAM, an Extended Kalman Filter (EKF) based SLAM [Davison 2003]. The state of the KF contains the camera pose (6 DOF) and all feature positions (!!). Does not scale well (up to ~1000 features).
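A back-of-the-envelope illustration of why this does not scale (my own, assuming the standard 13-dimensional MonoSLAM camera state and 3-dimensional point features): the EKF keeps a dense joint covariance over the camera and every mapped feature, so memory grows quadratically, and the update cost even faster, with the number of features.

```python
# Growth of the EKF SLAM state and covariance with map size (illustrative only).
CAMERA_DIMS = 13    # MonoSLAM camera state: position, quaternion, linear & angular velocity
FEATURE_DIMS = 3    # one 3D position per mapped feature

for n_features in (100, 1000, 10000):
    n = CAMERA_DIMS + FEATURE_DIMS * n_features
    print(f"{n_features:6d} features -> state dim {n:6d}, covariance entries {n * n:,}")
```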

30 Loop Closure. Determine if we have visited this place before, and connect the pose graph. Features: points (2D, 3D), patches (intensities), lines (edges). We need to do matching: a search problem with a good prior (the current location). [Strasdat]
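As a toy illustration of that matching problem (not the lecture's actual method), the sketch below scores a candidate keyframe against the current frame by counting consistent ORB matches; real systems restrict the candidates using the location prior and verify geometrically before adding a loop-closure edge.

```python
# Toy place-recognition score between the current frame and a stored keyframe.
import cv2

orb = cv2.ORB_create(1000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def match_score(img_current, img_keyframe):
    _, des_c = orb.detectAndCompute(img_current, None)
    _, des_k = orb.detectAndCompute(img_keyframe, None)
    if des_c is None or des_k is None:
        return 0
    matches = matcher.match(des_c, des_k)
    # count only close descriptor matches; many survivors suggest a revisit
    return sum(1 for m in matches if m.distance < 50)

# A high score would then be verified geometrically (e.g., estimate E with RANSAC)
# before connecting the two poses in the graph.
```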

31 Iterative Closest Point (ICP). Algorithm:
For each source point, match the closest reference point.
Estimate the combination of rotation and translation (rejecting outlier points).
Transform the source points using the obtained transformation.
Iterate (re-associate the points, and so on).
Key issues for a robust solution: finding (choosing neighbors), matching (choosing a metric), estimating (robustly).
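The steps above map almost directly to code. Below is a minimal point-to-point ICP sketch (my own illustration, assuming numpy/SciPy and two point clouds given as (N, 3) arrays), with a simple distance-quantile rejection standing in for the robust estimation the slide mentions.

```python
# Minimal point-to-point ICP sketch.
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares R, t aligning src onto dst (Kabsch / SVD)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, mu_d - R @ mu_s

def icp(source, reference, n_iters=30, reject_frac=0.2):
    tree = cKDTree(reference)
    src = source.copy()
    for _ in range(n_iters):
        dists, idx = tree.query(src)                                  # match closest reference point
        keep = dists < np.quantile(dists, 1.0 - reject_frac)          # reject outlier pairs
        R, t = best_rigid_transform(src[keep], reference[idx[keep]])  # estimate R, t
        src = src @ R.T + t                                           # transform source points
    return best_rigid_transform(source, src)                          # accumulated transform
```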

32 ICP-based Loop Closure [Fallon et al. 2012]

33 ICP-based (volumetric) SLAM [Newcombe 2011]

34 Wrap-up. What have we learned today: Visual Odometry (intro, algorithm, ambiguity), SLAM (intro, ICP).
