Dealing with Scale. Stephan Weiss Computer Vision Group NASA-JPL / CalTech


Dealing with Scale. Stephan Weiss, Computer Vision Group, NASA-JPL / CalTech. Stephan.Weiss@ieee.org. (c) 2013. Government sponsorship acknowledged.

Outline
- Why care about size?
- The IMU as scale provider: the idea, closed form
- Tightly vs. loosely coupled
- Variable scale in optical flow

Recap: Monocular visual odometry is up to scale
If the 3D coordinates of P are unknown and we measure only its projection in two images:
- Compute the essential matrix using 5 or more points
- The problem has dimension 5, i.e. the motion is recovered only up to a global scale factor
- This is the same as the (old) Hollywood effect: a scaled-down scene with a scaled-down baseline produces exactly the same images
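A minimal numeric illustration of this ambiguity (hypothetical points and poses, unit intrinsics; a sketch, not part of the original slides): scaling the scene and the camera baseline by the same factor leaves every image projection, and hence the essential-matrix geometry, unchanged.

```python
import numpy as np

def project(P_w, R, t):
    """Project world points P_w (N x 3) into a camera with pose (R, t); unit intrinsics."""
    P_c = (R @ P_w.T).T + t           # transform into the camera frame
    return P_c[:, :2] / P_c[:, 2:3]   # perspective division

# Hypothetical scene (points a few metres in front of the camera) and two poses
rng = np.random.default_rng(0)
P = rng.random((10, 3)) + np.array([0.0, 0.0, 5.0])
R1, t1 = np.eye(3), np.zeros(3)
R2, t2 = np.eye(3), np.array([0.3, 0.0, 0.0])      # pure baseline translation

for s in (1.0, 2.0, 10.0):
    # Scale the scene AND the baseline by the same factor s
    same1 = np.allclose(project(s * P, R1, s * t1), project(P, R1, t1))
    same2 = np.allclose(project(s * P, R2, s * t2), project(P, R2, t2))
    print(s, same1, same2)            # True, True for every s: the images are identical
```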

Recap: Monocular visual odometry is up to scale
Recovering the scale if the 3D points are known:
- P3P problem (e.g. stereo vision)
- Requires known markers
- Popular: AR Toolkit, AprilTags
[Meier et al. EMAV 2010]

Recap: Monocular visual odometry is up to scale
Recovering the scale if the 3D points are known:
- P3P problem (e.g. stereo vision)
- Requires known markers
- Popular: AR Toolkit, AprilTags
- Typically, there are no known markers in the real world
[Weiss et al. JFR 2013] (figure: typical outdoor scene)

Why Care About Size?
- Metric information is used for robot control
- Parameter tuning is necessary if no scale is available
- The scale can change arbitrarily from run to run
- It can be constrained up to a certain level (e.g. Mahony et al.), but this is usually impractical for fast-deployable platforms
[Herisse et al. T-RO 2012]

Why Care About Size?
- Metric information is used for robot control
- Parameter tuning is necessary if no scale is available
- The scale can change arbitrarily from run to run
- It can be constrained up to a certain level (e.g. Mahony et al.), but this is usually impractical for fast-deployable platforms
- Ground robots are less affected than aerial/unstable platforms: they can hold their actuators to enter a stable regime (ground robot? multicopter?)

IMU as a Scale Provider
- Accelerometers carry metric information and provide it at a high rate
- Pure IMU integration drifts quadratically in time
- High-grade IMUs can be integrated over long periods, but cost-effective robots usually carry MEMS IMUs: very noisy, with relatively fast bias drift
- VO only drifts spatially (when observing new features)
- An intelligent combination of IMU and visual odometry (VO) can provide the scale (and more!)

Inertial sensor (IMU): pros: fast sampling, scaled (metric) units; cons: biased, noisy data, no velocity or position, large drift
Visual sensor (camera): pros: position and attitude, slow spatial drift; cons: slow sampling, unscaled units

IMU as a Scale Provider: The Idea
Idea for combining IMU and VO to retrieve the scale:
- Compare integrated IMU data to the VO: problem with drift
- Compare differentiated VO data to the IMU: increased noise
Methods:
- Closed-form least-squares solution over short periods: noisy, fails if the motion is not sufficiently exciting, no scale propagation, but good for initialization. The scale λ makes the (gravity-compensated) accelerometer data consistent with the second time derivative of the VO position, a_IMU ≈ λ · d²(p_VO)/dt² (a small numeric example follows below)
- Probabilistic approach (e.g. GTSAM, iSAM, EKF)
[Kneip et al. ICRA 2011]
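A sketch of the closed-form idea under simplifying assumptions (gravity already removed, IMU and camera frames aligned, synchronized timestamps; all names are illustrative, not from the original slides): fit the scale λ that best maps the second derivative of the VO position to the measured acceleration in a least-squares sense.

```python
import numpy as np

def estimate_scale(vo_pos, acc, dt):
    """Least-squares scale estimate over a short window.

    vo_pos : (N, 3) up-to-scale VO positions
    acc    : (N, 3) accelerometer samples, gravity-compensated, same frame as vo_pos
    dt     : sample period
    Assumes synchronized, bias-free data and sufficient excitation.
    """
    vo_acc = np.gradient(np.gradient(vo_pos, dt, axis=0), dt, axis=0)  # d^2/dt^2 of VO position
    a, b = vo_acc.ravel(), acc.ravel()
    # acc ≈ lam * vo_acc  ->  lam = argmin_lam || lam * a - b ||^2
    return float(a @ b / (a @ a))

# Illustrative check with noise-free synthetic data (true scale = 3.0)
dt = 0.01
t = np.arange(0.0, 2.0, dt)
pos = np.stack([np.sin(2 * t), np.cos(3 * t), 0.1 * t ** 2], axis=1)   # metric trajectory
acc = np.gradient(np.gradient(pos, dt, axis=0), dt, axis=0)            # ideal accelerometer
print(estimate_scale(pos / 3.0, acc, dt))                              # ≈ 3.0
```

With real data the differentiation amplifies noise and the estimate degrades when the acceleration is small, which is exactly why the slides treat this as an initialization tool rather than a continuous estimator.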

IMU as a Scale Provider: A Probabilistic Approach
- Bundle-adjustment setup using IMU readings as edges in a graph: GTSAM, iSAM
  (factor graph: odometry measurements, e.g. from the IMU, connect robot poses; landmark measurements connect poses to landmark positions)
- The problem complexity can grow rapidly with high-rate IMU readings
- Use pre-integrated IMU terms (Indelman et al. RAS 2013)
- Use continuous-time batch optimization (Furgale et al. ICRA 2012)

IMU as a Scale Provider: An EKF Approach
- EKF setup uses the IMU in the motion model
- Do not use the IMU as a measurement/update: this would require an EKF update (i.e. a matrix inversion) at IMU rate
- A motion model without the IMU is difficult: it cannot model external disturbances, and its process noise is hard to assess
- How do we implement the update?
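A strongly simplified sketch of what "IMU in the motion model" means (1-D state, no biases or attitude; everything here is illustrative): the accelerometer drives the prediction step, which is only matrix products and therefore cheap to run at IMU rate, while the expensive inversion is deferred to the low-rate vision update.

```python
import numpy as np

# State: [position, velocity] in 1-D; the accelerometer reading acts as the control input.
def F(dt):
    return np.array([[1.0, dt], [0.0, 1.0]])        # constant-velocity transition

def G(dt):
    return np.array([[0.5 * dt ** 2], [dt]])        # how acceleration enters the state

def propagate(x, P, acc_meas, dt, acc_noise_var=0.1):
    """EKF prediction driven by a single IMU sample (no matrix inversion needed)."""
    x = F(dt) @ x + G(dt).ravel() * acc_meas
    P = F(dt) @ P @ F(dt).T + G(dt) @ G(dt).T * acc_noise_var
    return x, P

# Usage: integrate 100 IMU samples at 200 Hz
x, P = np.zeros(2), np.eye(2) * 0.01
for _ in range(100):
    x, P = propagate(x, P, acc_meas=0.5, dt=1.0 / 200.0)
print(x, np.diag(P))
```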

EKF Approach: Tightly Coupled
- Tightly coupled approach: very high computational complexity
- 3D features in the state need special handling
- Idea: reduce complexity by not including the 3D feature positions in the filter
[Mourikis & Roumeliotis ICRA 2007]

EKF Approach: Tightly Coupled
- Tightly coupled approach: remove the feature positions from the state
- This corrupts the Gaussian noise assumption of the EKF, but works well in practice
- Residuals: multiplying the residual by A, a basis of the left nullspace of H_f, eliminates the feature-position error from the update (see the sketch below)
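A sketch of this nullspace trick in the style of Mourikis & Roumeliotis' MSCKF, with made-up matrix sizes (not code from the original work): the linearized residual r = H_x δx + H_f δp_f + n is left-multiplied by a basis A of the left nullspace of H_f, so the unknown feature error drops out and the feature never has to enter the state.

```python
import numpy as np
from scipy.linalg import null_space

def marginalize_feature(r, Hx, Hf):
    """Project the residual onto the left nullspace of Hf so that the 3D feature
    position drops out of the EKF update (it never enters the state)."""
    A = null_space(Hf.T)              # columns span the left nullspace of Hf
    return A.T @ r, A.T @ Hx          # reduced residual and reduced Jacobian

# Illustrative sizes: one feature observed in 4 camera poses -> 8 residual rows,
# 3 columns for the feature error, 24 columns for the camera-pose part of the state.
rng = np.random.default_rng(1)
r  = rng.normal(size=8)
Hx = rng.normal(size=(8, 24))
Hf = rng.normal(size=(8, 3))
r0, H0 = marginalize_feature(r, Hx, Hf)
print(r0.shape, H0.shape)             # (5,) and (5, 24): 8 - 3 rows remain
```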

EKF Approach: Tightly Coupled
- Runs at more than 10 Hz on a mobile phone processor
- Accurate scaling
- Works well even on large trajectories with low acceleration, where a closed-form solution would be difficult to use

EKF Approach: Loosely Coupled
- Loosely coupled: faster, less complex, modular; but needs failure detection
- Modularity: use the previously discussed VO/SLAM modules as black boxes, at the cost of filter consistency
- The update includes a 6x6 matrix inversion (camera pose measurement)
[Weiss et al. ICRA 2011]
(block diagram: image features feed a visual motion sensor, either visual odometry or optical-flow velocity, which together with the IMU and additional sensors feeds a loosely coupled EKF; design goals: state estimation, self-calibration, on-board computation, modular design)
[Weiss PhD 2012]

EKF Approach: Loosely Coupled
Augment the system model:
- Include the visual scale as a separate state
- It has constant dynamics for VO and VSLAM: the scale drifts spatially, not temporally, with keyframe-based VO
- It is not constant in optical-flow approaches (see later)
- The previous dynamics remain unchanged
(figure: state vector; see the equation sketch below)
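A sketch of the augmentation in equation form (the exact state ordering is illustrative; the camera-to-IMU extrinsics, which the same framework also estimates, are omitted here): the visual scale λ is simply appended to the usual IMU states and given trivial dynamics, since with keyframe-based VO it only changes when the map drifts spatially.

```latex
x = \begin{bmatrix} p^T & v^T & q^T & b_a^T & b_\omega^T & \lambda \end{bmatrix}^T,
\qquad \dot{\lambda} = 0 .
```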

EKF Approach: Loosely Coupled
- Simple, constant-size update, independent of the number of features (sketch below)
- Complexity is distributed between the vision module and the EKF module
- Can be implemented in different architectures (or run on different cores)
- The measurement is the camera pose: a position residual and a multiplicative attitude residual (formed with the inverse of the predicted quaternion)
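A sketch of why the update stays constant-size (illustrative dimensions; in a real implementation the attitude correction is applied multiplicatively on the quaternion): whatever the VO module tracks internally, the filter only ever sees a 6-DoF pose measurement, so the innovation covariance to invert is always 6x6.

```python
import numpy as np

def pose_update(x, P, z, z_pred, H, R_meas):
    """Generic EKF update with a 6-DoF pose measurement (3 position + 3 attitude-error).

    x, P    : state estimate and covariance (any dimension)
    z       : measured camera pose from the VO black box, as a 6-vector
    z_pred  : predicted measurement from the current state, as a 6-vector
    H       : 6 x dim(x) measurement Jacobian
    R_meas  : 6 x 6 measurement noise covariance
    The matrix inverted below is always 6x6, independent of the number of features."""
    S = H @ P @ H.T + R_meas               # 6x6 innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x = x + K @ (z - z_pred)               # correction (attitude handled multiplicatively in practice)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P
```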

EKF Approach: Loosely Coupled
Implementation: the EKF module uses minimal computational power
- The bottleneck is the covariance propagation, not the update step: use block-sparse methods to reduce complexity (see the sketch below)
- Setup on a 1.7 GHz quad-core cell phone processor:
  - 55 Hz visual SLAM: uses 1.5 cores (ethzasl_ptam: wiki.ros.org/ethzasl_ptam)
  - 100 Hz EKF module: uses 0.2 cores (ethzasl_sensor_fusion: wiki.ros.org/ethzasl_sensor_fusion)
- Sufficient computational power remains free for high-level tasks, e.g. autonomous safe landing at 2 Hz (Brockers et al. CVPR 2014)
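One way to read the block-sparse remark, sketched under the assumption that only the first n_c "core" states (position, velocity, attitude error) have non-trivial dynamics while the calibration states (biases, scale, extrinsics) get an identity transition: only two covariance blocks have to be recomputed per propagation step, the rest stay untouched.

```python
import numpy as np

def propagate_cov_blockwise(P, Fc, Qc, n_c):
    """Covariance propagation for F = blockdiag(Fc, I) and Q = blockdiag(Qc, 0).

    P   : full (n x n) covariance
    Fc  : (n_c x n_c) transition of the dynamic core states
    Qc  : (n_c x n_c) process noise of the core states
    n_c : number of core states; the remaining states are (near-)static calibration states
    """
    P = P.copy()
    P[:n_c, :n_c] = Fc @ P[:n_c, :n_c] @ Fc.T + Qc   # core-core block
    P[:n_c, n_c:] = Fc @ P[:n_c, n_c:]               # core-calibration cross block
    P[n_c:, :n_c] = P[:n_c, n_c:].T                  # keep the covariance symmetric
    return P                                          # calibration-calibration block is untouched
```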

Power-On-And-Go
From scale estimation to self-calibrating platforms:
- Do not stop at scale: use the full capabilities of fusing IMU and vision
- Particularly: a gravity-aligned navigation frame and a self-calibrating sensor suite
- Yields power-on-and-go robots
[Weiss et al. JFR 2013]
(figure: IMU, camera and gravity in the world frame; estimated state groups: MAV control states (pos., vel., att. for metric control), IMU intrinsics (b_acc, b_gyr), sensor extrinsics (trans., rot.), visual drifts (scale, drift_r, drift_p); continuous self-calibration)

Power-On-And-Go
From scale estimation to self-calibrating platforms [Weiss et al. JFR 2013]

Variable Scale in Optical Flow
Back to the basics: frame-to-frame motion estimation
- The epipolar constraint yields R, T between two camera poses: E = [T]× R
- With a high frame rate we enter the time-continuous domain: the continuous epipolar constraint yields the linear and angular velocities of the current camera frame
- This is a 5-dimensional problem. Reduce it to 2D: use the IMU readings for ω, de-rotate the features, and set ω = 0 (de-rotation sketch below)
[Weiss et al. IROS 2013]
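A sketch of the de-rotation step (assumptions: normalized image coordinates from a calibrated camera and gyro rates already rotated into the camera frame; the flow equations used are the standard continuous-motion ones): subtracting the rotation-induced flow leaves a field that depends only on the translational velocity divided by the feature depth, which is the remaining 2-dimensional direction problem.

```python
import numpy as np

def derotate_flow(pts, flow, omega):
    """Remove the rotation-induced component of optical flow.

    pts   : (N, 2) normalized image coordinates (x, y)
    flow  : (N, 2) measured flow [dx/dt, dy/dt]
    omega : (3,)   gyro rates (wx, wy, wz) expressed in the camera frame
    Returns the translational flow, which scales with velocity over feature depth.
    """
    x, y = pts[:, 0], pts[:, 1]
    wx, wy, wz = omega
    rot_flow = np.stack([
        x * y * wx - (1.0 + x ** 2) * wy + y * wz,   # rotational x-flow
        (1.0 + y ** 2) * wx - x * y * wy - x * wz,   # rotational y-flow
    ], axis=1)
    return flow - rot_flow
```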

Variable Scale in Optical Flow
- Optical flow (OF) is proportional to the ratio of velocity and feature distance: OF ∝ v / λ
- Fusing IMU and OF disambiguates scale and velocity
- The scale is proportional to the scene distance, so a motion model for scale propagation can be applied
- Allows fast scale tracking during agile motion
[Weiss et al. IROS 2013]

Variable Scale in Optical Flow
Optical-flow benefits:
- Only two consecutive frames are needed: no feature history, scale propagation, or map
- The IMU is used to reduce the problem to 2 dimensions: only 2 features are needed
- Very fast computation and RANSAC outlier rejection
- BUT: no position estimation (other than the scene distance)
[Weiss et al. IROS 2013]
(figure: IMU, camera and gravity in the world frame; estimated state groups: MAV control states (plane distance, velocity, attitude for metric control), IMU intrinsics (b_acc, b_gyr), sensor extrinsics (trans., rot.), visual drift (scale); continuous self-calibration; plot axes: yaw, v [m/s], d [m])

Throw-And-Go
From scale estimation to fail-safe, self-calibrating platforms:
- Loosely coupled implementation: 50 Hz on one core of a 1.7 GHz quad-core CPU
- Inherently fail-safe: uses only 3 feature matches in 2 consecutive images
- No feature history nor local map
[Weiss et al. IROS 2013]

Modularity: Other Sources to Estimate Scale and Drift
Additional sensors:
- UWB range-sensing module: accuracy 2 cm (TimeDomain)
- Use more than just scale information: other cameras (stereo vision), visual patterns, GPS, UWB range sensing, air pressure
- Observability analysis for additional states; self-calibration is crucial for long-term operation. Literature: Martinelli FTR 2013
Modular multi-sensor fusion available online: wiki.ros.org/ethzasl_sensor_fusion
Theory: Weiss PhD Thesis 2012
To start: wiki.ros.org/ethzasl_sensor_fusion

Q&A