Incremental Light Bundle Adjustment for Robotics Navigation

Incremental Light Bundle Adjustment for Robotics Navigation
Vadim Indelman, Andrew Melim, Frank Dellaert
Robotics and Intelligent Machines (RIM) Center, College of Computing, Georgia Institute of Technology

Introduction

Robot navigation: recover the state of a moving robot over time through fusion of multiple sensors, including a monocular camera. This is the Simultaneous Localization and Mapping (SLAM) problem.

Left image courtesy of Georgia Tech Research Institute; right image courtesy of Chris Beall.

Vision-Aided Robot Navigation

Fusion of monocular image measurements and IMU measurements. Full joint pdf:

p(X, L, B \mid Z) \propto \prod_{i=1}^{N} \Big( p(x_i \mid x_{i-1}, b_i, z^{IMU}_{i-1,i}) \prod_{j \in M_i} p(z^{VIS}_{i,j} \mid x_i, l_j) \Big)

where
- N : number of robot states
- M_i : set of observed 3D points at state i
- l_j : j-th 3D point
- x_i : robot state at time i (pose and velocity)
- z^{VIS}_{i,j} : image observation of point j at state i
- z^{IMU}_{i-1,i} : IMU measurement between states i-1 and i
- b_i : IMU bias at time i

Image from: http://www.tnt.uni-hannover.de/project/motionestimation

Vision-Aided Robot Navigation (cont.)

Assuming Gaussian distributions, the MAP estimate

\hat{X}, \hat{L}, \hat{B} = \arg\max_{X, L, B} \, p(X, L, B \mid Z)

is obtained by minimizing the cost function

J(X, L, B) \doteq \sum_{i=1}^{N} \Big( \big\| x_i - \mathrm{pred}(x_{i-1}, b_i, z^{IMU}_{i-1,i}) \big\|^2_{\Sigma_{IMU}} + \sum_{j \in M_i} \big\| z^{VIS}_{i,j} - \pi(x_i, l_j) \big\|^2_{\Sigma_{VIS}} \Big)

where \pi(x_i, l_j) is the pinhole projection of the 3D point l_j onto the image plane of view i, composed from the calibration matrix K_i, rotation R_i and translation t_i, and \| a \|^2_{\Sigma} \doteq a^T \Sigma^{-1} a is the Mahalanobis squared distance.
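To make the cost concrete, here is a minimal NumPy sketch of evaluating J(X, L, B). The helpers pred and project stand in for the IMU prediction model and the camera projection \pi; they, and the container layouts, are assumptions for illustration, not part of the original slides.

```python
import numpy as np

def mahalanobis_sq(r, Sigma):
    """Mahalanobis squared distance ||r||^2_Sigma = r^T Sigma^{-1} r."""
    return float(r @ np.linalg.solve(Sigma, r))

def joint_cost(states, points, biases, imu_meas, vis_meas,
               pred, project, Sigma_imu, Sigma_vis):
    """Evaluate J(X, L, B): IMU prediction errors plus reprojection errors.

    pred(x_prev, b, z_imu) -> predicted state x_i       (hypothetical model)
    project(x, l)          -> predicted pixel of point l in view x
    vis_meas[i]            -> list of (j, pixel) observations at state i
    imu_meas[i]            -> IMU measurement between states i-1 and i
    """
    J = 0.0
    for i in range(1, len(states)):
        # IMU term: discrepancy between state and IMU-based prediction.
        r_imu = states[i] - pred(states[i - 1], biases[i], imu_meas[i])
        J += mahalanobis_sq(r_imu, Sigma_imu)
        # Vision terms: reprojection error of each observed 3D point.
        for j, z in vis_meas[i]:
            r_vis = z - project(states[i], points[j])
            J += mahalanobis_sq(r_vis, Sigma_vis)
    return J
```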

Factor Graph Representation [Kschischang et al. 2001, IEEE Trans. on Information Theory]

A factor graph is a graphical representation of the joint pdf factorization p(\Theta \mid Z) \propto \prod_s f_s(\Theta_s), with \Theta \doteq \{X, L, B\}. Full SLAM pdf:

p(X, L, B \mid Z) \propto \prod_{i=1}^{N} \Big( p(x_i \mid x_{i-1}, b_i, z^{IMU}_{i-1,i}) \prod_{j \in M_i} p(z^{VIS}_{i,j} \mid x_i, l_j) \Big)

Projection factor:
f_{proj}(x_i, l_j) \doteq p(z_{i,j} \mid x_i, l_j) \propto \exp\big( -\tfrac{1}{2} \| z_{i,j} - \pi(x_i, l_j) \|^2_{\Sigma} \big)

IMU factor:
f_{IMU}(x_{i+1}, x_i, b_i) \doteq p(x_{i+1} \mid x_i, b_i, z^{IMU}_{i,i+1}) = \exp\big( -\tfrac{1}{2} \| x_{i+1} - \mathrm{pred}(x_i, b_i, z^{IMU}_{i,i+1}) \|^2_{\Sigma_{IMU}} \big)

Bias factor:
f_{bias}(b_{k+1}, b_k) \doteq \exp\big( -\tfrac{1}{2} \| b_{k+1} - h_b(b_k) \|^2_{\Sigma_b} \big)

The naïve IMU factor can add a significant number of unnecessary variables!

[Figure: factor graph with states x_1 ... x_6 chained by f_IMU factors, biases b_1 ... b_5 chained by f_bias factors, and f_proj factors connecting states to a landmark l.]
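The factor-graph machinery behind this work is the authors' GTSAM library. As a hedged illustration, the vision and bias parts of the graph above could be assembled with the GTSAM 4.x Python bindings roughly as follows; the containers observations and num_states, and all noise values, are assumptions for the sketch (the IMU factors appear in the pre-integration sketch further below).

```python
import gtsam
from gtsam.symbol_shorthand import B, L, X

graph = gtsam.NonlinearFactorGraph()

# Camera calibration and noise models (placeholder values).
K = gtsam.Cal3_S2(500.0, 500.0, 0.0, 320.0, 240.0)
pix_noise = gtsam.noiseModel.Isotropic.Sigma(2, 1.0)    # Sigma_VIS
bias_noise = gtsam.noiseModel.Isotropic.Sigma(6, 1e-3)  # Sigma_b

# Projection factors f_proj(x_i, l_j), one per image observation z_ij.
for i, obs in enumerate(observations):   # obs: list of (j, (u, v)) pairs
    for j, (u, v) in obs:
        graph.add(gtsam.GenericProjectionFactorCal3_S2(
            gtsam.Point2(u, v), pix_noise, X(i), L(j), K))

# Bias random-walk factors f_bias(b_{k+1}, b_k).
for k in range(num_states - 1):
    graph.add(gtsam.BetweenFactorConstantBias(
        B(k), B(k + 1), gtsam.imuBias.ConstantBias(), bias_noise))
```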

Incremental Light Bundle Adjustment (iLBA) for Robot Navigation

Problems, and how iLBA addresses them:

3D structure is expensive to compute (and not necessary for navigation):
- Algebraically eliminate 3D points using multi-view geometry constraints
- Significantly reduce the number of variables in the optimization
- 3D points can always be reconstructed afterwards (if required) from the optimized camera poses

High-rate sensors introduce a large number of variables:
- Use pre-integration of IMU measurements to reduce the number of variables [Lupton et al., TRO 2012]

Incremental inference requires only partial re-calculation:
- Update the factorization rather than computing it from scratch

Three-View Constraints [Indelman et al., TAES 2012]

Theorem: Algebraic elimination of a 3D point observed by three views k, l and m leads to:

g_{2v}(x_k, x_l, z_k, z_l) \doteq q_k \cdot (t_{k \to l} \times q_l) = 0
g_{2v}(x_l, x_m, z_l, z_m) \doteq q_l \cdot (t_{l \to m} \times q_m) = 0
g_{3v}(x_k, x_l, x_m, z_k, z_l, z_m) \doteq (q_l \times q_k) \cdot (q_m \times t_{l \to m}) - (q_k \times t_{k \to l}) \cdot (q_m \times q_l) = 0

where q_i \doteq R_i^T K_i^{-1} z_i, t_{i \to j} is the translation from view i to view j, and R_i is the rotation from the global frame to view i.

The first two equations are epipolar constraints. The third enforces scale consistency: it relates the magnitudes of t_{l \to m} and t_{k \to l}. Together they are necessary and sufficient conditions.

LBA cost function: J_{LBA}(X) \doteq \sum_{i=1}^{N_h} \| h_i(X, Z) \|^2_{\Sigma_i}, with h_i \in \{ g_{2v}, g_{3v} \}.

[Figure: three views k, l and m observing a single 3D point, with lines of sight q_k, q_l, q_m and relative translations t_{k \to l}, t_{l \to m}.]

V. Indelman, P. Gurfil, E. Rivlin, H. Rotstein, "Real-Time Vision-Aided Localization and Navigation Based on Three-View Geometry", IEEE Transactions on Aerospace and Electronic Systems, 2012.
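The three constraints translate directly into code. Below is a small NumPy sketch (function and argument names are my own); t_ab denotes the translation from view a to view b, and all quantities are expressed in the global frame, following the definitions above.

```python
import numpy as np

def ray(R, K, z):
    """Line of sight q = R^T K^{-1} z~ in the global frame.
    R: rotation from global frame to the view; z: pixel (u, v)."""
    z_h = np.array([z[0], z[1], 1.0])     # homogeneous pixel
    return R.T @ np.linalg.inv(K) @ z_h

def g_2v(q_a, q_b, t_ab):
    """Epipolar constraint between two views: q_a . (t_ab x q_b) = 0."""
    return q_a @ np.cross(t_ab, q_b)

def g_3v(q_k, q_l, q_m, t_kl, t_lm):
    """Scale-consistency constraint relating |t_lm| and |t_kl|:
    (q_l x q_k).(q_m x t_lm) - (q_k x t_kl).(q_m x q_l) = 0."""
    return (np.cross(q_l, q_k) @ np.cross(q_m, t_lm)
            - np.cross(q_k, t_kl) @ np.cross(q_m, q_l))
```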

Vision Only: Light Bundle Adjustment (LBA)

LBA cost function:

J_{LBA}(X) \doteq \sum_{i=1}^{N_h} \| h_i(X, Z) \|^2_{\Sigma_i}

where h_i is the i-th multi-view constraint, involving several views and the corresponding image observations, and \Sigma_i \doteq A_i \Sigma A_i^T is an equivalent covariance, with A_i the Jacobian of h_i with respect to the image observations.

Number of optimized variables: 6N + 3M \to 6N.

Multi-view constraints come in different formulations in the literature: epipolar geometry, trifocal tensors, quadrifocal tensors, etc. Independent relations exist only between up to three cameras [Ma et al., 2004]. Here, the three-view constraint formulation is used, originally developed for navigation aiding [Indelman et al., TAES 2012].
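The equivalent covariance \Sigma_i = A_i \Sigma A_i^T can be formed numerically when an analytic Jacobian is inconvenient. A minimal sketch, assuming the constraint h_i is wrapped as a function of the stacked image observations only (poses held fixed at the current estimate), with finite differences standing in for the analytic A_i:

```python
import numpy as np

def equivalent_covariance(h, z, Sigma_pix, eps=1e-6):
    """Propagate pixel noise through a constraint h(z):
    Sigma_i = A_i Sigma A_i^T with A_i = dh/dz (finite differences).

    h: callable mapping the stacked observations z (float array)
       to the constraint residual (scalar or vector).
    Sigma_pix: covariance of the stacked observations z.
    """
    h0 = np.atleast_1d(h(z))
    A = np.zeros((h0.size, z.size))
    for k in range(z.size):
        dz = np.zeros_like(z)
        dz[k] = eps
        A[:, k] = (np.atleast_1d(h(z + dz)) - h0) / eps
    return A @ Sigma_pix @ A.T
```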

Pre-Integrated IMU Factors

Pre-integrate IMU measurements and insert equivalent factors only when new LBA factors are inserted into the graph:

\Delta x_{i \to j} = \big[ \Delta p_{i \to j}, \Delta v_{i \to j}, R^i_j \big] = \eta\big( Z^{IMU}_{i \to j}, b_i \big)

f_{Equiv}(x_j, x_i, b_i) \doteq \exp\big( -\tfrac{1}{2} \| x_j - h_{Equiv}(x_i, b_i, \Delta x_{i \to j}) \|^2_{\Sigma} \big)

where \eta pre-integrates the IMU measurements Z^{IMU}_{i \to j} given the bias b_i. The components of \Delta x_{i \to j} are expressed in the body frame, not the navigation frame, which allows re-linearization of the factor without repeated integration.

This equivalent (non-linear) IMU factor [Lupton et al., TRO 2012] significantly reduces the graph size, and subsequently the time for elimination.

[Figure: the chain x_1 ... x_6 with five f_IMU and four f_bias factors is replaced by a single f_Equiv factor between x_1 and x_6 and a single bias node b_1.]

Todd Lupton and Salah Sukkarieh, "Visual-Inertial-Aided Navigation for High-Dynamic Motion in Built Environments Without Initial Conditions", IEEE Transactions on Robotics, 2012.
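GTSAM ships a pre-integrated IMU factor in this spirit (the on-manifold formulation of Forster et al., which builds on Lupton and Sukkarieh). A hedged sketch with the Python bindings; the measurement buffer imu_samples, the indices i and j, and the noise values are assumptions:

```python
import numpy as np
import gtsam
from gtsam.symbol_shorthand import B, V, X

# Pre-integration parameters (gravity along -Z; placeholder noise values).
params = gtsam.PreintegrationParams.MakeSharedU(9.81)
params.setAccelerometerCovariance(np.eye(3) * 1e-3)
params.setGyroscopeCovariance(np.eye(3) * 1e-4)
params.setIntegrationCovariance(np.eye(3) * 1e-8)

bias_i = gtsam.imuBias.ConstantBias()  # current bias estimate b_i
pim = gtsam.PreintegratedImuMeasurements(params, bias_i)

# Accumulate all high-rate IMU samples between camera states i and j.
for acc, gyro, dt in imu_samples:      # hypothetical measurement buffer
    pim.integrateMeasurement(acc, gyro, dt)

# One equivalent factor connecting x_i, v_i, x_j, v_j and b_i replaces
# the whole chain of per-sample IMU factors.
graph = gtsam.NonlinearFactorGraph()
graph.add(gtsam.ImuFactor(X(i), V(i), X(j), V(j), B(i), pim))
```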

Second Component: Incremental Inference [Kaess et al., 2012]

When adding new variables/factors, calculations can be reused: the factorization can be updated rather than re-computed from scratch.

Example: linearization and elimination. The graph contains two-view and three-view factors over poses x_1, x_6, x_2, an equivalent IMU factor f_Equiv(x_1, x_6), and a bias factor f_bias(b_1, b_2); the elimination order is x_1, b_1, x_6, b_2, x_2.

[Figure: the factors are linearized into a Jacobian matrix A with columns x_1, x_6, x_2, b_1, b_2, which is factorized into an upper-triangular matrix R with columns ordered x_1, b_1, x_6, b_2, x_2.]
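In GTSAM this incremental re-factorization is provided by iSAM2 [Kaess et al., 2012], which maintains the factorization as a Bayes tree. A minimal usage sketch with the Python bindings; new_factors and new_values are hypothetical per-step containers:

```python
import gtsam

# iSAM2 keeps the factorization (a Bayes tree) up to date incrementally.
isam = gtsam.ISAM2()

# Each step: new_factors is a gtsam.NonlinearFactorGraph holding the newly
# created multi-view and equivalent-IMU factors; new_values is a
# gtsam.Values with initial guesses for newly introduced variables.
isam.update(new_factors, new_values)
estimate = isam.calculateEstimate()  # current estimate of all variables
```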

Incremental Inference in iLBA (cont.)

Example: a new camera pose and its factors are added. What should be re-calculated? Only the nodes on paths from the last-eliminated node to the nodes involved in the new factors. This set is identified efficiently using the Bayes tree [Kaess et al., 2012], and only the corresponding part of the factorized Jacobian R is re-linearized and re-eliminated.

[Figure: Jacobian A and factor R after adding the new camera pose and a new bias node; only the rows and columns touched by the new factors change.]

iLBA for Robotics: Monte-Carlo Study

[Figures: 1-sigma errors and square-root covariance over time for LBA and full BA. Panels: position (North, East, Height [m]), Euler angles [deg], and acceleration bias (X, Y, Z axes [mg]); curves shown for True/Inertial, LBA, Full BA, Sqrt. Cov. LBA, and Sqrt. Cov. Full BA.]

iLBA for Robotics: Simulation Results

Scenario: a single camera and IMU (Google Earth view for illustration).

[Figures: estimated trajectory (North/East [m]) for LBA + IMU, BA + IMU, IMU only, and ground truth; processing time [sec] of LBA + IMU versus BA + IMU; position estimation errors [m] over scenario time for LBA + IMU and BA + IMU.]

Conclusions

- Algebraic elimination of 3D points significantly reduces the size of the optimization problem and speeds up online robot navigation.
- Pre-integration of high-frequency inertial measurements further reduces the size of the problem.
- Accuracy is similar to full SLAM, with at least a 2-3.5x speed-up in computation time.
- Code and datasets are available from the author's website: http://www.cc.gatech.edu/~vindelma