Professor William Hoff, Dept of Electrical Engineering & Computer Science, Colorado School of Mines. http://inside.mines.edu/~whoff/

Bundle Adjustment

Example Application. A vehicle needs to map the environment it is moving through and estimate its own motion. The main sensors are cameras mounted on the vehicle. [Figure: four-camera configuration on the vehicle.]

Bundle Adjustment. The most accurate (and flexible) way to estimate structure and motion is to directly minimize the squared reprojection errors of the 2D points. We can do this for N camera poses (N ≥ 2). The unknowns are:
- The (X, Y, Z) positions of the K 3D points
- The relative camera poses for N-1 of the poses (the first pose is arbitrary, so we can simply set it to the origin)
There are a lot of unknown parameters to find. We can use nonlinear least squares, but it will be slow (probably not suitable for real time) and we will need a good starting guess.
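Written out, the objective described above is the sum of squared reprojection errors over all poses and points. A minimal sketch in LaTeX, with notation assumed here rather than taken from the slides: β_i is the i-th camera pose, P_j the j-th 3D point, π the calibrated projection function, and p_ij the observed image point (in practice the sum runs only over the points actually observed in each image):

\[
E(\beta_2,\ldots,\beta_N,\,P_1,\ldots,P_K) \;=\; \sum_{i=1}^{N}\sum_{j=1}^{K} \bigl\| \, p_{ij} - \pi(\beta_i, P_j) \, \bigr\|^2 ,
\qquad \beta_1 \text{ fixed at the origin.}
\]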

Structure from Motion. Parameters to be estimated:
- N-1 vehicle poses w.r.t. the 1st vehicle pose; each has 6 degrees of freedom
- K 3D point locations w.r.t. the 1st vehicle pose
Measurements: image point observations p_ij = (u_ij, v_ij)^T, i = 1..N, j = 1..K.
Assumptions: calibrated cameras; point correspondences are known.
[Figure: vehicle poses v1, v2, v3 observing 3D points P1, P2, P3.]
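To make the parameterization concrete, here is a small Python sketch of packing the N-1 six-DOF poses and the K 3D points into a single parameter vector for the optimizer. The helper names pack_params and unpack_params are hypothetical, introduced only for illustration:

    import numpy as np

    def pack_params(poses, points):
        # poses:  (N-1, 6) array, each row [rx, ry, rz, tx, ty, tz] relative to pose 1
        # points: (K, 3) array of 3D point positions in the pose-1 frame
        return np.concatenate([poses.ravel(), points.ravel()])

    def unpack_params(beta, n_poses_minus_1, n_points):
        # Inverse of pack_params: split the flat vector back into poses and points.
        poses = beta[:6 * n_poses_minus_1].reshape(n_poses_minus_1, 6)
        points = beta[6 * n_poses_minus_1:].reshape(n_points, 3)
        return poses, points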

Bundle Adjustment. We need a function that predicts the image locations of the K 3D points in each of the N images. The inputs to the function are the current guess for the locations of the 3D points and the N-1 poses. As we did before, we can take the Jacobian of the function and iteratively add corrections to our guess. Note: there are somewhat more efficient algorithms than the Newton's method we used earlier. One is Levenberg-Marquardt, which adaptively switches between Newton's method and steepest descent. It is implemented in Matlab's lsqnonlin function, part of the Optimization Toolbox (an equivalent setup is sketched below).
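The slides point to Matlab's lsqnonlin; an equivalent setup in Python uses scipy.optimize.least_squares, which includes a Levenberg-Marquardt solver (method="lm"). This is only a sketch under assumptions, not the lecture's code: project() stands in for a calibrated pinhole projection, and observations is assumed to be a list of (image index i, point index j, measured pixel) tuples.

    import numpy as np
    from scipy.optimize import least_squares

    def reprojection_residuals(beta, n_poses_minus_1, n_points, observations, project):
        # Split the flat parameter vector (same layout as pack_params above).
        poses = beta[:6 * n_poses_minus_1].reshape(n_poses_minus_1, 6)
        points = beta[6 * n_poses_minus_1:].reshape(n_points, 3)
        residuals = []
        for i, j, p_ij in observations:
            # Pose 1 is fixed at the origin; poses[i - 2] holds pose i for i >= 2.
            pose_i = np.zeros(6) if i == 1 else poses[i - 2]
            residuals.append(p_ij - project(pose_i, points[j - 1]))
        return np.concatenate(residuals)

    # beta0 is the initial guess (e.g., from odometry and triangulation):
    # result = least_squares(reprojection_residuals, beta0, method="lm",
    #                        args=(n_poses_minus_1, n_points, observations, project))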

Scale Factor. The results have an unknown scale factor: all points and all translations are scaled by the same unknown amount. Possible ways to determine the scale factor:
- Have a known object visible in one of the images
- Use stereo cameras
- Use some other data source, such as odometry
Odometry: many vehicles can estimate self-motion from wheel encoders and heading sensors. It is fairly accurate over short distances, but can drift over longer distances.
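The ambiguity can be stated in one line. A sketch in LaTeX, writing the i-th pose as a rotation R_i and translation t_i (a decomposition assumed here, not given in the slides): for any scale s > 0,

\[
\pi\bigl(R_i,\; s\,t_i,\; s\,P_j\bigr) \;=\; \pi\bigl(R_i,\; t_i,\; P_j\bigr),
\]

since the factor s cancels in the perspective division. Multiplying every translation and every 3D point by the same s therefore leaves all predicted image measurements, and hence the reprojection error, unchanged.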

Structure from Motion. Algorithm approach: search for the parameters that minimize the residual errors between predictions and measurements,

\[
E \;=\; \sum_i \bigl\| \, y_i^{\mathrm{odom}} - g(\beta_{v_{i-1}}, \beta_{v_i}) \, \bigr\|^2 \;+\; \sum_{i,j} \bigl\| \, p_{ij} - f(\beta_{v_i}, P_j) \, \bigr\|^2
\]

where g(β_{v_{i-1}}, β_{v_i}) predicts odometry measurement i from the two consecutive vehicle poses, and f(β_{v_i}, P_j) projects point P_j into image i to predict the image measurement p_ij. This is a nonlinear least squares problem, solved with the Levenberg-Marquardt algorithm ("bundle adjust").
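A compact Python sketch of stacking both residual types into one vector for the solver; predict_odometry and project are hypothetical stand-ins for the measurement models g and f above, and odometry_meas is assumed to hold one measurement per consecutive pose pair.

    import numpy as np

    def combined_residuals(beta, n_poses_minus_1, n_points,
                           odometry_meas, observations,
                           predict_odometry, project):
        poses = beta[:6 * n_poses_minus_1].reshape(n_poses_minus_1, 6)
        points = beta[6 * n_poses_minus_1:].reshape(n_points, 3)
        all_poses = np.vstack([np.zeros(6), poses])   # pose 1 fixed at the origin
        res = []
        # Odometry term: y_i_odom - g(beta_{v_{i-1}}, beta_{v_i})
        for i, y_odom in enumerate(odometry_meas, start=1):
            res.append(y_odom - predict_odometry(all_poses[i - 1], all_poses[i]))
        # Reprojection term: p_ij - f(beta_{v_i}, P_j)
        for i, j, p_ij in observations:
            res.append(p_ij - project(all_poses[i - 1], points[j - 1]))
        return np.concatenate(res)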

Minimization Algorithm. Bundle adjustment is computationally expensive, so limit the number of poses and points: use a sliding window of the last N poses. Solutions are computed with respect to the first pose of this set; effectively, mapping is done w.r.t. a moving local coordinate system (the origin of the local reference frame). Here, bundle adjustment is done over the last 7 poses. A good initial guess is also needed: pose guesses come from odometry, and point locations are initialized using triangulation.

Mapping Program Control Flow (a Python sketch follows below):
do
    Get next set of measurements (images and odometry)
    Predict pose of vehicle from odometry
    Track existing points
    Eliminate inactive points
    If predicted distance travelled is at least D meters:
        Define this pose as a keyframe
        Try to acquire new points to track
        Try to initialize 3D locations of 2D points using triangulation
        Do bundle adjust algorithm over last 7 keyframes
while measurements are available
Note: D = 1.5..3.0 m, depending on vehicle speed.
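A hedged Python sketch of this loop. Every callable passed in (get_measurements, predict_pose_from_odometry, track_points, acquire_new_points, triangulate_new_points, bundle_adjust) is a hypothetical placeholder for the corresponding step above, not code from the lecture; the seven-keyframe window and the distance threshold D come from the slides.

    from collections import deque
    import numpy as np

    D = 2.0  # keyframe spacing in meters (slides: 1.5..3.0 m, depending on vehicle speed)

    def mapping_loop(get_measurements, predict_pose_from_odometry, track_points,
                     acquire_new_points, triangulate_new_points, bundle_adjust):
        keyframes = deque(maxlen=7)        # sliding window of the last 7 keyframes
        last_keyframe_pos = np.zeros(3)
        while True:
            meas = get_measurements()      # next images + odometry, or None when done
            if meas is None:
                break
            pose = predict_pose_from_odometry(meas)   # 6-vector [rx, ry, rz, tx, ty, tz]
            tracks = track_points(meas)
            tracks = [t for t in tracks if t.active]  # eliminate inactive points
            if np.linalg.norm(pose[3:] - last_keyframe_pos) >= D:
                keyframes.append((pose, tracks))      # define this pose as a keyframe
                acquire_new_points(meas, tracks)      # try to acquire new points to track
                triangulate_new_points(keyframes)     # initialize 3D locations of 2D points
                bundle_adjust(keyframes)              # nonlinear least squares over the window
                last_keyframe_pos = np.array(pose[3:], copy=True)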

Tracking Videos (scene: roxborough1). Squares mark tracked points: red = 2D information only; orange = triangulation available; yellow = 3D position estimate; blue bar = uncertainty estimate; red text = pose cannot be computed.

Scene Animations (scene: roxborough1). Yellow = tracked point; blue = no longer tracked (archived); ellipsoids = 1-sigma point uncertainty. Shown in orbital and top-down views.

Mead Way. Ground truth: mapper. 1.1 km, 660 keyframes, 3833 point landmarks.

Roxborough Road. Ground truth: mapper. 4.2 km, 2403 keyframes, 5790 point landmarks. [Figure 29: Roxborough Road site, showing mapped points in the zoomed-in portion; each pixel is 13.7 meters. Insets: Camera 3, File 3570, Pose 44; Camera 4, File 1837, Pose 44.]

Rampart Range Ground Road truth mapper 4.5 km 5268 keyframes 24508 point landmarks Figure 32 Rampart Range Road site. Each pixel is 7.1 meters. 15