FLaME: Fast Lightweight Mesh Estimation using Variational Smoothing on Delaunay Graphs
1 FLaME: Fast Lightweight Mesh Estimation using Variational Smoothing on Delaunay Graphs
W. Nicholas Greene (with Nicholas Roy), Robust Robotics Group, MIT CSAIL
LPM Workshop, IROS 2017, September 28, 2017
2 We want Autonomous Vision-Based Navigation
3-7 But Dense Monocular SLAM is Expensive
Existing methods trade off efficiency against density:
- Sparse Methods: MonoSLAM (Davison, ICCV 2007); PTAM (Klein and Murray, ISMAR 2007); SVO (Forster et al., ICRA 2014)
- Semi-Dense Methods: Engel et al. (ICCV 2013); LSD-SLAM (Engel et al., ECCV 2014); Mur-Artal and Tardos (RSS 2015); Pillai et al. (ICRA 2016)
- Dense Methods: DTAM (Newcombe et al., ICCV 2011); Graber et al. (ICCV 2011); MonoFusion (Pradeep et al., ISMAR 2013); REMODE (Pizzoli et al., ICRA 2014)
Can we more efficiently estimate dense geometry?
8 What Would a Dense Method Do? Image
9 Estimate Depth for Every Pixel Depthmap
10 Dense Methods Oversample Geometry: one depth estimate per pixel, 100k-1M pixels per image
11 Dense Methods Make Regularization Hard(er)
12-15 Meshes Encode Geometry with Fewer Depths: one depth estimate per vertex, k vertices per mesh
16 Meshes Make Regularization Easy(er)
17 FLaME: Fast Lightweight Mesh Estimation
- Fast (< 5 ms/frame)
- Lightweight (< 1 Intel i7 CPU core)
- Runs onboard a MAV
Greene and Roy, ICCV 2017
18-19 FLaME: Fast Lightweight Mesh Estimation (pipeline)
Image and pose → Feature Selection → Inverse Depth Estimation → Delaunay Triangulation → Variational Regularizer (NLTGV2-L1) → Depthmap Interpolation → Output
20-25 Select Easily Trackable Features
- At each frame we sample trackable pixels over the image domain
- These features will serve as potential vertices to add to the mesh
- Select the pixel in each grid cell with the maximum score
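The per-cell selection step can be sketched as follows. This is a rough illustration, not the talk's actual scoring function (which is not transcribed); here the score is assumed to be plain image-gradient magnitude:

```python
import numpy as np

def select_features(image, cell_size=16):
    """Pick the best-scoring pixel in each grid cell.

    Score = gradient magnitude (an assumption for this sketch;
    FLaME's trackability score is defined in the paper).
    """
    gy, gx = np.gradient(image.astype(np.float32))
    score = np.hypot(gx, gy)
    h, w = image.shape
    features = []
    for y0 in range(0, h, cell_size):
        for x0 in range(0, w, cell_size):
            cell = score[y0:y0 + cell_size, x0:x0 + cell_size]
            dy, dx = np.unravel_index(np.argmax(cell), cell.shape)
            features.append((x0 + dx, y0 + dy))  # (u, v) pixel coordinates
    return features
```

The grid guarantees roughly uniform feature coverage: one candidate vertex per cell, regardless of where texture concentrates in the image.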
27-33 Estimate Inverse Depths for Features
- We track features across new images using direct epipolar stereo comparisons
- Each successful match generates an inverse depth measurement
- Measurements are fused over time using a Bayesian update
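The fusion step above can be sketched as a product of Gaussians in inverse-depth space. This is a simplified stand-in for the paper's Bayesian update (which may additionally model outliers):

```python
def fuse_inverse_depth(mu, var, z, z_var):
    """Fuse a new inverse-depth measurement z (variance z_var) into the
    current Gaussian estimate (mu, var). Standard product-of-Gaussians
    update: precision adds, mean is precision-weighted."""
    new_var = (var * z_var) / (var + z_var)
    new_mu = (z_var * mu + var * z) / (var + z_var)
    return new_mu, new_var
```

Each successful epipolar match shrinks the variance, which is what later gates insertion of the feature into the mesh.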
35-39 Triangulate Features into Mesh
- Once a feature's inverse depth variance drops below a threshold, we insert it into the mesh using Delaunay triangulation
- We can project the 2D mesh into 3D using the inverse depth for each vertex
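A minimal sketch of the triangulate-and-lift step, assuming a pinhole camera with known intrinsics (fx, fy, cx, cy) and using SciPy's 2D Delaunay routine in place of the incremental triangulation the system actually maintains:

```python
import numpy as np
from scipy.spatial import Delaunay

def build_mesh(pixels, inv_depths, fx, fy, cx, cy):
    """Triangulate 2D features and lift vertices to 3D.

    pixels: (N, 2) array of (u, v) coordinates; inv_depths: (N,) inverse depths.
    Returns 3D vertex positions and triangle index triples.
    """
    tri = Delaunay(pixels)                 # 2D Delaunay triangulation in the image
    z = 1.0 / inv_depths                   # depth from inverse depth
    x = (pixels[:, 0] - cx) / fx * z       # pinhole back-projection per vertex
    y = (pixels[:, 1] - cy) / fy * z
    verts3d = np.stack([x, y, z], axis=1)
    return verts3d, tri.simplices
```

Only the k mesh vertices carry depth estimates; every other pixel's depth comes from interpolation across the triangles.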
41-42 Smooth Mesh Using Variational Regularization
- Mesh inverse depths are noisy and prone to outliers
- Need to spatially regularize/smooth inverse depths
- Will exploit graph structure to make optimization fast and incremental
43-45 Define Variational Cost over Inverse Depthmap (raw idepthmap → smoothed idepthmap)
NLTGV2 stands for Non-Local Total Generalized Variation (second order); see Ranftl et al. 2014 and Pinies et al.
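The slide's equations were not transcribed, but the cost can be written schematically from the cited NLTGV2 literature (this is a reconstruction, not the slide's exact notation):

```latex
\min_{\xi,\, w}\;
\sum_{(i,j)\in\mathcal{E}} \Big(
  \alpha_{ij}\,\big|\,\xi_i - \xi_j - \langle w_i,\; x_i - x_j\rangle\,\big|
  \;+\; \beta_{ij}\,\big\| w_i - w_j \big\|_1 \Big)
\;+\; \lambda \sum_{i\in\mathcal{V}} \big|\,\xi_i - z_i\,\big|
```

Here $\xi_i$ is the smoothed inverse depth at vertex $i$ with image position $x_i$, $z_i$ is the raw measurement, $w_i$ is an auxiliary gradient (local plane) variable, and $\mathcal{V}, \mathcal{E}$ are the vertices and edges of the Delaunay graph. The first term penalizes deviation from a locally planar surface, the second penalizes changes in the plane parameters, and the L1 data term is robust to outliers.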
46-55 Approximate Cost over Mesh
- Reinterpret the mesh as a graph
- Approximate the variational cost over the raw idepthmap with a cost over the mesh (raw mesh → smoothed mesh)
(equations not transcribed)
56-64 Optimize using Primal-Dual on Graph
- Edges update using vertices; vertices update using edges
- Optimization steps are fast even without a GPU (<< frame rate)
- Optimization convergence is fast (~ frame rate)
- Mesh can be augmented without restarting the optimization
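The edge/vertex alternation above can be illustrated with a toy first-order primal-dual (Chambolle-Pock style) solver. To keep the sketch short it uses a plain TV-L1 objective on the graph instead of the full NLTGV2-L1 cost; the update structure (duals on edges, primals on vertices) is the point, not the exact regularizer:

```python
import numpy as np

def primal_dual_tv_l1(z, edges, lam=1.0, tau=0.3, sigma=0.3, iters=200):
    """min_xi  sum_(i,j) |xi_i - xi_j| + lam * sum_i |xi_i - z_i|
    solved by alternating local updates: each edge carries a dual
    variable, each vertex a primal inverse depth."""
    xi = z.copy()
    xi_bar = z.copy()
    p = np.zeros(len(edges))                  # one dual variable per edge
    for _ in range(iters):
        # edge (dual) update: gradient ascent + projection onto [-1, 1]
        for k, (i, j) in enumerate(edges):
            p[k] = np.clip(p[k] + sigma * (xi_bar[i] - xi_bar[j]), -1.0, 1.0)
        # vertex (primal) update: descent + prox of the L1 data term
        div = np.zeros_like(xi)               # K^T p accumulated per vertex
        for k, (i, j) in enumerate(edges):
            div[i] += p[k]
            div[j] -= p[k]
        xi_old = xi
        u = xi - tau * div
        r = u - z
        xi = z + np.sign(r) * np.maximum(np.abs(r) - tau * lam, 0.0)  # shrinkage
        xi_bar = 2.0 * xi - xi_old            # over-relaxation step
    return xi
```

Because every update touches only a vertex and its incident edges, new vertices and edges can be appended mid-optimization with their duals initialized to zero, which is what makes the incremental mesh augmentation cheap.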
66-68 Benchmark Results
- Compared FLaME to two existing CPU-only approaches: LSD-SLAM and MLM
- Evaluated on two benchmark datasets with ground-truth poses: TUM RGB-D SLAM (30 Hz) and EuRoC MAV (20 Hz)
- Desktop Intel i7, CPU only
69-74 Benchmark Results: lower error, higher density, lower CPU load, and faster runtime than the baselines (plots not transcribed)
75 Flight Experiments
- DJI F450 frame, Intel NUC flight computer
- 10 ms/frame runtime, 1.5-core load
- Indoor flight at 2.5 m/s; outdoor flight at 3.5 m/s
76 Flight Experiments
77-80 FLaME Contributions
- Proposed a lightweight method for dense online monocular depth estimation
- Formulated the reconstruction problem as a non-local variational optimization over a Delaunay graph, which allows for a fast, efficient approach to depth estimation
- Demonstrated improved depth accuracy and density on benchmark datasets using less than one Intel i7 CPU core
- Demonstrated real-world applicability with indoor and outdoor flight experiments running FLaME onboard, in the loop
Paper and code (code coming soon). W. Nicholas Greene (wng@csail.mit.edu)
81 Acknowledgments
This material is based upon work supported by DARPA under Contract No. HR C-0110 and by the NSF Graduate Research Fellowship under Grant No. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation. We also thank John Ware, John Carter, and the rest of the MIT-Draper FLA Team for continued development and testing support.
3D Computer Vision Structured Light II Prof. Didier Stricker Kaiserlautern University http://ags.cs.uni-kl.de/ DFKI Deutsches Forschungszentrum für Künstliche Intelligenz http://av.dfki.de 1 Introduction
More informationCRF Based Point Cloud Segmentation Jonathan Nation
CRF Based Point Cloud Segmentation Jonathan Nation jsnation@stanford.edu 1. INTRODUCTION The goal of the project is to use the recently proposed fully connected conditional random field (CRF) model to
More informationGeometric Stereo Increases Accuracy of Depth Estimations for an Unmanned Air Vehicle with a Small Baseline Stereo System
Geometric Stereo Increases Accuracy of Depth Estimations for an Unmanned Air Vehicle with a Small Baseline Stereo System Eleanor Tursman (Grinnell College, Physics), SUNFEST Fellow Camillo Jose Taylor,
More information3D Computer Vision. Depth Cameras. Prof. Didier Stricker. Oliver Wasenmüller
3D Computer Vision Depth Cameras Prof. Didier Stricker Oliver Wasenmüller Kaiserlautern University http://ags.cs.uni-kl.de/ DFKI Deutsches Forschungszentrum für Künstliche Intelligenz http://av.dfki.de
More informationSolving Vision Tasks with variational methods on the GPU
Solving Vision Tasks with variational methods on the GPU Horst Bischof Inst. f. Computer Graphics and Vision Graz University of Technology Joint work with Thomas Pock, Markus Unger, Arnold Irschara and
More informationParallel, Real-Time Visual SLAM
Parallel, Real-Time Visual SLAM Brian Clipp 1, Jongwoo Lim 2, Jan-Michael Frahm 1 and Marc Pollefeys 1,3 Department of Computer Science 1 Honda Research Institute USA, Inc. 2 Department of Computer Science
More informationGuidance: A Visual Sensing Platform For Robotic Applications
Guidance: A Visual Sensing Platform For Robotic Applications Guyue Zhou, Lu Fang, Ketan Tang, Honghui Zhang, Kai Wang, Kang Yang {guyue.zhou, ketan.tang, honghui.zhang, kevin.wang, kang.yang}@dji.com,
More informationCluster-based 3D Reconstruction of Aerial Video
Cluster-based 3D Reconstruction of Aerial Video Scott Sawyer (scott.sawyer@ll.mit.edu) MIT Lincoln Laboratory HPEC 12 12 September 2012 This work is sponsored by the Assistant Secretary of Defense for
More informationarxiv: v1 [cs.cv] 21 Mar 2017
Pop-up SLAM: Semantic Monocular Plane SLAM for Low-texture Environments Shichao Yang, Yu Song, Michael Kaess, and Sebastian Scherer arxiv:1703.07334v1 [cs.cv] 21 Mar 2017 Abstract Existing simultaneous
More informationTowards Domain Independence for Learning-based Monocular Depth Estimation
Towards Domain Independence for Learning-based Monocular Depth Estimation Michele Mancini, Gabriele Costante, Paolo Valigi and Thomas A. Ciarfuglia Jeffrey Delmerico 2 and Davide Scaramuzza 2 Abstract
More informationCRF Based Frontier Detection using Monocular Camera
CRF Based Frontier Detection using Monocular Camera Sarthak Upadhyay Robotics Research Center IIIT Hyderabad, India Suryansh Kumar Robotics Research Center IIIT Hyderabad, India K Madhava Krishna Robotics
More informationRGB-D SLAM in Dynamic Environments Using Points Correlations
RGB-D SLAM in Dynamic Environments Using Points Correlations WeiChen Dai, Yu Zhang, Ping Li, and Zheng Fang arxiv:1811.03217v1 [cs.cv] 8 Nov 2018 Abstract This paper proposed a novel RGB-D SLAM method
More informationMonocular Tracking and Reconstruction in Non-Rigid Environments
Monocular Tracking and Reconstruction in Non-Rigid Environments Kick-Off Presentation, M.Sc. Thesis Supervisors: Federico Tombari, Ph.D; Benjamin Busam, M.Sc. Patrick Ruhkamp 13.01.2017 Introduction Motivation:
More informationRobotics. Lecture 7: Simultaneous Localisation and Mapping (SLAM)
Robotics Lecture 7: Simultaneous Localisation and Mapping (SLAM) See course website http://www.doc.ic.ac.uk/~ajd/robotics/ for up to date information. Andrew Davison Department of Computing Imperial College
More informationDCTAM: Drift-Corrected Tracking and Mapping for Autonomous Micro Aerial Vehicles
DCTAM: Drift-Corrected Tracking and Mapping for Autonomous Micro Aerial Vehicles Sebastian A. Scherer 1, Shaowu Yang 2 and Andreas Zell 1 Abstract Visual odometry, especially using a forwardlooking camera
More informationIROS 05 Tutorial. MCL: Global Localization (Sonar) Monte-Carlo Localization. Particle Filters. Rao-Blackwellized Particle Filters and Loop Closing
IROS 05 Tutorial SLAM - Getting it Working in Real World Applications Rao-Blackwellized Particle Filters and Loop Closing Cyrill Stachniss and Wolfram Burgard University of Freiburg, Dept. of Computer
More informationarxiv: v1 [cs.cv] 23 Apr 2017
Proxy Templates for Inverse Compositional Photometric Bundle Adjustment Christopher Ham, Simon Lucey 2, and Surya Singh Robotics Design Lab 2 Robotics Institute The University of Queensland, Australia
More informationarxiv: v1 [cs.ro] 16 Jan 2018
Real-time CPU-based large-scale 3D mesh reconstruction arxiv:8.23v [cs.ro] 6 Jan 28 Enrico Piazza Andrea Romanoni Matteo Matteucci Abstract In Robotics, especially in this era of autonomous driving, mapping
More informationIntrinsic3D: High-Quality 3D Reconstruction by Joint Appearance and Geometry Optimization with Spatially-Varying Lighting
Intrinsic3D: High-Quality 3D Reconstruction by Joint Appearance and Geometry Optimization with Spatially-Varying Lighting R. Maier 1,2, K. Kim 1, D. Cremers 2, J. Kautz 1, M. Nießner 2,3 Fusion Ours 1
More information3D Spatial Layout Propagation in a Video Sequence
3D Spatial Layout Propagation in a Video Sequence Alejandro Rituerto 1, Roberto Manduchi 2, Ana C. Murillo 1 and J. J. Guerrero 1 arituerto@unizar.es, manduchi@soe.ucsc.edu, acm@unizar.es, and josechu.guerrero@unizar.es
More informationOrganized Segmenta.on
Organized Segmenta.on Alex Trevor, Georgia Ins.tute of Technology PCL TUTORIAL @ICRA 13 Overview Mo.va.on Connected Component Algorithm Planar Segmenta.on & Refinement Euclidean Clustering Timing Results
More informationImage Based Reconstruction II
Image Based Reconstruction II Qixing Huang Feb. 2 th 2017 Slide Credit: Yasutaka Furukawa Image-Based Geometry Reconstruction Pipeline Last Lecture: Multi-View SFM Multi-View SFM This Lecture: Multi-View
More informationPerceiving the 3D World from Images and Videos. Yu Xiang Postdoctoral Researcher University of Washington
Perceiving the 3D World from Images and Videos Yu Xiang Postdoctoral Researcher University of Washington 1 2 Act in the 3D World Sensing & Understanding Acting Intelligent System 3D World 3 Understand
More information