Large-Scale Robotic SLAM through Visual Mapping


Christof Hoppe, Kathrin Pirker, Matthias Rüther and Horst Bischof
Institute for Computer Graphics and Vision, Graz University of Technology, Austria
{hoppe, kpirker, ruether,

Abstract

Keyframe-based visual SLAM systems perform reliably and fast in medium-sized environments. Currently, their main weaknesses are robustness and scalability in large scenarios. In this work, we propose a hybrid, keyframe-based visual SLAM system which overcomes these problems. We combine visual features of different strength, add appearance-based loop detection, and present a novel method to incorporate non-visual sensor information into standard bundle adjustment frameworks to tackle the problem of weakly textured scenes. On a standardized test dataset, we outperform EKF-based solutions in terms of localization accuracy by at least a factor of two. On a self-recorded dataset, we achieve a performance comparable to a laser scanner approach.

Figure 1. Reconstructed trajectory of a loop that is passed four times: the result of our approach overlaid with the trajectory reconstructed by a laser range finder. The black dots show the map points of our map; the green dots are obstacles identified by the laser range finder.

1. Introduction

In Simultaneous Localization and Mapping (SLAM) one seeks to generate a map from noisy environment information while simultaneously localizing oneself within this map. This is a basic prerequisite for most autonomous robotic applications such as path planning or navigation. Typical SLAM systems need to work robustly within an area of several thousand square meters, while the accuracy of the map and the reconstructed trajectory has to be within a few centimeters. Trajectory estimation and map generation have to be performed in real-time.
This work was supported within the Austrian Research Promotion Agency's (FFG) program "Research Studios Austria: Machine Vision Meets Mobility" (#818633) and by the Austrian Federal Ministry of Economy, Family and Youth (bmwfj).

Further, especially in aerial robotics, it is mandatory

to recover a full 6 Degrees of Freedom (DoF) pose, and the sensor has to provide three-dimensional information about the environment. Camera-based SLAM is able to retrieve this kind of information. Additionally, the sensor is cheap, lightweight and small.

Existing SLAM methods can be classified into Bayesian and geometric approaches by their underlying noise handling method. Most Bayesian methods are standard Extended Kalman Filter (EKF) systems which differ in their map representation, type of image features or feature initialization. Geometric approaches formulate the SLAM problem as an optimization problem and use bundle adjustment for noise handling. EKF systems are inherently unable to cope with a large number of observations, because their complexity grows quadratically with the number of measurements. Geometric approaches usually solve the SLAM problem within a Structure-from-Motion framework over selected keyframes. Only a few systems [1, 10] have shown that they are able to build a map of several thousand square meters reliably.

In this paper, we improve a keyframe-based stereo SLAM approach by addressing the following problems: (a) accurate pose estimation of keyframes, (b) reliable map extension, (c) loop detection and correction, and (d) handling of weakly textured environments. We combine features of different strength to achieve accurate pose estimation of keyframes. To robustly extend the map, we apply a simple criterion that triggers map updates when entering unknown terrain. By adding appearance-based loop detection, we are able to correct highly erroneous trajectory and map estimates. Since our system is designed to perform in large indoor environments where featureless areas may occur, we propose a method that handles such areas by integrating odometry information into the bundle adjustment framework. We evaluate our system on two datasets: a hallway sequence and a standardized dataset of the Rawseeds project which comes with groundtruth information.
Compared to other stereo SLAM solutions on this dataset, our approach shows superior accuracy while still being real-time capable. Figure 1 shows the reconstruction of the hallway sequence.

2. Related Work

When visual SLAM (VSLAM) emerged, most methods were based on the Extended Kalman Filter (EKF) [4]. Although the EKF is theoretically a maximum likelihood estimator of a Gaussian random variable, there are several practical problems: the computational complexity grows quadratically with the number of observations, the method is not robust against data association errors, and the linearization around the current state is a source of error. Montiel et al. [8] tackled the problem of EKF linearization by changing the representation of landmarks from a Euclidean representation to the unified inverse depth parameterization. This leads to better performance but requires six parameters per landmark instead of three. To decrease computational complexity, Paz et al. [10] used a divide-and-conquer algorithm to split the map into sub-maps. They also used the unified inverse depth parameterization to reconstruct a trajectory of 200 m. The authors expect that their system can be implemented in real-time.

With increasing computational power, geometric optimization methods became popular for SLAM systems. One of the first systems was presented by Klein et al. [6]. Their system (PTAM) provides the pose of a monocular camera in an unknown environment to an augmented reality application in real-time. Klein et al. split tracking and map building into parallel threads with different timing requirements: while tracking has to be performed in real-time, the map is extended only at selected frames (keyframes), which can be more time-consuming. Their approach set a benchmark with respect to accuracy and speed, but it does not fulfill the prerequisites of robotic SLAM. It requires user input for map initialization, generates a map with arbitrary scale and introduces scale drift over time.
Restricted to operation in small desktop environments, it comes without a loop correction mechanism.

Figure 2. (a) The convex hull and the recognized map points (green) used for pose estimation of this frame; the red crosses outside the convex hull mark the detected keypoints. (b) Weakly textured environment.

The demanding optimization of structure and pose prevents most SLAM algorithms from performing loop closing in real-time. Konolige et al. [1] reduced the number of camera poses and 3D points. They showed that even with reduced data the same accuracy could be reached after bundle adjustment. In a follow-up paper [7], they present an optimized implementation of the underlying bundle adjustment that solves very large SLAM problems with several thousand poses in short time. Sibley et al. [12] introduced a new representation of map structure and camera poses: map points and camera poses are described by their relative positions, which prevents large reprojection errors from propagating through the whole loop, so bundle adjustment converges in a few iterations. Strasdat et al. [14] proposed a loop closing algorithm for monocular SLAM that uses only camera poses as measurements. They optimize the relative similarity transform between camera poses, also taking the scale drift of monocular tracking into account. Their results in terms of accuracy are comparable to standard bundle adjustment.

In contrast to the discussed systems, our method is designed for use on a robot and is able to build a map with true metric scale. Furthermore, we integrate non-visual sensor information into the VSLAM process without changing the underlying optimization algorithm, which increases robustness without special treatment of weakly connected keyframes.

3. Methodology

In our SLAM system, we combine features of different strength to estimate the pose of a keyframe precisely. We present two rules for selecting new keyframes.
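As a rough sketch, the two keyframe-selection rules (translation relative to the last keyframe, and image coverage of the matched keypoints, both detailed in the map extension trigger below) might look as follows in code. All names are illustrative, and the 0.5 m distance threshold is an assumed placeholder, not a value from the paper:

```python
from dataclasses import dataclass, field

@dataclass
class MapState:
    # Keyframe poses, stored here simply as (x, y, z) positions.
    keyframes: list = field(default_factory=list)

def distance_rule(pose, last_kf, thresh=0.5):
    # Rule 1: Euclidean distance to the last keyframe exceeds a
    # threshold (0.5 m is an assumed placeholder value).
    d = sum((a - b) ** 2 for a, b in zip(pose, last_kf)) ** 0.5
    return d > thresh

def coverage_rule(hull_area, image_area, ratio=0.4):
    # Rule 2: the convex hull of the matched keypoints covers less
    # than e.g. 40% of the image, as happens in tight turns.
    return hull_area < ratio * image_area

def needs_keyframe(pose, state, hull_area, image_area):
    if not state.keyframes:
        return True  # bootstrap: the first frame is always a keyframe
    return (distance_rule(pose, state.keyframes[-1])
            or coverage_rule(hull_area, image_area))
```

For example, `needs_keyframe((1.0, 0.0, 0.0), MapState(keyframes=[(0.0, 0.0, 0.0)]), 640 * 480, 640 * 480)` triggers on the distance rule alone.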
We perform loop detection using an appearance-based method, and we incorporate information from additional sensors into standard bundle adjustment by introducing synthetic map points.

3.1. Accurate pose estimation of keyframes

As in previous approaches [3], the pose between two keyframes is estimated by visual odometry through active feature search. It reduces the effort of exhaustive feature matching between 3D map points and the image by using a prior on the pose. It also enables us to use patch-based correlation as a feature descriptor. A pose prior C_t is calculated by moving the old pose C_{t-1} with its current speed v_{t-1}:

C_t = C_{t-1} + α v_{t-1} δt,   (1)

where δt is the time difference between two frames and α a damping factor. To find correspondences with the set of map points M = {M_1, ..., M_n}, we reproject these points into the camera at its predicted pose:

m = H(C_t, M),   (2)

where H(·,·) is the projective mapping of the camera at pose C_t. This results in a set m that contains the reprojected positions of the map points which lie within the image boundaries. The feature descriptor of a valid map point M_j is matched against image features within a fixed radius around m_j. The predicted pose C_t is then refined by minimizing the reprojection error of the correspondences. The advantage of active search is the reduced search space for feature matching; it also avoids gross outlier matches. However, active search fails if the motion of the robot is not well predicted, resulting in a wrong motion estimate.

Figure 3. The 7 nearest neighbors (y-axis) of each keyframe (x-axis). Starting from keyframe 100, the robot enters the loop for a second time.

To mitigate this problem, we add a separate feature matching step for every new keyframe C_k. We select a set L of map points that are visible in the last n = 5 keyframes. We project L into C_k:

l = H(C_k, L),   (3)

where l is the set of all projected positions of L which lie within the image boundaries of C_k. We determine the correspondences between L and the keypoints in C_k using the SURF feature descriptor. The pose is re-calculated using these correspondences. Since the correspondence set may contain outliers, we use the Tukey M-estimator error function for pose optimization. Afterwards, sliding window bundle adjustment is applied, taking the 10 most recent keyframes and their corresponding map points into account.

3.2. Map extension trigger

A crucial point for keyframe-based VSLAM methods is the decision when to extend the map. We define two rules to determine whether a frame is used for map extension. The first rule accounts for the distance between the current camera pose C_t and the last keyframe C_{k-1}: if the Euclidean distance d = ||C_t - C_{k-1}|| exceeds a given threshold (e.g. m), a map extension is triggered.
This is reasonable, because patch-based correlation can handle only a limited range of scale change. The second rule triggers map extension if the camera rotates. Given the set m of matched keypoints used for visual odometry, we analyze their distribution in image space (see Figure 2(a)). If the area of the convex hull of m is below a certain threshold (e.g. 40% of the image area), map extension is also triggered. This rule prevents track loss in tight turns of the robot.

3.3. Loop Closing

Because of the incremental nature of SLAM, small errors accumulate over time and cause a large overall error. Loop detection and correction decrease this error. To detect a loop closure, we compare the appearance of a new keyframe C_k to all previous keyframes C_1, ..., C_{k-1}. To get a set of N most similar

keyframes, we make use of the vocabulary tree approach proposed by Nister et al. [9]. The set of nearest neighbors N can be classified into three groups: recently added keyframes, old keyframes and false matches. Since recently added keyframes are not relevant for loop closure, they are removed from N. To identify outlier keyframes in N, we perform SURF feature matching between C_k and the keyframes remaining in N, followed by geometric validation. To guarantee robust loop detection, we require that N consists of at least three keyframes which are located close to each other. The keyframe structure is illustrated in Figure 3. For loop correction, we associate C_k with map points that were also visible in N using SURF features. The overall reprojection error is minimized by performing least-squares bundle adjustment on the whole map.

3.4. Sensor fusion

Figure 4. Sensor fusion. If only few map points are available, the camera is tracked by a relative pose estimation sensor, e.g. odometry or an IMU. We generate synthetic map points and reproject them into the new keyframe as well as into older ones.

Weakly textured areas are a serious problem for visual SLAM (see Figure 2(b)). If the robot is equipped with additional sensors like an odometer, laser range finder, GPS or IMU, these can be used to keep track of the robot. However, a fallback to dead-reckoning breaks the visual connectivity between keyframes. We follow a different strategy by incorporating relative movement information into the visual domain through synthetic map points. A set of synthetic map points is used to connect a keyframe C_k to the last n keyframes K = {C_{k-1}, ..., C_{k-n}}. A synthetic map point S_j consists of a 3D position M_j and a set of image measurements m = {m_1, ..., m_l}, where l ≤ n. A synthetic map point is not linked to any kind of real visual feature.
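A minimal sketch of this synthetic map point generation, assuming a toy pinhole projection and illustrative names (the paper does not specify an implementation):

```python
import random

def make_synthetic_points(new_kf, prev_kfs, n_points, project, visible,
                          rng=None):
    # Sketch of synthetic map point generation: each point is a random
    # 3D position plus its reprojections into the new keyframe and at
    # least one older keyframe, so bundle adjustment can treat it like
    # any visual map point.  project() and visible() are assumptions.
    rng = rng or random.Random(0)
    points, attempts = [], 0
    while len(points) < n_points and attempts < 10000:
        attempts += 1
        X = (rng.uniform(-5, 5), rng.uniform(-5, 5), rng.uniform(1, 10))
        uv_new = project(new_kf, X)
        if not visible(uv_new):
            continue  # must be visible in the new keyframe
        obs = {-1: uv_new}  # key -1 denotes the new keyframe
        for i, kf in enumerate(prev_kfs):
            uv = project(kf, X)
            if visible(uv):
                obs[i] = uv
        if len(obs) >= 2:  # also visible in at least one older keyframe
            points.append({"position": X, "observations": obs})
    return points

# Toy pinhole camera: the pose is a pure translation, f = 500, 640x480 image.
def project(pose, X):
    x, y, z = X[0] - pose[0], X[1] - pose[1], X[2] - pose[2]
    return (500.0 * x / z + 320.0, 500.0 * y / z + 240.0)

def visible(uv):
    return 0.0 <= uv[0] < 640.0 and 0.0 <= uv[1] < 480.0

pts = make_synthetic_points((0.0, 0.0, 0.0), [(0.2, 0.0, 0.0)],
                            n_points=50, project=project, visible=visible)
```

Each returned point carries exactly the 3D position and per-keyframe image measurements that a standard bundle adjustment residual expects.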
Since a synthetic map point carries the information required by bundle adjustment, it can be used in the optimization like any other map point. Therefore, existing and highly optimized structure-from-motion techniques can be used without special treatment of breaks in visual connectivity.

If the number of 3D-2D correspondences found in the tracking step drops below a given threshold (e.g. 30 correspondences), we use relative movement information to estimate the camera pose C_t. If the Euclidean distance or the rotational difference between C_t and the last keyframe C_k exceeds a threshold, a new keyframe C_{k+1} is added. We reconnect C_{k+1} by generating a set of synthetic map points S = {S_1, ..., S_l}. Each S_j is assigned a random 3D position M_j such that its reprojected position is visible in C_{k+1} and in at least one keyframe of K. To obtain the measurement set m of S_j, we reproject S_j into C_{k+1} and the keyframes in K, and store the reprojected positions in m. The number of synthetic map points generated to reconnect C_{k+1} is in the range of the average number of visible map points per keyframe in K. Figure 4 illustrates the synthetic map point generation.

4. Experiments

To evaluate the performance of our approach, we applied it to two datasets: a hallway scene and a sequence of the Rawseeds [2] project. Whereas the hallway scene is well textured and spacious, the Rawseeds sequence consists of long, narrow corridors with little texture. To demonstrate the problems

of a state-of-the-art approach, we processed both sequences with a modified monocular, keyframe-based VSLAM system [6] (stereo PTAM). We extended this system with a calibrated stereo camera setup to achieve a map with true scale.

4.1. Datasets and accuracy measure

The hallway dataset is a trajectory of a 69 m loop that is passed four times in a row. We recorded 3000 stereo images of 640x480 pixels at a framerate of 7.5 Hz. The camera was mounted on a wheeled robot with a driving speed of 0.7 m/s. The scene is static, with artificial and natural lighting. This dataset was taken in a well textured environment (Figure 5 (c)-(e)) and can be processed without the sensor fusion described in Section 3.4.

The Rawseeds dataset is a 180 m subsequence of the Bicocca b dataset provided by the Rawseeds project. The sequence comprises 5500 images with a resolution of 640x480 at a framerate of 15 Hz. The camera was also mounted on a wheeled robot, and the average driving speed was 0.5 m/s. The environment is illuminated by artificial lighting and the robot is the only moving object. As shown in Figure 5 (a), 5 (b) and Figure 2(b), the trajectory of the Rawseeds dataset starts in a large hallway and leads into narrow corridors. In contrast to the well textured hallway, the corridors are weakly textured and some parts do not contain any visual features (Figure 2(b)).

To quantify accuracy, we use the absolute trajectory error (ATE). Given two sets of pose estimates at synchronized time instances, A = {A_1, ..., A_n} and B = {B_1, ..., B_n}, the ATE is defined as

ATE = Σ_{k=1}^{n} d(A_k, B_k),

where d(·,·) denotes the Euclidean distance. Since the two sequences may be defined in different coordinate systems, they are first aligned by minimizing the ATE in a least-squares sense. The specified error is the remaining error after sequence alignment. For the Rawseeds dataset, we compare our results with the provided groundtruth. The hallway dataset is compared to a trajectory reconstructed by a laser range finder.
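The ATE computation can be sketched as follows. Note the simplifications: this sketch aligns by translation only (matching centroids), whereas the paper minimizes the ATE over the full alignment in a least-squares sense, and it returns the mean rather than the sum, matching the mean ATE values quoted in the experiments:

```python
import math

def ate(traj_a, traj_b, align=True):
    # Mean absolute trajectory error between two synchronized
    # trajectories.  Alignment here is translation-only (matching
    # centroids); a full least-squares alignment would also solve
    # for rotation.
    assert len(traj_a) == len(traj_b) and traj_a
    n, dim = len(traj_a), len(traj_a[0])
    if align:
        ca = [sum(p[i] for p in traj_a) / n for i in range(dim)]
        cb = [sum(p[i] for p in traj_b) / n for i in range(dim)]
        traj_b = [tuple(p[i] - cb[i] + ca[i] for i in range(dim))
                  for p in traj_b]
    return sum(math.dist(a, b) for a, b in zip(traj_a, traj_b)) / n
```

For two trajectories differing only by a constant offset, the aligned ATE is zero, while the unaligned ATE equals that offset.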
We used the GMapping algorithm proposed by Grisetti et al. [5].

4.2. Accuracy analysis

Figure 5. Example images from the datasets. (a) and (b) are taken from the Rawseeds dataset; the other three images are contained in the hallway dataset.

On the hallway dataset, stereo PTAM fails after roughly 15 m because visual odometry is not able to handle the abrupt motion changes of the robot. A door sill causes a displacement of up to 40 px between two consecutive frames. Since the active search region is limited to 8 px, no valid 3D-2D image correspondences could be identified. Patch-based correlation for data association is sensitive to repetitive texture, which prevents increasing the search region. Our system processes the hallway dataset completely without the need for sensor fusion. The mean ATE, compared to the trajectory

reconstructed by GMapping, is 0.22 m. The map contains 31,678 map points, which corresponds to 116 map points per meter. The result is shown in Figure 1.

Stereo PTAM reconstructs m of the Rawseeds sequence with a mean ATE of 0.88 m. The map contains 31,450 map points, which corresponds to 377 points per meter. The system fails after 83 m because it faces a pure white wall, as shown in Figure 2(b). The experiment shows that stereo PTAM is able to reconstruct a long trajectory with acceptable error, but it fails in case of abrupt motion changes and little feature information.

For processing the Rawseeds dataset with our approach, we use odometry information in case of too few image features. We fall back to odometry if fewer than 30 3D-2D correspondences are found by active search, which was the case at three parts of the trajectory. To reconnect keyframes generated during odometry tracking, we generate 80 map points per keyframe on average. Before performing loop correction, the mean ATE was 0.60 m, which is reduced to 0.55 m after loop optimization. The reconstructed map contains 31,600 map points (166 map points per meter). Although the number of map points per meter is decreased by 56%, the accuracy is increased.

For the Rawseeds dataset, two visual SLAM solutions are available online. One solution is based on an EKF approach using lines as visual features (Trinocular SLAM [13]). It achieves a mean ATE of 2.55 m with a standard deviation of 1.14 m. The second result is generated by an EKF approach using SURF features (CI-Stereo Graph [11]). Its trajectory has a mean ATE of 1.12 m with a standard deviation of 0.51 m. Our approach, with a mean ATE of 0.55 m, outperforms both example solutions by at least a factor of two. Figure 6 shows both reconstructed trajectories compared to the groundtruth and our approach.

Figure 6. Published SLAM results on the Rawseeds dataset, including our approach.

5. Conclusion

Many state-of-the-art visual SLAM systems suffer from limited scalability, a lack of loop-closure capability and limited robustness in environments with weak visual information. In our work, we extended a keyframe-based VSLAM approach to achieve accurate map and trajectory reconstruction in large-scale environments. We have shown that our system explores and navigates over a trajectory of more than 200 m with an ATE below 1%. It is capable of navigating through completely untextured scenes and manages map sizes of more than 3000 m². The increase in robustness is achieved by employing stronger feature descriptors, adding virtual map points where visual information is insufficient, and performing appearance-based localization for loop closing. Compared to other keyframe-based systems like PTAM, our system creates more computational overhead, primarily due to SURF descriptor computation and matching. However, on a high-end PC equipped with a GPGPU the system is still real-time capable. The incorporation of synthetic map points allows adding non-visual sensor information into the bundle adjustment framework without changing existing, highly optimized implementations.

One open aspect of our approach is the propagation of measurement uncertainty from external sensors to the synthetic map points. We currently follow the ad-hoc approach of adding random points with a covariance of 1. In the future we seek to examine a point placement strategy which models uncertainty in a more realistic way.

References

[1] M. Agrawal and K. Konolige. FrameSLAM: From bundle adjustment to real-time visual mapping. IEEE Transactions on Robotics, 24(5).
[2] S. Ceriani, G. Fontana, A. Giusti, D. Marzorati, M. Matteucci, D. Migliore, D. Rizzi, D. G. Sorrenti, and P. Taddei. Rawseeds ground truth collection systems for indoor self-localization and mapping. Autonomous Robots, 27.
[3] A. J. Davison. Active search for real-time vision. Int. Conf. on Computer Vision, 1:66-73.
[4] A. J. Davison and N. Kita. Sequential localization and map-building for real-time computer vision and robotics. Robotics and Autonomous Systems, 36(4).
[5] G. Grisetti, C. Stachniss, and W. Burgard. Improved techniques for grid mapping with Rao-Blackwellized particle filters. IEEE Transactions on Robotics, 23(1):34-46.
[6] G. Klein and D. Murray. Parallel tracking and mapping for small AR workspaces. In International Symposium on Mixed and Augmented Reality.
[7] K. Konolige, G. Grisetti, R. Kummerle, W. Burgard, B. Limketkai, and R. Vincent. Sparse pose adjustment for 2D mapping. In IROS, Taipei, Taiwan.
[8] J. Montiel, J. Civera, and A. Davison. Unified inverse depth parametrization for monocular SLAM. In Proc. of Robotics: Science and Systems.
[9] D. Nister and H. Stewenius. Scalable recognition with a vocabulary tree. In Proc. IEEE Conf. on Computer Vision and Pattern Recognition.
[10] L. M. Paz, P. Piniés, J. D. Tardós, and J. Neira. Large scale 6DOF SLAM with stereo-in-hand. IEEE Transactions on Robotics, 24(5).
[11] P. Piniés, L. M. Paz, and J. D. Tardós. CI-Graph: An efficient approach for large scale SLAM. In IEEE Int. Conf. Robotics and Automation.
[12] G. Sibley, C. Mei, I. Reid, and P. Newman. Adaptive relative bundle adjustment. In Proc. of Robotics: Science and Systems.
[13] D. G. Sorrenti, M. Matteucci, D. Marzorati, and A. Furlan. Benchmark solution to the stereo or trinocular SLAM, Bicocca b. rs/rawseeds/rs/assets/solutions_data/4adc84637e0a0/bs_milan_Bicocca_25b.pdf, last visited .
[14] H. Strasdat, J. M. M. Montiel, and A. Davison. Scale drift-aware large scale monocular SLAM. In Proc. of Robotics: Science and Systems, 2010.


More information

Monocular SLAM for a Small-Size Humanoid Robot

Monocular SLAM for a Small-Size Humanoid Robot Tamkang Journal of Science and Engineering, Vol. 14, No. 2, pp. 123 129 (2011) 123 Monocular SLAM for a Small-Size Humanoid Robot Yin-Tien Wang*, Duen-Yan Hung and Sheng-Hsien Cheng Department of Mechanical

More information

Accurate Motion Estimation and High-Precision 3D Reconstruction by Sensor Fusion

Accurate Motion Estimation and High-Precision 3D Reconstruction by Sensor Fusion 007 IEEE International Conference on Robotics and Automation Roma, Italy, 0-4 April 007 FrE5. Accurate Motion Estimation and High-Precision D Reconstruction by Sensor Fusion Yunsu Bok, Youngbae Hwang,

More information

Revising Stereo Vision Maps in Particle Filter Based SLAM using Localisation Confidence and Sample History

Revising Stereo Vision Maps in Particle Filter Based SLAM using Localisation Confidence and Sample History Revising Stereo Vision Maps in Particle Filter Based SLAM using Localisation Confidence and Sample History Simon Thompson and Satoshi Kagami Digital Human Research Center National Institute of Advanced

More information

Localization and Map Building

Localization and Map Building Localization and Map Building Noise and aliasing; odometric position estimation To localize or not to localize Belief representation Map representation Probabilistic map-based localization Other examples

More information

Real-Time Global Localization with A Pre-Built Visual Landmark Database

Real-Time Global Localization with A Pre-Built Visual Landmark Database Real-Time Global Localization with A Pre-Built Visual Database Zhiwei Zhu, Taragay Oskiper, Supun Samarasekera, Rakesh Kumar and Harpreet S. Sawhney Sarnoff Corporation, 201 Washington Road, Princeton,

More information

High-precision, consistent EKF-based visual-inertial odometry

High-precision, consistent EKF-based visual-inertial odometry High-precision, consistent EKF-based visual-inertial odometry Mingyang Li and Anastasios I. Mourikis, IJRR 2013 Ao Li Introduction What is visual-inertial odometry (VIO)? The problem of motion tracking

More information

Visual Odometry. Features, Tracking, Essential Matrix, and RANSAC. Stephan Weiss Computer Vision Group NASA-JPL / CalTech

Visual Odometry. Features, Tracking, Essential Matrix, and RANSAC. Stephan Weiss Computer Vision Group NASA-JPL / CalTech Visual Odometry Features, Tracking, Essential Matrix, and RANSAC Stephan Weiss Computer Vision Group NASA-JPL / CalTech Stephan.Weiss@ieee.org (c) 2013. Government sponsorship acknowledged. Outline The

More information

15 Years of Visual SLAM

15 Years of Visual SLAM 15 Years of Visual SLAM Andrew Davison Robot Vision Group and Dyson Robotics Laboratory Department of Computing Imperial College London www.google.com/+andrewdavison December 18, 2015 What Has Defined

More information

An image to map loop closing method for monocular SLAM

An image to map loop closing method for monocular SLAM An image to map loop closing method for monocular SLAM Brian Williams, Mark Cummins, José Neira, Paul Newman, Ian Reid and Juan Tardós Universidad de Zaragoza, Spain University of Oxford, UK Abstract In

More information

Structured Light II. Thanks to Ronen Gvili, Szymon Rusinkiewicz and Maks Ovsjanikov

Structured Light II. Thanks to Ronen Gvili, Szymon Rusinkiewicz and Maks Ovsjanikov Structured Light II Johannes Köhler Johannes.koehler@dfki.de Thanks to Ronen Gvili, Szymon Rusinkiewicz and Maks Ovsjanikov Introduction Previous lecture: Structured Light I Active Scanning Camera/emitter

More information

CS 532: 3D Computer Vision 7 th Set of Notes

CS 532: 3D Computer Vision 7 th Set of Notes 1 CS 532: 3D Computer Vision 7 th Set of Notes Instructor: Philippos Mordohai Webpage: www.cs.stevens.edu/~mordohai E-mail: Philippos.Mordohai@stevens.edu Office: Lieb 215 Logistics No class on October

More information

Monocular Camera Localization in 3D LiDAR Maps

Monocular Camera Localization in 3D LiDAR Maps Monocular Camera Localization in 3D LiDAR Maps Tim Caselitz Bastian Steder Michael Ruhnke Wolfram Burgard Abstract Localizing a camera in a given map is essential for vision-based navigation. In contrast

More information

L15. POSE-GRAPH SLAM. NA568 Mobile Robotics: Methods & Algorithms

L15. POSE-GRAPH SLAM. NA568 Mobile Robotics: Methods & Algorithms L15. POSE-GRAPH SLAM NA568 Mobile Robotics: Methods & Algorithms Today s Topic Nonlinear Least Squares Pose-Graph SLAM Incremental Smoothing and Mapping Feature-Based SLAM Filtering Problem: Motion Prediction

More information

Robot Mapping. Least Squares Approach to SLAM. Cyrill Stachniss

Robot Mapping. Least Squares Approach to SLAM. Cyrill Stachniss Robot Mapping Least Squares Approach to SLAM Cyrill Stachniss 1 Three Main SLAM Paradigms Kalman filter Particle filter Graphbased least squares approach to SLAM 2 Least Squares in General Approach for

More information

Graphbased. Kalman filter. Particle filter. Three Main SLAM Paradigms. Robot Mapping. Least Squares Approach to SLAM. Least Squares in General

Graphbased. Kalman filter. Particle filter. Three Main SLAM Paradigms. Robot Mapping. Least Squares Approach to SLAM. Least Squares in General Robot Mapping Three Main SLAM Paradigms Least Squares Approach to SLAM Kalman filter Particle filter Graphbased Cyrill Stachniss least squares approach to SLAM 1 2 Least Squares in General! Approach for

More information

A Sparse Hybrid Map for Vision-Guided Mobile Robots

A Sparse Hybrid Map for Vision-Guided Mobile Robots A Sparse Hybrid Map for Vision-Guided Mobile Robots Feras Dayoub Grzegorz Cielniak Tom Duckett Department of Computing and Informatics, University of Lincoln, Lincoln, UK {fdayoub,gcielniak,tduckett}@lincoln.ac.uk

More information

Simultaneous Localization and Mapping (SLAM)

Simultaneous Localization and Mapping (SLAM) Simultaneous Localization and Mapping (SLAM) RSS Lecture 16 April 8, 2013 Prof. Teller Text: Siegwart and Nourbakhsh S. 5.8 SLAM Problem Statement Inputs: No external coordinate reference Time series of

More information

Planetary Rover Absolute Localization by Combining Visual Odometry with Orbital Image Measurements

Planetary Rover Absolute Localization by Combining Visual Odometry with Orbital Image Measurements Planetary Rover Absolute Localization by Combining Visual Odometry with Orbital Image Measurements M. Lourakis and E. Hourdakis Institute of Computer Science Foundation for Research and Technology Hellas

More information

Visual Navigation for Micro Air Vehicles

Visual Navigation for Micro Air Vehicles Visual Navigation for Micro Air Vehicles Abraham Bachrach, Albert S. Huang, Daniel Maturana, Peter Henry, Michael Krainin, Dieter Fox, and Nicholas Roy Computer Science and Artificial Intelligence Laboratory,

More information

Temporally Scalable Visual SLAM using a Reduced Pose Graph

Temporally Scalable Visual SLAM using a Reduced Pose Graph Temporally Scalable Visual SLAM using a Reduced Hordur Johannsson, Michael Kaess, Maurice Fallon and John J. Leonard Abstract In this paper, we demonstrate a system for temporally scalable visual SLAM

More information

UAV Autonomous Navigation in a GPS-limited Urban Environment

UAV Autonomous Navigation in a GPS-limited Urban Environment UAV Autonomous Navigation in a GPS-limited Urban Environment Yoko Watanabe DCSD/CDIN JSO-Aerial Robotics 2014/10/02-03 Introduction 2 Global objective Development of a UAV onboard system to maintain flight

More information

Visual SLAM for small Unmanned Aerial Vehicles

Visual SLAM for small Unmanned Aerial Vehicles Visual SLAM for small Unmanned Aerial Vehicles Margarita Chli Autonomous Systems Lab, ETH Zurich Simultaneous Localization And Mapping How can a body navigate in a previously unknown environment while

More information

Stable Vision-Aided Navigation for Large-Area Augmented Reality

Stable Vision-Aided Navigation for Large-Area Augmented Reality Stable Vision-Aided Navigation for Large-Area Augmented Reality Taragay Oskiper, Han-Pang Chiu, Zhiwei Zhu Supun Samarasekera, Rakesh Teddy Kumar Vision and Robotics Laboratory SRI-International Sarnoff,

More information

CSE 252B: Computer Vision II

CSE 252B: Computer Vision II CSE 252B: Computer Vision II Lecturer: Serge Belongie Scribes: Jeremy Pollock and Neil Alldrin LECTURE 14 Robust Feature Matching 14.1. Introduction Last lecture we learned how to find interest points

More information

3D Computer Vision. Structured Light II. Prof. Didier Stricker. Kaiserlautern University.

3D Computer Vision. Structured Light II. Prof. Didier Stricker. Kaiserlautern University. 3D Computer Vision Structured Light II Prof. Didier Stricker Kaiserlautern University http://ags.cs.uni-kl.de/ DFKI Deutsches Forschungszentrum für Künstliche Intelligenz http://av.dfki.de 1 Introduction

More information

Robot Mapping. SLAM Front-Ends. Cyrill Stachniss. Partial image courtesy: Edwin Olson 1

Robot Mapping. SLAM Front-Ends. Cyrill Stachniss. Partial image courtesy: Edwin Olson 1 Robot Mapping SLAM Front-Ends Cyrill Stachniss Partial image courtesy: Edwin Olson 1 Graph-Based SLAM Constraints connect the nodes through odometry and observations Robot pose Constraint 2 Graph-Based

More information

Stereo and Epipolar geometry

Stereo and Epipolar geometry Previously Image Primitives (feature points, lines, contours) Today: Stereo and Epipolar geometry How to match primitives between two (multiple) views) Goals: 3D reconstruction, recognition Jana Kosecka

More information

Robot localization method based on visual features and their geometric relationship

Robot localization method based on visual features and their geometric relationship , pp.46-50 http://dx.doi.org/10.14257/astl.2015.85.11 Robot localization method based on visual features and their geometric relationship Sangyun Lee 1, Changkyung Eem 2, and Hyunki Hong 3 1 Department

More information

Multi-scale Tetrahedral Fusion of a Similarity Reconstruction and Noisy Positional Measurements

Multi-scale Tetrahedral Fusion of a Similarity Reconstruction and Noisy Positional Measurements Multi-scale Tetrahedral Fusion of a Similarity Reconstruction and Noisy Positional Measurements Runze Zhang, Tian Fang 1, Siyu Zhu, Long Quan The Hong Kong University of Science and Technology Abstract.

More information

Simultaneous Localization

Simultaneous Localization Simultaneous Localization and Mapping (SLAM) RSS Technical Lecture 16 April 9, 2012 Prof. Teller Text: Siegwart and Nourbakhsh S. 5.8 Navigation Overview Where am I? Where am I going? Localization Assumed

More information

Particle Filters. CSE-571 Probabilistic Robotics. Dependencies. Particle Filter Algorithm. Fast-SLAM Mapping

Particle Filters. CSE-571 Probabilistic Robotics. Dependencies. Particle Filter Algorithm. Fast-SLAM Mapping CSE-571 Probabilistic Robotics Fast-SLAM Mapping Particle Filters Represent belief by random samples Estimation of non-gaussian, nonlinear processes Sampling Importance Resampling (SIR) principle Draw

More information

arxiv: v1 [cs.ro] 24 Nov 2018

arxiv: v1 [cs.ro] 24 Nov 2018 BENCHMARKING AND COMPARING POPULAR VISUAL SLAM ALGORITHMS arxiv:1811.09895v1 [cs.ro] 24 Nov 2018 Amey Kasar Department of Electronics and Telecommunication Engineering Pune Institute of Computer Technology

More information

CS 4758: Automated Semantic Mapping of Environment

CS 4758: Automated Semantic Mapping of Environment CS 4758: Automated Semantic Mapping of Environment Dongsu Lee, ECE, M.Eng., dl624@cornell.edu Aperahama Parangi, CS, 2013, alp75@cornell.edu Abstract The purpose of this project is to program an Erratic

More information

Localization, Where am I?

Localization, Where am I? 5.1 Localization, Where am I?? position Position Update (Estimation?) Encoder Prediction of Position (e.g. odometry) YES matched observations Map data base predicted position Matching Odometry, Dead Reckoning

More information

Mobile Robotics. Mathematics, Models, and Methods. HI Cambridge. Alonzo Kelly. Carnegie Mellon University UNIVERSITY PRESS

Mobile Robotics. Mathematics, Models, and Methods. HI Cambridge. Alonzo Kelly. Carnegie Mellon University UNIVERSITY PRESS Mobile Robotics Mathematics, Models, and Methods Alonzo Kelly Carnegie Mellon University HI Cambridge UNIVERSITY PRESS Contents Preface page xiii 1 Introduction 1 1.1 Applications of Mobile Robots 2 1.2

More information

Efficient SLAM Scheme Based ICP Matching Algorithm Using Image and Laser Scan Information

Efficient SLAM Scheme Based ICP Matching Algorithm Using Image and Laser Scan Information Proceedings of the World Congress on Electrical Engineering and Computer Systems and Science (EECSS 2015) Barcelona, Spain July 13-14, 2015 Paper No. 335 Efficient SLAM Scheme Based ICP Matching Algorithm

More information

Semi-Dense Direct SLAM

Semi-Dense Direct SLAM Computer Vision Group Technical University of Munich Jakob Engel Jakob Engel, Daniel Cremers David Caruso, Thomas Schöps, Lukas von Stumberg, Vladyslav Usenko, Jörg Stückler, Jürgen Sturm Technical University

More information

Probabilistic Robotics

Probabilistic Robotics Probabilistic Robotics FastSLAM Sebastian Thrun (abridged and adapted by Rodrigo Ventura in Oct-2008) The SLAM Problem SLAM stands for simultaneous localization and mapping The task of building a map while

More information

Robotics. Lecture 8: Simultaneous Localisation and Mapping (SLAM)

Robotics. Lecture 8: Simultaneous Localisation and Mapping (SLAM) Robotics Lecture 8: Simultaneous Localisation and Mapping (SLAM) See course website http://www.doc.ic.ac.uk/~ajd/robotics/ for up to date information. Andrew Davison Department of Computing Imperial College

More information

Visual SLAM. An Overview. L. Freda. ALCOR Lab DIAG University of Rome La Sapienza. May 3, 2016

Visual SLAM. An Overview. L. Freda. ALCOR Lab DIAG University of Rome La Sapienza. May 3, 2016 An Overview L. Freda ALCOR Lab DIAG University of Rome La Sapienza May 3, 2016 L. Freda (University of Rome La Sapienza ) Visual SLAM May 3, 2016 1 / 39 Outline 1 Introduction What is SLAM Motivations

More information

VINet: Visual-Inertial Odometry as a Sequence-to-Sequence Learning Problem

VINet: Visual-Inertial Odometry as a Sequence-to-Sequence Learning Problem VINet: Visual-Inertial Odometry as a Sequence-to-Sequence Learning Problem Presented by: Justin Gorgen Yen-ting Chen Hao-en Sung Haifeng Huang University of California, San Diego May 23, 2017 Original

More information

Estimating Geospatial Trajectory of a Moving Camera

Estimating Geospatial Trajectory of a Moving Camera Estimating Geospatial Trajectory of a Moving Camera Asaad Hakeem 1, Roberto Vezzani 2, Mubarak Shah 1, Rita Cucchiara 2 1 School of Electrical Engineering and Computer Science, University of Central Florida,

More information

Master Automática y Robótica. Técnicas Avanzadas de Vision: Visual Odometry. by Pascual Campoy Computer Vision Group

Master Automática y Robótica. Técnicas Avanzadas de Vision: Visual Odometry. by Pascual Campoy Computer Vision Group Master Automática y Robótica Técnicas Avanzadas de Vision: by Pascual Campoy Computer Vision Group www.vision4uav.eu Centro de Automá

More information

Step-by-Step Model Buidling

Step-by-Step Model Buidling Step-by-Step Model Buidling Review Feature selection Feature selection Feature correspondence Camera Calibration Euclidean Reconstruction Landing Augmented Reality Vision Based Control Sparse Structure

More information

Lecture 13 Visual Inertial Fusion

Lecture 13 Visual Inertial Fusion Lecture 13 Visual Inertial Fusion Davide Scaramuzza Course Evaluation Please fill the evaluation form you received by email! Provide feedback on Exercises: good and bad Course: good and bad How to improve

More information

A TECHNIQUE OF NATURAL VISUAL LANDMARKS DETECTION AND DESCRIPTION FOR MOBILE ROBOT COGNITIVE NAVIGATION

A TECHNIQUE OF NATURAL VISUAL LANDMARKS DETECTION AND DESCRIPTION FOR MOBILE ROBOT COGNITIVE NAVIGATION A TECHNIQUE OF NATURAL VISUAL LANDMARKS DETECTION AND DESCRIPTION FOR MOBILE ROBOT COGNITIVE NAVIGATION Ekaterina Smirnova, Dmitrii Stepanov, Vladimir Goryunov Central R&D Institute for Robotics and Technical

More information

Introduction to robot algorithms CSE 410/510

Introduction to robot algorithms CSE 410/510 Introduction to robot algorithms CSE 410/510 Rob Platt robplatt@buffalo.edu Times: MWF, 10-10:50 Location: Clemens 322 Course web page: http://people.csail.mit.edu/rplatt/cse510.html Office Hours: 11-12

More information

ORB SLAM 2 : an Open-Source SLAM System for Monocular, Stereo and RGB-D Cameras

ORB SLAM 2 : an Open-Source SLAM System for Monocular, Stereo and RGB-D Cameras ORB SLAM 2 : an OpenSource SLAM System for Monocular, Stereo and RGBD Cameras Raul urartal and Juan D. Tardos Presented by: Xiaoyu Zhou Bolun Zhang Akshaya Purohit Lenord Melvix 1 Outline Background Introduction

More information

AN INCREMENTAL SLAM ALGORITHM FOR INDOOR AUTONOMOUS NAVIGATION

AN INCREMENTAL SLAM ALGORITHM FOR INDOOR AUTONOMOUS NAVIGATION 20th IMEKO TC4 International Symposium and 18th International Workshop on ADC Modelling and Testing Research on Electric and Electronic Measurement for the Economic Upturn Benevento, Italy, September 15-17,

More information

Weighted Local Bundle Adjustment and Application to Odometry and Visual SLAM Fusion

Weighted Local Bundle Adjustment and Application to Odometry and Visual SLAM Fusion EUDES,NAUDET,LHUILLIER,DHOME: WEIGHTED LBA & ODOMETRY FUSION 1 Weighted Local Bundle Adjustment and Application to Odometry and Visual SLAM Fusion Alexandre Eudes 12 alexandre.eudes@lasmea.univ-bpclermont.fr

More information

3D Scene Reconstruction with a Mobile Camera

3D Scene Reconstruction with a Mobile Camera 3D Scene Reconstruction with a Mobile Camera 1 Introduction Robert Carrera and Rohan Khanna Stanford University: CS 231A Autonomous supernumerary arms, or "third arms", while still unconventional, hold

More information

Application questions. Theoretical questions

Application questions. Theoretical questions The oral exam will last 30 minutes and will consist of one application question followed by two theoretical questions. Please find below a non exhaustive list of possible application questions. The list

More information

Improvements for an appearance-based SLAM-Approach for large-scale environments

Improvements for an appearance-based SLAM-Approach for large-scale environments 1 Improvements for an appearance-based SLAM-Approach for large-scale environments Alexander Koenig Jens Kessler Horst-Michael Gross Neuroinformatics and Cognitive Robotics Lab, Ilmenau University of Technology,

More information

Robust Place Recognition for 3D Range Data based on Point Features

Robust Place Recognition for 3D Range Data based on Point Features 21 IEEE International Conference on Robotics and Automation Anchorage Convention District May 3-8, 21, Anchorage, Alaska, USA Robust Place Recognition for 3D Range Data based on Point Features Bastian

More information

Image Augmented Laser Scan Matching for Indoor Localization

Image Augmented Laser Scan Matching for Indoor Localization Image Augmented Laser Scan Matching for Indoor Localization Nikhil Naikal Avideh Zakhor John Kua Electrical Engineering and Computer Sciences University of California at Berkeley Technical Report No. UCB/EECS-2009-35

More information

ICRA 2016 Tutorial on SLAM. Graph-Based SLAM and Sparsity. Cyrill Stachniss

ICRA 2016 Tutorial on SLAM. Graph-Based SLAM and Sparsity. Cyrill Stachniss ICRA 2016 Tutorial on SLAM Graph-Based SLAM and Sparsity Cyrill Stachniss 1 Graph-Based SLAM?? 2 Graph-Based SLAM?? SLAM = simultaneous localization and mapping 3 Graph-Based SLAM?? SLAM = simultaneous

More information

Monocular Visual-Inertial SLAM. Shaojie Shen Assistant Professor, HKUST Director, HKUST-DJI Joint Innovation Laboratory

Monocular Visual-Inertial SLAM. Shaojie Shen Assistant Professor, HKUST Director, HKUST-DJI Joint Innovation Laboratory Monocular Visual-Inertial SLAM Shaojie Shen Assistant Professor, HKUST Director, HKUST-DJI Joint Innovation Laboratory Why Monocular? Minimum structural requirements Widely available sensors Applications:

More information

A High Dynamic Range Vision Approach to Outdoor Localization

A High Dynamic Range Vision Approach to Outdoor Localization A High Dynamic Range Vision Approach to Outdoor Localization Kiyoshi Irie, Tomoaki Yoshida, and Masahiro Tomono Abstract We propose a novel localization method for outdoor mobile robots using High Dynamic

More information

Probabilistic Robotics. FastSLAM

Probabilistic Robotics. FastSLAM Probabilistic Robotics FastSLAM The SLAM Problem SLAM stands for simultaneous localization and mapping The task of building a map while estimating the pose of the robot relative to this map Why is SLAM

More information

FLaME: Fast Lightweight Mesh Estimation using Variational Smoothing on Delaunay Graphs

FLaME: Fast Lightweight Mesh Estimation using Variational Smoothing on Delaunay Graphs FLaME: Fast Lightweight Mesh Estimation using Variational Smoothing on Delaunay Graphs W. Nicholas Greene Robust Robotics Group, MIT CSAIL LPM Workshop IROS 2017 September 28, 2017 with Nicholas Roy 1

More information

Nonlinear State Estimation for Robotics and Computer Vision Applications: An Overview

Nonlinear State Estimation for Robotics and Computer Vision Applications: An Overview Nonlinear State Estimation for Robotics and Computer Vision Applications: An Overview Arun Das 05/09/2017 Arun Das Waterloo Autonomous Vehicles Lab Introduction What s in a name? Arun Das Waterloo Autonomous

More information

Building Reliable 2D Maps from 3D Features

Building Reliable 2D Maps from 3D Features Building Reliable 2D Maps from 3D Features Dipl. Technoinform. Jens Wettach, Prof. Dr. rer. nat. Karsten Berns TU Kaiserslautern; Robotics Research Lab 1, Geb. 48; Gottlieb-Daimler- Str.1; 67663 Kaiserslautern;

More information

Unified Loop Closing and Recovery for Real Time Monocular SLAM

Unified Loop Closing and Recovery for Real Time Monocular SLAM Unified Loop Closing and Recovery for Real Time Monocular SLAM Ethan Eade and Tom Drummond Machine Intelligence Laboratory, Cambridge University {ee231, twd20}@cam.ac.uk Abstract We present a unified method

More information

Hidden View Synthesis using Real-Time Visual SLAM for Simplifying Video Surveillance Analysis

Hidden View Synthesis using Real-Time Visual SLAM for Simplifying Video Surveillance Analysis 2011 IEEE International Conference on Robotics and Automation Shanghai International Conference Center May 9-13, 2011, Shanghai, China Hidden View Synthesis using Real-Time Visual SLAM for Simplifying

More information

Towards a visual perception system for LNG pipe inspection

Towards a visual perception system for LNG pipe inspection Towards a visual perception system for LNG pipe inspection LPV Project Team: Brett Browning (PI), Peter Rander (co PI), Peter Hansen Hatem Alismail, Mohamed Mustafa, Joey Gannon Qri8 Lab A Brief Overview

More information

L17. OCCUPANCY MAPS. NA568 Mobile Robotics: Methods & Algorithms

L17. OCCUPANCY MAPS. NA568 Mobile Robotics: Methods & Algorithms L17. OCCUPANCY MAPS NA568 Mobile Robotics: Methods & Algorithms Today s Topic Why Occupancy Maps? Bayes Binary Filters Log-odds Occupancy Maps Inverse sensor model Learning inverse sensor model ML map

More information

Image Augmented Laser Scan Matching for Indoor Dead Reckoning

Image Augmented Laser Scan Matching for Indoor Dead Reckoning Image Augmented Laser Scan Matching for Indoor Dead Reckoning Nikhil Naikal, John Kua, George Chen, and Avideh Zakhor Abstract Most existing approaches to indoor localization focus on using either cameras

More information

On the Use of Inverse Scaling in Monocular SLAM

On the Use of Inverse Scaling in Monocular SLAM On the Use of Inverse Scaling in Monocular SLAM Daniele Marzorati 1, Matteo Matteucci 2, Davide Migliore 2, Domenico G. Sorrenti 1 1 Università degli Studi di Milano - Bicocca 2 Politecnico di Milano SLAM

More information

Subpixel Corner Detection for Tracking Applications using CMOS Camera Technology

Subpixel Corner Detection for Tracking Applications using CMOS Camera Technology Subpixel Corner Detection for Tracking Applications using CMOS Camera Technology Christoph Stock, Ulrich Mühlmann, Manmohan Krishna Chandraker, Axel Pinz Institute of Electrical Measurement and Measurement

More information