Global localization from a single feature correspondence


Friedrich Fraundorfer and Horst Bischof
Institute for Computer Graphics and Vision, Graz University of Technology
{fraunfri,bischof}@icg.tu-graz.ac.at

Abstract: This paper presents a new approach to global localization for mobile robots from a single feature correspondence only. The method is based on a piece-wise planar environment map and uses planar natural landmarks as commonly encountered in man-made environments. The method is especially useful in the case of large occlusions. The standard Lu and Hager pose estimation algorithm [6] is extended to improve accuracy and robustness. Localization experiments are performed in a room-size real-world scenario.

1 Introduction

This work focuses on a specific detail of mobile robot localization: global localization. It is needed when no prior information about the robot's position is known, e.g. when the robot is switched on at an arbitrary position, or for the kidnapped robot problem. In addition, global localization may also be used to verify and correct the robot's position estimated incrementally from odometry (mechanical or visual) measurements. Such methods inherently accumulate position errors, and the longer the system runs, the further the estimate deviates from the correct position. Such situations could be detected and corrected using global localization. Recent advances in wide-baseline matching (see [7, 13, 9, 5, 4, 8, 3]) opened the door to solving the correspondence problem for natural visual landmarks. One successful approach to global localization has been presented by Se et al. [12]. Their system uses a metric map consisting of 3D points as landmarks. The map is composed of various sub-maps which are merged through global alignment. The extracted landmarks are scale-invariant features, DoG keypoints [5]. The landmark correspondence problem is solved with a rotation-invariant descriptor (SIFT [5]) which allows fast and reliable matching of corresponding landmarks. The robot's pose is then calculated from the 3D-2D landmark correspondences between the landmarks detected in the current view and the corresponding 3D landmarks in the map. Difficulties with that approach arise when only a small number of feature correspondences can be detected or when the accuracy of the 3D-2D point correspondences is poor. Heavy occlusions (e.g. due to surrounding people) might make it difficult to establish a large enough number of correspondences, and usually only a large number of correspondences assures a good estimate. On the other hand, the accuracy of the point correspondences strongly affects the precision of the pose estimate, and simple SIFT feature matches (especially in wide-baseline cases) achieve only limited accuracy. The intention of this work is therefore to focus on the problem of how to solve pose estimation when only one feature correspondence can be established, a worst-case scenario. We present a method which successfully allows pose estimation from a single planar feature correspondence of a very small image region (see section 3). Additional 3D-2D point correspondences are established within the single planar landmark, and the pose is estimated using the iterative pose estimation algorithm of Lu and Hager [6]. An extension to the Lu and Hager algorithm is presented which improves accuracy and robustness. The method is based on a piece-wise planar world representation whose details are outlined in section 2. Results for localization and map building experiments are given in section 4. Finally, we draw conclusions in section 5.

2 Map building

The proposed world map is a set of 3D reconstructions of planar interest regions. It is organized in sub-maps, which are linked by similarity transformations. The sub-maps can be merged into one coordinate frame to build a single complete world map. The planar map features are represented with their full 6DOF pose. Each map feature is associated with an image region: an interest region detector (in our case MSER [7]) is used to extract descriptive and distinctive parts of an image as natural landmarks. The proposed map building algorithm is an off-line batch algorithm. Its input is an image sequence from a single camera, and 3D reconstruction is done in a structure-from-motion approach. The algorithm first identifies sub-maps in the whole image sequence. Subsequently, a metric reconstruction is built for each sub-map. The last step is linking the sub-maps.

2.1 Sub-map identification

We assume that we have a large set of unorganized images of the area to map (we do not assume consecutive images; the images might be acquired by multiple robots). This step partitions the whole set into sub-sets containing images with short-baseline variation only. Each partition will then act as a sub-map. To partition the image set we calculate a global feature vector for each image and define a similarity measure in feature space. To calculate the global image description we re-sample the image to a very low resolution. Then we calculate the SIFT descriptor of the whole image, which results in a feature vector of length 128. The feature vectors of images from similar viewpoints with a short baseline form clusters in feature space. The similarity is measured with the Euclidean distance of the vectors. The partitioning of the image set is then done by hierarchical clustering [1] in feature space. Every cluster acts as a sub-map, and two images from the cluster are chosen to represent the sub-map. The remaining images are not used for further processing.
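To make the partitioning step concrete, the following is a minimal sketch in Python (OpenCV and SciPy). Note two assumptions of this sketch: a plain low-resolution intensity vector stands in for the whole-image SIFT descriptor used in the paper, and the cluster cut-off distance is an illustrative parameter, not a value from the paper.

    import numpy as np
    import cv2
    from scipy.cluster.hierarchy import fcluster, linkage

    def global_descriptor(image, size=(16, 16)):
        # Stand-in for the paper's whole-image SIFT descriptor: re-sample
        # the image to a very low resolution and use the normalized
        # intensities as the global feature vector (assumes a BGR image).
        gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
        small = cv2.resize(gray, size, interpolation=cv2.INTER_AREA)
        vec = small.astype(np.float32).ravel()
        return vec / (np.linalg.norm(vec) + 1e-9)

    def identify_submaps(images, cut_distance=0.5):
        # Partition an unordered image set into short-baseline clusters by
        # hierarchical clustering on Euclidean distances in feature space.
        feats = np.stack([global_descriptor(img) for img in images])
        Z = linkage(feats, method='average', metric='euclidean')
        labels = fcluster(Z, t=cut_distance, criterion='distance')
        clusters = {}
        for idx, lab in enumerate(labels):
            clusters.setdefault(lab, []).append(idx)
        # Two images per cluster represent the sub-map; singletons are dropped.
        return [m[:2] for m in clusters.values() if len(m) >= 2]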

2.2 Sub-map reconstruction

Each sub-map is defined by two short-baseline images. The camera positions are estimated with the 5-point stereo algorithm of Nister [10]. Next, scene planes are identified in the images using inter-image homographies [2]. From the resulting point correspondences the planes can be reconstructed in 3D. In a next step, features for wide-baseline matching are extracted from the images, in this case MSER regions. A local affine frame (LAF) [7] is computed for every region and used for normalization. A SIFT descriptor is calculated for each normalized MSER region. Only planar landmarks should be contained in the map, thus planarity has to be checked for each image region: a landmark located within the detected planar areas is of course planar, and features which are not located on one of the detected planes are discarded. The sub-map is finally represented by the identified planes (in 3D and in the image), the camera positions, and the extracted MSER regions as image patches with their SIFT descriptors. A kd-tree data structure is used to store the SIFT features.

2.3 Sub-map linking

Two sub-maps can be linked if both contain at least one common planar feature. Corresponding features are identified by feature matching: we search for pairs of close SIFT feature vectors in feature space. This is done by computing the distance matrix of all SIFT features (using the kd-tree) and sorting the distances in ascending order. This list of tentative matches is then verified with an iterative correlation-based region matcher [2], starting with the smallest distance. The match verification continues until a link between all sub-maps has been found or all list entries have been processed. The region matching provides a set of point correspondences with sub-pixel accuracy for each matched region. Two sub-maps to be linked are represented in two different coordinate frames and differ in scale; linking them therefore means estimating a rigid transformation (rotation R and translation t) and a scale factor s. First, we calculate a 3D reconstruction of the point correspondences in each sub-map: by projecting the image points onto the plane we obtain the 3D reconstructions of the point correspondences. For the scale factor s, two arbitrary points from the reconstructed point set are selected and their distance is measured. The distance between the corresponding points from the other sub-map is calculated too, and the ratio between these two distances defines the scale factor s. The second sub-map is then scaled with s. Now the rigid transformation between both sub-maps can easily be estimated from the corresponding 3D point sets. The second sub-map is transformed into the coordinate frame of the first sub-map using R and t, and the second sub-map's planes and features are added to the first sub-map.
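The linking step thus reduces to a scaled rigid alignment of two matched 3D point sets. Below is a compact NumPy sketch, assuming P and Q are N x 3 arrays of corresponding points reconstructed in the first and second sub-map; the rotation is recovered with the standard SVD-based (Horn/Kabsch) alignment, a method the paper does not name explicitly.

    import numpy as np

    def link_submaps(P, Q):
        # Scale from the distance ratio of one arbitrary point pair, as in
        # the text (a least-squares scale would average over all pairs).
        s = np.linalg.norm(P[0] - P[1]) / np.linalg.norm(Q[0] - Q[1])
        Qs = s * Q
        # Rigid alignment of the scaled, centered point sets via SVD.
        cP, cQ = P.mean(axis=0), Qs.mean(axis=0)
        H = (Qs - cQ).T @ (P - cP)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:   # guard against a reflection
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = cP - R @ cQ
        # A point X of the second sub-map maps to R @ (s * X) + t.
        return s, R, t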

3 Localization from a single feature correspondence

3.1 Finding feature correspondences

Similar to the sub-map linking step, MSER regions are extracted from the image of the current view. A LAF is computed for each region and region normalization is performed. A SIFT descriptor is computed from the normalized image patch. Using the SIFT descriptor, tentative feature correspondences are searched among the map features. The matches are confirmed by the iterative region matching already mentioned. Matching yields planar regions only (a requirement for the subsequent pose estimation), because only planar regions are stored in the map (see section 2). The region matching algorithm establishes point correspondences within the support region of the feature (the area the SIFT descriptor is computed from). This means that a single landmark correspondence gives rise to a whole set of point correspondences, which allows pose estimation.

3.2 Pose estimation from a single landmark

Pose estimation is performed using the iterative pose estimation algorithm of Lu and Hager [6]. The algorithm returns the full 3DOF position and 3DOF rotation of the robot, computed from a set of 3D-2D point correspondences. For each matched landmark the matching algorithm returns 2D point correspondences between the landmark in the current view and the landmark stored in the environment map. As the map contains the 3D parameters of the plane on which the landmark is located, the corresponding 3D coordinates of the point set can be computed by projecting the 2D point matches (from the landmark in the map) onto the corresponding 3D plane. This creates the 3D-2D point correspondences for the pose estimation. To successfully apply the Lu and Hager pose estimation we have to deal with two issues. First, there exist two possible solutions for the pose; this ambiguity stems from two local minima of the used error function, as shown by Schweighofer and Pinz [11], and results in arbitrary pose jumps. Second, to obtain the best results it is necessary to provide a coarse initial pose (rotation only) to the algorithm. The first issue is addressed as follows. Most of the time we get more than one corresponding feature. In such a case we verify whether the computed pose coincides with the other feature correspondences: we check if the 2D point correspondences of the other features satisfy the epipolar constraint imposed by the computed pose. Only for a correct solution do the additional correspondences satisfy the constraint. When only one feature match is present, a wrong solution can still be identified if it exhibits an impossible camera configuration, i.e. the reconstruction is not in front of the camera. However, if both solutions provide valid configurations, the correct one cannot always be detected; such cases can be handled with a higher-level statistical SLAM framework.
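The geometric core of section 3.2, turning the 2D-2D region matches into the required 3D-2D correspondences, can be sketched as follows (NumPy; the plane parameterization n.X = d in the frame of the camera that observed the map image is an assumption of this sketch, and K denotes the 3 x 3 intrinsic matrix):

    import numpy as np

    def backproject_to_plane(pts2d, K, n, d):
        # Intersect the viewing rays through the pixels pts2d (N x 2) with
        # the plane n.X = d to obtain 3D points on the landmark's plane.
        pts_h = np.hstack([pts2d, np.ones((len(pts2d), 1))])
        rays = (np.linalg.inv(K) @ pts_h.T).T      # viewing directions
        lam = d / (rays @ n)                       # ray parameter at the plane
        return rays * lam[:, None]                 # N x 3 points, n.X = d holds

Paired with the 2D matches in the current view, these 3D points form the 3D-2D correspondences fed to the iterative pose estimation.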

Difficulties with the second issue arise because we want to perform global localization. For an accurate solution the pose estimation needs a coarse initial rotation; without an initial rotation the minimization might stop at a local minimum. In a tracking scenario (the usual field of application for such pose estimation algorithms) the initial rotation is given by a previous position, but for global localization no previous position is known. In our case the initial rotation is obtained by essential matrix computation (using the 5-point stereo algorithm by Nister [10]). The 5-point algorithm would already return a complete pose estimate, but for our special application (point correspondences located on a plane) the accuracy of the result is not sufficient; especially the translation estimate shows a large variance. The estimated rotation, however, is good enough to initialize the Lu and Hager pose estimation. The resulting method is outlined in Algorithm 1.

Algorithm 1: Pose estimation from planar landmarks (sub-sampling method).

    Q <- []   {list to hold possible pose solutions (R, t)}
    for all region correspondences do
        project the 2D points onto the plane to create 3D-2D matches
        for i = 1 to n do
            select a random subset S of size p from the 3D-2D correspondences
            compute R, t from S using the 5-point algorithm
            run 3D-2D pose estimation with initial rotation R
            add the pose (R, t) to Q if the 3D points are located in front of the camera
        end for
    end for
    for i = 1 to length(Q) do
        calculate the mean epipolar distance of R, t from Q(i) on the 2D-2D
        correspondences over all matching regions
    end for
    return the R, t with the minimal epipolar distance
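The initial-rotation step could, for instance, be realized with OpenCV, whose findEssentialMat implements Nister's five-point solver inside a RANSAC loop; the threshold below is illustrative, and pts_map, pts_view are assumed to be the N x 2 pixel correspondences of a matched region.

    import cv2

    def initial_rotation(pts_map, pts_view, K):
        # Essential matrix from the five-point algorithm (RANSAC).
        E, inliers = cv2.findEssentialMat(pts_map, pts_view, K,
                                          method=cv2.RANSAC, threshold=1.0)
        # recoverPose disambiguates the four (R, t) decompositions of E by
        # the cheirality check (points must lie in front of the camera).
        _, R, t, _ = cv2.recoverPose(E, pts_map, pts_view, K, mask=inliers)
        # Only R is kept: for coplanar points the translation estimate
        # shows a large variance, as noted above.
        return R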

Figure 1: Test environment "office": (a) Floor plan created by the laser range finder. Red circles mark the positions of the laser readings. The robot's path is drawn in black, interpolated between the laser readings. (b) 3D piece-wise planar map (created from the camera images shown in (a)) used in the localization experiments.

4 Experiments

For the experiments a mobile robot was equipped with a single camera and a laser range finder (LRF). The camera is equipped with a wide-angle lens with a field-of-view of 90°, which already causes severe radial distortions at the image borders. The radial distortions are removed by calibration and re-sampling. The robot was steered manually through the test environment "office" while the camera was continuously capturing images and the LRF was taking readings. The LRF readings were used to create a floor plan and to compute the path of the robot using the software ScanStudio 1). The floor plan is illustrated in Fig. 1(a); the circles mark the positions of the acquired laser readings, and the robot's path has been interpolated between the distinct positions. 5 short-baseline image pairs were selected as sub-maps for the map building algorithm. The result of the map building algorithm (from section 2) is shown in Fig. 1(b). In the following we describe two experiments. The first experiment computes the path of the robot from the acquired image data with respect to the computed 3D map, using the algorithm of section 3. The second experiment evaluates the accuracy of the localization algorithm and compares it with the standard Lu and Hager pose estimation [6].

4.1 Computing the path of the robot

A part of the robot's path through the test environment is computed using the proposed method for global localization. The corresponding path is marked with the large rectangle in Fig. 1(a). For every image acquired within this section of the path, the 3D pose of the robot is computed. Fig. 2 shows two views of the environment map augmented with the computed path. The computed poses (camera positions) are depicted as cones pointing into the direction of the robot and starting at the current robot location.

Figure 2: Reconstructed path of the robot: the images show the environment map augmented with the robot positions computed for a part of the robot's path. (a) Top view, (b) 3D view.

1) ScanStudio is available from
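The distortion removal mentioned above could be implemented with OpenCV's re-sampling maps; the intrinsic matrix K and the distortion coefficients are assumed to come from an offline calibration (the paper does not specify the calibration procedure).

    import cv2

    def build_undistortion_maps(K, dist_coeffs, image_size):
        # Precompute the re-sampling maps that remove the lens distortion;
        # image_size is (width, height).
        return cv2.initUndistortRectifyMap(K, dist_coeffs, None, K,
                                           image_size, cv2.CV_32FC1)

    def undistort(image, maps):
        # Re-sample the image; the correction is largest near the borders,
        # where the wide-angle distortion is most severe.
        return cv2.remap(image, maps[0], maps[1], cv2.INTER_LINEAR)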

4.2 Evaluation of the localization algorithm

Next we investigate the accuracy of the localization algorithm. As a measure of accuracy, the epipolar distance between 2D image points and epipolar lines is used. Each pose is computed from a single landmark only. The other detected landmarks are then used to assess the quality of the computed pose by computing the epipolar distance between their 2D point correspondences and the epipolar lines. The proposed sub-sampling algorithm (Alg. 1) creates n subsets of size p from the point correspondences of a region, so every region generates n solutions. The best solution is selected, and we show in this experiment that in most cases there exists a subset which produces a better solution than computing the pose from all correspondences. Using all correspondences of a landmark for pose estimation is the usual way the Lu and Hager [6] algorithm (LH method) is applied. The results of both methods are compared by means of the epipolar distance measure. For our algorithm, n was set to 50 and p was set to 10. Fig. 4 and Table 1 summarize the results. We calculated the poses for 3 different sequences (all part of the robot's path through the office). The table shows the achieved average epipolar distance (of the best solutions), the minimal and maximal distance, the standard deviation, and the average area in pixels of the image regions used for the pose estimation. It is evident that our proposed algorithm achieves a smaller epipolar error than the LH method. The result is even more impressive when looking at individual frames: for frame 15 in sequence 2, the epipolar distance achieved by the simple algorithm was large, while our method came down to 0.75 pixel. This is more than a significant improvement; in fact, the solution produced by the simple method will be wrong. The differences between both methods are illustrated in Fig. 3, where the positions of a forward motion sequence are computed, meaning that all the camera positions should be aligned in a row. The result of the sub-sampling method shows only little deviation from a straight line. The result obtained with the LH method, however, shows large deviations; one can hardly tell that this should be the path of a pure forward motion. One column in the table shows the average area in pixels of the image regions from which the poses have been computed. We want to stress that our method achieves pose estimation with an epipolar distance error below 1 pixel from image regions of only approximately 400 pixels.
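The accuracy measure itself is straightforward to compute. A NumPy sketch, assuming the convention x_view = R x_map + t for the pose hypothesis, so that the essential matrix is E = [t]x R and F = K^-T E K^-1 maps pixels of the map image to epipolar lines in the current view:

    import numpy as np

    def mean_epipolar_distance(R, t, K, pts_map, pts_view):
        # Mean point-to-epipolar-line distance of the N x 2 pixel
        # correspondences (pts_map, pts_view) under the pose (R, t).
        tx = np.array([[0., -t[2], t[1]],
                       [t[2], 0., -t[0]],
                       [-t[1], t[0], 0.]])
        Kinv = np.linalg.inv(K)
        F = Kinv.T @ (tx @ R) @ Kinv                # fundamental matrix
        h1 = np.hstack([pts_map, np.ones((len(pts_map), 1))])
        h2 = np.hstack([pts_view, np.ones((len(pts_view), 1))])
        lines = h1 @ F.T                            # epipolar lines in the view
        num = np.abs(np.sum(h2 * lines, axis=1))
        den = np.linalg.norm(lines[:, :2], axis=1)
        return float(np.mean(num / den))

In Algorithm 1, this score is evaluated for every pose hypothesis in Q, and the hypothesis with the smallest mean distance is returned.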

Figure 3: Results for global localization. The images show poses computed for a forward motion sequence using (a) the sub-sampling algorithm and (b) the LH method.

Figure 4: The graph compares the average epipolar distance of our sub-sampling method and of the standard Lu and Hager (LH) method [6] for three different image sequences (S1, S2, S3) and for all sequences together. Our sub-sampling method produces a smaller error than the LH method.

Table 1: Epipolar distances for pose estimation using the LH method and for our sub-sampling method: average, minimum, maximum and standard deviation of the epipolar distance, and the average patch area (all in pixels), for sequence 1 (31 frames), sequence 2 (21 frames), sequence 3 (9 frames) and all sequences together (61 frames).

5 Conclusion

We presented a method for visual global localization from a single landmark correspondence only. A prerequisite for the method is a piece-wise planar environment map. Map building is only briefly addressed; the main part focuses on the localization algorithm. The pose of the robot is computed with iterative pose estimation from additional 3D-2D correspondences detected within a single landmark. The pose is estimated with the algorithm of Lu and Hager [6]. However, the analysis showed that a straightforward estimation from all detected correspondences does not produce optimal and robust results. A sub-sampling scheme is introduced to generate multiple hypotheses and select the best solution. The improvements in pose estimation are shown visually and in quantitative assessments. Furthermore, our experiments show that robust pose estimation is possible from very small landmarks: even landmarks with an area of about 400 pixels allow robust pose estimation. This allows global localization in extreme situations, like large occlusions or minimal scene overlap.

References

[1] Richard O. Duda, Peter E. Hart, and David G. Stork. Pattern Classification. Wiley-Interscience, 2001.

[2] Friedrich Fraundorfer, Konrad Schindler, and Horst Bischof. Piecewise planar scene reconstruction from sparse correspondences. Submitted to Image and Vision Computing.

[3] T. Kadir, A. Zisserman, and M. Brady. An affine invariant salient region detector. In Proc. 8th European Conference on Computer Vision, Prague, Czech Republic, 2004.

[4] T. Lindeberg. Feature detection with automatic scale selection. International Journal of Computer Vision, 30(2):79-116, 1998.

[5] D.G. Lowe. Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision, 60(2):91-110, November 2004.

[6] C.P. Lu, G.D. Hager, and E. Mjolsness. Fast and globally convergent pose estimation from video images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(6), June 2000.

[7] J. Matas, O. Chum, M. Urban, and T. Pajdla. Robust wide baseline stereo from maximally stable extremal regions. In Proc. 13th British Machine Vision Conference, Cardiff, UK, 2002.

[8] K. Mikolajczyk and C. Schmid. An affine invariant interest point detector. In Proc. 7th European Conference on Computer Vision, Copenhagen, Denmark, pages I:128 ff., 2002.

[9] Krystian Mikolajczyk and Cordelia Schmid. Indexing based on scale invariant interest points. In Proc. 8th International Conference on Computer Vision, Vancouver, Canada, 2001.

[10] D. Nister. An efficient solution to the five-point relative pose problem. In Proc. IEEE Conference on Computer Vision and Pattern Recognition, Madison, Wisconsin, 2003.

[11] Gerald Schweighofer and Axel Pinz. Robust pose estimation from a planar target. Technical Report TR-EMT, Graz University of Technology.

[12] Stephen Se, David G. Lowe, and James J. Little. Vision-based global localization and mapping for mobile robots. IEEE Transactions on Robotics, 21(3), 2005.

[13] T. Tuytelaars and L. Van Gool. Matching widely separated views based on affine invariant regions. International Journal of Computer Vision, 59(1):61-85, 2004.
