Estimation of Camera Pose with Respect to Terrestrial LiDAR Data
Wei Guan, Suya You, Guan Pang
Computer Science Department, University of Southern California, Los Angeles, USA
wguan@usc.edu, suya@usc.edu, gpang@usc.edu

Abstract

In this paper, we present an algorithm that estimates the position of a hand-held camera with respect to terrestrial LiDAR data. Our input is a set of 3D range scans with intensities and one or more uncalibrated 2D camera images of the scene. The algorithm, which automatically registers the range scans and 2D images, consists of the following steps. In the first step, we project the terrestrial LiDAR onto 2D images from several preselected viewpoints. Intensity-based features such as SIFT are extracted from these projected images, and the features are projected back onto the LiDAR data to obtain their 3D positions. In the second step, we estimate the initial pose of a given 2D image from the feature correspondences. In the third step, we refine the coarse camera pose obtained in the previous step through iterative matching and an optimization process. We present results from experiments in several different urban settings.

1. Introduction

This paper deals with the problem of automatic pose estimation of a 2D camera image with respect to 3D LiDAR data of an urban scene, an important problem in computer vision. Its applications include urban modeling, robot localization, and augmented reality. One way to solve this problem is to extract features from both types of data and find 2D-to-3D feature correspondences. However, since the structures of the two types of data are so different, features extracted from one type are usually not repeatable in the other (except for very simple features such as lines or corners). Instead of extracting features directly in 3D space, features can be extracted on 2D projections of the data, and a 2D-to-2D-to-3D matching scheme can be used.
As remote sensing technology develops, most recent LiDAR data has an intensity value for each point in the cloud. Some LiDAR data also contains color information. The intensity is obtained by measuring the strength of surface reflectance, and the color is provided by an additional co-located optical sensor that captures visible light. This information is very helpful for matching 3D range scans with 2D camera images: unlike with geometry-only LiDAR data, intensity-based features can be applied in the pose estimation process.

Figure 1. The 3D LiDAR data with color information (sampled by software for fast rendering), and the 2D image of the same scene taken at the ground level.

Figure 1 shows the colored LiDAR data and a camera image taken on the ground. As we can observe, the projected LiDAR looks similar to an image taken by an optical camera. In fact, if the point cloud is dense enough, the projected 2D image can be treated the same
way as a normal camera image. However, there are several differences between a projected image and an image taken by a camera. First, there are many holes in the projected image due to missing data, usually caused by non-reflecting surfaces and occlusions in the scene. Second, if the point cloud intensity is measured by reflectance strength, the reflectance properties of invisible light differ from those of visible light. Even when visible light is used to obtain the LiDAR intensities, the lighting conditions may differ from those of the camera image. In this paper, we propose an algorithm that can handle LiDAR with either type of intensity information.

The intensity information of LiDAR data is useful for camera pose initialization. However, due to intensity differences, occlusions, etc., few correspondences are available, and a small displacement in any of the matched points causes large errors in the computed camera pose. Moreover, most urban scenes contain many repeated patterns, which cause many features to fail in the matching process. With the initial pose, we can estimate the locations of corresponding features and limit the search range so that repeated patterns do not appear within it. We can therefore generate more correspondences and refine the pose. After several matching iterations, we further refine the camera pose by minimizing the differences between the projected image and the camera image. The estimated camera pose is much more stable after these two refinement steps.

The contributions of this paper are summarized as follows. 1. We propose a framework for camera pose estimation with respect to 3D terrestrial LiDAR data that contains intensity values; no prior knowledge about the camera position is required. 2. We design a novel algorithm that refines the camera pose in two steps; both intensity and geometric information are used in the refinement process. 3.
We have tested the proposed framework in different urban settings. The results show that the estimated camera pose is accurate and that the framework can be applied in many applications such as mixed reality.

The remainder of this paper presents the proposed algorithm in more detail. We first discuss related work in Section 2. Section 3 describes the camera pose initialization process. Section 4 then discusses the algorithm that refines the camera pose. We show experimental results in Section 5 and conclude in the last section.

2. Related Work

There has been a considerable amount of research on registering images with LiDAR data. Registration methods range from keypoint-based matching [3, 1] and structure-based matching [20, 13, 14, 21] to mutual-information-based registration [24]. There are also methods specially designed for registering aerial LiDAR with aerial images [7, 22, 5, 23, 16]. When the LiDAR data contains intensity values, keypoint-based matching [3, 1], based on the similarity between the LiDAR intensity image and the camera intensity image, can be applied. Feature points such as SIFT [15] are extracted from both images, and a matching strategy is used to determine the correspondences and thus the camera parameters. The drawback of intensity-based matching is that it usually generates very few correspondences, so the estimated pose is not accurate or stable. Najafi et al. [18] also created an environment map to represent an object's appearance and geometry using SIFT features. Vasile et al. [22] used LiDAR data to generate a pseudo-intensity image with shadows to match against aerial imagery; they used GPS for the initial pose and applied exhaustive search to obtain the translation, scale, and lens distortion. Ding et al. [5] registered oblique aerial images based on 2D and 3D corner features in the 2D images and the 3D LiDAR model, respectively.
The correspondences between extracted corners are generated through a Hough transform and a generalized M-estimator, and the corner correspondences are then used to refine the camera parameters. In general, a robust feature extraction and matching scheme is the key to successful registration for this type of approach. Instead of point-based matching, structural features such as lines and corners have been utilized in many studies. Stamos and Allen [20] used matched rectangles from building facades for alignment. Liu et al. [13, 14, 21] extracted line segments to form rectangular parallelepipeds, composed of vertical or horizontal 3D rectangular parallelepipeds in the LiDAR and 2D rectangles in the images; the matching of parallelepipeds, as well as vanishing points, is used to estimate the camera parameters. Yang et al. [25] used feature matching to align ground images, but they worked with a very detailed 3D model. Wang and Neumann [23] proposed an automatic registration method between aerial images and aerial LiDAR based on matching 3CS (3 Connected Segments) features, in which each linear feature contains three connected segments; they used a two-level RANSAC algorithm to refine putative matches and estimated the camera pose from the correspondences. Given a set of 3D-to-2D point or line correspondences, there are many approaches to solving the pose recovery problem [17, 12, 4, 19, 11, 8]. The same problems also appear in pose recovery with respect to point clouds generated from image sequences [9, 10]. In both cases, a probabilistic RANSAC method [6] has also been introduced for automatically computing matching 3D and 2D points and removing outliers.
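To make the pose-recovery building block cited above concrete, here is a minimal numpy sketch of camera resection by the Direct Linear Transform (DLT), which recovers a 3x4 projection matrix from noise-free 3D-to-2D point correspondences. This illustrates the generic technique from the literature ([17, 12, 19, 11]), not the paper's own algorithm; function names and the test geometry are ours.

```python
import numpy as np

def dlt_projection(X, x):
    """Direct Linear Transform: recover a 3x4 projection matrix P from
    n >= 6 correspondences between 3D points X (n,3) and 2D points x (n,2),
    such that x ~ P [X; 1] up to scale."""
    A = []
    for Xw, u in zip(X, x):
        Xh = np.append(Xw, 1.0)
        # Two independent rows of the standard DLT system per correspondence.
        A.append(np.concatenate([np.zeros(4), -Xh, u[1] * Xh]))
        A.append(np.concatenate([Xh, np.zeros(4), -u[0] * Xh]))
    # The solution is the right singular vector for the smallest singular value.
    _, _, Vt = np.linalg.svd(np.asarray(A))
    return Vt[-1].reshape(3, 4)

def project(P, X):
    """Project 3D points X (n,3) with projection matrix P, returning (n,2)."""
    Xh = np.hstack([X, np.ones((len(X), 1))])
    xh = Xh @ P.T
    return xh[:, :2] / xh[:, 2:]
```

In a RANSAC loop of the kind discussed above, a minimal solver like this would be run on random correspondence subsets, and the hypothesis with the best inlier support kept.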
Figure 2. The virtual cameras are placed around the LiDAR scene. They are placed uniformly in viewing direction and logarithmically in distance.

In this paper, we apply a keypoint-based method to estimate the initial camera pose, then use iterative methods with RANSAC, utilizing both intensity and geometric information, to obtain the refined pose.

3. Camera Pose Initialization

3.1. Synthetic Views of 3D LiDAR

To compute the pose for an image taken at an arbitrary viewpoint, we first create synthetic views that cover a large range of viewing directions. Z-buffers are used to handle occlusions. Our application is to recover the poses of images taken in urban environments, so we can restrict the placement of virtual cameras to eye-level height to simplify the problem; in general, the approach is not limited to such images. We place the cameras around the LiDAR, spanning about 180 degrees. The cameras are placed uniformly in viewing angle and logarithmically in distance, as shown in Figure 2. The density of locations depends on the type of feature used for matching. If the feature can handle rotation, scale, and wide baselines, fewer virtual cameras are needed to cover most cases. In contrast, if the feature is neither rotation- nor scale-invariant, we need to select as many viewpoints as possible and rotate the camera at each viewpoint. Furthermore, the viewpoints cannot be too close to the point cloud; otherwise the quality of the projected image is not good enough to generate initial feature correspondences. In our work, we use SIFT [15] features, which are scale- and rotation-invariant and robust to moderate viewing angle changes. We select 6 viewing angles uniformly and 3 distances for each viewing angle, spaced logarithmically. The synthetic views are shown in Figure 3.

3.2. Generation of 3D Feature Cloud

We extract 2D SIFT features from each synthetic view.
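The virtual-camera placement described in Section 3.1 (uniform viewing angles over roughly 180 degrees, logarithmic distances) can be sketched as follows. This is a minimal illustration under our own assumptions: the function name, the distance range, and keeping the cameras at the scene-center height are ours, not from the paper.

```python
import numpy as np

def virtual_viewpoints(center, r_min=10.0, r_max=40.0,
                       n_angles=6, n_dists=3, span=np.pi):
    """Place virtual cameras around a scene center: n_angles viewing
    directions spaced uniformly over `span` radians (~180 degrees), and
    n_dists distances spaced logarithmically between r_min and r_max.
    Returns an (n_angles * n_dists, 3) array of camera positions."""
    angles = np.linspace(-span / 2, span / 2, n_angles)
    dists = np.geomspace(r_min, r_max, n_dists)  # logarithmic spacing
    cams = []
    for a in angles:
        for r in dists:
            # Keep cameras at the center's height (eye-level simplification).
            cams.append(center + r * np.array([np.cos(a), np.sin(a), 0.0]))
    return np.asarray(cams)

views = virtual_viewpoints(np.zeros(3))
print(views.shape)  # (18, 3)
```

With the default 6 angles and 3 distances this yields the 18 synthetic viewpoints used in the paper's configuration; each viewpoint would then be rendered with a Z-buffer to produce a synthetic view.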
Once the features are extracted, we project them back onto the point cloud by finding the intersection with the first plane obtained through plane segmentation with the method of [20].

Figure 3. The synthetic views of the LiDAR data. 2D features are extracted from each synthetic view.

It is possible that the same feature is reprojected onto different points through different synthetic views. To handle this, we post-process the feature points so that close points with similar descriptors are merged into one feature. Note that we could also obtain the 3D features by triangulation; however, such a method depends on matching pairs, so it generates far fewer features for potential matching. The obtained positions of the 3D keypoints are not exact, due to projection and reprojection errors, but they are good enough to provide an initial pose; we optimize their positions and the camera pose at a later stage. The generated 3D feature cloud is shown in Figure 4. Each point is associated with one descriptor.

For a given camera image, we extract SIFT features and match them against the feature cloud. Direct 3D-to-2D matching and RANSAC are used to estimate the pose and remove outliers. When applying RANSAC, rather than simply maximizing the number of inliers consistent with the hypothesized pose, we proceed as follows. We cluster the inliers according to their normal directions: inliers with close normal directions are grouped into the same cluster. Let N1 and N2 be the numbers of inliers in the largest two clusters. Among all hypothesized poses, we maximize the value of N2, i.e.
Figure 4. SIFT features in 3D space. The 3D positions are obtained by reprojecting 2D features onto the 3D LiDAR data.

$$[R\,|\,T] = \arg\max_{[R\,|\,T]} N_2. \qquad (1)$$

This ensures that not all inliers lie within the same plane, in which case the computed pose would be unstable and sensitive to position errors.

4. Camera Pose Refinement

4.1. Refinement by More Point Correspondences

With the estimated initial pose, we can generate more feature correspondences by limiting the similarity search space. For the first iteration, we still use SIFT features. From the second iteration on, we can use less distinctive features to generate more correspondences. In our work, we use Harris corners as the keypoints. For each corner point, a normalized intensity histogram within an 8x8 patch is computed as the descriptor. Its corresponding point will most likely lie within a neighborhood of H by H pixels. Initially, H is set to 64 pixels. At each iteration, the size is halved, since a more accurate pose has been obtained; we keep the minimum search size at 16 pixels. Figure 5 shows a few iterations and the matching results within the reduced search space.

Figure 5. (a) The initial camera pose. (b) 3D-to-2D matching on the initial pose. (c) Camera pose after the 1st iteration. (d) 3D-to-2D matching based on the refined pose. (e) Camera pose after the 2nd iteration.

4.2. Geometric Structures and Alignment

The purpose of geometric structure extraction is not to form features for generating correspondences. Instead, the structures are used to align 3D structures with 2D structures in the camera image. In our work, line segments are used to align the 3D range scans with the 2D images, so we need to define a distance between these line segments. There are two types of lines in the 3D LiDAR data. The first type is generated from the geometric structure, computed at the intersections between segmented planar regions and at the borders of the segmented planar regions [20]. The other type of lines is formed by intensities.
These lines can be detected in the projected synthetic image with the method of [2] and reprojected onto the 3D LiDAR to get their 3D coordinates. For each hypothesized pose, these
3D lines are projected onto the 2D image, and we measure the alignment error as follows:

$$E_{line} = \frac{1}{N} \sum_{i=1}^{M} \sum_{j=1}^{N} K(l_i, L_j)\, \max\big(D(l_{i1}, L_j),\, D(l_{i2}, L_j)\big), \qquad (2)$$

where $l_i$ is the $i$th 2D line segment with $l_{i1}$ and $l_{i2}$ as its two endpoints, and $L_j$ is the $j$th 3D line segment. $M$ and $N$ are the numbers of 2D and 3D segments, respectively. $K(l_i, L_j)$ is a binary function deciding whether the two line segments have similar slopes, and $D(l_{i1}, L_j)$ describes the distance from the endpoint $l_{i1}$ to the projected line segment $L_j$. The functions $K$ and $D$ are defined as follows:

$$K(l, L) = \begin{cases} 1 & \text{for } \angle(l, L) < K_{th} \\ 0 & \text{for } \angle(l, L) \ge K_{th} \end{cases} \qquad (3)$$

$$D(l_{1,2}, L) = \begin{cases} d(l_{1,2}, L) & \text{for } d(l_{1,2}, L) < D_{th} \\ 0 & \text{for } d(l_{1,2}, L) \ge D_{th} \end{cases} \qquad (4)$$

where $\angle(l, L)$ represents the angle difference between the two line segments, and $d(l_{1,2}, L)$ is the distance from endpoint $l_1$ or $l_2$ to the projected line segment $L$. $K_{th}$ and $D_{th}$ are two thresholds deciding whether the two segments are potential matches. In our experiments, we set $K_{th} = \pi/6$ and $D_{th} = W/20$, where $W$ is the image width.

4.3. Refinement by Minimizing an Error Function

Figure 6. (a) The refined pose from iterative matchings. (b) Camera pose after minimizing the error function.

Figure 7. Intensity differences between the projected image and the camera image: (a) errors after iterative refinements, (b) errors after optimization.

Once we have obtained the camera pose through iterative refinement, we can further refine it by minimizing the differences between the LiDAR-projected image and the camera image. The differences are represented by an error function composed of two parts: line differences and intensity differences. Line differences were discussed above. The intensity error function is defined as follows:

$$E_{intensity} = \frac{1}{|\{i\}|} \sum_i \big(s\, I_{3D}(i) - I_{2D}(i)\big)^2, \qquad (5)$$

where $I_{3D}(i)$ and $I_{2D}(i)$ are the intensity values of the $i$th pixel in the projected image and the camera image, respectively, $|\{i\}|$ is the number of projected pixels, and $s$ is a scale factor that compensates for reflectance or lighting differences.
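The line alignment error of Eq. (2), with the slope gate of Eq. (3) and the truncated endpoint distance of Eq. (4), can be sketched as follows. This is a minimal illustration under our own assumptions: segments are represented as (x1, y1, x2, y2) pixel tuples, and the point-to-segment distance is a clamped orthogonal projection, neither of which is specified in the paper.

```python
import numpy as np

def line_alignment_error(lines2d, lines3d_proj, K_th, D_th):
    """E_line: for each 2D segment l_i and projected 3D segment L_j with
    similar slope (angle difference < K_th), accumulate the larger of the
    two endpoint-to-segment distances, truncated to 0 beyond D_th, then
    normalize by the number of 3D segments N."""
    def angle(seg):
        return np.arctan2(seg[3] - seg[1], seg[2] - seg[0])

    def point_seg_dist(p, seg):
        a, b = np.array(seg[:2]), np.array(seg[2:])
        ab = b - a
        t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
        return np.linalg.norm(p - (a + t * ab))

    N = len(lines3d_proj)
    total = 0.0
    for li in lines2d:
        for Lj in lines3d_proj:
            da = abs(angle(li) - angle(Lj)) % np.pi
            da = min(da, np.pi - da)        # undirected slope difference
            if da >= K_th:                  # K(l, L) = 0: not a potential match
                continue
            d = max(point_seg_dist(np.array(li[:2]), Lj),
                    point_seg_dist(np.array(li[2:]), Lj))
            if d < D_th:                    # D truncates to 0 beyond D_th
                total += d
    return total / N
```

In the refinement loop, this error would be evaluated for each hypothesized pose after projecting the 3D line segments into the image with that pose.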
$s$ takes the value that minimizes the intensity error, $s = \sum_i I_{3D}(i)\, I_{2D}(i) \,/\, \sum_i I_{3D}(i)^2$, so the above error function is equivalent to

$$E_{intensity} = \frac{1}{|\{i\}|} \left( \sum_i I_{2D}(i)^2 - \frac{\big( \sum_i I_{3D}(i)\, I_{2D}(i) \big)^2}{\sum_i I_{3D}(i)^2} \right). \qquad (6)$$
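The intensity term of Eq. (5), with the least-squares scale factor substituted in, can be sketched as follows; the function name and the (scale, error) return convention are ours.

```python
import numpy as np

def intensity_error(I3d, I2d):
    """E_intensity of Eq. (5) evaluated at the optimal scale factor.
    I3d, I2d: intensity arrays over the projected pixels (holes excluded).
    Returns (s, error), where s = sum(I3d * I2d) / sum(I3d^2)."""
    I3d = np.asarray(I3d, dtype=float).ravel()
    I2d = np.asarray(I2d, dtype=float).ravel()
    s = np.dot(I3d, I2d) / np.dot(I3d, I3d)   # least-squares scale
    err = np.mean((s * I3d - I2d) ** 2)       # Eq. (5) at the optimum
    # This equals the closed form of Eq. (6):
    # mean(I2d**2) - dot(I3d, I2d)**2 / (len(I3d) * dot(I3d, I3d))
    return s, err
```

When the projected intensities are an exact scalar multiple of the camera intensities, the error is zero; otherwise the residual measures the intensity mismatch after the best global scale compensation.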
The overall error function is a weighted combination of the two error functions:

$$E_{pose} = \alpha\, E_{line} + (1 - \alpha)\, E_{intensity}, \qquad (7)$$

where the pose is determined by the rotation $R$ and translation $T$, or equivalently the 3D positions of the keypoints $P$. We set $\alpha = 0.5$ in our experiments; since intensity errors usually have larger magnitudes, this gives intensity a larger effect on the overall error function. The pose is refined by minimizing the above error function:

$$(R, T, P) = \arg\min_{R, T, P} E_{pose}. \qquad (8)$$

The refinement results are shown in Figures 6 and 7.

5. Experimental Results

We have tested more than 10 sets of scenes with camera images taken from different viewpoints. Figure 8 shows an example of pose recovery through iterations and optimization. After a series of refinements through iterative matching and optimization, we obtain an accurate view for a given camera image. Figure 9 shows an image of the same scene taken from another view. By blending the virtual and real images together, it can easily be observed that they are well aligned. We have also measured the projection errors for each refinement step. The results are shown in Figure 10 and Figure 11. As shown in Figure 10, the errors stay constant after the 3rd refinement. This is because we have usually obtained sufficient correspondences by the 3rd iteration to get a stable camera pose. The remaining errors are caused by errors in the computed 3D positions of the keypoints. This can be further improved by adjusting the pose to obtain even smaller projection errors, as shown in Figure 11. However, due to moving pedestrians, occlusions, lighting conditions, etc., there are always residual errors between a projected image and a camera image.

6. Conclusion

We have proposed a framework for camera pose estimation with respect to 3D terrestrial LiDAR data containing intensity information. We first project the LiDAR onto several pre-selected viewpoints and compute SIFT features.
These features are reprojected onto the LiDAR data to obtain their positions in 3D space, and these 3D features are used to compute the initial camera pose. In the next stage, we iteratively refine the camera pose by generating more correspondences. After that, we further refine the pose by minimizing the proposed objective function. The function is composed of two components: errors from intensity differences and errors from geometric structure displacements between the projected LiDAR image and the camera image. We have tested the proposed framework in different urban settings. The results show that the estimated camera pose is stable and that the framework can be applied in many applications such as augmented reality.

Figure 10. The errors after each iterative refinement.

Figure 11. The errors before and after optimization.

References

[1] D. G. Aguilera, P. R. Gonzalvez, and J. G. Lahoz. An automatic procedure for co-registration of terrestrial laser scanners and digital cameras. ISPRS Journal of Photogrammetry and Remote Sensing, 64(3).
[2] N. Ansari and E. J. Delp. On detecting dominant points. Pattern Recognition, 24(5).
[3] S. Becker and N. Haala. Combined feature extraction for facade reconstruction. In ISPRS Workshop on Laser Scanning.
Figure 8. (a) The camera image. (b) The initial view calculated from a limited number of matches. (c) The refined view after generating more correspondences. (d) There is no further refinement (from more matches) after 2 or 3 iterations. (e) The pose refined by minimizing the proposed error function. (f) The virtual building is well aligned with the real image for the calculated view.

[4] S. Christy and R. Horaud. Iterative pose computation from line correspondences. 73(1).
[5] M. Ding, K. Lyngbaek, and A. Zakhor. Automatic registration of aerial imagery with untextured 3d lidar models. In Computer Vision and Pattern Recognition (CVPR).
[6] M. A. Fischler and R. C. Bolles. Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. 24(6).
[7] C. Frueh, R. Sammon, and A. Zakhor. Automated texture mapping of 3d city models with oblique aerial imagery. In Symposium on 3D Data Processing, Visualization and Transmission.
[8] W. Guan, L. Wang, M. Jonathan, S. You, and U. Neumann. Robust pose estimation in untextured environments for augmented reality applications. In ISMAR.
[9] W. Guan, S. You, and U. Neumann. Recognition-driven 3d navigation in large-scale virtual environments. In IEEE Virtual Reality.
[10] W. Guan, S. You, and U. Neumann. Efficient matchings and mobile augmented reality. In ACM TOMCCAP.
[11] R. I. Hartley and A. Zisserman. Multiple View Geometry in Computer Vision.
[12] R. Horaud, F. Dornaika, B. Lamiroy, and S. Christy. Object pose: The link between weak perspective, paraperspective and full perspective. 22(2), 1997.
Figure 9. The estimated camera pose with respect to the same scene as in Figure 8 but from a different viewpoint. The right figure shows the mixed reality of both the virtual world and the real world.

[13] L. Liu and I. Stamos. Automatic 3d to 2d registration for the photorealistic rendering of urban scenes. In Computer Vision and Pattern Recognition.
[14] L. Liu and I. Stamos. A systematic approach for 2d-image to 3d-range registration in urban environments. In International Conference on Computer Vision, pages 1-8.
[15] D. Lowe. Object recognition from local scale invariant features. In International Conference on Computer Vision.
[16] A. Mastin, J. Kepner, and J. Fisher. Automatic registration of lidar and optical images of urban scenes. In Computer Vision and Pattern Recognition (CVPR).
[17] D. Oberkampf, D. DeMenthon, and L. Davis. Iterative pose estimation using coplanar feature points. In CVGIP.
[18] F. Najafi et al. Fusion of 3D and appearance models for fast object detection and pose estimation. In ACCV.
[19] L. Quan and Z. Lan. Linear n-point camera pose determination. In PAMI.
[20] I. Stamos and P. K. Allen. Geometry and texture recovery of scenes of large scale. 88(2):94-118.
[21] I. Stamos, L. Liu, C. Chen, G. Wolberg, G. Yu, and S. Zokai. Integrating automated range registration with multiview geometry for the photorealistic modeling of large-scale scenes.
[22] A. Vasile, F. R. Waugh, D. Greisokh, and R. M. Heinrichs. Automatic alignment of color imagery onto 3d laser radar data. In Applied Imagery and Pattern Recognition Workshop.
[23] L. Wang and U. Neumann. A robust approach for automatic registration of aerial images with untextured aerial lidar data. In Computer Vision and Pattern Recognition (CVPR).
[24] R. Wang, F. Ferrie, and J. Macfarlane. Automatic registration of mobile lidar and spherical panoramas.
In Computer Vision and Pattern Recognition Workshops (CVPRW), pages 33-40.
[25] G. Yang, J. Becker, and C. Stewart. Estimating the location of a camera with respect to a 3d model. In 3D Digital Imaging and Modeling, 2007.
Augmented reality Overview Augmented reality and applications Marker-based augmented reality Binary markers Textured planar markers Camera model Homography Direct Linear Transformation What is augmented
More informationCell Decomposition for Building Model Generation at Different Scales
Cell Decomposition for Building Model Generation at Different Scales Norbert Haala, Susanne Becker, Martin Kada Institute for Photogrammetry Universität Stuttgart Germany forename.lastname@ifp.uni-stuttgart.de
More informationStructured light 3D reconstruction
Structured light 3D reconstruction Reconstruction pipeline and industrial applications rodola@dsi.unive.it 11/05/2010 3D Reconstruction 3D reconstruction is the process of capturing the shape and appearance
More informationDETECTION AND ROBUST ESTIMATION OF CYLINDER FEATURES IN POINT CLOUDS INTRODUCTION
DETECTION AND ROBUST ESTIMATION OF CYLINDER FEATURES IN POINT CLOUDS Yun-Ting Su James Bethel Geomatics Engineering School of Civil Engineering Purdue University 550 Stadium Mall Drive, West Lafayette,
More informationImage correspondences and structure from motion
Image correspondences and structure from motion http://graphics.cs.cmu.edu/courses/15-463 15-463, 15-663, 15-862 Computational Photography Fall 2017, Lecture 20 Course announcements Homework 5 posted.
More informationREGISTRATION OF AIRBORNE LASER DATA TO SURFACES GENERATED BY PHOTOGRAMMETRIC MEANS. Y. Postolov, A. Krupnik, K. McIntosh
REGISTRATION OF AIRBORNE LASER DATA TO SURFACES GENERATED BY PHOTOGRAMMETRIC MEANS Y. Postolov, A. Krupnik, K. McIntosh Department of Civil Engineering, Technion Israel Institute of Technology, Haifa,
More informationBUILDING MODEL RECONSTRUCTION FROM DATA INTEGRATION INTRODUCTION
BUILDING MODEL RECONSTRUCTION FROM DATA INTEGRATION Ruijin Ma Department Of Civil Engineering Technology SUNY-Alfred Alfred, NY 14802 mar@alfredstate.edu ABSTRACT Building model reconstruction has been
More informationAugmenting Reality, Naturally:
Augmenting Reality, Naturally: Scene Modelling, Recognition and Tracking with Invariant Image Features by Iryna Gordon in collaboration with David G. Lowe Laboratory for Computational Intelligence Department
More informationDense 3D Reconstruction. Christiano Gava
Dense 3D Reconstruction Christiano Gava christiano.gava@dfki.de Outline Previous lecture: structure and motion II Structure and motion loop Triangulation Today: dense 3D reconstruction The matching problem
More informationDETERMINATION OF CORRESPONDING TRUNKS IN A PAIR OF TERRESTRIAL IMAGES AND AIRBORNE LASER SCANNER DATA
The Photogrammetric Journal of Finland, 20 (1), 2006 Received 31.7.2006, Accepted 13.11.2006 DETERMINATION OF CORRESPONDING TRUNKS IN A PAIR OF TERRESTRIAL IMAGES AND AIRBORNE LASER SCANNER DATA Olli Jokinen,
More informationAccurate Motion Estimation and High-Precision 3D Reconstruction by Sensor Fusion
007 IEEE International Conference on Robotics and Automation Roma, Italy, 0-4 April 007 FrE5. Accurate Motion Estimation and High-Precision D Reconstruction by Sensor Fusion Yunsu Bok, Youngbae Hwang,
More information3D Perception. CS 4495 Computer Vision K. Hawkins. CS 4495 Computer Vision. 3D Perception. Kelsey Hawkins Robotics
CS 4495 Computer Vision Kelsey Hawkins Robotics Motivation What do animals, people, and robots want to do with vision? Detect and recognize objects/landmarks Find location of objects with respect to themselves
More informationSpecular 3D Object Tracking by View Generative Learning
Specular 3D Object Tracking by View Generative Learning Yukiko Shinozuka, Francois de Sorbier and Hideo Saito Keio University 3-14-1 Hiyoshi, Kohoku-ku 223-8522 Yokohama, Japan shinozuka@hvrl.ics.keio.ac.jp
More informationObservations. Basic iteration Line estimated from 2 inliers
Line estimated from 2 inliers 3 Observations We need (in this case!) a minimum of 2 points to determine a line Given such a line l, we can determine how well any other point y fits the line l For example:
More informationSIMPLE ROOM SHAPE MODELING WITH SPARSE 3D POINT INFORMATION USING PHOTOGRAMMETRY AND APPLICATION SOFTWARE
SIMPLE ROOM SHAPE MODELING WITH SPARSE 3D POINT INFORMATION USING PHOTOGRAMMETRY AND APPLICATION SOFTWARE S. Hirose R&D Center, TOPCON CORPORATION, 75-1, Hasunuma-cho, Itabashi-ku, Tokyo, Japan Commission
More informationTitle: Vanishing Hull: A Geometric Concept for Vanishing Points Detection and Analysis
Pattern Recognition Manuscript Draft Manuscript Number: Title: Vanishing Hull: A Geometric Concept for Vanishing Points Detection and Analysis Article Type: Full Length Article Section/Category: Keywords:
More informationApplication questions. Theoretical questions
The oral exam will last 30 minutes and will consist of one application question followed by two theoretical questions. Please find below a non exhaustive list of possible application questions. The list
More informationROBUST LINE-BASED CALIBRATION OF LENS DISTORTION FROM A SINGLE VIEW
ROBUST LINE-BASED CALIBRATION OF LENS DISTORTION FROM A SINGLE VIEW Thorsten Thormählen, Hellward Broszio, Ingolf Wassermann thormae@tnt.uni-hannover.de University of Hannover, Information Technology Laboratory,
More informationImage processing and features
Image processing and features Gabriele Bleser gabriele.bleser@dfki.de Thanks to Harald Wuest, Folker Wientapper and Marc Pollefeys Introduction Previous lectures: geometry Pose estimation Epipolar geometry
More informationA NEW FEATURE BASED IMAGE REGISTRATION ALGORITHM INTRODUCTION
A NEW FEATURE BASED IMAGE REGISTRATION ALGORITHM Karthik Krish Stuart Heinrich Wesley E. Snyder Halil Cakir Siamak Khorram North Carolina State University Raleigh, 27695 kkrish@ncsu.edu sbheinri@ncsu.edu
More informationInstance-level recognition
Instance-level recognition 1) Local invariant features 2) Matching and recognition with local features 3) Efficient visual search 4) Very large scale indexing Matching of descriptors Matching and 3D reconstruction
More informationPhoto Tourism: Exploring Photo Collections in 3D
Photo Tourism: Exploring Photo Collections in 3D SIGGRAPH 2006 Noah Snavely Steven M. Seitz University of Washington Richard Szeliski Microsoft Research 2006 2006 Noah Snavely Noah Snavely Reproduced with
More informationRANSAC and some HOUGH transform
RANSAC and some HOUGH transform Thank you for the slides. They come mostly from the following source Dan Huttenlocher Cornell U Matching and Fitting Recognition and matching are closely related to fitting
More informationAugmented Reality VU. Computer Vision 3D Registration (2) Prof. Vincent Lepetit
Augmented Reality VU Computer Vision 3D Registration (2) Prof. Vincent Lepetit Feature Point-Based 3D Tracking Feature Points for 3D Tracking Much less ambiguous than edges; Point-to-point reprojection
More informationSIFT: SCALE INVARIANT FEATURE TRANSFORM SURF: SPEEDED UP ROBUST FEATURES BASHAR ALSADIK EOS DEPT. TOPMAP M13 3D GEOINFORMATION FROM IMAGES 2014
SIFT: SCALE INVARIANT FEATURE TRANSFORM SURF: SPEEDED UP ROBUST FEATURES BASHAR ALSADIK EOS DEPT. TOPMAP M13 3D GEOINFORMATION FROM IMAGES 2014 SIFT SIFT: Scale Invariant Feature Transform; transform image
More informationVideo Georegistration: Key Challenges. Steve Blask Harris Corporation GCSD Melbourne, FL 32934
Video Georegistration: Key Challenges Steve Blask sblask@harris.com Harris Corporation GCSD Melbourne, FL 32934 Definitions Registration: image to image alignment Find pixel-to-pixel correspondences between
More informationAUTOMATIC GENERATION OF DIGITAL BUILDING MODELS FOR COMPLEX STRUCTURES FROM LIDAR DATA
AUTOMATIC GENERATION OF DIGITAL BUILDING MODELS FOR COMPLEX STRUCTURES FROM LIDAR DATA Changjae Kim a, Ayman Habib a, *, Yu-Chuan Chang a a Geomatics Engineering, University of Calgary, Canada - habib@geomatics.ucalgary.ca,
More informationInstance-level recognition
Instance-level recognition 1) Local invariant features 2) Matching and recognition with local features 3) Efficient visual search 4) Very large scale indexing Matching of descriptors Matching and 3D reconstruction
More informationOutline. Introduction System Overview Camera Calibration Marker Tracking Pose Estimation of Markers Conclusion. Media IC & System Lab Po-Chen Wu 2
Outline Introduction System Overview Camera Calibration Marker Tracking Pose Estimation of Markers Conclusion Media IC & System Lab Po-Chen Wu 2 Outline Introduction System Overview Camera Calibration
More informationAutomated, 3D, Airborne Modeling of Large Scale Urban Environments by Min Ding. Research Project
Automated, 3D, Airborne Modeling of Large Scale Urban Environments by Min Ding Research Project Submitted to the Department of Electrical Engineering and Computer Sciences, University of California at
More informationDense 3D Reconstruction. Christiano Gava
Dense 3D Reconstruction Christiano Gava christiano.gava@dfki.de Outline Previous lecture: structure and motion II Structure and motion loop Triangulation Wide baseline matching (SIFT) Today: dense 3D reconstruction
More informationMap-Enhanced UAV Image Sequence Registration
Map-Enhanced UAV mage Sequence Registration Yuping Lin Qian Yu Gerard Medioni Computer Science Department University of Southern California Los Angeles, CA 90089-0781 {yupingli, qianyu, medioni}@usc.edu
More informationSimultaneous Pose and Correspondence Determination using Line Features
Simultaneous Pose and Correspondence Determination using Line Features Philip David, Daniel DeMenthon, Ramani Duraiswami, and Hanan Samet Department of Computer Science, University of Maryland, College
More informationFeature Based Registration - Image Alignment
Feature Based Registration - Image Alignment Image Registration Image registration is the process of estimating an optimal transformation between two or more images. Many slides from Alexei Efros http://graphics.cs.cmu.edu/courses/15-463/2007_fall/463.html
More informationSingle View Pose Estimation of Mobile Devices in Urban Environments
Single View Pose Estimation of Mobile Devices in Urban Environments Aaron Hallquist University of California, Berkeley aaronh@eecs.berkeley.edu Avideh Zakhor University of California, Berkeley avz@eecs.berkeley.edu
More information1-2 Feature-Based Image Mosaicing
MVA'98 IAPR Workshop on Machine Vision Applications, Nov. 17-19, 1998, Makuhari, Chibq Japan 1-2 Feature-Based Image Mosaicing Naoki Chiba, Hiroshi Kano, Minoru Higashihara, Masashi Yasuda, and Masato
More informationA REAL-TIME TRACKING SYSTEM COMBINING TEMPLATE-BASED AND FEATURE-BASED APPROACHES
A REAL-TIME TRACKING SYSTEM COMBINING TEMPLATE-BASED AND FEATURE-BASED APPROACHES Alexander Ladikos, Selim Benhimane, Nassir Navab Department of Computer Science, Technical University of Munich, Boltzmannstr.
More information3D Environment Reconstruction
3D Environment Reconstruction Using Modified Color ICP Algorithm by Fusion of a Camera and a 3D Laser Range Finder The 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems October 11-15,
More informationCS 4495 Computer Vision A. Bobick. Motion and Optic Flow. Stereo Matching
Stereo Matching Fundamental matrix Let p be a point in left image, p in right image l l Epipolar relation p maps to epipolar line l p maps to epipolar line l p p Epipolar mapping described by a 3x3 matrix
More informationIGTF 2016 Fort Worth, TX, April 11-15, 2016 Submission 149
IGTF 26 Fort Worth, TX, April -5, 26 2 3 4 5 6 7 8 9 2 3 4 5 6 7 8 9 2 2 Light weighted and Portable LiDAR, VLP-6 Registration Yushin Ahn (yahn@mtu.edu), Kyung In Huh (khuh@cpp.edu), Sudhagar Nagarajan
More information3D Reconstruction from Scene Knowledge
Multiple-View Reconstruction from Scene Knowledge 3D Reconstruction from Scene Knowledge SYMMETRY & MULTIPLE-VIEW GEOMETRY Fundamental types of symmetry Equivalent views Symmetry based reconstruction MUTIPLE-VIEW
More informationLocal features and image matching. Prof. Xin Yang HUST
Local features and image matching Prof. Xin Yang HUST Last time RANSAC for robust geometric transformation estimation Translation, Affine, Homography Image warping Given a 2D transformation T and a source
More informationMiniature faking. In close-up photo, the depth of field is limited.
Miniature faking In close-up photo, the depth of field is limited. http://en.wikipedia.org/wiki/file:jodhpur_tilt_shift.jpg Miniature faking Miniature faking http://en.wikipedia.org/wiki/file:oregon_state_beavers_tilt-shift_miniature_greg_keene.jpg
More informationCSE 252B: Computer Vision II
CSE 252B: Computer Vision II Lecturer: Serge Belongie Scribes: Jeremy Pollock and Neil Alldrin LECTURE 14 Robust Feature Matching 14.1. Introduction Last lecture we learned how to find interest points
More informationRefined Non-Rigid Registration of a Panoramic Image Sequence to a LiDAR Point Cloud
Refined Non-Rigid Registration of a Panoramic Image Sequence to a LiDAR Point Cloud Arjen Swart 1,2, Jonathan Broere 1, Remco Veltkamp 2, and Robby Tan 2 1 Cyclomedia Technology BV, Waardenburg, the Netherlands
More informationObject Reconstruction
B. Scholz Object Reconstruction 1 / 39 MIN-Fakultät Fachbereich Informatik Object Reconstruction Benjamin Scholz Universität Hamburg Fakultät für Mathematik, Informatik und Naturwissenschaften Fachbereich
More informationCS 231A Computer Vision (Winter 2014) Problem Set 3
CS 231A Computer Vision (Winter 2014) Problem Set 3 Due: Feb. 18 th, 2015 (11:59pm) 1 Single Object Recognition Via SIFT (45 points) In his 2004 SIFT paper, David Lowe demonstrates impressive object recognition
More informationCS 231A Computer Vision (Winter 2018) Problem Set 3
CS 231A Computer Vision (Winter 2018) Problem Set 3 Due: Feb 28, 2018 (11:59pm) 1 Space Carving (25 points) Dense 3D reconstruction is a difficult problem, as tackling it from the Structure from Motion
More information3D object recognition used by team robotto
3D object recognition used by team robotto Workshop Juliane Hoebel February 1, 2016 Faculty of Computer Science, Otto-von-Guericke University Magdeburg Content 1. Introduction 2. Depth sensor 3. 3D object
More informationMapping textures on 3D geometric model using reflectance image
Mapping textures on 3D geometric model using reflectance image Ryo Kurazume M. D. Wheeler Katsushi Ikeuchi The University of Tokyo Cyra Technologies, Inc. The University of Tokyo fkurazume,kig@cvl.iis.u-tokyo.ac.jp
More informationIntegrating LiDAR, Aerial Image and Ground Images for Complete Urban Building Modeling
Integrating LiDAR, Aerial Image and Ground Images for Complete Urban Building Modeling Jinhui Hu, Suya You, Ulrich Neumann University of Southern California {jinhuihu,suyay, uneumann}@graphics.usc.edu
More informationAn Image Based 3D Reconstruction System for Large Indoor Scenes
36 5 Vol. 36, No. 5 2010 5 ACTA AUTOMATICA SINICA May, 2010 1 1 2 1,,,..,,,,. : 1), ; 2), ; 3),.,,. DOI,,, 10.3724/SP.J.1004.2010.00625 An Image Based 3D Reconstruction System for Large Indoor Scenes ZHANG
More informationDepth. Common Classification Tasks. Example: AlexNet. Another Example: Inception. Another Example: Inception. Depth
Common Classification Tasks Recognition of individual objects/faces Analyze object-specific features (e.g., key points) Train with images from different viewing angles Recognition of object classes Analyze
More informationEVOLUTION OF POINT CLOUD
Figure 1: Left and right images of a stereo pair and the disparity map (right) showing the differences of each pixel in the right and left image. (source: https://stackoverflow.com/questions/17607312/difference-between-disparity-map-and-disparity-image-in-stereo-matching)
More informationMULTI-IMAGE FUSION FOR OCCLUSION-FREE FAÇADE TEXTURING
MULTI-IMAGE FUSION FOR OCCLUSION-FREE FAÇADE TEXTURING KEY WORDS: Texture, Fusion, Rectification, Terrestrial Imagery Jan Böhm Institut für Photogrammetrie, Universität Stuttgart, Germany Jan.Boehm@ifp.uni-stuttgart.de
More informationProf. Jose L. Flores, MS, PS Dept. of Civil Engineering & Surveying
Prof. Jose L. Flores, MS, PS Dept. of Civil Engineering & Surveying Problem One of the challenges for any Geographic Information System (GIS) application is to keep the spatial data up to date and accurate.
More informationAn Image-Based System for Urban Navigation
An Image-Based System for Urban Navigation Duncan Robertson and Roberto Cipolla Cambridge University Engineering Department Trumpington Street, Cambridge, CB2 1PZ, UK Abstract We describe the prototype
More informationAlignment of Continuous Video onto 3D Point Clouds
1 Alignment of Continuous Video onto 3D Point Clouds W. Zhao 1, D. Nister 2, and S. Hsu Sarnoff Corporation 201 Washington Road Princeton, NJ 08540, USA email: { wzhao, dnister, shsu }@sarnoff.com Tel:
More informationPLANE-BASED COARSE REGISTRATION OF 3D POINT CLOUDS WITH 4D MODELS
PLANE-BASED COARSE REGISTRATION OF 3D POINT CLOUDS WITH 4D MODELS Frédéric Bosché School of the Built Environment, Heriot-Watt University, Edinburgh, Scotland bosche@vision.ee.ethz.ch ABSTRACT: The accurate
More informationFOOTPRINTS EXTRACTION
Building Footprints Extraction of Dense Residential Areas from LiDAR data KyoHyouk Kim and Jie Shan Purdue University School of Civil Engineering 550 Stadium Mall Drive West Lafayette, IN 47907, USA {kim458,
More informationFeature Transfer and Matching in Disparate Stereo Views through the use of Plane Homographies
Feature Transfer and Matching in Disparate Stereo Views through the use of Plane Homographies M. Lourakis, S. Tzurbakis, A. Argyros, S. Orphanoudakis Computer Vision and Robotics Lab (CVRL) Institute of
More informationCS664 Lecture #19: Layers, RANSAC, panoramas, epipolar geometry
CS664 Lecture #19: Layers, RANSAC, panoramas, epipolar geometry Some material taken from: David Lowe, UBC Jiri Matas, CMP Prague http://cmp.felk.cvut.cz/~matas/papers/presentations/matas_beyondransac_cvprac05.ppt
More informationMosaics. Today s Readings
Mosaics VR Seattle: http://www.vrseattle.com/ Full screen panoramas (cubic): http://www.panoramas.dk/ Mars: http://www.panoramas.dk/fullscreen3/f2_mars97.html Today s Readings Szeliski and Shum paper (sections
More information