A COMPREHENSIVE TOOL FOR RECOVERING 3D MODELS FROM 2D PHOTOS WITH WIDE BASELINES
Yuzhu Lu, Shana Smith
Virtual Reality Applications Center, Human Computer Interaction Program, Iowa State University, Ames, IA, USA

ABSTRACT

Recovering 3D objects from 2D photos is an important application in the areas of computer vision, computational intelligence, feature recognition, and virtual reality. This paper describes an innovative and systematic method that integrates automatic feature extraction, automatic feature matching, manual revision, feature recovery, and model reconstruction into an effective recovery tool. The method has proven to be a convenient and inexpensive way to recover 3D scenes and models directly from 2D photos. We have developed a new automatic key point selection and hierarchical matching algorithm for matching 2D photos that have less similarity. Our method uses a universal camera intrinsic matrix estimation method to omit the camera calibration experiment. We have also developed a new automatic texture-mapping algorithm to find the best textures from the 2D photos. Examples and results are included to show the capability of the developed tool.

KEY WORDS: 3D recovery, stereo matching, computer modeling, wide baseline.

1. Introduction

With the rapid and wide application of virtual models in many areas, creating 3D models from real scenes is greatly needed. Traditional manual model building is labor-intensive and expensive; thus, automatic construction of 3D computer models has recently received much attention [1][2]. Much work has been conducted on recovering existing 3D environments. These recovery efforts can be classified into two categories: using scanning devices [3][4] and using cameras [5][6][7][8][9]. Scanning devices can reconstruct objects precisely and automatically, but they are expensive and inconvenient to carry, especially in outdoor environments. Here, we therefore focus on 3D model recovery from 2D images taken by cameras.
This technology uses two or more photos of the same objects to recover the 3D information of the overlapping areas and, subsequently, to reconstruct the model. The process includes four steps: key feature selection, feature matching, recovery computation, and model reconstruction. The features of an image are often expressed as discontinuities in the image signal. In prior research, these discontinuities were extracted as corner points [1][8][9][10][11][12], edges [13][14], or regions [7][15][16] using the first or second derivative information of the image signal. Feature matching is both the focus and the bottleneck of recent research on recovering 3D information from 2D images. Matching processes are applied according to the attributes of the detected features: corner points [1][8][9][11][12][17], line edges [10][18], curved edges [19], and regions [15][17][20]. Point matching methods have been the most widely used in stereo vision research because corners are easy to detect and are more stable and robust under perspective changes. Almost all of these point-matching algorithms are designed around image similarity, uniqueness, continuity, and epipolar information [1][2][5][9][11][17][21]. Recovery computation (also called stereo triangulation) is relatively stable and well understood when the camera's parameters are known. However, if these parameters are not given, it is necessary to calibrate the camera [1][2][21], an operation which is inconvenient for many common users. Thus, camera calibration and self-calibration research has become another major research focus. Even though there have been many studies in this area, problems still arise because of the limitations of current methods. These problems include difficulties in completing 3D recovery automatically and difficulties in dealing with images having wide baselines. In response, we have developed a systematic semi-automated method to recover 3D models directly from 2D photos with less similarity. We have also developed an automatic feature information extraction method and a hierarchical matching algorithm for images with less similarity, as well as a tool that lets users edit key points, revise possible mismatches, and select triangles to reconstruct a model with surfaces. A universal camera intrinsic matrix, estimated from statistical analysis, is used to recover 3D information without camera calibration, and a new texture-mapping algorithm automatically selects the better textures from the different photos.

2. Methodology

2.1. Key Point Extraction

Feature points are those holding the main characteristics of a 2D image. In our application, geometric information is the main characteristic to be recovered. We considered the Harris corner detection method [1][9][10] and the Canny edge detection method [8][13].
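As a rough illustration of the corner detector cited above, a minimal Harris response can be computed in a few lines of NumPy. This is a generic sketch (the 3x3 box window, the constant k, and the absence of Gaussian smoothing are simplifying assumptions), not the implementation used in the tool:

```python
import numpy as np

def harris_response(img, k=0.04):
    """Minimal Harris corner response: R = det(M) - k * trace(M)^2,
    with M built from gradient products box-filtered over a 3x3 window."""
    img = img.astype(float)
    Iy, Ix = np.gradient(img)                 # image derivatives (rows = y)
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy

    def box3(a):                              # 3x3 box filter with edge padding
        p = np.pad(a, 1, mode='edge')
        return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
                   for i in range(3) for j in range(3)) / 9.0

    Sxx, Syy, Sxy = box3(Ixx), box3(Iyy), box3(Ixy)
    return Sxx * Syy - Sxy * Sxy - k * (Sxx + Syy) ** 2
```

Pixels where the response is a large positive local maximum are corner candidates; strongly negative values indicate edges, and flat regions score zero.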
We have chosen the Canny method to extract segment information for two reasons: first, edge detection retains more complete geometric information; second, edge segments can be represented by only two end points, which can be easily edited and revised manually.

2.2. Hierarchical Matching Algorithm

Epipolar constraints contribute greatly to stereo matching [1][2][9][10][11]. Unlike other matching cues, epipolar constraints are more robust: only eight well-matched points are necessary to compute the epipolar geometry [11]. However, obtaining these eight well-matched points is a challenging problem, since it is practically impossible to check all possible combinations of extracted feature points. Therefore, an initial seed matching is necessary to provide candidate matches for the epipolar geometry computation. The most widely used method to obtain an initial matching set is the classical cross correlation method [11]. Although other methods could be used to obtain the initial seed matching, they usually do not work well when applied to images with less similarity, because high similarity is a fundamental requirement for those matching methods. Our method introduces a new hierarchical procedure to obtain the initial correspondence set, as shown in figure 1. First, segments are matched. Because segments have more attributes than points (such as length, position, direction, and background color information), matching accuracy can be increased, particularly for large baseline images with less similarity. Second, the two end points of each segment are matched based on the segments matched in the first step. If the segments from the first step are well matched, the accuracy of the second step will be very high.

Figure 1. Hierarchical matching algorithm

Our approach uses four indexes for segment matching. The first index is the relative position of the center point of an extracted segment, represented by the summation of the vectors from that segment center to all other segment centers. For example, the index vector for p1 is found by adding up all the vectors from p1 to the other points, as shown in figure 2(a), and the index vector for p2 is found by adding up all the vectors from p2 to the other points, as shown in figure 2(b). The second index is the length of a segment, represented by the distance between its two end points. The third index is the background information of a segment, represented by the mean color value of the neighborhood of the segment's center point. The fourth index is the direction of a segment, represented by the angle of the segment vector.

(a) Relative position of p1 (b) Relative position of p2
Figure 2. Index one for p1 and p2 - relative position

The four indexes for each segment in the two images are compared, and the differences between the index values for each pair of segments are computed and added together, as shown in equation 1:

CP = a(I_1 - α I'_1) + b(I_2 - α I'_2) + c(I_3 - I'_3) + d(I_4 - I'_4)    (1)

The potential matched segments are the ones with the least difference between index values. In equation 1, I_1, I_2, I_3, I_4 and I'_1, I'_2, I'_3, I'_4 are the four index values for a pair of segments in the two images, and a, b, c, and d are the weights of the four indexes, determined by their relative importance. Through our study, we have found that the relative position and segment length indexes are more crucial when matching images with wide baselines; therefore, the weights of these two indexes are larger than the others. α is the estimated scale parameter between the two photos, estimated as the ratio of the bounding box sizes of the object in the two images. This factor helps solve the scaling problem between the photos used.
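Under the notation of equation 1, the index computation and the weighted comparison can be sketched as follows. The data layout, the weight values, and the use of Euclidean distances for the vector-valued indexes are illustrative assumptions, not the paper's exact implementation:

```python
import numpy as np

def position_index(centers):
    """Index one: for each segment center, the sum of the vectors from it
    to all other segment centers. Input shape (n, 2), output shape (n, 2)."""
    c = np.asarray(centers, dtype=float)
    return c.sum(axis=0) - len(c) * c      # row i holds sum_j (c_j - c_i)

def segment_cost(idx_a, idx_b, weights=(0.35, 0.35, 0.15, 0.15), alpha=1.0):
    """Equation 1: weighted sum of index differences for one segment pair.
    idx_* = (position_vector, length, mean_color, angle); alpha is the
    estimated scale between the two photos."""
    a, b, c, d = weights
    pos_a, len_a, col_a, ang_a = idx_a
    pos_b, len_b, col_b, ang_b = idx_b
    return (a * np.linalg.norm(np.asarray(pos_a) - alpha * np.asarray(pos_b))
            + b * abs(len_a - alpha * len_b)
            + c * np.linalg.norm(np.asarray(col_a) - np.asarray(col_b))
            + d * abs(ang_a - ang_b))
```

Candidate matches are then the segment pairs that minimize this cost, with the position and length weights kept larger than the color and angle weights, as the paper suggests for wide baselines.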
In the first level of the matching process, we can find potential matched segments, but we cannot determine their matching directions. Consequently, we use the classical cross correlation method in level two to match the end points of the matched segments. If the segments correspond correctly in level one, level two matching is easier because there are few candidate points. Finally, the correspondences for the initial set of key feature points are obtained from the proposed hierarchical matching algorithm. A least squares method is used to find the eight best-matched points, which are then used to calculate the fundamental matrix; this matrix is useful for finding the most reasonable correspondences. The fundamental matrix is the algebraic representation of the epipolar geometry. After the fundamental matrix is calculated, it is used to find inliers among the seed matches found in the level two matching and to reject the outliers. Figure 3 compares the final matching results of the proposed hierarchical matching method and the classical cross correlation method [1][8][9][11][12][17]. The result shows that our algorithm finds more correct corresponding key points (16 matches) than the classical cross correlation algorithm (9 matches). In figure 3(b), there is an obvious mismatch resulting from the classical cross correlation algorithm.
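The inlier test described above amounts to checking how far each seed match lies from its epipolar line. A minimal sketch follows (it measures the point-to-line distance in the second image only; a full implementation would use the symmetric distance and a robust estimator):

```python
import numpy as np

def epipolar_residual(F, p1, p2):
    """Pixel distance from p2 to the epipolar line F @ p1 of p1."""
    x1 = np.array([p1[0], p1[1], 1.0])
    x2 = np.array([p2[0], p2[1], 1.0])
    line = F @ x1                          # line a*u + b*v + c = 0 in image 2
    return abs(x2 @ line) / np.hypot(line[0], line[1])

def filter_inliers(F, matches, tol=2.0):
    """Keep the seed matches whose epipolar residual is below tol pixels."""
    return [(p1, p2) for p1, p2 in matches
            if epipolar_residual(F, p1, p2) < tol]
```

For example, with the fundamental matrix of a pure horizontal translation, a match is kept only if both points lie on the same image row, which is exactly the epipolar constraint for that motion.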
(a) Our matching method (b) Classical cross correlation method
Figure 3. Matching result comparison

To improve the results and accuracy, various strategies can be implemented, such as a relaxation process or searching for point correspondences a second time under the constraint of the epipolar geometry, as suggested by Zhang [11]. However, this again costs more time and resources.

2.3. 3D Recovery

The relationship between a 3D point coordinate and its image plane coordinate through a camera is shown in equations 2 and 3, where s is a scaling factor, (x, y, z) is the 3D point, and (u, v) is its camera image coordinate. P is a 3x4 perspective projection matrix, which can be decomposed into the camera intrinsic matrix A and an extrinsic matrix containing the rotation and translation information (R, T):

s [u, v, 1]^T = P [x, y, z, 1]^T    (2)

P = A [R T]    (3)

Thus, the relationship between a 3D point and its two image coordinates on the image planes of two cameras can be expressed through equation 4, where A_1 and A_2 are the two cameras' intrinsic matrices. Here, we set the first camera as the origin and the second camera as the transformed one:

s_1 [u_1, v_1, 1]^T = A_1 [I | 0] [x, y, z, 1]^T
s_2 [u_2, v_2, 1]^T = A_2 [R | T] [x, y, z, 1]^T    (4)

These two equations serve as the principle of stereo triangulation for recovering 3D coordinates when all parameters are known. After the corresponding key points are matched and triangular surfaces are constructed, the triangulation method is carried out to recover the 3D information [17][21]. Prior research has shown that camera calibration (or self-calibration) should be carried out to obtain the camera's intrinsic parameter matrix [1][2][6]. The intrinsic matrix A (shown in equation 5) and the fundamental matrix F obtained in the matching process can then be used to calculate the rotation and translation parameters between the two cameras, with which we can calculate the 3D information of the object.
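Equation 4 can be solved linearly for (x, y, z): each image measurement contributes two homogeneous equations, and the stacked system is solved by SVD. The following is a standard linear (DLT) triangulation sketch under the setup above (P1 = A_1[I | 0], P2 = A_2[R | T]), not the authors' exact solver:

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Linear (DLT) triangulation of one point from two 3x4 projections."""
    (u1, v1), (u2, v2) = uv1, uv2
    # Each view contributes two rows: u * P[2] - P[0] and v * P[2] - P[1]
    M = np.array([u1 * P1[2] - P1[0],
                  v1 * P1[2] - P1[1],
                  u2 * P2[2] - P2[0],
                  v2 * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(M)
    X = Vt[-1]                 # right null vector = homogeneous 3D point
    return X[:3] / X[3]
```

With noise-free correspondences the stacked matrix has rank 3 and the null vector recovers the 3D point exactly; with noisy matches, the smallest singular vector gives the least-squares solution.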
However, camera calibration experiments are very inconvenient and time consuming, and are sometimes even impossible, for example, when we attempt to recover a historical scene from old photographs.

A = [ f k_u   f k_u cot(θ)    u_0
      0       f k_v / sin(θ)  v_0
      0       0               1  ]    (5)

There are six intrinsic parameters: the focal length f of the camera, the aspect ratios k_u and k_v, the angle θ between the retinal axes, and the coordinates u_0 and v_0 of the principal point [1][2][21]. Xu, Terai, and Shum [21] suggest that if high precision is not required, we can assume that the angle between the retinal axes is π/2 (θ = π/2), that the aspect ratio is 1 (k_u = k_v = 1), and that the principal point is at the image's center. Thus, the only unknown parameter left is the focal length f, and the intrinsic matrix can be rewritten as equation 6:

A = [ f   0   pixel_x / 2
      0   f   pixel_y / 2
      0   0   1           ]    (6)
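With those assumptions, building the matrix of equation 6 is a one-liner; a small sketch (the image size in pixels supplies the principal point, and f is the single free parameter the tool lets the user adjust):

```python
import numpy as np

def simple_intrinsics(f, width, height):
    """Intrinsic matrix under equation 6: unit aspect ratio, orthogonal
    retinal axes, and the principal point at the image center."""
    return np.array([[f,   0.0, width / 2.0],
                     [0.0, f,   height / 2.0],
                     [0.0, 0.0, 1.0]])
```

For a 640x480 photo, sweeping f over a plausible range and inspecting the recovered scene reproduces the manual focal-length adjustment the tool offers.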
After surveying and analyzing the focal lengths used in different research areas, we found that most focal lengths (80%) vary within a narrow range. Thus, it is possible to estimate a camera's intrinsic matrix when recovering 3D information from photos taken by a normal camera. Our 3D recovery tool also allows the user to adjust the focal length to find the best results. Figure 4 shows an example of recovering a 3D scene using a focal length of 1000.

Figure 4. Results using f = 1000
Figure 5. Reconstruction with texture mapping

2.4. Reconstruction by Texture Mapping

After we obtain all the 3D information of the feature points, a new model can be constructed based on the connectivity information given by the triangles created above. Since the reconstructed solid model does not contain surface texture, texture mapping is necessary to make the model more realistic. However, since we have at least two photos, and since each photo is taken from a different angle (and therefore shows certain details in a slightly different way), the question arises: which photo should be used? Normally, a better surface texture comes from the camera that captures a larger area of that surface of the object and contains clearer details. Using this principle, we have designed an algorithm that compares every corresponding triangle texture and selects the texture with the larger area. Figure 5 shows an example of our algorithm.

3. Conclusion and Discussion

We have proposed a hierarchical feature-matching algorithm for wide baseline images with less similarity. We have presented a universal camera intrinsic matrix, with which the camera calibration experiment can be omitted, saving time and resources. We have also presented a new texture-mapping algorithm that automatically selects the better and clearer triangle textures to map onto the reconstructed model.
We have presented an integrated and comprehensive process for recovering a 3D model from 2D images, whereas almost all previous research focused on only one or two steps of this process. Since our method recovers a 3D scene from only two 2D photos, areas unseen in either photo cannot be recovered. Objects with simple geometric shapes (like buildings) are easier to recover than more complicated objects like trees and grass. Because the estimated camera intrinsic matrix does affect the recovered results, users can adjust the focal length within the suggested range. In the near future, more work will be done to register and fuse model parts retrieved from more photos to recover a complete model. The results will also be output as VRML models for use in more applications.

References

[1] K. Cornelis, M. Pollefeys, M. Vergauwen, & L. Van Gool, Augmented reality using uncalibrated video sequences, 2nd European Workshop on 3D Structure from Multiple Images of Large-Scale Environments (SMILE 2000), Dublin, Ireland, 2000.
[2] M. Pollefeys, Self-calibration and metric 3D reconstruction from uncalibrated image sequences, Ph.D. thesis, Katholieke Universiteit Leuven, Heverlee, Belgium, 1999.
[3] M. Reed, & P. Allen, 3-D modeling from range imagery: an incremental method with a planning component, Image and Vision Computing, 17, 1999, 99-111.
[4] I. Stamos, & P. Allen, 3-D model construction using range and image data, Computer Vision & Pattern Recognition Conf. (CVPR), 2000.
[5] H. Shum, & R. Szeliski, Stereo reconstruction from multiperspective panoramas, 7th International Conf. on Computer Vision (ICCV'99), Kerkyra, Greece, 1999, 14-21.
[6] T. Jebara, A. Azarbayejani, & A. Pentland, 3D structure from 2D motion, IEEE Signal Processing Magazine, 16(3), 1999.
[7] S. Baker, R. Szeliski, & P. Anandan, A layered approach to stereo reconstruction, IEEE Computer Society Conf. on Computer Vision and Pattern Recognition (CVPR'98), 1998.
[8] C. Baillard, & A. Zisserman, A plane-sweep strategy for the 3D reconstruction of buildings from multiple images, International Archives of Photogrammetry and Remote Sensing, 32(2), 2000.
[9] A. W. Fitzgibbon, G. Cross, & A. Zisserman, Automatic 3D model construction for turn-table sequences, Proc. European Workshop on 3D Structure from Multiple Images of Large-Scale Environments, 1998.
[10] C. G. Harris, & M. J. Stephens, A combined corner and edge detector, Proc. 4th Alvey Vision Conf., Manchester, England, 1988, 147-151.
[11] Z. Zhang, R. Deriche, O. Faugeras, & Q. Luong, A robust technique for matching two uncalibrated images through the recovery of the unknown epipolar geometry, Artificial Intelligence, 78, 1995, 87-119.
[12] P. Tissainayagam, & D. Suter, Assessing the performance of corner detectors for point feature tracking applications, Image and Vision Computing, 22(8), 2004.
[13] J. F. Canny, Finding edges and lines in images, Master's thesis, MIT AI Lab, 1983.
[14] R. Gonzalez, & R. Woods, Digital Image Processing, 2nd edition (Prentice Hall, 2002).
[15] J. Gao, A. Kosaka, & A. Kak, A deformable model for human organ extraction, Proc. IEEE International Conf. on Image Processing, 1998.
[16] D. L. Pham, C. Xu, & J. L. Prince, A survey of current methods in medical image segmentation, Annual Review of Biomedical Engineering, 2, 1998.
[17] Y. Ma, S. Soatto, J. Kosecka, & S. Sastry, An Invitation to 3-D Vision: From Images to Geometric Models (Springer-Verlag, 2003).
[18] H. Loaiza, J. Triboulet, & S. Lelandais, Matching segments in stereoscopic vision, IEEE Instrumentation & Measurement Magazine, 4(1), 2001.
[19] R. Deriche, & O. Faugeras, 2-D curve matching using high curvature points: application to stereo vision, Proc. International Conf. on Pattern Recognition, New Jersey, USA, 1990.
[20] T. Tuytelaars, M. Vergauwen, M. Pollefeys, & L. Van Gool, Image matching for wide baseline stereo, Proc. International Conf. on Forensic Human Identification, 1999.
[21] G. Xu, J. Terai, & H. Shum, A linear algorithm for camera self-calibration, motion and structure recovery for multi-planar scenes from two perspective images, Computer Vision & Pattern Recognition Conf. (CVPR), 2000.
Epipolar Geometry and Stereo Vision Computer Vision Shiv Ram Dubey, IIIT Sri City Many slides from S. Seitz and D. Hoiem Last class: Image Stitching Two images with rotation/zoom but no translation. X
More informationMOTION STEREO DOUBLE MATCHING RESTRICTION IN 3D MOVEMENT ANALYSIS
MOTION STEREO DOUBLE MATCHING RESTRICTION IN 3D MOVEMENT ANALYSIS ZHANG Chun-sen Dept of Survey, Xi an University of Science and Technology, No.58 Yantazhonglu, Xi an 710054,China -zhchunsen@yahoo.com.cn
More informationStructure from motion
Structure from motion Structure from motion Given a set of corresponding points in two or more images, compute the camera parameters and the 3D point coordinates?? R 1,t 1 R 2,t 2 R 3,t 3 Camera 1 Camera
More informationUsing Geometric Blur for Point Correspondence
1 Using Geometric Blur for Point Correspondence Nisarg Vyas Electrical and Computer Engineering Department, Carnegie Mellon University, Pittsburgh, PA Abstract In computer vision applications, point correspondence
More informationStructure and motion in 3D and 2D from hybrid matching constraints
Structure and motion in 3D and 2D from hybrid matching constraints Anders Heyden, Fredrik Nyberg and Ola Dahl Applied Mathematics Group Malmo University, Sweden {heyden,fredrik.nyberg,ola.dahl}@ts.mah.se
More informationMultiple Views Geometry
Multiple Views Geometry Subhashis Banerjee Dept. Computer Science and Engineering IIT Delhi email: suban@cse.iitd.ac.in January 2, 28 Epipolar geometry Fundamental geometric relationship between two perspective
More informationBIL Computer Vision Apr 16, 2014
BIL 719 - Computer Vision Apr 16, 2014 Binocular Stereo (cont d.), Structure from Motion Aykut Erdem Dept. of Computer Engineering Hacettepe University Slide credit: S. Lazebnik Basic stereo matching algorithm
More informationMulti-Projector Display with Continuous Self-Calibration
Multi-Projector Display with Continuous Self-Calibration Jin Zhou Liang Wang Amir Akbarzadeh Ruigang Yang Graphics and Vision Technology Lab (GRAVITY Lab) Center for Visualization and Virtual Environments,
More informationStructure from Motion and Multi- view Geometry. Last lecture
Structure from Motion and Multi- view Geometry Topics in Image-Based Modeling and Rendering CSE291 J00 Lecture 5 Last lecture S. J. Gortler, R. Grzeszczuk, R. Szeliski,M. F. Cohen The Lumigraph, SIGGRAPH,
More informationEpipolar Geometry and Stereo Vision
Epipolar Geometry and Stereo Vision Computer Vision Jia-Bin Huang, Virginia Tech Many slides from S. Seitz and D. Hoiem Last class: Image Stitching Two images with rotation/zoom but no translation. X x
More informationCOMPARATIVE STUDY OF DIFFERENT APPROACHES FOR EFFICIENT RECTIFICATION UNDER GENERAL MOTION
COMPARATIVE STUDY OF DIFFERENT APPROACHES FOR EFFICIENT RECTIFICATION UNDER GENERAL MOTION Mr.V.SRINIVASA RAO 1 Prof.A.SATYA KALYAN 2 DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING PRASAD V POTLURI SIDDHARTHA
More informationModel Refinement from Planar Parallax
Model Refinement from Planar Parallax A. R. Dick R. Cipolla Department of Engineering, University of Cambridge, Cambridge, UK {ard28,cipolla}@eng.cam.ac.uk Abstract This paper presents a system for refining
More informationDense 3D Reconstruction. Christiano Gava
Dense 3D Reconstruction Christiano Gava christiano.gava@dfki.de Outline Previous lecture: structure and motion II Structure and motion loop Triangulation Today: dense 3D reconstruction The matching problem
More informationStereo CSE 576. Ali Farhadi. Several slides from Larry Zitnick and Steve Seitz
Stereo CSE 576 Ali Farhadi Several slides from Larry Zitnick and Steve Seitz Why do we perceive depth? What do humans use as depth cues? Motion Convergence When watching an object close to us, our eyes
More informationMathematics of a Multiple Omni-Directional System
Mathematics of a Multiple Omni-Directional System A. Torii A. Sugimoto A. Imiya, School of Science and National Institute of Institute of Media and Technology, Informatics, Information Technology, Chiba
More informationarxiv: v1 [cs.cv] 28 Sep 2018
Extrinsic camera calibration method and its performance evaluation Jacek Komorowski 1 and Przemyslaw Rokita 2 arxiv:1809.11073v1 [cs.cv] 28 Sep 2018 1 Maria Curie Sklodowska University Lublin, Poland jacek.komorowski@gmail.com
More informationEstimation of common groundplane based on co-motion statistics
Estimation of common groundplane based on co-motion statistics Zoltan Szlavik, Laszlo Havasi 2, Tamas Sziranyi Analogical and Neural Computing Laboratory, Computer and Automation Research Institute of
More information3D Sensing. 3D Shape from X. Perspective Geometry. Camera Model. Camera Calibration. General Stereo Triangulation.
3D Sensing 3D Shape from X Perspective Geometry Camera Model Camera Calibration General Stereo Triangulation 3D Reconstruction 3D Shape from X shading silhouette texture stereo light striping motion mainly
More informationA Novel Stereo Camera System by a Biprism
528 IEEE TRANSACTIONS ON ROBOTICS AND AUTOMATION, VOL. 16, NO. 5, OCTOBER 2000 A Novel Stereo Camera System by a Biprism DooHyun Lee and InSo Kweon, Member, IEEE Abstract In this paper, we propose a novel
More informationFactorization Method Using Interpolated Feature Tracking via Projective Geometry
Factorization Method Using Interpolated Feature Tracking via Projective Geometry Hideo Saito, Shigeharu Kamijima Department of Information and Computer Science, Keio University Yokohama-City, 223-8522,
More informationComputer Vision 558 Corner Detection Overview and Comparison
Computer Vision 558 Corner Detection Overview and Comparison Alexandar Alexandrov ID 9823753 May 3, 2002 0 Contents 1 Introduction 2 1.1 How it started............................ 2 1.2 Playing with ideas..........................
More informationCHAPTER 3. Single-view Geometry. 1. Consequences of Projection
CHAPTER 3 Single-view Geometry When we open an eye or take a photograph, we see only a flattened, two-dimensional projection of the physical underlying scene. The consequences are numerous and startling.
More informationLecture 14: Basic Multi-View Geometry
Lecture 14: Basic Multi-View Geometry Stereo If I needed to find out how far point is away from me, I could use triangulation and two views scene point image plane optical center (Graphic from Khurram
More informationHierarchical Matching Techiques for Automatic Image Mosaicing
Hierarchical Matching Techiques for Automatic Image Mosaicing C.L Begg, R Mukundan Department of Computer Science, University of Canterbury, Christchurch, New Zealand clb56@student.canterbury.ac.nz, mukund@cosc.canterbury.ac.nz
More informationStereo II CSE 576. Ali Farhadi. Several slides from Larry Zitnick and Steve Seitz
Stereo II CSE 576 Ali Farhadi Several slides from Larry Zitnick and Steve Seitz Camera parameters A camera is described by several parameters Translation T of the optical center from the origin of world
More informationDetecting Multiple Symmetries with Extended SIFT
1 Detecting Multiple Symmetries with Extended SIFT 2 3 Anonymous ACCV submission Paper ID 388 4 5 6 7 8 9 10 11 12 13 14 15 16 Abstract. This paper describes an effective method for detecting multiple
More informationMulti-stable Perception. Necker Cube
Multi-stable Perception Necker Cube Spinning dancer illusion, Nobuyuki Kayahara Multiple view geometry Stereo vision Epipolar geometry Lowe Hartley and Zisserman Depth map extraction Essential matrix
More informationLecture 9: Epipolar Geometry
Lecture 9: Epipolar Geometry Professor Fei Fei Li Stanford Vision Lab 1 What we will learn today? Why is stereo useful? Epipolar constraints Essential and fundamental matrix Estimating F (Problem Set 2
More informationCamera Calibration for a Robust Omni-directional Photogrammetry System
Camera Calibration for a Robust Omni-directional Photogrammetry System Fuad Khan 1, Michael Chapman 2, Jonathan Li 3 1 Immersive Media Corporation Calgary, Alberta, Canada 2 Ryerson University Toronto,
More informationA Case Against Kruppa s Equations for Camera Self-Calibration
EXTENDED VERSION OF: ICIP - IEEE INTERNATIONAL CONFERENCE ON IMAGE PRO- CESSING, CHICAGO, ILLINOIS, PP. 172-175, OCTOBER 1998. A Case Against Kruppa s Equations for Camera Self-Calibration Peter Sturm
More informationSimultaneous Vanishing Point Detection and Camera Calibration from Single Images
Simultaneous Vanishing Point Detection and Camera Calibration from Single Images Bo Li, Kun Peng, Xianghua Ying, and Hongbin Zha The Key Lab of Machine Perception (Ministry of Education), Peking University,
More informationDetection of surfaces for projection of texture
Detection of surfaces for projection of texture Thierry Molinier a, David Fofi a, Patrick Gorria a, Joaquim Salvi b a Le2i UMR CNRS 5158, University of Burgundy, France b Computer Vision and Robotics Group,
More informationBUILDING POINT GROUPING USING VIEW-GEOMETRY RELATIONS INTRODUCTION
BUILDING POINT GROUPING USING VIEW-GEOMETRY RELATIONS I-Chieh Lee 1, Shaojun He 1, Po-Lun Lai 2, Alper Yilmaz 2 1 Mapping and GIS Laboratory 2 Photogrammetric Computer Vision Laboratory Dept. of Civil
More information3D RECONSTRUCTION FROM VIDEO SEQUENCES
3D RECONSTRUCTION FROM VIDEO SEQUENCES 'CS365: Artificial Intelligence' Project Report Sahil Suneja Department of Computer Science and Engineering, Indian Institute of Technology, Kanpur
More informationCS 4495 Computer Vision A. Bobick. Motion and Optic Flow. Stereo Matching
Stereo Matching Fundamental matrix Let p be a point in left image, p in right image l l Epipolar relation p maps to epipolar line l p maps to epipolar line l p p Epipolar mapping described by a 3x3 matrix
More information6.819 / 6.869: Advances in Computer Vision Antonio Torralba and Bill Freeman. Lecture 11 Geometry, Camera Calibration, and Stereo.
6.819 / 6.869: Advances in Computer Vision Antonio Torralba and Bill Freeman Lecture 11 Geometry, Camera Calibration, and Stereo. 2d from 3d; 3d from multiple 2d measurements? 2d 3d? Perspective projection
More informationVIDEO-TO-3D. Marc Pollefeys, Luc Van Gool, Maarten Vergauwen, Kurt Cornelis, Frank Verbiest, Jan Tops
VIDEO-TO-3D Marc Pollefeys, Luc Van Gool, Maarten Vergauwen, Kurt Cornelis, Frank Verbiest, Jan Tops Center for Processing of Speech and Images, K.U.Leuven Dept. of Computer Science, University of North
More informationAN AUTOMATIC 3D RECONSTRUCTION METHOD BASED ON MULTI-VIEW STEREO VISION FOR THE MOGAO GROTTOES
The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Volume XL-4/W5, 05 Indoor-Outdoor Seamless Modelling, Mapping and avigation, May 05, Tokyo, Japan A AUTOMATIC
More informationLecture'9'&'10:'' Stereo'Vision'
Lecture'9'&'10:'' Stereo'Vision' Dr.'Juan'Carlos'Niebles' Stanford'AI'Lab' ' Professor'FeiAFei'Li' Stanford'Vision'Lab' 1' Dimensionality'ReducIon'Machine'(3D'to'2D)' 3D world 2D image Point of observation
More informationFace Recognition At-a-Distance Based on Sparse-Stereo Reconstruction
Face Recognition At-a-Distance Based on Sparse-Stereo Reconstruction Ham Rara, Shireen Elhabian, Asem Ali University of Louisville Louisville, KY {hmrara01,syelha01,amali003}@louisville.edu Mike Miller,
More informationEuclidean Reconstruction and Auto-Calibration from Continuous Motion
Euclidean Reconstruction and Auto-Calibration from Continuous Motion Fredrik Kahl and Anders Heyden Λ Centre for Mathematical Sciences Lund University Box 8, SE- Lund, Sweden {fredrik, andersp}@maths.lth.se
More informationOn-line and Off-line 3D Reconstruction for Crisis Management Applications
On-line and Off-line 3D Reconstruction for Crisis Management Applications Geert De Cubber Royal Military Academy, Department of Mechanical Engineering (MSTA) Av. de la Renaissance 30, 1000 Brussels geert.de.cubber@rma.ac.be
More informationInstance-level recognition part 2
Visual Recognition and Machine Learning Summer School Paris 2011 Instance-level recognition part 2 Josef Sivic http://www.di.ens.fr/~josef INRIA, WILLOW, ENS/INRIA/CNRS UMR 8548 Laboratoire d Informatique,
More informationEpipolar geometry. x x
Two-view geometry Epipolar geometry X x x Baseline line connecting the two camera centers Epipolar Plane plane containing baseline (1D family) Epipoles = intersections of baseline with image planes = projections
More informationThe end of affine cameras
The end of affine cameras Affine SFM revisited Epipolar geometry Two-view structure from motion Multi-view structure from motion Planches : http://www.di.ens.fr/~ponce/geomvis/lect3.pptx http://www.di.ens.fr/~ponce/geomvis/lect3.pdf
More informationCOSC579: Scene Geometry. Jeremy Bolton, PhD Assistant Teaching Professor
COSC579: Scene Geometry Jeremy Bolton, PhD Assistant Teaching Professor Overview Linear Algebra Review Homogeneous vs non-homogeneous representations Projections and Transformations Scene Geometry The
More informationDense 3-D Reconstruction of an Outdoor Scene by Hundreds-baseline Stereo Using a Hand-held Video Camera
Dense 3-D Reconstruction of an Outdoor Scene by Hundreds-baseline Stereo Using a Hand-held Video Camera Tomokazu Satoy, Masayuki Kanbaray, Naokazu Yokoyay and Haruo Takemuraz ygraduate School of Information
More information