Photo-Consistency Based Registration of an Uncalibrated Image Pair to a 3D Surface Model Using Genetic Algorithm


Zsolt Jankó and Dmitry Chetverikov
Computer and Automation Research Institute and Eötvös Loránd University
Budapest, Hungary
{janko,csetverikov}@sztaki.hu

Abstract

We consider the following data fusion problem. A 3D object with a textured Lambertian surface is measured and independently photographed. A triangulated model of the object and two uncalibrated images are obtained. The goal is to precisely register the images to the model. Solving this problem is necessary for building a geometrically accurate, photorealistic model from laser-scanned 3D data and high quality images. Recently, we have proposed a novel method that generalises the photo-consistency approach of Clarkson et al. [2] to the case of uncalibrated cameras, when both intrinsic and extrinsic parameters are unknown. This gives the user the freedom of taking the pictures with a conventional digital camera, from arbitrary positions and with varying zoom. The method is based on manual pre-registration followed by a genetic optimisation algorithm. A brief description of the pilot version of the method [8] has been given together with the results of a few initial tests. In this paper, we report on significant new developments in this project. The critical issue of robustness against illumination changes is addressed, and various colour representations and cost functions are tested and compared. Natural constraints are introduced and experimentally validated to simplify the camera model and accelerate the algorithm. Finally, we present synthetic and real data with ground truth, apply the improved method to the data and measure the quality of the results.

1. Introduction

Precise registration of images to a 3D surface model is needed in a number of computer vision areas.
An important application is building and visualising photorealistic 3D models of real-world objects based on multimodal sensor data. In our view, a photorealistic model has three major components: geometry, appearance and dynamics. Within these components, precision, continuity and high-level description (geometry), texture, realistic surface models and presentation at varying levels of detail (appearance), and motion and deformable shapes (dynamics) are required. In this paper, the problem of combining precise geometry with high quality images is addressed.

One of the most important application areas of registration is medical imaging. Registering optical images to 3D models obtained from CT or MRI images helps the surgeon plan and perform the operation. This application, called image guided surgery, has been shown to improve the accuracy of operations and to reduce the operation time [2]. Another promising application, which is closer to our approach, is visualising exhibits of museums or exhibitions in fine detail. When an expert wishes to examine the exhibits through a web site, providing realistic textures on the surface is as important as providing the precise geometry.

There are several approaches to the registration problem. The task is frequently referred to as pose estimation, which assumes calibrated cameras, so that only the pose of the object in the world needs to be estimated. This can be achieved by extracting features on the 3D model as well as in the images, and by searching for the corresponding feature pairs [11, 3, 5]. Clarkson et al. [2] approached the problem in a different way: they presented an algorithm based on photo-consistency, but the method needs a calibrated setup. In [8] we have proposed a novel method which generalises this approach to the case of uncalibrated cameras, when both intrinsic and extrinsic parameters are unknown. The method is based on manual pre-registration followed by a genetic optimisation algorithm.
In this paper we improve the method presented in [8] and apply it to real and synthetic data with ground truth. We consider the following scenario: the surface of an object is measured by an accurate 3D laser scanner and a dense point set is captured. Triangulation of the point set yields a 3D triangular mesh with surface normals. Furthermore, an uncalibrated digital camera is used to acquire high quality images of the object. The user has the freedom of taking the pictures from arbitrary positions and with varying zoom. The task is to register the images to the 3D model to obtain a geometrically accurate, photorealistic model.

The contributions of the paper are as follows. The critical issue of robustness against illumination changes is addressed, and various colour representations and cost functions are tested and compared. Without loss of generality, constraints are introduced and experimentally validated to simplify the camera model and make the algorithm more efficient. Finally, we present synthetic and real data with ground truth, apply the improved method to the data and measure the quality of the results.

The structure of the paper is the following. Section 2 summarises our method introduced in [8]; the cost function and the selected optimisation strategy are presented. Section 3 is devoted to the improvement of the original method. The improved method is tested on real data and on synthetic data with ground truth in section 4. Finally, section 5 sums up the results.

2. Method

In this section we give a short overview of the method presented in [8]. Although we formulate the task for the special case of a 3D model and two images, the approach can easily be extended to multiple images.

2.1. Cost function

The input data consist of two colour images, I1 and I2, and a 3D model. An example is shown in figure 1. The images and the model represent the same object. Fixed lighting conditions and identical sensitivities of the cameras are assumed. All other camera parameters may differ and are unknown. Furthermore, we assume that the surface of the object is textured and Lambertian. The 3D model consists of a 3D point set P and a set of normal vectors. P is obtained by a hand-held 3D scanner and then triangulated by the robust algorithm [9], which provides the normal vectors.
The finite projective camera model [6] is used to represent the projection of the object surface onto the image plane: u ~ PX, where u is an image point, P is the 3×4 projection matrix and X is a surface point [6]. (~ means that the projection is defined up to an unknown scale.) The task of the registration is to determine the precise projection matrices, P1 and P2, for both images.

The projection matrix P has 12 elements but only 11 degrees of freedom (DOFs), since it is defined up to a scale factor. Decomposing P as P = K[R | Rt] shows the meaning of these DOFs [6]: K is the 3×3 upper triangular camera matrix, R the 3×3 rotation matrix and t the 3×1 translation vector. The elements of K are the intrinsic camera parameters, while R and t are the extrinsic camera parameters, namely the orientation and the position of the camera. Let us denote by p the collection of the 11 unknown parameters (5 intrinsic and 6 extrinsic); p represents the projection matrix P as an 11-dimensional parameter vector.

We search for values of p1 and p2 such that the images are consistent, that is, the corresponding points (different projections of the same 3D point) have the same colour value. This definition is valid only when the surface is Lambertian. Formally, we say that images I1 and I2 are consistent by P1 and P2 (or p1 and p2) if for each X in P: u1 = P1 X, u2 = P2 X and I1(u1) = I2(u2). (Here Ii(ui) is the colour value at point ui of image Ii.) This type of consistency is called photo-consistency [10, 2]. Photo-consistency holds for accurate estimates of p1 and p2; inversely, misregistered projection matrices make the images much less photo-consistent. The cost function introduced in [8] is the following:

C_phi(p1, p2) = (1/|P|) * sum over X in P of ||I1(P1 X) - I2(P2 X)||^2,   (1)

where phi stands for photo-inconsistency and |P| is the number of points in P. The difference of the colour values I1 - I2 will be defined later.
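The cost function (1) can be sketched directly in code. The following is a minimal NumPy illustration, not the paper's implementation: it ignores visibility and colour-space issues (addressed in section 3), and the function name and nearest-pixel sampling are our own choices.

```python
import numpy as np

def photo_inconsistency(img1, img2, P1, P2, points):
    """Average squared colour difference over projected 3D points, as in eq. (1).

    img1, img2: H x W x 3 colour images; P1, P2: 3 x 4 projection matrices;
    points: n x 3 array of 3D surface points (assumed visible in both images).
    """
    n = points.shape[0]
    X = np.hstack([points, np.ones((n, 1))])  # homogeneous 3D points, n x 4
    total = 0.0
    for Xh in X:
        u1 = P1 @ Xh
        u1 = (u1[:2] / u1[2]).astype(int)     # project and dehomogenise
        u2 = P2 @ Xh
        u2 = (u2[:2] / u2[2]).astype(int)
        c1 = img1[u1[1], u1[0]].astype(float)  # colour at (row, col)
        c2 = img2[u2[1], u2[0]].astype(float)
        total += np.sum((c1 - c2) ** 2)
    return total / n
```

With accurate projection matrices and consistent images this value is near zero; misregistration drives it up, which is what the genetic search exploits.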
Finding the minimum of the cost function (1) over p1 and p2 yields estimates for the projection matrices.

2.2. Optimisation strategy

Because of the 22-dimensional parameter space and the unpredictable shape of C_phi(p1, p2), finding its minimum is a difficult task. We have attempted to approach the problem in different ways. Auto-calibration [6] is a widely used process to determine internal camera parameters directly from multiple uncalibrated images. The method is based on the absolute conic Ω, which is fixed under rigid motion of the camera. Determining Ω from the images yields the metric geometry of the model. Ω can be computed by using constraints on the internal or external camera parameters. There are several methods for auto-calibration, for instance estimating the absolute dual quadric Q, or using the Kruppa equations. However, all of these methods assume that the projective reconstruction or the fundamental matrix has already been computed from point correspondences across the image set. In the registration problem considered here, precise initial point correspondences are not available, so the estimated projective reconstruction, and hence the constraints and the computed internal parameters, can only be approximations. This means that auto-calibration cannot be used to eliminate the internal parameters from the parameter space.
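For concreteness, the 11-parameter vector p described above can be mapped to a projection matrix as follows. This is only a sketch: the Euler-angle rotation parameterisation and all names are our illustrative choices, and we follow the decomposition P = K[R | Rt] as the paper writes it (sign conventions for the translation vary in the literature).

```python
import numpy as np

def params_to_projection(p):
    """Build P = K [R | R t] from an 11-vector
    p = (ax, ay, s, x0, y0, rx, ry, rz, tx, ty, tz):
    5 intrinsic parameters followed by 3 rotation angles and 3 translations."""
    ax, ay, s, x0, y0, rx, ry, rz, tx, ty, tz = p
    K = np.array([[ax, s,  x0],
                  [0., ay, y0],
                  [0., 0., 1.]])
    # rotation from Euler angles (one possible 3-DOF parameterisation)
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    R = Rz @ Ry @ Rx
    t = np.array([tx, ty, tz])
    return K @ np.hstack([R, (R @ t)[:, None]])  # 3 x 4 projection matrix
```

The genetic algorithm then searches over two such vectors, i.e. 22 genes in total.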

Figure 1. The Shell dataset. Centre: 3D model. Sides: image pair.

As a result, we preferred to minimise the cost function over all 22 parameters. To initialise the search, manual pre-registration was introduced. Several standard nonlinear optimisation methods were tested; however, due to the non-smoothness of C_phi(p1, p2), gradient based methods (such as the Levenberg-Marquardt algorithm [6]) failed to provide reliable results. Finally, we decided to apply a global search strategy, namely a genetic algorithm. The proposed two-stage method is illustrated in figure 2. The rough estimates P1_0 and P2_0 provided by the manual pre-registration are refined by minimising C_phi(p1, p2). Note that human assistance to initialise the search is reasonable because this operation is simple and fast compared to the 3D scanning, which is also done manually. The task of the photo-consistency based registration is to make the result more accurate; in section 4 we demonstrate by tests with ground truth that the gain in accuracy is essential.

Figure 2. Block-diagram of the proposed method: manual pre-registration of the two images to the 3D model yields the initial estimates P1_0 and P2_0, which the photo-consistency based registration refines into P1 and P2.

For the implementation of the genetic algorithm we have chosen the GAlib genetic algorithm package [14] written by Matthew Wall at MIT. Different settings were tested and the best ones were chosen. The steady-state algorithm proved to converge faster than the simple one, hence we use it, with the default elitist option. The algorithm applies uniform crossover and a Gaussian mutation operator. Instead of the default roulette wheel selection method we use the tournament selector. To avoid premature convergence (when the population becomes homogeneous before finding the minimum) we set the mutation probability to 0.1. For the same reason the algorithm runs until 5 times 100 generations are created, instead of 1 times 500.
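The genetic search just described can be sketched as follows. This is a simplified stand-in for GAlib's steady-state algorithm, not the paper's code: the population size, restart count, mutation scale and all names below are illustrative defaults, and only one individual is replaced per generation.

```python
import numpy as np

rng = np.random.default_rng(0)

def genetic_minimise(cost, lo, hi, pop_size=50, restarts=5, generations=100,
                     p_mut=0.1, sigma=0.05):
    """GA sketch: tournament selection, uniform crossover, Gaussian mutation,
    steady-state replacement; restarts keep only the best individual."""
    dim = len(lo)
    best = None
    for _ in range(restarts):
        pop = rng.uniform(lo, hi, size=(pop_size, dim))
        if best is not None:
            pop[0] = best                       # carry over the best individual
        fit = np.array([cost(x) for x in pop])
        for _ in range(generations):
            def pick():                         # binary tournament selection
                i, j = rng.integers(pop_size, size=2)
                return pop[i] if fit[i] < fit[j] else pop[j]
            mask = rng.random(dim) < 0.5        # uniform crossover of two parents
            child = np.where(mask, pick(), pick())
            mut = rng.random(dim) < p_mut       # Gaussian mutation per gene
            child = np.where(mut, child + rng.normal(0., sigma * (hi - lo)), child)
            child = np.clip(child, lo, hi)
            worst = np.argmax(fit)              # steady-state: replace the worst
            c = cost(child)
            if c < fit[worst]:
                pop[worst], fit[worst] = child, c
        b = np.argmin(fit)
        if best is None or cost(best) > fit[b]:
            best = pop[b].copy()
    return best
```

In the registration setting, `cost` would evaluate the photo-inconsistency of the two projection matrices encoded by the 22 genes, and `lo`/`hi` would be the pre-registered values plus the margins given in section 3.2.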
This means that the algorithm starts 5 times from the beginning, preserving only the best individual from the previous population. The population is set to contain 500 or 1000 individuals. The intervals of the genes are set to the pre-registered values plus a margin of ±ε. (The concrete values are given in section 3.2.)

3. Improving the algorithm

3.1. Robustness and colour model

Several preliminary experiments were run to study the robustness of the method, which is a critical issue in registration, as it is in correspondence. It is clear that the cost function (1) is not robust, due to the inconsistencies produced by outliers, typically by occluded points. In [2], visibility is checked by ray tracing, but in [8] we use surface normals for this purpose. Our implementation is less accurate but much faster, which is more important in this case. The essence of our algorithm is to discard a point when the scalar product of its normal vector and the unit vector pointing towards the camera falls below a threshold; the product is the cosine of the angle between the two vectors. Typically, the threshold is set at 0.5. Since this algorithm cannot guarantee that all outliers will be filtered out, in [8] we modified the cost function (1) to remove the remaining outliers. The Trimmed Squares (TS) and the α-trimmed mean [12] techniques were applied. Both techniques have a single parameter, α. In TS, α is the

rate of the largest squares, which are discarded. In the α-trimmed mean, both the smallest and the largest values are rejected; when α is close to 0.5, the median is used. In our experiments we used α = 0.2. The α-trimmed mean performed slightly better.

In attempts to improve the initial method [8], we later tested a few other cost functions. However, the variance of the colour values [2] and the Modified Normalised Cross Correlation [13] yielded worse results than the robust least squares described above.

Another important question related to robustness is how to compare two colour images, that is, which colour differences should be used to eliminate the influence of illumination changes. We have tested various colour models: CIE XYZ ITU [1], HSI [4] and CIE LUV [7]. In the literature, CIE LUV is usually used to compute colour differences as the simple sum of squared differences in the three components L, U and V. This model proved to be the best in our experiments as well. However, it should be emphasised that in our experimental data the illumination changes were small.

Given the size of our test images, it seemed reasonable to reduce the size and apply image pyramids for the registration. We tried this as well, but the results did not improve significantly.

3.2. Constraints on the camera model
As mentioned above, the projection matrix can be decomposed as P = K [R Rt], where the calibration matrix can be expressed in form K = α x s x 0 α y y 0 1. (2) Here α x = fm x and α y = fm y represent the focal length of the camera in terms of pixel dimensions in the x and y direction respectively, s is the skew parameter and (x 0, y 0 ) is the principal point [6]. For most cameras the skew parameter is zero. It is also usual to assume that the pixels are squared, that is the ratio of m x and m y, which is referred to as the aspect ratio, is equal to 1. Thus the calibration matrix can be simplified: K = f f p x p y 1, (3) where p x = x 0 /m x and p y = y 0 /m y. This simplified model is usually called pinhole camera model. These simplifications reduce the number of the DOFs from 22 to 18. Although the decrease is not large, in this case every reasonable reduction is important. Hence we also applied a commonly used assumption, namely that the principal point is close to the origin. It does not reduce the number of the parameters, but the search space becomes more restricted. To exploit the relation between the internal and the external camera parameters, a technical simplification is implemented as well. This relation is the following: If we use the simplified camera model (3) and assume that the principal point is close to the origin, then one may give a rough estimate for the ratio of the focal length f and the distance d of the camera from the centroid of the object: f d w W, (4) where the width of the object in the image is denoted by w and in the 3D world by W (figure 3). Although w and W are unknown, the initial state of the camera provided by the manual pre-registration gives a good approximation to them. W O Figure 3. The relation among the distances. C is camera centre, O object s centroid. It is clear that the estimate in (4) is quite rough. Nevertheless, applying this constraint to determine the initial d w f C

population of the genetic algorithm makes the method much more efficient.

In section 2.2 we did not specify the ε values for the intervals of the genes. Considering the simplifications detailed above, the values we use are the following:

focal length: ±2%
principal point: ±0.5%
camera translation: ±3%
camera rotation: ±1°

4. Tests

In [8], pilot experimental results were presented to demonstrate the feasibility of the method. In this section, we run the improved method on the same input data as well as on new synthetic data with ground truth. When registration results are visualised, the differences between the original and the improved methods are not easy to perceive; these differences become clear when one uses the metric derived from the ground truth.

We obtain the synthetic ground-truthed data by covering the triangular mesh of the original Shell dataset with a synthetic texture. Two views of this object produced by a visualisation program provide the two input images, for which the projection matrices are completely known. To quantitatively assess the registration results, the projection error is measured: the 3D point set P is projected onto the image planes by both the ground truth and the estimated projection matrices, then the average distance between the corresponding image points is calculated. Formally,

E(P1, P2) = (1/2) * sum over i = 1, 2 of (1/|P|) * sum over X in P of ||P_i^G X - P_i X||,   (5)

where P_i^G are the ground truth and P_i the estimated projection matrices. Evaluating the result of the manual pre-registration by this metric, we obtained that the average error is pixels. The original method [8] brought it down to pixels. The results of the improved method presented in this paper are shown in table 1 and figure 4.
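The error metric (5) can be sketched as follows. This is an illustrative NumPy rendering, not the paper's code; the distances are computed in the inhomogeneous image plane, and the function name is ours.

```python
import numpy as np

def projection_error(P_gt, P_est, points):
    """Mean image-plane distance between ground-truth and estimated projections,
    averaged over the cameras, as in eq. (5).

    P_gt, P_est: lists of 3 x 4 projection matrices (one per camera);
    points: n x 3 array of 3D points."""
    n = points.shape[0]
    Xh = np.hstack([points, np.ones((n, 1))]).T  # 4 x n homogeneous points
    err = 0.0
    for Pg, Pe in zip(P_gt, P_est):              # one term per camera
        ug = Pg @ Xh
        ug = ug[:2] / ug[2]                      # dehomogenise, 2 x n
        ue = Pe @ Xh
        ue = ue[:2] / ue[2]
        err += np.mean(np.linalg.norm(ug - ue, axis=0))
    return err / len(P_gt)
```

For a registration result, smaller values mean the estimated cameras reproject the model closer to where the ground-truth cameras would.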
The algorithm was tested 10 times with 500 individuals in the population and 10 times with 1000. The average error of 5-6 pixels is acceptable considering the dimension of the images. The typical running time with a 3D model containing 1000 points¹ was 15 minutes for 500 individuals and 40 minutes for 1000. The test was performed on a 2.40 GHz PC with 1 GB memory.

¹ 3D models generated by laser scanners usually contain tens of thousands of points. However, for registration a less dense point set is sufficient.

Table 1. The projection error. 10 runs are performed with 500 individuals and 10 runs with 1000 individuals.

  500 individuals    1000 individuals
  average: 6.54      average: 5.63

Figure 4. Plot of the projection error.

Finally, in figures 5-7 we visualise typical registration results for the Shell and the Frog datasets. The precision of the registration can best be judged at the feet, the mouth and the tie of the Frog and at the stripes of the Shell. Note that the accuracy is not uniform over the surface: the most accurate areas are those which are clearly visible from both viewpoints.

5. Summary

We have presented an improved method for registering a 3D model and two high-quality images assuming uncalibrated cameras. After manual pre-registration the method uses a genetic algorithm to minimise a photo-consistency based cost function. To ensure robustness, different cost functions and colour models were tested. By imposing reasonable constraints on the camera model, the 22-dimensional parameter space could be reduced, thereby increasing the efficiency of the search.

Figure 5. Shell registration. (See figure 1.) Left: textured model. Right: textureless model.

Figure 6. The Frog dataset. Centre: 3D model. Sides: image pair.

Figure 7. Frog registration. Left: textured model. Right: textureless model.

Tests have shown that the projection error of the registration can be decreased to 5-6 pixels. In the course of the runs, worse results have also occurred occasionally; in these cases the genetic algorithm failed to come close to the global optimum. The genetic algorithm is a global search strategy, but due to the non-smoothness of the cost function, it may stop in a local minimum far from the global one. To increase the probability of good results, the number of individuals must be set higher; on the other hand, this slows the algorithm down. Although the genetic algorithm is not deterministic, the variance of the projection error is quite small.

In our experiments the algorithm ran until a fixed number of generations were created. One may instead set the algorithm to run until a given level of the cost function is reached. This is possible since the projection error and the cost function correlate: the smaller

the cost function, the smaller the error. (See figure 8.)

Figure 8. Cost function vs. projection error.

The presented method works with image pairs. Further research will show whether the approach should be extended to more images. Using more images allows one to impose more constraints on the intrinsic parameters, but the number of extrinsic parameters would also grow; thus the advantages of using more images are not obvious. Further tests are also needed to prove the robustness of the method against changes of lighting conditions and shadows.

Acknowledgement. This work was supported by the Hungarian Scientific Research Fund (OTKA) under grants T and M28078 and the EU Network of Excellence MUSCLE (FP ).

References

[1] CIE Publ. No. Colorimetry. Second Edition.
[2] M. J. Clarkson, D. Rueckert, D. L. Hill, and D. J. Hawkes. Using photo-consistency to register 2D optical images of the human face to a 3D surface model. IEEE Trans. on Pattern Analysis and Machine Intelligence, 23.
[3] P. David, D. DeMenthon, R. Duraiswami, and H. Samet. SoftPOSIT: Simultaneous pose and correspondence determination. 7th European Conference on Computer Vision, Copenhagen, Denmark.
[4] R. C. Gonzalez and R. E. Woods. Digital Image Processing. Addison-Wesley Publishing Company.
[5] R. Haralick, H. Joo, C.-N. Lee, X. Zhuang, V. Vaidya, and M. Kim. Pose estimation from corresponding point data. IEEE Trans. Systems, Man, and Cybernetics, 19.
[6] R. Hartley and A. Zisserman. Multiple View Geometry in Computer Vision. Cambridge University Press.
[7] R. Hunt. Measuring Colour. Ellis Horwood Limited, 1987.
[8] Z. Jankó and D. Chetverikov. Precise registration of an uncalibrated image pair to a 3D surface model. Proc. 17th International Conference on Pattern Recognition. Accepted for publication.
[9] G. Kós. An algorithm to triangulate surfaces in 3D using unorganised point clouds. Computing Suppl., 14.
[10] K. Kutulakos and S. Seitz. A Theory of Shape by Space Carving. Prentice Hall.
[11] M. Leventon, W. Wells III, and W. Grimson. Multiple view 2D-3D mutual information registration. Image Understanding Workshop.
[12] I. Pitas. Digital Image Processing Algorithms. Prentice Hall.
[13] R. Sara. Finding the largest unambiguous component of stereo matching. 7th European Conference on Computer Vision, 2.
[14] M. Wall. The GAlib genetic algorithm package. URL:


More information

Dense 3D Reconstruction. Christiano Gava

Dense 3D Reconstruction. Christiano Gava Dense 3D Reconstruction Christiano Gava christiano.gava@dfki.de Outline Previous lecture: structure and motion II Structure and motion loop Triangulation Wide baseline matching (SIFT) Today: dense 3D reconstruction

More information

Computer Vision I - Algorithms and Applications: Multi-View 3D reconstruction

Computer Vision I - Algorithms and Applications: Multi-View 3D reconstruction Computer Vision I - Algorithms and Applications: Multi-View 3D reconstruction Carsten Rother 09/12/2013 Computer Vision I: Multi-View 3D reconstruction Roadmap this lecture Computer Vision I: Multi-View

More information

Dense 3D Reconstruction. Christiano Gava

Dense 3D Reconstruction. Christiano Gava Dense 3D Reconstruction Christiano Gava christiano.gava@dfki.de Outline Previous lecture: structure and motion II Structure and motion loop Triangulation Today: dense 3D reconstruction The matching problem

More information

Structure from Motion. Introduction to Computer Vision CSE 152 Lecture 10

Structure from Motion. Introduction to Computer Vision CSE 152 Lecture 10 Structure from Motion CSE 152 Lecture 10 Announcements Homework 3 is due May 9, 11:59 PM Reading: Chapter 8: Structure from Motion Optional: Multiple View Geometry in Computer Vision, 2nd edition, Hartley

More information

Stereo Vision. MAN-522 Computer Vision

Stereo Vision. MAN-522 Computer Vision Stereo Vision MAN-522 Computer Vision What is the goal of stereo vision? The recovery of the 3D structure of a scene using two or more images of the 3D scene, each acquired from a different viewpoint in

More information

Stereo. 11/02/2012 CS129, Brown James Hays. Slides by Kristen Grauman

Stereo. 11/02/2012 CS129, Brown James Hays. Slides by Kristen Grauman Stereo 11/02/2012 CS129, Brown James Hays Slides by Kristen Grauman Multiple views Multi-view geometry, matching, invariant features, stereo vision Lowe Hartley and Zisserman Why multiple views? Structure

More information

Globally Optimal Algorithms for Stratified Autocalibration

Globally Optimal Algorithms for Stratified Autocalibration Globally Optimal Algorithms for Stratified Autocalibration By Manmohan Chandraker, Sameer Agarwal, David Kriegman, Serge Belongie Presented by Andrew Dunford and Adithya Seshasayee What is Camera Calibration?

More information

Simultaneous surface texture classification and illumination tilt angle prediction

Simultaneous surface texture classification and illumination tilt angle prediction Simultaneous surface texture classification and illumination tilt angle prediction X. Lladó, A. Oliver, M. Petrou, J. Freixenet, and J. Martí Computer Vision and Robotics Group - IIiA. University of Girona

More information

Computational Optical Imaging - Optique Numerique. -- Single and Multiple View Geometry, Stereo matching --

Computational Optical Imaging - Optique Numerique. -- Single and Multiple View Geometry, Stereo matching -- Computational Optical Imaging - Optique Numerique -- Single and Multiple View Geometry, Stereo matching -- Autumn 2015 Ivo Ihrke with slides by Thorsten Thormaehlen Reminder: Feature Detection and Matching

More information

Camera calibration. Robotic vision. Ville Kyrki

Camera calibration. Robotic vision. Ville Kyrki Camera calibration Robotic vision 19.1.2017 Where are we? Images, imaging Image enhancement Feature extraction and matching Image-based tracking Camera models and calibration Pose estimation Motion analysis

More information

Today. Stereo (two view) reconstruction. Multiview geometry. Today. Multiview geometry. Computational Photography

Today. Stereo (two view) reconstruction. Multiview geometry. Today. Multiview geometry. Computational Photography Computational Photography Matthias Zwicker University of Bern Fall 2009 Today From 2D to 3D using multiple views Introduction Geometry of two views Stereo matching Other applications Multiview geometry

More information

3D Sensing and Reconstruction Readings: Ch 12: , Ch 13: ,

3D Sensing and Reconstruction Readings: Ch 12: , Ch 13: , 3D Sensing and Reconstruction Readings: Ch 12: 12.5-6, Ch 13: 13.1-3, 13.9.4 Perspective Geometry Camera Model Stereo Triangulation 3D Reconstruction by Space Carving 3D Shape from X means getting 3D coordinates

More information

3D Modeling of Objects Using Laser Scanning

3D Modeling of Objects Using Laser Scanning 1 3D Modeling of Objects Using Laser Scanning D. Jaya Deepu, LPU University, Punjab, India Email: Jaideepudadi@gmail.com Abstract: In the last few decades, constructing accurate three-dimensional models

More information

METRIC PLANE RECTIFICATION USING SYMMETRIC VANISHING POINTS

METRIC PLANE RECTIFICATION USING SYMMETRIC VANISHING POINTS METRIC PLANE RECTIFICATION USING SYMMETRIC VANISHING POINTS M. Lefler, H. Hel-Or Dept. of CS, University of Haifa, Israel Y. Hel-Or School of CS, IDC, Herzliya, Israel ABSTRACT Video analysis often requires

More information

Step-by-Step Model Buidling

Step-by-Step Model Buidling Step-by-Step Model Buidling Review Feature selection Feature selection Feature correspondence Camera Calibration Euclidean Reconstruction Landing Augmented Reality Vision Based Control Sparse Structure

More information

Visual Recognition: Image Formation

Visual Recognition: Image Formation Visual Recognition: Image Formation Raquel Urtasun TTI Chicago Jan 5, 2012 Raquel Urtasun (TTI-C) Visual Recognition Jan 5, 2012 1 / 61 Today s lecture... Fundamentals of image formation You should know

More information

Hartley - Zisserman reading club. Part I: Hartley and Zisserman Appendix 6: Part II: Zhengyou Zhang: Presented by Daniel Fontijne

Hartley - Zisserman reading club. Part I: Hartley and Zisserman Appendix 6: Part II: Zhengyou Zhang: Presented by Daniel Fontijne Hartley - Zisserman reading club Part I: Hartley and Zisserman Appendix 6: Iterative estimation methods Part II: Zhengyou Zhang: A Flexible New Technique for Camera Calibration Presented by Daniel Fontijne

More information

Camera Registration in a 3D City Model. Min Ding CS294-6 Final Presentation Dec 13, 2006

Camera Registration in a 3D City Model. Min Ding CS294-6 Final Presentation Dec 13, 2006 Camera Registration in a 3D City Model Min Ding CS294-6 Final Presentation Dec 13, 2006 Goal: Reconstruct 3D city model usable for virtual walk- and fly-throughs Virtual reality Urban planning Simulation

More information

Structured Light. Tobias Nöll Thanks to Marc Pollefeys, David Nister and David Lowe

Structured Light. Tobias Nöll Thanks to Marc Pollefeys, David Nister and David Lowe Structured Light Tobias Nöll tobias.noell@dfki.de Thanks to Marc Pollefeys, David Nister and David Lowe Introduction Previous lecture: Dense reconstruction Dense matching of non-feature pixels Patch-based

More information

Identifying Car Model from Photographs

Identifying Car Model from Photographs Identifying Car Model from Photographs Fine grained Classification using 3D Reconstruction and 3D Shape Registration Xinheng Li davidxli@stanford.edu Abstract Fine grained classification from photographs

More information

CHAPTER 3. Single-view Geometry. 1. Consequences of Projection

CHAPTER 3. Single-view Geometry. 1. Consequences of Projection CHAPTER 3 Single-view Geometry When we open an eye or take a photograph, we see only a flattened, two-dimensional projection of the physical underlying scene. The consequences are numerous and startling.

More information

Multiple View Geometry in Computer Vision Second Edition

Multiple View Geometry in Computer Vision Second Edition Multiple View Geometry in Computer Vision Second Edition Richard Hartley Australian National University, Canberra, Australia Andrew Zisserman University of Oxford, UK CAMBRIDGE UNIVERSITY PRESS Contents

More information

Miniature faking. In close-up photo, the depth of field is limited.

Miniature faking. In close-up photo, the depth of field is limited. Miniature faking In close-up photo, the depth of field is limited. http://en.wikipedia.org/wiki/file:jodhpur_tilt_shift.jpg Miniature faking Miniature faking http://en.wikipedia.org/wiki/file:oregon_state_beavers_tilt-shift_miniature_greg_keene.jpg

More information

Real-time surface tracking with uncoded structured light

Real-time surface tracking with uncoded structured light Real-time surface tracking with uncoded structured light Willie Brink Council for Scientific and Industrial Research, South Africa wbrink@csircoza Abstract A technique for tracking the orientation and

More information

Lecture 10: Multi-view geometry

Lecture 10: Multi-view geometry Lecture 10: Multi-view geometry Professor Stanford Vision Lab 1 What we will learn today? Review for stereo vision Correspondence problem (Problem Set 2 (Q3)) Active stereo vision systems Structure from

More information

TEXTURE OVERLAY ONTO NON-RIGID SURFACE USING COMMODITY DEPTH CAMERA

TEXTURE OVERLAY ONTO NON-RIGID SURFACE USING COMMODITY DEPTH CAMERA TEXTURE OVERLAY ONTO NON-RIGID SURFACE USING COMMODITY DEPTH CAMERA Tomoki Hayashi 1, Francois de Sorbier 1 and Hideo Saito 1 1 Graduate School of Science and Technology, Keio University, 3-14-1 Hiyoshi,

More information

An Overview of Matchmoving using Structure from Motion Methods

An Overview of Matchmoving using Structure from Motion Methods An Overview of Matchmoving using Structure from Motion Methods Kamyar Haji Allahverdi Pour Department of Computer Engineering Sharif University of Technology Tehran, Iran Email: allahverdi@ce.sharif.edu

More information

Geometric camera models and calibration

Geometric camera models and calibration Geometric camera models and calibration http://graphics.cs.cmu.edu/courses/15-463 15-463, 15-663, 15-862 Computational Photography Fall 2018, Lecture 13 Course announcements Homework 3 is out. - Due October

More information

3D Sensing. 3D Shape from X. Perspective Geometry. Camera Model. Camera Calibration. General Stereo Triangulation.

3D Sensing. 3D Shape from X. Perspective Geometry. Camera Model. Camera Calibration. General Stereo Triangulation. 3D Sensing 3D Shape from X Perspective Geometry Camera Model Camera Calibration General Stereo Triangulation 3D Reconstruction 3D Shape from X shading silhouette texture stereo light striping motion mainly

More information

Multiple View Geometry

Multiple View Geometry Multiple View Geometry Martin Quinn with a lot of slides stolen from Steve Seitz and Jianbo Shi 15-463: Computational Photography Alexei Efros, CMU, Fall 2007 Our Goal The Plenoptic Function P(θ,φ,λ,t,V

More information

1 Projective Geometry

1 Projective Geometry CIS8, Machine Perception Review Problem - SPRING 26 Instructions. All coordinate systems are right handed. Projective Geometry Figure : Facade rectification. I took an image of a rectangular object, and

More information

Multi-View 3D-Reconstruction

Multi-View 3D-Reconstruction Multi-View 3D-Reconstruction Cedric Cagniart Computer Aided Medical Procedures (CAMP) Technische Universität München, Germany 1 Problem Statement Given several calibrated views of an object... can we automatically

More information

Unit 3 Multiple View Geometry

Unit 3 Multiple View Geometry Unit 3 Multiple View Geometry Relations between images of a scene Recovering the cameras Recovering the scene structure http://www.robots.ox.ac.uk/~vgg/hzbook/hzbook1.html 3D structure from images Recover

More information

3D Model Acquisition by Tracking 2D Wireframes

3D Model Acquisition by Tracking 2D Wireframes 3D Model Acquisition by Tracking 2D Wireframes M. Brown, T. Drummond and R. Cipolla {96mab twd20 cipolla}@eng.cam.ac.uk Department of Engineering University of Cambridge Cambridge CB2 1PZ, UK Abstract

More information

Fast, Unconstrained Camera Motion Estimation from Stereo without Tracking and Robust Statistics

Fast, Unconstrained Camera Motion Estimation from Stereo without Tracking and Robust Statistics Fast, Unconstrained Camera Motion Estimation from Stereo without Tracking and Robust Statistics Heiko Hirschmüller, Peter R. Innocent and Jon M. Garibaldi Centre for Computational Intelligence, De Montfort

More information

Visual Hulls from Single Uncalibrated Snapshots Using Two Planar Mirrors

Visual Hulls from Single Uncalibrated Snapshots Using Two Planar Mirrors Visual Hulls from Single Uncalibrated Snapshots Using Two Planar Mirrors Keith Forbes 1 Anthon Voigt 2 Ndimi Bodika 2 1 Digital Image Processing Group 2 Automation and Informatics Group Department of Electrical

More information

CSE 252B: Computer Vision II

CSE 252B: Computer Vision II CSE 252B: Computer Vision II Lecturer: Serge Belongie Scribe: Sameer Agarwal LECTURE 1 Image Formation 1.1. The geometry of image formation We begin by considering the process of image formation when a

More information

A Desktop 3D Scanner Exploiting Rotation and Visual Rectification of Laser Profiles

A Desktop 3D Scanner Exploiting Rotation and Visual Rectification of Laser Profiles A Desktop 3D Scanner Exploiting Rotation and Visual Rectification of Laser Profiles Carlo Colombo, Dario Comanducci, and Alberto Del Bimbo Dipartimento di Sistemi ed Informatica Via S. Marta 3, I-5139

More information

Lecture 10: Multi view geometry

Lecture 10: Multi view geometry Lecture 10: Multi view geometry Professor Fei Fei Li Stanford Vision Lab 1 What we will learn today? Stereo vision Correspondence problem (Problem Set 2 (Q3)) Active stereo vision systems Structure from

More information

Chapter 7. Conclusions and Future Work

Chapter 7. Conclusions and Future Work Chapter 7 Conclusions and Future Work In this dissertation, we have presented a new way of analyzing a basic building block in computer graphics rendering algorithms the computational interaction between

More information

Quasi-Euclidean Uncalibrated Epipolar Rectification

Quasi-Euclidean Uncalibrated Epipolar Rectification Dipartimento di Informatica Università degli Studi di Verona Rapporto di ricerca Research report September 2006 RR 43/2006 Quasi-Euclidean Uncalibrated Epipolar Rectification L. Irsara A. Fusiello Questo

More information

Performance Evaluation Metrics and Statistics for Positional Tracker Evaluation

Performance Evaluation Metrics and Statistics for Positional Tracker Evaluation Performance Evaluation Metrics and Statistics for Positional Tracker Evaluation Chris J. Needham and Roger D. Boyle School of Computing, The University of Leeds, Leeds, LS2 9JT, UK {chrisn,roger}@comp.leeds.ac.uk

More information

Jacobian of Point Coordinates w.r.t. Parameters of General Calibrated Projective Camera

Jacobian of Point Coordinates w.r.t. Parameters of General Calibrated Projective Camera Jacobian of Point Coordinates w.r.t. Parameters of General Calibrated Projective Camera Karel Lebeda, Simon Hadfield, Richard Bowden Introduction This is a supplementary technical report for ACCV04 paper:

More information

3D Editing System for Captured Real Scenes

3D Editing System for Captured Real Scenes 3D Editing System for Captured Real Scenes Inwoo Ha, Yong Beom Lee and James D.K. Kim Samsung Advanced Institute of Technology, Youngin, South Korea E-mail: {iw.ha, leey, jamesdk.kim}@samsung.com Tel:

More information

A COMPREHENSIVE TOOL FOR RECOVERING 3D MODELS FROM 2D PHOTOS WITH WIDE BASELINES

A COMPREHENSIVE TOOL FOR RECOVERING 3D MODELS FROM 2D PHOTOS WITH WIDE BASELINES A COMPREHENSIVE TOOL FOR RECOVERING 3D MODELS FROM 2D PHOTOS WITH WIDE BASELINES Yuzhu Lu Shana Smith Virtual Reality Applications Center, Human Computer Interaction Program, Iowa State University, Ames,

More information

Stereo Image Rectification for Simple Panoramic Image Generation

Stereo Image Rectification for Simple Panoramic Image Generation Stereo Image Rectification for Simple Panoramic Image Generation Yun-Suk Kang and Yo-Sung Ho Gwangju Institute of Science and Technology (GIST) 261 Cheomdan-gwagiro, Buk-gu, Gwangju 500-712 Korea Email:{yunsuk,

More information

VOLUMETRIC MODEL REFINEMENT BY SHELL CARVING

VOLUMETRIC MODEL REFINEMENT BY SHELL CARVING VOLUMETRIC MODEL REFINEMENT BY SHELL CARVING Y. Kuzu a, O. Sinram b a Yıldız Technical University, Department of Geodesy and Photogrammetry Engineering 34349 Beşiktaş Istanbul, Turkey - kuzu@yildiz.edu.tr

More information

Estimation of common groundplane based on co-motion statistics

Estimation of common groundplane based on co-motion statistics Estimation of common groundplane based on co-motion statistics Zoltan Szlavik, Laszlo Havasi 2, Tamas Sziranyi Analogical and Neural Computing Laboratory, Computer and Automation Research Institute of

More information

Freehand Voxel Carving Scanning on a Mobile Device

Freehand Voxel Carving Scanning on a Mobile Device Technion Institute of Technology Project in Image Processing and Analysis 234329 Freehand Voxel Carving Scanning on a Mobile Device Author: Student Number: 305950099 Supervisors: Aaron Wetzler, Yaron Honen,

More information

Two-view geometry Computer Vision Spring 2018, Lecture 10

Two-view geometry Computer Vision Spring 2018, Lecture 10 Two-view geometry http://www.cs.cmu.edu/~16385/ 16-385 Computer Vision Spring 2018, Lecture 10 Course announcements Homework 2 is due on February 23 rd. - Any questions about the homework? - How many of

More information

Projective Geometry and Camera Models

Projective Geometry and Camera Models /2/ Projective Geometry and Camera Models Computer Vision CS 543 / ECE 549 University of Illinois Derek Hoiem Note about HW Out before next Tues Prob: covered today, Tues Prob2: covered next Thurs Prob3:

More information

Projector Calibration for Pattern Projection Systems

Projector Calibration for Pattern Projection Systems Projector Calibration for Pattern Projection Systems I. Din *1, H. Anwar 2, I. Syed 1, H. Zafar 3, L. Hasan 3 1 Department of Electronics Engineering, Incheon National University, Incheon, South Korea.

More information

COMPARATIVE STUDY OF DIFFERENT APPROACHES FOR EFFICIENT RECTIFICATION UNDER GENERAL MOTION

COMPARATIVE STUDY OF DIFFERENT APPROACHES FOR EFFICIENT RECTIFICATION UNDER GENERAL MOTION COMPARATIVE STUDY OF DIFFERENT APPROACHES FOR EFFICIENT RECTIFICATION UNDER GENERAL MOTION Mr.V.SRINIVASA RAO 1 Prof.A.SATYA KALYAN 2 DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING PRASAD V POTLURI SIDDHARTHA

More information

Compact and Low Cost System for the Measurement of Accurate 3D Shape and Normal

Compact and Low Cost System for the Measurement of Accurate 3D Shape and Normal Compact and Low Cost System for the Measurement of Accurate 3D Shape and Normal Ryusuke Homma, Takao Makino, Koichi Takase, Norimichi Tsumura, Toshiya Nakaguchi and Yoichi Miyake Chiba University, Japan

More information

CS 565 Computer Vision. Nazar Khan PUCIT Lectures 15 and 16: Optic Flow

CS 565 Computer Vision. Nazar Khan PUCIT Lectures 15 and 16: Optic Flow CS 565 Computer Vision Nazar Khan PUCIT Lectures 15 and 16: Optic Flow Introduction Basic Problem given: image sequence f(x, y, z), where (x, y) specifies the location and z denotes time wanted: displacement

More information

Multiview Stereo COSC450. Lecture 8

Multiview Stereo COSC450. Lecture 8 Multiview Stereo COSC450 Lecture 8 Stereo Vision So Far Stereo and epipolar geometry Fundamental matrix captures geometry 8-point algorithm Essential matrix with calibrated cameras 5-point algorithm Intersect

More information

Segmentation and Tracking of Partial Planar Templates

Segmentation and Tracking of Partial Planar Templates Segmentation and Tracking of Partial Planar Templates Abdelsalam Masoud William Hoff Colorado School of Mines Colorado School of Mines Golden, CO 800 Golden, CO 800 amasoud@mines.edu whoff@mines.edu Abstract

More information

Camera Calibration and 3D Scene Reconstruction from image sequence and rotation sensor data

Camera Calibration and 3D Scene Reconstruction from image sequence and rotation sensor data Camera Calibration and 3D Scene Reconstruction from image sequence and rotation sensor data Jan-Michael Frahm and Reinhard Koch Christian Albrechts University Kiel Multimedia Information Processing Hermann-Rodewald-Str.

More information

Genetic Algorithms for Vision and Pattern Recognition

Genetic Algorithms for Vision and Pattern Recognition Genetic Algorithms for Vision and Pattern Recognition Faiz Ul Wahab 11/8/2014 1 Objective To solve for optimization of computer vision problems using genetic algorithms 11/8/2014 2 Timeline Problem: Computer

More information

Projective Geometry and Camera Models

Projective Geometry and Camera Models Projective Geometry and Camera Models Computer Vision CS 43 Brown James Hays Slides from Derek Hoiem, Alexei Efros, Steve Seitz, and David Forsyth Administrative Stuff My Office hours, CIT 375 Monday and

More information

Fast Outlier Rejection by Using Parallax-Based Rigidity Constraint for Epipolar Geometry Estimation

Fast Outlier Rejection by Using Parallax-Based Rigidity Constraint for Epipolar Geometry Estimation Fast Outlier Rejection by Using Parallax-Based Rigidity Constraint for Epipolar Geometry Estimation Engin Tola 1 and A. Aydın Alatan 2 1 Computer Vision Laboratory, Ecóle Polytechnique Fédéral de Lausanne

More information

Local Image Registration: An Adaptive Filtering Framework

Local Image Registration: An Adaptive Filtering Framework Local Image Registration: An Adaptive Filtering Framework Gulcin Caner a,a.murattekalp a,b, Gaurav Sharma a and Wendi Heinzelman a a Electrical and Computer Engineering Dept.,University of Rochester, Rochester,

More information

3D shape from the structure of pencils of planes and geometric constraints

3D shape from the structure of pencils of planes and geometric constraints 3D shape from the structure of pencils of planes and geometric constraints Paper ID: 691 Abstract. Active stereo systems using structured light has been used as practical solutions for 3D measurements.

More information

Stereo vision. Many slides adapted from Steve Seitz

Stereo vision. Many slides adapted from Steve Seitz Stereo vision Many slides adapted from Steve Seitz What is stereo vision? Generic problem formulation: given several images of the same object or scene, compute a representation of its 3D shape What is

More information

Lecture 9: Epipolar Geometry

Lecture 9: Epipolar Geometry Lecture 9: Epipolar Geometry Professor Fei Fei Li Stanford Vision Lab 1 What we will learn today? Why is stereo useful? Epipolar constraints Essential and fundamental matrix Estimating F (Problem Set 2

More information

Image Formation. Antonino Furnari. Image Processing Lab Dipartimento di Matematica e Informatica Università degli Studi di Catania

Image Formation. Antonino Furnari. Image Processing Lab Dipartimento di Matematica e Informatica Università degli Studi di Catania Image Formation Antonino Furnari Image Processing Lab Dipartimento di Matematica e Informatica Università degli Studi di Catania furnari@dmi.unict.it 18/03/2014 Outline Introduction; Geometric Primitives

More information

Overview. Augmented reality and applications Marker-based augmented reality. Camera model. Binary markers Textured planar markers

Overview. Augmented reality and applications Marker-based augmented reality. Camera model. Binary markers Textured planar markers Augmented reality Overview Augmented reality and applications Marker-based augmented reality Binary markers Textured planar markers Camera model Homography Direct Linear Transformation What is augmented

More information

Easy to Use Calibration of Multiple Camera Setups

Easy to Use Calibration of Multiple Camera Setups Easy to Use Calibration of Multiple Camera Setups Ferenc Kahlesz, Cornelius Lilge, and Reinhard Klein University of Bonn, Institute of Computer Science II, Computer Graphics Group Römerstrasse 164, D-53117

More information

CS 664 Structure and Motion. Daniel Huttenlocher

CS 664 Structure and Motion. Daniel Huttenlocher CS 664 Structure and Motion Daniel Huttenlocher Determining 3D Structure Consider set of 3D points X j seen by set of cameras with projection matrices P i Given only image coordinates x ij of each point

More information

Fundamentals of Stereo Vision Michael Bleyer LVA Stereo Vision

Fundamentals of Stereo Vision Michael Bleyer LVA Stereo Vision Fundamentals of Stereo Vision Michael Bleyer LVA Stereo Vision What Happened Last Time? Human 3D perception (3D cinema) Computational stereo Intuitive explanation of what is meant by disparity Stereo matching

More information