External camera calibration for synchronized multi-video systems

Ivo Ihrke    Lukas Ahrenberg    Marcus Magnor
Max-Planck-Institut für Informatik, D-66123 Saarbrücken

ABSTRACT

We present a camera calibration system that is simple to use and offers generality in the positioning of the cameras, which makes it well suited for calibrating mobile, synchronized camera setups. We use a camera graph to perform global registration, which helps lift restrictions on the camera setup imposed by other calibration methods. A further advantage is that all information is taken into account simultaneously. The method is based on a virtual calibration object which is constructed over time, so no calibration object needs to be visible in all cameras simultaneously.

Keywords: External Camera Calibration, Registration, Multi-Video System, Graphs

1 Introduction

We have built a mobile camera system to take computer vision research to more general settings than a fixed studio setup. This added flexibility demands a simple-to-use technique that allows for convenient calibration of the camera system. Since camera calibration is such a crucial task in computer vision, a lot of effort has been spent on this subject. There exist a number of methods, e.g. [Tsai 1987; Hartley and Zisserman 2000], in different flavors. Most of them use carefully prepared calibration objects, while others, called self-calibrating approaches, estimate camera parameters from general images or image sequences.

A relatively new approach is the use of a virtual calibration object. This calibration object does not exist physically but is instead constructed over time (assuming a static camera setup and scene) by tracking an easily identifiable object through an image sequence. The point correspondences obtained in this way are then used for a traditional calibration step [Azarbayejani and Pentland 1995; Chen et al. 2000]. The advantages of using virtual calibration objects are the simple establishment of point correspondences in difficult wide-baseline and encircling camera setups, as well as the possibility of acquiring as many point measurements as necessary, which is difficult in single-image approaches. Furthermore, there is no need for a carefully manufactured calibration object, which can break during transportation.

The paper is organized as follows. In Section 2 we discuss some previous work that is important in this context. Section 3 gives an outline of the method, and Section 4 describes the establishment of 2D correspondences for calibration purposes. In Section 5 the computation of pairwise relationships between cameras is discussed. Section 6 describes the main contribution of this paper, namely the use of a graph structure for external camera calibration. Section 7 presents experiments with synthetic as well as real data. Finally, Section 8 concludes the paper and gives some directions for future work.

2 Related Work

Earlier approaches to external calibration with the help of virtual calibration objects [Azarbayejani and Pentland 1995; Chen et al. 2000] use light emitting objects (i.e. a flashlight or an LED) in dark rooms for tracking. In [Azarbayejani and Pentland 1995] the virtual calibration object is introduced to calibrate a pair of stereo cameras: a synchronized pair of cameras is used to track a flashlight, the path of which is used for calibration. Chen et al. extend this work by applying the virtual calibration object approach to unsynchronized multi-camera setups. The lack of camera synchronization is treated with an Extended Kalman Filter (EKF) which estimates the path of a marker object (an LED). Pairwise relationships between cameras are computed employing a structure from motion algorithm [Zhang 1996]. Global registration in a common coordinate system is then performed by iteratively applying a triangulation scheme that estimates the position of a yet unregistered camera relative to two already registered cameras. This requires that any unregistered camera be connected to at least two already registered cameras by known pairwise relationships. Our approach removes this restriction by analyzing a graph consisting of the cameras as vertices and the known pairwise positions and orientations as the edges. An additional advantage is that all information about the global position of a given camera is taken into account simultaneously.

3 Outline

To accommodate general camera setups, we use an approach that does not require a common view of a calibration object for all cameras. Instead, a virtual calibration object that covers the working volume is constructed over time by tracking an easily identifiable object [Azarbayejani and Pentland 1995; Chen et al. 2000]. We mainly follow the approach of [Chen et al. 2000]. However, we lift the constraints that the cameras must be registered by triangulation from the base camera pair and that the working volume must be dark during calibration.

Intrinsic calibration is done independently for each camera using Tsai's method [Tsai 1987]. This is reasonable since our cameras do not allow for varying internal parameters. It requires us to record an image of a checkerboard for each camera, from which the internal parameters can be computed; however, the checkerboard does not have to be visible from all cameras simultaneously.

[Figure 1: An example of a graph with nonempty sets S̄ and S_k (upper image) and a corresponding scene, containing a rigid object, where this situation could occur (lower image).]

At the place of recording, we first obtain a number of 2D correspondences by tracking the marker object depicted in Fig. 2. The midpoints of the marker object are determined, and the correspondences are used to robustly compute the relative orientation of each camera pair [Zhang 1996; Horn 1990]. Since the pairwise relative position of the cameras can be found only up to a scale factor, a global registration has to be performed to achieve global calibration. This is done by constructing a graph G = (V, E) that represents the relationships between cameras (Fig. 1). The cameras C_i ∈ V are the vertices, and known relative positions and orientations between camera pairs (C_i, C_j) ∈ E are represented by the edges of the graph. The graph is undirected, since a known relative position and orientation (R, t) for the camera pair (C_i, C_j) also defines the relative position and orientation of (C_j, C_i) as (R^T, −R^T t).
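To make the graph representation concrete, here is a minimal sketch (Python with NumPy; the class and method names are illustrative, not from the paper) of an undirected camera graph that stores one relative pose per edge and derives the reverse edge as (R^T, −R^T t):

```python
import numpy as np

class CameraGraph:
    """Undirected graph of cameras; each edge stores a relative pose (R, t)."""

    def __init__(self):
        # (i, j) -> (R, t): pose of camera j relative to camera i.
        self.edges = {}

    def add_pair(self, i, j, R, t):
        # Storing (R, t) for (i, j) implies (R^T, -R^T t) for (j, i),
        # so both directions are kept consistent automatically.
        self.edges[(i, j)] = (R, t)
        self.edges[(j, i)] = (R.T, -R.T @ t)
```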
Once the graph is set up, it is searched for cyclically connected subsets S_k ⊆ S containing all cameras. The set S̄ := S \ (⋃_k S_k) consists of all cameras not belonging to any cycle. If S̄ = S, no cycles exist and a special treatment is necessary. Figure 1 illustrates a situation where all sets are nonempty. The unknown scale factors for the pairwise translations are determined for each S_k independently. This is done by solving an over-determined linear system of equations. The equations correspond to cycles in the graph and require that the scaled translations along the traversed path sum to zero, i.e. each cycle represents a closed curve in three-dimensional space. Since the estimate obtained this way is no longer connected to the image measurements, the reprojection error of this solution is usually not satisfactory. Nevertheless, the overall shape of the setup is recovered quite well, so this estimate can be used as an initial estimate for bundle adjustment [Triggs et al. 2000; Hartley and Zisserman 2000]. Using these partial registrations, parts of the virtual calibration object (i.e. the path of the tracked marker object) are reconstructed. The remaining step is to register all subsets S_k and all C_i ∈ S̄ with each other to form a globally consistent calibration. In this framework the improvement of our method can be stated as follows: the restriction that the vertices V must be interconnected by three-cycles in the graph G, as required by the algorithm of Chen et al. [Chen et al. 2000], is lifted, and arbitrary connectivity of the graph G is allowed as long as the graph is not disconnected.

[Figure 2: The marker object that is tracked through the scene to establish 2D correspondences between camera frames.]

4 Obtaining image correspondences

2D point correspondences between images, i.e. projections of the same three-dimensional point onto different camera planes, are the basis for epipolar geometry estimation and therefore the first step in the recovery of the camera structure. We obtain them by tracking a sphere over time, establishing one correspondence in every frame of the calibration video sequence. The sphere is a suitable object because it has a unique midpoint whose projections can be computed from the images alone. The sphere projects to a conic section in the image plane, which in a real situation will always be an ellipse. If the focal length of the cameras is not too small, the ellipse is nearly circular. This observation leads to our sphere detection algorithm.

The sphere detection is run on every frame of the video sequences separately. Our marker object has a color that is not widely present in the scene. Therefore we threshold the image with a color band in Y/Cb/Cr space. We find connected components in the resulting binary image and threshold them according to their size to exclude noise and small artifacts from further processing. A circular Hough transform is performed on every remaining connected component, yielding a best circular fit for each of them. To decide on the best fit for the image, the Hough scores cannot be compared directly, as they depend on the size of the object. Therefore we calculate the density of object pixels in the circle candidates. A candidate from a round component will typically fill the whole disk and gain a high score. The highest scoring circle is picked as the winner. Finally the score is thresholded: if the score is greater than or equal to the threshold we have found the sphere, otherwise we assume that it is not visible in the image.

If the sphere was found, we refine the estimate using orthogonal least squares ellipse fitting. For this purpose we extract the edges of the original image around the position found by the Hough transform. The area for edge detection is limited by the estimated radius plus some safety margin. The detected edge pixels are used to robustly fit an ellipse using the RANSAC paradigm [Fischler and Bolles 1981], with the squared sum of orthogonal Euclidean distances between the data points and the ellipse as the error measure. This gives a sub-pixel estimate of the ellipse midpoint and compensates for small mislocalizations introduced by the Hough transform. We use these midpoints as correspondences.
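As a rough illustration, the per-frame detection pipeline described above might be sketched as follows. This is a minimal, hypothetical implementation assuming OpenCV and NumPy; the color band, thresholds, and Hough parameters are chosen arbitrarily for illustration rather than taken from the paper:

```python
import cv2
import numpy as np

def detect_sphere(frame_bgr, lo=(0, 133, 77), hi=(255, 173, 127),
                  min_area=50, density_thresh=0.6):
    """Sketch: color threshold in YCbCr, connected components,
    circular Hough fit per component, density-based scoring."""
    # OpenCV orders the channels Y, Cr, Cb (the paper speaks of Y/Cb/Cr).
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    mask = cv2.inRange(ycrcb, np.array(lo, np.uint8), np.array(hi, np.uint8))

    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    best, best_score = None, 0.0
    for i in range(1, n):                       # label 0 is the background
        if stats[i, cv2.CC_STAT_AREA] < min_area:
            continue                            # reject noise / small artifacts
        comp = np.uint8(labels == i) * 255
        circles = cv2.HoughCircles(comp, cv2.HOUGH_GRADIENT, dp=1,
                                   minDist=1000, param1=50, param2=10)
        if circles is None:
            continue
        cx, cy, r = circles[0, 0]
        # Score = density of object pixels inside the fitted circle.
        disk = np.zeros_like(comp)
        cv2.circle(disk, (int(cx), int(cy)), int(r), 255, -1)
        density = cv2.countNonZero(cv2.bitwise_and(comp, disk)) / max(np.pi * r * r, 1.0)
        if density > best_score:
            best, best_score = (cx, cy, r), density

    # Thresholding the winning score decides sphere-found vs. not-visible.
    return best if best_score >= density_thresh else None
```

The subsequent RANSAC ellipse refinement would then operate on edge pixels extracted around the returned circle.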
5 Computation of pairwise position and orientation

Having extracted a set of point correspondences, we can proceed to compute the pairwise relative position and orientation (R_ij, t_ij) of any camera pair (C_i, C_j). This is done by first undoing the effects of the internal camera parameters on the midpoints, followed by the computation of the essential matrix E_ij. We use a robust variation of Zhang's approach [Zhang 1996] to essential matrix computation. The difference between Zhang's algorithm and our version is the initial guess of the fundamental matrix F_ij, which is obtained using RANSAC [Fischler and Bolles 1981] with the seven point algorithm for fundamental matrix estimation [Hartley and Zisserman 2000]. This estimate already fulfills the rank-two constraint on the fundamental matrix F_ij. The following nonlinear refinement step minimizes the symmetric epipolar distance, i.e. the distance of the epipolar line induced by a point to the corresponding point in the other image, using the Nelder-Mead simplex method [Nelder and Mead 1965].

[Figure 3: Projection of spheres S_1 and S_2 onto the image plane Π using the focal point p. S_1 is located on a ray through p and the center of Π, thus projecting to the middle of the image plane. S_2, on the other hand, lies off this ray at some angle and projects to the outer regions of Π. e_1 and e_2 can be viewed as the images formed by Π cutting the projection cones from the spheres to p. In the case of S_1 the projection e_1 is a perfect circle, as Π is parallel to the base of the projection cone. S_2 deviates from the center of projection, and its projection cone is cut by Π at an angle, forming an ellipse.]

Given the internal parameter matrices K_i and K_j, an initial guess of the essential matrix is computed as E_ij = K_j^T F_ij K_i. This guess usually does not fulfill the additional constraint that the two nonzero singular values of E_ij are equal. The nearest essential matrix fulfilling the constraints is obtained by setting the two unequal singular values to their mean value [Faugeras and Luong 2001]. However, if the two nonzero singular values differ widely, this is not a good guess for the essential matrix minimizing the symmetric epipolar distance. Therefore an additional nonlinear minimization with the simplex algorithm is performed, this time using only the five parameters of rotation and translation direction, to compute the final guess for matrix E_ij.

The essential matrix is of the form E = [t]_× R, and rotation as well as translation can be extracted from it [Horn 1990]. The decomposition of matrix E yields four solutions for R and t. The solutions arise because of the unknown scale of t, which can be either positive or negative. The rotations are related by a rotation of 180° around the baseline connecting the two cameras. In most cases this ambiguity can be resolved by the requirement that reconstructed points lie in front of both cameras [Hartley and Zisserman 2000; Faugeras and Luong 2001]. Sometimes, in cases of a near-180° angle between the principal axes of the two cameras, only a twofold ambiguity can be resolved and two solutions remain. Since the reprojection error of both solutions is the same, there is no measure based on image distances that can decide between them. Therefore we apply a heuristic measure: we choose the solution whose reconstruction has the smaller convex hull. Since we are dealing with Euclidean reconstructions this is a reasonable choice.

As a measure of quality we compute a local reconstruction for each camera pair (C_i, C_j) that yielded a solution (R_ij, t_ij) and evaluate the reprojection error caused by that reconstruction. We apply a threshold on that error to exclude unstable estimates.
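For concreteness, the projection of the initial guess onto the space of essential matrices and the fourfold decomposition E = [t]_× R can be sketched in NumPy as follows. This follows the standard SVD-based construction [Hartley and Zisserman 2000] and is an illustrative sketch, not the authors' code; resolving the fourfold ambiguity (points in front of both cameras, plus the convex-hull heuristic for the remaining twofold case) happens afterwards:

```python
import numpy as np

def essential_from_fundamental(F, K_i, K_j):
    """Initial guess E = K_j^T F K_i, then enforce equal nonzero singular values."""
    E = K_j.T @ F @ K_i
    U, s, Vt = np.linalg.svd(E)
    m = (s[0] + s[1]) / 2.0          # replace both nonzero singular values by their mean
    return U @ np.diag([m, m, 0.0]) @ Vt

def decompose_essential(E):
    """Return the four (R, t) candidates of E = [t]_x R (ambiguity not yet resolved)."""
    U, _, Vt = np.linalg.svd(E)
    if np.linalg.det(U) < 0:         # ensure proper rotations (det = +1)
        U *= -1.0
    if np.linalg.det(Vt) < 0:
        Vt *= -1.0
    W = np.array([[0.0, -1.0, 0.0],
                  [1.0,  0.0, 0.0],
                  [0.0,  0.0, 1.0]])
    R1, R2 = U @ W @ Vt, U @ W.T @ Vt   # the two rotations, related by 180 deg
    t = U[:, 2]                          # translation direction, up to sign
    return [(R1, t), (R1, -t), (R2, t), (R2, -t)]
```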
6 Graph building and analysis

From the previously computed pairwise positions and orientations we build a graph that mirrors the availability of rotation/translation information for all camera pairs. The vertices of the graph correspond to the cameras, and edges connecting them indicate a stable solution to the relative position and orientation problem for the cameras connected by that edge.

Looking at the problem from a graph-theoretical point of view has several advantages, since there are standard solutions for problems occurring in general camera setups. For example, it is simple to check whether a calibration containing all cameras is possible: it is sufficient to check that the graph is connected. This can already be done during data collection, and the user can be guided to create more correspondences for camera pairs that do not have sufficient data available yet. If there are unconnected subsets of vertices in the graph, no global calibration can be found, but the subsets of cameras can be calibrated separately. Cameras corresponding to isolated vertices cannot be calibrated.
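As a small illustration of this connectivity test (plain Python, with hypothetical names), one breadth-first search per unvisited camera yields the groups that can be calibrated together:

```python
from collections import deque

def connected_components(cameras, edges):
    """Group cameras into connected components of the camera graph.
    `edges` is an iterable of camera pairs with a stable pairwise solution."""
    adj = {c: set() for c in cameras}
    for i, j in edges:
        adj[i].add(j)
        adj[j].add(i)

    seen, components = set(), []
    for start in cameras:
        if start in seen:
            continue
        comp, queue = set(), deque([start])
        while queue:                      # breadth-first search from `start`
            c = queue.popleft()
            if c in comp:
                continue
            comp.add(c)
            queue.extend(adj[c] - comp)
        seen |= comp
        components.append(comp)
    # One single component means a global calibration is possible;
    # singleton components are isolated cameras that cannot be calibrated.
    return components
```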

6.1 Registration of cyclic connected components

We proceed by registering all cameras C_i^k contained in a cyclic connected component G_k of the graph G with each other. This is done for each k separately; for improved readability we drop the subscript k in the following discussion. Recall that all relative translations t_ij are defined only up to a scale factor and reside in the local coordinate system of camera C_i. The task of the registration procedure is to find consistent scale factors s_ij for the t_ij and their transformations to a global coordinate system shared by all cameras.

The registration is based on cycles in the graph G = (V, E). We denote a cycle as Z = [z_1 ... z_n], z_i ∈ {C_1 ... C_m}, and introduce t̃_ij as the translation direction t_ij transformed to a common coordinate system. A condition on the scale factors s_ij for one cycle can then be written as

$$\left( \sum_{i=1}^{n-1} s_{z_i z_{i+1}} \tilde{t}_{z_i z_{i+1}} \right) + s_{z_n z_1} \tilde{t}_{z_n z_1} = 0 \qquad (1)$$

Equation 1 is valid for all cycles Z_h, h ∈ {1, ..., l}. It describes the condition that every walk along a cycle in the registered set of cameras returns to its origin, i.e. all translations along a cycle sum to zero. Every cycle contributes three equations to the system in Equation 1, one for each coordinate. Overall there are 3l equations, which make up a usually overdetermined linear system Ts = 0 that can be solved to obtain the scale factors s_ij. Matrix T and vector s contain the t̃_ij and s_ij in appropriate positions, respectively.

To apply this to the registration, a number of preliminary steps are necessary, which are explained in more detail below:

1. find all cycles Z_h in the graph G = (V, E),
2. transform all t_ij into a common coordinate system, i.e. compute the t̃_ij,
3. construct and solve the system of linear equations in Equation 1,
4. using the computed scale factors s_ij, compute a consistent calibration.

1) An algorithm for computing all cycles in a graph is presented in [Kim et al. 2003].

2) We choose one of the cameras' local coordinate systems as the common coordinate system; the corresponding camera is referred to as the base camera C_b. We use the camera related to the vertex with the highest degree for this purpose. This choice minimizes the number of edges that have to be traversed to transform the other cameras into the base camera's coordinate system. For a given camera C_i we transform the translation directions t_ij, (C_i, C_j) ∈ E, originating in C_i into the common coordinate system, yielding the t̃_ij. This is done using the quaternion weighted mean rotation R_ib between C_i and C_b. Rotation R_ib is computed by finding all paths between cameras C_i and C_b, accumulating rotation matrices along these paths, and taking the weighted mean value of their quaternion representations using iterative spherical linear quaternion interpolation¹.

3) The solution s of the linear system Ts = 0 is defined only up to a scale factor; matrix T is therefore singular. This is because the overall scale factor of the camera setup cannot be determined from image measurements alone. To solve the linear system, we compute the basis vector of the right null-space of T. Using the standard approach to this problem, we compute the singular value decomposition (SVD) of matrix T, T = UDV^T, and extract the column of V corresponding to the zero singular value.

4) To compute a consistent calibration we multiply the transformed translations t̃_ij by their corresponding scale factors s_ij.

¹ The weighted mean value $\bar{x} = \frac{\sum_{i=1}^{n} w_i x_i}{\sum_{i=1}^{n} w_i}$ can be computed in an iterative manner where a linear interpolation is performed in every iteration. Replacing the x_i by quaternions and using spherical linear interpolation, a weighted mean value of rotations can be computed.
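Steps 3) and 4) amount to assembling T from the cycle constraints and taking its right null-space vector. A hedged NumPy sketch follows; the bookkeeping is assumed, with a hypothetical `edge_index` mapping each stored edge to its column in s, and the sign handling for edges traversed against their stored direction is our illustrative choice:

```python
import numpy as np

def solve_scale_factors(cycles, t_tilde, edge_index):
    """Build T from the cycle constraints (Eq. 1) and return the scale
    factors s as the right null-space vector of T.
    cycles:     list of camera-index cycles [z1, ..., zn]
    t_tilde:    dict (i, j) -> translation direction in the common frame
    edge_index: dict (i, j) -> column of the corresponding s_ij in s"""
    n_edges = len(edge_index)
    rows = []
    for Z in cycles:
        block = np.zeros((3, n_edges))    # three equations per cycle
        for a in range(len(Z)):
            i, j = Z[a], Z[(a + 1) % len(Z)]   # wrap around to close the cycle
            if (i, j) in edge_index:           # edge stored in walk direction ...
                block[:, edge_index[(i, j)]] += t_tilde[(i, j)]
            else:                              # ... or traversed against it
                block[:, edge_index[(j, i)]] -= t_tilde[(j, i)]
        rows.append(block)
    T = np.vstack(rows)                        # 3l x (#edges), usually overdetermined

    # Right null-space vector = row of V^T for the smallest singular value.
    _, _, Vt = np.linalg.svd(T)
    s = Vt[-1]
    return s * np.sign(s[np.argmax(np.abs(s))])   # fix the arbitrary sign
```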
If the solution were perfect, the calibration would be readily available, since every path from C_i to C_b in the graph could be taken to transform camera C_i into the base coordinate system. But because s is a least squares solution, this is generally not true. To get the best estimate of the true translations, we compute the weighted mean translation t_ib similarly to the computation of R_ib in step 2). The calibrated cameras C_i are then defined by the projection matrices

$$P_i = K_i R_{ib} \left( \mathbf{I} \mid -t_{ib} \right) \qquad (2)$$

with R_bb being the unit matrix and t_bb the zero translation for the base camera C_b.

6.2 Bundle adjustment

The calibration obtained so far captures the coarse structure of the camera setup quite well, but since it minimizes an algebraic error, namely a least squares error with no physical meaning, the reprojection error of the points reconstructed with this calibration is quite high. Nevertheless it serves as a good initial guess for bundle adjustment [Triggs et al. 2000]. We implemented a reduced version which optimizes only the rotation and translation parameters of the final calibration using a nonlinear optimization method. Usually all parameters (internal, external, and reconstructed point positions) are optimized, which is a huge problem with hundreds of variables and can only be handled by exploiting the sparse structure of the problem [Pollefeys 1999]. We parametrize the problem in a similar way as in the essential matrix refinement step, but this time for all cameras simultaneously. The rotation is parametrized as the normalized rotation axis scaled by the rotation angle; these values are obtained from the quaternion representation of the rotation. The translation values are used directly for the parametrization. As the error measure to be minimized, we adopt the sum of the mean reprojection errors of the reconstructed points in all cameras. This error measure is again optimized using the simplex algorithm, with the previous calibration as the initial guess.
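As a rough sketch of such a reduced bundle adjustment using SciPy's Nelder-Mead simplex: the axis-angle parametrization matches the description above, while the data layout, the x = K(RX + t) projection convention, and all names are assumptions made for illustration:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.transform import Rotation

def reprojection_cost(params, K, X, obs):
    """params holds 6 values per camera (axis-angle rotation, translation).
    X: (n, 3) reconstructed points; obs[c]: (n, 2) measurements in camera c."""
    cost = 0.0
    for c in range(len(K)):
        rvec, t = params[6 * c:6 * c + 3], params[6 * c + 3:6 * c + 6]
        R = Rotation.from_rotvec(rvec).as_matrix()
        x = (K[c] @ (R @ X.T + t[:, None])).T      # project all points
        x = x[:, :2] / x[:, 2:3]                   # dehomogenize
        cost += np.linalg.norm(x - obs[c], axis=1).mean()  # mean error, camera c
    return cost                                     # sum of per-camera mean errors

def bundle_adjust(K, X, obs, R0, t0):
    """Refine rotations/translations only, starting from the linear solution."""
    p0 = np.concatenate([np.concatenate([Rotation.from_matrix(R).as_rotvec(), t])
                         for R, t in zip(R0, t0)])
    res = minimize(reprojection_cost, p0, args=(K, X, obs),
                   method='Nelder-Mead', options={'maxiter': 20000})
    p = res.x
    return ([Rotation.from_rotvec(p[6 * c:6 * c + 3]).as_matrix() for c in range(len(K))],
            [p[6 * c + 3:6 * c + 6] for c in range(len(K))])
```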

[Table 1: Mean reprojection error in the left and right images for pairwise relationship computation (inliers only) for different noise levels (experiment 1). Columns: standard deviation of uniform Gaussian noise; mean reprojection error, left image; mean reprojection error, right image.]

[Table 2: Mean reprojection error of the reconstruction from the linear solution to the calibration, dependent on the noise level, and after bundle adjustment (experiment 2). Columns: standard deviation of uniform Gaussian noise; mean reprojection error over all camera pairs; mean reprojection error after bundle adjustment.]

7 Experiments

We performed three experiments to validate our calibration method. The first two experiments use synthetic data with different levels of uniform Gaussian noise to assess the robustness of our method to noise. The first experiment shows the accuracy of the pairwise position and orientation computation (see Section 5) under noise. The second experiment evaluates the same for the initial calibration obtained by solving the linear system of scale factors (Section 6.1), as well as the improvements due to bundle adjustment. The third experiment uses real data: the calibration of our in-house multi-video studio [Theobalt et al. 2003], obtained using the full method. Errors are mean reprojection errors in pixels where not indicated otherwise.

7.1 Performance of the pairwise relationship estimation

For this experiment we use 100 synthetic 3D data points. The data points are projected onto 5 cameras and disturbed by uniform Gaussian noise with standard deviations ranging from 0.1 to 1.9 pixels. The dependency of the pairwise relationship estimation on the noise level is shown in Table 1. The threshold for the RANSAC method was set to 1.0 pixels. The results show that the method is robust against uniform Gaussian noise and that the reprojection error rises more slowly than the noise level. Nevertheless, at higher noise levels fewer points are detected as inliers, and the computation time, which grows as the ratio of inliers to the total number of points gets smaller, becomes longer.

7.2 Performance of the linear solution to the calibration

The second experiment was performed to test the performance of the linear calibration method of Section 6.1.

The estimated pairwise positions and orientations from the previous experiment were used to perform the linear calibration. After calibrating the synthetic cameras we reconstructed the 3D points from the noisy image points and reprojected them onto the camera planes. The mean Euclidean distance between the image points and their reprojections, and the improvements achieved by bundle adjustment, are shown in Table 2. It can be noted that the linear method produces high errors quite quickly. Nevertheless it gets close to the desired minimum of the nonlinear cost function, and bundle adjustment can recover the correct calibration up to the noise level.

[Table 3: Median reprojection error per camera for the calibration of the real multi-video studio (experiment 3). Columns: camera number; median reprojection error.]

7.3 Performance on real data

The third experiment uses real data extracted from a video sequence of 480 frames, recorded at 15 fps. Our sphere detection algorithm is applied to extract the midpoints of the sphere. These midpoints are then used to perform the full calibration. For reconstruction we use all cameras that observed a point. A visualization of the reconstructed virtual calibration object, i.e. the path of the marker object, and the reconstructed camera setup is shown in Fig. 4. For the evaluation we use the median reprojection error this time, because the data includes outliers which disturb the mean error computation. The results can be seen in Table 3. Except for camera 7, which has a very high error, all cameras are calibrated reasonably well. The remaining error is most likely due to errors in the internal calibration and due to the fact that we use the midpoints of the detected ellipses as correspondences, which do not in general correspond to the projected midpoint of the sphere [Heikkila and Silven 1997]. This introduces some bias which we plan to remove in the future.

[Figure 4: A top view of the reconstructed virtual calibration object and the resulting camera calibration. The lines show the path of the marker object, the pyramids depict the reconstructed camera positions. This is a cross-eye image that can be fused to give a depth impression.]

8 Conclusions and Future Work

We have presented a flexible camera calibration system that can be used even if some of the cameras have no common field of view. It requires no specially manufactured calibration object, and the calibration process can be partly interactive in the sense that the user can be guided to create correspondences where they would be helpful. The method is robust against noise and lifts two restrictions imposed by earlier methods:

- the scene does not have to be dark,
- the cameras can be calibrated as long as their camera graph is connected.

For the future we plan to include the computation of the real projected midpoints of the sphere instead of using the midpoints of the detected ellipses as correspondences. Furthermore, the detection of all cycles in the graph is computationally expensive. Since the search for all cycles is a breadth-first search, longer cycles are visited later; the search can therefore be stopped after sufficiently many cycles have been found, the longer cycles presumably being the less important ones.

References

Azarbayejani, A., and Pentland, A. 1995. Camera self-calibration from one point correspondence. Tech. Rep. 341, MIT Media Lab.

Chen, X., Davis, J., and Slusallek, P. 2000. Wide area camera calibration using virtual calibration objects. In Proceedings of CVPR, vol. 2.

Faugeras, O., and Luong, Q.-T. 2001. The Geometry of Multiple Images. The MIT Press.

Fischler, M. A., and Bolles, R. C. 1981. Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Communications of the ACM 24, 6 (June).

Hartley, R., and Zisserman, A. 2000. Multiple View Geometry. Cambridge University Press.

Heikkila, J., and Silven, O. 1997. A four-step camera calibration procedure with implicit image correction. In CVPR '97.

Horn, B. K. 1990. Recovering baseline and orientation from essential matrix, January.

Kim, E. J., Yum, K. H., Das, C. R., Yousif, M., and Duato, J. 2003. Performance enhancement techniques for InfiniBand™ architecture.

Nelder, J. A., and Mead, R. 1965. A simplex method for function minimization. Computer Journal 7.

Pollefeys, M. 1999. Self-calibration and metric 3D reconstruction from uncalibrated image sequences. PhD thesis.

Theobalt, C., Li, M., Magnor, M., and Seidel, H.-P. 2003. A flexible and versatile studio for synchronized multi-view video recording. Research Report MPI-I, Max-Planck-Institut für Informatik, Stuhlsatzenhausweg 85, Saarbrücken, Germany, April.

Triggs, B., McLauchlan, P., Hartley, R., and Fitzgibbon, A. 2000. Bundle adjustment: A modern synthesis. In Vision Algorithms: Theory and Practice, W. Triggs, A. Zisserman, and R. Szeliski, Eds., LNCS. Springer Verlag.

Tsai, R. Y. 1987. A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses. IEEE Journal of Robotics and Automation RA-3, 4 (August).

Zhang, Z. 1996. A new multistage approach to motion and structure estimation: From essential parameters to Euclidean motion via fundamental matrix. Tech. Rep. RR-2910, INRIA, June.
