Multi-Camera Calibration with One-Dimensional Object under General Motions


L. Wang, F. C. Wu and Z. Y. Hu
National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, P.O. Box 2728, Beijing, P.R. China
{wangliang, fcwu, huzy}@nlpr.ia.ac.cn

Abstract

It is well known that in order to calibrate a single camera with a one-dimensional (1D) calibration object, the object must undergo certain constrained motions; in other words, it is impossible to calibrate a single camera if the object motion is a general one. For a multi-camera setup, i.e., when the number of cameras is more than one, can the cameras be calibrated with a 1D object under general motions? In this work, we prove that all the cameras can indeed be calibrated, and a calibration algorithm is also proposed and experimentally tested. In contrast to other multi-camera calibration methods, no calibrated base camera is needed. In addition, we show that for such multi-camera cases, the minimum calibration conditions and the critical motions are similar to those for calibrating a single camera with a 1D calibration object.

1. Introduction

In 3D computer vision, camera calibration is a necessary step in order to extract metric information from 2D images. According to the dimension of the calibration objects, camera calibration techniques can be roughly classified into four categories. In 3D-reference-object-based calibration, an object with known geometry in 3D space is used [1, 15, 16]. In 2D-plane-based calibration, planar patterns are used [8, 12, 18]. In 1D-object-based calibration, a 1D segment with three or more markers is used [2, 17, 19]. In the 0D approach, or self-calibration, only correspondences of image points or some special kinds of motion are used, without any physical calibration object [3, 6, 7, 10, 13]. Among these techniques, much work has been done except for 1D-object-based calibration. Camera calibration using a 1D object was proposed by Zhang in [19].
Here the 1D calibration object consists of a set of three or more collinear points with known distances. The motion of the object is constrained by keeping one point fixed. Hammarstedt et al. analyze the critical configurations of 1D calibration and provide simplified closed-form solutions in Zhang's setup [2]. Wu et al. prove that the rotating 1D object used in [19] is essentially equivalent to a familiar 2D planar object, and that this equivalence still holds when the 1D object undergoes a planar motion rather than a rotation around a fixed point [17]. The advantages of using 1D objects for calibration are: (1) 1D objects with known geometry are easy to construct. In practice, a 1D object can be constructed by marking three points on a stick. (2) In a multi-camera setup, all cameras can observe the whole calibration object simultaneously, which is a prerequisite for calibration and is hard to satisfy with 2D or 3D calibration objects. A shortcoming of 1D calibration is that the 1D object must be controlled to undertake certain special motions, such as rotations around a fixed point or planar motions. If the motions of the 1D object are general rigid motions, can the cameras still be calibrated? This problem is discussed in this work. We know that camera calibration is impossible by using a single camera to observe the 1D object under general rigid motions [19]. However, the following result is proved in this paper: if, in a multi-camera setup, all cameras synchronously observe the 1D object undergoing at least six general motions (assuming the infinite points determined by these motions do not lie on a conic), then the intrinsic parameters and the pose relations of all these cameras can be calibrated. Unlike [5], there is no need for a base camera calibrated in advance.

The paper is organized as follows. Some preliminaries are introduced in Section 2. In Section 3, the calibration algorithm for the camera set is presented. Calibration experiments are reported in Section 4. Section 5 gives some concluding remarks.

2. Preliminaries
2.1. Camera model

In this paper, a 2D point is denoted by $m = [u, v]^T$ and a 3D point by $M = [X, Y, Z]^T$. The corresponding homogeneous vectors are denoted by $\tilde{m} = [u, v, 1]^T$ and $\tilde{M} = [X, Y, Z, 1]^T$, respectively. With the standard pinhole camera model, the relationship between a 3D point $M$ and its image

© 2007 IEEE

point $m$ (perspective projection) is given by

$$s\,\tilde{m} = K[R \;\; t]\,\tilde{M}, \qquad K = \begin{pmatrix} \alpha & \gamma & u_0 \\ 0 & \beta & v_0 \\ 0 & 0 & 1 \end{pmatrix}, \qquad (1)$$

where $s$ is a scale factor (the projection depth of the 3D point $M$), $P = K[R \; t]$ is called the camera matrix, and $[R \; t]$, called the extrinsic matrix, is the rotation and translation relating the world coordinate system to the camera coordinate system. $K$ is called the camera intrinsic matrix, where $\alpha, \beta$ denote the scale factors along the image $u$ and $v$ axes, $\gamma$ the skew, and $[u_0, v_0]^T$ the principal point.

2.2. 1D calibration object

Figure 1. Illustration of 1D calibration objects.

Assume the 1D calibration object has three points, say $A$, $B$, $C$, with $\|A - C\| = d_1$ and $\|B - C\| = d_2$. (Here we only consider the minimal configuration of the 1D calibration object, which consists of three collinear points.) For convenience of statement, the 1D calibration object is also called the line-segment $(ABC)$, and the line defined by the line-segment $(ABC)$ is denoted by $L_{ABC}$.

3. Calibration algorithm

Refer to Figure 1. Given the image points $\{a_{ij}, b_{ij}, c_{ij} : j = 1, \dots, n;\ i = 1, \dots, m\}$ of the line-segment $(ABC)$ under the $j$-th rigid motion in the $i$-th camera, our goal is to compute the metric projection matrices under the first camera's coordinate system:

$$P_1^{(e)} = K_1[I \;\; \mathbf{0}], \quad P_2^{(e)} = K_2[R_2 \;\; t_2], \quad \dots, \quad P_m^{(e)} = K_m[R_m \;\; t_m]. \qquad (2)$$

The parameters of the multi-camera system can be determined linearly. Firstly, the vanishing points of the 1D calibration object are computed. With these vanishing points, the infinite homographies between the cameras can be computed. Then the affine projection matrices and the metric projection matrices can be obtained. Unlike traditional stratified calibration, our algorithm does not need any prior knowledge about the cameras. And unlike the existing 1D calibration methods, the 1D calibration object can undergo general rigid motions instead of special motions [17, 19], and there is no need for a calibrated base camera [5].

3.1. Affine calibration

Let the 1D calibration object undergo a series of general rigid motions in the field of view of the multi-camera system.
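The first step of this pipeline, recovering the vanishing point of the imaged segment from the known cross ratio of the three collinear markers, can be sketched as follows. This is a minimal sketch, assuming perspective image points; the function name and the scalar line parameterization are illustrative, not code from the paper.

```python
import numpy as np

# Recover the vanishing point v of the line through collinear image points
# a, b, c from the known cross ratio Cross(a, b; c, v) = d1/d2.
def vanishing_point(a, b, c, d1, d2):
    """a, b, c: collinear 2D image points (images of A, B, C, in order).

    Returns the vanishing point of their supporting image line."""
    r = d1 / d2
    direction = (c - a) / np.linalg.norm(c - a)
    # Signed scalar coordinates of a, b, c along the image line (a = origin).
    ta, tb, tc = 0.0, (b - a) @ direction, (c - a) @ direction
    # Cross(a, b; c, v) = ((tc - ta)(tv - tb)) / ((tc - tb)(tv - ta)) = r;
    # solve this linear equation for the parameter tv of the vanishing point.
    tv = (r * (tc - tb) * ta - (tc - ta) * tb) / (r * (tc - tb) - (tc - ta))
    return a + tv * direction
```

For an affine (parallelism-preserving) view the denominator vanishes and the vanishing point recedes to infinity, consistent with the infinite point $V$ being imaged as an ideal point in that case.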
The correspondences of the image points $\{a_{ij}, b_{ij}, c_{ij} : j = 1, \dots, n;\ i = 1, \dots, m\}$ can be established. Since the geometry of the 1D calibration object is known, we can compute the vanishing point $v_{ij}$ of the line $L_{A_jB_jC_j}$ in the $i$-th camera. The simple ratio of the collinear points $A_j$, $B_j$ and $C_j$ is

$$\mathrm{Simple}(A_j, B_j; C_j) = d_1/d_2. \qquad (3)$$

Then the cross ratio of the collinear points $\{A_j, B_j; C_j, V_j\}$ is also $d_1/d_2$, i.e.

$$\mathrm{Cross}(A_j, B_j; C_j, V_j) = d_1/d_2, \qquad (4)$$

where $V_j$ is the infinite point of the line $L_{A_jB_jC_j}$. Since the perspective transformation preserves the cross ratio, we obtain the linear constraint on $v_{ij}$:

$$\mathrm{Cross}(a_{ij}, b_{ij}; c_{ij}, v_{ij}) = d_1/d_2. \qquad (5)$$

Hence, we can obtain the vanishing points $v_{ij}$. Since $\{\tilde{v}_{ij} : j = 1, \dots, n\}$ are image point correspondences of the plane at infinity $\pi_\infty$, the infinite homography between the first camera and the $i$-th camera satisfies

$$H_{1i}\,\tilde{v}_{1j} = \lambda_{ij}\,\tilde{v}_{ij}, \quad (i = 2, \dots, m;\ j = 1, \dots, n). \qquad (6)$$

Eliminating the unknown scale factors $\lambda_{ij}$, we obtain the linear constraint equations

$$[\tilde{v}_{ij}]_{\times}\,H_{1i}\,\tilde{v}_{1j} = \mathbf{0}, \quad (i = 2, \dots, m;\ j = 1, \dots, n). \qquad (7)$$

By solving the linear equations (7), we can determine the infinite homographies $H_{1i}$. With the homographies $H_{1i}$ and the image points $\{a_{ij}, b_{ij}, c_{ij}\}$, the projective reconstruction of points and cameras can be computed simultaneously with the technique of projective reconstruction using planes [11]. Since the plane inducing the homographies $H_{1i}$ is the plane at infinity, the computed structure and motion are affine. So we have the affine camera matrices

$$P_1^{(a)} = [I \;\; \mathbf{0}], \quad P_2^{(a)} = [H_{12} \;\; e_2], \quad \dots, \quad P_m^{(a)} = [H_{1m} \;\; e_m] \qquad (8)$$

and the affine reconstructions of the space points $\{A_j^{(a)}, B_j^{(a)}, C_j^{(a)}\}$.

3.2. Metric calibration

Based on the affine projection matrices shown in (8), the metric camera matrices must be of the following form [4]:

$$P_i^{(e)} = P_i^{(a)}\,\mathrm{diag}(K_1, 1), \quad (i = 1, \dots, m), \qquad (9)$$

and the metric reconstructions of the space points $\{A_j^{(e)}, B_j^{(e)}, C_j^{(e)}\}$ satisfy

$$\big[\tilde{A}_j^{(e)}\ \tilde{B}_j^{(e)}\ \tilde{C}_j^{(e)}\big] \simeq \mathrm{diag}(K_1^{-1}, 1)\,\big[\tilde{A}_j^{(a)}\ \tilde{B}_j^{(a)}\ \tilde{C}_j^{(a)}\big], \quad (j = 1, \dots, n). \qquad (10)$$

Here $K_1$ is the intrinsic parameter matrix of the first camera. Since $\|A_j^{(e)} - C_j^{(e)}\| = d_1$ and $\|B_j^{(e)} - C_j^{(e)}\| = d_2$, from equations (10) we have the linear constraints on $K_1$:

$$\begin{cases} (C_j^{(a)} - A_j^{(a)})^T\,\omega\,(C_j^{(a)} - A_j^{(a)}) = d_1^2 \\ (C_j^{(a)} - B_j^{(a)})^T\,\omega\,(C_j^{(a)} - B_j^{(a)}) = d_2^2 \end{cases} \qquad (11)$$

where $\omega = K_1^{-T}K_1^{-1}$.

Remark 1. Since affine transformations preserve the simple ratio of three collinear points, the two equations of (11) are not independent of each other for each $j$. Hence, as with the calibration methods using a 1D object under special motions, it is also necessary in our method that the 1D object moves at least six times.

Remark 2. From (11), it is not difficult to see that the critical motions of the 1D object are similar to those of the calibration method with a 1D object rotating around a known fixed point, i.e., a set of motions is critical if and only if the infinite points $\{\tilde{V}_j \simeq \tilde{C}_j - \tilde{A}_j : j = 1, \dots, n\}$ determined by the 1D object lie on a conic.

From (11), we obtain a linear solution for $\omega$, and then the intrinsic parameter matrix $K_1$ via Cholesky decomposition of $\omega$. Hence, the metric projection matrices are

$$P_1^{(e)} = [K_1 \;\; \mathbf{0}], \quad P_2^{(e)} = [H_{12}K_1 \;\; e_2], \quad \dots, \quad P_m^{(e)} = [H_{1m}K_1 \;\; e_m], \qquad (12)$$

and the metric reconstructions of the space points are

$$\big[\tilde{A}_j^{(e)}\ \tilde{B}_j^{(e)}\ \tilde{C}_j^{(e)}\big] \simeq \mathrm{diag}(K_1^{-1}, 1)\,\big[\tilde{A}_j^{(a)}\ \tilde{B}_j^{(a)}\ \tilde{C}_j^{(a)}\big]. \qquad (13)$$

Using QR decomposition, we can extract the intrinsic parameter matrix $K_i$ and the motion parameters $R_i$ and $t_i$ of the $i$-th camera from the metric projection matrix,

$$P_i^{(e)} = K_i[R_i \;\; t_i], \quad (i = 2, \dots, m). \qquad (14)$$

3.3. Bundle adjustment

The above solution is obtained by minimizing an algebraic distance which is not physically meaningful. We can refine it through bundle adjustment.
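The metric-upgrade step above reduces to a small linear system in the six entries of the symmetric matrix $\omega = K_1^{-T}K_1^{-1}$, followed by a Cholesky factorization. The following is a minimal sketch assuming exact, noise-free affine points; the helper names and the 6-vector packing of $\omega$ are illustrative assumptions.

```python
import numpy as np

# Each segment endpoint pair (X, Y) with known metric length d contributes
# one linear constraint (X - Y)^T w (X - Y) = d^2 on symmetric w.
def constraint_row(X, Y):
    """Row of the linear system in the 6 parameters of symmetric w."""
    x, y, z = X - Y
    return np.array([x * x, 2 * x * y, 2 * x * z, y * y, 2 * y * z, z * z])

def solve_intrinsics(pairs, lengths):
    """pairs: (X, Y) affine 3D endpoints; lengths: their metric distances.

    Returns K1 with the usual normalization K1[2, 2] = 1."""
    A = np.array([constraint_row(X, Y) for X, Y in pairs])
    b = np.array(lengths) ** 2
    p = np.linalg.lstsq(A, b, rcond=None)[0]     # linear least-squares solution
    w = np.array([[p[0], p[1], p[2]],
                  [p[1], p[3], p[4]],
                  [p[2], p[4], p[5]]])
    L = np.linalg.cholesky(w)                    # w = L L^T, L lower triangular
    K = np.linalg.inv(L).T                       # so K1^{-1} = L^T (upper)
    return K / K[2, 2]
```

As a sanity check, mapping metric segments through a known $K_1$ (an affine point is $K_1$ times the metric point, per the relation between the two strata) recovers $K_1$ exactly. When the segment directions lie on a conic, the matrix `A` becomes rank-deficient, which is exactly the critical-motion condition of Remark 2.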
Bundle adjustment is a nonlinear procedure involving the projection matrices and space points, attempting to maximize the likelihood of the reconstruction; it is equivalent to minimizing the reprojection error when the noise on the measured image points has an independent and identical Gaussian distribution. Here bundle adjustment solves the following nonlinear minimization:

$$\min_{\hat{P}_i,\,\hat{M}_j} \sum_{i,j} d\big(\tilde{m}_{ij},\ \hat{P}_i[\hat{M}_j^T, 1]^T\big)^2, \qquad (15)$$

where $d(\cdot,\cdot)$ is the geometric distance between two points, $\hat{P}_i$ and $\hat{M}_j \in \{\hat{A}_j, \hat{B}_j, \hat{C}_j\}$ are the estimated projection matrices and 3D points, and $m_{ij}$ is the detected (measured) image point of the 3D point $M_j$ in the $i$-th camera. Here $A_j$, $B_j$ and $C_j$ are collinear, so they are not independent. Since the direction of the line $L_{A_jB_jC_j}$ can be expressed as

$$\mathbf{n}_j = \frac{C_j - A_j}{\|C_j - A_j\|} = \begin{pmatrix} \sin\varphi_j \cos\theta_j \\ \sin\varphi_j \sin\theta_j \\ \cos\varphi_j \end{pmatrix}, \qquad (16)$$

the points $B_j$ and $C_j$ are given by

$$B_j = A_j + \|A_jB_j\|\,\mathbf{n}_j = A_j + (d_1 - d_2)\,\mathbf{n}_j \qquad (17)$$

and

$$C_j = A_j + \|A_jC_j\|\,\mathbf{n}_j = A_j + d_1\,\mathbf{n}_j. \qquad (18)$$

Let $\hat{a}_{ij}$ ($\hat{b}_{ij}$, $\hat{c}_{ij}$) be the reprojection of $A_j$ ($B_j$, $C_j$). Then the minimization problem (15) can be rewritten as

$$\min_{\hat{P}_i,\,\hat{A}_j,\,\hat{\varphi}_j,\,\hat{\theta}_j} \sum_{i=1}^{m} \sum_{j=1}^{n} \big(\|a_{ij} - \hat{a}_{ij}\|^2 + \|b_{ij} - \hat{b}_{ij}\|^2 + \|c_{ij} - \hat{c}_{ij}\|^2\big). \qquad (19)$$

Taking the metric reconstruction from the previous section as an initial value, the nonlinear minimization can be carried out using the Levenberg-Marquardt algorithm [9].

4. Experiments

The proposed algorithm has been tested on computer simulated data and real image data.
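The segment parameterization used above, one endpoint plus two direction angles, can be sketched as follows; the function name is an illustrative assumption.

```python
import numpy as np

# Each segment pose j is described by endpoint A_j plus two angles
# (phi, theta), so B_j and C_j are not free variables of the optimization.
def segment_points(A, phi, theta, d1, d2):
    """Return (A, B, C) with ||A - C|| = d1 and ||B - C|| = d2."""
    n = np.array([np.sin(phi) * np.cos(theta),   # unit direction of the line,
                  np.sin(phi) * np.sin(theta),   # spherical angles phi, theta
                  np.cos(phi)])
    B = A + (d1 - d2) * n                        # B lies between A and C
    C = A + d1 * n
    return A, B, C
```

Reprojecting the three points through each estimated camera and summing the squared distances to the measured image points gives the cost being minimized; stacking these residuals over all cameras and motions yields the input to the Levenberg-Marquardt iteration.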

Figure 2. The planform of the synthetic experimental setup in one trial.

Figure 3. Sample images of the camera set for 1D calibration.

Figure 6. Sample images of the camera set for 2D calibration.

Table 1. Intrinsic parameters of the six simulated cameras.

4.1. Simulated data

We performed many simulations with two cameras, three cameras, six cameras, and so on. Due to the limitation of space, we only report the results of the simulation with six cameras. The intrinsic parameters of the six simulated cameras are shown in Table 1. The first camera is located at one vertex of a regular hexagon, with its optical axis coinciding with the line joining that vertex to the common center of the hexagon. Each of the other five cameras lies inside a small cube whose center is at one of the other five vertices of the hexagon, and for each of these cameras the angle between the optical axis and the line joining the corresponding vertex to the common center of the hexagon is a bounded random value. The simulated line-segment (ABC) has length $d_1 = 9$, with point B at a trisection point of (ABC). The line-segment (ABC) undergoes a number of general motions inside a cube whose center coincides with the common center of the hexagon, with the image of the line-segment (ABC) kept inside the image plane. Figure 2 shows the planform of the camera setup and the 1D object's position in one trial. Gaussian noise with zero mean and standard deviation $\sigma$ is added to the image points. We use two methods, the linear 1D calibration algorithm and the bundle adjustment algorithm, to calibrate the camera set. The estimated camera
parameters are compared with the ground truth, and the RMS errors are measured. The noise level $\sigma$ is varied in uniform steps. At each noise level, independent trials are performed, and the results are shown in Figures 4 and 5. Figure 4 displays the relative errors of the intrinsic parameters. Here we measure the relative errors with respect to $\alpha$, as proposed by Triggs in [14]. The errors increase almost linearly with the noise level, and bundle adjustment produces significantly better results than the linear algorithm. Figures 5(a)-(e) display the errors of the other cameras' Euler angles relative to the first camera, and Figure 5(f) displays the errors of the other cameras' translations relative to the first camera. These errors also increase almost linearly with the noise level, and bundle adjustment significantly refines the results of the linear algorithm.

Figure 7. The image triplet of the 2D calibration object for reconstruction.

Figure 4. The relative errors of the six simulated cameras' intrinsic parameters.

4.2. Real images

For the experiment with real data, three cameras are used, each with a different image resolution. We used three toy beads and strung them together on a stick; the distance between the first and the second bead and that between the second and the third were measured beforehand. One end of the stick is put on a tripod, and the stick is made to undergo general rigid motions. Fifteen triplets of images are taken, two of which are shown in Figure 3: Figures 3(a), (b) and (c), corresponding to the 1st, the 2nd and the 3rd camera respectively, form the triplet for one motion, and Figures 3(d), (e) and (f) form another triplet for a later motion. Here white circle marks are used to give prominence to the beads in the images. The beads are extracted from the images manually. The camera parameters are calibrated with the proposed 1D algorithm with bundle adjustment. For comparison, we also used the 2D plane-based calibration technique described in [18] to calibrate the same cameras. Fifteen triplets of images are taken, one of which is shown in Figure 6; Figures 6(a), (b) and (c) respectively correspond to the 1st, the 2nd and the 3rd camera.

Table 2. Calibration results of the three cameras.

Table 3. Reconstruction results with the calibration results.

The three cameras' intrinsic parameters estimated with the 1D and 2D calibration methods are shown in Table 2. For each camera, the first row shows the estimate from the 1D calibration algorithm, while the second row shows the result of the 2D plane-based method.
We can see that there is little difference between the results of the two methods. We also perform reconstruction with the calibration results to compare the two methods.

Figure 5. The relative errors of the six simulated cameras' extrinsic parameters.

The image triplet of the 2D calibration object used for performing reconstruction is shown in Figure 7. The reconstruction results and the pose relations of the three cameras are shown in Figure 8. Three planes of the 2D calibration object are fitted to the reconstructed 3D points, and the angles between pairs of them, $\theta_1$, $\theta_2$ and $\theta_3$ (whose ground truth is 90 degrees), are computed, as shown in Table 3. The ground truth of the distance between each pair of neighboring corner points is known; the mean $\bar{d}$ and standard deviation $\sigma_d$ of these distances computed from the reconstructed 3D points are also shown in Table 3. We can see that the two methods are comparable, and in some cases the 1D calibration can outperform the 2D calibration. This may be chiefly because: (1) our 2D pattern, a paper printed and pasted on a whiteboard, is not accurate enough; (2) the 1D object can be freely placed in the scene volume of interest, which can increase the calibration accuracy. There is little difference between the reconstruction results of the 1D calibration and the ground truth. The remaining differences may come from several sources. One is image noise and the inaccuracy of the extracted data points. Another is our currently rudimentary experimental setup: there was eccentricity between the holes made manually and the real axes of the beads, which inevitably results in errors in the extracted data points. Besides, the positioning of the beads was done with a ruler under bare eye inspection. Although there are so many error sources, the proposed 1D calibration algorithm is comparable with the 2D plane-based method. This demonstrates the applicability of the proposed 1D calibration algorithm in practice.
5. Conclusions

In this paper, we have investigated the possibility of multi-camera calibration using a 1D object undergoing general rigid motions. A linear algorithm for multi-camera calibration is proposed, followed by a bundle adjustment to refine the results. Both computer-simulated and real image data have been used to test the proposed algorithm, and the results show that it is valid and robust. In addition, for multiple cameras mounted apart from each other, all cameras must observe the whole calibration object simultaneously, which is a prerequisite for calibration and hard to satisfy with 2D or 3D calibration objects; this is not a problem for a 1D object. Most importantly, the proposed calibration algorithm is easier and more practical because the 1D object performs general rigid motions instead of rotations around a fixed point or planar motions, and no calibrated base camera is needed.

Figure 8. Reconstruction results of the 2D object: (a) with the 1D calibration results; (b) with the 2D calibration results.

Acknowledgments: This study was partially supported by the National Natural Science Foundation of China and the National Key Basic Research and Development Program of China.

References

[1] Y. I. Abdel-Aziz and H. M. Karara. Direct linear transformation from comparator coordinates into object space coordinates. In ASP Symposium on Close-Range Photogrammetry, Virginia, USA, 1971.
[2] P. Hammarstedt, P. Sturm, and A. Heyden. Degenerate cases and closed-form solutions for camera calibration with one-dimensional objects. In Proceedings of ICCV, Beijing, China, 2005.
[3] R. Hartley. Estimation of relative camera positions for uncalibrated cameras. In Proceedings of ECCV, Genova, Italy, 1992.
[4] R. Hartley and A. Zisserman. Multiple View Geometry in Computer Vision. Cambridge University Press, second edition, 2004.
[5] Y. Kojima, T. Fujii, and M. Tanimoto. New multiple camera calibration method for a large number of cameras. In Proceedings of the SPIE.
[6] Q. T. Luong and O. D. Faugeras. Self-calibration of a moving camera from point correspondences and fundamental matrices. Int. J. Comput. Vision, 22(3):261-289, 1997.
[7] S. J. Maybank and O. D. Faugeras. A theory of self-calibration of a moving camera. Int. J. Comput. Vision, 8(2):123-151, 1992.
[8] X. Q. Meng, H. Li, and Z. Y. Hu. A new easy camera calibration technique based on circular points. In Proceedings of BMVC, Bristol, UK, 2000.
[9] J. J. Moré. The Levenberg-Marquardt algorithm: implementation and theory. In Numerical Analysis, G. A. Watson, ed. Springer-Verlag, 1977.
[10] M. Pollefeys, L. Van Gool, and A. Oosterlinck. The modulus constraint: a new constraint for self-calibration. In Proceedings of ICPR, Vienna, Austria, 1996.
[11] C. Rother. Multi-view reconstruction and camera recovery using a real or virtual reference plane.
PhD thesis, Computational Vision and Active Perception Laboratory, Kungl Tekniska Högskolan, 2003.
[12] P. Sturm and S. J. Maybank. On plane-based camera calibration: a general algorithm, singularities, applications. In Proceedings of CVPR, Colorado, USA, 1999.
[13] B. Triggs. Auto-calibration and the absolute quadric. In Proceedings of CVPR, Puerto Rico, USA, 1997.
[14] B. Triggs. Autocalibration from planar scenes. In Proceedings of ECCV, 1998.
[15] R. Tsai. An efficient and accurate camera calibration technique for 3D machine vision. In Proceedings of CVPR, Miami Beach, USA, 1986.
[16] J. Y. Weng, P. Cohen, and M. Herniou. Camera calibration with distortion models and accuracy evaluation. IEEE Trans. Pattern Anal. Mach. Intell., 14(10):965-980, 1992.
[17] F. C. Wu, Z. Y. Hu, and H. J. Zhu. Camera calibration with moving one-dimensional objects. Pattern Recognition, 38(5):755-765, 2005.
[18] Z. Y. Zhang. Flexible camera calibration by viewing a plane from unknown orientations. In Proceedings of ICCV, Kerkyra, Greece, 1999.
[19] Z. Y. Zhang. Camera calibration with one-dimensional objects. IEEE Trans. Pattern Anal. Mach. Intell., 26(7):892-899, 2004.