Omnivergent Stereo-panoramas with a Fish-eye Lens

CENTER FOR MACHINE PERCEPTION, CZECH TECHNICAL UNIVERSITY

Omnivergent Stereo-panoramas with a Fish-eye Lens (Version 1.)

Hynek Bakstein and Tomáš Pajdla
bakstein@cmp.felk.cvut.cz, pajdla@cmp.felk.cvut.cz

CTU CMP Research Report, ISSN
Available at ftp://cmp.felk.cvut.cz/pub/cmp/articles/bakstein/bakstein-tr.pdf

This research was supported by the following grants: GAČR 12/1/971, EU Fifth Framework Programme project OMNIVIEWS IST, and MŠMT KONTAKT 21/9.

Research Reports of CMP, Czech Technical University in Prague, No. 22, 21. Published by the Center for Machine Perception, Department of Cybernetics, Faculty of Electrical Engineering, Czech Technical University, Technická 2, Prague 6, Czech Republic.

Omnivergent Stereo-panoramas with a Fish-eye Lens

Hynek Bakstein and Tomáš Pajdla

Abstract

We present a novel approach to the calibration of ultra wide angle fish-eye lenses. These lenses have a field of view larger than 180°, so the common calibration methods based on the classical pinhole camera model with a planar retina cannot be employed. A new camera model is proposed, together with a calibration method for extracting its parameters. Experiments evaluating the proposed technique are presented. We also present an example of a camera that can be described by the proposed model and calibrated by the proposed method. It consists of a standard off-the-shelf fish-eye adapter from Nikon and a common CCD camera. Possible fields of employment of this sensor include the construction of 360° × 360° mosaics.

1. Introduction

A large field of view (FOV) is useful, or even mandatory, for some computer vision applications. A larger FOV ensures that more points are visible in one image, which benefits algorithms for 3D reconstruction. Several ways to enlarge the FOV exist: mirrors, lenses, moving parts, or a combination of these can be employed for this purpose. In this paper we focus on the use of a special lens, the Nikon FC-E8 fish-eye converter [3], which provides a FOV of 183°. Such a sensor can be used in a practical realization of the 360° × 360° mosaics [7].

The 360° × 360° mosaic is a good example of a non-central camera where the light rays are tangent to a circle, see Figure 1. Let us describe the geometry of the light rays that form the mosaic. A plane π is rotated on a circular path C with a radius r. The plane π is perpendicular to the plane δ and, moreover, it is tangent to the circle C. At each rotation position, all light rays lie in the plane π and intersect the point where the plane π touches the circle C.

Because each point outside the circle C can be observed by two light rays, the camera provides a complete spherical mosaic for both the left and the right eye after a rotation of the plane π through 360°. One important property of the 360° × 360° mosaic is that it produces a stereo pair of images in which corresponding points lie on the same image row. This simplifies the correspondence search to a one-dimensional search along the image rows.

Figure 1: Geometry of the 360° × 360° mosaic.

Let us mention two possible realizations of the 360° × 360° mosaics. One requires a mirror, which can be either conical, observed by a telecentric lens (depicted in Figure 2(a)), or of a specially designed shape, observed by a classical pinhole camera [7]. Another realization can be made with an easily available off-the-shelf Nikon fish-eye converter with a FOV equal to 183°. The points in the image corresponding to the light rays lying in the plane π have to be selected from the image at each rotation step. Therefore, the lens has to be calibrated.

Figure 2: Two possible realizations of the 360° × 360° mosaic. (a) a telecentric camera and a conical mirror, (b) a central camera with the Nikon fish-eye converter.

Methods for the calibration of wide angle lenses that can be found in the literature [1, 13, 12, 2] assume that the corrected image can be unwarped to a planar

retina. That is not possible with a FOV larger than 180°, where a spherical retina has to be used instead. A new omnidirectional central camera model has to be proposed.

The structure of the paper is as follows. In Section 2 we propose the camera model. A method for estimating its parameters is then presented in Section 3. Experimental results can be found in Section 4.

2. Camera model

The main gist of our novel approach is that a planar retina cannot be used in the model of an omnidirectional camera. Let us adopt the definition of the omnidirectional camera from [8]: An image is directional iff there exists a vector u ∈ R³ such that uᵀx > 0 for all vectors x pointing from the camera center towards the scene points. We say that the image is omnidirectional if there is no such u. An image obtained by the 183° FOV FC-E8 lens can be omnidirectional. Therefore, we cannot construct the light rays in the camera coordinate system as in the directional case, by adding a unit z coordinate to point coordinates in the image plane (u, v), see Figure 3(a). The light rays have to be constructed in a truly omnidirectional way, by transforming the image plane coordinates (u, v) into coordinates (x, y, z) of ray direction vectors expressed with respect to some camera coordinate system, as depicted in Figure 3(b).

Figure 3: From image coordinates to light rays: (a) a directional and (b) an omnidirectional camera.
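To make the definition of a directional image concrete, the following sketch (a hypothetical helper, not part of the report) tests the sufficient condition by trying u equal to the mean ray direction; failing this heuristic does not prove that the image is omnidirectional, which would require a full feasibility test.

```python
import numpy as np

def is_directional_heuristic(rays):
    """Sufficient test for directionality in the sense of [8]: if the
    mean ray direction u satisfies u.x > 0 for every ray x, the image
    is directional.  Failure of this heuristic is inconclusive."""
    X = np.asarray(rays, dtype=float)
    u = X.mean(axis=0)              # candidate vector u
    return bool(np.all(X @ u > 0))
```

A bundle of rays concentrated around the optical axis passes the test, while a bundle containing two opposite rays (as a >180° FOV lens can produce) fails it.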

By a camera model we understand the mapping from image points to rays that pass through the camera center. What is the camera model of a single CCD image obtained with the FC-E8 lens? Figure 4 shows a plane with concentric circles placed in front of the lens. The center of the circles is placed approximately at the center of the image. The radii of the circles equal R tan θ, where θ is the angle between the camera rays passing through the points on a circle and the optical axis of the lens, and R is the distance of the plane with the circles from the camera center. We can notice that the circles image to circles and that they have a constant step in their radii, measured from the image center. Therefore we can formulate the observation that there is a linear relationship between the angle θ and the radius r of a point in the image. We have verified these observations experimentally, see Figures 7-9. These figures show the dependence of the radius r on the angle θ. Notice that the function is almost linear; however, its first and second derivatives show that it can be more precisely approximated by a polynomial function. We will discuss this fact later.

Figure 4: An image of circles with radii set to the tangent of a constantly incremented angle results in concentric circles with a constant increment in radii in the image.

Let (u, v) denote coordinates of a point in the image measured in an orthogonal basis as shown in Figure 5. CCD chips often have a different distance between the cells in the vertical and the horizontal direction. This results in a distortion, because the image is not scaled equally in the horizontal and vertical direction. Therefore, we introduce a parameter β representing the ratio between the scales of the horizontal and the vertical axis. This distortion causes the circles to

appear as ellipses in the image, as shown in Figure 5. A matrix expression of the distortion can be written in the following form:

    K⁻¹ = [ 1   0   -u₀  ]
          [ 0   β   -βv₀ ]        (1)
          [ 0   0    1   ]

This matrix is a simplified inverse of the intrinsic calibration matrix of a pinhole camera [5]. The displacement of the center of the image is expressed by the terms u₀ and v₀; the skewness of the image axes is neglected in our case, because we suppose that our camera has orthogonal pixels.

Figure 5: A circle in the image plane is distorted due to a different scale of the axes. Therefore we observe an ellipse instead of a circle in the image.

Under the above observations, we can formulate the model of the camera. Provided with coordinates of some image point u = (u, v, 1)ᵀ, we are able to compute its rectified image coordinates u′ = (u′, v′, 1)ᵀ by the multiplication by K⁻¹:

    u′ = K⁻¹ u.        (2)

We can then compute the corresponding polar coordinates with respect to the center of the distortion (0, 0), that is, the radius r of the circle on which the point lies and the angle ϕ between the point and the u axis of the image coordinate system, see Figure 6(a). The radius r can be computed as

    r = √((u′)² + (v′)²).        (3)

The angle ϕ is then expressed as

    ϕ = atan2(v′, u′).        (4)
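Equations (1)-(4) can be sketched in a few lines; the function name and the example values of the center (u₀, v₀) and β below are illustrative assumptions, not calibrated values.

```python
import numpy as np

def pixel_to_polar(u, v, u0, v0, beta):
    """Rectify a pixel by K^{-1} (eqs. 1-2) and convert the result to
    polar coordinates (r, phi) about the distortion center (eqs. 3-4)."""
    K_inv = np.array([[1.0, 0.0,  -u0],
                      [0.0, beta, -beta * v0],
                      [0.0, 0.0,  1.0]])
    u_r, v_r, _ = K_inv @ np.array([u, v, 1.0])
    r = np.hypot(u_r, v_r)        # eq. (3)
    phi = np.arctan2(v_r, u_r)    # eq. (4): angle from the u' axis
    return r, phi
```

For example, with a hypothetical center (300, 200) and β = 1, the pixel (303, 204) rectifies to (3, 4) and hence to radius 5.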

Figure 6: (a) From orthogonal coordinates (u′, v′) to polar coordinates (r, ϕ). (b) The camera coordinate system and its relation to the angles θ and ϕ.

The value of r determines the angle θ between the light ray in the camera centered coordinate system and the FC-E8 optical axis (which is equal to the z axis of the camera centered coordinate system). We have observed that this relation is approximately linear and can be more precisely expressed as a polynomial of the third degree. See Figure 7 for the graph of the dependence of the radius r of an image point on the angle θ, and Figure 8 and Figure 9 for its first and second derivatives, which were estimated using finite differences. Notice that the second derivative is a linear function and therefore the polynomial has degree three. Now we are ready to express the angle θ as a function of r:

    θ = ar³ + br² + cr,        (5)

where a, b, and c are unknown coefficients of the polynomial. Provided with ϕ, θ, and r, the light ray x′ = (x′, y′, z′)ᵀ in the camera centered coordinate frame can be computed as

    x′ = ( sin θ cos ϕ )
         ( sin θ sin ϕ )        (6)
         (    cos θ    )

as depicted in Figure 6(b). The directions x′ are determined by six parameters: the image center (u₀, v₀), the three polynomial coefficients, and β. These parameters can be estimated from unknown lines in the scene without the full metric calibration of the camera. For a metric camera calibration, a known scene with calibration points measured with respect to some scene coordinate system is required.
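The mapping of equations (5)-(6) from polar image coordinates to a unit ray direction can be sketched as follows; the coefficient values used in the usage example are hypothetical placeholders for a real calibration.

```python
import numpy as np

def polar_to_ray(r, phi, a, b, c):
    """Cubic model theta = a r^3 + b r^2 + c r (eq. 5) followed by the
    spherical parametrisation of the ray direction (eq. 6)."""
    theta = a * r**3 + b * r**2 + c * r
    return np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])
```

By construction the result has unit norm, and the image center (r = 0) maps to the optical axis (0, 0, 1)ᵀ.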

Figure 7: Dependence of the radius r of an image point on the angle θ (measured points and a cubic fit).

Figure 8: Dependence of the first derivative of r(θ) on the angle θ (measured points and a quadratic fit).

Figure 9: Dependence of the second derivative of r(θ) on the angle θ (measured points and a line fit).

Denoting the scene coordinates of the calibration points by X = (x, y, z)ᵀ, we can express their coordinates in the camera centered coordinate system as X̃ = (x̃, ỹ, z̃)ᵀ by the equation

    X̃ = RX + T,        (7)

where R represents a rotation and T stands for a translation. The matrix R has three degrees of freedom and the vector T = (t₁, t₂, t₃)ᵀ is expressed by three parameters. This yields six unknown parameters. Together with the image center, the three parameters of the nonlinear distortion, and the parameter β, we are left with twelve unknowns.

3. Estimation of the model parameters

The camera can be calibrated from a known scene. However, to eliminate the nonlinear distortion, we only need to observe a set of straight lines, similarly as it was done for perspective cameras [9]. After determining the parameters of the transformation projecting the straight lines in the scene to curves in the image, we obtain an internally calibrated omnidirectional central camera. Full calibration, which means the estimation of the intrinsic parameters (the image center and β), the nonlinear distortion, and the extrinsic parameters [4], can be performed with knowledge of known points in the scene and their corresponding images. Figure 10(a) shows the scene which was used during our experiments and Figure 10(b) contains the images of these points. Note that the middle lines of the points in both directions go through the center of the image (marked by "+"). The

green circle illustrates the field of view of the fish-eye image and the blue ellipse marks the points corresponding to the light rays which lie in the plane π.

Figure 10: (a) Points in the scene, where the black dot denotes the camera center, and (b) their images, where the "+" is the image center.

3.1 Calibration of internal parameters

The main idea of the elimination of the nonlinear distortion is based on the fact that the light rays corresponding to points lying on a line in the scene span a two dimensional subspace of the three dimensional scene space. Therefore, if we write down their coordinates in a matrix form

    A_j = [ x_1  x_2  ...  x_n ]
          [ y_1  y_2  ...  y_n ]        (8)
          [ z_1  z_2  ...  z_n ]

where n is the number of points on a line and j = 1..l, where l denotes the number of lines, the matrix A_j should have rank 2. Moreover, the autocorrelation matrix

    B_j = A_j A_jᵀ        (9)

has to be of rank 2 as well. Therefore we can formulate an objective function

    J = Σ_{j=1}^{l} (λ_3^j)²,        (10)

where λ_3^j is the smallest eigenvalue of the matrix B_j. We minimize this function with respect to the six parameters (the image center, β, and the three polynomial coefficients) used to compute the light rays in (6).

3.2 Complete camera calibration

Putting together all the parameters described at the end of Section 2, we are left with twelve unknown parameters to estimate. We define the objective function

    J = Σ_{i=1}^{N} ‖ x′_i/‖x′_i‖ - X̃_i/‖X̃_i‖ ‖,        (11)

where ‖·‖ denotes the Euclidean norm and N is the number of points. This function closely approximates the angle between the rays provided by the camera model and the true ones. A MATLAB implementation of the Levenberg-Marquardt [6] minimization was employed in order to minimize the objective function (11).

4. Experimental results

In this section we show the experiments verifying each step of the proposed calibration method. Our experimental setup is shown in Figure 11(a). It consisted of a Pulnix TM-11 camera with a standard 12.5mm lens on which the Nikon fish-eye converter FC-E8 was mounted. The camera was placed on a motorized turntable. Seven calibration points in one line were deployed in the scene. The turntable was rotated from -90° to 90° with a step of 10°; therefore a set of 19 images was acquired. One of the images can be seen in Figure 11(b).

Figure 11: (a) The experimental setup and (b) one of the images in the sequence used for calibration, showing the seven points in one row.
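Both objective functions can be sketched in a few lines (the helper names are ours; a Levenberg-Marquardt or other nonlinear least-squares routine would then minimize them over the camera parameters):

```python
import numpy as np

def line_fit_error(A):
    """Smallest eigenvalue lambda_3 of B_j = A_j A_j^T (eq. 9).  It is
    zero exactly when the 3 x n matrix A of ray directions has rank <= 2,
    i.e. when the rays through collinear scene points span a plane."""
    B = A @ A.T
    return float(np.linalg.eigvalsh(B)[0])  # eigenvalues in ascending order

def internal_objective(ray_matrices):
    """Objective of eq. (10): sum of squared smallest eigenvalues over lines."""
    return sum(line_fit_error(A) ** 2 for A in ray_matrices)

def full_objective(model_rays, scene_points):
    """Objective of eq. (11): sum of norms of differences of unit rays,
    closely approximating the angle between model rays and true rays."""
    J = 0.0
    for x, X in zip(model_rays, scene_points):
        x, X = np.asarray(x, float), np.asarray(X, float)
        J += np.linalg.norm(x / np.linalg.norm(x) - X / np.linalg.norm(X))
    return J
```

A set of coplanar rays gives a zero line fit error, while three orthogonal rays (rank 3) do not; the full objective vanishes when every model ray points at its scene point.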

We can create a scene with all the points composing a 3D object, instead of a line, if we fix a scene coordinate system to one orientation of the camera and, for each of its rotations, compute the coordinates of the calibration points with respect to this fixed coordinate system. The points in the scene will then lie on a cylinder, see Figure 10(a). We will refer to the line of 7 points corresponding to points acquired at one rotation step, and therefore one image, by the term "column of points". We can also create an image of all these points by composing their coordinates detected in each of the 19 images into one synthetic image, as depicted in Figure 10(b). We will use this artificial scene in some of the following experiments. However, the axis of rotation does not have to be exactly in the center of projection of the camera. Then, the camera will not only rotate, but also move along a circle. This does not change the scene coordinates of the points on the cylinder; instead, it influences the translation of the camera with respect to the scene coordinate system by the value of the radius of the circle on which the camera rotates.

First, we show that, provided with image coordinates of some points, we are able to determine the intrinsic parameters of our camera, that is, the image center, β, and the coefficients of the polynomial defined in (5). We set the image center to the center of the circular image formed by the lens. The parameter β was set to 1 and the polynomial coefficient values were set to an initial fit obtained from one line of points in the image passing through the distortion center, as shown in Figure 7. The line fit error, that is, the smallest eigenvalue of the matrix B_j from (9), for this initial guess of the parameters and after the optimization of these parameters (described in Section 3.1), is depicted in Figure 12(a). Note the significant decrease of the line fit error.
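The construction of this synthetic cylinder scene can be sketched as follows; the helper name and the choice of the turntable axis as the z axis are our assumptions for illustration.

```python
import numpy as np

def cylinder_scene(column, n_steps=19, step_deg=10.0, start_deg=-90.0):
    """Rotate one column of calibration points (an (m, 3) array) about
    the turntable (z) axis, producing the cylinder of scene points:
    one column of points per rotation step, i.e. per image."""
    columns = []
    for k in range(n_steps):
        ang = np.deg2rad(start_deg + k * step_deg)
        Rz = np.array([[np.cos(ang), -np.sin(ang), 0.0],
                       [np.sin(ang),  np.cos(ang), 0.0],
                       [0.0,          0.0,         1.0]])
        columns.append(column @ Rz.T)
    return np.array(columns)  # shape (n_steps, m, 3)
```

Each rotated point keeps its distance from the axis and its height, so the 19 columns indeed lie on a cylinder.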
Figure 12: (a) The line fit error for all 19 lines. (b) The calibration error (the distance between X̃ and x′) for the odd calibration points.

Then, we compare the results of the estimation of parameters in the full optimization procedure, minimizing the objective function (11), with the values obtained with the parameters computed in the previous step. The parameters were estimated for the odd columns of the calibration points and were tested with the even columns. The resulting calibration error, measured in the camera coordinate system units, can be seen in Figure 12(b). The value of the objective function (11) decreased from 5.13 to 1.91 and the maximal error for the columns of the points decreased from 0.185 to 0.065.

Figure 13(a) shows the development of the parameters during the full optimization procedure described in Section 3.2. The value of the objective function (11), shown in blue, is depicted in Figure 13(b). Note that the values do not change after 3 iterations.

Figure 13: (a) Values of the camera parameters during the full optimization (only a detailed view of the values of a subset of parameters during the first 7 iterations is displayed) and (b) the development of the calibration error during the full optimization.

In the next experiment, we compare the coordinates of points in the scene with the point coordinates obtained by a back projection of their images. We obtained the back projections of the points by intersecting a light ray, defined by the point images, with the cylinder on which the scene points lie. Figure 14 shows this back projection in 3D. Scene coordinates of the points on the cylinder are marked by blue dots. Lines joining the points in the figure correspond to one rotation step, which means that the lines correspond to one column of the points. Red dots and lines denote the projection of points using the camera parameters computed by the nonlinear optimization described in Section 3.1. Green dots and lines mark the points projected using the parameters obtained by the partial optimization. Finally, black dots denote the points computed with the parameters estimated in the full optimization procedure, described in Section 3.2. Note that the points projected using the parameters estimated in the nonlinear optimization (Section 3.1) have a significant reprojection error. However, both the partial and the full optimization brought the points closer to the measured coordinates.

Figure 14: Back projection of detected points to the scene with parameters obtained from the nonlinear optimization (Section 3.1), the partial optimization, and the full optimization (Section 3.2).

For better clarity, we illustrate the back projection of the points to the scene on a cylinder unwarped to a plane. Figure 15 shows the points back projected using the parameters estimated in the nonlinear optimization (Section 3.1). Note a rotation of the points. Figure 16 shows the same situation, but in this case the points were back projected employing the parameters estimated in the full optimization (Section 3.2). The lines joining the measured and back projected points were scaled in Figure 17 for a better illustration of the calibration error. Note that there is still some systematic distortion. The nature of this error is yet to be found.

Finally, we show that we are able to select the ellipse corresponding to the light rays lying in the plane π. The angle between these rays and the optical axis is

Figure 15: Back projection of detected points to the scene with the parameters obtained from the nonlinear optimization (Section 3.1). The results are displayed on a cylinder unrolled to a plane; the blue "x" denotes the measured points and the red "+" marks the reprojected points. Red lines joining the corresponding points illustrate the reprojection error.

π/2, therefore we have to select the points which have the angle θ (see (5)) equal to π/2. This step is crucial for the employment of the proposed sensor in a realization of the 360° × 360° mosaic. The selection of the proper ellipse assures that the corresponding points in the mosaic pair will lie on the same image rows, which simplifies the correspondence search algorithms. Figures 18 and 19 show the right and the left eye mosaic, respectively. Note the significant disparity of the objects in the scene. Five examples of corresponding points are marked by a yellow "x". Enlarged parts of the mosaics showing one corresponding point can be found in Figure 20(a) for the right mosaic and Figure 20(b) for the left mosaic. Notice that the points lie on the same image row.

Figure 16: Back projection of detected points to the scene with the parameters obtained from the full optimization (Section 3.2). The results are displayed on a cylinder unrolled to a plane; the blue "x" denotes the measured points. Black lines joining the corresponding points illustrate the reprojection error.

5. Conclusion and outlook

We have proposed a novel camera model and a method for estimating its parameters. This model describes optics which can be used in a realization of the 360° × 360° mosaics. This realization uses standard off-the-shelf components and does not require a mirror, thus simplifying the design of the 360° × 360° mosaic camera. Experimental results verify that the model describes the real camera and that the parameters can be recovered with a reasonable precision.

However, some questions remain open. How many lines are needed to estimate the parameters of the nonlinear distortion and what are the degenerate cases? What is the number of points required for the full calibration? Also, we would like to test our sensor with some dense stereo reconstruction algorithm, for example [10]. Mosaic images obtained with our camera are rectified and therefore the

Figure 17: Back projection of detected points to the scene with the parameters obtained from the full optimization (Section 3.2). The results are displayed on a cylinder unrolled to a plane; the blue "x" denotes the measured points. Black lines joining the corresponding points, illustrating the reprojection error, were scaled 2.5 times.

employment of such an algorithm should be straightforward; however, it has to be adapted to reflect the topology of the mosaic images, which is the torus [11].

References

[1] A. Basu and S. Licardie. Alternative models for fish-eye lenses. Pattern Recognition Letters, 16(4), 1995.

[2] S. S. Beauchemin, R. Bajcsy, and G. Givaty. A unified procedure for calibrating intrinsic parameters of fish-eye lenses. In Vision Interface (VI'99), May 1999.

[3] Nikon Corp. Nikon www pages:

Figure 18: Right eye mosaic with the selected points marked by a yellow "x".

Figure 19: Left eye mosaic with the selected points marked by a yellow "x".

[4] O. Faugeras. Three-Dimensional Computer Vision: A Geometric Viewpoint. MIT Press, 1993.

[5] R. Hartley and A. Zisserman. Multiple View Geometry in Computer Vision. Cambridge University Press, Cambridge, UK, 2000.

[6] J. J. Moré. The Levenberg-Marquardt algorithm: Implementation and theory. In G. A. Watson, editor, Numerical Analysis, Lecture Notes in Mathematics

Figure 20: Detail of one corresponding pair of points (a) in the right mosaic and (b) in the left mosaic. Note that the points lie on the same image row.

630, pages 105-116. Springer Verlag, 1978.

[7] S. K. Nayar and A. Karmarkar. 360 x 360 mosaics. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2000), Hilton Head, South Carolina, volume 2, June 2000.

[8] T. Pajdla, T. Svoboda, and V. Hlaváč. Epipolar geometry of central panoramic cameras. In R. Benosman and S. B. Kang, editors, Panoramic Vision: Sensors, Theory, and Applications. Springer Verlag, Berlin, Germany, 1st edition.

[9] T. Pajdla, T. Werner, and V. Hlaváč. Correcting radial lens distortion without knowledge of 3-D structure. Technical Report K335-CMP, FEE CTU, Karlovo náměstí 13, Praha, Czech Republic, June 1997.

[10] R. Šára. Stable monotonic matching for stereoscopic vision. In R. Klette and S. Peleg, editors, Robot Vision, Proceedings of the International Workshop RobVis 2001, number 1998 in LNCS, Berlin, Germany, February 2001. Springer Verlag.

[11] H.-Y. Shum, A. Kalai, and S. M. Seitz. Omnivergent stereo. In Proc. of the International Conference on Computer Vision (ICCV'99), Kerkyra, Greece, volume 1, pages 22-29, September 1999.

[12] R. Swaminathan and S. K. Nayar. Non-metric calibration of wide-angle lenses. In DARPA Image Understanding Workshop.

[13] Y. Xiong and K. Turkowski. Creating image-based VR using a self-calibrating fisheye lens. In IEEE Computer Vision and Pattern Recognition (CVPR'97), 1997.


More information

SELF-CALIBRATION OF CENTRAL CAMERAS BY MINIMIZING ANGULAR ERROR

SELF-CALIBRATION OF CENTRAL CAMERAS BY MINIMIZING ANGULAR ERROR SELF-CALIBRATION OF CENTRAL CAMERAS BY MINIMIZING ANGULAR ERROR Juho Kannala, Sami S. Brandt and Janne Heikkilä Machine Vision Group, University of Oulu, Finland {jkannala, sbrandt, jth}@ee.oulu.fi Keywords:

More information

arxiv:cs/ v1 [cs.cv] 24 Mar 2003

arxiv:cs/ v1 [cs.cv] 24 Mar 2003 Differential Methods in Catadioptric Sensor Design with Applications to Panoramic Imaging Technical Report arxiv:cs/0303024v1 [cs.cv] 24 Mar 2003 R. Andrew Hicks Department of Mathematics Drexel University

More information

Stereo Vision. MAN-522 Computer Vision

Stereo Vision. MAN-522 Computer Vision Stereo Vision MAN-522 Computer Vision What is the goal of stereo vision? The recovery of the 3D structure of a scene using two or more images of the 3D scene, each acquired from a different viewpoint in

More information

A Computer Vision Sensor for Panoramic Depth Perception

A Computer Vision Sensor for Panoramic Depth Perception A Computer Vision Sensor for Panoramic Depth Perception Radu Orghidan 1, El Mustapha Mouaddib 2, and Joaquim Salvi 1 1 Institute of Informatics and Applications, Computer Vision and Robotics Group University

More information

Camera Calibration Using Line Correspondences

Camera Calibration Using Line Correspondences Camera Calibration Using Line Correspondences Richard I. Hartley G.E. CRD, Schenectady, NY, 12301. Ph: (518)-387-7333 Fax: (518)-387-6845 Email : hartley@crd.ge.com Abstract In this paper, a method of

More information

Omni Stereo Vision of Cooperative Mobile Robots

Omni Stereo Vision of Cooperative Mobile Robots Omni Stereo Vision of Cooperative Mobile Robots Zhigang Zhu*, Jizhong Xiao** *Department of Computer Science **Department of Electrical Engineering The City College of the City University of New York (CUNY)

More information

University of Southern California, 1590 the Alameda #200 Los Angeles, CA San Jose, CA Abstract

University of Southern California, 1590 the Alameda #200 Los Angeles, CA San Jose, CA Abstract Mirror Symmetry 2-View Stereo Geometry Alexandre R.J. François +, Gérard G. Medioni + and Roman Waupotitsch * + Institute for Robotics and Intelligent Systems * Geometrix Inc. University of Southern California,

More information

CSE 252B: Computer Vision II

CSE 252B: Computer Vision II CSE 252B: Computer Vision II Lecturer: Serge Belongie Scribe: Sameer Agarwal LECTURE 1 Image Formation 1.1. The geometry of image formation We begin by considering the process of image formation when a

More information

calibrated coordinates Linear transformation pixel coordinates

calibrated coordinates Linear transformation pixel coordinates 1 calibrated coordinates Linear transformation pixel coordinates 2 Calibration with a rig Uncalibrated epipolar geometry Ambiguities in image formation Stratified reconstruction Autocalibration with partial

More information

How to Compute the Pose of an Object without a Direct View?

How to Compute the Pose of an Object without a Direct View? How to Compute the Pose of an Object without a Direct View? Peter Sturm and Thomas Bonfort INRIA Rhône-Alpes, 38330 Montbonnot St Martin, France {Peter.Sturm, Thomas.Bonfort}@inrialpes.fr Abstract. We

More information

Calibration of a Different Field-of-view Stereo Camera System using an Embedded Checkerboard Pattern

Calibration of a Different Field-of-view Stereo Camera System using an Embedded Checkerboard Pattern Calibration of a Different Field-of-view Stereo Camera System using an Embedded Checkerboard Pattern Pathum Rathnayaka, Seung-Hae Baek and Soon-Yong Park School of Computer Science and Engineering, Kyungpook

More information

arxiv: v1 [cs.cv] 28 Sep 2018

arxiv: v1 [cs.cv] 28 Sep 2018 Camera Pose Estimation from Sequence of Calibrated Images arxiv:1809.11066v1 [cs.cv] 28 Sep 2018 Jacek Komorowski 1 and Przemyslaw Rokita 2 1 Maria Curie-Sklodowska University, Institute of Computer Science,

More information

Rectification and Distortion Correction

Rectification and Distortion Correction Rectification and Distortion Correction Hagen Spies March 12, 2003 Computer Vision Laboratory Department of Electrical Engineering Linköping University, Sweden Contents Distortion Correction Rectification

More information

Vision Review: Image Formation. Course web page:

Vision Review: Image Formation. Course web page: Vision Review: Image Formation Course web page: www.cis.udel.edu/~cer/arv September 10, 2002 Announcements Lecture on Thursday will be about Matlab; next Tuesday will be Image Processing The dates some

More information

Partial Calibration and Mirror Shape Recovery for Non-Central Catadioptric Systems

Partial Calibration and Mirror Shape Recovery for Non-Central Catadioptric Systems Partial Calibration and Mirror Shape Recovery for Non-Central Catadioptric Systems Nuno Gonçalves and Helder Araújo Institute of Systems and Robotics - Coimbra University of Coimbra Polo II - Pinhal de

More information

Efficient Stereo Image Rectification Method Using Horizontal Baseline

Efficient Stereo Image Rectification Method Using Horizontal Baseline Efficient Stereo Image Rectification Method Using Horizontal Baseline Yun-Suk Kang and Yo-Sung Ho School of Information and Communicatitions Gwangju Institute of Science and Technology (GIST) 261 Cheomdan-gwagiro,

More information

A Framework for 3D Pushbroom Imaging CUCS

A Framework for 3D Pushbroom Imaging CUCS A Framework for 3D Pushbroom Imaging CUCS-002-03 Naoyuki Ichimura and Shree K. Nayar Information Technology Research Institute, National Institute of Advanced Industrial Science and Technology (AIST) Tsukuba,

More information

Compositing a bird's eye view mosaic

Compositing a bird's eye view mosaic Compositing a bird's eye view mosaic Robert Laganiere School of Information Technology and Engineering University of Ottawa Ottawa, Ont KN 6N Abstract This paper describes a method that allows the composition

More information

Simultaneous Vanishing Point Detection and Camera Calibration from Single Images

Simultaneous Vanishing Point Detection and Camera Calibration from Single Images Simultaneous Vanishing Point Detection and Camera Calibration from Single Images Bo Li, Kun Peng, Xianghua Ying, and Hongbin Zha The Key Lab of Machine Perception (Ministry of Education), Peking University,

More information

Minimal Solutions for Generic Imaging Models

Minimal Solutions for Generic Imaging Models Minimal Solutions for Generic Imaging Models Srikumar Ramalingam Peter Sturm Oxford Brookes University, UK INRIA Grenoble Rhône-Alpes, France Abstract A generic imaging model refers to a non-parametric

More information

Machine vision. Summary # 11: Stereo vision and epipolar geometry. u l = λx. v l = λy

Machine vision. Summary # 11: Stereo vision and epipolar geometry. u l = λx. v l = λy 1 Machine vision Summary # 11: Stereo vision and epipolar geometry STEREO VISION The goal of stereo vision is to use two cameras to capture 3D scenes. There are two important problems in stereo vision:

More information

A 3D Pattern for Post Estimation for Object Capture

A 3D Pattern for Post Estimation for Object Capture A 3D Pattern for Post Estimation for Object Capture Lei Wang, Cindy Grimm, and Robert Pless Department of Computer Science and Engineering Washington University One Brookings Drive, St. Louis, MO, 63130

More information

Geometric camera models and calibration

Geometric camera models and calibration Geometric camera models and calibration http://graphics.cs.cmu.edu/courses/15-463 15-463, 15-663, 15-862 Computational Photography Fall 2018, Lecture 13 Course announcements Homework 3 is out. - Due October

More information

Computer Vision Lecture 17

Computer Vision Lecture 17 Computer Vision Lecture 17 Epipolar Geometry & Stereo Basics 13.01.2015 Bastian Leibe RWTH Aachen http://www.vision.rwth-aachen.de leibe@vision.rwth-aachen.de Announcements Seminar in the summer semester

More information

Calibrating an Overhead Video Camera

Calibrating an Overhead Video Camera Calibrating an Overhead Video Camera Raul Rojas Freie Universität Berlin, Takustraße 9, 495 Berlin, Germany http://www.fu-fighters.de Abstract. In this section we discuss how to calibrate an overhead video

More information

Computer Vision Lecture 17

Computer Vision Lecture 17 Announcements Computer Vision Lecture 17 Epipolar Geometry & Stereo Basics Seminar in the summer semester Current Topics in Computer Vision and Machine Learning Block seminar, presentations in 1 st week

More information

Rigid Body Motion and Image Formation. Jana Kosecka, CS 482

Rigid Body Motion and Image Formation. Jana Kosecka, CS 482 Rigid Body Motion and Image Formation Jana Kosecka, CS 482 A free vector is defined by a pair of points : Coordinates of the vector : 1 3D Rotation of Points Euler angles Rotation Matrices in 3D 3 by 3

More information

Unit 3 Multiple View Geometry

Unit 3 Multiple View Geometry Unit 3 Multiple View Geometry Relations between images of a scene Recovering the cameras Recovering the scene structure http://www.robots.ox.ac.uk/~vgg/hzbook/hzbook1.html 3D structure from images Recover

More information

Computer Vision I - Appearance-based Matching and Projective Geometry

Computer Vision I - Appearance-based Matching and Projective Geometry Computer Vision I - Appearance-based Matching and Projective Geometry Carsten Rother 05/11/2015 Computer Vision I: Image Formation Process Roadmap for next four lectures Computer Vision I: Image Formation

More information

Region matching for omnidirectional images using virtual camera planes

Region matching for omnidirectional images using virtual camera planes Computer Vision Winter Workshop 2006, Ondřej Chum, Vojtěch Franc (eds.) Telč, Czech Republic, February 6 8 Czech Pattern Recognition Society Region matching for omnidirectional images using virtual camera

More information

Metric Rectification for Perspective Images of Planes

Metric Rectification for Perspective Images of Planes 789139-3 University of California Santa Barbara Department of Electrical and Computer Engineering CS290I Multiple View Geometry in Computer Vision and Computer Graphics Spring 2006 Metric Rectification

More information

Computer Vision I - Appearance-based Matching and Projective Geometry

Computer Vision I - Appearance-based Matching and Projective Geometry Computer Vision I - Appearance-based Matching and Projective Geometry Carsten Rother 01/11/2016 Computer Vision I: Image Formation Process Roadmap for next four lectures Computer Vision I: Image Formation

More information

Agenda. Rotations. Camera models. Camera calibration. Homographies

Agenda. Rotations. Camera models. Camera calibration. Homographies Agenda Rotations Camera models Camera calibration Homographies D Rotations R Y = Z r r r r r r r r r Y Z Think of as change of basis where ri = r(i,:) are orthonormal basis vectors r rotated coordinate

More information

A Stratified Approach for Camera Calibration Using Spheres

A Stratified Approach for Camera Calibration Using Spheres IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. XX, NO. Y, MONTH YEAR 1 A Stratified Approach for Camera Calibration Using Spheres Kwan-Yee K. Wong, Member, IEEE, Guoqiang Zhang, Student-Member, IEEE and Zhihu

More information

Matching of Line Segments Across Multiple Views: Implementation Description (memo)

Matching of Line Segments Across Multiple Views: Implementation Description (memo) Matching of Line Segments Across Multiple Views: Implementation Description (memo) Tomas Werner Visual Geometry Group Department of Engineering Science University of Oxford, U.K. 2002 1 Introduction This

More information

Feature Transfer and Matching in Disparate Stereo Views through the use of Plane Homographies

Feature Transfer and Matching in Disparate Stereo Views through the use of Plane Homographies Feature Transfer and Matching in Disparate Stereo Views through the use of Plane Homographies M. Lourakis, S. Tzurbakis, A. Argyros, S. Orphanoudakis Computer Vision and Robotics Lab (CVRL) Institute of

More information

Today. Stereo (two view) reconstruction. Multiview geometry. Today. Multiview geometry. Computational Photography

Today. Stereo (two view) reconstruction. Multiview geometry. Today. Multiview geometry. Computational Photography Computational Photography Matthias Zwicker University of Bern Fall 2009 Today From 2D to 3D using multiple views Introduction Geometry of two views Stereo matching Other applications Multiview geometry

More information

Degeneracy of the Linear Seventeen-Point Algorithm for Generalized Essential Matrix

Degeneracy of the Linear Seventeen-Point Algorithm for Generalized Essential Matrix J Math Imaging Vis 00 37: 40-48 DOI 0007/s085-00-09-9 Authors s version The final publication is available at wwwspringerlinkcom Degeneracy of the Linear Seventeen-Point Algorithm for Generalized Essential

More information

Monitoring surrounding areas of truck-trailer combinations

Monitoring surrounding areas of truck-trailer combinations Monitoring surrounding areas of truck-trailer combinations Tobias Ehlgen 1 and Tomas Pajdla 2 1 Daimler-Chrysler Research and Technology, Ulm tobias.ehlgen@daimlerchrysler.com 2 Center of Machine Perception,

More information

3D Geometry and Camera Calibration

3D Geometry and Camera Calibration 3D Geometry and Camera Calibration 3D Coordinate Systems Right-handed vs. left-handed x x y z z y 2D Coordinate Systems 3D Geometry Basics y axis up vs. y axis down Origin at center vs. corner Will often

More information

Camera calibration for miniature, low-cost, wide-angle imaging systems

Camera calibration for miniature, low-cost, wide-angle imaging systems Camera calibration for miniature, low-cost, wide-angle imaging systems Oliver Frank, Roman Katz, Christel-Loic Tisse and Hugh Durrant-Whyte ARC Centre of Excellence for Autonomous Systems University of

More information

A Novel Stereo Camera System by a Biprism

A Novel Stereo Camera System by a Biprism 528 IEEE TRANSACTIONS ON ROBOTICS AND AUTOMATION, VOL. 16, NO. 5, OCTOBER 2000 A Novel Stereo Camera System by a Biprism DooHyun Lee and InSo Kweon, Member, IEEE Abstract In this paper, we propose a novel

More information

On the Epipolar Geometry of the Crossed-Slits Projection

On the Epipolar Geometry of the Crossed-Slits Projection In Proc. 9th IEEE International Conference of Computer Vision, Nice, October 2003. On the Epipolar Geometry of the Crossed-Slits Projection Doron Feldman Tomas Pajdla Daphna Weinshall School of Computer

More information

Camera Calibration from the Quasi-affine Invariance of Two Parallel Circles

Camera Calibration from the Quasi-affine Invariance of Two Parallel Circles Camera Calibration from the Quasi-affine Invariance of Two Parallel Circles Yihong Wu, Haijiang Zhu, Zhanyi Hu, and Fuchao Wu National Laboratory of Pattern Recognition, Institute of Automation, Chinese

More information

Projective Geometry and Camera Models

Projective Geometry and Camera Models /2/ Projective Geometry and Camera Models Computer Vision CS 543 / ECE 549 University of Illinois Derek Hoiem Note about HW Out before next Tues Prob: covered today, Tues Prob2: covered next Thurs Prob3:

More information

CHAPTER 3. Single-view Geometry. 1. Consequences of Projection

CHAPTER 3. Single-view Geometry. 1. Consequences of Projection CHAPTER 3 Single-view Geometry When we open an eye or take a photograph, we see only a flattened, two-dimensional projection of the physical underlying scene. The consequences are numerous and startling.

More information

IN the last few years, we have seen an increasing interest. Calibration of Cameras with Radially Symmetric Distortion

IN the last few years, we have seen an increasing interest. Calibration of Cameras with Radially Symmetric Distortion TARDIF et al.: CALIBRATION OF CAMERAS WITH RADIALLY SYMMETRIC DISTORTION Calibration of Cameras with Radially Symmetric Distortion Jean-Philippe Tardif, Peter Sturm, Martin Trudeau, and Sébastien Roy Abstract

More information

Stereo and Epipolar geometry

Stereo and Epipolar geometry Previously Image Primitives (feature points, lines, contours) Today: Stereo and Epipolar geometry How to match primitives between two (multiple) views) Goals: 3D reconstruction, recognition Jana Kosecka

More information

3D Sensing and Reconstruction Readings: Ch 12: , Ch 13: ,

3D Sensing and Reconstruction Readings: Ch 12: , Ch 13: , 3D Sensing and Reconstruction Readings: Ch 12: 12.5-6, Ch 13: 13.1-3, 13.9.4 Perspective Geometry Camera Model Stereo Triangulation 3D Reconstruction by Space Carving 3D Shape from X means getting 3D coordinates

More information

Multi-View Omni-Directional Imaging

Multi-View Omni-Directional Imaging Multi-View Omni-Directional Imaging Tuesday, December 19, 2000 Moshe Ben-Ezra, Shmuel Peleg Abstract This paper describes a novel camera design or the creation o multiple panoramic images, such that each

More information

Arm coordinate system. View 1. View 1 View 2. View 2 R, T R, T R, T R, T. 12 t 1. u_ 1 u_ 2. Coordinate system of a robot

Arm coordinate system. View 1. View 1 View 2. View 2 R, T R, T R, T R, T. 12 t 1. u_ 1 u_ 2. Coordinate system of a robot Czech Technical University, Prague The Center for Machine Perception Camera Calibration and Euclidean Reconstruction from Known Translations Tomas Pajdla and Vaclav Hlavac Computer Vision Laboratory Czech

More information

Computer Vision Projective Geometry and Calibration. Pinhole cameras

Computer Vision Projective Geometry and Calibration. Pinhole cameras Computer Vision Projective Geometry and Calibration Professor Hager http://www.cs.jhu.edu/~hager Jason Corso http://www.cs.jhu.edu/~jcorso. Pinhole cameras Abstract camera model - box with a small hole

More information

Agenda. Rotations. Camera calibration. Homography. Ransac

Agenda. Rotations. Camera calibration. Homography. Ransac Agenda Rotations Camera calibration Homography Ransac Geometric Transformations y x Transformation Matrix # DoF Preserves Icon translation rigid (Euclidean) similarity affine projective h I t h R t h sr

More information

Self-calibration of a pair of stereo cameras in general position

Self-calibration of a pair of stereo cameras in general position Self-calibration of a pair of stereo cameras in general position Raúl Rojas Institut für Informatik Freie Universität Berlin Takustr. 9, 14195 Berlin, Germany Abstract. This paper shows that it is possible

More information

Computer Vision. Geometric Camera Calibration. Samer M Abdallah, PhD

Computer Vision. Geometric Camera Calibration. Samer M Abdallah, PhD Computer Vision Samer M Abdallah, PhD Faculty of Engineering and Architecture American University of Beirut Beirut, Lebanon Geometric Camera Calibration September 2, 2004 1 Computer Vision Geometric Camera

More information

Projective geometry for Computer Vision

Projective geometry for Computer Vision Department of Computer Science and Engineering IIT Delhi NIT, Rourkela March 27, 2010 Overview Pin-hole camera Why projective geometry? Reconstruction Computer vision geometry: main problems Correspondence

More information

Two-View Geometry (Course 23, Lecture D)

Two-View Geometry (Course 23, Lecture D) Two-View Geometry (Course 23, Lecture D) Jana Kosecka Department of Computer Science George Mason University http://www.cs.gmu.edu/~kosecka General Formulation Given two views of the scene recover the

More information

Jump Stitch Metadata & Depth Maps Version 1.1

Jump Stitch Metadata & Depth Maps Version 1.1 Jump Stitch Metadata & Depth Maps Version 1.1 jump-help@google.com Contents 1. Introduction 1 2. Stitch Metadata File Format 2 3. Coverage Near the Poles 4 4. Coordinate Systems 6 5. Camera Model 6 6.

More information

CS201 Computer Vision Camera Geometry

CS201 Computer Vision Camera Geometry CS201 Computer Vision Camera Geometry John Magee 25 November, 2014 Slides Courtesy of: Diane H. Theriault (deht@bu.edu) Question of the Day: How can we represent the relationships between cameras and the

More information

Camera Calibration With One-Dimensional Objects

Camera Calibration With One-Dimensional Objects Camera Calibration With One-Dimensional Objects Zhengyou Zhang December 2001 Technical Report MSR-TR-2001-120 Camera calibration has been studied extensively in computer vision and photogrammetry, and

More information

The Radial Trifocal Tensor: A tool for calibrating the radial distortion of wide-angle cameras

The Radial Trifocal Tensor: A tool for calibrating the radial distortion of wide-angle cameras The Radial Trifocal Tensor: A tool for calibrating the radial distortion of wide-angle cameras SriRam Thirthala Marc Pollefeys Abstract We present a technique to linearly estimate the radial distortion

More information

Camera Models and Image Formation. Srikumar Ramalingam School of Computing University of Utah

Camera Models and Image Formation. Srikumar Ramalingam School of Computing University of Utah Camera Models and Image Formation Srikumar Ramalingam School of Computing University of Utah srikumar@cs.utah.edu VisualFunHouse.com 3D Street Art Image courtesy: Julian Beaver (VisualFunHouse.com) 3D

More information

ROBUST LINE-BASED CALIBRATION OF LENS DISTORTION FROM A SINGLE VIEW

ROBUST LINE-BASED CALIBRATION OF LENS DISTORTION FROM A SINGLE VIEW ROBUST LINE-BASED CALIBRATION OF LENS DISTORTION FROM A SINGLE VIEW Thorsten Thormählen, Hellward Broszio, Ingolf Wassermann thormae@tnt.uni-hannover.de University of Hannover, Information Technology Laboratory,

More information

A linear algorithm for Camera Self-Calibration, Motion and Structure Recovery for Multi-Planar Scenes from Two Perspective Images

A linear algorithm for Camera Self-Calibration, Motion and Structure Recovery for Multi-Planar Scenes from Two Perspective Images A linear algorithm for Camera Self-Calibration, Motion and Structure Recovery for Multi-Planar Scenes from Two Perspective Images Gang Xu, Jun-ichi Terai and Heung-Yeung Shum Microsoft Research China 49

More information

Planar pattern for automatic camera calibration

Planar pattern for automatic camera calibration Planar pattern for automatic camera calibration Beiwei Zhang Y. F. Li City University of Hong Kong Department of Manufacturing Engineering and Engineering Management Kowloon, Hong Kong Fu-Chao Wu Institute

More information

Camera model and multiple view geometry

Camera model and multiple view geometry Chapter Camera model and multiple view geometry Before discussing how D information can be obtained from images it is important to know how images are formed First the camera model is introduced and then

More information

Uncalibrated Video Compass for Mobile Robots from Paracatadioptric Line Images

Uncalibrated Video Compass for Mobile Robots from Paracatadioptric Line Images Uncalibrated Video Compass for Mobile Robots from Paracatadioptric Line Images Gian Luca Mariottini and Domenico Prattichizzo Dipartimento di Ingegneria dell Informazione Università di Siena Via Roma 56,

More information

COSC579: Scene Geometry. Jeremy Bolton, PhD Assistant Teaching Professor

COSC579: Scene Geometry. Jeremy Bolton, PhD Assistant Teaching Professor COSC579: Scene Geometry Jeremy Bolton, PhD Assistant Teaching Professor Overview Linear Algebra Review Homogeneous vs non-homogeneous representations Projections and Transformations Scene Geometry The

More information

Course Overview: Lectures

Course Overview: Lectures Course Overview: Lectures Content: Elements of projective geometry Perspective camera Geometric problems in 3D vision Epipolar geometry Optimization methods in 3D vision 3D structure and camera motion

More information

Geometry of image formation

Geometry of image formation eometry of image formation Tomáš Svoboda, svoboda@cmp.felk.cvut.cz Czech Technical University in Prague, Center for Machine Perception http://cmp.felk.cvut.cz Last update: November 3, 2008 Talk Outline

More information

Cameras and Stereo CSE 455. Linda Shapiro

Cameras and Stereo CSE 455. Linda Shapiro Cameras and Stereo CSE 455 Linda Shapiro 1 Müller-Lyer Illusion http://www.michaelbach.de/ot/sze_muelue/index.html What do you know about perspective projection? Vertical lines? Other lines? 2 Image formation

More information

CSE 252B: Computer Vision II

CSE 252B: Computer Vision II CSE 252B: Computer Vision II Lecturer: Serge Belongie Scribe : Martin Stiaszny and Dana Qu LECTURE 0 Camera Calibration 0.. Introduction Just like the mythical frictionless plane, in real life we will

More information

521466S Machine Vision Exercise #1 Camera models

521466S Machine Vision Exercise #1 Camera models 52466S Machine Vision Exercise # Camera models. Pinhole camera. The perspective projection equations or a pinhole camera are x n = x c, = y c, where x n = [x n, ] are the normalized image coordinates,

More information

Camera Calibration. COS 429 Princeton University

Camera Calibration. COS 429 Princeton University Camera Calibration COS 429 Princeton University Point Correspondences What can you figure out from point correspondences? Noah Snavely Point Correspondences X 1 X 4 X 3 X 2 X 5 X 6 X 7 p 1,1 p 1,2 p 1,3

More information

A Real-Time Catadioptric Stereo System Using Planar Mirrors

A Real-Time Catadioptric Stereo System Using Planar Mirrors A Real-Time Catadioptric Stereo System Using Planar Mirrors Joshua Gluckman Shree K. Nayar Department of Computer Science Columbia University New York, NY 10027 Abstract By using mirror reflections of

More information