Camera Calibration Using Two Concentric Circles
Francisco Abad, Emilio Camahort, and Roberto Vivó
Universidad Politécnica de Valencia, Camino de Vera s/n, Valencia 4601, Spain

Abstract. We present a simple calibration method for computing the extrinsic parameters (pose) and intrinsic parameters (focal length and principal point) of a camera by imaging a pattern of known geometry. Usually, the patterns used in calibration algorithms are complex to build (three orthogonal planes) or need a lot of features (checkerboard-like patterns). We propose using just two concentric circles that, when projected onto the image, become two ellipses. With a simple mark close to the outer circle, our algorithm can recover the full pose of the camera. Under the perfect pinhole camera assumption, the pose and the focal length can be recovered from just one image. If the principal point of the camera has to be computed as well, two images are required. We present several results, using both synthetic and real images, that show the robustness of our method.

1 Introduction

In the past two decades, several methods have been proposed for calibrating a camera by taking images of a pattern with known geometry. First in photogrammetry and then in computer vision, researchers have developed methods to recover a camera's extrinsic parameters (position and orientation) and intrinsic parameters (focal length and principal point). Those methods usually require expensive laboratory settings, or use complex fiducials [1]. In order to take computer vision from the laboratory to the home user, robust, inexpensive and effective techniques are needed. In this paper, we present an algorithm that easily recovers the pose and the focal length of a camera by taking a single photo of a simple calibration pattern. We use a pattern made of two concentric circles of known radii, usually printed on a sheet of paper.
We show how this pattern can be used in a simple setup to recover the camera parameters. Our method can be applied to camera tracking and related problems like robotics, entertainment and augmented reality. (This work was partially funded by the Programa de Incentivo a la Investigación of the Polytechnic University of Valencia, and by project TIC C03-01 of the Spanish Ministry of Science and Technology.)

This paper is organized as follows. The next section presents previous work in the field of camera calibration with circular markers. Section 3 presents the
theoretical model and mathematical foundations of our work. In the following section we present some results of our method and discuss the tests we ran with both synthetic and real data. Our paper finishes with some conclusions and directions for future work.

2 Previous Work

Early work that used conics for computer vision applications was reported in [2-4]. Circular markers have been extensively used in tracking applications due to their robustness properties [6, 7]. Kim et al. [8, 9] proposed a calibration method using two concentric circles. Their algorithm requires some initial information about the camera to get an initial value for the intrinsic matrix. They define a cost function on the calibration parameters and minimize it. This method only recovers the normal of the marker's supporting plane. Another method that recovers the supporting plane of the circles was proposed in [10]. The method computes the plane's normal and a point on it, expressed in camera coordinates. The method assumes that the principal point is at the center of the image.

Unlike the previous methods, our algorithm does not require any a priori information about the camera parameters to calibrate it. Furthermore, we recover the pose (the full rotation matrix and the translation vector) using a simple marker. Finally, we also compute the position of the principal point.

3 Calibrating the Camera

3.1 Detecting the Marker

Our marker is composed of two concentric circles of radii r1 and r2, and an exterior mark that intersects with a circle of radius r3 (see Fig. 1). The ellipses can be automatically recovered from an image by applying standard methods in computer vision. Pixel chains are extracted from the image and ellipses are fitted with, e.g., Fitzgibbon's algorithm [5]. See for example [7] for an explanation of an automatic extraction algorithm. To find the X axis mark, a circle of radius r3 has to be projected using the same (unknown) camera as the other two.
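The fitting step can be sketched as follows. This is a plain unconstrained least-squares conic fit in Python/NumPy, an illustrative substitute for (not a reproduction of) Fitzgibbon's ellipse-specific constrained method [5]:

```python
import numpy as np

def fit_conic(x, y):
    # Least-squares fit of A x^2 + B xy + C y^2 + D x + E y + F = 0:
    # the smallest right singular vector minimizes ||D p|| subject to ||p|| = 1
    D = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    return np.linalg.svd(D)[2][-1]

# Sample an ellipse centered at (2, 1), semi-axes 3 and 1.5, rotated 30 degrees
t = np.linspace(0.0, 2 * np.pi, 50, endpoint=False)
c, s = np.cos(np.deg2rad(30)), np.sin(np.deg2rad(30))
px = 2 + 3 * np.cos(t) * c - 1.5 * np.sin(t) * s
py = 1 + 3 * np.cos(t) * s + 1.5 * np.sin(t) * c

p = fit_conic(px, py)
# Residual of every sampled point against the recovered conic
res = p[0]*px*px + p[1]*px*py + p[2]*py*py + p[3]*px + p[4]*py + p[5]
```

A constrained fit such as Fitzgibbon's guarantees an ellipse even under heavy noise; the unconstrained version above can drift toward a hyperbola when the data are poor.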
In the Appendix we explain how to project a circle of arbitrary radius concentric to two circles whose projections are known.

3.2 Pinhole Projection of a Circle

The pinhole camera configuration (assuming zero skew and square pixels) is usually described using an intrinsic parameter matrix (A), which describes the focal length and principal point (see Fig. 2), and an extrinsic parameter matrix (M), which establishes the camera pose (position and orientation) from a given global coordinate system:
A = [f 0 u0; 0 f v0; 0 0 1],   M = [R11 R12 R13 Tx; R21 R22 R23 Ty; R31 R32 R33 Tz].  (1)

Fig. 1. Design of our fiducial.   Fig. 2. Pinhole camera in the scene.

In Fig. 2, the world coordinate system (WCS) has its origin at the center of the concentric circles. Those circles lie in the XwYw plane of the WCS, so the Zw axis is perpendicular to them. The projection operator P, which computes the image pixel coordinates corresponding to a 3D point in WCS, is P = AM. Given a point X in WCS, the equation λx = PX computes its homogeneous coordinates in the image coordinate system. The two circles are located on the plane Zw = 0, so we can write:

[λu; λv; λ] = [p11 p12 p13 p14; p21 p22 p23 p24; p31 p32 p33 p34] [Xw; Yw; 0; 1] = [p11 Xw + p12 Yw + p14; p21 Xw + p22 Yw + p24; p31 Xw + p32 Yw + p34].

If we assume that the image coordinate system is centered at the principal point of the image, then u0 = 0 and v0 = 0 in (1), and we can write (see [10]):

Xw = x^t (R2 x T) / (x^t R3)   and   Yw = x^t (T x R1) / (x^t R3),  (2)

where x = [u v f]^t and M = [R1 R2 R3 T]. In the WCS, the exterior circle of radius r2 satisfies C(Xw, Yw) = Xw^2 + Yw^2 - r2^2 = 0. Substituting (2) into this equation and factoring, we can express the exterior circle in terms of the image coordinate system as follows:

C'(x, y) = Ax^2 + Bxy + Cy^2 + Dx + Ey + F = 0.  (3)
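As a concrete check of this model, the following NumPy sketch (illustrative values, not the paper's data) builds P = AM for a centered camera and projects points of a circle lying on the plane Zw = 0:

```python
import numpy as np

# Illustrative centered pinhole camera: f = 800, u0 = v0 = 0 (as assumed above)
f, th = 800.0, np.deg2rad(40)
A = np.diag([f, f, 1.0])
R = np.array([[1.0, 0.0, 0.0],
              [0.0, np.cos(th), -np.sin(th)],
              [0.0, np.sin(th), np.cos(th)]])
T = np.array([0.0, 0.0, 50.0])
P = A @ np.hstack([R, T[:, None]])  # projection operator P = AM

def project(Xw):
    # lambda x = P X: homogeneous projection of a WCS point to pixel coordinates
    x = P @ np.append(Xw, 1.0)
    return x[:2] / x[2]

# Points of the exterior circle (radius r2 = 5) on Zw = 0 map to an ellipse
ellipse = np.array([project([5 * np.cos(a), 5 * np.sin(a), 0.0])
                    for a in np.linspace(0, 2 * np.pi, 90, endpoint=False)])
center = project([0.0, 0.0, 0.0])  # image of the circles' common center
```

With T along the optical axis, the circles' common center projects to the principal point, while the projected circle points trace an ellipse whose own center is elsewhere.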
3.3 Recovering the Circles' Projected Center

Under perspective projection any conic is transformed into another conic. Specifically, circles are transformed into ellipses when imaged by a camera. The projected center of the original circle, however, does not generally coincide with the center of the ellipse in the image. The projected center of the circles has to be computed in order to recover the normal to the supporting plane. The direction of the Zw axis in the camera coordinate system (i.e., R3) can be computed as follows [2, 3]:

[R13; R23; R33] = R3 = ±N(Q [xc; yc; f]),  (4)

where (xc, yc) are the coordinates of the projected circle center in the image coordinate system (see Fig. 2), N represents normalization to a unit vector, and Q is the matrix that describes the ellipse, as defined in [3]:

Q = [A B/2 D/(2f); B/2 C E/(2f); D/(2f) E/(2f) F/f^2].  (5)

Parameters A to F are those defined in (3) and f is the focal length. Two methods to recover the projected center of two concentric circles can be found in [8] and [9]. In the Appendix we present our own original method.

3.4 Recovering the Pose

Each parameter of the ellipse in (3) can be expressed in terms of f^2, f^2/Tz and a constant term by substituting (5) into (4) [10]. This derivation uses the properties of rotation matrices and the following relations derived from the pinhole camera model in Fig. 2:

Tx = Tz·xc/f   and   Ty = Tz·yc/f.  (6)

The result is a 6x3 matrix Q, equation (7), whose entries are polynomials in α1, α2, α3, r2, xc and yc (the full expression is given in [10]), where:

α1 = 2Axc + Byc + D,   α2 = Bxc + 2Cyc + E,   α3 = Dxc + Eyc + 2F.
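Equations (4) and (5) can be verified numerically. The sketch below uses a synthetic configuration with illustrative values: it projects a circle through a known homography, builds the matrix of (5), and recovers the supporting-plane normal from the projected center via (4):

```python
import numpy as np

# Synthetic centered camera (u0 = v0 = 0), plane Zw = 0 tilted 40 degrees
f, th = 800.0, np.deg2rad(40)
R = np.array([[1.0, 0.0, 0.0],
              [0.0, np.cos(th), -np.sin(th)],
              [0.0, np.sin(th), np.cos(th)]])
T = np.array([0.0, 0.0, 50.0])
H = np.diag([f, f, 1.0]) @ np.column_stack([R[:, 0], R[:, 1], T])

# Image conic of the circle of radius 5: coefficients A..F of equation (3)
Hinv = np.linalg.inv(H)
Qc = Hinv.T @ np.diag([1.0, 1.0, -25.0]) @ Hinv
Ac, Bc, Cc = Qc[0, 0], 2 * Qc[0, 1], Qc[1, 1]
Dc, Ec, Fc = 2 * Qc[0, 2], 2 * Qc[1, 2], Qc[2, 2]

# The ellipse matrix of equation (5)
Q = np.array([[Ac, Bc / 2, Dc / (2 * f)],
              [Bc / 2, Cc, Ec / (2 * f)],
              [Dc / (2 * f), Ec / (2 * f), Fc / f ** 2]])

# Projected center of the circles: the image of the WCS origin
c = H @ np.array([0.0, 0.0, 1.0])
xc, yc = c[:2] / c[2]

# Equation (4): the plane normal R3, up to sign
n = Q @ np.array([xc, yc, f])
n = n / np.linalg.norm(n)
```

In this configuration the recovered unit vector matches the true third column of R up to the ± sign allowed by (4).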
Therefore, (3) can be expressed as:

C'(x, y) = (Q [f^2; f^2/Tz; 1])^t G = 0,  (8)

where G = [x^2, xy, y^2, x, y, 1]^t. The unknowns to be computed are f^2 and f^2/Tz, so we rearrange (8) to leave the constant terms on the right-hand side of the expression:

([q11 q21 q31 q41 q51 q61; q12 q22 q32 q42 q52 q62] G)^t [f^2; f^2/Tz] = -[q13 q23 q33 q43 q53 q63] G,  (9)

where qij is the element in row i, column j of matrix Q in (7). Given N points of the ellipse in the image we can build an over-determined system of N equations, WX = B:

[W11 W12; W21 W22; ...; WN1 WN2] [f^2; f^2/Tz] = [B1; B2; ...; BN],  (10)

where Wi1, Wi2 and Bi are computed using (9) with (x, y) replaced by the coordinates (xi, yi) of the i-th point on the ellipse. This system can be solved using the least-squares pseudo-inverse technique:

[f^2; f^2/Tz] = (W^t W)^-1 W^t B.

Solving the system yields f and Tz. The components of R3 can then be computed from (4), and Tx and Ty can be recovered from (6). Following the previous steps we recover the normal to the plane that contains the circles (R3) and the position of the origin of the WCS in camera coordinates (T) (see Fig. 2).

Fremont [10] proposed a calibration pattern that uses three orthogonal planes to recover the other two axes (Xw and Yw). Instead, we use a single mark on the exterior circle that defines the Xw direction, an idea that has been used before in marker detection [7]. Given the pixel coordinates of the Xw axis mark in the image, we reproject it onto the plane of the concentric circles. That plane is completely defined by its normal (R3) and a point on it (T). Let the X axis mark position be (r3, 0, 0) in WCS, (xm, ym) in image coordinates, and Xm in camera coordinates. Then Xm = µ[xm ym f]^t, where

µ = D / (R13·xm + R23·ym + R33·f),
and D = R3^t T. Having the 3D coordinates of the Xw axis mark in camera coordinates, and the 3D coordinates of the origin of the WCS in camera coordinates as well, the Xw axis (i.e., R1) is defined by Xw = N{Xm - T}, where N is a normalization operator. Obviously, in a right-handed coordinate system, Yw = Zw x Xw, or R2 = R3 x R1.

3.5 Recovering the Principal Point

So far we have assumed that the optical axis of the camera is perfectly centered in the image (i.e., the principal point is the center of the image). In this section we remove this assumption and compute the principal point using the results of the previous sections. Due to the error in the estimation of the principal point, reprojecting the original circle using the parameters computed in the previous sections does not reproduce the ellipses in the image. This misalignment is proportional to the error incurred in the estimation of the position of the principal point. By minimizing that error, the principal point can be recovered. When processing a video stream with multiple frames, the principal point can be recovered once and kept fixed for the remaining frames, as long as the internal camera settings are not changed.

Once the parameters that define the projection have been recovered, we can reproject the circle of radius r2 onto an ellipse in the image. By minimizing the error in the reprojection, a good approximation to the principal point can be computed. We have found that the error of reprojection can be defined as the distance between the center of the ellipse used for the calibration and the center of the reprojected ellipse. Alternatively, we can define the error in terms of the angle between the principal axes of those two ellipses. The algorithm is:

1. Start with an initial guess of the principal point (i.e., the center of the image).
2. Define the ellipses and the X axis marker of the image with respect to that principal point.
3. Calibrate the camera.
4.
Reproject the original circle (of radius r2) using the parameters obtained in the previous step.
5. Compute the reprojection error and update the working principal point accordingly.

Optimization methods like Levenberg-Marquardt [11] (implemented in MINPACK) can efficiently find the 2D position of the principal point that minimizes the error of reprojection.

4 Validating our Method

We have validated our method using both synthetic and real data. We use synthetic data to determine how robust our method is in the presence of noise.
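One way to sketch such a synthetic-noise test (hedged illustration with assumed values: a synthetic tilted camera, 0.5-pixel Gaussian noise, and a plain least-squares conic fit standing in for Fitzgibbon's method):

```python
import numpy as np

rng = np.random.default_rng(7)

def fit_conic(x, y):
    # Plain least-squares conic fit (stand-in for Fitzgibbon's algorithm [5])
    M = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    return np.linalg.svd(M)[2][-1]

def conic_center(v):
    # Center of the conic: the point where the gradient of (3) vanishes
    a, b, c, d, e, _ = v
    return np.linalg.solve([[2 * a, b], [b, 2 * c]], [-d, -e])

# Project a circle of radius 5 with a known synthetic camera (tilted plane)
f, th = 800.0, np.deg2rad(40)
R = np.array([[1.0, 0.0, 0.0],
              [0.0, np.cos(th), -np.sin(th)],
              [0.0, np.sin(th), np.cos(th)]])
H = np.diag([f, f, 1.0]) @ np.column_stack([R[:, 0], R[:, 1], [0.0, 0.0, 50.0]])
a = np.linspace(0, 2 * np.pi, 200, endpoint=False)
p = np.array([H @ [5 * np.cos(t), 5 * np.sin(t), 1.0] for t in a])
x, y = p[:, 0] / p[:, 2], p[:, 1] / p[:, 2]

# Fit once to the exact points, once after adding 0.5-pixel Gaussian noise
clean = fit_conic(x, y)
noisy = fit_conic(x + rng.normal(0, 0.5, x.size), y + rng.normal(0, 0.5, y.size))
# One simple error measure: how far the recovered ellipse center drifts
shift = np.linalg.norm(conic_center(clean) - conic_center(noisy))
```

With moderate noise the recovered ellipse, and hence any calibration built on it, degrades gracefully; past a certain noise level an unconstrained fit can even stop being an ellipse.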
Fig. 3. Relative errors in the estimations of T and R3 (Zw)

4.1 Robustness

To check the robustness of the algorithm, we project two concentric circles using a known synthetic camera configuration. Then, we perturb the points of the projected circles by adding random noise to their coordinates. We fit an ellipse to each set of perturbed points using Fitzgibbon's algorithm [5]. Finally, we compute the camera parameters using these two ellipses. Figure 3 shows the errors that the added noise produces in the recovered normal of the supporting plane (R3) and the translation vector (T). Note that the error incurred is relatively small. We have found that the system is very robust in the presence of systematic errors, i.e., when both ellipses are affected by the same error (for instance, a non-centered optical axis). On the other hand, if the parameters of the ellipses are perturbed beyond a certain limit, the accuracy of the results decreases dramatically.

4.2 Experimental Results

In order to validate the computed calibration with real images, we have applied our algorithm to several images taken with a camera. Figure 4 shows an example of the process. First, the ellipses were recovered from the image and the camera parameters were computed. Using those parameters, we can draw the WCS axes on the image. Furthermore, the marker has been reprojected using the same parameters. The marker seen in the image has the following dimensions: r1 = 2.6 cm, r2 = 5 cm and r3 = 6.5 cm.

5 Conclusions and Future Work

In this paper we introduce a camera calibration technique that uses a very simple pattern made of two circles. The algorithm obtains accurate intrinsic and extrinsic camera parameters. We show that our method behaves in a robust manner in the presence of different types of input errors. We also show that the algorithm
works well with real-world images as long as good ellipse extraction and fitting algorithms are used.

Fig. 4. Reprojecting the marker and the coordinate system in the images

Our work has many applications, particularly in camera tracking and related fields. Our marker is easy to build and use. This makes it particularly well suited for augmented reality and entertainment applications. We are currently working on applications in these two areas. We are also trying to extend our camera model to take into account skew and lens distortion, in order to better approximate the behavior of a real camera. We are exploring the working limits of our algorithm and we are studying techniques to make the results more stable in the presence of noise.

References

1. Zhang, Z.: A Flexible New Technique for Camera Calibration. IEEE Trans. Patt. Anal. Machine Intell., vol. 22, no. 11 (2000)
2. Forsyth, D., Mundy, J., et al.: Invariant Descriptors for 3-D Object Recognition and Pose. IEEE Trans. Patt. Anal. Machine Intell., vol. 13, no. 10 (1991)
3. Kanatani, K., Liu, W.: 3D Interpretation of Conics and Orthogonality. CVGIP: Image Understanding, vol. 58, no. 3 (1993)
4. Rothwell, C.A., Zisserman, A., et al.: Relative Motion and Pose from Arbitrary Plane Curves. Image and Vision Computing, vol. 10, no. 4 (May 1992)
5. Fitzgibbon, A.W., Pilu, M., Fisher, R.B.: Direct Least Squares Fitting of Ellipses. IEEE Trans. Patt. Anal. Machine Intell., vol. 21, no. 5 (1999)
6. Ahn, S.J., Rauh, W., Kim, S.I.: Circular Coded Target for Automation of Optical 3D-Measurement and Camera Calibration. Int. Jour. Patt. Recog. Artificial Intell., vol. 15, no. 6 (2001)
7. López de Ipiña, D., Mendonça, P.R.S., Hopper, A.: TRIP: a Low-Cost Vision-Based Location System for Ubiquitous Computing. Personal and Ubiquitous Computing Journal, Springer, vol. 6, no. 3 (May 2002)
8. Kim, J.S., Kweon, I.S.: A New Camera Calibration Method for Robotic Applications. Int. Conf.
Intelligent Robots and Systems, Hawaii (Oct 2001)
9. Kim, J.S., Kim, H.W., Kweon, I.S.: A Camera Calibration Method using Concentric Circles for Vision Applications. 5th Asian Conf. Computer Vision (2002)
10. Fremont, V., Chellali, R.: Direct Camera Calibration using Two Concentric Circles from a Single View. 12th Int. Conf. Artificial Reality and Telexistence (2002)
11. Moré, J.J.: The Levenberg-Marquardt algorithm: implementation and theory. In: Numerical Analysis, G.A. Watson (ed.), Springer-Verlag (1977)
Appendix

Given two concentric circles whose projections are known, we show how to project a third circle of known radius using the same projection. A circle C of radius r centered at the origin and located in the plane Z = 0 is defined by:

X^t C X = [X Y 1] [1 0 0; 0 1 0; 0 0 -r^2] [X; Y; 1] = 0.

A projection matrix P projects the circle C onto an ellipse Q by λQ = P^-t C P^-1. We compute the difference between the projection of a circle of radius r + α and the projection of a circle of radius r:

λ1 Q_{r+α} - λ2 Q_r = P^-t (C_{r+α} - C_r) P^-1 = -α(α + 2r) M,  (11)

where M = q^t q and q is the third row of matrix P^-1. Therefore, we can write:

Q3 = Q1 - α1(α1 + 2r1) M,
Q3 = Q2 - α2(α2 + 2r2) M,

where α1 = r3 - r1 and α2 = r3 - r2. Using these two equations we can express Q3 as:

Q3 = (k Q2 (r3^2 - r1^2) - Q1 (r3^2 - r2^2)) / (r2^2 - r1^2),  (12)

where k is a scale-correcting factor between Q1 and Q2. That factor can be computed by applying the rank-1 condition to the ellipses (note that since C_{r+α} - C_r in equation (11) has rank 1, λ1 Q_{r+α} - λ2 Q_r must have rank 1, too) [9]. Solving for the k that makes Q1 - k Q2 a rank-1 matrix yields the scale-correcting factor. Therefore, equation (12) allows us to project a circle of any radius given the projections of two circles, all of them concentric.

This process can also be used to find the projected center of the concentric circles. As the projected center of a circle is always enclosed by its projected ellipse, projecting circles of smaller and smaller radii shrinks the region where the projected center can lie. In the limit, a circle of radius zero projects onto the projected center of the circles. Applying equation (12) to a circle of radius r3 = 0 results in an ellipse (of radius zero) whose center is at the projected center of the concentric circles (xc, yc). The center of an ellipse in matrix form is given by [8]:

xc = (Q(2,2) Q(1,3) - Q(1,2) Q(2,3)) / (Q(1,2)^2 - Q(1,1) Q(2,2))   and   yc = (Q(2,3) Q(1,1) - Q(1,2) Q(1,3)) / (Q(1,2)^2 - Q(1,1) Q(2,2)).
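The appendix procedure can be exercised end to end on synthetic data. In this hedged sketch (illustrative camera values), k is obtained as the repeated generalized eigenvalue of the pair (Q1, Q2), which is one way to impose the rank-1 condition, and equation (12) with r3 = 0 yields the projected center:

```python
import numpy as np

# Synthetic plane-to-image homography H = A [R1 R2 T] (illustrative values)
f, u0, v0, th = 700.0, 300.0, 200.0, np.deg2rad(35)
R = np.array([[1.0, 0.0, 0.0],
              [0.0, np.cos(th), -np.sin(th)],
              [0.0, np.sin(th), np.cos(th)]])
T = np.array([2.0, -1.0, 40.0])
A = np.array([[f, 0.0, u0], [0.0, f, v0], [0.0, 0.0, 1.0]])
H = A @ np.column_stack([R[:, 0], R[:, 1], T])

def project_circle(r):
    # Image conic of the circle X^2 + Y^2 = r^2, normalized to unit norm
    # to mimic the arbitrary scale of a fitted ellipse
    Hinv = np.linalg.inv(H)
    Q = Hinv.T @ np.diag([1.0, 1.0, -r * r]) @ Hinv
    return Q / np.linalg.norm(Q)

r1, r2 = 2.6, 5.0
Q1, Q2 = project_circle(r1), project_circle(r2)

# Rank-1 condition: Q1 - k Q2 must have rank 1, so k is the repeated
# generalized eigenvalue of the pair (Q1, Q2)
w = np.sort(np.linalg.eigvals(np.linalg.solve(Q2, Q1)).real)
k = (w[0] + w[1]) / 2 if abs(w[0] - w[1]) < abs(w[1] - w[2]) else (w[1] + w[2]) / 2

# Equation (12) with r3 = 0: a degenerate "ellipse of radius zero"
Q3 = (k * Q2 * (0 - r1 * r1) - Q1 * (0 - r2 * r2)) / (r2 * r2 - r1 * r1)

# Its center is the projected center of the concentric circles
den = Q3[0, 1] ** 2 - Q3[0, 0] * Q3[1, 1]
xc = (Q3[1, 1] * Q3[0, 2] - Q3[0, 1] * Q3[1, 2]) / den
yc = (Q3[1, 2] * Q3[0, 0] - Q3[0, 1] * Q3[0, 2]) / den
```

On real images Q1 and Q2 come from ellipse fitting, so their scales are arbitrary; that is exactly what the factor k compensates for.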
Simultaneous Vanishing Point Detection and Camera Calibration from Single Images Bo Li, Kun Peng, Xianghua Ying, and Hongbin Zha The Key Lab of Machine Perception (Ministry of Education), Peking University,
More informationCameras and Radiometry. Last lecture in a nutshell. Conversion Euclidean -> Homogenous -> Euclidean. Affine Camera Model. Simplified Camera Models
Cameras and Radiometry Last lecture in a nutshell CSE 252A Lecture 5 Conversion Euclidean -> Homogenous -> Euclidean In 2-D Euclidean -> Homogenous: (x, y) -> k (x,y,1) Homogenous -> Euclidean: (x, y,
More information1.2 Related Work. Panoramic Camera PTZ Dome Camera
3rd International Conerence on Multimedia Technology ICMT 2013) A SPATIAL CALIBRATION METHOD BASED ON MASTER-SLAVE CAMERA Hao Shi1, Yu Liu, Shiming Lai, Maojun Zhang, Wei Wang Abstract. Recently the Master-Slave
More information3-D D Euclidean Space - Vectors
3-D D Euclidean Space - Vectors Rigid Body Motion and Image Formation A free vector is defined by a pair of points : Jana Kosecka http://cs.gmu.edu/~kosecka/cs682.html Coordinates of the vector : 3D Rotation
More informationAugmented Reality II - Camera Calibration - Gudrun Klinker May 11, 2004
Augmented Reality II - Camera Calibration - Gudrun Klinker May, 24 Literature Richard Hartley and Andrew Zisserman, Multiple View Geometry in Computer Vision, Cambridge University Press, 2. (Section 5,
More informationCamera matrix calibration using circular control points and separate correction of the geometric distortion field
Camera matrix calibration using circular control points and separate correction o the geometric distortion ield Victoria Rudakova, Pascal Monasse To cite this version: Victoria Rudakova, Pascal Monasse.
More informationPerspective Projection Describes Image Formation Berthold K.P. Horn
Perspective Projection Describes Image Formation Berthold K.P. Horn Wheel Alignment: Camber, Caster, Toe-In, SAI, Camber: angle between axle and horizontal plane. Toe: angle between projection of axle
More informationHW 1: Project Report (Camera Calibration)
HW 1: Project Report (Camera Calibration) ABHISHEK KUMAR (abhik@sci.utah.edu) 1 Problem The problem is to calibrate a camera for a fixed focal length using two orthogonal checkerboard planes, and to find
More informationStructure from motion
Structure from motion Structure from motion Given a set of corresponding points in two or more images, compute the camera parameters and the 3D point coordinates?? R 1,t 1 R 2,t R 2 3,t 3 Camera 1 Camera
More informationSingle View Geometry. Camera model & Orientation + Position estimation. Jianbo Shi. What am I? University of Pennsylvania GRASP
Single View Geometry Camera model & Orientation + Position estimation Jianbo Shi What am I? 1 Camera projection model The overall goal is to compute 3D geometry of the scene from just 2D images. We will
More information1 Projective Geometry
CIS8, Machine Perception Review Problem - SPRING 26 Instructions. All coordinate systems are right handed. Projective Geometry Figure : Facade rectification. I took an image of a rectangular object, and
More informationComputer Vision. Coordinates. Prof. Flávio Cardeal DECOM / CEFET- MG.
Computer Vision Coordinates Prof. Flávio Cardeal DECOM / CEFET- MG cardeal@decom.cefetmg.br Abstract This lecture discusses world coordinates and homogeneous coordinates, as well as provides an overview
More informationCS201 Computer Vision Camera Geometry
CS201 Computer Vision Camera Geometry John Magee 25 November, 2014 Slides Courtesy of: Diane H. Theriault (deht@bu.edu) Question of the Day: How can we represent the relationships between cameras and the
More informationCIS 580, Machine Perception, Spring 2016 Homework 2 Due: :59AM
CIS 580, Machine Perception, Spring 2016 Homework 2 Due: 2015.02.24. 11:59AM Instructions. Submit your answers in PDF form to Canvas. This is an individual assignment. 1 Recover camera orientation By observing
More informationVision-Based Registration for Augmented Reality with Integration of Arbitrary Multiple Planes
Vision-Based Registration for Augmented Reality with Integration of Arbitrary Multiple Planes Yuo Uematsu and Hideo Saito Keio University, Dept. of Information and Computer Science, Yoohama, Japan {yu-o,
More informationCOSC579: Scene Geometry. Jeremy Bolton, PhD Assistant Teaching Professor
COSC579: Scene Geometry Jeremy Bolton, PhD Assistant Teaching Professor Overview Linear Algebra Review Homogeneous vs non-homogeneous representations Projections and Transformations Scene Geometry The
More informationRESAMPLING DIGITAL IMAGERY TO EPIPOLAR GEOMETRY
RESAMPLING DIGITAL IMAGERY TO EPIPOLAR GEOMETRY Woosug Cho Toni Schenk Department of Geodetic Science and Surveying The Ohio State University, Columbus, Ohio 43210-1247 USA Mustafa Madani Intergraph Corporation,
More informationAn Overview of Matchmoving using Structure from Motion Methods
An Overview of Matchmoving using Structure from Motion Methods Kamyar Haji Allahverdi Pour Department of Computer Engineering Sharif University of Technology Tehran, Iran Email: allahverdi@ce.sharif.edu
More informationCamera Models and Image Formation. Srikumar Ramalingam School of Computing University of Utah
Camera Models and Image Formation Srikumar Ramalingam School of Computing University of Utah srikumar@cs.utah.edu VisualFunHouse.com 3D Street Art Image courtesy: Julian Beaver (VisualFunHouse.com) 3D
More informationIntroduction to Homogeneous coordinates
Last class we considered smooth translations and rotations of the camera coordinate system and the resulting motions of points in the image projection plane. These two transformations were expressed mathematically
More informationBowling for Calibration: An Undemanding Camera Calibration Procedure Using a Sphere
Bowling for Calibration: An Undemanding Camera Calibration Procedure Using a Sphere Pietro Cerri, Oscar Gerelli, and Dario Lodi Rizzini Dipartimento di Ingegneria dell Informazione Università degli Studi
More informationIdentifying Car Model from Photographs
Identifying Car Model from Photographs Fine grained Classification using 3D Reconstruction and 3D Shape Registration Xinheng Li davidxli@stanford.edu Abstract Fine grained classification from photographs
More informationAffine Surface Reconstruction By Purposive Viewpoint Control
Affine Surface Reconstruction By Purposive Viewpoint Control Kiriakos N. Kutulakos kyros@cs.rochester.edu Department of Computer Sciences University of Rochester Rochester, NY 14627-0226 USA Abstract We
More informationSynchronized Ego-Motion Recovery of Two Face-to-Face Cameras
Synchronized Ego-Motion Recovery of Two Face-to-Face Cameras Jinshi Cui, Yasushi Yagi, Hongbin Zha, Yasuhiro Mukaigawa, and Kazuaki Kondo State Key Lab on Machine Perception, Peking University, China {cjs,zha}@cis.pku.edu.cn
More informationProjection Model, 3D Reconstruction and Rigid Motion Estimation from Non-central Catadioptric Images
Projection Model, 3D Reconstruction and Rigid Motion Estimation rom Non-central Catadioptric Images Nuno Gonçalves and Helder Araújo Institute o Systems and Robotics - Coimbra University o Coimbra Polo
More informationOn the Geometry of Visual Correspondence
International Journal o Computer Vision 21(3), 223 247 (1997) c 1997 Kluwer Academic Publishers. Manuactured in The Netherlands. On the Geometry o Visual Correspondence CORNELIA FERMÜLLER AND YIANNIS ALOIMONOS
More informationPose Estimation from Circle or Parallel Lines in a Single Image
Pose Estimation from Circle or Parallel Lines in a Single Image Guanghui Wang 1,2, Q.M. Jonathan Wu 1,andZhengqiaoJi 1 1 Department of Electrical and Computer Engineering, The University of Windsor, 41
More information3D Sensing. 3D Shape from X. Perspective Geometry. Camera Model. Camera Calibration. General Stereo Triangulation.
3D Sensing 3D Shape from X Perspective Geometry Camera Model Camera Calibration General Stereo Triangulation 3D Reconstruction 3D Shape from X shading silhouette texture stereo light striping motion mainly
More informationDetection of Concentric Circles for Camera Calibration
Detection of Concentric Circles for Camera Calibration Guang JIANG and Long QUAN Department of Computer Science Hong Kong University of Science and Technology Kowloon, Hong Kong {gjiang,quan}@cs.ust.hk
More informationA Study on the Distortion Correction Methodology of Vision Sensor
, July 2-4, 2014, London, U.K. A Study on the Distortion Correction Methodology of Vision Sensor Younghoon Kho, Yongjin (James) Kwon 1 Abstract This study investigates a simple and effective vision calibration
More informationHomogeneous Coordinates. Lecture18: Camera Models. Representation of Line and Point in 2D. Cross Product. Overall scaling is NOT important.
Homogeneous Coordinates Overall scaling is NOT important. CSED44:Introduction to Computer Vision (207F) Lecture8: Camera Models Bohyung Han CSE, POSTECH bhhan@postech.ac.kr (",, ) ()", ), )) ) 0 It is
More informationUncalibrated Video Compass for Mobile Robots from Paracatadioptric Line Images
Uncalibrated Video Compass for Mobile Robots from Paracatadioptric Line Images Gian Luca Mariottini and Domenico Prattichizzo Dipartimento di Ingegneria dell Informazione Università di Siena Via Roma 56,
More informationCamera Calibration. Schedule. Jesus J Caban. Note: You have until next Monday to let me know. ! Today:! Camera calibration
Camera Calibration Jesus J Caban Schedule! Today:! Camera calibration! Wednesday:! Lecture: Motion & Optical Flow! Monday:! Lecture: Medical Imaging! Final presentations:! Nov 29 th : W. Griffin! Dec 1
More informationThree-Dimensional Viewing Hearn & Baker Chapter 7
Three-Dimensional Viewing Hearn & Baker Chapter 7 Overview 3D viewing involves some tasks that are not present in 2D viewing: Projection, Visibility checks, Lighting effects, etc. Overview First, set up
More informationcalibrated coordinates Linear transformation pixel coordinates
1 calibrated coordinates Linear transformation pixel coordinates 2 Calibration with a rig Uncalibrated epipolar geometry Ambiguities in image formation Stratified reconstruction Autocalibration with partial
More informationAn idea which can be used once is a trick. If it can be used more than once it becomes a method
An idea which can be used once is a trick. If it can be used more than once it becomes a method - George Polya and Gabor Szego University of Texas at Arlington Rigid Body Transformations & Generalized
More informationHow to Compute the Pose of an Object without a Direct View?
How to Compute the Pose of an Object without a Direct View? Peter Sturm and Thomas Bonfort INRIA Rhône-Alpes, 38330 Montbonnot St Martin, France {Peter.Sturm, Thomas.Bonfort}@inrialpes.fr Abstract. We
More informationModule 4F12: Computer Vision and Robotics Solutions to Examples Paper 2
Engineering Tripos Part IIB FOURTH YEAR Module 4F2: Computer Vision and Robotics Solutions to Examples Paper 2. Perspective projection and vanishing points (a) Consider a line in 3D space, defined in camera-centered
More informationStereo Vision. MAN-522 Computer Vision
Stereo Vision MAN-522 Computer Vision What is the goal of stereo vision? The recovery of the 3D structure of a scene using two or more images of the 3D scene, each acquired from a different viewpoint in
More informationGeometry of image formation
eometry of image formation Tomáš Svoboda, svoboda@cmp.felk.cvut.cz Czech Technical University in Prague, Center for Machine Perception http://cmp.felk.cvut.cz Last update: November 3, 2008 Talk Outline
More informationEuclidean Reconstruction Independent on Camera Intrinsic Parameters
Euclidean Reconstruction Independent on Camera Intrinsic Parameters Ezio MALIS I.N.R.I.A. Sophia-Antipolis, FRANCE Adrien BARTOLI INRIA Rhone-Alpes, FRANCE Abstract bundle adjustment techniques for Euclidean
More informationStructure from Motion. Prof. Marco Marcon
Structure from Motion Prof. Marco Marcon Summing-up 2 Stereo is the most powerful clue for determining the structure of a scene Another important clue is the relative motion between the scene and (mono)
More informationCamera Calibration and 3D Reconstruction from Single Images Using Parallelepipeds
Camera Calibration and 3D Reconstruction from Single Images Using Parallelepipeds Marta Wilczkowiak Edmond Boyer Peter Sturm Movi Gravir Inria Rhône-Alpes, 655 Avenue de l Europe, 3833 Montbonnot, France
More information