IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, VOL. 31, NO. 9, SEPTEMBER 2009

Calibration of Cameras with Radially Symmetric Distortion

Jean-Philippe Tardif, Peter Sturm, Member, IEEE Computer Society, Martin Trudeau, and Sébastien Roy

Abstract: We present algorithms for plane-based calibration of general radially distorted cameras. By this, we understand cameras that have a distortion center and an optical axis such that the projection rays of pixels lying on a circle centered on the distortion center form a right viewing cone centered on the optical axis. The camera is said to have a single viewpoint (SVP) if all such viewing cones have the same apex (the optical center); otherwise, we speak of NSVP cases. This model encompasses the classical radial distortion model [5], fisheyes, and most central or noncentral catadioptric cameras. Calibration consists in the estimation of the distortion center, the opening angles of all viewing cones, and their optical centers. We present two approaches of computing a full calibration from dense correspondences of a single or multiple planes with known euclidean structure. The first one is based on a geometric constraint linking viewing cones and their intersections with the calibration plane (conic sections). The second approach is a homography-based method. Experiments using simulated cameras and a broad variety of real cameras show great stability. Furthermore, we provide a comparison with Hartley-Kang's algorithm [12], which, however, cannot handle such a broad variety of camera configurations, showing similar performance.

Index Terms: Calibration, omnidirectional vision, fisheye, catadioptric camera.

1 INTRODUCTION

IN the last few years, we have seen an increasing interest in nonconventional cameras and projection models, going beyond affine or perspective projection. There exists a large diversity of camera models, many of which are specific to certain types of projections [4], [7], others applicable to families of cameras such as central catadioptric systems [], [], [7], [9]. All of these models are described by a few intrinsic parameters, much like the classical pinhole model, possibly enhanced with radial or decentering distortion coefficients. Calibration methods exist for all these models, and they are usually tailor-made for them, i.e., they cannot be used for any other projection model [5]. Several works address the calibration problem from an opposite point of view by adopting a very generic imaging model that incorporates most commonly used cameras [6], [0], [], [0], []. In the most general case, cameras are modeled by attributing an individual projection ray to each pixel. Such a model is highly expressive, but it is difficult to obtain a stable calibration of cameras with it, at least with few input images taken in an uncontrolled environment. Finally,

J.-P. Tardif is with the Department of Computer and Information Science, University of Pennsylvania, Levine Hall, Walnut Street, Philadelphia, PA. E-mail: tardifj@seas.upenn.edu.
P. Sturm is with INRIA Grenoble Rhône-Alpes, 655 Avenue de l'Europe, 38330 Montbonnot, France. E-mail: Peter.Sturm@inrialpes.fr.
M. Trudeau is with Mercer, McGill College, Suite 800, Montréal, QC, Canada. E-mail: martin.trudeau@mercer.com.
S. Roy is with the Département d'Informatique et de Recherche Opérationnelle, Université de Montréal, chemin de la Tour, Montréal, QC, Canada. E-mail: roys@iro.umontreal.ca.

Manuscript received 4 Aug. 2007; revised 6 Apr. 2008; accepted 8 July 2008; published online July 2008. Recommended for acceptance by F. Kahl.
several researchers have proposed a compromise between parametric and such generic models. They assume radial symmetry around the distortion center (often considered as coinciding with the principal point) but with a general distortion function [], [4], [5], [6], [8], [9]. In this paper, we use a model of this category which we find sufficiently general to model many common types of cameras. By having fewer parameters than the fully generic model, calibration remains easy and stable. It encompasses many common camera models, such as pinhole (modulo aspect ratio and skew), the classical polynomial radial distortion model, fisheyes, or any catadioptric system whose mirror is a surface of revolution, and for which the optical axis of the perspective (or affine) camera looking at it is aligned with the mirror's revolution axis. We describe our model in the following and refer to Fig. 1 for an illustration. Cameras are modeled using the notion of a distortion center in the image whose back-projection yields the optical axis in 3D. For cameras with radially symmetric distortion, the projection rays associated with pixels lying on the same distortion circle centered on the distortion center lie on a right viewing cone centered on the optical axis. In the following, we denote by d the radius of a distortion circle; for ease of expression, we also speak of distortion circle d, meaning the distortion circle of radius d. For a perspective camera with focal length f, the opening angle of distortion circle d's viewing cone is given by arctan(d/f). Hence, the opening angles of all viewing cones are dictated by the camera's focal length. In our model, this is generalized: In the most general case, one may not assume any relation between opening angles of different viewing cones. In other words, we may have one individual focal length f_d per distortion circle. We may define a focal length function f_d = f(d). In practice, we handle such a general model in two different ways: by assuming a polynomial form for the focal length function or by subsampling it and

2 TARDIF ET AL.: CALIBRATION OF CAMERAS WITH RADIALLY SYMMETRIC DISTORTION 55 Fig.. Our camera moel in the case of a noncentral projection (see text for explanations). The inlaye illustrations show the istortion center (in blue) an four istortion circles an their corresponing viewing cone an optical center. estimating/using one focal length per iscrete sample of. We also consier its reciprocal, the istortion function that maps the opening angle of a viewing cone to the raius of the associate istortion circle. Let us introuce a few aitional notations. A line spanne by a pixel an the istortion center is calle a raial line an the plane spanne by the projection ray associate with the pixel an the optical axis is calle a raial plane. In our camera moel, we specify that angles between raial lines an associate raial planes are ientical (otherwise, it woul be a very uncommon camera). Our moel comprises central cameras [single viewpoint (SVP)], where all viewing cones have the same apex (the optical center), but also noncentral ones (NSVP), for which the viewing cones apexes lie anywhere on the optical axis. In the latter case, we may speak of one optical center per viewing cone. For convenience, these terms are summarize in Table. Problem statement. We want to calibrate cameras base on the above moel from one or several images of a calibration plane in unknown positions. Calibration consists in estimating the camera s focal length function or, equivalently, its istortion function. Further, for the NSVP case, we have to estimate the optical centers associate with all istortion circles. The problem of calibration from a planar scene of known geometry has been stuie intensively in computer vision. It is wiely accepte that one seeks a solution that minimizes the sum of reprojection errors of the points on the calibration planes. This being a nonconvex function, iterative methos must be use. Besies the problem of convergence to a local minimum, they require some estimate for the internal parameters of the camera, generally provie by the camera manufacturer [], []. Both of these ifficulties can be overcome by noniterative approaches relying on algebraic error functions as long as they provie goo accuracy. Then, these results can be refine iteratively using a geometric error. Contributions. Our contributions take the form of two noniterative calibration algorithms. Their input is the eucliean structure of the plane(s) an a ense or sparse matching between the plane(s) an the camera image. The first approach is base on algebraic constraints relating the projection of the points on the calibration plane on the image. It is base on the observation that each istortion circle an the associate viewing cone can be consiere as an iniviual perspective camera. Hence, the projection of a calibration plane to the image, when reuce to a single istortion circle, can be expresse by a homography an we can reaily apply plane-base calibration algorithms esigne for perspective cameras. Our secon approach is base on a etaile geometric analysis. Consier one image of a calibration plane; each viewing cone of the camera intersects the calibration plane in a conic, which we call a calibration conic. We escribe geometric constraints relating calibration conics to the orientation an position of the camera as well as its intrinsic parameters; these are at the basis of our secon calibration approach. Our first approach usually performs best in practice. 
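As a concrete illustration of the problem statement above, the following minimal sketch (ours, not the authors' code) shows what a calibration under this model provides: given the distortion center, a sampled focal length function f(d), and, for NSVP cameras, per-circle displacements t_d of the optical centers along the optical axis, every pixel can be back-projected to a ray of its viewing cone. The sampling, the linear interpolation, and all names and numbers below are our own assumptions.

import numpy as np

def back_project(pixel, dist_center, radii, f_of_d, t_of_d=None):
    """Return (origin, direction) of the projection ray of `pixel`, in the
    camera frame (optical axis = Z axis, distortion center at the image origin)."""
    dx, dy = pixel[0] - dist_center[0], pixel[1] - dist_center[1]
    d = np.hypot(dx, dy)                      # radius of the distortion circle
    f_d = np.interp(d, radii, f_of_d)         # focal length of this circle
    t_d = 0.0 if t_of_d is None else np.interp(d, radii, t_of_d)  # NSVP shift
    origin = np.array([0.0, 0.0, t_d])        # optical center of this viewing cone
    direction = np.array([dx, dy, f_d])       # the ray lies on the right viewing cone
    return origin, direction / np.linalg.norm(direction)

# Example with an SVP camera and a monotonically decreasing sampled f(d).
radii = np.linspace(0.0, 500.0, 51)
f_samples = 400.0 - 0.0004 * radii ** 2
print(back_project((320.0, 240.0), (310.0, 235.0), radii, f_samples))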
Furthermore, the first approach can accurately estimate the projection model for a noncentral camera. Based on our experimental results, we argue that, in the case of a general model such as the one we propose, an approach directly estimating a noncentral projection gives better accuracy than an approach based on: 1) estimating an approximate central projection and 2) iterative refinement using a noncentral model [5]. Organization. A geometric study of our model is presented in Section 2. Our first calibration approach, akin to plane-based calibration of perspective cameras, is described in Section 3, followed by our second, more geometric, approach in Section 4. Our algorithms assume a known position of the distortion center, but we also show how to estimate it, using repeated calibrations for different candidates, cf. Section 5. Several practical issues and experimental results, including a comparison with
TABLE 1: Summary of the Terms Related to Our Camera Model

3 554 IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, VOL., NO. 9, SEPTEMBER 009 Fig.. Illustrations of the geometry of viewing cones, calibration conics (here ellipses), an location of optical center in the SVP case. (a) Complete illustration for one viewing cone. (b) View of the calibration plane, showing many cones calibration ellipses. Note that their major axes are collinear. (c) Sie view of the viewpoint conics (hyperbolas in this case) associate with many calibration conics. Hartley-Kang s (HK) methos [], are presente in Sections 6 an 7, respectively. Finally, we conclue in Section 8. GEOMETRY In this section, we present geometric constraints for our camera moel. Reaers intereste in implementing the homography-base approach may wish to skip to Sections. an.. One Distortion Circle in One Image of a Calibration Plane Let us consier one istortion circle in the image plane. Its associate viewing cone cuts the calibration plane in a calibration conic. From ense matches between image an calibration plane, we can compute this calibration conic (see Section 6 for more etails on the ense matching). This conic can be either an ellipse or a hyperbola (the parabola is only a theoretically interesting case). If we knew the position of the camera s optical center relative to the calibration plane, then we coul irectly compute the viewing cone of our istortion circle, i.e., the cone that has the optical center as apex an that contains the above calibration conic. As mentione, that cone has several useful properties: Its axis is the camera s optical axis an it is a right cone, i.e., rotationally symmetric with respect to its axis. From the cone, the focal length of the consiere istortion circle can be easily compute: The cone s opening angle is equal to the fiel of view. In practice, we o not know the optical center s position relative to the calibration plane. In the following, we show geometrical relations between the calibration conic, the optical center, an the optical axis of the camera, which allow to compute the optical center. Recall that, when talking about optical center, we mean the optical center per istortion circle; they all lie on the optical axis an, in the SVP case, they are ientical. Without loss of generality, we assume that the calibration plane is the plane Z ¼ 0 an that the calibration conic " is given by the matrix " / iagða; b; Þ with jbj a>0 (/ represents equality up to scale), i.e., the X-axis is the conic s major axis. The type of the calibration conic " epens on a an b as follows: b a>0 ellipse ðþ b<0an jbj >a>0 hyperbola: Our aim is to provie constraints on the position of the optical center, as well as on the orientation of the optical axis, from this conic. We first o this for the case of a calibration ellipse, then for the case of a hyperbola. The case of calibration ellipses. Let us first state a wellknown result. Consier a right cone whose apex is a point with real coorinates, an its intersection with a plane. For now, we assume that the intersection is an ellipse (the case of the hyperbola will be iscusse later). It is easy to prove that the orthogonal projection of the cone s apex onto the plane lies on the ellipse s major axis (cf. Fig. a). This implies that the cone s apex lies in the plane that is orthogonal to the ellipse s supporting plane an that contains its major axis. For our problem, this means that the optical center must lie in the plane Y ¼ 0 (since the ellipse lies in plane Z ¼ 0 an has the X-axis as major axis). 
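The following short numerical check (our own illustration, not from the paper) verifies the well-known result just stated: for a right cone with a real apex, the orthogonal projection of the apex onto the calibration plane Z = 0 lies on the major axis of the calibration ellipse. The apex position, tilt, and opening angle are arbitrary example values.

import numpy as np

apex = np.array([0.4, 0.0, 1.0])                     # optical center C
tilt = np.deg2rad(25.0)                              # optical axis tilted in the XZ plane
axis = np.array([np.sin(tilt), 0.0, -np.cos(tilt)])  # points toward the plane Z = 0
half_angle = np.deg2rad(20.0)

# Orthonormal basis perpendicular to the cone axis.
e1 = np.array([np.cos(tilt), 0.0, np.sin(tilt)])
e2 = np.array([0.0, 1.0, 0.0])

# Sample rays on the right cone and intersect them with the plane Z = 0.
phis = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
dirs = (np.cos(half_angle) * axis
        + np.sin(half_angle) * (np.outer(np.cos(phis), e1) + np.outer(np.sin(phis), e2)))
s = -apex[2] / dirs[:, 2]
pts = apex + s[:, None] * dirs                       # points of the calibration conic

# Fit a conic a x^2 + b xy + c y^2 + d x + e y + f = 0 to the points.
x, y = pts[:, 0], pts[:, 1]
A = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
coef = np.linalg.svd(A)[2][-1]
a, b, c = coef[0], coef[1], coef[2]
print("ellipse" if b * b - 4 * a * c < 0 else "hyperbola")
# The conic is symmetric about Y = 0 (the xy and y coefficients vanish), so its
# major axis is the X-axis, i.e., the line containing the apex projection (0.4, 0).
print("b, e ~ 0:", abs(coef[1]) < 1e-8, abs(coef[4]) < 1e-8)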
We may further constrain its position C ¼ðX; 0;Z;Þ >, as follows [4]. The cone with C as apex an that contains the calibration ellipse " is given by / 6 4 az 0 a XZ 0 0 bz 0 0 a XZ 0 ax Z 0 0 Z Z For this cone to be a right one, its upper left matrix must have a ouble eigenvalue. The three eigenvalues are qffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi bz ; a ðx þ Z Þ 4a Z þ ð aðx þ Z ÞÞ : ðþ The secon an thir eigenvalues cannot be equal for real values of X an Z (besies, in the trivial case, X ¼ Z ¼ 0, which is exclue since it woul correspon to the optical center lying in the calibration plane). The first eigenvalue is equal to the thir one if Z ¼ 0 (exclue for the same reason as above) an to the secon one if abx þ bða bþ Z þða bþ ¼0: ðþ This equation tells us that the optical center lies on a viewpoint conic given by the following matrix an the associate equation: 0 ab X / 4 bða bþ 5; ðx Z Z A ¼ 0: ð4þ a b This is a hyperbola, ue to b a>0 (cf. ()); it is sketche in Fig. a. Furthermore, its asymptotes correspon to the irections of the two circular cyliners that contain the calibration ellipse. 7 5 :

4 TARDIF ET AL.: CALIBRATION OF CAMERAS WITH RADIALLY SYMMETRIC DISTORTION 555 The case of calibration hyperbolas. The case where the intersection between a cone an the calibration plane yiels a hyperbola is accounte for automatically by the previous formulation. All of the above finings hol here, with the exception that the viewpoint conic is an ellipse. This case typically occurs with very wie-angle cameras or when the angle between the camera s optical axis an the calibration plane is large. Fig.. Viewing cones can also be seen as iniviual perspective cameras with ifferent focal length. A rectifie image can be obtaine by projecting the istortion circles (which lie in ifferent planes) on one plane front (or back for a fiel of view larger than 80 ). Let us now consier the orientation of the optical axis. Due to (), we have an optical center given by sffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi abx Z ¼ þ a b : ð5þ bðb aþ Here, we exclue the case a ¼ b, which woul correspon to the camera looking straight at the calibration plane. The irection of the cone s axis is given by the eigenvector associate with the single eigenvalue of, i.e., the thir one, augmente with a homogeneous coorinate 0: p ffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi >: bðb aþða bx þ a bþ 0 abx 0 ð6þ We now show that the cone s axis is ientical to the tangent of the hyperbola in the optical center C, which is given by the line (in the plane Y ¼ 0) 0 0 X p Z A ffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi bðb aþða bx þ a bþ A: a b Its point at infinity is (still in the plane Y ¼ 0) p ffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi >; bðb aþða bx þ a bþ abx 0 i.e., it is ientical to the point given in (6). Hence, for an optical center on, the optical axis is irectly given by the associate tangent. Coming back to (), the eigenvalues of the cone can be use to compute the focal length f associate to a istortion circle of raius (see Fig. ). The value f = is the tangent of half the opening angle of the viewing cone. Furthermore, it can be shown that ðf =Þ is equal to the negative of the ratio of the ouble an the single eigenvalue of, i.e., the negative of the ratio of the first an the thir eigenvalues given in (). Using (5), we obtain f ¼ bðabx þ a bþ : ð7þ aða bþ This relation will prove useful in Section 4... Multiple Distortion Circles So far, we have shown that, for an iniviual istortion circle, the associate viewing cone can be etermine from the associate calibration conic, up to one egree of freeom (location on the viewpoint conic an associate orientation of the optical axis). We now show how to get a unique solution, when consiering several istortion circles simultaneously. Let us first note that calibration conics corresponing to ifferent istortion circles are not inepenent: their major axes are collinear (cf. Fig. b), even in the NSVP case. Their centers are not ientical, however, unless they are all circles, which can only happen when the camera looks straight at the calibration plane. Let be the viewpoint conic associate with istortion circle (of raius), all being given in the same coorinate frame. 
In the case of an SVP camera, the optical center must lie on all these conics. Furthermore, the optical axis is tangent to all of them. This implies that all conics touch each other (have a ouble intersection point) in the optical center. This is illustrate in Fig. c. A naive algorithm woul compute the viewpoint conic for all calibration conics an seek their single intersection/contact point. However, very little noise can cause two viewpoint conics to have no real intersection point at all, instea of a ouble one. Interestingly, this constraint gives a geometric explanation of the ambiguity, or correlation, between camera position an focal length that often occurs when calibrating a camera from a single view. Consiering perfectly recovere viewpoint conics (such as in Fig. c), the ambiguity is observe as the area where the viewpoint conics are very close to each other. Clearly, a very low perturbation of the curves can result in significant but correlate errors on the optical center an the focal length. In the NSVP case, a separate optical center correspons to each istortion circle an viewing cone. Hence, the viewpoint conics will not have a single contact point anymore. However, the optical axis is share by all viewing cones. Hence, it is the single (in general) line that is tangent to all viewpoint conics. Furthermore, each contact point with the optical axis is the associate optical center.. Moel Parameterization It is possible to consier the istortion circles an associate viewing cones as iniviual perspective cameras, with ifferent focal lengths but ientical principal points [4], [5], [6]. In the SVP case, extrinsic parameters of all these cameras are ientical, whereas, in the NSVP case, they all share the same orientation an the optical centers are merely isplace along the optical axis. Let us consier the istortion circle of raius an one image of a calibration plane. From point corresponences between pixels on this circle an points on the calibration plane (on the calibration conic), we can compute a plane-toimage homography H. For simplicity, let us assume that

5 556 IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, VOL., NO. 9, SEPTEMBER 009 image coorinates have been translate to put the istortion center at the origin. The homography can then be ecompose such that u x v A / y A ¼ K R4 0 t þ t r 0 0 5@ x y A; ð8þ where / means equality up to scale, ðx; yþ is a calibration point, ðu; vþ is a pixel on the istortion circle, an R an t are the rotation matrix an translation vector representing camera pose, respectively (same for all ). The scalar t allows us to moel translational isplacement of iniviual viewing cones along the optical axis (given by r >, the thir row of R), which is neee for NSVP cameras. For SVP cameras, t is set to 0 for all. As for K, it is a calibration matrix iagðf ;f ; Þ, where f is the focal length associate with the consiere istortion circle. We may interpret the relation between an f as a istortion function applie to a perspective camera whose unistorte image plane is front (cf. Fig. ). Note that this parameterization only accounts for viewing cones with fiel of view smaller than 80. Larger fiels of view can be moele by aing a reflection to the rotational component of the pose, R 0 ¼ iagð; ; ÞR, an a corresponing image plane back. Equivalently, one may escribe these cones as cameras with negative focal length as presente in [5]..4 Overview of the Approaches We briefly escribe our two calibration approaches in terms of the concepts previously introuce in this section. Both procee in two steps. In the homography-base algorithm (Section ), the optical axis for each view of the calibration plane is first estimate, each one up to four possibilities. The correct choice is mae at the secon step, which consists of simultaneously estimating the position of the camera on the optical axis for all views an the opening angle (equivalently the focal length) of all the cones. The secon approach is base on the above introuce geometric constraints, although ultimately some algebraic manipulations are require to avoi an iterative solution. We iscuss two solutions for estimating the pose of the camera for each view, using the calibration conics. Then, the opening angles of the cones are estimate by refitting the calibration conics using a parameterization enforcing the fit to be consistent with the pose of the camera. HOMOGRAPHY-BASED CALIBRATION. Unconstraine Approach In [4], an approach base on recovering the Image of the Absolute Conic, associate to each istortion circle, from one or many images of a calibration plane, is introuce. Once the internal parameters are recovere, the camera external parameters are estimate using the approach presente in [9], [], []. The weakness of this approach is that, since the calibration is iniviually performe on each istortion circle, it oes not enforce SVP or NSVP constraints. However, it is useful for estimating the istortion center. As mentione in Section, this moel oes not inclue a skew between pixel axes or an aspect ratio ifferent from. Also, it assumes that the istortion center is at the principal point. as escribe in Section 5. This algorithm will be referre to as IAC.. Constraine Approach We present another homography-base approach that can enforce the constraints irectly. It is relate to HK approach, but it also hanles noncentral omniirectional cameras. SVP case, one image. In the SVP case, t is zero for all. It follows that K q / 4 f 0 5q / R4 0 t 5 Q 0 0 fflfflfflfflfflfflfflfflfflfflfflffl{zfflfflfflfflfflfflfflfflfflfflfflffl} M hols up to scale. 
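The decomposition in (8) above can be turned directly into a forward mapping for one distortion circle. The sketch below is our own and only illustrative: a calibration point (x, y) on the plane is mapped through the pose [r_1 r_2 | t + t_d r_3] and K_d = diag(f_d, f_d, 1), yielding image coordinates relative to the distortion center. The pose, focal length, and point values are made up.

import numpy as np

def project_on_circle(xy, R, t, f_d, t_d=0.0):
    """Map a calibration-plane point to image coordinates for the distortion
    circle with focal length f_d; t_d is the displacement of its optical center
    along the optical axis (zero for an SVP camera)."""
    Q = np.array([xy[0], xy[1], 1.0])
    r3 = R[2, :]                                   # third row of R (optical axis term in (8))
    M = np.column_stack([R[:, 0], R[:, 1], t + t_d * r3])
    q = np.diag([f_d, f_d, 1.0]) @ (M @ Q)
    return q[:2] / q[2]

# Hypothetical pose and intrinsics, just to exercise the function.
angle = np.deg2rad(10.0)
R = np.array([[np.cos(angle), 0.0, np.sin(angle)],
              [0.0, 1.0, 0.0],
              [-np.sin(angle), 0.0, np.cos(angle)]])
t = np.array([0.05, -0.02, 1.5])
print(project_on_circle((0.1, 0.2), R, t, f_d=350.0))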
If we ivie the first by the secon coorinates of both sies, we obtain an equation that is inepenent from the istortion circle, i.e., q q ¼ ðmqþ ðmqþ : ð9þ ð0þ We finally obtain a linear equation on the six coefficients in the first two rows of the pose matrix M, i.e., q ðm Q þ M Q þ M Q Þ q ðm Q þ M Q þ M Q Þ¼0: ðþ The above equation being linear homogeneous, we may use all point corresponences between the calibration plane an the image to estimate the first two rows of M, up to scale. Until here, our algorithm is very similar to [], but the following iffers significantly. The pose, i.e., R an t, can be partially estimate from M. There exists a scalar such that R R r > t R R r > t ¼ M; ðþ where r > i is the ith row of R an M is the upper part of M. This leas to the following observations:. The rotation matrix R can be estimate up to four solutions, from the left part of M. They iffer by a choice of sign for columns an rows of the solution: If R is one correct solution, then the other three are given by DR, RD, DRD, with D ¼ iagð ; ; Þ. Distinguishing the vali rotation matrix will be one later when estimating the full camera position an internal parameters.. The intersection t 0 between the optical axis of the camera an the calibration plane can be recovere as the right nullspace of M since it is the point on the plane projecting onto the istortion center (the origin). Hence, the translation vector t can be estimate up to one egree of freeom, a isplacement along the optical axis, i.e., t ¼ t 0 þ r : ðþ Estimating can be one simultaneously with estimating the focal lengths of all istortion circles. By inserting the solution for t into (9), we obtain

6 TARDIF ET AL.: CALIBRATION OF CAMERAS WITH RADIALLY SYMMETRIC DISTORTION 557 q / 4 f f 0 5R4 0 t 0 r 5Q: 0 0 Due to the orthonormality of R, we obtain 0 f 0 0 q / 4 f 5 R4 0 t fflfflfflfflfflfflfflfflfflfflffl{zfflfflfflfflfflfflfflfflfflfflffl} > C Q: A Let us enote: S ¼ RT 0 Q, which is a known vector. We thus can write the above equation as 0 f S S f q / 4 f S 5 ¼ 4 S 5@ f A ð4aþ S Q Q S =Q S ½qŠ S an obtain the following set of linear equations on the unknowns f an : 4 0 5@ f f A ¼ 0 ð4bþ Q S =Q or 0 q S q Q 4 q S q Q 5 f q S q S A: ð4cþ q S q S 0 0 Note that the equation system is nonhomogeneous, i.e., the solution for the f an is compute exactly, not only up to scale. Also note that the thir equation is useless: In the absence of noise, the term q S q S is zero an gives no constraint on f. With noise, however, the thir equation amits ranom coefficients an using it was foun to bias f towar small values. We thus only use the first two equations of (4c). All point corresponences, from all istortion circles, can be use simultaneously: Each corresponence contributes to estimating an to the focal length of the istortion circle it belongs to. Hence, overall, we have a linear system of size n ðdþþ, where n is the number of point corresponences an D is the number of istortion circles. This assume a known rotation. All rotation matrices among the four possibilities shown above give the same solution for f an, up to ifferent signs. Let 0 be istortion circle of smallest raius. Then, the correction rotation is the one with positive f 0 an. Using many images of a calibration plane. The above equations can be use in a slightly moifie form to simultaneously use many images. Let v be the inex for each view out of V. Each associate partial homography M v can be compute using (). Then, t v0 an R v can be estimate iniviually from each of them. Finally, (4c) is extene naturally to account for many views by simultaneously solving for all isplacements v an focal lengths f. The resulting equation system is of size n ðdþvþ. NSVP case, many views. In the NSVP case, the previous algorithm can be applie nearly without moification. Let us consier the form of the plane-to-image homography for an iniviual istortion circle, given in (8), for the NSVP case. In comparison to the SVP case (9), there is an aitional term t for the position of the isplace optical center on the optical axis: q / H v Q ¼ K R v 4 0 ðt v0 þð v þ t Þr v Þ5Q ð5þ 0 0 ¼ K ðr v T v0 iagð0; 0; v þ t ÞÞQ: Note that t is the same for all views. Since K is a iagonal matrix, the first two coorinates of the right-han sie o not epen on v an t. Hence, like in the SVP case, () can be use to estimate linearly the first two rows of the pose matrix M v, using all point corresponences, for all istortion circles. The pose parameters R v an t v can also be extracte in the same manner, with t v being obtaine up to a isplacement along the optical axis r v. We now consier how to estimate the focal lengths f an the optical center positions v þ t. The equations are ientical to those in the SVP case, with the ifference that v has to be replace by v þ t, i.e., it is not the same for all istortion circles. The set of v an t is an overparameterization, since subtracting a value from all v an aing it to all t leaves (5) unchange. Hence, we may set one of them to any fixe value. The equation system is thus of size n ðd þ V Þ. Polynomial moel. Calibrating a general moel requires many ata points to obtain precise an accurate results, i.e., to avoi overfitting. 
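Before moving on, the linear step described above can be summarized in a few lines. This sketch (ours, following the paper's notation) stacks one equation (11) per correspondence and recovers the first two rows of the pose matrix M, up to scale, as the null vector of the resulting homogeneous system; the intersection of the optical axis with the calibration plane then follows as the right null space of that 2x3 block.

import numpy as np

def estimate_M12(Q, q):
    """Q: Nx3 homogeneous plane points, q: Nx3 homogeneous pixels (image
    coordinates centered on the distortion center). Returns the 2x3 upper part
    of the pose matrix M, up to scale."""
    # One row of (11):  q2 * (M11, M12, M13) . Q  -  q1 * (M21, M22, M23) . Q  =  0
    A = np.hstack([q[:, 1:2] * Q, -q[:, 0:1] * Q])     # N x 6 design matrix
    m = np.linalg.svd(A)[2][-1]                        # null vector, up to scale
    return m.reshape(2, 3)

# The point t0 where the optical axis meets the plane (the point projecting onto
# the distortion center) is the right null space of this 2x3 matrix, e.g.:
#   t0 = np.linalg.svd(M12)[2][-1]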
Otherwise, relying on a more restricte moel may give better results. In [5], [6], [9], a polynomial moel was use to represent the relationship between focal length an raius. Our formulation can also be trivially moifie to use polynomials for f as well as t. For NSVP cameras, it turns out that in practice, estimating the fully general moel can be unstable because of the focal length-isplacement ambiguity mentione in Section.. The result is that both f an t are not smooth functions for small raius values. Instea of relying on smoothing terms (whose weights are ifficult to set) to circumvent this, we prefer using polynomials. The above metho is summarize in Fig. 4 an referre to as HB in the following. In practice, this is the most accurate an flexible of the approaches presente here (see Section 7). 4 GEOMETRIC APPROACH 4. Solving the Calibration an the Pose In the following, we present a secon approach base on the geometric constraints iscusse in Section. We first propose a naive calibration approach; it is not optimal though an a better one will be iscusse in the next section. Recall the constraints relating calibration conics, viewpoint conics, an position of the camera illustrate in Fig.. In the SVP case, one easily euces a calibration algorithm consisting in first estimating the optical center (relative to the calibration plane) as the D point that is closest on average to all viewpoint conics (see below). Then, (7) can be use to compute the focal lengths for all istortion circles. In the NSVP case, a possibility woul be to compute the optical axis: the line L that minimizes the sum of square istances to the viewpoint conics. However, this is not very accurate as iscusse below.
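To make the naive SVP criterion concrete, here is a simplified numerical sketch of our own. The paper minimizes true orthogonal distances with a globally optimal CAD-based solver (Mathematica's Minimize); the version below only minimizes squared algebraic residuals with a generic local optimizer from SciPy, so it is illustrative rather than equivalent, and the example viewpoint conics are made up.

import numpy as np
from scipy.optimize import minimize

def point_conic_residual(p, D):
    """Algebraic residual of the homogeneous point (X, Z, 1) on the conic D (3x3)."""
    ph = np.array([p[0], p[1], 1.0])
    return ph @ D @ ph

def closest_point_to_conics(conics, p0):
    # Penalize the algebraic residual to every conic; a geometric (orthogonal)
    # distance, as used in the paper, would require solving a quartic per conic.
    cost = lambda p: sum(point_conic_residual(p, D) ** 2 for D in conics)
    return minimize(cost, p0, method="Nelder-Mead").x

# Hypothetical viewpoint conics, already expressed in the plane Y = 0.
D1 = np.diag([1.0, 2.0, -1.0])
D2 = np.array([[1.0, 0.1, 0.0], [0.1, 1.5, 0.0], [0.0, 0.0, -1.2]])
print(closest_point_to_conics([D1, D2], p0=np.array([0.5, 0.5])))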

7 558 IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, VOL., NO. 9, SEPTEMBER 009 Fig. 4. Complete algorithm for the HB calibration approach for an SVP camera. SVP case: Computing the closest point to the viewpoint conics. This is the original approach presente in [4]. Computing the orthogonal istance of a point to a general conic requires solving a fourth egree polynomial [0]. Using this to compute the closest point to our set of viewpoint conics is not very practical. Instea, we iteratively minimize a cost function subject to constraints. The closest point q is foun by solving X min q;q istðq; q Þ ; subject to q > q ¼ 0; i.e., we also estimate one point per conic that will, after convergence, be the orthogonal projection of q on. Since the function an constraints are polynomial, this problem can be optimize using an algorithm relying on Cylinrical Algebraic Decomposition (CAD), which guarantees a global minimum [8]. Such an algorithm is available through the Minimize function of Mathematica. In this work, we present another solution that gives much better result. Instea of proceeing into two steps, we irectly use the fitte calibration conics to estimate the camera position, as escribe below. 4. A Formulation Enforcing the Constraints Directly In our first experiments, we foun the previous approach to be unstable, even though goo results coul be obtaine in some cases [4]. This is because the formulation has several rawbacks. First, although CAD optimization is algorithmic, it becomes computationally intractable when the number of viewpoint conics increases. Secon, it appears that fining the closest point to the set of the recovere viewpoint conics is not the optimal criterion. As iscusse above an shown in Fig. 5, when the noise is large, the shape an position of the viewpoint conics may become very perturbe. For an SVP camera, a better formulation woul irectly enforce that the viewpoint conics all touch in one location an have the same tangent at this point. Then, CAD optimization coul be avoie. Before giving our solution, we come back to our calibration conics. As mentione above, they share an axis (in the absence of noise). We assume that their common axis can be estimate with high accuracy espite noise in the ata, ue to using many calibration conics. Once the conics axis is estimate, it is convenient to change the coorinate system of the calibration plane such that the conics are aligne with the X-axis. Then, each one of them can be parameterize by its major an minor axis lengths b an a, as well as by a isplacement k along the X-axis: a 0 a k " / 4 0 b 0 5: ð6þ a k 0 a k The parameters a, b, an k can be estimate much like in a classical conic fitting algorithm. We perform all subsequent computations with those axis-aligne conics. This parameterization guarantees that all viewpoint conics lie in the same plane ðy ¼ 0Þ, in which they can be expresse as a b 0 a b k / 4 0 b ða b Þ 0 5: ð7þ a b k 0 a b k þ a b However, this parameterization oes not guarantee other properties given in Section., especially that these viewpoint conics all touch in a single point. To achieve this, we state a new result. For a central raially symmetric camera, the intersections of all its viewing cones with the calibration plane are calibration conics ^" given by 0 þ ^" / 4 0 ð þ Þ 0 5; ð8þ þ 0 þ Fig. 5. The effect on viewpoint conics of errors on the calibration conics (shown in the same plane for visualization). (a) Original configuration with the optical center. 
(b) Ae noise to the calibration conics an resulting viewpoint conics. Obviously, fining the closest point to these curves will yiel a large error. (c) Recovere optical center in ark/blue an correcte conics given by our approach, cf. Section 4..
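The fitting step mentioned above can be done linearly. The sketch below (our own, following the axis-aligned parameterization of (16)) writes each calibration conic as a (X - k)^2 + b Y^2 = 1, with k its displacement along the common X-axis, and recovers (a, b, k) from sample points by a homogeneous least-squares fit; the synthetic ellipse at the end is only a self-check.

import numpy as np

def fit_axis_aligned_conic(pts):
    """pts: Nx2 points of one calibration conic, expressed in the frame whose
    X-axis is the conics' common axis. Returns the parameters (a, b, k)."""
    X, Y = pts[:, 0], pts[:, 1]
    A = np.column_stack([X * X, Y * Y, X, np.ones_like(X)])
    v = np.linalg.svd(A)[2][-1]        # ~ lambda * (a, b, -2 a k, a k^2 - 1)
    k = -v[2] / (2.0 * v[0])
    lam = v[0] * k * k - v[3]          # recover the unknown scale lambda
    return v[0] / lam, v[1] / lam, k

# Self-check on a synthetic calibration ellipse (X - 0.7)^2 / 4 + Y^2 / 9 = 1.
t = np.linspace(0.0, 2.0 * np.pi, 200)
pts = np.column_stack([2.0 * np.cos(t) + 0.7, 3.0 * np.sin(t)])
print(fit_axis_aligned_conic(pts))     # approximately (0.25, 0.111, 0.7)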

8 TARDIF ET AL.: CALIBRATION OF CAMERAS WITH RADIALLY SYMMETRIC DISTORTION 559 TABLE Computing the External Parameters of the Camera from the Parameters The principal plane refers to the plane passing through the optical center an being parallel to the image plane. where,, an encoe the external parameters of the camera an is a parameter for the istortion circle of raius. For each calibration conic ^", the corresponing viewpoint conic is given by ^ / þ 0 þ 0 0 þ ð Þþ 7 5 : ð9þ The erivation an the proof of this formulation are available in [7]. Note that, in practice, we are only intereste in the estimation of ^". Much like (7) can be obtaine for, we can compute ðf =Þ for ^. After some algebraic manipulation using that result (not shown here), we substitute in (8) with ¼ ð Þ ; ð0þ where is chosen so to be equal to ðf =Þ. The formulas for estimating the external parameters of the camera are summarize in Table an briefly erive below. The position of the camera is obtaine by solving ðx; Z; Þ iðx; Z; Þ > ¼ðX; Z; Þ jðx; Z; Þ > for ði 6¼ jþ. The value gives the X-coorinate of the intersection point of the optical axis with the calibration plane: 8 : ð =; 0; Þ ðx; Z; Þ > ¼ 0; where ðx; Z; Þ > is the tangent to the viewpoint conic at the optical center, i.e., the optical axis. One may woner why the extrinsics were not irectly encoe in the parameterization. Inee, such parameterizations exist an were investigate. However, they were abanone because the resulting calibration conics ^" were not as simple as the one we propose. When a calibration algorithm coul be euce from any of these formulations, it was less stable than the one we show next. With this global formulation, all calibration conics ^" shoul be fitte at the same time. However, this task is ifficult since the function is nonlinear an requires an initial estimate of the parameters. Perhaps surprisingly, there exists an analytic solution to computing the extrinsic parameters once the values of a, b, an k for all calibration conics are estimate. The position of the camera can be estimate, while ignoring the focal length at each circle. We exploit the fact that without noise 8 : " ¼ ^", i.e., the unconstraine an constraine formulations shoul give ientical results. We solve this equation for each parameter,, an an obtain ¼ = a b ðb a Þ ; ðþ ~ ¼ a = p ffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi þ a k þ ; ðþ ~ ¼b pffiffiffiffiffipffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi a þ b ; ðþ for each calibration conic. Note that the two last equations involve only the extrinsic parameters. We rename them with a~an a subscript because, in the presence of noise, they are ifferent for each calibration conic. This is because each calibration conic is fitte iniviually. Therefore, an estimate for can be obtaine by minimizing the variance of the ~. In our implementation, a minimization over ~ is preferre, since it eliminates the square root in the expression. This results in solving ¼ arg min ; an ¼ X ~ ; ð4þ where is the mean. The expression is a fourth egree polynomial in, i.e., it amits only three extrema in. One of them correspons to ¼ 0 an is a maximum an the other two are minima with ientical absolute value but opposite sign. The one larger than zero is our solution for an can always be taken positive. Given this value, we can estimate the other parameters with ¼ Meanð~ Þ an ¼ Meanð~ Þ using () an (). This sign ambiguity will be resolve later. 
In the presence of noise, we perform a final optimization: arg min ;; X ð ~ Þ þð ~ Þ ð5þ using a stanar Gauss-Newton algorithm. Once the external parameters of the camera are known, that is, the value of,, an, we are seeking the intrinsics. Using () i not give satisfying results in practice. Instea, we fin ^" that best fits the original ata points. Given the external parameters, each calibration conic can be fitte using a least square algebraic error function resulting in a secon egree polynomial involving the variable. We try both signs for an keep the one that best fits the calibration conics. Many calibration planes. If many calibration planes are available, the focal length of each circle must be consistent over all the views. This is accomplishe by simultaneously fitting the calibration conics ^ for ifferent planes. This global Linear Least Square problem is a secon egree polynomial, where is the only variable. The steps of this algorithm are summarize in Fig. 6. In the following, it is referre to as the Right Cone Constraint (RCC) metho. 4. NSVP Extension Our previous SVP moel can be generalize allowing an NSVP. Due to lack of space, it is not escribe here. Our tests emonstrate that such a parameterization is not that useful in practice. Inee, the parameters can only be recovere by. We scale " an ^" such that they have unity at the upper left coorinate.

Fig. 6. Complete algorithm for the RCC calibration approach.

means of optimization that would require initial estimates of the parameters, which are difficult to obtain.

5 COMPUTING THE DISTORTION CENTER

Until now, we have assumed, for both algorithms, that the distortion center was known; this information was used to select the distortion circles. Recall that the distortion center is also the principal point of the camera in our model. Tests with noiseless simulated data showed that the calibration may be quite sensitive to a bad choice of distortion center; indeed, like for real cameras, using the image center as an approximation was not satisfying in general. Hence, the distortion center must be estimated as part of the calibration process. Note that using the HK algorithm is not satisfactory in general since the criterion for choosing this point is in terms of image rectification. Besides not being applicable to noncentral cameras, these criteria do not correspond to our model. Below, we propose one where the recovered distortion center is identical to the principal point. The sensitivity of calibration that we observed in simulations suggests that it should be possible to estimate the distortion center rather reliably, which was confirmed in practice.

Algorithm. We use the following heuristics to define an optimization criterion for the distortion center. Let us apply the IAC approach of Section 3.1 with several images as input. The plane-based calibration for each distortion circle is then capable of estimating a principal point, besides the focal length f_d. It seems plausible that the better the assumed distortion center was, the closer the estimated principal points will be to it. Since plane-based calibration is applied on images centered on the assumed distortion center, we can consider the average distance of the estimated principal points (one per distortion circle) as a measure for the goodness of the center. Fig. 7 shows the values of this measure, computed for distortion center positions on a grid around the image center, for real cameras. The shape of the cost surface indicates that we can find the optimum distortion center using a simple steepest descent type method. We implemented such an approach that accurately finds the distortion center within a couple of minutes of computation. Note that the second column of Fig. 7 shows that, although the principal points used to plot it were computed individually per distortion circle, they are very densely clustered (the average distance to the assumed distortion center is only a few pixels). This suggests a high stability of the calibration.

Fig. 7. Plots of the goodness measure for the distortion center, obtained for three tested lenses (cf. Section 7). (a), (c), (e) Grid around the image center (yellow/dark meaning smaller). (b), (d), (f) One slice per plot, through the respective minimum. (a) .5 mm. (b) Cata- mm. (c) 8 mm. (d) .5 mm. (e) Cata- mm. (f) 8 mm.

Discussion. Compared to other approaches, our optimization criterion is chosen to find the best optical axis. Thus, the distortion center is identical to a principal point in a pinhole camera. This formulation is also used in [9], [8], [9]. Our criterion is different from the one in image-based distortion functions, where the distortion center (together with the distortion function) is chosen to maximize the linearity of rectified line images [8], [], [5], [].
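A minimal sketch (ours) of the grid search just described. The per-circle plane-based calibration of Section 3.1 is abstracted into a hypothetical callback, calibrate_circles, that returns one estimated principal point per distortion circle for a given candidate center; the goodness measure is the mean distance of those principal points to the candidate, and the grid extent and step are arbitrary.

import numpy as np

def distortion_center_cost(candidate, calibrate_circles):
    """calibrate_circles(candidate) -> (N, 2) array of principal points
    estimated independently for N distortion circles (hypothetical helper)."""
    cand = np.asarray(candidate, dtype=float)
    pp = calibrate_circles(cand)
    return float(np.mean(np.linalg.norm(pp - cand, axis=1)))

def grid_search_center(image_center, calibrate_circles, radius=40, step=5):
    """Evaluate the cost on a grid around the image center and keep the best
    candidate; a steepest-descent refinement can follow, as in the paper."""
    offsets = np.arange(-radius, radius + 1, step)
    candidates = [(image_center[0] + dx, image_center[1] + dy)
                  for dx in offsets for dy in offsets]
    costs = [distortion_center_cost(c, calibrate_circles) for c in candidates]
    return candidates[int(np.argmin(costs))]

# Dummy stand-in for the real per-circle calibration, for illustration only:
# its cost is minimal when the candidate equals (324, 242).
fake = lambda c: np.tile([324.0, 242.0], (20, 1)) + 0.5 * (c - np.array([324.0, 242.0]))
print(grid_search_center((320.0, 240.0), fake))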
However, the latter can result in instability of the estimation under very low distortion and, more importantly, is not optimal when the camera is NSVP since image rectification is not possible. In theory, ours is not subject to these problems even when no distortion is present. This is confirmed by the nice shape of the error function for the 8.0 mm camera, as shown in Fig. 7. For these reasons, the distortion center computation will not be included in the comparison with the HK approach.

6 PRACTICAL ISSUES

6.1 Dense Plane-Image Correspondences

The easiest approach we found to get dense correspondences between the calibration plane and the camera is to use a flat screen. We used a simple coded structured light algorithm [6], which consists of displaying a sequence of patterns of horizontal and vertical black and white stripes of varying thickness on the screen to encode the position of each screen pixel (cf. Fig. 8). Then, for each camera pixel, we identify the corresponding position on the calibration plane by decoding the observed intensities in each pattern. We found that, when performed in a controlled environment (low, constant ambient lighting, screen of high contrast and resolution), the accuracy of such a method is good enough for calibration. Indeed, we only use correspondences of pixel precision (see [] for details). Since the points located on the distortion circles are given in floating-point coordinates, we compute their correspondences by a weighted sum of the correspondences recovered for the four closest image pixels. Besides allowing dense correspondences with the calibration plane, these approaches

10 TARDIF ET AL.: CALIBRATION OF CAMERAS WITH RADIALLY SYMMETRIC DISTORTION 56 Fig. 8. Projecte patterns for corresponences are horizontal an vertical black an white stripes. Images taken with (a) the Goyo.5 mm, (b) cataioptric, an (c) paracataioptric camera (cata- mm). rener trivial the problem of recovering the structure of the calibration plane. This is as oppose to using calibration gri images where gri points must be automatically extracte an ientifie. This is especially ifficult when the istortion is very large. 6. Omniirectional Cameras There are several issues worth mentioning for omniirectional cameras. If the fiel of view is larger than 80, then some istortion circles will have viewing cones that actually approach planes. For the RCC approach, this means that fitting the calibration conic may become unstable. These cases can be etecte as the ones whose corresponences on the calibration plane are close to collinear. In practice, they are iscare from the actual calibration proceure. In the case of the homography-base algorithm that uses all matches simultaneously, no special attention is neee. 6. Nonlinear Optimization Because f is a function of the raius in the istorte image, it is not straightforwar to perform the projection of a D point into the image (as oppose to the back-projection of image points to D). This means that (8) cannot be use irectly to perform a nonlinear optimization of the calibration unknowns. Since the focal length function cannot be inverte when it crosses zero, it is preferable to efine the istortion in terms of view angle. As seen in Fig. 4b, this function is generally simple, so easily invertible. Given this function, the raius of the image of a D point is compute from the angle between the optical axis an the line spanne by the optical center an the D point. The nonlinear optimization is then performe with instea of f. The NSVP case cannot be hanle as simply since there is no single optical center relative to which to compute the angle. In this case, we use (8) an estimate a raius i associate to each D-D corresponence, along with the other parameters. This yiels a sparse nonlinear problem. We also enforce monotonicity on f an. For example, this can be one approximatively by aing terms to the cost function, of the form: ðjf f þs j ðf f þs ÞÞ, s>0, which is a quaratic penalty if the constraints are not enforce but gives 0 otherwise. 7 EXPERIMENTS We teste our approaches using ifferent types of cameras with simulate an real ata. They were also compare to the HK algorithm []. Since the view angle of some of our camera is larger than 80, we implement the algorithm for a spherical retina. 7. Simulation SVP cameras. We simulate two types of cameras: wie angle with small raial istortion an fisheyes with large istortion. The focal length (an istortion) function of each type of camera was built ranomly via monotonically ecreasing polynomials of fifth egree. We use an image size comparable to our own camera: megapixel (see real ata). The shapes of focal length functions of the wie-angle cameras were similar to our 5 mm (Fig. 4) an we use f with f 0 ¼ ; pixels. The simulate fisheyes were analogous to our.5 mm, so we use f 0 ¼ pixels, an the shape of the focal length function yiele a fiel of view close to 80. In these tests, we assume a known istortion center an also mae sure the camera was never place in a near fronto-parallel position with respect to the calibration plane. We compare the reprojection error, the error on the pose, an error on the calibration. 
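The soft monotonicity constraint of Section 6.3 is easy to state in code. The sketch below (ours) evaluates the penalty term (|f_d - f_{d+s}| - (f_d - f_{d+s}))^2 over a sampled focal length function: it is zero wherever the function decreases and quadratic wherever it does not. The sample values are made up.

import numpy as np

def monotonicity_penalty(f_samples, s=1):
    """f_samples: focal lengths at increasing radii; s: sample offset (> 0)."""
    diff = f_samples[:-s] - f_samples[s:]          # f_d - f_{d+s}
    return np.sum((np.abs(diff) - diff) ** 2)      # penalizes only diff < 0

print(monotonicity_penalty(np.array([400.0, 390.0, 385.0, 386.0, 370.0])))
# Only the 385 -> 386 increase contributes: (|-1| - (-1))^2 = 4.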
We efine the latter as the average ifference between the recovere focal length function f an the groun truth. Our tests showe that all of them are highly relate, so we only show results for the reprojection error. The reprojection errors for the three algorithms with respect to noise an the number of use cameras are shown in Fig. 9. We ae Gaussian noise of stanar eviation up to 4 pixels to the original ata, which consiste of 50 points per istortion circle (which is rather small compare to the several hunre usually available from a structure light ense mapping). In all of our tests, ata on the istortion circles are not necessarily evenly istribute. Especially in the case of a very large fiel of view, only a portion of the image effectively sees the calibration plane. We mae sure to properly simulate this effect. Finally, our tests were performe using, 7, an 0 views. In general, all three algorithms performe similarly. The RCC obtains results for the wie-angle camera similar to the other two. However, this is not the case for the fisheye camera because the ata points were not uniformly istribute aroun the istortion circles. Hence, the pose estimation was not as stable. This exposes a weakness of this approach: If the pose for one view is baly estimate, it can potentially estroy the estimation of the focal length function even using many views. This effect is not as important for the HB since the focal length function is estimate with the position of the camera on the optical axis using all the views. NSVP cameras. We performe an in-epth analysis of the performance of our algorithms for NSVP cameras. Three aspects were consiere. First, how well can the isplacement along the optical axis be recovere uner noise? Secon, is the linear NSVP algorithm useful in practice? Thir, oes the NSVP moel overfit when the camera is actually SVP? We performe our tests using the homography-base approach because it is the only one that naturally enforces both SVP an NSVP constraints. Two approaches were teste:. Initialization using linear calibration base on an SVP assumption, followe by nonlinear optimization of the NSVP moel.. Initialization as well as optimization using the NSVP moel.. We i not compare cataioptric cameras with the HK approach because the comparison woul have been unfair.
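For readers who want to reproduce the simulation setup, here is one possible construction (ours; f0, the maximum radius, and the total drop are arbitrary example values) of a random, monotonically decreasing focal length function of fifth degree, as used above: integrate a non-negative degree-4 polynomial (a squared quadratic) and subtract the result from f0.

import numpy as np

rng = np.random.default_rng(0)

def random_decreasing_f(f0, d_max, drop):
    """Return a callable f(d): a degree-5 polynomial with f(0) = f0, f' <= 0,
    and a total decrease of `drop` pixels over [0, d_max]."""
    g = np.polynomial.Polynomial(rng.uniform(0.1, 1.0, 3)) ** 2   # >= 0, degree 4
    G = g.integ()                                                 # degree 5, increasing
    scale = drop / G(d_max)
    return lambda d: f0 - scale * G(d)

f = random_decreasing_f(f0=1000.0, d_max=600.0, drop=400.0)
radii = np.linspace(0.0, 600.0, 7)
print(np.round(f(radii), 1))          # monotonically decreasing from 1000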

11 56 IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, VOL., NO. 9, SEPTEMBER 009 Fig. 9. Reprojection errors (in pixels) for the simulate wie-angle an fisheye cameras. (a) RCC wie angle. (b) HB wie angle. (c) HK wie angle. () RCC fisheye. (e) HB fisheye. (f) HK fisheye. Fig. 0. The two optimization algorithms (see text for etails) with simulate ata. (a), () The recovere isplacement t. (b), (e) The error istribution. (c), (f) The focal length functions compare to the groun truth. (a) SVP. (b) SVP. (c) SVP. () NSVP. (e) NSVP. (f) NSVP. The tests were performe on simulate SVP an NSVP cataioptric cameras with viewpoints moving along the optical axis (Figs. 0a an 0). We use 0 ifferent positions for the plane an ae Gaussian noise of stanar eviation pixel to the ata. Beyon this noise level, we foun that our approach coul not accurately estimate the viewpoint isplacement. Three hunre points per image were use an polynomial moels of egree 5 for the calibration were use. A typical behavior of the two approaches is shown in Fig. 0. This leas to the following observations:. For an SVP camera, both optimizations shoul lea to similar results (negligible NSVP).. For an NSVP camera, enforcing an SVP yiels a biase focal length function (cf. Fig. 0f). In some cases, this can be satisfying in terms of reprojection error, like for one of our real cameras (cf. Fig. ).. When the moel is refine to inclue t, the optimization might not converge to a satisfying minimum.. If the camera is NSVP, the secon approach shoul perform better, but only if the noise is sufficiently low (cf. Fig. 0). Otherwise, it can result in worse results because solving with (5) is not as stable as with (9). 7. Real Data Several camera configurations were teste. First, a CCTV Basler A0bc was combine with a fisheye Goyo.5 mm lens, to an 8 mm Cosmicar lens with small istortion an to a RemoteReality cataioptric lens combine with a.5 mm Cosmicar lens (referre to as cata- mm ). Secon, a Canon SLR was combine with a fisheye 5 mm lens an to a 0-60 cataioptric lens combine with a 0 mm lens (referre to as cata-0 mm ). In all cases, the calibration plane of known eucliean structure was a 0-inch LCD screen. The number of calibration views was between 8 an 0 for the ifferent experiments. Fig. 4a gives the compute focal length of the 5 mm,.5 mm, cata- mm, an cata-0 mm, with respect to the istance to the istortion center, using all methos. These are the functions that were recovere without further optimization base on the reprojection error. Table shows

12 TARDIF ET AL.: CALIBRATION OF CAMERAS WITH RADIALLY SYMMETRIC DISTORTION 56 TABLE Comparison of the Average Reprojection Errors TABLE 5 Result for Pose Estimation NL refers to nonlinear optimization. TABLE 4 Comparison of the Average Reprojection Error for Different Constraints on the Viewpoint The camera was move to three positions with known relative motion. Coefficients p ij an a ij enote the istance (in centimeters) an relative angle (in egrees) between camera positions i an j. L refers to the linear algorithms an NL to nonlinear optimization of the parameters. the average reprojection errors of the three algorithms for all cameras. In many cases, the cameras coul be calibrate from a single image of the screen (cf. Fig. for the RCC), although, in general, we recommen using at least five images for goo stability. Recall that our structure lightbase matching provies a large number of corresponences. The cata- mm was calibrate with all the approaches (except for HK approach where only the portion of the image corresponing to forwar cones was use) with very similar results (cf. Fig. 4). As for the cata-0 mm, since it was foun to be noncentral, as escribe later in this section. The RCC gave very accurate results, however only with a limite number of planes. The ifficulties arose when not enough ata were available to allow a goo fitting of the calibration conics. We hanle this by ropping these planes an use only the other ones. If iscrete values for f are compute instea of, e.g., a polynomial function, then only a subset of istortion circles are use for calibration; others can then be extrapolate or interpolate from a polynomial fitting of the ata. Let us efine this polynomial p; from the camera moel, it is best to ensure that its erivative at 0 (corresponing to the istortion center) is 0. This constraint is ue to the symmetry of the istortion moel. Another criterion is that the function shoul be monotonically ecreasing. This last constraint is not irectly enforce in our algorithms. However, this i not seem to be an issue in our tests. In Fig.. Image rectification for the Basler camera with the.5-mm lens an the RemoteReality lens (cata- mm). (a) Original image. (b) Rectifie image for a rotate view. practice, polynomials of egree 5 appeare to be sufficient. To hanle the case of omniirectional cameras more appropriately, the interpolation is carrie out with the view angle instea of the focal length. In this case, a monotonically increasing polynomial passing through 0 can also be fitte (see Fig. 4b). Both cataioptric cameras cata- mm an cata-0 mm are typical examples of configurations yieling multiple viewpoints. Inee, both mirrors are parabolic an the mounte lenses are perspective []. However, only the secon one was foun to be NSVP (cf. Fig. an Table 4). We conjecture that, although our.5 mm camera is not orthographic, it has a fiel of view sufficiently small to provie a locus of viewpoints very close to a single effective viewpoint. To verify our hypothesis, it woul be useful to perform the test with more specialize equipment like in []. Evaluating the results base on the reprojection error can lea to biase conclusions in the case of a generic moel. Inee, the moel offers more freeom, which allows to fit the ata better. Meaningful quantitative results were obtaine for the Goyo.5-mm lens, using a pose estimation proceure. Using a translation stage, the camera was move to three positions with known relative motion (no Fig.. Calibration with the RCC approach. 
(a) Fitte ellipses for the Goyo.5-mm lens an (b) corresponing hyperbolas, compute intersection, an optical axis (gray line). (c) For the cata- mm camera, the intersection between the calibration plane an the cones yiele ellipses an hyperbolas, constraining the viewpoint to lie, respectively, on hyperbolas an ellipses. () Intersection of the viewpoint conics.

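Returning to the polynomial interpolation of the discrete focal-length (or view-angle) values described above, enforcing a zero derivative at the distortion center amounts to omitting the linear term from the least-squares basis. A minimal sketch follows, with illustrative names and without the monotonicity check.

```python
import numpy as np

def fit_radial_polynomial(radii, values, degree=5, through_origin=False):
    """Least-squares fit of p(r) with p'(0) = 0 (no linear term).

    radii, values: samples of the focal length (or view angle) per distortion circle.
    through_origin: drop the constant term as well, for the view-angle variant
    that must pass through 0 at the distortion center.
    Returns a callable polynomial p(r).
    """
    powers = list(range(2, degree + 1)) + ([] if through_origin else [0])
    r = np.asarray(radii, dtype=float)
    A = np.column_stack([r ** k for k in powers])
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(values, dtype=float), rcond=None)
    return lambda q: sum(c * np.asarray(q, dtype=float) ** k for c, k in zip(coeffs, powers))
```

Monotonicity, which the text notes is not enforced, can be checked a posteriori by sampling the fitted polynomial over the range of image radii.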
Fig. The two optimization algorithms (see text for details) with our catadioptric cameras. (a), (d) The recovered displacement t. (b), (e) The distribution of reprojection errors. (c), (f) The recovered focal length functions. (a)-(c) Cata- mm; (d)-(f) Cata-0 mm.

Images from three panoramic cameras were rectified based on the calibration results (cf. Figs. 6, a, and b). For the wide-angle Goyo lens and the cata- mm, the results seem to be very good, even toward the image borders (cf. Fig. 6b and the inset images in Fig. b). Finally, a homemade catadioptric device built from a Fujinon .5 mm lens pointed at a roughly spherical mirror was tested (cf. Fig. 5). Although its radial configuration was not perfect, the distortion center could be found and a satisfying calibration could be obtained with our methods. The HB approach gave the best results because it could take advantage of up to eight images, which makes it more robust to the imperfect configuration of the camera. The rectification is surprisingly good for a large part of the image, especially around the borders (cf. Figs. 6a and 6c). The remaining distortions in the center were found to be caused by a small bump on the mirror's surface.

Fig. 4. (a) Recovered focal length (in pixels) for the three algorithms (after polynomial fitting of the data). (b) Recovered view angle (in degrees). These calibration curves were obtained without optimization based on the reprojection error. Performing such an optimization led to very similar results in general. Observe that, for the catadioptric cameras, there are negative focal lengths, meaning that their view angle is larger than 180 degrees.

Fig. 5. Homemade catadioptric camera built from a Basler A0bc camera with a Fujinon .5 mm lens pointed at a Christmas ornament representing a roughly spherical mirror.

8 SUMMARY AND CONCLUSION

We have proposed new calibration approaches for a camera model that may be a good compromise between flexibility and stability for many camera types, especially wide-angle ones. Previous work showed that the RCC approach might have limited practical usability because of stability issues [4]. This was because only one calibration plane could be used directly and because the camera position was recovered in two steps: conic fitting and finding the closest point to a set of viewpoint conics. Both issues were addressed in this paper, and we showed that the RCC approach is very well suited when performed with only a few camera poses. However, our homography-based approach introduced in this paper is preferable. It can be adapted to use a polynomial distortion model and extended to NSVP configurations. This allows one to perform calibration without a dense plane-to-image matching, unlike the previous approach. Those reasons, and the fact that it was the most reliable in our experiments, lead us to recommend the HB approach over RCC. The HK approach gives very good results, with the benefit of an elegant solution for the distortion center estimation. Our approach gives similar accuracy but can also deal with NSVP cameras, which is one of the main goals of this work.
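As a rough sketch of the kind of rectification shown in Figs. 2 and 6 (our own illustration under the radially symmetric model, not the authors' implementation): each pixel of a synthetic perspective image is assigned a view angle, the calibrated, monotonic radius-to-view-angle function is inverted by interpolation to obtain the corresponding radius around the distortion center, and the source image is sampled there.

```python
import numpy as np

def rectify_radial(image, cx, cy, theta_of_r, f_out, out_size):
    """Resample a radially distorted image into a perspective view.

    image:      HxW or HxWx3 array; (cx, cy) is the estimated distortion center.
    theta_of_r: vectorized callable mapping a radius r (pixels from the center) to the
                view angle (radians) of that circle; assumed monotonically increasing.
    f_out:      focal length (pixels) of the synthetic perspective view.
    out_size:   (height, width) of the output. Nearest-neighbour sampling only.
    """
    h, w = out_size
    r_max = np.hypot(max(cx, image.shape[1] - cx), max(cy, image.shape[0] - cy))
    r_grid = np.linspace(0.0, r_max, 2000)
    theta_grid = theta_of_r(r_grid)                        # tabulate, then invert by interpolation
    u, v = np.meshgrid(np.arange(w) - w / 2.0, np.arange(h) - h / 2.0)
    theta = np.arctan2(np.hypot(u, v), f_out)              # view angle of each output pixel
    r = np.interp(theta, theta_grid, r_grid)               # radius in the distorted image
    phi = np.arctan2(v, u)
    xs = np.clip(np.rint(cx + r * np.cos(phi)).astype(int), 0, image.shape[1] - 1)
    ys = np.clip(np.rint(cy + r * np.sin(phi)).astype(int), 0, image.shape[0] - 1)
    return image[ys, xs]
```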

Fig. 6. Image rectification. (a) Original images. (b) Rectified image for the Goyo .5 mm. (c) Rectified image for the homemade catadioptric camera. Small inset images show rectification of the border regions.

ACKNOWLEDGMENTS

Jean-Philippe Tardif and Peter Sturm gratefully acknowledge the support of the French ANR under Project CAVIAR. Funding was also provided to Jean-Philippe Tardif by the Natural Sciences and Engineering Research Council of Canada. The authors would like to thank the reviewers for useful comments.

REFERENCES

[1] Intel Open Source Computer Vision Library, technology/computing/opencv/, 2008.
[2] S. Baker and S.K. Nayar, "A Theory of Single-Viewpoint Catadioptric Image Formation," Int'l J. Computer Vision, vol. 35, 1999.
[3] J.P. Barreto and K. Daniilidis, "Unifying Image Plane Liftings for Central Catadioptric and Dioptric Cameras," Proc. IEEE Int'l Workshop Omnidirectional Vision and Camera Networks, 2004.
[4] W. Boehm and H. Prautzsch, Geometric Concepts for Geometric Design. AK Peters, 1994.
[5] D.C. Brown, "Close-Range Camera Calibration," Photogrammetric Eng., vol. 37, 1971.
[6] G. Champleboux, S. Lavallee, P. Sautot, and P. Cinquin, "Accurate Calibration of Cameras and Range Imaging Sensor: The NPBS Method," Proc. IEEE Int'l Conf. Robotics and Automation, 1992.
[7] D. Claus and A.W. Fitzgibbon, "A Rational Function Lens Distortion Model for General Cameras," Proc. IEEE Int'l Conf. Computer Vision and Pattern Recognition, 2005.
[8] F. Devernay and O. Faugeras, "Straight Lines Have to Be Straight," Machine Vision and Applications, vol. 13, 2001.
[9] C. Geyer and K. Daniilidis, "A Unifying Theory for Central Panoramic Systems and Practical Implications," Proc. Sixth European Conf. Computer Vision, 2000.
[10] K.D. Gremban, C.E. Thorpe, and T. Kanade, "Geometric Camera Calibration Using Systems of Linear Equations," Proc. IEEE Int'l Conf. Robotics and Automation, 1988.
[11] M.D. Grossberg and S.K. Nayar, "The Raxel Imaging Model and Ray-Based Calibration," Int'l J. Computer Vision, vol. 61, 2005.
[12] R. Hartley and S.B. Kang, "Parameter-Free Radial Distortion Correction with Center of Distortion Estimation," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 29, no. 8, Aug. 2007.
[13] J. Kannala and S.S. Brandt, "A Generic Camera Model and Calibration Method for Conventional, Wide-Angle, and Fish-Eye Lenses," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 28, no. 8, Aug. 2006.
[14] S.S. Lin and R. Bajcsy, "True Single View Point Cone Mirror Omni-Directional Catadioptric System," Proc. IEEE Int'l Conf. Computer Vision and Pattern Recognition, 2001.
[15] B. Micusik and T. Pajdla, "Structure from Motion with Wide Circular Field of View Cameras," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 28, no. 7, July 2006.
[16] J. Salvi, J. Pagès, and J. Batlle, "Pattern Codification Strategies in Structured Light Systems," Pattern Recognition, vol. 37, 2004.
[17] D.E. Stevenson and M.M. Fleck, "Nonparametric Correction of Distortion," TR 95-07, Univ. of Iowa, 1995.
[18] A. Strzebonski, "Cylindrical Algebraic Decomposition," mathworld.wolfram.com/CylindricalAlgebraicDecomposition.html, 2008.
[19] P. Sturm, "Algorithms for Plane-Based Pose Estimation," Proc. IEEE Int'l Conf. Computer Vision and Pattern Recognition, 2000.
[20] P. Sturm and S. Ramalingam, "A Generic Concept for Camera Calibration," Proc. Eighth European Conf. Computer Vision, 2004.
[21] P.F. Sturm and S.J. Maybank, "On Plane-Based Camera Calibration: A General Algorithm, Singularities, Applications," Proc. IEEE Int'l Conf. Computer Vision and Pattern Recognition, 1999.
[22] R. Swaminathan, M.D. Grossberg, and S.K. Nayar, "Caustics of Catadioptric Cameras," Proc. Eighth IEEE Int'l Conf. Computer Vision, 2001.
[23] J.P. Tardif and S. Roy, "A MRF Formulation for Coded Structured Light," Proc. Fifth Int'l Conf. 3-D Digital Imaging and Modeling, 2005.
[24] J.P. Tardif and P. Sturm, "Calibration of Cameras with Radially Symmetric Distortion," Proc. IEEE Int'l Workshop Omnidirectional Vision and Camera Networks, 2005.

[25] J.P. Tardif, P. Sturm, and S. Roy, "Self-Calibration of a General Radially Symmetric Distortion Model," Proc. Ninth European Conf. Computer Vision, 2006.
[26] J.P. Tardif, P. Sturm, and S. Roy, "Plane-Based Self-Calibration of Radial Distortion," Proc. 11th IEEE Int'l Conf. Computer Vision, 2007.
[27] J.-P. Tardif, P. Sturm, M. Trudeau, and S. Roy, "Calibration of Cameras with Radially Symmetric Distortion," technical report, INRIA, 2008.
[28] S. Thirthala and M. Pollefeys, "Multi-View Geometry of 1D Radial Cameras and Its Application to Omnidirectional Camera Calibration," Proc. 10th IEEE Int'l Conf. Computer Vision, 2005.
[29] S. Thirthala and M. Pollefeys, "The Radial Trifocal Tensor: A Tool for Calibrating the Radial Distortion of Wide-Angle Cameras," Proc. IEEE Int'l Conf. Computer Vision and Pattern Recognition, 2005.
[30] Z. Zhang, "Parameter Estimation Techniques: A Tutorial with Application to Conic Fitting," Image and Vision Computing, vol. 15, 1997.
[31] Z. Zhang, "A Flexible New Technique for Camera Calibration," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 22, no. 11, Nov. 2000.

Jean-Philippe Tardif received the MSc and PhD degrees in computer science from the Université de Montréal, the PhD in 2007. In 2007, he was a postdoctoral researcher in the GRASP Laboratory at the University of Pennsylvania. He is currently a postdoctoral fellow in the Center for Intelligent Machines at McGill University. His research interests include structure from motion, calibration and self-calibration of omnidirectional cameras, passive 3D reconstruction, and place recognition.

Peter Sturm received the MSc degree from the National Polytechnic Institute of Grenoble (INPG), Grenoble, France, in 1994, the MSc degree from the University of Karlsruhe in 1994, and the PhD degree from INPG in 1997. His PhD thesis was awarded the SPECIF Award (given to one French PhD thesis in computer science per year). After a two-year postdoctorate at Reading University, working with S. Maybank, he joined INRIA in a permanent research position as chargé de recherche in 1999. Since 2006, he has been a directeur de recherche (professor). He has been a member of the program committees of the major conferences in computer vision, image processing, and pattern recognition. He will be a program chair of the International Conference on Computer Vision (ICCV) and an area chair of ICCV 09 and the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 09), and was an area chair for the European Conference on Computer Vision (ECCV 06). He is on the editorial board of Image and Vision Computing, the Journal of Computer Science and Technology, and the Information Processing Society of Japan (IPSJ) Transactions on Computer Vision and Applications. He is an organization cochair of ECCV 08 and has organized workshops and given tutorials and invited lectures at several conferences. His main research topics are in computer vision, specifically related to camera (self-)calibration, 3D reconstruction, and motion estimation, both for traditional perspective cameras and omnidirectional sensors. During his undergraduate studies, he had his own one-person software company, within which he was mainly writing and selling software for the organization of sports events. He was involved in the organization of the Judo World Championships, the 1999 Sumo Amateur World Championships (the first ever to be held outside Japan), the 1994 Judo University World Championships, two European Championships, and numerous other international and national events. He is a member of the IEEE Computer Society.

Martin Trudeau received the BSc degree in mathematics from the Université de Montréal in 1987 and the MSc degree in mathematics from Cornell University. He currently works on financial models for Mercer. He was previously a researcher in the Vision3D Laboratory, Université de Montréal. Prior to this, he worked on signal processing algorithms for NMS Communications. His interests include almost anything involving mathematics, pure or applied. His first love was number theory.

Sébastien Roy is an associate professor in the Computer Science Department (Département d'Informatique et de Recherche Opérationnelle) and the head of the Vision3D Laboratory, Université de Montréal. His research is in the field of 3D computer vision. His interests focus on 3D reconstruction from images, motion, and trajectory analysis from video sequences, as well as using multiple projectors to create a large seamless image over arbitrarily shaped surfaces. Being a founding member of the Institut Arts Cultures et Technologies, Université de Montréal, he dedicates considerable attention to the artistic and cultural applications of his research and collaborates actively in the fields of music, theater, architecture, and contemporary arts.
