IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, VOL. 32, NO. X, XXXXXXX 2010

Equidistant Fish-Eye Calibration and Rectification by Vanishing Point Extraction

Ciarán Hughes, Member, IEEE

Abstract—In this paper we describe a method to photogrammetrically estimate the intrinsic and extrinsic parameters of fish-eye cameras using the properties of equidistance perspective, particularly vanishing point estimation, with the aim of providing a rectified image for scene viewing applications. The estimated intrinsic parameters are the optical center and the fish-eye lensing parameter, and the extrinsic parameters are the rotations about the world axes relative to the checkerboard calibration diagram.

Index Terms—fish-eye, calibration, perspective.

1 INTRODUCTION

The spherical mapping of most fish-eye lenses used in machine vision introduces symmetric radial displacement of points from the ideal rectilinear points in an image, around an optical center. The visual effect of this displacement in fish-eye optics is that the image has a higher resolution in the central areas, with the resolution decreasing nonlinearly towards the peripheral areas of the image. Fish-eye cameras manufactured to follow the equidistance mapping function are designed such that the distance between a projected point and the optical center of the image is proportional to the incident angle of the projected ray, scaled only by the equidistance parameter f, as described by the projection equation [1], [2]:

r_d = f θ    (1)

where r_d is the fish-eye radial distance of a projected point from the center, and θ is the incident angle of the ray from the 3D point being projected to the image plane. This is one of the more common mapping functions that fish-eye cameras are designed to follow, the others being the stereographic, equisolid and orthogonal mapping functions.
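As a quick numerical illustration of (1) and of the resolution fall-off it implies, the sketch below contrasts the equidistant mapping with the ideal pin-hole (rectilinear) mapping. The value of f and the angles are illustrative assumptions only, not taken from any camera in the paper.

```python
import numpy as np

def project_radius(theta, f, model="equidistant"):
    """Radial distance from the optical center of a ray with incident
    angle theta (radians).

    equidistant: r_d = f * theta      -- equation (1)
    rectilinear: r_u = f * tan(theta) -- ideal pin-hole, for comparison
    """
    theta = np.asarray(theta, dtype=float)
    if model == "equidistant":
        return f * theta
    if model == "rectilinear":
        return f * np.tan(theta)
    raise ValueError(f"unknown model: {model}")

f = 100.0  # hypothetical equidistance parameter, in pixels
for deg in (10, 40, 70):
    th = np.deg2rad(deg)
    print(deg, project_radius(th, f), project_radius(th, f, "rectilinear"))
```

At 10° the two radii are nearly equal, while at 70° the pin-hole radius is more than twice the equidistant one: the fish-eye compresses the periphery, which is exactly the resolution behaviour described above.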
In the projection of a set of parallel lines to the image plane using the rectilinear pin-hole perspective model, straight lines are imaged as straight lines, and parallel sets of straight lines converge at a single vanishing point on the image plane. In equidistance projection perspective, however, the projection of a parallel set of lines in the world coordinate system is a set of curves that converge at two vanishing points, and the line that intersects the two vanishing points also intersects the optical center of the camera. This property was demonstrated explicitly in our earlier work [3]. In that work, it was also demonstrated how the vanishing points could be used to estimate the optical center of the fish-eye camera, and it was shown that circles provide an accurate approximation of the projection of a straight line to the equidistance image plane.

Hartley and Kang [4] provide one of the more recently proposed parameter-free methods of calibrating cameras and compensating for distortion in all types of lenses, from standard low-distortion lenses through high-distortion fish-eye lenses. Other important work on fish-eye calibration includes [5], [6]. While there has been some work in the area of camera perspective with a view to calibration, there has been little work specifically on wide-angle camera perspective. Early work on calibration via perspective properties included [7], where a hexagonal calibration image was used to extract three co-planar vanishing points, and thus the camera parameters. Other work on the calibration of cameras using perspective principles and vanishing points has been done in, for example, [8], [9], [10], [11]. However, none of these publications address fish-eye cameras with strong optical displacement; that is, these calibration methods assume that the cameras follow the pin-hole projection mapping function.
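Both properties recalled above (a pair of vanishing points, and near-circularity of the projected curves) can be checked numerically. The sketch below projects a 3D line under the equidistant model, fits a circle with a simple algebraic (Kåsa) least-squares fit, and images the directions ±D to obtain the two vanishing points. The particular line, the value of f and the Kåsa fit are illustrative assumptions; they are not the fitting procedure of [3].

```python
import numpy as np

def equidistant_project(P, f):
    """Project 3D points (camera coords, z forward) with r_d = f * theta."""
    P = np.asarray(P, dtype=float)
    theta = np.arccos(P[:, 2] / np.linalg.norm(P, axis=1))  # incident angle
    phi = np.arctan2(P[:, 1], P[:, 0])                      # azimuth on the image
    r = f * theta
    return np.stack([r * np.cos(phi), r * np.sin(phi)], axis=1)

def fit_circle(pts):
    """Algebraic (Kasa) circle fit: returns centre (a, b) and radius."""
    A = np.column_stack([2 * pts[:, 0], 2 * pts[:, 1], np.ones(len(pts))])
    rhs = (pts ** 2).sum(axis=1)
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return np.array([a, b]), np.sqrt(c + a * a + b * b)

f = 200.0
# a 3D line not through the optical axis: X(t) = P0 + t * D
P0, D = np.array([0.5, 0.3, 1.0]), np.array([1.0, 0.2, 0.1])
t = np.linspace(-50, 50, 400)
pts = equidistant_project(P0[None, :] + t[:, None] * D[None, :], f)
centre, radius = fit_circle(pts)
resid = np.abs(np.linalg.norm(pts - centre, axis=1) - radius)
print("max circle-fit residual (px):", resid.max())
# the two vanishing points are the images of the directions +D and -D
w1 = equidistant_project(D[None, :], f)[0]
w2 = equidistant_project(-D[None, :], f)[0]
print("||w1 - w2|| / pi =", np.linalg.norm(w1 - w2) / np.pi)  # equals f
```

The residual stays small relative to the fitted radius, and the vanishing-point separation divided by π recovers f exactly, the property exploited for calibration in Section 3.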
In this paper, we demonstrate how equidistance vanishing points can be used to estimate a camera's intrinsic fish-eye parameters (both the optical center and the equidistance parameter f) and extrinsic orientation parameters (orientation about the world axes), using a standard checkerboard test diagram. Additionally, we compare the accuracy of using circle fitting (as proposed in [3]) and the more general conic fitting. In Section 2, we show how the vanishing points can be extracted from a single full-frame image of a checkerboard diagram under equidistance projection. To extract the vanishing points, the mapped lines of the test diagram are fitted with both circles and the general conic section equation. While the fitting of circles is a simpler solution to the calibration problem, we show that fitting the general conic equation to the mapped lines is the more accurate solution. In Section 3, we show how intrinsic and extrinsic camera parameters can be estimated using the extracted vanishing points, and in Section 4, we test and compare the accuracy of the proposed methods using a large set of synthetically created images and a smaller set of real images.

2 DETERMINING THE VANISHING POINT LOCATIONS

In this section we describe how the vanishing points can be extracted from a full-frame equidistance projected wide-angle image of a traditional checkerboard test diagram, such as that shown in Fig. 1. The checkerboard diagram consists of two sets of parallel lines, each with a corresponding pair of vanishing points. We denote the fish-eye vanishing point pair corresponding to the approximately horizontal direction as w_X,1 and w_X,2, as this is the world axis system that these vanishing points are related to. Similarly, we denote the vertical vanishing point pair as w_Y,1 and w_Y,2. If the fish-eye

vanishing points w_X,1, w_X,2, w_Y,1 and w_Y,2 are known, given the optical center and the equidistance parameter f, the corresponding rectilinear vanishing points v_X and v_Y can simply be determined by undistorting w_X,1 and w_Y,1, or w_X,2 and w_Y,2.

Fig. 1. Fish-eye checkerboard test diagram.

Fig. 2. Extracted edges for use in calibration. (a) Vertical edges, and (b) horizontal edges.

Fig. 3. The two sets of circles, convergent with the four vanishing points, that correspond to the checkerboard diagram in Fig. 1.

2.1 Using circles

Here we describe a method of extracting the vanishing points by fitting circles to the equidistance projection of checkerboard lines. Strand and Hayman [12], Barreto and Daniilidis [13] and Bräuer-Burchardt and Voss [14] suggest that the projection of straight lines in cameras exhibiting radial distortion results in circles on the image plane. Bräuer-Burchardt and Voss, in contrast to the others, explicitly use fish-eye images in their examples. All three methods assume that the camera system follows the division model. However, in our previous work [3], we demonstrated that the equidistance projection of straight lines can also be modelled as circles.

2.1.1 Extracting the vanishing points

The first step in extracting the vanishing points from a fish-eye image of the checkerboard test diagram is to separate the curves into their vertical and horizontal sets. Here, for example, we used an optimized Sobel edge detector [15] to extract the edges. The gradient information from the edge detection was used to separate the edges into their horizontal and vertical sets. Discontinuities in lines caused by corners in the grid are detected using a suitable corner detection method (for example, [16]), and thus corrected. Fig. 2 shows edges extracted from the checkerboard test image in Fig. 1.
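Once circles have been fitted to a curve set, that set's vanishing-point pair lies (approximately) on every fitted circle, so any two of the circles intersect near the pair. As a simplified, non-MLE sketch of this idea, the following recovers a known vanishing-point pair from two circles constructed through it; the point coordinates are hypothetical, and the full procedure in the paper is the MLE refinement described next.

```python
import numpy as np

def circle_intersections(c1, r1, c2, r2):
    """Intersection points of two circles (assumed to intersect)."""
    c1, c2 = np.asarray(c1, float), np.asarray(c2, float)
    d = np.linalg.norm(c2 - c1)
    a = (r1 ** 2 - r2 ** 2 + d ** 2) / (2 * d)  # distance from c1 to the chord
    h = np.sqrt(max(r1 ** 2 - a ** 2, 0.0))     # half-length of the chord
    m = c1 + a * (c2 - c1) / d                  # chord midpoint
    perp = np.array([-(c2 - c1)[1], (c2 - c1)[0]]) / d
    return m + h * perp, m - h * perp

# hypothetical vanishing-point pair (pixels)
w1, w2 = np.array([120.0, 40.0]), np.array([-180.0, -70.0])
# two circles through both points: their centres lie on the perpendicular bisector
mid, chord = (w1 + w2) / 2, w2 - w1
bisector = np.array([-chord[1], chord[0]]) / np.linalg.norm(chord)
centres = [mid + 60.0 * bisector, mid - 110.0 * bisector]
radii = [np.linalg.norm(c - w1) for c in centres]
p, q = circle_intersections(centres[0], radii[0], centres[1], radii[1])
print(p, q)  # the pair {p, q} equals {w1, w2}
```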
Schaffalitzky and Zisserman [17] describe how the vanishing point for a non-fish-eye set of rectilinearly projected concurrent lines can be estimated using the Maximum Likelihood Estimate (MLE), providing a set of corrected lines that are exactly concurrent. The method can easily be extended to support arcs of circles, and two vanishing points rather than one. This is described in detail in [3]. Fig. 3 shows the two sets of fitted circles with the MLE-determined vanishing points.

2.2 Using conics

It is proposed that any error in using circle fitting can be reduced by fitting a more general conic equation. This gives the fit greater degrees of freedom, and therefore greater potential accuracy. A conic on the image plane can be described as:

Au² + Buv + Cv² + Du + Ev + F = 0    (2)

Zhang [18] describes several methods of fitting conics to image data. Barreto and Araujo [19] also describe in detail methods of fitting conics to projected lines, though in that case the fitting is for paracatadioptric projections (i.e., projections involving a camera that consists of lens and mirror arrangements), rather than equidistance projections. In this case, we use a method similar to that used for circles; i.e., the edges are extracted in the same manner, and the MLE extraction of the vanishing points is used. However, instead of using a least-mean-squares (LMS) algorithm to fit circles, we use a similar algorithm to fit the general equation of a conic (which has five degrees of freedom, compared to the three degrees of freedom of the equation of a circle). Other than using conic fitting, the optimization and other steps are the same as for the circle fit.

3 DETERMINING THE CAMERA PARAMETERS

We now describe how, having estimated the locations of the vanishing points (by either circle or conic fitting), the intrinsic fish-eye parameters and the orientation parameters of the camera can be extracted.
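The conic fitting of Section 2.2 can be sketched with a standard direct least-squares fit: stack the monomials of (2) into a design matrix and take the right singular vector with the smallest singular value as the coefficient vector. This is used here in place of the paper's LMS machinery, and the test curve is an arbitrary circle, so both are illustrative assumptions.

```python
import numpy as np

def fit_conic(u, v):
    """Least-squares fit of A u^2 + B uv + C v^2 + D u + E v + F = 0.

    The coefficient vector is the right singular vector of the design
    matrix with the smallest singular value (||coeffs|| = 1 constraint).
    """
    M = np.column_stack([u * u, u * v, v * v, u, v, np.ones_like(u)])
    _, _, Vt = np.linalg.svd(M)
    return Vt[-1]  # (A, B, C, D, E, F)

# sanity check on a circle of radius 50 centred at (10, -20)
t = np.linspace(0, 2 * np.pi, 100, endpoint=False)
u, v = 10 + 50 * np.cos(t), -20 + 50 * np.sin(t)
A, B, C, D, E, F = fit_conic(u, v)
# a circle is the special case A = C, B = 0 of the general conic
print(A / C, B / A)
```

Because a circle is a conic with A = C and B = 0, the recovered coefficients confirm the fit; on real edge data the two extra degrees of freedom let the conic absorb deviations a circle cannot.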
In Section, we examine the accuracy o using circle and conic itting to determine the parameters.

Fig. 4. Estimating the optical center and the equidistance parameter.

The vanishing points in equidistance projection, w_1 and w_2, are given by [3]:

w_1 = (f / √(D_x² + D_y²)) [D_x, D_y]^T · { E,      if φ_vp ≥ 0
                                           { −E,     if φ_vp < 0     (3)

w_2 = (f / √(D_x² + D_y²)) [D_x, D_y]^T · { E − π,  if φ_vp ≥ 0
                                           { π − E,  if φ_vp < 0     (4)

where

E = arctan( √(D_x² + D_y²) / D_z )    (5)

and D = [D_x, D_y, D_z]^T is the vector that describes the direction of the parallel lines in the scene, and φ_vp is the angle that the line intersecting the vanishing points and the optical center makes with the u-axis of the image plane (−π/2 < φ_vp ≤ π/2). In our prior work [3], it was demonstrated that the line that joins the two vanishing points (the horizon line) also intersects the optical center. From the previous section, two pairs of vanishing points (with their two corresponding horizon lines) can be estimated from a fish-eye image of a checkerboard test diagram. The intersection of these two horizon lines is, therefore, an estimate of the optical center (Fig. 4). To estimate f, we examine the distance between the vanishing points described by (3) and (4), which is:

‖w_1 − w_2‖ = √( (w_1,u − w_2,u)² + (w_1,v − w_2,v)² )    (6)

It is straightforward to show that this reduces to:

‖w_1 − w_2‖ = √( (πf D_x / √(D_x² + D_y²))² + (πf D_y / √(D_x² + D_y²))² ) = πf    (7)

Fig. 5. Rotation of the camera/world coordinate systems.

Therefore, given the vanishing points, f can simply be determined as:

f = ‖w_1 − w_2‖ / π    (8)

As shown in Figs. 3 and 4, there are two distances d_X and d_Y that have corresponding equidistance parameters f_X and f_Y. The estimated value for f is simply taken as the average of f_X and f_Y. Given the fish-eye parameters, the aim is to convert the fish-eye image to a rectilinear image. This is done by converting the points in the equidistance image plane to their equivalent points on the rectilinear (pin-hole) image plane.
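The horizon-line construction of Fig. 4 together with (8) amounts to a very small estimator: intersect the two horizon lines (conveniently, in homogeneous coordinates) to get the optical center, and divide each vanishing-point separation by π to get f_X and f_Y. The sketch below uses hypothetical vanishing-point coordinates, chosen so that both horizons pass through (320, 240).

```python
import numpy as np

def line_through(p, q):
    """Homogeneous line through two 2D points."""
    return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])

def estimate_centre_and_f(wx1, wx2, wy1, wy2):
    """Optical centre = intersection of the two horizon lines;
    f = average of the two vanishing-point separations over pi."""
    c = np.cross(line_through(wx1, wx2), line_through(wy1, wy2))
    c = c[:2] / c[2]                      # back to inhomogeneous coordinates
    f_x = np.linalg.norm(np.subtract(wx1, wx2)) / np.pi
    f_y = np.linalg.norm(np.subtract(wy1, wy2)) / np.pi
    return c, (f_x + f_y) / 2

# hypothetical vanishing-point pairs (pixels)
c, f = estimate_centre_and_f((620, 250), (20, 230), (330, 540), (310, -60))
print(c, f)
```

Here both separations give the same f, so the average is exact; with real data f_X and f_Y differ slightly and averaging is the step described above.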
Rectilinear projection is described as:

r_u = f tan(θ)    (9)

The function that describes the conversion of an equidistance projected point to a rectilinear projected point is obtained by combining (1) and (9):

r_u = f tan(r_d / f)    (10)

and its inverse:

r_d = f arctan(r_u / f)    (11)

3.1 Camera orientation

In the previous sections, we assumed that the world coordinate system was centered on the center of projection of the camera. However, in order to take into account rotation between the camera coordinate system and the test diagram, it is more convenient for the world coordinates to be located at the test diagram, as described in Fig. 5, and rotated away from the camera coordinate system. Extending the rectilinear pin-hole camera model to include translation and rotation gives

x_u = K[R T]X    (12)

where R is a 3 × 3 rotation matrix and T is a three-element column vector describing the translational mapping from world coordinates to camera coordinates. Gallagher [20] describes how the location of the vanishing

points is independent of the translation vector T, and is given by

V = [v_X, v_Y, v_Z] = KR    (13)

where V is the matrix of vanishing point locations, made up of the homogeneous column vectors v_X, v_Y and v_Z that describe the vanishing points on the image plane related to each of the X-, Y- and Z-axes respectively. Under rotation about the X- and Y-axes only, the vanishing points become [20]

V_XY = [  cos β           0        sin β
          sin α sin β     cos α   −sin α cos β
         −cos α sin β     sin α    cos α cos β ]    (14)

This is equivalent to first rotating about the Y-axis by angle β, and then rotating about the original X-axis by angle α. It is important to note that after the homogeneous v_Y is converted to image plane coordinates, it is constrained to fall on the image plane v-axis. Adding further rotation about the Z-axis by angle γ results in

V = R_Z V_XY    (15)

where

R_Z = [ cos γ   −sin γ   0
        sin γ    cos γ   0
        0        0       1 ]    (16)

Thus, rotating about the Z-axis results in the vanishing points V_XY being rotated by the same angle. The method for estimating v_X and v_Y was described in the previous section. Considering that v_Y is constrained to fall on the v-axis in image plane coordinates, the angle of rotation about the Z-axis can be determined by measuring the angle γ that v_Y makes with the v-axis at the origin [20]. Given v_X, v_Y and γ, the other two orientation parameters can be determined from the first two column vectors in (15). Thus, given the three orientation parameters, the rotation matrix R can be constructed and used to rectify a given image.

4 RESULTS

4.1 Synthetic images

To determine the accuracy of the parameter estimation, the algorithm was run on a set of synthetic images. The synthetic images were created with a set of known parameters, and a number of imperfections were added (sharpness, noise, and imperfect illumination). The method used to create the synthetic images is the same as that described in [3]. Fig. 6 shows two of the synthetically created calibration images. A total of 68 synthetic images were created, and the equidistance parameter, optical center and orientations were estimated using both circle fitting and conic fitting. Fig. 7 shows the distributions of the errors of the parameter estimates for the synthetic images, and Table 1 shows the means and standard deviations of the errors. The errors are calculated as the difference between the estimated parameters and the modeled parameters.

Fig. 6. Synthetic image samples.

TABLE 1
ERRORS OF THE PARAMETER ESTIMATIONS FOR THE SYNTHETIC IMAGE SET
The mean µ and the standard deviation σ of the errors of the camera parameter estimation using the synthetic images, for both circle fitting and conic fitting. f, c_u and c_v are in pixels; α, β and γ are in degrees.

It can be seen from both Fig. 7 and Table 1 that, in general, both the average error and the deviation are smaller for the conic fitting method than for the circle fitting method. This is particularly true for the error in the estimation of f, which for the circle fitting method is considerably higher than for the conic fitting method (Table 1). The error in estimating c_v is larger than that for c_u for both the circle and conic fitting methods. This can be explained by the fact that the estimation of c_u is primarily determined by the horizontal horizon line. An error in the estimation of the vertical horizon line will have less of an impact on the estimation of c_u than an error in the estimation of the horizontal horizon line. The opposite is true for the estimation of c_v. Considering that the horizontal curves imaged in, for example, Fig. 6 have more data points (640 data points as opposed to 480), it is reasonable to assume that the circle/conic fitting to these imaged curves will be more accurate than to the vertical curves. Therefore, the estimation of the horizontal horizon line will be more accurate than that of the vertical, and thus the estimation of c_u tends to be more accurate than that of c_v.
4.2 Real Images

In this section, we examine the accuracy of the calibration algorithms using real images of the checkerboard calibration diagram, captured using three different wide-angle cameras. For each camera, 15 images were captured and processed using the parameter estimation algorithms. Fig. 8 shows the results of the parameter estimations for each image from each camera. The results in

general back up the synthetic image results. The estimation of f using circle fitting returns a higher value than the conic fitting method. While the ground truth is unknown, the synthetic image results suggest that the result from the conic fitting is closer to the actual equidistance parameter of the camera. This is supported by the fact that the distribution of the f estimations using conic fitting tends to be smaller than that of the estimations using circle fitting. The results for the optical center estimation are similar in both cases, with the circle fitting again showing a slightly larger distribution than the conic fitting. These statements are supported by the data in Table 2, which shows the means and standard deviations over the sample real images for each camera. Note that Table 2 gives absolute averages and standard deviations for the parameters of the real cameras, in contrast to Table 1, which gives averages and standard deviations for the errors in the parameter estimation. The orientation parameter estimation errors are more difficult to quantify for the real images.

Fig. 7. Parameter estimation errors using synthetic images. (a) Distribution of the f estimation errors using circle fitting, and (b) using conic fitting. (c) Distribution of the optical center estimation errors using circle fitting, and (d) using conic fitting. (e) Distribution of the camera orientation estimations using circle fitting (in degrees), and (f) using conic fitting (in degrees). The resolution of all images was 640 × 480.
The intrinsic parameters remain constant for each camera over the 15 images, while the extrinsic parameters, being the orientation of the camera relative to the checkerboard test diagram, differ for each image due to the manual set-up of the calibration rig. Table 3 shows the results of the orientation estimation for each of the 15 images from one of the cameras. The results for the circle and conic fitting methods are similar (the difference is never more than 0.5 of a degree), which suggests that the error in using either method will be similar. Additionally, the results from the synthetic images suggest that this error will be small. Fig. 9 shows a checkerboard image from Camera 3, transformed to the rectilinear image using (11) with a back-mapping method [21] and bilinear interpolation. Rotation is removed using the parameters from both

the circle and conic fitting methods, using the determined rotation matrix R and also using back-mapping. It can be seen that using the parameters obtained from the conic fitting method returns an image with a greater degree of rectilinearity. This supports the finding from the synthetic images that the estimation of f is more accurate using conic fitting. Fig. 10 shows further examples from Camera 2.

Fig. 8. Parameter estimations from real images taken with three cameras (15 images per camera). (a) Estimation of f using the circle fitting method for each image from each camera; (b) using the conic fitting method; (c) distributions of the optical center estimation using the circle fitting method; (d) using the conic fitting method. Camera 1 is a 170° field-of-view equidistance fish-eye camera; Camera 2 has a 163° field-of-view; Camera 3 is a moderate wide-angle camera with a 130° field-of-view. The sample resolution of all images was 640 × 480 pixels.

TABLE 2
INTRINSIC FISH-EYE PARAMETER ESTIMATIONS USING REAL IMAGES
The mean µ and the standard deviation σ of the camera parameter estimations (f, c_u and c_v, using both circle fitting and conic fitting) for the three cameras (15 images per camera) used in the experiments. All values are in pixels. Camera 1 is a 170° field-of-view equidistance fish-eye camera; Camera 2 has a 163° field-of-view; Camera 3 has a 130° field-of-view.

TABLE 3
EXTRINSIC ORIENTATION PARAMETER ESTIMATIONS USING REAL IMAGES
The estimated orientation parameters α, β and γ (circle fitting and conic fitting) for the 15 calibration images of Camera 1. All values are in degrees. Similar results were obtained for Cameras 2 and 3.
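The back-mapping rectification used for Figs. 9 and 10 can be sketched as follows: for every pixel of the target rectilinear image, (11) gives the source radius in the fish-eye image, which is then sampled with bilinear interpolation. This is a minimal single-channel sketch under those assumptions; the rotation-removal step (applying R) and the cited VLSI back-mapping architecture are omitted.

```python
import numpy as np

def rectify_equidistant(img, f, centre):
    """Back-map a grayscale equidistant fish-eye image to a rectilinear one.

    For each target pixel at rectilinear radius r_u from the centre,
    r_d = f * arctan(r_u / f) gives the source radius in the fish-eye
    image (equation (11)); the value is bilinearly interpolated.
    """
    h, w = img.shape
    cu, cv = centre
    v, u = np.mgrid[0:h, 0:w].astype(float)
    du, dv = u - cu, v - cv
    r_u = np.hypot(du, dv)
    with np.errstate(invalid="ignore"):
        scale = np.where(r_u > 0, f * np.arctan(r_u / f) / r_u, 1.0)
    su, sv = cu + du * scale, cv + dv * scale  # source fish-eye coordinates
    u0, v0 = np.floor(su).astype(int), np.floor(sv).astype(int)
    fu, fv = su - u0, sv - v0
    u0, v0 = np.clip(u0, 0, w - 2), np.clip(v0, 0, h - 2)
    return ((1 - fu) * (1 - fv) * img[v0, u0]
            + fu * (1 - fv) * img[v0, u0 + 1]
            + (1 - fu) * fv * img[v0 + 1, u0]
            + fu * fv * img[v0 + 1, u0 + 1])

# demo on a horizontal ramp image: peripheral pixels sample nearer the centre
img = np.tile(np.arange(64, dtype=float), (64, 1))
out = rectify_equidistant(img, 20.0, (32.0, 32.0))
print(out[32, 32], out[32, 63])
```

The centre pixel is left unchanged (scale → 1 as r_u → 0), while peripheral pixels pull their values from smaller fish-eye radii, stretching the compressed periphery back out.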
5 CONCLUSIONS

In this paper we have demonstrated a method for extracting all the camera information required to provide

a rectilinear, rectified image from a fish-eye camera. We have presented two methods: circle fitting and conic fitting. While the fitting of circles is undoubtedly a simpler method than fitting the general conic section equation to the projected lines, we have shown that the fitting of conics returns better results, particularly in the estimation of the equidistance parameter. The estimation of the optical center is a necessity. Hartley and Kang stated that (in a 640 × 480 image) the optical center can be as much as 30 pixels from the image center [4]. However, we have shown here that it can be greater than that with some consumer cameras. Our results from Camera 1 show an optical center that is almost 50 pixels from the image center, and that of Camera 2 is also more than 30 pixels from the center. These deviations are significant, and should not be ignored in fish-eye camera calibration. In our tests on real images using moderate wide-angle to fish-eye cameras, the optical center estimation appears to be at least as good as the results presented by Hartley and Kang [4]. However, Hartley and Kang use averaging to reduce the effect of noise in their optical center estimations, which we have found is not necessary for our method. The estimation results for f, particularly for the conic fitting method, also show a low distribution. Future work in this area would involve the examination of similar perspective properties for other lens models, as well as the examination of the effect of radial distortion parameters for fish-eye lenses that do not follow a particular model accurately.

Fig. 9. Rectification of a checkerboard image from Camera 3. (a) Original image; (b) calibrated using circle fit parameters; (c) using conic fit parameters; (d) rectification using circle fit parameters; (e) rectification using conic fit parameters.

ACKNOWLEDGMENTS

This research was co-funded by Enterprise Ireland and Valeo Vision Systems (formerly Connaught Electronics Ltd.)
under the Enterprise Ireland Innovation Partnerships Scheme.

REFERENCES

[1] K. Miyamoto, "Fish eye lens," Journal of the Optical Society of America, vol. 54, no. 8, pp. 1060–1061, Aug. 1964.
[2] C. Slama, Ed., Manual of Photogrammetry, 4th ed. American Society of Photogrammetry, 1980.
[3] C. Hughes, R. McFeely, P. Denny, M. Glavin and E. Jones, "Equidistant (fθ) fish-eye perspective with application in distortion centre estimation," Image and Vision Computing, vol. 28, no. 3, Mar. 2010.
[4] R. Hartley and S. B. Kang, "Parameter-free radial distortion correction with center of distortion estimation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 8, pp. 1309–1321, Aug. 2007.
[5] S. Shah and J. K. Aggarwal, "Intrinsic parameter calibration procedure for a (high-distortion) fish-eye lens camera with distortion model and accuracy estimation," Pattern Recognition, vol. 29, no. 11, pp. 1775–1788, Nov. 1996.
[6] F. Devernay and O. Faugeras, "Straight lines have to be straight: automatic calibration and removal of distortion from scenes of structured environments," Machine Vision and Applications, vol. 13, no. 1, pp. 14–24, Aug. 2001.
[7] L. L. Wang and W.-H. Tsai, "Camera calibration by vanishing lines for 3-D computer vision," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 13, no. 4, pp. 370–376, Apr. 1991.

Fig. 10. Further rectification examples. (a) Checkerboard calibration diagram aligned with the rear wall of the room; (b) the extracted edges and corners; (c) original image; (d) using parameters from the circle fitting, and (e) conic fitting; (f) rectified using parameters from the circle fitting, and (g) conic fitting.

[8] B. Caprile and V. Torre, "Using vanishing points for camera calibration," International Journal of Computer Vision, vol. 4, no. 2, pp. 127–139, Mar. 1990.
[9] N. Avinash and S. Murali, "Perspective geometry based single image camera calibration," Journal of Mathematical Imaging and Vision, vol. 30, no. 3, pp. 221–230, Mar. 2008.
[10] G. Wang, J. Wu, and Z. Ji, "Single view based pose estimation from circle or parallel lines," Pattern Recognition Letters, vol. 29, no. 7, May 2008.
[11] Y. Wu, Y. Li, and Z. Hu, "Detecting and handling unreliable points for camera parameter estimation," International Journal of Computer Vision, vol. 79, no. 2, pp. 209–223, Aug. 2008.
[12] R. Strand and E. Hayman, "Correcting radial distortion by circle fitting," in Proceedings of the British Machine Vision Conference, Sep. 2005.
[13] J. P. Barreto and K. Daniilidis, "Fundamental matrix for cameras with radial distortion," in Proceedings of the IEEE International Conference on Computer Vision, vol. 1, Oct. 2005.
[14] C. Bräuer-Burchardt and K. Voss, "A new algorithm to correct fish-eye- and strong wide-angle-lens-distortion from single images," in International Conference on Image Processing, vol. 1, pp. 225–228, Oct. 2001.
[15] B. Jähne, Digital Image Processing, 5th ed. Springer-Verlag, 2002, ch. 12.
[16] Z. Wang, W. Wu, X. Xu, and D. Xue, "Recognition and location of the internal corners of planar checkerboard calibration pattern image," Applied Mathematics and Computation, vol. 185, no. 2, Feb. 2007.
[17] F. Schaffalitzky and A. Zisserman, "Planar grouping for automatic detection of vanishing lines and points," Image and Vision Computing, vol. 18, no. 9, pp. 647–658, Jun. 2000.
[18] Z.
Zhang, "Parameter estimation techniques: a tutorial with application to conic fitting," Image and Vision Computing, vol. 15, no. 1, pp. 59–76, Jan. 1997.
[19] J. P. Barreto and H. Araujo, "Fitting conics to paracatadioptric projections of lines," Computer Vision and Image Understanding, vol. 101, no. 3, pp. 151–165, Mar. 2006.
[20] A. C. Gallagher, "Using vanishing points to correct camera rotation in images," in Proceedings of the Second Canadian Conference on Computer and Robot Vision, May 2005.
[21] K. V. Asari, "Design of an efficient VLSI architecture for non-linear spatial warping of wide-angle camera images," Journal of Systems Architecture, vol. 51, no. 12, Dec. 2005.


Projective geometry for Computer Vision Department of Computer Science and Engineering IIT Delhi NIT, Rourkela March 27, 2010 Overview Pin-hole camera Why projective geometry? Reconstruction Computer vision geometry: main problems Correspondence

More information

Performance Study of Quaternion and Matrix Based Orientation for Camera Calibration

Performance Study of Quaternion and Matrix Based Orientation for Camera Calibration Performance Study of Quaternion and Matrix Based Orientation for Camera Calibration Rigoberto Juarez-Salazar 1, Carlos Robledo-Sánchez 2, Fermín Guerrero-Sánchez 2, J. Jacobo Oliveros-Oliveros 2, C. Meneses-Fabian

More information

Image Transformations & Camera Calibration. Mašinska vizija, 2018.

Image Transformations & Camera Calibration. Mašinska vizija, 2018. Image Transformations & Camera Calibration Mašinska vizija, 2018. Image transformations What ve we learnt so far? Example 1 resize and rotate Open warp_affine_template.cpp Perform simple resize

More information

Camera Calibration for Video See-Through Head-Mounted Display. Abstract. 1.0 Introduction. Mike Bajura July 7, 1993

Camera Calibration for Video See-Through Head-Mounted Display. Abstract. 1.0 Introduction. Mike Bajura July 7, 1993 Camera Calibration for Video See-Through Head-Mounted Display Mike Bajura July 7, 1993 Abstract This report describes a method for computing the parameters needed to model a television camera for video

More information

Camera Model and Calibration. Lecture-12

Camera Model and Calibration. Lecture-12 Camera Model and Calibration Lecture-12 Camera Calibration Determine extrinsic and intrinsic parameters of camera Extrinsic 3D location and orientation of camera Intrinsic Focal length The size of the

More information

Computer Vision Project-1

Computer Vision Project-1 University of Utah, School Of Computing Computer Vision Project- Singla, Sumedha sumedha.singla@utah.edu (00877456 February, 205 Theoretical Problems. Pinhole Camera (a A straight line in the world space

More information

Camera Calibration using Vanishing Points

Camera Calibration using Vanishing Points Camera Calibration using Vanishing Points Paul Beardsley and David Murray * Department of Engineering Science, University of Oxford, Oxford 0X1 3PJ, UK Abstract This paper describes a methodformeasuringthe

More information

Vision Review: Image Formation. Course web page:

Vision Review: Image Formation. Course web page: Vision Review: Image Formation Course web page: www.cis.udel.edu/~cer/arv September 10, 2002 Announcements Lecture on Thursday will be about Matlab; next Tuesday will be Image Processing The dates some

More information

Image Formation. Antonino Furnari. Image Processing Lab Dipartimento di Matematica e Informatica Università degli Studi di Catania

Image Formation. Antonino Furnari. Image Processing Lab Dipartimento di Matematica e Informatica Università degli Studi di Catania Image Formation Antonino Furnari Image Processing Lab Dipartimento di Matematica e Informatica Università degli Studi di Catania furnari@dmi.unict.it 18/03/2014 Outline Introduction; Geometric Primitives

More information

Camera Calibration from the Quasi-affine Invariance of Two Parallel Circles

Camera Calibration from the Quasi-affine Invariance of Two Parallel Circles Camera Calibration from the Quasi-affine Invariance of Two Parallel Circles Yihong Wu, Haijiang Zhu, Zhanyi Hu, and Fuchao Wu National Laboratory of Pattern Recognition, Institute of Automation, Chinese

More information

Computer Vision Projective Geometry and Calibration. Pinhole cameras

Computer Vision Projective Geometry and Calibration. Pinhole cameras Computer Vision Projective Geometry and Calibration Professor Hager http://www.cs.jhu.edu/~hager Jason Corso http://www.cs.jhu.edu/~jcorso. Pinhole cameras Abstract camera model - box with a small hole

More information

Rectification and Distortion Correction

Rectification and Distortion Correction Rectification and Distortion Correction Hagen Spies March 12, 2003 Computer Vision Laboratory Department of Electrical Engineering Linköping University, Sweden Contents Distortion Correction Rectification

More information

Camera calibration for miniature, low-cost, wide-angle imaging systems

Camera calibration for miniature, low-cost, wide-angle imaging systems Camera calibration for miniature, low-cost, wide-angle imaging systems Oliver Frank, Roman Katz, Christel-Loic Tisse and Hugh Durrant-Whyte ARC Centre of Excellence for Autonomous Systems University of

More information

Simultaneous Vanishing Point Detection and Camera Calibration from Single Images

Simultaneous Vanishing Point Detection and Camera Calibration from Single Images Simultaneous Vanishing Point Detection and Camera Calibration from Single Images Bo Li, Kun Peng, Xianghua Ying, and Hongbin Zha The Key Lab of Machine Perception (Ministry of Education), Peking University,

More information

3D Geometry and Camera Calibration

3D Geometry and Camera Calibration 3D Geometry and Camera Calibration 3D Coordinate Systems Right-handed vs. left-handed x x y z z y 2D Coordinate Systems 3D Geometry Basics y axis up vs. y axis down Origin at center vs. corner Will often

More information

Calibrating an Overhead Video Camera

Calibrating an Overhead Video Camera Calibrating an Overhead Video Camera Raul Rojas Freie Universität Berlin, Takustraße 9, 495 Berlin, Germany http://www.fu-fighters.de Abstract. In this section we discuss how to calibrate an overhead video

More information

1.2 Related Work. Panoramic Camera PTZ Dome Camera

1.2 Related Work. Panoramic Camera PTZ Dome Camera 3rd International Conerence on Multimedia Technology ICMT 2013) A SPATIAL CALIBRATION METHOD BASED ON MASTER-SLAVE CAMERA Hao Shi1, Yu Liu, Shiming Lai, Maojun Zhang, Wei Wang Abstract. Recently the Master-Slave

More information

Improving Vision-Based Distance Measurements using Reference Objects

Improving Vision-Based Distance Measurements using Reference Objects Improving Vision-Based Distance Measurements using Reference Objects Matthias Jüngel, Heinrich Mellmann, and Michael Spranger Humboldt-Universität zu Berlin, Künstliche Intelligenz Unter den Linden 6,

More information

Stereo Image Rectification for Simple Panoramic Image Generation

Stereo Image Rectification for Simple Panoramic Image Generation Stereo Image Rectification for Simple Panoramic Image Generation Yun-Suk Kang and Yo-Sung Ho Gwangju Institute of Science and Technology (GIST) 261 Cheomdan-gwagiro, Buk-gu, Gwangju 500-712 Korea Email:{yunsuk,

More information

Hartley - Zisserman reading club. Part I: Hartley and Zisserman Appendix 6: Part II: Zhengyou Zhang: Presented by Daniel Fontijne

Hartley - Zisserman reading club. Part I: Hartley and Zisserman Appendix 6: Part II: Zhengyou Zhang: Presented by Daniel Fontijne Hartley - Zisserman reading club Part I: Hartley and Zisserman Appendix 6: Iterative estimation methods Part II: Zhengyou Zhang: A Flexible New Technique for Camera Calibration Presented by Daniel Fontijne

More information

SELF-CALIBRATION OF CENTRAL CAMERAS BY MINIMIZING ANGULAR ERROR

SELF-CALIBRATION OF CENTRAL CAMERAS BY MINIMIZING ANGULAR ERROR SELF-CALIBRATION OF CENTRAL CAMERAS BY MINIMIZING ANGULAR ERROR Juho Kannala, Sami S. Brandt and Janne Heikkilä Machine Vision Group, University of Oulu, Finland {jkannala, sbrandt, jth}@ee.oulu.fi Keywords:

More information

Rectification and Disparity

Rectification and Disparity Rectification and Disparity Nassir Navab Slides prepared by Christian Unger What is Stereo Vision? Introduction A technique aimed at inferring dense depth measurements efficiently using two cameras. Wide

More information

Camera Self-calibration Based on the Vanishing Points*

Camera Self-calibration Based on the Vanishing Points* Camera Self-calibration Based on the Vanishing Points* Dongsheng Chang 1, Kuanquan Wang 2, and Lianqing Wang 1,2 1 School of Computer Science and Technology, Harbin Institute of Technology, Harbin 150001,

More information

Camera Calibration Using Two Concentric Circles

Camera Calibration Using Two Concentric Circles Camera Calibration Using Two Concentric Circles Francisco Abad, Emilio Camahort, and Roberto Vivó Universidad Politécnica de Valencia, Camino de Vera s/n, Valencia 4601, Spain {jabad, camahort, rvivo}@dsic.upv.es,

More information

Motion. 1 Introduction. 2 Optical Flow. Sohaib A Khan. 2.1 Brightness Constancy Equation

Motion. 1 Introduction. 2 Optical Flow. Sohaib A Khan. 2.1 Brightness Constancy Equation Motion Sohaib A Khan 1 Introduction So far, we have dealing with single images of a static scene taken by a fixed camera. Here we will deal with sequence of images taken at different time intervals. Motion

More information

Midterm Exam Solutions

Midterm Exam Solutions Midterm Exam Solutions Computer Vision (J. Košecká) October 27, 2009 HONOR SYSTEM: This examination is strictly individual. You are not allowed to talk, discuss, exchange solutions, etc., with other fellow

More information

Robot Vision: Camera calibration

Robot Vision: Camera calibration Robot Vision: Camera calibration Ass.Prof. Friedrich Fraundorfer SS 201 1 Outline Camera calibration Cameras with lenses Properties of real lenses (distortions, focal length, field-of-view) Calibration

More information

Camera Geometry II. COS 429 Princeton University

Camera Geometry II. COS 429 Princeton University Camera Geometry II COS 429 Princeton University Outline Projective geometry Vanishing points Application: camera calibration Application: single-view metrology Epipolar geometry Application: stereo correspondence

More information

(Geometric) Camera Calibration

(Geometric) Camera Calibration (Geoetric) Caera Calibration CS635 Spring 217 Daniel G. Aliaga Departent of Coputer Science Purdue University Caera Calibration Caeras and CCDs Aberrations Perspective Projection Calibration Caeras First

More information

Camera model and multiple view geometry

Camera model and multiple view geometry Chapter Camera model and multiple view geometry Before discussing how D information can be obtained from images it is important to know how images are formed First the camera model is introduced and then

More information

CHAPTER 3. Single-view Geometry. 1. Consequences of Projection

CHAPTER 3. Single-view Geometry. 1. Consequences of Projection CHAPTER 3 Single-view Geometry When we open an eye or take a photograph, we see only a flattened, two-dimensional projection of the physical underlying scene. The consequences are numerous and startling.

More information

Pin Hole Cameras & Warp Functions

Pin Hole Cameras & Warp Functions Pin Hole Cameras & Warp Functions Instructor - Simon Lucey 16-423 - Designing Computer Vision Apps Today Pinhole Camera. Homogenous Coordinates. Planar Warp Functions. Motivation Taken from: http://img.gawkerassets.com/img/18w7i1umpzoa9jpg/original.jpg

More information

To graph the point (r, θ), simply go out r units along the initial ray, then rotate through the angle θ. The point (1, 5π 6

To graph the point (r, θ), simply go out r units along the initial ray, then rotate through the angle θ. The point (1, 5π 6 Polar Coordinates Any point in the plane can be described by the Cartesian coordinates (x, y), where x and y are measured along the corresponding axes. However, this is not the only way to represent points

More information

Subpixel Corner Detection Using Spatial Moment 1)

Subpixel Corner Detection Using Spatial Moment 1) Vol.31, No.5 ACTA AUTOMATICA SINICA September, 25 Subpixel Corner Detection Using Spatial Moment 1) WANG She-Yang SONG Shen-Min QIANG Wen-Yi CHEN Xing-Lin (Department of Control Engineering, Harbin Institute

More information

CALIBRATION BETWEEN DEPTH AND COLOR SENSORS FOR COMMODITY DEPTH CAMERAS. Cha Zhang and Zhengyou Zhang

CALIBRATION BETWEEN DEPTH AND COLOR SENSORS FOR COMMODITY DEPTH CAMERAS. Cha Zhang and Zhengyou Zhang CALIBRATION BETWEEN DEPTH AND COLOR SENSORS FOR COMMODITY DEPTH CAMERAS Cha Zhang and Zhengyou Zhang Communication and Collaboration Systems Group, Microsoft Research {chazhang, zhang}@microsoft.com ABSTRACT

More information

Computer Vision I - Appearance-based Matching and Projective Geometry

Computer Vision I - Appearance-based Matching and Projective Geometry Computer Vision I - Appearance-based Matching and Projective Geometry Carsten Rother 05/11/2015 Computer Vision I: Image Formation Process Roadmap for next four lectures Computer Vision I: Image Formation

More information

Outline. ETN-FPI Training School on Plenoptic Sensing

Outline. ETN-FPI Training School on Plenoptic Sensing Outline Introduction Part I: Basics of Mathematical Optimization Linear Least Squares Nonlinear Optimization Part II: Basics of Computer Vision Camera Model Multi-Camera Model Multi-Camera Calibration

More information

Massachusetts Institute of Technology Department of Computer Science and Electrical Engineering 6.801/6.866 Machine Vision QUIZ II

Massachusetts Institute of Technology Department of Computer Science and Electrical Engineering 6.801/6.866 Machine Vision QUIZ II Massachusetts Institute of Technology Department of Computer Science and Electrical Engineering 6.801/6.866 Machine Vision QUIZ II Handed out: 001 Nov. 30th Due on: 001 Dec. 10th Problem 1: (a (b Interior

More information

MERGING POINT CLOUDS FROM MULTIPLE KINECTS. Nishant Rai 13th July, 2016 CARIS Lab University of British Columbia

MERGING POINT CLOUDS FROM MULTIPLE KINECTS. Nishant Rai 13th July, 2016 CARIS Lab University of British Columbia MERGING POINT CLOUDS FROM MULTIPLE KINECTS Nishant Rai 13th July, 2016 CARIS Lab University of British Columbia Introduction What do we want to do? : Use information (point clouds) from multiple (2+) Kinects

More information

Recovery of Intrinsic and Extrinsic Camera Parameters Using Perspective Views of Rectangles

Recovery of Intrinsic and Extrinsic Camera Parameters Using Perspective Views of Rectangles 177 Recovery of Intrinsic and Extrinsic Camera Parameters Using Perspective Views of Rectangles T. N. Tan, G. D. Sullivan and K. D. Baker Department of Computer Science The University of Reading, Berkshire

More information

calibrated coordinates Linear transformation pixel coordinates

calibrated coordinates Linear transformation pixel coordinates 1 calibrated coordinates Linear transformation pixel coordinates 2 Calibration with a rig Uncalibrated epipolar geometry Ambiguities in image formation Stratified reconstruction Autocalibration with partial

More information

CSCI 5980: Assignment #3 Homography

CSCI 5980: Assignment #3 Homography Submission Assignment due: Feb 23 Individual assignment. Write-up submission format: a single PDF up to 3 pages (more than 3 page assignment will be automatically returned.). Code and data. Submission

More information

Homogeneous Coordinates. Lecture18: Camera Models. Representation of Line and Point in 2D. Cross Product. Overall scaling is NOT important.

Homogeneous Coordinates. Lecture18: Camera Models. Representation of Line and Point in 2D. Cross Product. Overall scaling is NOT important. Homogeneous Coordinates Overall scaling is NOT important. CSED44:Introduction to Computer Vision (207F) Lecture8: Camera Models Bohyung Han CSE, POSTECH bhhan@postech.ac.kr (",, ) ()", ), )) ) 0 It is

More information

Laser sensors. Transmitter. Receiver. Basilio Bona ROBOTICA 03CFIOR

Laser sensors. Transmitter. Receiver. Basilio Bona ROBOTICA 03CFIOR Mobile & Service Robotics Sensors for Robotics 3 Laser sensors Rays are transmitted and received coaxially The target is illuminated by collimated rays The receiver measures the time of flight (back and

More information

ECE Digital Image Processing and Introduction to Computer Vision. Outline

ECE Digital Image Processing and Introduction to Computer Vision. Outline ECE592-064 Digital Image Processing and Introduction to Computer Vision Depart. of ECE, NC State University Instructor: Tianfu (Matt) Wu Spring 2017 1. Recap Outline 2. Modeling Projection and Projection

More information

Multichannel Camera Calibration

Multichannel Camera Calibration Multichannel Camera Calibration Wei Li and Julie Klein Institute of Imaging and Computer Vision, RWTH Aachen University D-52056 Aachen, Germany ABSTRACT For the latest computer vision applications, it

More information

Mathematics of a Multiple Omni-Directional System

Mathematics of a Multiple Omni-Directional System Mathematics of a Multiple Omni-Directional System A. Torii A. Sugimoto A. Imiya, School of Science and National Institute of Institute of Media and Technology, Informatics, Information Technology, Chiba

More information

EXAM SOLUTIONS. Image Processing and Computer Vision Course 2D1421 Monday, 13 th of March 2006,

EXAM SOLUTIONS. Image Processing and Computer Vision Course 2D1421 Monday, 13 th of March 2006, School of Computer Science and Communication, KTH Danica Kragic EXAM SOLUTIONS Image Processing and Computer Vision Course 2D1421 Monday, 13 th of March 2006, 14.00 19.00 Grade table 0-25 U 26-35 3 36-45

More information

Coplanar circles, quasi-affine invariance and calibration

Coplanar circles, quasi-affine invariance and calibration Image and Vision Computing 24 (2006) 319 326 www.elsevier.com/locate/imavis Coplanar circles, quasi-affine invariance and calibration Yihong Wu *, Xinju Li, Fuchao Wu, Zhanyi Hu National Laboratory of

More information

Short on camera geometry and camera calibration

Short on camera geometry and camera calibration Short on camera geometry and camera calibration Maria Magnusson, maria.magnusson@liu.se Computer Vision Laboratory, Department of Electrical Engineering, Linköping University, Sweden Report No: LiTH-ISY-R-3070

More information

Chapters 1 7: Overview

Chapters 1 7: Overview Chapters 1 7: Overview Chapter 1: Introduction Chapters 2 4: Data acquisition Chapters 5 7: Data manipulation Chapter 5: Vertical imagery Chapter 6: Image coordinate measurements and refinements Chapter

More information

Camera Calibration and Light Source Orientation Estimation from Images with Shadows

Camera Calibration and Light Source Orientation Estimation from Images with Shadows Camera Calibration and Light Source Orientation Estimation from Images with Shadows Xiaochun Cao and Mubarak Shah Computer Vision Lab University of Central Florida Orlando, FL, 32816-3262 Abstract In this

More information

Multiple View Geometry in Computer Vision

Multiple View Geometry in Computer Vision Multiple View Geometry in Computer Vision Prasanna Sahoo Department of Mathematics University of Louisville 1 More on Single View Geometry Lecture 11 2 In Chapter 5 we introduced projection matrix (which

More information

Computer Vision Lecture 17

Computer Vision Lecture 17 Announcements Computer Vision Lecture 17 Epipolar Geometry & Stereo Basics Seminar in the summer semester Current Topics in Computer Vision and Machine Learning Block seminar, presentations in 1 st week

More information

Image formation. Thanks to Peter Corke and Chuck Dyer for the use of some slides

Image formation. Thanks to Peter Corke and Chuck Dyer for the use of some slides Image formation Thanks to Peter Corke and Chuck Dyer for the use of some slides Image Formation Vision infers world properties form images. How do images depend on these properties? Two key elements Geometry

More information

DESIGN AND TESTING OF MATHEMATICAL MODELS FOR A FULL-SPHERICAL CAMERA ON THE BASIS OF A ROTATING LINEAR ARRAY SENSOR AND A FISHEYE LENS

DESIGN AND TESTING OF MATHEMATICAL MODELS FOR A FULL-SPHERICAL CAMERA ON THE BASIS OF A ROTATING LINEAR ARRAY SENSOR AND A FISHEYE LENS DESIGN AND TESTING OF MATHEMATICAL MODELS FOR A FULL-SPHERICAL CAMERA ON THE BASIS OF A ROTATING LINEAR ARRAY SENSOR AND A FISHEYE LENS Danilo SCHNEIDER, Ellen SCHWALBE Institute of Photogrammetry and

More information

arxiv: v1 [cs.cv] 28 Sep 2018

arxiv: v1 [cs.cv] 28 Sep 2018 Camera Pose Estimation from Sequence of Calibrated Images arxiv:1809.11066v1 [cs.cv] 28 Sep 2018 Jacek Komorowski 1 and Przemyslaw Rokita 2 1 Maria Curie-Sklodowska University, Institute of Computer Science,

More information

Metric Rectification for Perspective Images of Planes

Metric Rectification for Perspective Images of Planes 789139-3 University of California Santa Barbara Department of Electrical and Computer Engineering CS290I Multiple View Geometry in Computer Vision and Computer Graphics Spring 2006 Metric Rectification

More information

Stereo. 11/02/2012 CS129, Brown James Hays. Slides by Kristen Grauman

Stereo. 11/02/2012 CS129, Brown James Hays. Slides by Kristen Grauman Stereo 11/02/2012 CS129, Brown James Hays Slides by Kristen Grauman Multiple views Multi-view geometry, matching, invariant features, stereo vision Lowe Hartley and Zisserman Why multiple views? Structure

More information

Calibration of a Different Field-of-view Stereo Camera System using an Embedded Checkerboard Pattern

Calibration of a Different Field-of-view Stereo Camera System using an Embedded Checkerboard Pattern Calibration of a Different Field-of-view Stereo Camera System using an Embedded Checkerboard Pattern Pathum Rathnayaka, Seung-Hae Baek and Soon-Yong Park School of Computer Science and Engineering, Kyungpook

More information

Camera Calibration With One-Dimensional Objects

Camera Calibration With One-Dimensional Objects Camera Calibration With One-Dimensional Objects Zhengyou Zhang December 2001 Technical Report MSR-TR-2001-120 Camera calibration has been studied extensively in computer vision and photogrammetry, and

More information

Reflection and Refraction

Reflection and Refraction Relection and Reraction Object To determine ocal lengths o lenses and mirrors and to determine the index o reraction o glass. Apparatus Lenses, optical bench, mirrors, light source, screen, plastic or

More information

Computer Vision cmput 428/615

Computer Vision cmput 428/615 Computer Vision cmput 428/615 Basic 2D and 3D geometry and Camera models Martin Jagersand The equation of projection Intuitively: How do we develop a consistent mathematical framework for projection calculations?

More information

Machine vision. Summary # 11: Stereo vision and epipolar geometry. u l = λx. v l = λy

Machine vision. Summary # 11: Stereo vision and epipolar geometry. u l = λx. v l = λy 1 Machine vision Summary # 11: Stereo vision and epipolar geometry STEREO VISION The goal of stereo vision is to use two cameras to capture 3D scenes. There are two important problems in stereo vision:

More information

Recap: Features and filters. Recap: Grouping & fitting. Now: Multiple views 10/29/2008. Epipolar geometry & stereo vision. Why multiple views?

Recap: Features and filters. Recap: Grouping & fitting. Now: Multiple views 10/29/2008. Epipolar geometry & stereo vision. Why multiple views? Recap: Features and filters Epipolar geometry & stereo vision Tuesday, Oct 21 Kristen Grauman UT-Austin Transforming and describing images; textures, colors, edges Recap: Grouping & fitting Now: Multiple

More information

Jump Stitch Metadata & Depth Maps Version 1.1

Jump Stitch Metadata & Depth Maps Version 1.1 Jump Stitch Metadata & Depth Maps Version 1.1 jump-help@google.com Contents 1. Introduction 1 2. Stitch Metadata File Format 2 3. Coverage Near the Poles 4 4. Coordinate Systems 6 5. Camera Model 6 6.

More information

Computer Vision Lecture 17

Computer Vision Lecture 17 Computer Vision Lecture 17 Epipolar Geometry & Stereo Basics 13.01.2015 Bastian Leibe RWTH Aachen http://www.vision.rwth-aachen.de leibe@vision.rwth-aachen.de Announcements Seminar in the summer semester

More information

COS429: COMPUTER VISON CAMERAS AND PROJECTIONS (2 lectures)

COS429: COMPUTER VISON CAMERAS AND PROJECTIONS (2 lectures) COS429: COMPUTER VISON CMERS ND PROJECTIONS (2 lectures) Pinhole cameras Camera with lenses Sensing nalytical Euclidean geometry The intrinsic parameters of a camera The extrinsic parameters of a camera

More information

To graph the point (r, θ), simply go out r units along the initial ray, then rotate through the angle θ. The point (1, 5π 6. ) is graphed below:

To graph the point (r, θ), simply go out r units along the initial ray, then rotate through the angle θ. The point (1, 5π 6. ) is graphed below: Polar Coordinates Any point in the plane can be described by the Cartesian coordinates (x, y), where x and y are measured along the corresponding axes. However, this is not the only way to represent points

More information

Super-resolution on Text Image Sequences

Super-resolution on Text Image Sequences November 4, 2004 Outline Outline Geometric Distortion Optical/Motion Blurring Down-Sampling Total Variation Basic Idea Outline Geometric Distortion Optical/Motion Blurring Down-Sampling No optical/image

More information

Radial Distortion Self-Calibration

Radial Distortion Self-Calibration Radial Distortion Self-Calibration José Henrique Brito IPCA, Barcelos, Portugal Universidade do Minho, Portugal josehbrito@gmail.com Roland Angst Stanford University ETH Zürich, Switzerland rangst@stanford.edu

More information

Fisheye Camera s Intrinsic Parameter Estimation Using Trajectories of Feature Points Obtained from Camera Rotation

Fisheye Camera s Intrinsic Parameter Estimation Using Trajectories of Feature Points Obtained from Camera Rotation Fisheye Camera s Intrinsic Parameter Estimation Using Trajectories of Feature Points Obtained from Camera Rotation Akihiko Hishigi, Yuki Tanaka, Gakuto Masuyama, and Kazunori Umeda Abstract This paper

More information

Practical Camera Auto-Calibration Based on Object Appearance and Motion for Traffic Scene Visual Surveillance

Practical Camera Auto-Calibration Based on Object Appearance and Motion for Traffic Scene Visual Surveillance Practical Camera Auto-Calibration Based on Object Appearance and Motion for Traffic Scene Visual Surveillance Zhaoxiang Zhang, Min Li, Kaiqi Huang and Tieniu Tan National Laboratory of Pattern Recognition,

More information

Distortion Correction for Conical Multiplex Holography Using Direct Object-Image Relationship

Distortion Correction for Conical Multiplex Holography Using Direct Object-Image Relationship Proc. Natl. Sci. Counc. ROC(A) Vol. 25, No. 5, 2001. pp. 300-308 Distortion Correction for Conical Multiplex Holography Using Direct Object-Image Relationship YIH-SHYANG CHENG, RAY-CHENG CHANG, AND SHIH-YU

More information

Agenda. Rotations. Camera models. Camera calibration. Homographies

Agenda. Rotations. Camera models. Camera calibration. Homographies Agenda Rotations Camera models Camera calibration Homographies D Rotations R Y = Z r r r r r r r r r Y Z Think of as change of basis where ri = r(i,:) are orthonormal basis vectors r rotated coordinate

More information

3-D TERRAIN RECONSTRUCTION WITH AERIAL PHOTOGRAPHY

3-D TERRAIN RECONSTRUCTION WITH AERIAL PHOTOGRAPHY 3-D TERRAIN RECONSTRUCTION WITH AERIAL PHOTOGRAPHY Bin-Yih Juang ( 莊斌鎰 ) 1, and Chiou-Shann Fuh ( 傅楸善 ) 3 1 Ph. D candidate o Dept. o Mechanical Engineering National Taiwan University, Taipei, Taiwan Instructor

More information

Precise Omnidirectional Camera Calibration

Precise Omnidirectional Camera Calibration Precise Omnidirectional Camera Calibration Dennis Strelow, Jeffrey Mishler, David Koes, and Sanjiv Singh Carnegie Mellon University {dstrelow, jmishler, dkoes, ssingh}@cs.cmu.edu Abstract Recent omnidirectional

More information

Camera Calibration with a Simulated Three Dimensional Calibration Object

Camera Calibration with a Simulated Three Dimensional Calibration Object Czech Pattern Recognition Workshop, Tomáš Svoboda (Ed.) Peršlák, Czech Republic, February 4, Czech Pattern Recognition Society Camera Calibration with a Simulated Three Dimensional Calibration Object Hynek

More information

Self-calibration of a pair of stereo cameras in general position

Self-calibration of a pair of stereo cameras in general position Self-calibration of a pair of stereo cameras in general position Raúl Rojas Institut für Informatik Freie Universität Berlin Takustr. 9, 14195 Berlin, Germany Abstract. This paper shows that it is possible

More information

Detecting vanishing points by segment clustering on the projective plane for single-view photogrammetry

Detecting vanishing points by segment clustering on the projective plane for single-view photogrammetry Detecting vanishing points by segment clustering on the projective plane for single-view photogrammetry Fernanda A. Andaló 1, Gabriel Taubin 2, Siome Goldenstein 1 1 Institute of Computing, University

More information