Pose Estimation from Circle or Parallel Lines in a Single Image


Guanghui Wang 1,2, Q.M. Jonathan Wu 1, and Zhengqiao Ji 1

1 Department of Electrical and Computer Engineering, The University of Windsor, 401 Sunset, Windsor, Ontario, Canada N9B 3P4
2 National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing, 100080, P.R. China
ghwangca@gmail.com, jwu@uwindsor.ca

Abstract. The paper focuses on the problem of pose estimation from a single view under the minimal conditions that can be obtained from images. Under the assumption of known intrinsic parameters, we propose and prove that the pose of the camera can be recovered uniquely in three situations: (a) the image of one circle with a discriminable center; (b) the image of one circle with a preassigned world frame; (c) the image of any two pairs of parallel lines. Compared with previous techniques, the proposed method does not need any 3D measurement of the circle or lines, so the required conditions are easily satisfied in many scenarios. Extensive experiments are carried out to validate the proposed method.

1 Introduction

Determining the position and orientation of a camera from a single image with respect to a reference frame is a basic and important problem in robot vision. There are many potential applications, such as visual navigation, robot localization, object recognition, photogrammetry and visual surveillance. Over the past two decades the problem has been widely studied and many approaches have been proposed. One well-known pose estimation problem is the perspective-n-point (PnP) problem, first proposed by Fischler and Bolles [5]: find the pose of an object from the image of n points at known locations on it. Following this idea, the problem was further studied by many researchers [6,8,9,15,14]. One major concern with the PnP problem is the multi-solution phenomenon: all PnP problems for n ≤ 5 can have multiple solutions.
Thus we need further information to determine the correct solution [6]. Another kind of localization algorithm is based on line correspondences. Dhome et al. [4] proposed to compute the attitude of an object from three line correspondences. Liu et al. [12] discussed methods to recover the camera pose linearly or nonlinearly using different combinations of line and point features. Ansar and Daniilidis [1] presented a general framework that allows for a novel set of linear solutions to the pose estimation problem for both n points and n lines. Chen [2] proposed a polynomial approach to find closed-form solutions for

Y. Yagi et al. (Eds.): ACCV 2007, Part II, LNCS 4844, pp. 363–372, 2007. © Springer-Verlag Berlin Heidelberg 2007

pose determination from line-to-plane correspondences. The line-based methods also suffer from the problem of multiple solutions.

The above methods assume that the camera is calibrated and that the positions of the points and lines are known. In practice, it may be hard to obtain accurate measurements of these features in space. However, geometrical constraints such as coplanarity, parallelism and orthogonality are abundant in many indoor and outdoor structured scenarios. Some researchers proposed to recover the camera pose from the image of a rectangle, two orthogonal pairs of parallel lines, or other scene constraints [7,18]. The circle is another very common pattern in man-made objects and scenes, and many studies on camera calibration are based on the images of circles [10,11,13]. In this paper, we compute the camera's pose from a single image based on geometrical configurations in the scene. Unlike previous methods, we propose to use the image of only one circle, or the image of any two pairs of parallel lines that need not be coplanar or orthogonal. The proposed method is widely applicable, since these conditions are easily satisfied in many scenarios.

2 Perspective Geometry and Pose Estimation

2.1 Camera Projection and Pose Estimation

Under perspective projection, a 3D point x ∈ R^3 is projected to an image point m ∈ R^2 via a rank-3 projection matrix P ∈ R^{3×4} as

s \tilde{m} = P \tilde{x} = K [R, t] \tilde{x} = K [r_1, r_2, r_3, t] \tilde{x}   (1)

where \tilde{x} = [x^T, w]^T and \tilde{m} = [m^T, w]^T are the homogeneous forms of the points x and m respectively, R and t are the rotation matrix and translation vector from the world system to the camera system, s is a non-zero scalar, and K is the camera calibration matrix. In this paper, we assume the camera is calibrated, so we may set K = I_3 = diag(1, 1, 1), which is equivalent to normalizing the image coordinates by applying the transformation K^{-1}.
In this case, the projection matrix simplifies to P = [R, t] = [r_1, r_2, r_3, t]. When all space points are coplanar, the mapping between the space points and their images can be modeled by a plane homography H, a nonsingular 3×3 homogeneous matrix. Without loss of generality, we may take the coordinates of the space plane as [0, 0, 1, 0]^T for a specified world frame (the plane Z = 0); then we have H = [r_1, r_2, t]. Obviously, the rotation matrix R and translation vector t can be factorized directly from the homography.

Proposition 1. When the camera is calibrated, the pose of the camera can be recovered from two orthogonal vanishing points in a single view.

Proof. Without loss of generality, let us set the X and Y axes of the world system in line with the two orthogonal directions. In the normalized world coordinate system, the directions of the X and Y axes are \tilde{x}_w = [1, 0, 0, 0]^T and \tilde{y}_w = [0, 1, 0, 0]^T

respectively, and the homogeneous vector of the world origin is \tilde{o}_w = [0, 0, 0, 1]^T. Under perspective projection, we have:

s_x \tilde{v}_x = P \tilde{x}_w = [r_1, r_2, r_3, t] [1, 0, 0, 0]^T = r_1   (2)
s_y \tilde{v}_y = P \tilde{y}_w = [r_1, r_2, r_3, t] [0, 1, 0, 0]^T = r_2   (3)
s_o \tilde{v}_o = P \tilde{o}_w = [r_1, r_2, r_3, t] [0, 0, 0, 1]^T = t   (4)

Thus the rotation matrix can be computed from

r_1 = ± \tilde{v}_x / ||\tilde{v}_x||,   r_2 = ± \tilde{v}_y / ||\tilde{v}_y||,   r_3 = r_1 × r_2   (5)

where the rotation matrix R = [r_1, r_2, r_3] may have four solutions if a right-handed coordinate system is adopted. Only two of them ensure that the reconstructed objects lie in front of the camera, so that they may be seen by the camera. In practice, if the world coordinate frame is preassigned, the rotation matrix may be uniquely determined [19]. Since we have no metric information of the given scene, the translation vector can only be defined up to scale as t ∼ \tilde{v}_o; that is, we can only recover the direction of the translation vector.

In practice, the orthonormal constraint should be enforced during the computation, since r_1 and r_2 in (5) may not be orthogonal due to image noise. Suppose the SVD decomposition of R_{12} = [r_1, r_2] is U Σ V^T, where Σ is a 3×2 matrix made of the two singular values of R_{12}. Since a rotation matrix should have unit singular values, the best approximation to the first two columns of the rotation matrix in the least-squares sense is \hat{R}_{12} = U [1, 0; 0, 1; 0, 0] V^T.

2.2 The Circular Points and Pose Estimation

The absolute conic (AC) is a conic on the ideal plane, which can be expressed in matrix form as Ω_∞ = diag(1, 1, 1). Obviously, Ω_∞ is composed of purely imaginary points on the infinite plane. Under perspective projection, the image of the absolute conic (IAC) is ω_a = (K K^T)^{-1}, which depends only on the camera calibration matrix K. The IAC is an invisible imaginary point conic in an image.
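As a concrete illustration of Proposition 1 and the SVD-based orthonormalization above, the following sketch (NumPy, in normalized coordinates K = I; the function and variable names are ours, not from the paper) recovers R and the translation direction from the two vanishing points and the imaged world origin:

```python
import numpy as np

def pose_from_vanishing_points(v_x, v_y, v_o):
    """Sketch of Proposition 1: camera pose from the imaged vanishing
    points of the world X and Y axes and the imaged world origin, in
    normalized coordinates (K = I). The '+' signs of eq. (5) are chosen;
    in practice the sign is fixed by requiring the scene to lie in front
    of the camera."""
    r1 = v_x / np.linalg.norm(v_x)
    r2 = v_y / np.linalg.norm(v_y)
    # Noise makes [r1, r2] only approximately orthonormal; replacing its
    # singular values by ones gives the best rotation columns in the
    # least-squares sense, as in the text.
    U, _, Vt = np.linalg.svd(np.column_stack([r1, r2]), full_matrices=False)
    R12 = U @ Vt
    r1, r2 = R12[:, 0], R12[:, 1]
    R = np.column_stack([r1, r2, np.cross(r1, r2)])
    t_dir = v_o / np.linalg.norm(v_o)   # translation known only up to scale
    return R, t_dir
```

The ± ambiguity of eq. (5) is not resolved inside the function; the caller disambiguates using cheirality, as discussed above.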
It is easy to verify that the absolute conic intersects the ideal line at two ideal complex conjugate points, which are called the circular points. The circular points can be expressed in canonical form as I = [1, i, 0, 0]^T, J = [1, −i, 0, 0]^T. Under perspective projection, their images can be expressed as:

s_i m_i = P I = [r_1, r_2, r_3, t] [1, i, 0, 0]^T = r_1 + i r_2   (6)
s_j m_j = P J = [r_1, r_2, r_3, t] [1, −i, 0, 0]^T = r_1 − i r_2   (7)

Thus the imaged circular points (ICPs) are a pair of complex conjugate points whose real and imaginary parts are given by the first two columns of the rotation matrix. However, the rotation matrix cannot be determined uniquely from the ICPs, since (6) and (7) are defined only up to scale.
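A quick numerical check of (6)–(7) (a NumPy sketch with K = I and an arbitrary rotation; the variable names are ours): each ICP is complex, yet the line joining the conjugate pair is real up to scale, and it is exactly the vanishing line r_1 × r_2 of the supporting plane.

```python
import numpy as np

# An arbitrary rotation (axis-angle via Rodrigues' formula).
axis = np.array([0.2, -0.5, 0.84])
axis = axis / np.linalg.norm(axis)
Kx = np.array([[0, -axis[2], axis[1]],
               [axis[2], 0, -axis[0]],
               [-axis[1], axis[0], 0]])
R = np.eye(3) + np.sin(0.7) * Kx + (1 - np.cos(0.7)) * (Kx @ Kx)

m_i = R[:, 0] + 1j * R[:, 1]      # image of I = [1, i, 0, 0]^T, eq. (6)
m_j = m_i.conj()                  # image of J = [1, -i, 0, 0]^T, eq. (7)

l = np.cross(m_i, m_j)            # line through the two ICPs
# cross(r1 + i r2, r1 - i r2) = -2i (r1 x r2): a purely imaginary multiple
# of a real vector, so the line is real once the overall scale is dropped.
l_real = (1j * l).real / 2.0
```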

Proposition 2. Suppose m_i and m_j are the ICPs of a space plane, and the world system is set on the plane. Then the pose of the camera can be uniquely determined from m_i and m_j if one direction of the world frame is preassigned.

Proof. It is easy to verify that the line passing through the two imaged circular points is real; it is the vanishing line of the plane and can be computed from l_∞ = m_i × m_j. Suppose ox is the image of one axis of the preassigned world frame; its vanishing point v_x is the intersection of the line ox with l_∞. If the vanishing point v_y of the Y direction is recovered, the camera pose can be determined accordingly from Proposition 1. Since the vanishing points of two orthogonal directions are conjugate with respect to the IAC, v_y can be easily computed from the system { v_x^T ω v_y = 0, l_∞^T v_y = 0 }. On the other hand, since two orthogonal vanishing points are harmonic with respect to the ICPs, their cross ratio satisfies Cross(v_x, v_y; m_i, m_j) = −1; thus v_y can also be computed from the cross ratio.

3 Methods for Pose Estimation

3.1 Pose Estimation from the Image of a Circle

Lemma 1. Any circle Ω_c in a space plane π intersects the absolute conic Ω_∞ at exactly two points, which are the circular points of the plane.

Without loss of generality, let us set the XOY world frame on the supporting plane. Then any circle on the plane, with center (x_0, y_0) and radius r, can be modelled in homogeneous form as (x − w x_0)^2 + (y − w y_0)^2 − w^2 r^2 = 0. The plane π intersects the ideal plane π_∞ at the vanishing line L_∞. In the extended plane of the complex domain, L_∞ has at most two intersections with Ω_c. It is easy to verify that the circular points are these intersections.

Lemma 2. The image of the circle Ω_c intersects the IAC at four complex points, which can be divided into two pairs of complex conjugate points.

Under perspective projection, any circle Ω_c on a space plane is imaged as the conic ω_c = H^{-T} Ω_c H^{-1}, which is an ellipse in the nondegenerate case.
The absolute conic is projected to the IAC. Both the IAC and ω_c are conics of second order that can be written in homogeneous form as x^T ω x = 0. According to Bézout's theorem, the two conics have four imaginary intersection points, since the absolute conic and the circle have no real intersections in space. Suppose the complex point a + bi is one intersection; it is easy to verify that the conjugate point a − bi is also a solution. Thus the four intersections can be divided into two complex conjugate pairs. It is obvious that one pair of them is the ICPs, but the ambiguity cannot be resolved in an image with only one circle. If there are two or more circles on the same or parallel space planes, the ICPs can be uniquely determined, since the imaged circular points are the common intersections of each circle's image with the IAC. However, in many situations we have only one circle; how can the ICPs be determined in this case?

Proposition 3. The imaged circular points can be uniquely determined from the image of one circle if the center of the circle can be detected in the image.

Proof. As shown in Fig. 1, the image ω_c of the circle intersects the IAC at two pairs of complex conjugate points m_i, m_j and m_i', m_j'. Let us define two lines as

l = m_i × m_j,   l' = m_i' × m_j'   (8)

Then one of the lines must be the vanishing line, and its two supporting points must be the ICPs. Suppose o_c is the image of the circle center and l is the vanishing line; then there is a pole-polar relationship between the imaged center o_c and the vanishing line with respect to the conic:

λ l = ω_c o_c   (9)

where λ is a scalar. Thus the true vanishing line and imaged circular points can be determined from (9).

Under perspective projection, a circle is transformed into a conic. However, the center of the circle in space does not in general project to the center of the corresponding conic in the image, since the perspective projection (1) is not a linear mapping from the space to the image. Thus the imaged center of the circle cannot be determined from the contour of the imaged conic alone. There are several possible ways to recover the projected center of the circle by virtue of more geometrical information, such as two or more lines passing through the center [13] or two concentric circles [10,11].

Fig. 1. Determining the ICPs from the image of one circle. (a) a circle and preassigned world frame in space; (b) the imaged conic of the circle.

Proposition 4. The imaged circular points can be recovered from the image of one circle with a preassigned world coordinate system.

Proof. As shown in Fig. 1, suppose lines x and y are the images of the two axes of the preassigned world frame; the two lines intersect l and l' at four points.
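The pole-polar construction (9), followed by extraction of the ICPs as the complex intersections of the recovered vanishing line with the imaged conic, can be sketched as follows (NumPy; the names are illustrative, and the quadratic line/conic intersection is a standard device rather than code from the paper):

```python
import numpy as np

def vanishing_line_from_center(C, o_c):
    """Eq. (9): the polar of the imaged circle center o_c with respect to
    the imaged conic C is the vanishing line of the supporting plane."""
    l = C @ o_c
    return l / np.linalg.norm(l)

def line_conic_intersections(l, C):
    """Intersect line l with conic C by parametrizing l as p + mu*q and
    solving a quadratic in mu; for the vanishing line and an imaged circle
    the two roots are complex conjugates (the ICPs)."""
    p = np.cross(l, [1.0, 0.0, 0.0])   # two distinct points spanning l
    q = np.cross(l, [0.0, 1.0, 0.0])   # (degenerate only for special l)
    a = q @ C @ q
    b = 2.0 * (p @ C @ q)
    c = p @ C @ p
    mus = np.roots([a, b, c])
    return [p + mu * q for mu in mus]
```

The sketch assumes the conic matrix C and the imaged center o_c are already estimated from the image, e.g. by conic fitting and one of the center-recovery methods cited above.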
Since the two ICPs and the two orthogonal vanishing points form a harmonic relation, the true ICPs can be determined by verifying the cross ratio of the

two quadruples of collinear points {m_i, m_j, v_x, v_y} and {m_i', m_j', v_x', v_y'}. Then the camera pose can be computed according to Proposition 2.

3.2 Pose Estimation from Two Pairs of Parallel Lines

Proposition 5. The pose of the camera can be recovered from the image of any two general pairs of parallel lines in space.

Proof. As shown in Fig. 2, suppose L_11, L_12 and L_21, L_22 are two pairs of parallel lines in space; they need not be coplanar or orthogonal. Their images l_11, l_12 and l_21, l_22 intersect at v_1 and v_2 respectively; then v_1 and v_2 must be the vanishing points of the two directions, and the line connecting the two points must be the vanishing line l_∞. Thus m_i and m_j can be computed as the intersections of l_∞ with the IAC. Suppose v_1 is one direction of the world frame and o_2 is the image of the world origin. Then the vanishing point v_1^⊥ of the direction orthogonal to v_1 can be easily computed from the system { v_1^T ω v_1^⊥ = 0, l_∞^T v_1^⊥ = 0 }, or from Cross(v_1, v_1^⊥; m_i, m_j) = −1, and the pose of the camera can be recovered from Proposition 1. Specifically, the angle α between the two pairs of parallel lines in space can be recovered from

cos α = (v_1^T ω_a v_2) / ( \sqrt{v_1^T ω_a v_1} \sqrt{v_2^T ω_a v_2} )

If the two pairs of lines are orthogonal to each other, then v_1^⊥ = v_2.

Fig. 2. Pose estimation from two pairs of parallel lines. Left: two pairs of parallel lines in the space; Right: the image of the parallel lines.

3.3 Projection Matrix and 3D Reconstruction

After retrieving the pose of the camera, the projection matrix with respect to the world frame can be computed from (1). With the projection matrix, any geometric primitive in the image can be back-projected into the space. For example, a point in the image is back-projected to a line, a line is back-projected to a plane, and a conic is back-projected to a cone.
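For the point case, a minimal sketch (NumPy, K = I; the center-plus-direction decomposition is the standard back-projection construction, with our own naming):

```python
import numpy as np

def backproject_point(P, m):
    """Back-project a homogeneous image point m to its viewing ray
    X(mu) = C + mu * d, for a finite camera P = [M | p4] (here M = R)."""
    M, p4 = P[:, :3], P[:, 3]
    C = -np.linalg.solve(M, p4)   # camera center: M C + p4 = 0
    d = np.linalg.solve(M, m)     # direction of the ray through m
    return C, d
```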
Based on the scene constraints, many geometrical entities, such as length ratios, angles, and the 3D information of some planar surfaces, can be recovered via the techniques of single view metrology [3,17,18]. Therefore the 3D structure of some simple objects and scenes can be reconstructed from a single image alone.
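Pulling Section 3.2 together, the following sketch (NumPy, normalized coordinates so the IAC ω_a = I; the function name and the synthetic test geometry are ours) recovers the two vanishing points from the imaged line pairs and the angle between the space directions:

```python
import numpy as np

def angle_from_parallel_pairs(l11, l12, l21, l22, omega=np.eye(3)):
    """Proposition 5 sketch: v1, v2 are the vanishing points of the two
    pairs (intersections of the imaged lines); the angle alpha between
    the space directions follows the cos(alpha) formula in the text.
    The homogeneous sign ambiguity leaves alpha vs. pi - alpha undecided."""
    v1 = np.cross(l11, l12)
    v2 = np.cross(l21, l22)
    num = v1 @ omega @ v2
    den = np.sqrt(v1 @ omega @ v1) * np.sqrt(v2 @ omega @ v2)
    alpha = np.arccos(np.clip(num / den, -1.0, 1.0))
    return alpha, v1, v2
```

With v_1 and v_2 in hand, the pose itself follows from Proposition 1 (or, for non-orthogonal pairs, after computing the conjugate direction v_1^⊥).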

4 Experiments with Simulated Data

During the simulations, we generated a circle and two orthogonal pairs of parallel lines in space, whose sizes and positions in the world system are shown in Fig. 3. Each line is composed of 50 evenly distributed points, and the circle of 100 evenly distributed points. The camera parameters were set as follows: focal length f_u = f_v = 1800, skew s = 0, principal point u_0 = v_0 = 0, rotation axis r = [0.717, 0.359, 0.598], rotation angle α = 0.84, translation vector t = [2, 2, 1]. The image resolution was set to 600 × 600, and Gaussian image noise was added to each imaged point. The generated image with 1-pixel Gaussian noise is shown in Fig. 3.

Fig. 3. The synthetic scenario and image for simulation

In the experiments, the image lines and the imaged conic were fitted via least squares. We set L_11 and L_21 as the X and Y axes of the world frame, and recovered the ICPs and camera pose according to the proposed methods. Here we only give the results for the recovered rotation matrix. For the convenience of comparison, we decomposed the rotation matrix into a rotation axis and a rotation angle; we define the error of the axis as the angle between the recovered axis and the ground truth, and the error of the rotation angle as the absolute difference between the recovered angle and the ground truth. We varied the noise level from 0 to 3 pixels with a step of 0.5 during the test, and took 200 independent tests at each noise level so as to obtain more statistically meaningful results.

Fig. 4. The mean and standard deviation of the errors of the rotation axis and rotation angle with respect to the noise levels

The mean and

standard deviation of the errors of the two methods are shown in Fig. 4. It is clear that the accuracy of the two methods is comparable at small noise levels (< 1.5 pixels), while the vanishing-point based method (Alg.2) is superior to the circle-based one (Alg.1) at large noise levels.

5 Tests with Real Images

All images in the tests were captured by a Canon PowerShot G3. The camera was pre-calibrated via Zhang's method [20].

Test on the tea box image: For this test, the selected world frame, the two pairs of parallel lines, and the two conics detected by the Hough transform are shown in Fig. 5. The line segments were detected and fitted via the orthogonal regression algorithm [16]. We recovered the rotation axis, rotation angle (unit: rad) and translation vector by the two methods, as shown in Table 1, where the translation vector is normalized to ||t|| = 1. The results are consistent with the imaging conditions, though we do not have the ground truth.

Fig. 5. Test results of the tea box image. Upper: the image with the detected conics, parallel lines and world frame used for pose estimation; Lower: the reconstructed tea box model at different viewpoints with texture mapping.

In order to further evaluate the recovered parameters, we reconstructed the 3D structure of the scene from the recovered projection matrix via the method in [17]. The result is shown from different viewpoints in Fig. 5. We manually took the measurements of the tea box and the grid in the background and registered the reconstruction to the ground truth. We then computed the relative error E1 of the side length of the grid, and the relative errors E2 and E3 of the diameter and height of the circle. As listed in Table 1, the reconstruction error is very small, which in turn verifies the accuracy of the recovered parameters.
Test on the book image: The image, with the detected conic, the preassigned world frame and the two pairs of parallel lines, is shown in Fig. 6. We recovered the

Table 1. Test results and performance evaluations for real images

Image  Method  Raxis                        Rangle  t                      E1 (%)  E2 (%)  E3 (%)
Box    Alg.1   [-0.9746, 0.1867, -0.1238]           [-0.08, 0.13, 0.98]
Box    Alg.2   [-0.9748, 0.1864, -0.1228]           [-0.08, 0.13, 0.98]
Book   Alg.1   [-0.9173, 0.3452, -0.1984]           [-0.02, 0.09, 0.99]
Book   Alg.2   [-0.9188, 0.3460, -0.1899]           [-0.02, 0.09, 0.99]

pose of the camera by the proposed methods, then computed the relative errors E1, E2 and E3 of the three side lengths of the book with respect to ground truth taken manually. The results are shown in Table 1. The reconstructed 3D structure of the book is shown in Fig. 6. The results are realistic, with good accuracy.

Fig. 6. Pose estimation and 3D reconstruction of the book image

6 Conclusion

In this paper, we proposed and proved the possibility of recovering the pose of the camera from a single image of one circle or of two general pairs of parallel lines. Compared with previous techniques, fewer conditions are required by the proposed method, so the results in the paper may find wide application. Since the method utilizes minimal information in the computation, it is important to adopt robust techniques to fit the conics and lines.

Acknowledgment. The work is supported in part by the Canada Research Chair program and the National Natural Science Foundation of China.

References

1. Ansar, A., Daniilidis, K.: Linear pose estimation from points or lines. IEEE Trans. Pattern Anal. Mach. Intell. 25(5) (2003)
2. Chen, H.H.: Pose determination from line-to-plane correspondences: Existence condition and closed-form solutions. IEEE Trans. Pattern Anal. Mach. Intell. 13(6) (1991)

3. Criminisi, A., Reid, I., Zisserman, A.: Single view metrology. International Journal of Computer Vision 40(2) (2000)
4. Dhome, M., Richetin, M., Lapreste, J.T.: Determination of the attitude of 3D objects from a single perspective view. IEEE Trans. Pattern Anal. Mach. Intell. 11(12) (1989)
5. Fischler, M.A., Bolles, R.C.: Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Communications of the ACM 24(6) (1981)
6. Gao, X.S., Tang, J.: On the probability of the number of solutions for the P4P problem. J. Math. Imaging Vis. 25(1) (2006)
7. Hartley, R.I., Zisserman, A.: Multiple View Geometry in Computer Vision, 2nd edn. Cambridge University Press, Cambridge (2004)
8. Horaud, R., Conio, B., Leboulleux, O., Lacolle, B.: An analytic solution for the perspective 4-point problem. CVGIP 47(1) (1989)
9. Hu, Z.Y., Wu, F.C.: A note on the number of solutions of the noncoplanar P4P problem. IEEE Trans. Pattern Anal. Mach. Intell. 24(4) (2002)
10. Jiang, G., Quan, L.: Detection of concentric circles for camera calibration. In: Proc. of ICCV (2005)
11. Kim, J.S., Gurdjos, P., Kweon, I.S.: Geometric and algebraic constraints of projected concentric circles and their applications to camera calibration. IEEE Trans. Pattern Anal. Mach. Intell. 27(4) (2005)
12. Liu, Y., Huang, T.S., Faugeras, O.D.: Determination of camera location from 2-D to 3-D line and point correspondences. IEEE Trans. Pattern Anal. Mach. Intell. 12(1) (1990)
13. Meng, X., Li, H., Hu, Z.: A new easy camera calibration technique based on circular points. In: Proc. of BMVC (2000)
14. Nistér, D., Stewénius, H.: A minimal solution to the generalised 3-point pose problem. J. Math. Imaging Vis. 27(1) (2007)
15. Quan, L., Lan, Z.: Linear n-point camera pose determination. IEEE Trans. Pattern Anal. Mach. Intell. 21(8) (1999)
16. Schmid, C., Zisserman, A.: Automatic line matching across views. In: Proc. of CVPR (1997)
17. Wang, G.H., Hu, Z.Y., Wu, F.C., Tsui, H.T.: Single view metrology from scene constraints. Image Vision Comput. 23(9) (2005)
18. Wang, G.H., Tsui, H.T., Hu, Z.Y., Wu, F.C.: Camera calibration and 3D reconstruction from a single view based on scene constraints. Image Vision Comput. 23(3) (2005)
19. Wang, G.H., Wang, S., Gao, X., Li, Y.: Three dimensional reconstruction of structured scenes based on vanishing points. In: Proc. of PCM (2006)
20. Zhang, Z.: A flexible new technique for camera calibration. IEEE Trans. Pattern Anal. Mach. Intell. 22(11) (2000)


More information

Refining Single View Calibration With the Aid of Metric Scene Properties

Refining Single View Calibration With the Aid of Metric Scene Properties Goack Refining Single View Calibration With the Aid of Metric Scene roperties Manolis I.A. Lourakis lourakis@ics.forth.gr Antonis A. Argyros argyros@ics.forth.gr Institute of Computer Science Foundation

More information

55:148 Digital Image Processing Chapter 11 3D Vision, Geometry

55:148 Digital Image Processing Chapter 11 3D Vision, Geometry 55:148 Digital Image Processing Chapter 11 3D Vision, Geometry Topics: Basics of projective geometry Points and hyperplanes in projective space Homography Estimating homography from point correspondence

More information

A simple method for interactive 3D reconstruction and camera calibration from a single view

A simple method for interactive 3D reconstruction and camera calibration from a single view A simple method for interactive 3D reconstruction and camera calibration from a single view Akash M Kushal Vikas Bansal Subhashis Banerjee Department of Computer Science and Engineering Indian Institute

More information

University of Southern California, 1590 the Alameda #200 Los Angeles, CA San Jose, CA Abstract

University of Southern California, 1590 the Alameda #200 Los Angeles, CA San Jose, CA Abstract Mirror Symmetry 2-View Stereo Geometry Alexandre R.J. François +, Gérard G. Medioni + and Roman Waupotitsch * + Institute for Robotics and Intelligent Systems * Geometrix Inc. University of Southern California,

More information

A Stratified Approach for Camera Calibration Using Spheres

A Stratified Approach for Camera Calibration Using Spheres IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. XX, NO. Y, MONTH YEAR 1 A Stratified Approach for Camera Calibration Using Spheres Kwan-Yee K. Wong, Member, IEEE, Guoqiang Zhang, Student-Member, IEEE and Zhihu

More information

Projective geometry for Computer Vision

Projective geometry for Computer Vision Department of Computer Science and Engineering IIT Delhi NIT, Rourkela March 27, 2010 Overview Pin-hole camera Why projective geometry? Reconstruction Computer vision geometry: main problems Correspondence

More information

Feature Transfer and Matching in Disparate Stereo Views through the use of Plane Homographies

Feature Transfer and Matching in Disparate Stereo Views through the use of Plane Homographies Feature Transfer and Matching in Disparate Stereo Views through the use of Plane Homographies M. Lourakis, S. Tzurbakis, A. Argyros, S. Orphanoudakis Computer Vision and Robotics Lab (CVRL) Institute of

More information

Determining pose of a human face from a single monocular image

Determining pose of a human face from a single monocular image Determining pose of a human face from a single monocular image Jian-Gang Wang 1, Eric Sung 2, Ronda Venkateswarlu 1 1 Institute for Infocomm Research 21 Heng Mui Keng Terrace, Singapore 119613 2 Nanyang

More information

A Factorization Method for Structure from Planar Motion

A Factorization Method for Structure from Planar Motion A Factorization Method for Structure from Planar Motion Jian Li and Rama Chellappa Center for Automation Research (CfAR) and Department of Electrical and Computer Engineering University of Maryland, College

More information

Geometric Interpretations of the Relation between the Image of the Absolute Conic and Sphere Images

Geometric Interpretations of the Relation between the Image of the Absolute Conic and Sphere Images IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, VOL. 8, NO. 1, DECEMBER 006 01 Geometric Interpretations of the Relation between the Image of the Absolute Conic and Sphere Images Xianghua

More information

Critical Motion Sequences for the Self-Calibration of Cameras and Stereo Systems with Variable Focal Length

Critical Motion Sequences for the Self-Calibration of Cameras and Stereo Systems with Variable Focal Length Critical Motion Sequences for the Self-Calibration of Cameras and Stereo Systems with Variable Focal Length Peter F Sturm Computational Vision Group, Department of Computer Science The University of Reading,

More information

1D camera geometry and Its application to circular motion estimation. Creative Commons: Attribution 3.0 Hong Kong License

1D camera geometry and Its application to circular motion estimation. Creative Commons: Attribution 3.0 Hong Kong License Title D camera geometry and Its application to circular motion estimation Author(s Zhang, G; Zhang, H; Wong, KKY Citation The 7th British Machine Vision Conference (BMVC, Edinburgh, U.K., 4-7 September

More information

Circular Motion Geometry by Minimal 2 Points in 4 Images

Circular Motion Geometry by Minimal 2 Points in 4 Images Circular Motion Geometry by Minimal 2 Points in 4 Images Guang JIANG 1,3, Long QUAN 2, and Hung-tat TSUI 1 1 Dept. of Electronic Engineering, The Chinese University of Hong Kong, New Territory, Hong Kong

More information

Camera Geometry II. COS 429 Princeton University

Camera Geometry II. COS 429 Princeton University Camera Geometry II COS 429 Princeton University Outline Projective geometry Vanishing points Application: camera calibration Application: single-view metrology Epipolar geometry Application: stereo correspondence

More information

Invariance of l and the Conic Dual to Circular Points C

Invariance of l and the Conic Dual to Circular Points C Invariance of l and the Conic Dual to Circular Points C [ ] A t l = (0, 0, 1) is preserved under H = v iff H is an affinity: w [ ] l H l H A l l v 0 [ t 0 v! = = w w] 0 0 v = 0 1 1 C = diag(1, 1, 0) is

More information

Visual Recognition: Image Formation

Visual Recognition: Image Formation Visual Recognition: Image Formation Raquel Urtasun TTI Chicago Jan 5, 2012 Raquel Urtasun (TTI-C) Visual Recognition Jan 5, 2012 1 / 61 Today s lecture... Fundamentals of image formation You should know

More information

Automatic Feature Extraction of Pose-measuring System Based on Geometric Invariants

Automatic Feature Extraction of Pose-measuring System Based on Geometric Invariants Automatic Feature Extraction of Pose-measuring System Based on Geometric Invariants Yan Lin 1,2 Bin Kong 2 Fei Zheng 2 1 Center for Biomimetic Sensing and Control Research, Institute of Intelligent Machines,

More information

Pin-hole Modelled Camera Calibration from a Single Image

Pin-hole Modelled Camera Calibration from a Single Image Pin-hole Modelled Camera Calibration from a Single Image Zhuo Wang University of Windsor wang112k@uwindsor.ca August 10, 2009 Camera calibration from a single image is of importance in computer vision.

More information

How to Compute the Pose of an Object without a Direct View?

How to Compute the Pose of an Object without a Direct View? How to Compute the Pose of an Object without a Direct View? Peter Sturm and Thomas Bonfort INRIA Rhône-Alpes, 38330 Montbonnot St Martin, France {Peter.Sturm, Thomas.Bonfort}@inrialpes.fr Abstract. We

More information

A Robust Two Feature Points Based Depth Estimation Method 1)

A Robust Two Feature Points Based Depth Estimation Method 1) Vol.31, No.5 ACTA AUTOMATICA SINICA September, 2005 A Robust Two Feature Points Based Depth Estimation Method 1) ZHONG Zhi-Guang YI Jian-Qiang ZHAO Dong-Bin (Laboratory of Complex Systems and Intelligence

More information

Camera Self-calibration with Parallel Screw Axis Motion by Intersecting Imaged Horopters

Camera Self-calibration with Parallel Screw Axis Motion by Intersecting Imaged Horopters Camera Self-calibration with Parallel Screw Axis Motion by Intersecting Imaged Horopters Ferran Espuny 1, Joan Aranda 2,andJosé I. Burgos Gil 3 1 Dépt. Images et Signal, GIPSA-Lab, Grenoble-INP Ferran.Espuny@gipsa-lab.grenoble-inp.fr

More information

3D reconstruction class 11

3D reconstruction class 11 3D reconstruction class 11 Multiple View Geometry Comp 290-089 Marc Pollefeys Multiple View Geometry course schedule (subject to change) Jan. 7, 9 Intro & motivation Projective 2D Geometry Jan. 14, 16

More information

3D Motion from Image Derivatives Using the Least Trimmed Square Regression

3D Motion from Image Derivatives Using the Least Trimmed Square Regression 3D Motion from Image Derivatives Using the Least Trimmed Square Regression Fadi Dornaika and Angel D. Sappa Computer Vision Center Edifici O, Campus UAB 08193 Bellaterra, Barcelona, Spain {dornaika, sappa}@cvc.uab.es

More information

Practical Camera Auto-Calibration Based on Object Appearance and Motion for Traffic Scene Visual Surveillance

Practical Camera Auto-Calibration Based on Object Appearance and Motion for Traffic Scene Visual Surveillance Practical Camera Auto-Calibration Based on Object Appearance and Motion for Traffic Scene Visual Surveillance Zhaoxiang Zhang, Min Li, Kaiqi Huang and Tieniu Tan National Laboratory of Pattern Recognition,

More information

Stereo and Epipolar geometry

Stereo and Epipolar geometry Previously Image Primitives (feature points, lines, contours) Today: Stereo and Epipolar geometry How to match primitives between two (multiple) views) Goals: 3D reconstruction, recognition Jana Kosecka

More information

Using Pedestrians Walking on Uneven Terrains for Camera Calibration

Using Pedestrians Walking on Uneven Terrains for Camera Calibration Machine Vision and Applications manuscript No. (will be inserted by the editor) Using Pedestrians Walking on Uneven Terrains for Camera Calibration Imran N. Junejo Department of Computer Science, University

More information

On Plane-Based Camera Calibration: A General Algorithm, Singularities, Applications

On Plane-Based Camera Calibration: A General Algorithm, Singularities, Applications ACCEPTED FOR CVPR 99. VERSION OF NOVEMBER 18, 2015. On Plane-Based Camera Calibration: A General Algorithm, Singularities, Applications Peter F. Sturm and Stephen J. Maybank Computational Vision Group,

More information

FAST REGISTRATION OF TERRESTRIAL LIDAR POINT CLOUD AND SEQUENCE IMAGES

FAST REGISTRATION OF TERRESTRIAL LIDAR POINT CLOUD AND SEQUENCE IMAGES FAST REGISTRATION OF TERRESTRIAL LIDAR POINT CLOUD AND SEQUENCE IMAGES Jie Shao a, Wuming Zhang a, Yaqiao Zhu b, Aojie Shen a a State Key Laboratory of Remote Sensing Science, Institute of Remote Sensing

More information

COMPARATIVE STUDY OF DIFFERENT APPROACHES FOR EFFICIENT RECTIFICATION UNDER GENERAL MOTION

COMPARATIVE STUDY OF DIFFERENT APPROACHES FOR EFFICIENT RECTIFICATION UNDER GENERAL MOTION COMPARATIVE STUDY OF DIFFERENT APPROACHES FOR EFFICIENT RECTIFICATION UNDER GENERAL MOTION Mr.V.SRINIVASA RAO 1 Prof.A.SATYA KALYAN 2 DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING PRASAD V POTLURI SIDDHARTHA

More information

Structure from Motion. Prof. Marco Marcon

Structure from Motion. Prof. Marco Marcon Structure from Motion Prof. Marco Marcon Summing-up 2 Stereo is the most powerful clue for determining the structure of a scene Another important clue is the relative motion between the scene and (mono)

More information

Camera calibration and 3D reconstruction from a single view based on scene constraints

Camera calibration and 3D reconstruction from a single view based on scene constraints Image and Vision Computing 3 (005) 311 33 www.elsevier.com/locate/imavis Camera calibration and 3D reconstruction from a single view based on scene constraints Guanghui Wang a,b, *, Hung-Tat Tsui a, Zhanyi

More information

Camera Calibration Using Line Correspondences

Camera Calibration Using Line Correspondences Camera Calibration Using Line Correspondences Richard I. Hartley G.E. CRD, Schenectady, NY, 12301. Ph: (518)-387-7333 Fax: (518)-387-6845 Email : hartley@crd.ge.com Abstract In this paper, a method of

More information

Homography Estimation from the Common Self-polar Triangle of Separate Ellipses

Homography Estimation from the Common Self-polar Triangle of Separate Ellipses Homography Estimation from the Common Self-polar Triangle of Separate Ellipses Haifei Huang 1,2, Hui Zhang 2, and Yiu-ming Cheung 1,2 1 Department of Computer Science, Hong Kong Baptist University 2 United

More information

Camera Self-calibration Based on the Vanishing Points*

Camera Self-calibration Based on the Vanishing Points* Camera Self-calibration Based on the Vanishing Points* Dongsheng Chang 1, Kuanquan Wang 2, and Lianqing Wang 1,2 1 School of Computer Science and Technology, Harbin Institute of Technology, Harbin 150001,

More information

PROJECTIVE SPACE AND THE PINHOLE CAMERA

PROJECTIVE SPACE AND THE PINHOLE CAMERA PROJECTIVE SPACE AND THE PINHOLE CAMERA MOUSA REBOUH Abstract. Here we provide (linear algebra) proofs of the ancient theorem of Pappus and the contemporary theorem of Desargues on collineations. We also

More information

CALIBRATION BETWEEN DEPTH AND COLOR SENSORS FOR COMMODITY DEPTH CAMERAS. Cha Zhang and Zhengyou Zhang

CALIBRATION BETWEEN DEPTH AND COLOR SENSORS FOR COMMODITY DEPTH CAMERAS. Cha Zhang and Zhengyou Zhang CALIBRATION BETWEEN DEPTH AND COLOR SENSORS FOR COMMODITY DEPTH CAMERAS Cha Zhang and Zhengyou Zhang Communication and Collaboration Systems Group, Microsoft Research {chazhang, zhang}@microsoft.com ABSTRACT

More information

Multiple Motion Scene Reconstruction from Uncalibrated Views

Multiple Motion Scene Reconstruction from Uncalibrated Views Multiple Motion Scene Reconstruction from Uncalibrated Views Mei Han C & C Research Laboratories NEC USA, Inc. meihan@ccrl.sj.nec.com Takeo Kanade Robotics Institute Carnegie Mellon University tk@cs.cmu.edu

More information

Recovering light directions and camera poses from a single sphere.

Recovering light directions and camera poses from a single sphere. Title Recovering light directions and camera poses from a single sphere Author(s) Wong, KYK; Schnieders, D; Li, S Citation The 10th European Conference on Computer Vision (ECCV 2008), Marseille, France,

More information

Projective geometry, camera models and calibration

Projective geometry, camera models and calibration Projective geometry, camera models and calibration Subhashis Banerjee Dept. Computer Science and Engineering IIT Delhi email: suban@cse.iitd.ac.in January 6, 2008 The main problems in computer vision Image

More information

Circular Motion Geometry Using Minimal Data. Abstract

Circular Motion Geometry Using Minimal Data. Abstract Circular Motion Geometry Using Minimal Data Guang JIANG, Long QUAN, and Hung-tat TSUI Dept. of Electronic Engineering, The Chinese University of Hong Kong Dept. of Computer Science, The Hong Kong University

More information

Synchronized Ego-Motion Recovery of Two Face-to-Face Cameras

Synchronized Ego-Motion Recovery of Two Face-to-Face Cameras Synchronized Ego-Motion Recovery of Two Face-to-Face Cameras Jinshi Cui, Yasushi Yagi, Hongbin Zha, Yasuhiro Mukaigawa, and Kazuaki Kondo State Key Lab on Machine Perception, Peking University, China {cjs,zha}@cis.pku.edu.cn

More information

Plane-based Calibration Algorithm for Multi-camera Systems via Factorization of Homography Matrices

Plane-based Calibration Algorithm for Multi-camera Systems via Factorization of Homography Matrices Plane-based Calibration Algorithm for Multi-camera Systems via Factorization of Homography Matrices Toshio Ueshiba Fumiaki Tomita National Institute of Advanced Industrial Science and Technology (AIST)

More information

1 Projective Geometry

1 Projective Geometry CIS8, Machine Perception Review Problem - SPRING 26 Instructions. All coordinate systems are right handed. Projective Geometry Figure : Facade rectification. I took an image of a rectangular object, and

More information

Perception and Action using Multilinear Forms

Perception and Action using Multilinear Forms Perception and Action using Multilinear Forms Anders Heyden, Gunnar Sparr, Kalle Åström Dept of Mathematics, Lund University Box 118, S-221 00 Lund, Sweden email: {heyden,gunnar,kalle}@maths.lth.se Abstract

More information

Computer Vision I - Appearance-based Matching and Projective Geometry

Computer Vision I - Appearance-based Matching and Projective Geometry Computer Vision I - Appearance-based Matching and Projective Geometry Carsten Rother 05/11/2015 Computer Vision I: Image Formation Process Roadmap for next four lectures Computer Vision I: Image Formation

More information

Projective Reconstruction of Surfaces of Revolution

Projective Reconstruction of Surfaces of Revolution Projective Reconstruction of Surfaces of Revolution Sven Utcke 1 and Andrew Zisserman 2 1 Arbeitsbereich Kognitive Systeme, Fachbereich Informatik, Universität Hamburg, Germany utcke@informatik.uni-hamburg.de

More information

Degeneracy of the Linear Seventeen-Point Algorithm for Generalized Essential Matrix

Degeneracy of the Linear Seventeen-Point Algorithm for Generalized Essential Matrix J Math Imaging Vis 00 37: 40-48 DOI 0007/s085-00-09-9 Authors s version The final publication is available at wwwspringerlinkcom Degeneracy of the Linear Seventeen-Point Algorithm for Generalized Essential

More information

Visual Odometry for Non-Overlapping Views Using Second-Order Cone Programming

Visual Odometry for Non-Overlapping Views Using Second-Order Cone Programming Visual Odometry for Non-Overlapping Views Using Second-Order Cone Programming Jae-Hak Kim 1, Richard Hartley 1, Jan-Michael Frahm 2 and Marc Pollefeys 2 1 Research School of Information Sciences and Engineering

More information

A stratified approach for camera calibration using spheres. Creative Commons: Attribution 3.0 Hong Kong License

A stratified approach for camera calibration using spheres. Creative Commons: Attribution 3.0 Hong Kong License Title A stratified approach for camera calibration using spheres Author(s) Wong, KYK; Zhang, G; Chen, Z Citation Ieee Transactions On Image Processing, 2011, v. 20 n. 2, p. 305-316 Issued Date 2011 URL

More information

Minimal Projective Reconstruction for Combinations of Points and Lines in Three Views

Minimal Projective Reconstruction for Combinations of Points and Lines in Three Views Minimal Projective Reconstruction for Combinations of Points and Lines in Three Views Magnus Oskarsson, Andrew Zisserman and Kalle Åström Centre for Mathematical Sciences Lund University,SE 221 00 Lund,

More information

Linear Auto-Calibration for Ground Plane Motion

Linear Auto-Calibration for Ground Plane Motion Linear Auto-Calibration for Ground Plane Motion Joss Knight, Andrew Zisserman, and Ian Reid Department of Engineering Science, University of Oxford Parks Road, Oxford OX1 3PJ, UK [joss,az,ian]@robots.ox.ac.uk

More information

Multiple View Geometry of Projector-Camera Systems from Virtual Mutual Projection

Multiple View Geometry of Projector-Camera Systems from Virtual Mutual Projection Multiple View Geometry of rojector-camera Systems from Virtual Mutual rojection Shuhei Kobayashi, Fumihiko Sakaue, and Jun Sato Department of Computer Science and Engineering Nagoya Institute of Technology

More information

Estimation of Camera Pose with Respect to Terrestrial LiDAR Data

Estimation of Camera Pose with Respect to Terrestrial LiDAR Data Estimation of Camera Pose with Respect to Terrestrial LiDAR Data Wei Guan Suya You Guan Pang Computer Science Department University of Southern California, Los Angeles, USA Abstract In this paper, we present

More information

Measurement of Pedestrian Groups Using Subtraction Stereo

Measurement of Pedestrian Groups Using Subtraction Stereo Measurement of Pedestrian Groups Using Subtraction Stereo Kenji Terabayashi, Yuki Hashimoto, and Kazunori Umeda Chuo University / CREST, JST, 1-13-27 Kasuga, Bunkyo-ku, Tokyo 112-8551, Japan terabayashi@mech.chuo-u.ac.jp

More information

Robot Vision: Projective Geometry

Robot Vision: Projective Geometry Robot Vision: Projective Geometry Ass.Prof. Friedrich Fraundorfer SS 2018 1 Learning goals Understand homogeneous coordinates Understand points, line, plane parameters and interpret them geometrically

More information

Object and Motion Recognition using Plane Plus Parallax Displacement of Conics

Object and Motion Recognition using Plane Plus Parallax Displacement of Conics Object and Motion Recognition using Plane Plus Parallax Displacement of Conics Douglas R. Heisterkamp University of South Alabama Mobile, AL 6688-0002, USA dheister@jaguar1.usouthal.edu Prabir Bhattacharya

More information

Machine vision. Summary # 11: Stereo vision and epipolar geometry. u l = λx. v l = λy

Machine vision. Summary # 11: Stereo vision and epipolar geometry. u l = λx. v l = λy 1 Machine vision Summary # 11: Stereo vision and epipolar geometry STEREO VISION The goal of stereo vision is to use two cameras to capture 3D scenes. There are two important problems in stereo vision:

More information

Camera models and calibration

Camera models and calibration Camera models and calibration Read tutorial chapter 2 and 3. http://www.cs.unc.edu/~marc/tutorial/ Szeliski s book pp.29-73 Schedule (tentative) 2 # date topic Sep.8 Introduction and geometry 2 Sep.25

More information

Accurate Motion Estimation and High-Precision 3D Reconstruction by Sensor Fusion

Accurate Motion Estimation and High-Precision 3D Reconstruction by Sensor Fusion 007 IEEE International Conference on Robotics and Automation Roma, Italy, 0-4 April 007 FrE5. Accurate Motion Estimation and High-Precision D Reconstruction by Sensor Fusion Yunsu Bok, Youngbae Hwang,

More information

METR Robotics Tutorial 2 Week 2: Homogeneous Coordinates

METR Robotics Tutorial 2 Week 2: Homogeneous Coordinates METR4202 -- Robotics Tutorial 2 Week 2: Homogeneous Coordinates The objective of this tutorial is to explore homogenous transformations. The MATLAB robotics toolbox developed by Peter Corke might be a

More information

COMP 558 lecture 19 Nov. 17, 2010

COMP 558 lecture 19 Nov. 17, 2010 COMP 558 lecture 9 Nov. 7, 2 Camera calibration To estimate the geometry of 3D scenes, it helps to know the camera parameters, both external and internal. The problem of finding all these parameters is

More information

Camera Calibration with a Simulated Three Dimensional Calibration Object

Camera Calibration with a Simulated Three Dimensional Calibration Object Czech Pattern Recognition Workshop, Tomáš Svoboda (Ed.) Peršlák, Czech Republic, February 4, Czech Pattern Recognition Society Camera Calibration with a Simulated Three Dimensional Calibration Object Hynek

More information

Computer Vision Projective Geometry and Calibration. Pinhole cameras

Computer Vision Projective Geometry and Calibration. Pinhole cameras Computer Vision Projective Geometry and Calibration Professor Hager http://www.cs.jhu.edu/~hager Jason Corso http://www.cs.jhu.edu/~jcorso. Pinhole cameras Abstract camera model - box with a small hole

More information

Pinhole Camera Model 10/05/17. Computational Photography Derek Hoiem, University of Illinois

Pinhole Camera Model 10/05/17. Computational Photography Derek Hoiem, University of Illinois Pinhole Camera Model /5/7 Computational Photography Derek Hoiem, University of Illinois Next classes: Single-view Geometry How tall is this woman? How high is the camera? What is the camera rotation? What

More information

Week 2: Two-View Geometry. Padua Summer 08 Frank Dellaert

Week 2: Two-View Geometry. Padua Summer 08 Frank Dellaert Week 2: Two-View Geometry Padua Summer 08 Frank Dellaert Mosaicking Outline 2D Transformation Hierarchy RANSAC Triangulation of 3D Points Cameras Triangulation via SVD Automatic Correspondence Essential

More information

Single-view metrology

Single-view metrology Single-view metrology Magritte, Personal Values, 952 Many slides from S. Seitz, D. Hoiem Camera calibration revisited What if world coordinates of reference 3D points are not known? We can use scene features

More information

Recovery of Intrinsic and Extrinsic Camera Parameters Using Perspective Views of Rectangles

Recovery of Intrinsic and Extrinsic Camera Parameters Using Perspective Views of Rectangles 177 Recovery of Intrinsic and Extrinsic Camera Parameters Using Perspective Views of Rectangles T. N. Tan, G. D. Sullivan and K. D. Baker Department of Computer Science The University of Reading, Berkshire

More information

Camera calibration. Robotic vision. Ville Kyrki

Camera calibration. Robotic vision. Ville Kyrki Camera calibration Robotic vision 19.1.2017 Where are we? Images, imaging Image enhancement Feature extraction and matching Image-based tracking Camera models and calibration Pose estimation Motion analysis

More information

Euclidean Reconstruction Independent on Camera Intrinsic Parameters

Euclidean Reconstruction Independent on Camera Intrinsic Parameters Euclidean Reconstruction Independent on Camera Intrinsic Parameters Ezio MALIS I.N.R.I.A. Sophia-Antipolis, FRANCE Adrien BARTOLI INRIA Rhone-Alpes, FRANCE Abstract bundle adjustment techniques for Euclidean

More information

A Summary of Projective Geometry

A Summary of Projective Geometry A Summary of Projective Geometry Copyright 22 Acuity Technologies Inc. In the last years a unified approach to creating D models from multiple images has been developed by Beardsley[],Hartley[4,5,9],Torr[,6]

More information

Research on an Adaptive Terrain Reconstruction of Sequence Images in Deep Space Exploration

Research on an Adaptive Terrain Reconstruction of Sequence Images in Deep Space Exploration , pp.33-41 http://dx.doi.org/10.14257/astl.2014.52.07 Research on an Adaptive Terrain Reconstruction of Sequence Images in Deep Space Exploration Wang Wei, Zhao Wenbin, Zhao Zhengxu School of Information

More information