J. Comput. Sci. & Technol., May 2004, Vol.19, No.3, pp.374-382

Single View Based Measurement on Space Planes

Guang-Hui Wang, Zhan-Yi Hu, and Fu-Chao Wu

National Laboratory of Pattern Recognition, Institute of Automation, The Chinese Academy of Sciences, Beijing, P.R. China

Received June 20, 2002; revised April 28, 2003.

Abstract  Plane metrology using a single uncalibrated image is studied in this paper, and three novel approaches are proposed. The first approach, the key-line-based method, is an improvement over the widely used key-point-based method: it uses line correspondences directly to compute the homography between the world plane and its image so as to increase the computational accuracy. The second and third approaches are both based on a pair of vanishing points from two orthogonal sets of parallel lines in the space plane together with two unparallel referential distances, but they deal with the problem in different ways. One works from the algebraic viewpoint: it first maps the image points to an affine space via a transformation constructed from the vanishing points, and then computes the metric distance according to the relationship between the affine space and the Euclidean space. The other works from the geometrical viewpoint, based on the invariance of cross ratios. The second and third methods avoid the selection of control points and are widely applicable. In addition, a brief description of how to retrieve other geometrical entities on the space plane, such as the distance from a point to a line and the angle formed by two lines, is also presented. Extensive experiments on simulated data as well as on real images show that the first and second approaches are of better precision and stronger robustness than the key-point-based one and the third one, since these two approaches are fundamentally based on line information.

Keywords  single view metrology, projective geometry, geometrical parameter retrieval, plane homography

1 Introduction

One of the main aims of computer vision is to take measurements of the environment and reconstruct its 3D model. Using vision to measure world distances has attracted a lot of attention and found wide applications in recent years [1-10], such as architectural and indoor measurement, reconstruction from paintings, forensic measurement and traffic accident investigation [1-5]. The traditional approach to measurement is to take all the distances manually using metric tapes or rulers, or with special devices such as ultrasonic sensors or laser range finders. These approaches are time-consuming, prone to errors and invasive. With computer-vision-based methods, all one needs to do is take several pictures; the measurements can then be done offline with higher accuracy, flexibility and efficiency. This kind of approach has several potential advantages. First, it is user friendly: once the images are acquired, users can take measurements at the desktop and store them in a database. Second, the data acquisition process is rapid, simple and minimally invasive, since it only involves a camera taking pictures of the environment to be measured. Third, the acquired data are stored digitally and are ready for reuse at any time, so there is no need to go back to the original scene when new measurements are required. Finally, the hardware involved is cheap and easy to use. Generally speaking, the computer-vision-based measurement methods in the literature may be broadly divided into two categories.
The classical method is to reconstruct the metric structure of the scene from two or more images by stereo vision techniques [1, 9-11]. If the Euclidean reconstruction of a scene can be obtained, any geometrical information about the scene can be retrieved accordingly. However, this is a hard task due to the problem of seeking correspondences across different views, and the precision of the reconstruction is closely tied to that of the correspondences and of the camera calibration. The other approach is to use a single uncalibrated image directly [1-8]. It is well known that one image alone cannot provide enough information for a complete 3D reconstruction. However, some metric quantities can be inferred directly from one image given knowledge of some geometrical scene constraints, such as the planarity of points and

* Regular Paper. The work was supported by the National Natural Science Foundation of China and the National High Technology Development 863 Program of China.

parallelism of lines and planes. In [1, 2], a key-point-based approach to calculate the Euclidean distance between two points on a world plane was proposed. In this method, at least the coordinates of four control points on the world plane and their corresponding image points must be known beforehand. In [1, 3], the authors described another approach to compute 3D affine measurements from a single perspective image. It is assumed that the vanishing line of a reference plane in the scene, as well as a vanishing point in a reference direction (not parallel to the plane), can be determined from the image; then three canonical types of measurement (distances between planes parallel to the reference plane, area and length ratios on these planes, and the camera's position) can be computed. In [4-7], methods were also investigated, in both the computer vision and the photogrammetry communities, for object reconstruction from measurements in a single view. These methods are based on constraints on the object to be reconstructed, such as edges, coplanarity, parallelism and perpendicularity.

Since distance measurement on a plane is of great importance and has wide applications, in this paper we introduce three novel single-view-based methods for distance measurement, together with a brief description of the retrieval of other geometrical parameters of a planar scene. In addition, a comparative study of the proposed methods and the widely used key-point-based method is carried out.

The paper is organized as follows. In Section 2, some preliminaries on homography, vanishing points and the cross ratio are introduced briefly. The three novel approaches are elaborated in Section 3. In Section 4, a brief description of how to retrieve other geometrical parameters is presented. In Section 5, all the methods for distance measurement are compared experimentally on both simulated data and real images. Some conclusions are given at the end of the paper.

2 Some Preliminaries

In order to facilitate the discussion in subsequent sections, some preliminaries on homography, vanishing points and the cross ratio are presented here, and the key-point-based method for distance measurement is outlined. In this paper the following notation is used: a 3D or 2D column vector is denoted by x, while a homogeneous vector (augmented by adding w or 1) is denoted by x̃ = [x^T, w]^T or x̃ = [x^T, 1]^T.

2.1 Plane-to-Plane Homography

Under the pinhole camera model, a 3D point x̃ in space is projected to an image point m̃ via a 3×4 projection matrix P as

s \tilde{m} = P \tilde{x} = [p_1, p_2, p_3, p_4] \tilde{x},    (1)

where m̃ and x̃ are homogeneous coordinates of the form m̃ = (u, v, w)^T and x̃ = (X, Y, Z, W)^T, and s is a nonzero scalar. For 3D coplanar points we may assume, without loss of generality, that Z = 0; then

s \begin{bmatrix} u \\ v \\ w \end{bmatrix} = P \begin{bmatrix} X \\ Y \\ 0 \\ W \end{bmatrix} = \underbrace{[p_1, p_2, p_4]}_{H} \begin{bmatrix} X \\ Y \\ W \end{bmatrix} = \begin{bmatrix} h_{11} & h_{12} & h_{13} \\ h_{21} & h_{22} & h_{23} \\ h_{31} & h_{32} & h_{33} \end{bmatrix} \begin{bmatrix} X \\ Y \\ W \end{bmatrix}.

Hence the mapping between corresponding points on the plane and on its image is

s \tilde{m} = H \tilde{x},    (2)

where H = [p_1, p_2, p_4] is called the plane-to-plane homography. H is a non-singular 3×3 homogeneous matrix (degeneracy occurs if and only if the camera center lies on the reference plane) with 8 degrees of freedom, since it is defined only up to a scale factor.
According to (2), each image-to-world point correspondence gives rise to two linear constraints on the 9 elements of the homography; thus, given N (N ≥ 4) coplanar space points in general position (no three collinear) and their correspondences in the image, the homography matrix can be determined uniquely. If N > 4 the system is over-determined, and for noisy data H can be estimated by a suitable minimization scheme [1, 6]. Once the homography between the world and image planes is determined, an image point can be back-projected to a point on the world plane via H^{-1}, and the distance between two points on the world plane can then simply be computed as the Euclidean distance between their back-projections. This is the basic principle of key-point-based plane measurement [1, 2]. Clearly, the accuracy of this method depends greatly on the selection of the key points and on the detection of their corresponding image points. In real applications, directly using point correspondences to compute the homography may entail a loss of accuracy due to noise in the extracted image points.
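To make the key-point-based baseline concrete, the following is a minimal NumPy sketch of the DLT estimation of H from point correspondences and of the back-projection step just described. The function names are ours and coordinate normalization is omitted; this is an illustration of the principle, not the authors' implementation.

```python
import numpy as np

def homography_from_points(world_pts, image_pts):
    """Standard DLT: each correspondence (X, Y) <-> (u, v) under
    s*m = H*x gives two linear equations in the 9 entries of H; the
    estimate is the right singular vector of the stacked system with
    the smallest singular value (a least-squares fit when N > 4)."""
    A = []
    for (X, Y), (u, v) in zip(world_pts, image_pts):
        A.append([X, Y, 1, 0, 0, 0, -u * X, -u * Y, -u])
        A.append([0, 0, 0, X, Y, 1, -v * X, -v * Y, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    return Vt[-1].reshape(3, 3)

def keypoint_distance(H, m1, m2):
    """Key-point-based measurement: back-project two image points
    through H^-1 and take the Euclidean distance on the world plane."""
    Hinv = np.linalg.inv(H)
    x1 = Hinv @ np.array([m1[0], m1[1], 1.0]); x1 /= x1[2]
    x2 = Hinv @ np.array([m2[0], m2[1], 1.0]); x2 /= x2[2]
    return np.linalg.norm(x1[:2] - x2[:2])
```

In practice, normalizing the image and world coordinates before the SVD noticeably improves the conditioning of the estimate.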

2.2 Vanishing Point and Vanishing Line

Under perspective projection, straight object edges in 3D space project to straight lines in the image, and parallel lines in space project to concurrent lines in the image plane. Their common intersection point, possibly located at infinity, is called the vanishing point. Different sets of parallel lines on the same space plane define different vanishing points, and all these vanishing points lie on a common line in the image plane, called the vanishing line, as shown in Fig.1. Vanishing points and vanishing lines convey a great deal of information about the directions of lines and the orientations of planes in space. They can be estimated directly from images, and no explicit knowledge of the relative geometry between the camera and the viewed scene is required. There are many methods for the detection and computation of vanishing points and vanishing lines in the literature [12-14]. If more than two lines are available, a Maximum Likelihood Estimation (MLE) algorithm or a least-squares technique can be employed to estimate the vanishing point [12, 14]; a small least-squares sketch is given at the end of this section.

Fig.1. Vanishing point and vanishing line.

2.3 Cross Ratio

As shown in Fig.2, for four collinear points A_1, A_2, A_3, A_4, the cross ratio is defined as

(A_1 A_2; A_3 A_4) = \frac{A_1A_3 \cdot A_2A_4}{A_2A_3 \cdot A_1A_4},    (3)

where A_iA_j denotes the distance from point A_i to point A_j (i = 1, 2 and j = 3, 4). If point A_4 is at infinity, the cross ratio reduces to a simple ratio, i.e., (A_1 A_2; A_3 A_4) = A_1A_3 / A_2A_3.

The cross ratio is an important projective invariant: it does not change under a projective transformation. Therefore, in Fig.2 we have the equality (A_1 A_2; A_3 A_4) = (a_1 a_2; a_3 a_4). When the cross ratio (a_1 a_2; a_3 a_4) = -1, the configuration is called harmonic, and the couples (a_1, a_2) and (a_3, a_4) are said to be harmonic pairs [15]. The harmonic cross ratio is invariant under interchange of the points within each couple and under interchange of the couples, so each couple can be considered unordered. Given (a_1, a_2), a_3 is said to be conjugate to a_4 if (a_1 a_2; a_3 a_4) forms a harmonic configuration.

Fig.2. Cross ratio, a projective invariant.
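As an illustration of the least-squares option mentioned in Subsection 2.2 (not the MLE variant, and not the code used in the paper), a set of noisy image lines can be intersected in the least-squares sense as follows; the helper names are hypothetical.

```python
import numpy as np

def line_through(p, q):
    """Homogeneous line through two image points given as (u, v)."""
    return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])

def vanishing_point(lines):
    """Least-squares vanishing point of a set of image lines: the unit
    vector v minimizing sum_i (l_i . v)^2, i.e., the right singular
    vector of the stacked line matrix with the smallest singular value."""
    L = np.vstack(lines)
    _, _, Vt = np.linalg.svd(L)
    return Vt[-1]          # homogeneous coordinates of the vanishing point
```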
3 Novel Approaches for Distance Measurement

In this section we introduce three novel approaches for distance measurement. The first is an improvement over the widely used key-point-based method: it uses line correspondences directly to compute the homography between a world plane and its image so as to increase the computational accuracy. The second and the third are both based on a pair of vanishing points from two orthogonal sets of parallel lines in the space plane. The basic difference between them is that one works from the algebraic viewpoint, first mapping the image points to an affine space via a transformation constructed from the vanishing points and then computing the metric distance from the relationship between the affine space and the Euclidean space, while the other works from the geometrical viewpoint, based on the invariance of cross ratios.

3.1 Key-Line-Based Approach

Let x̃_1, x̃_2 be two points on the space plane and m̃_1, m̃_2 their corresponding image points under perspective projection; then m̃_1 = s_1 H x̃_1 and m̃_2 = s_2 H x̃_2, where s_1 and s_2 are nonzero scales. Let L be the space line passing through x̃_1 and x̃_2; its corresponding image line l must pass through the two image points m̃_1 and m̃_2. So we have

l = \tilde{m}_1 \times \tilde{m}_2 = (s_1 H \tilde{x}_1) \times (s_2 H \tilde{x}_2) = s_1 s_2 |H| H^{-T} (\tilde{x}_1 \times \tilde{x}_2) = s H^{-T} \tilde{L},

that is,

l = s H^{-T} \tilde{L},    (4)

where s = s_1 s_2 |H|, × stands for the vector product of two vectors, and we usually compute the vector product as m̃_1 × m̃_2 = [m̃_1]_× m̃_2, with [m̃_1]_× the skew-symmetric matrix of the vector m̃_1 = (u, v, w)^T:

[\tilde{m}_1]_\times = \begin{bmatrix} 0 & -w & v \\ w & 0 & -u \\ -v & u & 0 \end{bmatrix}.    (5)

By (4), each line correspondence gives rise to two constraints on the homography H. So given four coplanar line correspondences in general position (i.e., no three concurrent), the homography can be determined uniquely [11]. If more than four correspondences are available, a suitable minimization scheme can be adopted to give a more faithful estimate of the homography.
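A compact NumPy sketch of this line-based estimation, written from the equivalent form L̃ ∝ H^T l of relation (4) (our own illustration; as with the point-based DLT, coordinate normalization would be advisable in practice but is omitted here):

```python
import numpy as np

def skew(a):
    """[a]_x, the skew-symmetric matrix of Eq. (5)."""
    return np.array([[0.0, -a[2], a[1]],
                     [a[2], 0.0, -a[0]],
                     [-a[1], a[0], 0.0]])

def homography_from_lines(world_lines, image_lines):
    """Key-line-based sketch: a world line L and its image l satisfy
    L ~ H^T l up to scale, so [L]_x (H^T l) = 0.  Each correspondence
    contributes the 3x9 block kron(skew(L), l), of rank 2; with >= 4
    lines in general position the smallest right singular vector of the
    stacked system gives H^T, hence H."""
    A = np.vstack([np.kron(skew(np.asarray(L, float)), np.asarray(l, float))
                   for L, l in zip(world_lines, image_lines)])
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1].reshape(3, 3).T      # H maps world points to image points
```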

3.2 Vanishing-Point-Based Approach

For convenience of description, we call the space plane on which the measurements are taken the reference plane. We then have the following result: given the image of two orthogonal sets of parallel lines and the lengths of two unparallel line segments on the reference plane, the distance between any two points on the reference plane can be measured.

Define a space coordinate system O-XY on the reference plane, with the X and Y axes parallel to the two sets of parallel lines respectively, and denote the vanishing points of the parallel lines in the X and Y directions as v_X and v_Y. From (2) it is clear that v_X = s_1 p_1, v_Y = s_2 p_2 and the vanishing line is v_L = v_X × v_Y, where s_1 and s_2 are nonzero scales. As stated in Subsection 2.1, there exists a homography between the reference plane and its image, and we can select v_X and v_Y as its first two columns. The homography must be of rank three, otherwise the mapping from the reference plane to the image is degenerate; therefore the last column of H must not lie on the vanishing line, and we can select the last column h_3 to be any vector linearly independent of v_X and v_Y, such as h_3 = v_L. Through the constructed homography H = [v_X, v_Y, h_3], an image point m̃ can be mapped to a point x̃_H in an affine space, which we call the H-space hereafter.

Proposition 1. The distance between a pair of points on the reference plane is uniquely determined by the corresponding distance in the H-space together with two unknown common factors, which are independent of the space points in question.

Proof. From (2) and the above analysis, we have

\begin{cases} \tilde{m} = \rho_1 P_0 \tilde{x} = \rho_1 [p_1, p_2, p_4] \tilde{x} \\ \tilde{m} = \rho_2 H \tilde{x}_H = \rho_2 [s_1 p_1, s_2 p_2, h_3] \tilde{x}_H \end{cases}    (6)

where m̃, x̃ and x̃_H are all in normalized homogeneous form, m̃ = (u, v, 1)^T, x̃ = (X, Y, 1)^T, x̃_H = (X_H, Y_H, 1)^T, P_0 = [p_1, p_2, p_4], and ρ_1, ρ_2 are nonzero scales. Eliminating m̃ and expanding, we obtain

\tilde{x} = \frac{\rho_2}{\rho_1} P_0^{-1} H \tilde{x}_H = \frac{\rho_2}{\rho_1} [p_1, p_2, p_4]^{-1} [s_1 p_1, s_2 p_2, h_3] \tilde{x}_H = \frac{\rho_2}{\rho_1} \begin{bmatrix} s_1 & 0 & a_1 \\ 0 & s_2 & a_2 \\ 0 & 0 & a_3 \end{bmatrix} \tilde{x}_H = \underbrace{\begin{bmatrix} \lambda_1 & 0 & k_1 \\ 0 & \lambda_2 & k_2 \\ 0 & 0 & 1 \end{bmatrix}}_{A} \tilde{x}_H = A \tilde{x}_H,    (7)

where λ_1 = ρ_2 s_1 / ρ_1, λ_2 = ρ_2 s_2 / ρ_1, k_1 = ρ_2 a_1 / ρ_1, k_2 = ρ_2 a_2 / ρ_1, ρ_2 a_3 / ρ_1 = 1, and

a_1 = \frac{|h_3, p_2, p_4|}{|p_1, p_2, p_4|}, \quad a_2 = \frac{|p_1, h_3, p_4|}{|p_1, p_2, p_4|}, \quad a_3 = \frac{|p_1, p_2, h_3|}{|p_1, p_2, p_4|}.

We can see from (7) that λ_1 and λ_2 are independent of the particular correspondence between x̃ and x̃_H. Thus a point in the Euclidean space and its corresponding point in the H-space are related by an affine transformation A.
For two points x̃_1, x̃_2 in the Euclidean space and their corresponding points x̃_{H1}, x̃_{H2} in the H-space, we have

\tilde{x}_1 - \tilde{x}_2 = \begin{bmatrix} \lambda_1 X_{H1} + k_1 \\ \lambda_2 Y_{H1} + k_2 \\ 1 \end{bmatrix} - \begin{bmatrix} \lambda_1 X_{H2} + k_1 \\ \lambda_2 Y_{H2} + k_2 \\ 1 \end{bmatrix} = \begin{bmatrix} \lambda_1 (X_{H1} - X_{H2}) \\ \lambda_2 (Y_{H1} - Y_{H2}) \\ 0 \end{bmatrix} = \begin{bmatrix} \lambda_1 \Delta X_H \\ \lambda_2 \Delta Y_H \\ 0 \end{bmatrix}.    (8)

Therefore the distance between any two points in the Euclidean space is a function of the two common scales (λ_1, λ_2) and the coordinate difference of the two corresponding points in the H-space. If we know two unparallel reference line segments and their lengths d_1 and d_2, then

\begin{cases} d_1^2 = \lambda_1^2 \Delta X_{H1}^2 + \lambda_2^2 \Delta Y_{H1}^2 \\ d_2^2 = \lambda_1^2 \Delta X_{H2}^2 + \lambda_2^2 \Delta Y_{H2}^2 \end{cases}    (9)

The two scales λ_1 and λ_2 can be uniquely determined from these equations; the metric distance between any two points in the reference plane is then obtained immediately from

d = \sqrt{\lambda_1^2 \Delta X_H^2 + \lambda_2^2 \Delta Y_H^2}.    (10)
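The following NumPy sketch strings (6)-(10) together, assuming the two vanishing points have already been estimated (e.g., with the least-squares sketch of Section 2); the argument names are hypothetical and the code only illustrates the method, it is not the authors' implementation.

```python
import numpy as np

def vp_based_distance(vx, vy, ref_segs, ref_lengths, m1, m2):
    """Vanishing-point-based measurement: vx, vy are the homogeneous
    vanishing points of the two orthogonal directions, ref_segs two
    unparallel reference segments given as image point pairs with true
    lengths ref_lengths, and (m1, m2) the query point pair."""
    h3 = np.cross(vx, vy)                    # the choice h_3 = v_L from the text
    Hc = np.column_stack([vx, vy, h3])
    Hinv = np.linalg.inv(Hc)

    def to_H_space(m):                       # map an image point into the H-space
        x = Hinv @ np.array([m[0], m[1], 1.0])
        return x[:2] / x[2]

    # Eq. (9): the two reference lengths give a 2x2 linear system in
    # (lambda_1^2, lambda_2^2).
    rows, rhs = [], []
    for (p, q), d in zip(ref_segs, ref_lengths):
        dX, dY = to_H_space(p) - to_H_space(q)
        rows.append([dX ** 2, dY ** 2])
        rhs.append(d ** 2)
    lam1_sq, lam2_sq = np.linalg.solve(np.array(rows), np.array(rhs))

    # Eq. (10): metric distance of the query pair.
    dX, dY = to_H_space(m1) - to_H_space(m2)
    return np.sqrt(lam1_sq * dX ** 2 + lam2_sq * dY ** 2)
```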

Some remarks:

(i) In (7), the last column of A represents the translation between the origin of the affine coordinate system and that of the Euclidean coordinate system, while λ_1 and λ_2 are the two scalings between the X and Y axes of the two coordinate systems. If the world coordinate system is selected as shown in Fig.3, then h_3 is the image of the origin O of the world coordinate system, hence h_3 = s_0 p_4. In this case a_1 = a_2 = 0, k_1 = k_2 = 0, and (7) becomes

\tilde{x} = A \tilde{x}_H = \begin{bmatrix} \lambda_1 & 0 & 0 \\ 0 & \lambda_2 & 0 \\ 0 & 0 & 1 \end{bmatrix} \tilde{x}_H.    (11)

Therefore, given λ_1, λ_2 and an image point, we can immediately obtain the coordinates of its corresponding space point, and furthermore we can retrieve the homography between the world plane and the image plane.

Fig.3. Two sets of parallel lines and their image.

(ii) If each set of parallel lines contains only two lines, and the two sets form a rectangle, then we have the following proposition for the two scalings λ_1 and λ_2.

Proposition 2. The ratio of λ_1 to λ_2 is equal to the ratio of the rectangle's two sides.

Proof. Without loss of generality, denote the coordinates of the intersections and the image of the rectangle as shown in Fig.3, with all points in homogeneous coordinates; then

o = \mu_1 P_0 \tilde{O}, \quad a = \mu_2 P_0 \tilde{A}, \quad b = \mu_3 P_0 \tilde{B}, \quad c = \mu_4 P_0 \tilde{C},    (12)

where μ_1, μ_2, μ_3, μ_4 are unknown nonzero scales. The vanishing points v_X and v_Y can be computed from the image as follows:

v_X = (o \times a) \times (b \times c) = \mu_1\mu_2\mu_3\mu_4 (P_0\tilde{O} \times P_0\tilde{A}) \times (P_0\tilde{B} \times P_0\tilde{C}) = \mu_1\mu_2\mu_3\mu_4 |P_0| P_0 [(\tilde{O} \times \tilde{A}) \times (\tilde{B} \times \tilde{C})] = \mu_1\mu_2\mu_3\mu_4 a^2 b |P_0| p_1 = s_1 p_1,
v_Y = (o \times c) \times (a \times b) = \mu_1\mu_2\mu_3\mu_4 (P_0\tilde{O} \times P_0\tilde{C}) \times (P_0\tilde{A} \times P_0\tilde{B}) = \mu_1\mu_2\mu_3\mu_4 |P_0| P_0 [(\tilde{O} \times \tilde{C}) \times (\tilde{A} \times \tilde{B})] = \mu_1\mu_2\mu_3\mu_4 a b^2 |P_0| p_2 = s_2 p_2,    (13)

where s_1 = μ_1 μ_2 μ_3 μ_4 a^2 b |P_0|, s_2 = μ_1 μ_2 μ_3 μ_4 a b^2 |P_0|, and p_1, p_2 are the first and second columns of P_0 respectively. Therefore, from (13) and (7), the ratio of λ_1 to λ_2 equals that of the two sides of the rectangle, i.e.,

\lambda_1 : \lambda_2 = a : b.    (14)

(iii) If the two sets of parallel lines form a square, then the two scalings are equal, i.e., λ_1 = λ_2. In this case the distance between any two points in the reference plane and the corresponding distance in the H-space are equal up to one common scale, and one reference distance is enough to determine that scale.

3.3 Cross-Ratio-Based Approach

In this section we describe a geometric approach to distance measurement based on the cross ratio. As shown in Fig.4, the left part gives two sets of orthogonal parallel line segments L_1, L_2, L_3 and L_4 in a space plane, and the right part is the corresponding image. If we know the lengths of line segments L_1 and L_2, then the distance between any two points X_1 and X_2 can be computed via the invariance of cross ratios.

Fig.4. A sketch of the cross-ratio-based approach.

In Fig.4, let S_1, S_2, S_3, S_4 be the intersections of the two sets of parallel lines in space, and B_1, B_2, B_3, B_4 the intersections of the line through points X_1 and X_2 with the two sets of parallel lines, while s_1, s_2, s_3, s_4 and b_1, b_2, b_3, b_4 are their corresponding image points and v_1, v_2 are the vanishing points of the two directions. From the definition of the cross ratio, it is easy to compute the cross ratio of the four collinear image points s_1, s_2, b_4, v_2:

r_1 = (s_1 s_2; b_4 v_2) = \frac{s_1 b_4 \cdot s_2 v_2}{s_2 b_4 \cdot s_1 v_2}.    (15)

In a similar way we can obtain r_2 = (s_4 s_3; b_3 v_2), r_3 = (s_2 s_3; b_2 v_1), r_4 = (x_1 b_2; b_3 b_4) and r_5 = (x_2 b_2; b_3 b_4) directly from the image. Suppose the space points corresponding to v_1, v_2 are V_1, V_2, which are located at infinity on the two sets of parallel lines. Then the four space points S_1, S_2, B_4, V_2 have the same cross ratio as the four collinear image points s_1, s_2, b_4, v_2, since the cross ratio is a projective invariant, i.e.,

(S_1 S_2; B_4 V_2) = \frac{S_1 B_4 \cdot S_2 V_2}{S_2 B_4 \cdot S_1 V_2} = \frac{S_1 B_4}{S_2 B_4} = \frac{S_1 S_2 + S_2 B_4}{S_2 B_4} = (s_1 s_2; b_4 v_2) = r_1.    (16)

From this equation it is easy to compute the distance S_2 B_4 between points S_2 and B_4, and the distances S_3 B_3 and S_3 B_2 can be solved similarly. Hence we can obtain the distances B_2 B_3 and B_3 B_4 via the cosine theorem (in the case that the two sets of parallel lines are not orthogonal). Then, from the cross ratios (X_1 B_2; B_3 B_4) = r_4 and (X_2 B_2; B_3 B_4) = r_5, we can eventually retrieve the distances X_1 B_2 and X_2 B_2. Therefore the distance between the two space points X_1 and X_2 is

X_1 X_2 = X_1 B_2 - X_2 B_2.    (17)

Some remarks:

(i) If the world coordinate system is selected as shown in Fig.4, then the cross ratio can be computed from either the X-coordinates or the Y-coordinates of the four collinear points rather than from distances; this simplifies the computation and also increases the accuracy. In this case, not only the distance between points X_1 and X_2 but also the coordinates of each individual point can be obtained easily. Furthermore, the homography between the world plane and the image plane can be retrieved in the same way as stated in the previous section.

(ii) In the above analysis, we suppose that the lengths of segments L_1 and L_2 in Fig.4 are known. If the lengths are unavailable, we can retrieve them via the following proposition.

Proposition 3. Given two unparallel reference distances, the lengths of segments L_1 and L_2 can be determined uniquely.

Proof. We only give an analytical proof here. Let the lengths of segments L_1 and L_2 be x and y respectively, and let the reference distances be d_1 and d_2. Then

S_2 B_4 = \frac{y}{r_1 - 1}, \quad S_3 B_3 = \frac{y}{r_2 - 1}, \quad S_3 B_2 = \frac{x}{r_3 - 1},

B_2 B_3 = \sqrt{\Big(\frac{y}{r_2 - 1}\Big)^2 + \Big(\frac{x}{r_3 - 1}\Big)^2}, \quad B_3 B_4 = \sqrt{x^2 + \Big(\frac{y}{r_1 - 1} - \frac{y}{r_2 - 1}\Big)^2}.

Both d_1 and d_2 are therefore functions of x^2 and y^2,

\begin{cases} d_1 = f_1(x^2, y^2) \\ d_2 = f_2(x^2, y^2) \end{cases}    (18)

The two equations in (18) are independent of each other if the two reference distances are unparallel. Hence, given two unparallel reference distances d_1 and d_2, and considering that x > 0 and y > 0, x and y can be uniquely determined from (18).

(iii) It is worth noting that if X_1 and X_2 lie on the diagonal lines through S_1 and S_3 or through S_2 and S_4, the situation becomes degenerate. In this case, we can denote the intersection of S_1S_3 and S_2S_4 as point C; then from (s_1 s_3; x_1 c) and (s_1 s_3; x_2 c), or (s_2 s_4; x_1 c) and (s_2 s_4; x_2 c), it turns out to be easier to compute the distance between X_1 and X_2.
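Below is a small NumPy illustration of the two basic ingredients of this approach: measuring a cross ratio from four collinear image points, and recovering a world segment from it via (16). It is our own sketch and assumes the collinear image points have already been identified.

```python
import numpy as np

def cross_ratio(a1, a2, a3, a4):
    """Cross ratio (a1 a2; a3 a4) of four collinear image points (2-D
    pixel coordinates), Eq. (3), via signed positions along the line."""
    pts = np.array([a1, a2, a3, a4], dtype=float)
    d = pts[3] - pts[0]
    t = (pts - pts[0]) @ d / np.dot(d, d)    # 1-D parameter of each point
    return ((t[2] - t[0]) * (t[3] - t[1])) / ((t[2] - t[1]) * (t[3] - t[0]))

def segment_from_cross_ratio(r1, y):
    """Eq. (16): with r1 = (s1 s2; b4 v2) measured in the image and the
    world length S1S2 = y known, r1 = (S1S2 + S2B4)/S2B4 gives S2B4."""
    return y / (r1 - 1.0)
```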
4 Retrieving Other Geometrical Entities

In the previous sections we proposed several methods for distance measurement. Actually, once the homography is recovered, we can retrieve almost all the geometrical information within the space plane, such as the distance from a point to a line, the angle formed by two lines, the area of a planar object, etc. Next we briefly introduce some approaches to retrieving these geometrical measurements.

Let m, l_1 and l_2 be an image point and two image lines, and x, L_1 and L_2 their corresponding point and lines in the space plane; then x̃ = s_1 H^{-1} m̃, L̃_1 = s_2 H^T l_1 and L̃_2 = s_3 H^T l_2. Suppose the coordinates of the space point x are [x_s, y_s]^T and the parameters of the space lines are [a_1, b_1, c_1]^T and [a_2, b_2, c_2]^T respectively. Then, through a simple computation, the distance from x to L_1 is

d = \frac{|a_1 x_s + b_1 y_s + c_1|}{\sqrt{a_1^2 + b_1^2}}.    (19)

The angle formed by L_1 and L_2 is

\theta = \tan^{-1}\frac{a_1 b_2 - a_2 b_1}{a_1 a_2 + b_1 b_2}.    (20)

The intersection of L_1 and L_2 is

I(L_1, L_2) = \Big[\frac{b_1 c_2 - b_2 c_1}{a_1 b_2 - a_2 b_1}, \frac{a_2 c_1 - a_1 c_2}{a_1 b_2 - a_2 b_1}\Big]^T.    (21)

As for the area of a planar object, we can extract its contour in the image, back-project all the edge points onto the world plane via the homography, and estimate the enclosed area by integration. For some regular objects, such as a conic or a polygon, we can fit the back-projected edge points to a conic or a polygon via least squares or some other robust estimation technique, and then compute the area according to the corresponding formula.
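For completeness, a direct NumPy transcription of formulas (19)-(21) (a straightforward sketch, not part of the paper's software); the image point is assumed to have been back-projected via H^{-1} and the image lines via H^T beforehand.

```python
import numpy as np

def point_line_distance(x_s, y_s, line):            # Eq. (19)
    a, b, c = line
    return abs(a * x_s + b * y_s + c) / np.hypot(a, b)

def line_angle(l1, l2):                              # signed form of Eq. (20)
    (a1, b1, _), (a2, b2, _) = l1, l2
    return np.arctan2(a1 * b2 - a2 * b1, a1 * a2 + b1 * b2)

def line_intersection(l1, l2):                       # Eq. (21)
    (a1, b1, c1), (a2, b2, c2) = l1, l2
    den = a1 * b2 - a2 * b1
    return ((b1 * c2 - b2 * c1) / den, (a2 * c1 - a1 * c2) / den)
```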

5 Experiments

In this section, a comparative study of the proposed approaches, together with the key-point-based method, is carried out on both simulated and real data.

5.1 Experiments with Simulated Data

In the simulations, the camera settings are f_u = 1,200, f_v = 1,000, skew = 1, u_0 = 512, v_0 = 384, and the image resolution is 1,024 × 768 pixels. The extrinsic camera parameters are: rotation axis r = [2, 1, 4]^T, rotation angle α = π/6 and translation t = [-5, 10, 800]^T. We generate a rectangle in the reference space plane and evenly select 100 points on each side of the rectangle. Gaussian image noise (unit: pixel) is added to each image point, and the 100 image points on each side are fitted to a line via a least-squares algorithm. In order to ensure the comparability of all the methods, all tests are carried out with the same camera parameters and simulation data.

For the key-point-based method, we first calculate the four corners of the generated rectangle by computing the intersections of the four fitted side lines and use these points to calculate the homography between the reference plane and the image plane; an image point is then mapped to the Euclidean space via x̃ = H^{-1} m̃ and the distance between two space points is determined accordingly. For the key-line-based method, we use the line correspondences directly to compute the homography. For the proposed vanishing-point-based and cross-ratio-based approaches, we take the rectangle as two sets of orthogonal parallel lines and take two unparallel line segments as the reference lengths so as to determine the two scales.

In order to provide statistically more meaningful results, we vary the Gaussian noise level from 0 to 4 pixels with a step of 0.1 pixel during the test. At each noise level, we randomly select 100 pairs of space points and use their corresponding image points to estimate their distances with the four approaches respectively. The relative error of the estimated distances at different noise levels is shown in Fig.5, where the value at each noise level is the mean of 100 independent tests; Fig.6 shows the corresponding standard deviations. For convenience of display, the results are plotted at every fourth step.

Fig.5. Relative error at different noise levels.

Fig.6. Standard deviation at different noise levels.
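To make the simulation protocol concrete, here is a minimal NumPy sketch of the data-generation step, with hypothetical helpers and a user-supplied 3×4 camera matrix P; the paper does not publish this code.

```python
import numpy as np

def project(P, Xw):
    """Project homogeneous world points (N x 4) with a 3 x 4 camera P."""
    m = (P @ Xw.T).T
    return m[:, :2] / m[:, 2:3]

def fit_line(pts):
    """Total-least-squares fit of a line a*u + b*v + c = 0 to noisy 2-D
    points, as done for each rectangle side in the simulation."""
    centroid = pts.mean(axis=0)
    _, _, Vt = np.linalg.svd(pts - centroid)
    a, b = Vt[-1]                        # normal to the best-fit direction
    return np.array([a, b, -(a * centroid[0] + b * centroid[1])])

def noisy_side(P, end0, end1, sigma, n=100):
    """Sample n points on a rectangle side (given by its homogeneous world
    endpoints), project them, and add Gaussian pixel noise of std sigma."""
    t = np.linspace(0.0, 1.0, n)[:, None]
    Xw = (1 - t) * np.asarray(end0, float) + t * np.asarray(end1, float)
    return project(P, Xw) + np.random.normal(0.0, sigma, size=(n, 2))
```

The four fitted side lines then provide the corner points, line correspondences and vanishing points needed by the four methods under comparison.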

From the above simulations we can see that all four approaches achieve good precision in distance measurement, even under relatively high noise levels. The key-line- and vanishing-point-based approaches are better than the other two, since these two methods are based on line correspondences, and line extraction from an image is much less sensitive to noise.

5.2 Experiments with Real Images

In the real-image test, the images were taken by a Nikon Coolpix 990 digital camera. Fig.7 shows two images of the test set, where the first building is the research center of the Institute of Automation, the Chinese Academy of Sciences, and the second one is the Jade Palace Restaurant. During the test, we take the storey height and the window size as references. First, we use the Canny edge detector to detect the edge points and a least-squares technique to fit the detected edge points to lines; then all the measurements within the front face of the buildings can be taken according to the approaches described above. We select four equal distances at different spots on each image so as to assess the performance of each approach. The test results are shown in Table 1, where the true distances were measured manually on the spot. From the test results, we can see that the precision of all the measurements is acceptable; however, the approaches based on key lines and vanishing points perform better than the others, which is the same conclusion as in the simulation.

Fig.7. Two images of the test data.

Table 1. Real image test results: true distance, measured distance and relative error (%) of the key-point-based, key-line-based, vanishing-point-based and cross-ratio-based approaches for the line segments S_1-S_8.

6 Conclusions

In this paper we focus on plane distance measurement from a single view and propose three novel approaches. Both simulations and real-image tests validate the proposed methods and show that the key-line- and vanishing-point-based approaches are of better accuracy and robustness. In structured environments, orthogonal lines are not rare (e.g., the contour of a building, the frame of a window or a door), so the applicability of the proposed approaches is not too limited. Usually the images of the vanishing points lie outside the image area; this does not affect the application of our methods. However, some extreme configurations should be avoided, so as not to push

the images of the vanishing points to infinity, since this may cause a loss of accuracy. Besides, it is clear that all the proposed approaches rely on some specific geometrical information of the scene, and the precision of the measurement depends greatly on that of the image pre-processing, such as edge detection, line fitting and vanishing point computation. Hence it is crucial to select a robust edge detection and line fitting technique so as to improve the accuracy of the measurement.

Based on the proposed approaches, we have developed a prototype system that takes plane measurements automatically from a single image. The software can be downloaded from the author's personal homepage: ia.ac.cn/english/rv/ghwang/. Any comments and suggestions are welcome.

References

[1] Criminisi A. Accurate visual metrology from single and multiple images [Dissertation]. University of Oxford.
[2] Criminisi A, Reid I, Zisserman A. A plane measuring device. Image and Vision Computing, 1999, 17(8).
[3] Criminisi A, Reid I, Zisserman A. Single view metrology. In Proc. International Conference on Computer Vision, Kerkyra, Greece, Sept. 1999.
[4] Gurdjos P, Payrissat R. About conditions for recovering the metric structure of perpendicular planes from the single ground plane to image homography. In Proc. International Conference on Pattern Recognition, Barcelona, Spain, Sept. 2000, Vol.I.
[5] Kim T, Seo Y, Hong K. Physics-based 3D position analysis of a soccer ball from monocular image sequences. In Proc. International Conference on Computer Vision, Bombay, India, Jan. 1998.
[6] Liebowitz D, Zisserman A. Metric rectification for perspective images of planes. In Proc. IEEE International Conference on Computer Vision and Pattern Recognition, Santa Barbara, CA, June 1998.
[7] van den Heuvel F A. 3D reconstruction from a single image using geometric constraints. ISPRS Journal of Photogrammetry & Remote Sensing, 1998, 53(6).
[8] Wilczkowiak M, Boyer E, Sturm P. Camera calibration and 3D reconstruction from single images using parallelepipeds. In Proc. International Conference on Computer Vision, Vancouver, Canada, July 2001, Vol.I.
[9] Liebowitz D, Criminisi A, Zisserman A. Creating architectural models from images. In Proc. Eurographics, Milan, Italy, Sept. 1999.
[10] Reid I, Zisserman A. Goal-directed video metrology. In Proc. European Conference on Computer Vision, Cambridge, UK, April 1996, Vol.II.
[11] Hartley R, Zisserman A. Multiple View Geometry in Computer Vision. Cambridge University Press, 2000.
[12] Collins R T, Weiss R S. Vanishing point calculation as a statistical inference on the unit sphere. In Proc. International Conference on Computer Vision, Osaka, Japan, Dec. 1990.
[13] McLean G, Kotturi D. Vanishing point detection by line clustering. IEEE Trans. Pattern Analysis and Machine Intelligence, 1995, 17(11).
[14] van den Heuvel F A. Vanishing point detection for architectural photogrammetry. In International Archives of Photogrammetry and Remote Sensing, Hakodate, Japan, 1998.
[15] Sturmfels B. Algorithms in Invariant Theory. Springer, Wien.

Guang-Hui Wang received his B.S. and M.S. degrees in 1990 and 2000 from the Airforce University of Engineering and Jilin University of Technology respectively. He is now a Ph.D. candidate at the National Laboratory of Pattern Recognition, the Chinese Academy of Sciences. His research interests include computer vision, single view metrology, 3D reconstruction, autonomous mobile robot localization, intelligent control, etc.
Zhan-Yi Hu received his B.S. degree in automation from the North China University of Technology in 1985, and the Ph.D. degree (Docteur d'Etat) in computer vision from the University of Liege, Belgium, in January 1993. Since 1993, he has been with the Institute of Automation, the Chinese Academy of Sciences, where he is now a professor. His research interests are in robot vision, camera calibration, 3D reconstruction, active vision, and image-based modeling and rendering.

Fu-Chao Wu received his B.S. degree in mathematics from Anqing Teacher's College in 1982. Since 2001, he has been with the Institute of Automation, the Chinese Academy of Sciences, where he is a professor. His research interests are in computer vision, including camera calibration, 3D reconstruction, active vision, etc.


More information

Camera model and multiple view geometry

Camera model and multiple view geometry Chapter Camera model and multiple view geometry Before discussing how D information can be obtained from images it is important to know how images are formed First the camera model is introduced and then

More information

Pin Hole Cameras & Warp Functions

Pin Hole Cameras & Warp Functions Pin Hole Cameras & Warp Functions Instructor - Simon Lucey 16-423 - Designing Computer Vision Apps Today Pinhole Camera. Homogenous Coordinates. Planar Warp Functions. Example of SLAM for AR Taken from:

More information

A Method for Interactive 3D Reconstruction of Piecewise Planar Objects from Single Images

A Method for Interactive 3D Reconstruction of Piecewise Planar Objects from Single Images A Method for Interactive 3D Reconstruction of Piecewise Planar Objects from Single Images Peter F Sturm and Stephen J Maybank Computational Vision Group, Department of Computer Science The University of

More information

Stereo Image Rectification for Simple Panoramic Image Generation

Stereo Image Rectification for Simple Panoramic Image Generation Stereo Image Rectification for Simple Panoramic Image Generation Yun-Suk Kang and Yo-Sung Ho Gwangju Institute of Science and Technology (GIST) 261 Cheomdan-gwagiro, Buk-gu, Gwangju 500-712 Korea Email:{yunsuk,

More information

Today. Stereo (two view) reconstruction. Multiview geometry. Today. Multiview geometry. Computational Photography

Today. Stereo (two view) reconstruction. Multiview geometry. Today. Multiview geometry. Computational Photography Computational Photography Matthias Zwicker University of Bern Fall 2009 Today From 2D to 3D using multiple views Introduction Geometry of two views Stereo matching Other applications Multiview geometry

More information

CHAPTER 3. Single-view Geometry. 1. Consequences of Projection

CHAPTER 3. Single-view Geometry. 1. Consequences of Projection CHAPTER 3 Single-view Geometry When we open an eye or take a photograph, we see only a flattened, two-dimensional projection of the physical underlying scene. The consequences are numerous and startling.

More information

Visual Recognition: Image Formation

Visual Recognition: Image Formation Visual Recognition: Image Formation Raquel Urtasun TTI Chicago Jan 5, 2012 Raquel Urtasun (TTI-C) Visual Recognition Jan 5, 2012 1 / 61 Today s lecture... Fundamentals of image formation You should know

More information

Module 4F12: Computer Vision and Robotics Solutions to Examples Paper 2

Module 4F12: Computer Vision and Robotics Solutions to Examples Paper 2 Engineering Tripos Part IIB FOURTH YEAR Module 4F2: Computer Vision and Robotics Solutions to Examples Paper 2. Perspective projection and vanishing points (a) Consider a line in 3D space, defined in camera-centered

More information

CSE 252B: Computer Vision II

CSE 252B: Computer Vision II CSE 252B: Computer Vision II Lecturer: Serge Belongie Scribe: Sameer Agarwal LECTURE 1 Image Formation 1.1. The geometry of image formation We begin by considering the process of image formation when a

More information

Critical Motion Sequences for the Self-Calibration of Cameras and Stereo Systems with Variable Focal Length

Critical Motion Sequences for the Self-Calibration of Cameras and Stereo Systems with Variable Focal Length Critical Motion Sequences for the Self-Calibration of Cameras and Stereo Systems with Variable Focal Length Peter F Sturm Computational Vision Group, Department of Computer Science The University of Reading,

More information

Viewpoint Invariant Features from Single Images Using 3D Geometry

Viewpoint Invariant Features from Single Images Using 3D Geometry Viewpoint Invariant Features from Single Images Using 3D Geometry Yanpeng Cao and John McDonald Department of Computer Science National University of Ireland, Maynooth, Ireland {y.cao,johnmcd}@cs.nuim.ie

More information

An Overview of Matchmoving using Structure from Motion Methods

An Overview of Matchmoving using Structure from Motion Methods An Overview of Matchmoving using Structure from Motion Methods Kamyar Haji Allahverdi Pour Department of Computer Engineering Sharif University of Technology Tehran, Iran Email: allahverdi@ce.sharif.edu

More information

Perception and Action using Multilinear Forms

Perception and Action using Multilinear Forms Perception and Action using Multilinear Forms Anders Heyden, Gunnar Sparr, Kalle Åström Dept of Mathematics, Lund University Box 118, S-221 00 Lund, Sweden email: {heyden,gunnar,kalle}@maths.lth.se Abstract

More information

Efficient Object Shape Recovery via Slicing Planes

Efficient Object Shape Recovery via Slicing Planes Efficient Object Shape Recovery via Slicing Planes Po-Lun Lai and Alper Yilmaz Photogrammetric Computer Vision Lab, Ohio State University 233 Bolz Hall, 2036 Neil Ave., Columbus, OH 4320, USA http://dpl.ceegs.ohio-state.edu/

More information

Pin-hole Modelled Camera Calibration from a Single Image

Pin-hole Modelled Camera Calibration from a Single Image Pin-hole Modelled Camera Calibration from a Single Image Zhuo Wang University of Windsor wang112k@uwindsor.ca August 10, 2009 Camera calibration from a single image is of importance in computer vision.

More information

Week 2: Two-View Geometry. Padua Summer 08 Frank Dellaert

Week 2: Two-View Geometry. Padua Summer 08 Frank Dellaert Week 2: Two-View Geometry Padua Summer 08 Frank Dellaert Mosaicking Outline 2D Transformation Hierarchy RANSAC Triangulation of 3D Points Cameras Triangulation via SVD Automatic Correspondence Essential

More information

Self-calibration of a pair of stereo cameras in general position

Self-calibration of a pair of stereo cameras in general position Self-calibration of a pair of stereo cameras in general position Raúl Rojas Institut für Informatik Freie Universität Berlin Takustr. 9, 14195 Berlin, Germany Abstract. This paper shows that it is possible

More information

CS231A Course Notes 4: Stereo Systems and Structure from Motion

CS231A Course Notes 4: Stereo Systems and Structure from Motion CS231A Course Notes 4: Stereo Systems and Structure from Motion Kenji Hata and Silvio Savarese 1 Introduction In the previous notes, we covered how adding additional viewpoints of a scene can greatly enhance

More information

Index. 3D reconstruction, point algorithm, point algorithm, point algorithm, point algorithm, 263

Index. 3D reconstruction, point algorithm, point algorithm, point algorithm, point algorithm, 263 Index 3D reconstruction, 125 5+1-point algorithm, 284 5-point algorithm, 270 7-point algorithm, 265 8-point algorithm, 263 affine point, 45 affine transformation, 57 affine transformation group, 57 affine

More information

Circular Motion Geometry by Minimal 2 Points in 4 Images

Circular Motion Geometry by Minimal 2 Points in 4 Images Circular Motion Geometry by Minimal 2 Points in 4 Images Guang JIANG 1,3, Long QUAN 2, and Hung-tat TSUI 1 1 Dept. of Electronic Engineering, The Chinese University of Hong Kong, New Territory, Hong Kong

More information

DD2429 Computational Photography :00-19:00

DD2429 Computational Photography :00-19:00 . Examination: DD2429 Computational Photography 202-0-8 4:00-9:00 Each problem gives max 5 points. In order to pass you need about 0-5 points. You are allowed to use the lecture notes and standard list

More information

Auto-calibration. Computer Vision II CSE 252B

Auto-calibration. Computer Vision II CSE 252B Auto-calibration Computer Vision II CSE 252B 2D Affine Rectification Solve for planar projective transformation that maps line (back) to line at infinity Solve as a Householder matrix Euclidean Projective

More information

Homography Estimation from the Common Self-polar Triangle of Separate Ellipses

Homography Estimation from the Common Self-polar Triangle of Separate Ellipses Homography Estimation from the Common Self-polar Triangle of Separate Ellipses Haifei Huang 1,2, Hui Zhang 2, and Yiu-ming Cheung 1,2 1 Department of Computer Science, Hong Kong Baptist University 2 United

More information

3D Reconstruction from Two Views

3D Reconstruction from Two Views 3D Reconstruction from Two Views Huy Bui UIUC huybui1@illinois.edu Yiyi Huang UIUC huang85@illinois.edu Abstract In this project, we study a method to reconstruct a 3D scene from two views. First, we extract

More information

CS 664 Slides #9 Multi-Camera Geometry. Prof. Dan Huttenlocher Fall 2003

CS 664 Slides #9 Multi-Camera Geometry. Prof. Dan Huttenlocher Fall 2003 CS 664 Slides #9 Multi-Camera Geometry Prof. Dan Huttenlocher Fall 2003 Pinhole Camera Geometric model of camera projection Image plane I, which rays intersect Camera center C, through which all rays pass

More information

Structure from Motion. Introduction to Computer Vision CSE 152 Lecture 10

Structure from Motion. Introduction to Computer Vision CSE 152 Lecture 10 Structure from Motion CSE 152 Lecture 10 Announcements Homework 3 is due May 9, 11:59 PM Reading: Chapter 8: Structure from Motion Optional: Multiple View Geometry in Computer Vision, 2nd edition, Hartley

More information