Self-recalibration of a structured light system via plane-based homography


Pattern Recognition 40 (2007)
www.elsevier.com/locate/pr

Self-recalibration of a structured light system via plane-based homography

B. Zhang, Y.F. Li, Y.H. Wu
Department of Manufacturing Engineering and Engineering Management, City University of Hong Kong, Kowloon, Hong Kong

Received 22 July 2005; received in revised form 28 September 2005; accepted 3 April 2006

Corresponding author: Y.F. Li, e-mail address: meyfli@cityu.edu.hk

Abstract

Self-recalibration of the relative pose in a vision system plays a very important role in many applications, and much research has been conducted on this issue over the years. However, most existing methods require information on some points in general three-dimensional positions for the calibration, which is hard to satisfy in many practical applications. In this paper, we present a new method for the self-recalibration of a structured light system from a single image in the presence of a planar surface in the scene. Assuming that the intrinsic parameters of the camera and the projector are known from an initial calibration, we show that their relative position and orientation can be determined automatically from four projection correspondences between an image and a projection plane. In this method, analytical solutions are obtained from second-order equations with a single variable, and the optimization process is very fast. Another advantage is the enhanced robustness of the implementation via the use of over-constrained systems. Computer simulations and real-data experiments are carried out to validate our method.
© 2006 Pattern Recognition Society. Published by Elsevier Ltd. All rights reserved.

Keywords: Self-recalibration; Plane-based homography; Eigenvalue decomposition; Relative pose; Cheirality constraint; Structured light system

1 Introduction

Camera calibration and 3D reconstruction have been studied for many years but remain an active research topic in robot vision [1-3]. The problem is related to structure from motion, stereo vision, pose determination and so on, and its applications include object modeling, mobile robot navigation and localization, and environment building.

In general, the problem of camera calibration and 3D reconstruction can be approached in three different ways. When both the intrinsic and extrinsic parameters of a vision system are known, 3D reconstruction can be realized simply by the traditional triangulation method. When the parameters of the vision system are totally uncalibrated, the 3D structure can be reconstructed up to a projective transformation from two uncalibrated images, as concluded by Hartley [4] and Faugeras [5] in their early work on this problem. Some efforts have been made to extend these results to the Euclidean case by the stratification method [6,7]. In between these two extreme cases, the vision system may be assumed to have some of its intrinsic and extrinsic parameters calibrated while the others remain unknown. This is referred to by some authors as a semi-calibrated vision system. Usually, the intrinsic parameters are assumed to be known while the extrinsic parameters need to be calibrated [8-10]. Nister's recent work [9] presented a method to solve the relative pose problem using five non-coplanar points.

In the semi-calibrated case, some authors have noticed that the relative pose problem can also be solved from the correspondences between images of a scene plane. The computational efficiency of the pose problem is of critical importance in robotic applications. Planar surfaces are encountered frequently in robotic tasks, e.g. in the navigation of a mobile robot along a ground plane, or for a wall-climbing robot used in the cleaning, inspection and maintenance of buildings. Yet the traditional calibration methods, such as the eight-point and five-point algorithms, fail or give poor performance in planar or near-planar environments, since they require a pair of images of a three-dimensional scene. Therefore, methods using only planar information need to be explored. Hay [11] was the first to report the observation

that two planar surfaces undergoing different motions could give rise to the same image motion. Tsai [12] used the correspondences of at least four image points to determine the two interpretations of planar surfaces undergoing large motions, where a sixth-order polynomial in one variable was involved. A year later, Tsai [13] approached the same problem by computing the singular value decomposition of a 3 x 3 matrix containing eight pure parameters. Longuet-Higgins [14,15] showed that the three-dimensional interpretations were obtainable by diagonalizing the matrix, where the relative pose of the system and the normal vector of the planar surface could be obtained simultaneously from a second-order polynomial. Zhang [16] proposed a method for this problem based on a case-by-case analysis of different geometric situations, where as many as six cases were considered. Recently, Chen [17] also proposed a method for recalibrating a structured light system using planar information. However, there are two major differences between our work and that of [17]: (1) our work is based on the homography matrix, while the fundamental matrix is used in [17], and the minimum number of points required is 4 in our method but 6 in Chen's; (2) our method provides an analytic solution, while Chen's method does not, since a fourth-order polynomial system in three variables is involved. In summary, the existing methods can be divided into two categories: one needs to solve high-order equations [11-13,17], and the other needs to discuss many possible cases separately to obtain solutions from quadric equality constraints on the variables [14-16].

In this paper, we concentrate on the pose problem using planar information in a semi-calibrated vision system. Specifically, we develop a novel method for self-recalibrating the relative pose parameters using the plane-based homography. Here, self-recalibration refers to situations where the system has been initially calibrated but needs to be
calibrated again due to a changed relative pose. Unlike the previous methods, we take two steps to solve the pose problem: we first obtain the translation vector and then solve for the rotation matrix. The advantages are that not only can analytic solutions be obtained, but an over-constrained system is also constructed in each step to increase the robustness. Besides, the normal vector of the scene plane can be computed simply once the translation and rotation are obtained. To resolve the ambiguity of correspondence between feature points in environments with non-textured surfaces, we adopt a structured light system [18], referred to as an active vision system. Here, the projector can generate many feature points on the surface of the scene with a predefined light pattern. However, our method is not restricted to structured light systems; it can be adopted for passive stereo vision if a textured environment is encountered.

The remainder of this paper is organized as follows. Section 2 describes the calibration task. Section 3 presents the method for self-calibrating the relative pose of the vision system in detail. Section 4 gives computer simulations and real-image experiments. Finally, we conclude the paper in Section 5.

2 Calibration task

The structured light vision system here consists of a projector and a camera (Fig. 1). The projector is controlled by a computer to project a light pattern onto the scene. The light pattern is distorted by the surface of the scene, and these distortions are captured by the camera and used for calibration of the system and then reconstruction of the scene.

Throughout this paper, vectors and matrices are denoted by boldface letters and scalars by italic letters. The superscript T denotes the transpose of a vector or matrix. The letter I always denotes the identity matrix, and [.]_x represents the skew-symmetric matrix of a vector.

For the camera and the projector, we define right-handed coordinate systems with the origin at their optical centers,
respectively. Let R and t be the rotation matrix and translation vector from the camera to the projector; the world coordinate system coincides with the camera coordinate system. In this paper, the projector is regarded as a pseudo camera, and the camera is of the pinhole model. The intrinsic parameters of the projector and the camera are then given by the two matrices

K_p = [f_u'  s'  u_0'; 0  f_v'  v_0'; 0  0  1],  (1)

K_c = [f_u  s  u_0; 0  f_v  v_0; 0  0  1],  (2)

Fig. 1. Geometrical relations in the vision system: the camera frame (x_c, y_c, z_c) and the projector frame (x_p, y_p, z_p), related by [R, t], both viewing the object illuminated by the light pattern.
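To make the two-view geometry of Fig. 1 concrete, the following is a minimal numerical sketch of pinhole projection and the linear triangulation used in this section (Eqs. (3)-(6) below). It is not the authors' code; all intrinsic and pose values are made up for illustration.

```python
import numpy as np

# Hypothetical intrinsics for the camera (K_c) and the projector (K_p).
K_c = np.array([[800.0,   0.0, 320.0],
                [  0.0, 790.0, 240.0],
                [  0.0,   0.0,   1.0]])
K_p = np.array([[900.0,   0.0, 512.0],
                [  0.0, 910.0, 384.0],
                [  0.0,   0.0,   1.0]])

# An assumed relative pose [R, t] from the camera to the projector
# (rotation about the y-axis by 10 degrees).
th = np.deg2rad(10.0)
R = np.array([[ np.cos(th), 0.0, np.sin(th)],
              [        0.0, 1.0,        0.0],
              [-np.sin(th), 0.0, np.cos(th)]])
t = np.array([0.2, 0.05, 0.1])

# Project a 3D point M (camera frame) into both views, Eqs. (3)-(4).
M = np.array([0.3, -0.2, 5.0])
m_c = K_c @ M
m_c = m_c[:2] / m_c[2]
m_p = K_p @ (R @ M + t)
m_p = m_p[:2] / m_p[2]

# Linear triangulation: stack the four constraints A M = a of Eq. (5)
# and solve in the least-squares sense, Eq. (6).
P_c = K_c @ np.hstack([np.eye(3), np.zeros((3, 1))])
P_p = K_p @ np.hstack([R, t[:, None]])
rows, rhs = [], []
for (u, v), P in [(m_c, P_c), (m_p, P_p)]:
    rows.append(u * P[2, :3] - P[0, :3]); rhs.append(P[0, 3] - u * P[2, 3])
    rows.append(v * P[2, :3] - P[1, :3]); rhs.append(P[1, 3] - v * P[2, 3])
M_hat = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)[0]
print(np.allclose(M_hat, M))  # True: the point is recovered exactly
```

With noise-free correspondences the four equations are consistent and the least-squares solution returns the original point; with noisy data the same formula gives the best fit in the least-squares sense.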

where f_u and f_v represent the focal lengths of the camera in pixels along the u- and v-axes, respectively, (u_0, v_0)^T is the principal point, and s is a skew factor of the camera representing the cosine of the angle between the u- and v-axes. Similar (primed) notations are defined for the projector.

For an arbitrary 3D point M = [X Y Z]^T, its images in the camera and the projector can be expressed as

m_c = α K_c M,  (3)
m_p = β K_p (R M + t),  (4)

where m_c = [u v 1]^T and m_p = [u' v' 1]^T are the projection points on the image plane and the projector plane, and α and β are nonzero scale factors. Let k_1, k_2, k_3 denote the rows of K_p R and let K_p t = [k̄_1, k̄_2, k̄_3]^T. Then from (3) and (4) we have four linear equations in the coordinates of M:

A M = a,  (5)

where

A = [u' k_3 - k_1; v' k_3 - k_2; f_u  s  u_0 - u; 0  f_v  v_0 - v],
a = [k̄_1 - u' k̄_3; k̄_2 - v' k̄_3; 0; 0].

According to (5), the 3D point on the object surface can be determined in the least-squares sense by

M = (A^T A)^{-1} A^T a.  (6)

This formula describes the basic principle of 3D reconstruction by triangulation: once the intrinsic and extrinsic parameters of the camera and the projector are obtained, the 3D coordinates of object points can be computed from (5) and (6).

The whole calibration of the structured light system consists of two parts. The first part concerns the calibration of the intrinsic parameters (such as focal lengths and optical centers) of the camera and the projector, called static calibration; it needs to be performed only once. The second part deals with the calibration of the extrinsic parameters of the relative pose, in which there are six unknowns: three for the 3-axis rotation and three for the three-dimensional translation (Fig. 1). Determining the relative pose between the camera and the projector is the focus of this paper.

3 Self-recalibration of the relative pose

Before giving the method for solving the relative pose, two propositions are established below.

Proposition 1. Let g be any 3 x 1 nonzero vector and G a 3 x 3 nonzero symmetric matrix. If [g]_x G [g]_x = 0, then the
determinant of G is zero.

Proof. Let g = [g_1 g_2 g_3]^T and

G = [G_11 G_12 G_13; G_12 G_22 G_23; G_13 G_23 G_33].

Since g is nonzero, without loss of generality let g_1 ≠ 0. Expanding [g]_x G [g]_x = 0, we have

g_1^2 G_22 + g_2^2 G_11 - 2 g_1 g_2 G_12 = 0,
g_1^2 G_33 + g_3^2 G_11 - 2 g_1 g_3 G_13 = 0,
g_1^2 G_23 + g_2 g_3 G_11 - g_1 g_2 G_13 - g_1 g_3 G_12 = 0,

from which G_22, G_33 and G_23 are given as

G_22 = (2 g_1 g_2 G_12 - g_2^2 G_11) / g_1^2,
G_33 = (2 g_1 g_3 G_13 - g_3^2 G_11) / g_1^2,
G_23 = (g_1 g_2 G_13 + g_1 g_3 G_12 - g_2 g_3 G_11) / g_1^2.  (7)

Substituting (7) into the expression for the determinant of G yields det(G) = 0. The proposition can be proved similarly if g_2 ≠ 0 or g_3 ≠ 0.

Proposition 2. Let f and g be any two 3 x 1 nonzero vectors. The three eigenvalues of the matrix I + f g^T + g f^T satisfy either (a) or (b):

(a) the three eigenvalues are distinct from each other, and the middle one is 1;
(b) two of the eigenvalues are both 1 while the third is not 1.

Proof. Denote I + f g^T + g f^T by Q, and let f = [f_1 f_2 f_3]^T and g = [g_1 g_2 g_3]^T. From the characteristic equation of Q we have

det(I + f g^T + g f^T - δ I) = 0.  (8)

Expanding (8) gives

(1 - δ)((1 - δ)^2 + p (1 - δ) + q) = 0,  (9)

where

p = 2 (f_1 g_1 + f_2 g_2 + f_3 g_3),
q = -(f_1 g_2 - f_2 g_1)^2 - (f_1 g_3 - f_3 g_1)^2 - (f_2 g_3 - f_3 g_2)^2.

Therefore, one of the eigenvalues of Q is 1, and the other two are the roots of (1 - δ)^2 + p (1 - δ) + q = 0. Letting γ = 1 - δ turns this equation into

γ^2 + p γ + q = 0.  (10)

From the expression for p, we know that p = 0 means that f is orthogonal to g. From q = 0, we have f_1/g_1 = f_2/g_2 = f_3/g_3, which indicates that f is parallel to g. Therefore, p and q cannot be zero simultaneously, and there are in total the following two cases:

(a) If q ≠ 0, then q < 0. Thus the two solutions for γ have different signs by (10), so one value of δ is larger than 1 and the other is smaller than 1.
(b) If q = 0, then p ≠ 0. By (10), we obtain γ = 0 and γ = -p ≠ 0, from which we get δ = 1, δ = 1 and δ = 1 + p ≠ 1.

In the following, we introduce our method in detail. Note that the camera and the projector cannot be located at the same position, so the translation between them is not zero. Also, the camera and the projector lie on the same side of the scene plane.

3.1 Computation of the plane-based homography

Assume there is a plane π in the scene whose images in the camera and the projector are I_c and I_p, respectively. Let M be an arbitrary point on the plane, and let m_c and m_p be its corresponding projections on the image plane and the projector plane. According to projective geometry, there is a 3 x 3 transformation matrix H between I_c and I_p satisfying

m_p = σ H m_c,  (11)

where σ is a nonzero scale factor. In general, this matrix H is called the plane-based homography. Let

H = [h_1 h_2 h_3; h_4 h_5 h_6; h_7 h_8 1],
h = (h_1, h_2, h_3, h_4, h_5, h_6, h_7, h_8)^T.

From (11), each pair of corresponding points gives two constraints on the homography:

(u, v, 1, 0, 0, 0, -u'u, -u'v) h = u',
(0, 0, 0, u, v, 1, -v'u, -v'v) h = v'.  (12)

So given n (n ≥ 4) pairs of corresponding points of the scene plane, we have the following 2n equations:

B h = b,  (13)

where

B = [u_1 v_1 1 0 0 0 -u_1'u_1 -u_1'v_1; 0 0 0 u_1 v_1 1 -v_1'u_1 -v_1'v_1; ...; u_n v_n 1 0 0 0 -u_n'u_n -u_n'v_n; 0 0 0 u_n v_n 1 -v_n'u_n -v_n'v_n],
b = (u_1', v_1', ..., u_n', v_n')^T.

The homography can then be determined up to a scale factor in the least-squares sense by

h = (B^T B)^{-1} B^T b.  (14)

3.2 Constraints from the homography on the translation

Assume that the equation of the plane π is n^T M = 1, where n is the normal vector of the plane. Since n^T M = 1 for every point on π, Eq. (4) gives

m_p = β K_p (R + t n^T) M.  (15)

Combining (3) and (15) produces

m_p = (β/α) K_p (R + t n^T) K_c^{-1} m_c.  (16)

By (11) and (16), the explicit formula for the homography is

λ H = K_p (R + t n^T) K_c^{-1},  (17)

where λ is a scalar. The equivalent form of (17) is

λ H̃ = λ K_p^{-1} H K_c = R + t n^T,  (18)

where H̃ = K_p^{-1} H K_c is the calibrated homography. Since H, K_p and K_c are known, H̃ is known. Let the translation be t = [t_1 t_2 t_3]^T. Its skew-symmetric matrix is

[t]_x = [0 -t_3 t_2; t_3 0 -t_1; -t_2 t_1 0].

The matrix [t]_x has some useful properties, for example [t]_x t = 0 and [t]_x^T = -[t]_x. Multiplying both sides of (18) by [t]_x, we have

λ [t]_x H̃ = [t]_x R.  (19)

The right side of (19) is the well-known essential matrix, so this equation reveals the relationship between the calibrated homography and the essential matrix. As R is a rotation matrix, R R^T = I. From (19) we have

λ^2 [t]_x H̃ H̃^T [t]_x = [t]_x [t]_x.  (20)

Rearranging (20) gives

[t]_x W [t]_x = 0,  (21)

where W = λ^2 H̃ H̃^T - I is symmetric and λ is an unknown scalar.
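The algebra above can be checked numerically. The sketch below (with made-up R, t and n, and λ = 1 by construction) verifies that W = λ^2 H̃ H̃^T - I satisfies the constraint (21), and that the middle eigenvalue of λ^2 H̃ H̃^T equals 1, as Proposition 2 predicts for the generic case.

```python
import numpy as np

def skew(v):
    # [v]_x: the skew-symmetric matrix with skew(v) @ w == np.cross(v, w)
    return np.array([[ 0.0, -v[2],  v[1]],
                     [ v[2],  0.0, -v[0]],
                     [-v[1],  v[0],  0.0]])

rng = np.random.default_rng(0)

# Made-up ground truth: a rotation R (from the QR factorization of a random
# matrix, sign-corrected so that det(R) = +1), a translation t, a normal n.
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
R = Q * np.sign(np.linalg.det(Q))
t = rng.standard_normal(3)
n = rng.standard_normal(3)

# Calibrated homography of Eq. (18), here with lambda = 1.
Ht = R + np.outer(t, n)

# Constraint (21): [t]_x (Ht Ht^T - I) [t]_x = 0.
W = Ht @ Ht.T - np.eye(3)
print(np.allclose(skew(t) @ W @ skew(t), 0.0, atol=1e-9))  # True

# Proposition 2: Ht Ht^T = I + s t^T + t s^T with s = R n + (n.n/2) t,
# so generically its sorted eigenvalues are (d1, 1, d3) with d1 < 1 < d3.
eig = np.sort(np.linalg.eigvalsh(Ht @ Ht.T))
print(np.isclose(eig[1], 1.0))  # True: the middle eigenvalue is 1
```

The check relies only on [t]_x t = 0, which kills the rank-one terms s t^T and t s^T when sandwiched between the two skew matrices.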

3.3 Determining the scale factor λ

Since W is symmetric, by Proposition 1 we have

det(W) = det(λ^2 H̃ H̃^T - I) = 0.  (22)

Eq. (22) indicates that λ^2 is the inverse of one of the three eigenvalues of the matrix H̃ H̃^T. Next we discuss, using Proposition 2, which eigenvalue satisfies (20). From (18), we have

λ^2 H̃ H̃^T = (R + t n^T)(R + t n^T)^T.  (23)

This can be rewritten equivalently as

λ^2 H̃ H̃^T = I + R n t^T + t n^T R^T + (n^T n) t t^T
           = I + (R n + (n^T n / 2) t) t^T + t (R n + (n^T n / 2) t)^T
           = I + s t^T + t s^T,  (24)

where s = R n + (n^T n / 2) t. Because the camera and the projector lie on the same side of the scene plane and are located at different positions, both t and s are nonzero. Thus, according to Proposition 2, λ^2 H̃ H̃^T = I + s t^T + t s^T has an eigenvalue equal to 1 that either lies between the other two (distinct) eigenvalues or is the eigenvalue of multiplicity two. Since the eigenvalues of H̃ H̃^T are 1/λ^2 times those of λ^2 H̃ H̃^T, we have the following conclusions:

(a) if the three eigenvalues of H̃ H̃^T are distinct, then 1/λ^2 is the eigenvalue that lies between the other two;
(b) if one eigenvalue of H̃ H̃^T has multiplicity two, then 1/λ^2 is this eigenvalue.

By these conclusions, we can determine the scalar λ that satisfies (20).

3.4 Determining the translation vector

After λ is solved, (21) provides six homogeneous constraints on the translation vector. We assume t_3 = 1. The constraints from (21) are then

w_33 t_1^2 - 2 w_13 t_1 + w_11 = 0,
w_33 t_2^2 - 2 w_23 t_2 + w_22 = 0,
w_33 t_1 t_2 - w_23 t_1 - w_13 t_2 + w_12 = 0,
w_13 t_2^2 - w_23 t_1 t_2 + w_22 t_1 - w_12 t_2 = 0,
w_23 t_1^2 - w_13 t_1 t_2 - w_12 t_1 + w_11 t_2 = 0,
w_22 t_1^2 + w_11 t_2^2 - 2 w_12 t_1 t_2 = 0,  (25)

where w_ij denotes the (i,j)th element of W. Clearly, t_1 and t_2 can be obtained analytically from the first two equations in (25); checking which roots also satisfy the last four equations gives
two solutions in general. In the case of noisy data, all six equations are used in an optimization.

3.5 Determining the rotation matrix

Since λ, t and H̃ have been determined in the previous sections, the left side of (19), C = λ [t]_x H̃, is known. Equating the first and second columns of both sides of (19) (with t_3 = 1), we have

-r_21 + t_2 r_31 = c_11,
r_11 - t_1 r_31 = c_21,
-t_2 r_11 + t_1 r_21 = c_31,
r_11^2 + r_21^2 + r_31^2 = 1,  (26)

-r_22 + t_2 r_32 = c_12,
r_12 - t_1 r_32 = c_22,
-t_2 r_12 + t_1 r_22 = c_32,
r_12^2 + r_22^2 + r_32^2 = 1,  (27)

where r_ij and c_ij denote the (i,j)th elements of R and C, respectively. From (26) and (27), the first and second columns of R can be determined analytically; the third column is then given by the cross product of the first two.

3.6 Implementation procedure

Assuming that the intrinsic parameters of the camera and the projector have been calibrated in the static calibration stage, when the configuration of the system is changed, the procedure for self-recalibration of the structured light vision system is as follows:

Step 1: Compute the homography between the camera plane and the projector plane according to (14).
Step 2: Establish the constraints (21) on t from H̃ and determine the scale factor λ by the method of Section 3.3.
Step 3: Calculate the translation vector from (25).
Step 4: Calculate the rotation matrix from (26) and (27).
Step 5: Optionally, once the relative pose has been obtained, improve the results by bundle adjustment.

Remark. Once the rotation matrix, the translation vector, H̃ and λ are obtained, the normal n of the scene plane can be determined linearly from

λ H̃ - R = t n^T.  (28)

The normal vector cannot be determined this way if the translation is null, i.e. t = 0. Fortunately, this case does not occur in our structured light system. When it does happen in a passive vision system, we can first reconstruct some points on the plane using (6) and then compute the normal vector from these points.
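The steps above can be sketched in code. The listing below is a simplified illustration on ideal, noise-free synthetic data, not the authors' implementation: it assumes t_3 ≠ 0 (so t can be normalized to t_3 = 1), handles the sign ambiguities by exhaustively scoring the finitely many candidates against the rank-one residual of Eq. (28) and an orthonormality check, and leaves the remaining twisted-pair/cheirality disambiguation [21] aside. All numeric values are made up.

```python
import numpy as np

def skew(v):
    return np.array([[0.0, -v[2], v[1]], [v[2], 0.0, -v[0]], [-v[1], v[0], 0.0]])

def recalibrate(mc, mp, K_c, K_p):
    # Step 1: homography by least squares, Eqs. (12)-(14).
    rows, rhs = [], []
    for (u, v), (up, vp) in zip(mc, mp):
        rows.append([u, v, 1, 0, 0, 0, -up * u, -up * v]); rhs.append(up)
        rows.append([0, 0, 0, u, v, 1, -vp * u, -vp * v]); rhs.append(vp)
    h = np.linalg.lstsq(np.asarray(rows, float), np.asarray(rhs, float), rcond=None)[0]
    H = np.append(h, 1.0).reshape(3, 3)

    # Step 2: calibrated homography, Eq. (18); 1/lambda^2 is the middle
    # eigenvalue of Ht Ht^T in the generic case (a) of Section 3.3.
    Hc = np.linalg.inv(K_p) @ H @ K_c
    lam0 = 1.0 / np.sqrt(np.sort(np.linalg.eigvalsh(Hc @ Hc.T))[1])
    best = None
    for lam in (lam0, -lam0):                      # the sign of lambda is unknown
        Ht = lam * Hc                              # candidate for R + t n^T
        W = Ht @ Ht.T - np.eye(3)
        # Step 3: t (normalized so that t3 = 1) from the quadratics of Eq. (25).
        for t1 in np.roots([W[2, 2], -2 * W[0, 2], W[0, 0]]):
            for t2 in np.roots([W[2, 2], -2 * W[1, 2], W[1, 1]]):
                if abs(t1.imag) > 1e-9 or abs(t2.imag) > 1e-9:
                    continue
                t = np.array([t1.real, t2.real, 1.0])
                # Step 4: rotation columns from C = lambda [t]_x Ht = [t]_x R,
                # Eqs. (26)-(27): each column is the minimum-norm solution of a
                # singular linear system plus a multiple of t fixed (up to sign)
                # by the unit-norm condition.
                C = skew(t) @ Ht
                opts = []
                for j in (0, 1):
                    x0 = np.linalg.lstsq(skew(t), C[:, j], rcond=None)[0]
                    a = np.sqrt(max(0.0, 1.0 - x0 @ x0)) / np.sqrt(t @ t)
                    opts.append([x0 + s * a * t for s in (1.0, -1.0)])
                for r1 in opts[0]:
                    for r2 in opts[1]:
                        R = np.column_stack([r1, r2, np.cross(r1, r2)])
                        E = Ht - R                 # should equal t n^T, Eq. (28)
                        res = (np.linalg.norm(E - np.outer(t, t @ E) / (t @ t))
                               + np.linalg.norm(R.T @ R - np.eye(3)))
                        if best is None or res < best[0]:
                            best = (res, R, t, E.T @ t / (t @ t))
    _, R, t, n = best
    return R, t, n

# Made-up ground truth to exercise the procedure.
K_c = np.array([[800.0, 0, 320], [0, 790, 240], [0, 0, 1.0]])
K_p = np.array([[900.0, 0, 512], [0, 910, 384], [0, 0, 1.0]])
th = np.deg2rad(12.0)
R_true = np.array([[np.cos(th), 0, np.sin(th)], [0, 1, 0], [-np.sin(th), 0, np.cos(th)]])
t_true = np.array([0.4, 0.1, 0.2])
n_true = np.array([0.1, 0.2, 0.25])     # scene plane n^T M = 1

pts = []
for X, Y in [(-0.5, -0.4), (0.6, -0.3), (-0.4, 0.5), (0.5, 0.6), (0.0, 0.1)]:
    Z = (1 - n_true[0] * X - n_true[1] * Y) / n_true[2]
    pts.append([X, Y, Z])
pts = np.asarray(pts)
mc = (K_c @ pts.T).T
mc = mc[:, :2] / mc[:, 2:]
cam_p = pts @ R_true.T + t_true
mp = (K_p @ cam_p.T).T
mp = mp[:, :2] / mp[:, 2:]

R_est, t_est, n_est = recalibrate(mc, mp, K_c, K_p)

# Any exact decomposition reproduces the plane-induced mapping mc -> mp:
G = K_p @ (R_est + np.outer(t_est, n_est)) @ np.linalg.inv(K_c)
proj = np.column_stack([mc, np.ones(len(mc))]) @ G.T
proj = proj[:, :2] / proj[:, 2:]
print(np.allclose(proj, mp, atol=1e-4))  # True
```

Note that t is recovered only up to the scale traded against n (the product t n^T is what the homography fixes), which is why the normalization t_3 = 1 is harmless here.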

4 Experiments

4.1 Numerical simulation

4.1.1 Ambiguity of the solutions

Regarding the ambiguity of the solutions, Tsai [12,13] and Longuet-Higgins [14,15] showed how the two possible interpretations of the camera motion can be determined in closed form from the correspondences in two images of a planar surface. Negahdaripour [19,20] determined the relationship between the two solutions in closed form; the derivation also showed the explicit relationship between the ambiguity associated with planar scenes and that associated with curved surfaces. Knowledge of the explicit relationship between the two interpretations permits the calculation of one solution directly from the other. In this experiment, we show by simulation that the results from our method coincide with their conclusions.

Here, we assume that the intrinsic parameters of both the camera and the projector have been calibrated in the static calibration stage. For each repeated experiment, the translation vector and the three rotation angles of the rotation matrix are selected randomly, and the normal vector of the plane surface is also selected randomly, in order to cover all cases met in practice. Here, 1000 random simulations were performed to reveal the ambiguity of the solutions. It should be noted that multiple solutions are obtained by simply solving the given equations and discarding the complex ones. To determine which choice corresponds to the true configuration, the cheirality constraint [21] (the constraint that the scene points should lie in front of the cameras) is imposed. Table 1 shows the distribution of the number of solutions after imposing the cheirality constraint in these simulations, and Fig. 2 plots the distribution from Table 1. From these data, we can see that there are only one or two solutions in most cases, with a probability of about 95.65 percent. Based on further observation, only one of them corresponds to the true configuration. The other one
corresponds to the reflection of the true configuration. Here a twisted pair is treated as two solutions; the convention as to whether a twisted pair counts as one solution or two varies from author to author (see p. 21 in [22]). Therefore, the experimental results from our method coincide with their conclusions.

Table 1. Distribution of the number of solutions (number of solutions vs. frequency).

Fig. 2. Distribution of the number of solutions from Table 1.

4.1.2 Robustness of self-calibration

In this test, we assume that the rotation angles and the translation vector between the projector's and the camera's coordinate systems are r = (π/8, π/5, π/4) and t = (8, 4, 1), respectively. There is a plane in the scene whose equation is Z = X + 2Y + 1. The world coordinate system coincides with that of the camera. The projector projects a virtual grid light pattern onto the scene while the camera captures an image of the illuminated scene. By randomly selecting n = 5 illuminated points on the scene plane, the correspondences between the camera plane and the projector plane are first identified and used for calculating the homography matrix according to (14). The rotation matrix and translation vector are then obtained using the procedure given in Section 3.6.

To test the robustness of the procedure, the residual error, i.e. the discrepancy between the theoretical value and the computed result, is evaluated. Different levels of Gaussian noise N(0, σ^2) were added to the projection points, with the noise level varied from 0 to 1 pixel. For each noise level, we performed 50 trials and calculated the average residual errors of the rotation angles and translation vector. Since there may exist multiple solutions, the minimum residual errors for the rotation angles and the translation vector are defined as

min_i (||r_i - r|| / ||r||) and min_i (||t_i - t|| / ||t||),  (29)

where r_i and t_i are the calculated solutions. For comparison, we tested Higgins's method [15] under the same conditions, since this
method also gives analytic solutions for the pose problem by analyzing the plane-based homography between two perspective views. The mean value and standard deviation of the minimum residual errors over the trials are computed. The results for the rotation angles and the translation vector are shown in Figs. 3 and 4, where data1 and data2 represent the results from our method and Higgins's method, respectively. From these figures, we can see that both methods work well when the noise is less than 0.5 pixel. When the noise level increases, the residual

errors increase too, and the curves become somewhat bumpy. This can be improved if the points used are chosen more carefully (e.g. when the points are more evenly distributed). Furthermore, to reduce the bumpy effect, bundle adjustment can be used to refine the results obtained from the analytic solution.

Fig. 3. Rotation angles vs. noise levels: (a) mean values of the residual errors; (b) standard deviations of the residual errors.

Fig. 4. Translation vectors vs. noise levels: (a) mean values of the residual errors; (b) standard deviations of the residual errors.

On the whole, our method outperforms Higgins's method in the presence of noisy data. For example, when the noise level is 0.5 pixel, the average error residuals of our method are 1.41 ± 0.86 percent and 2.41 ± 1.6 percent for the rotation angles and the translation vector, while the corresponding results from Higgins's method are 2.19 ± 3.11 percent and 3.46 ± 2.27 percent. In our method, an over-constrained system can easily be constructed, e.g. six equations are used for solving the two variables in (25), to increase the robustness. Hence the standard deviation for the translation vector is smaller in our method, as seen in Fig. 4b. We notice that the estimates of the translation vector are somewhat more sensitive to noise than those of the rotation, an observation already discussed by other authors, e.g. [23]. However, our method remains robust for the translation vector since an over-constrained system is employed.

In the simulations, we also compared our method with the classical 8-point algorithm [24], in which the fundamental matrix is computed and then decomposed directly into the rotation matrix and translation vector. We observed that
the residual errors from our method and from the 8-point algorithm are in a similar range, which validates our method. In this paper, however, our motivation is to recalibrate the vision system using planar information, in which case the 8-point algorithm fails, as it requires points in general 3D positions. Furthermore, the minimum number of points required by the 8-point algorithm is 8, while 4 points are sufficient for our method.

4.2 Experiments with real image data

In the real-data experiments, the system setup consists of two major components, a PULNIX TMC-9700 CCD camera and a PLUS V131 DLP projector (Fig. 5a). The relative pose

between the camera and the projector can be changed freely. When this occurs, the system performs self-recalibration (instead of static calibration with manual operations) using the procedure proposed in Section 3.6. The projector is controlled by a computer to generate an illumination pattern (Fig. 5b). The illumination pattern consists of many color-encoded grid blocks, which can be used to uniquely identify the correspondences between the projector plane and the image plane [25]. Seven different colors, i.e. red, green, blue, white, cyan, magenta and yellow, were used. According to [25], a large code matrix can be generated; considering the requirements of our vision system, a sub-matrix in which every 3 x 3 neighborhood is unique was selected. The final light pattern is shown in Fig. 5b.

Fig. 5. The configuration of our structured light system: (a) layout of the experimental system; (b) color-encoded light pattern.

The intrinsic parameters of the camera and the projector were first calibrated with a planar pattern using Zhang's method [26]. Theoretically, two positions of the pattern are sufficient for this calibration task [26]; in our experiments, we placed the pattern at ten different positions, as illustrated in Fig. 6, to increase the calibration accuracy. The results are shown in Table 2. This process needs to be performed only once, in the static stage of our method.

Fig. 6. Ten positions of the planar pattern for initial calibration (shown in projector-centered extrinsic coordinates).

Table 2. Intrinsic parameters (f_u, f_v, u_0, v_0) obtained from static calibration for the camera and the projector.

When calibrating the extrinsic parameters, four or more point correspondences from a planar surface in the scene were chosen between the projector plane and the camera image. The linear system (13) was then constructed for computing the homography, giving the result h = [144, 87, 4995, 791, 32, 39811, 1, 2]. Steps 2-4 of the procedure were then carried out for the relative
pose. The experimental results for the rotation matrix and translation vector were R = [539, 9978, 378, 8318, 239, 5545, 5524, 613, 8313] and t = [148, 368, 1], respectively. After the system had been self-recalibrated, we performed 3D object reconstruction using (6) to test the self-recalibration results qualitatively. Fig. 7a gives an image of a fan model, which is shown illuminated by the light pattern in Fig. 7b. Figs. 7c and d show the polygonized results of the reconstructed point clouds and the CAD model built from them. After the first experiment, we adjusted the relative pose between the camera and the projector to test the system further. The self-recalibration procedure was performed again to calibrate the new rotation matrix and translation vector, and a phone handle was then used for 3D reconstruction. The experimental results are shown in Fig. 8. It should be noted that in the current experiments we made no special effort in the image processing or the initial calibration, as we aim mainly at verifying the validity of our method. With more sophisticated image processing and initial calibration, better results can be expected with the proposed method.
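The correspondence identification above relies on the property that every 3 x 3 neighborhood of the code matrix occurs only once [25], so a single observed window pins down its grid position. A small sketch of such a uniqueness check (the matrices here are toy stand-ins, not the pattern actually used):

```python
import numpy as np

def windows_unique(M, k=3):
    # True iff every k-by-k window of the code matrix M occurs exactly once,
    # so observing a single window identifies its position in the grid.
    seen = set()
    for i in range(M.shape[0] - k + 1):
        for j in range(M.shape[1] - k + 1):
            w = tuple(M[i:i + k, j:j + k].ravel())
            if w in seen:
                return False
            seen.add(w)
    return True

# A constant pattern clearly fails the requirement:
print(windows_unique(np.zeros((6, 8), dtype=int)))  # False

# A candidate 7-symbol matrix (one symbol per projected color) would be
# screened the same way before being used; this random draw is illustrative:
candidate = np.random.default_rng(1).integers(0, 7, size=(10, 12))
print(windows_unique(candidate))
```

A check of this kind is cheap (one pass over all windows with a hash set), which makes it easy to screen candidate sub-matrices of a larger code matrix.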

Our assumption of a planar surface can be satisfied in many practical applications. In fact, planar or near-planar scenes are encountered frequently, e.g. the roadway or ground plane in mobile robot navigation, and the walls or ceilings for a climbing robot during the cleaning, inspection and maintenance of a building. The traditional methods fail or give poor performance in such scenes since they require a pair of images of a three-dimensional scene; in these cases, our method provides a good solution.

Acknowledgement

The work described in this paper was fully supported by a grant from the Research Grants Council of Hong Kong [Project no. CityU 1206/04E].

Fig. 7. An experiment on the fan model: (a) the fan model used for the experiment; (b) image of the model illuminated by the light pattern; (c) polygonized results of the reconstructed point clouds; (d) CAD model of the reconstructed result.

Fig. 8. Another experiment, on a phone handle: (a) image of the phone handle illuminated by the light pattern; (b) polygonized results of the reconstructed point clouds; (c) CAD model of the reconstructed results.

5 Conclusions

We have presented a new method for the self-recalibration of a color-encoded structured light system. When the relative pose between the camera and the projector is changed, the system recalibrates itself automatically, allowing immediate 3D reconstruction to follow. The method can be used in dynamic as well as static applications since only a single image is needed. The experimental results show that the method is conceptually simple and can be implemented efficiently with acceptable results. When the scene is textureless, the projector produces many feature points on the scene's surface via the light pattern; these feature points enable the computation of the homography matrix for use in the on-line calibration by our method. Our method is not restricted to structured light systems; it can be adopted for passive stereo vision if a textured environment is
present.

References

[1] T.S. Huang, A.N. Netravali, Motion and structure from feature correspondences: a review, Proc. IEEE 82 (2) (1994).
[2] Z. Zhang, Q.-T. Luong, O. Faugeras, Motion of an uncalibrated stereo rig: self-calibration and metric reconstruction, IEEE Trans. Robot. Autom. 12 (1) (1996).
[3] A. Zomet, L. Wolf, A. Shashua, Omni-rig: linear self-recalibration of a rig with varying internal and external parameters, in: Proceedings of the 8th IEEE International Conference on Computer Vision, vol. 1, 2001.
[4] R. Hartley, R. Gupta, T. Chang, Stereo from uncalibrated cameras, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Urbana, IL, 1992.
[5] O. Faugeras, What can be seen in three dimensions with an uncalibrated stereo rig?, in: Proceedings of the European Conference on Computer Vision, Santa Margherita Ligure, 1992.
[6] T. Vieville, C. Zeller, L. Robert, Using collineations to compute motion and structure in an uncalibrated image sequence, Int. J. Comput. Vision 20 (3) (1996).
[7] M. Pollefeys, L. Van Gool, A stratified approach to metric self-calibration, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Puerto Rico, 1997.
[8] H.C. Longuet-Higgins, A computer algorithm for reconstructing a scene from two projections, Nature 293 (1981).
[9] D. Nister, An efficient solution to the five-point relative pose problem, IEEE Trans. Pattern Anal. Mach. Intell. 26 (6) (2004).
[10] J. Philip, Critical point configurations of the 5-, 6-, 7-, and 8-point algorithms for relative orientation, TRITA-MAT-1998-MA-13, February 1998.
[11] J.C. Hay, Optical motion and space perception: an extension of Gibson's analysis, Psychol. Rev. 73 (6) (1966).
[12] R. Tsai, T. Huang, Estimating three-dimensional motion parameters of a rigid planar patch, IEEE Trans. Acoust. Speech Signal Process. ASSP-29 (1981).
[13] R. Tsai, T. Huang, W. Zhu, Estimating three-dimensional motion parameters of a rigid planar patch, II: singular value decomposition, IEEE Trans. Acoust. Speech Signal Process. ASSP-30 (1982).
[14] H.C. Longuet-Higgins, The visual
ambiguity of a moving plane, Proc Roy Soc London Series B 223 (1231) (1984) [15] HC Longuet-Higgins, The reconstruction of a plane surface from two perspective projections, Proc Roy Soc London Series B 227 (1249) (1986)
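As a point of reference for the homography computation mentioned in the conclusions, the sketch below shows the standard direct linear transform (DLT) for estimating a plane-induced homography from four (or more) point correspondences, such as those between projected pattern features and their camera-image observations. This is a generic illustration under synthetic data, not the authors' implementation; the point values and variable names are made up for the example.

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate the 3x3 homography H with dst ~ H @ src from >= 4
    point correspondences (x, y), via the direct linear transform."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence contributes two linear equations in h.
        A.append([-x, -y, -1.0, 0.0, 0.0, 0.0, u * x, u * y, u])
        A.append([0.0, 0.0, 0.0, -x, -y, -1.0, v * x, v * y, v])
    A = np.asarray(A, dtype=float)
    # The homography is the null vector of A: the right singular
    # vector associated with the smallest singular value.
    _, _, vt = np.linalg.svd(A)
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]  # fix the projective scale so that H[2,2] = 1

# Four projector-plane points and their camera-image projections
# (synthetic values for illustration).
src = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
H_true = np.array([[1.2, 0.1, 0.3],
                   [0.0, 0.9, -0.2],
                   [0.05, 0.0, 1.0]])

def transfer(H, p):
    q = H @ np.array([p[0], p[1], 1.0])
    return (q[0] / q[2], q[1] / q[2])

dst = [transfer(H_true, p) for p in src]
H = homography_dlt(src, dst)
assert np.allclose(H, H_true, atol=1e-6)
```

With noisy correspondences one would use more than four points and the same least-squares SVD solution; the recovered H, together with the known intrinsic parameters of camera and projector, is then the input to the pose-recovery step described in the paper.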

[16] Z. Zhang, A.R. Hanson, Scaled Euclidean 3D reconstruction based on externally uncalibrated cameras, in: IEEE International Symposium on Computer Vision, Coral Gables, FL, November 1995.
[17] S.Y. Chen, Y.F. Li, Self-recalibration of a color-encoded light system for automated three-dimensional measurements, Meas. Sci. Technol. 14 (1) (2003) 33-40.
[18] B. Zhang, Y.F. Li, Dynamic recalibration of an active vision system via generic homography, in: Proceedings of the IEEE International Conference on Robotics and Automation, Barcelona, Spain, April 2005.
[19] S. Negahdaripour, Closed-form relationship between the two interpretations of a moving plane, J. Opt. Soc. Am. 7 (2) (1990).
[20] S. Negahdaripour, Multiple interpretations of the shape and motion of objects from two perspective images, IEEE Trans. Pattern Anal. Mach. Intell. 12 (11) (1990).
[21] R. Hartley, Chirality, Int. J. Comput. Vision 26 (1) (1998).
[22] S. Maybank, Theory of Reconstruction from Image Motion, Springer-Verlag, New York, 1992.
[23] T. Tian, C. Tomasi, D. Heeger, Comparison of approaches to egomotion computation, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, San Francisco, June 1996.
[24] R.I. Hartley, Estimation of relative camera positions for uncalibrated cameras, in: Proceedings of the European Conference on Computer Vision, 1992.
[25] P.M. Griffin, L.S. Narasimhan, S.R. Yee, Generation of uniquely encoded light patterns for range data acquisition, Pattern Recognition 25 (6) (1992).
[26] Z. Zhang, A flexible new technique for camera calibration, IEEE Trans. Pattern Anal. Mach. Intell. 22 (11) (2000).


More information

Descriptive Geometry Meets Computer Vision The Geometry of Two Images (# 82)

Descriptive Geometry Meets Computer Vision The Geometry of Two Images (# 82) Descriptive Geometry Meets Computer Vision The Geometry of Two Images (# 8) Hellmuth Stachel stachel@dmg.tuwien.ac.at http://www.geometrie.tuwien.ac.at/stachel th International Conference on Geometry and

More information

Structure from Motion. Prof. Marco Marcon

Structure from Motion. Prof. Marco Marcon Structure from Motion Prof. Marco Marcon Summing-up 2 Stereo is the most powerful clue for determining the structure of a scene Another important clue is the relative motion between the scene and (mono)

More information

Pin Hole Cameras & Warp Functions

Pin Hole Cameras & Warp Functions Pin Hole Cameras & Warp Functions Instructor - Simon Lucey 16-423 - Designing Computer Vision Apps Today Pinhole Camera. Homogenous Coordinates. Planar Warp Functions. Motivation Taken from: http://img.gawkerassets.com/img/18w7i1umpzoa9jpg/original.jpg

More information

Accurate and Dense Wide-Baseline Stereo Matching Using SW-POC

Accurate and Dense Wide-Baseline Stereo Matching Using SW-POC Accurate and Dense Wide-Baseline Stereo Matching Using SW-POC Shuji Sakai, Koichi Ito, Takafumi Aoki Graduate School of Information Sciences, Tohoku University, Sendai, 980 8579, Japan Email: sakai@aoki.ecei.tohoku.ac.jp

More information

A Real-Time Catadioptric Stereo System Using Planar Mirrors

A Real-Time Catadioptric Stereo System Using Planar Mirrors A Real-Time Catadioptric Stereo System Using Planar Mirrors Joshua Gluckman Shree K. Nayar Department of Computer Science Columbia University New York, NY 10027 Abstract By using mirror reflections of

More information

3D Computer Vision. Class Roadmap. General Points. Picturing the World. Nowadays: Computer Vision. International Research in Vision Started Early!

3D Computer Vision. Class Roadmap. General Points. Picturing the World. Nowadays: Computer Vision. International Research in Vision Started Early! 3D Computer Vision Adrien Bartoli CNRS LASMEA Clermont-Ferrand, France Søren I. Olsen DIKU Copenhagen, Denmark Lecture 1 Introduction (Chapter 1, Appendices 4 and 5) Class Roadmap (AB, lect. 1-3) Aug.

More information

Degeneracy of the Linear Seventeen-Point Algorithm for Generalized Essential Matrix

Degeneracy of the Linear Seventeen-Point Algorithm for Generalized Essential Matrix J Math Imaging Vis 00 37: 40-48 DOI 0007/s085-00-09-9 Authors s version The final publication is available at wwwspringerlinkcom Degeneracy of the Linear Seventeen-Point Algorithm for Generalized Essential

More information