Self-calibration of Multiple Laser Planes for 3D Scene Reconstruction

Ryo Furukawa, Faculty of Information Sciences, Hiroshima City University, Japan
Hiroshi Kawasaki, Faculty of Information Engineering, Saitama University, Japan

Abstract

Self-calibration is one of the most active issues concerning vision-based 3D measurement. However, in the case of the light sectioning method, little research has been conducted on self-calibration techniques. In this paper, we study the problem of self-calibration for an active vision system which uses line lasers and a single camera. The problem can be defined as the estimation of multiple laser planes from the curves of laser reflections observed in a sequence of images captured by a single camera. The constraints of the problem can be obtained from the observed intersection points between the curves. Under these conditions, the problem is formulated as simultaneous polynomial equations in which the number of equations is larger than the number of variables. Approximate solutions of the equations can be computed using Gröbner bases. By refining them with nonlinear optimization, the final result is obtained. We developed an actual 3D measurement system using the proposed method, which consists of only a laser projector with two line lasers and a single camera. Users are only required to move the projector freely so that the projected lines sweep across the surface of the scene to obtain the 3D shape.

1. Introduction

In the field of computer vision, calibration methods for two or more cameras have been studied extensively. In particular, self-calibration methods for extrinsic parameters, in which 6-DOF pose parameters are estimated without a priori information about the scene, have been studied by many researchers; these studies include passive systems [7] and active systems [15, 13], because of their usefulness and mathematical interest. There are other kinds of calibration.
For example, in order to construct or use a range finder based on the light sectioning method, the relative position of the laser plane with respect to the camera must be known. This may involve calibration to estimate the 3-DOF parameters of the plane. Examples of this kind of calibration are methods using objects of known shape. Also, there are on-line calibration techniques using markers attached at fixed positions relative to the laser planes [9]. In the case of the light sectioning method, there has been little research on self-calibration. One of the reasons may be that it is not possible to form equations sufficient for solving the problem with a typical system formed by a camera and a line laser. This is in contrast to the case of stereo camera systems, in which much information is retrieved from a pair of images. Indeed, the redundant information obtained from the reflections of a line laser which can be used for self-calibration is very limited in comparison to stereo camera systems. One way to solve this problem is to fix the laser plane and the camera, and to scan the same scene multiple times using a mechanical platform [12]. Another approach is to use the information from multiple planes with a fixed scene to obtain more constraints, which is the approach discussed in this paper. To achieve this, two methods can be considered: one is to project multiple laser planes from a projector composed of multiple line lasers, and the other is to capture a sequence of images while moving the laser projector. Either way, new equations can be formulated from the intersections of the detected curves of the laser reflections. If we can acquire a sufficient number of equations, the calibration problem may be solved. In this paper, we study the problem of self-calibration using a sufficient number of equations derived from the intersections of the multiple laser planes detected in a single image or a sequence of images.
Formulation of this problem leads to simultaneous polynomial equations with a large number of variables, which are difficult to solve. We show that, in some configurations, the problem is solvable, and we provide a method to obtain numerical solutions. The solution is obtained by computing approximate solutions using Gröbner bases and refining them using nonlinear optimization. To test the proposed method, we developed an actual 3D measurement system which consists of only a laser projector with two line lasers and a single camera. Because the system is self-calibrated, users can start 3D scanning without any pre-calibration process and freely move the projector so that the projected lines sweep across the surface of the scene to obtain the 3D shape.

2. Related works

There are many existing commercial products which use the light sectioning method [5, 11]. In such systems, the laser plane is fixed or moved by precision mechanical devices. For these systems, laser plane calibration is required only once, and precise calibration can be applied in advance [19, 16]. On the other hand, in the case of a handheld 3D scanner, which has recently attracted wide attention because of its maneuverability and simplicity, the relationship between the laser plane and the camera position varies every time and must be calibrated on-line. For example, Chu et al. [2] have proposed an explicit calibration method: they put a cubic frame in the field of view of the camera and place an object inside this frame. Then, a laser emits a line beam onto the object, and the bright points on the cube are detected to estimate the pose of the beam plane and the 3D information. Bouguet and Perona [1] use shadows to calibrate the laser plane: they cast shadows onto a known plane and estimate the parameters. Fisher et al. [8] have modified this idea. Furukawa and Kawasaki [9] have adopted a more active method: they place LED markers on the sensor itself and capture the markers with a single camera to estimate the laser plane. The abovementioned methods can solve the on-line laser plane calibration problem; however, they still have some drawbacks: for example, they require an object of known shape [2, 1, 8] or markers [9] to be captured in the same frames for calibration. Note that there are some handheld 3D scanners in which the laser plane projector and the camera are in a fixed position relative to each other [10, 18].
Such devices require laser plane calibration only once; therefore, on-line laser plane calibration is usually unnecessary. There is another handheld 3D scanner which uses a laser plane only for the robust retrieval of correspondences for stereo vision [6]. With this system, the parameters of the laser plane are not used, so no calibration is performed. As mentioned above, although many explicit calibration methods for the light sectioning method have been proposed, no self-calibration method for movable laser planes has yet been proposed. This is mainly because the pattern projected onto an unknown object by a single line-laser projector does not provide enough constraints for a solution. In this paper, we achieve self-calibration for the light sectioning method by using multiple line lasers. With our proposed method, all extra devices for explicit calibration of the laser plane can be eliminated; thus, the user can arbitrarily place a camera and freely move the laser projector without requiring known objects or attached markers to be captured in the frames.

Figure 1. Configuration of the active vision system.

Figure 2. Points of intersection of the reflection curves.

3. Self-calibration for laser planes and camera

3.1. Problem definition

The minimum configuration of our target active vision system consists of a camera and a line laser projector, as shown in Figure 1. The intrinsic parameters of the camera, such as the focal length, are assumed to be known. The laser line projected from the projector is reflected at the surfaces of the scene and detected by the camera as curves in the images. We call these curves reflection curves. A scanning process is performed by capturing a sequence of images with a fixed camera while moving the projector back and forth either manually or mechanically.
Multiple reflection curves are obtained from the image sequence, since the curves move in the image as the projector moves. The problem to solve is the estimation of the positions of the projected laser planes from the observed reflection curves. Once the positions of the laser planes are known, reconstruction of the 3D shape of the scene can be performed by triangulation.
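For illustration, once a laser plane's parameters are known, triangulation amounts to intersecting the viewing ray of a pixel with the plane. The sketch below (the helper name and numeric values are my own) assumes the plane model ax + by + z + c = 0 and the backward-pointing z-axis adopted in the formulation that follows:

```python
import numpy as np

def triangulate(plane, uv):
    """Depth and 3D point for a pixel on a laser reflection curve.

    plane: (a, b, c) of the plane a*x + b*y + z + c = 0.
    uv: (u, v), normalized camera coordinates of the pixel.
    The viewing ray through (u, v) is (u*t, v*t, -t) with t > 0
    (the z-axis points backward from the camera)."""
    a, b, c = plane
    u, v = uv
    # Substitute the ray into the plane equation and solve for t:
    # a*u*t + b*v*t - t + c = 0  =>  t = c / (1 - a*u - b*v)
    t = c / (1.0 - a * u - b * v)
    return t, np.array([u * t, v * t, -t])
```

Running every pixel of a reflection curve through this helper yields one 3D profile per frame, which is exactly the reconstruction step performed after self-calibration.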

By drawing all the reflection curves from different frames in a common image coordinate system, the curves generally have intersections (Figure 2). Since the camera is fixed, an intersection of reflection curves corresponds to a 3D point in the scene. The point is on both of the two laser planes generating the curves. Therefore, we can form simultaneous equations using the equations of the laser planes and the observed coordinates of the intersections, as follows. A plane which is not parallel to the z-axis can be represented as ax + by + z + c = 0, with a, b, and c as its parameters. Suppose that two planes, whose indices are i and j, and a surface have a common intersection point. The position of the point is (u_{i,j}, v_{i,j}) in screen coordinates. Let the depth of the intersection from the camera be t_{i,j}; then the 3D point (u_{i,j} t_{i,j}, v_{i,j} t_{i,j}, -t_{i,j}) is on planes i and j. The z coordinate is negative since we assume the z-axis is directed backward from the camera. Representing the parameters of plane i by a_i, b_i, c_i,

    a_i (u_{i,j} t_{i,j}) + b_i (v_{i,j} t_{i,j}) - t_{i,j} + c_i = 0,
    a_j (u_{i,j} t_{i,j}) + b_j (v_{i,j} t_{i,j}) - t_{i,j} + c_j = 0.    (1)

These equations can be obtained from all the intersections. If the number of planes is n and the number of intersection points is m, the total number of unknown parameters is 3n + m, while the number of constraints is 2m. Obviously, the solutions are not determined if the number of variables exceeds the number of equations. Therefore, 3n + m ≤ 2m (i.e., 3n ≤ m) is a necessary condition for this problem to have a finite number of solutions. If we use multiple line lasers with fixed spatial relationships instead of a single line laser, additional constraints can be introduced between the planes. For example, suppose a laser projector consists of two line lasers aligned precisely at 90 degrees, as shown in Figure 3 (a). Using laser planes with different colors, the planes can be easily distinguished.
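As a concrete sketch of equations (1) above (the helper name and numeric values are my own; numpy is assumed), each observed intersection contributes two residuals, one per plane, which a solver drives to zero over all intersections:

```python
import numpy as np

def intersection_residuals(plane_i, plane_j, uv, t):
    """Residuals of equations (1) for one observed intersection.

    plane_i, plane_j: (a, b, c) of the planes a*x + b*y + z + c = 0.
    uv: observed (u, v) of the intersection (normalized camera).
    t:  hypothesized depth of the intersection.

    The 3D point (u*t, v*t, -t) must lie on both planes, giving two
    equations per intersection."""
    u, v = uv
    x, y, z = u * t, v * t, -t
    ai, bi, ci = plane_i
    aj, bj, cj = plane_j
    return np.array([ai * x + bi * y + z + ci,
                     aj * x + bj * y + z + cj])
```

Stacking these residuals over all m intersections gives the 2m constraints counted in the text.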
Then, two different reflection curves, each corresponding to one of the laser planes, can be acquired from each image. From N images, 2N reflection curves are observed. Simultaneous equations can be obtained in the same way as in the single-line-laser case. The additional constraints are that the two laser planes observed in the same image are perpendicular to each other. Indexing the laser planes of image k (0 ≤ k ≤ N-1) as the 2k-th and (2k+1)-th planes leads to the equation

    a_{2k} a_{2k+1} + b_{2k} b_{2k+1} + 1 = 0.    (2)

Using more laser planes attached to the projector, more constraints can be obtained. Under some problem conditions, the scaling factor of the scene cannot be determined. Typical examples are the cases where a single laser plane or two laser planes are used. In these cases, an additional equation should be used to avoid ambiguity of the solutions. The equation is

    t_{0,0} = 1.    (3)

An example of the opposite case is that, when multiple line lasers are placed in parallel, the scaling of the scene can be determined.

Figure 3. A laser projector with two line lasers: (a) the configuration of the projector, (b) the grid pattern formed by moving the projector.

3.2. Examples of configurations

For the problem defined in Section 3.1, various system configurations combining two or more line lasers can be considered. These configurations are roughly classified into two methods. The first method utilizes a sequence of measurements. In this case, a sequence of images, captured while the laser projector moves and projects curves onto the object, is used. By accumulating the multiple captured images into one, intersections of the projected laser curves are obtained to formulate the equations. The other method is the case in which self-calibration can be performed from a single measurement.
If enough line lasers are fixed to a device with their spatial relationships known, the pose of the device can be estimated from only one image.

3.2.1. Solution using a sequence of measurements

When configuring a laser projector with only a small number of line lasers, two line lasers fixed at a certain angle give the minimum configuration for the problem to be solvable. Let us consider the cross-plane laser projector mentioned in Section 3.1 (Figure 3 (a)). When we move the laser projector along a slanted line and capture 5 images as shown in Figure 3 (b), we obtain a 5 × 5 grid pattern and 25 intersections. Then, the number of planes is 10; the unknown parameters number 30 for the planes and 25 for the depths.

Figure 4. Configurations of laser projectors with multiple line lasers.

Thus, the total number of unknown parameters is 55. There are 50 equations for the intersections and 5 perpendicularity constraints. With this configuration, uncertainty remains in the scaling factor; therefore, by adding an equation of the form of equation (3), we have 56 equations. These simultaneous equations have finitely many solutions and can be solved. We explain the details in Section 4. Note that a laser projector made of a single line laser is the simplest configuration; however, with this configuration, the simultaneous equations do not have finitely many solutions and cannot be solved. This can be proven, but we omit the proof because of space limitations. The fact that the single-line-laser configuration is not solvable also shows the importance of the perpendicularity constraints, since they are the only difference between the single-line-laser and two-line-laser configurations.

3.2.2. Solution using a single measurement

If four or more line lasers are attached to the projector and many intersections are observed at once, it is possible to perform self-calibration from a single image, depending on the configuration. For example, suppose that four line lasers are attached to a projector at known angles and six intersections are obtained from one image, as shown in Figure 4 (a). Following the formulation described previously, 18 unknowns and 18 constraints are obtained. In this case, it is possible to perform self-calibration from one view. If five line lasers are used, as shown in Figure 4 (b), more constraints can be used and the solution can be estimated more precisely. In order to arrange many line lasers on the projector, aligning them to form a parallel or radial grid is a practical method (Figure 4 (c)).
If a parallel grid is adopted, it is also possible to determine the scaling factor by making use of the known intervals of the grid. In these cases, it is easier to solve the problem by representing the relative pose between the projector and the camera with at most 6-DOF parameters, rather than using the parameters of all the planes. Under the formulation using the relative pose as the parameters, the problem becomes very similar to the self-calibration of the extrinsic parameters of a pair of cameras in a stereo vision system. If all the laser planes share one point, the problem becomes equivalent to self-calibration between pinhole cameras, which can be solved by using the intersections of the reflection curves as correspondence points. The case where the laser planes do not all share one point is equivalent to self-calibration between a pinhole camera and a generalized camera.

4. Range finder by projecting a cross-shaped pattern

4.1. System configuration

From the configurations described in the previous section, we developed a numerical solver for the case of two laser planes which intersect perpendicularly. The reasons we adopted this setup are as follows: (1) as mentioned previously, it is the simplest solvable setup, requiring the smallest number of line lasers; (2) the two laser planes can easily be distinguished by using red and green line lasers, which are available as off-the-shelf products; and (3) mounting two planes perpendicularly is easy. The actual scanning is performed as follows. First, we take five images while moving the projector, so that the reflection curves form a grid. Then, an initial solution is obtained under the approximation of orthogonal projection (this process is described in Section 4.2). The solution is refined by nonlinear optimization (this process is described in Section 4.4).
The refined result of the self-calibration can be used to estimate the poses of the projector for all the frames in the image sequence and to reconstruct the 3D scene.

4.2. Approximation for the initial solution

In this section, the numerical solution for the problem described in Section 4.1 is given. A brief description of the solution is as follows. First, an equation system consisting of 55 variables, 51 linear equations, and five quadratic equations is obtained from five images by approximating the problem under the assumption of orthogonal projection. This equation system can be reduced to five quadratic equations in five variables by solving the linear equations. The resulting simultaneous quadratic equations can be solved using Gröbner bases. Although this solution is for the approximated problem, the solution of the original problem can be obtained by performing nonlinear optimization using the obtained solution as the initial parameters. The reason for solving the approximated problem instead of the original one is that, since the original problem includes many (55) quadratic equations, the calculation of

Gröbner bases of the original equation set is intractable and a numerical solution cannot be obtained. By approximating the problem under the assumption of orthogonal projection, most of the equations become linear, resulting in only five quadratic equations in five variables. Thus, by reducing the scale of the problem through approximation, it becomes possible to apply the numerical solving method based on Gröbner bases [3, 4]. Let the parameters of the vertical plane for the i-th image be (a_{v,i}, b_{v,i}, c_{v,i}). The same goes for the horizontal plane, whose parameters are (a_{h,i}, b_{h,i}, c_{h,i}). By limiting the intersections to those between horizontal and vertical curves, a point can be indexed by two numbers: the image indices of the vertical and horizontal curves that form the intersection. Let the intersection of the vertical curve of the i-th image and the horizontal curve of the j-th image be denoted intersection (i, j). Also, let the depth of intersection (i, j) be t_{i,j}, and its image coordinates be (u_{i,j}, v_{i,j}), using the screen coordinates of a normalized camera. From the forms of equation (1), we obtain the simultaneous equations

    a_{v,i} (u_{i,j} t_{i,j}) + b_{v,i} (v_{i,j} t_{i,j}) - t_{i,j} + c_{v,i} = 0,
    a_{h,j} (u_{i,j} t_{i,j}) + b_{h,j} (v_{i,j} t_{i,j}) - t_{i,j} + c_{h,j} = 0,    (4)

for 1 ≤ i, j ≤ 5. The forms of equation (2) lead to

    a_{v,i} a_{h,i} + b_{v,i} b_{h,i} + 1 = 0    (5)

for 1 ≤ i ≤ 5. Now, we assume orthogonal projection. As already mentioned, with the two-plane configuration the scaling of the object remains ambiguous. Therefore, we can assume that the distance to the object is a value near 1. Suppose the variation of t_{i,j} is sufficiently small and all t_{i,j} are near 1 (i.e., when the assumption of orthogonal projection can be applied); then the forms of equation (4) can be transformed to

    a_{v,i} u_{i,j} + b_{v,i} v_{i,j} - t_{i,j} + c_{v,i} = 0,
    a_{h,j} u_{i,j} + b_{h,j} v_{i,j} - t_{i,j} + c_{h,j} = 0    (6)

for 1 ≤ i, j ≤ 5.
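The effect of this linearization is easy to check numerically. In the sketch below (all values are illustrative), the linearized form (6) coincides with the exact form (4) at t = 1 and differs only by (a u + b v)(t - 1) for nearby depths:

```python
# One plane a*x + b*y + z + c = 0 and one intersection (u, v),
# with illustrative (made-up) values.
a, b, c = 0.3, -0.2, 0.95
u, v = 0.15, -0.08

def exact(t):
    # Equation (4): the point (u*t, v*t, -t) lies on the plane.
    return a * u * t + b * v * t - t + c

def linearized(t):
    # Equation (6): the products a*u*t and b*v*t are frozen at t = 1.
    return a * u + b * v - t + c

# The two residuals agree at t = 1 and differ by (a*u + b*v)*(t - 1).
for t in (0.95, 1.0, 1.05):
    print(t, exact(t), linearized(t))
```

Since the depths are assumed to be near 1, the discrepancy term (a u + b v)(t - 1) stays small, which is what makes the linear system a usable approximation.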
The equations are now linear (note that u_{i,j} and v_{i,j} are constants), whereas the original equations are quadratic. Since there are 25 intersections, 50 linear equations of these forms are obtained. The perpendicularity constraints between the planes remain quadratic; they are expressed in the forms

    a_{v,i} a_{h,i} + b_{v,i} b_{h,i} + 1 = 0    (7)

for 1 ≤ i ≤ 5. There are 5 equations of this form. As mentioned in Section 3.2.1, the perpendicularity constraints are important for the problem to be solvable. Since the perpendicularity constraints are quadratic in the estimated parameters, it is difficult to convert the problem into simultaneous linear equations, which would be easy to solve. Thus, we require a method to solve simultaneous quadratic equations. Under orthogonal projection, the average depth of the scene remains ambiguous. To avoid this ambiguity, equation (3) is added to the constraints.

4.3. Numerical solution

In the formulation described above, the number of variables is 55 and the number of linear equations is 51. In matrix form, these equations are represented as

    M x = b,    (8)

where M is a 51 × 55 matrix whose elements are the coefficients of equations (6) and (3), x is a 55-dimensional vector whose elements are the variables a_{v,i}, b_{v,i}, c_{v,i}, a_{h,j}, b_{h,j}, c_{h,j}, and t_{i,j}, and b is a 51-dimensional vector whose elements are 0, except for one element, which is 1 from equation (3). This linear system is under-constrained. By applying SVD (singular value decomposition) to the matrix, we can obtain a 5D linear space of x in which equation (8) is approximately fulfilled. This can be done by the following steps. Applying SVD to M, the equation becomes

    U Σ V x = b,    (9)

where U is a 51 × 51 orthogonal matrix and V is a 55 × 55 matrix whose rows are the right singular vectors. Now, let V_1 be the matrix formed by the upper 50 rows of V, let V_2 be the matrix formed by the lower 5 rows of V, and let U_1 be the submatrix formed by the left 50 columns of U.
Assuming that the rank of M is 50 or 51, the approximate solution space of the equation can be expressed as

    x = V_1^t Σ^{-1} U_1^t b + V_2^t f,    (10)

where f = (f_0, f_1, f_2, f_3, f_4) is an arbitrary 5D vector and Σ here denotes the 50 × 50 diagonal matrix of the largest 50 singular values. The assumption that M has rank 50 or higher will be empirically confirmed in Section 5. By substituting the elements of x in equation (7) using (10), five simultaneous quadratic equations in the variables f_0, f_1, f_2, f_3, and f_4 are obtained. We solve these equations numerically using Gröbner bases. A system of polynomial equations can be treated as an ideal in the sense of commutative algebra. The ideal representing the simultaneous polynomial equations can be generated by finitely many basis polynomials, in the same way that a vector space is spanned by basis vectors. Bases with a special property which generate an ideal are called Gröbner bases. The method for obtaining numerical solutions of simultaneous polynomial equations using Gröbner bases is described in [3, 4] and [17]. More specifically, Gröbner bases are obtained from the original equations; then, a matrix called the action matrix corresponding to the target variable is calculated. The eigenvalues of this action matrix constitute the solutions.
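The parametrization of equation (10) can be sketched with a generic SVD routine. The helper below (illustrative names; numpy assumed) returns the minimum-norm particular solution and a null-space basis, so that x = x_p + N f spans the family of approximate solutions:

```python
import numpy as np

def solution_space(M, b, rank):
    """Parametrize the approximate solutions of the under-constrained
    system M x = b as x = x_p + N @ f (cf. equation (10)).

    x_p: minimum-norm particular solution, built from the 'rank'
         largest singular values (V_1^t S^-1 U_1^t b in the text).
    N:   columns span the numerical null space of M (the remaining
         right singular vectors, V_2^t in the text)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=True)
    x_p = Vt[:rank].T @ ((U[:, :rank].T @ b) / s[:rank])
    N = Vt[rank:].T
    return x_p, N
```

In the paper's setting M is 51 × 55 with rank taken as 50, so N has five columns and f = (f_0, ..., f_4) are the free variables substituted into the quadratic constraints (7).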

From the solver of simultaneous quadratic equations based on Gröbner bases, multiple complex solutions of (f_0, f_1, f_2, f_3, f_4) are obtained. From these, we select the solutions which are close to real solutions. Typically, we obtain several (nearly) real solutions of (f_0, f_1, f_2, f_3, f_4). Solutions of x are then obtained from the solutions of (f_0, f_1, f_2, f_3, f_4).

4.4. Refinement of the solution

The numerical solution estimated in the previous section is a solution of the approximated problem. Therefore, the solution of the original problem is estimated by improving the obtained solution by nonlinear optimization. We could use the errors of equations (6), (7), and (3) to form the objective function to be optimized. However, the forms of equation (6) are soft constraints (i.e., they may not be strictly fulfilled) since they use observed values, whereas the forms of equation (7) are hard constraints (i.e., they should be strictly fulfilled). Moreover, it is desirable that the objective function represent the observation errors in screen space, as in bundle adjustment. Therefore, we take another approach. The poses of the projector which projects the two planes are used as the parameters of the objective function. The degrees of freedom of such a pose number 5: the actual parameters of a pose are the rotation parameters of roll, pitch, and yaw, and the distances from the origin to each of the two planes. By expressing the directions of the two planes through the rotation, the forms of equation (7) are strictly fulfilled. The line of intersection of the vertical plane of the i-th image and the horizontal plane of the j-th image is calculated and projected into the image. Representing the projected line as l_{i,j}, intersection (i, j) should be on l_{i,j}. Therefore, the objective function is the sum of the squared distances between intersections (i, j) and the lines l_{i,j} over all i, j. In this formulation, the depths are not parameters.
The depths are estimated from the poses as the depth of the point on l_{i,j} nearest to intersection (i, j). Furthermore, to fix the scale, the squared error of equation (3) (t_{0,0} = 1) is added to the objective function. Minimization of the objective function is performed using the Levenberg-Marquardt method.

5. Experiments

In order to confirm the validity of the self-calibration technique described in Section 4, experiments were conducted with simulated and actual data. We used two types of data for simulation. One was precise, numerically synthesized data fulfilling equations (4) and (5) to high numerical precision. The other was generated by a CG renderer; using this renderer, we could obtain data for an arbitrary 3D model. Actual data was obtained with a specially configured device.

5.1. Numerically synthesized data

The numerically synthesized test data is generated as follows. Five poses of the projector are generated; they are similar but varied randomly. From the poses, 25 intersection points are calculated, whose depths are varied randomly. The bounding box of the points ranged from … to … for x, from … to … for y, and from … to 0.97 for z. Then, the points are projected onto screen coordinates, using both orthogonal projection and perspective projection. The projected coordinates are denoted (u_{i,j}, v_{i,j}) (orthogonal) and (u'_{i,j}, v'_{i,j}) (perspective). For the problem formulated by the simultaneous equations, the Jacobian of the equations should be of full rank; otherwise, the equations may have an infinite number of solutions instead of a finite number. To verify that the problem has finite solutions, we calculated the singular values of the Jacobian. From equations (3), (4), and (5), the Jacobian was calculated symbolically, and the variables of the Jacobian were substituted with the synthesized data. The maximum singular value of the Jacobian was 2.8 and the minimum singular value was …. This calculation was performed with Mathematica. Thus, the Jacobian was of full rank.
This means that, in general, the self-calibration problem for this configuration has finite solutions. As described in Section 4.3, we assumed that the rank of the matrix M is either 50 or 51. For the simulated data, the maximum singular value in double-precision floating-point format was …, the 50th singular value was …, and the 51st was …. To perform the SVD, the LAPACK numerical library [14] was used. Considering the precision, we can say that the rank of M is either 50 or 51. Next, we tested the numerical solver described in Section 4.3 with the synthesized data. We applied the solver to the data and obtained the parameters of all the laser planes and the depths of the intersection points. Since the scaling factor cannot be determined, we scaled the solution by the true value of t_{0,0} so that the solutions could be compared. The solution from the orthogonal-projection data should coincide with the true values. In fact, for the solution calculated from (u_{i,j}, v_{i,j}), the RMS (root mean squared) error of t_{i,j} (1 ≤ i, j ≤ 5) was …, so the solver worked correctly. The solution obtained from (u'_{i,j}, v'_{i,j}) deviated from the true values because of the approximation described in Section 4.2. The deviation is shown in Figure 5. Then, we tested the refinement of the solution described in Section 4.4. We applied the refinement algorithm to the solution obtained from the perspective-projection data. Figures 6 (a) and (b) show the result. Comparing the result with Figure 5, it is confirmed that the solution was improved, although some errors remain.
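For reference, the refinement of Section 4.4 can be sketched in simplified form. The sketch below (helper names and all numbers are my own; numpy and scipy assumed) parametrizes one projector pose by roll, pitch, yaw and the two plane offsets, so perpendicularity (7) holds by construction, and minimizes point-on-plane residuals with the depths kept as explicit parameters, rather than the point-to-line image distances used in the paper:

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def planes_from_pose(pose):
    """pose = (roll, pitch, yaw, d_v, d_h): a 5-DOF pose of the
    cross-laser projector. The two plane normals are columns of the
    rotation matrix, so their perpendicularity holds exactly."""
    R = Rotation.from_euler("xyz", pose[:3]).as_matrix()
    return (R[:, 0], pose[3]), (R[:, 1], pose[4])   # plane: n . X = d

def residuals(params, uv_obs):
    """params = pose (5 values) followed by one depth per observation.
    An observed intersection (u, v) at depth t is the 3D point
    (u t, v t, -t), which must lie on both planes."""
    pose, depths = params[:5], params[5:]
    (n_v, d_v), (n_h, d_h) = planes_from_pose(pose)
    res = []
    for (u, v), t in zip(uv_obs, depths):
        X = np.array([u * t, v * t, -t])
        res += [n_v @ X - d_v, n_h @ X - d_h]
    return np.array(res)

# Synthetic check: points on the true planes give zero residuals, and
# least_squares recovers a zero-residual solution from a perturbed guess.
pose_true = np.array([0.2, -0.1, 0.3, 0.5, -0.4])
(n_v, d_v), (n_h, d_h) = planes_from_pose(pose_true)
line_dir = np.cross(n_v, n_h)                       # 3D intersection line
P0 = np.linalg.lstsq(np.vstack([n_v, n_h]),
                     np.array([d_v, d_h]), rcond=None)[0]
s0 = (-1.0 - P0[2]) / line_dir[2]                   # move points to z near -1
pts = [P0 + (s0 + ds) * line_dir for ds in (-0.1, 0.0, 0.1)]
uv_obs = [(p[0] / -p[2], p[1] / -p[2]) for p in pts]
t_true = [-p[2] for p in pts]
x0 = np.concatenate([pose_true + 0.05, np.array(t_true) * 1.1])
fit = least_squares(residuals, x0, args=(uv_obs,))
```

Eliminating the depths via the point-to-line distance, as the paper does, shrinks the parameter vector and makes the objective a true image-space reprojection error; this sketch trades that for simplicity.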

Figure 5. The solution of the approximated problem, shown with the true values. (The lines drawn from five of the grid points show the directions from the point to the projector.)

We also tested the refinement process with noisy data. To the synthesized data, we added uniform noise distributed between … and … to the constants u_{i,j} and v_{i,j}. Generating ten sets of data in this way, the refinement was performed for each set. The results are shown in Figures 6 (c) and (d).

5.2. CG-synthesized data

Next, we tested the method with data generated by CG rendering. Figure 7 shows an example of a scene composed of a cube and a plane. In the figure, (a) shows all the patterns detected by the camera. The vertical angle of view of the image was 10 degrees. The borders of the black and white pattern on the scene indicate where the laser curves should be observed. The five intersections marked with crosses are those at which the laser planes intersect at right angles. Figure 7 (b) shows the 3D wireframe of the true shape of the grid in the scene. Figure 7 (c) is the result of the solution using the approximation, and (d) is the result of the refinement. These results are shown together with the true shape of the grid. In this example, the refinement worked very well, so that the refined solution and the true values became very close.

5.3. Actual system

We built an actual active vision system based on the method described in Section 3. The laser projector consists of two line lasers fixed at 90 degrees; one of them is red and the other is green. The video sequence is captured by a DV camera, and the intersection points are obtained by detecting the laser curves in each frame. For the experiments, we scanned a plastic object with a sinusoidal curve, as shown in Figure 8 (a). Figure 8 (b) shows sample input images. The estimated 3D points and shapes are shown in Figures 8 (c)-(g).
From these results, we can confirm that the shape is successfully recovered by our proposed method.

Figure 6. The refined solution: (a), (b) the solution (shown with the true values); (c), (d) the solutions with noisy data. (The lines drawn from five of the grid points show the directions from the point to the projector.)

6. Conclusion

In this paper, we proposed a self-calibration method for multiple laser planes as part of a light sectioning method. The algorithm is as follows: first, we capture an image sequence while moving a laser projector back and forth; second, we obtain the intersections of the projected laser curves from the image sequence; and, finally, we solve the simultaneous equations formulated from the intersections. We presented, as the simplest solvable configuration, an implementation with two line lasers at a fixed angle (90 degrees). With this device, enough constraint equations to solve the problem are derived from 5 images. The equations are difficult to solve, since the problem is large and it is difficult to convert the equations into linear form. We proposed a method to solve the equations, in which approximate solutions are obtained using Gröbner bases and refined by nonlinear optimization. Using the proposed method, we can create a 3D measurement system which consists of only a laser projector with two line lasers and a single camera. In order to verify the reliability and effectiveness of the proposed method, we conducted simulations for evaluation. We also implemented an actual 3D scanner and performed several experiments with the system. The results of our experiments confirmed the effectiveness of the proposed system.

Figure 7. Results on the data generated by CG: (a) the detected pattern, (b) the true 3D shape of the grid, (c) the result of the approximated problem (with the true shape), and (d) the result of the refined solution (with the true shape).

Figure 8. Scanning of a sinusoidal curved object: (a) the target object and capturing scene, (b) projected laser lines, and (c)-(g) results of the 3D estimation.

References

[1] J. Y. Bouguet and P. Perona. 3D photography on your desk. In Int. Conf. Computer Vision.
[2] C. W. Chu, S. Hwang, and S. K. Jung. Calibration-free approach to 3D reconstruction using light stripe projections on a cube frame. In Third Int. Conf. on 3DIM, pages 13-19.
[3] D. A. Cox, J. B. Little, and D. O'Shea. Ideals, Varieties, and Algorithms. Springer-Verlag, NY, 2nd edition.
[4] D. A. Cox, J. B. Little, and D. O'Shea. Using Algebraic Geometry, volume 185 of Graduate Texts in Mathematics. Springer-Verlag, NY.
[5] Cyberware. Desktop 3D scanner.
[6] J. Davis and X. Chen. A laser range scanner designed for minimum calibration complexity. In Int. Conf. on 3DIM2001, pages 91-98.
[7] O. Faugeras. Three-Dimensional Computer Vision: A Geometric Viewpoint. M.I.T. Press, Cambridge, MA.
[8] R. B. Fisher, A. P. Ashbrook, C. Robertson, and N. Werghi. A low-cost range finder using a visually located, structured light source. In Second Int. Conf. on 3DIM, pages 24-33.
[9] R. Furukawa and H. Kawasaki. Interactive shape acquisition using marker attached laser projector. In Int. Conf. on 3DIM2003.
[10] P. Hebert. A self-referenced hand-held range sensor. In Int. Conf. on 3DIM2001, pages 5-12, June.
[11] L. D. Inc. 3D laser scanners.
[12] O. Jokinen. Self-calibration of a light striping system by matching multiple 3-D profile maps. In 2nd International Conference on 3D Digital Imaging and Modeling (3DIM '99).
[13] H. Kawasaki and R. Furukawa. Uncalibrated multiple image stereo system with arbitrarily movable camera and projector for wide range scanning. In Int. Conf. on 3DIM2005, June.
[14] lapack@cs.utk.edu. LAPACK linear algebra package.
[15] Y. F. Li and R. S. Lu. Uncalibrated Euclidean 3D reconstruction using an active vision system. IEEE Transactions on Robotics and Automation, 20(1):15-25.
[16] A. M. McIvor. Nonlinear calibration of a laser stripe profiler. Optical Engineering, 41, Jan.
[17] H. Stewenius. Gröbner Basis Methods for Minimal Problems in Computer Vision. PhD thesis, Lund Institute of Technology.
[18] K. H. Strobl, W. Sepp, E. Wahl, T. Bodenmueller, M. Suppa, J. F. Seara, and G. Hirzinger. The DLR multisensory hand-guided device: the laser stripe profiler. In ICRA2004, Apr.
[19] W. Stocher and G. Biegelbauer. Automated simultaneous calibration of a multi-view laser stripe profiler. In ICRA2005, 2005.


More information

A General Expression of the Fundamental Matrix for Both Perspective and Affine Cameras

A General Expression of the Fundamental Matrix for Both Perspective and Affine Cameras A General Expression of the Fundamental Matrix for Both Perspective and Affine Cameras Zhengyou Zhang* ATR Human Information Processing Res. Lab. 2-2 Hikari-dai, Seika-cho, Soraku-gun Kyoto 619-02 Japan

More information

Vision-Based Registration for Augmented Reality with Integration of Arbitrary Multiple Planes

Vision-Based Registration for Augmented Reality with Integration of Arbitrary Multiple Planes Vision-Based Registration for Augmented Reality with Integration of Arbitrary Multiple Planes Yuo Uematsu and Hideo Saito Keio University, Dept. of Information and Computer Science, Yoohama, Japan {yu-o,

More information

DESIGN AND EVALUATION OF A PHOTOGRAMMETRIC 3D SURFACE SCANNER

DESIGN AND EVALUATION OF A PHOTOGRAMMETRIC 3D SURFACE SCANNER DESIGN AND EVALUATION OF A PHOTOGRAMMETRIC 3D SURFACE SCANNER A. Prokos 1, G. Karras 1, L. Grammatikopoulos 2 1 Department of Surveying, National Technical University of Athens (NTUA), GR-15780 Athens,

More information

Epipolar geometry contd.

Epipolar geometry contd. Epipolar geometry contd. Estimating F 8-point algorithm The fundamental matrix F is defined by x' T Fx = 0 for any pair of matches x and x in two images. Let x=(u,v,1) T and x =(u,v,1) T, each match gives

More information

3D Modeling of Objects Using Laser Scanning

3D Modeling of Objects Using Laser Scanning 1 3D Modeling of Objects Using Laser Scanning D. Jaya Deepu, LPU University, Punjab, India Email: Jaideepudadi@gmail.com Abstract: In the last few decades, constructing accurate three-dimensional models

More information

A Factorization Method for Structure from Planar Motion

A Factorization Method for Structure from Planar Motion A Factorization Method for Structure from Planar Motion Jian Li and Rama Chellappa Center for Automation Research (CfAR) and Department of Electrical and Computer Engineering University of Maryland, College

More information

Computing the relations among three views based on artificial neural network

Computing the relations among three views based on artificial neural network Computing the relations among three views based on artificial neural network Ying Kin Yu Kin Hong Wong Siu Hang Or Department of Computer Science and Engineering The Chinese University of Hong Kong E-mail:

More information

CS6670: Computer Vision

CS6670: Computer Vision CS6670: Computer Vision Noah Snavely Lecture 7: Image Alignment and Panoramas What s inside your fridge? http://www.cs.washington.edu/education/courses/cse590ss/01wi/ Projection matrix intrinsics projection

More information