1D Camera Geometry and Its Application to Circular Motion Estimation


Title: 1D Camera Geometry and Its Application to Circular Motion Estimation
Author(s): Zhang, G; Zhang, H; Wong, KKY
Citation: The 17th British Machine Vision Conference (BMVC), Edinburgh, U.K., 4-7 September 2006. In Proceedings of the British Machine Vision Conference, 2006.
Issued Date: 2006
Rights: Creative Commons: Attribution 3.0 Hong Kong License

1D Camera Geometry and Its Application to Circular Motion Estimation

Guoqiang Zhang, Hui Zhang and Kwan-Yee K. Wong
Department of Computer Science, University of Hong Kong, Hong Kong SAR, China
{gqzhang, hzhang,

Abstract

This paper describes a new and robust method for estimating circular motion geometry from an uncalibrated image sequence. Under circular motion, all the camera centers lie on a circle, and the mapping of the plane containing this circle to the horizon line in the image can be modelled as a 1D projection. A 2×2 homography is introduced in this paper to relate the projections of the camera centers in two 1D views. It is shown that the two imaged circular points and the rotation angle between the two views can be derived directly from the eigenvectors and eigenvalues of such a homography, respectively. The proposed 1D geometry can be nicely applied to circular motion estimation using either point correspondences or silhouettes. The method introduced here is intrinsically a multiple view approach, as all the sequence geometry embedded in the epipoles is exploited in the computation of the homography for a view pair. This results in a robust method which gives accurate estimates of the rotation angles and imaged circular points. Experimental results are presented to demonstrate the simplicity and applicability of the new method.

1 Introduction

In computer vision and computer graphics, a CCD camera is commonly modelled as a 2D projective device. On the other hand, many imaging systems using laser beams, infra-red or ultra-sound act only on a source plane, and can be modelled as a 1D projective device. Much work has been done to solve the structure and motion problem for both calibrated and uncalibrated 1D projective cameras [2, 11, 5]. It has been shown that under some special situations, a 2D camera model can be reduced to a 1D camera to simplify the vision problem. For instance, Faugeras et al. reduced 2D images under planar motion to trifocal line images and derived a simple solution for self-calibration [5].

This paper considers the problem of circular motion estimation. Circular motion can be viewed as a special kind of planar motion, in which the camera rotates around a fixed axis with the object remaining stationary. Under this setup, all the camera centers lie on a circle, which is the trajectory of the rotating camera center. The projection of the plane containing the camera centers onto the image plane gives a line known as the horizon. It follows that the images of the camera centers (i.e., the epipoles) must all lie on this line.

This results in a 1D projective geometry which will be exploited here to solve the motion problem. In this paper, a 2×2 homography is introduced to relate the projections of the camera centers in two views. It will be shown that both the imaged circular points and the rotation angle between these two views can be extracted directly and simultaneously from this homography. The proposed geometry can thus ease the job of circular motion estimation.

Many studies have been conducted on circular motion [10, 6, 7, 9, 12, 14]. Traditional methods obtained the rotation angles by careful calibration [10]. In [6], Fitzgibbon et al. developed a method to handle the case of an uncalibrated camera with unknown rotation angles based on a projective reconstruction. Their method requires tracking points in 2 and 3 views, respectively, for estimating the fundamental matrices and the trifocal tensors. This method was further extended by Jiang et al. in [7], where the computation of the trifocal tensor is avoided. However, their method requires tracking a minimum of 2 points in 4 images. An alternative approach is to exploit the silhouettes of the object. In [12], Wong and Cipolla proposed a method which requires 2 outer epipolar tangents. Their method involves a nonlinear optimization in a high dimensional space, and requires knowledge of the camera intrinsics. In [9], Mendonça et al. proposed to recover the structure and motion in several steps, each of which only involves a low dimensional optimization. However, the camera intrinsics are still required in the procedure for recovering the rotation angles and the subsequent Euclidean reconstruction. The work in [14] is an extension of Mendonça's method, and it removes the requirement of known camera intrinsics. However, the computation of each rotation angle in this method only involves 3 views, which sometimes results in bad estimation.

Our proposed 1D geometry can be nicely applied to solve the circular motion problem using either point correspondences or silhouettes. The key point is to obtain the epipoles of all the view pairs. For the former case, the epipoles can be extracted from the fundamental matrices computed from point correspondences. Due to the special form of the fundamental matrix under circular motion, after obtaining one fundamental matrix using general methods, the minimum data required to compute the fundamental matrix for another view pair is only one point correspondence. It is evident that our method is more flexible than the approach proposed in [7] in that it requires no tracking of points over several images. Further, our method converts the geometry embedded in the corresponding points into the epipoles, and nicely exploits them in motion recovery. As for silhouette-based motion estimation, our method can be directly integrated with the work of [9]. The advantage of our method over the existing one is that it is intrinsically a multiple view approach, as all geometric information from the whole sequence is nicely exploited.

This paper is organized as follows. Section 2 briefly reviews the 1D camera model. The 1D homography relating two views in circular motion is introduced in Section 3. Section 4 illustrates the application of our proposed method to circular motion. Experiments on 3 sequences are presented in Section 5, followed by a short conclusion in Section 6.

2 1D Projective Camera

This section gives a short review of the 1D camera model (see [5]).
Similar to a 2D projective camera, which projects a point in P^3 to a point in P^2, a 1D camera maps a point in P^2 to a point in P^1. A 1D camera can be modelled by a 2×3 matrix [5] in the form

P = K (R | t),   (1)

where

K = \begin{pmatrix} f & u_0 \\ 0 & 1 \end{pmatrix}

is the calibration matrix with focal length f and principal point u_0, and R and t are the rotation matrix and translation vector. R can be expressed as

R = \begin{pmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{pmatrix}.

The scene space for a 1D camera is a projective plane, and the two circular points I and J on the line at infinity l_∞ are invariant under any similarity transformation. Similarly to the 2D camera case, where the image of the absolute conic can be expressed in terms of the intrinsic parameters, the imaged circular points i and j can be expressed in terms of the intrinsic parameters of a 1D camera. The expression of j follows directly by projecting the circular point J = (j, 1, 0)^T onto the image plane:

j = \begin{pmatrix} f & u_0 \\ 0 & 1 \end{pmatrix} (R | t) \begin{pmatrix} j \\ 1 \\ 0 \end{pmatrix} = e^{-j\theta} \begin{pmatrix} u_0 + jf \\ 1 \end{pmatrix}.   (2)

Equation (2) shows that j can be formulated in terms of the camera intrinsics f and u_0: the real part of the ratio of the two components is u_0, and the imaginary part is the focal length f.

Figure 1: (a) The relative positions of a 1D camera and the points X and X' in the coordinate system. (b) Special setup of X and X' in the coordinate system, where X' overlaps with the camera center.
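The relation in (2) is easy to check numerically. The following sketch (not part of the paper; the focal length, principal point and angle are made-up values) builds a 1D camera with the conventions above and verifies that the imaged circular point encodes u_0 and f:

```python
import numpy as np

# Minimal numerical sketch of the 1D camera model (illustrative values only).
f, u0, theta = 700.0, 320.0, np.deg2rad(25.0)

K = np.array([[f, u0],
              [0.0, 1.0]])                  # 1D calibration matrix of Eq. (1)
c, s = np.cos(theta), np.sin(theta)
R = np.array([[c, s],
              [-s, c]])                     # rotation convention used above
t = np.array([0.0, 1.0])
P = K @ np.column_stack([R, t])             # Eq. (1): P = K (R | t), a 2x3 1D camera

# Project the circular point J = (j, 1, 0)^T of the scene plane, as in Eq. (2).
J = np.array([1j, 1.0, 0.0])
j_img = P @ J

# The ratio of the two homogeneous components is u0 + j*f, independent of theta.
ratio = j_img[0] / j_img[1]
print(ratio.real, ratio.imag)               # ~ (320.0, 700.0)
```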

3 1D Homography in Circular Motion

Suppose the world coordinates and the camera position are as in Fig. 1(a). The projection matrix is given by P = KR (I | t), where t = (0, 1)^T.

Proposition 1: The 2D points X, X' and the camera center lie on a circle, as shown in Fig. 1(a). The rotation angle of the two points with respect to the center of the circle is θ. Their projections u and u' in the image are related by a 1D homography H as follows:

u' = Hu = K R(\theta/2) K^{-1} u.   (3)

If X' overlaps with the camera center, as illustrated in Fig. 1(b), its image in (3) is the vanishing point v of the X-axis direction, and the formula for the special case in Fig. 1(b) is v = Hu.

Proof: The image point u can be expressed as

u = PX = KR \begin{pmatrix} \sin\phi \\ \cos\phi + 1 \end{pmatrix} = KR \begin{pmatrix} \tan\frac{\phi}{2} \\ 1 \end{pmatrix}.   (4)

Similarly, the image point u' can be expressed as u' = KR (\tan\frac{\theta+\phi}{2}, 1)^T. Since \tan\frac{\theta+\phi}{2} = \frac{\tan\frac{\theta}{2} + \tan\frac{\phi}{2}}{1 - \tan\frac{\theta}{2}\tan\frac{\phi}{2}}, the above equation can be rewritten as

u' = KR \begin{pmatrix} \tan\frac{\theta}{2} + \tan\frac{\phi}{2} \\ 1 - \tan\frac{\theta}{2}\tan\frac{\phi}{2} \end{pmatrix}
   = KR \begin{pmatrix} 1 & \tan\frac{\theta}{2} \\ -\tan\frac{\theta}{2} & 1 \end{pmatrix} \begin{pmatrix} \tan\frac{\phi}{2} \\ 1 \end{pmatrix}
   = KR \begin{pmatrix} \cos\frac{\theta}{2} & \sin\frac{\theta}{2} \\ -\sin\frac{\theta}{2} & \cos\frac{\theta}{2} \end{pmatrix} \begin{pmatrix} \tan\frac{\phi}{2} \\ 1 \end{pmatrix}
   = KR\, R(\theta/2) \begin{pmatrix} \tan\frac{\phi}{2} \\ 1 \end{pmatrix}.   (5)

Substituting (4) into (5) gives

u' = KR\, R(\theta/2)\, R^{-1} K^{-1} u = K R(\theta/2) K^{-1} u.

The special case where X' overlaps with the camera center can be proved in a similar manner.

Note that the eigenvalues of H are {e^{j\theta/2}, e^{-j\theta/2}}, which are functions of the rotation angle θ, and the eigenvectors are the two imaged circular points i and j. H has only three degrees of freedom, which indicates that only 3 constraints are required to determine it uniquely.
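A small numerical sanity check of Proposition 1 (again an illustrative sketch rather than anything from the paper; the intrinsics and angle are arbitrary) builds H for a known K and θ and reads the rotation angle and the imaged circular points back off its eigen-decomposition:

```python
import numpy as np

# Numerical check of Proposition 1 (f, u0 and theta are assumed values).
f, u0, theta = 700.0, 320.0, np.deg2rad(10.0)

K = np.array([[f, u0],
              [0.0, 1.0]])
c, s = np.cos(theta / 2), np.sin(theta / 2)
R_half = np.array([[c, s],
                   [-s, c]])

H = K @ R_half @ np.linalg.inv(K)          # Eq. (3): H = K R(theta/2) K^{-1}

w, V = np.linalg.eig(H.astype(complex))

# The eigenvalues are e^{+/- j theta/2}; their phase difference gives the rotation angle.
theta_rec = abs(np.angle(w[0] / w[1]))
print(np.rad2deg(theta_rec))               # ~ 10.0

# Each eigenvector is an imaged circular point; its component ratio is u0 +/- j*f,
# so the 1D camera intrinsics can be read off directly.
for k in range(2):
    cp = V[0, k] / V[1, k]
    print(cp.real, abs(cp.imag))           # ~ (320.0, 700.0)
```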

4 Application of the 1D Homography to Circular Motion Estimation

Circular motion refers to the case of a stationary camera viewing an object rotating around an axis, or a camera rotating around an axis with the object being stationary. These two cases are geometrically equivalent, and the rotating camera case is used here for ease of explanation. As mentioned in Section 1, all the camera centers lie on a circle, and the projection of the plane containing this circle onto the image plane can be modelled by a 1D camera projection.

Figure 2: (a) Two views and one point configuration. View 2 is obtained by rotating view 1 by an angle of θ. (b) One view and two points configuration. The setup of X and the camera is the same as in (a).

Consider the projection of one camera center X into two views, as shown in Fig. 2(a). The rotation angle between the two views is θ. It is easy to see that the projection of X in view 2 in Fig. 2(a) is equivalent to the projection of X' in view 1 in Fig. 2(b), where the rotation angle between X and X' is θ. From Proposition 1, u and u' satisfy u' = Hu, where H = K R(θ/2) K^{-1}. This indicates that one such pair u and u', induced by a camera center X, provides one constraint on the homography H relating a pair of views. In practice, there are many images taken under circular motion, so there should be enough constraints for estimating H relating any pair of views. The rotation angle and the imaged circular points can thus be recovered easily from the eigenvalues and eigenvectors of H.

Now let us focus on the minimum data required for computing H relating a view pair. Consider two views: if the vanishing point v as illustrated in Proposition 1 is known, the two epipoles together with v can provide 2 independent constraints on H. Under this situation, one more constraint provided by another view is enough to determine H. This is the minimum data required for determining H. The computation of H then involves only 3 views, which is the same as in the work of [1]. In practice, H relating any view pair can be computed more accurately by incorporating constraints induced by as many views as possible.
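One way to exploit all such constraints is a DLT-style linear estimate of H: each pair (u, u') gives one linear equation on the four entries of H, and the stacked system is solved by SVD. The sketch below is an assumed implementation of this idea, not code from the paper:

```python
import numpy as np

def estimate_H_1d(u, u_prime):
    """Linear DLT-style estimate of the 2x2 homography u' ~ H u from homogeneous
    1D points (arrays of shape (n, 2)). Each pair gives one equation
    u'_1 (H u)_2 - u'_2 (H u)_1 = 0, linear in the four entries of H; since H has
    3 degrees of freedom, n >= 3 pairs suffice. Illustrative sketch only."""
    rows = []
    for (a, b), (c, d) in zip(u, u_prime):
        rows.append([-d * a, -d * b, c * a, c * b])    # row acting on vec(H)
    _, _, Vt = np.linalg.svd(np.asarray(rows, dtype=float))
    return Vt[-1].reshape(2, 2)                        # smallest singular vector, up to scale

# Tiny synthetic test with an H of the form derived in Proposition 1.
f, u0, theta = 700.0, 320.0, np.deg2rad(20.0)
K = np.array([[f, u0], [0.0, 1.0]])
c, s = np.cos(theta / 2), np.sin(theta / 2)
H_true = K @ np.array([[c, s], [-s, c]]) @ np.linalg.inv(K)

u = np.column_stack([np.array([100.0, 250.0, 400.0, 550.0]), np.ones(4)])
u_prime = (H_true @ u.T).T
H_est = estimate_H_1d(u, u_prime)
print(np.max(np.abs(H_est / H_est[1, 1] - H_true / H_true[1, 1])))   # ~ 0
```

With exact data three pairs suffice, matching the three degrees of freedom of H; additional pairs simply turn the same system into a least-squares estimate.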

4.1 Acquisition of Epipoles from Point Correspondences

The previous section described how the proposed 1D homography can be used to solve the circular motion problem. In this section, we discuss how to compute this homography from the images. The key point here is to obtain the epipoles of the view pairs. Since the epipoles can be extracted from the fundamental matrices, computing the epipoles relating any pair of views is equivalent to computing the associated fundamental matrix. In this section, we summarize how to compute the epipoles in an elegant algebraic formulation, resulting in a concise computational algorithm based on point correspondences.

The projective geometry of circular motion has been nicely studied by Fitzgibbon et al. [6]. There are a number of fixed features under circular motion, and the fundamental matrix F can be parameterized in terms of these fixed features as [6]

F = [v_x]_\times + k \tan\frac{\theta}{2} (l_s l_h^T + l_h l_s^T),   (6)

where the fixed features v_x, l_h and l_s are scaled to unit norm and k is an unknown scalar which is fixed throughout the sequence. Here l_s is the image of the rotation axis, and l_h is the vanishing line of the horizontal plane and also the image line of the plane containing the camera centers. v_x is the vanishing point of the normal direction of the plane π_d defined by the camera center and the rotation axis. Besides, the two imaged circular points on l_h are also fixed features, which are to be recovered from the proposed 1D homography.

Equation (6) reveals that the fundamental matrix relating different view pairs can be obtained by changing the scalar k tan(θ/2), with θ being the rotation angle between the chosen views. Given a fundamental matrix F̂ relating a particular view pair, the fixed features v_x, l_s and l_h can be obtained by decomposing F̂ [6]. With the resulting v_x, l_s and l_h, only a minimum of one point correspondence is needed to compute the fundamental matrix relating any two views in the sequence. This makes the computation of the fundamental matrix for view pairs with large θ possible. Note that the computation of the epipoles here requires no point to be tracked over several images, as opposed to the method in [7], which requires 2 points to be available in 4 images.

Based on the above analysis, a simple algorithm is developed, which is summarized below:

1. Compute one fundamental matrix from any two given views. The two selected views should be as distant as possible while sharing enough corresponding points.
2. Decompose the fundamental matrix obtained in step 1 into the form of (6) to get v_x, l_s and l_h.
3. Compute the fundamental matrices relating the other view pairs using corresponding points. Extract the epipoles from the fundamental matrices.
4. Compute the 1D homography H for each neighboring view pair from the epipoles. Recover the rotation angles and imaged circular points from the eigenvalues and eigenvectors of H.
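Step 3 reduces to a one-parameter problem: once v_x, l_s and l_h are fixed, a single correspondence x1 ↔ x2 between the two views determines the unknown scalar μ = k tan(θ/2) through x2^T F x1 = 0. A possible sketch of this step (illustrative only; the function names are hypothetical and this is not the authors' implementation):

```python
import numpy as np

def skew(v):
    """Cross-product matrix [v]_x of a homogeneous 3-vector."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def F_from_one_match(v_x, l_s, l_h, x1, x2):
    """Form F for a new view pair from the fixed features of Eq. (6) and a single
    correspondence x1 <-> x2 (homogeneous image points). Only the scalar
    mu = k tan(theta/2) changes between view pairs, and x2^T F x1 = 0 is linear in mu."""
    A = skew(v_x)
    B = np.outer(l_s, l_h) + np.outer(l_h, l_s)
    mu = -(x2 @ A @ x1) / (x2 @ B @ x1)
    return A + mu * B

def epipoles_from_F(F):
    """Epipoles as the right null vectors of F and F^T (homogeneous 3-vectors)."""
    e1 = np.linalg.svd(F)[2][-1]        # F e1 = 0
    e2 = np.linalg.svd(F.T)[2][-1]      # F^T e2 = 0
    return e1, e2
```

With noisy data, μ would naturally be estimated in a least-squares sense over all correspondences available for the view pair.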

4.2 Acquisition of Epipoles from Silhouettes

In [9], Mendonça et al. demonstrated that the epipoles relating any pair of views can be computed using the information of the silhouettes. The degenerate case happens only when l_h passes through the silhouettes and the rotation angle between the two views is close to π. Hence, the geometric information from the whole sequence can be nicely exploited in the computation of H relating a pair of views. In this section, we briefly summarize the algorithm described in [9].

Consider a pair of camera matrices P_1 and P_2, with epipole e_i formed in the image of camera P_i. The fundamental matrix under circular motion can be written in a plane plus parallax representation, given by

F = [e_2]_\times W,   (7)

where

W = I - 2 \frac{v_x l_s^T}{v_x^T l_s}

is the homography induced by the plane Π_W that contains the axis of rotation and bisects the line segment joining the 2 camera centers [9]. Note that W is also the harmonic homology associated with the image of the surface of revolution swept by the rotating object [13]. The imaged rotation axis l_s becomes the image of the axis of the surface of revolution. It has been shown in [9] that, given a dense image sequence taken under complete circular motion, say with rotation angles of about 5°, the imaged envelope obtained by overlapping all the images can be viewed as an imaged surface of revolution. As indicated in [13], the associated harmonic homology W, or equivalently l_s and v_x, can be estimated from this imaged surface of revolution by intersecting the bitangents to the image profile.

Since W satisfies W^{-1} = W, it follows that the corresponding epipolar lines l_1 and l_2 are mapped to each other by W^T, as

l_2 = W^T l_1.   (8)

Note that the outer epipolar tangents are special epipolar lines in that they are tangent to the silhouettes of the object. This geometric constraint, together with (8), makes the computation of the two outer epipolar tangents possible. The search for one outer epipolar tangent proceeds as a one-dimensional optimization of its orientation (see [9] for details). The epipoles can be directly computed as the intersection of the two recovered outer epipolar tangents, and l_h can then be obtained by robustly fitting a line to these estimated epipoles. Finally, the epipoles can be refined by constraining them to lie on l_h. Given the epipoles, the 1D homography can be recovered and the solution to the motion problem follows.
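The geometric ingredients of this procedure can be written down compactly. The sketch below (an illustration under the assumptions stated in its comments, with hypothetical helper names; the actual one-dimensional search for the tangent orientation in [9] is omitted) forms the harmonic homology, transfers epipolar lines by (8), intersects the two outer tangents to obtain an epipole, and fits l_h to a set of epipoles:

```python
import numpy as np

def harmonic_homology(v_x, l_s):
    """W = I - 2 v_x l_s^T / (v_x^T l_s): the plane-induced homography of Eq. (7),
    which is also the harmonic homology of the imaged surface of revolution."""
    return np.eye(3) - 2.0 * np.outer(v_x, l_s) / (v_x @ l_s)

def transfer_epipolar_line(W, l1):
    """Eq. (8): l2 = W^T l1 (valid because W is an involution, so W^{-T} = W^T)."""
    return W.T @ l1

def epipole_from_tangents(l_a, l_b):
    """Epipole as the intersection of the two recovered outer epipolar tangents."""
    return np.cross(l_a, l_b)

def fit_horizon(epipoles):
    """Fit l_h to a set of estimated epipoles (rows of a matrix, normalised to a
    comparable scale beforehand): the unit line l minimising |E l|."""
    E = np.asarray(epipoles, dtype=float)
    return np.linalg.svd(E)[2][-1]
```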

5 Experimental Results

The new method for computing rotation angles and imaged circular points from 1D projective geometry was implemented and tested using both point correspondences and silhouettes.

5.1 Point-Based Experiment

The sequence tested is the popular dinosaur sequence from the University of Hannover. It contains 36 images taken under circular motion with a constant rotation angle of 10°. The angular accuracy is about 0.05° [10]. The fundamental matrix F relating the 7th and 10th images was computed to extract the fixed features. To improve robustness to noise, a robust linear regression algorithm was exploited to estimate the homography. Since the 1D imaged circular points (a ± jb, 1)^T should be fixed over the sequence, they were estimated from the histograms of the components a and b obtained by randomly selecting the view pairs in the computation of the 1D homography, as shown in Fig. 3. The distributions are close to a Gaussian distribution. Note that the best solutions are very distinct and are denoted by the dashed lines. The final computed 2D imaged circular points are (254.2 ± j…, …, 1)^T.

Figure 3: Histograms of a and b, the coordinates of the 1D imaged circular points (a ± jb, 1)^T. The dashed lines indicate the best solutions computed.

To get the best solution for each rotation angle, the optimized imaged circular points were exploited. Note that the 1D homography H relating a view pair has only 3 degrees of freedom. This number is far less than the number of constraints on H provided by the projections of the other camera centers. For a pair of neighboring views, H was computed many times, each time with a portion of the constraints randomly selected from the whole set. The homography that gives the imaged circular points closest to the optimized ones was taken as the best solution. The rotation angle was then extracted from it.

Figure 4: (a) Recovered rotation angles for the dinosaur sequence. (b) Recovered rotation angles for the David sequence.

Figure 4(a) gives the recovered angles for the whole sequence. The RMS error is 0.7°, which indicates that the estimation was very good. Assuming that the camera satisfies the unit aspect ratio and zero-skew restrictions, the intrinsics of the camera were estimated from the imaged circular points and the constraints provided by l_s and v_x [14]. The visual hull of the dinosaur was computed as in [8], and three views of the 3D model are shown in Fig. 5.

Figure 5: 3D reconstruction of the dinosaur using motion estimated from point correspondences.
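The random-subset selection described above is straightforward to reproduce. A possible sketch, assuming the hypothetical `estimate_H_1d` helper from the earlier sketch (or any equivalent linear estimator) and illustrative trial parameters:

```python
import numpy as np

def best_rotation_angle(pairs, cp_opt, estimate_H, n_trials=200, frac=0.7, seed=0):
    """Random-subset selection sketched from the description above (an illustrative
    re-implementation, not the authors' code; n_trials and frac are assumptions).

    pairs      : list of (u, u') homogeneous 1D point pairs constraining H
    cp_opt     : complex a + jb, the optimised imaged circular point from the histograms
    estimate_H : any linear estimator of the 2x2 homography, e.g. the DLT sketch above
    """
    rng = np.random.default_rng(seed)
    best_dist, best_theta = np.inf, None
    for _ in range(n_trials):
        idx = rng.choice(len(pairs), size=max(3, int(frac * len(pairs))), replace=False)
        u = np.array([pairs[i][0] for i in idx])
        up = np.array([pairs[i][1] for i in idx])
        H = estimate_H(u, up)
        w, V = np.linalg.eig(H.astype(complex))
        ratios = V[0] / V[1]                         # the two imaged circular points a +/- jb
        cp = ratios[np.argmax(ratios.imag)]          # keep the branch with b > 0
        dist = abs(cp - cp_opt)
        if dist < best_dist:                         # keep H closest to the optimised solution
            best_dist = dist
            best_theta = abs(np.angle(w[0] / w[1]))  # rotation angle from the eigenvalues
    return best_theta
```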

5.2 Silhouette-Based Experiment

In this experiment, the silhouettes are represented by cubic B-spline snakes [3]. This provides a compact representation of silhouettes of various complexity, and can achieve sub-pixel localization accuracy. The epipoles were estimated using the method described in Section 4.2. After obtaining the epipoles, the remaining steps are similar to the point-based experiment.

An experiment on a David sequence was carried out to test the applicability of our proposed approach. The sequence consists of 72 images, with a resolution of …, acquired with an electronic turntable. The rotation angle between successive views is 5°, with a resolution of 0.2°. Figure 6 shows two selected images.

Figure 6: Two images of David under circular motion.

The distributions of the histograms of the estimated components of the 1D circular points are similar to those of the dinosaur sequence. Due to the space limit, they are not shown here. For rotation angle estimation and 3D reconstruction, a subsequence of 36 views was selected from the whole set, with a rotation angle of 10°. The estimated rotation angles are shown in Fig. 4(b), with an RMS error of 0.2°. Some views of the 3D model constructed are shown in Fig. 7. Experiments on other sequences were also performed, and the results were impressive.

Figure 7: 3D reconstruction of David using motion estimated from silhouettes.

6 Conclusion

In this paper, we have introduced a 1D homography between two 1D projective cameras, induced by points lying on a circle passing through the two camera centers. We have shown that the eigenvectors of such a homography are the imaged circular points, and that the rotation angle between the two cameras with respect to the center of the circle can be extracted from the eigenvalues of the homography. We have demonstrated that this 1D geometry can be nicely applied to circular motion estimation. Circular motion is a special kind of planar motion with all the camera centers lying on a circle, and this configuration exactly satisfies the requirement of the proposed 1D geometry. Experiments using both point correspondences and silhouettes demonstrate that the recovered rotation angles achieve a very high precision. This is because the algorithm efficiently and effectively exploits the underlying multiple view information. The high quality of the reconstructed models demonstrates the practicality of our work.

References

[1] M. Armstrong, A. Zisserman, and R. Hartley. Self-calibration from image triplets. In B. Buxton and R. Cipolla, editors, Proc. 4th European Conf. on Computer Vision, volume 1064 of Lecture Notes in Computer Science, pages 3-16, Cambridge, UK, April 1996. Springer-Verlag.

[2] K. Astrom and M. Oskarsson. Solutions and ambiguities of the structure and motion problem for 1D retinal vision. Journal of Mathematical Imaging and Vision, 12(2):121-135, April 2000.

[3] R. Cipolla and A. Blake. Surface shape from the deformation of apparent contours. Int. Journal of Computer Vision, 9(2):83-112, November 1992.

[4] C. Colombo, A. del Bimbo, and F. Pernici. Metric 3D reconstruction and texture acquisition of surfaces of revolution from a single uncalibrated view. IEEE Trans. on Pattern Analysis and Machine Intelligence, 27(1):99-114, January 2005.

[5] O. D. Faugeras, L. Quan, and P. F. Sturm. Self-calibration of a 1D projective camera and its application to the self-calibration of a 2D projective camera. IEEE Trans. on Pattern Analysis and Machine Intelligence, 22(10):1179-1185, October 2000.

[6] A. W. Fitzgibbon, G. Cross, and A. Zisserman. Automatic 3D model construction for turntable sequences. In R. Koch and L. Van Gool, editors, 3D Structure from Multiple Images of Large-Scale Environments, European Workshop SMILE'98, volume 1506 of Lecture Notes in Computer Science, pages 155-170, Freiburg, Germany, June 1998. Springer-Verlag.

[7] G. Jiang, L. Quan, and H. T. Tsui. Circular motion geometry by minimal 2 points in 4 images. In Proc. 9th Int. Conf. on Computer Vision, 2003.

[8] C. Liang and K.-Y. K. Wong. Complex 3D shape recovery using a dual-space approach. In Proc. Conf. Computer Vision and Pattern Recognition, volume II, 2005.

[9] P. R. S. Mendonça, K.-Y. K. Wong, and R. Cipolla. Epipolar geometry from profiles under circular motion. IEEE Trans. on Pattern Analysis and Machine Intelligence, 23(6):604-616, June 2001.

[10] W. Niem. Robust and fast modelling of 3D natural objects from multiple views. In SPIE Proceedings - Image and Video Processing II, volume 2182, 1994.

[11] L. Quan and T. Kanade. Affine structure from line correspondences with uncalibrated affine cameras. IEEE Trans. on Pattern Analysis and Machine Intelligence, 19(8), 1997.

[12] K.-Y. K. Wong and R. Cipolla. Structure and motion from silhouettes. In Proc. 8th Int. Conf. on Computer Vision, volume II, Vancouver, BC, Canada, July 2001.

[13] K.-Y. K. Wong, P. R. S. Mendonça, and R. Cipolla. Camera calibration from symmetry. In R. Cipolla and R. Martin, editors, Proc. 9th IMA Conference on the Mathematics of Surfaces, Cambridge, UK, September 2000. Springer-Verlag.

[14] H. Zhang, G. Zhang, and K.-Y. K. Wong. Auto-calibration and motion recovery from silhouettes for turntable sequences. In Proc. British Machine Vision Conference, volume I, pages 79-88, Oxford, UK, September 2005.
