
Czech Technical University, Prague
The Center for Machine Perception

Camera Calibration and Euclidean Reconstruction from Known Translations

Tomas Pajdla and Vaclav Hlavac
Computer Vision Laboratory, Czech Technical University
Karlovo nam. 13, Praha 2
e-mail: pajdla@vision.felk.cvut.cz

October 4, 1996

Reference
[1] Tomas Pajdla and Vaclav Hlavac. Camera calibration and Euclidean reconstruction from known translations. Presented at the workshop Computer Vision and Applied Geometry, Nordfjordeid, Norway, August 1995. This publication can be obtained via anonymous ftp from ftp://cmp.felk.cvut.cz/pub/cvl/articles/pajdla/cvag95.ps.gz

Czech Technical University, Faculty of Electrical Engineering
Department of Control Engineering, Center for Machine Perception, Computer Vision Laboratory
Karlovo namesti 13, Praha 2, Czech Republic

Camera Calibration and Euclidean Reconstruction from Known Translations*

Tomas Pajdla and Vaclav Hlavac
Computer Vision Laboratory, Czech Technical University
Karlovo nam. 13, Praha 2
e-mail: pajdla@vision.felk.cvut.cz

Abstract. We present a technique for camera calibration and Euclidean reconstruction from multiple images of the same scene. Unlike the standard Tsai camera calibration from a known scene, we exploit controlled, known motions of the camera to obtain the calibration and the Euclidean reconstruction without any knowledge about the scene. We consider three translations of an uncalibrated, but the same, camera mounted on a robot arm, providing us with four views of the scene. We also assume that we can measure the translations of some Euclidean coordinate system rigidly attached to the camera. This special, but still realistic, arrangement yields a linear algorithm for the recovery of all intrinsic camera calibration parameters, the rotation parameters of the camera with respect to the robot coordinate system, and the proper scaling factors for all points, allowing their Euclidean reconstruction. The experiments show that an efficient and robust algorithm is obtained by exploiting Total Least Squares in combination with careful normalization of image coordinates.

1 Introduction

Standard stereo [1] delivers Euclidean reconstructions if the calibrations of the cameras are available and if their mutual positions and orientations are known. Tsai's "bundle adjustment" camera calibration technique [8] recovers the camera parameters by finding a projection of a known scene, i.e. the three-dimensional coordinates of some points in the scene are explicitly measured, into the observed images. If the scene is not known, other knowledge must be exploited to calibrate the cameras and reconstruct the scene.

* This work was partially done during the visit of T. Pajdla at ESAT, K.U.Leuven. Support by Esprit Basic Research Action `VIVA' and the Belgian project IUAP-50 on Robotics and Industrial Automation is gratefully acknowledged. This research was also supported by the Grant Agency of the Czech Republic, grant 102/95/1378, the European Union Copernicus grant RECCAD, and by Czech Ministry of Education project No. VS96049.

Faugeras has shown [2] that if there is no information whatsoever about the scene and the cameras, only a projective reconstruction can be obtained. Maybank and Faugeras have also developed [6] an algorithm for Euclidean reconstruction from three images of a scene taken by the same camera. This method assumes no extra knowledge about the scene and camera motions, but requires solving an overdetermined system of nonlinear equations. Moons et al. have used knowledge about the motion to decrease the uncertainty in reconstructions. They presented [7] an affine reconstruction for the case when the images were taken by a translated camera. Horaud, Mohr et al. [5] exploited controlled motion of a camera to get a Euclidean reconstruction of the scene. They first computed a projective reconstruction and then obtained a camera calibration by solving a set of quadratic equations.

In this work we deal with linear camera calibration from known motions. Our approach is similar to the work of Horaud et al. [5] in the respect that known motions and an unknown scene are assumed. Similarly, our experiments are carried out with a camera mounted on a robot arm. On the other hand, we show how the calibration from known pure translations reduces to solving a set of linear equations. This allows us to construct an efficient and robust camera calibration algorithm that does not require any special calibration objects.

In the next section we describe the controlled motion and define the model of a perspective camera. The calibration method is presented in Section 2.1. Finally, experiments corroborating the feasibility of the proposed method are shown in Section 3.

2 Camera calibration from controlled motion

We consider a camera with fixed internal parameters rigidly attached to a positioning device such as the arm of a robot, see Figure 1. We expect that the robot is equipped with a Cartesian coordinate system in which the positions T_i and the orientations R_i of the arm can be measured. The rigid transformation between the camera affine coordinate system and the local coordinate system of the arm, (R, T), is not known but is assumed to remain constant during the measurement.

A perspective linear camera projects points from a 3-dimensional projective space P^3 into a 2-dimensional projective space P^2. Points of the 3-dimensional projective space are represented by homogeneous 4-vectors X, and points of the retinal plane are regarded as homogeneous 3-vectors U. The fourth element of X, X_4, can be set to 1 for finite points, and therefore finite points will henceforth be considered to be of the form (x^T, 1)^T. The vectors U are measured in images only up to some non-zero scale. It is often desirable to express finite points U = (p, q, r)^T in the form U = λu, where u = (u, v, 1)^T, so that u and v have the meaning of affine, pixel image coordinates. The space-to-image mapping of a perspective camera can be represented by a 3×4 matrix M of rank 3.

Fig. 1. Images of the scene are taken by the camera rigidly mounted on a robot's arm.
Fig. 2. In the case of pure translations, the translation vector between the camera centers equals the translation vector between the arm coordinate systems.

If U and X are corresponding points in P^2 and P^3, respectively, then the mapping is explicitly given by

$$\lambda U = M X, \quad \text{or} \quad \lambda u = M \begin{pmatrix} x \\ x_4 \end{pmatrix}. \qquad (1)$$

The matrix M can be decomposed as M = K R (I | −t), where t represents the position of the camera, R is a rotation matrix representing the orientation of the camera, and K is an upper triangular matrix, the camera calibration matrix. It suits our purposes to further rearrange the terms in equation (1), so that the unknown entities which are to be identified become more explicit:

$$\lambda u = K R \,(I \mid -t) \begin{pmatrix} x \\ x_4 \end{pmatrix} = K R\, x - K R\, t\, x_4 = A\,(x - x_4 t). \qquad (2)$$

The affine matrix A = K R represents the transformation from a Euclidean coordinate system attached to the camera into its retinal, in general affine, coordinate system. In the next sections we seek the matrix A using the image coordinates of corresponding points and the known relative motions. Having the matrix A, it is a simple matter to obtain the matrix K, using the QR decomposition of matrices.
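The paper obtains K from A by a QR-type decomposition but gives no further details. As an illustration only (not the authors' code), the factorization A = K R with K upper triangular and R orthogonal is an RQ decomposition, which can be computed with NumPy's QR routine applied to a flipped matrix; the helper name rq_decompose and the sign convention below are assumptions of this sketch.

```python
import numpy as np

def rq_decompose(A):
    """Hypothetical helper: factor a nonsingular 3x3 matrix as A = K @ R,
    with K upper triangular and R orthogonal (an RQ decomposition)."""
    Q, R = np.linalg.qr(np.flipud(A).T)   # QR of the row-reversed, transposed A
    K = np.flipud(R.T)[:, ::-1]           # reverse rows and columns -> upper triangular
    Rot = np.flipud(Q.T)                  # still orthogonal
    S = np.diag(np.sign(np.diag(K)))      # make diag(K) positive; S @ S = I
    return K @ S, S @ Rot

# Self-check on a random matrix: A == K @ R, K upper triangular, R orthogonal.
A = np.random.rand(3, 3)
K, R = rq_decompose(A)
assert np.allclose(K @ R, A)
assert np.allclose(K, np.triu(K))
assert np.allclose(R @ R.T, np.eye(3))
```

Since A is later recovered only up to scale, K would typically be rescaled afterwards so that its (3,3) entry equals 1; if det R = −1, the overall sign of A can be flipped without affecting the homogeneous equations.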

2.1 Calibration from three known translations and two points in four views

Let us assume that we can measure three relative translations t_i, i = 1, …, 3, in some Euclidean coordinate system rigidly attached to the camera, yet in an unknown relation to the camera affine coordinates. In our case, when the camera is mounted on a robot arm, the translation vectors T_i, i = 0, …, 3, are available in the robot coordinate system. Therefore, by setting t_i = T_i − T_0, the relative translations are obtained. The vectors t_i are measured as the differences of arm positions. They equal the translation vectors between the camera focal points since there is no rotation of the camera, see Figure 2.

Moreover, let two unknown points X_1, X_2 from the scene project into the image points u_{i1}, u_{i2}, where i = 0, …, 3 numbers the views. The idea here is to first obtain an affine reconstruction of the points, which by itself is a linear problem. Having the affine reconstruction, it will be shown that the camera calibration matrix can be recovered by solving linear equations if the relative motions are known.

Substituting T_i and x_4 = 1 into equation (2) yields

$$\lambda_{i1} u_{i1} = K R\, x_1 - K R\, T_i, \qquad \lambda_{i2} u_{i2} = K R\, x_2 - K R\, T_i, \qquad i = 0, \dots, 3. \qquad (3)$$

If all equations i = 1, …, 3 are subtracted from the zeroth one, i = 0, a set of equations in which the unknown points x_1 and x_2 are eliminated is obtained:

$$\lambda_{01} u_{01} - \lambda_{i1} u_{i1} = -A\, t_i, \qquad \lambda_{02} u_{02} - \lambda_{i2} u_{i2} = -A\, t_i, \qquad i = 1, \dots, 3. \qquad (4)$$

Now, if the equations on the right-hand side are subtracted from those on the left, the following set of homogeneous linear equations is obtained:

$$\lambda_{01} u_{01} - \lambda_{i1} u_{i1} - \lambda_{02} u_{02} + \lambda_{i2} u_{i2} = 0, \qquad i = 1, \dots, 3. \qquad (5)$$

Assuming that the u_{ij}, i = 1, …, 3, j = 1, 2, are measured in images implies that they are finite points and can be expressed as u_{ij} = [u_{ij}\ v_{ij}\ 1]^T. Therefore the above equations can be rewritten as

$$\begin{pmatrix} u_{01} & -u_{i1} & -u_{02} & u_{i2} \\ v_{01} & -v_{i1} & -v_{02} & v_{i2} \\ 1 & -1 & -1 & 1 \end{pmatrix} \begin{pmatrix} \lambda_{01} \\ \lambda_{i1} \\ \lambda_{02} \\ \lambda_{i2} \end{pmatrix} = 0, \qquad i = 1, \dots, 3, \qquad (6)$$

and solved for the unknown λ's provided that one of them, e.g. λ_{01}, is kept fixed for all i. Given the λ's, the points x_i in (3) are determined up to some unknown affine transformation. Exactly this is used to obtain an affine reconstruction of a scene from pure translations in [7]. Similarly, given the λ's, the left-hand sides of the equations in (4) are all determined up to some unknown, but common, scale. Thus, once the λ's are determined and if translations t_i spanning a 3-dimensional linear space are known, A can be computed from (4) up to a scale.

Two image points are needed to form equation (6). This means that at least two corresponding points must be tracked in four images, since minimally three vectors t_i, i.e. four images, are needed to span a three-dimensional space.
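Equation (6) can be solved, for each view i, as a null-space problem. The following is a minimal sketch under an assumed data layout (the two tracked points stored as rows of homogeneous 3-vectors); the function name and the choice to fix λ_{01} = 1 are illustrative, not prescribed by the paper.

```python
import numpy as np

def scales_from_equation_6(u0, ui):
    """u0, ui: 2x3 arrays with the homogeneous image points (u, v, 1) of the
    two tracked points in view 0 and in view i.  Returns the 4-vector
    (lambda_01, lambda_i1, lambda_02, lambda_i2), normalized so lambda_01 = 1."""
    # 3x4 coefficient matrix of equation (6)
    M = np.column_stack([u0[0], -ui[0], -u0[1], ui[1]])
    _, _, Vt = np.linalg.svd(M)
    lam = Vt[-1]                 # null vector: right singular vector belonging
    return lam / lam[0]          # to the smallest singular value; fix lambda_01 = 1
```

In practice these per-view systems are not solved separately; Section 2.2 stacks them, together with the unknown entries of A, into one homogeneous system.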

2.2 Computing the calibration from many points

Equations (6) were derived mainly to show the relation to an affine reconstruction and to prove that the λ's can be computed from image coordinates alone. In the actual computation, both steps, i.e. computing the λ's and the matrix A, are grouped together, so that one system of homogeneous linear equations is solved. After some manipulation of equations (4), a set of linear equations of the form C b = 0 is obtained. Using the notation t_i = (t_{i1}, t_{i2}, t_{i3})^T and u_{ij} = (u_{ij}, v_{ij}, 1)^T, C b = 0 can be written, for point j and view i, as the block of three equations

$$\begin{pmatrix} u_{0j} & -u_{ij} & t_{i1} & t_{i2} & t_{i3} & 0 & 0 & 0 & 0 & 0 & 0 \\ v_{0j} & -v_{ij} & 0 & 0 & 0 & t_{i1} & t_{i2} & t_{i3} & 0 & 0 & 0 \\ 1 & -1 & 0 & 0 & 0 & 0 & 0 & 0 & t_{i1} & t_{i2} & t_{i3} \end{pmatrix} \begin{pmatrix} \lambda_{0j} \\ \lambda_{ij} \\ a_{11} \\ a_{12} \\ a_{13} \\ a_{21} \\ a_{22} \\ a_{23} \\ a_{31} \\ a_{32} \\ a_{33} \end{pmatrix} = 0, \qquad (7)$$

where the blocks for i = 1, …, 3 and j = 1, 2 are stacked into C, sharing the columns for the λ's and for the entries of A, so that b = (λ_{01}, λ_{11}, λ_{21}, λ_{31}, λ_{02}, λ_{12}, λ_{22}, λ_{32}, a_{11}, …, a_{33})^T.

We have seen that only two points in four images suffice to uniquely determine the matrix A as well as all the λ's. However, it is always desirable to use all the points available and, if possible, to design the calibration in such a way that the whole field of view is covered by data. As more points are added, the equations (7) can be generated for each pair of points. Thus, for n points, there would be n(n−1)/2 systems of equations similar to (7), each giving rise to one (ideally the same) A and four λ's. On the other hand, we can group all these equations together into one global matrix C in order to find a common consensus on A and the λ's.

There is yet another reason to solve all the equations together. Since all point pairs sharing a common point also share its equations, most of the equations are the same. The repeated equations can be left out without any loss. It is easy to see that if all redundant equations are removed, all that remains is just one set of equations (i.e. the left equations in (4)) per point. Hence, each point tracked in four views contributes 9 new equations but only 4 new variables, its λ's, to be solved for. Consequently, the global C has only 9n rows and 4n + 9 columns (the extra 9 columns correspond to the unknown entries of A). It also means that we solve for only 4n + 9 unknowns, and not for the unknowns of all n(n−1)/2 point pairs treated independently.

The left-hand sides of the equations (4) are always linearly independent if the t_i span a three-dimensional space, since we assume that A is not singular. This could be violated only in the presence of extreme noise or if the model were invalid. The equation C b = 0 is homogeneous and can be solved numerically by the SVD [3], since b equals the right singular vector of C corresponding to its smallest singular value. This solution corresponds to the Total Least Squares solution of an overdetermined linear system [9].
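A compact way to assemble the global matrix C (9n rows, 4n + 9 columns) and to take the Total Least Squares solution via the SVD could look as follows. This is a sketch under assumed conventions (view 0 as the reference view, one 9-equation block per point, λ's ordered per point before the entries of A); the function name and array layout are illustrative, not the authors' interface.

```python
import numpy as np

def calibrate_from_translations(u, t):
    """Total Least Squares solution of C b = 0 assembled from equations (4).

    u : (4, n, 3) array, homogeneous image points (u, v, 1) of n points
        tracked in views 0..3 (view 0 is the reference view).
    t : (3, 3) array, the relative translations t_1, t_2, t_3 as rows.

    Returns A (3x3, up to a common scale) and the scales lambda_ij,
    shape (4, n).  Data layout and names are assumptions of this sketch.
    """
    n = u.shape[1]
    C = np.zeros((9 * n, 4 * n + 9))
    for j in range(n):                      # one 9-equation block per point
        for i in range(1, 4):               # views 1..3 against view 0
            for k in range(3):              # the three homogeneous coordinates
                row = 9 * j + 3 * (i - 1) + k
                # equations (4): lambda_0j u_0j - lambda_ij u_ij + A t_i = 0
                C[row, 4 * j] = u[0, j, k]            # coefficient of lambda_0j
                C[row, 4 * j + i] = -u[i, j, k]       # coefficient of lambda_ij
                C[row, 4 * n + 3 * k:4 * n + 3 * k + 3] = t[i - 1]  # k-th row of A
    # b = right singular vector of C for the smallest singular value (TLS).
    b = np.linalg.svd(C)[2][-1]
    lam = b[:4 * n].reshape(n, 4).T         # lam[i, j] = lambda_ij
    A = b[4 * n:].reshape(3, 3)
    return A, lam
```

As expected for a homogeneous system, A and the λ's are recovered only up to a common scale (and sign).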

In order to obtain stable results, one has to scale the coordinates in the images and the translation vectors so that they have similar ranges of magnitude [4].

2.3 Euclidean reconstruction

By setting x_4 to 1 in (2), we can reconstruct the calibration points in the robot coordinate system as

$$x_i = A^{-1} \lambda_{0i} u_{0i} + T_0. \qquad (8)$$
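The scaling advocated above and the reconstruction step (8) might be sketched as follows. The specific normalization (centering the image points and scaling them to a mean radius of √2, in the spirit of [4]) is an assumption of this sketch; the paper only requires that image coordinates and translations have similar magnitudes. Note that if A is estimated from normalized points û = N u, it refers to that frame and must be mapped back, A ∝ N⁻¹ Â.

```python
import numpy as np

def normalize_points(u):
    """Similarity-normalize homogeneous image points of shape (..., 3):
    move the centroid to the origin and scale the mean radius to sqrt(2).
    Returns the transformed points and the 3x3 transform N (u_hat = N @ u)."""
    uv = u[..., :2].reshape(-1, 2)
    c = uv.mean(axis=0)
    s = np.sqrt(2) / np.linalg.norm(uv - c, axis=1).mean()
    N = np.array([[s, 0.0, -s * c[0]],
                  [0.0, s, -s * c[1]],
                  [0.0, 0.0, 1.0]])
    return u @ N.T, N

def reconstruct(A, lam0, u0, T0):
    """Equation (8): x_i = A^{-1} lambda_{0i} u_{0i} + T_0.
    u0: (n, 3) homogeneous points in view 0, lam0: (n,) scales, T0: (3,)."""
    return (np.linalg.inv(A) @ (lam0[:, None] * u0).T).T + T0
```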

3 Experiments

The first experiment shows the calibration of the camera mounted on the robot arm using a planar scene.

Fig. 3. A camera is mounted on the robot arm and translated three times so that four images of the scene can be captured. The translation vectors span a three-dimensional vector space.

The camera, a Sony XC-75E with a Computar lens, was zoomed to a focal length of about 15 mm and mounted on the ABB robot arm. The arm was moved three times while keeping the rotation of the camera fixed, and four images of the translated scene were captured. The translation vectors were chosen to span a three-dimensional vector space and such that the disparity vectors of the projected points were not collinear, see Figure 3. The precision of the robot positioning was about 0.1 mm. 24 calibration points were extracted as the intersections of the lines fitted to the sides of six black squares in each of the four images. Each point in four views contributes 9 equations to the linear system (7); therefore A is obtained by the SVD of 24 · 9 = 216 equations. The matrices K and R are then recovered by QR decomposition of A.

Figure 5 shows the camera intrinsic parameters

$$K = \begin{pmatrix} s_x & s_k & u_0 \\ 0 & s_y & v_0 \\ 0 & 0 & 1 \end{pmatrix}$$

recovered by different methods. The principal point given by the calibration from the known translations is much closer to the principal point measured by zooming than to the principal point obtained from Tsai's calibration method [8]. This shows the stability of the proposed algorithm, since zooming delivers a very accurate estimate of the principal point.

Fig. 4. A Euclidean reconstruction of the calibration points, shown so that the viewing direction is perpendicular to the plane in which the points lie.

Fig. 5. Intrinsic camera parameters recovered by the calibration from known translations (the third row) are compared to the parameters obtained by zooming (the first row) and by Tsai's camera calibration method (the second row). [Table: columns s_x, s_k, s_y, u_0, v_0; rows Zoom (with NaN entries), Tsai, Trans.]

Fig. 6. All reconstructed points indeed lie in a plane, as the plane-fit residuals do not exceed 0.1 mm.

Fig. 7. The lengths of the sides of the squares are reconstructed with errors in the range of 0.5 mm. The full line shows the average reconstructed length. Two dashed lines show the range of the expected length.
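To read off the parameters listed in Figure 5 from a recovered A, one can combine the hypothetical helpers sketched earlier (calibrate_from_translations and rq_decompose) and normalize K so that its (3,3) entry is 1; the snippet below is a usage illustration, not the authors' code.

```python
import numpy as np

# u: (4, n, 3) tracked homogeneous image points, t: (3, 3) relative translations.
A, lam = calibrate_from_translations(u, t)   # hypothetical helper from the Section 2.2 sketch
K, R = rq_decompose(A)                       # hypothetical RQ helper from the Section 2 sketch
K /= K[2, 2]                                 # fix the free scale so that K[2, 2] = 1
s_x, s_k, u_0 = K[0, 0], K[0, 1], K[0, 2]
s_y, v_0 = K[1, 1], K[1, 2]
print(f"sx={s_x:.2f} sk={s_k:.4f} sy={s_y:.2f} u0={u_0:.1f} v0={v_0:.1f}")
```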

Fig. 8. An electricity outlet placed on a planar grid.
Fig. 9. A frontal view of the reconstruction.
Fig. 10. An oblique view of the reconstruction.

Figure 4 shows the reconstructed points. The quality of the reconstruction is supported by the small residuals of the plane fit, Figure 6, and by the good agreement of the reconstructed and true sizes of the calibration squares, Figure 7.

The second experiment shows a reconstruction of a scene consisting of an electricity outlet placed on a planar grid, Figure 8. The points were extracted manually with an error of about 2 pixels. The camera was kept about half a meter from the scene while the scene was moved by hand in the range of 5 cm. The precision of the translation measurements was about 1 mm. Figures 9 and 10 show a frontal and an oblique view of the reconstructed scene, respectively. The overall shape of the outlet as well as the angles and sizes were well reconstructed, although particular points are noisy. The largest errors emerge on two small circles inside the outlet, where the precision of the corresponding points was poor.

4 Conclusion

A method for camera calibration and Euclidean reconstruction from known translations was presented. Our approach allows the camera to be calibrated by tracking two points in four images of an unknown scene. We have shown that in this case a linear algorithm solving the calibration and reconstruction exists. Experiments suggest that the performance of the algorithm compares to the standard bundle adjustment camera calibration method. We believe that our approach is especially useful for the reconstruction of shape from long image sequences in a controlled environment.

Acknowledgement

We would like to thank Dorin Ungureanu for fruitful discussions and for finding the principal point of the camera by zooming. We also thank Bert Van den Berghe for assistance with the experiments.

References

1. Olivier Faugeras. Three-Dimensional Computer Vision: A Geometric Viewpoint. The MIT Press.
2. Olivier D. Faugeras. What can be seen in three dimensions with an uncalibrated stereo rig? In European Conference on Computer Vision, pages 563-578.
3. G.H. Golub and C.F. Van Loan. Matrix Computations. The Johns Hopkins University Press.
4. R. Hartley. In defence of the 8-point algorithm. In E. Grimson, editor, Proc. of the Fifth International Conference on Computer Vision, volume 1, pages 1064-1070.
5. R. Horaud, R. Mohr, F. Dornaika, and B. Boufama. The advantage of mounting a camera onto a robot arm. In Proc. of the Europe-China Workshop on Geometrical Modelling and Invariants for Computer Vision, Xian, China, pages 206-213.
6. Stephen J. Maybank and Olivier D. Faugeras. A theory of self-calibration of a moving camera. International Journal of Computer Vision, 8(2):123-151.
7. T. Moons, L. Van Gool, M. Van Diest, and E. Pauwels. Euclidean reconstruction from uncalibrated views. In 2nd ESPRIT-ARPA Workshop on Invariants in Computer Vision, pages 297-316, Ponta Delgada, Azores, October.
8. R.Y. Tsai. A versatile camera calibration technique for high accuracy 3D machine vision metrology using off-the-shelf cameras and lenses. IEEE Journal of Robotics and Automation, 3(4):323-344.
9. S. Van Huffel and J. Vandewalle. The total least squares problem: Computational aspects and analysis. SIAM, Philadelphia.

This article was processed using the LaTeX macro package with the LLNCS style.
