The Epipolar Geometry Toolbox (EGT) for MATLAB
Gian Luca Mariottini, Eleonora Alunno, Domenico Prattichizzo
Dipartimento di Ingegneria dell'Informazione, Università di Siena
Via Roma 56, 53100 Siena, Italy
{gmariottini,alunno,prattichizzo}@dii.unisi.it

Technical Report, Dipartimento di Ingegneria dell'Informazione, University of Siena, Siena, Italy.

Abstract: The Epipolar Geometry Toolbox (EGT) was realized to provide MATLAB users with an extensible framework for the creation of multiple-camera systems and for easy manipulation of the geometry relating them. The functions provided, for both pinhole and panoramic vision sensors, include camera placement and visualization, and computation and estimation of epipolar geometry entities, among many others. EGT has been used with the Robotics Toolbox [7] for visual servoing applications. This article introduces the Toolbox in tutorial form; examples are provided to show its capabilities. The complete Toolbox and documentation are freely available on the EGT web site [18].

I. INTRODUCTION

The Epipolar Geometry Toolbox (EGT) is a toolbox designed for MATLAB [25]. MATLAB is a software environment, available for a wide range of platforms, built around linear algebra and graphical presentation, also for large data sets; its core functionality is extended by many additional toolboxes. Combined with the interactive MATLAB environment and its advanced graphical functions, EGT provides a wide set of functions for approaching computer vision problems with multiple views. When used with the Robotics Toolbox by Corke [7], EGT allows the design of visual servoing simulations.

The development of EGT was motivated by the increasing interest in robotic visual servoing, for both 6-DOF kinematic chains and mobile robots, equipped with pinhole or panoramic cameras fixed to the workspace or to the robot. Several authors, such as [4], [8], [17], [19], [20], have recently proposed new visual servoing strategies based on the geometry relating multiple views acquired from different camera configurations, i.e.
the epipolar geometry [13]. Over the years we have observed the need for a software environment that could help researchers rapidly create a multiple-camera setup, use the visual data and design new visual servoing algorithms. With EGT we provide a wide set of easy-to-use yet fully customizable functions for creating general multi-camera scenarios and manipulating the visual information between them. A distinguishing feature of EGT is that it can create and manipulate visual data provided by both pinhole and panoramic cameras. Catadioptric cameras, thanks to their wide field of view, have recently been applied in the visual servoing context [27].

The second motivation for developing EGT was the increasing distribution of free software in recent years, on the basis of the Free Software Foundation principles [9]: users are allowed, and indeed encouraged, to adapt and improve the program as dictated by their needs. Examples of programs that follow these principles include the Robotics Toolbox [7], for the creation of simulations in robotics, and Intel's OpenCV C++ libraries for the implementation of computer vision algorithms, such as image processing and object recognition [1].

The third motivation for the design of EGT was the availability and increasing sophistication of MATLAB. EGT could have been written in other languages, such as C or C++, which would have freed it from dependency on other software; however, such lower-level languages are not as conducive to rapid program development as MATLAB.

This tutorial assumes the reader is familiar with MATLAB. It presents the basic EGT functions, after short recalls of the theory, together with intuitive examples. An example is provided at the end of this paper to illustrate the use of EGT in combination with the Robotics Toolbox to tackle involved tasks in visual servoing applications.
Section II presents the basic reference frame notation in EGT, while Section III presents the pinhole and omnidirectional camera models together with the basic EGT functions. Section IV presents the setup for multiple-camera geometry (epipolar geometry), while Section V presents an application of EGT to visual servoing, together with simulation results. In Section VI we compare EGT with other software packages. EGT can be freely downloaded at [18] and requires MATLAB 6.5 or later. The detailed manual is provided with a large set of examples, figures and source code, also suitable for beginners.

II. BASIC VECTOR NOTATION

We present here the basic notation adopted in the Epipolar Geometry Toolbox. In EGT all scene points X_w in IR^3 are expressed in the world frame S_w = <O_w, x_w, y_w, z_w> (Fig. 1). When referred to the pinhole camera frame S_c = <O_c, x_c, y_c, z_c> they will be indicated with X_c. Moreover, all scene points expressed w.r.t. a central catadioptric camera
frame S_m = <O_m, x_m, y_m, z_m> will be indicated with X_m.

Fig. 1. Main reference frame notation and vector representation in EGT: the pinhole camera frame S_c, the world frame S_w and the central catadioptric camera frame S_m.

For the reader's convenience we briefly recall the basic vector notation and transformations [21]. Refer to Fig. 1 and consider the 3x1 vector X_c expressed in S_c. It can be expressed in S_w as follows:

  X_w = R^c_w X_c + t^c_w    (1)

where t^c_w is the translation vector centered in S_w and pointing toward the S_c frame (Fig. 1), and R^c_w is the rotation necessary to align the world frame with the camera frame. For example, we may choose R^c_w = R_roll,pitch,yaw = R_{z,θ} R_{y,φ} R_{x,ψ}.

The homogeneous notation expresses eq. (1) in linear form:

  X~_w = H^c_w X~_c,  where X~_w = [X_w^T 1]^T and X~_c = [X_c^T 1]^T.

The matrix H^c_w is referred to as the homogeneous transformation matrix:

  H^c_w = [ R^c_w , t^c_w ; 0^T , 1 ].

In an analogous way, a point X_w can be expressed in the camera frame by the inverse transformation:

  X~_c = [ (R^c_w)^T , -(R^c_w)^T t^c_w ; 0^T , 1 ] X~_w.

Consider now the more general case in which two camera frames, referred to as actual and desired, observe the same point X_w. It results that:

  X_w = R^d_w X_d + t^d_w    (2)
  X_w = R^a_w X_a + t^a_w    (3)

Substituting (3) in (2), it follows that:

  X_d = (R^d_w)^T R^a_w X_a + (R^d_w)^T (t^a_w - t^d_w) = R^a_d X_a + t^a_d    (4)

Equation (4) will be very useful for the epipolar geometry computation in EGT, where it is necessary to know the relative displacement t^a_d and orientation R^a_d between the two camera frames.

III. PIN-HOLE AND OMNIDIRECTIONAL CAMERA MODELS

In this section the fundamentals of the perspective and omnidirectional camera models are quickly reviewed; the reader is referred to [13], [16], [5] for a detailed treatment. According to the purpose of this tutorial, some basic EGT functions are reported together with the theory.

A. Perspective camera

With reference to Fig. 2, consider a pinhole camera located at O_c.
The full perspective model describes the relationship between a 3D point, expressed in homogeneous coordinates in the world frame as P = [X Y Z 1]^T, and its projection m = [u v 1]^T onto the image plane:

  m = K Π P

where K in IR^(3x3) is the camera intrinsic parameter matrix:

  K = [ k_u f , γ , u_0 ; 0 , k_v f , v_0 ; 0 , 0 , 1 ].

Here (u_0, v_0) are the pixel coordinates, in the image frame, of the principal point (i.e. the intersection of the image plane with the optical axis z_c), k_u and k_v are the number of pixels per unit distance in image coordinates, f is the focal length and γ is the orthogonality factor of the CCD image axes (skew factor). Π = [R t] in IR^(3x4) is the so-called external camera parameter matrix, which contains the rotation R and the translation t between the world and camera frames. Following most of the literature, the optical axis z_c of pinhole cameras has been chosen parallel to the y_w axis of S_w. It then results that:

  R = (R_{x,-π/2} R_rpy)^T,   t = -(R_{x,-π/2} R_rpy)^T t^c_w.

Fig. 2. The pinhole camera model. The 3D point P is projected onto m through the optical center O_c; note that m is expressed in image plane coordinates (u, v) (pixels).

In EGT the position t of a pinhole camera is specified with respect to the world frame, while the rotation R is referred to the pinhole camera frame. During the testing phase at the University of Siena this choice seemed to be appreciated
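Numerically, the projection m = KΠP amounts to a matrix product followed by division by the depth. The following NumPy fragment is an illustrative sketch only (EGT itself is MATLAB code, and the intrinsic values below are arbitrary choices, not values from the Toolbox):

```python
import numpy as np

def project_pinhole(P_w, K, R, t):
    """Project 3D points onto the image plane: m ~ K [R | t] P.

    P_w : (3, N) points in the world frame.
    K   : (3, 3) intrinsic parameter matrix.
    R, t: world-to-camera rotation and translation (X_c = R @ X_w + t).
    Returns the (2, N) pixel coordinates.
    """
    X_c = R @ P_w + t[:, None]      # world -> camera coordinates
    m = K @ X_c                     # homogeneous image coordinates
    return m[:2] / m[2]             # perspective division by the depth

# Hypothetical intrinsics: k_u*f = k_v*f = 800, principal point (320, 240)
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.array([0.0, 0.0, 5.0])   # camera 5 units from the origin
P = np.array([[0.0, 1.0],
              [0.0, 1.0],
              [0.0, 0.0]])                    # two world points
print(project_pinhole(P, K, R, t))
# The world origin lands exactly on the principal point (320, 240).
```

Note how a point on the optical axis projects onto (u_0, v_0), which is the defining property of the principal point.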
from students, who found it intuitive. In particular, the function f_RtH is provided to directly obtain the homogeneous matrix:

>> H = f_RtH(R,t);

Example 1 (3D scene and pinhole camera placement): Consider a pinhole camera rotated about the y-axis and translated with respect to the world frame:

>> R = rotoy(pi/);
>> t = [-,-,]';
>> H = f_RtH(R,t);

In EGT the camera frame and the associated 3D camera can be visualized with the functions f_3dframe(H) and f_3dcamera(H) respectively, where H is the homogeneous transformation describing the position and orientation of the camera with respect to S_w:

>> f_3dframe(H);   % camera frame
>> hold on
>> f_3dcamera(H);  % 3D pin-hole camera
>> axis equal, grid on, view(,3)
>> title('3D setup - EGT Tutorial - Ex. 1')

The functions presented here have additional options; see the manual [18] for details. We can also place a set of N 3D points P_i (e.g. the vertices of a rectangular panel), defined as

  P = [ X_1 X_2 ... X_N ; Y_1 Y_2 ... Y_N ; Z_1 Z_2 ... Z_N ] in IR^(3xN),

by use of the function f_scenepnt(P):

>> X = [-3, 3, 3, -3];
>> Y = [ 3, 3, 3, 3];
>> Z = [-3, -3, 3, 3];
>> P = [X; Y; Z];
>> f_scenepnt(P);
>> f_3dwfenum(P);  % enumerate points

The perspective projection m = [u, v]^T of the points P is obtained with f_perspproj(P,H,K):

>> [u,v] = f_perspproj(P,H,K);
>> plot(u,v,'ro')

Fig. 3(a) shows the 3D view of the camera, and Fig. 3(b) the scene points projected onto the image plane. Options of the proposed functions are detailed in the reference manual [18]. While the above example describes the placement of 3D points P, EGT is also able to build scenes with 3D objects, returning surface points and normals (see the function f_3dsurface in [18]).

Fig. 3. Example 1. A pinhole camera is positioned in the 3D world frame and rotated around the y-axis; the 3D scene points are projected on the image plane (K = I for simplicity).

B. Omnidirectional Camera Model

Omnidirectional cameras combine reflective surfaces (mirrors) and lenses. Several types of panoramic cameras can be obtained simply by combining cameras (pinhole or orthographic) and mirrors (hyperbolic, parabolic or elliptical) [5]. Panoramic cameras are classified according to whether or not they satisfy the single-viewpoint constraint, which guarantees that the visual sensor measures the light through a single point. Note that this constraint is required for the existence of the epipolar geometry and for the generation of geometrically correct images [10], [22]. In [3], Baker and Nayar derive the entire class of catadioptric systems verifying the single-viewpoint constraint. Among these, EGT takes into account catadioptric systems consisting of pinhole cameras coupled with hyperbolic mirrors, and of orthographic cameras coupled with parabolic mirrors. In what follows the imaging model of a pinhole camera with a hyperbolic mirror is described.

Fig. 4. The panoramic camera model (pinhole camera and hyperbolic mirror). The 3D point X_w is projected at u through the optical center O_c, after being projected through the mirror center O_m.

Consider now the basic scheme in Fig. 4. Note that in this case all frames (for both the camera and the mirror) are aligned
with the EGT frame. Three important reference frames are defined: (1) the world reference system, centered at W, whose vectors are denoted X_w; (2) the mirror coordinate system, centered at the focus M, whose vectors are denoted X = [X, Y, Z]^T; and (3) the camera coordinate system, centered at C, whose vectors are denoted X_c. Henceforth all equations will be expressed in the mirror reference frame if not stated otherwise.

Refer to Fig. 4 and let a and b be the hyperbolic mirror parameters, the mirror surface being

  (z + e)^2 / a^2 - (x^2 + y^2) / b^2 = 1,  with eccentricity e = sqrt(a^2 + b^2).

The transformation giving the projection u in the pinhole camera frame (see Fig. 4) is

  u = K ( R^m_c ( λ(X) X ) + t^m_c ),  with X = (R^m_w)^T (X_w - t^m_w),    (5)

where

  λ(X) = b^2 ( -e Z ± a ||X|| ) / ( b^2 Z^2 - a^2 X^2 - a^2 Y^2 )

is a nonlinear function of X.

Example 2 (Panoramic camera placement): In EGT a panoramic camera can be placed and visualized. Let us place the camera in t=[,,] with orientation R=eye(3). EGT provides a function to simply visualize the panoramic camera in the 3D world frame, as in Fig. 5:

>> f_3dpanoramic(H);

Moreover, for K=I, the projection of a 3D point Xw in both the camera (m) and mirror (h) frames can be obtained from:

>> [m,h] = f_panproj(Xw,H,K);

Graphical results are reported in Fig. 5 and are described more accurately in the EGT Reference Manual.

Fig. 5. Example 2. A panoramic camera (K = I) is positioned in the 3D world frame, and a 3D point X_w is projected to the pinhole camera after being projected to h on the mirror.

IV. EPIPOLAR GEOMETRY

In this section the EGT functions dealing with the epipolar geometry are presented.

A. Epipolar geometry for pinhole cameras

Consider now two perspective views of the same scene taken from distinct viewpoints O_1 and O_2, as in Fig. 6. Let m_1 and m_2 be the corresponding points in the two views, i.e. the perspective projections, through O_1 and O_2, of the same point P onto the image planes I_1 and I_2, respectively.

Fig. 6. Basic epipolar geometry entities for pinhole cameras.
The epipolar geometry defines the geometric relationship between these corresponding points. With reference to Fig. 6, we define the following set of geometric entities [13], [16]:

Definition 1 (Epipolar Geometry):
1) The plane passing through the optical centers O_1 and O_2 and the scene point P is called the epipolar plane. Note that a pencil of such planes exists.
2) The projection e_1 (e_2) of one camera center onto the image plane of the other camera is called the epipole. The epipole will be expressed in homogeneous coordinates: e~_1 = [e_1x e_1y 1]^T, e~_2 = [e_2x e_2y 1]^T.
3) The intersection of the epipolar plane of P with the image plane I_1 (I_2) defines the epipolar line l_1 (l_2). It may be shown that all epipolar lines pass through the epipole.

One of the main parameters of the epipolar geometry is the 3x3 fundamental matrix F, which conveys most of the information about the relative position and orientation (t, R) between the two views. Moreover, the fundamental matrix algebraically relates corresponding points in the two images through the epipolar constraint:

Theorem 1 (Epipolar Constraint): Let two views of the same 3D point P be given, characterized by their relative position and orientation (t, R) and by the internal calibration parameter matrices K_1 and K_2, and let m_1 and m_2 be the two corresponding projections of P in the two image planes. The epipolar constraint is

  m_2^T F m_1 = 0,  where F = K_2^(-T) [t]_x R K_1^(-1).
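The theorem can be checked numerically. The sketch below (NumPy, illustrative only: the poses and intrinsics are arbitrary choices, not EGT code) composes the relative pose as in eq. (4), builds F = K_2^(-T) [t]_x R K_1^(-1), and verifies the epipolar constraint on a synthetic point:

```python
import numpy as np

def skew(t):
    """Skew-symmetric matrix [t]x, so that skew(t) @ v == np.cross(t, v)."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def rot_y(th):
    """Elementary rotation about the y-axis."""
    c, s = np.cos(th), np.sin(th)
    return np.array([[c, 0.0, s],
                     [0.0, 1.0, 0.0],
                     [-s, 0.0, c]])

# World poses of the two cameras, as in eq. (1): X_w = R @ X_cam + t
R1, t1 = rot_y(np.pi / 2), np.array([-5.0, -5.0, 0.0])
R2, t2 = np.eye(3), np.zeros(3)

# Relative pose, as in eq. (4): X_2 = R @ X_1 + t
R = R2.T @ R1
t = R2.T @ (t1 - t2)

K = np.array([[600.0, 0.0, 320.0],
              [0.0, 600.0, 240.0],
              [0.0, 0.0, 1.0]])

# Fundamental matrix of Theorem 1 (same intrinsics K for both views)
F = np.linalg.inv(K).T @ skew(t) @ R @ np.linalg.inv(K)

# Any scene point must satisfy the epipolar constraint m2^T F m1 = 0
P_w = np.array([2.0, 4.0, 1.0])          # a world point
x1 = R1.T @ (P_w - t1)                   # camera-1 coordinates
x2 = R2.T @ (P_w - t2)                   # camera-2 coordinates
m1, m2 = K @ x1, K @ x2                  # homogeneous pixel coordinates
print(abs(m2 @ F @ m1) < 1e-9)           # True: the constraint is satisfied
```

The intrinsics cancel in the product, so the residual reduces to (R x_1 + t)^T [t]_x (R x_1), which vanishes identically.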
Fig. 7. Example 3. 3D simulation setup, with both cameras observing the two scene points P_1 and P_2.

Here [t]_x is the 3x3 skew-symmetric matrix associated with t. Some important properties of the epipolar geometry are reviewed extensively in [13], [16].

Example 3 (Epipolar geometry for pinhole cameras): The computation of the epipolar geometry with EGT is straightforward. Suppose the first camera is rotated about the y-axis and translated, while the second one coincides with the world frame, i.e. t_2 = [0, 0, 0]^T with orientation R_2 = I. The pinhole cameras can be created and their homogeneous transformation matrices H_1 and H_2 obtained as described in Sec. III-A. Two feature points P_1 and P_2 are observed and projected onto m_1 and m_2 in the two image planes. The two-view setup is reported in Fig. 7. The epipoles and the fundamental matrix are computed with:

>> [e1,e2,F] = f_epipole(H1,H2,K1,K2);

while the epipolar lines are retrieved with:

>> [l1,l2] = f_epipline(m1,m2,F);

Note that l_1 (l_2) is a vector in IR^3 in homogeneous coordinates. In Fig. 8 both image planes are represented together with the feature points, the epipoles and the corresponding epipolar lines; in this case K = I is assumed.

B. Epipolar geometry for panoramic cameras

As for pinhole cameras, the epipolar geometry is defined here when a pair of panoramic views is available. Note, however, that in this case the epipolar lines become epipolar conics. The reader is referred to [11], [23] for an exhaustive treatment. Consider now two panoramic cameras (with equal hyperbolic mirrors Q) with foci M_1 and M_2 respectively, as represented in Fig. 9, and define the rotation R

Fig. 8. Example 3. Image planes of both pinhole cameras with the corresponding features and epipoles.
Note that in Fig. 8 all epipolar lines meet at the epipole and pass through the corresponding points. Define also the translation t between the two mirror frames. The coplanarity of X_h1, X_h2 and t can be expressed, in the reference coordinate system centered at M_1, as

  X_h2^T E X_h1 = 0,  where E = R [t]_x.    (6)

By analogy with the pinhole case, the 3x3 matrix E is referred to as the essential matrix [14]. The vectors X_h1, X_h2 and t define the epipolar plane π, which intersects both mirror quadrics Q in the two epipolar conics C_1 and C_2 [23]. All the epipolar conics C_i pass through two pairs of epipoles, e_1 and e_1' for one mirror and e_2 and e_2' for the other, which are the intersections of the two mirrors with the baseline M_1 M_2 (see Fig. 9).

Fig. 9. Epipolar geometry for panoramic camera sensors.

Example 4 (Epipolar geometry for panoramic cameras): A camera and scene configuration similar to that of Example 3 has been set up. Using the EGT function:

>> [e1c,e1pc,e2c,e2pc] = f_panepipoles(H1,H2);

the CCD coordinates of the epipoles are easily computed and plotted. The image projections Cm_1 and Cm_2 of the epipolar conics C_1 and C_2, corresponding to a 3D point X, are computed and visualized by:

>> [Cm1,Cm2] = f_epipconics(X,H1,H2);

In an analogous way the epipolar conics on the mirror surfaces can be plotted, and the normal vector n_π to the
epipolar plane can be computed with:

>> n_pi = f_mirrorconic(X,H1,H2);

Fig. 10 presents the application of EGT to the computation of the epipolar geometry between the actual (A) and the desired (D) catadioptric views, respectively. The epipolar conics in both CCD image planes, together with the corresponding feature points, are plotted in Fig. 11. The complete MATLAB source code is available on the EGT web site [18].

Fig. 10. Epipolar geometry computation and visualization with EGT.

Fig. 11. Epipolar conics visualization. All epipolar conics, in each image plane, intersect at the epipoles.

C. Estimation of the epipolar geometry

A very important feature of EGT is that the most important algorithms for epipolar geometry estimation are embedded in the toolbox code. In particular, we implemented linear methods [15], the well-known normalized 8-point algorithm [12], an algorithm based on the geometric (Sampson) distance [13] and the CFNS constrained method [6], as summarized in Table I. Note that all these algorithms estimate the fundamental matrix starting from a set of correspondences; a complete comparison and review of these methods is reported in [13].

The epipolar geometry estimation is one of the most relevant features of EGT, mainly because users can run simulations in which the fundamental matrix is estimated from noisy feature points. The basic EGT function to estimate the fundamental matrix F is

>> F = f_Festim(U1,U2,algorithm);

where U1 and U2 are two (2 x n) matrices containing the n corresponding features:

  U = [ x_1 x_2 ... x_n ; y_1 y_2 ... y_n ].

The integer algorithm selects the estimation algorithm among the five available, reported in Table I. It is worth noting that these functions allow EGT to estimate the epipolar geometry not only in simulations but also in real experiments.
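As a complement to f_Festim, the normalized 8-point algorithm of Table I can be sketched in a few lines. The fragment below (NumPy, illustrative only: it is not the EGT implementation, and the synthetic scene is an arbitrary choice) estimates F from noise-free correspondences and checks the point-to-epipolar-line distance:

```python
import numpy as np

def eight_point(m1, m2):
    """Normalized 8-point estimate of F from n >= 8 correspondences.

    m1, m2: (2, n) pixel coordinates in the two views, related by
    m2~^T F m1~ = 0. Each correspondence gives one linear equation in the
    9 entries of F; the solution is the null vector of the stacked system,
    projected onto the rank-2 manifold (det(F) = 0).
    """
    def normalize(m):
        # Hartley normalization: centroid at origin, mean distance sqrt(2)
        c = m.mean(axis=1)
        d = np.sqrt(((m - c[:, None]) ** 2).sum(axis=0)).mean()
        s = np.sqrt(2.0) / d
        T = np.array([[s, 0.0, -s * c[0]],
                      [0.0, s, -s * c[1]],
                      [0.0, 0.0, 1.0]])
        return T @ np.vstack([m, np.ones(m.shape[1])]), T

    x1, T1 = normalize(m1)
    x2, T2 = normalize(m2)
    A = np.array([np.kron(x2[:, i], x1[:, i]) for i in range(x1.shape[1])])
    F = np.linalg.svd(A)[2][-1].reshape(3, 3)   # null vector of A
    U, S, Vt = np.linalg.svd(F)
    F = U @ np.diag([S[0], S[1], 0.0]) @ Vt     # enforce det(F) = 0
    F = T2.T @ F @ T1                           # undo the normalization
    return F / np.linalg.norm(F)

# Synthetic check: project random 3D points into two known views
rng = np.random.default_rng(1)
P = rng.uniform([-1, -1, 4], [1, 1, 8], (20, 3)).T
K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
c, s = np.cos(0.2), np.sin(0.2)
R = np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])
t = np.array([1.0, 0.1, 0.2])
h1, h2 = K @ P, K @ (R @ P + t[:, None])
m1, m2 = h1[:2] / h1[2], h2[:2] / h2[2]
F = eight_point(m1, m2)
# Distance (pixels) of each point to its epipolar line: ~0 for exact data
lines = F @ np.vstack([m1, np.ones(20)])
resid = np.abs(np.sum(np.vstack([m2, np.ones(20)]) * lines, axis=0))
dist = resid / np.linalg.norm(lines[:2], axis=0)
print(dist.max() < 1e-4)
```

With noisy correspondences the residual distances grow, which is exactly the scenario the EGT estimation simulations are designed to explore.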
V. APPLICATION TO VISUAL SERVOING

When used with a robotics package such as the Robotics Toolbox (RT) by Corke [7], EGT proves to be an efficient tool for implementing visual servoing techniques based on multiple-view geometry. The visual servoing controller proposed by Rives in [20] is considered here as a tutorial example showing the main EGT features in a visual servoing context; in what follows the algorithm proposed in [20] is briefly recalled for the reader's convenience.

TABLE I
THE FIVE EPIPOLAR GEOMETRY ESTIMATION ALGORITHMS IMPLEMENTED IN EGT, WITH BIBLIOGRAPHY.
ID | Estimation algorithm
1  | Unconstrained linear estimation [15]
2  | Linear estimation with the constraint det(F) = 0 [15]
3  | The normalized 8-point algorithm [12]
4  | Algorithm based on the geometric (Sampson) distance [13]
5  | CFNS constrained method [6]

The main goal of this visual servoing scheme is to drive a 6-DOF robot arm (simulated with RT), equipped with a CCD camera, from a starting configuration toward a desired one using only the image data provided during the robot motion (Fig. 12). The basic idea is to decouple the camera/end-effector rotation and translation by the use of the hybrid vision-based task-function approach [8]. The servoing task is performed by minimizing an error function ϕ, which is divided into a priority task ϕ_1, which rotates the actual camera until the desired orientation is reached, and a secondary
task ϕ_2 (to be minimized under the constraint ϕ_1 = 0), which deals with the translation of the camera toward the desired position. The analytical expression of the error function is

  ϕ = W^+ ϕ_1 + β (I_6 - W^+ W) ϕ_2    (7)

where β in IR^+, and W^+ is the pseudo-inverse of a matrix W in IR^(m x n) such that Range(W^T) = Range(J^T). Moreover, ϕ_2 is proportional, through a gain k, to the distance between the current camera translation and the desired one.

The primary task rotates the camera/end-effector by exploiting the epipolar geometry property that, when both the initial and the desired cameras have the same orientation, the distances between the feature points m_i and the corresponding epipolar lines l_i are zero.

Fig. 13 shows the simulation results of the algorithm described above for a scene with 8 points. The dotted points in Fig. 13 show, in the image plane (K = I), the migration of the features to the epipolar lines during the rotation (first task), while the crossed points correspond to the camera translation (second task). Fig. 14 reports the control inputs (v_c, ω_c) and the norm of the error function. The MATLAB code of this simulation can be downloaded from the EGT web site [18].

Fig. 13. Migration of the feature points from the initial position toward the desired one. The feature points reach the epipolar lines at the end of the first task, and then move toward the desired (crossed) points during the second task.

VI. COMPARISON WITH OTHER PACKAGES

To the best of our knowledge, no other software packages designed for the MATLAB environment deal with both multiple view geometry and panoramic cameras. An interesting MATLAB package is the Structure and Motion Toolkit by Torr [26]. This toolkit is specifically designed for feature detection and matching, robust estimation of the fundamental matrix, self-calibration, and recovery of the
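The structure of the error function (7) can be illustrated with a small numerical sketch (NumPy, illustrative only: W, ϕ_1 and ϕ_2 below are random placeholders, not quantities from the controller). The secondary task acts through the null-space projector of W, so it can never perturb the primary task:

```python
import numpy as np

# Hedged sketch of the hybrid task-function structure of eq. (7):
# the secondary task phi2 is filtered by the projector onto null(W),
# so correcting phi2 leaves the primary task untouched.
rng = np.random.default_rng(0)
W = rng.standard_normal((3, 6))          # hypothetical primary-task matrix
W_pinv = np.linalg.pinv(W)
P_null = np.eye(6) - W_pinv @ W          # projector onto the null space of W
phi1 = rng.standard_normal(3)            # primary-task error
phi2 = rng.standard_normal(6)            # secondary-task gradient
beta = 0.5
phi = W_pinv @ phi1 + beta * P_null @ phi2   # combined error, as in eq. (7)
# The secondary component is invisible to the primary task:
print(np.allclose(W @ (beta * P_null @ phi2), np.zeros(3)))
```

Since W has full row rank here, W applied to the combined error returns exactly ϕ_1, which is why the rotation (primary) task converges regardless of the translational (secondary) motion.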
projection matrices. The main difference from EGT is that Torr's toolkit, specifically oriented to pinhole cameras, is not designed to run simulations. Regarding visual servoing, it is worth mentioning the interesting Visual Servoing Toolbox (VST) by Cervera et al. [2], which provides a set of functions and blocks for the simulation of vision-controlled systems. The main difference is that the VST implements some visual servoing control laws but considers neither multiple view geometry nor panoramic cameras, which are the distinguishing features of EGT.

Fig. 14. (a) Control inputs to the PUMA 560; (b) norm of the error for two values of β.

Fig. 12. Simulation setup for the EGT application to visual servoing. EGT is also compatible with the Robotics Toolbox.

VII. CONCLUSIONS

The Epipolar Geometry Toolbox for MATLAB is a software package targeted at research and education in computer vision and robotic visual servoing. It provides the user with a wide set of functions for designing multi-camera systems, including those involving panoramic camera sensors. Several epipolar geometry estimation algorithms have been implemented. EGT is freely available and can be downloaded from the EGT web site, together with a detailed manual and code examples. The examples of Sec. V, and others, are contained in the demos directory of the toolbox.
VIII. ACKNOWLEDGMENTS

The authors wish to thank all the students of the Robotics and Systems Laboratory of the University of Siena who contributed to the testing phase of the Epipolar Geometry Toolbox. Special thanks go to Jacopo Piazzi, Paola Garosi, Annalisa Bardelli and Fabio Morbidi, who helped us with useful comments on the toolbox and on the paper.

REFERENCES

[1] Intel's OpenCV.
[2] The Visual Servoing Toolbox, 2003.
[3] S. Baker and S.K. Nayar. Catadioptric image formation. In Proc. of DARPA Image Understanding Workshop, 1997.
[4] R. Basri, E. Rivlin, and I. Shimshoni. Visual homing: Surfing on the epipoles. In International Conference on Computer Vision, 1998.
[5] R. Benosman and S.B. Kang. Panoramic Vision: Sensors, Theory and Applications. Springer Verlag, New York, 2001.
[6] W. Chojnacki, M.J. Brooks, A. van den Hengel, and D. Gawley. A new constrained parameter estimator: experiments in fundamental matrix computation. Image and Vision Computing, to appear, 2003.
[7] P.I. Corke. A robotics toolbox for MATLAB. IEEE Robotics and Automation Magazine, 3(1):24-32, 1996.
[8] B. Espiau, F. Chaumette, and P. Rives. A new approach to visual servoing in robotics. IEEE Trans. on Robotics and Automation, 8(3), 1992.
[9] Free Software Foundation. The free software definition.
[10] C. Geyer and K. Daniilidis. A unifying theory for central panoramic systems. In Proc. of Sixth European Conference on Computer Vision, Dublin, Ireland, 2000.
[11] C. Geyer and K. Daniilidis. Mirrors in motion: Epipolar geometry and motion estimation. In Proc. of the Ninth IEEE International Conference on Computer Vision (ICCV'03), 2003.
[12] R. Hartley. In defence of the 8-point algorithm. In Proc. of IEEE Int. Conference on Computer Vision, pages 1064-1070, Cambridge, MA, USA, June 1995.
[13] R. Hartley and A. Zisserman. Multiple View Geometry in Computer Vision. Cambridge University Press, 2000.
[14] H.C. Longuet-Higgins. A computer algorithm for reconstructing a scene from two projections. Nature, 293:133-135, 1981.
[15] Q.T. Luong and O.D. Faugeras. The fundamental matrix: theory, algorithms and stability analysis. Int. Journal of Computer Vision, 17(1):43-76, 1996.
[16] Y. Ma, S. Soatto, J. Košecká, and S. Shankar Sastry. An Invitation to 3-D Vision: From Images to Geometric Models. Springer Verlag, New York, 2003.
[17] E. Malis and F. Chaumette. 2 1/2 D visual servoing with respect to unknown objects through a new estimation scheme of camera displacement. International Journal of Computer Vision, 37(1):79-97, 2000.
[18] G. Mariottini and D. Prattichizzo. Epipolar Geometry Toolbox for MATLAB. University of Siena, 2003.
[19] G.L. Mariottini, G. Oriolo, and D. Prattichizzo. Epipole-based visual servoing for nonholonomic mobile robots. In IEEE International Conference on Robotics and Automation, 2004.
[20] P. Rives. Visual servoing based on epipolar geometry. In International Conference on Intelligent Robots and Systems, pages 602-607, 2000.
[21] M.W. Spong and M. Vidyasagar. Robot Dynamics and Control. John Wiley & Sons, New York, 1989.
[22] T. Svoboda. Central Panoramic Cameras: Design, Geometry, Egomotion. PhD thesis, Center for Machine Perception, Czech Technical University, Prague, Czech Republic, September 1999.
[23] T. Svoboda and T. Pajdla. Epipolar geometry for central catadioptric cameras. International Journal of Computer Vision, 49(1):23-37, 2002.
[24] T. Svoboda, T. Pajdla, and V. Hlaváč. Motion estimation using central panoramic cameras. In Proc. IEEE Conf. on Intelligent Vehicles, 1998.
[25] The MathWorks, Inc. MATLAB.
[26] P. Torr. The Structure and Motion Toolkit for MATLAB.
[27] R. Vidal, O. Shakernia, and S. Sastry. Omnidirectional vision-based formation control. In Fortieth Annual Allerton Conference on Communication, Control and Computing, page 663, 2002.
55:148 Digital Image Processing Chapter 11 3D Vision, Geometry Topics: Basics of projective geometry Points and hyperplanes in projective space Homography Estimating homography from point correspondence
More informationRobot Vision Control of robot motion from video. M. Jagersand
Robot Vision Control of robot motion from video M. Jagersand Vision-Based Control (Visual Servoing) Initial Image User Desired Image Vision-Based Control (Visual Servoing) : Current Image Features : Desired
More informationImage-based Visual Servoing for Mobile Robots: the Multiple View Geometry Approach
UNIVERSITÀ DEGLI STUDI DI SIENA DIPARTIMENTO DI INGEGNERIA DELL INFORMAZIONE DOTTORATO DI RICERCA IN INGEGNERIA DELL INFORMAZIONE CICLO XVIII Image-based Visual Servoing for Mobile Robots: the Multiple
More informationVisual Tracking of Planes with an Uncalibrated Central Catadioptric Camera
The 29 IEEE/RSJ International Conference on Intelligent Robots and Systems October 11-15, 29 St. Louis, USA Visual Tracking of Planes with an Uncalibrated Central Catadioptric Camera A. Salazar-Garibay,
More information3D Geometry and Camera Calibration
3D Geometr and Camera Calibration 3D Coordinate Sstems Right-handed vs. left-handed 2D Coordinate Sstems ais up vs. ais down Origin at center vs. corner Will often write (u, v) for image coordinates v
More informationStructure from Small Baseline Motion with Central Panoramic Cameras
Structure from Small Baseline Motion with Central Panoramic Cameras Omid Shakernia René Vidal Shankar Sastry Department of Electrical Engineering & Computer Sciences, UC Berkeley {omids,rvidal,sastry}@eecs.berkeley.edu
More informationUNIFYING IMAGE PLANE LIFTINGS FOR CENTRAL CATADIOPTRIC AND DIOPTRIC CAMERAS
UNIFYING IMAGE PLANE LIFTINGS FOR CENTRAL CATADIOPTRIC AND DIOPTRIC CAMERAS Jo~ao P. Barreto Dept. of Electrical and Computer Engineering University of Coimbra, Portugal jpbar@deec.uc.pt Abstract Keywords:
More informationLucas-Kanade Image Registration Using Camera Parameters
Lucas-Kanade Image Registration Using Camera Parameters Sunghyun Cho a, Hojin Cho a, Yu-Wing Tai b, Young Su Moon c, Junguk Cho c, Shihwa Lee c, and Seungyong Lee a a POSTECH, Pohang, Korea b KAIST, Daejeon,
More informationAn idea which can be used once is a trick. If it can be used more than once it becomes a method
An idea which can be used once is a trick. If it can be used more than once it becomes a method - George Polya and Gabor Szego University of Texas at Arlington Rigid Body Transformations & Generalized
More informationImage Based Visual Servoing Using Algebraic Curves Applied to Shape Alignment
The 29 IEEE/RSJ International Conference on Intelligent Robots and Systems October 11-15, 29 St. Louis, USA Image Based Visual Servoing Using Algebraic Curves Applied to Shape Alignment Ahmet Yasin Yazicioglu,
More informationPartial Calibration and Mirror Shape Recovery for Non-Central Catadioptric Systems
Partial Calibration and Mirror Shape Recovery for Non-Central Catadioptric Systems Nuno Gonçalves and Helder Araújo Institute of Systems and Robotics - Coimbra University of Coimbra Polo II - Pinhal de
More informationDescriptive Geometry Meets Computer Vision The Geometry of Two Images (# 82)
Descriptive Geometry Meets Computer Vision The Geometry of Two Images (# 8) Hellmuth Stachel stachel@dmg.tuwien.ac.at http://www.geometrie.tuwien.ac.at/stachel th International Conference on Geometry and
More informationSelf-Calibration of a Camera Equipped SCORBOT ER-4u Robot
Self-Calibration of a Camera Equipped SCORBOT ER-4u Robot Gossaye Mekonnen Alemu and Sanjeev Kumar Department of Mathematics IIT Roorkee Roorkee-247 667, India gossayemekonnen@gmail.com malikfma@iitr.ac.in
More informationTwo-view geometry Computer Vision Spring 2018, Lecture 10
Two-view geometry http://www.cs.cmu.edu/~16385/ 16-385 Computer Vision Spring 2018, Lecture 10 Course announcements Homework 2 is due on February 23 rd. - Any questions about the homework? - How many of
More informationDegeneracy of the Linear Seventeen-Point Algorithm for Generalized Essential Matrix
J Math Imaging Vis 00 37: 40-48 DOI 0007/s085-00-09-9 Authors s version The final publication is available at wwwspringerlinkcom Degeneracy of the Linear Seventeen-Point Algorithm for Generalized Essential
More informationMonocular Visual Odometry
Elective in Robotics coordinator: Prof. Giuseppe Oriolo Monocular Visual Odometry (slides prepared by Luca Ricci) Monocular vs. Stereo: eamples from Nature Predator Predators eyes face forward. The field
More information3D Reconstruction with two Calibrated Cameras
3D Reconstruction with two Calibrated Cameras Carlo Tomasi The standard reference frame for a camera C is a right-handed Cartesian frame with its origin at the center of projection of C, its positive Z
More informationStructure and motion in 3D and 2D from hybrid matching constraints
Structure and motion in 3D and 2D from hybrid matching constraints Anders Heyden, Fredrik Nyberg and Ola Dahl Applied Mathematics Group Malmo University, Sweden {heyden,fredrik.nyberg,ola.dahl}@ts.mah.se
More informationA Calibration Algorithm for POX-Slits Camera
A Calibration Algorithm for POX-Slits Camera N. Martins 1 and H. Araújo 2 1 DEIS, ISEC, Polytechnic Institute of Coimbra, Portugal 2 ISR/DEEC, University of Coimbra, Portugal Abstract Recent developments
More informationLUMS Mine Detector Project
LUMS Mine Detector Project Using visual information to control a robot (Hutchinson et al. 1996). Vision may or may not be used in the feedback loop. Visual (image based) features such as points, lines
More informationCALCULATING TRANSFORMATIONS OF KINEMATIC CHAINS USING HOMOGENEOUS COORDINATES
CALCULATING TRANSFORMATIONS OF KINEMATIC CHAINS USING HOMOGENEOUS COORDINATES YINGYING REN Abstract. In this paper, the applications of homogeneous coordinates are discussed to obtain an efficient model
More informationCamera Calibration with a Simulated Three Dimensional Calibration Object
Czech Pattern Recognition Workshop, Tomáš Svoboda (Ed.) Peršlák, Czech Republic, February 4, Czech Pattern Recognition Society Camera Calibration with a Simulated Three Dimensional Calibration Object Hynek
More informationSynchronized Ego-Motion Recovery of Two Face-to-Face Cameras
Synchronized Ego-Motion Recovery of Two Face-to-Face Cameras Jinshi Cui, Yasushi Yagi, Hongbin Zha, Yasuhiro Mukaigawa, and Kazuaki Kondo State Key Lab on Machine Perception, Peking University, China {cjs,zha}@cis.pku.edu.cn
More informationThe Geometry Behind the Numerical Reconstruction of Two Photos
The Geometry Behind the Numerical Reconstruction of Two Photos Hellmuth Stachel stachel@dmg.tuwien.ac.at http://www.geometrie.tuwien.ac.at/stachel ICEGD 2007, The 2 nd Internat. Conf. on Eng g Graphics
More informationMachine vision. Summary # 11: Stereo vision and epipolar geometry. u l = λx. v l = λy
1 Machine vision Summary # 11: Stereo vision and epipolar geometry STEREO VISION The goal of stereo vision is to use two cameras to capture 3D scenes. There are two important problems in stereo vision:
More informationStereo and Epipolar geometry
Previously Image Primitives (feature points, lines, contours) Today: Stereo and Epipolar geometry How to match primitives between two (multiple) views) Goals: 3D reconstruction, recognition Jana Kosecka
More informationA General Expression of the Fundamental Matrix for Both Perspective and Affine Cameras
A General Expression of the Fundamental Matrix for Both Perspective and Affine Cameras Zhengyou Zhang* ATR Human Information Processing Res. Lab. 2-2 Hikari-dai, Seika-cho, Soraku-gun Kyoto 619-02 Japan
More informationLecture 5 Epipolar Geometry
Lecture 5 Epipolar Geometry Professor Silvio Savarese Computational Vision and Geometry Lab Silvio Savarese Lecture 5-24-Jan-18 Lecture 5 Epipolar Geometry Why is stereo useful? Epipolar constraints Essential
More informationHomography-based Tracking for Central Catadioptric Cameras
Homography-based Tracking for Central Catadioptric Cameras Christopher MEI, Selim BENHIMANE, Ezio MALIS and Patrick RIVES INRIA Icare Project-team Sophia-Antipolis, France Email : {firstname.surname@sophia.inria.fr}
More informationMonitoring surrounding areas of truck-trailer combinations
Monitoring surrounding areas of truck-trailer combinations Tobias Ehlgen 1 and Tomas Pajdla 2 1 Daimler-Chrysler Research and Technology, Ulm tobias.ehlgen@daimlerchrysler.com 2 Center of Machine Perception,
More informationWeek 2: Two-View Geometry. Padua Summer 08 Frank Dellaert
Week 2: Two-View Geometry Padua Summer 08 Frank Dellaert Mosaicking Outline 2D Transformation Hierarchy RANSAC Triangulation of 3D Points Cameras Triangulation via SVD Automatic Correspondence Essential
More informationA Real-Time Catadioptric Stereo System Using Planar Mirrors
A Real-Time Catadioptric Stereo System Using Planar Mirrors Joshua Gluckman Shree K. Nayar Department of Computer Science Columbia University New York, NY 10027 Abstract By using mirror reflections of
More informationComments on Consistent Depth Maps Recovery from a Video Sequence
Comments on Consistent Depth Maps Recovery from a Video Sequence N.P. van der Aa D.S. Grootendorst B.F. Böggemann R.T. Tan Technical Report UU-CS-2011-014 May 2011 Department of Information and Computing
More informationAuto-epipolar Visual Servoing
Auto-epipolar Visual Servoing Jacopo Piazzi Johns Hopkins University Baltimore, Maryland USA Domenico Prattichizzo Univeristá degli Studi di Siena Siena, Italy Noah J. Cowan Johns Hopkins University Baltimore,
More informationImage Formation. Antonino Furnari. Image Processing Lab Dipartimento di Matematica e Informatica Università degli Studi di Catania
Image Formation Antonino Furnari Image Processing Lab Dipartimento di Matematica e Informatica Università degli Studi di Catania furnari@dmi.unict.it 18/03/2014 Outline Introduction; Geometric Primitives
More informationStereo Image Rectification for Simple Panoramic Image Generation
Stereo Image Rectification for Simple Panoramic Image Generation Yun-Suk Kang and Yo-Sung Ho Gwangju Institute of Science and Technology (GIST) 261 Cheomdan-gwagiro, Buk-gu, Gwangju 500-712 Korea Email:{yunsuk,
More informationCamera Calibration from the Quasi-affine Invariance of Two Parallel Circles
Camera Calibration from the Quasi-affine Invariance of Two Parallel Circles Yihong Wu, Haijiang Zhu, Zhanyi Hu, and Fuchao Wu National Laboratory of Pattern Recognition, Institute of Automation, Chinese
More informationEpipolar Geometry Prof. D. Stricker. With slides from A. Zisserman, S. Lazebnik, Seitz
Epipolar Geometry Prof. D. Stricker With slides from A. Zisserman, S. Lazebnik, Seitz 1 Outline 1. Short introduction: points and lines 2. Two views geometry: Epipolar geometry Relation point/line in two
More informationBut First: Multi-View Projective Geometry
View Morphing (Seitz & Dyer, SIGGRAPH 96) Virtual Camera Photograph Morphed View View interpolation (ala McMillan) but no depth no camera information Photograph But First: Multi-View Projective Geometry
More informationKinematic Model of Robot Manipulators
Kinematic Model of Robot Manipulators Claudio Melchiorri Dipartimento di Ingegneria dell Energia Elettrica e dell Informazione (DEI) Università di Bologna email: claudio.melchiorri@unibo.it C. Melchiorri
More informationVisual Tracking of Unknown Moving Object by Adaptive Binocular Visual Servoing
Visual Tracking of Unknown Moving Object by Adaptive Binocular Visual Servoing Minoru Asada, Takamaro Tanaka, and Koh Hosoda Adaptive Machine Systems Graduate School of Engineering Osaka University, Suita,
More informationA Computer Vision Sensor for Panoramic Depth Perception
A Computer Vision Sensor for Panoramic Depth Perception Radu Orghidan 1, El Mustapha Mouaddib 2, and Joaquim Salvi 1 1 Institute of Informatics and Applications, Computer Vision and Robotics Group University
More informationOmnivergent Stereo-panoramas with a Fish-eye Lens
CENTER FOR MACHINE PERCEPTION CZECH TECHNICAL UNIVERSITY Omnivergent Stereo-panoramas with a Fish-eye Lens (Version 1.) Hynek Bakstein and Tomáš Pajdla bakstein@cmp.felk.cvut.cz, pajdla@cmp.felk.cvut.cz
More informationGeometric camera models and calibration
Geometric camera models and calibration http://graphics.cs.cmu.edu/courses/15-463 15-463, 15-663, 15-862 Computational Photography Fall 2018, Lecture 13 Course announcements Homework 3 is out. - Due October
More informationA Novel Stereo Camera System by a Biprism
528 IEEE TRANSACTIONS ON ROBOTICS AND AUTOMATION, VOL. 16, NO. 5, OCTOBER 2000 A Novel Stereo Camera System by a Biprism DooHyun Lee and InSo Kweon, Member, IEEE Abstract In this paper, we propose a novel
More informationRegion matching for omnidirectional images using virtual camera planes
Computer Vision Winter Workshop 2006, Ondřej Chum, Vojtěch Franc (eds.) Telč, Czech Republic, February 6 8 Czech Pattern Recognition Society Region matching for omnidirectional images using virtual camera
More informationStructure from motion
Multi-view geometry Structure rom motion Camera 1 Camera 2 R 1,t 1 R 2,t 2 Camera 3 R 3,t 3 Figure credit: Noah Snavely Structure rom motion? Camera 1 Camera 2 R 1,t 1 R 2,t 2 Camera 3 R 3,t 3 Structure:
More informationEpipolar Constraint. Epipolar Lines. Epipolar Geometry. Another look (with math).
Epipolar Constraint Epipolar Lines Potential 3d points Red point - fied => Blue point lies on a line There are 3 degrees of freedom in the position of a point in space; there are four DOF for image points
More informationEpipolar geometry. x x
Two-view geometry Epipolar geometry X x x Baseline line connecting the two camera centers Epipolar Plane plane containing baseline (1D family) Epipoles = intersections of baseline with image planes = projections
More informationMETRIC PLANE RECTIFICATION USING SYMMETRIC VANISHING POINTS
METRIC PLANE RECTIFICATION USING SYMMETRIC VANISHING POINTS M. Lefler, H. Hel-Or Dept. of CS, University of Haifa, Israel Y. Hel-Or School of CS, IDC, Herzliya, Israel ABSTRACT Video analysis often requires
More informationMulti-view geometry problems
Multi-view geometry Multi-view geometry problems Structure: Given projections o the same 3D point in two or more images, compute the 3D coordinates o that point? Camera 1 Camera 2 R 1,t 1 R 2,t 2 Camera
More informationPin Hole Cameras & Warp Functions
Pin Hole Cameras & Warp Functions Instructor - Simon Lucey 16-423 - Designing Computer Vision Apps Today Pinhole Camera. Homogenous Coordinates. Planar Warp Functions. Motivation Taken from: http://img.gawkerassets.com/img/18w7i1umpzoa9jpg/original.jpg
More informationCS 2770: Intro to Computer Vision. Multiple Views. Prof. Adriana Kovashka University of Pittsburgh March 14, 2017
CS 277: Intro to Computer Vision Multiple Views Prof. Adriana Kovashka Universit of Pittsburgh March 4, 27 Plan for toda Affine and projective image transformations Homographies and image mosaics Stereo
More informationVisual Recognition: Image Formation
Visual Recognition: Image Formation Raquel Urtasun TTI Chicago Jan 5, 2012 Raquel Urtasun (TTI-C) Visual Recognition Jan 5, 2012 1 / 61 Today s lecture... Fundamentals of image formation You should know
More informationHand-Eye Calibration from Image Derivatives
Hand-Eye Calibration from Image Derivatives Abstract In this paper it is shown how to perform hand-eye calibration using only the normal flow field and knowledge about the motion of the hand. The proposed
More informationCS231A Course Notes 4: Stereo Systems and Structure from Motion
CS231A Course Notes 4: Stereo Systems and Structure from Motion Kenji Hata and Silvio Savarese 1 Introduction In the previous notes, we covered how adding additional viewpoints of a scene can greatly enhance
More informationEpipolar Geometry and the Essential Matrix
Epipolar Geometry and the Essential Matrix Carlo Tomasi The epipolar geometry of a pair of cameras expresses the fundamental relationship between any two corresponding points in the two image planes, and
More informationCamera model and multiple view geometry
Chapter Camera model and multiple view geometry Before discussing how D information can be obtained from images it is important to know how images are formed First the camera model is introduced and then
More informationA COMPREHENSIVE TOOL FOR RECOVERING 3D MODELS FROM 2D PHOTOS WITH WIDE BASELINES
A COMPREHENSIVE TOOL FOR RECOVERING 3D MODELS FROM 2D PHOTOS WITH WIDE BASELINES Yuzhu Lu Shana Smith Virtual Reality Applications Center, Human Computer Interaction Program, Iowa State University, Ames,
More information3D Metric Reconstruction from Uncalibrated Omnidirectional Images
CENTER FOR MACHINE PERCEPTION CZECH TECHNICAL UNIVERSITY 3D Metric Reconstruction from Uncalibrated Omnidirectional Images Branislav Mičušík, Daniel Martinec and Tomáš Pajdla micusb1@cmp.felk.cvut.cz,
More informationMEM380 Applied Autonomous Robots Winter Robot Kinematics
MEM38 Applied Autonomous obots Winter obot Kinematics Coordinate Transformations Motivation Ultimatel, we are interested in the motion of the robot with respect to a global or inertial navigation frame
More informationCalibration of a fish eye lens with field of view larger than 180
CENTER FOR MACHINE PERCEPTION CZECH TECHNICAL UNIVERSITY Calibration of a fish eye lens with field of view larger than 18 Hynek Bakstein and Tomáš Pajdla {bakstein, pajdla}@cmp.felk.cvut.cz REPRINT Hynek
More informationA linear algorithm for Camera Self-Calibration, Motion and Structure Recovery for Multi-Planar Scenes from Two Perspective Images
A linear algorithm for Camera Self-Calibration, Motion and Structure Recovery for Multi-Planar Scenes from Two Perspective Images Gang Xu, Jun-ichi Terai and Heung-Yeung Shum Microsoft Research China 49
More informationCamera Self-calibration Based on the Vanishing Points*
Camera Self-calibration Based on the Vanishing Points* Dongsheng Chang 1, Kuanquan Wang 2, and Lianqing Wang 1,2 1 School of Computer Science and Technology, Harbin Institute of Technology, Harbin 150001,
More information1 Projective Geometry
CIS8, Machine Perception Review Problem - SPRING 26 Instructions. All coordinate systems are right handed. Projective Geometry Figure : Facade rectification. I took an image of a rectangular object, and
More informationQuasi-Euclidean Uncalibrated Epipolar Rectification
Dipartimento di Informatica Università degli Studi di Verona Rapporto di ricerca Research report September 2006 RR 43/2006 Quasi-Euclidean Uncalibrated Epipolar Rectification L. Irsara A. Fusiello Questo
More informationAnnouncements. Equation of Perspective Projection. Image Formation and Cameras
Announcements Image ormation and Cameras Introduction to Computer Vision CSE 52 Lecture 4 Read Trucco & Verri: pp. 22-4 Irfanview: http://www.irfanview.com/ is a good Windows utilit for manipulating images.
More informationCoplanar circles, quasi-affine invariance and calibration
Image and Vision Computing 24 (2006) 319 326 www.elsevier.com/locate/imavis Coplanar circles, quasi-affine invariance and calibration Yihong Wu *, Xinju Li, Fuchao Wu, Zhanyi Hu National Laboratory of
More informationHumanoid Robotics. Projective Geometry, Homogeneous Coordinates. (brief introduction) Maren Bennewitz
Humanoid Robotics Projective Geometry, Homogeneous Coordinates (brief introduction) Maren Bennewitz Motivation Cameras generate a projected image of the 3D world In Euclidian geometry, the math for describing
More informationMulti-View Geometry Part II (Ch7 New book. Ch 10/11 old book)
Multi-View Geometry Part II (Ch7 New book. Ch 10/11 old book) Guido Gerig CS-GY 6643, Spring 2016 gerig@nyu.edu Credits: M. Shah, UCF CAP5415, lecture 23 http://www.cs.ucf.edu/courses/cap6411/cap5415/,
More informationMAN-522: COMPUTER VISION SET-2 Projections and Camera Calibration
MAN-522: COMPUTER VISION SET-2 Projections and Camera Calibration Image formation How are objects in the world captured in an image? Phsical parameters of image formation Geometric Tpe of projection Camera
More informationTowards Generic Self-Calibration of Central Cameras
Towards Generic Self-Calibration of Central Cameras Srikumar Ramalingam 1&2, Peter Sturm 1, and Suresh K. Lodha 2 1 INRIA Rhône-Alpes, GRAVIR-CNRS, 38330 Montbonnot, France 2 Dept. of Computer Science,
More informationA Visual Servoing Model for Generalised Cameras: Case study of non-overlapping cameras
IEEE International Conference on Robotics and Automation Shanghai International Conference Center May 9-3,, Shanghai, China A Visual Servoing Model for Generalised Cameras: Case study of non-overlapping
More informationCSE 4392/5369. Dr. Gian Luca Mariottini, Ph.D.
University of Texas at Arlington CSE 4392/5369 Introduction to Vision Sensing Dr. Gian Luca Mariottini, Ph.D. Department of Computer Science and Engineering University of Texas at Arlington WEB : http://ranger.uta.edu/~gianluca
More informationRobust Camera Calibration from Images and Rotation Data
Robust Camera Calibration from Images and Rotation Data Jan-Michael Frahm and Reinhard Koch Institute of Computer Science and Applied Mathematics Christian Albrechts University Kiel Herman-Rodewald-Str.
More informationCamera Calibration. Schedule. Jesus J Caban. Note: You have until next Monday to let me know. ! Today:! Camera calibration
Camera Calibration Jesus J Caban Schedule! Today:! Camera calibration! Wednesday:! Lecture: Motion & Optical Flow! Monday:! Lecture: Medical Imaging! Final presentations:! Nov 29 th : W. Griffin! Dec 1
More informationSELF-CALIBRATION OF CENTRAL CAMERAS BY MINIMIZING ANGULAR ERROR
SELF-CALIBRATION OF CENTRAL CAMERAS BY MINIMIZING ANGULAR ERROR Juho Kannala, Sami S. Brandt and Janne Heikkilä Machine Vision Group, University of Oulu, Finland {jkannala, sbrandt, jth}@ee.oulu.fi Keywords:
More informationAnnouncements. Stereo
Announcements Stereo Homework 2 is due today, 11:59 PM Homework 3 will be assigned today Reading: Chapter 7: Stereopsis CSE 152 Lecture 8 Binocular Stereopsis: Mars Given two images of a scene where relative
More informationURBAN STRUCTURE ESTIMATION USING PARALLEL AND ORTHOGONAL LINES
URBAN STRUCTURE ESTIMATION USING PARALLEL AND ORTHOGONAL LINES An Undergraduate Research Scholars Thesis by RUI LIU Submitted to Honors and Undergraduate Research Texas A&M University in partial fulfillment
More informationDept. of Adaptive Machine Systems, Graduate School of Engineering Osaka University, Suita, Osaka , Japan
An Application of Vision-Based Learning for a Real Robot in RoboCup - A Goal Keeping Behavior for a Robot with an Omnidirectional Vision and an Embedded Servoing - Sho ji Suzuki 1, Tatsunori Kato 1, Hiroshi
More informationBehavior Learning for a Mobile Robot with Omnidirectional Vision Enhanced by an Active Zoom Mechanism
Behavior Learning for a Mobile Robot with Omnidirectional Vision Enhanced by an Active Zoom Mechanism Sho ji Suzuki, Tatsunori Kato, Minoru Asada, and Koh Hosoda Dept. of Adaptive Machine Systems, Graduate
More informationCS 664 Slides #9 Multi-Camera Geometry. Prof. Dan Huttenlocher Fall 2003
CS 664 Slides #9 Multi-Camera Geometry Prof. Dan Huttenlocher Fall 2003 Pinhole Camera Geometric model of camera projection Image plane I, which rays intersect Camera center C, through which all rays pass
More information