A Calibration Algorithm for POX-Slits Camera

N. Martins (DEIS, ISEC, Polytechnic Institute of Coimbra, Portugal) and H. Araújo (ISR/DEEC, University of Coimbra, Portugal)

Abstract. Recent developments have suggested alternative multiperspective camera models that are potentially advantageous for the analysis of scene structure. Two-slit cameras are one such case. These cameras collect all rays passing through two lines. The projection model for these cameras is non-linear: every 3D point is projected along the ray that passes through that point and intersects both slits. In this paper we propose a robust non-iterative linear method for the calibration of this type of camera. For that purpose a calibration object with known dimensions is required. A solution can be obtained using at least thirteen world-to-image correspondences. To achieve a higher level of accuracy, data normalization and a non-linear technique based on the maximum likelihood criterion can be used to refine the estimated solution.

1 Introduction

Projection models are a central issue in computer vision. The mathematical model that describes the formation of the most common type of images is the perspective projection model. Most commercial optical devices generate images whose geometric properties are described by this model, and therefore the classic pinhole and orthographic camera models have long been used in 3D imaging applications. However, certain vision problems can benefit from alternative projection models, as recent developments have suggested; moreover, those developments in image sensing make the purely perspective model restrictive. Multiperspective models have been providing advantageous imaging systems for understanding the structure of observed 3D scenes. Examples of such camera models are the bi-centric [13], crossed-slits (also known as x-slits) [15], general linear [14] and rational polynomial [4] models. Multiperspective imaging has also been explored in computer graphics [8].

In the bi-centric model the centers of horizontal and vertical projection lie at different locations on the camera's optical axis. Perspective and pushbroom cameras [3] are particular cases of this model: the former when the horizontal and vertical projection centers coincide, and the latter when only the horizontal projection center resides at infinity (which corresponds to a vertical strip of a sensor translating sideways). In [13] it was also shown that a straight line in the scene is projected into a hyperbola in the image.

The pushbroom model collects rays along parallel planes from viewpoints swept along a linear trajectory [3]. The most visible distortion in images that follow the pushbroom model is the variation of the aspect ratio. The general linear and cubic cameras are general models. The general linear camera model unifies most projection models used in computer vision, including perspective and affine models, optical distortion models, x-slits models, stereo systems and catadioptric systems [14]. A cubic camera maps image points as rational polynomial functions, of degree less than four, of the coordinates of a world point [4]; this camera model treats projective, affine, pushbroom and x-slits cameras as particular cases.

In the x-slits model the projection ray of a generic 3D point is defined by the 3D line that passes through the point and two lines, referred to as slits. The image is obtained by the intersection of every projection ray with the image plane. This model was initially designed by one of the pioneers of color photography, Ducos du Hauron, in 1888 [7], under the name "transformisme en photographie" [6]. He thought that his device would be used in the 20th century to create visions of another world [7]. However, it was a restricted model in terms of the slit positions, which were parallel and orthogonal to each other (this configuration was later referred to as parallel-orthogonal x-slits, or pox-slits [15]). An interesting aspect is that the pox-slits projection equations are similar to those of the bi-centric model [1]. A particular case of the pox-slits camera, in which the vertical slit resides at infinity, is the pushbroom camera. One century later, the pox-slits model was revisited and generalized by Kingslake, who concluded that it is similar to the perspective projection model with the image stretched or compressed more in one direction than in the other [6]. This fact shows its adequacy for use in wide-screen technologies.

Zomet et al. [15], expanding Kingslake's generalization, introduced the x-slits projection model. According to their study, one advantage of this model is that x-slits images can easily be generated from perspective images. In short, this is done by pasting together vertical or horizontal samplings of a sequence of images captured by a perspective camera that moves, respectively, along a horizontal or vertical line (this column-sampling procedure is sketched in the code example below). With a more complex procedure, new x-slits views can be generated even when the camera motion is not parallel to the image plane [1]. The idea of sampling columns from images had been explored before, but using a constant sampling function [10]; this traditional mosaicing technique is similar to the one used to create pushbroom panoramas [13]. Another remarkable aspect of this camera is that the perspective model is a particular case of the x-slits camera, in which the vertical and horizontal slits lie in the same plane; the optical center of the perspective camera is the intersection of the slits.
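The column-sampling procedure described above can be sketched as follows. This is a minimal example assuming NumPy; the frames, image sizes and sampling functions are hypothetical and not taken from [15]. A constant sampling column yields a pushbroom-style panorama, while a column index that varies linearly with the frame index yields an x-slits style view.

```python
import numpy as np

def column_mosaic(frames, column_of_frame):
    """frames: list of HxWx3 arrays; column_of_frame: frame index -> column index."""
    columns = []
    for t, frame in enumerate(frames):
        c = int(np.clip(column_of_frame(t), 0, frame.shape[1] - 1))
        columns.append(frame[:, c, :])          # one vertical strip per frame
    return np.stack(columns, axis=1)            # paste the strips side by side

# Synthetic frames from a (hypothetical) horizontally translating camera.
frames = [np.random.rand(120, 160, 3) for _ in range(200)]
pushbroom = column_mosaic(frames, lambda t: 80)            # constant sampling function
xslits    = column_mosaic(frames, lambda t: 20 + 0.6 * t)  # linearly varying sampling
print(pushbroom.shape, xslits.shape)
```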

Although [15] presents an extensive analysis of x-slits cameras, it focuses only on aspects related to image generation. In this paper we deal with the problem of calibrating this type of camera. Grossberg et al. [2] presented a different camera calibration approach, referred to as the generic imaging model, in which calibration consists in determining, for every image pixel, the associated 3D projection ray. This method is also used in [8] and [10]. The mapping can be conveniently described using a set of virtual sensing elements, called raxels, which include geometric, radiometric and optical properties. As an extension to [2], [12] introduces a generic calibration approach in which at least three images of a calibration object are acquired. The fact that a projection ray is a 3D line yields a constraint that allows the recovery of both the motion and the camera parameters; this constraint is formulated via a set of trifocal tensors that can be estimated linearly. In [9] this calibration method is used in a 3D reconstruction process, with a parametric reprojection based on bundle adjustment to refine the obtained solution.

In the calibration method described in this paper we use the non-linear x-slits equations. For estimation purposes the equations are rewritten so that linear estimation methods can be used. For good levels of accuracy in the estimates, data normalization and a non-linear technique based on the maximum likelihood criterion can be used [5].

2 X-slits projection model

Consider the x-slits projection configuration represented in Figure 1.

Fig. 1. X-slits projection model.

The projection ray of a generic 3D point, P, must intersect two lines, or slits, l_1 and l_2. Point P together with each slit defines a plane, and the intersection of those two planes defines the projection ray. The projection of the 3D point in the image, p, is obtained by the intersection of the projection ray with the image plane.

To define the two slits, let u_i and v_i (with i = 1, 2) be two generic planes in 3D space, represented by their homogeneous coordinate vectors. Each slit l_i is the intersection of the corresponding pair of planes and can be represented by its dual Plücker matrix [5],

$$L_i = u_i v_i^T - v_i u_i^T = \begin{bmatrix} 0 & L_{i34} & L_{i42} & L_{i23} \\ -L_{i34} & 0 & L_{i14} & -L_{i13} \\ -L_{i42} & -L_{i14} & 0 & L_{i12} \\ -L_{i23} & L_{i13} & -L_{i12} & 0 \end{bmatrix},$$

if we use the Plücker coordinates of the slits.
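As a small illustration (a sketch assuming NumPy, not code from the paper), the dual Plücker matrix of a slit can be built directly from the homogeneous vectors of two planes containing it. The example slit is the line {x = 0, z = Z_1} used in the pox-slits configuration of Section 3.1, with Z_1 = 100 as in the experiments of Section 4.

```python
import numpy as np

def dual_pluecker(u, v):
    """Dual Pluecker matrix of the line where planes u and v intersect: u v^T - v u^T."""
    u = np.asarray(u, dtype=float).reshape(4, 1)
    v = np.asarray(v, dtype=float).reshape(4, 1)
    return u @ v.T - v @ u.T

# Slit defined by the planes x = 0 and z = Z1 (a line parallel to the y axis).
Z1 = 100.0
L1 = dual_pluecker([1, 0, 0, 0], [0, 0, 1, -Z1])
print(L1)
print(np.allclose(L1, -L1.T))   # the dual Pluecker matrix is antisymmetric
```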

The projection ray, l, is the intersection of the two planes defined by each slit l_i and the 3D point P (the planes L_1P and L_2P, respectively), and it can itself be represented by a dual Plücker matrix,

$$L = (L_1 P)(L_2 P)^T - (L_2 P)(L_1 P)^T.$$

Assume that the image plane, I, is defined by the points P_0, P_1 and P_2. Any point belonging to I can be expressed as a linear combination of those points,

$$k x P_0 + k y P_1 + k P_2,$$

so that any point of the 2D space defined on the image plane is given, in homogeneous coordinates, by p = [kx, ky, k]^T [11]. The projection of a generic 3D point P onto the image plane I is the 2D point p obtained by intersecting the projection ray l with I. Since the corresponding 3D point kxP_0 + kyP_1 + kP_2 must belong to both planes L_1P and L_2P,

$$\begin{bmatrix} P^T L_1 P_0 & P^T L_1 P_1 & P^T L_1 P_2 \\ P^T L_2 P_0 & P^T L_2 P_1 & P^T L_2 P_2 \end{bmatrix} p = 0. \tag{1}$$

The solution of equation (1) is the right null space of the matrix, which can be obtained from the cross product of its two rows:

$$k p = \begin{bmatrix} P^T L_1 (P_1 P_2^T - P_2 P_1^T) L_2 P \\ P^T L_1 (P_2 P_0^T - P_0 P_2^T) L_2 P \\ P^T L_1 (P_0 P_1^T - P_1 P_0^T) L_2 P \end{bmatrix}.$$

The homogeneous relation between 3D world points and 2D image points, in pixels, for the x-slits projection model is therefore

$$k p = \begin{bmatrix} k_x & \gamma & c_x \\ 0 & k_y & c_y \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} P^T L_1 I_0 L_2 P \\ P^T L_1 I_1 L_2 P \\ P^T L_1 I_2 L_2 P \end{bmatrix}, \tag{2}$$

where I_0, I_1 and I_2 are the Plücker matrices corresponding to the x and y axes of the image plane and to its line at infinity, k_x and k_y are the focal lengths, (c_x, c_y) are the coordinates of the principal point and γ is the image skew (a numerical sketch of this projection is given at the end of this section). According to [15], this solution is unique unless the point lies on the line joining the intersections of the two slits with the image plane.
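The following is a numerical sketch of equation (2), assuming NumPy and not taken from the paper: it projects a homogeneous 3D point given the dual Plücker matrices of the two slits, the image-plane reference points P_0, P_1, P_2 and the intrinsic parameters. The example values reproduce the pox-slits configuration of Section 3.1 and the intrinsic parameters used in the experiments of Section 4; the 3D point is arbitrary.

```python
import numpy as np

def antisym_outer(a, b):
    # a b^T - b a^T: the dual Pluecker matrix when a, b are planes,
    # and the Pluecker matrix of the joining line when a, b are points.
    a, b = (np.asarray(v, dtype=float).reshape(4, 1) for v in (a, b))
    return a @ b.T - b @ a.T

def xslits_project(P, L1, L2, K, P0, P1, P2):
    """Project the homogeneous 3D point P with the x-slits model of equation (2)."""
    I0 = antisym_outer(P1, P2)       # Pluecker matrix of the line through P1 and P2
    I1 = antisym_outer(P2, P0)       # Pluecker matrix of the line through P2 and P0
    I2 = antisym_outer(P0, P1)       # Pluecker matrix of the line through P0 and P1
    P = np.asarray(P, dtype=float)
    q = np.array([P @ L1 @ I @ L2 @ P for I in (I0, I1, I2)])
    p = K @ q                        # apply the intrinsic parameters
    return p[:2] / p[2]              # inhomogeneous pixel coordinates (x, y)

# Pox-slits example: slit 1 is {x = 0, z = Z1}, slit 2 is {y = 0, z = Z2},
# and the image plane z = 0 is defined by the points P0, P1, P2 below.
Z1, Z2 = 100.0, 50.0
L1 = antisym_outer([1, 0, 0, 0], [0, 0, 1, -Z1])
L2 = antisym_outer([0, 1, 0, 0], [0, 0, 1, -Z2])
K = np.array([[47.0, 25.0, 320.0],   # [k_x, gamma, c_x]
              [0.0,  63.0, 240.0],   # [0,   k_y,   c_y]
              [0.0,   0.0,   1.0]])
P0, P1, P2 = [1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1]
print(xslits_project([10.0, -5.0, 30.0, 1.0], L1, L2, K, P0, P1, P2))
```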

3 Calibrating the x-slits projection model

In this section we describe an algorithm to calibrate the x-slits camera. We begin with a particular case of this camera, known as pox-slits, and then address the general case.

3.1 Pox-slits case

We show how to calibrate the pox-slits camera because this camera is similar to the bi-centric camera and is a generalization of the pushbroom camera. Let us define

$$u_1 = [1\ 0\ 0\ 0]^T, \quad v_1 = [0\ 0\ 1\ -Z_1]^T, \quad u_2 = [0\ 1\ 0\ 0]^T, \quad v_2 = [0\ 0\ 1\ -Z_2]^T,$$

and let us also consider the three homogeneous points

$$P_0 = [1\ 0\ 0\ 0]^T, \quad P_1 = [0\ 1\ 0\ 0]^T, \quad P_2 = [0\ 0\ 0\ 1]^T,$$

which belong to the image plane. With these choices, equation (2) becomes

$$k \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = \begin{bmatrix} k_x & \gamma & c_x \\ 0 & k_y & c_y \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} \dfrac{X Z_1}{Z_1 - Z} \\[2mm] \dfrac{Y Z_2}{Z_2 - Z} \\[2mm] 1 \end{bmatrix}. \tag{3}$$

The calibration algorithm aims at estimating the intrinsic camera parameters k_x, k_y, c_x, c_y and γ and the slit parameters Z_1 and Z_2. From equation (3) we obtain

$$-k_x Z_1 XZ + k_x Z_1 Z_2 X - \gamma Z_2 YZ + \gamma Z_1 Z_2 Y + c_x Z^2 - c_x Z_1 Z - c_x Z_2 Z + c_x Z_1 Z_2 + Z_1 xZ + Z_2 xZ - Z_1 Z_2 x = xZ^2 \tag{4}$$

and

$$c_y Z - c_y Z_2 - k_y Z_2 Y + Z_2 y = Zy. \tag{5}$$

Defining, without loss of generality, C_1 = c_y Z_2 and C_2 = k_y Z_2, we can rewrite equation (5) in matrix form as

$$\begin{bmatrix} Z & -1 & -Y & y \end{bmatrix} \begin{bmatrix} c_y \\ C_1 \\ C_2 \\ Z_2 \end{bmatrix} = Zy.$$

Using at least four world-to-image correspondences we obtain a system of equations whose solution can be computed with any numerical linear method, e.g. the SVD. The solution of this system yields estimates for the intrinsic parameters k_y and c_y and for the slit parameter Z_2.

Defining now, without loss of generality, C_3 = k_x Z_1, C_5 = \gamma Z_1 and C_6 = c_x Z_1, and substituting the estimated parameters in equation (4), we get

$$xZ^2 - xZZ_2 = (-XZ + XZ_2)\,C_3 - YZZ_2\,\gamma + YZ_2\,C_5 + (Z^2 - ZZ_2)\,c_x + (-Z + Z_2)\,C_6 + (xZ - xZ_2)\,Z_1.$$

Similarly, using at least six world-to-image correspondences, we obtain a system of equations whose solution yields estimates for the intrinsic parameters k_x, c_x and γ and for the slit parameter Z_1. To obtain a higher level of accuracy, Hartley and Zisserman [5] suggest data normalization and a non-linear refinement based on the maximum likelihood criterion. Therefore, at least six world-to-image correspondences are needed to calibrate the pox-slits camera. The two linear stages are sketched below.
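The following is a minimal sketch of these two linear stages, assuming NumPy; the data-normalization and maximum-likelihood refinement steps mentioned above are omitted, and np.linalg.lstsq stands in for the SVD-based solution of the linear systems. The synthetic check at the end uses the ground-truth values of Section 4 and noise-free correspondences.

```python
import numpy as np

def calibrate_pox_slits(Pw, p):
    """Pw: Nx3 world points (X, Y, Z); p: Nx2 image points (x, y); N >= 6."""
    X, Y, Z = Pw[:, 0], Pw[:, 1], Pw[:, 2]
    x, y = p[:, 0], p[:, 1]

    # Stage 1, equation (5): rows [Z, -1, -Y, y] . [c_y, C1, C2, Z2]^T = Z*y.
    A1 = np.column_stack([Z, -np.ones_like(Z), -Y, y])
    b1 = Z * y
    c_y, C1, C2, Z2 = np.linalg.lstsq(A1, b1, rcond=None)[0]
    k_y = C2 / Z2

    # Stage 2: substitute the estimated Z2 in equation (4);
    # unknowns are [C3, gamma, C5, c_x, C6, Z1].
    A2 = np.column_stack([
        -X * Z + X * Z2,        # multiplies C3 = k_x * Z1
        -Y * Z * Z2,            # multiplies gamma
        Y * Z2,                 # multiplies C5 = gamma * Z1
        Z**2 - Z * Z2,          # multiplies c_x
        -Z + Z2,                # multiplies C6 = c_x * Z1
        x * Z - x * Z2,         # multiplies Z1
    ])
    b2 = x * Z**2 - x * Z * Z2
    C3, gamma, C5, c_x, C6, Z1 = np.linalg.lstsq(A2, b2, rcond=None)[0]
    k_x = C3 / Z1
    return dict(k_x=k_x, k_y=k_y, c_x=c_x, c_y=c_y, gamma=gamma, Z1=Z1, Z2=Z2)

# Quick noise-free check with the ground-truth values used in Section 4.
rng = np.random.default_rng(1)
Z1t, Z2t, kxt, kyt, cxt, cyt, gt = 100.0, 50.0, 47.0, 63.0, 320.0, 240.0, 25.0
Pw = rng.uniform([-20.0, -20.0, 0.0], [20.0, 20.0, 30.0], size=(20, 3))
X, Y, Z = Pw.T
xn, yn = X * Z1t / (Z1t - Z), Y * Z2t / (Z2t - Z)
p = np.column_stack([kxt * xn + gt * yn + cxt, kyt * yn + cyt])
print(calibrate_pox_slits(Pw, p))
```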

3.2 General case

Let us now address the case of the general x-slits camera. Defining

C_1 = L_142 L_234 - L_134 L_242,   C_2 = L_114 L_234 - L_134 L_214,   C_3 = L_142 L_213 - L_113 L_242,
C_4 = L_134 L_213 - L_113 L_234,   C_5 = L_114 L_223 - L_123 L_214,   C_6 = L_123 L_234 - L_134 L_223,
C_7 = L_114 L_242 - L_142 L_214,   C_8 = L_123 L_213 - L_113 L_223,   C_9 = L_114 L_213 - L_113 L_214,
C_10 = L_134 L_212 - L_112 L_234,  C_11 = L_112 L_214 - L_114 L_212,  C_12 = L_113 L_212 - L_112 L_213,
C_13 = L_142 L_223 - L_123 L_242,  C_14 = L_123 L_212 - L_112 L_223,  C_15 = L_142 L_212 - L_112 L_242,

where L_1jk and L_2jk are the Plücker coordinates of the two slits, and

V_1 = C_3 + C_5,                          V_2 = c_x C_1 - k_x C_5 + k_x C_10 + γ C_13,   V_3 = c_x C_2 - γ C_3 + k_x C_9 + γ C_10,
V_4 = c_x V_1 + k_x C_12 + γ C_14,        V_5 = c_x C_4 + γ C_8,                          V_6 = c_x C_6 + k_x C_8,
V_7 = c_x C_7 + k_x C_11 + γ C_15,        V_8 = c_x C_8,                                  V_9 = k_x C_4 + γ C_6,
V_10 = k_x C_6,                           V_11 = γ C_4,                                   V_12 = k_y C_3 + k_y C_10 - c_y C_2,
V_13 = k_y C_4,                           V_14 = k_y C_6,                                 V_15 = k_y C_8 + c_y C_4,
V_16 = k_y C_13 - c_y C_1,                V_17 = k_y C_15 - c_y C_7,                      V_18 = k_y C_14 - c_y V_1,
V_19 = c_y C_6,                           V_20 = c_y C_8,

equation (2) can be rewritten, without loss of generality, as a pair of quadratic constraints on P,

$$P^T M_x P = 0, \qquad P^T M_y P = 0,$$

where each entry of the 4x4 matrices M_x and M_y is either zero, a constant V_j, or a combination, linear in the image coordinate, of a term x C_i (respectively y C_i) with one of the V_j.

The general model of this camera is specified by 15 parameters, and therefore the total number of unknowns is also 15. However, as a result of rewriting the equations so that a linear numerical method can be used, we end up with 26 unknowns. Therefore at least 13 world-to-image correspondences are required.

4 Experimental Results

The experimental results presented in this paper use synthetically generated data. In addition, we only present results for the pox-slits model; results for the general case are still being obtained. As can be seen in Figure 2(a), a sphere is used as the calibration object. This random sphere, with radius 20 and center (22.33, 43.37, ), is made up of known 3D points. The figure also shows the image plane (bottom) and the planes that contain the slits (the two upper planes). Figure 2(b) shows the pox-slits image of the sphere points.

Fig. 2. (a) Visualization of the calibration object, with the image plane and the planes that contain the slits; (b) pox-slits image of the calibration object.

Using equation (3), the pox-slits camera is defined with Z_1 = 100, Z_2 = 50, k_x = 47, k_y = 63, c_x = 320, c_y = 240 and γ = 25. To calibrate the camera we start by normalizing the image coordinates as suggested by Hartley [5]. Gaussian white noise with zero mean and variance σ² was added to the image coordinates of the points; the noise variance was varied between 0.1 and 20 pixels, and for each noise level 150 runs were performed, computing the percent error in the estimate of each parameter. This synthetic setup is sketched below.
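The following is a minimal sketch of the synthetic setup, assuming NumPy. The third coordinate of the sphere centre and the number of sphere points are not stated above and are chosen here purely for illustration; the noisy correspondences produced this way would then be passed to a calibration routine such as the one sketched in Section 3.1, and the percent errors averaged over repeated runs.

```python
import numpy as np

rng = np.random.default_rng(0)
Z1, Z2 = 100.0, 50.0
k_x, k_y, c_x, c_y, gamma = 47.0, 63.0, 320.0, 240.0, 25.0

# Random points on a sphere of radius 20; the third centre coordinate and the
# number of points are illustrative (they are not given in the text above).
n = 500
d = rng.normal(size=(n, 3))
centre = np.array([22.33, 43.37, 20.0])
Pw = centre + 20.0 * d / np.linalg.norm(d, axis=1, keepdims=True)
X, Y, Z = Pw.T

# Pox-slits projection of the sphere points, equation (3).
x = k_x * X * Z1 / (Z1 - Z) + gamma * Y * Z2 / (Z2 - Z) + c_x
y = k_y * Y * Z2 / (Z2 - Z) + c_y

# Zero-mean Gaussian noise on the image coordinates, for one variance level.
sigma2 = 1.0
p_noisy = np.column_stack([x, y]) + rng.normal(scale=np.sqrt(sigma2), size=(n, 2))
print(p_noisy[:3])
```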

The averages (over the 150 runs, for each noise variance level) of the percent errors are presented in Figure 3.

Fig. 3. Relative mean error in the estimation of the camera parameters, plotted as a function of the noise variance.

As can be seen in the figure, the errors increase almost linearly with the noise level. We also computed the variance of the errors in the parameter estimates; the values of these error variances are below the floating-point precision. We can therefore conclude that the algorithm is suitable for calibrating this type of camera.

5 Conclusions

In this paper we presented a robust non-iterative linear algorithm to calibrate a pox-slits camera. The algorithm requires at least six world-to-image correspondences, and normalization of the coordinates of the image points is an essential step. The algorithm for the general x-slits camera was also described briefly.

References

1. D. Feldman, A. Zomet, S. Peleg, and D. Weinshall, Video synthesis made simple with the x-slits projection, IEEE Workshop on Motion and Video Computing, Orlando, December 2002.
2. M. Grossberg and S. Nayar, A General Imaging Model and a Method for Finding its Parameters, Proceedings of the IEEE International Conference on Computer Vision, Vancouver, Canada, July 2001.
3. R. Gupta and R. Hartley, Linear Pushbroom Cameras, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 19, No. 9, 1997.
4. R. Hartley and T. Saxena, The cubic rational polynomial camera model, Image Understanding Workshop.
5. R. Hartley and A. Zisserman, Multiple View Geometry, Cambridge University Press.
6. R. Kingslake, Optics in Photography, SPIE Optical Engineering Press.
7. B. Newhall, The History of Photography, from 1839 to the present day, The Museum of Modern Art, p. 162, 1964.

8. S. Peleg, M. Ben-Ezra, and Y. Pritch, Omnistereo: Panoramic Stereo Imaging, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 23, No. 3, March 2001.
9. S. Ramalingam, S. K. Lodha, and P. Sturm, A generic structure-from-motion algorithm for cross-camera scenarios, OMNIVIS - Workshop on Omnidirectional Vision, Camera Networks and Non-Classical Cameras, Prague, Czech Republic, May 2004.
10. S. Seitz, The Space of All Stereo Images, Proceedings of the 8th IEEE International Conference on Computer Vision, Vol. 1, Vancouver, Canada, July 2001.
11. J. Semple and G. Kneebone, Algebraic Projective Geometry, Oxford University Press.
12. P. Sturm and S. Ramalingam, A generic concept for camera calibration, Proceedings of the 8th European Conference on Computer Vision, Prague, Czech Republic, 2004.
13. D. Weinshall, M. Lee, T. Brodsky, M. Trajkovic, and D. Feldman, New View Generation with a Bi-Centric Camera, Proceedings of the 7th European Conference on Computer Vision, Copenhagen, Denmark, May 2002.
14. J. Yu and L. McMillan, General Linear Cameras, Proceedings of the 8th European Conference on Computer Vision, Prague, Czech Republic, 2004.
15. A. Zomet, D. Feldman, S. Peleg, and D. Weinshall, Mosaicing New Views: The Crossed-Slits Projection, IEEE Transactions on Pattern Analysis and Machine Intelligence, 2003.
