Silhouette-based Multiple-View Camera Calibration


Prashant Ramanathan, Eckehard Steinbach, and Bernd Girod
Information Systems Laboratory, Electrical Engineering Department, Stanford University, Stanford, CA

Abstract

In this paper, we present an efficient method for calibrating many camera views simultaneously. We derive an error function based on the mutual consistency of the object silhouettes in pairs of views, and we explicitly derive the gradients that are used to minimize it. Our experimental results suggest that the gradient-based minimization of the error function robustly calibrates multiple camera views. Because the gradients are available explicitly, the technique is also computationally efficient.

1 Introduction

Camera calibration is important for most computer vision and computer graphics applications, since the quality of the results depends strongly on the accuracy of the camera parameters. In 3-D reconstruction from multiple camera views, for example, the recovered object shape is, to a large extent, determined by the accuracy of the camera parameters in each of the views. Accurately determining the true values of the camera parameters can be difficult and is not possible in all situations. It may, however, be possible to obtain approximate estimates of these parameters. When the silhouette of the object can be determined in each view, this information can be exploited to improve the accuracy of the camera parameters. Grattarola [1], and later Niem [2], proposed silhouette-based methods to improve camera parameters for 3-D object reconstruction from multiple silhouettes. In this paper, we propose a silhouette-based technique that addresses some of the concerns of the previous approaches [1, 2] and extends them so that robust, efficient calibration of many camera views is possible. In Section 2, we review previous work in silhouette-based calibration.
In Section 3, we present our approach. Section 4 discusses the parameterization used for the different camera arrangements in our simulation experiments. Experimental results are given in Section 5. Section 6 offers a few concluding remarks.

2 Previous Work

Silhouette information has been used extensively in computer vision for 3-D reconstruction [3]. As shown in Figure 1, a silhouette from a single view in the image plane I defines a three-dimensional cone C, with the focal point P as its apex and the points on the contour of the silhouette S defining its surface. When multiple views of an object are available, these 3-D cones can be intersected to give the shape of the reconstructed object [3]. The accuracy of the resulting reconstruction depends not only on the topology of the object and the specific views that are used, but also on the accuracy of the camera parameters. Camera parameters include both intrinsic and extrinsic parameters.

[Figure 1: The 3-D cone C, with center of projection P, is defined by the silhouette S in the image plane I.]

In this work, we perform calibration only over the extrinsic camera parameters, which we will henceforth refer to as the calibration parameters. If the calibration parameters are incorrect then, in general, the reconstructed shape will be incorrect. Since the object shape is unknown, the reconstructed object cannot be used directly to determine whether the camera parameters are incorrect. In [2], Niem projects the reconstructed shape into each of the views and compares this projected silhouette with the original silhouette. The difference between these two silhouettes is taken as a measure of the error in the calibration parameters. Niem also computes an approximate gradient of the error with respect to the calibration parameters. This gradient is used both to select which parameter to calibrate first and to approximate the change in that parameter required to improve the calibration. Niem's error measure, however, is computationally expensive, since the object shape must be reconstructed in object space and then projected onto each of the views. In addition, the error function is not very smooth, possibly due to the voxel-based representation of the object. As a result, even the robust calibration of a few parameters becomes difficult.

The method proposed by Grattarola [1] does not require these computationally expensive reconstruction and projection steps. Instead, pairs of views are checked for mutual consistency of their silhouettes, entirely in the image plane of one of the views. Figure 2 illustrates the basic underlying concepts of this approach. A pair of views, i and j, is considered.

[Figure 2: A 3-D cone C_i, defined by the silhouette S_i in image plane I_i with center of projection P_i, projects onto the image plane I_j to become the 2-D cone C_ij with apex P_ij.]

The 3-D cone C_i defined by a silhouette S_i in image plane I_i, when projected onto the image plane I_j, appears as a 2-D cone C_ij. This 2-D cone will be tangent to and contain the silhouette S_j of the image plane I_j, as illustrated in Figure 3. This is, in general, only true when the camera parameters for views i and j are correct. If the parameters are incorrect, the silhouette in the image plane will not be tangent and internal to the projected 2-D cone. Figure 4 shows the tangent cone, which is formed from the silhouette S_j and the projected focal point P_ij. This is compared against the projected cone C_ij. We define the error to be the sum of the absolute differences of the angles between the tangent cone and the projected cone. The error is illustrated in Figure 4 as the angles ɛ_1 and ɛ_2. This differs slightly from the error measure in [1], where Grattarola defines the error as the perpendicular distance from the lines defining the projected cone C_ij to the contour of the silhouette S_j. The error, so far, has only been defined for a pair of views i and j. To compute the overall error, this pairwise error is averaged over all ordered pairs of views. However, it may happen that a particular pair of views cannot be used because of their relative positions. Specifically, if the projected focal point P_ij is internal to the silhouette S_j, then the pair (i, j), where view i is projected onto view j, cannot be used [1].
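The pairwise errors and the usability test combine into a single overall score by averaging over the usable ordered view pairs. A minimal sketch of that aggregation (the array names are illustrative, not from the paper):

```python
import numpy as np

def overall_error(pairwise_error, usable):
    """Average the pairwise silhouette-consistency errors over all usable
    ordered view pairs.

    pairwise_error : (N, N) array, error for projecting view i onto view j
    usable         : (N, N) boolean mask, False where the projected focal
                     point lies inside the silhouette and the pair is skipped
    """
    n_usable = int(usable.sum())  # number of usable ordered pairs
    if n_usable == 0:
        return 0.0
    return float(pairwise_error[usable].sum() / n_usable)
```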

If N_T denotes the number of usable pairs of views, then the overall error can be defined as

    ɛ = ( Σ_{i=1}^{N} Σ_{j=1}^{N} ɛ_{i,j} ) / N_T    (1)

with ɛ_{i,j} = 0 if the view pair (i, j) cannot be used. Here ɛ is the overall error, ɛ_{i,j} is the error for a pair of views (i, j), and N is the total number of views. Note that, in general, ɛ_{i,j} ≠ ɛ_{j,i}.

[Figure 3: With correct parameters, the silhouette S_j of image plane I_j is internal and tangent to the 2-D cone C_ij that is the projection of cone C_i onto the image plane I_j.]

To find the correct set of calibration parameters, Grattarola uses a simple search procedure. At each iteration, each parameter is changed by a given step size, and if a decrease in the error is found, a step is taken along that parameter direction. The step size is halved for the next iteration. This continues until the procedure converges on a set of parameters.

3 Proposed Method

The method that we propose is based on Grattarola's error measure. Instead of using the perpendicular distance between the projected cone C_ij and the silhouette S_j, we use the absolute differences of the angles between the projected cone and the tangent cone, as discussed in the previous section and shown in Figure 4. In this section, we show that it is possible to compute the gradients of this error measure with respect to each of the calibration parameters. A standard gradient-based minimization procedure can then be used to robustly and efficiently minimize the error function.

[Figure 4: Incorrect parameters result in the silhouette S_j on the image plane I_j no longer being internal and tangent to the projected 2-D cone C_ij. The error is defined as the sum of the angles, ɛ_1 + ɛ_2, between the tangent cone and the projected cone C_ij.]

3.1 Gradients

Changing the view parameters will cause a change in the error. In this section, we show that the partial derivatives of the error with respect to the calibration parameters can be derived and computed analytically. There is a well-defined chain of dependency between the error and the parameter values. The error is a function of the angles defined by the tangent cone and the projected cone

defined in the previous section. These cones, in turn, can be described by the positions of five points on the image plane, to be defined shortly. These five points are functions of the calibration parameters. Therefore, the derivative of the error with respect to a parameter can be decomposed into three simpler partial derivatives using the chain rule of calculus.

Figure 5 shows the five points in the image plane that completely describe the tangent cone and the projected cone. Point A is the projected view's focal point, corresponding to the point P_ij in Figures 2, 3 and 4. Points B and C are the two extremal points on the silhouette such that lines drawn from point A through points B and C are tangent to the silhouette. Points D and E are any two points on the lines that define the projected 2-D cone. All five points are found while computing the error. The tangent cone is defined by the points A, B and C, while the projected cone is defined by the points A, D and E. The error ɛ between the tangent cone and the projected cone is computed from the angles defined by these points using the equation

    ɛ = |θ_BA − θ_DA| + |θ_CA − θ_EA|.    (2)

The angles can be computed from the x and y positions of the points. For example, if point A has coordinates (x_A, y_A) and point B has coordinates (x_B, y_B), then the angle θ_BA is given by

    θ_BA = tan⁻¹( (y_B − y_A) / (x_B − x_A) ).    (3)

If these five points are given, then the derivative of the error with respect to a parameter can be easily computed by combining simpler partial derivatives. We give a simple, intuitive explanation of this procedure. We now consider a small change in one of the parameters of either the projected silhouette's view or the image plane's view. Since points A, D and E result from the projected view's silhouette, their positions are functions of these parameters.
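The error computation from the five points translates directly into code. A small sketch, using atan2 in place of the tan⁻¹ expression in (3) to keep the quadrant unambiguous (angle wrap-around handling is omitted for brevity):

```python
import math

def line_angle(p_from, p_to):
    """Angle of the line from p_from to p_to; an atan2 form of (3)."""
    return math.atan2(p_to[1] - p_from[1], p_to[0] - p_from[0])

def pair_error(A, B, C, D, E):
    """Error (2): sum of absolute angle differences between the tangent
    cone (rays A-B, A-C) and the projected cone (rays A-D, A-E)."""
    return (abs(line_angle(A, B) - line_angle(A, D)) +
            abs(line_angle(A, C) - line_angle(A, E)))
```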
A small change in one of the parameters will, therefore, cause a small change in the x and y positions of points A, D and E, as shown in Figure 6. We can neglect the effect of small parameter changes on the positions of the points B and C. This is possible because points B and C always lie on the contour of the silhouette, and for their positions to change, the position of point A must change by a large amount. The assumption of a small change holds when partial derivatives are considered. The change in the positions of the points A, D and E causes a change in the angles θ_BA, θ_DA, θ_CA and θ_EA. The changes in these angles cause a change in the error ɛ. The partial derivative of the error ɛ with respect to a parameter p_i can be written as

    ∂ɛ/∂p_i = Dɛ Θ P    (4)

where Dɛ, Θ and P are matrices of partial derivatives defined as follows:

    Dɛ = [ ∂ɛ/∂θ_BA   ∂ɛ/∂θ_DA   ∂ɛ/∂θ_CA   ∂ɛ/∂θ_EA ]    (5)

    Θ = [ ∂θ_BA/∂x_A   ∂θ_BA/∂y_A   ...   ∂θ_BA/∂y_E
          ∂θ_DA/∂x_A   ∂θ_DA/∂y_A   ...   ∂θ_DA/∂y_E
          ∂θ_CA/∂x_A   ∂θ_CA/∂y_A   ...   ∂θ_CA/∂y_E
          ∂θ_EA/∂x_A   ∂θ_EA/∂y_A   ...   ∂θ_EA/∂y_E ]    (6)

    P = [ ∂x_A/∂p_i   ∂y_A/∂p_i   ...   ∂y_E/∂p_i ]^T    (7)

[Figure 5: Five points on the image plane, A, B, C, D, E, describe the tangent cone and the projected cone. The error can then be calculated from the angles of the lines defined by these cones.]

[Figure 6: A small change in the parameters causes the positions of points A, D and E to change; this causes a change in the angles and, ultimately, in the error ɛ.]

The partial derivatives in the matrices Dɛ and Θ can be found by differentiating (2) and (3), respectively. To describe how the elements of the matrix P are found, we must first review some of the ideas in the algorithm. The points A, D and E are found by transforming and projecting 3-D points of the cone C_i onto the image plane I_j. Those 3-D points, however, are assumed to remain unchanged because of our assumption of small changes in the parameters. Therefore, the changes in the positions of the points A, D and E are only a function of the calibration parameters, and it is now straightforward to derive the partial derivatives. [1] contains more details on finding the 3-D points corresponding to A, D and E.

The development of the partial derivatives and the gradient has, so far, only dealt with a pair of views. Since the overall error is a linear combination of the pairwise errors and the gradient is a linear operator, the overall gradient can be calculated in much the same way as the overall error. We first define ∇ɛ_{i,j} as the gradient for a particular pair of views i and j, where ∇ɛ_{i,j} = 0 if that view pair is not usable. The overall gradient ∇ɛ is given by

    ∇ɛ = ( Σ_{i=1}^{N} Σ_{j=1}^{N} ∇ɛ_{i,j} ) / N_T    (8)

where N is the number of views, and N_T is the total number of usable pairs.

3.2 Minimization

The previous section described a method of calculating the value and the gradient of the error function at a point p, which represents a particular set of parameter values. The set of parameters that minimizes the error function is conjectured to be the correct set of values for the calibration parameters. We wish, therefore, to find the point p that minimizes the error function ɛ.
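The factorization in (4)-(7) is a plain matrix product. A sketch with illustrative shapes (the four angles; the six coordinates of A, D and E; n calibration parameters):

```python
import numpy as np

def pair_gradient(dE_dtheta, dtheta_dxy, dxy_dp):
    """Chain rule (4): dE/dp = (dE/dtheta) (dtheta/dxy) (dxy/dp).

    dE_dtheta  : (1, 4) derivatives of the error w.r.t. the four angles (5)
    dtheta_dxy : (4, 6) derivatives of the angles w.r.t. the x, y
                 coordinates of points A, D, E (6)
    dxy_dp     : (6, n) derivatives of those coordinates w.r.t. the n
                 calibration parameters (7)
    """
    return dE_dtheta @ dtheta_dxy @ dxy_dp  # shape (1, n)
```

With an error function `error(p)` and the summed gradient `grad(p)` assembled from all usable view pairs, `scipy.optimize.minimize(error, p0, jac=grad, method="BFGS")` is one readily available quasi-Newton routine; the paper itself refers to the Numerical Recipes implementation [4].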
One standard method to perform such a minimization using function evaluations and gradient information is the Broyden-Fletcher-Goldfarb-Shanno (BFGS) method. A complete description and implementation can be found in [4].

4 View Parameterization

We use two types of camera arrangements in the experiments. The first is an unconstrained arrangement, where each of the cameras can be in any position and pose around an object. For the second arrangement, the object is placed on a turntable, and views of the object are obtained at various rotations of the turntable with a fixed camera. The extrinsic camera parameters, over which we calibrate, can be described as the relative rotation and translation of the view's coordinate system from some world coordinate system. This can be thought of as the transformation of a point in world coordinates, x_w, into view coordinates, x_v, given by the equation

    x_v = R x_w + T    (9)

where R is the rotation matrix and T is the translation vector.

4.1 Unconstrained Views

The rotation can be described by a series of rotations about the principal axes in a given order. Therefore, three parameters suffice to describe the rotation matrix. Similarly, translation can be specified by the translation in

the directions along the three axes. In total, there are 6 parameters that describe the position and pose of a given view. For the unconstrained camera arrangement in our experiments, one of the views is used as a reference for all the other views, so that all other views are calibrated relative to the reference view. In total, there are 6(N−1) degrees of freedom if N views are available.

4.2 Constrained Views

For our constrained camera arrangement, the object is placed on a turntable, which rotates to discrete positions to generate new views. If the rotation axis of the turntable is assumed to be the y-axis, then we can parameterize the camera arrangement such that all views share 5 common parameters plus a turntable rotation angle. We consider the rotation matrix R to be described by a rotation about the x-axis, θ_x, a rotation about the z-axis, θ_z, and a rotation about the y-axis, θ_y, in this order. All views share the common parameters θ_x and θ_z, but each view has a unique θ_y parameter. The 3 translation parameters are common to all views. We assume that the turntable is perfectly calibrated, which means that the θ_y parameters are known. In addition, it is not possible to determine the y-translation, since the axis of rotation of the turntable is in the y-direction. It is also not possible to determine the z-translation, since we assume no knowledge of the scale of the object. Values for the y- and z-translation parameters are therefore fixed arbitrarily. This gives a view parameterization with only 3 degrees of freedom. As we see in the next section, the reduction in the dimensionality of the parameters leads to a faster and more accurate calibration.

5 Experimental Results

5.1 Reconstruction Error

All experiments use rendered images of the Utah teapot against a solid background.

[Figure 7: A reconstruction of the teapot obtained using the shape-from-silhouette algorithm with correct calibration parameters. 9 views are used.]
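The constrained parameterization can be sketched as follows. Composing the rotations in the stated order (x, then z, then y) on a column vector gives R = R_y R_z R_x, which together with equation (9) maps world points into view coordinates; the composition order is our reading of the text, not an explicit formula from the paper:

```python
import numpy as np

def rotation_xzy(theta_x, theta_z, theta_y):
    """R = R_y(theta_y) @ R_z(theta_z) @ R_x(theta_x): rotations applied
    about the x-, then z-, then y-axis, in that order."""
    cx, sx = np.cos(theta_x), np.sin(theta_x)
    cz, sz = np.cos(theta_z), np.sin(theta_z)
    cy, sy = np.cos(theta_y), np.sin(theta_y)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    return Ry @ Rz @ Rx

def world_to_view(x_w, R, T):
    """Equation (9): x_v = R x_w + T."""
    return R @ x_w + T
```

In the constrained arrangement, θ_x, θ_z and the x-translation would be the three free parameters, with θ_y supplied by the turntable and the y- and z-translations fixed.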
Figure 7 shows the voxel reconstruction of this teapot from 9 views using the shape-from-silhouette algorithm when the correct calibration parameters are used. If there are errors in the calibration parameters, then the reconstructed object will not match the original. This is illustrated in Figures 8 and 9, which show the resulting reconstructions at various levels of error in the rotation and translation parameters. Figure 8 shows the reconstructed teapot for 4 different values of average error in the rotation parameters. For an error of 1°, there is no apparent reconstruction error, whereas at 20°, all discernible features of the teapot, such as the spout and the handle, are lost. Figure 9 shows the reconstructed teapot for 4 different values of average error in the translation parameters. Based on the focal distance used and the approximate z-translation, the errors in the x- and y-translation values can be interpreted as pixel distances in the image plane; the translation units and the pixel values are related by a fixed conversion factor. For the remainder of this section, translation error is stated in pixels, based on this factor. For a translation error of 1 pixel, there is no apparent degradation in the reconstruction, while for a translation error of 50 pixels, no reconstruction of the object is possible.
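Under a pinhole model, the conversion between translation units and image-plane pixels follows from similar triangles. A minimal sketch; the focal length (in pixels) and object depth used here are illustrative assumptions, not the paper's actual values:

```python
def translation_error_in_pixels(t_err, focal_px, depth):
    """Approximate image-plane displacement, in pixels, caused by an x- or
    y-translation error t_err (world units) for an object at the given
    depth, under a pinhole camera with focal length focal_px in pixels."""
    return focal_px * t_err / depth
```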

[Table 1: Average number of function evaluations for calibration with and without gradients, for the constrained and unconstrained arrangements. 9 views were used for both arrangements.]

[Figure 8: Teapots reconstructed using shape-from-silhouette for 9 views, at different levels of average rotation error. The error values, in degrees, clockwise from the top-left image, are: 1, 5, 10, 20.]

[Figure 9: Teapots reconstructed using shape-from-silhouette for 9 views, at different levels of translation error. The error values, in pixels, clockwise from the top-left image, are: 1, 10, 20, 50.]

5.2 Gradient-Based Approach

There are two main advantages to using a gradient-based approach over not using any gradient information. The first is computational, since the use of gradient information leads to faster convergence. The second is the quality of the results, since the gradient-based method tends to give superior results compared to the conventional method, especially in a high-dimensional parameter space. The non-gradient-based method that we have implemented uses Powell's method [4] to perform the minimization. The computational advantages are summarized in Table 1, which shows the number of function evaluations required for both the gradient-based and non-gradient-based methods. The number of function evaluations is directly related to the execution time of the algorithm, with each function evaluation taking approximately 1 second on a Sun Ultra60 for 9 views. In the constrained camera arrangement, where there are only 3 calibration parameters, the gradient-based approach runs approximately 2.75 times faster than the non-gradient-based approach. The computational advantage is more apparent in the unconstrained arrangement, where there are 48 calibration parameters and the algorithm runs 30 times faster.
In the constrained camera arrangement, where there are only 3 degrees of freedom, both the gradient and non-gradient approaches work extremely well, even with large initial amounts of error. The experiments were performed as follows. The calibration parameters were randomly offset from their correct

values, and this was used as the initial parameter estimate given to the algorithm. 24 such initial sets of parameters were used. The magnitudes of the initial rotation error ranged between 0 and 18 degrees, with an average error of 7 degrees. The magnitudes of the initial translation error ranged from 0 to 470 pixels, with an average error of 174 pixels. The reconstruction results from the previous section indicate that errors of this size are very large; recall that for a translation error of just 50 pixels, no reconstruction was possible. Table 2 summarizes the results of the experiments starting from the 24 different initial points.

[Table 2: Average resulting error magnitudes (constrained camera arrangement) for the gradient and non-gradient methods; the final translation errors are 0.02 pixels and 0.03 pixels, respectively. Average initial error magnitudes: rotation 7 degrees, translation 174 pixels.]

The unconstrained camera arrangement has 48 calibration parameters, making this a more difficult calibration problem. Table 3 shows the results for these experiments. In this set of experiments, 10 different sets of initial values were chosen, with varying amounts of rotation and translation error. The magnitudes of the initial rotation error ranged between 0 and 3 degrees, with an average error magnitude of 1.5 degrees. The magnitudes of the initial translation error ranged between 0 and 50 pixels, with an average error magnitude of 26 pixels. From this data, it can be seen that the gradient-based approach gives much better results than the non-gradient-based approach.

[Table 3: Average resulting error magnitudes (unconstrained camera arrangement) for the gradient and non-gradient methods; the final translation errors are 6.5 pixels and 19.2 pixels, respectively. Average initial error magnitudes: rotation 1.6 degrees, translation 26 pixels.]

6 Conclusions

In this paper, we have proposed a new method for calibrating multiple camera views of an object using only silhouette information. This approach uses a gradient-based minimization scheme to find the correct camera calibration parameters starting from an initial estimate.
This gradient-based method appears to be especially useful in situations where there are many views and many calibration parameters. In situations where the views have certain mutual restrictions, these constraints can be exploited to reduce the number of calibration parameters, thereby improving performance.

7 Acknowledgements

The authors would like to thank Peter Eisert for valuable discussions in the early stages of this work.

References

[1] A.A. Grattarola, "Volumetric reconstruction from object silhouettes: A regularization procedure," Signal Processing, Vol. 27.

[2] W. Niem, "Automatische Rekonstruktion starrer dreidimensionaler Objekte aus Kamerabildern," PhD Thesis, University of Hannover.

[3] W.N. Martin and J.K. Aggarwal, "Volumetric description of objects from multiple views," IEEE Trans. Pattern Anal. Mach. Intell., Vol. PAMI-5, No. 2, March.

[4] W.H. Press et al., Numerical Recipes in C: The Art of Scientific Computing, Cambridge University Press, Cambridge, 1992.


More information

Rate-distortion Optimized Streaming of Compressed Light Fields with Multiple Representations

Rate-distortion Optimized Streaming of Compressed Light Fields with Multiple Representations Rate-distortion Optimized Streaming of Compressed Light Fields with Multiple Representations Prashant Ramanathan and Bernd Girod Department of Electrical Engineering Stanford University Stanford CA 945

More information

Experiments with Edge Detection using One-dimensional Surface Fitting

Experiments with Edge Detection using One-dimensional Surface Fitting Experiments with Edge Detection using One-dimensional Surface Fitting Gabor Terei, Jorge Luis Nunes e Silva Brito The Ohio State University, Department of Geodetic Science and Surveying 1958 Neil Avenue,

More information

Model Based Perspective Inversion

Model Based Perspective Inversion Model Based Perspective Inversion A. D. Worrall, K. D. Baker & G. D. Sullivan Intelligent Systems Group, Department of Computer Science, University of Reading, RG6 2AX, UK. Anthony.Worrall@reading.ac.uk

More information

Hartley - Zisserman reading club. Part I: Hartley and Zisserman Appendix 6: Part II: Zhengyou Zhang: Presented by Daniel Fontijne

Hartley - Zisserman reading club. Part I: Hartley and Zisserman Appendix 6: Part II: Zhengyou Zhang: Presented by Daniel Fontijne Hartley - Zisserman reading club Part I: Hartley and Zisserman Appendix 6: Iterative estimation methods Part II: Zhengyou Zhang: A Flexible New Technique for Camera Calibration Presented by Daniel Fontijne

More information

Non-Differentiable Image Manifolds

Non-Differentiable Image Manifolds The Multiscale Structure of Non-Differentiable Image Manifolds Michael Wakin Electrical l Engineering i Colorado School of Mines Joint work with Richard Baraniuk, Hyeokho Choi, David Donoho Models for

More information

Structure from Motion. Prof. Marco Marcon

Structure from Motion. Prof. Marco Marcon Structure from Motion Prof. Marco Marcon Summing-up 2 Stereo is the most powerful clue for determining the structure of a scene Another important clue is the relative motion between the scene and (mono)

More information

Stereo CSE 576. Ali Farhadi. Several slides from Larry Zitnick and Steve Seitz

Stereo CSE 576. Ali Farhadi. Several slides from Larry Zitnick and Steve Seitz Stereo CSE 576 Ali Farhadi Several slides from Larry Zitnick and Steve Seitz Why do we perceive depth? What do humans use as depth cues? Motion Convergence When watching an object close to us, our eyes

More information

Robust PDF Table Locator

Robust PDF Table Locator Robust PDF Table Locator December 17, 2016 1 Introduction Data scientists rely on an abundance of tabular data stored in easy-to-machine-read formats like.csv files. Unfortunately, most government records

More information

COSC579: Scene Geometry. Jeremy Bolton, PhD Assistant Teaching Professor

COSC579: Scene Geometry. Jeremy Bolton, PhD Assistant Teaching Professor COSC579: Scene Geometry Jeremy Bolton, PhD Assistant Teaching Professor Overview Linear Algebra Review Homogeneous vs non-homogeneous representations Projections and Transformations Scene Geometry The

More information

Answers to practice questions for Midterm 1

Answers to practice questions for Midterm 1 Answers to practice questions for Midterm Paul Hacking /5/9 (a The RREF (reduced row echelon form of the augmented matrix is So the system of linear equations has exactly one solution given by x =, y =,

More information

Abstract. Keywords. Computer Vision, Geometric and Morphologic Analysis, Stereo Vision, 3D and Range Data Analysis.

Abstract. Keywords. Computer Vision, Geometric and Morphologic Analysis, Stereo Vision, 3D and Range Data Analysis. Morphological Corner Detection. Application to Camera Calibration L. Alvarez, C. Cuenca and L. Mazorra Departamento de Informática y Sistemas Universidad de Las Palmas de Gran Canaria. Campus de Tafira,

More information

CSE 252B: Computer Vision II

CSE 252B: Computer Vision II CSE 252B: Computer Vision II Lecturer: Serge Belongie Scribe: Sameer Agarwal LECTURE 1 Image Formation 1.1. The geometry of image formation We begin by considering the process of image formation when a

More information

Stereo Vision. MAN-522 Computer Vision

Stereo Vision. MAN-522 Computer Vision Stereo Vision MAN-522 Computer Vision What is the goal of stereo vision? The recovery of the 3D structure of a scene using two or more images of the 3D scene, each acquired from a different viewpoint in

More information

Agenda. Rotations. Camera models. Camera calibration. Homographies

Agenda. Rotations. Camera models. Camera calibration. Homographies Agenda Rotations Camera models Camera calibration Homographies D Rotations R Y = Z r r r r r r r r r Y Z Think of as change of basis where ri = r(i,:) are orthonormal basis vectors r rotated coordinate

More information

Coordinate Transformations for VERITAS in OAWG - Stage 4

Coordinate Transformations for VERITAS in OAWG - Stage 4 Coordinate Transformations for VERITAS in OAWG - Stage 4 (11 June 2006) Tülün Ergin 1 Contents 1 COORDINATE TRANSFORMATIONS 1 1.1 Rotation Matrices......................... 1 1.2 Rotations of the Coordinates...................

More information

Critical Motion Sequences for the Self-Calibration of Cameras and Stereo Systems with Variable Focal Length

Critical Motion Sequences for the Self-Calibration of Cameras and Stereo Systems with Variable Focal Length Critical Motion Sequences for the Self-Calibration of Cameras and Stereo Systems with Variable Focal Length Peter F Sturm Computational Vision Group, Department of Computer Science The University of Reading,

More information

5LSH0 Advanced Topics Video & Analysis

5LSH0 Advanced Topics Video & Analysis 1 Multiview 3D video / Outline 2 Advanced Topics Multimedia Video (5LSH0), Module 02 3D Geometry, 3D Multiview Video Coding & Rendering Peter H.N. de With, Sveta Zinger & Y. Morvan ( p.h.n.de.with@tue.nl

More information

Massachusetts Institute of Technology Department of Computer Science and Electrical Engineering 6.801/6.866 Machine Vision QUIZ II

Massachusetts Institute of Technology Department of Computer Science and Electrical Engineering 6.801/6.866 Machine Vision QUIZ II Massachusetts Institute of Technology Department of Computer Science and Electrical Engineering 6.801/6.866 Machine Vision QUIZ II Handed out: 001 Nov. 30th Due on: 001 Dec. 10th Problem 1: (a (b Interior

More information

Directional Derivatives. Directional Derivatives. Directional Derivatives. Directional Derivatives. Directional Derivatives. Directional Derivatives

Directional Derivatives. Directional Derivatives. Directional Derivatives. Directional Derivatives. Directional Derivatives. Directional Derivatives Recall that if z = f(x, y), then the partial derivatives f x and f y are defined as and represent the rates of change of z in the x- and y-directions, that is, in the directions of the unit vectors i and

More information

Non-linear dimension reduction

Non-linear dimension reduction Sta306b May 23, 2011 Dimension Reduction: 1 Non-linear dimension reduction ISOMAP: Tenenbaum, de Silva & Langford (2000) Local linear embedding: Roweis & Saul (2000) Local MDS: Chen (2006) all three methods

More information

CHAPTER 3. Single-view Geometry. 1. Consequences of Projection

CHAPTER 3. Single-view Geometry. 1. Consequences of Projection CHAPTER 3 Single-view Geometry When we open an eye or take a photograph, we see only a flattened, two-dimensional projection of the physical underlying scene. The consequences are numerous and startling.

More information

Agenda. Rotations. Camera calibration. Homography. Ransac

Agenda. Rotations. Camera calibration. Homography. Ransac Agenda Rotations Camera calibration Homography Ransac Geometric Transformations y x Transformation Matrix # DoF Preserves Icon translation rigid (Euclidean) similarity affine projective h I t h R t h sr

More information

CS231A. Review for Problem Set 1. Saumitro Dasgupta

CS231A. Review for Problem Set 1. Saumitro Dasgupta CS231A Review for Problem Set 1 Saumitro Dasgupta On today's menu Camera Model Rotation Matrices Homogeneous Coordinates Vanishing Points Matrix Calculus Constrained Optimization Camera Calibration Demo

More information

Camera calibration. Robotic vision. Ville Kyrki

Camera calibration. Robotic vision. Ville Kyrki Camera calibration Robotic vision 19.1.2017 Where are we? Images, imaging Image enhancement Feature extraction and matching Image-based tracking Camera models and calibration Pose estimation Motion analysis

More information

Lecture 16: Computer Vision

Lecture 16: Computer Vision CS4442/9542b: Artificial Intelligence II Prof. Olga Veksler Lecture 16: Computer Vision Motion Slides are from Steve Seitz (UW), David Jacobs (UMD) Outline Motion Estimation Motion Field Optical Flow Field

More information

A GENTLE INTRODUCTION TO THE BASIC CONCEPTS OF SHAPE SPACE AND SHAPE STATISTICS

A GENTLE INTRODUCTION TO THE BASIC CONCEPTS OF SHAPE SPACE AND SHAPE STATISTICS A GENTLE INTRODUCTION TO THE BASIC CONCEPTS OF SHAPE SPACE AND SHAPE STATISTICS HEMANT D. TAGARE. Introduction. Shape is a prominent visual feature in many images. Unfortunately, the mathematical theory

More information

Critical Motion Sequences for the Self-Calibration of Cameras and Stereo Systems with Variable Focal Length

Critical Motion Sequences for the Self-Calibration of Cameras and Stereo Systems with Variable Focal Length Critical Motion Sequences for the Self-Calibration of Cameras and Stereo Systems with Variable Focal Length Peter Sturm To cite this version: Peter Sturm. Critical Motion Sequences for the Self-Calibration

More information

3D Modeling using multiple images Exam January 2008

3D Modeling using multiple images Exam January 2008 3D Modeling using multiple images Exam January 2008 All documents are allowed. Answers should be justified. The different sections below are independant. 1 3D Reconstruction A Robust Approche Consider

More information

A1:Orthogonal Coordinate Systems

A1:Orthogonal Coordinate Systems A1:Orthogonal Coordinate Systems A1.1 General Change of Variables Suppose that we express x and y as a function of two other variables u and by the equations We say that these equations are defining a

More information

A Summary of Projective Geometry

A Summary of Projective Geometry A Summary of Projective Geometry Copyright 22 Acuity Technologies Inc. In the last years a unified approach to creating D models from multiple images has been developed by Beardsley[],Hartley[4,5,9],Torr[,6]

More information

Camera Calibration for Video See-Through Head-Mounted Display. Abstract. 1.0 Introduction. Mike Bajura July 7, 1993

Camera Calibration for Video See-Through Head-Mounted Display. Abstract. 1.0 Introduction. Mike Bajura July 7, 1993 Camera Calibration for Video See-Through Head-Mounted Display Mike Bajura July 7, 1993 Abstract This report describes a method for computing the parameters needed to model a television camera for video

More information

Euclidean Reconstruction Independent on Camera Intrinsic Parameters

Euclidean Reconstruction Independent on Camera Intrinsic Parameters Euclidean Reconstruction Independent on Camera Intrinsic Parameters Ezio MALIS I.N.R.I.A. Sophia-Antipolis, FRANCE Adrien BARTOLI INRIA Rhone-Alpes, FRANCE Abstract bundle adjustment techniques for Euclidean

More information

Lecture 2 September 3

Lecture 2 September 3 EE 381V: Large Scale Optimization Fall 2012 Lecture 2 September 3 Lecturer: Caramanis & Sanghavi Scribe: Hongbo Si, Qiaoyang Ye 2.1 Overview of the last Lecture The focus of the last lecture was to give

More information

A Factorization Method for Structure from Planar Motion

A Factorization Method for Structure from Planar Motion A Factorization Method for Structure from Planar Motion Jian Li and Rama Chellappa Center for Automation Research (CfAR) and Department of Electrical and Computer Engineering University of Maryland, College

More information

STRUCTURE AND MOTION ESTIMATION FROM DYNAMIC SILHOUETTES UNDER PERSPECTIVE PROJECTION *

STRUCTURE AND MOTION ESTIMATION FROM DYNAMIC SILHOUETTES UNDER PERSPECTIVE PROJECTION * STRUCTURE AND MOTION ESTIMATION FROM DYNAMIC SILHOUETTES UNDER PERSPECTIVE PROJECTION * Tanuja Joshi Narendra Ahuja Jean Ponce Beckman Institute, University of Illinois, Urbana, Illinois 61801 Abstract:

More information

Homogeneous Coordinates. Lecture18: Camera Models. Representation of Line and Point in 2D. Cross Product. Overall scaling is NOT important.

Homogeneous Coordinates. Lecture18: Camera Models. Representation of Line and Point in 2D. Cross Product. Overall scaling is NOT important. Homogeneous Coordinates Overall scaling is NOT important. CSED44:Introduction to Computer Vision (207F) Lecture8: Camera Models Bohyung Han CSE, POSTECH bhhan@postech.ac.kr (",, ) ()", ), )) ) 0 It is

More information

Partial Calibration and Mirror Shape Recovery for Non-Central Catadioptric Systems

Partial Calibration and Mirror Shape Recovery for Non-Central Catadioptric Systems Partial Calibration and Mirror Shape Recovery for Non-Central Catadioptric Systems Abstract In this paper we present a method for mirror shape recovery and partial calibration for non-central catadioptric

More information

Depth-Layer-Based Patient Motion Compensation for the Overlay of 3D Volumes onto X-Ray Sequences

Depth-Layer-Based Patient Motion Compensation for the Overlay of 3D Volumes onto X-Ray Sequences Depth-Layer-Based Patient Motion Compensation for the Overlay of 3D Volumes onto X-Ray Sequences Jian Wang 1,2, Anja Borsdorf 2, Joachim Hornegger 1,3 1 Pattern Recognition Lab, Friedrich-Alexander-Universität

More information

Camera model and multiple view geometry

Camera model and multiple view geometry Chapter Camera model and multiple view geometry Before discussing how D information can be obtained from images it is important to know how images are formed First the camera model is introduced and then

More information

Planes Intersecting Cones: Static Hypertext Version

Planes Intersecting Cones: Static Hypertext Version Page 1 of 12 Planes Intersecting Cones: Static Hypertext Version On this page, we develop some of the details of the plane-slicing-cone picture discussed in the introduction. The relationship between the

More information

Jacobian of Point Coordinates w.r.t. Parameters of General Calibrated Projective Camera

Jacobian of Point Coordinates w.r.t. Parameters of General Calibrated Projective Camera Jacobian of Point Coordinates w.r.t. Parameters of General Calibrated Projective Camera Karel Lebeda, Simon Hadfield, Richard Bowden Introduction This is a supplementary technical report for ACCV04 paper:

More information

Volumetric Scene Reconstruction from Multiple Views

Volumetric Scene Reconstruction from Multiple Views Volumetric Scene Reconstruction from Multiple Views Chuck Dyer University of Wisconsin dyer@cs cs.wisc.edu www.cs cs.wisc.edu/~dyer Image-Based Scene Reconstruction Goal Automatic construction of photo-realistic

More information

Using Subspace Constraints to Improve Feature Tracking Presented by Bryan Poling. Based on work by Bryan Poling, Gilad Lerman, and Arthur Szlam

Using Subspace Constraints to Improve Feature Tracking Presented by Bryan Poling. Based on work by Bryan Poling, Gilad Lerman, and Arthur Szlam Presented by Based on work by, Gilad Lerman, and Arthur Szlam What is Tracking? Broad Definition Tracking, or Object tracking, is a general term for following some thing through multiple frames of a video

More information

A 3D Pattern for Post Estimation for Object Capture

A 3D Pattern for Post Estimation for Object Capture A 3D Pattern for Post Estimation for Object Capture Lei Wang, Cindy Grimm, and Robert Pless Department of Computer Science and Engineering Washington University One Brookings Drive, St. Louis, MO, 63130

More information

AN ALGORITHM FOR BLIND RESTORATION OF BLURRED AND NOISY IMAGES

AN ALGORITHM FOR BLIND RESTORATION OF BLURRED AND NOISY IMAGES AN ALGORITHM FOR BLIND RESTORATION OF BLURRED AND NOISY IMAGES Nader Moayeri and Konstantinos Konstantinides Hewlett-Packard Laboratories 1501 Page Mill Road Palo Alto, CA 94304-1120 moayeri,konstant@hpl.hp.com

More information

Model Fitting. CS6240 Multimedia Analysis. Leow Wee Kheng. Department of Computer Science School of Computing National University of Singapore

Model Fitting. CS6240 Multimedia Analysis. Leow Wee Kheng. Department of Computer Science School of Computing National University of Singapore Model Fitting CS6240 Multimedia Analysis Leow Wee Kheng Department of Computer Science School of Computing National University of Singapore (CS6240) Model Fitting 1 / 34 Introduction Introduction Model

More information

A Robust Two Feature Points Based Depth Estimation Method 1)

A Robust Two Feature Points Based Depth Estimation Method 1) Vol.31, No.5 ACTA AUTOMATICA SINICA September, 2005 A Robust Two Feature Points Based Depth Estimation Method 1) ZHONG Zhi-Guang YI Jian-Qiang ZHAO Dong-Bin (Laboratory of Complex Systems and Intelligence

More information

EXPERIMENTAL RESULTS ON THE DETERMINATION OF THE TRIFOCAL TENSOR USING NEARLY COPLANAR POINT CORRESPONDENCES

EXPERIMENTAL RESULTS ON THE DETERMINATION OF THE TRIFOCAL TENSOR USING NEARLY COPLANAR POINT CORRESPONDENCES EXPERIMENTAL RESULTS ON THE DETERMINATION OF THE TRIFOCAL TENSOR USING NEARLY COPLANAR POINT CORRESPONDENCES Camillo RESSL Institute of Photogrammetry and Remote Sensing University of Technology, Vienna,

More information

Automatic Reconstruction of 3D Objects Using a Mobile Monoscopic Camera

Automatic Reconstruction of 3D Objects Using a Mobile Monoscopic Camera Automatic Reconstruction of 3D Objects Using a Mobile Monoscopic Camera Wolfgang Niem, Jochen Wingbermühle Universität Hannover Institut für Theoretische Nachrichtentechnik und Informationsverarbeitung

More information

3D Transformations. CS 4620 Lecture 10. Cornell CS4620 Fall 2014 Lecture Steve Marschner (with previous instructors James/Bala)

3D Transformations. CS 4620 Lecture 10. Cornell CS4620 Fall 2014 Lecture Steve Marschner (with previous instructors James/Bala) 3D Transformations CS 4620 Lecture 10 1 Translation 2 Scaling 3 Rotation about z axis 4 Rotation about x axis 5 Rotation about y axis 6 Properties of Matrices Translations: linear part is the identity

More information

CoE4TN4 Image Processing

CoE4TN4 Image Processing CoE4TN4 Image Processing Chapter 11 Image Representation & Description Image Representation & Description After an image is segmented into regions, the regions are represented and described in a form suitable

More information

291 Programming Assignment #3

291 Programming Assignment #3 000 001 002 003 004 005 006 007 008 009 010 011 012 013 014 015 016 017 018 019 020 021 022 023 024 025 026 027 028 029 030 031 032 033 034 035 036 037 038 039 040 041 042 043 044 045 046 047 048 049 050

More information

Face Recognition At-a-Distance Based on Sparse-Stereo Reconstruction

Face Recognition At-a-Distance Based on Sparse-Stereo Reconstruction Face Recognition At-a-Distance Based on Sparse-Stereo Reconstruction Ham Rara, Shireen Elhabian, Asem Ali University of Louisville Louisville, KY {hmrara01,syelha01,amali003}@louisville.edu Mike Miller,

More information

Lecture 16: Computer Vision

Lecture 16: Computer Vision CS442/542b: Artificial ntelligence Prof. Olga Veksler Lecture 16: Computer Vision Motion Slides are from Steve Seitz (UW), David Jacobs (UMD) Outline Motion Estimation Motion Field Optical Flow Field Methods

More information

A Study of Medical Image Analysis System

A Study of Medical Image Analysis System Indian Journal of Science and Technology, Vol 8(25), DOI: 10.17485/ijst/2015/v8i25/80492, October 2015 ISSN (Print) : 0974-6846 ISSN (Online) : 0974-5645 A Study of Medical Image Analysis System Kim Tae-Eun

More information

L1 - Introduction. Contents. Introduction of CAD/CAM system Components of CAD/CAM systems Basic concepts of graphics programming

L1 - Introduction. Contents. Introduction of CAD/CAM system Components of CAD/CAM systems Basic concepts of graphics programming L1 - Introduction Contents Introduction of CAD/CAM system Components of CAD/CAM systems Basic concepts of graphics programming 1 Definitions Computer-Aided Design (CAD) The technology concerned with the

More information

An Evaluation of the Performance of RANSAC Algorithms for Stereo Camera Calibration

An Evaluation of the Performance of RANSAC Algorithms for Stereo Camera Calibration Tina Memo No. 2000-009 Presented at BMVC 2000 An Evaluation of the Performance of RANSAC Algorithms for Stereo Camera Calibration A. J. Lacey, N. Pinitkarn and N. A. Thacker Last updated 21 / 02 / 2002

More information

arxiv: v1 [cs.cv] 28 Sep 2018

arxiv: v1 [cs.cv] 28 Sep 2018 Camera Pose Estimation from Sequence of Calibrated Images arxiv:1809.11066v1 [cs.cv] 28 Sep 2018 Jacek Komorowski 1 and Przemyslaw Rokita 2 1 Maria Curie-Sklodowska University, Institute of Computer Science,

More information

Computer Vision cmput 428/615

Computer Vision cmput 428/615 Computer Vision cmput 428/615 Basic 2D and 3D geometry and Camera models Martin Jagersand The equation of projection Intuitively: How do we develop a consistent mathematical framework for projection calculations?

More information

ENGI Parametric & Polar Curves Page 2-01

ENGI Parametric & Polar Curves Page 2-01 ENGI 3425 2. Parametric & Polar Curves Page 2-01 2. Parametric and Polar Curves Contents: 2.1 Parametric Vector Functions 2.2 Parametric Curve Sketching 2.3 Polar Coordinates r f 2.4 Polar Curve Sketching

More information

Lecture 8 Object Descriptors

Lecture 8 Object Descriptors Lecture 8 Object Descriptors Azadeh Fakhrzadeh Centre for Image Analysis Swedish University of Agricultural Sciences Uppsala University 2 Reading instructions Chapter 11.1 11.4 in G-W Azadeh Fakhrzadeh

More information

Chapters 1 7: Overview

Chapters 1 7: Overview Chapters 1 7: Overview Chapter 1: Introduction Chapters 2 4: Data acquisition Chapters 5 7: Data manipulation Chapter 5: Vertical imagery Chapter 6: Image coordinate measurements and refinements Chapter

More information

Image Enhancement Techniques for Fingerprint Identification

Image Enhancement Techniques for Fingerprint Identification March 2013 1 Image Enhancement Techniques for Fingerprint Identification Pankaj Deshmukh, Siraj Pathan, Riyaz Pathan Abstract The aim of this paper is to propose a new method in fingerprint enhancement

More information

Partial Calibration and Mirror Shape Recovery for Non-Central Catadioptric Systems

Partial Calibration and Mirror Shape Recovery for Non-Central Catadioptric Systems Partial Calibration and Mirror Shape Recovery for Non-Central Catadioptric Systems Nuno Gonçalves and Helder Araújo Institute of Systems and Robotics - Coimbra University of Coimbra Polo II - Pinhal de

More information

CALCULATING TRANSFORMATIONS OF KINEMATIC CHAINS USING HOMOGENEOUS COORDINATES

CALCULATING TRANSFORMATIONS OF KINEMATIC CHAINS USING HOMOGENEOUS COORDINATES CALCULATING TRANSFORMATIONS OF KINEMATIC CHAINS USING HOMOGENEOUS COORDINATES YINGYING REN Abstract. In this paper, the applications of homogeneous coordinates are discussed to obtain an efficient model

More information

Recovery of Intrinsic and Extrinsic Camera Parameters Using Perspective Views of Rectangles

Recovery of Intrinsic and Extrinsic Camera Parameters Using Perspective Views of Rectangles 177 Recovery of Intrinsic and Extrinsic Camera Parameters Using Perspective Views of Rectangles T. N. Tan, G. D. Sullivan and K. D. Baker Department of Computer Science The University of Reading, Berkshire

More information

Projector Calibration for Pattern Projection Systems

Projector Calibration for Pattern Projection Systems Projector Calibration for Pattern Projection Systems I. Din *1, H. Anwar 2, I. Syed 1, H. Zafar 3, L. Hasan 3 1 Department of Electronics Engineering, Incheon National University, Incheon, South Korea.

More information

Calibrated Image Acquisition for Multi-view 3D Reconstruction

Calibrated Image Acquisition for Multi-view 3D Reconstruction Calibrated Image Acquisition for Multi-view 3D Reconstruction Sriram Kashyap M S Guide: Prof. Sharat Chandran Indian Institute of Technology, Bombay April 2009 Sriram Kashyap 3D Reconstruction 1/ 42 Motivation

More information

Edge and local feature detection - 2. Importance of edge detection in computer vision

Edge and local feature detection - 2. Importance of edge detection in computer vision Edge and local feature detection Gradient based edge detection Edge detection by function fitting Second derivative edge detectors Edge linking and the construction of the chain graph Edge and local feature

More information

Multi-View AAM Fitting and Camera Calibration

Multi-View AAM Fitting and Camera Calibration To appear in the IEEE International Conference on Computer Vision Multi-View AAM Fitting and Camera Calibration Seth Koterba, Simon Baker, Iain Matthews, Changbo Hu, Jing Xiao, Jeffrey Cohn, and Takeo

More information