A robust and convergent iterative approach for determining the dominant plane from two views without correspondence and calibration


Proc. Computer Vision and Pattern Recognition (CVPR '97), June 17-19, 1997, San Juan, Puerto Rico

A robust and convergent iterative approach for determining the dominant plane from two views without correspondence and calibration

Pär Fornland
Computational Vision and Active Perception
Num. Analysis and Comp. Science
Royal Institute of Technology (KTH)
Stockholm, Sweden

Christoph Schnörr
AB Kogs, FB Informatik
Vogt-Koelln-Str. 30
Univ. Hamburg
Hamburg, Germany
schnoerr@informatik.uni-hamburg.de

Abstract

A robust, iterative approach is introduced for finding the dominant plane in a scene using binocular vision. Neither camera calibration nor stereo correspondence is required. Recently, Cohen formalized a framework guaranteeing (local) convergence of iterative two-step methods. In this paper, the framework is adopted, with a global step using tentative matches to estimate the planar projectivity, and a local step attempting to solve the stereo correspondence. A detected point in the first image is matched to an auxiliary point in the second image, lying on the line joining the transformed first-image point and its closest detected second-image point. Convergence is assured, while achieving robustness to both mismatches and non-coplanar points.

1. Introduction

Since active robots change their visual attention by altering the camera parameters, they continuously require recalibration. Active robots therefore cannot reconstruct metric information about their surroundings, but it is still possible to extract useful, comparative information [6, 12], such as planarity. In fact, such measures can be more relevant for navigation than detailed metric information. Coplanarity is particularly important, since indoor environments abound with planar surfaces [14]. For example, the capability of automatically detecting obstacles [2, 4, 5] is advantageous to a robot, and it amounts to finding the image regions that belong to the dominant plane.

The framework we consider is the use of binocular vision and point features. The fundamental matrix [10] relates points in the first image to corresponding points in the second image. In the context of planar surfaces, this relationship simplifies to a projective transformation of the image coordinates [9, 11]. In [13] an approach was presented for finding planar regions in two images using point matches that are assumed to be found in a previous step. Five point matches are tested for coplanarity, and the planar projectivity between the two views is then calculated, which enables a prediction of positions from one image to the other. If sufficiently many actual and predicted points are close enough, an extended planar region is found recursively.

In this paper, we address the problem of robustly finding points on the dominant plane from two images. The difference to previous work (e.g. [13]) is that we assume neither established correspondences nor camera calibration. Rather, the problems of matching corresponding points and estimating the corresponding planar projectivity are addressed simultaneously. In a related context, namely rigid registration of 3D points without given correspondence, iterative approaches have been reported [1, 15] that may be adapted to solve our task. However, since we also wish to apply robust estimators, we address the question of (local) convergence. Cohen [3] presented a general framework for a broad class of vision problems that have been solved by two-step methods.
The key idea is to introduce auxiliary variables to guarantee at least local convergence. In the present paper, we adapt this framework to the problem of robustly estimating planar projectivities and point correspondences from stereo images. The method is a two-step iterative scheme. The first step predicts points from the first to the second image using the current estimate of the projective transformation. The closest point in the second image to each predicted point is then used to define auxiliary image points. Each such point is calculated by a linear interpolation between the closest point and the predicted point, with weights that depend on a robust function of the distance between the two points.

The second step returns an estimate of the projective transformation by a least-squares fit of the points in the first image to the auxiliary points in the second image. The resulting transformation is then used for the first step again, until convergence is reached. The duality of points and lines in the projective plane implies that the method presented here can also be used with line coordinates, which is useful considering the abundance of straight lines in images of indoor environments.

In the next section, we describe the stereo geometry in sufficient detail, and how to estimate projectivities from matching points. In Section 3 we describe properties of iterative two-step methods, and introduce the auxiliary variables. Section 4 is devoted to the experiments, testing the convergence and stability properties of the method. We conclude the paper with a discussion.

2. Stereo geometry and projectivity estimation

Two cameras at different positions generally assign different pixel coordinates to a fixed point in 3D space. The geometry of the camera set-up induces various relationships between the pixel coordinates, such as the epipolar constraint [10]. In particular, each 3D plane gives rise to a projective transformation between the coordinates:

$$\lambda x_2 = t_{11} x_1 + t_{12} y_1 + t_{13}$$
$$\lambda y_2 = t_{21} x_1 + t_{22} y_1 + t_{23} \qquad (1)$$
$$\lambda \;\; = t_{31} x_1 + t_{32} y_1 + t_{33}$$

where $(x_1, y_1)$ and $(x_2, y_2)$ are two points in the first and second images, respectively. Since we use homogeneous coordinates, any scalar $\lambda \neq 0$ is valid.

2.1. Estimating the projectivity

By eliminating $\lambda$ in (1), we obtain two equations that are linear in the elements of the projectivity matrix:

$$t_{11} x_1 + t_{12} y_1 + t_{13} - t_{31} x_1 x_2 - t_{32} y_1 x_2 - t_{33} x_2 = 0$$
$$t_{21} x_1 + t_{22} y_1 + t_{23} - t_{31} x_1 y_2 - t_{32} y_1 y_2 - t_{33} y_2 = 0 \qquad (2)$$

Due to this linearity, we can estimate the matrix through multiple linear regression if we know the stereo correspondences. Since we can set the scale factor arbitrarily, we scale $T$ to have unit Frobenius norm, that is, $\|T\|_F = 1$, reducing the degrees of freedom to eight. A full-rank estimation of these is possible using four matching points, but often we have many more matching points, allowing us to solve the over-determined system by least-squares estimation. This is done by eigenvector analysis of the system matrix, using the singular value decomposition.
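To make the estimation procedure concrete, here is a minimal NumPy sketch of the algebraic estimator (our illustration, not code from the paper): it stacks the two equations (2) for each match and takes the singular vector associated with the smallest singular value, which has unit Frobenius norm by construction. The function names are ours.

```python
import numpy as np

def estimate_projectivity(pts1, pts2):
    """Algebraic (linear) estimate of the 3x3 projectivity T mapping
    pts1 to pts2, stacking the two equations (2) per match.
    pts1, pts2: (n, 2) arrays of corresponding image points, n >= 4."""
    A = np.zeros((2 * len(pts1), 9))
    for i, ((x1, y1), (x2, y2)) in enumerate(zip(pts1, pts2)):
        A[2 * i]     = [x1, y1, 1, 0, 0, 0, -x1 * x2, -y1 * x2, -x2]
        A[2 * i + 1] = [0, 0, 0, x1, y1, 1, -x1 * y2, -y1 * y2, -y2]
    # The singular vector of the smallest singular value minimizes
    # ||A t|| subject to ||t|| = 1 (unit Frobenius norm for T).
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1].reshape(3, 3)

def transform(T, pts):
    """Apply the projectivity to (n, 2) points (homogeneous division)."""
    ph = np.hstack([pts, np.ones((len(pts), 1))]) @ T.T
    return ph[:, :2] / ph[:, 2:3]
```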
2.2. Algebraic versus Euclidean estimation

Figure 1 depicts the performance of two different methods for estimating the projectivity from matching points. The first method applies linear regression to the equations in (2); the system of equations is solved by finding the eigenvector corresponding to the smallest eigenvalue of the system matrix, using the singular value decomposition. The second method minimizes the average squared Euclidean distance (the prediction error) between the points in the second image and the transformed points of the first image. This is a non-linear problem, which can however be well approximated by iterating the linear method with a weighted least-squares approach. In the figure we show, for both estimation procedures, the prediction error as a function of the standard deviation of the additive Gaussian noise applied to the input point positions. The conclusion for this synthetic example is that the Euclidean estimation gives a better estimate of the projectivity at all noise levels.

Figure 1. Prediction error plotted against the perturbation of the input points, for the algebraic (solid) and Euclidean (dashed) distances.

However, we also performed additional experiments with projective transformations taken from real scenes, in which the projective transformation often is approximately affine. These experiments did not indicate a significant advantage of the Euclidean estimation over the algebraic estimation. The algebraic version is also favored by the computational savings it offers, and is therefore used in the rest of this paper.

2.3. Sensitivity to projectivity perturbations

Later in this paper we introduce methods that require an initial estimate of the projectivity matrix. It is therefore important to consider how the projectivity changes qualitatively under small random perturbations of the matrix. If the transformed points move only moderately for slight perturbations, then we can allow minor errors in the initial matrix. The prediction error for perturbed projectivities was investigated in an experiment with a natural projectivity corresponding to the real scene in Fig. 7. Fifteen matching points on the floor were detected, and the projectivity matrix was estimated from these. The points in the second image were then redefined to fit the estimated projectivity; this provided a convenient experimental set-up with matching points in the two images and a projectivity perfectly transforming points between the images. Figure 2 shows how the prediction error increases with respect to perturbations of the projectivity matrix. For each noise level, we performed a series of 100 experiments. In each such experiment, the perturbation matrix was taken randomly, with a Frobenius norm equal to the desired noise level. The average prediction error was calculated as the sum of the squared prediction errors over all points, and in the figure we show the median value of all 100 average prediction errors as a function of the norm of the perturbation matrix.

Figure 2. Median prediction error plotted against projectivity perturbation.

The experiment exhibits that even slight perturbations cause major prediction errors. For instance, if the allowed prediction error is 10 pixels, then the perturbation matrix should have a norm less than $5 \cdot 10^{-6}$! The conclusion of this section is that small random changes of the projectivity matrix may considerably change the corresponding transformation. It is therefore important to exploit available constraints to obtain a realistic initial projectivity estimate (see the next section). Furthermore, this result emphasizes the necessity of being robust with respect to local mismatches, which is the objective of Section 3.

2.4. Selecting a meaningful initial projectivity

In the previous section we observed how small perturbations of the projectivity matrix may cause large changes of the transformation. By investigating the geometrical parameters defining the projectivity matrix, we can find realistic matrices by invoking constraints derived from the geometry. Often, we can assume that the interior calibration is equal in both cameras and approximated by a uniform scaling and a translation. The relative pose of the cameras can generally be modeled by a sideways translation and a rotation around the y-direction. The geometrical parameters thus defining the approximate projective matrix can be restricted to reasonable intervals by exploiting knowledge about the approximate camera set-up. These intervals can then be sampled at appropriate points, to find geometrically sound initial values of the projectivity matrix to be used in the method described later in this paper, as sketched below.
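The paper stays at the level of geometric parameters here. As an illustration of how such sampling might be realized (our construction, not the authors'), the sketch below composes the standard plane-induced homography $H = K(R + t n^T/d)K^{-1}$ from the simplified intrinsics and pose described above; the parameter intervals and all numeric values are hypothetical.

```python
import numpy as np
from itertools import product

def plane_homography(f, cx, cy, theta_y, tx, n=(0.0, 1.0, 0.0), d=1.0):
    """Candidate initial projectivity, using the standard plane-induced
    homography H = K (R + t n^T / d) K^{-1} under the convention
    X2 = R X1 + t and n^T X1 = d (our assumption, not spelled out in the
    paper).  Intrinsics: uniform scale f and principal point (cx, cy),
    equal in both cameras; pose: sideways translation tx and rotation
    theta_y about the y-axis, as in Sec. 2.4."""
    K = np.array([[f, 0.0, cx], [0.0, f, cy], [0.0, 0.0, 1.0]])
    c, s = np.cos(theta_y), np.sin(theta_y)
    R = np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])
    t = np.array([[tx], [0.0], [0.0]])
    H = K @ (R + t @ np.reshape(n, (1, 3)) / d) @ np.linalg.inv(K)
    return H / np.linalg.norm(H)  # unit Frobenius norm, as in Sec. 2.1

# Sample the parameter intervals (hypothetical values) to obtain a set of
# geometrically sound candidate initializations H0.
candidates = [plane_homography(800.0, 320.0, 240.0, th, tx)
              for th, tx in product(np.linspace(-0.1, 0.1, 5),
                                    np.linspace(-0.2, 0.2, 5))]
```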
3. Convergent iterative joint-estimation of point correspondence and projectivity

In the next subsection, we describe how to adapt the framework of current two-step iterative algorithms to our problem. The subsection that follows describes how we set up the joint-optimization problem in the present context of planar projectivities and point correspondences. In Section 3.3 we extend the optimization problem by introducing auxiliary variables, preserving local convergence while including robust estimators. The impact of selected robust distances on the joint estimation procedure is discussed in Section 3.4.

3.1. Iterative Closest Point Methods

Many computer vision problems can be modeled as the joint minimization of an energy with respect to a number of unknown parameters. Several iterative algorithms have been presented in the literature that alternately solve for some parameters, and then fix these parameters when solving for the others. The convergence of such methods is not guaranteed in general.
A typical example is the snake algorithm [8], which finds smooth curves enclosing regions in a grey-level image by alternately a) moving a number of reference points in the gradient direction and b) fitting a regularized curve that passes through the reference points. Another example is 3D registration, where the aim is to find a transformation and a matching between two sets of 3D points. Again, this can be solved [1, 15] by a two-step algorithm.

In this paper, we have a stereoscopic visual system, and the aim is to find the correspondence between points in the two images while simultaneously estimating the projective transformation between the two sets of image points. We adapt the two-step procedure described above to this problem. The sets of points in the first and second images are defined as $S_1 = \{p^1_k\}$ and $S_2 = \{p^2_k\}$, respectively. The distance from any point $p'$ in the second image to its closest point in the second set is referred to as $d_{S_2}(p')$ and is defined through

$$d_{S_2}(p') = d(p', S_2) = \inf_{p^2 \in S_2} d(p', p^2)$$

The method is initialized with a projective transformation $H_0$, and consists of two steps.

Local step: Given the previous projectivity estimate $H_k$, we define the points $p^1 \in S_1$ and $p^2 \in S_2$ to be matching if $p^2$ is the closest point to the transformed point $H_k(p^1)$.

Global step: The matrix representing the projectivity $H_{k+1}$ is estimated from the matching done in the local step.
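A minimal sketch of this plain two-step scheme follows (our code, assuming the transform and estimate_projectivity helpers from the Section 2.1 sketch, and SciPy's KD-tree for the closest-point query):

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_projectivity(S1, S2, H0, n_iter=50):
    """Two-step scheme of Sec. 3.1 (without auxiliary variables):
    alternate the local step (closest-point matching under the current
    projectivity) and the global step (algebraic re-estimation).
    S1: (n, 2), S2: (m, 2) point sets; H0: 3x3 initial projectivity."""
    tree = cKDTree(S2)                      # fast closest-point queries
    H = H0
    for _ in range(n_iter):
        pred = transform(H, S1)             # local step: H_k(p^1_i)
        _, idx = tree.query(pred)           # closest p^2 per prediction
        H_new = estimate_projectivity(S1, S2[idx])   # global step
        if np.sum(H_new * H) < 0:           # resolve the sign ambiguity
            H_new = -H_new                  # of the unit-norm SVD solution
        if np.linalg.norm(H_new - H) < 1e-9:  # stop at a fixed point
            break
        H = H_new
    return H
```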

This two-step method is described in the following section as a joint energy minimization.

3.2. Energy minimization

The iterative scheme presented above can be interpreted as the joint minimization of an energy with respect to both the stereo matching and the projectivity. In principle, the performance can be improved by using robust estimators. But convergence is only assured if the distance measure used in the local step is the same as that induced by the norm used in the global estimation step, since then the same energy term is minimized in both steps. Therefore, convergence is not guaranteed [3] if a robust technique is used in one step. In the next section, we address this problem and apply the general framework developed in [3] to our stereoscopic setting.

To enable the quantitative evaluation of point matches in a variety of ways, a potential $P$ (to be specified later on) is introduced for each point in the second image. The potential is a function of the distance $d_{S_2}(p')$ defined above, that is, $P(p') = f(d_{S_2}(p'))$. Using these definitions, the energy that we wish to minimize jointly with respect to the stereo correspondence and the projective transformation is

$$E = \sum_i P(H(p^1_i))$$

3.3. Auxiliary variable framework

As discussed above, there is no guarantee that the iterative method converges except when using a least-squares approach. Using ordinary least squares, however, means that no distinction is made between mismatches and good matches, and that potential outliers are included. Therefore, we wish to incorporate robust estimation into the method. The question then is how to achieve convergence of the two-step iterative method. The key idea is to introduce points in the second image, called auxiliary points, that need not coincide with any of the points in the second set. Following Cohen [3], the energy $E$ of the previous section has to be modified to include the auxiliary points as a second set of variables. It is shown that this energy can be minimized with respect to each of the two sets of variables alternately. The non-convex potential of the original formulation is transformed into a potential that is convex with respect to each set of variables. The auxiliary energy of the problem addressed in this paper reads

$$E_{aux} = \frac{1}{2} \sum_i \|H(p_i) - \tilde{p}_i\|^2 + \sum_i P_1(\tilde{p}_i)$$

where we use the notation $\tilde{p}$ for auxiliary variables. The new potential $P_1$ is associated with the potential $P$ via conjugate functions (see [3]). The objective now is to minimize the auxiliary energy with respect to the projective transformation $H$ and the auxiliary variables $\tilde{p}_i$.

For the special case of the original energy corresponding to least squares, we choose $P(p') = \alpha d^2/2$, where $d = d(p', S_2)$ and the point $p'$ lies in the second image, as before. The corresponding potential $P_1$ is then $P_1(\tilde{p}) = \frac{\alpha}{2(1-\alpha)}\, d^2(\tilde{p}, S_2)$. Furthermore, the auxiliary variable $\tilde{p}_i$ is calculated by a linear interpolation with fixed weights between the predicted point and its closest point in the second set, that is,

$$\tilde{p}_i = (1 - \alpha)\, p'_i + \alpha\, p^2_{S_2}$$

When we have a different potential in the original problem, we determine the potential $P_1$ and the auxiliary variables $\tilde{p}$ differently, so as to satisfy the convexity requirements. According to Theorem 5 in [3], we do not need to compute the auxiliary potential $P_1$ explicitly in order to find the auxiliary variable.
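As an illustration, one iteration of the auxiliary-variable scheme for this least-squares case might look as follows (our sketch, reusing the transform and estimate_projectivity helpers from the Section 2.1 sketch; alpha is the interpolation weight as reconstructed above):

```python
from scipy.spatial import cKDTree

def auxiliary_step_ls(S1, S2, H, alpha):
    """One iteration for the least-squares potential P = alpha d^2 / 2:
    the auxiliary point is a fixed-weight interpolation between the
    predicted point and its closest point in S2, and the global step
    re-fits H to the auxiliary points."""
    pred = transform(H, S1)                        # H(p^1_i)
    _, idx = cKDTree(S2).query(pred)               # closest points in S2
    aux = (1.0 - alpha) * pred + alpha * S2[idx]   # auxiliary points ~p_i
    return estimate_projectivity(S1, aux)          # min sum ||H(p) - ~p||^2
```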
In the following section, we derive the auxiliary variables corresponding to specific robust potentials.

3.4. Robust estimation

The choice of the potential $P(p) = f(d(p, S_2))$ determines where the auxiliary point lies on the line joining the predicted point and its closest point. Above, we described that for least squares the weight between the two points is fixed. However, if we want a robust estimation framework, or if we wish to include image features to aid the matching, then the potential is defined differently. Most robust functions behave like least squares for small values of $d$, but for large values the influence of the data decreases. The robust inf-function $f(d) = \inf(\delta, \alpha d^2/2)$ was used as an example in [3]; it assigns the same energy contribution to all $d$ values above a threshold. From the theory in [3], we derive how to calculate the auxiliary point when the inf-function is used:

$$\tilde{p} = \begin{cases} (1-\alpha)\, p' + \alpha\, p^2_{S_2} & \text{if } d^2 < 2\delta/\alpha \\ p' & \text{otherwise} \end{cases}$$

Thus, it behaves like least squares up to a threshold, but when the distance is too large, the data in the second set is ignored. This difference between least squares and the inf-function is illustrated in Fig. 3.

The Huber function [7] is well known in robust estimation; it is quadratic for small values of $d$, but linear above a threshold. The parameters are chosen such that the function and its derivative are both continuous:

$$f(d) = \begin{cases} \alpha d^2/2 & \text{if } 0 \le d \le \delta \\ \alpha\delta\, (d - \delta/2) & \text{if } d > \delta \end{cases}$$

If the Huber function is used to define the potential $P$ in the original energy, then we can derive the corresponding auxiliary points from the theory in [3] as

$$\tilde{p} = \begin{cases} (1-\alpha)\, p' + \alpha\, p^2_{S_2} & \text{if } 0 \le d \le \delta \\ (1 - \alpha\delta/d)\, p' + (\alpha\delta/d)\, p^2_{S_2} & \text{if } d > \delta \end{cases}$$

Thus, using the Huber function in the auxiliary-variable approach gives the least-squares interpolation for low distances, but above a threshold, the weight of the predicted point increases with the distance.
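The auxiliary-point rules for both robust potentials can be collected in one small helper; this is our sketch of the rules above, with alpha the interpolation weight and delta the threshold parameter:

```python
import numpy as np

def aux_points(pred, closest, alpha, delta, potential="inf"):
    """Auxiliary points for the robust potentials of Sec. 3.4.
    pred, closest: (n, 2) arrays of predicted points and their closest
    points in the second set."""
    d = np.linalg.norm(pred - closest, axis=1)
    ls = (1.0 - alpha) * pred + alpha * closest        # least-squares rule
    if potential == "inf":
        # Beyond the threshold, the data point is ignored: ~p = p'.
        keep = (d ** 2 < 2.0 * delta / alpha)[:, None]
        return np.where(keep, ls, pred)
    if potential == "huber":
        # Quadratic regime up to delta; beyond it, the weight of the
        # predicted point grows with the distance.
        w = np.where(d <= delta, alpha, alpha * delta / np.maximum(d, 1e-12))
        return (1.0 - w)[:, None] * pred + w[:, None] * closest
    raise ValueError(f"unknown potential: {potential}")
```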

Figure 3. Point traces of the predicted point (solid curve) and its corresponding auxiliary point (dashed) through the iterations. The two figures show the least-squares approach (left) and the robust inf-function (right).

4. Experiments

In this section we present experiments that evaluate the robustness, stability and convergence of the auxiliary-variable method. The performance is investigated for different robust functions, parameter values and scenes, at different levels of difficulty.

Figure 4 shows the decrease of the auxiliary energy during the iterations, for two values of $\alpha$ (0.3 and 0.6). The points are depicted in Fig. 5, and the robust inf-function was used. The residual decreases more rapidly for the higher $\alpha$-value (dashed curve), and the jumps in the graphs correspond to resolved matching ambiguities.

Figure 4. The energy $E_{aux}$ decreasing through the iterations, for two different $\alpha$-values. The prominent jumps correspond to resolved matching ambiguities.

Figure 5 illustrates the advantage of using a robust potential in the auxiliary-variable method. The figure shows the traces of the predicted points during the iterations; the points in the second set are marked with crosses. Each predicted point should converge to its corresponding cross. We defined twelve points on a circle and two other points close to the center of the circle. The same point sets are used in the two images, that is, the correct projectivity is the identity matrix. To test the robustness of the method against mismatches, we selected the initial projectivity matrix as a small rotation, scaling and translation. For instance, the point that corresponds to point E in the second image is transformed by the initial matrix to be closer to point D. Also, the point corresponding to point B in the second image is transformed to a point closer to point C. The least-squares method in the left part of Fig. 5, with $\alpha = 0.1$, locks onto an incorrect minimum, due to mismatches between transformed points and their closest points in the second image. But when the potential with the robust inf-function is used (right), the method converges to the global minimum, and the initial mismatching, for instance between the two midpoints, is successfully resolved.

Figure 5. Mismatching causes the least-squares method (left) to lock onto an incorrect solution, but the robust method (right) successfully finds the global minimum.

The aim of another experiment was to check whether the method can segment out the coplanar points on the dominant plane. A real projectivity from the stereo image in Fig. 7 was used to produce corresponding point sets in the two image planes. Another point was added that did not satisfy the projective transformation, simulating a non-coplanar point. The left part of Fig. 6 shows the traces for the least-squares approach; it converges to an incorrect solution that is close to the correct one. The closest-point matching causes it to fail to notice that the added point is too far from its corresponding point to be considered. However, since this point lies among the coplanar points, and not far away, it did not affect the estimated projectivity very much.
It is clear that the robust method (right), using the inf-function, successfully finds the coplanar points and converges to the correct projectivity estimate, while disregarding the added point as an outlier.

Figure 6. The coplanar points correctly converge with the robust method (right), but not with the least-squares method (left).

Figure 7 shows the result for a real scene. Two of the 17 detected points are not on the floor, and they are segmented from the coplanar points. The traces of the predicted points converge (coplanar) or do not converge (non-coplanar) to their corresponding points. We initialized the method with a projectivity causing initial mismatches, and we observe how the robust method (using the inf-function) converges to the correct projectivity, despite the initial mismatches and perturbed predictions. Points that do not belong to the ground plane were disregarded.

Figure 7. Real stereo image with detected points, and the traces of the predicted points in the right figure. Mismatching and outlier points are successfully handled.

A different real scene was also tested (Fig. 8), in which 18 coplanar points were detected, along with five points not on the plane. The initial mismatching was resolved and the method converged correctly, disregarding the five outlier points.

Figure 8. Another real stereo image with detected points, and the traces of the predicted points in the right figure. Mismatching and outlier points are successfully handled.

5. Discussion

We presented a robust iterative framework to detect the dominant plane in binocular vision, requiring neither calibration nor a given matching between the images. The dominant plane is found by simultaneously addressing the problems of finding the planar projectivity between the images and the stereo matching, with an iterative closest point method. To assure its convergence, we employed the auxiliary variable framework developed by Cohen [3]. The theoretical framework was further elaborated to provide mechanisms for several robust estimators. Numerical experiments confirmed that the method converges as theoretically predicted, and that it successfully copes with mismatches and with outlier points not belonging to the dominant plane.

References

[1] P. Besl and N. McKay. A method for registration of 3-D shapes. IEEE Trans. PAMI, 14(2):239-256, February 1992.
[2] S. Carlsson and J.-O. Eklundh. Object detection using model based prediction and motion parallax. In Proc. 1st ECCV, 1990.
[3] L. Cohen. Auxiliary variables and two-step iterative algorithms in computer vision problems. JMIV, 6:59-83, 1996.
[4] W. Enkelmann. Obstacle detection by evaluation of optical flow fields from image sequences. IVC, 9(3), 1991.
[5] P. Fornland. Direct obstacle detection and motion from spatio-temporal derivatives. In V. Hlaváč and R. Šára, editors, Proc. 6th CAIP, pages 874-879, Prague, Czech Republic, Sep 1995.
[6] R. Hartley, R. Gupta, and T. Chang. Stereo from uncalibrated cameras. In Proc. CVPR, 1992.
[7] P. Huber. Robust Statistics. Wiley Series in Prob. and Math. Stat. John Wiley & Sons, Inc., 1981.
[8] M. Kass, A. P. Witkin, and D. Terzopoulos. Snakes: Active contour models. IJCV, 1(4):321-331, January 1988.
[9] Q.-T. Luong and O. Faugeras. Determining the fundamental matrix with planes: Instability and new algorithms. In Proc. CVPR, 1993.
[10] Q.-T. Luong and O. Faugeras. The fundamental matrix: Theory, algorithms, and stability analysis. IJCV, 17(1):43-75, January 1996.
[11] L. Robert and M. Hebert. Deriving orientation cues from stereo images. In Proc. 3rd ECCV, 1994.
[12] A. Shashua. Projective structure from uncalibrated images: Structure-from-motion and recognition. IEEE Trans. PAMI, 16(8):778-790, August 1994.
[13] D. Sinclair and A. Blake. Quantitative planar region segmentation. IJCV, 18(1):77-91, 1996.
[14] R. Tsai and T. Huang. Estimating three-dimensional motion parameters of a rigid planar patch II: Singular value decomposition. IEEE Trans. Acoustics, Speech and Signal Processing, 30, 1982.
[15] Z. Zhang. Iterative point matching for registration of free-form curves and surfaces. IJCV, 13(2):119-152, 1994.
