3D Reconstruction on GPU: A Parallel Processing Approach


Shubham Gupta, Siddharth Choudhary, and P.J. Narayanan
Center for Visual Information Technology
International Institute of Information Technology, Hyderabad, INDIA
{shubham@students,siddharth_ch@students,pjn@}iiit.ac.in

Abstract. This report presents the current state-of-the-art algorithm for fully automatic recognition and reconstruction of 3D objects in image databases, and the scope for parallelization within it. The object recognition problem is posed as one of finding consistent matches between all images, subject to the constraint that the images were taken by a perspective camera. We assume that the objects or scenes are rigid. With each image we associate a camera matrix, parameterised by rotation, translation and focal length. We use invariant local features to find matches between all images, and the RANSAC algorithm to find those that are consistent with the fundamental matrix. Objects are recognised as subsets of matching images. We then solve for the structure and motion of each object using a sparse bundle adjustment algorithm. We have studied this algorithm and are in the process of parallelizing it.

Key words: 3D Recognition and Reconstruction, Parallelization using the CUDA SDK on GPU

1 Introduction

A central goal of image-based rendering is to evoke a visceral sense of presence based on a collection of photographs of a scene. There has been significant research progress towards this goal through view synthesis methods in the research community and in commercial products such as panorama tools. One dream is that these approaches will one day allow virtual tourism of the world's interesting and important sites. Nowadays, a Google image search on "Notre Dame Cathedral" returns over 10,000 photos, capturing the scene from myriad viewpoints, levels of detail, lighting conditions, seasons, decades and so forth.
A system for 3D reconstruction makes it easy to browse through such photos, along with a sparse 3D geometric representation of the scene. The current state of the art has a total running time of a few hours for 120 photos, and about two weeks for 2,500 photos. The goal of this project is to find the

scope of parallelization in the various modules described below. The remainder of this report is structured as follows. In Section 2 we give a system overview. Section 3 describes the invariant feature extraction and matching scheme. Section 4 describes image matching. Section 5 describes the fundamental matrix computation used to find the correct image matches. Section 6 describes the self-calibration of the intrinsic camera parameters. Section 7 describes the sparse bundle adjustment algorithm used to solve jointly for the cameras and structure.

2 System Overview

The system is divided into the following modules:

Feature Extraction and Matching
Image Matching
Fundamental Matrix Computation
Self Calibration
Bundle Adjustment

3 Feature Extraction and Matching

The features used in this project are SIFT (Scale Invariant Feature Transform) features. These locate interest points at maxima/minima of a difference-of-Gaussian function in scale space. Each interest point has an associated orientation, which is the peak of a histogram of local orientations. This gives a similarity-invariant frame in which a descriptor vector is sampled. Though simple pixel resampling would be similarity invariant, the descriptor vector actually consists of spatially accumulated gradient measurements. This spatial accumulation is important for shift invariance, since the interest point locations are typically accurate only to within 1-3 pixels. Illumination invariance is achieved by using gradients and normalising the descriptor vector. We extract SIFT features from all n images. Since multiple images may view the same point in the world, each feature is matched to its k nearest neighbours (typically k = 4). Finding the k nearest neighbours of all the features can be done with a k-d tree using the ANN library.
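Since every feature's nearest-neighbour query is independent of the others, this stage parallelizes naturally. As a minimal illustration (pure Python, brute force standing in for the ANN library's k-d tree; the function names and toy descriptors are ours, not from the original implementation):

```python
# Brute-force k-nearest-neighbour search over SIFT-like descriptor
# vectors. Each query is independent, so in a GPU setting all queries
# could be dispatched to separate threads.

def squared_distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def k_nearest(query, descriptors, k=4):
    """Return the indices of the k descriptors closest to `query`."""
    ranked = sorted(range(len(descriptors)),
                    key=lambda i: squared_distance(query, descriptors[i]))
    return ranked[:k]

# Toy 2-D "descriptors" (real SIFT descriptors are 128-dimensional):
descriptors = [[0.0, 0.0], [1.0, 0.0], [0.0, 2.0], [5.0, 5.0]]
print(k_nearest([0.9, 0.1], descriptors, k=2))  # prints [1, 0]
```

A real implementation would use the k-d tree described below to avoid the linear scan; the point here is only that queries share no state.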
This section consists of SIFT feature extraction, which can be parallelized using SiftGPU (an existing implementation), and approximate nearest-neighbour search, which is highly parallelizable since all the queries (one per feature) run independently of each other.

3.1 KD Tree Construction Algorithm

The SIFT feature vectors of each image are inserted into a k-d tree, built by the following algorithm:

function kdtree (list of points pointlist, int depth)
{
    if pointlist is empty
        return nil;
    // Select axis based on depth so that axis cycles through all valid values
    var int axis := depth mod k;
    // Sort point list and choose median as pivot element
    select median by axis from pointlist;
    // Create node and construct subtrees
    var tree_node node;
    node.location := median;
    node.leftchild := kdtree(points in pointlist before median, depth+1);
    node.rightchild := kdtree(points in pointlist after median, depth+1);
    return node;
}

3.2 Feature Space Outlier Rejection

Feature space outlier rejection removes incorrect matches by comparing the distance of a potential match to the distance of the best incorrect match. In an ordered list of nearest-neighbour matches, we assume that the first (n_overlap - 1) elements are potentially correct but that the n_overlap-th element is an incorrect match. We denote the match distance of the n_overlap-th element as e_outlier, as it is the best-matching outlier. To verify a match, we compare the match distance e_match of a potentially correct match to the outlier distance, accepting the match if e_match < 0.8 * e_outlier.

4 Image Matching

We use the k nearest neighbours of all the features (the output of the previous step) to find the m nearest matching images for each image. This is done so that we need not compute the fundamental matrix for every pair of images (which is quadratic in the number of images). We have found it sufficient to match each image only to a small number of neighbouring images in order to get good solutions for the camera positions. The m nearest matching images of all images can be found in parallel, so this module can be highly optimized.
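The per-image selection just described can be sketched as a simple vote count: each feature of an image votes for the images its nearest neighbours came from, and the m most-voted images are kept. This is an illustrative sketch in pure Python (the data layout and function name are our own, not from the paper's implementation):

```python
from collections import Counter

def nearest_images(neighbour_images, m=2):
    """Given, for one image, the list of image indices that its
    features' nearest neighbours belong to, return the m images that
    received the most votes. Each image can be processed in parallel,
    since the counts are independent."""
    votes = Counter(neighbour_images)
    return [img for img, _ in votes.most_common(m)]

# Features of image 0 matched mostly into images 1 and 2:
print(nearest_images([1, 1, 2, 1, 2, 3], m=2))  # prints [1, 2]
```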

5 Fundamental Matrix Computation

The fundamental matrix F is a 3 x 3 matrix of rank 2 which relates corresponding points in stereo images. In epipolar geometry, F y1 describes a line on which the corresponding point y2 in the other image must lie. That is, for all pairs of corresponding points with homogeneous image coordinates y1 and y2,

    y2^T F y1 = 0

Being of rank two and determined only up to scale, the fundamental matrix can be estimated given at least seven point correspondences. This relation between corresponding image points is referred to as the epipolar constraint, matching constraint, or incidence relation, and it makes the fundamental matrix a useful tool for scene reconstruction. A method to solve for F without ambiguity or the need to solve cubic equations is the 8-point, or least-squares, algorithm, which extends easily to any larger number of points. Each point correspondence is written as a homogeneous equation in 9 variables. All the equations are stacked into a matrix and the least-squares optimal solution is found by eigen-analysis. Unfortunately, this method does not enforce the rank constraint, so F must be projected onto a rank-two subspace. This can be done by performing a singular value decomposition and setting the smallest singular value to zero. We use normalized coordinates to find F, which is denormalized afterwards.

5.1 Eight Point Algorithm

The eight-point algorithm is described as follows. Given 8 point correspondences p11, p12, ..., p18 and p21, p22, ..., p28:

1. Normalize the points for unit variance, centering the origin in the image:
       q1 = T1 p1,  q2 = T2 p2
2. Create the data matrix and express the epipolar constraint as Df = 0, where D is 8x9 and row i of D is
       [u'u  u'v  u'  v'u  v'v  v'  u  v  1]
   with q1 = [u v 1]^T, q2 = [u' v' 1]^T and
       f = [G11 G12 G13 G21 G22 G23 G31 G32 G33]^T
3. Solve for f as the eigenvector corresponding to the least eigenvalue of D^T D.
4. Ensure that the G obtained has rank 2 by finding the nearest rank-2 matrix G', which minimizes the Frobenius norm ||G - G'||.

5. Denormalize to get F:
       F = T2^T G' T1

RANSAC

For a robust estimate of F we apply a RANSAC approach; the quality of F determines the goodness of the derived essential matrix. RANSAC is an iterative method to estimate the parameters of a mathematical model from a set of observed data which contains outliers. It is a non-deterministic algorithm in the sense that it produces a reasonable result only with a certain probability, this probability increasing as more iterations are allowed. The algorithm was first published by Fischler and Bolles in 1981. All the candidate fundamental matrices can be computed in parallel, since we can process all the random samples at once; separating inliers from outliers, however, must be done serially.

RANSAC Algorithm

The generic RANSAC algorithm, in pseudocode, works as follows:

input:
    data  - a set of observations
    model - a model that can be fitted to data
    n - the minimum number of data required to fit the model
    k - the maximum number of iterations allowed in the algorithm
    t - a threshold value for determining when a datum fits a model
    d - the number of close data values required to assert that a model fits well to data
output:
    best_model - model parameters which best fit the data
    best_consensus_set - data points from which this model has been estimated
    best_error - the error of this model relative to the data

iterations := 0
best_model := nil
best_consensus_set := nil
best_error := infinity
while iterations < k
    maybe_inliers := n randomly selected values from data
    maybe_model := model parameters fitted to maybe_inliers
    consensus_set := maybe_inliers
    for every point in data not in maybe_inliers
        if point fits maybe_model with an error smaller than t
            add point to consensus_set
    if the number of elements in consensus_set is > d
        better_model := model parameters fitted to all points in consensus_set
        this_error := a measure of how well better_model fits these points
        if this_error < best_error
            best_model := better_model
            best_consensus_set := consensus_set
            best_error := this_error
    increment iterations
return best_model, best_consensus_set, best_error

6 Self Calibration

6.1 Self Calibration

Camera calibration is the process of finding the internal parameters of the camera. The off-line calibration procedure requires the Euclidean distances of various points on a calibration object: it computes the camera matrix that minimizes the error between the image of the object and its reprojection. The key requirement of this technique is prior knowledge of the Euclidean geometry of the object, which may not always be available; moreover, if the camera internals change, the camera has to be recalibrated. To circumvent the need for prior knowledge of the Euclidean structure of the scene, self-calibration is used to find the internal parameters of the camera. It makes use of point correspondences between images taken at different rotations and translations of the camera. The camera is rotated and translated sufficiently to avoid the epipoles meeting at the same point. The assumptions employed are described in Section 6.2, the mathematical formulations in Section 6.3, and the pseudocode in Section 6.4.

6.2 Assumptions used in the Calibration

The assumptions employed during the calibration are listed as follows:

The skew parameter of the camera is assumed to be zero.
The aspect ratio of the camera is assumed to be 1.
The principal point is taken close to the actual center of the image.

6.3 Hartley's Equations and Cipolla's Method

We use Hartley's equations to find the best estimate of the camera's focal length. The fundamental matrix used in the equation is derived from the point-to-point correspondences using RANSAC. The internal parameters of the camera matrix are initialized with this focal length. From the fundamental matrix and the initial estimate of the camera matrix, the essential matrix is calculated. The constraint that the two non-zero singular values of the essential matrix should be equal forms the basis of the cost function, which is minimized to find the estimate of the camera matrix for which the two singular values of the essential matrix are equal in magnitude. The focal length used to initialize the camera matrix and the cost function used are shown below.

The focal length is derived from the equation (here [e']_x denotes the skew-symmetric matrix of the epipole e'):

    f^2 = - ( c'^T [e']_x I F c ) ( c^T F^T c' ) / ( c'^T [e']_x I F I F^T c' )

where I = diag(1, 1, 0), and c = (u0, v0, 1)^T and c' = (u'0, v'0, 1)^T are the principal points of the two images.

This is the cost function used:

    C(K_i, i = 1, ..., n) = sum_ij ( w_ij / sum_kl w_kl ) * ( sigma1_ij - sigma2_ij ) / sigma1_ij

where sigma1_ij and sigma2_ij are the two non-zero singular values of the essential matrix for image pair (i, j) and the w_ij are weights.

6.4 Pseudo Code

The algorithm is described as follows:

Compute the fundamental matrix using RANSAC for each pair of images.
Compute the best estimate of the focal length, which is used for initialization of the K matrix.

Compute the essential matrix using E = K'^T F K and find the singular values of E by SVD.
Minimize the cost function of the two singular values using gradient descent.

Having found the fundamental matrix and the intrinsic parameters, we can calculate the essential matrix. From the essential matrix, using triangulation, we can calculate the relative rotation and translation matrices.

7 Bundle Adjustment

7.1 Introduction

In 3D reconstruction from multiple views, point-to-point correspondences are computed using the SIFT algorithm. Any error or noise in these correspondences propagates into the computation of the fundamental matrix, the self-calibration of the camera, and finally the 3D points computed by triangulation. We therefore have to minimize the total accumulated error at the end, and we use the bundle adjustment algorithm for this as the final step of the 3D structure and viewing parameter estimation. Its name refers to the bundles of light rays originating from each 3D feature and converging on each camera centre, which are adjusted optimally with respect to both structure and viewing parameters. Bundle adjustment amounts to minimizing the reprojection error between the observed and predicted image points, which is expressed as the sum of squares of a number of non-linear real-valued functions. The minimization is thus achieved using a non-linear least-squares algorithm, of which Levenberg-Marquardt (LM) has proven to be the most successful due to its effective damping strategy, which lends it the ability to converge promptly from a wide range of initial guesses. By iteratively linearizing the function to be minimized in the neighbourhood of the current estimate, the LM algorithm involves the solution of linear systems known as the normal equations. Considering that the normal equations are solved repeatedly in the course of the LM algorithm, and that each solution of a dense linear system has complexity O(N^3) in the number of parameters, it is clear that general-purpose implementations of LM are computationally very demanding when employed to minimize functions depending on many parameters. Fortunately, when solving the minimization problems arising in bundle adjustment, the normal equations matrix has a sparse block structure, owing to the lack of interaction among parameters for different 3D points and cameras. Considerable computational benefits can therefore be gained by developing a tailored sparse variant of the LM algorithm which explicitly takes advantage of the zeros pattern of the normal equations.

7.2 Levenberg-Marquardt Algorithm

The Levenberg-Marquardt (LM) algorithm is an iterative technique that locates the minimum of a multivariate function that is expressed as the sum of

squares of non-linear real-valued functions. It has become a standard technique for non-linear least-squares problems, widely adopted in a broad spectrum of disciplines. LM can be thought of as a combination of steepest descent and the Gauss-Newton method. When the current solution is far from the correct one, the algorithm behaves like a steepest descent method: slow, but guaranteed to converge. When the current solution is close to the correct one, it becomes a Gauss-Newton method. The Gauss-Newton update is given by

    J^T J delta = J^T eps        (1)

and the LM update by

    (J^T J + mu I) delta = J^T eps        (2)

The strategy of altering the diagonal elements of J^T J is called damping, and mu >= 0 is referred to as the damping term. If the updated parameter vector p + delta, with delta computed from Eq. 2, leads to a reduction in the error, the update is accepted and the process repeats with a decreased damping term. Otherwise, the damping term is increased, the augmented normal equations are solved again, and the process iterates until a value of delta that decreases the error is found. The process of repeatedly solving Eq. 2 for different values of the damping term until an acceptable update to the parameter vector is found corresponds to one iteration of the LM algorithm. In LM, the damping term is adjusted at each iteration to assure a reduction in the error. If the damping is set to a large value, the matrix (J^T J + mu I) in Eq. 2 is nearly diagonal and the LM update step delta is near the steepest-descent direction; moreover, the magnitude of delta is reduced in this case. Damping also handles situations where the Jacobian is rank-deficient and J^T J is therefore singular. In this way, LM can defensively navigate a region of the parameter space in which the model is highly nonlinear. If the damping is small, the LM step approximates the exact quadratic step appropriate for a fully linear problem.
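The damping loop can be illustrated on a toy one-parameter problem (fitting y = exp(a*x) by least squares). This is a sketch of the LM strategy only, not the sparse bundle adjustment solver; the model and all names are our own:

```python
import math

# Minimal Levenberg-Marquardt loop for the one-parameter model
# y = exp(a * x). With one parameter, J^T J and the damping term are
# scalars, so the augmented normal equation is solved by division.

def lm_fit(xs, ys, a=0.0, mu=1e-3, iters=50):
    def error(a):
        return sum((y - math.exp(a * x)) ** 2 for x, y in zip(xs, ys))
    for _ in range(iters):
        eps = [y - math.exp(a * x) for x, y in zip(xs, ys)]  # residuals
        J = [x * math.exp(a * x) for x in xs]                # d f / d a
        JtJ = sum(j * j for j in J)
        Jte = sum(j * e for j, e in zip(J, eps))
        # solve (J^T J + mu I) delta = J^T eps  (Eq. 2, scalar case)
        delta = Jte / (JtJ + mu)
        if error(a + delta) < error(a):
            a, mu = a + delta, mu / 10   # accept step, decrease damping
        else:
            mu *= 10                     # reject step, increase damping
    return a

xs = [0.0, 0.5, 1.0, 1.5]
ys = [math.exp(0.7 * x) for x in xs]     # synthetic data with a = 0.7
print(round(lm_fit(xs, ys), 3))
```

Rejected steps only raise mu and retry, exactly the "solve again with a larger damping term" behaviour described above; accepted steps lower mu, steering the iteration toward Gauss-Newton near the minimum.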
LM is adaptive because it controls its own damping: it raises the damping if a step fails to reduce the error; otherwise it reduces the damping. In this way LM is capable of alternating between a slow descent approach when far from the minimum and fast convergence in the minimum's neighbourhood. The LM algorithm terminates when at least one of the following conditions is met:

The magnitude of the gradient of eps^T eps, i.e. J^T eps in the right-hand side of Eq. 1, drops below a threshold eps1.
The relative change in the magnitude of delta drops below a threshold eps2.
The error eps^T eps drops below a threshold eps3.
A maximum number of iterations k_max is completed.

7.3 Sparse Bundle Adjustment

This section explains how a sparse variant of the LM algorithm can deal efficiently with the bundle adjustment problem. Assume that n 3D points are seen in

m views, and let x_ij be the projection of the i-th point on image j. Bundle adjustment amounts to refining the set of parameters that most accurately predicts the locations of the observed n points in the set of m available images. More formally, assume that each camera j is parameterized by a vector a_j and each 3D point i by a vector b_i. BA minimizes the reprojection error with respect to all 3D point and camera parameters:

    min_{a,b} sum_{i=1}^{n} sum_{j=1}^{m} d( Q(a_j, b_i), x_ij )^2

where Q(a_j, b_i) is the predicted projection of point i on image j, and d(x, y) denotes the Euclidean distance between the inhomogeneous image points represented by x and y. For a typical sequence of 20 views and 2000 points, a minimization problem in more than 6000 variables has to be solved. A straightforward computation is obviously not feasible. However, the special structure of the problem can be exploited to solve it much more efficiently: the parameters of different 3D points and cameras are independent of each other, which results in sparse matrices. Let

    A_ij = d(x_ij)/d(a_j)    and    B_ij = d(x_ij)/d(b_i)

where x_ij is the measured projection of the i-th 3D point on the j-th image and a_j are the parameters of the j-th camera. For 3 camera views and 4 world points, the Jacobian J has the block structure

    J = [ A11   0    0   B11   0    0    0
           0   A12   0   B12   0    0    0
           0    0   A13  B13   0    0    0
          A21   0    0    0   B21   0    0
           0   A22   0    0   B22   0    0
           0    0   A23   0   B23   0    0
          A31   0    0    0    0   B31   0
           0   A32   0    0    0   B32   0
           0    0   A33   0    0   B33   0
          A41   0    0    0    0    0   B41
           0   A42   0    0    0    0   B42
           0    0   A43   0    0    0   B43 ]

All the blocks A_ij and B_ij can be calculated in parallel. For the calculation of J^T J, let

    U_j = sum_{i=1}^{4} A_ij^T A_ij,    V_i = sum_{j=1}^{3} B_ij^T B_ij,    W_ij = A_ij^T B_ij

Then J^T J has the block structure

    J^T J = [ U1     0      0     W11   W21   W31   W41
               0     U2     0     W12   W22   W32   W42
               0     0      U3    W13   W23   W33   W43
              W11^T  W12^T  W13^T  V1    0     0     0
              W21^T  W22^T  W23^T   0    V2    0     0
              W31^T  W32^T  W33^T   0    0     V3    0
              W41^T  W42^T  W43^T   0    0     0     V4 ]

Note again that this matrix is built for 3 cameras and 4 world points. The side of this square matrix is 7*(number of cameras) + 3*(number of points), so for this particular case its size is (7*3 + 3*4) x (7*3 + 3*4) = 33 x 33. After calculating J^T J in parallel, we can solve for the camera and point increments using the Schur complement and Cholesky factorisation, updating the damping parameter before each iteration.

References

1. M. Brown and D. Lowe. Unsupervised 3D object recognition and reconstruction in unordered datasets. In Proc. Int. Conf. on 3D Digital Imaging and Modelling, 2005.
2. Wei Wang and Hung Tat Tsui. An SVD decomposition of the essential matrix with eight solutions for the relative positions of two perspective cameras.
3. Noah Snavely, Steven M. Seitz, and Richard Szeliski. Modeling the world from internet photo collections.
4. E. Mouragnon, Maxime Lhuillier, Michel Dhome, Fabien Dekeyser, and Patrick Sayd. 3D reconstruction of complex structures with bundle adjustment: an incremental approach.
5. M. Goesele and Noah Snavely. Multi-view stereo for community photo collections.
6. Noah Snavely, Steven M. Seitz, and Richard Szeliski. Photo Tourism: Exploring photo collections in 3D.

Week 2: Two-View Geometry. Padua Summer 08 Frank Dellaert Week 2: Two-View Geometry Padua Summer 08 Frank Dellaert Mosaicking Outline 2D Transformation Hierarchy RANSAC Triangulation of 3D Points Cameras Triangulation via SVD Automatic Correspondence Essential

More information

Hartley - Zisserman reading club. Part I: Hartley and Zisserman Appendix 6: Part II: Zhengyou Zhang: Presented by Daniel Fontijne

Hartley - Zisserman reading club. Part I: Hartley and Zisserman Appendix 6: Part II: Zhengyou Zhang: Presented by Daniel Fontijne Hartley - Zisserman reading club Part I: Hartley and Zisserman Appendix 6: Iterative estimation methods Part II: Zhengyou Zhang: A Flexible New Technique for Camera Calibration Presented by Daniel Fontijne

More information

Noah Snavely Steven M. Seitz. Richard Szeliski. University of Washington. Microsoft Research. Modified from authors slides

Noah Snavely Steven M. Seitz. Richard Szeliski. University of Washington. Microsoft Research. Modified from authors slides Photo Tourism: Exploring Photo Collections in 3D Noah Snavely Steven M. Seitz University of Washington Richard Szeliski Microsoft Research 2006 2006 Noah Snavely Noah Snavely Modified from authors slides

More information

55:148 Digital Image Processing Chapter 11 3D Vision, Geometry

55:148 Digital Image Processing Chapter 11 3D Vision, Geometry 55:148 Digital Image Processing Chapter 11 3D Vision, Geometry Topics: Basics of projective geometry Points and hyperplanes in projective space Homography Estimating homography from point correspondence

More information

Camera calibration. Robotic vision. Ville Kyrki

Camera calibration. Robotic vision. Ville Kyrki Camera calibration Robotic vision 19.1.2017 Where are we? Images, imaging Image enhancement Feature extraction and matching Image-based tracking Camera models and calibration Pose estimation Motion analysis

More information

Photo Tourism: Exploring Photo Collections in 3D

Photo Tourism: Exploring Photo Collections in 3D Photo Tourism: Exploring Photo Collections in 3D Noah Snavely Steven M. Seitz University of Washington Richard Szeliski Microsoft Research 15,464 37,383 76,389 2006 Noah Snavely 15,464 37,383 76,389 Reproduced

More information

Augmenting Reality, Naturally:

Augmenting Reality, Naturally: Augmenting Reality, Naturally: Scene Modelling, Recognition and Tracking with Invariant Image Features by Iryna Gordon in collaboration with David G. Lowe Laboratory for Computational Intelligence Department

More information

Reminder: Lecture 20: The Eight-Point Algorithm. Essential/Fundamental Matrix. E/F Matrix Summary. Computing F. Computing F from Point Matches

Reminder: Lecture 20: The Eight-Point Algorithm. Essential/Fundamental Matrix. E/F Matrix Summary. Computing F. Computing F from Point Matches Reminder: Lecture 20: The Eight-Point Algorithm F = -0.00310695-0.0025646 2.96584-0.028094-0.00771621 56.3813 13.1905-29.2007-9999.79 Readings T&V 7.3 and 7.4 Essential/Fundamental Matrix E/F Matrix Summary

More information

Epipolar geometry. x x

Epipolar geometry. x x Two-view geometry Epipolar geometry X x x Baseline line connecting the two camera centers Epipolar Plane plane containing baseline (1D family) Epipoles = intersections of baseline with image planes = projections

More information

Srikumar Ramalingam. Review. 3D Reconstruction. Pose Estimation Revisited. School of Computing University of Utah

Srikumar Ramalingam. Review. 3D Reconstruction. Pose Estimation Revisited. School of Computing University of Utah School of Computing University of Utah Presentation Outline 1 2 3 Forward Projection (Reminder) u v 1 KR ( I t ) X m Y m Z m 1 Backward Projection (Reminder) Q K 1 q Presentation Outline 1 2 3 Sample Problem

More information

Stereo II CSE 576. Ali Farhadi. Several slides from Larry Zitnick and Steve Seitz

Stereo II CSE 576. Ali Farhadi. Several slides from Larry Zitnick and Steve Seitz Stereo II CSE 576 Ali Farhadi Several slides from Larry Zitnick and Steve Seitz Camera parameters A camera is described by several parameters Translation T of the optical center from the origin of world

More information

Dense 3D Reconstruction. Christiano Gava

Dense 3D Reconstruction. Christiano Gava Dense 3D Reconstruction Christiano Gava christiano.gava@dfki.de Outline Previous lecture: structure and motion II Structure and motion loop Triangulation Wide baseline matching (SIFT) Today: dense 3D reconstruction

More information

CSCI 5980/8980: Assignment #4. Fundamental Matrix

CSCI 5980/8980: Assignment #4. Fundamental Matrix Submission CSCI 598/898: Assignment #4 Assignment due: March 23 Individual assignment. Write-up submission format: a single PDF up to 5 pages (more than 5 page assignment will be automatically returned.).

More information

Camera Drones Lecture 3 3D data generation

Camera Drones Lecture 3 3D data generation Camera Drones Lecture 3 3D data generation Ass.Prof. Friedrich Fraundorfer WS 2017 Outline SfM introduction SfM concept Feature matching Camera pose estimation Bundle adjustment Dense matching Data products

More information

Computer Vision I - Robust Geometry Estimation from two Cameras

Computer Vision I - Robust Geometry Estimation from two Cameras Computer Vision I - Robust Geometry Estimation from two Cameras Carsten Rother 16/01/2015 Computer Vision I: Image Formation Process FYI Computer Vision I: Image Formation Process 16/01/2015 2 Microsoft

More information

Unit 3 Multiple View Geometry

Unit 3 Multiple View Geometry Unit 3 Multiple View Geometry Relations between images of a scene Recovering the cameras Recovering the scene structure http://www.robots.ox.ac.uk/~vgg/hzbook/hzbook1.html 3D structure from images Recover

More information

Photo Tourism: Exploring Photo Collections in 3D

Photo Tourism: Exploring Photo Collections in 3D Photo Tourism: Exploring Photo Collections in 3D SIGGRAPH 2006 Noah Snavely Steven M. Seitz University of Washington Richard Szeliski Microsoft Research 2006 2006 Noah Snavely Noah Snavely Reproduced with

More information

Epipolar Geometry and Stereo Vision

Epipolar Geometry and Stereo Vision Epipolar Geometry and Stereo Vision Computer Vision Jia-Bin Huang, Virginia Tech Many slides from S. Seitz and D. Hoiem Last class: Image Stitching Two images with rotation/zoom but no translation. X x

More information

Srikumar Ramalingam. Review. 3D Reconstruction. Pose Estimation Revisited. School of Computing University of Utah

Srikumar Ramalingam. Review. 3D Reconstruction. Pose Estimation Revisited. School of Computing University of Utah School of Computing University of Utah Presentation Outline 1 2 3 Forward Projection (Reminder) u v 1 KR ( I t ) X m Y m Z m 1 Backward Projection (Reminder) Q K 1 q Q K 1 u v 1 What is pose estimation?

More information

C280, Computer Vision

C280, Computer Vision C280, Computer Vision Prof. Trevor Darrell trevor@eecs.berkeley.edu Lecture 11: Structure from Motion Roadmap Previous: Image formation, filtering, local features, (Texture) Tues: Feature-based Alignment

More information

Camera Geometry II. COS 429 Princeton University

Camera Geometry II. COS 429 Princeton University Camera Geometry II COS 429 Princeton University Outline Projective geometry Vanishing points Application: camera calibration Application: single-view metrology Epipolar geometry Application: stereo correspondence

More information

Project 2: Structure from Motion

Project 2: Structure from Motion Project 2: Structure from Motion CIS 580, Machine Perception, Spring 2015 Preliminary report due: 2015.04.27. 11:59AM Final Due: 2015.05.06. 11:59AM This project aims to reconstruct a 3D point cloud and

More information

EECS 442: Final Project

EECS 442: Final Project EECS 442: Final Project Structure From Motion Kevin Choi Robotics Ismail El Houcheimi Robotics Yih-Jye Jeffrey Hsu Robotics Abstract In this paper, we summarize the method, and results of our projective

More information

A Systems View of Large- Scale 3D Reconstruction

A Systems View of Large- Scale 3D Reconstruction Lecture 23: A Systems View of Large- Scale 3D Reconstruction Visual Computing Systems Goals and motivation Construct a detailed 3D model of the world from unstructured photographs (e.g., Flickr, Facebook)

More information

Parameter estimation. Christiano Gava Gabriele Bleser

Parameter estimation. Christiano Gava Gabriele Bleser Parameter estimation Christiano Gava Christiano.Gava@dfki.de Gabriele Bleser gabriele.bleser@dfki.de Introduction Previous lectures: P-matrix 2D projective transformations Estimation (direct linear transform)

More information

Project: Camera Rectification and Structure from Motion

Project: Camera Rectification and Structure from Motion Project: Camera Rectification and Structure from Motion CIS 580, Machine Perception, Spring 2018 April 26, 2018 In this project, you will learn how to estimate the relative poses of two cameras and compute

More information

Epipolar Geometry and Stereo Vision

Epipolar Geometry and Stereo Vision Epipolar Geometry and Stereo Vision Computer Vision Shiv Ram Dubey, IIIT Sri City Many slides from S. Seitz and D. Hoiem Last class: Image Stitching Two images with rotation/zoom but no translation. X

More information

Last lecture. Passive Stereo Spacetime Stereo

Last lecture. Passive Stereo Spacetime Stereo Last lecture Passive Stereo Spacetime Stereo Today Structure from Motion: Given pixel correspondences, how to compute 3D structure and camera motion? Slides stolen from Prof Yungyu Chuang Epipolar geometry

More information

BIL Computer Vision Apr 16, 2014

BIL Computer Vision Apr 16, 2014 BIL 719 - Computer Vision Apr 16, 2014 Binocular Stereo (cont d.), Structure from Motion Aykut Erdem Dept. of Computer Engineering Hacettepe University Slide credit: S. Lazebnik Basic stereo matching algorithm

More information

Recovering structure from a single view Pinhole perspective projection

Recovering structure from a single view Pinhole perspective projection EPIPOLAR GEOMETRY The slides are from several sources through James Hays (Brown); Silvio Savarese (U. of Michigan); Svetlana Lazebnik (U. Illinois); Bill Freeman and Antonio Torralba (MIT), including their

More information

A New Representation for Video Inspection. Fabio Viola

A New Representation for Video Inspection. Fabio Viola A New Representation for Video Inspection Fabio Viola Outline Brief introduction to the topic and definition of long term goal. Description of the proposed research project. Identification of a short term

More information

Project: Camera Rectification and Structure from Motion

Project: Camera Rectification and Structure from Motion Project: Camera Rectification and Structure from Motion CIS 580, Machine Perception, Spring 2018 April 18, 2018 In this project, you will learn how to estimate the relative poses of two cameras and compute

More information

Multi-stable Perception. Necker Cube

Multi-stable Perception. Necker Cube Multi-stable Perception Necker Cube Spinning dancer illusion, Nobuyuki Kayahara Multiple view geometry Stereo vision Epipolar geometry Lowe Hartley and Zisserman Depth map extraction Essential matrix

More information

CS 231A: Computer Vision (Winter 2018) Problem Set 2

CS 231A: Computer Vision (Winter 2018) Problem Set 2 CS 231A: Computer Vision (Winter 2018) Problem Set 2 Due Date: Feb 09 2018, 11:59pm Note: In this PS, using python2 is recommended, as the data files are dumped with python2. Using python3 might cause

More information

Lecture 5 Epipolar Geometry

Lecture 5 Epipolar Geometry Lecture 5 Epipolar Geometry Professor Silvio Savarese Computational Vision and Geometry Lab Silvio Savarese Lecture 5-24-Jan-18 Lecture 5 Epipolar Geometry Why is stereo useful? Epipolar constraints Essential

More information

A Factorization Method for Structure from Planar Motion

A Factorization Method for Structure from Planar Motion A Factorization Method for Structure from Planar Motion Jian Li and Rama Chellappa Center for Automation Research (CfAR) and Department of Electrical and Computer Engineering University of Maryland, College

More information

Accurate and Dense Wide-Baseline Stereo Matching Using SW-POC

Accurate and Dense Wide-Baseline Stereo Matching Using SW-POC Accurate and Dense Wide-Baseline Stereo Matching Using SW-POC Shuji Sakai, Koichi Ito, Takafumi Aoki Graduate School of Information Sciences, Tohoku University, Sendai, 980 8579, Japan Email: sakai@aoki.ecei.tohoku.ac.jp

More information

MAPI Computer Vision. Multiple View Geometry

MAPI Computer Vision. Multiple View Geometry MAPI Computer Vision Multiple View Geometry Geometry o Multiple Views 2- and 3- view geometry p p Kpˆ [ K R t]p Geometry o Multiple Views 2- and 3- view geometry Epipolar Geometry The epipolar geometry

More information

Photo Tourism: Exploring Photo Collections in 3D

Photo Tourism: Exploring Photo Collections in 3D Click! Click! Oooo!! Click! Zoom click! Click! Some other camera noise!! Photo Tourism: Exploring Photo Collections in 3D Click! Click! Ahhh! Click! Click! Overview of Research at Microsoft, 2007 Jeremy

More information

Image Features: Local Descriptors. Sanja Fidler CSC420: Intro to Image Understanding 1/ 58

Image Features: Local Descriptors. Sanja Fidler CSC420: Intro to Image Understanding 1/ 58 Image Features: Local Descriptors Sanja Fidler CSC420: Intro to Image Understanding 1/ 58 [Source: K. Grauman] Sanja Fidler CSC420: Intro to Image Understanding 2/ 58 Local Features Detection: Identify

More information

3D Computer Vision. Structure from Motion. Prof. Didier Stricker

3D Computer Vision. Structure from Motion. Prof. Didier Stricker 3D Computer Vision Structure from Motion Prof. Didier Stricker Kaiserlautern University http://ags.cs.uni-kl.de/ DFKI Deutsches Forschungszentrum für Künstliche Intelligenz http://av.dfki.de 1 Structure

More information

CS 664 Structure and Motion. Daniel Huttenlocher

CS 664 Structure and Motion. Daniel Huttenlocher CS 664 Structure and Motion Daniel Huttenlocher Determining 3D Structure Consider set of 3D points X j seen by set of cameras with projection matrices P i Given only image coordinates x ij of each point

More information

Lecture 10: Multi view geometry

Lecture 10: Multi view geometry Lecture 10: Multi view geometry Professor Fei Fei Li Stanford Vision Lab 1 What we will learn today? Stereo vision Correspondence problem (Problem Set 2 (Q3)) Active stereo vision systems Structure from

More information

Final project bits and pieces

Final project bits and pieces Final project bits and pieces The project is expected to take four weeks of time for up to four people. At 12 hours per week per person that comes out to: ~192 hours of work for a four person team. Capstone:

More information

CSE 252B: Computer Vision II

CSE 252B: Computer Vision II CSE 252B: Computer Vision II Lecturer: Serge Belongie Scribes: Jeremy Pollock and Neil Alldrin LECTURE 14 Robust Feature Matching 14.1. Introduction Last lecture we learned how to find interest points

More information

Multi-view geometry problems

Multi-view geometry problems Multi-view geometry Multi-view geometry problems Structure: Given projections o the same 3D point in two or more images, compute the 3D coordinates o that point? Camera 1 Camera 2 R 1,t 1 R 2,t 2 Camera

More information

CS 4495 Computer Vision A. Bobick. Motion and Optic Flow. Stereo Matching

CS 4495 Computer Vision A. Bobick. Motion and Optic Flow. Stereo Matching Stereo Matching Fundamental matrix Let p be a point in left image, p in right image l l Epipolar relation p maps to epipolar line l p maps to epipolar line l p p Epipolar mapping described by a 3x3 matrix

More information

Using Subspace Constraints to Improve Feature Tracking Presented by Bryan Poling. Based on work by Bryan Poling, Gilad Lerman, and Arthur Szlam

Using Subspace Constraints to Improve Feature Tracking Presented by Bryan Poling. Based on work by Bryan Poling, Gilad Lerman, and Arthur Szlam Presented by Based on work by, Gilad Lerman, and Arthur Szlam What is Tracking? Broad Definition Tracking, or Object tracking, is a general term for following some thing through multiple frames of a video

More information

3D Visualization through Planar Pattern Based Augmented Reality

3D Visualization through Planar Pattern Based Augmented Reality NATIONAL TECHNICAL UNIVERSITY OF ATHENS SCHOOL OF RURAL AND SURVEYING ENGINEERS DEPARTMENT OF TOPOGRAPHY LABORATORY OF PHOTOGRAMMETRY 3D Visualization through Planar Pattern Based Augmented Reality Dr.

More information

Identifying Car Model from Photographs

Identifying Car Model from Photographs Identifying Car Model from Photographs Fine grained Classification using 3D Reconstruction and 3D Shape Registration Xinheng Li davidxli@stanford.edu Abstract Fine grained classification from photographs

More information

C / 35. C18 Computer Vision. David Murray. dwm/courses/4cv.

C / 35. C18 Computer Vision. David Murray.   dwm/courses/4cv. C18 2015 1 / 35 C18 Computer Vision David Murray david.murray@eng.ox.ac.uk www.robots.ox.ac.uk/ dwm/courses/4cv Michaelmas 2015 C18 2015 2 / 35 Computer Vision: This time... 1. Introduction; imaging geometry;

More information

CS231A Midterm Review. Friday 5/6/2016

CS231A Midterm Review. Friday 5/6/2016 CS231A Midterm Review Friday 5/6/2016 Outline General Logistics Camera Models Non-perspective cameras Calibration Single View Metrology Epipolar Geometry Structure from Motion Active Stereo and Volumetric

More information

Miniature faking. In close-up photo, the depth of field is limited.

Miniature faking. In close-up photo, the depth of field is limited. Miniature faking In close-up photo, the depth of field is limited. http://en.wikipedia.org/wiki/file:jodhpur_tilt_shift.jpg Miniature faking Miniature faking http://en.wikipedia.org/wiki/file:oregon_state_beavers_tilt-shift_miniature_greg_keene.jpg

More information

CS231M Mobile Computer Vision Structure from motion

CS231M Mobile Computer Vision Structure from motion CS231M Mobile Computer Vision Structure from motion - Cameras - Epipolar geometry - Structure from motion Pinhole camera Pinhole perspective projection f o f = focal length o = center of the camera z y

More information

Application questions. Theoretical questions

Application questions. Theoretical questions The oral exam will last 30 minutes and will consist of one application question followed by two theoretical questions. Please find below a non exhaustive list of possible application questions. The list

More information

Chaplin, Modern Times, 1936

Chaplin, Modern Times, 1936 Chaplin, Modern Times, 1936 [A Bucket of Water and a Glass Matte: Special Effects in Modern Times; bonus feature on The Criterion Collection set] Multi-view geometry problems Structure: Given projections

More information

Visualization 2D-to-3D Photo Rendering for 3D Displays

Visualization 2D-to-3D Photo Rendering for 3D Displays Visualization 2D-to-3D Photo Rendering for 3D Displays Sumit K Chauhan 1, Divyesh R Bajpai 2, Vatsal H Shah 3 1 Information Technology, Birla Vishvakarma mahavidhyalaya,sumitskc51@gmail.com 2 Information

More information

Computer Vision I - Algorithms and Applications: Multi-View 3D reconstruction

Computer Vision I - Algorithms and Applications: Multi-View 3D reconstruction Computer Vision I - Algorithms and Applications: Multi-View 3D reconstruction Carsten Rother 09/12/2013 Computer Vision I: Multi-View 3D reconstruction Roadmap this lecture Computer Vision I: Multi-View

More information

EE795: Computer Vision and Intelligent Systems

EE795: Computer Vision and Intelligent Systems EE795: Computer Vision and Intelligent Systems Spring 2012 TTh 17:30-18:45 FDH 204 Lecture 10 130221 http://www.ee.unlv.edu/~b1morris/ecg795/ 2 Outline Review Canny Edge Detector Hough Transform Feature-Based

More information

Lecture 10: Multi-view geometry

Lecture 10: Multi-view geometry Lecture 10: Multi-view geometry Professor Stanford Vision Lab 1 What we will learn today? Review for stereo vision Correspondence problem (Problem Set 2 (Q3)) Active stereo vision systems Structure from

More information

Feature Based Registration - Image Alignment

Feature Based Registration - Image Alignment Feature Based Registration - Image Alignment Image Registration Image registration is the process of estimating an optimal transformation between two or more images. Many slides from Alexei Efros http://graphics.cs.cmu.edu/courses/15-463/2007_fall/463.html

More information

Two-View Geometry (Course 23, Lecture D)

Two-View Geometry (Course 23, Lecture D) Two-View Geometry (Course 23, Lecture D) Jana Kosecka Department of Computer Science George Mason University http://www.cs.gmu.edu/~kosecka General Formulation Given two views of the scene recover the

More information

Fast Outlier Rejection by Using Parallax-Based Rigidity Constraint for Epipolar Geometry Estimation

Fast Outlier Rejection by Using Parallax-Based Rigidity Constraint for Epipolar Geometry Estimation Fast Outlier Rejection by Using Parallax-Based Rigidity Constraint for Epipolar Geometry Estimation Engin Tola 1 and A. Aydın Alatan 2 1 Computer Vision Laboratory, Ecóle Polytechnique Fédéral de Lausanne

More information