Robust Geometry Estimation from Two Images
1 Robust Geometry Estimation from Two Images (Carsten Rother, 09/12/2016, Computer Vision I: Image Formation Process)
2 Roadmap for next four lectures
- Appearance-based Matching (sec. 4.1)
- Projective Geometry - Basics (sec )
- Geometry of a Single Camera (sec 2.1.5, 2.1.6): Camera versus Human Perception; The Pinhole Camera; Lens effects
- Geometry of two Views (sec. 7.2): The Homography (e.g. rotating camera); Camera Calibration (3D to 2D Mapping); The Fundamental and Essential Matrix (two arbitrary images); Robust Geometry estimation for two cameras (sec )
- Multi-View 3D reconstruction (sec ): General scenario; From Projective to Metric Space; Special Cases
3 Topic 3: Fundamental/Essential Matrix F/E
- Derive F/E geometrically.
- Calibration: take measurements (points) to compute F/E. How do we do that with a minimal number of points? How do we do that with many points?
- Can we derive the intrinsic (K) and extrinsic (R, C) parameters from F/E?
- What can we do with F/E?
4 Reminder: Matching two Images
- Find interest points.
- Find oriented patches around the interest points to capture appearance.
- Encode the patch appearance in a descriptor.
- Find matching patches according to appearance (similar descriptors).
- Verify matching patches according to geometry (later lecture).
We will discover in the next slides: seven 3D point matches define how all other points match between the two views!
5 Epipolar Geometry
Epipolar line: constrains the location where a particular point (here p_1) from one view can be found in the other view. Epipolar lines intersect at the epipoles.
6 Epipolar Geometry
Epipolar lines: intersect at the epipoles; in general they are not parallel.
7 Question
When is the epipole a point inside the view of the camera, i.e. inside the image? Answers: 1) Sidewise moving camera 2) Forward moving camera 3) Rotating camera 4) In cases 1) and 2)
(Figure: camera centre and camera motion.)
8 Example: Converging Cameras
9 Example: Motion Parallel to Camera
We will use this idea when it comes to stereo matching.
10 Example: Forward Motion
The epipoles have the same coordinates in both images. Points move along lines radiating from the epipole, the focus of expansion.
11 The maths behind it: Fundamental/Essential Matrix
(Derivation on the blackboard.)
(Figure: 3D point X seen by camera 0 and camera 1, related by (R, T).)
12 The maths behind it: Fundamental/Essential Matrix
The 3 vectors are in the same plane (co-planar):
1) T (= C_1 - C_0)
2) X - C_0
3) X - C_1
Set the camera matrices: x_0 = K_0 [I | 0] X' and x_1 = K_1 R^-1 [I | -C_1] X' (note X' = (X, 1)^T).
Hence C_0 = 0, and up to scale: K_0^-1 x_0 = X and R K_1^-1 x_1 + C_1 = X.
The three vectors can be re-written using x_0, x_1:
1) T
2) X - C_0 = X = K_0^-1 x_0
3) X - C_1 = R K_1^-1 x_1 + C_1 - C_1 = R K_1^-1 x_1
Co-planarity means: (K_0^-1 x_0)^T (T x (R K_1^-1 x_1)) = 0, which gives: x_0^T K_0^-T [T]_x R K_1^-1 x_1 = 0.
13 The maths behind it: Fundamental/Essential Matrix
In an un-calibrated setting (the K's are not known): x_0^T K_0^-T [T]_x R K_1^-1 x_1 = 0.
In short: x_0^T F x_1 = 0, where F is called the Fundamental Matrix (discovered by Faugeras and Luong 1992, Hartley 1992).
In a calibrated setting (the K's are known) we use rays x~_i = K_i^-1 x_i, and then we get: x~_0^T [T]_x R x~_1 = 0.
In short: x~_0^T E x~_1 = 0, where E is called the Essential Matrix (discovered by Longuet-Higgins 1981).
14 Fundamental Matrix: Properties
We have x_0^T F x_1 = 0, where F = K_0^-T [T]_x R K_1^-1 is the Fundamental Matrix.
It holds that det F = 0, and F has rank 2. Hence F has 7 DoF.
Proof: F has rank 2 since [T]_x has rank 2, where the cross-product matrix is

  [x]_x = (   0   -x_3   x_2 )
          (  x_3    0   -x_1 )
          ( -x_2   x_1    0  )

Check: det([x]_x) = x_3 (-x_1 x_2) + x_2 (x_1 x_3) = 0.
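The factorization on this slide can be checked numerically. The following sketch (all camera values are made up for illustration) builds F = K_0^-T [T]_x R K_1^-1 from a synthetic two-camera setup and verifies that the projections of a 3D point satisfy the epipolar constraint:

```python
import numpy as np

def skew(t):
    """Cross-product matrix [t]_x, i.e. skew(t) @ v == np.cross(t, v)."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

# Synthetic two-camera setup (all values purely illustrative).
K0 = K1 = np.array([[800.0, 0, 320], [0, 800, 240], [0, 0, 1]])
R, _ = np.linalg.qr(np.random.default_rng(0).normal(size=(3, 3)))
if np.linalg.det(R) < 0:
    R = -R                                  # make it a proper rotation
C1 = np.array([1.0, 0.2, 0.1])              # second camera centre, C0 = 0
T = C1                                      # T = C1 - C0

# Camera matrices as on the slide: P0 = K0 [I | 0], P1 = K1 R^-1 [I | -C1].
P0 = K0 @ np.hstack([np.eye(3), np.zeros((3, 1))])
P1 = K1 @ np.linalg.inv(R) @ np.hstack([np.eye(3), -C1[:, None]])

# Fundamental matrix F = K0^-T [T]_x R K1^-1 (rank 2 by construction).
F = np.linalg.inv(K0).T @ skew(T) @ R @ np.linalg.inv(K1)

X = np.array([0.3, -0.5, 4.0, 1.0])         # a 3D point (homogeneous)
x0, x1 = P0 @ X, P1 @ X                     # its projections in both views
print(abs(x0 @ F @ x1))                     # ~0: epipolar constraint holds
```

The skew-symmetric helper also makes det([T]_x) = 0 easy to confirm, which is where the rank-2 property of F comes from.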
15 Fundamental Matrix: Properties
For any two matching points (i.e. they image the same 3D point) we have: x_0^T F x_1 = 0.
Compute the epipolar line in camera 1 of a point x_0: l_1^T = x_0^T F (since l_1^T x_1 = x_0^T F x_1 = 0).
Compute the epipolar line in camera 0 of a point x_1: l_0 = F x_1 (since x_0^T l_0 = x_0^T F x_1 = 0).
16 Fundamental Matrix: Properties
For any two matching points (i.e. they image the same 3D point) we have: x_0^T F x_1 = 0.
Compute e_0 with e_0^T F = 0^T (i.e. e_0 is the left null space of F; it can be computed with an SVD). This is the epipole e_0: it satisfies e_0^T F x_1 = 0^T x_1 = 0 for all points x_1. Hence all lines l_0 = F x_1, for any x_1, go through e_0.
The epipole e_1 is the right null space of F (F e_1 = 0).
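As a quick numerical illustration of this slide (the matrix below is random; only its rank-2 property matters), the epipoles can be read off an SVD, and every epipolar line indeed passes through them:

```python
import numpy as np

rng = np.random.default_rng(1)
# Any rank-2 matrix serves as an illustrative F (values are arbitrary).
A = rng.normal(size=(3, 3))
U, s, Vt = np.linalg.svd(A)
F = U @ np.diag([s[0], s[1], 0.0]) @ Vt   # force det F = 0, rank 2

# Epipoles are the null spaces of F, read off the same SVD:
e0 = U[:, 2]    # left null space:  e0^T F = 0
e1 = Vt[2]      # right null space: F e1 = 0

# Every epipolar line l0 = F x1 passes through e0:
for _ in range(5):
    x1 = rng.normal(size=3)
    l0 = F @ x1
    print(abs(e0 @ l0))   # ~0 for every x1
```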
17 How can we compute F (2-view calibration)?
Each pair of matching points gives one linear constraint x^T F x' = 0 on F. For a match (x, x') we get one equation:
(x_1 x'_1, x_1 x'_2, x_1 x'_3, x_2 x'_1, x_2 x'_2, x_2 x'_3, x_3 x'_1, x_3 x'_2, x_3 x'_3) . (f_11, f_12, f_13, f_21, f_22, f_23, f_31, f_32, f_33)^T = 0
(here x = (x_1, x_2, x_3)^T). Given m >= 8 matching points (x, x') we can compute F in a simple way.
18 How can we compute F (2-view calibration)?
Method (normalized 8-point algorithm):
1) Take m >= 8 points.
2) Compute conditioning transformations T, T' and condition the points: x~ = T x; x~' = T' x'.
3) Assemble A with A f = 0; here A is of size m x 9 and f is the vectorized F.
4) Compute f = argmin_f ||A f||^2 subject to ||f||^2 = 1. Use the SVD to do this.
5) Make rank F~ = 2 (next slide).
6) Get the F of the unconditioned points: F = T^T F~ T' (note: (T x)^T F~ (T' x') = 0).
[See HZ page 282]
19 How to make F rank 2?
(Again) use the SVD: set the last singular value sigma_p to 0; then the matrix has rank p-1 and not p (assuming it originally has full rank p).
Proof: the diagonal matrix D then has rank p-1, hence A = U D V^T has rank p-1.
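Steps 1-6 of the normalized 8-point algorithm, including the rank-2 projection from this slide, can be sketched in a few lines of NumPy. The conditioning used here (zero mean, average distance sqrt(2)) is the standard Hartley normalization, and the camera values in the check are made up:

```python
import numpy as np

def conditioning(pts):
    """Similarity T mapping points to zero mean, average distance sqrt(2)."""
    mean = pts[:, :2].mean(axis=0)
    scale = np.sqrt(2) / np.mean(np.linalg.norm(pts[:, :2] - mean, axis=1))
    return np.array([[scale, 0, -scale * mean[0]],
                     [0, scale, -scale * mean[1]],
                     [0, 0, 1.0]])

def eight_point(x0, x1):
    """Normalized 8-point algorithm; x0, x1 are (m, 3) homogeneous, m >= 8."""
    x0, x1 = x0 / x0[:, 2:3], x1 / x1[:, 2:3]       # fix homogeneous scale
    T0, T1 = conditioning(x0), conditioning(x1)
    y0, y1 = x0 @ T0.T, x1 @ T1.T                    # step 2: condition
    A = np.stack([np.kron(a, b) for a, b in zip(y0, y1)])   # step 3
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)                         # step 4: argmin ||Af||
    U, s, Vt = np.linalg.svd(F)
    F = U @ np.diag([s[0], s[1], 0.0]) @ Vt          # step 5: enforce rank 2
    return T0.T @ F @ T1                             # step 6: uncondition

# Synthetic check: noiseless projections from two hypothetical cameras.
rng = np.random.default_rng(0)
X = np.hstack([rng.uniform(-1, 1, (20, 2)), rng.uniform(3, 6, (20, 1)),
               np.ones((20, 1))])
K = np.array([[700.0, 0, 300], [0, 700, 200], [0, 0, 1]])
P0 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P1 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.2]])])
x0, x1 = X @ P0.T, X @ P1.T
F = eight_point(x0, x1)
res = np.einsum('ij,jk,ik->i', x0, F, x1)
print(np.abs(res).max())     # tiny residual on noiseless data
```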
20 Can we compute F with just 7 points?
Method (7-point algorithm):
1) Take m = 7 points.
2) Assemble A with A f = 0; here A is of size 7 x 9 and f is the vectorized F.
3) Compute the 2D right null space, F_1 and F_2, from the last two rows of V^T (using the SVD decomposition A = U D V^T).
4) Choose: F = alpha F_1 + (1 - alpha) F_2 (see comments next slide).
5) Determine alpha (either 1 or 3 solutions) using the constraint det(alpha F_1 + (1 - alpha) F_2) = 0 (see comments next slide).
Note: an 8th point would determine which of these 3 solutions is the correct one. We will see later that the 7-point algorithm is the best choice for the robust case.
21 Comments on the previous slide
Step 4) Choose: F = alpha F_1 + (1 - alpha) F_2.
The full null space is given by F = alpha F_1 + beta F_2. Since F is only defined up to scale, we are free to fix some norm of F, say ||alpha F_1 + beta F_2|| ~ 1 (here F_1, F_2 are in vectorized form; this is the same as taking F_1, F_2 in matrix form and using the Frobenius norm for matrices).
By the triangle inequality: ||alpha F_1 + beta F_2|| <= |alpha| ||F_1|| + |beta| ||F_2||.
Since F_1, F_2 are rows of V^T and have norm 1, we may fix |alpha| + |beta| ~ 1.
Hence we can choose: beta = 1 - alpha.
22 Comments on the previous slide
Step 5) Solve det(alpha F_1 + (1 - alpha) F_2) = 0. (This is a cubic polynomial equation in alpha, which has one or three real-valued solutions.)
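A compact sketch of steps 1-5 follows (the synthetic data and camera values are hypothetical; the cubic in alpha is recovered by interpolating its values rather than by symbolic expansion):

```python
import numpy as np

def seven_point(x0, x1):
    """7-point algorithm: returns the 1 or 3 real candidate F matrices."""
    x0, x1 = x0 / x0[:, 2:3], x1 / x1[:, 2:3]
    A = np.stack([np.kron(a, b) for a, b in zip(x0, x1)])   # 7 x 9
    _, _, Vt = np.linalg.svd(A)
    F1, F2 = Vt[-1].reshape(3, 3), Vt[-2].reshape(3, 3)     # 2D null space
    # det(a*F1 + (1-a)*F2) is a cubic in a; sampling it at 4 values and
    # fitting a degree-3 polynomial recovers its coefficients exactly.
    ts = np.array([0.0, 1.0, 2.0, 3.0])
    dets = [np.linalg.det(t * F1 + (1 - t) * F2) for t in ts]
    coeffs = np.polyfit(ts, dets, 3)
    return [a.real * F1 + (1 - a.real) * F2
            for a in np.roots(coeffs) if abs(a.imag) < 1e-8]

# Hypothetical check: 7 noiseless correspondences from two cameras.
rng = np.random.default_rng(2)
X = np.hstack([rng.uniform(-1, 1, (7, 2)), rng.uniform(3, 6, (7, 1)),
               np.ones((7, 1))])
K = np.array([[700.0, 0, 300], [0, 700, 200], [0, 0, 1]])
P0 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P1 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.2], [0.0]])])
x0, x1 = X @ P0.T, X @ P1.T
cands = seven_point(x0, x1)
print(len(cands))    # 1 or 3 candidate solutions
```

Every candidate lies in the null space of A, so all seven constraints are satisfied; the det = 0 condition is what makes each candidate a valid rank-2 fundamental matrix.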
23 Can we get the K's, R, T from F?
Assume we have F = K_0^-T [T]_x R K_1^-1. Can we get out K_0, K_1, R, T?
F has 7 DoF; K_0, K_1, R, T together have 16 DoF.
So it is not directly possible, only with assumptions such as:
- external constraints
- the camera does not change over several frames
(This is a challenging topic (more than 10 years of research!) called auto-calibration or self-calibration. We look at it in detail in the next lecture.)
24 Coming back to the Essential Matrix
In a calibrated setting (the K's are known) we use rays x~_i = K_i^-1 x_i and get: x~_0^T [T]_x R x~_1 = 0. In short: x~_0^T E x~_1 = 0, where E is called the Essential Matrix.
E has 5 DoF, since T has 3 DoF and R has 3 DoF, but the overall scale of T is unknown. E also has rank 2.
25 Question
What is the minimal number of matching points needed to compute E? Answers: 1) 3 2) 4 3) 5 4) 6 5) 7 6) 8
26 How to compute E
We have: x~_0^T E x~_1 = 0.
- Given m >= 8 matches: run the 8-point algorithm (as for F).
- Given m = 7: run the 7-point algorithm and get 1 or 3 solutions.
- Given m = 5: run the 5-point algorithm to get up to 10 solutions. This is the minimal case, since E has 5 DoF.
5-point algorithm history:
- Kruppa, Zur Ermittlung eines Objektes aus zwei Perspektiven mit innere Orientierung, Sitz.-Ber. Akad. Wiss., Wien, Math.-Naturw. Kl., Abt. IIa, (122): found 11 solutions.
- M. Demazure, Sur deux problemes de reconstruction, Technical Report 882, INRIA, Les Chesnay, France, 1988: showed that only up to 10 valid solutions exist.
- D. Nister, An Efficient Solution to the Five-Point Relative Pose Problem, IEEE Conference on Computer Vision and Pattern Recognition, Volume 2, 2003: a fast method which gives up to 10 solutions via a degree-10 polynomial.
27 Can we get R, T from E?
Assume we have E = [T]_x R. Can we get out R, T?
E has 5 DoF; R, T together have 6 DoF.
Yes: we can get T up to scale, and a unique R.
28 How to get a unique T, R?
1) Compute T: E has rank 2 and T is in the left null space of E, since T^T E = (0,0,0). An SVD of E therefore has the form E = U D V^T with T as the last column of U. This fixes the norm of T to 1; the correct sign (+/- T) is determined in step 3.
2) Compute the 4 possible solutions for R:
R_{1,2} = +/- U R_90^T V^T and R_{3,4} = +/- U R_90 V^T, where E = U D V^T and R_90 is the rotation by 90 degrees about the z-axis:

  R_90 = (  0  -1   0 )
         (  1   0   0 )
         (  0   0   1 )

(see the derivation in HZ page 259; Szeliski page 310)
3) Derive the unique solution for R and the sign of T:
1) require det(R) = 1
2) reconstruct a 3D point and choose the solution where it lies in front of the two cameras. (In the robust case: take the solution where most points lie in front of the cameras.)
29 Visualization of the 4 solutions for R, T
(Figure: the four camera configurations; only one has the reconstructed point in front of both cameras, and this is the correct solution.)
The property that points must lie in front of the camera is called chirality (Hartley 1998).
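The decomposition from the previous slide can be sketched as follows (a hedged HZ-style version; the chirality test of step 3 is only indicated as a selection over the four candidates, and the ground-truth pose used for the round-trip check is made up):

```python
import numpy as np

def skew(t):
    """Cross-product matrix [t]_x."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def decompose_E(E):
    """Return the 4 candidate (R, T) pairs; chirality (step 3) picks one."""
    U, _, Vt = np.linalg.svd(E)
    if np.linalg.det(U) < 0:
        U = -U                        # keep only proper rotations below
    if np.linalg.det(Vt) < 0:
        Vt = -Vt
    R90 = np.array([[0.0, -1, 0], [1, 0, 0], [0, 0, 1]])   # 90 deg about z
    T = U[:, 2]                       # left null space of E, ||T|| = 1
    return [(U @ R90 @ Vt, T), (U @ R90 @ Vt, -T),
            (U @ R90.T @ Vt, T), (U @ R90.T @ Vt, -T)]

# Round-trip check with a hypothetical ground-truth pose.
rng = np.random.default_rng(3)
R_true, _ = np.linalg.qr(rng.normal(size=(3, 3)))
if np.linalg.det(R_true) < 0:
    R_true = -R_true
T_true = np.array([0.6, 0.0, 0.8])    # unit length: the scale is unobservable
E = skew(T_true) @ R_true
errs = [np.linalg.norm(R - R_true) + np.linalg.norm(T - T_true)
        for R, T in decompose_E(E)]
print(min(errs))    # ~0: the true pose is among the 4 candidates
```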
30 What can we do with F, E?
- F/E encode the geometry of the 2 cameras.
- They can be used to find matching points (dense or sparse) between the two views.
- F/E encode the essential information needed for 3D reconstruction.
31 Fundamental and Essential Matrix: Summary
- Derived F, E geometrically: F for un-calibrated cameras, E for calibrated cameras.
- Calibration: take measurements (points) to compute F, E.
  F: minimum of 7 points -> 1 or 3 real solutions; many points -> least-squares solution with SVD.
  E: minimum of 5 points -> up to 10 solutions; many points -> least-squares solution with SVD.
- Can we derive the intrinsic (K) and extrinsic (R, T) parameters from F, E? For F: next lecture. For E: yes, it can be done (translation up to scale).
- What can we do with F, E? They are an essential tool for 3D reconstruction.
32 Roadmap for next four lectures
- Appearance-based Matching (sec. 4.1)
- Projective Geometry - Basics (sec )
- Geometry of a Single Camera (sec 2.1.5, 2.1.6): Camera versus Human Perception; The Pinhole Camera; Lens effects
- Geometry of two Views (sec. 7.2): The Homography (e.g. rotating camera); Camera Calibration (3D to 2D Mapping); The Fundamental and Essential Matrix (two arbitrary images); Robust Geometry estimation for two cameras (sec )
- Multi-View 3D reconstruction (sec ): General scenario; From Projective to Metric Space; Special Cases
33 In the last lecture we asked (for the rotating camera)
Question 1: If a match is completely wrong, then argmin_h ||A h|| is a bad idea.
Question 2: If a match is slightly wrong, then argmin_h ||A h|| might not be perfect. Better might be a geometric error: argmin_h sum_i ||H x_i - x'_i||.
34 Robust model fitting
RANSAC: Random Sample Consensus: A Paradigm for Model Fitting with Applications to Image Analysis and Automated Cartography, Martin A. Fischler and Robert C. Bolles (June 1981). [Slide credits: Dimitri Schlesinger]
35 Example Tasks
Search for a straight line in a clutter of points, i.e. search for the parameters of the line model given a training set of points.
36 Example Tasks
Estimate the fundamental matrix, i.e. the parameters F satisfying x_l^T F x_r = 0, given a training set of corresponding pairs (x_l, x_r).
For the homography of a rotating camera we have: x_i^l = H x_i^r.
37 Two sources of errors
1. Noise: the coordinates deviate from the true ones according to some rule (probability); the farther away, the less confident.
2. Outliers: the data have nothing in common with the model to be estimated.
Ignoring outliers can lead to a wrong estimate. The way out: find the outliers explicitly and estimate the model from the inliers only.
38 Task formulation
Let X be the input space and Y be the parameter space (e.g. a straight line or a fundamental matrix). The training data consist of data points x_i in X. Let an evaluation function f be given that checks the consistency of a point with a model:
Line: f(x_1, x_2, a, b) = 0 if |a x_1 + b x_2 - 1| <= t (e.g. 0.1) (inlier), 1 otherwise (outlier).
F-matrix: f(x_l, x_r, F) = 0 if |x_l^T F x_r| <= t (e.g. 0.1) (inlier), 1 otherwise (outlier).
The task is to find the parameter that is consistent with the majority of the data points: y* = argmin_y sum_i f(x_i, y).
39 First Idea: 2D Line estimation
Question: how to compute y* = argmin_y sum_i f(x_i, y)?
A naive approach: enumerate all parameter values. This is known as the Hough transform (very time consuming, and not possible at all for many free parameters, i.e. a high-dimensional parameter space).
Encode all lines with two parameters (r, Theta). Goal: find the point in (r, Theta) space where most lines meet. (Figure: an image with just 3 points, and the accumulator curves of all lines that go through these 3 points.)
40 First Idea: 2D Line estimation
(Figure: Hough transform accumulator over (r, Theta); the number of inliers is sketched.)
Observation: most of the parameter space has very low counts.
Idea: do not try all values but only some of them. Which ones?
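The accumulator idea can be sketched as follows (the grid sizes and the test points are made up, and the line parametrization r = x cos(theta) + y sin(theta) is used instead of the slide's (a, b) form):

```python
import numpy as np

def hough_lines(points, n_r=63, n_theta=64, r_max=2.0):
    """Vote for lines r = x*cos(theta) + y*sin(theta) in an (r, theta) grid."""
    acc = np.zeros((n_r, n_theta), dtype=int)
    thetas = np.linspace(0, np.pi, n_theta, endpoint=False)
    for x, y in points:
        r = x * np.cos(thetas) + y * np.sin(thetas)   # one sinusoid per point
        idx = np.round((r + r_max) / (2 * r_max) * (n_r - 1)).astype(int)
        ok = (idx >= 0) & (idx < n_r)
        acc[idx[ok], np.arange(n_theta)[ok]] += 1
    return acc, thetas

# 11 points on the line y = x plus two outliers (illustrative data).
pts = [(t, t) for t in np.linspace(-1, 1, 11)] + [(0.9, -0.7), (-0.3, 0.8)]
acc, thetas = hough_lines(pts)
i, j = np.unravel_index(acc.argmax(), acc.shape)
print(acc.max(), thetas[j])   # 11 votes, at theta = 3*pi/4 (the line y = x)
```

The maximum cell collects exactly the 11 collinear points, matching the slide's observation that almost all other accumulator cells have very low counts.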
41 Data-driven Oracle
An oracle is a function that predicts a parameter given the minimum amount of data points (a d-tuple). Examples:
- A line can be estimated from d = 2 points.
- The fundamental matrix from d = 7 or 8 point correspondences.
- A homography can be computed from d = 4 point correspondences.
First idea: do not enumerate all parameter values but all d-tuples of data points. That gives on the order of n^d tests, e.g. n^2 for lines (with n points). The optimization y* = argmin_y sum_i f(x_i, y) is then performed over a discrete domain.
Second idea: do not try all subsets, but sample them randomly.
42 RANSAC [Random Sample Consensus, Fischler and Bolles 1981]
Basic RANSAC method (can be done in parallel!):
Repeat many times:
- select a d-tuple, e.g. (x_1, x_2) for lines
- compute the parameter(s) y', e.g. the line y' = g(x_1, x_2)
- evaluate f(y') = sum_i f(x_i, y')
- if f(y') <= f(y*), set y* = y' and keep the value f(y')
Sometimes we get a discrete set of intermediate solutions y'. For example, for the F-matrix computation from 7 points we have up to 3 solutions. Then we simply evaluate f(y') for all of them.
How many times do we have to sample in order to reliably estimate the true model?
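The loop above, specialized to line fitting with the slide's model a*x_1 + b*x_2 = 1 (all data values and constants below are made up for illustration):

```python
import numpy as np

def ransac_line(points, n_iters=200, t=0.1, rng=None):
    """Basic RANSAC for the line a*x + b*y = 1; returns (a, b) and its cost."""
    rng = np.random.default_rng(0) if rng is None else rng
    best, best_cost = None, np.inf
    for _ in range(n_iters):
        i, j = rng.choice(len(points), size=2, replace=False)   # d-tuple
        M = np.array([points[i], points[j]], dtype=float)
        if abs(np.linalg.det(M)) < 1e-12:
            continue                       # degenerate sample: no unique line
        ab = np.linalg.solve(M, np.ones(2))          # oracle: line through 2 pts
        cost = np.sum(np.abs(points @ ab - 1) > t)   # number of outliers
        if cost < best_cost:
            best, best_cost = ab, cost
    return best, best_cost

# 30 inliers near x + 2y = 1 plus 10 uniform outliers (illustrative).
rng = np.random.default_rng(4)
xs = rng.uniform(-2, 2, 30)
inliers = np.c_[xs, (1 - xs) / 2 + rng.normal(0, 0.01, 30)]
points = np.vstack([inliers, rng.uniform(-2, 2, (10, 2))])
ab, cost = ransac_line(points, rng=rng)
print(ab, cost)   # ab close to (1, 2); cost roughly the 10 outliers
```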
43 Convergence
Observation: it is enough to sample a d-tuple consisting only of inliers just once in order to estimate the model correctly.
Let epsilon be the probability of outliers (example: 1000 points overall, epsilon = 0.2).
The probability to sample d inliers is (1 - epsilon)^d (here 0.64 for d = 2).
The probability of a wrong d-tuple is 1 - (1 - epsilon)^d (here 0.36).
The probability to sample n times only wrong tuples is (1 - (1 - epsilon)^d)^n.
The probability to sample a right tuple at least once during the process (i.e. to estimate the correct model, according to the assumptions) is p = 1 - (1 - (1 - epsilon)^d)^n.
44 Convergence probability
(Figure: the success probability p plotted as a function of n, for several outlier ratios epsilon.)
45 Comment
In our derivation of p we were slightly optimistic, since degenerate inliers may give rise to bad lines. However, these bad lines have little support w.r.t. the number of inliers. We will also define later a refinement procedure which can correct such bad lines.
46 The choice of the oracle is crucial
Example, the fundamental matrix:
a) 8-point algorithm: probability of success 70% (n = 300; epsilon = 0.5; d = 8)
b) 7-point algorithm: probability of success 90% (n = 300; epsilon = 0.5; d = 7)
Success probability after n trials: p = 1 - (1 - (1 - epsilon)^d)^n.
Number of trials needed to reach a given p (here 99%): n = log(1 - p) / log(1 - (1 - epsilon)^d).
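The trial-count formula on this slide, evaluated for the slide's setting of 50% outliers (the numbers follow directly from the formula):

```python
import math

def ransac_trials(p, eps, d):
    """n such that an all-inlier d-tuple is drawn with probability >= p."""
    return math.ceil(math.log(1 - p) / math.log(1 - (1 - eps) ** d))

for d in (2, 5, 7, 8):
    print(d, ransac_trials(0.99, 0.5, d))
# d=2 -> 17, d=5 -> 146, d=7 -> 588, d=8 -> 1177 trials at 99% confidence:
# dropping one point from the oracle roughly halves the required samples,
# which is why the 7-point algorithm beats the 8-point one in the robust case.
```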
47 The choice of the evaluation function is crucial
Evaluation function: f(x_1, x_2, a, b) = 0 if |a x_1 + b x_2 - 1| <= t (e.g. 0.1), 1 otherwise.
Algebraic error: a measure that has no geometric meaning. Examples:
- For a line: d(x_1, x_2, a, b) = |a x_1 + b x_2 - 1|.
- For a homography: d = ||A h|| (with A the constraint matrix derived in the last lecture).
- For the F-matrix: d(x_l, x_r, F) = |x_l^T F x_r|.
Geometric error: a measure that considers a distance in the image plane. Example for a line l(a, b): d(x_1, x_2, a, b) = d((x_1, x_2), l(a, b)), the Euclidean distance between the point and the line.
Geometric errors for the homography and the F-matrix are to come.
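The difference for the line case in one snippet (the numbers are illustrative): two point/line pairs, each with true Euclidean distance 1 from the line, get very different algebraic errors, while the geometric error stays put:

```python
import math

def algebraic_error(x, y, a, b):
    """|a*x + b*y - 1|: cheap to evaluate, but has no geometric meaning."""
    return abs(a * x + b * y - 1)

def geometric_error(x, y, a, b):
    """Euclidean point-to-line distance |a*x + b*y - 1| / sqrt(a^2 + b^2)."""
    return abs(a * x + b * y - 1) / math.hypot(a, b)

# Point (2, 0) and the line x = 1 (a=1, b=0): both errors equal 1.
print(algebraic_error(2, 0, 1, 0), geometric_error(2, 0, 1, 0))
# Point (1.1, 0) and the line x = 0.1 (a=10, b=0): the point is again at
# Euclidean distance 1, but the algebraic error is inflated tenfold.
print(algebraic_error(1.1, 0, 10, 0), geometric_error(1.1, 0, 10, 0))
```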
48 The choice of the confidence interval is crucial
Examples:
- Large confidence, right model, 2 outliers.
- Large confidence, wrong model, 2 outliers.
- Small confidence: almost all points are outliers (independent of the model).
49 Question
Can you think of an algorithm which adapts the number of samples n itself? Answers: 1) Yes 2) No
50 Extension: Adaptive number of samples n
Choose n in an adaptive way:
1) Fix p = 99.9% (a very large value).
2) Set n = infinity and epsilon = 0.9 (a large value for the outlier ratio).
3) During RANSAC, adapt n and epsilon:
1) Re-compute epsilon from the current best solution: epsilon = #outliers / #all points.
2) Re-compute the new n: n = log(1 - p) / log(1 - (1 - epsilon)^d).
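Steps 1-3 as a sketch (the sequence of improving inlier fractions is a made-up stand-in for "the current best solution gets better over time"):

```python
import math

def required_n(p, eps, d):
    """n = log(1-p) / log(1 - (1-eps)^d), the trial-count formula."""
    return math.ceil(math.log(1 - p) / math.log(1 - (1 - eps) ** d))

p, d = 0.999, 2                      # step 1: fix p very large
n, eps = float('inf'), 0.9           # step 2: start pessimistic
history = []
for inlier_frac in (0.2, 0.5, 0.8):  # step 3: best solution improves over time
    eps = min(eps, 1 - inlier_frac)  # re-estimate eps = outliers / all points
    n = required_n(p, eps, d)        # re-compute n with the smaller eps
    history.append(n)
print(history)                       # n shrinks as the outlier estimate drops
```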
51 MSAC (M-Estimator SAmple Consensus)
If a data point is an inlier, the penalty is not 0 but depends on the distance to the model. Example for the fundamental matrix:
f(x_l, x_r, F) = 0 if |x_l^T F x_r| <= t (e.g. 0.1), 1 otherwise
becomes
f(x_l, x_r, F) = |x_l^T F x_r| if |x_l^T F x_r| <= t (e.g. 0.1), t otherwise.
The task is then to find the model with the minimum average penalty (a robust function). [P.H.S. Torr and A. Zisserman 1996]
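The two scoring rules side by side, on made-up residual values |x_l^T F x_r|:

```python
import numpy as np

def ransac_score(residuals, t):
    """Plain RANSAC cost: 0 per inlier, 1 per outlier."""
    return int(np.sum(residuals > t))

def msac_score(residuals, t):
    """MSAC cost: inliers pay their residual, outliers pay the constant t."""
    return float(np.sum(np.minimum(residuals, t)))

res = np.array([0.01, 0.05, 0.09, 0.5, 2.0])   # illustrative residuals
print(ransac_score(res, 0.1), msac_score(res, 0.1))
# RANSAC only sees 2 outliers; MSAC additionally rewards tight inliers,
# so two models with the same inlier count no longer score identically.
```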
52 Randomized RANSAC
The evaluation of a hypothesis, i.e. f(y') = sum_i f(x_i, y'), is often time consuming. Randomized RANSAC: instead of checking all data points:
1. Sample m points from the data.
2. If all of them are inliers, check all the others as before, i.e. evaluate the hypothesis. But if there is at least one bad point among the m, reject the hypothesis.
It is possible that good hypotheses are rejected. However, this saves time (bad hypotheses are recognized quickly), so one can sample more often; overall it is often profitable (depending on the application).
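A sketch of the pre-test (m, the residuals, and the threshold are all illustrative):

```python
import numpy as np

def randomized_check(residuals, t, m, rng):
    """Pre-test m random points; run the full (costly) count only if all pass."""
    probe = rng.choice(len(residuals), size=m, replace=False)
    if np.any(residuals[probe] > t):
        return None                       # early reject: at least one bad point
    return int(np.sum(residuals > t))     # full evaluation of the hypothesis

rng = np.random.default_rng(5)
good = np.full(1000, 0.01)                # hypothesis consistent with every point
bad = rng.uniform(0.5, 2.0, 1000)         # hypothesis consistent with none
print(randomized_check(good, 0.1, 3, rng))   # 0: survives pre-test, full score
print(randomized_check(bad, 0.1, 3, rng))    # None: rejected after m checks
```

On the bad hypothesis only m residuals are inspected instead of all 1000, which is where the speed-up comes from.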
More informationarxiv: v1 [cs.cv] 18 Sep 2017
Direct Pose Estimation with a Monocular Camera Darius Burschka and Elmar Mair arxiv:1709.05815v1 [cs.cv] 18 Sep 2017 Department of Informatics Technische Universität München, Germany {burschka elmar.mair}@mytum.de
More informationUnit 3 Multiple View Geometry
Unit 3 Multiple View Geometry Relations between images of a scene Recovering the cameras Recovering the scene structure http://www.robots.ox.ac.uk/~vgg/hzbook/hzbook1.html 3D structure from images Recover
More informationRecap: Features and filters. Recap: Grouping & fitting. Now: Multiple views 10/29/2008. Epipolar geometry & stereo vision. Why multiple views?
Recap: Features and filters Epipolar geometry & stereo vision Tuesday, Oct 21 Kristen Grauman UT-Austin Transforming and describing images; textures, colors, edges Recap: Grouping & fitting Now: Multiple
More informationCS 664 Slides #9 Multi-Camera Geometry. Prof. Dan Huttenlocher Fall 2003
CS 664 Slides #9 Multi-Camera Geometry Prof. Dan Huttenlocher Fall 2003 Pinhole Camera Geometric model of camera projection Image plane I, which rays intersect Camera center C, through which all rays pass
More informationCamera Geometry II. COS 429 Princeton University
Camera Geometry II COS 429 Princeton University Outline Projective geometry Vanishing points Application: camera calibration Application: single-view metrology Epipolar geometry Application: stereo correspondence
More information3D Computer Vision. Structure from Motion. Prof. Didier Stricker
3D Computer Vision Structure from Motion Prof. Didier Stricker Kaiserlautern University http://ags.cs.uni-kl.de/ DFKI Deutsches Forschungszentrum für Künstliche Intelligenz http://av.dfki.de 1 Structure
More informationThe Geometry Behind the Numerical Reconstruction of Two Photos
The Geometry Behind the Numerical Reconstruction of Two Photos Hellmuth Stachel stachel@dmg.tuwien.ac.at http://www.geometrie.tuwien.ac.at/stachel ICEGD 2007, The 2 nd Internat. Conf. on Eng g Graphics
More information55:148 Digital Image Processing Chapter 11 3D Vision, Geometry
55:148 Digital Image Processing Chapter 11 3D Vision, Geometry Topics: Basics of projective geometry Points and hyperplanes in projective space Homography Estimating homography from point correspondence
More informationStructure from Motion and Multi- view Geometry. Last lecture
Structure from Motion and Multi- view Geometry Topics in Image-Based Modeling and Rendering CSE291 J00 Lecture 5 Last lecture S. J. Gortler, R. Grzeszczuk, R. Szeliski,M. F. Cohen The Lumigraph, SIGGRAPH,
More informationEECS 442 Computer vision. Fitting methods
EECS 442 Computer vision Fitting methods - Problem formulation - Least square methods - RANSAC - Hough transforms - Multi-model fitting - Fitting helps matching! Reading: [HZ] Chapters: 4, 11 [FP] Chapters:
More informationLecture 8 Fitting and Matching
Lecture 8 Fitting and Matching Problem formulation Least square methods RANSAC Hough transforms Multi-model fitting Fitting helps matching! Reading: [HZ] Chapter: 4 Estimation 2D projective transformation
More informationHomographies and RANSAC
Homographies and RANSAC Computer vision 6.869 Bill Freeman and Antonio Torralba March 30, 2011 Homographies and RANSAC Homographies RANSAC Building panoramas Phototourism 2 Depth-based ambiguity of position
More informationFitting. Instructor: Jason Corso (jjcorso)! web.eecs.umich.edu/~jjcorso/t/598f14!! EECS Fall 2014! Foundations of Computer Vision!
Fitting EECS 598-08 Fall 2014! Foundations of Computer Vision!! Instructor: Jason Corso (jjcorso)! web.eecs.umich.edu/~jjcorso/t/598f14!! Readings: FP 10; SZ 4.3, 5.1! Date: 10/8/14!! Materials on these
More informationDescriptive Geometry Meets Computer Vision The Geometry of Two Images (# 82)
Descriptive Geometry Meets Computer Vision The Geometry of Two Images (# 8) Hellmuth Stachel stachel@dmg.tuwien.ac.at http://www.geometrie.tuwien.ac.at/stachel th International Conference on Geometry and
More informationEpipolar Geometry Based On Line Similarity
2016 23rd International Conference on Pattern Recognition (ICPR) Cancún Center, Cancún, México, December 4-8, 2016 Epipolar Geometry Based On Line Similarity Gil Ben-Artzi Tavi Halperin Michael Werman
More informationModel Fitting. Introduction to Computer Vision CSE 152 Lecture 11
Model Fitting CSE 152 Lecture 11 Announcements Homework 3 is due May 9, 11:59 PM Reading: Chapter 10: Grouping and Model Fitting What to do with edges? Segment linked edge chains into curve features (e.g.,
More informationStructure from Motion
Structure from Motion Outline Bundle Adjustment Ambguities in Reconstruction Affine Factorization Extensions Structure from motion Recover both 3D scene geoemetry and camera positions SLAM: Simultaneous
More informationInstance-level recognition part 2
Visual Recognition and Machine Learning Summer School Paris 2011 Instance-level recognition part 2 Josef Sivic http://www.di.ens.fr/~josef INRIA, WILLOW, ENS/INRIA/CNRS UMR 8548 Laboratoire d Informatique,
More informationCS201 Computer Vision Camera Geometry
CS201 Computer Vision Camera Geometry John Magee 25 November, 2014 Slides Courtesy of: Diane H. Theriault (deht@bu.edu) Question of the Day: How can we represent the relationships between cameras and the
More informationProject: Camera Rectification and Structure from Motion
Project: Camera Rectification and Structure from Motion CIS 580, Machine Perception, Spring 2018 April 18, 2018 In this project, you will learn how to estimate the relative poses of two cameras and compute
More informationIndex. 3D reconstruction, point algorithm, point algorithm, point algorithm, point algorithm, 253
Index 3D reconstruction, 123 5+1-point algorithm, 274 5-point algorithm, 260 7-point algorithm, 255 8-point algorithm, 253 affine point, 43 affine transformation, 55 affine transformation group, 55 affine
More informationcalibrated coordinates Linear transformation pixel coordinates
1 calibrated coordinates Linear transformation pixel coordinates 2 Calibration with a rig Uncalibrated epipolar geometry Ambiguities in image formation Stratified reconstruction Autocalibration with partial
More informationLecture 9 Fitting and Matching
Lecture 9 Fitting and Matching Problem formulation Least square methods RANSAC Hough transforms Multi- model fitting Fitting helps matching! Reading: [HZ] Chapter: 4 Estimation 2D projective transformation
More informationMAPI Computer Vision. Multiple View Geometry
MAPI Computer Vision Multiple View Geometry Geometry o Multiple Views 2- and 3- view geometry p p Kpˆ [ K R t]p Geometry o Multiple Views 2- and 3- view geometry Epipolar Geometry The epipolar geometry
More informationarxiv: v4 [cs.cv] 8 Jan 2017
Epipolar Geometry Based On Line Similarity Gil Ben-Artzi Tavi Halperin Michael Werman Shmuel Peleg School of Computer Science and Engineering The Hebrew University of Jerusalem, Israel arxiv:1604.04848v4
More informationInstance-level recognition
Instance-level recognition 1) Local invariant features 2) Matching and recognition with local features 3) Efficient visual search 4) Very large scale indexing Matching of descriptors Matching and 3D reconstruction
More informationGeometric camera models and calibration
Geometric camera models and calibration http://graphics.cs.cmu.edu/courses/15-463 15-463, 15-663, 15-862 Computational Photography Fall 2018, Lecture 13 Course announcements Homework 3 is out. - Due October
More information3D Photography: Epipolar geometry
3D Photograph: Epipolar geometr Kalin Kolev, Marc Pollefes Spring 203 http://cvg.ethz.ch/teaching/203spring/3dphoto/ Schedule (tentative) Feb 8 Feb 25 Mar 4 Mar Mar 8 Mar 25 Apr Apr 8 Apr 5 Apr 22 Apr
More information3D Reconstruction from Two Views
3D Reconstruction from Two Views Huy Bui UIUC huybui1@illinois.edu Yiyi Huang UIUC huang85@illinois.edu Abstract In this project, we study a method to reconstruct a 3D scene from two views. First, we extract
More informationA Summary of Projective Geometry
A Summary of Projective Geometry Copyright 22 Acuity Technologies Inc. In the last years a unified approach to creating D models from multiple images has been developed by Beardsley[],Hartley[4,5,9],Torr[,6]
More informationVisual Odometry. Features, Tracking, Essential Matrix, and RANSAC. Stephan Weiss Computer Vision Group NASA-JPL / CalTech
Visual Odometry Features, Tracking, Essential Matrix, and RANSAC Stephan Weiss Computer Vision Group NASA-JPL / CalTech Stephan.Weiss@ieee.org (c) 2013. Government sponsorship acknowledged. Outline The
More informationarxiv: v1 [cs.cv] 28 Sep 2018
Extrinsic camera calibration method and its performance evaluation Jacek Komorowski 1 and Przemyslaw Rokita 2 arxiv:1809.11073v1 [cs.cv] 28 Sep 2018 1 Maria Curie Sklodowska University Lublin, Poland jacek.komorowski@gmail.com
More informationInstance-level recognition
Instance-level recognition 1) Local invariant features 2) Matching and recognition with local features 3) Efficient visual search 4) Very large scale indexing Matching of descriptors Matching and 3D reconstruction
More informationImage correspondences and structure from motion
Image correspondences and structure from motion http://graphics.cs.cmu.edu/courses/15-463 15-463, 15-663, 15-862 Computational Photography Fall 2017, Lecture 20 Course announcements Homework 5 posted.
More informationGeometry of Multiple views
1 Geometry of Multiple views CS 554 Computer Vision Pinar Duygulu Bilkent University 2 Multiple views Despite the wealth of information contained in a a photograph, the depth of a scene point along the
More informationSrikumar Ramalingam. Review. 3D Reconstruction. Pose Estimation Revisited. School of Computing University of Utah
School of Computing University of Utah Presentation Outline 1 2 3 Forward Projection (Reminder) u v 1 KR ( I t ) X m Y m Z m 1 Backward Projection (Reminder) Q K 1 q Presentation Outline 1 2 3 Sample Problem
More informationSummary Page Robust 6DOF Motion Estimation for Non-Overlapping, Multi-Camera Systems
Summary Page Robust 6DOF Motion Estimation for Non-Overlapping, Multi-Camera Systems Is this a system paper or a regular paper? This is a regular paper. What is the main contribution in terms of theory,
More informationStereo CSE 576. Ali Farhadi. Several slides from Larry Zitnick and Steve Seitz
Stereo CSE 576 Ali Farhadi Several slides from Larry Zitnick and Steve Seitz Why do we perceive depth? What do humans use as depth cues? Motion Convergence When watching an object close to us, our eyes
More informationBinocular Stereo Vision. System 6 Introduction Is there a Wedge in this 3D scene?
System 6 Introduction Is there a Wedge in this 3D scene? Binocular Stereo Vision Data a stereo pair of images! Given two 2D images of an object, how can we reconstruct 3D awareness of it? AV: 3D recognition
More informationCOMPARATIVE STUDY OF DIFFERENT APPROACHES FOR EFFICIENT RECTIFICATION UNDER GENERAL MOTION
COMPARATIVE STUDY OF DIFFERENT APPROACHES FOR EFFICIENT RECTIFICATION UNDER GENERAL MOTION Mr.V.SRINIVASA RAO 1 Prof.A.SATYA KALYAN 2 DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING PRASAD V POTLURI SIDDHARTHA
More informationIndex. 3D reconstruction, point algorithm, point algorithm, point algorithm, point algorithm, 263
Index 3D reconstruction, 125 5+1-point algorithm, 284 5-point algorithm, 270 7-point algorithm, 265 8-point algorithm, 263 affine point, 45 affine transformation, 57 affine transformation group, 57 affine
More informationCamera Calibration Using Line Correspondences
Camera Calibration Using Line Correspondences Richard I. Hartley G.E. CRD, Schenectady, NY, 12301. Ph: (518)-387-7333 Fax: (518)-387-6845 Email : hartley@crd.ge.com Abstract In this paper, a method of
More informationInstance-level recognition II.
Reconnaissance d objets et vision artificielle 2010 Instance-level recognition II. Josef Sivic http://www.di.ens.fr/~josef INRIA, WILLOW, ENS/INRIA/CNRS UMR 8548 Laboratoire d Informatique, Ecole Normale
More information3D Reconstruction with two Calibrated Cameras
3D Reconstruction with two Calibrated Cameras Carlo Tomasi The standard reference frame for a camera C is a right-handed Cartesian frame with its origin at the center of projection of C, its positive Z
More informationComputer Vision I. Announcements. Random Dot Stereograms. Stereo III. CSE252A Lecture 16
Announcements Stereo III CSE252A Lecture 16 HW1 being returned HW3 assigned and due date extended until 11/27/12 No office hours today No class on Thursday 12/6 Extra class on Tuesday 12/4 at 6:30PM in
More informationLinear Multi View Reconstruction and Camera Recovery Using a Reference Plane
International Journal of Computer Vision 49(2/3), 117 141, 2002 c 2002 Kluwer Academic Publishers. Manufactured in The Netherlands. Linear Multi View Reconstruction and Camera Recovery Using a Reference
More informationRectification and Distortion Correction
Rectification and Distortion Correction Hagen Spies March 12, 2003 Computer Vision Laboratory Department of Electrical Engineering Linköping University, Sweden Contents Distortion Correction Rectification
More informationAnnouncements. Stereo
Announcements Stereo Homework 2 is due today, 11:59 PM Homework 3 will be assigned today Reading: Chapter 7: Stereopsis CSE 152 Lecture 8 Binocular Stereopsis: Mars Given two images of a scene where relative
More information