BIL Computer Vision Apr 16, 2014


1 BIL Computer Vision Apr 16, 2014 Binocular Stereo (cont'd.), Structure from Motion. Aykut Erdem, Dept. of Computer Engineering, Hacettepe University

2 Slide credit: S. Lazebnik Basic stereo matching algorithm For each pixel in the first image Find corresponding epipolar line in the right image Examine all pixels on the epipolar line and pick the best match Triangulate the matches to get depth information Simplest case: epipolar lines are scanlines When does this happen?
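In the unrectified case, the corresponding epipolar line comes directly from the fundamental matrix between the two views. A minimal numpy sketch, assuming F is already known; the helper name is ours:

```python
import numpy as np

def epipolar_line(F, x):
    """Epipolar line l' = F x in the right image for a pixel x = (u, v)
    in the left image, returned as (a, b, c) with a*u' + b*v' + c = 0."""
    l = F @ np.array([x[0], x[1], 1.0])
    return l / np.linalg.norm(l[:2])  # scale so (a, b) is a unit normal

# In the rectified case discussed next, every epipolar line is simply
# the scanline v' = v, so no line computation is needed.
```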

3 Slide credit: S. Lazebnik Simplest Case: Parallel images Image planes of cameras are parallel to each other and to the baseline Camera centers are at same height Focal lengths are the same

4 Slide credit: S. Lazebnik Simplest Case: Parallel images Image planes of cameras are parallel to each other and to the baseline Camera centers are at same height Focal lengths are the same Then, epipolar lines fall along the horizontal scan lines of the images

5 Essential matrix for parallel images. Epipolar constraint: $x'^\top E x = 0$, with $E = [t_\times] R$. For parallel images $R = I$ and $t = (T, 0, 0)$, where the cross-product matrix of a vector $a$ is $[a_\times] = \begin{pmatrix} 0 & -a_z & a_y \\ a_z & 0 & -a_x \\ -a_y & a_x & 0 \end{pmatrix}$. Slide credit: S. Lazebnik

6 Essential matrix for parallel images. With $R = I$ and $t = (T, 0, 0)$, $E = [t_\times] R = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & -T \\ 0 & T & 0 \end{pmatrix}$, and the epipolar constraint $x'^\top E x = 0$ becomes $\begin{pmatrix} u' & v' & 1 \end{pmatrix} \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & -T \\ 0 & T & 0 \end{pmatrix} \begin{pmatrix} u \\ v \\ 1 \end{pmatrix} = Tv - Tv' = 0$: the y-coordinates of corresponding points are the same. Slide credit: S. Lazebnik

7 Stereo image rectification Slide credit: S. Lazebnik. S. Seitz

8 Stereo image rectification: reproject image planes onto a common plane parallel to the line between the optical centers; pixel motion is horizontal after this transformation; two homographies (3x3 transforms), one for each input image reprojection. C. Loop and Z. Zhang. Computing Rectifying Homographies for Stereo Vision. IEEE Conf. Computer Vision and Pattern Recognition, 1999. Slide credit: S. Lazebnik, S. Seitz

9 Rectification example Slide credit: S. Lazebnik. S. Seitz

10 Why is rectification useful? Makes the correspondence problem easier. Makes triangulation easy. Slide credit: S. Savarese

11 Depth from disparity. (Figure: scene point X imaged at x and x' by cameras with optical centers O and O', baseline B, focal length f, depth z.) disparity $= x - x' = \frac{B \cdot f}{z}$. Disparity is inversely proportional to depth. Slide credit: S. Lazebnik, S. Seitz
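A quick numeric illustration of the relation disparity = B·f / z; the baseline and focal length below are made-up values, used only to show the inverse proportionality:

```python
import numpy as np

baseline = 0.1      # B, camera separation in meters (assumed)
focal_px = 700.0    # f, focal length in pixels (assumed)

disparity = np.array([70.0, 35.0, 7.0])   # x - x' in pixels
depth = baseline * focal_px / disparity   # z = B*f / disparity
print(depth)        # -> [ 1.  2. 10.] meters: larger disparity means closer
```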

12 Disparity map / depth map. (Figure: stereo pair, disparity maps, and a disparity map with occlusions.) Slide credit: S. Savarese

13 Correspondence search. Slide a window along the right scanline and compare the contents of that window with the reference window in the left image. Matching cost: SSD or normalized correlation. (Figure: left/right scanlines and the matching cost plotted against disparity.) Slide credit: S. Lazebnik
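A minimal sketch of this window-based search on a rectified pair, using an SSD cost; the window size and disparity range are illustrative choices, not values from the slides:

```python
import numpy as np

def block_match_ssd(left, right, max_disp=64, win=5):
    """Window-based correspondence search on a rectified stereo pair.
    left, right: float grayscale images of identical shape.
    Returns an integer disparity map referenced to the left image."""
    h, w = left.shape
    r = win // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(r, h - r):
        for x in range(r, w - r):
            ref = left[y - r:y + r + 1, x - r:x + r + 1]
            costs = []
            # Slide a window along the right scanline
            for d in range(min(max_disp, x - r) + 1):
                cand = right[y - r:y + r + 1, x - d - r:x - d + r + 1]
                costs.append(np.sum((ref - cand) ** 2))  # SSD matching cost
            disp[y, x] = int(np.argmin(costs))
    return disp
```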

14 Correspondence search. (Figure: SSD matching cost along the right scanline.) Slide credit: S. Lazebnik

15 Correlation method: pick a window around p(u,v) and build a vector w from it; slide the window along the corresponding scanline in image 2 and compute w'; keep sliding until w · w' is maximized. Slide credit: S. Savarese

16 Correlation method, normalized correlation: minimize the distance between the normalized window vectors (equivalently, maximize their normalized correlation). Slide credit: S. Savarese

17 Correspondence search. (Figure: normalized-correlation matching cost along the right scanline.) Slide credit: S. Lazebnik
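For the normalized-correlation cost, only the per-window score changes relative to the SSD sketch above; a short sketch of that score (mean subtraction is one common convention, not fixed by the slides):

```python
import numpy as np

def ncc(w1, w2, eps=1e-8):
    """Normalized correlation between two image windows.
    Returns a score in [-1, 1]; the best match maximizes this score."""
    a = w1.ravel() - w1.mean()
    b = w2.ravel() - w2.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + eps))
```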

18 Effect of window size (results shown for W = 3 and W = 20). Smaller window: + more detail, - more noise. Larger window: + smoother disparity maps, - less detail. Slide credit: S. Lazebnik, S. Seitz

19 Failures of correspondence search Textureless surfaces Occlusions, repetition Non-Lambertian surfaces, specularities Slide credit: S. Lazebnik. S. Seitz

20 Stereo results. Data from University of Tsukuba; similar results on other images without ground truth. (Figure: scene and ground truth.) Slide credit: S. Seitz

21 Slide credit: S. Lazebnik. S. Seitz Results with window search Data Window-based matching (best window size) Ground truth

22 Better methods exist... Graph cuts Ground truth Y. Boykov, O. Veksler, and R. Zabih, Fast Approximate Energy Minimization via Graph Cuts, PAMI 2001 For the latest and greatest: Slide credit: S. Lazebnik. S. Seitz

23 Slide credit: S. Lazebnik. S. Seitz How can we improve window-based matching? The similarity constraint is local (each reference window is matched independently) Need to enforce non-local correspondence constraints

24 Slide credit: S. Lazebnik. S. Seitz Non-local constraints Uniqueness For any point in one image, there should be at most one matching point in the other image

25 Slide credit: S. Lazebnik. S. Seitz Non-local constraints Uniqueness For any point in one image, there should be at most one matching point in the other image Ordering Corresponding points should be in the same order in both views

26 Non-local constraints. Uniqueness: for any point in one image, there should be at most one matching point in the other image. Ordering: corresponding points should be in the same order in both views. (Figure: a configuration where the ordering constraint doesn't hold.) Slide credit: S. Lazebnik, S. Seitz

27 Slide credit: S. Lazebnik. S. Seitz Non-local constraints Uniqueness For any point in one image, there should be at most one matching point in the other image Ordering Corresponding points should be in the same order in both views Smoothness We expect disparity values to change slowly (for the most part)

28 Slide credit: S. Lazebnik. S. Seitz Scanline stereo Try to coherently match pixels on the entire scanline Different scanlines are still optimized independently Left image Right image

29 Shortest paths for scan-line stereo. (Figure: dynamic-programming grid between the left and right scanlines, with correspondence moves of cost C_corr and left/right occlusion moves of cost C_occl.) Can be implemented with dynamic programming [Ohta & Kanade '85, Cox et al. '96]. Slide credit: Y. Boykov, S. Lazebnik, S. Seitz

30 Stereo as energy minimization What defines a good stereo correspondence? 1. Match quality Want each pixel to find a good match in the other image 2. Smoothness If two pixels are adjacent, they should (usually) move about the same amount Slide credit: S. Seitz

31 Smoothness Slide credit: S. Savarese

32 Stereo matching as energy minimization. $E(D) = \sum_i \big(W_1(i) - W_2(i + D(i))\big)^2 + \lambda \sum_{\text{neighbors } i,j} \rho\big(D(i) - D(j)\big)$, where the first sum is the data term (window $W_1(i)$ in image $I_1$ should match window $W_2(i + D(i))$ in image $I_2$) and the second is the smoothness term over neighboring pixels. Energy functions of this form can be minimized using graph cuts. Y. Boykov, O. Veksler, and R. Zabih, Fast Approximate Energy Minimization via Graph Cuts, PAMI 2001. Slide credit: S. Lazebnik
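Written as code, the energy above can be scored for a candidate disparity field as follows. This sketch only evaluates E(D): single pixel intensities stand in for the windows W1, W2, lambda and the truncation constant are illustrative, and the graph-cut minimization itself is in the cited Boykov-Veksler-Zabih paper, not reproduced here:

```python
import numpy as np

def stereo_energy(left, right, D, lam=10.0, trunc=5.0):
    """Evaluate E(D) = sum_i (W1(i) - W2(i + D(i)))^2
                      + lambda * sum_{neighbors i,j} rho(D(i) - D(j)),
    with rho a truncated absolute difference (one common robust choice)."""
    h, w = left.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Pixel i + D(i) in the other image, following the slide's convention;
    # the sign of D depends on which image is taken as the reference.
    xs_shift = np.clip(xs + D, 0, w - 1)
    data = np.sum((left - right[ys, xs_shift]) ** 2)
    rho = lambda d: np.minimum(np.abs(d), trunc)  # robust smoothness penalty
    smooth = np.sum(rho(D[:, 1:] - D[:, :-1])) + np.sum(rho(D[1:, :] - D[:-1, :]))
    return data + lam * smooth
```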

33 Stereo matching as energy minimization

34 Active stereo with structured light. Project structured light patterns onto the object; simplifies the correspondence problem; allows us to use only one camera. (Figure: camera and projector setup.) L. Zhang, B. Curless, and S. M. Seitz. Rapid Shape Acquisition Using Color Structured Light and Multi-pass Dynamic Programming. 3DPVT 2002. Slide credit: S. Lazebnik

35 Active stereo with structured light L. Zhang, B. Curless, and S. M. Seitz. Rapid Shape Acquisition Using Color Structured Light and Multi-pass Dynamic Programming. 3DPVT 2002 Slide credit: S. Lazebnik

36 Active stereo with structured light Slide credit: S. Lazebnik

37 Kinect: Structured infrared light Slide credit: S. Lazebnik. S. Seitz

38 Slide credit: S. Lazebnik. S. Seitz Laser scanning Digital Michelangelo Project Levoy et al. Optical triangulation Project a single stripe of laser light Scan it across the surface of the object This is a very precise version of structured light scanning

39 Slide credit: S. Seitz Laser scanned models The Digital Michelangelo Project, Levoy et al.

40 Laser scanned models The Digital Michelangelo Project, Levoy et al. Slide credit: S. Seitz

41 Slide credit: S. Seitz Laser scanned models The Digital Michelangelo Project, Levoy et al.

42 Beyond two-view stereo Slide credit: S. Lazebnik

43 Slide credit: B. Freeman and A. Torralba Structure from motion Multiple views of a static scene from different cameras One camera, a moving object

44 Slide credit: N. Snavely Volumetric stereo Scene Volume V Input Images (Calibrated) Goal: Determine occupancy, color of points in V

45 Discrete formulation: Voxel Coloring Discretized Scene Volume Input Images (Calibrated) Goal: Assign RGBA values to voxels in V photo-consistent with images Slide credit: N. Snavely

46 Complexity and computability. A discretized scene volume with $N^3$ voxels and C colors admits $C^{N^3}$ possible scenes; the photo-consistent scenes are a subset of all scenes, and the true scene is one of them. Slide credit: N. Snavely

47 K.N. Kutulakos and S.M. Seitz, A Theory of Shape and Space Carving, ICCV 1999 Slide credit: S. Lazebnik Space Carving Image 1 Image N... Space Carving Algorithm Initialize to a volume V containing the true scene Choose a voxel on the outside of the volume Project to visible input images Carve if not photo-consistent Repeat until convergence
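A heavily simplified sketch of the photo-consistency test inside this loop: project a candidate voxel into all images and carve it if the observed colors disagree. It ignores the visibility reasoning that the actual space-carving algorithm relies on; the threshold and all names are ours:

```python
import numpy as np

def carve_sweep(voxels, images, cameras, thresh=30.0):
    """voxels: (N, 3) candidate 3D points; images: list of HxWx3 arrays;
    cameras: list of 3x4 projection matrices. Returns a keep mask."""
    keep = np.ones(len(voxels), dtype=bool)
    for i, X in enumerate(voxels):
        Xh = np.append(X, 1.0)
        colors = []
        for img, P in zip(images, cameras):
            u, v, s = P @ Xh
            if s <= 0:
                continue                       # behind this camera
            ui, vi = int(round(u / s)), int(round(v / s))
            if 0 <= vi < img.shape[0] and 0 <= ui < img.shape[1]:
                colors.append(img[vi, ui].astype(float))
        # Carve if the voxel projects to very different colors (not photo-consistent)
        if len(colors) >= 2 and np.std(np.stack(colors), axis=0).mean() > thresh:
            keep[i] = False
    return keep
```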

48 Space Carving Results: African Violet. (Figure: input image, 1 of 45, and reconstructions.) Slide credit: S. Seitz

49 Structure from motion. Given a set of corresponding points in two or more images, compute the camera parameters and the 3D point coordinates. (Figure: three cameras with unknown poses $R_1, t_1$, $R_2, t_2$, $R_3, t_3$ observing unknown 3D points.) Slide credit: N. Snavely

50 Structure from motion. Given: m images of n fixed 3D points, $x_{ij} = P_i X_j$, $i = 1, \ldots, m$, $j = 1, \ldots, n$. Problem: estimate the m projection matrices $P_i$ and the n 3D points $X_j$ from the mn correspondences $x_{ij}$. Slide credit: S. Lazebnik

51 Structure from motion ambiguity. If we scale the entire scene by some factor k and, at the same time, scale the camera matrices by the factor 1/k, the projections of the scene points in the image remain exactly the same: $x = PX = \left(\tfrac{1}{k}P\right)(kX)$. It is impossible to recover the absolute scale of the scene. Slide credit: S. Lazebnik

52 Structure from motion ambiguity. If we scale the entire scene by some factor k and, at the same time, scale the camera matrices by the factor 1/k, the projections of the scene points in the image remain exactly the same. More generally: if we transform the scene using a transformation Q and apply the inverse transformation to the camera matrices, then the images do not change: $x = PX = (PQ^{-1})(QX)$. Slide credit: S. Lazebnik
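A quick numerical check of this ambiguity with random data: transform the scene by an arbitrary invertible Q, the camera by Q⁻¹, and verify that the image point is unchanged:

```python
import numpy as np

rng = np.random.default_rng(0)
P = rng.normal(size=(3, 4))             # some camera matrix
X = np.append(rng.normal(size=3), 1)    # a homogeneous scene point
Q = rng.normal(size=(4, 4))             # an (almost surely) invertible 4x4 transform

x1 = P @ X
x2 = (P @ np.linalg.inv(Q)) @ (Q @ X)   # transformed camera and transformed scene
print(np.allclose(x1 / x1[2], x2 / x2[2]))   # True: same image point
```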

53 Types of ambiguity. With no constraints on the camera calibration matrix or on the scene, we get a projective reconstruction; we need additional information to upgrade the reconstruction to affine, similarity, or Euclidean.
Projective, 15 dof: $\begin{pmatrix} A & t \\ v^\top & v \end{pmatrix}$; preserves intersection and tangency.
Affine, 12 dof: $\begin{pmatrix} A & t \\ 0^\top & 1 \end{pmatrix}$; preserves parallelism, volume ratios.
Similarity, 7 dof: $\begin{pmatrix} sR & t \\ 0^\top & 1 \end{pmatrix}$; preserves angles, ratios of lengths.
Euclidean, 6 dof: $\begin{pmatrix} R & t \\ 0^\top & 1 \end{pmatrix}$; preserves angles, lengths.
Slide credit: S. Lazebnik

54 Projective ambiguity: $Q_P = \begin{pmatrix} A & t \\ v^\top & v \end{pmatrix}$, $x = PX = (P Q_P^{-1})(Q_P X)$. Slide credit: S. Lazebnik

55 Projective ambiguity Slide credit: S. Lazebnik

56 Affine ambiguity: $Q_A = \begin{pmatrix} A & t \\ 0^\top & 1 \end{pmatrix}$, $x = PX = (P Q_A^{-1})(Q_A X)$. Slide credit: S. Lazebnik

57 Affine ambiguity Slide credit: S. Lazebnik

58 Similarity ambiguity: $Q_S = \begin{pmatrix} sR & t \\ 0^\top & 1 \end{pmatrix}$, $x = PX = (P Q_S^{-1})(Q_S X)$. Slide credit: S. Lazebnik

59 Similarity ambiguity Slide credit: S. Lazebnik

60 Structure from motion. Let's start with affine cameras (the math is easier): center at infinity. Slide credit: S. Lazebnik

61 Recall: Orthographic Projection. Special case of perspective projection: the distance from the center of projection to the image plane is infinite. Projection matrix (image from world coordinates): $\begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$. Slide credit: S. Seitz

62 Slide credit: S. Lazebnik Affine cameras Orthographic Projection Parallel Projection

63 Affine cameras. A general affine camera combines the effects of an affine transformation of the 3D space, orthographic projection, and an affine transformation of the image: $P_{\text{affine}} = \begin{pmatrix} a_{11} & a_{12} & a_{13} & b_1 \\ a_{21} & a_{22} & a_{23} & b_2 \\ 0 & 0 & 0 & 1 \end{pmatrix} = \begin{pmatrix} A_{2\times 3} & b_{2\times 1} \\ 0 & 1 \end{pmatrix}$. Affine projection is a linear mapping plus translation in inhomogeneous coordinates: $x = \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \end{pmatrix} \begin{pmatrix} X \\ Y \\ Z \end{pmatrix} + \begin{pmatrix} b_1 \\ b_2 \end{pmatrix} = AX + b$, where b is the projection of the world origin. Slide credit: S. Lazebnik

64 Affine structure from motion. Given: m images of n fixed 3D points, $x_{ij} = A_i X_j + b_i$, $i = 1, \ldots, m$, $j = 1, \ldots, n$. Problem: use the mn correspondences $x_{ij}$ to estimate the m projection matrices $A_i$ and translation vectors $b_i$, and the n points $X_j$. The reconstruction is defined up to an arbitrary affine transformation Q (12 degrees of freedom): $\begin{pmatrix} A & b \\ 0 & 1 \end{pmatrix} \to \begin{pmatrix} A & b \\ 0 & 1 \end{pmatrix} Q^{-1}$, $\begin{pmatrix} X \\ 1 \end{pmatrix} \to Q \begin{pmatrix} X \\ 1 \end{pmatrix}$. We have 2mn knowns and 8m + 3n unknowns (minus 12 dof for the affine ambiguity); thus we must have 2mn >= 8m + 3n - 12. For two views, we need four point correspondences. Slide credit: S. Lazebnik

65 Affine structure from motion. Centering: subtract the centroid of the image points. For simplicity, assume that the origin of the world coordinate system is at the centroid of the 3D points. After centering, each normalized point $\hat{x}_{ij}$ is related to the 3D point $X_j$ by $\hat{x}_{ij} = x_{ij} - \frac{1}{n}\sum_{k=1}^{n} x_{ik} = A_i X_j + b_i - \frac{1}{n}\sum_{k=1}^{n} (A_i X_k + b_i) = A_i \Big(X_j - \frac{1}{n}\sum_{k=1}^{n} X_k\Big)$, so $\hat{x}_{ij} = A_i X_j$. Slide credit: S. Lazebnik

66 Affine structure from motion. Let's create a 2m x n data (measurement) matrix: $D = \begin{pmatrix} \hat{x}_{11} & \hat{x}_{12} & \cdots & \hat{x}_{1n} \\ \hat{x}_{21} & \hat{x}_{22} & \cdots & \hat{x}_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ \hat{x}_{m1} & \hat{x}_{m2} & \cdots & \hat{x}_{mn} \end{pmatrix}$ (rows: cameras, 2m; columns: points, n). C. Tomasi and T. Kanade. Shape and motion from image streams under orthography: A factorization method. IJCV, 9(2), November 1992. Slide credit: S. Lazebnik

67 Affine structure from motion. The data matrix factors as $D = \begin{pmatrix} \hat{x}_{11} & \cdots & \hat{x}_{1n} \\ \vdots & & \vdots \\ \hat{x}_{m1} & \cdots & \hat{x}_{mn} \end{pmatrix} = \begin{pmatrix} A_1 \\ \vdots \\ A_m \end{pmatrix} \begin{pmatrix} X_1 & X_2 & \cdots & X_n \end{pmatrix} = MS$, with a 2m x 3 motion matrix M (cameras) and a 3 x n structure matrix S (points). The measurement matrix D = MS must have rank 3 (because the motion matrix M and the structure matrix S are rank 3). C. Tomasi and T. Kanade. Shape and motion from image streams under orthography: A factorization method. IJCV, 9(2), November 1992. Slide credit: S. Lazebnik

68 Factorizing the measurement matrix Slide credit: M. Hebert

69 Slide credit: M. Hebert Factorizing the measurement matrix Singular value decomposition of D:

70 Slide credit: M. Hebert Factorizing the measurement matrix Singular value decomposition of D:

71 Factorizing the measurement matrix Obtaining a factorization from SVD: Slide credit: M. Hebert

72 Factorizing the measurement matrix. Obtaining a factorization from SVD: this decomposition minimizes $\|D - MS\|^2$. Slide credit: M. Hebert

73 Affine ambiguity. The decomposition is not unique: we get the same D by using any 3x3 matrix C and applying the transformations M → MC, S → C⁻¹S. That is because we have only an affine transformation and we have not enforced any Euclidean constraints (like forcing the image axes to be perpendicular, for example). Slide credit: M. Hebert

74 Eliminating the affine ambiguity. Orthographic: image axes are perpendicular and the scale is 1: $a_1 \cdot a_2 = 0$, $\|a_1\|^2 = \|a_2\|^2 = 1$. This translates into 3m equations in $L = CC^\top$: $A_i L A_i^\top = \mathrm{Id}$, $i = 1, \ldots, m$. Solve for L, recover C from L by Cholesky decomposition ($L = CC^\top$), and update M and S: $M \to MC$, $S \to C^{-1}S$. Slide credit: M. Hebert

75 Algorithm summary. Given: m images and n features $x_{ij}$. For each image i, center the feature coordinates. Construct a 2m x n measurement matrix D: column j contains the projection of point j in all views; row i contains one coordinate of the projections of all the n points in image i. Factorize D: compute the SVD $D = U W V^\top$; create $U_3$ by taking the first 3 columns of U, $V_3$ by taking the first 3 columns of V, and $W_3$ by taking the upper-left 3x3 block of W. Create the motion and shape matrices: $M = U_3 W_3^{1/2}$ and $S = W_3^{1/2} V_3^\top$ (or $M = U_3$ and $S = W_3 V_3^\top$). Eliminate the affine ambiguity. Slide credit: M. Hebert
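A compact numpy sketch of the factorization part of this summary (centering, measurement matrix, SVD, rank-3 motion and shape); eliminating the affine ambiguity with the metric constraints above is left out:

```python
import numpy as np

def affine_factorization(x):
    """Tomasi-Kanade-style affine factorization (sketch).

    x: (m, n, 2) array of image points, m views of n points.
    Returns M (2m x 3 motion) and S (3 x n shape), up to an affine ambiguity.
    """
    m, n, _ = x.shape
    # Center the feature coordinates in each image
    xc = x - x.mean(axis=1, keepdims=True)
    # Build the 2m x n measurement matrix D (u row, then v row, per view)
    D = np.vstack([np.vstack([xc[i, :, 0], xc[i, :, 1]]) for i in range(m)])
    # Rank-3 factorization via SVD
    U, W, Vt = np.linalg.svd(D, full_matrices=False)
    U3, W3, V3t = U[:, :3], np.diag(W[:3]), Vt[:3, :]
    M = U3 @ np.sqrt(W3)   # motion (stacked affine cameras)
    S = np.sqrt(W3) @ V3t  # shape (3D points, up to an affine transform)
    return M, S
```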

76 Reconstruction results. C. Tomasi and T. Kanade. Shape and motion from image streams under orthography: A factorization method. IJCV, 9(2), November 1992. Slide credit: S. Lazebnik

77 Dealing with missing data. So far, we have assumed that all points are visible in all views. In reality, the measurement matrix typically looks something like this: (Figure: sparse cameras x points matrix with many missing entries.) Slide credit: S. Lazebnik

78 Dealing with missing data Possible solution: decompose matrix into dense sub-blocks, factorize each sub-block, and fuse the results Finding dense maximal sub-blocks of the matrix is NP-complete (equivalent to finding maximal cliques in a graph) Incremental bilinear refinement (1) Perform factorization on a dense sub-block (2) Solve for a new 3D point visible by at least two known cameras (linear least squares) (3) Solve for a new camera that sees at least three known 3D points (linear least squares) F. Rothganger, S. Lazebnik, C. Schmid, and J. Ponce. Segmenting, Modeling, and Matching Video Clips Containing Multiple Moving Objects. IEEE PAMI Slide credit: S. Lazebnik

79 Projective structure from motion. Given: m images of n fixed 3D points, $z_{ij} x_{ij} = P_i X_j$, $i = 1, \ldots, m$, $j = 1, \ldots, n$. Problem: estimate the m projection matrices $P_i$ and the n 3D points $X_j$ from the mn correspondences $x_{ij}$. Slide credit: S. Lazebnik

80 Projective structure from motion. Given: m images of n fixed 3D points, $z_{ij} x_{ij} = P_i X_j$, $i = 1, \ldots, m$, $j = 1, \ldots, n$. Problem: estimate the m projection matrices $P_i$ and the n 3D points $X_j$ from the mn correspondences $x_{ij}$. With no calibration info, cameras and points can only be recovered up to a 4x4 projective transformation Q: X → QX, P → PQ⁻¹. We can solve for structure and motion when 2mn >= 11m + 3n - 15. For two cameras, at least 7 points are needed. Slide credit: S. Lazebnik

81 Projective SFM: two-camera case. Compute the fundamental matrix F between the two views. First camera matrix: [I | 0]. Second camera matrix: [A | b], where b is the epipole ($F^\top b = 0$) and $A = [b_\times] F$. F&P sec. Slide credit: S. Lazebnik
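A short numpy sketch of this two-camera initialization, assuming F has already been estimated (e.g. with the eight-point algorithm); the function name is ours:

```python
import numpy as np

def cameras_from_fundamental(F):
    """Canonical projective camera pair from a fundamental matrix."""
    # Epipole b in the second image: the null vector of F^T (F^T b = 0)
    _, _, Vt = np.linalg.svd(F.T)
    b = Vt[-1]
    # Skew-symmetric cross-product matrix [b]_x
    bx = np.array([[0, -b[2], b[1]],
                   [b[2], 0, -b[0]],
                   [-b[1], b[0], 0]])
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])  # [I | 0]
    P2 = np.hstack([bx @ F, b.reshape(3, 1)])      # [[b]_x F | b]
    return P1, P2
```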

82 Sequential structure from motion. Initialize motion from two images using the fundamental matrix. Initialize structure by triangulation. For each additional view: determine the projection matrix of the new camera using all the known 3D points that are visible in its image (calibration). Slide credit: S. Lazebnik

83 Sequential structure from motion. Initialize motion from two images using the fundamental matrix. Initialize structure by triangulation. For each additional view: determine the projection matrix of the new camera using all the known 3D points that are visible in its image (calibration); refine and extend structure: compute new 3D points, re-optimize existing points that are also seen by this camera (triangulation). Slide credit: S. Lazebnik

84 Sequential structure from motion. Initialize motion from two images using the fundamental matrix. Initialize structure by triangulation. For each additional view: determine the projection matrix of the new camera using all the known 3D points that are visible in its image (calibration); refine and extend structure: compute new 3D points, re-optimize existing points that are also seen by this camera (triangulation). Refine structure and motion: bundle adjustment. Slide credit: S. Lazebnik

85 Bundle adjustment. Non-linear method for refining structure and motion by minimizing the reprojection error: $E(P, X) = \sum_{i=1}^{m} \sum_{j=1}^{n} D\big(x_{ij}, P_i X_j\big)^2$, where D is the image distance between the observed point $x_{ij}$ and the projection of $X_j$ by camera $P_i$. Slide credit: S. Lazebnik
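A sketch of the reprojection residuals whose squared sum is E(P, X); in practice these would be handed to a non-linear least-squares solver (e.g. scipy.optimize.least_squares) over the stacked camera and point parameters, which is omitted here:

```python
import numpy as np

def reprojection_residuals(cameras, points, observations):
    """cameras: list of 3x4 matrices P_i; points: (n, 3) array of X_j;
    observations: list of (i, j, u, v) measured image points x_ij.
    Returns the stacked residuals whose squared sum is E(P, X)."""
    res = []
    Xh = np.hstack([points, np.ones((len(points), 1))])  # homogeneous X_j
    for i, j, u, v in observations:
        p = cameras[i] @ Xh[j]
        res.extend([u - p[0] / p[2], v - p[1] / p[2]])    # x_ij - proj(P_i, X_j)
    return np.array(res)
```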

86 Slide credit: S. Lazebnik Self-calibration Self-calibration (auto-calibration) is the process of determining intrinsic camera parameters directly from uncalibrated images For example, when the images are acquired by a single moving camera, we can use the constraint that the intrinsic parameter matrix remains fixed for all the images Compute initial projective reconstruction and find 3D projective transformation matrix Q such that all camera matrices are in the form P i = K [R i t i ] Can use constraints on the form of the calibration matrix: zero skew Can use vanishing points

87 Summary: 3D geometric vision Single-view geometry The pinhole camera model Variation: orthographic projection The perspective projection matrix Intrinsic parameters Extrinsic parameters Calibration Multiple-view geometry Triangulation The epipolar constraint Essential matrix and fundamental matrix Stereo Binocular, multi-view Structure from motion Reconstruction ambiguity Affine SFM Projective SFM Slide credit: S. Lazebnik
