MAPI Computer Vision. Multiple View Geometry
- Bernadette Barrett
1 MAPI Computer Vision Multiple View Geometry
2 Geometry of Multiple Views. 2- and 3-view geometry. Camera projection in homogeneous coordinates: p ≃ K p̂ ≃ K [R | t] P.
3 Epipolar Geometry. The epipolar geometry is the intrinsic projective geometry between two views. It is independent of scene structure, and depends only on the cameras' internal parameters and relative pose. Consider the images p and p' of a point P observed by two cameras with optical centers O and O'.
4 Epipolar Geometry. These five points all belong to the epipolar plane defined by the two intersecting rays OP and O'P. In particular, the point p' lies on the line l' where this plane and the retina Π' of the second camera intersect. The line l' is the epipolar line associated with the point p, and it passes through the point e' where the baseline joining the optical centers O and O' intersects Π'. Likewise, the point p lies on the epipolar line l associated with the point p', and this line passes through the intersection e of the baseline with the plane Π.
5 Epipolar Geometry. The points e and e' are called the epipoles of the two cameras. The epipole e' is the (virtual) image of the optical center O of the first camera in the image observed by the second camera, and vice versa. If p and p' are images of the same point, then p' must lie on the epipolar line associated with p.
6 2-Camera Geometry. Given the projection of P in one image, its projection in the other image is constrained to lie on a line, the epipolar line associated with p. This epipolar constraint plays a fundamental role in stereo vision and motion analysis.
7 Epipoles. The epipoles e and e' are at the intersection of the epipolar lines.
8 Special case: frontoparallel cameras. The epipoles are at infinity.
9 Fundamental matrix F. The fundamental matrix is the algebraic representation of epipolar geometry. Geometric derivation: the mapping from a point in one image to the corresponding epipolar line in the other image may be decomposed into two steps. First step: the point x is mapped to some point x' in the other image lying on the epipolar line l'. This point x' is a potential match for the point x. Second step: the epipolar line l' is obtained as the line joining x' to the epipole e'.
10 Fundamental matrix F. Geometric derivation:
x' = Hx (transfer of x via the homography H of a plane)
l' = e' × x' = [e']× x' = [e']× Hx = Fx, so F = [e']× H
where [v]× denotes the skew-symmetric matrix of v = (v1, v2, v3):
[v]× = |  0  -v3  v2 |
       |  v3  0  -v1 |
       | -v2  v1  0  |
11 Fundamental matrix F. Correspondence condition: F maps a point x in the first image to its epipolar line l' in the second. Writing x = (x, y, 1)^T and x' = (x', y', 1)^T, the fundamental matrix satisfies, for any pair of corresponding points x ↔ x',
x'^T F x = 0
which expands to
x'x f11 + x'y f12 + x' f13 + y'x f21 + y'y f22 + y' f23 + x f31 + y f32 + f33 = 0.
12 Fundamental matrix F. Properties:
Transpose: if F is the fundamental matrix of the pair of cameras (P, P'), then F^T is the fundamental matrix of the pair in the opposite order, (P', P).
Epipolar lines: for any point x in the first image, the corresponding epipolar line is l' = Fx. Similarly, l = F^T x' represents the epipolar line corresponding to x' in the second image.
The epipoles: for any point x (other than e) the epipolar line l' = Fx contains the epipole e'. Thus e' satisfies e'^T (Fx) = (e'^T F) x = 0 for all x. It follows that e'^T F = 0, i.e. e' is the left null-vector of F. Similarly Fe = 0, i.e. e is the right null-vector of F.
F has seven degrees of freedom: a 3 x 3 homogeneous matrix has eight independent ratios, but F also satisfies the constraint det F = 0, which removes one degree of freedom.
F is a correlation, a projective map taking a point to a line: a point x in the first image defines a line l' = Fx in the second, which is the epipolar line of x. If l and l' are corresponding epipolar lines, then any point x on l is mapped to the same line l', so there is no inverse mapping.
13 Fundamental matrix F. Epipolar line equation: a line l' = (a, b, c)^T represents ax + by + c = 0, and a point x' belongs to l' iff x'^T l' = 0. With l' = Fx, the coefficients are
a = f11 x + f12 y + f13
b = f21 x + f22 y + f23
c = f31 x + f32 y + f33.
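These relations (l' = Fx, the correspondence condition, and the epipoles as null-vectors of F) can be checked numerically. The matrix F below is a made-up rank-2 example used only for illustration, not one derived from real cameras:

```python
import numpy as np

# A made-up rank-2 fundamental matrix, used only to illustrate the relations.
F = np.array([[ 0.0, -1.0,  2.0],
              [ 1.0,  0.0, -3.0],
              [-2.0,  3.0,  0.0]])

x = np.array([2.0, 1.0, 1.0])      # a point in the first image (homogeneous)
l_prime = F @ x                    # its epipolar line l' = Fx in the second image

# Any point x' on l' satisfies the correspondence condition x'^T F x = 0.
a, b, c = l_prime
x_prime = np.array([-c / a, 0.0, 1.0])   # intersection of l' with the line y = 0
print(x_prime @ F @ x)                   # 0 up to round-off

# The epipoles are the null-vectors: here F e = 0 for e = (3, 2, 1)^T.
e = np.array([3.0, 2.0, 1.0])
print(F @ e)
```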
14 Fundamental matrix F. The epipolar line homography: the set of epipolar lines in each of the images forms a pencil of lines passing through the epipole. Such a pencil of lines may be considered as a 1-dimensional projective space. It is clear from the figure that corresponding epipolar lines are perspectively related, so there is a homography between the pencil of epipolar lines centred at e in the first view and the pencil centred at e' in the second. A homography between two such 1-dimensional projective spaces has 3 degrees of freedom.
15 Fundamental matrix F. The 7 degrees of freedom of the fundamental matrix can thus be counted as follows: 2 for e, 2 for e', and 3 for the epipolar line homography that maps a line through e to a line through e'.
16 Fundamental matrix F. Each correspondence (x_i, y_i) ↔ (x'_i, y'_i) gives one linear equation in the entries of F:
x'_i x_i f11 + x'_i y_i f12 + x'_i f13 + y'_i x_i f21 + y'_i y_i f22 + y'_i f23 + x_i f31 + y_i f32 + f33 = 0.
Stacking the n equations gives the homogeneous system A f = 0, where row i of A is (x'_i x_i, x'_i y_i, x'_i, y'_i x_i, y'_i y_i, y'_i, x_i, y_i, 1) and f = (f11, ..., f33)^T is the 9-vector of the entries of F.
17 Fundamental matrix F. For a solution to exist, matrix A must have rank at most 8; if the rank is exactly 8, the solution is unique (up to scale) and can be found by linear methods. If the data is not exact, because of noise in the point coordinates, the rank of A may be greater than 8 (in fact equal to 9, since A has 9 columns). In this case one finds a least-squares solution: the least-squares solution for f is the singular vector corresponding to the smallest singular value of A. The vector found in this way minimizes ||Af|| subject to the condition ||f|| = 1. The algorithm just described is the essence of a method called the 8-point algorithm for computation of the fundamental matrix.
18 Fundamental matrix F. The singularity constraint: an important property of the fundamental matrix is that it is singular, in fact of rank 2. Furthermore, the left and right null-spaces of F are generated by the vectors representing (in homogeneous coordinates) the two epipoles in the two images. If the fundamental matrix is not singular, then the computed epipolar lines are not coincident, as demonstrated in the figure.
19 Fundamental matrix F. The singularity constraint: the matrix F found by solving the set of linear equations will not in general have rank 2, and we should take steps to enforce this constraint. The most convenient way to do this is to correct the matrix F found by the SVD solution from A: F is replaced by the matrix F' that minimizes the Frobenius norm ||F - F'|| subject to the condition det F' = 0. (SVD = singular value decomposition, used to solve over-determined systems of equations.)
20 Fundamental matrix F. The 8-point algorithm for computation of the fundamental matrix may thus be formulated as two steps:
Linear solution: a solution f is obtained from the singular vector corresponding to the smallest singular value of A.
Constraint enforcement: replace F by F', the closest singular matrix to F under the Frobenius norm. This correction is done using the SVD.
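The two steps can be sketched in code. This is a minimal, unnormalized version for illustration; a practical implementation would first normalize the image coordinates (the normalized 8-point algorithm):

```python
import numpy as np

def eight_point(pts1, pts2):
    """Linear 8-point estimate of F from n >= 8 correspondences.

    pts1, pts2: (n, 2) arrays of matching image coordinates.
    (No coordinate normalization: adequate for well-scaled data.)
    """
    x, y = pts1[:, 0], pts1[:, 1]
    xp, yp = pts2[:, 0], pts2[:, 1]
    # One row of A per correspondence, from x'^T F x = 0.
    A = np.stack([xp * x, xp * y, xp, yp * x, yp * y, yp,
                  x, y, np.ones_like(x)], axis=1)
    # Linear solution: singular vector of the smallest singular value of A.
    f = np.linalg.svd(A)[2][-1]
    F = f.reshape(3, 3)
    # Constraint enforcement: closest rank-2 matrix in the Frobenius norm.
    U, s, Vt = np.linalg.svd(F)
    s[2] = 0.0
    return U @ np.diag(s) @ Vt
```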
21 Fundamental matrix F. The 8-point algorithm assumes that we know the correspondences between the images. In general we do not know the correspondences, and we must find them automatically. Solution: RANSAC = RANdom SAmple Consensus.
22 Fundamental matrix F. Automatic computation of F with RANSAC. Do k times:
» draw a set with the minimum number of correspondences;
» fit F to the set;
» count the number d of correspondences that are closer than a threshold t to the fitted epipolar lines;
» if d > dmin, recompute the fit error using all those correspondences.
Return the best fit found.
23 Structure Computation. Back-projecting rays from the measured image points: triangulation.
24 Stereo Vision. Stereo vision is the process of recovering the three-dimensional location of points in the scene from their projections in images. More precisely, if we have two images I_l and I_r (left and right, from the left and right eyes), then given a pixel p_l in the left image and the corresponding pixel p_r in the right image, the coordinates (X, Y, Z) of the corresponding point in space can be computed.
25 Stereo Vision. Geometrically, given p_l, we know that the point P lies on the line L_l joining p_l and the left optical center C_l (this line is the viewing ray), although we don't know the distance along this line. Similarly, we know that P lies along the line L_r joining p_r and the right optical center C_r. Knowing the parameters of the cameras exactly (intrinsic and extrinsic), we can explicitly compute the parameters of L_l and L_r. Therefore, we can compute the intersection of the two lines, which is the point P. This procedure is called triangulation.
26 Stereo Vision. Correspondence: given a point p_l in one image, find the corresponding point in the other image. Reconstruction: given a correspondence (p_l, p_r), compute the 3-D coordinates of the corresponding point in space, P.
27 Correspondence. Given p_l, finding the corresponding point p_r involves searching the right image for the location p_r such that the right image around p_r looks like the left image around p_l. Take a small window W around p_l and compare it with the right image at all possible locations; the position p_r that gives the best match is reported. The fundamental operation, therefore, is to compare the pixels in a window W(p_l) with the pixels in a window W(p_r), using for example the Sum of Absolute Differences, the Sum of Squared Differences, or Normalized Correlation.
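The three matching functions can be written directly; `w1` and `w2` are equal-size grayscale windows:

```python
import numpy as np

def sad(w1, w2):
    """Sum of absolute differences: 0 for a perfect match, larger is worse."""
    return np.abs(w1 - w2).sum()

def ssd(w1, w2):
    """Sum of squared differences: penalizes large deviations more heavily."""
    return ((w1 - w2) ** 2).sum()

def ncc(w1, w2):
    """Normalized correlation in [-1, 1]: invariant to affine intensity changes."""
    a = w1 - w1.mean()
    b = w2 - w2.mean()
    return (a * b).sum() / np.sqrt((a ** 2).sum() * (b ** 2).sum())
```

Note that `ncc` scores a window and a brightened/contrast-stretched copy of it as a perfect match, which is why it is preferred when the two cameras have different exposures.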
28 Matching Functions.
29 Correspondence: Rectification. Searching along epipolar lines at arbitrary orientations is expensive; we would rather always search along the rows of the right image. Given the epipolar geometry of the stereo pair, there exists in general a transformation that maps the images into a pair of images whose epipolar lines are parallel to the image rows. This transformation is called rectification. Images are almost always rectified before searching for correspondences in order to simplify the search. The exception is when the epipole is inside one of the images; in that case, rectification is not possible.
30 Correspondence: Rectification. Given a plane P in space, there exist two homographies H_l and H_r that map each image plane onto P. That is, if p_l is a point in the left image, then the corresponding point in P is H_l p_l (in homogeneous coordinates). If we map both images to a common plane P such that P is parallel to the line C_l C_r, then the pair of virtual (rectified) images is such that the epipolar lines are parallel.
31 Correspondence: Rectification. The algorithm for rectification is then:
» select a plane P parallel to C_r C_l;
» define the left and right image coordinate systems on P;
» construct the rectification matrices H_l and H_r from P and the virtual images' coordinate systems.
32 Disparity. Assuming that the images are rectified, given two corresponding points p_l and p_r, the difference of their coordinates along the epipolar line, x_l - x_r, is called the disparity d. The disparity is the quantity that is directly measured from the correspondence. The corresponding 3-D point P can be computed from p_l and d, assuming that the camera parameters are known.
33 Disparity.
34 Recover the 3-D coordinates: two cameras with parallel image planes. Consider a point P of coordinates (X, Y, Z), projected to p_l and p_r in the two images. The focal length of the cameras is denoted by f, and the distance between the optical centers (the baseline) by B. Looking at the diagram, the triangles (C_r, C_l, P) and (p_r, p_l, P) are similar, so the ratios of their base to height are equal:
B / Z = (B - x_l + x_r) / (Z - f), which gives Z = f B / d with disparity d = x_l - x_r.
35 Recover the 3-D coordinates. This relation, Z = f B / d, is the fundamental relation of stereo. It states that the depth is inversely proportional to the disparity. Once we know Z, the other two coordinates are derived using the standard perspective equations:
X = x_l Z / f
Y = y_l Z / f
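A minimal sketch of the reconstruction, with illustrative values for the focal length f (in pixels) and baseline B (in meters), and image coordinates measured from the principal point:

```python
# Illustrative camera parameters (not from any particular device).
f = 700.0    # focal length in pixels
B = 0.12     # baseline in meters

def reconstruct(xl, yl, xr):
    """3-D point from a rectified correspondence (xl, yl) <-> (xr, yl)."""
    d = xl - xr          # disparity in pixels
    Z = f * B / d        # fundamental relation of stereo
    X = xl * Z / f       # perspective back-projection
    Y = yl * Z / f
    return X, Y, Z
```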
36-38 Recover the 3-D coordinates: commercial stereo solutions (product figures).
39 Recover the 3-D coordinates. How accurately can those coordinates be computed? Consider a match with disparity d corresponding to a depth Z. If we make an error of one pixel (d + 1 instead of d), the depth becomes Z'. We want to evaluate ΔZ, the error in depth due to the error in disparity. Taking the derivative of Z as a function of d, we get:
ΔZ / Δd = f B / d^2 = Z^2 / (f B)
40 Recover the 3-D coordinates. This relation illustrates the fundamental trade-off between baseline, focal length and accuracy of stereo reconstruction: for an error of 1 pixel in disparity, we get an error in Z of ΔZ = Z^2 / (f B).
41 Recover the 3-D coordinates. Depth: the resolution of the stereo reconstruction decreases quadratically with depth. This implies a severe limitation on the applicability of stereo. If we assume sub-pixel disparity interpolation with a resolution Δd, the depth resolution becomes ΔZ = Z^2 Δd / (f B), but it still remains quadratic in depth.
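The error model above reduces to a one-line helper; f and B are the same illustrative values used earlier:

```python
# Illustrative camera parameters.
f = 700.0    # focal length in pixels
B = 0.12     # baseline in meters

def depth_error(Z, delta_d=1.0):
    """First-order depth error for a disparity error of delta_d pixels:
    ΔZ = Z^2 Δd / (f B) — quadratic in depth."""
    return Z ** 2 * delta_d / (f * B)
```

Doubling the depth quadruples the error, which is the quadratic degradation the slide describes.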
42 Recover the 3-D coordinates. Baseline: the resolution improves as the baseline increases, so we would be tempted to always use as large a baseline as possible. However, the matching becomes increasingly difficult as the baseline increases (for a given depth), because of the increasing amount of distortion between the left and the right images. Focal length: the resolution improves with focal length. Intuitively, this is because, for a given image size, the density of pixels in the image plane increases as f increases; therefore the disparity resolution is higher.
43 Recover the 3-D coordinates. The previous discussion assumes that the viewing rays from the left and right cameras intersect exactly. That is usually not the case, because of small errors in calibration: the two viewing rays pass close to each other but do not exactly intersect. The point P is then reconstructed as the point that is closest to both lines.
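A common way to compute this closest point is the midpoint method: find the closest points on the two rays and return the midpoint of the segment joining them. A sketch, solving the small normal-equation system for the two ray parameters:

```python
import numpy as np

def midpoint_triangulation(c1, d1, c2, d2):
    """Point closest to two (possibly skew) viewing rays c_i + s_i * d_i.

    Solves the 2x2 stationarity system of min ||(c1 + s1 d1) - (c2 + s2 d2)||^2
    and returns the midpoint of the shortest connecting segment.
    """
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    A = np.array([[d1 @ d1, -(d1 @ d2)],
                  [d1 @ d2, -(d2 @ d2)]])
    b = np.array([(c2 - c1) @ d1, (c2 - c1) @ d2])
    s1, s2 = np.linalg.solve(A, b)
    return 0.5 * ((c1 + s1 * d1) + (c2 + s2 * d2))
```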
44 Recover the 3-D coordinates. Sub-pixel disparity: the disparity is computed by moving a window one pixel at a time, so the disparity is known only up to one pixel. This limitation on the resolution of the disparity translates into a severe limitation on the accuracy of the recovered 3-D coordinates. One effective way to get around this problem is to recover the disparity at a finer resolution by interpolating between the pixel disparities using quadratic interpolation.
45 Recover the 3-D coordinates. Sub-pixel disparity: suppose that the best disparity at a pixel is obtained at d_0, with a matching value (for example SSD) of S(d_0). We can obtain a second-order approximation of the (unknown) function S(d) by fitting a parabola. At the position d_opt corresponding to the bottom of the parabola, S(d_opt) <= S(d_0); therefore d_opt is a better estimate of the disparity than d_0. The question remains how to find this approximating parabola.
46 Recover the 3-D coordinates. Sub-pixel disparity: let us first translate all the disparity values so that d_0 = 0. The equation of a general parabola is S(d) = a d^2 + b d + c. To recover the 3 parameters of the parabola we need 3 equations, which we obtain by requiring that the parabola pass through the points of disparity 0, -1 and +1:
S(0) = c, S(1) = a + b + c, S(-1) = a - b + c
Solving this, we obtain:
c = S(0), a = (S(1) + S(-1) - 2 S(0)) / 2, b = (S(1) - S(-1)) / 2
The bottom of the parabola is obtained at d_opt such that S'(d) = 2 a d + b = 0; therefore the optimal disparity is d_opt = -b / (2a).
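The fit reduces to a closed-form formula, sketched here (adding the offset back to the integer disparity d_0):

```python
def subpixel_disparity(d0, s_minus, s0, s_plus):
    """Refine integer disparity d0 from the three matching values
    s_minus = S(d0-1), s0 = S(d0), s_plus = S(d0+1) by fitting a parabola
    through them and returning the abscissa of its vertex."""
    a = (s_plus + s_minus - 2.0 * s0) / 2.0
    b = (s_plus - s_minus) / 2.0
    return d0 - b / (2.0 * a)   # vertex at -b/(2a), shifted back by d0
```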
47 Recover the 3-D coordinates. Matching confidence: stereo matching assumes that there is enough information in the images to discriminate between different positions of the matching window. That is not the case in regions of the image in which the intensity is nearly constant: in those regions there is not a single minimum of the matching function, and the disparity cannot be computed. An unreliable disparity estimate can be detected by computing the gradient of the image in the x direction, I_x, and the local average of its squared magnitude, Σ I_x^2. This quantity can then be used as a measure of the reliability of the matching (a confidence measure).
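A minimal sketch of this confidence measure; the window half-size `w` and the forward-difference gradient are implementation choices, not prescribed by the text:

```python
import numpy as np

def matching_confidence(img, p, w=5):
    """Local average of the squared horizontal gradient around pixel p = (row, col).

    Low values flag nearly constant regions where the disparity is unreliable.
    """
    Ix = np.diff(img.astype(float), axis=1)          # forward-difference I_x
    r, c = p
    patch = Ix[r - w : r + w + 1, c - w : c + w + 1]
    return (patch ** 2).mean()
```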
48 Recover the 3-D coordinates. Lighting issues: a problem is that the lighting conditions may be substantially different between the left and right images, for example because of different exposures or different settings of the cameras. Because the matching function ψ directly measures the difference in pixel values, its value will be corrupted.
49 Recover the 3-D coordinates. Lighting issues: normalized correlation reduces this problem. Laplacian of Gaussian (LOG): smoothly varying parts of the image do not carry much information for matching; the useful information is contained in higher-frequency variations of intensity.
50 Recover the 3-D coordinates. Laplacian of Gaussian (LOG), continued: we want to eliminate the slowly varying (low-frequency) parts of the image, which can be done using second derivatives. We also want to eliminate the high-frequency components, which correspond to noise in the image, which suggests blurring with a Gaussian filter. The combination of the two suggests the Laplacian of Gaussian (LOG) filter, previously defined in the context of edge detection. This filter is technically a band-pass filter (it removes both high and low frequencies). In practice, the images are first convolved with the LOG filter before matching.
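A sampled LOG kernel can be built directly from the analytic expression; the 3σ kernel radius and the zero-mean normalization below are implementation choices:

```python
import numpy as np

def log_kernel(sigma, radius=None):
    """Sampled Laplacian-of-Gaussian kernel (a band-pass filter).

    The mean is subtracted so the kernel has exactly zero DC response:
    convolving a constant (information-free) region gives zero output.
    """
    r = int(3 * sigma) if radius is None else radius
    y, x = np.mgrid[-r : r + 1, -r : r + 1]
    s2 = sigma * sigma
    # LoG up to a positive scale factor: ((x^2+y^2-2s^2)/s^4) * Gaussian
    k = (x * x + y * y - 2 * s2) / (s2 * s2) * np.exp(-(x * x + y * y) / (2 * s2))
    return k - k.mean()
```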
52 Recover the 3-D coordinates. Effect of window size: window size has qualitatively the same effect as smoothing. Localization is better with smaller windows; in particular, the disparity image is corrupted less near occluding edges because of the better-localized match. Matching is better with larger windows because more pixels are taken into account, so there is better discrimination between window positions; localization degrades as the window size increases, for the same reason. For large windows the minimum is flatter (lower curvature), leading to lower confidence in the matching as defined before.
53 Recover the 3-D coordinates. Ambiguity: in many cases, several positions along the epipolar line can match the window around the left pixel. In that case there is an ambiguity as to which point leads to the correct 3-D reconstruction. If the ambiguous matches are far apart, they correspond to very different points in space, leading to large errors. In practice it is nearly impossible to eliminate all such ambiguities, especially in environments with lots of regular structure. The solution is generally to use more than two cameras in stereo.
54 Recover the 3-D coordinates: >2 cameras, the multibaseline approach. Suppose that the three cameras are aligned. A point P corresponds to three points in the images, aligned along the horizontal epipolar line.
55 Recover the 3-D coordinates: multibaseline approach. The disparities d_12 and d_13 of those points between the first image and images 2 and 3 are in general different because the baselines are different:
d_12 = f B_12 / Z, d_13 = f B_13 / Z, so 1/Z = d_12 / (f B_12) = d_13 / (f B_13).
56 Recover the 3-D coordinates: multibaseline approach. If we plot the matching curves S_12(d) not as a function of the disparity d but as a function of d' = d / (f B), then, assuming the matches are correct, all the curves have the same minimum at d' = 1/Z. Instead of using S_12 and S_13 separately, we can combine them into a single matching function S(d') = S_12 + S_13 and find the minimum of S(d').
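The resampling-and-summing idea can be sketched with linear interpolation; the function name and data layout here are illustrative:

```python
import numpy as np

def combined_cost(curves, inv_depth_grid, f):
    """Resample per-pair matching curves from disparity to inverse depth and sum them.

    curves: list of (disparities, costs, baseline) triples. Since d = f*B/Z,
    each curve becomes a function of d/(f*B) = 1/Z, so correct matches line up
    at the same abscissa and the summed curve has a single sharp minimum.
    """
    total = np.zeros_like(inv_depth_grid)
    for d, S, B in curves:
        total += np.interp(inv_depth_grid, d / (f * B), S)  # d/(f*B) is increasing
    return total
```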
57 Recover the 3-D coordinates: multibaseline approach. Using a multibaseline approach combines the advantages of both short and long baselines: the short baseline makes the matching easier but gives poorer localization in space, while the longer baseline gives higher-precision localization but more difficult matching. Furthermore, ambiguous matches do not generally appear at multiple baselines, so by combining the matching functions into a single function we get a sharper, unique peak of the matching function.
More informationCamera Model and Calibration
Camera Model and Calibration Lecture-10 Camera Calibration Determine extrinsic and intrinsic parameters of camera Extrinsic 3D location and orientation of camera Intrinsic Focal length The size of the
More informationStereo II CSE 576. Ali Farhadi. Several slides from Larry Zitnick and Steve Seitz
Stereo II CSE 576 Ali Farhadi Several slides from Larry Zitnick and Steve Seitz Camera parameters A camera is described by several parameters Translation T of the optical center from the origin of world
More informationCorrespondence and Stereopsis. Original notes by W. Correa. Figures from [Forsyth & Ponce] and [Trucco & Verri]
Correspondence and Stereopsis Original notes by W. Correa. Figures from [Forsyth & Ponce] and [Trucco & Verri] Introduction Disparity: Informally: difference between two pictures Allows us to gain a strong
More informationLecture 6 Stereo Systems Multi- view geometry Professor Silvio Savarese Computational Vision and Geometry Lab Silvio Savarese Lecture 6-24-Jan-15
Lecture 6 Stereo Systems Multi- view geometry Professor Silvio Savarese Computational Vision and Geometry Lab Silvio Savarese Lecture 6-24-Jan-15 Lecture 6 Stereo Systems Multi- view geometry Stereo systems
More informationToday. Stereo (two view) reconstruction. Multiview geometry. Today. Multiview geometry. Computational Photography
Computational Photography Matthias Zwicker University of Bern Fall 2009 Today From 2D to 3D using multiple views Introduction Geometry of two views Stereo matching Other applications Multiview geometry
More informationComputer Vision Lecture 17
Computer Vision Lecture 17 Epipolar Geometry & Stereo Basics 13.01.2015 Bastian Leibe RWTH Aachen http://www.vision.rwth-aachen.de leibe@vision.rwth-aachen.de Announcements Seminar in the summer semester
More informationLecture'9'&'10:'' Stereo'Vision'
Lecture'9'&'10:'' Stereo'Vision' Dr.'Juan'Carlos'Niebles' Stanford'AI'Lab' ' Professor'FeiAFei'Li' Stanford'Vision'Lab' 1' Dimensionality'ReducIon'Machine'(3D'to'2D)' 3D world 2D image Point of observation
More informationWeek 2: Two-View Geometry. Padua Summer 08 Frank Dellaert
Week 2: Two-View Geometry Padua Summer 08 Frank Dellaert Mosaicking Outline 2D Transformation Hierarchy RANSAC Triangulation of 3D Points Cameras Triangulation via SVD Automatic Correspondence Essential
More informationLaser sensors. Transmitter. Receiver. Basilio Bona ROBOTICA 03CFIOR
Mobile & Service Robotics Sensors for Robotics 3 Laser sensors Rays are transmitted and received coaxially The target is illuminated by collimated rays The receiver measures the time of flight (back and
More informationComputer Vision Lecture 17
Announcements Computer Vision Lecture 17 Epipolar Geometry & Stereo Basics Seminar in the summer semester Current Topics in Computer Vision and Machine Learning Block seminar, presentations in 1 st week
More informationGeometry of a single camera. Odilon Redon, Cyclops, 1914
Geometr o a single camera Odilon Redon, Cclops, 94 Our goal: Recover o 3D structure Recover o structure rom one image is inherentl ambiguous??? Single-view ambiguit Single-view ambiguit Rashad Alakbarov
More information3D Sensing and Reconstruction Readings: Ch 12: , Ch 13: ,
3D Sensing and Reconstruction Readings: Ch 12: 12.5-6, Ch 13: 13.1-3, 13.9.4 Perspective Geometry Camera Model Stereo Triangulation 3D Reconstruction by Space Carving 3D Shape from X means getting 3D coordinates
More informationStructure from motion
Structure from motion Structure from motion Given a set of corresponding points in two or more images, compute the camera parameters and the 3D point coordinates?? R 1,t 1 R 2,t R 2 3,t 3 Camera 1 Camera
More informationDense 3D Reconstruction. Christiano Gava
Dense 3D Reconstruction Christiano Gava christiano.gava@dfki.de Outline Previous lecture: structure and motion II Structure and motion loop Triangulation Wide baseline matching (SIFT) Today: dense 3D reconstruction
More informationEpipolar Geometry CSE P576. Dr. Matthew Brown
Epipolar Geometry CSE P576 Dr. Matthew Brown Epipolar Geometry Epipolar Lines, Plane Constraint Fundamental Matrix, Linear solution + RANSAC Applications: Structure from Motion, Stereo [ Szeliski 11] 2
More informationLecture 5 Epipolar Geometry
Lecture 5 Epipolar Geometry Professor Silvio Savarese Computational Vision and Geometry Lab Silvio Savarese Lecture 5-24-Jan-18 Lecture 5 Epipolar Geometry Why is stereo useful? Epipolar constraints Essential
More informationAnnouncements. Stereo
Announcements Stereo Homework 1 is due today, 11:59 PM Homework 2 will be assigned on Thursday Reading: Chapter 7: Stereopsis CSE 252A Lecture 8 Binocular Stereopsis: Mars Given two images of a scene where
More information3D reconstruction class 11
3D reconstruction class 11 Multiple View Geometry Comp 290-089 Marc Pollefeys Multiple View Geometry course schedule (subject to change) Jan. 7, 9 Intro & motivation Projective 2D Geometry Jan. 14, 16
More informationFundamental Matrix & Structure from Motion
Fundamental Matrix & Structure from Motion Instructor - Simon Lucey 16-423 - Designing Computer Vision Apps Today Transformations between images Structure from Motion The Essential Matrix The Fundamental
More informationA Summary of Projective Geometry
A Summary of Projective Geometry Copyright 22 Acuity Technologies Inc. In the last years a unified approach to creating D models from multiple images has been developed by Beardsley[],Hartley[4,5,9],Torr[,6]
More informationGeometry of Multiple views
1 Geometry of Multiple views CS 554 Computer Vision Pinar Duygulu Bilkent University 2 Multiple views Despite the wealth of information contained in a a photograph, the depth of a scene point along the
More informationComputer Vision I - Algorithms and Applications: Multi-View 3D reconstruction
Computer Vision I - Algorithms and Applications: Multi-View 3D reconstruction Carsten Rother 09/12/2013 Computer Vision I: Multi-View 3D reconstruction Roadmap this lecture Computer Vision I: Multi-View
More informationEXAM SOLUTIONS. Image Processing and Computer Vision Course 2D1421 Monday, 13 th of March 2006,
School of Computer Science and Communication, KTH Danica Kragic EXAM SOLUTIONS Image Processing and Computer Vision Course 2D1421 Monday, 13 th of March 2006, 14.00 19.00 Grade table 0-25 U 26-35 3 36-45
More information3D Computer Vision. Structure from Motion. Prof. Didier Stricker
3D Computer Vision Structure from Motion Prof. Didier Stricker Kaiserlautern University http://ags.cs.uni-kl.de/ DFKI Deutsches Forschungszentrum für Künstliche Intelligenz http://av.dfki.de 1 Structure
More informationCamera Geometry II. COS 429 Princeton University
Camera Geometry II COS 429 Princeton University Outline Projective geometry Vanishing points Application: camera calibration Application: single-view metrology Epipolar geometry Application: stereo correspondence
More informationComputer Vision Projective Geometry and Calibration. Pinhole cameras
Computer Vision Projective Geometry and Calibration Professor Hager http://www.cs.jhu.edu/~hager Jason Corso http://www.cs.jhu.edu/~jcorso. Pinhole cameras Abstract camera model - box with a small hole
More informationMidterm Exam Solutions
Midterm Exam Solutions Computer Vision (J. Košecká) October 27, 2009 HONOR SYSTEM: This examination is strictly individual. You are not allowed to talk, discuss, exchange solutions, etc., with other fellow
More informationEpipolar Geometry and Stereo Vision
CS 1699: Intro to Computer Vision Epipolar Geometry and Stereo Vision Prof. Adriana Kovashka University of Pittsburgh October 8, 2015 Today Review Projective transforms Image stitching (homography) Epipolar
More informationCS6670: Computer Vision
CS6670: Computer Vision Noah Snavely Lecture 7: Image Alignment and Panoramas What s inside your fridge? http://www.cs.washington.edu/education/courses/cse590ss/01wi/ Projection matrix intrinsics projection
More information521466S Machine Vision Exercise #1 Camera models
52466S Machine Vision Exercise # Camera models. Pinhole camera. The perspective projection equations or a pinhole camera are x n = x c, = y c, where x n = [x n, ] are the normalized image coordinates,
More informationBut First: Multi-View Projective Geometry
View Morphing (Seitz & Dyer, SIGGRAPH 96) Virtual Camera Photograph Morphed View View interpolation (ala McMillan) but no depth no camera information Photograph But First: Multi-View Projective Geometry
More informationCameras and Stereo CSE 455. Linda Shapiro
Cameras and Stereo CSE 455 Linda Shapiro 1 Müller-Lyer Illusion http://www.michaelbach.de/ot/sze_muelue/index.html What do you know about perspective projection? Vertical lines? Other lines? 2 Image formation
More informationComputer Vision Project-1
University of Utah, School Of Computing Computer Vision Project- Singla, Sumedha sumedha.singla@utah.edu (00877456 February, 205 Theoretical Problems. Pinhole Camera (a A straight line in the world space
More informationCamera Model and Calibration. Lecture-12
Camera Model and Calibration Lecture-12 Camera Calibration Determine extrinsic and intrinsic parameters of camera Extrinsic 3D location and orientation of camera Intrinsic Focal length The size of the
More informationECE 470: Homework 5. Due Tuesday, October 27 in Seth Hutchinson. Luke A. Wendt
ECE 47: Homework 5 Due Tuesday, October 7 in class @:3pm Seth Hutchinson Luke A Wendt ECE 47 : Homework 5 Consider a camera with focal length λ = Suppose the optical axis of the camera is aligned with
More informationA Factorization Method for Structure from Planar Motion
A Factorization Method for Structure from Planar Motion Jian Li and Rama Chellappa Center for Automation Research (CfAR) and Department of Electrical and Computer Engineering University of Maryland, College
More informationCOSC579: Scene Geometry. Jeremy Bolton, PhD Assistant Teaching Professor
COSC579: Scene Geometry Jeremy Bolton, PhD Assistant Teaching Professor Overview Linear Algebra Review Homogeneous vs non-homogeneous representations Projections and Transformations Scene Geometry The
More informationDepth from two cameras: stereopsis
Depth from two cameras: stereopsis Epipolar Geometry Canonical Configuration Correspondence Matching School of Computer Science & Statistics Trinity College Dublin Dublin 2 Ireland www.scss.tcd.ie Lecture
More informationLecture 6 Stereo Systems Multi-view geometry
Lecture 6 Stereo Systems Multi-view geometry Professor Silvio Savarese Computational Vision and Geometry Lab Silvio Savarese Lecture 6-5-Feb-4 Lecture 6 Stereo Systems Multi-view geometry Stereo systems
More informationFig. 3.1: Interpolation schemes for forward mapping (left) and inverse mapping (right, Jähne, 1997).
Eicken, GEOS 69 - Geoscience Image Processing Applications, Lecture Notes - 17-3. Spatial transorms 3.1. Geometric operations (Reading: Castleman, 1996, pp. 115-138) - a geometric operation is deined as
More informationNeighbourhood Operations
Neighbourhood Operations Neighbourhood operations simply operate on a larger neighbourhood o piels than point operations Origin Neighbourhoods are mostly a rectangle around a central piel Any size rectangle
More informationChapter 3 Image Enhancement in the Spatial Domain
Chapter 3 Image Enhancement in the Spatial Domain Yinghua He School o Computer Science and Technology Tianjin University Image enhancement approaches Spatial domain image plane itsel Spatial domain methods
More informationComputer Vision I Name : CSE 252A, Fall 2012 Student ID : David Kriegman Assignment #1. (Due date: 10/23/2012) x P. = z
Computer Vision I Name : CSE 252A, Fall 202 Student ID : David Kriegman E-Mail : Assignment (Due date: 0/23/202). Perspective Projection [2pts] Consider a perspective projection where a point = z y x P
More informationElements of Computer Vision: Multiple View Geometry. 1 Introduction. 2 Elements of Geometry. Andrea Fusiello
Elements of Computer Vision: Multiple View Geometry. Andrea Fusiello http://www.sci.univr.it/~fusiello July 11, 2005 c Copyright by Andrea Fusiello. This work is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike
More informationIndex. 3D reconstruction, point algorithm, point algorithm, point algorithm, point algorithm, 263
Index 3D reconstruction, 125 5+1-point algorithm, 284 5-point algorithm, 270 7-point algorithm, 265 8-point algorithm, 263 affine point, 45 affine transformation, 57 affine transformation group, 57 affine
More informationColorado School of Mines. Computer Vision. Professor William Hoff Dept of Electrical Engineering &Computer Science.
Professor William Hoff Dept of Electrical Engineering &Computer Science http://inside.mines.edu/~whoff/ 1 Stereo Vision 2 Inferring 3D from 2D Model based pose estimation single (calibrated) camera Stereo
More informationEpipolar Geometry and Stereo Vision
CS 1674: Intro to Computer Vision Epipolar Geometry and Stereo Vision Prof. Adriana Kovashka University of Pittsburgh October 5, 2016 Announcement Please send me three topics you want me to review next
More informationMulti-View Geometry Part II (Ch7 New book. Ch 10/11 old book)
Multi-View Geometry Part II (Ch7 New book. Ch 10/11 old book) Guido Gerig CS-GY 6643, Spring 2016 gerig@nyu.edu Credits: M. Shah, UCF CAP5415, lecture 23 http://www.cs.ucf.edu/courses/cap6411/cap5415/,
More informationDepth from two cameras: stereopsis
Depth from two cameras: stereopsis Epipolar Geometry Canonical Configuration Correspondence Matching School of Computer Science & Statistics Trinity College Dublin Dublin 2 Ireland www.scss.tcd.ie Lecture
More informationStructure from Motion and Multi- view Geometry. Last lecture
Structure from Motion and Multi- view Geometry Topics in Image-Based Modeling and Rendering CSE291 J00 Lecture 5 Last lecture S. J. Gortler, R. Grzeszczuk, R. Szeliski,M. F. Cohen The Lumigraph, SIGGRAPH,
More informationUndergrad HTAs / TAs. Help me make the course better! HTA deadline today (! sorry) TA deadline March 21 st, opens March 15th
Undergrad HTAs / TAs Help me make the course better! HTA deadline today (! sorry) TA deadline March 2 st, opens March 5th Project 2 Well done. Open ended parts, lots of opportunity for mistakes. Real implementation
More informationCSE 252B: Computer Vision II
CSE 252B: Computer Vision II Lecturer: Serge Belongie Scribe: Sameer Agarwal LECTURE 1 Image Formation 1.1. The geometry of image formation We begin by considering the process of image formation when a
More informationThere are many cues in monocular vision which suggests that vision in stereo starts very early from two similar 2D images. Lets see a few...
STEREO VISION The slides are from several sources through James Hays (Brown); Srinivasa Narasimhan (CMU); Silvio Savarese (U. of Michigan); Bill Freeman and Antonio Torralba (MIT), including their own
More informationRectification and Disparity
Rectification and Disparity Nassir Navab Slides prepared by Christian Unger What is Stereo Vision? Introduction A technique aimed at inferring dense depth measurements efficiently using two cameras. Wide
More informationCOMPARATIVE STUDY OF DIFFERENT APPROACHES FOR EFFICIENT RECTIFICATION UNDER GENERAL MOTION
COMPARATIVE STUDY OF DIFFERENT APPROACHES FOR EFFICIENT RECTIFICATION UNDER GENERAL MOTION Mr.V.SRINIVASA RAO 1 Prof.A.SATYA KALYAN 2 DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING PRASAD V POTLURI SIDDHARTHA
More information