Vision 3D artificielle: Disparity maps, correlation

1 Vision 3D artificielle: Disparity maps, correlation. Pascal Monasse, IMAGINE, École des Ponts ParisTech

2 Contents: Triangulation, Epipolar rectification, Disparity map

3 Contents: Triangulation, Epipolar rectification, Disparity map

4 Triangulation
Let us write again the binocular formulae: λx = K(RX + T), λ'x' = K'X.
Write Y^T = (X^T, 1, λ, λ'):
( KR   KT   -x    0_3 )
( K'   0_3  0_3   -x' ) Y = 0_6
(6 equations, 5 unknowns + 1 epipolar constraint). We can then recover X.
Special case: R = Id, T = B e_1. We get z (x - K K'^{-1} x') = (Bf, 0, 0)^T.
If also K = K', with d the disparity: z = fB / [(x - x') · e_1] = fB/d.
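
The 6x6 homogeneous system above can be solved directly by SVD. A minimal sketch in C++ with Eigen (Eigen and the function name are assumptions, not part of the course material); x and x' are homogeneous pixel coordinates (u, v, 1), and X is returned in the frame of the second camera, following the slide's convention:

    #include <Eigen/Dense>

    // Solve [ KR  KT  -x  0 ; K'  0  0  -x' ] Y = 0 with Y = (X, 1, lambda, lambda'),
    // taking the right singular vector of the smallest singular value (least squares).
    Eigen::Vector3d triangulate(const Eigen::Matrix3d& K,  const Eigen::Matrix3d& Kp,
                                const Eigen::Matrix3d& R,  const Eigen::Vector3d& T,
                                const Eigen::Vector3d& x,  const Eigen::Vector3d& xp) {
        Eigen::Matrix<double,6,6> A = Eigen::Matrix<double,6,6>::Zero();
        A.block<3,3>(0,0) = K * R;   // K R
        A.block<3,1>(0,3) = K * T;   // K T
        A.block<3,1>(0,4) = -x;      // -lambda x
        A.block<3,3>(3,0) = Kp;      // K'
        A.block<3,1>(3,5) = -xp;     // -lambda' x'
        Eigen::JacobiSVD<Eigen::Matrix<double,6,6>> svd(A, Eigen::ComputeFullV);
        Eigen::Matrix<double,6,1> Y = svd.matrixV().col(5);
        return Y.head<3>() / Y(3);   // X = (Y_1, Y_2, Y_3) / Y_4
    }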

5 Triangulation
Fundamental principle of stereo vision: for z ≪ H, (B/H) z ≈ (H/f) d.
f: focal length. H: distance from optical center to ground. B: distance between optical centers (baseline).
Goal: given two rectified images, point correspondences and the computation of their apparent shift (disparity) give information about the relative depth of the scene.

6 Recovery of R and T
Suppose we know K, K', and F or E. Can we recover R and T?
From E = [T]_× R:
E^T E = R^T (||T||^2 I - T T^T) R = ||T||^2 I - (R^T T)(R^T T)^T
If x = R^T T, then E^T E x = 0, and if y · x = 0, E^T E y = ||T||^2 y. Therefore σ_1 = σ_2 = ||T|| and σ_3 = 0.
Conversely, from E = U diag(σ, σ, 0) V^T, we can write E = (σ U [e_3]_× U^T)(U R_z(±π/2) V^T) = σ [T]_× R, that is
[T]_× = ±σ U [e_3]_× U^T,   R = U R_z(±π/2) V^T
Actually, there are up to 4 solutions (figure: the four possible camera configurations).
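
The SVD decomposition above translates directly into code. A sketch in C++ with Eigen returning the four (R, T) candidates (the use of Eigen and the function name are assumptions, not the course's code; the norm of T is arbitrary, since only its direction is recoverable):

    #include <Eigen/Dense>
    #include <utility>
    #include <vector>

    // From E = U diag(s, s, 0) V^T: T = +-U e_3 (up to scale), R = U R_z(+-pi/2) V^T.
    std::vector<std::pair<Eigen::Matrix3d, Eigen::Vector3d>>
    decomposeEssential(const Eigen::Matrix3d& E) {
        Eigen::JacobiSVD<Eigen::Matrix3d> svd(E, Eigen::ComputeFullU | Eigen::ComputeFullV);
        Eigen::Matrix3d U = svd.matrixU(), V = svd.matrixV();
        if (U.determinant() < 0) U.col(2) *= -1;   // keep U and V proper rotations
        if (V.determinant() < 0) V.col(2) *= -1;
        Eigen::Matrix3d Rz;                        // R_z(pi/2)
        Rz << 0, -1, 0,
              1,  0, 0,
              0,  0, 1;
        Eigen::Vector3d T = U.col(2);              // +-sigma U e_3, scale dropped
        std::vector<std::pair<Eigen::Matrix3d, Eigen::Vector3d>> sols;
        for (int s = 0; s < 2; ++s) {
            Eigen::Matrix3d R = U * (s ? Rz.transpose() : Rz) * V.transpose();
            sols.push_back({R,  T});               // 4 candidates; keep the one putting
            sols.push_back({R, -T});               // the points in front of both cameras
        }
        return sols;
    }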

7 What is possible without calibration?
We can recover F, but not E. Indeed, from x = PX, x' = P'X, we see that we also have x = (PH^{-1})(HX), x' = (P'H^{-1})(HX).
Interpretation: applying a space homography H and transforming the projection matrices accordingly (this changes K, K', R and T), we get exactly the same projections.
Consequence: in the uncalibrated case, reconstruction can only be done modulo a 3D space homography.

8 Contents: Triangulation, Epipolar rectification, Disparity map

9 Epipolar rectification
It is convenient to get to a situation where epipolar lines are parallel and at the same ordinate in both images. As a consequence, epipoles are at horizontal infinity: e = e' = (1, 0, 0)^T.
It is always possible to get to that situation by virtual rotation of the cameras (application of a homography): the image planes then coincide and are parallel to the baseline.

10 Epipolar rectification: Image 1

11 Epipolar rectification: Image 2

12 Epipolar rectification: Image 1, Rectified image 1

13 Epipolar rectification: Image 2, Rectified image 2

14 Epipolar rectification
The fundamental matrix can then be written
F = [e_1]_× = ( 0 0 0 ; 0 0 -1 ; 0 1 0 ),
thus x'^T F x = 0, that is y = y'.
Writing the projection matrices P = K ( I  0 ) and P' = K' ( I  B e_1 ) with
K = ( f_x s c_x ; 0 f_y c_y ; 0 0 1 ),   K' = ( f'_x s' c'_x ; 0 f'_y c'_y ; 0 0 1 ),
we get F = B K'^{-T} [e_1]_× K^{-1} = B/(f_y f'_y) ( 0 0 0 ; 0 0 -f_y ; 0 f'_y  c'_y f_y - c_y f'_y ).
We must have f_y = f'_y and c_y = c'_y, that is, identical second rows of K and K'.

15 Epipolar rectification
We are looking for homographies H and H' to apply to the images such that F = H'^T [e_1]_× H.
That is 9 equations for 16 variables; 7 degrees of freedom remain: the first rows of K and K' and the rotation angle α around the baseline.
Invariance through rotation around the baseline:
R_x(α)^T [e_1]_× R_x(α) = [e_1]_×,   with R_x(α) = ( 1 0 0 ; 0 cos α -sin α ; 0 sin α cos α ).
Several methods exist; they try to distort the images as little as possible.
(figure: rectification of Gluckman-Nayar, 2001)

16 Epipolar rectification of Fusiello-Irsara (2008)
We are looking for H and H' as rotations, supposing K = K' known:
H = K_n R K^{-1} and H' = K_n R' K^{-1}, with R and R' rotation matrices parameterized by Euler angles and K_n the new calibration matrix (identical second row for both images):
K_n = ( f 0 w/2 ; 0 f h/2 ; 0 0 1 )
Writing R = R_x(θ_x) R_y(θ_y) R_z(θ_z), we must have
F ∝ (K_n R K^{-1})^T [e_1]_× (K_n R' K^{-1}) = K^{-T} R_z^T R_y^T [e_1]_× R' K^{-1}
(the angle θ_x of the left image disappears thanks to the invariance of the previous slide).
We minimize the sum of squared distances of the points to their epipolar lines over the 6 parameters (θ_y, θ_z, θ'_x, θ'_y, θ'_z, f).
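
Given the angles for one image, building the rectifying homography is mechanical. A sketch in C++ with Eigen of one homography H = K_n R K^{-1} (the minimization over the 6 parameters is not shown; Eigen and the function name are assumptions, not the course's code):

    #include <Eigen/Dense>

    // H = K_n R(theta_x, theta_y, theta_z) K^{-1}, with K_n built from the image
    // size (w, h) and a chosen focal length f, as on the slide.
    Eigen::Matrix3d rectifyingHomography(const Eigen::Matrix3d& K, double f,
                                         double w, double h,
                                         double thetaX, double thetaY, double thetaZ) {
        Eigen::Matrix3d Kn;
        Kn << f, 0, w / 2,
              0, f, h / 2,
              0, 0, 1;
        // R = R_x(theta_x) R_y(theta_y) R_z(theta_z)
        Eigen::Matrix3d R =
            (Eigen::AngleAxisd(thetaX, Eigen::Vector3d::UnitX()) *
             Eigen::AngleAxisd(thetaY, Eigen::Vector3d::UnitY()) *
             Eigen::AngleAxisd(thetaZ, Eigen::Vector3d::UnitZ())).toRotationMatrix();
        return Kn * R * K.inverse();
    }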

17 Ruins: E_0 = 3.21 pixels, E_6 = 0.12 pixels.

18 Ruins: E_0 = 3.21 pixels, E_6 = 0.12 pixels.

19 Cake: E_0 = 17.9 pixels, E_13 = 0.65 pixels.

20 Cake: E_0 = 17.9 pixels, E_13 = 0.65 pixels.

21 Cluny: E_0 = 4.87 pixels, E_14 = 0.26 pixels.

22 Cluny: E_0 = 4.87 pixels, E_14 = 0.26 pixels.

23 Carcassonne: E_0 = 15.6 pixels, E_4 = 0.24 pixels.

24 Carcassonne: E_0 = 15.6 pixels, E_4 = 0.24 pixels.

25 Books: E_0 = 3.22 pixels, E_14 = 0.27 pixels.

26 Books: E_0 = 3.22 pixels, E_14 = 0.27 pixels.

27 Contents: Triangulation, Epipolar rectification, Disparity map

28 Disparity map
z = fB/d: depth z is inversely proportional to disparity d (apparent motion, in pixels).
Disparity map: at each pixel, its apparent motion between the left and right images.
We already know the disparity at feature points; this gives an idea of the min and max motion, which makes the search for matching points less ambiguous and faster.
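
Reading z = fB/d as code, a disparity map converts directly into a depth map. A small sketch (the flat row-major image layout and the treatment of non-positive disparities are assumptions):

    #include <limits>
    #include <vector>

    // Depth map from a disparity map (in pixels), with z = f * B / d.
    // Non-positive disparities are mapped to +infinity (point at infinity / invalid).
    std::vector<float> depthFromDisparity(const std::vector<float>& disp,
                                          float f /*focal length, pixels*/,
                                          float B /*baseline*/) {
        std::vector<float> depth(disp.size());
        for (std::size_t i = 0; i < disp.size(); ++i)
            depth[i] = (disp[i] > 0) ? f * B / disp[i]
                                     : std::numeric_limits<float>::infinity();
        return depth;
    }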

29 Stereo Matching
Principle: invariance of something between corresponding pixels of the left and right images (I_L, I_R). Examples: color, x-derivative, census...
Use of a distance to capture this invariance, such as AD(p, q) = ||I_L(p) - I_R(q)||_1.

30 Stereo Matching
Principle: invariance of something between corresponding pixels of the left and right images (I_L, I_R). Examples: color, x-derivative, census...
Use of a distance to capture this invariance, such as AD(p, q) = ||I_L(p) - I_R(q)||_1.
(figure: Left image, Ground truth, Min AD)

31 Stereo Matching
Post-processing helps a lot! Example: left-right consistency check, followed by simple constant interpolation and a median filter weighted by the bilateral weights of the original image.
(figure: Min CG, Left-right test, Post-processed)

32 Stereo Matching
Post-processing helps a lot! Example: left-right consistency check, followed by simple constant interpolation and a median filter weighted by the bilateral weights of the original image.
(figure: Min CG, Left-right test, Post-processed)
Still, single-pixel estimation is not good enough: we need to promote some regularity of the result.
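
A possible implementation of the left-right consistency check mentioned above (a sketch only: the 1-pixel tolerance, the invalid marker, and the convention that left pixel (x, y) matches right pixel (x - d, y) are assumptions, not values given in the course):

    #include <cmath>
    #include <vector>

    const float INVALID_DISP = -1000.f;   // assumed marker for rejected disparities

    // Keep d_LR(x, y) only if, following it into the right image, the right-to-left
    // disparity d_RL agrees with it (up to tol).
    std::vector<float> leftRightCheck(const std::vector<float>& dLR,
                                      const std::vector<float>& dRL,
                                      int w, int h, float tol = 1.f) {
        std::vector<float> out(dLR.size(), INVALID_DISP);
        for (int y = 0; y < h; ++y)
            for (int x = 0; x < w; ++x) {
                float d = dLR[y * w + x];
                int xr = (int)std::round(x - d);          // matching pixel in right image
                if (xr < 0 || xr >= w) continue;
                if (std::abs(dRL[y * w + xr] - d) <= tol) // the two maps must agree
                    out[y * w + x] = d;
            }
        return out;
    }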

33 Global vs. local methods
Global method: explicit smoothness term
arg min_d Σ_p E_data(p, p + d(p); I_L, I_R) + Σ_{p∼p'} E_reg(d(p), d(p'); p, p', I_L, I_R)
Examples: E_reg = |d(p) - d(p')|^2 (Horn-Schunck), E_reg = δ(d(p) ≠ d(p')) (Potts), E_reg = exp(-(I_L(p) - I_L(p'))^2/σ^2) |d(p) - d(p')|...

34 Global vs. local methods
Global method: explicit smoothness term
arg min_d Σ_p E_data(p, p + d(p); I_L, I_R) + Σ_{p∼p'} E_reg(d(p), d(p'); p, p', I_L, I_R)
Examples: E_reg = |d(p) - d(p')|^2 (Horn-Schunck), E_reg = δ(d(p) ≠ d(p')) (Potts), E_reg = exp(-(I_L(p) - I_L(p'))^2/σ^2) |d(p) - d(p')|...
Problem: NP-hard for almost all regularity terms, except E_reg = λ_{pp'} |d(p) - d(p')| (Ishikawa 2003).
Alternatives: sub-optimal solutions for submodular regularity (graph cuts: Boykov, Kolmogorov, Zabih), loopy belief propagation (no guarantee at all), semi-global matching (Hirschmüller).

35 Global vs. local methods
Local method: take a patch P around p and aggregate the costs E_data (Lucas-Kanade). No explicit regularity term.
Examples: SAD(p, q) = Σ_{r∈P} ||I_L(p + r) - I_R(q + r)||, SSD(p, q) = Σ_{r∈P} ||I_L(p + r) - I_R(q + r)||^2, SCG(p, q) = Σ_{r∈P} CG(p + r, q + r)...
Can be interpreted as cost-volume filtering: for d between d_min and d_max, the cost volume is convolved with the box kernel 1_{P×{0}}.
Increasing the patch size P promotes regularity.

36 Global vs. local methods
Local method: take a patch P around p and aggregate the costs E_data (Lucas-Kanade). No explicit regularity term.
Examples: SAD(p, q) = Σ_{r∈P} ||I_L(p + r) - I_R(q + r)||, SSD(p, q) = Σ_{r∈P} ||I_L(p + r) - I_R(q + r)||^2, SCG(p, q) = Σ_{r∈P} CG(p + r, q + r)...
Can be interpreted as cost-volume filtering. Increasing the patch size P promotes regularity: for two neighboring pixels p and p', the proportion of common pixels between p + P and p' + P is 1 - 1/n if P is n × n.
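
A minimal winner-take-all block matching with the SAD aggregation above, for rectified grayscale images stored row-major (a sketch; the disparity range, the window radius, and the convention that left pixel (x, y) matches right pixel (x - d, y) are assumptions):

    #include <cmath>
    #include <limits>
    #include <vector>

    // For each left pixel, test every disparity in [dMin, dMax] and keep the one
    // minimizing the SAD over the (2*radius+1)^2 window P.
    std::vector<int> blockMatchingSAD(const std::vector<float>& L,
                                      const std::vector<float>& R,
                                      int w, int h, int dMin, int dMax, int radius) {
        std::vector<int> disp(w * h, dMin);
        for (int y = radius; y < h - radius; ++y)
            for (int x = radius; x < w - radius; ++x) {
                float best = std::numeric_limits<float>::max();
                for (int d = dMin; d <= dMax; ++d) {
                    if (x - d - radius < 0 || x - d + radius >= w) continue;
                    float sad = 0;                       // aggregate |I_L - I_R| over P
                    for (int dy = -radius; dy <= radius; ++dy)
                        for (int dx = -radius; dx <= radius; ++dx)
                            sad += std::abs(L[(y + dy) * w + (x + dx)]
                                          - R[(y + dy) * w + (x - d + dx)]);
                    if (sad < best) { best = sad; disp[y * w + x] = d; }
                }
            }
        return disp;
    }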

37 Local search
At each pixel q, we consider a context window F and we look for the motion of this window. Distance between windows:
d(q) = arg min_d Σ_{p∈F} (I(q + p) - I'(q + d e_1 + p))^2
Variants to be more robust to illumination changes:
1. Translate intensities by the mean over the window: I(q + p) ← I(q + p) - Σ_{r∈F} I(q + r)/#F.
2. Normalize by the mean and variance over the window.

38 Distance between patches
Several distances or similarity measures are popular:
SAD (Sum of Absolute Differences): d(q) = arg min_d Σ_{p∈F} |I(q + p) - I'(q + d e_1 + p)|
SSD (Sum of Squared Differences): d(q) = arg min_d Σ_{p∈F} (I(q + p) - I'(q + d e_1 + p))^2
CSSD (Centered Sum of Squared Differences): d(q) = arg min_d Σ_{p∈F} ((I(q + p) - Ī_F) - (I'(q + d e_1 + p) - Ī'_F))^2
NCC (Normalized Cross-Correlation): d(q) = arg max_d Σ_{p∈F} (I(q + p) - Ī_F)(I'(q + d e_1 + p) - Ī'_F) / sqrt( Σ_{p∈F} (I(q + p) - Ī_F)^2 Σ_{p∈F} (I'(q + d e_1 + p) - Ī'_F)^2 )
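
The NCC score for one pixel and one disparity follows the last formula directly. A sketch for grayscale row-major images (names are illustrative; window bounds are assumed to be checked by the caller):

    #include <cmath>
    #include <vector>

    // NCC between the window F of radius r centered at (x, y) in I and the window
    // centered at (x + d, y) in Ip (the second image I'). Result in [-1, 1], larger is better.
    float ncc(const std::vector<float>& I, const std::vector<float>& Ip,
              int w, int x, int y, int d, int r) {
        int n = (2 * r + 1) * (2 * r + 1);
        float m = 0, mp = 0;
        for (int j = -r; j <= r; ++j)
            for (int i = -r; i <= r; ++i) {
                m  += I [(y + j) * w + (x + i)];
                mp += Ip[(y + j) * w + (x + d + i)];
            }
        m /= n; mp /= n;                                       // window means
        float num = 0, s = 0, sp = 0;
        for (int j = -r; j <= r; ++j)
            for (int i = -r; i <= r; ++i) {
                float a = I [(y + j) * w + (x + i)]     - m;   // centered values
                float b = Ip[(y + j) * w + (x + d + i)] - mp;
                num += a * b; s += a * a; sp += b * b;
            }
        return (s > 0 && sp > 0) ? num / std::sqrt(s * sp) : 0.f;
    }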

39 Another distance
The following distance is more and more popular in recent articles:
ε(p, q) = (1 - α) min(||I(p) - I'(q)||_1, τ_col) + α min(|∂I/∂x(p) - ∂I'/∂x(q)|, τ_grad)
with ||I(p) - I'(q)||_1 = |I_r(p) - I_r(q)| + |I_g(p) - I_g(q)| + |I_b(p) - I_b(q)|.
Usual parameters: α = 0.9, τ_col = 30 (not very sensitive if larger), τ_grad = 2 (not very sensitive if larger). Note that α = 0 is similar to SAD.
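
Written out for RGB pixels, with the x-derivatives assumed to be precomputed, the cost above is only a few lines (a sketch; the RGB struct and the index-based interface are illustrative):

    #include <algorithm>
    #include <cmath>
    #include <vector>

    struct RGB { float r, g, b; };

    // eps(p, q) = (1 - alpha) * min(||I(p) - I'(q)||_1, tauCol)
    //           + alpha       * min(|dI/dx(p) - dI'/dx(q)|, tauGrad)
    float epsCost(const std::vector<RGB>& I,  const std::vector<RGB>& Ip,
                  const std::vector<float>& gradI, const std::vector<float>& gradIp,
                  int p, int q,
                  float alpha = 0.9f, float tauCol = 30.f, float tauGrad = 2.f) {
        float col = std::abs(I[p].r - Ip[q].r)
                  + std::abs(I[p].g - Ip[q].g)
                  + std::abs(I[p].b - Ip[q].b);           // L1 distance on colors
        float grad = std::abs(gradI[p] - gradIp[q]);      // precomputed x-derivatives
        return (1.f - alpha) * std::min(col, tauCol) + alpha * std::min(grad, tauGrad);
    }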

40 Varying patch size: P = {(0, 0)}

41 Varying patch size: P = [-1, 1]^2

42 Varying patch size: P = [-7, 7]^2

43 Varying patch size: P = [-21, 21]^2

44 Varying patch size: P = [-35, 35]^2

45 Problems of local methods
Implicit hypothesis: all points of the window move with the same motion, that is, they lie in a fronto-parallel plane.
Aperture problem: the context can be too small in certain regions, lack of information.
Adherence problem: intensity discontinuities strongly influence the estimated disparity; when such a discontinuity coincides with a depth discontinuity, we have a tendency to dilate the front object.
(figure: patch on a rooftop and on the ground; O: aperture problem, A: adherence problem)

46 Example: seeds expansion. We rely on the best found distances and put them in a priority queue (seeds). We pop the best seed G from the queue, compute for its neighbors the best disparity among d(G) - 1, d(G), and d(G) + 1, and push them into the queue. (figure: Right image)

47 Example: seeds expansion. We rely on the best found distances and put them in a priority queue (seeds). We pop the best seed G from the queue, compute for its neighbors the best disparity among d(G) - 1, d(G), and d(G) + 1, and push them into the queue. (figure: Left image)

48 Example: seeds expansion. We rely on the best found distances and put them in a priority queue (seeds). We pop the best seed G from the queue, compute for its neighbors the best disparity among d(G) - 1, d(G), and d(G) + 1, and push them into the queue. (figure: Seeds)

49 Example: seeds expansion. We rely on the best found distances and put them in a priority queue (seeds). We pop the best seed G from the queue, compute for its neighbors the best disparity among d(G) - 1, d(G), and d(G) + 1, and push them into the queue. (figure: Seeds expansion)

50 Example: seeds expansion. We rely on the best found distances and put them in a priority queue (seeds). We pop the best seed G from the queue, compute for its neighbors the best disparity among d(G) - 1, d(G), and d(G) + 1, and push them into the queue. (figure: Left image)

51 Adaptive neighborhoods
To reduce adherence (aka the fattening effect), an image patch should lie on a single object, or even better at constant depth.
Heuristic inspired by the bilateral filter [Yoon & Kweon 2006]:
ω_I(p, p') = exp(-||p - p'||_2 / γ_pos) exp(-||I(p) - I(p')||_1 / γ_col)
Selected disparity: d(p) = arg min_{d = q - p} E(p, q) with
E(p, q) = Σ_{r∈F} ω_I(p, p + r) ω_{I'}(q, q + r) ε(p + r, q + r) / Σ_{r∈F} ω_I(p, p + r) ω_{I'}(q, q + r)
We can take a large window F (e.g., 35 × 35).
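
A sketch of the bilaterally weighted aggregation above for grayscale images, with a plain absolute difference standing in for the pixel cost ε (the γ defaults below are illustrative values, not the course's parameters; the disparity convention q = p + d e_1 follows the slide):

    #include <cmath>
    #include <vector>

    // Bilateral support weight between the center pixel (cx, cy) and its offset (dx, dy):
    // spatial proximity times color proximity.
    static float bilateralWeight(const std::vector<float>& I, int w, int cx, int cy,
                                 int dx, int dy, float gammaPos, float gammaCol) {
        float spatial = std::sqrt(float(dx * dx + dy * dy));
        float color   = std::abs(I[(cy + dy) * w + (cx + dx)] - I[cy * w + cx]);
        return std::exp(-spatial / gammaPos) * std::exp(-color / gammaCol);
    }

    // Weighted cost E(p, q) for left pixel p = (x, y) and q = p + d e_1, aggregating a
    // pixel cost over the window F of given radius (bounds assumed valid).
    float weightedCost(const std::vector<float>& L, const std::vector<float>& R,
                       int w, int x, int y, int d, int radius,
                       float gammaPos = 17.5f, float gammaCol = 7.f) {
        float num = 0, den = 0;
        for (int dy = -radius; dy <= radius; ++dy)
            for (int dx = -radius; dx <= radius; ++dx) {
                float wl = bilateralWeight(L, w, x,     y, dx, dy, gammaPos, gammaCol);
                float wr = bilateralWeight(R, w, x + d, y, dx, dy, gammaPos, gammaCol);
                float e  = std::abs(L[(y + dy) * w + (x + dx)]
                                  - R[(y + dy) * w + (x + d + dx)]);  // pixel cost
                num += wl * wr * e;
                den += wl * wr;
            }
        return den > 0 ? num / den : 0.f;
    }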

52 Bilateral weights (a) (b) (c) (d)

53 Results Tsukuba Venus Teddy Cones Left image Ground truth Results

54 What is the limit of adaptive neighborhoods?
The best patch would be P_p(r) = 1(d(p + r) = d(p)). Suppose we have an oracle giving P_p: use the ground-truth image to compute P_p. Since the ground truth is subpixel, use P_p(r) = 1(|d(p + r) - d(p)| ≤ 1/2).
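
The oracle patch can be computed directly from the ground-truth disparity map. A small sketch (row-major layout and the boolean mask representation are assumptions):

    #include <cmath>
    #include <vector>

    // Oracle support mask around (x, y): keep offset r iff |d_gt(p + r) - d_gt(p)| <= 1/2.
    std::vector<bool> oraclePatch(const std::vector<float>& dGT, int w, int h,
                                  int x, int y, int radius) {
        int side = 2 * radius + 1;
        std::vector<bool> mask(side * side, false);
        float d0 = dGT[y * w + x];
        for (int dy = -radius; dy <= radius; ++dy)
            for (int dx = -radius; dx <= radius; ++dx) {
                int xx = x + dx, yy = y + dy;
                if (xx < 0 || xx >= w || yy < 0 || yy >= h) continue;
                mask[(dy + radius) * side + (dx + radius)] =
                    std::abs(dGT[yy * w + xx] - d0) <= 0.5f;
            }
        return mask;
    }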

55 Test with oracle: image, ground truth, oracle patches

56 Test with oracle: image, ground truth, oracle patches

57 Conclusion
We can get back to the canonical situation by epipolar rectification. Limit: when the epipoles are in the image, standard methods are not adapted.
For disparity map computation, there are many choices:
1. Size and shape of the window?
2. Which distance?
3. Filtering of the disparity map to reject uncertain disparities?
You will see next session a global method for disparity computation.
Very active domain of research, with more than 150 methods tested on the reference benchmark website.

58 Practical session: Disparity map computation by propagation of seeds
Objective: compute the disparity map associated to a pair of images. We start from high-confidence points (seeds), then expand by supposing that the disparity map is regular.
Get the initial program from the website. Compute the disparity map from image 1 to image 2 at all points by highest NCC score. Keep only the disparities where the NCC is sufficiently high (0.95), and put them as seeds in a std::priority_queue. While the queue is not empty:
1. Pop P, the top of the queue.
2. For each 4-neighbor Q of P having no valid disparity, set d_Q by highest NCC score among d_P - 1, d_P, and d_P + 1.
3. Push Q into the queue.
Hint: the program may be too slow in Debug mode for the full images. Use cropped images to find your bugs, then build in Release mode for the original images.
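
A possible skeleton of the propagation loop, assuming an ncc(x, y, d) helper that returns the NCC score of left pixel (x, y) at disparity d (a sketch, not the course's reference code; seeding and the NCC computation are left out):

    #include <queue>
    #include <vector>

    struct Seed { float score; int x, y, d; };                  // NCC score, pixel, disparity
    bool operator<(const Seed& a, const Seed& b) { return a.score < b.score; } // best on top

    // Propagate disparities from high-confidence seeds, as described on the slide.
    template <class NCC>
    void expandSeeds(std::vector<int>& disp, std::vector<bool>& valid,
                     int w, int h, std::priority_queue<Seed>& seeds, NCC ncc) {
        const int dx[4] = {1, -1, 0, 0}, dy[4] = {0, 0, 1, -1}; // 4-neighborhood
        while (!seeds.empty()) {
            Seed s = seeds.top(); seeds.pop();
            for (int k = 0; k < 4; ++k) {
                int x = s.x + dx[k], y = s.y + dy[k];
                if (x < 0 || x >= w || y < 0 || y >= h || valid[y * w + x]) continue;
                float best = -2.f; int bestD = s.d;
                for (int d = s.d - 1; d <= s.d + 1; ++d) {      // try dP-1, dP, dP+1
                    float score = ncc(x, y, d);
                    if (score > best) { best = score; bestD = d; }
                }
                disp[y * w + x] = bestD;
                valid[y * w + x] = true;
                seeds.push({best, x, y, bestD});
            }
        }
    }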
