Fundamentals of Stereo Vision Michael Bleyer LVA Stereo Vision


1 Fundamentals of Stereo Vision Michael Bleyer LVA Stereo Vision

2 What Happened Last Time? Human 3D perception (3D cinema) Computational stereo Intuitive explanation of what is meant by disparity Stereo matching problem Various applications of stereo

3 What Is Going to Happen Today? Stereo from a technical point of view: Stereo pipeline Epipolar geometry Epipolar rectification Depth via triangulation Challenges in stereo matching Commonly used assumptions Middlebury stereo benchmark

4 Stereo from a Technical Point of View Michael Bleyer LVA Stereo Vision

5 Stereo Pipeline Left Image Right Image Rectified Left Image Epipolar Rectification Stereo Matching Disparity Map Depth via Triangulation Rectified Right Image 3D Scene Reconstruction

6 Stereo Pipeline Left Image Right Image Rectified Left Image Epipolar Rectification Stereo Matching Disparity Map Rectified Right Image Discussed in this session Depth via Triangulation 3D Scene Reconstruction

7 Stereo Pipeline Left Image Right Image Rectified Left Image Epipolar Rectification Stereo Matching Disparity Map Depth via Triangulation Rectified Right Image Discussed in this and all other sessions 3D Scene Reconstruction

8 Stereo Pipeline Left Image Right Image Epipolar Rectification Let us start here Rectified Left Image Stereo Matching Disparity Map Depth via Triangulation Rectified Right Image 3D Scene Reconstruction

9 Pinhole Camera Focal Point Image Plane Simplest model for describing the projection of a 3D scene onto a 2D image. Model is commonly used in computer vision.

10 Image Formation Process Let us assume we have a pinhole camera. The pinhole camera is characterized by its focal point Cl and its image plane L.

11 Image Formation Process We also have a second pinhole camera <Cr,R>. We assume that the camera system is fully calibrated, i.e. the 3D positions of <Cl, L> and <Cr,R> are known.

12 Image Formation Process We have a 3D point P.

13 Image Formation Process We compute the 2D projection pl of P onto the image plane of the left camera L by intersecting the ray from Cl to P with the plane L. This is what is happening when you take a 2D image of a 3D scene with your camera (image formation process).
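The projection step described on this slide can be sketched in a few lines (a minimal numpy illustration; the camera-frame coordinates and the focal length f are assumed example values):

```python
import numpy as np

def project(P, f):
    """Pinhole projection of a 3D point P = (X, Y, Z), given in the camera
    coordinate frame, onto the image plane at focal length f:
    x = f*X/Z, y = f*Y/Z."""
    X, Y, Z = P
    return np.array([f * X / Z, f * Y / Z])

# A point twice as far away projects to half the image coordinates.
p = project((2.0, 4.0, 2.0), f=1.0)   # -> [1.0, 2.0]
```

Note how Z is divided out: this is exactly why a single image cannot recover depth.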

14 Image Formation Process Nice, but actually we want to do exactly the opposite: We have a 2D image and want to make it 3D. We compute the 2D projection pl of P onto the image plane of the left camera L by intersecting the ray from Cl to P with the plane L. This is what is happening when you take a 2D image of a 3D scene with your camera (image formation process).

15 3D Reconstruction Task: We have a 2D point pl and want to compute its 3D position P.

16 3D Reconstruction P has to lie on the ray from Cl through pl. Problem: It can lie anywhere on this ray.

17 3D Reconstruction Let us assume we also know the 2D projection pr of P onto the right image plane R.

18 3D Reconstruction P can now be reconstructed by intersecting the rays Clpl and Crpr.
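With noisy pixel measurements the two rays rarely intersect exactly, so in practice one takes the point closest to both rays. A sketch (the function name and the midpoint strategy are illustrative, not from the slides):

```python
import numpy as np

def triangulate_midpoint(C_l, d_l, C_r, d_r):
    """Reconstruct P from the rays C_l + s*d_l and C_r + t*d_r.

    Solves the least-squares system s*d_l - t*d_r = C_r - C_l for the
    ray parameters s and t, then returns the midpoint of the two
    closest points on the rays."""
    d_l = d_l / np.linalg.norm(d_l)
    d_r = d_r / np.linalg.norm(d_r)
    b = C_r - C_l
    A = np.column_stack([d_l, -d_r])          # 3x2 system A @ [s, t] = b
    (s, t), *_ = np.linalg.lstsq(A, b, rcond=None)
    P_l = C_l + s * d_l                       # closest point on the left ray
    P_r = C_r + t * d_r                       # closest point on the right ray
    return 0.5 * (P_l + P_r)
```

If the rays intersect exactly, the midpoint is the intersection point itself.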

19 3D Reconstruction P can now be reconstructed by intersecting the rays Clpl and Crpr. The challenging part is to find the pair of corresponding pixels pl and pr that are projections of the same 3D point P. This is the Stereo Matching Problem.

20 3D Reconstruction Problem: Given pl, the corresponding pixel pr can lie at any x- and y-coordinate in the right image. Can we make the search easier?

21 Epipolar Geometry We have stated that P has to lie on the ray Clpl.

22 Epipolar Geometry If we project each candidate 3D point onto the right image plane, we see that they all lie on a line in R.

26 Epipolar Geometry Epipolar line of pl This line is called the epipolar line of pl. The epipolar line is the projection of the ray Clpl onto the right image plane R. The pixel pr is forced to lie on pl's epipolar line.

27 Epipolar Geometry To find the corresponding pixel, we only have to search along the epipolar line (a 1D instead of a 2D search). This search space restriction is known as the epipolar constraint. The epipolar line is the projection of the ray Clpl onto the right image plane R; the pixel pr is forced to lie on pl's epipolar line.
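Algebraically, the epipolar constraint is commonly expressed with a fundamental matrix F (not derived on these slides): the epipolar line of pl in the right image is l' = F·pl in homogeneous coordinates, and corresponding pixels satisfy pr^T F pl = 0. A sketch assuming F is known:

```python
import numpy as np

def epipolar_line(F, p_l):
    """Epipolar line l' = F @ p_l in the right image, as a homogeneous
    line (a, b, c) meaning a*x + b*y + c = 0."""
    p = np.array([p_l[0], p_l[1], 1.0])
    return F @ p

def on_epipolar_line(F, p_l, p_r, tol=1e-6):
    """Epipolar constraint check: p_r^T F p_l == 0 for correspondences."""
    l = epipolar_line(F, p_l)
    p = np.array([p_r[0], p_r[1], 1.0])
    return abs(p @ l) < tol
```

For rectified images F reduces to the form where the epipolar line of (x, y) is simply the scanline y' = y.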

28 Epipolar Rectification Specifically interesting case: Image planes L and R lie in a common plane. X-axes are parallel to the baseline. Epipolar lines coincide with horizontal scanlines => corresponding pixels have the same y-coordinate.

32 Epipolar Rectification To find the corresponding pixel, we only have to search along the horizontal scanline. More convenient than tracing arbitrary epipolar lines. The difference in x-coordinates of corresponding pixels is called disparity. Specifically interesting case: Image planes L and R lie in a common plane. X-axes are parallel to the baseline. Epipolar lines coincide with horizontal scanlines => corresponding pixels have the same y-coordinate.

33 Epipolar Rectification This special case can be achieved by reprojecting the left and right images onto virtual cameras. This process is known as epipolar rectification. Throughout the rest of the lecture we assume that images have been rectified. Original images: white lines represent epipolar lines. Rectified images: epipolar lines coincide with horizontal scanlines. Images taken from

34 Epipolar Constraint Concluding Remarks The epipolar constraint should always be used, because: 1D search is computationally faster than 2D search. The reduced search range lowers the chance of finding a wrong match (quality of depth maps). It is more or less the only constraint that will always be valid in stereo matching (unless there are calibration errors).

35 Stereo Pipeline Left Image Right Image Rectified Left Image Epipolar Rectification Stereo Matching Disparity Map Depth via Triangulation Rectified Right Image Let us for now assume that stereo matching has been solved and look at this point 3D Scene Reconstruction

36 Depth via Triangulation

37 Depth via Triangulation

38 Depth via Triangulation Similar triangles: X / Z = xl / f

39 Depth via Triangulation Similar triangles: (X - B) / Z = xr / f

40 Depth via Triangulation From similar triangles: X / Z = xl / f and (X - B) / Z = xr / f. Write X in explicit form: X = Z · xl / f and X = Z · xr / f + B. Combine both equations: Z · xl / f = Z · xr / f + B, hence Z · xl = Z · xr + B · f, hence Z · (xl - xr) = B · f. Write Z in explicit form: Z = B · f / (xl - xr) = B · f / d, where d = xl - xr. This is disparity.
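The depth formula Z = B · f / d can be evaluated directly; the baseline and focal length below are illustrative assumptions, not values from the slides:

```python
import numpy as np

def depth_from_disparity(d, baseline, focal):
    """Z = B * f / d, with disparity d = xl - xr in pixels, baseline in
    metres and focal length in pixels. Zero disparity (a point at
    infinity) maps to inf."""
    d = np.asarray(d, dtype=float)
    with np.errstate(divide="ignore"):
        return baseline * focal / d

# Hypothetical rig: 10 cm baseline, 700 px focal length.
Z = depth_from_disparity(np.array([70.0, 35.0, 7.0]), baseline=0.10, focal=700.0)
# -> [1.0, 2.0, 10.0]: halving the disparity doubles the depth.
```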

41 Depth via Triangulation Z = B · f / (xl - xr) = B · f / d. Disparity and depth are inversely proportional! Therefore, disparity is commonly used synonymously with depth.

42 Stereo Pipeline Left Image Right Image Rectified Left Image Epipolar Rectification Stereo Matching Disparity Map Depth via Triangulation Rectified Right Image We will now focus on this problem (throughout the rest of the lecture) 3D Scene Reconstruction

43 Challenges in Stereo Matching Michael Bleyer LVA Stereo Vision

44 Stereo Matching Left 2D Image Right 2D Image Disparity Map

45 Why is Stereo Matching Challenging? (1) Color inconsistencies: When solving the stereo matching problem, we typically assume that corresponding pixels have the same intensity/color (= photo consistency assumption). This need not be true due to: Image noise Different illumination conditions in left and right images Different sensor characteristics of the two cameras Specular reflections (mirroring) Sampling artefacts Matting artefacts

46 Why is Stereo Matching Challenging? (2) Untextured regions (matching ambiguities): There needs to be a certain amount of intensity/color variation (i.e. texture) so that a pixel can be uniquely matched in the other view. Can you (as a human) perceive depth if you are standing in front of a wall that is completely white? Left image (no texture in the background) Right image Computed disparity map (errors in background)

47 Why is Stereo Matching Challenging? (3) Occlusion problem: There are pixels that are visible in exactly one view. We call these pixels occluded (or half-occluded). It is difficult to estimate depth for these pixels. The occlusion problem makes stereo more challenging than many other computer vision problems. Occluded Pixel

48 The Occlusion Problem Background Object Foreground Object Let's consider a simple scene composed of a foreground and a background object.

49 The Occlusion Problem Regular case: The white pixel P1 can be seen by both cameras.

50 The Occlusion Problem Occlusion in the right camera: The left camera sees the grey pixel P2. The ray from the right camera to P2 hits the white foreground object => P2 cannot be seen by the right camera.

51 The Occlusion Problem Occlusion in the left camera: The right camera sees the grey pixel P3. The ray from the left camera to P3 hits the white foreground object => P3 cannot be seen by the left camera.

52 The Occlusion Problem Occlusions occur in the proximity of disparity discontinuities.

53 The Occlusion Problem Occlusions occur as a consequence of discontinuities in depth. They occur close to object/depth boundaries. They occur in both frames (symmetrically). Occlusions occur in the proximity of disparity discontinuities.

54 The Occlusion Problem Left Image (Occlusions in red color) Right Image (Occlusions in red color) In the left image, occlusions are located to the left of a disparity boundary. In the right image, occlusions are located to the right of a disparity boundary.

55 The Occlusion Problem Correct Disparity Map (Geometry of Left Image) Computed Disparity Map (Occlusions Ignored) It is difficult to find the disparity if the matching point does not exist. Ignoring the occlusion problem leads to disparity artefacts near disparity borders.

56 The Occlusion Problem Disparity artefacts to the left of depth boundaries Correct Disparity Map (Geometry of Left Image) Computed Disparity Map (Occlusions Ignored) It is difficult to find the disparity if the matching point does not exist. Ignoring the occlusion problem leads to disparity artefacts near disparity borders.
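In practice, occluded pixels are often detected with a left-right consistency check: a left pixel with disparity d should map to a right pixel that reports (roughly) the same disparity. A minimal sketch (the 1 px tolerance and the function name are assumptions):

```python
import numpy as np

def cross_check(disp_left, disp_right, tol=1):
    """Flag pixels that fail the left-right consistency check.
    A left pixel (x, y) with disparity d should see disparity ~d at
    (x - d, y) in the right disparity map; occluded pixels typically
    map out of the image or onto an inconsistent disparity."""
    h, w = disp_left.shape
    occluded = np.ones((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            d = disp_left[y, x]
            xr = x - d
            if 0 <= xr < w and abs(disp_right[y, xr] - d) <= tol:
                occluded[y, x] = False
    return occluded
```

This matches the slide's observation: the flagged pixels cluster to the left of depth boundaries in the left image.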

57 Commonly Used Assumptions Michael Bleyer LVA Stereo Vision

58 Assumptions Assumptions are needed to solve the stereo matching problem. Stereo methods differ in What assumptions they use How they implement these assumptions We have already learned two assumptions: Which ones?

59 Photo Consistency and Epipolar Assumptions Photo consistency assumption: Corresponding pixels have the same intensity/color in both images. Epipolar assumption: The matching point of a pixel has to lie on the same horizontal scanline in the other image. We can combine both assumptions to obtain our first stereo algorithm. Algorithm 1: For each pixel p of the left image, search for the pixel q in the right image that lies on the same y-coordinate as p (epipolar assumption) and has the most similar color in comparison to p (photo consistency).
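Algorithm 1 can be written down directly. A brute-force sketch for grayscale images (the function name and the max_disp search-range parameter are assumptions):

```python
import numpy as np

def algorithm1(left, right, max_disp):
    """For each left pixel, pick the right pixel on the same scanline
    whose intensity is most similar (photo consistency + epipolar
    assumption only). left/right: 2D grayscale arrays.
    Returns disparity d = xl - xr >= 0."""
    h, w = left.shape
    disp = np.zeros((h, w), dtype=int)
    for y in range(h):
        for xl in range(w):
            best, best_d = np.inf, 0
            for d in range(min(max_disp, xl) + 1):  # search leftwards in right image
                cost = abs(float(left[y, xl]) - float(right[y, xl - d]))
                if cost < best:
                    best, best_d = cost, d
            disp[y, xl] = best_d
    return disp
```

Every pixel is matched independently, so any repeated intensity along the scanline produces the ambiguities discussed on the next slide.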

60 Results of Algorithm 1 Left Image Computed Disparity Map Quite disappointing, why? We have posed the following task: I have a red pixel. Find me a red pixel in the other image. Problem: There are usually many red pixels in the other image (ambiguity) We need additional assumptions.

61 Results of Algorithm 1 Correct Disparity Map Computed Disparity Map What is the most obvious difference between the correct and the computed disparity maps?

62 Smoothness Assumption (1) Observation: A correct disparity map typically consists of regions of constant (or very similar) disparity. For example, lamp, head, table, We can give this apriori knowledge to a stereo algorithm in the form of a smoothness assumption Left Image Correct Disparity Map

63 Smoothness Assumption (2) Smoothness assumption: Spatially close pixels have the same (or similar) disparity. (By spatially close I mean pixels of similar image coordinates.) The smoothness assumption typically holds true almost everywhere, except at disparity borders. Regions where the smoothness assumption is valid Regions where the smoothness assumption is not valid

64 Smoothness Assumption (3) Almost every stereo algorithm uses the smoothness assumption. Stereo algorithms are commonly divided into two categories based on the form in which they apply the smoothness assumption. These categories are: Local methods Global methods

65 Local Methods Compare color values within search windows; find the point of maximum correspondence. (a) Left image (b) Right image Compare small windows in left and right images. Within the window, pixels are supposed to have the same disparity => implicit smoothness assumption. We will learn a lot about these in the next session.
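A minimal local method using the sum of absolute differences (SAD) over a square window; the window radius, border handling and function name are illustrative assumptions:

```python
import numpy as np

def block_matching_sad(left, right, max_disp, radius=1):
    """Local stereo: for each pixel, choose the disparity whose window
    in the right image minimizes the sum of absolute differences (SAD).
    All pixels inside the window implicitly share one disparity
    (the implicit smoothness assumption). Borders are left at 0."""
    h, w = left.shape
    disp = np.zeros((h, w), dtype=int)
    for y in range(radius, h - radius):
        for x in range(radius, w - radius):
            win_l = left[y - radius:y + radius + 1,
                         x - radius:x + radius + 1].astype(float)
            best, best_d = np.inf, 0
            for d in range(min(max_disp, x - radius) + 1):
                win_r = right[y - radius:y + radius + 1,
                              x - d - radius:x - d + radius + 1].astype(float)
                sad = np.abs(win_l - win_r).sum()
                if sad < best:
                    best, best_d = sad, d
            disp[y, x] = best_d
    return disp
```

Larger windows make untextured regions easier to match but blur disparity borders, a trade-off discussed in the next session.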

66 Global Methods Define a cost function to measure the quality of a disparity map: High costs mean that the disparity map is bad. Low costs mean it is good. The cost function is typically of the form E = Edata + Esmooth, where Edata measures photo consistency and Esmooth measures smoothness. Global methods express the smoothness assumption in an explicit form (as a smoothness term). The challenge is to find a disparity map of minimum cost (sessions 4 and 5).
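Evaluating E = Edata + Esmooth for a candidate disparity map can be sketched as follows; the absolute-difference data term and the Potts smoothness penalty are common simple choices assumed here, not necessarily the ones used later in the lecture:

```python
import numpy as np

def energy(disp, left, right, lam=1.0):
    """E(D) = E_data + E_smooth for a candidate disparity map D.
    E_data: photo-consistency cost |I_l(x, y) - I_r(x - d, y)|, summed.
    E_smooth: Potts penalty lam for each horizontal/vertical neighbour
    pair with differing disparity (a simple explicit smoothness term)."""
    h, w = disp.shape
    ys, xs = np.mgrid[0:h, 0:w]
    xr = np.clip(xs - disp, 0, w - 1)        # matched right-image column
    e_data = np.abs(left.astype(float) - right[ys, xr].astype(float)).sum()
    e_smooth = lam * ((disp[:, 1:] != disp[:, :-1]).sum()
                      + (disp[1:, :] != disp[:-1, :]).sum())
    return e_data + e_smooth
```

A global method searches for the disparity map minimizing this value; evaluating it is cheap, minimizing it is the hard part.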

67 Uniqueness Constraint The uniqueness constraint will help us to handle the occlusion problem. It states: A pixel in one frame has at most a single matching point in the other frame. In general valid, but broken for: Transparent objects Slanted surfaces
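The uniqueness constraint is easy to test on a given left disparity map: no two pixels on a scanline may claim the same right-image pixel. A small sketch (the function name is an assumption):

```python
import numpy as np

def violates_uniqueness(disp_left):
    """True if two left pixels on one scanline map to the same right
    pixel, i.e. x1 - d1 == x2 - d2 for x1 != x2. Under the opaque-scene
    assumption this should never happen."""
    h, w = disp_left.shape
    for y in range(h):
        targets = np.arange(w) - disp_left[y]  # matched right-image columns
        if len(np.unique(targets)) < w:
            return True
    return False
```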

68 Uniqueness Constraint Occluding Object Left Pixel 0 Matching Points

69 Uniqueness Constraint Left Pixel 1 Matching Points

70 Uniqueness Constraint Left Pixel 2 Matching Points

71 Uniqueness Constraint If we assume objects to be opaque (non-transparent), this case cannot occur. Left Pixel 2 Matching Points

72 Other Assumptions Ordering assumption: The order in which pixels occur is preserved in both images. Does not hold for thin foreground objects. Disparity gradient limit: Originates from psychology. Not clear whether the assumption is valid for arbitrary camera setups. Both assumptions have rarely been used recently => they are slightly obsolete.

73 Middlebury Stereo Benchmark Michael Bleyer LVA Stereo Vision

74 Ground Truth Data Left image Right image Ground truth disparities Ground truth = the correct solution to a given problem. The absence of ground truth data has represented a major problem in computer vision: For most computer vision problems, not a single real test image with a ground truth solution has been available. Computer-generated ground truth images oftentimes do not reflect the challenges of real data recorded with a camera. It is difficult to measure the progress in a field if there is no commonly agreed data set with a ground truth solution.

75 Ground Truth Data Ground truth data is now available for a wide range of computer vision problems including: Object recognition Alpha matting Optical flow MRF-optimization Multi-view reconstruction For stereo, ground truth data is available on the Middlebury Stereo Evaluation website. The Middlebury set is widely adopted in the stereo community.

76 The Middlebury Set

77 How Can One Generate Ground Truth Disparities? Hand labelling: Tsukuba test set Tsukuba test set disparity map Extremely labor-intensive. Most other Middlebury ground truth disparity maps have been created using a more precise depth computation technique than stereo matching, namely structured light.

78 Setup Used for Generating the Middlebury Images Different light patterns are projected onto the scene to compute a high-quality depth map (Depth from structured light).

79 Setup Used for Generating the Middlebury Images We are currently looking for students who will set up a similar ground truth system. Tell me if you are interested. Different light patterns are projected onto the scene to compute a high-quality depth map (depth from structured light).

80 Disparity Map Quality Evaluation in the Middlebury Benchmark Estimation of wrong pixels: Compute the absolute difference between the computed and ground truth disparity maps. If the absolute disparity difference is larger than one pixel, the pixel is counted as an error. Computed disparity map Ground truth disparity map Error map (Pixels having an absolute disparity error > 1 px) 3 error metrics: Percentage of erroneous pixels in (1) unoccluded regions, (2) the whole image and (3) in regions close to disparity borders.
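The Middlebury-style bad-pixel percentage is a one-liner in numpy; the mask parameter (e.g. the unoccluded region) and the function name are assumptions:

```python
import numpy as np

def bad_pixel_rate(disp, gt, mask=None, threshold=1.0):
    """Percentage of pixels whose absolute disparity error exceeds
    `threshold` (1 px in the Middlebury benchmark), optionally
    restricted to a boolean mask such as the unoccluded region."""
    err = np.abs(disp.astype(float) - gt.astype(float)) > threshold
    if mask is not None:
        err = err[mask]
    return 100.0 * err.mean()
```

Running it with the unoccluded mask, with no mask, and with a near-border mask yields the three error metrics listed above.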

81 The Middlebury Online Benchmark If you have implemented a stereo algorithm, you can evaluate its performance using the Middlebury benchmark. You have to run it on these 4 image pairs: The 3 error metrics are then computed for each image pair. Your algorithm is then ranked according to the computed error values.

82 The Middlebury Table Currently, more than 70 methods are evaluated. You should use this table to rank the stereo matching algorithm developed as your homework. Many more

83 General Findings in the Middlebury Table Global methods outperform local methods. Local methods: Adaptive weight methods represent the state-of-the-art. Global methods: Methods that apply Belief Propagation or Graph-Cuts in the optimization step outperform dynamic programming methods (if such categorization makes sense) All top-performing methods apply color segmentation.

84 Summary 3D geometry Challenges Ambiguity Occlusions Assumptions Photo consistency Smoothness assumption Uniqueness assumption Middlebury benchmark


More information

Stereo Vision A simple system. Dr. Gerhard Roth Winter 2012

Stereo Vision A simple system. Dr. Gerhard Roth Winter 2012 Stereo Vision A simple system Dr. Gerhard Roth Winter 2012 Stereo Stereo Ability to infer information on the 3-D structure and distance of a scene from two or more images taken from different viewpoints

More information

Stereo Wrap + Motion. Computer Vision I. CSE252A Lecture 17

Stereo Wrap + Motion. Computer Vision I. CSE252A Lecture 17 Stereo Wrap + Motion CSE252A Lecture 17 Some Issues Ambiguity Window size Window shape Lighting Half occluded regions Problem of Occlusion Stereo Constraints CONSTRAINT BRIEF DESCRIPTION 1-D Epipolar Search

More information

Stereo and structured light

Stereo and structured light Stereo and structured light http://graphics.cs.cmu.edu/courses/15-463 15-463, 15-663, 15-862 Computational Photography Fall 2018, Lecture 20 Course announcements Homework 5 is still ongoing. - Make sure

More information

Subpixel accurate refinement of disparity maps using stereo correspondences

Subpixel accurate refinement of disparity maps using stereo correspondences Subpixel accurate refinement of disparity maps using stereo correspondences Matthias Demant Lehrstuhl für Mustererkennung, Universität Freiburg Outline 1 Introduction and Overview 2 Refining the Cost Volume

More information

There are many cues in monocular vision which suggests that vision in stereo starts very early from two similar 2D images. Lets see a few...

There are many cues in monocular vision which suggests that vision in stereo starts very early from two similar 2D images. Lets see a few... STEREO VISION The slides are from several sources through James Hays (Brown); Srinivasa Narasimhan (CMU); Silvio Savarese (U. of Michigan); Bill Freeman and Antonio Torralba (MIT), including their own

More information

Fundamental matrix. Let p be a point in left image, p in right image. Epipolar relation. Epipolar mapping described by a 3x3 matrix F

Fundamental matrix. Let p be a point in left image, p in right image. Epipolar relation. Epipolar mapping described by a 3x3 matrix F Fundamental matrix Let p be a point in left image, p in right image l l Epipolar relation p maps to epipolar line l p maps to epipolar line l p p Epipolar mapping described by a 3x3 matrix F Fundamental

More information

EXAM SOLUTIONS. Image Processing and Computer Vision Course 2D1421 Monday, 13 th of March 2006,

EXAM SOLUTIONS. Image Processing and Computer Vision Course 2D1421 Monday, 13 th of March 2006, School of Computer Science and Communication, KTH Danica Kragic EXAM SOLUTIONS Image Processing and Computer Vision Course 2D1421 Monday, 13 th of March 2006, 14.00 19.00 Grade table 0-25 U 26-35 3 36-45

More information

An investigation into stereo algorithms: An emphasis on local-matching. Thulani Ndhlovu

An investigation into stereo algorithms: An emphasis on local-matching. Thulani Ndhlovu An investigation into stereo algorithms: An emphasis on local-matching Thulani Ndhlovu Submitted to the Department of Electrical Engineering, University of Cape Town, in fullfillment of the requirements

More information

Computer Vision Projective Geometry and Calibration. Pinhole cameras

Computer Vision Projective Geometry and Calibration. Pinhole cameras Computer Vision Projective Geometry and Calibration Professor Hager http://www.cs.jhu.edu/~hager Jason Corso http://www.cs.jhu.edu/~jcorso. Pinhole cameras Abstract camera model - box with a small hole

More information

Machine vision. Summary # 11: Stereo vision and epipolar geometry. u l = λx. v l = λy

Machine vision. Summary # 11: Stereo vision and epipolar geometry. u l = λx. v l = λy 1 Machine vision Summary # 11: Stereo vision and epipolar geometry STEREO VISION The goal of stereo vision is to use two cameras to capture 3D scenes. There are two important problems in stereo vision:

More information

Stereo. Many slides adapted from Steve Seitz

Stereo. Many slides adapted from Steve Seitz Stereo Many slides adapted from Steve Seitz Binocular stereo Given a calibrated binocular stereo pair, fuse it to produce a depth image image 1 image 2 Dense depth map Binocular stereo Given a calibrated

More information

STEREO BY TWO-LEVEL DYNAMIC PROGRAMMING

STEREO BY TWO-LEVEL DYNAMIC PROGRAMMING STEREO BY TWO-LEVEL DYNAMIC PROGRAMMING Yuichi Ohta Institute of Information Sciences and Electronics University of Tsukuba IBARAKI, 305, JAPAN Takeo Kanade Computer Science Department Carnegie-Mellon

More information

CS201 Computer Vision Camera Geometry

CS201 Computer Vision Camera Geometry CS201 Computer Vision Camera Geometry John Magee 25 November, 2014 Slides Courtesy of: Diane H. Theriault (deht@bu.edu) Question of the Day: How can we represent the relationships between cameras and the

More information

Lecture'9'&'10:'' Stereo'Vision'

Lecture'9'&'10:'' Stereo'Vision' Lecture'9'&'10:'' Stereo'Vision' Dr.'Juan'Carlos'Niebles' Stanford'AI'Lab' ' Professor'FeiAFei'Li' Stanford'Vision'Lab' 1' Dimensionality'ReducIon'Machine'(3D'to'2D)' 3D world 2D image Point of observation

More information

Complex Sensors: Cameras, Visual Sensing. The Robotics Primer (Ch. 9) ECE 497: Introduction to Mobile Robotics -Visual Sensors

Complex Sensors: Cameras, Visual Sensing. The Robotics Primer (Ch. 9) ECE 497: Introduction to Mobile Robotics -Visual Sensors Complex Sensors: Cameras, Visual Sensing The Robotics Primer (Ch. 9) Bring your laptop and robot everyday DO NOT unplug the network cables from the desktop computers or the walls Tuesday s Quiz is on Visual

More information

Colour Segmentation-based Computation of Dense Optical Flow with Application to Video Object Segmentation

Colour Segmentation-based Computation of Dense Optical Flow with Application to Video Object Segmentation ÖGAI Journal 24/1 11 Colour Segmentation-based Computation of Dense Optical Flow with Application to Video Object Segmentation Michael Bleyer, Margrit Gelautz, Christoph Rhemann Vienna University of Technology

More information

Colorado School of Mines. Computer Vision. Professor William Hoff Dept of Electrical Engineering &Computer Science.

Colorado School of Mines. Computer Vision. Professor William Hoff Dept of Electrical Engineering &Computer Science. Professor William Hoff Dept of Electrical Engineering &Computer Science http://inside.mines.edu/~whoff/ 1 Stereo Vision 2 Inferring 3D from 2D Model based pose estimation single (calibrated) camera Stereo

More information

Correspondence and Stereopsis. Original notes by W. Correa. Figures from [Forsyth & Ponce] and [Trucco & Verri]

Correspondence and Stereopsis. Original notes by W. Correa. Figures from [Forsyth & Ponce] and [Trucco & Verri] Correspondence and Stereopsis Original notes by W. Correa. Figures from [Forsyth & Ponce] and [Trucco & Verri] Introduction Disparity: Informally: difference between two pictures Allows us to gain a strong

More information

Computer Vision cmput 428/615

Computer Vision cmput 428/615 Computer Vision cmput 428/615 Basic 2D and 3D geometry and Camera models Martin Jagersand The equation of projection Intuitively: How do we develop a consistent mathematical framework for projection calculations?

More information

Rectification and Disparity

Rectification and Disparity Rectification and Disparity Nassir Navab Slides prepared by Christian Unger What is Stereo Vision? Introduction A technique aimed at inferring dense depth measurements efficiently using two cameras. Wide

More information

Model-Based Stereo. Chapter Motivation. The modeling system described in Chapter 5 allows the user to create a basic model of a

Model-Based Stereo. Chapter Motivation. The modeling system described in Chapter 5 allows the user to create a basic model of a 96 Chapter 7 Model-Based Stereo 7.1 Motivation The modeling system described in Chapter 5 allows the user to create a basic model of a scene, but in general the scene will have additional geometric detail

More information

Range Imaging Through Triangulation. Range Imaging Through Triangulation. Range Imaging Through Triangulation. Range Imaging Through Triangulation

Range Imaging Through Triangulation. Range Imaging Through Triangulation. Range Imaging Through Triangulation. Range Imaging Through Triangulation Obviously, this is a very slow process and not suitable for dynamic scenes. To speed things up, we can use a laser that projects a vertical line of light onto the scene. This laser rotates around its vertical

More information

Stereo Vision Computer Vision (Kris Kitani) Carnegie Mellon University

Stereo Vision Computer Vision (Kris Kitani) Carnegie Mellon University Stereo Vision 16-385 Computer Vision (Kris Kitani) Carnegie Mellon University What s different between these two images? Objects that are close move more or less? The amount of horizontal movement is

More information

Supplemental Material: A Dataset and Evaluation Methodology for Depth Estimation on 4D Light Fields

Supplemental Material: A Dataset and Evaluation Methodology for Depth Estimation on 4D Light Fields Supplemental Material: A Dataset and Evaluation Methodology for Depth Estimation on 4D Light Fields Katrin Honauer 1, Ole Johannsen 2, Daniel Kondermann 1, Bastian Goldluecke 2 1 HCI, Heidelberg University

More information

Camera Calibration. Schedule. Jesus J Caban. Note: You have until next Monday to let me know. ! Today:! Camera calibration

Camera Calibration. Schedule. Jesus J Caban. Note: You have until next Monday to let me know. ! Today:! Camera calibration Camera Calibration Jesus J Caban Schedule! Today:! Camera calibration! Wednesday:! Lecture: Motion & Optical Flow! Monday:! Lecture: Medical Imaging! Final presentations:! Nov 29 th : W. Griffin! Dec 1

More information

Image Rectification (Stereo) (New book: 7.2.1, old book: 11.1)

Image Rectification (Stereo) (New book: 7.2.1, old book: 11.1) Image Rectification (Stereo) (New book: 7.2.1, old book: 11.1) Guido Gerig CS 6320 Spring 2013 Credits: Prof. Mubarak Shah, Course notes modified from: http://www.cs.ucf.edu/courses/cap6411/cap5415/, Lecture

More information

CS 664 Slides #9 Multi-Camera Geometry. Prof. Dan Huttenlocher Fall 2003

CS 664 Slides #9 Multi-Camera Geometry. Prof. Dan Huttenlocher Fall 2003 CS 664 Slides #9 Multi-Camera Geometry Prof. Dan Huttenlocher Fall 2003 Pinhole Camera Geometric model of camera projection Image plane I, which rays intersect Camera center C, through which all rays pass

More information

Depth from Stereo. Dominic Cheng February 7, 2018

Depth from Stereo. Dominic Cheng February 7, 2018 Depth from Stereo Dominic Cheng February 7, 2018 Agenda 1. Introduction to stereo 2. Efficient Deep Learning for Stereo Matching (W. Luo, A. Schwing, and R. Urtasun. In CVPR 2016.) 3. Cascade Residual

More information

EECS 442 Computer vision. Stereo systems. Stereo vision Rectification Correspondence problem Active stereo vision systems

EECS 442 Computer vision. Stereo systems. Stereo vision Rectification Correspondence problem Active stereo vision systems EECS 442 Computer vision Stereo systems Stereo vision Rectification Correspondence problem Active stereo vision systems Reading: [HZ] Chapter: 11 [FP] Chapter: 11 Stereo vision P p p O 1 O 2 Goal: estimate

More information

Topics and things to know about them:

Topics and things to know about them: Practice Final CMSC 427 Distributed Tuesday, December 11, 2007 Review Session, Monday, December 17, 5:00pm, 4424 AV Williams Final: 10:30 AM Wednesday, December 19, 2007 General Guidelines: The final will

More information

Lecture 6 Stereo Systems Multi- view geometry Professor Silvio Savarese Computational Vision and Geometry Lab Silvio Savarese Lecture 6-24-Jan-15

Lecture 6 Stereo Systems Multi- view geometry Professor Silvio Savarese Computational Vision and Geometry Lab Silvio Savarese Lecture 6-24-Jan-15 Lecture 6 Stereo Systems Multi- view geometry Professor Silvio Savarese Computational Vision and Geometry Lab Silvio Savarese Lecture 6-24-Jan-15 Lecture 6 Stereo Systems Multi- view geometry Stereo systems

More information

Stereo Matching.

Stereo Matching. Stereo Matching Stereo Vision [1] Reduction of Searching by Epipolar Constraint [1] Photometric Constraint [1] Same world point has same intensity in both images. True for Lambertian surfaces A Lambertian

More information

Understanding Variability

Understanding Variability Understanding Variability Why so different? Light and Optics Pinhole camera model Perspective projection Thin lens model Fundamental equation Distortion: spherical & chromatic aberration, radial distortion

More information

Multi-Flash Stereopsis: Depth Edge Preserving Stereo with Small Baseline Illumination

Multi-Flash Stereopsis: Depth Edge Preserving Stereo with Small Baseline Illumination SUBMITTED TO IEEE TRANS ON PAMI, 2006 1 Multi-Flash Stereopsis: Depth Edge Preserving Stereo with Small Baseline Illumination Rogerio Feris 1, Ramesh Raskar 2, Longbin Chen 1, Karhan Tan 3, Matthew Turk

More information

55:148 Digital Image Processing Chapter 11 3D Vision, Geometry

55:148 Digital Image Processing Chapter 11 3D Vision, Geometry 55:148 Digital Image Processing Chapter 11 3D Vision, Geometry Topics: Basics of projective geometry Points and hyperplanes in projective space Homography Estimating homography from point correspondence

More information

METRIC PLANE RECTIFICATION USING SYMMETRIC VANISHING POINTS

METRIC PLANE RECTIFICATION USING SYMMETRIC VANISHING POINTS METRIC PLANE RECTIFICATION USING SYMMETRIC VANISHING POINTS M. Lefler, H. Hel-Or Dept. of CS, University of Haifa, Israel Y. Hel-Or School of CS, IDC, Herzliya, Israel ABSTRACT Video analysis often requires

More information

Introduction à la vision artificielle X

Introduction à la vision artificielle X Introduction à la vision artificielle X Jean Ponce Email: ponce@di.ens.fr Web: http://www.di.ens.fr/~ponce Planches après les cours sur : http://www.di.ens.fr/~ponce/introvis/lect10.pptx http://www.di.ens.fr/~ponce/introvis/lect10.pdf

More information

Structured Light. Tobias Nöll Thanks to Marc Pollefeys, David Nister and David Lowe

Structured Light. Tobias Nöll Thanks to Marc Pollefeys, David Nister and David Lowe Structured Light Tobias Nöll tobias.noell@dfki.de Thanks to Marc Pollefeys, David Nister and David Lowe Introduction Previous lecture: Dense reconstruction Dense matching of non-feature pixels Patch-based

More information

Efficient Large-Scale Stereo Matching

Efficient Large-Scale Stereo Matching Efficient Large-Scale Stereo Matching Andreas Geiger*, Martin Roser* and Raquel Urtasun** *KARLSRUHE INSTITUTE OF TECHNOLOGY **TOYOTA TECHNOLOGICAL INSTITUTE AT CHICAGO KIT University of the State of Baden-Wuerttemberg

More information

Stereo and Epipolar geometry

Stereo and Epipolar geometry Previously Image Primitives (feature points, lines, contours) Today: Stereo and Epipolar geometry How to match primitives between two (multiple) views) Goals: 3D reconstruction, recognition Jana Kosecka

More information

Introduction to Computer Vision. Introduction CMPSCI 591A/691A CMPSCI 570/670. Image Formation

Introduction to Computer Vision. Introduction CMPSCI 591A/691A CMPSCI 570/670. Image Formation Introduction CMPSCI 591A/691A CMPSCI 570/670 Image Formation Lecture Outline Light and Optics Pinhole camera model Perspective projection Thin lens model Fundamental equation Distortion: spherical & chromatic

More information

MAPI Computer Vision. Multiple View Geometry

MAPI Computer Vision. Multiple View Geometry MAPI Computer Vision Multiple View Geometry Geometry o Multiple Views 2- and 3- view geometry p p Kpˆ [ K R t]p Geometry o Multiple Views 2- and 3- view geometry Epipolar Geometry The epipolar geometry

More information

Lecture 6 Stereo Systems Multi-view geometry

Lecture 6 Stereo Systems Multi-view geometry Lecture 6 Stereo Systems Multi-view geometry Professor Silvio Savarese Computational Vision and Geometry Lab Silvio Savarese Lecture 6-5-Feb-4 Lecture 6 Stereo Systems Multi-view geometry Stereo systems

More information

Epipolar Geometry and Stereo Vision

Epipolar Geometry and Stereo Vision CS 1674: Intro to Computer Vision Epipolar Geometry and Stereo Vision Prof. Adriana Kovashka University of Pittsburgh October 5, 2016 Announcement Please send me three topics you want me to review next

More information

CS664 Lecture #18: Motion

CS664 Lecture #18: Motion CS664 Lecture #18: Motion Announcements Most paper choices were fine Please be sure to email me for approval, if you haven t already This is intended to help you, especially with the final project Use

More information

Robert Collins CSE486, Penn State. Lecture 09: Stereo Algorithms

Robert Collins CSE486, Penn State. Lecture 09: Stereo Algorithms Lecture 09: Stereo Algorithms left camera located at (0,0,0) Recall: Simple Stereo System Y y Image coords of point (X,Y,Z) Left Camera: x T x z (, ) y Z (, ) x (X,Y,Z) z X right camera located at (T x,0,0)

More information

COMPARATIVE STUDY OF DIFFERENT APPROACHES FOR EFFICIENT RECTIFICATION UNDER GENERAL MOTION

COMPARATIVE STUDY OF DIFFERENT APPROACHES FOR EFFICIENT RECTIFICATION UNDER GENERAL MOTION COMPARATIVE STUDY OF DIFFERENT APPROACHES FOR EFFICIENT RECTIFICATION UNDER GENERAL MOTION Mr.V.SRINIVASA RAO 1 Prof.A.SATYA KALYAN 2 DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING PRASAD V POTLURI SIDDHARTHA

More information

Computer Vision I. Announcements. Random Dot Stereograms. Stereo III. CSE252A Lecture 16

Computer Vision I. Announcements. Random Dot Stereograms. Stereo III. CSE252A Lecture 16 Announcements Stereo III CSE252A Lecture 16 HW1 being returned HW3 assigned and due date extended until 11/27/12 No office hours today No class on Thursday 12/6 Extra class on Tuesday 12/4 at 6:30PM in

More information

Computer Vision, Lecture 11

Computer Vision, Lecture 11 Computer Vision, Lecture 11 Professor Hager http://www.cs.jhu.edu/~hager Computational Stereo Much of geometric vision is based on information from (or more) camera locations hard to recover 3D information

More information

Step-by-Step Model Buidling

Step-by-Step Model Buidling Step-by-Step Model Buidling Review Feature selection Feature selection Feature correspondence Camera Calibration Euclidean Reconstruction Landing Augmented Reality Vision Based Control Sparse Structure

More information

Cameras and Stereo CSE 455. Linda Shapiro

Cameras and Stereo CSE 455. Linda Shapiro Cameras and Stereo CSE 455 Linda Shapiro 1 Müller-Lyer Illusion http://www.michaelbach.de/ot/sze_muelue/index.html What do you know about perspective projection? Vertical lines? Other lines? 2 Image formation

More information

Stereo CSE 576. Ali Farhadi. Several slides from Larry Zitnick and Steve Seitz

Stereo CSE 576. Ali Farhadi. Several slides from Larry Zitnick and Steve Seitz Stereo CSE 576 Ali Farhadi Several slides from Larry Zitnick and Steve Seitz Why do we perceive depth? What do humans use as depth cues? Motion Convergence When watching an object close to us, our eyes

More information

Lecture 17: Recursive Ray Tracing. Where is the way where light dwelleth? Job 38:19

Lecture 17: Recursive Ray Tracing. Where is the way where light dwelleth? Job 38:19 Lecture 17: Recursive Ray Tracing Where is the way where light dwelleth? Job 38:19 1. Raster Graphics Typical graphics terminals today are raster displays. A raster display renders a picture scan line

More information

CEng Computational Vision

CEng Computational Vision CEng 583 - Computational Vision 2011-2012 Spring Week 4 18 th of March, 2011 Today 3D Vision Binocular (Multi-view) cues: Stereopsis Motion Monocular cues Shading Texture Familiar size etc. "God must

More information

Epipolar Geometry and the Essential Matrix

Epipolar Geometry and the Essential Matrix Epipolar Geometry and the Essential Matrix Carlo Tomasi The epipolar geometry of a pair of cameras expresses the fundamental relationship between any two corresponding points in the two image planes, and

More information