# Computer Vision: Stereo Vision

Professor William Hoff, Dept. of Electrical Engineering & Computer Science, Colorado School of Mines


## Transcription


### Stereo Vision

### Inferring 3D from 2D

- Model-based pose estimation: a single (calibrated) camera and a known model → we can determine the pose of the model.
- Stereo vision: two (calibrated) cameras whose relative pose is known, and an arbitrary scene → we can determine the positions of points in the scene.

### Stereo Vision

- A way of getting depth (3-D) information about a scene from two (or more) 2-D images.
- Used by humans and animals, and now by computers.
- Computational stereo vision: studied extensively over the last 25 years; difficult, and still being researched; some commercial systems are available.
- Good reference: Scharstein and Szeliski, "A Taxonomy and Evaluation of Dense Two-Frame Stereo Correspondence Algorithms," International Journal of Computer Vision, 47(1-3), 2002; an extensive website with evaluations of algorithms, test data, and code.

### Example

(Figures: left image, right image, and the reconstructed surface with image texture. Images from Davi Geiger.)

### Example

Notice how different parts of the two images align for different values of the horizontal shift (disparity):

```matlab
Iright = im2double(imread('pentagonright.png'));
Ileft = im2double(imread('pentagonleft.png'));
% Disparity is d = xleft - xright,
% so Ileft(x,y) = Iright(x-d,y)
for d = -20:20
    d
    Idiff = abs(Ileft(:, 21:end-20) - Iright(:, 21-d:end-20-d));
    imshow(Idiff, []);
    pause
end
```
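The same alignment idea can be sketched in NumPy with a synthetic stereo pair (hypothetical random data rather than the pentagon images, so the example is self-contained): the candidate shift d that best aligns a region of the two images is the disparity there.

```python
import numpy as np

# Synthetic stereo pair: every left-image column x copies right-image
# column x - d, i.e. Ileft(x,y) = Iright(x-d,y) with d = xleft - xright
rng = np.random.default_rng(1)
true_d = 7
right = rng.random((50, 120))
left = np.zeros_like(right)
left[:, true_d:] = right[:, :-true_d]

# Sweep candidate disparities; the shift that aligns the images gives
# the smallest mean absolute difference over a central region
best_d, best_err = None, np.inf
for d in range(0, 21):
    err = np.abs(left[:, 30:90] - right[:, 30 - d:90 - d]).mean()
    if err < best_err:
        best_d, best_err = d, err

print(best_d)   # 7
```

For this single-shift scene the sweep recovers the planted disparity exactly; a real image pair shows a minimum-difference band that moves with depth, which is what the MATLAB loop visualizes.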

### Stereo Displays

Stereograms were popular in the early 1900s. A special viewer was needed to display the two different images to the left and right eyes.

### Stereo Displays

3D movies were popular in the 1950s. The left and right images were displayed in red and blue.

### Stereo Displays

The current technology for 3D movies and computer displays uses polarized glasses: the viewer wears eyeglasses containing circular polarizers of opposite handedness.

### Stereo Principle

If you know:

- the intrinsic parameters of each camera
- the relative pose between the cameras

and you measure:

- an image point in the left camera
- the corresponding point in the right camera

then each image point corresponds to a ray emanating from that camera, and you can intersect the rays (triangulate) to find the absolute position of the point.

### Stereo Geometry: Simple Case

Assume the image planes are coplanar, and that there is only a translation in the X direction between the two coordinate frames; b is the baseline distance between the cameras. (Figure: point P(X_L, Y_L, Z_L) viewed by left and right cameras separated by baseline b.) By similar triangles,

$$\frac{x_L}{f} = \frac{X_L}{Z_L}, \qquad \frac{x_R}{f} = \frac{X_R}{Z_R}, \qquad Z_L = Z_R = Z, \qquad X_L = X_R + b$$

The disparity is $d = x_L - x_R$, so

$$d = x_L - x_R = \frac{f X_L}{Z} - \frac{f X_R}{Z} = \frac{f b}{Z} \quad\Longrightarrow\quad Z = \frac{f b}{d}$$

### Goal: A Complete Disparity Map

Disparity is the difference in position of corresponding points between the left and right images.

### Reconstruction Error

Given the uncertainty in the pixel projection of a point, what is the error in depth? The error in depth (ΔZ) will depend on Z, b, and f, and on Δx_L and Δx_R. Let's find the expected value of the error, and the variance of the error.

### Reconstruction Error (continued)

First, find the error in disparity, Δd, from the errors Δx_L and Δx_R of locating the feature in each image. Since

$$d = x_L - x_R$$

taking the total derivative of each side gives

$$\Delta d = \Delta x_L - \Delta x_R$$

Assuming Δx_L and Δx_R are independent and zero mean,

$$E[\Delta d] = E[\Delta x_L] - E[\Delta x_R] = 0$$

and

$$\mathrm{Var}[\Delta d] = E[\Delta d^2] = E[(\Delta x_L - \Delta x_R)^2] = E[\Delta x_L^2] - 2E[\Delta x_L]E[\Delta x_R] + E[\Delta x_R^2] = E[\Delta x_L^2] + E[\Delta x_R^2]$$

So $\sigma_d^2 = \sigma_L^2 + \sigma_R^2$.

### Reconstruction Error (continued)

Next, take the total derivative of $Z = fb/d$. If the only uncertainty is in the disparity d,

$$\Delta Z = -\frac{f b}{d^2}\,\Delta d$$

The mean error is $\overline{\Delta Z} = E[\Delta Z]$, and the variance of the error is $\sigma_Z^2 = E[(\Delta Z - \overline{\Delta Z})^2]$.

### Example

A stereo vision system estimates the disparity of a point as d = 10 pixels.

- What is the depth (Z) of the point, if f = 500 pixels and b = 10 cm?
- What is the uncertainty (standard deviation) of the depth, if the standard deviation of locating a feature in each image is 1 pixel?
- How would you handle uncertainty in both the disparity and the focal length?
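A sketch of the first two parts, using $Z = fb/d$ and the error-propagation results from the previous slides (units and values as given in the example):

```python
import math

f = 500.0          # focal length, pixels
b = 0.10           # baseline, meters (10 cm)
d = 10.0           # measured disparity, pixels
sigma_x = 1.0      # std dev of feature location in each image, pixels

# Depth from the simple stereo relation Z = f*b/d
Z = f * b / d                          # 5.0 m

# Disparity uncertainty: sigma_d^2 = sigma_L^2 + sigma_R^2
sigma_d = math.sqrt(2.0) * sigma_x     # ~1.414 pixels

# First-order propagation through dZ = -(f*b/d^2) * dd
sigma_Z = (f * b / d**2) * sigma_d     # ~0.707 m

print(Z, sigma_Z)
```

For the third part, the same first-order approach extends naturally: treating d and f as independent, the depth variance picks up an additional $(\partial Z/\partial f)^2 \sigma_f^2 = (b/d)^2 \sigma_f^2$ term.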

### Geometry: General Case

The cameras are not aligned, but we still know their relative pose. Assuming f = 1, we have

$$\mathbf{p}_L = \begin{pmatrix} x_L \\ y_L \\ 1 \end{pmatrix}, \qquad \mathbf{p}_R = \begin{pmatrix} x_R \\ y_R \\ 1 \end{pmatrix}$$

In principle, you can find P by intersecting the rays $O_L \mathbf{p}_L$ and $O_R \mathbf{p}_R$. However, the rays may not intersect; instead, find the midpoint of the segment perpendicular to the two rays.

### Triangulation (continued)

The projection of P (expressed in the left camera frame) onto the left image is $Z_L \mathbf{p}_L = M_L P$, and the projection onto the right image is $Z_R \mathbf{p}_R = M_R P$, where

$$M_L = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{pmatrix}, \qquad M_R = \begin{pmatrix} r_{11} & r_{12} & r_{13} & t_x \\ r_{21} & r_{22} & r_{23} & t_y \\ r_{31} & r_{32} & r_{33} & t_z \end{pmatrix}$$

and $M_R$ is built from the pose of the left camera frame with respect to the right: the rotation $R$ and the translation $\mathbf{t}^R_{Lorg}$ (the left camera origin expressed in the right frame).

### Triangulation (continued)

Note that $\mathbf{p}_L$ and $M_L P$ are parallel, so their cross product should be zero; similarly for $\mathbf{p}_R$ and $M_R P$. Point P should satisfy both

$$\mathbf{p}_L \times (M_L P) = \mathbf{0}, \qquad \mathbf{p}_R \times (M_R P) = \mathbf{0}$$

This is a system of four independent equations (each cross product contributes two); we can solve for the three unknowns $(X_L, Y_L, Z_L)$ using least squares. The method also works for more than two cameras.
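The cross-product conditions stack into one linear system, which suggests a compact implementation. A NumPy sketch (the camera matrices and point below are hypothetical, chosen only to illustrate the method):

```python
import numpy as np

def skew(v):
    """Cross-product matrix: skew(v) @ w == np.cross(v, w)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def triangulate(points, cams):
    """Linear triangulation: stack p x (M P) = 0 for every view and
    solve the homogeneous least-squares problem by SVD.
    points: homogeneous image points (x, y, 1); cams: 3x4 matrices."""
    A = np.vstack([skew(p) @ M for p, M in zip(points, cams)])
    _, _, Vt = np.linalg.svd(A)
    P = Vt[-1]
    return P[:3] / P[3]            # dehomogenize to (X, Y, Z)

# Hypothetical setup: left camera at the origin, right camera translated
# by b = 0.1 along X (M_R maps left-frame points into the right frame)
M_L = np.hstack([np.eye(3), np.zeros((3, 1))])
M_R = np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])
P_true = np.array([0.2, 0.1, 5.0])
p_L = np.array([P_true[0] / P_true[2], P_true[1] / P_true[2], 1.0])
Pr = M_R @ np.append(P_true, 1.0)
p_R = np.array([Pr[0] / Pr[2], Pr[1] / Pr[2], 1.0])

X_est = triangulate([p_L, p_R], [M_L, M_R])
print(X_est)    # ~ [0.2, 0.1, 5.0]
```

Because each view simply contributes rows to `A`, adding a third or fourth camera is just a longer list, matching the slide's remark that the method works for more than two cameras.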

### Stereo Process

1. Extract features from the left and right images.
2. Match the left and right image features, to get their disparity in position (the "correspondence problem").
3. Use the stereo disparity to compute depth (the "reconstruction problem").

The correspondence problem is the most difficult.

### Characteristics of Human Stereo Vision

Matching features must appear similar in the left and right images. For example, we can't fuse a left stereo image with a negative of the right image.

### Characteristics of Human Stereo Vision

We can only fuse objects within a limited range of depth around the fixation distance. Vergence eye movements are needed to fuse objects over a larger range of depths.

### Panum's Fusional Area

Panum's fusional area is the range of depths over which binocular fusion can occur (without changing vergence angles). It's actually quite small; we are able to perceive a wide range of depths because we keep changing vergence angles.

### Characteristics of Human Stereo Vision

Cells in the visual cortex are selective for stereo disparity; neurons that are selective for a larger disparity range have larger receptive fields.

- Zero disparity: at the fixation distance
- Near: in front of the point of fixation
- Far: behind the point of fixation

### Characteristics of Human Stereo Vision

We can fuse random-dot stereograms (Bela Julesz, 1971). This:

- shows that the stereo system can function independently
- shows that we can match simple features
- highlights the ambiguity of the matching process

### Example

Make a random-dot stereogram:

```matlab
L = rand(400,400);
R = L;
% Shift center portion by 50 pixels
R(100:300, 150:350) = L(100:300, 100:300);
% Fill in the part that moved
R(100:300, 100:149) = rand(201, 50);
```

### Correspondence Problem

This is the most difficult part of stereo vision. For every point in the left image there are many possible matches in the right image: locally, many points look similar, so matches are ambiguous. We can use the (known) geometry of the cameras to help limit the search for matches. The most important constraint is the epipolar constraint: the search for a match can be limited to a certain line in the other image.

### Epipolar Constraint

With aligned cameras, the search for the corresponding point is 1D, along the corresponding row of the other camera's image.

### Epipolar Constraint for Non-Aligned Cameras

If the cameras are not aligned, a 1D search region can still be determined for the corresponding point: P1, C1, and C2 determine a plane that cuts image I2 in a line, and P2 must lie on that line.
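One way to make this construction concrete: the epipolar line is the image in I2 of the ray through C1 and P1, so projecting any two points of that ray into image 2 and joining them gives the line. A NumPy sketch with hypothetical projection matrices (the pseudo-inverse back-projects p1 to some point on its ray):

```python
import numpy as np

def epipolar_line(p1, M1, M2):
    """Epipolar line in image 2 for homogeneous pixel p1 in image 1.
    M1, M2 are 3x4 projection matrices."""
    # Camera-1 center: the null space of M1 (homogeneous 4-vector)
    _, _, Vt = np.linalg.svd(M1)
    C1 = Vt[-1]
    # A second point on the ray: back-project p1 via the pseudo-inverse
    X = np.linalg.pinv(M1) @ p1
    # Project both into image 2; the line through them is their cross
    # product in homogeneous line coordinates (a, b, c): a*x + b*y + c = 0
    e2 = M2 @ C1        # this one is the epipole in image 2
    x2 = M2 @ X
    return np.cross(e2, x2)

# Sanity check with a rectified pair (pure X translation): the epipolar
# line must be horizontal, i.e. the same row as the input point.
M1 = np.hstack([np.eye(3), np.zeros((3, 1))])
M2 = np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])
l = epipolar_line(np.array([0.3, 0.2, 1.0]), M1, M2)
l = l / l[1]            # normalized line: y - 0.2 = 0, the row y = 0.2
```

For general (rotated) camera pairs the same function returns a slanted line; this is the 1D search region the slide describes.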

### Rectification

If the relative camera pose is known, it is possible to rectify the images: effectively rotate both cameras so that they are looking perpendicular to the line joining the camera centers. This means that the epipolar lines become horizontal, and matching algorithms will be more efficient.

(Figures: the original image pair overlaid with several epipolar lines; the images rectified so that the epipolar lines are horizontal and in vertical correspondence. From Richard Szeliski, Computer Vision: Algorithms and Applications, Springer, 2010.)

### Correspondence Problem

Even using the epipolar constraint, there are many possible matches. Worst-case scenarios: a white board (no features) and checkered wallpaper (ambiguous matches). The problem is under-constrained; to solve it, we need to impose assumptions about the real world:

- Disparity limits
- Appearance
- Uniqueness
- Ordering
- Smoothness

### Disparity Limits

Assume that valid disparities lie within certain limits; this constrains the search. Why is this usually true? When is it violated?

### Appearance

Assume that features have similar appearance in the left and right images. Why is this usually true? When is it violated?

### Uniqueness

Assume that a point in the left image can have at most one match in the right image. Why is this usually true? When is it violated?

### Ordering

Assume that features appear in the same left-to-right order in each image. Why is this usually true? When is it violated?

### Smoothness

Assume that objects have mostly smooth surfaces, meaning that disparities should vary smoothly (e.g., have a low second derivative). Why is this usually true? When is it violated?

### Methods for Correspondence

Match points based on local similarity between the images. Two general approaches:

- Correlation-based approaches: match image patches using correlation. This assumes only a translational difference between the two local patches (no rotation, and no differences in appearance due to perspective), which is a good assumption if the patch covers a single surface and the surface is far away compared to the baseline between the cameras. Works well for scenes with lots of texture.
- Feature-based approaches: match edges, lines, or corners. Gives a sparse reconstruction; may be better for scenes with little texture.

### Correlation Approach

1. Select a range of disparities to search.
2. For each patch in the left image, compute a cross-correlation score at every point along the epipolar line.
3. Find the maximum correlation score along that line.

### MATLAB Demo

Parameters: the size of the template patch, the horizontal disparity search window, and the vertical disparity search window. (Figures: template patch from the left image; search region in the right image; correlation scores, with the peak in red.)

```matlab
% Simple stereo system using cross correlation
clear all
close all

% Constants
W = 16;     % size of cross-correlation template is (2W+1) x (2W+1)
DH = 50;    % horizontal disparity search limit is -DH .. +DH
DV = 8;     % vertical disparity search limit is -DV .. +DV

Ileft = imread('left.png');
Iright = imread('right.png');
figure(1), imshow(Ileft, []), title('Left image');
figure(2), imshow(Iright, []), title('Right image');
pause;

% Calculate disparity at a set of discrete points
xBorder = W + DH + 1;
yBorder = W + DV + 1;
xTsize = W + DH;    % horizontal search area size is 2*xTsize+1
yTsize = W + DV;    % vertical search area size is 2*yTsize+1
```

### MATLAB Demo (continued)

Scan through the left image; extract a template patch; do normalized cross-correlation to match it to the right image; accept a match only if the score exceeds a threshold.

```matlab
nPts = 0;   % number of found disparity points
for x = xBorder : W : size(Ileft,2)-xBorder
    for y = yBorder : W : size(Ileft,1)-yBorder
        % Extract a template from the left image centered at (x,y)
        figure(1), hold on, plot(x, y, 'rd'), hold off;
        T = imcrop(Ileft, [x-W y-W 2*W 2*W]);

        % Search for a match in the right image, in a region centered
        % at (x,y), of size (2*xTsize+1) wide by (2*yTsize+1) high
        IR = imcrop(Iright, [x-xTsize y-yTsize 2*xTsize 2*yTsize]);

        % The correlation score image is the size of IR, expanded by W
        % in each direction
        ccScores = normxcorr2(T, IR);

        % Get the location of the peak in the correlation score image
        [maxScore, maxIndex] = max(ccScores(:));
        [yPeak, xPeak] = ind2sub(size(ccScores), maxIndex);

        % If the score is too low, ignore this point
        if maxScore < 0.85
            continue;
        end
```

### MATLAB Demo (continued)

Extract the peak location and save the disparity value; plot all points when done.

```matlab
        % These are the coordinates of the peak in the search image
        yPeak = yPeak - W;
        xPeak = xPeak - W;

        % These are the coordinates in the full-sized right image
        xPeak = xPeak + (x - xTsize);
        yPeak = yPeak + (y - yTsize);
        figure(2), hold on, plot(xPeak, yPeak, 'rd'), hold off;

        % Save the point in a list, along with its disparity
        nPts = nPts + 1;
        xPt(nPts) = x;
        yPt(nPts) = y;
        dPt(nPts) = xPeak - x;   % disparity is xright - xleft
    end
end

figure, plot3(xPt, yPt, dPt, 'd');
```

### Area-Based Matching

Window size tradeoff:

- Larger windows are more distinctive (matches are more unique).
- Smaller windows are less likely to cross depth discontinuities.

Similarity measures:

- CC (cross-correlation)
- SSD (sum of squared differences); since SSD expands to the sum of the patch energies minus twice the cross-correlation, minimizing SSD is equivalent to maximizing CC when the patch energies are constant
- SAD (sum of absolute differences)
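A small NumPy sketch of these measures on hypothetical random patches; the final identity is the SSD expansion that links it to cross-correlation:

```python
import numpy as np

def ssd(a, b):   # sum of squared differences (lower = more similar)
    return np.sum((a - b) ** 2)

def sad(a, b):   # sum of absolute differences (lower = more similar)
    return np.sum(np.abs(a - b))

def ncc(a, b):   # normalized cross-correlation (higher = more similar)
    a0, b0 = a - a.mean(), b - b.mean()
    return np.sum(a0 * b0) / (np.linalg.norm(a0) * np.linalg.norm(b0))

rng = np.random.default_rng(0)
p = rng.random((17, 17))                       # reference patch
q = p + 0.01 * rng.standard_normal(p.shape)    # nearly identical patch
r = rng.random((17, 17))                       # unrelated patch

# All three measures prefer the near-duplicate over the unrelated patch
assert ssd(p, q) < ssd(p, r)
assert sad(p, q) < sad(p, r)
assert ncc(p, q) > ncc(p, r)

# SSD = sum(a^2) + sum(b^2) - 2*sum(a*b): with (near-)constant patch
# energies, minimizing SSD maximizes the correlation term sum(a*b)
assert np.isclose(ssd(p, q),
                  np.sum(p**2) + np.sum(q**2) - 2 * np.sum(p * q))
```

In practice the normalized variant (NCC, as used by `normxcorr2` in the demo above) is preferred when the two images differ in brightness or contrast, since the mean-subtraction and normalization cancel those offsets.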

### Additional Notes

- Stereo vision website
- Example commercial system


### 3D Model Acquisition by Tracking 2D Wireframes

3D Model Acquisition by Tracking 2D Wireframes M. Brown, T. Drummond and R. Cipolla {96mab twd20 cipolla}@eng.cam.ac.uk Department of Engineering University of Cambridge Cambridge CB2 1PZ, UK Abstract

### CSE 252B: Computer Vision II

CSE 252B: Computer Vision II Lecturer: Serge Belongie Scribes: Jeremy Pollock and Neil Alldrin LECTURE 14 Robust Feature Matching 14.1. Introduction Last lecture we learned how to find interest points

### 3D Reconstruction Of Occluded Objects From Multiple Views

3D Reconstruction Of Occluded Objects From Multiple Views Cong Qiaoben Stanford University Dai Shen Stanford University Kaidi Yan Stanford University Chenye Zhu Stanford University Abstract In this paper

### Static Scene Reconstruction

GPU supported Real-Time Scene Reconstruction with a Single Camera Jan-Michael Frahm, 3D Computer Vision group, University of North Carolina at Chapel Hill Static Scene Reconstruction 1 Capture on campus

### Conversion of 2D Image into 3D and Face Recognition Based Attendance System

Conversion of 2D Image into 3D and Face Recognition Based Attendance System Warsha Kandlikar, Toradmal Savita Laxman, Deshmukh Sonali Jagannath Scientist C, Electronics Design and Technology, NIELIT Aurangabad,

### Augmented Reality II - Camera Calibration - Gudrun Klinker May 11, 2004

Augmented Reality II - Camera Calibration - Gudrun Klinker May, 24 Literature Richard Hartley and Andrew Zisserman, Multiple View Geometry in Computer Vision, Cambridge University Press, 2. (Section 5,

### Motion Analysis. Motion analysis. Now we will talk about. Differential Motion Analysis. Motion analysis. Difference Pictures

Now we will talk about Motion Analysis Motion analysis Motion analysis is dealing with three main groups of motionrelated problems: Motion detection Moving object detection and location. Derivation of

### Final Exam Study Guide CSE/EE 486 Fall 2007

Final Exam Study Guide CSE/EE 486 Fall 2007 Lecture 2 Intensity Sufaces and Gradients Image visualized as surface. Terrain concepts. Gradient of functions in 1D and 2D Numerical derivatives. Taylor series.

### !!!"#\$%!&'()*&+,'-%%./01"&', Tokihiko Akita. AISIN SEIKI Co., Ltd. Parking Space Detection with Motion Stereo Camera applying Viterbi algorithm

!!!"#\$%!&'()*&+,'-%%./01"&', Tokihiko Akita AISIN SEIKI Co., Ltd. Parking Space Detection with Motion Stereo Camera applying Viterbi algorithm! !"#\$%&'(&)'*+%*+, -. /"0123'4*5 6.&/",70&"\$2'37+89&:&;7+%3#7&

### Visual Odometry. Features, Tracking, Essential Matrix, and RANSAC. Stephan Weiss Computer Vision Group NASA-JPL / CalTech

Visual Odometry Features, Tracking, Essential Matrix, and RANSAC Stephan Weiss Computer Vision Group NASA-JPL / CalTech Stephan.Weiss@ieee.org (c) 2013. Government sponsorship acknowledged. Outline The

### Subpixel accurate refinement of disparity maps using stereo correspondences

Subpixel accurate refinement of disparity maps using stereo correspondences Matthias Demant Lehrstuhl für Mustererkennung, Universität Freiburg Outline 1 Introduction and Overview 2 Refining the Cost Volume

### Detecting motion by means of 2D and 3D information

Detecting motion by means of 2D and 3D information Federico Tombari Stefano Mattoccia Luigi Di Stefano Fabio Tonelli Department of Electronics Computer Science and Systems (DEIS) Viale Risorgimento 2,

### Vision 3D articielle Disparity maps, correlation

Vision 3D articielle Disparity maps, correlation Pascal Monasse monasse@imagine.enpc.fr IMAGINE, École des Ponts ParisTech http://imagine.enpc.fr/~monasse/stereo/ Contents Triangulation Epipolar rectication

### CS251 Spring 2014 Lecture 7

CS251 Spring 2014 Lecture 7 Stephanie R Taylor Feb 19, 2014 1 Moving on to 3D Today, we move on to 3D coordinates. But first, let s recap of what we did in 2D: 1. We represented a data point in 2D data

### Measurement of 3D Foot Shape Deformation in Motion

Measurement of 3D Foot Shape Deformation in Motion Makoto Kimura Masaaki Mochimaru Takeo Kanade Digital Human Research Center National Institute of Advanced Industrial Science and Technology, Japan The

### Projector Calibration for Pattern Projection Systems

Projector Calibration for Pattern Projection Systems I. Din *1, H. Anwar 2, I. Syed 1, H. Zafar 3, L. Hasan 3 1 Department of Electronics Engineering, Incheon National University, Incheon, South Korea.

### Structure from Motion. Prof. Marco Marcon

Structure from Motion Prof. Marco Marcon Summing-up 2 Stereo is the most powerful clue for determining the structure of a scene Another important clue is the relative motion between the scene and (mono)

### Adaptive Zoom Distance Measuring System of Camera Based on the Ranging of Binocular Vision

Adaptive Zoom Distance Measuring System of Camera Based on the Ranging of Binocular Vision Zhiyan Zhang 1, Wei Qian 1, Lei Pan 1 & Yanjun Li 1 1 University of Shanghai for Science and Technology, China

### Image processing and features

Image processing and features Gabriele Bleser gabriele.bleser@dfki.de Thanks to Harald Wuest, Folker Wientapper and Marc Pollefeys Introduction Previous lectures: geometry Pose estimation Epipolar geometry

### Optic Flow and Basics Towards Horn-Schunck 1

Optic Flow and Basics Towards Horn-Schunck 1 Lecture 7 See Section 4.1 and Beginning of 4.2 in Reinhard Klette: Concise Computer Vision Springer-Verlag, London, 2014 1 See last slide for copyright information.

### Stereo Video Processing for Depth Map

Stereo Video Processing for Depth Map Harlan Hile and Colin Zheng University of Washington Abstract This paper describes the implementation of a stereo depth measurement algorithm in hardware on Field-Programmable

### Recovering structure from a single view Pinhole perspective projection

EPIPOLAR GEOMETRY The slides are from several sources through James Hays (Brown); Silvio Savarese (U. of Michigan); Svetlana Lazebnik (U. Illinois); Bill Freeman and Antonio Torralba (MIT), including their