Colorado School of Mines. Computer Vision. Professor William Hoff, Dept of Electrical Engineering & Computer Science.


2 Stereo Vision
3 Inferring 3D from 2D: Model-based pose estimation uses a single (calibrated) camera and a known model → can determine the pose of the model. Stereo vision uses two (calibrated) cameras and an arbitrary scene → can determine the positions of points in the scene; the relative pose between the cameras is also known.
4 Stereo Vision: A way of getting depth (3D) information about a scene from two (or more) 2D images. Used by humans and animals, and now by computers. Computational stereo vision has been studied extensively in the last 25 years; it is difficult and still being researched, but some commercial systems are available. Good reference: Scharstein and Szeliski, "A Taxonomy and Evaluation of Dense Two-Frame Stereo Correspondence Algorithms," International Journal of Computer Vision, 47(1-3), 2002; extensive website with evaluations of algorithms, test data, and code.
5 Example: Left image, right image (Davi Geiger); reconstructed surface with image texture.
6 Example: Notice how different parts of the two images align for different values of the horizontal shift (disparity).

    Iright = im2double(imread('pentagonright.png'));
    Ileft = im2double(imread('pentagonleft.png'));
    % Disparity is d = xleft - xright
    % So Ileft(x,y) = Iright(x+d,y)
    for d = -20:20
        d
        Idiff = abs(Ileft(:, 21:end-20) - Iright(:, d+21:d+end-20));
        imshow(Idiff, []);
        pause
    end
7 Stereo Displays: Stereograms were popular in the early 1900s. A special viewer was needed to display two different images to the left and right eyes.
8 Stereo Displays: 3D movies were popular in the 1950s. The left and right images were displayed as red and blue.
9 Stereo Displays: Current technology for 3D movies and computer displays is to use polarized glasses. The viewer wears eyeglasses which contain circular polarizers of opposite handedness.
10 Stereo Principle: If you know the intrinsic parameters of each camera and the relative pose between the cameras, and if you measure an image point in the left camera and the corresponding point in the right camera, then each image point corresponds to a ray emanating from that camera, and you can intersect the rays (triangulate) to find the absolute point position.
11 Stereo Geometry, Simple Case: Assume the image planes are coplanar and there is only a translation in the X direction between the two camera coordinate frames; b is the baseline distance between the cameras. For a point P(X_L, Y_L, Z_L), perspective projection gives x_L/f = X_L/Z and x_R/f = X_R/Z, with X_R = X_L - b and Z_R = Z_L = Z. The disparity is d = x_L - x_R = f*X_L/Z - f*(X_L - b)/Z = f*b/Z, so Z = f*b/d.
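The depth-from-disparity relation Z = f*b/d on the slide above can be sketched in a few lines. This is an illustrative Python snippet (not from the lecture); the numeric values are made up:

```python
def depth_from_disparity(d, f, b):
    """Depth Z = f * b / d for an aligned stereo pair.
    d: disparity in pixels, f: focal length in pixels, b: baseline (same unit as Z)."""
    if d <= 0:
        raise ValueError("disparity must be positive for a point in front of the cameras")
    return f * b / d

# Example: f = 700 px, baseline b = 0.12 m, disparity d = 35 px
Z = depth_from_disparity(35, 700, 0.12)
print(round(Z, 3))  # 2.4 (meters)
```

Note that depth is inversely proportional to disparity: distant points have small disparities, which is why stereo depth accuracy degrades with range.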
12 Goal: a complete disparity map. Disparity is the difference in position of corresponding points between the left and right images.
13 Reconstruction Error: Given the uncertainty in the pixel projection of the point, what is the error in depth? The error in depth (ΔZ) will depend on Z, b, f, and the feature-location errors Δx_L, Δx_R. Let's find the expected value of the error, and the variance of the error.
14 Reconstruction Error: First, find the error in disparity Δd from the error of locating the feature in each image, Δx_L and Δx_R. Since d = x_L - x_R, taking the total derivative of each side gives Δd = Δx_L - Δx_R. Assuming Δx_L and Δx_R are independent and zero mean, E[Δd] = E[Δx_L] - E[Δx_R] = 0, and Var(Δd) = E[Δd²] = E[(Δx_L - Δx_R)²] = E[Δx_L²] - 2 E[Δx_L Δx_R] + E[Δx_R²] = E[Δx_L²] + E[Δx_R²]. So σ_d² = σ_L² + σ_R².
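The claim that independent localization errors add in variance (σ_d² = σ_L² + σ_R²) can be checked numerically. This Monte Carlo snippet is illustrative only, not part of the original lecture; the standard deviations are made up:

```python
import random

# Monte Carlo check that for independent, zero-mean errors DxL, DxR,
# the disparity error Dd = DxL - DxR has variance sigma_L^2 + sigma_R^2.
random.seed(0)                  # fixed seed, for repeatability
sigma_L, sigma_R = 1.0, 0.5     # assumed per-image localization std devs
n = 200_000
dd = [random.gauss(0, sigma_L) - random.gauss(0, sigma_R) for _ in range(n)]
mean = sum(dd) / n
var = sum((x - mean) ** 2 for x in dd) / n
print(round(var, 2))  # close to 1.0**2 + 0.5**2 = 1.25
```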
15 Reconstruction Error: Next, we take the total derivative of Z = f*b/d. If the only uncertainty is in the disparity d, then ΔZ = -(f*b/d²) Δd. The mean error is Z̄ = E[ΔZ], and the variance of the error is σ_Z² = E[(ΔZ - Z̄)²] = (f*b/d²)² σ_d².
16 Example: A stereo vision system estimates the disparity of a point as d = 10 pixels. What is the depth Z of the point, if f = 500 pixels and b = 10 cm? What is the uncertainty (standard deviation) of the depth, if the standard deviation of locating a feature in each image is 1 pixel? How would you handle uncertainty in both the disparity and the focal length?
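A sketch of the arithmetic for the first two questions (Python, illustrative), using Z = f*b/d, σ_d² = σ_L² + σ_R², and σ_Z = (f*b/d²) σ_d from the preceding slides:

```python
import math

f = 500.0         # focal length, pixels
b = 10.0          # baseline, cm
d = 10.0          # disparity, pixels
sigma_feat = 1.0  # per-image feature localization std dev, pixels

Z = f * b / d                                        # depth
sigma_d = math.sqrt(sigma_feat**2 + sigma_feat**2)   # disparity std dev = sqrt(2)
sigma_Z = (f * b / d**2) * sigma_d                   # first-order depth std dev

print(Z)                  # 500.0 cm, i.e. 5 m
print(round(sigma_Z, 1))  # 70.7 cm
```

A 1-pixel localization error in each image thus produces roughly 0.7 m of depth uncertainty at 5 m range with this geometry.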
17 Geometry, general case: The cameras are not aligned, but we still know the relative pose. Assuming f = 1, the image points in homogeneous form are p_L = (x_L, y_L, 1)^T and p_R = (x_R, y_R, 1)^T. In principle, you can find P by intersecting the rays O_L p_L and O_R p_R. However, they may not intersect; instead, find the midpoint of the segment perpendicular to the two rays.
18 Triangulation (continued): The projection of P onto the left image is Z_L p_L = M_L P; the projection of P onto the right image is Z_R p_R = M_R P, where (with P expressed in left-camera coordinates) M_L = [I | 0] and M_R = [r11 r12 r13 tx; r21 r22 r23 ty; r31 r32 r33 tz] is built from the rotation and the translation t_Lorg of the left camera origin, both expressed in the right camera frame.
19 Triangulation (continued): Note that p_L and M_L P are parallel, so their cross product should be zero; similarly for p_R and M_R P. Point P should satisfy both p_L × (M_L P) = 0 and p_R × (M_R P) = 0. Each cross product contributes two independent equations, giving a system of four equations; solve for the three unknowns (X_L, Y_L, Z_L) using least squares. The method also works for more than two cameras.
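The midpoint construction from slide 17 can also be sketched numerically. This pure-Python snippet is illustrative (the function name and inputs are hypothetical, not from the lecture): it finds the closest points on two rays, given camera centers and ray directions in a common frame, and averages them:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def triangulate_midpoint(o1, d1, o2, d2):
    """Midpoint of the shortest segment joining rays o1 + s*d1 and o2 + t*d2."""
    r = [a - b for a, b in zip(o1, o2)]
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, r), dot(d2, r)
    denom = a * c - b * b          # zero if the rays are parallel
    s = (b * e - c * d) / denom    # parameter along ray 1
    t = (a * e - b * d) / denom    # parameter along ray 2
    p1 = [o + s * u for o, u in zip(o1, d1)]
    p2 = [o + t * u for o, u in zip(o2, d2)]
    return [(x + y) / 2 for x, y in zip(p1, p2)]

# Two rays that intersect exactly at (0, 0, 5):
P = triangulate_midpoint([0, 0, 0], [0, 0, 1], [1, 0, 0], [-1, 0, 5])
print(P)  # [0.0, 0.0, 5.0]
```

When the rays do intersect (noise-free data), the midpoint coincides with the intersection; with noisy rays it returns a compromise point between them.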
20 Stereo Process: Extract features from the left and right images; match the left and right image features to get their disparity in position (the correspondence problem); use the stereo disparity to compute depth (the reconstruction problem). The correspondence problem is the most difficult.
21 Characteristics of Human Stereo Vision: Matching features must appear similar in the left and right images. For example, we can't fuse a left stereo image with a negative of the right image.
22 Characteristics of Human Stereo Vision: We can only fuse objects within a limited range of depth around the fixation distance. Vergence eye movements are needed to fuse objects over a larger range of depths.
23 Panum's Fusional Area: Panum's fusional area is the range of depths for which binocular fusion can occur (without changing vergence angles). It's actually quite small; we are able to perceive a wide range of depths because we are changing vergence angles.
24 Characteristics of Human Stereo Vision: Cells in the visual cortex are selective for stereo disparity; neurons that are selective for a larger disparity range have larger receptive fields. Zero disparity: at fixation distance; near: in front of the point of fixation; far: behind the point of fixation.
25 Characteristics of Human Stereo Vision: We can fuse random-dot stereograms (Bela Julesz, 1971). This shows that the stereo system can function independently and that we can match simple features; it highlights the ambiguity of the matching process.
26 Example: Make a random-dot stereogram.

    L = rand(400,400);
    R = L;
    % Shift center portion by 50 pixels
    R(100:300, 150:350) = L(100:300, 100:300);
    % Fill in the part that moved
    R(100:300, 100:149) = rand(201, 50);
27 Correspondence Problem: The most difficult part of stereo vision. For every point in the left image, there are many possible matches in the right image; locally, many points look similar → matches are ambiguous. We can use the (known) geometry of the cameras to help limit the search for matches. The most important constraint is the epipolar constraint: we can limit the search for a match to be along a certain line in the other image.
28 Epipolar Constraint: With aligned cameras, the search for a corresponding point is 1D, along the corresponding row of the other camera.
29 Epipolar constraint for non-aligned cameras: If the cameras are not aligned, a 1D search can still be performed for the corresponding point: P1, C1, and C2 determine a plane that cuts image I2 in a line, and P2 will lie on that line.
30 Rectification: If the relative camera pose is known, it is possible to rectify the images: effectively rotate both cameras so that they are looking perpendicular to the line joining the camera centers. This means that epipolar lines will be horizontal, and matching algorithms will be more efficient. (Figures: original image pair overlaid with several epipolar lines; images rectified so that epipolar lines are horizontal and in vertical correspondence. From Richard Szeliski, Computer Vision: Algorithms and Applications, Springer, 2010.)
31 Correspondence Problem: Even using the epipolar constraint, there are many possible matches. Worst-case scenarios: a white board (no features); a checkered wallpaper (ambiguous matches). The problem is under-constrained; to solve it, we need to impose assumptions about the real world: disparity limits, appearance, uniqueness, ordering, smoothness.
32 Disparity limits: Assume that valid disparities are within certain limits; this constrains the search. Why usually true? When is it violated?
33 Appearance: Assume features should have similar appearance in the left and right images. Why usually true? When is it violated?
34 Uniqueness: Assume that a point in the left image can have at most one match in the right image. Why usually true? When is it violated?
35 Ordering: Assume features should be in the same left-to-right order in each image. Why usually true? When is it violated?
36 Smoothness: Assume objects have mostly smooth surfaces, meaning that disparities should vary smoothly (e.g., have a low second derivative). Why usually true? When is it violated?
37 Methods for Correspondence: Match points based on local similarity between the images. Two general approaches. Correlation-based approaches: match image patches using correlation; assume only a translational difference between the two local patches (no rotation, or differences in appearance due to perspective), which is a good assumption if the patch covers a single surface and the surface is far away compared to the baseline between the cameras; work well for scenes with lots of texture. Feature-based approaches: match edges, lines, or corners; give a sparse reconstruction; may be better for scenes with little texture.
38 Correlation Approach: Select a range of disparities to search. For each patch in the left image, compute a cross-correlation score for every point along the epipolar line, and find the maximum correlation score along that line.
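The search loop just described can be sketched as follows. This is an illustrative Python snippet (not the lecture's MATLAB demo), reduced to a single 1-D scanline and using a sum-of-absolute-differences score instead of normalized cross-correlation to keep it short:

```python
def best_disparity(left, right, x, w, d_range):
    """Find the disparity d minimizing SAD between left[x-w:x+w+1]
    and right[x+d-w:x+d+w+1] on one scanline (lists of intensities)."""
    patch = left[x - w:x + w + 1]
    best, best_d = float("inf"), None
    for d in d_range:
        lo, hi = x + d - w, x + d + w + 1
        if lo < 0 or hi > len(right):
            continue  # window would fall off the image
        sad = sum(abs(a - b) for a, b in zip(patch, right[lo:hi]))
        if sad < best:
            best, best_d = sad, d
    return best_d

# The right scanline is the left one shifted 3 pixels to the right:
left  = [0, 0, 9, 5, 7, 0, 0, 0, 0, 0, 0]
right = [0, 0, 0, 0, 0, 9, 5, 7, 0, 0, 0]
print(best_disparity(left, right, 3, 1, range(0, 6)))  # 3
```

A full system repeats this for every pixel (2-D windows, both scanlines of a rectified pair) and typically rejects matches whose best score is still poor, as the MATLAB demo on the next slides does with a correlation threshold.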
39 Matlab demo. Parameters: size of template patch; horizontal disparity search window; vertical disparity search window. (Figure labels: Left, Right, Template, Search area; template patch from left; search region in right; correlation scores with peak in red.)

    % Simple stereo system using cross correlation
    clear all
    close all
    % Constants
    W = 16;    % size of cross-correlation template is (2W+1 x 2W+1)
    DH = 50;   % disparity horizontal search limit is -DH .. DH
    DV = 8;    % disparity vertical search limit is -DV .. +DV
    Ileft = imread('left.png');
    Iright = imread('right.png');
    figure(1), imshow(Ileft, []), title('Left image');
    figure(2), imshow(Iright, []), title('Right image');
    pause;
    % Calculate disparity at a set of discrete points
    xBorder = W+DH+1;
    yBorder = W+DV+1;
    xTsize = W+DH;   % horizontal template size is 2*xTsize+1
    yTsize = W+DV;   % vertical template size is 2*yTsize+1
40 Matlab demo (continued). Scan through the left image; extract a template patch from the left; do normalized cross-correlation to match to the right; accept a match if the score is greater than a threshold.

    npts = 0;  % number of found disparity points
    for x = xBorder : W : size(Ileft,2)-xBorder
      for y = yBorder : W : size(Ileft,1)-yBorder
        % Extract a template from the left image centered at x,y
        figure(1), hold on, plot(x, y, 'rd'), hold off;
        T = imcrop(Ileft, [x-W y-W 2*W 2*W]);
        %figure(3), imshow(T, []), title('Template');
        % Search for match in the right image, in a region centered at x,y
        % and of dimensions 2*xTsize+1 wide by 2*yTsize+1 high.
        IR = imcrop(Iright, [x-xTsize y-yTsize 2*xTsize 2*yTsize]);
        %figure(4), imshow(IR, []), title('Search area');
        % The correlation score image is the size of IR, expanded by W in
        % each direction.
        ccScores = normxcorr2(T, IR);
        %figure(5), imshow(ccScores, []), title('Correlation scores');
        % Get the location of the peak in the correlation score image
        [max_score, maxIndex] = max(ccScores(:));
        [ypeak, xpeak] = ind2sub(size(ccScores), maxIndex);
        hold on, plot(xpeak, ypeak, 'rd'), hold off;
        % If score too low, ignore this point
        if max_score < 0.85
          continue;
        end
41 Matlab demo (continued). Extract the peak location and save the disparity value; plot all points when done.

        % These are the coordinates of the peak in the search image
        ypeak = ypeak - W;
        xpeak = xpeak - W;
        %figure(4), hold on, plot(xpeak, ypeak, 'rd'), hold off;
        % These are the coordinates in the full sized right image
        xpeak = xpeak + (x-xTsize);
        ypeak = ypeak + (y-yTsize);
        figure(2), hold on, plot(xpeak, ypeak, 'rd'), hold off;
        % Save the point in a list, along with its disparity
        npts = npts+1;
        xpt(npts) = x;
        ypt(npts) = y;
        dpt(npts) = xpeak-x;  % disparity is xright-xleft
      end
    end
    figure, plot3(xpt, ypt, dpt, 'd');
42 Area-based matching. Window size tradeoff: larger windows are more unique; smaller windows are less likely to cross discontinuities. Similarity measures: CC (cross-correlation); SSD (sum of squared differences), which is equivalent to CC; SAD (sum of absolute differences).
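For two image patches stored as flat lists, the three similarity measures can be sketched as follows (illustrative Python, not from the lecture):

```python
import math

def ssd(p, q):  # sum of squared differences (lower is better)
    return sum((a - b) ** 2 for a, b in zip(p, q))

def sad(p, q):  # sum of absolute differences (lower is better)
    return sum(abs(a - b) for a, b in zip(p, q))

def ncc(p, q):  # normalized cross-correlation (higher is better, in [-1, 1])
    mp, mq = sum(p) / len(p), sum(q) / len(q)
    num = sum((a - mp) * (b - mq) for a, b in zip(p, q))
    den = math.sqrt(sum((a - mp) ** 2 for a in p) * sum((b - mq) ** 2 for b in q))
    return num / den

p = [1, 2, 3, 4]
print(ssd(p, p), sad(p, p), ncc(p, p))  # 0 0 1.0
```

The SSD-CC equivalence the slide mentions follows from expanding SSD(p,q) = Σp² + Σq² − 2Σpq: when the patch energies are fixed (e.g., by normalization), minimizing SSD is the same as maximizing the correlation term Σpq.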
43 Additional notes: Stereo vision website. Example commercial system.
More informationMotion Estimation. There are three main types (or applications) of motion estimation:
Members: D91922016 朱威達 R93922010 林聖凱 R93922044 謝俊瑋 Motion Estimation There are three main types (or applications) of motion estimation: Parametric motion (image alignment) The main idea of parametric motion
More information3D Model Acquisition by Tracking 2D Wireframes
3D Model Acquisition by Tracking 2D Wireframes M. Brown, T. Drummond and R. Cipolla {96mab twd20 cipolla}@eng.cam.ac.uk Department of Engineering University of Cambridge Cambridge CB2 1PZ, UK Abstract
More informationCSE 252B: Computer Vision II
CSE 252B: Computer Vision II Lecturer: Serge Belongie Scribes: Jeremy Pollock and Neil Alldrin LECTURE 14 Robust Feature Matching 14.1. Introduction Last lecture we learned how to find interest points
More information3D Reconstruction Of Occluded Objects From Multiple Views
3D Reconstruction Of Occluded Objects From Multiple Views Cong Qiaoben Stanford University Dai Shen Stanford University Kaidi Yan Stanford University Chenye Zhu Stanford University Abstract In this paper
More informationStatic Scene Reconstruction
GPU supported RealTime Scene Reconstruction with a Single Camera JanMichael Frahm, 3D Computer Vision group, University of North Carolina at Chapel Hill Static Scene Reconstruction 1 Capture on campus
More informationConversion of 2D Image into 3D and Face Recognition Based Attendance System
Conversion of 2D Image into 3D and Face Recognition Based Attendance System Warsha Kandlikar, Toradmal Savita Laxman, Deshmukh Sonali Jagannath Scientist C, Electronics Design and Technology, NIELIT Aurangabad,
More informationAugmented Reality II  Camera Calibration  Gudrun Klinker May 11, 2004
Augmented Reality II  Camera Calibration  Gudrun Klinker May, 24 Literature Richard Hartley and Andrew Zisserman, Multiple View Geometry in Computer Vision, Cambridge University Press, 2. (Section 5,
More informationMotion Analysis. Motion analysis. Now we will talk about. Differential Motion Analysis. Motion analysis. Difference Pictures
Now we will talk about Motion Analysis Motion analysis Motion analysis is dealing with three main groups of motionrelated problems: Motion detection Moving object detection and location. Derivation of
More informationFinal Exam Study Guide CSE/EE 486 Fall 2007
Final Exam Study Guide CSE/EE 486 Fall 2007 Lecture 2 Intensity Sufaces and Gradients Image visualized as surface. Terrain concepts. Gradient of functions in 1D and 2D Numerical derivatives. Taylor series.
More information!!!"#$%!&'()*&+,'%%./01"&', Tokihiko Akita. AISIN SEIKI Co., Ltd. Parking Space Detection with Motion Stereo Camera applying Viterbi algorithm
!!!"#$%!&'()*&+,'%%./01"&', Tokihiko Akita AISIN SEIKI Co., Ltd. Parking Space Detection with Motion Stereo Camera applying Viterbi algorithm! !"#$%&'(&)'*+%*+, . /"0123'4*5 6.&/",70&"$2'37+89&:&;7+%3#7&
More informationVisual Odometry. Features, Tracking, Essential Matrix, and RANSAC. Stephan Weiss Computer Vision Group NASAJPL / CalTech
Visual Odometry Features, Tracking, Essential Matrix, and RANSAC Stephan Weiss Computer Vision Group NASAJPL / CalTech Stephan.Weiss@ieee.org (c) 2013. Government sponsorship acknowledged. Outline The
More informationSubpixel accurate refinement of disparity maps using stereo correspondences
Subpixel accurate refinement of disparity maps using stereo correspondences Matthias Demant Lehrstuhl für Mustererkennung, Universität Freiburg Outline 1 Introduction and Overview 2 Refining the Cost Volume
More informationDetecting motion by means of 2D and 3D information
Detecting motion by means of 2D and 3D information Federico Tombari Stefano Mattoccia Luigi Di Stefano Fabio Tonelli Department of Electronics Computer Science and Systems (DEIS) Viale Risorgimento 2,
More informationVision 3D articielle Disparity maps, correlation
Vision 3D articielle Disparity maps, correlation Pascal Monasse monasse@imagine.enpc.fr IMAGINE, École des Ponts ParisTech http://imagine.enpc.fr/~monasse/stereo/ Contents Triangulation Epipolar rectication
More informationCS251 Spring 2014 Lecture 7
CS251 Spring 2014 Lecture 7 Stephanie R Taylor Feb 19, 2014 1 Moving on to 3D Today, we move on to 3D coordinates. But first, let s recap of what we did in 2D: 1. We represented a data point in 2D data
More informationMeasurement of 3D Foot Shape Deformation in Motion
Measurement of 3D Foot Shape Deformation in Motion Makoto Kimura Masaaki Mochimaru Takeo Kanade Digital Human Research Center National Institute of Advanced Industrial Science and Technology, Japan The
More informationProjector Calibration for Pattern Projection Systems
Projector Calibration for Pattern Projection Systems I. Din *1, H. Anwar 2, I. Syed 1, H. Zafar 3, L. Hasan 3 1 Department of Electronics Engineering, Incheon National University, Incheon, South Korea.
More informationStructure from Motion. Prof. Marco Marcon
Structure from Motion Prof. Marco Marcon Summingup 2 Stereo is the most powerful clue for determining the structure of a scene Another important clue is the relative motion between the scene and (mono)
More informationAdaptive Zoom Distance Measuring System of Camera Based on the Ranging of Binocular Vision
Adaptive Zoom Distance Measuring System of Camera Based on the Ranging of Binocular Vision Zhiyan Zhang 1, Wei Qian 1, Lei Pan 1 & Yanjun Li 1 1 University of Shanghai for Science and Technology, China
More informationImage processing and features
Image processing and features Gabriele Bleser gabriele.bleser@dfki.de Thanks to Harald Wuest, Folker Wientapper and Marc Pollefeys Introduction Previous lectures: geometry Pose estimation Epipolar geometry
More informationOptic Flow and Basics Towards HornSchunck 1
Optic Flow and Basics Towards HornSchunck 1 Lecture 7 See Section 4.1 and Beginning of 4.2 in Reinhard Klette: Concise Computer Vision SpringerVerlag, London, 2014 1 See last slide for copyright information.
More informationStereo Video Processing for Depth Map
Stereo Video Processing for Depth Map Harlan Hile and Colin Zheng University of Washington Abstract This paper describes the implementation of a stereo depth measurement algorithm in hardware on FieldProgrammable
More informationRecovering structure from a single view Pinhole perspective projection
EPIPOLAR GEOMETRY The slides are from several sources through James Hays (Brown); Silvio Savarese (U. of Michigan); Svetlana Lazebnik (U. Illinois); Bill Freeman and Antonio Torralba (MIT), including their
More informationMultiple Baseline Stereo
A. Coste CS6320 3D Computer Vision, School of Computing, University of Utah April 22, 2013 A. Coste Outline 1 2 Square Differences Other common metrics 3 Rectification 4 5 A. Coste Introduction The goal
More informationSL A Tordivel  Thor Vollset Stereo Vision and structured illumination creates dense 3D Images Page 1
Tordivel ASTORDIVEL 20002015 Scorpion Vision Software Scorpion Stinger are trademarks SL20100001A AS  Scorpion Visionand 8 and 3DMaMa Tordivel ASof Tordivel AS 20002010 Page 1 Stereo Vision and structured
More information