Colorado School of Mines. Computer Vision. Professor William Hoff, Dept of Electrical Engineering & Computer Science.

1 Professor William Hoff, Dept of Electrical Engineering & Computer Science 1
2 Stereo Vision 2
3 Inferring 3D from 2D. Model-based pose estimation: single (calibrated) camera; known model → can determine the pose of the model. Stereo vision: two (calibrated) cameras, with the relative pose between cameras also known; arbitrary scene → can determine the positions of points in the scene. 3
4 Stereo Vision. A way of getting depth (3D) information about a scene from two (or more) 2D images. Used by humans and animals, and now computers. Computational stereo vision: studied extensively in the last 25 years; difficult, and still being researched; some commercial systems available. Good reference: Scharstein and Szeliski, "A Taxonomy and Evaluation of Dense Two-Frame Stereo Correspondence Algorithms," International Journal of Computer Vision, 47(1-3), 2002; extensive website with evaluations of algorithms, test data, and code. 4
5 Example. Left image, right image, and reconstructed surface with image texture (Davi Geiger). 5
6 Example. Notice how different parts of the two images align, for different values of the horizontal shift (disparity).
Iright = im2double(imread('pentagonright.png'));
Ileft = im2double(imread('pentagonleft.png'));
% Disparity is d = xleft - xright; sweep a range of shifts
% and view the difference image at each one
for d = -20:20
    d
    Idiff = abs(Ileft(:, 21:end-20) - Iright(:, d+21:d+end-20));
    imshow(Idiff, []);
    pause
end
6
7 Stereo Displays. Stereograms were popular in the early 1900s. A special viewer was needed to display two different images to the left and right eyes. 7
8 Stereo Displays. 3D movies were popular in the 1950s. The left and right images were displayed as red and blue. 8
9 Stereo Displays. Current technology for 3D movies and computer displays is to use polarized glasses: the viewer wears eyeglasses which contain circular polarizers of opposite handedness. 9
10 Stereo Principle. If you know the intrinsic parameters of each camera and the relative pose between the cameras, and you measure an image point in the left camera and the corresponding point in the right camera, then: each image point corresponds to a ray emanating from that camera, and you can intersect the rays (triangulate) to find the absolute point position. 10
11 Stereo Geometry, Simple Case. Assume the image planes are coplanar, with only a translation in the X direction between the two coordinate frames; b is the baseline distance between the cameras. For a point P(XL, YL, ZL), similar triangles give xL/f = XL/Z and xR/f = XR/Z, with XR = XL - b. The disparity is d = xL - xR = f b / Z, so the depth is Z = f b / d. 11
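The simple-case relations above can be checked numerically. A minimal sketch in Python (the f, b, and Z values below are hypothetical, chosen only for illustration):

```python
def disparity(f, b, Z):
    """Disparity d = xL - xR = f*b/Z, with f in pixels and b, Z in the same length unit."""
    return f * b / Z

def depth(f, b, d):
    """Depth recovered from disparity: Z = f*b/d."""
    return f * b / d

f, b = 500.0, 0.1          # hypothetical: 500-pixel focal length, 10 cm baseline
Z = 2.0                    # a point 2 m away
d = disparity(f, b, Z)     # 25 pixels
print(d, depth(f, b, d))   # the depth round-trips through the disparity
```

Note the inverse relationship: doubling the depth halves the disparity, which is why stereo depth resolution degrades with distance.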
12 Goal: a complete disparity map Disparity is the difference in position of corresponding points between the left and right images 12
13 Reconstruction Error. Given the uncertainty in pixel projection of the point, what is the error in depth? Obviously the error in depth (ΔZ) will depend on: Z, b, f, and ΔxL, ΔxR. Let's find the expected value of the error, and the variance of the error. 13
14 Reconstruction Error. First, find the error in disparity Δd from the error of locating the feature in each image, ΔxL and ΔxR. Since d = xL - xR, taking the total derivative of each side gives Δd = ΔxL - ΔxR. Assuming ΔxL and ΔxR are independent and zero mean, E[Δd] = E[ΔxL] - E[ΔxR] = 0, and Var[Δd] = E[Δd²] = E[(ΔxL - ΔxR)²] = E[ΔxL²] - 2 E[ΔxL]E[ΔxR] + E[ΔxR²] = E[ΔxL²] + E[ΔxR²]. So σd² = σL² + σR². 14
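The variance result σd² = σL² + σR² can be sanity-checked with a quick Monte Carlo simulation; the sketch below uses hypothetical σL, σR values, drawing independent zero-mean localization errors and comparing the empirical variance of Δd = ΔxL - ΔxR with the predicted sum:

```python
import random
import statistics

random.seed(0)
sigma_L, sigma_R = 0.5, 0.8   # hypothetical localization std devs (pixels)
n = 200_000

# Delta_d = Delta_xL - Delta_xR, with both errors independent and zero mean
dd = [random.gauss(0, sigma_L) - random.gauss(0, sigma_R) for _ in range(n)]

print(statistics.pvariance(dd), sigma_L**2 + sigma_R**2)  # both close to 0.89
```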
15 Reconstruction Error. Next, we take the total derivative of Z = f b / d. If the only uncertainty is in the disparity, ΔZ = -(f b / d²) Δd. The mean error is μZ = E[ΔZ], and the variance of the error is σZ² = E[(ΔZ - μZ)²]. 15
16 Example A stereo vision system estimates the disparity of a point as d=10 pixels What is the depth (Z) of the point, if f = 500 pixels and b = 10 cm? What is the uncertainty (standard deviation) of the depth, if the standard deviation of locating a feature in each image = 1 pixel? How to handle uncertainty in both disparity and focal length? 16
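Working the numbers in this example (a sketch; it assumes b is kept in centimeters so Z comes out in centimeters, and uses the first-order error propagation from the previous slides):

```python
import math

f = 500.0       # focal length, pixels
b = 10.0        # baseline, cm
d = 10.0        # measured disparity, pixels
sigma_x = 1.0   # feature localization std dev in each image, pixels

Z = f * b / d                        # depth: 500 cm = 5 m
sigma_d = math.sqrt(2) * sigma_x     # sigma_d^2 = sigma_L^2 + sigma_R^2
sigma_Z = (f * b / d**2) * sigma_d   # first-order propagation through Z = f*b/d

print(Z, sigma_Z)   # 500.0 cm, about 70.7 cm
```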
17 Geometry, General Case. The cameras are not aligned, but we still know the relative pose. Assuming f = 1, we have pL = (xL, yL, 1) and pR = (xR, yR, 1). In principle, you can find P(XL, YL, ZL) by intersecting the rays OL pL and OR pR. However, they may not intersect; instead, find the midpoint of the segment perpendicular to the two rays. 17
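The midpoint construction can be sketched directly: each camera contributes a ray origin + s·direction (here assumed to already be expressed in a common frame), and the reconstructed point is the midpoint of the shortest segment between the two rays. The camera placement and point below are hypothetical:

```python
def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def add(a, b): return tuple(x + y for x, y in zip(a, b))
def scale(a, s): return tuple(x * s for x in a)
def dot(a, b): return sum(x * y for x, y in zip(a, b))

def midpoint_triangulate(o1, u, o2, v):
    """Midpoint of the segment perpendicular to rays o1 + s*u and o2 + t*v."""
    w0 = sub(o1, o2)
    a, b, c = dot(u, u), dot(u, v), dot(v, v)
    d, e = dot(u, w0), dot(v, w0)
    denom = a * c - b * b            # zero only for parallel rays
    s = (b * e - c * d) / denom
    t = (a * e - b * d) / denom
    p1 = add(o1, scale(u, s))        # closest point on ray 1
    p2 = add(o2, scale(v, t))        # closest point on ray 2
    return scale(add(p1, p2), 0.5)

# Hypothetical rig: cameras at x=0 and x=0.1 (baseline), f=1, no rotation,
# both viewing the point P = (0.05, 0.0, 2.0).
P = (0.05, 0.0, 2.0)
oL, oR = (0.0, 0.0, 0.0), (0.1, 0.0, 0.0)
uL = (P[0] / P[2], P[1] / P[2], 1.0)            # pL for f = 1
uR = ((P[0] - 0.1) / P[2], P[1] / P[2], 1.0)    # pR in the common frame
X = midpoint_triangulate(oL, uL, oR, uR)
print(X)   # ≈ (0.05, 0.0, 2.0)
```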
18 Triangulation (continued). The projection of P onto the left image is ZL pL = ML P, and the projection of P onto the right image is ZR pR = MR P, where ML = [I | 0] and MR = [r11 r12 r13 tx; r21 r22 r23 ty; r31 r32 r33 tz] encodes the rotation and translation from the left camera frame to the right. 18
19 Triangulation (continued). Note that pL and ML P are parallel, so their cross product should be zero; similarly for pR and MR P. Point P should satisfy both pL × (ML P) = 0 and pR × (MR P) = 0. This is a system of four equations; we can solve for the three unknowns (XL, YL, ZL) using least squares. The method also works for more than two cameras. 19
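A minimal sketch of this linear triangulation: with p = (x, y, 1), each cross-product condition yields two independent linear equations in (X, Y, Z); stacking all four and solving the normal equations gives the least-squares point. The projection matrices and point below are hypothetical (no rotation, pure X translation), chosen so the answer is known:

```python
def rows_for_camera(p, M):
    """Two linear equations in (X,Y,Z) from p x (M P) = 0, with p = (x, y)."""
    x, y = p
    eqs = []
    for coef, row in ((x, 0), (y, 1)):
        a = [coef * M[2][j] - M[row][j] for j in range(3)]
        b = M[row][3] - coef * M[2][3]
        eqs.append((a, b))
    return eqs

def solve3(A, b):
    """Solve a 3x3 linear system A x = b by Cramer's rule."""
    def det3(m):
        return (m[0][0]*(m[1][1]*m[2][2]-m[1][2]*m[2][1])
              - m[0][1]*(m[1][0]*m[2][2]-m[1][2]*m[2][0])
              + m[0][2]*(m[1][0]*m[2][1]-m[1][1]*m[2][0]))
    D = det3(A)
    return [det3([[b[r] if j == c else A[r][j] for j in range(3)]
                  for r in range(3)]) / D for c in range(3)]

def triangulate(pL, ML, pR, MR):
    eqs = rows_for_camera(pL, ML) + rows_for_camera(pR, MR)
    # Least squares via the normal equations: (At A) X = At b
    AtA = [[sum(a[i] * a[j] for a, _ in eqs) for j in range(3)] for i in range(3)]
    Atb = [sum(a[i] * b for a, b in eqs) for i in range(3)]
    return solve3(AtA, Atb)

# Hypothetical rig: left camera at the origin, right camera translated
# by 0.1 along X (no rotation), f = 1.
ML = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0]]
MR = [[1, 0, 0, -0.1], [0, 1, 0, 0], [0, 0, 1, 0]]
P = (0.05, 0.02, 2.0)
pL = (P[0] / P[2], P[1] / P[2])
pR = ((P[0] - 0.1) / P[2], P[1] / P[2])
X = triangulate(pL, ML, pR, MR)
print(X)   # ≈ [0.05, 0.02, 2.0]
```

With noise-free measurements the least-squares solution recovers the point exactly; with noisy measurements it returns the best compromise over all four equations.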
20 Stereo Process. Extract features from the left and right images. Match the left and right image features to get their disparity in position (the correspondence problem). Use the stereo disparity to compute depth (the reconstruction problem). The correspondence problem is the most difficult. 20
21 Characteristics of Human Stereo Vision. Matching features must appear similar in the left and right images. For example, we can't fuse a left stereo image with a negative of the right image. 21
22 Characteristics of Human Stereo Vision. We can only fuse objects within a limited range of depth around the fixation distance. Vergence eye movements are needed to fuse objects over a larger range of depths. 22
23 Panum's Fusional Area. Panum's fusional area is the range of depths for which binocular fusion can occur (without changing vergence angles). It's actually quite small; we are able to perceive a wide range of depths because we are changing vergence angles. 23
24 Characteristics of Human Stereo Vision. Cells in the visual cortex are selective for stereo disparity. Neurons that are selective for a larger disparity range have larger receptive fields. Zero disparity: at the fixation distance; near: in front of the point of fixation; far: behind the point of fixation. 24
25 Characteristics of Human Stereo Vision. We can fuse random-dot stereograms (Bela Julesz, 1971). This shows that the stereo system can function independently, that we can match simple features, and it highlights the ambiguity of the matching process. 25
26 Example Make a random dot stereogram L = rand(400,400); R = L; % Shift center portion by 50 pixels R(100:300, 150:350) = L(100:300, 100:300); % Fill in part that moved R(100:300, 100:149) = rand(201, 50); 26
27 Correspondence Problem. The most difficult part of stereo vision. For every point in the left image, there are many possible matches in the right image; locally, many points look similar, so matches are ambiguous. We can use the (known) geometry of the cameras to help limit the search for matches. The most important constraint is the epipolar constraint: we can limit the search for a match to a certain line in the other image. 27
28 Epipolar Constraint. With aligned cameras, the search for a corresponding point is 1D, along the corresponding row of the other camera. 28
29 Epipolar Constraint for Non-Aligned Cameras. If the cameras are not aligned, a 1D search can still be determined for the corresponding point: P1, C1, and C2 determine a plane that cuts image I2 in a line, and P2 will be on that line. 29
30 Rectification. If the relative camera pose is known, it is possible to rectify the images: effectively rotate both cameras so that they are looking perpendicular to the line joining the camera centers. This means that epipolar lines will be horizontal, and matching algorithms will be more efficient. Figures: original image pair overlaid with several epipolar lines; images rectified so that epipolar lines are horizontal and in vertical correspondence. From Richard Szeliski, Computer Vision: Algorithms and Applications, Springer, 2010. 30
31 Correspondence Problem. Even using the epipolar constraint, there are many possible matches. Worst-case scenarios: a white board (no features); a checkered wallpaper (ambiguous matches). The problem is under-constrained. To solve it, we need to impose assumptions about the real world: disparity limits, appearance, uniqueness, ordering, smoothness. 31
32 Disparity limits Assume that valid disparities are within certain limits Constrains search Why usually true? When is it violated? 32
33 Appearance Assume features should have similar appearance in the left and right images Why usually true? When is it violated? 33
34 Uniqueness. Assume that a point in the left image can have at most one match in the right image. Why usually true? When is it violated? 34
35 Ordering Assume features should be in the same left to right order in each image Why usually true? When is it violated? 35
36 Smoothness Assume objects have mostly smooth surfaces, meaning that disparities should vary smoothly (e.g., have a low second derivative) Why usually true? When is it violated? 36
37 Methods for Correspondence. Match points based on local similarity between images. Two general approaches. Correlation-based approaches: match image patches using correlation; assume only a translational difference between the two local patches (no rotation, or differences in appearance due to perspective); a good assumption if the patch covers a single surface, and the surface is far away compared to the baseline between the cameras; work well for scenes with lots of texture. Feature-based approaches: match edges, lines, or corners; give a sparse reconstruction; may be better for scenes with little texture. 37
38 Correlation Approach. Select a range of disparities to search. For each patch in the left image, compute a cross-correlation score for every point along the epipolar line. Find the maximum correlation score along that line. 38
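A minimal 1-D version of this search can be sketched as follows (synthetic scanlines, SSD scoring instead of cross-correlation, and all data hypothetical): for a patch around a left-image pixel, score every candidate shift along the right scanline and keep the best.

```python
def ssd(a, b):
    """Sum of squared differences between two equal-length patches."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def match_patch(left, right, x, w, d_max):
    """Best disparity d = xL - xR for the patch left[x-w : x+w+1]."""
    patch = left[x - w : x + w + 1]
    best_d, best_score = None, float("inf")
    for d in range(0, d_max + 1):   # candidate shifts to the left in the right image
        xr = x - d
        if xr - w < 0:
            break
        score = ssd(patch, right[xr - w : xr + w + 1])
        if score < best_score:
            best_d, best_score = d, score
    return best_d

# Synthetic scanlines: the right image is the left shifted by 3 pixels.
left = [0, 0, 0, 0, 1, 5, 9, 5, 1, 0, 0, 0, 0, 0]
true_d = 3
right = left[true_d:] + [0] * true_d
print(match_patch(left, right, x=6, w=2, d_max=5))   # 3
```

A full stereo matcher repeats this for every pixel (or a grid of pixels), as the Matlab demo on the following slides does in 2D with normalized cross-correlation.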
39 Matlab demo. Parameters: size of the template patch; horizontal disparity search window; vertical disparity search window.
% Simple stereo system using cross-correlation
clear all
close all
% Constants
W = 16;    % size of cross-correlation template is (2W+1) x (2W+1)
DH = 50;   % disparity horizontal search limit is -DH .. DH
DV = 8;    % disparity vertical search limit is -DV .. +DV
Ileft = imread('left.png');
Iright = imread('right.png');
figure(1), imshow(Ileft, []), title('Left image');
figure(2), imshow(Iright, []), title('Right image');
pause;
% Calculate disparity at a set of discrete points
xBorder = W+DH+1;
yBorder = W+DV+1;
xTsize = W+DH;   % horizontal search half-size; search region is 2*xTsize+1 wide
yTsize = W+DV;   % vertical search half-size; search region is 2*yTsize+1 high
Figures: left and right images; template patch from the left; search region in the right; correlation scores (peak in red). 39
40 Matlab demo (continued). Scan through the left image; extract a template patch from the left; do normalized cross-correlation to match to the right; accept a match if the score is greater than a threshold.
npts = 0;   % number of found disparity points
for x = xBorder : W : size(Ileft,2)-xBorder
    for y = yBorder : W : size(Ileft,1)-yBorder
        % Extract a template from the left image centered at x,y
        figure(1), hold on, plot(x, y, 'rd'), hold off;
        T = imcrop(Ileft, [x-W y-W 2*W 2*W]);
        %figure(3), imshow(T, []), title('Template');
        % Search for a match in the right image, in a region centered at x,y
        % of dimensions 2*xTsize+1 wide by 2*yTsize+1 high
        IR = imcrop(Iright, [x-xTsize y-yTsize 2*xTsize 2*yTsize]);
        %figure(4), imshow(IR, []), title('Search area');
        % The correlation score image is the size of IR, expanded by W in
        % each direction
        ccscores = normxcorr2(T, IR);
        %figure(5), imshow(ccscores, []), title('Correlation scores');
        % Get the location of the peak in the correlation score image
        [max_score, maxindex] = max(ccscores(:));
        [ypeak, xpeak] = ind2sub(size(ccscores), maxindex);
        hold on, plot(xpeak, ypeak, 'rd'), hold off;
        % If score too low, ignore this point
        if max_score < 0.85
            continue;
        end
40
41 Matlab demo (continued). Extract the peak location and save the disparity value; plot all points when done.
        % These are the coordinates of the peak in the search image
        ypeak = ypeak - W;
        xpeak = xpeak - W;
        %figure(4), hold on, plot(xpeak, ypeak, 'rd'), hold off;
        % These are the coordinates in the full-sized right image
        xpeak = xpeak + (x-xTsize);
        ypeak = ypeak + (y-yTsize);
        figure(2), hold on, plot(xpeak, ypeak, 'rd'), hold off;
        % Save the point in a list, along with its disparity
        npts = npts+1;
        xpt(npts) = x;
        ypt(npts) = y;
        dpt(npts) = xpeak-x;   % disparity is xright - xleft
    end
end
figure, plot3(xpt, ypt, dpt, 'd');
41
42 Area-based matching. Window size tradeoff: larger windows are more unique; smaller windows are less likely to cross discontinuities. Similarity measures: CC (cross-correlation); SSD (sum of squared differences), which is equivalent to CC for patches of fixed energy; SAD (sum of absolute differences). 42
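The three similarity measures, and the SSD/CC relationship, can be sketched directly (the patch values below are arbitrary). Expanding the square gives SSD(a,b) = |a|² + |b|² - 2·CC(a,b), so when the patch energies are fixed, minimizing SSD is the same as maximizing CC:

```python
import math

def cc(a, b):   return sum(x * y for x, y in zip(a, b))          # cross-correlation
def ssd(a, b):  return sum((x - y) ** 2 for x, y in zip(a, b))   # sum of squared differences
def sad(a, b):  return sum(abs(x - y) for x, y in zip(a, b))     # sum of absolute differences

a = [1.0, 4.0, 2.0, 3.0]
b = [2.0, 3.0, 2.0, 5.0]

# SSD = |a|^2 + |b|^2 - 2*CC
energy = sum(x * x for x in a) + sum(y * y for y in b)
assert math.isclose(ssd(a, b), energy - 2 * cc(a, b))

print(cc(a, b), ssd(a, b), sad(a, b))
```

SAD has no such algebraic link to CC, but it is cheaper to compute, which is why it is common in real-time hardware implementations.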
43 Additional notes Stereo vision website Example commercial system 43
More informationC P S C 314 S H A D E R S, O P E N G L, & J S RENDERING PIPELINE. Mikhail Bessmeltsev
C P S C 314 S H A D E R S, O P E N G L, & J S RENDERING PIPELINE UGRAD.CS.UBC.C A/~CS314 Mikhail Bessmeltsev 1 WHAT IS RENDERING? Generating image from a 3D scene 2 WHAT IS RENDERING? Generating image
More informationOcclusion Detection of Real Objects using Contour Based Stereo Matching
Occlusion Detection of Real Objects using Contour Based Stereo Matching Kenichi Hayashi, Hirokazu Kato, Shogo Nishida Graduate School of Engineering Science, Osaka University,13 Machikaneyamacho, Toyonaka,
More informationIEEE JOURNAL OF SELECTED TOPICS IN SIGNAL PROCESSING, VOL. 6, NO. 5, SEPTEMBER
IEEE JOURNAL OF SELECTED TOPICS IN SIGNAL PROCESSING, VOL. 6, NO. 5, SEPTEMBER 2012 411 Consistent StereoAssisted Absolute Phase Unwrapping Methods for Structured Light Systems Ricardo R. Garcia, Student
More informationLecture 19: Motion. Effect of window size 11/20/2007. Sources of error in correspondences. Review Problem set 3. Tuesday, Nov 20
Lecture 19: Motion Review Problem set 3 Dense stereo matching Sparse stereo matching Indexing scenes Tuesda, Nov 0 Effect of window size W = 3 W = 0 Want window large enough to have sufficient intensit
More informationIsosurface Rendering. CSC 7443: Scientific Information Visualization
Isosurface Rendering What is Isosurfacing? An isosurface is the 3D surface representing the locations of a constant scalar value within a volume A surface with the same scalar field value Isosurfaces form
More informationOnLine Computer Graphics Notes CLIPPING
OnLine Computer Graphics Notes CLIPPING Kenneth I. Joy Visualization and Graphics Research Group Department of Computer Science University of California, Davis 1 Overview The primary use of clipping in
More informationLight source estimation using feature points from specular highlights and cast shadows
Vol. 11(13), pp. 168177, 16 July, 2016 DOI: 10.5897/IJPS2015.4274 Article Number: F492B6D59616 ISSN 19921950 Copyright 2016 Author(s) retain the copyright of this article http://www.academicjournals.org/ijps
More informationComparison of Stereo Vision Techniques for cloudtop height retrieval
Comparison of Stereo Vision Techniques for cloudtop height retrieval Anna Anzalone *,, Francesco Isgrò^, Domenico Tegolo *INAFIstituto Istituto di Astrofisica e Fisica cosmica di Palermo, Italy ^Dipartimento
More informationCapture and Dewarping of Page Spreads with a Handheld Compact 3D Camera
Capture and Dewarping of Page Spreads with a Handheld Compact 3D Camera Michael P. Cutter University of California at Santa Cruz Baskin School of Engineering (Computer Engineering department) Santa Cruz,
More informationKinect Device. How the Kinect Works. Kinect Device. What the Kinect does 4/27/16. Subhransu Maji Slides credit: Derek Hoiem, University of Illinois
4/27/16 Kinect Device How the Kinect Works T2 Subhransu Maji Slides credit: Derek Hoiem, University of Illinois Photo framegrabbed from: http://www.blisteredthumbs.net/2010/11/dancecentralangryreview
More informationRobert Collins CSE486, Penn State. Lecture 09: Stereo Algorithms
Lecture 09: Stereo Algorithms left camera located at (0,0,0) Recall: Simple Stereo System Y y Image coords of point (X,Y,Z) Left Camera: x T x z (, ) y Z (, ) x (X,Y,Z) z X right camera located at (T x,0,0)
More informationIncremental Realtime Bundle Adjustment for Multicamera Systems with Points at Infinity
Incremental Realtime Bundle Adjustment for Multicamera Systems with Points at Infinity Johannes Schneider, Thomas Läbe, Wolfgang Förstner 1 Department of Photogrammetry Institute of Geodesy and Geoinformation
More informationStereo Graphics. Visual Rendering for VR. Passive stereoscopic projection. Active stereoscopic projection. VergenceAccommodation Conflict
Stereo Graphics Visual Rendering for VR HsuehChien Chen, Derek Juba, and Amitabh Varshney Our left and right eyes see two views, which are processed by our visual cortex to create a sense of depth Computer
More informationRaycasting. Chapter Raycasting foundations. When you look at an object, like the ball in the picture to the left, what do
Chapter 4 Raycasting 4. Raycasting foundations When you look at an, like the ball in the picture to the left, what do lamp you see? You do not actually see the ball itself. Instead, what you see is the
More informationCS 534: Computer Vision 3D Modelbased recognition
CS 534: Computer Vision 3D Modelbased recognition Spring 2004 Ahmed Elgammal Dept of Computer Science CS 534 3D Modelbased Vision  1 Outlines Geometric ModelBased Object Recognition Choosing features
More informationThe reconstruction problem. Reconstruction by triangulation
The reconstruction problem Both intrinsic and extr insic parameters are known: we can solve the reconstruction problem unambiguously by triangulation. Only the intrinsic parameters are known: we can solve
More informationPART IV: RS & the Kinect
Computer Vision on Rolling Shutter Cameras PART IV: RS & the Kinect PerErik Forssén, Erik Ringaby, Johan Hedborg Computer Vision Laboratory Dept. of Electrical Engineering Linköping University Tutorial
More informationDTU M.SC.  COURSE EXAM Revised Edition
Written test, 16 th of December 1999. Course name : 04250  Digital Image Analysis Aids allowed : All usual aids Weighting : All questions are equally weighed. Name :...................................................
More informationGeneral Principles of 3D Image Analysis
General Principles of 3D Image Analysis highlevel interpretations objects scene elements Extraction of 3D information from an image (sequence) is important for  vision in general (= scene reconstruction)
More informationOrthogonal Projection Matrices. Angel and Shreiner: Interactive Computer Graphics 7E AddisonWesley 2015
Orthogonal Projection Matrices 1 Objectives Derive the projection matrices used for standard orthogonal projections Introduce oblique projections Introduce projection normalization 2 Normalization Rather
More informationChapter 32 Light: Reflection and Refraction. Copyright 2009 Pearson Education, Inc.
Chapter 32 Light: Reflection and Refraction Units of Chapter 32 The Ray Model of Light Reflection; Image Formation by a Plane Mirror Formation of Images by Spherical Mirrors Index of Refraction Refraction:
More informationViewing COMPSCI 464. Image Credits: Encarta and
Viewing COMPSCI 464 Image Credits: Encarta and http://www.sackville.ednet.ns.ca/art/grade/drawing/perspective4.html Graphics Pipeline Graphics hardware employs a sequence of coordinate systems The location
More informationTHE DEVELOPMENT of active machine vision platforms
IEEE TRANSACTIONS ON ROBOTICS AND AUTOMATION, VOL. 14, NO. 5, OCTOBER 1998 755 FOVEA: A Foveated Vergent Active Stereo Vision System for Dynamic ThreeDimensional Scene Recovery William N. Klarquist, Member,
More informationThe Lens. Refraction and The Lens. Figure 1a:
Lenses are used in many different optical devices. They are found in telescopes, binoculars, cameras, camcorders and eyeglasses. Even your eye contains a lens that helps you see objects at different distances.
More informationScene Modeling for a Single View
on to 3D Scene Modeling for a Single View We want real 3D scene walkthroughs: rotation translation Can we do it from a single photograph? Reading: A. Criminisi, I. Reid and A. Zisserman, Single View Metrology
More informationHuman Body Recognition and Tracking: How the Kinect Works. Kinect RGBD Camera. What the Kinect Does. How Kinect Works: Overview
Human Body Recognition and Tracking: How the Kinect Works Kinect RGBD Camera Microsoft Kinect (Nov. 2010) Color video camera + laserprojected IR dot pattern + IR camera $120 (April 2012) Kinect 1.5 due
More information