Depth Cameras. Didier Stricker Oliver Wasenmüller Lecture 3D Computer Vision
1 Depth Cameras. Lecture 3D Computer Vision. Oliver Wasenmüller, Didier Stricker
2 Content: Motivation; Depth Measurement Techniques; Depth Image Enhancement; Applications: Kinect Fusion, Body Reconstruction; Outlook for next semester(s)
3 Motivation
4 What is a depth camera? A depth camera captures depth images. A depth image indicates in each pixel the distance from the camera to the observed object. (Figure: color image and color-encoded depth image; for a 3D point (x,y,z), the pixel (x,y) stores the depth z relative to the camera center.) In the following slides: how did we capture depth in the previous lectures?
5 Depth from Stereo Images image 1 image 2 Dense disparity map Parts of this slide are adapted from Derek Hoiem (University of Illinois), Steve Seitz (University of Washington) and Lana Lazebnik (University of Illinois) 5
6 Depth from Stereo Images. Goal: recover depth by finding the image coordinate x′ that corresponds to x. (Figure: 3D point X projects to x and x′ in two cameras with focal length f and optical centers C and C′, separated by baseline B.) Parts of this slide are adapted from Derek Hoiem (University of Illinois), Steve Seitz (University of Washington) and Lana Lazebnik (University of Illinois)
7 Stereo and the Epipolar constraint. Potential matches for x have to lie on the corresponding epipolar line l′. Potential matches for x′ have to lie on the corresponding epipolar line l. Parts of this slide are adapted from Derek Hoiem (University of Illinois), Steve Seitz (University of Washington) and Lana Lazebnik (University of Illinois)
8 Simplest Case: Parallel images Image planes of cameras are parallel to each other and to the baseline Camera centers are at same height Focal lengths are the same Then, epipolar lines fall along the horizontal scan lines of the images Parts of this slide are adapted from Derek Hoiem (University of Illinois), Steve Seitz (University of Washington) and Lana Lazebnik (University of Illinois) 8
9 Basic stereo matching algorithm. For each pixel in the first image: find the corresponding epipolar line in the right image; examine all pixels on the epipolar line and pick the best match; triangulate the matches to get depth information. Parts of this slide are adapted from Derek Hoiem (University of Illinois), Steve Seitz (University of Washington) and Lana Lazebnik (University of Illinois)
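The loop above can be sketched as a simple SAD block matcher for rectified images. This is a minimal illustration, not a production stereo method: the window size, the disparity range and the sum-of-absolute-differences cost are arbitrary choices here.

```python
import numpy as np

def block_match_row(left, right, row, x, half=3, max_disp=16):
    """Find the disparity for pixel (row, x) of the left image by scanning
    the same scanline in the right image and picking the SAD-best match.
    Assumes rectified images, so epipolar lines are horizontal scanlines."""
    patch = left[row - half:row + half + 1, x - half:x + half + 1].astype(np.float32)
    best_d, best_cost = 0, np.inf
    for d in range(0, max_disp + 1):
        xr = x - d                      # candidate position in the right image
        if xr - half < 0:
            break
        cand = right[row - half:row + half + 1, xr - half:xr + half + 1].astype(np.float32)
        cost = np.abs(patch - cand).sum()   # sum of absolute differences
        if cost < best_cost:
            best_cost, best_d = cost, d
    return best_d
```

Real systems replace the raw SAD cost with normalized cross-correlation or census transforms and aggregate costs over regions, but the scanline-search structure is the same.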
10 Depth from disparity. Using similar triangles with baseline B and focal length f: disparity = x − x′ = (B · f) / z, hence z = (B · f) / disparity. Disparity is inversely proportional to depth! Parts of this slide are adapted from Derek Hoiem (University of Illinois), Steve Seitz (University of Washington) and Lana Lazebnik (University of Illinois)
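In code, the inverse relationship between disparity and depth is a one-liner. The units below are assumptions for the sketch: focal length in pixels, baseline in meters, giving depth in meters.

```python
def depth_from_disparity(d_pixels, focal_px, baseline_m):
    """z = f * B / d: depth is inversely proportional to disparity."""
    return focal_px * baseline_m / d_pixels
```

For example, with a 500 px focal length and a 10 cm baseline, a 50 px disparity corresponds to 1 m depth.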
11 Depth Measurement Techniques 11
12 Depth Measurement Techniques Parts of this slide are adapted from Victor Castaneda and Nassir Navab (both University of Munich) 12
13 Depth Measurement Techniques: Laser Scanner, Structured Light Projection, Time of Flight (ToF)
14 Structured Light Projection. Parts of this slide are adapted from Derek Hoiem (University of Illinois)
15 Structured Light Projection (see also lectures about structured light) Surface Projector Sensor Parts of this slide are adapted from Derek Hoiem (University of Illinois) 15
16 Structured Light Projection Projector Camera Parts of this slide are adapted from Derek Hoiem (University of Illinois) 16
17 Example: Book vs. No Book. Lecture 3D Computer Vision
18 Example: Book vs. No Book. Lecture 3D Computer Vision
19 Region-growing Random Dot Matching
1. Detect dots ("speckles") and label them unknown.
2. Randomly select a region anchor, a dot with unknown depth.
   a. Windowed search via normalized cross-correlation along the scanline. Check that the best match score is greater than a threshold; if not, mark as invalid and go to 2.
   b. Region growing:
      1. Neighboring pixels are added to a queue.
      2. For each pixel in the queue, initialize by the anchor's shift; then search a small local neighborhood; if matched, add its neighbors to the queue.
      3. Stop when no pixels are left in the queue.
3. Stop when all dots have known depth or are marked invalid.
Parts of this slide are adapted from Derek Hoiem (University of Illinois)
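Step 2b, the queue-based growing, might look like the following sketch. `match_cost` is a hypothetical callback standing in for the windowed correlation score (lower is better); it is not part of any published implementation, and the search radius and threshold are arbitrary.

```python
import numpy as np
from collections import deque

def grow_region(disp, anchor, anchor_d, match_cost, search=1, thresh=10.0):
    """Propagate disparity outward from an anchor dot (step 2b).
    `disp` holds np.nan for pixels with unknown disparity;
    `match_cost(y, x, d)` scores a candidate disparity d at pixel (y, x)."""
    h, w = disp.shape
    disp[anchor] = anchor_d
    queue = deque([anchor])
    while queue:                        # stop when no pixels are left
        y, x = queue.popleft()
        d0 = int(disp[y, x])
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and np.isnan(disp[ny, nx]):
                # initialize with the neighbor's shift, search a small range
                cands = [(match_cost(ny, nx, d), d)
                         for d in range(d0 - search, d0 + search + 1)]
                cost, d = min(cands)
                if cost < thresh:       # accept only confident matches
                    disp[ny, nx] = d
                    queue.append((ny, nx))
    return disp
```

The outer loop of the algorithm (selecting new anchors until all dots are known or invalid) would call this repeatedly.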
20 Projected IR vs. Natural Light Stereo What are the advantages of IR? Works in low light conditions Does not rely on having textured objects Not confused by repeated scene textures Can tailor algorithm to produced pattern What are advantages of natural light? Works outside, anywhere with sufficient light Uses less energy Resolution limited only by sensors, not projector Difficulties with both Very dark surfaces may not reflect enough light Specular reflection in mirrors or metal causes trouble Parts of this slide are adapted from Derek Hoiem (University of Illinois) 20
21 Example: The Kinect Sensor (v1). Microsoft Kinect (v1) was released in 2010 as a new kind of controller for the Xbox 360. Lecture 3D Computer Vision Parts of this slide are adapted from Rob Miles (University of Hull)
22 Example: The Kinect Sensor. The Kinect is able to capture depth and color images. For this it contains two cameras and an infrared projector. It also has four microphones. Lecture 3D Computer Vision Parts of this slide are adapted from Rob Miles (University of Hull)
23 Example: The Kinect Sensor The Kinect sensor contains a high quality video camera which can provide up to 1280x1024 resolution at 30 frames a second. Lecture 3D Computer Vision Parts of this slide are adapted from Rob Miles (University of Hull)
24 Example: The Kinect Sensor IR Projector IR Camera The Kinect depth sensor uses an IR projector and an IR camera to measure the depth of objects in the scene in front of the sensor. Lecture 3D Computer Vision Parts of this slide are adapted from Rob Miles (University of Hull)
25 Time of Flight (ToF). Time-of-Flight (ToF) imaging refers to the process of measuring the depth of a scene by quantifying the changes that an emitted light signal encounters when it bounces back from objects in the scene. Two common principles: Pulsed Modulation and Continuous Wave Modulation.
26 Time of Flight (ToF) Pulsed Modulation. Measure the distance to a 3D object by measuring the absolute time a light pulse needs to travel from a source into the 3D scene and back, after reflection. The speed of light is constant and known, c ≈ 3 × 10^8 m/s. Parts of this slide are adapted from Victor Castaneda and Nassir Navab (both University of Munich)
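A minimal sketch of the pulsed-modulation principle: given the measured round-trip time, the distance is half the path travelled at the speed of light.

```python
C = 299_792_458.0  # speed of light in m/s

def pulse_distance(round_trip_s):
    """Distance from a pulsed ToF round-trip time: the light travels
    to the object and back, so the one-way distance is half the path."""
    return C * round_trip_s / 2.0
```

This also shows why high-accuracy timing is needed: a 20 ns round trip already corresponds to roughly 3 m, so centimeter accuracy requires timing at tens of picoseconds.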
27 Time of Flight (ToF) Pulsed Modulation. Advantages: Direct measurement of time-of-flight; high-energy light pulses limit the influence of background illumination; illumination and observation directions are collinear. Disadvantages: High-accuracy time measurement required; measurement of the light pulse return is inexact, due to light scattering; it is difficult to generate short light pulses with fast rise and fall times; usable light sources (e.g. lasers) suffer from low repetition rates for pulses. Parts of this slide are adapted from Victor Castaneda and Nassir Navab (both University of Munich)
28 Time of Flight (ToF) Continuous Wave Modulation. Microsoft Kinect v2 works with this principle. Continuous light waves instead of short light pulses; modulation in terms of frequency of sinusoidal waves. The detected wave after reflection has a shifted phase; the phase shift is proportional to the distance from the reflecting surface. Parts of this slide are adapted from Victor Castaneda and Nassir Navab (both University of Munich)
29 Time of Flight (ToF) Continuous Wave Modulation. Microsoft Kinect v2 works with this principle. Retrieve the phase shift by demodulation of the received signal; demodulation by cross-correlation of the received signal with the emitted signal. Emitted sinusoidal signal: g(t) = cos(2π f_m t). Received signal after reflection from a 3D surface: h(t) = b + a · cos(2π f_m t + φ). Cross-correlation of both signals: c(τ) = (a/2) · cos(2π f_m τ + φ) + b. Parts of this slide are adapted from Victor Castaneda and Nassir Navab (both University of Munich)
30 Time of Flight (ToF) Continuous Wave Modulation. Microsoft Kinect v2 works with this principle. The cross-correlation function simplifies to c(τ) = (a/2) · cos(2π f_m τ + φ) + b. Sample it at four sequential instants with phase offsets of 0°, 90°, 180° and 270° (samples c_0 … c_3) and directly obtain the sought parameters: φ = atan2(c_3 − c_1, c_0 − c_2), a = (1/2) · √((c_3 − c_1)² + (c_0 − c_2)²), b = (c_0 + c_1 + c_2 + c_3)/4; the depth follows as d = c · φ / (4π f_m). Parts of this slide are adapted from Victor Castaneda and Nassir Navab (both University of Munich)
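The four-sample demodulation can be sketched as follows. This is the common textbook "four-bucket" formulation, with a0..a3 assumed to be the correlation samples at 0°, 90°, 180° and 270° phase offsets; the actual Kinect v2 pipeline differs in detail (multiple modulation frequencies, phase unwrapping).

```python
import math

C = 299_792_458.0  # speed of light in m/s

def cw_tof_depth(a0, a1, a2, a3, f_mod):
    """Recover phase shift, amplitude and distance from four
    correlation samples of a continuous-wave ToF sensor."""
    phase = math.atan2(a3 - a1, a0 - a2) % (2 * math.pi)
    amplitude = math.hypot(a3 - a1, a0 - a2) / 2.0
    # unambiguous only within one wavelength, i.e. up to c / (2 * f_mod)
    distance = C * phase / (4 * math.pi * f_mod)
    return distance, amplitude
```

Note the ambiguity interval: at a 30 MHz modulation frequency the phase wraps around every c/(2 f_mod) ≈ 5 m, which is why real sensors combine several modulation frequencies.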
31 Time of Flight (ToF) Continuous Wave Modulation. Microsoft Kinect v2 works with this principle. Advantages: A variety of light sources is available, as no short/strong pulses are required; applicable to different modulation techniques (other than frequency); simultaneous range and amplitude images. Disadvantages: In practice, integration over time is required to reduce noise; frame rates are limited by the integration time; motion blur is caused by long integration times. Parts of this slide are adapted from Victor Castaneda and Nassir Navab (both University of Munich)
32 Depth Quality, e.g. Kinect v1. Main problems: resolution and noise.
33 Kinect v1 quality
34 Depth Image Enhancement 34
35 Depth image enhancement - Overview. Single Depth Image: Patch Based. Multiple Depth Images: Static, Dynamic. Additional Color Information: Joint Filtering, Cost Volume, Markov Random Field.
36 36 Single Depth-Image: Patch-Based Learning: Store normalized (artificial) high-resolution patches Procedure (Aodha et al.): Divide low-resolution image into patches For each patch: Find nearest neighbor candidates Solve resulting labelling-problem via MRF De-normalize Aodha et al., [6] Find best possible match for each patch Minimize difference on the border of neighboring patches Oisin Mac Aodha, Neill D. F. Campbell, Arun Nair, and Gabriel J. Brostow. Patch based synthesis for single depth image super-resolution. In Computer Vision ECCV 2012, volume 7574 Lecture Notes in Computer Science, pages Springer Berlin Heidelberg, 2012.
37 Multiple Depth-Images: Static Methods Only small displacement w.r.t. starting position Alignment possible with simple methods Example for laser-measurement (Kil et al.): Iterate after initial registration: Realignment via ICP Local weighted average Yong Joo Kil, B. Mederos, and N. Amenta. Laser scanner super-resolution. In Proceedings of the 3rd Eurographics / IEEE VGTC Conference on Point-Based Graphics, SPBG'06, pages Eurographics Association,
38 Multiple Depth-Images: LidarBoost Schuon et al.: Alignment via optical flow Again formulation as optimization problem Energy term for data corresponds to MRF Smoothness term: multi-scale gradient approximation S. Schuon, C. Theobalt, J. Davis, and S. Thrun. Lidarboost: Depth superresolution for tof 3d shape scanning. In Computer Vision and Pattern Recognition, CVPR IEEE Conference on, pages ,
39 Multiple Depth-Images: Dynamic Methods. Yan Cui et al.: Initial alignment; LidarBoost for chunks; probabilistic scan alignment of the resulting point clouds. Yan Cui, S. Schuon, D. Chan, S. Thrun, and C. Theobalt. 3d shape scanning with a time-of-flight camera. In Computer Vision and Pattern Recognition (CVPR), 2010 IEEE Conference on, pages ,
40 Multiple Depth-Images: Dynamic Methods. Cui et al.: Probabilistic scan alignment of the resulting point clouds: model rotation, translation, and the systematic error of ToF sensors (shift along the projection ray, radially symmetric). Choose a reference cloud; for all other clouds construct a Gaussian mixture model. Maximum-likelihood estimation in an EM-like procedure. Yan Cui, S. Schuon, D. Chan, S. Thrun, and C. Theobalt. 3d shape scanning with a time-of-flight camera. In Computer Vision and Pattern Recognition (CVPR), 2010 IEEE Conference on, pages ,
41 Additional RGB-Image: Overview. Obvious choice for RGBD sensors; a good complement for noisy, low-resolution depth data. Basic assumption: image consistency. (Figure: RGB, RGB Sobel edges, and depth; images from Scharstein et al.) D. Scharstein and R. Szeliski. A taxonomy and evaluation of dense two-frame stereo correspondence algorithms. IJCV, 7-42,
42 Joint Filtering: Overview. h_i = Σ_{j ∈ N_i} W_ij(I, L) · l_j : each enhanced depth value h_i is a weighted average of low-resolution depth values l_j over a neighborhood N_i, with weights W_ij derived from the guidance image I (and the depth L). J. Kopf, M. F. Cohen, D. Lischinski, and M. Uyttendaele. Joint bilateral upsampling. In ACM SIGGRAPH 2007 Papers, SIGGRAPH '07. ACM, D. Scharstein and R. Szeliski. A taxonomy and evaluation of dense two-frame stereo correspondence algorithms. IJCV, 7-42,
43 Joint Bilateral Filter and Extensions. Bilateral filter: like a Gaussian, with an additional factor for the intensity difference. Joint bilateral filter: intensity difference provided by the RGB image (Kopf et al.). Fast approximation via decomposition into one linear filter per depth value (Yang et al.). Prevent texture copying by multilateral filtering: Combined Bilateral Filter (CBF) (Wasenmüller et al.): switch between JBF and standard bilateral filter; confidence map based on depth gradients (Garcia et al.). J. Kopf, M. F. Cohen, D. Lischinski, and M. Uyttendaele. Joint bilateral upsampling. In ACM SIGGRAPH 2007 Papers, SIGGRAPH '07. ACM, Qingxiong Yang, Kar-Han Tan, and N. Ahuja. Real-time O(1) bilateral filtering. In Computer Vision and Pattern Recognition, CVPR IEEE Conference on, pages , F. Garcia, B. Mirbach, B. Ottersten, F. Grandidier, and A. Cuesta. Pixel weighted average strategy for depth sensor data fusion. In Image Processing (ICIP), th IEEE International Conference on, pages , O. Wasenmüller, G. Bleser, and D. Stricker. Combined Bilateral Filter for Enhanced Real-Time Upsampling of Depth Images. International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISAPP),
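A dense, single-scale joint bilateral filter, the core operation behind these upsampling schemes, can be sketched as below. This is a brute-force illustration (real implementations use the fast approximations cited above), and the sigma values and radius are arbitrary choices.

```python
import numpy as np

def joint_bilateral(depth, guide, sigma_s=2.0, sigma_r=10.0, radius=2):
    """Filter a depth image with spatial Gaussian weights and a range term
    computed on the guidance (intensity) image, so depth edges follow
    intensity edges. Minimal dense sketch, single scale."""
    h, w = depth.shape
    out = np.zeros_like(depth, dtype=np.float64)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(ys**2 + xs**2) / (2 * sigma_s**2))   # spatial kernel
    dp = np.pad(depth.astype(np.float64), radius, mode='edge')
    gp = np.pad(guide.astype(np.float64), radius, mode='edge')
    for y in range(h):
        for x in range(w):
            dwin = dp[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            gwin = gp[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            # range kernel from the guidance image, not from the depth
            rng = np.exp(-((gwin - gp[y + radius, x + radius])**2)
                         / (2 * sigma_r**2))
            wgt = spatial * rng
            out[y, x] = (wgt * dwin).sum() / wgt.sum()
    return out
```

Because the range weights come from the guidance image, strong RGB texture on a flat surface can leak into the depth ("texture copying"), which is exactly what the CBF and confidence-map extensions above try to prevent.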
44 Exemplary results of CBF (Wasenmüller et al.) Color Input Depth Input CBF Output O. Wasenmüller, G. Bleser, and D. Stricker. Combined Bilateral Filter for Enhanced Real-Time Upsampling of Depth Images. International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISAPP),
45 Cost Volume. Yang et al.: Initialize H_0 by nearest-neighbor upsampling. Cost volume: one cost image per depth hypothesis, using a truncated squared difference. Filter each cost image with a Joint Bilateral Filter (JBF). Subpixel refinement: minimize a quadratic interpolation polynomial over depth triples. (Diagram: cost volume C_k and color image I feed the bilateral filter and subpixel refinement, producing H_k; k is incremented each iteration.) Qingxiong Yang, Ruigang Yang, J. Davis, and D. Nister. Spatial-depth super resolution for range images. In Computer Vision and Pattern Recognition, CVPR '07. IEEE Conference on, pages 1-8,
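Dropping the JBF filtering and subpixel steps, the cost-volume core reduces to the sketch below: a truncated squared-difference cost slice per depth hypothesis, then a per-pixel winner-take-all. The hypothesis set and truncation value are arbitrary here; this is an illustration of the data structure, not Yang et al.'s full method.

```python
import numpy as np

def refine_depth(depth_init, hypotheses, trunc=25.0):
    """Build a cost volume (one cost image per depth hypothesis, truncated
    squared difference against the initial depth) and take the per-pixel
    minimum-cost hypothesis."""
    volume = np.stack([np.minimum((depth_init - d) ** 2, trunc)
                       for d in hypotheses])      # shape: (K, H, W)
    best = np.argmin(volume, axis=0)              # winner-take-all
    return np.asarray(hypotheses)[best]
```

In the full pipeline each cost slice would be smoothed by a joint bilateral filter before the argmin, so edges of the winning hypothesis align with color edges.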
46 Cost Volume Qingxiong Yang, Ruigang Yang, J. Davis, and D. Nister. Spatial-depth super resolution for range images. In Computer Vision and Pattern Recognition, CVPR '07. IEEE Conference on, pages 1-8,
47 Markov Random Fields: Overview [Lo et al.]. Hammersley-Clifford: minimize the sum of data terms and weighted smoothness terms: E(H) = Σ_i U(L_i, H_i) + λ · Σ_{i,j} w_ij · V(H_i, H_j). Kai-Han Lo, Kai-Lung Hua, and Y.-C.F. Wang. Depth map super-resolution via markov random fields without texture-copying artifacts. In Acoustics, Speech and Signal Processing (ICASSP), 2013 IEEE International Conference on, pages ,
48 Markov Random Field: Example. Crucial points: optimization method and energy function. The energy function is often a norm inside a Gaussian kernel. Euclidean: not robust (Diebel et al.). Truncated absolute difference is better (Lu et al.). Example: use a more complex distance involving the structure tensor: RGB LR HR [Park et al.]. Kai-Han Lo, Kai-Lung Hua, and Y.-C.F. Wang. Depth map super-resolution via markov random fields without texture-copying artifacts. In Acoustics, Speech and Signal Processing (ICASSP), 2013 IEEE International Conference on, pages , J. Diebel and S. Thrun. An application of markov random fields to range sensing. In Proceedings of Conference on Neural Information Processing Systems (NIPS). MIT Press, Jiangbo Lu, Dongbo Min, R.S. Pahwa, and M.N. Do. A revisit to mrf-based depth map super-resolution and enhancement. In Acoustics, Speech and Signal Processing (ICASSP), 2011 IEEE International Conference on, pages , Jaesik Park, Hyeongwoo Kim, Yu-Wing Tai, M.S. Brown, and Inso Kweon. High quality depth map upsampling for 3d-tof cameras. In Computer Vision (ICCV), 2011 IEEE International Conference on, pages ,
49 Additional RGB-Image: Conclusion. Joint bilateral filter: real-time is possible, but it tends to over-smooth. MRF and Cost-Volume: both can provide high quality; MRF is probably slower, but better results are possible. All methods: high dependency on the particular choice of methods, parameters and data-set.
50 Evaluation.
Single Depth-Image (Learning). Advantages: (after training) a single depth image is enough. Disadvantages: training requires good samples; learning representations that work well in many cases is hard.
Multiple Depth-Images. Advantages: random noise is removed effectively; dynamic methods incorporate the systematic error. Disadvantages: small structures vanish; only data of the same type can be fused.
Additional RGB-Image. Advantages: random noise is removed effectively; complements the characteristics of depth images; good preservation of detail if image consistency holds; good results for dynamic objects/scenes. Disadvantages: registration of RGB and depth is required.
51 Applications: Kinect Fusion, Body Reconstruction
52 Kinect Fusion - Overview 52
53 Challenges Tracking camera precisely Fusing and de-noising measurements (depth estimates) Avoiding drift Real-Time Low-Cost hardware Parts of this slide are adapted from Richard A. Newcombe (Imperial College London) and Boaz Petersil (Israel Institute of Technology) 53
54 Proposed Solution. Fast optimization for tracking, enabled by the high frame rate. Global framework for fusing data. Interleaving tracking & mapping. Using Kinect to get depth data (low cost). Using the GPU to get real-time performance (low cost). Parts of this slide are adapted from Richard A. Newcombe (Imperial College London) and Boaz Petersil (Israel Institute of Technology)
55 Method Parts of this slide are adapted from Richard A. Newcombe (Imperial College London) and Boaz Petersil (Israel Institute of Technology) 55
56 Tracking. Finding the camera position is the same as fitting the depth map of a frame onto the model. Tracking Mapping Parts of this slide are adapted from Richard A. Newcombe (Imperial College London) and Boaz Petersil (Israel Institute of Technology)
57 Tracking ICP algorithm. ICP = iterative closest point (already explained in the Structured Light lecture). Goal: fit two 3D point sets. Problem: what are the correspondences? KinectFusion's chosen solution: 1) Start with T_0 2) Project the model onto the camera 3) Correspondences are points with the same coordinates 4) Find the new T with least squares 5) Apply T, and repeat 2-5 until convergence. Tracking Mapping Parts of this slide are adapted from Richard A. Newcombe (Imperial College London) and Boaz Petersil (Israel Institute of Technology)
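Step 4, the least-squares pose from a fixed set of correspondences, has a closed-form SVD solution (Kabsch/Procrustes). The sketch below assumes the correspondences are already paired row by row; KinectFusion itself uses a point-to-plane variant, so this point-to-point version is an illustrative stand-in.

```python
import numpy as np

def best_rigid_transform(P, Q):
    """Least-squares rigid transform (R, t) such that Q_i ~= R @ P_i + t,
    via the SVD of the cross-covariance (Kabsch algorithm).
    P, Q: (N, 3) arrays of corresponding points."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)           # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cq - R @ cp
    return R, t
```

Inside ICP this solve is repeated: re-project, re-associate, re-solve, until the pose stops changing.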
58 Tracking ICP algorithm Tracking Mapping Assumption: frame and model are roughly aligned. True because of high frame rate Parts of this slide are adapted from Richard A. Newcombe (Imperial College London) and Boaz Petersil (Israel Institute of Technology) 58
59 Mapping. Mapping is fusing depth maps when the camera poses are known. Problems: measurements are noisy; depth maps have holes. Solution: use an implicit surface representation; fusing combines the estimates from all relevant frames. Tracking Mapping Parts of this slide are adapted from Richard A. Newcombe (Imperial College London) and Boaz Petersil (Israel Institute of Technology)
60 Mapping surface representation Surface is represented implicitly using Truncated Signed Distance Function (TSDF) Voxel grid Tracking Mapping Numbers in cells measure voxel distance to surface Parts of this slide are adapted from Richard A. Newcombe (Imperial College London) and Boaz Petersil (Israel Institute of Technology) 60
61 Mapping Tracking Mapping Parts of this slide are adapted from Richard A. Newcombe (Imperial College London) and Boaz Petersil (Israel Institute of Technology) 61
62 Mapping Tracking Mapping d = [pixel depth] − [distance from sensor to voxel] Parts of this slide are adapted from Richard A. Newcombe (Imperial College London) and Boaz Petersil (Israel Institute of Technology)
63 Mapping Tracking Mapping Parts of this slide are adapted from Richard A. Newcombe (Imperial College London) and Boaz Petersil (Israel Institute of Technology) 63
64 Mapping Tracking Mapping Parts of this slide are adapted from Richard A. Newcombe (Imperial College London) and Boaz Petersil (Israel Institute of Technology) 64
65 Mapping Tracking Mapping Parts of this slide are adapted from Richard A. Newcombe (Imperial College London) and Boaz Petersil (Israel Institute of Technology) 65
66 Mapping. Each voxel also has a weight W, proportional to the grazing angle. The voxel distance is the weighted average of all measurements; for two sensors: F(x) = (w_1(x) · d_1(x) + w_2(x) · d_2(x)) / (w_1(x) + w_2(x)), with W(x) = w_1(x) + w_2(x). Tracking Mapping Parts of this slide are adapted from Richard A. Newcombe (Imperial College London) and Boaz Petersil (Israel Institute of Technology)
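The weighted average above is maintained incrementally: each new measurement updates a voxel's running TSDF value and weight in two lines. A minimal sketch with hypothetical variable names:

```python
def tsdf_update(D, W, d_new, w_new):
    """Fuse a new truncated signed-distance measurement (d_new, w_new)
    into a voxel's running weighted average (D, W)."""
    D_out = (W * D + w_new * d_new) / (W + w_new)
    return D_out, W + w_new
```

Starting from W = 0, the first measurement is taken as-is, and later measurements are blended in proportion to their weights, which is what suppresses the per-frame noise.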
67 Handling drift. Drift would occur if tracking were done from frame to frame; thus, tracking is done against the built model. Tracking Mapping Parts of this slide are adapted from Richard A. Newcombe (Imperial College London) and Boaz Petersil (Israel Institute of Technology)
68 Pros & Cons. Pros: Nice results; real-time performance (30 Hz); dense model; no drift with local optimization; elegant solution. Cons: The 3D grid cannot be trivially up-scaled. Parts of this slide are adapted from Richard A. Newcombe (Imperial College London) and Boaz Petersil (Israel Institute of Technology)
69 Limitations. Doesn't work for large areas (voxel grid). Doesn't work far away from objects (active ranging). Doesn't work well outdoors (IR). Requires a powerful graphics card. Uses lots of battery (active ranging). Parts of this slide are adapted from Richard A. Newcombe (Imperial College London) and Boaz Petersil (Israel Institute of Technology)
70 Application: Body Reconstruction 70
71 What comes next? Overview of further teaching activities of the Department Augmented Vision
72 Other courses. SS 2015: Lecture Computer Vision: Object and People Tracking, 4 CP, 2+1; Seminar 3D Computer Vision & Augmented Reality, 4 CP; Project 3D Computer Vision & Augmented Reality, 8 CP. WS 2015/16: Lecture 3D Computer Vision, 4 CP, 2+1; Seminar Computer Vision: Object and People Tracking, 4 CP; Project Computer Vision: Object and People Tracking, 8 CP. Individual topics and supervisors. We also offer student jobs and master theses in various areas (computer vision, 3D reconstruction, sensor fusion, HCI, ...). Just ask us!
More informationMultiple View Geometry
Multiple View Geometry Martin Quinn with a lot of slides stolen from Steve Seitz and Jianbo Shi 15-463: Computational Photography Alexei Efros, CMU, Fall 2007 Our Goal The Plenoptic Function P(θ,φ,λ,t,V
More information3D Computer Vision 1
3D Computer Vision 1 Multiview Stereo Multiview Stereo Multiview Stereo https://www.youtube.com/watch?v=ugkb7itpnae Shape from silhouette Shape from silhouette Shape from silhouette Shape from silhouette
More informationLidarBoost: Depth Superresolution for ToF 3D Shape Scanning
LidarBoost: Depth Superresolution for ToF 3D Shape Scanning Sebastian Schuon Stanford University schuon@cs.stanford.edu Christian Theobalt Stanford University theobalt@cs.stanford.edu James Davis UC Santa
More informationData-driven Depth Inference from a Single Still Image
Data-driven Depth Inference from a Single Still Image Kyunghee Kim Computer Science Department Stanford University kyunghee.kim@stanford.edu Abstract Given an indoor image, how to recover its depth information
More informationMotion Estimation. There are three main types (or applications) of motion estimation:
Members: D91922016 朱威達 R93922010 林聖凱 R93922044 謝俊瑋 Motion Estimation There are three main types (or applications) of motion estimation: Parametric motion (image alignment) The main idea of parametric motion
More informationThere are many cues in monocular vision which suggests that vision in stereo starts very early from two similar 2D images. Lets see a few...
STEREO VISION The slides are from several sources through James Hays (Brown); Srinivasa Narasimhan (CMU); Silvio Savarese (U. of Michigan); Bill Freeman and Antonio Torralba (MIT), including their own
More informationLaser sensors. Transmitter. Receiver. Basilio Bona ROBOTICA 03CFIOR
Mobile & Service Robotics Sensors for Robotics 3 Laser sensors Rays are transmitted and received coaxially The target is illuminated by collimated rays The receiver measures the time of flight (back and
More information10/5/09 1. d = 2. Range Sensors (time of flight) (2) Ultrasonic Sensor (time of flight, sound) (1) Ultrasonic Sensor (time of flight, sound) (2) 4.1.
Range Sensors (time of flight) (1) Range Sensors (time of flight) (2) arge range distance measurement -> called range sensors Range information: key element for localization and environment modeling Ultrasonic
More informationRange Sensors (time of flight) (1)
Range Sensors (time of flight) (1) Large range distance measurement -> called range sensors Range information: key element for localization and environment modeling Ultrasonic sensors, infra-red sensors
More informationOutline. 1 Why we re interested in Real-Time tracking and mapping. 3 Kinect Fusion System Overview. 4 Real-time Surface Mapping
Outline CSE 576 KinectFusion: Real-Time Dense Surface Mapping and Tracking PhD. work from Imperial College, London Microsoft Research, Cambridge May 6, 2013 1 Why we re interested in Real-Time tracking
More informationTime-of-Flight Imaging!
Time-of-Flight Imaging Loren Schwarz, Nassir Navab 3D Computer Vision II Winter Term 2010 21.12.2010 Lecture Outline 1. Introduction and Motivation 2. Principles of ToF Imaging 3. Computer Vision with
More informationLocal features: detection and description May 12 th, 2015
Local features: detection and description May 12 th, 2015 Yong Jae Lee UC Davis Announcements PS1 grades up on SmartSite PS1 stats: Mean: 83.26 Standard Dev: 28.51 PS2 deadline extended to Saturday, 11:59
More informationEE795: Computer Vision and Intelligent Systems
EE795: Computer Vision and Intelligent Systems Spring 2012 TTh 17:30-18:45 FDH 204 Lecture 12 130228 http://www.ee.unlv.edu/~b1morris/ecg795/ 2 Outline Review Panoramas, Mosaics, Stitching Two View Geometry
More informationCS 2770: Intro to Computer Vision. Multiple Views. Prof. Adriana Kovashka University of Pittsburgh March 14, 2017
CS 277: Intro to Computer Vision Multiple Views Prof. Adriana Kovashka Universit of Pittsburgh March 4, 27 Plan for toda Affine and projective image transformations Homographies and image mosaics Stereo
More informationShape Preserving RGB-D Depth Map Restoration
Shape Preserving RGB-D Depth Map Restoration Wei Liu 1, Haoyang Xue 1, Yun Gu 1, Qiang Wu 2, Jie Yang 1, and Nikola Kasabov 3 1 The Key Laboratory of Ministry of Education for System Control and Information
More informationTemporal Filtering of Depth Images using Optical Flow
Temporal Filtering of Depth Images using Optical Flow Razmik Avetisyan Christian Rosenke Martin Luboschik Oliver Staadt Visual Computing Lab, Institute for Computer Science University of Rostock 18059
More information3D Computer Vision. Structured Light I. Prof. Didier Stricker. Kaiserlautern University.
3D Computer Vision Structured Light I Prof. Didier Stricker Kaiserlautern University http://ags.cs.uni-kl.de/ DFKI Deutsches Forschungszentrum für Künstliche Intelligenz http://av.dfki.de 1 Introduction
More informationStereo and structured light
Stereo and structured light http://graphics.cs.cmu.edu/courses/15-463 15-463, 15-663, 15-862 Computational Photography Fall 2018, Lecture 20 Course announcements Homework 5 is still ongoing. - Make sure
More informationMesh from Depth Images Using GR 2 T
Mesh from Depth Images Using GR 2 T Mairead Grogan & Rozenn Dahyot School of Computer Science and Statistics Trinity College Dublin Dublin, Ireland mgrogan@tcd.ie, Rozenn.Dahyot@tcd.ie www.scss.tcd.ie/
More informationFundamentals of Stereo Vision Michael Bleyer LVA Stereo Vision
Fundamentals of Stereo Vision Michael Bleyer LVA Stereo Vision What Happened Last Time? Human 3D perception (3D cinema) Computational stereo Intuitive explanation of what is meant by disparity Stereo matching
More informationColorado School of Mines. Computer Vision. Professor William Hoff Dept of Electrical Engineering &Computer Science.
Professor William Hoff Dept of Electrical Engineering &Computer Science http://inside.mines.edu/~whoff/ 1 Stereo Vision 2 Inferring 3D from 2D Model based pose estimation single (calibrated) camera > Can
More informationFundamental matrix. Let p be a point in left image, p in right image. Epipolar relation. Epipolar mapping described by a 3x3 matrix F
Fundamental matrix Let p be a point in left image, p in right image l l Epipolar relation p maps to epipolar line l p maps to epipolar line l p p Epipolar mapping described by a 3x3 matrix F Fundamental
More informationComputer Vision I - Filtering and Feature detection
Computer Vision I - Filtering and Feature detection Carsten Rother 30/10/2015 Computer Vision I: Basics of Image Processing Roadmap: Basics of Digital Image Processing Computer Vision I: Basics of Image
More informationEpipolar Geometry and Stereo Vision
CS 1674: Intro to Computer Vision Epipolar Geometry and Stereo Vision Prof. Adriana Kovashka University of Pittsburgh October 5, 2016 Announcement Please send me three topics you want me to review next
More information3D Scanning. Qixing Huang Feb. 9 th Slide Credit: Yasutaka Furukawa
3D Scanning Qixing Huang Feb. 9 th 2017 Slide Credit: Yasutaka Furukawa Geometry Reconstruction Pipeline This Lecture Depth Sensing ICP for Pair-wise Alignment Next Lecture Global Alignment Pairwise Multiple
More informationCS 4495 Computer Vision A. Bobick. Motion and Optic Flow. Stereo Matching
Stereo Matching Fundamental matrix Let p be a point in left image, p in right image l l Epipolar relation p maps to epipolar line l p maps to epipolar line l p p Epipolar mapping described by a 3x3 matrix
More informationFast Guided Global Interpolation for Depth and. Yu Li, Dongbo Min, Minh N. Do, Jiangbo Lu
Fast Guided Global Interpolation for Depth and Yu Li, Dongbo Min, Minh N. Do, Jiangbo Lu Introduction Depth upsampling and motion interpolation are often required to generate a dense, high-quality, and
More informationStereo Vision Based Image Maching on 3D Using Multi Curve Fitting Algorithm
Stereo Vision Based Image Maching on 3D Using Multi Curve Fitting Algorithm 1 Dr. Balakrishnan and 2 Mrs. V. Kavitha 1 Guide, Director, Indira Ganesan College of Engineering, Trichy. India. 2 Research
More informationLecture 16: Computer Vision
CS4442/9542b: Artificial Intelligence II Prof. Olga Veksler Lecture 16: Computer Vision Motion Slides are from Steve Seitz (UW), David Jacobs (UMD) Outline Motion Estimation Motion Field Optical Flow Field
More informationLecture 16: Computer Vision
CS442/542b: Artificial ntelligence Prof. Olga Veksler Lecture 16: Computer Vision Motion Slides are from Steve Seitz (UW), David Jacobs (UMD) Outline Motion Estimation Motion Field Optical Flow Field Methods
More informationLocal features: detection and description. Local invariant features
Local features: detection and description Local invariant features Detection of interest points Harris corner detection Scale invariant blob detection: LoG Description of local patches SIFT : Histograms
More informationMobile Point Fusion. Real-time 3d surface reconstruction out of depth images on a mobile platform
Mobile Point Fusion Real-time 3d surface reconstruction out of depth images on a mobile platform Aaron Wetzler Presenting: Daniel Ben-Hoda Supervisors: Prof. Ron Kimmel Gal Kamar Yaron Honen Supported
More informationStereo and Epipolar geometry
Previously Image Primitives (feature points, lines, contours) Today: Stereo and Epipolar geometry How to match primitives between two (multiple) views) Goals: 3D reconstruction, recognition Jana Kosecka
More informationLecture 10 Dense 3D Reconstruction
Institute of Informatics Institute of Neuroinformatics Lecture 10 Dense 3D Reconstruction Davide Scaramuzza 1 REMODE: Probabilistic, Monocular Dense Reconstruction in Real Time M. Pizzoli, C. Forster,
More informationComparison between Motion Analysis and Stereo
MOTION ESTIMATION The slides are from several sources through James Hays (Brown); Silvio Savarese (U. of Michigan); Octavia Camps (Northeastern); including their own slides. Comparison between Motion Analysis
More informationCapturing, Modeling, Rendering 3D Structures
Computer Vision Approach Capturing, Modeling, Rendering 3D Structures Calculate pixel correspondences and extract geometry Not robust Difficult to acquire illumination effects, e.g. specular highlights
More informationMatching. Compare region of image to region of image. Today, simplest kind of matching. Intensities similar.
Matching Compare region of image to region of image. We talked about this for stereo. Important for motion. Epipolar constraint unknown. But motion small. Recognition Find object in image. Recognize object.
More informationWhy is computer vision difficult?
Why is computer vision difficult? Viewpoint variation Illumination Scale Why is computer vision difficult? Intra-class variation Motion (Source: S. Lazebnik) Background clutter Occlusion Challenges: local
More information3D Object Representations. COS 526, Fall 2016 Princeton University
3D Object Representations COS 526, Fall 2016 Princeton University 3D Object Representations How do we... Represent 3D objects in a computer? Acquire computer representations of 3D objects? Manipulate computer
More informationRobotics Programming Laboratory
Chair of Software Engineering Robotics Programming Laboratory Bertrand Meyer Jiwon Shin Lecture 8: Robot Perception Perception http://pascallin.ecs.soton.ac.uk/challenges/voc/databases.html#caltech car
More informationSPM-BP: Sped-up PatchMatch Belief Propagation for Continuous MRFs. Yu Li, Dongbo Min, Michael S. Brown, Minh N. Do, Jiangbo Lu
SPM-BP: Sped-up PatchMatch Belief Propagation for Continuous MRFs Yu Li, Dongbo Min, Michael S. Brown, Minh N. Do, Jiangbo Lu Discrete Pixel-Labeling Optimization on MRF 2/37 Many computer vision tasks
More informationStereo Vision. MAN-522 Computer Vision
Stereo Vision MAN-522 Computer Vision What is the goal of stereo vision? The recovery of the 3D structure of a scene using two or more images of the 3D scene, each acquired from a different viewpoint in
More informationMiniature faking. In close-up photo, the depth of field is limited.
Miniature faking In close-up photo, the depth of field is limited. http://en.wikipedia.org/wiki/file:jodhpur_tilt_shift.jpg Miniature faking Miniature faking http://en.wikipedia.org/wiki/file:oregon_state_beavers_tilt-shift_miniature_greg_keene.jpg
More informationLecture 10 Multi-view Stereo (3D Dense Reconstruction) Davide Scaramuzza
Lecture 10 Multi-view Stereo (3D Dense Reconstruction) Davide Scaramuzza REMODE: Probabilistic, Monocular Dense Reconstruction in Real Time, ICRA 14, by Pizzoli, Forster, Scaramuzza [M. Pizzoli, C. Forster,
More informationLecture 10: Multi-view geometry
Lecture 10: Multi-view geometry Professor Stanford Vision Lab 1 What we will learn today? Review for stereo vision Correspondence problem (Problem Set 2 (Q3)) Active stereo vision systems Structure from
More informationarxiv: v1 [cs.cv] 28 Sep 2018
Camera Pose Estimation from Sequence of Calibrated Images arxiv:1809.11066v1 [cs.cv] 28 Sep 2018 Jacek Komorowski 1 and Przemyslaw Rokita 2 1 Maria Curie-Sklodowska University, Institute of Computer Science,
More informationComputer Vision 2. SS 18 Dr. Benjamin Guthier Professur für Bildverarbeitung. Computer Vision 2 Dr. Benjamin Guthier
Computer Vision 2 SS 18 Dr. Benjamin Guthier Professur für Bildverarbeitung Computer Vision 2 Dr. Benjamin Guthier 1. IMAGE PROCESSING Computer Vision 2 Dr. Benjamin Guthier Content of this Chapter Non-linear
More informationStructure from Motion
/8/ Structure from Motion Computer Vision CS 43, Brown James Hays Many slides adapted from Derek Hoiem, Lana Lazebnik, Silvio Saverese, Steve Seitz, and Martial Hebert This class: structure from motion
More informationLocal features and image matching. Prof. Xin Yang HUST
Local features and image matching Prof. Xin Yang HUST Last time RANSAC for robust geometric transformation estimation Translation, Affine, Homography Image warping Given a 2D transformation T and a source
More informationDepth Camera for Mobile Devices
Depth Camera for Mobile Devices Instructor - Simon Lucey 16-423 - Designing Computer Vision Apps Today Stereo Cameras Structured Light Cameras Time of Flight (ToF) Camera Inferring 3D Points Given we have
More informationSensor Modalities. Sensor modality: Different modalities:
Sensor Modalities Sensor modality: Sensors which measure same form of energy and process it in similar ways Modality refers to the raw input used by the sensors Different modalities: Sound Pressure Temperature
More informationMonocular Tracking and Reconstruction in Non-Rigid Environments
Monocular Tracking and Reconstruction in Non-Rigid Environments Kick-Off Presentation, M.Sc. Thesis Supervisors: Federico Tombari, Ph.D; Benjamin Busam, M.Sc. Patrick Ruhkamp 13.01.2017 Introduction Motivation:
More informationCS 4495 Computer Vision A. Bobick. Motion and Optic Flow. Stereo Matching
Stereo Matching Fundamental matrix Let p be a point in left image, p in right image l l Epipolar relation p maps to epipolar line l p maps to epipolar line l p p Epipolar mapping described by a 3x3 matrix
More informationStereo II CSE 576. Ali Farhadi. Several slides from Larry Zitnick and Steve Seitz
Stereo II CSE 576 Ali Farhadi Several slides from Larry Zitnick and Steve Seitz Camera parameters A camera is described by several parameters Translation T of the optical center from the origin of world
More informationLecture 9 & 10: Stereo Vision
Lecture 9 & 10: Stereo Vision Professor Fei- Fei Li Stanford Vision Lab 1 What we will learn today? IntroducEon to stereo vision Epipolar geometry: a gentle intro Parallel images Image receficaeon Solving
More information3D Modeling of Objects Using Laser Scanning
1 3D Modeling of Objects Using Laser Scanning D. Jaya Deepu, LPU University, Punjab, India Email: Jaideepudadi@gmail.com Abstract: In the last few decades, constructing accurate three-dimensional models
More informationRecap: Features and filters. Recap: Grouping & fitting. Now: Multiple views 10/29/2008. Epipolar geometry & stereo vision. Why multiple views?
Recap: Features and filters Epipolar geometry & stereo vision Tuesday, Oct 21 Kristen Grauman UT-Austin Transforming and describing images; textures, colors, edges Recap: Grouping & fitting Now: Multiple
More information