Data Term. Michael Bleyer LVA Stereo Vision
1 Data Term Michael Bleyer LVA Stereo Vision
2 What happened last time? We have looked at our energy function: E(D) = Σ_{p∈I} m(p, d_p) + Σ_{<p,q>∈N} s(p, q). We have learned about an optimization algorithm that can minimize our energy: Belief Propagation. We have investigated the smoothness function s(): first-order smoothness functions (Linear, Potts model, Truncated linear), second-order smoothness, and edge/segment-sensitive smoothness functions.
3 What is Going to Happen Today? We will look at the data term 3
4 Data Term Michael Bleyer LVA Stereo Vision
5 The Data Term Measures the goodness of correspondences: the colors of corresponding pixels are compared. Formally defined as E_data(D) = Σ_{p∈I} m(p, q), where m() computes the intensity/color dissimilarity between two pixels, p is a pixel of the left image, and q is p's correspondence in the right image according to disparity map D. We will now look at different methods for defining the pixel dissimilarity function m(). For the first few match measures, I will assume that the photo-consistency assumption holds; I will relax this assumption later on. For now, we will just consider grey-scale images; I will speak about color matching later on.
6 Absolute Intensity Difference Pixel dissimilarity is computed as the absolute difference in intensity values: m(p, q) = |I_p − I_q|, where I_p denotes the intensity of pixel p. Quite a common choice in stereo matching.
7 Squared Intensity Difference Pixel dissimilarity is computed as the squared difference in intensity values: m(p, q) = (I_p − I_q)². Probably not a good choice for stereo matching: sensitive to outliers; typically performs slightly worse than the absolute difference.
8 Truncated Absolute Difference Pixel dissimilarity is computed as the truncated absolute difference in intensity values: m(p, q) = min(|I_p − I_q|, k), where k is a user-defined value. Advantage: robustness against outliers. Better performance in occluded regions: since an occluded pixel does not have a matching point, it will have high pixel dissimilarity. Truncation lowers the data costs for the correct disparity, so there is still hope that the correct disparity can be propagated from surrounding non-occluded pixels. Disadvantage: you have an additional parameter, and an optimal value for k is difficult to find or might not even exist.
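The three pixel dissimilarity functions introduced so far can be sketched in Python (a minimal sketch; the function names and the value k = 20 are my own choices):

```python
def abs_diff(ip, iq):
    # Absolute intensity difference: m(p, q) = |Ip - Iq|
    return abs(ip - iq)

def squared_diff(ip, iq):
    # Squared intensity difference: sensitive to outliers
    return (ip - iq) ** 2

def truncated_abs_diff(ip, iq, k=20):
    # Truncated absolute difference: the cost is capped at k, which
    # limits the penalty for occluded pixels and other outliers.
    return min(abs(ip - iq), k)

print(truncated_abs_diff(200, 50))  # large difference is capped at k: 20
```

Note how truncation bounds the cost of an outlier, while the squared difference would amplify it.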
9 Sampling Insensitive Measure [Birchfield,PAMI98] In the real world, intensity is a continuous function. (Figure: continuous intensity curve over x-coordinates, left image)
10 Sampling Insensitive Measure [Birchfield,PAMI98] When we take a photo, we sample this continuous curve to derive discrete pixels. (Figure: sampled intensity curve, left image)
11 Sampling Insensitive Measure [Birchfield,PAMI98] Problem: Samples are typically taken at different curve positions in the left and right images. (Figure: intensity curves sampled at different positions, left and right images)
12 Sampling Insensitive Measure [Birchfield,PAMI98] Due to this different sampling, corresponding pixels have different intensity values (and this will oftentimes lead to wrong matches). (Figure: p and p' are corresponding pixels, but have different intensity values)
13 Sampling Insensitive Measure [Birchfield,PAMI98] Idea of [Birchfield,PAMI98]: We also look at p's horizontal neighbors q and r. (Figure: pixel p with neighbors q and r on the left-image curve; p' on the right-image curve)
14 Sampling Insensitive Measure [Birchfield,PAMI98] We interpolate the intensity of the pixel p− that lies in between p and q by I_{p−} = (I_p + I_q) / 2. (Figure: interpolated point p− between q and p on the left-image curve)
15 Sampling Insensitive Measure [Birchfield,PAMI98] We also interpolate the intensity of the pixel p+ that lies in between p and r by I_{p+} = (I_p + I_r) / 2. (Figure: interpolated point p+ between p and r on the left-image curve)
16 Sampling Insensitive Measure [Birchfield,PAMI98] We compute the sampling insensitive matching cost as m(p, p') = min(|I_p − I_{p'}|, |I_{p−} − I_{p'}|, |I_{p+} − I_{p'}|). (Figure: p, p− and p+ on the left-image curve; p' on the right-image curve)
17 Sampling Insensitive Measure [Birchfield,PAMI98] We compute the sampling insensitive matching cost as m(p, p') = min(|I_p − I_{p'}|, |I_{p−} − I_{p'}|, |I_{p+} − I_{p'}|). Advantage: we have also included the correctly sampled pixel => low intensity dissimilarity and high chances for a correct match. (Figure: p, p− and p+ on the left-image curve; p' on the right-image curve)
18 Sampling Insensitive Measure [Birchfield,PAMI98] We should do this in a symmetric way: m(p, p') = min(|I_p − I_{p'}|, |I_{p−} − I_{p'}|, |I_{p+} − I_{p'}|, |I_p − I_{p'−}|, |I_p − I_{p'+}|). (Figure: interpolated points p'− and p'+ added on the right-image curve)
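The symmetric measure can be sketched on 1D scanlines as follows (a minimal sketch; the function name and indexing are my own, and image borders are not handled):

```python
def bt_dissimilarity(left, right, x, d):
    # Symmetric Birchfield-Tomasi measure on two scanlines given as
    # lists of intensities: p = left[x], p' = right[x - d].
    p, pp = left[x], right[x - d]
    # Half-pixel interpolated intensities around p and p'.
    p_minus = (left[x] + left[x - 1]) / 2.0
    p_plus = (left[x] + left[x + 1]) / 2.0
    pp_minus = (right[x - d] + right[x - d - 1]) / 2.0
    pp_plus = (right[x - d] + right[x - d + 1]) / 2.0
    # Minimum over the original and interpolated comparisons.
    return min(abs(p - pp),
               abs(p_minus - pp), abs(p_plus - pp),
               abs(p - pp_minus), abs(p - pp_plus))

# The right scanline is the left one sampled half a pixel later:
print(bt_dissimilarity([0, 10, 20, 30], [5, 15, 25, 35], x=2, d=1))
# The plain absolute difference would be |20 - 15| = 5; the
# sampling-insensitive measure finds the interpolated match.
```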
19 Violations of the Photo-Consistency Assumption In real-world stereo images, the photo-consistency assumption is almost never perfectly fulfilled. We call a pixel radiometrically distorted if its intensity is different in the left and right images. There are various reasons for radiometric distortions, e.g.: different illumination conditions in the images, different exposure times, different sensor characteristics. (Figure: left and right images showing radiometric differences)
20 Radiometric Insensitive Match Measures Treatment of radiometric distortions has a great impact on the quality of results. Unfortunately, none of the above match measures is able to cope with radiometric distortions. We will now learn about three radiometric insensitive measures: Mutual Information, Zero-mean Normalized Cross-Correlation (ZNCC), and Census.
21 Mutual Information [Hirschmueller,PAMI08] Advantage: Mutual Information is a pixel-based measure; in contrast to window-based measures, artifacts at disparity borders are avoided. To compute Mutual Information matching scores, we need the disparity map: a chicken-and-egg problem. If we knew the disparity map, we would already be done. This dilemma is typically solved in an iterative fashion: 1. We compute an initial disparity map (e.g., using absolute differences as the dissimilarity function). 2. We compute Mutual Information scores using our current disparity map. 3. We compute a new disparity map using the Mutual Information scores. 4. Goto 2.
22 Mutual Information [Hirschmueller,PAMI08] How can we compute the Mutual Information matching scores? Disclaimer: There is quite a lot of theory behind that. I will focus on the practical implementation as described in [Hirschmueller,PAMI08].
23 Computing Mutual Information Scores For each pixel p, we look up its matching point q in the right image using our current disparity map. We look up the intensity values I_p and I_q and make an entry at <I_p, I_q> in the diagram below. For each possible pair of intensity values <I_p, I_q>, our diagram stores how often this pair occurred in the disparity map. (Figure: 2D histogram with left-image intensity on one axis and right-image intensity on the other; the example entry is made at I_p = 150.)
24 Computing Mutual Information Scores Let us assume that all corresponding pixels have identical intensity values, i.e. there is no radiometric distortion. The diagram then contains entries only along the main diagonal. (Figure: joint histogram with all entries on a 45° diagonal)
25 Computing Mutual Information Scores Let us assume that the right image is darker than the left one. (Figure: joint histogram with entries on a line below the main diagonal, i.e. at an angle < 45°)
26 Computing Mutual Information Scores Let us assume that the left image is darker than the right one. (Figure: joint histogram with entries on a line above the main diagonal, i.e. at an angle > 45°)
27 Computing Mutual Information Scores This is what the diagram looks like for the Teddy test set: black means that the intensity pair occurred very frequently in the disparity map. Images taken from [Hirschmueller,PAMI08].
28 Computing Mutual Information Scores This is what the diagram looks like for the Teddy test set (black means that the intensity pair occurred very frequently in the disparity map). Those intensity pairs that occurred frequently should be given low matching costs.
29 Computing Mutual Information Scores We compute −log(P), where P is our diagram (interpreted as a joint probability distribution). The values of −log(P) are our Mutual Information scores, where white pixels mean low matching costs. Side note: For simplicity, I left out two steps in which Gaussian smoothing is applied to P and to log(P).
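The core of this computation, building the joint intensity histogram and turning it into −log costs, can be sketched as follows (a simplified sketch with quantized intensities; the function name is my own, and the Gaussian smoothing steps are left out here as well):

```python
import math

def mutual_information_costs(pairs, num_bins=8):
    # pairs: list of (Ip, Iq) intensity pairs of corresponding pixels,
    # taken from the current disparity map (chicken-and-egg iteration).
    # Joint histogram P: hist[i][j] counts how often <i, j> occurred.
    hist = [[0] * num_bins for _ in range(num_bins)]
    for ip, iq in pairs:
        hist[ip][iq] += 1
    total = len(pairs)
    eps = 1e-9  # avoid log(0) for pairs that never occurred
    # Frequent pairs get low matching costs, rare pairs high costs.
    return [[-math.log(hist[i][j] / total + eps) for j in range(num_bins)]
            for i in range(num_bins)]
```

In a full implementation, these per-pair costs would then be looked up for every pixel/disparity combination to build a new disparity map.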
30 Disadvantage of Mutual Information Global model: Mutual Information can only model radiometric changes that are valid for the whole image. For example, the whole left image is darker than the right one. It cannot model radiometric changes that occur only locally. For example, the bottom left part of the image is darker in the left view. Unfortunately, radiometric distortions are oftentimes local. 30
31 Zero-mean Normalized Cross-Correlation (ZNCC) Is defined on windows => will lead to artefacts at disparity discontinuities. Pixel dissimilarity m() is computed as m(p, p−d) = Σ_{q∈W_p} (I_q − Ī_p)(I_{q−d} − Ī_{p−d}) / sqrt( Σ_{q∈W_p} (I_q − Ī_p)² · Σ_{q∈W_p} (I_{q−d} − Ī_{p−d})² ), where W_p is the set of all pixels in the window centered at p, and Ī_p is the mean intensity computed over all pixels inside W_p. Subtraction of the mean value serves to normalize the intensity values (robustness against radiometric changes).
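A plain-Python sketch of this formula (the helper name is my own; it operates on two equally sized windows given as flat intensity lists):

```python
def zncc(window_left, window_right):
    # Zero-mean normalized cross-correlation of two equally sized
    # windows. Returns a value in [-1, 1]; 1 is a perfect match.
    n = len(window_left)
    mean_l = sum(window_left) / n
    mean_r = sum(window_right) / n
    # Subtracting the means normalizes away global intensity offsets.
    num = sum((a - mean_l) * (b - mean_r)
              for a, b in zip(window_left, window_right))
    den_l = sum((a - mean_l) ** 2 for a in window_left)
    den_r = sum((b - mean_r) ** 2 for b in window_right)
    return num / ((den_l * den_r) ** 0.5)

# A globally brightened window still correlates perfectly:
print(zncc([10, 20, 30, 40], [60, 70, 80, 90]))  # prints 1.0
```

Note that ZNCC is a similarity (higher is better), so an implementation would typically use, e.g., 1 − ZNCC as the matching cost.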
32 Census We center a window on pixel p in the left image. 32
33 Census We center a window on pixel p in the left image. Apply the following transformation: If a pixel has smaller intensity than the window center pixel, write 0. Else write 1. 33
34 Census We center a window on pixel p in the left image. Apply the following transformation: If a pixel has smaller intensity than the window center pixel, write 0. Else write 1. Write the binary values as a bitstring. 34
35 Census We center a window on pixel p in the left image. Apply the following transformation: If a pixel has smaller intensity than the window center pixel, write 0. Else write 1. Write the binary values as a bitstring. Apply the same operations for the window centered on pixel q in the right image. 35
36 Census We center a window on pixel p in the left image. Apply the following transformation: If a pixel has smaller intensity than the window center pixel, write 0. Else write 1. Write the binary values as a bitstring. Apply the same operations for the window centered on pixel q in the right image. Census matching costs are computed as the Hamming distance between the bit strings: Number of positions at which binary values are different 36
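The transform and the Hamming-distance cost described above can be sketched as follows (a minimal sketch over flat 3x3 windows; names and example values are my own):

```python
def census(window):
    # Census transform of a flat window: compare every pixel against
    # the center pixel; smaller intensity -> 0, else -> 1.
    c = window[len(window) // 2]
    return [0 if v < c else 1
            for i, v in enumerate(window) if i != len(window) // 2]

def hamming(a, b):
    # Census matching cost: number of positions where the bits differ.
    return sum(x != y for x, y in zip(a, b))

left = [12, 80, 30, 25, 50, 60, 10, 90, 55]     # 3x3 window, center 50
right = [22, 90, 40, 35, 60, 70, 20, 100, 65]   # same scene, +10 brighter
print(hamming(census(left), census(right)))     # identical bit strings: 0
```

The example shows the robustness claim from the discussion: adding a constant offset to one window leaves the bit string, and hence the cost, unchanged.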
37 Census - Discussion We do not directly match the intensities, but the local texture, represented as a bit string. If one image is darker than the other, the bit strings should still agree: you can, for example, add a value of 10 to the intensity values in p's window on the previous slide and will still get the same bit string. Increased robustness if the window overlaps a disparity discontinuity. Problems in untextured regions: if all pixels have very similar intensities, the values of the bit string largely depend on image noise, which leads to noisy results in untextured regions.
38 How Can We Incorporate Color Information? Typically this is done in the simplest way: compute the match measure individually for each color channel and sum up the values over all color channels. Let us now investigate the role of color.
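This per-channel summation can be sketched as follows (a minimal sketch; names are my own, and the absolute difference stands in for any single-channel measure m()):

```python
def color_cost(p_rgb, q_rgb, m=lambda a, b: abs(a - b)):
    # Apply the single-channel measure m() to each color channel
    # independently and sum the per-channel values.
    return sum(m(a, b) for a, b in zip(p_rgb, q_rgb))

print(color_cost((100, 150, 200), (110, 140, 205)))  # 10 + 10 + 5 = 25
```

Any of the measures above (absolute, squared, truncated) can be passed in as m().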
39 Why Should Color Help? We have a blue pixel in the left image. We have 2 candidate matches in the right image: A blue pixel A yellow pixel It is quite clear that the blue pixel is the correct match. (Left Image) (Right Image) 39
40 Why Should Color Help? Let us now convert the color images into grey-scale images. In our example blue and yellow colors map to the same grey-value. It is no longer clear which of our 2 candidate pixels is the correct match => Color information reduces ambiguity! (Left Image) (Right Image) 40
41 Why Should Color Help? Let us now convert the color images into grey-scale images. In our example, blue and yellow colors map to the same grey-value. It is no longer clear which of our 2 candidate pixels is the correct match => Color information reduces ambiguity! However, a lot of stereo algorithms do not use color information. Is there a reason for this? (Left Image) (Right Image)
42 Evaluation of Color Matching [Bleyer,3DPVT10] We evaluate the performance of 8 different color systems and of grey-scale matching. The different color systems affect the data term of our energy; truncated linear is used as the smoothness term. Energy optimization is accomplished using the Simple Tree dynamic programming method (see the session on dynamic programming). We use 30 ground truth pairs from Middlebury as test data. Error is computed as the percentage of pixels having an absolute disparity error > 1 pixel in non-occluded regions.
43 Test Set 43
44 Absolute Intensity Difference We compute the pixel dissimilarity as the absolute difference in intensity values. Results (see plot below): Grey-scale matching nearly always performs worst. The color system LUV seems to perform better than RGB.
45 Absolute Intensity Difference So using color is a good thing? Well, this is not the whole story.
46 Example Results for Dolls Test Set (Left Image) (Grey - Disparity) (LUV - Disparity) Let us have a closer look at the disparity maps. I show two disparity maps: one has been computed using the absolute difference of grey values as the match function, the other using the summed-up absolute differences in LUV values.
47 Example Results for Dolls Test Set (Left Image) (Grey - Errors) (LUV - Errors) We should rather look at the error maps Black pixels are pixels whose disparity error is larger than 1 pixel in comparison against the ground truth. Errors are clearly smaller when using LUV.
49 Radiometric Problems in the Dolls Set We have the ground truth disparity map for the Dolls set. => For each pixel p of the left image, we know its correct correspondence q in the right image. If there are no radiometric distortions, |I_p − I_q| should be equal to 0. In practice, we obtain the image shown on the right, where bright pixels have a large value of |I_p − I_q|. These bright pixels are the result of radiometric distortions. (Data Costs of Ground Truth Solution)
50 Radiometric Problems in the Dolls Set We can apply thresholding on the ground truth data cost image. (Data Costs of Ground Truth Solution) (Smoothed Thresholding)
51 Radiometric Problems in the Dolls Set We can apply thresholding on the ground truth data cost image. There seems to be a large overlap between errors in the grey-scale matching result and radiometrically distorted regions. (Data Costs of Ground Truth Solution) (Smoothed Thresholding) (Disparity Error when using Grey-Scale Matching)
52 Radiometric Problems in the Dolls Set We can apply thresholding on the ground truth data cost image. There seems to be a large overlap between errors in the grey-scale matching result and radiometrically distorted regions. Errors in radiometrically distorted regions seem to be effectively reduced when using color matching. (Data Costs of Ground Truth Solution) (Smoothed Thresholding) (Disparity Error when using LUV Matching)
53 Radiometric Problems in the Dolls Set Color might be of specific importance in radiometrically distorted image regions. (Data Costs of Ground Truth Solution) (Smoothed Thresholding) (Disparity Error when using LUV Matching)
54 Color Helps in Radiometrically Distorted Regions We have extracted the radiometrically distorted regions for all 30 test images. We now analyze the disparity error separately in regions affected and in regions unaffected by radiometric distortions. Average error percentage in regions affected by radiometric distortions: a large improvement can be observed when using color (e.g., LUV) instead of grey-scale matching (Grey).
55 Color Helps in Radiometrically Distorted Regions Regions unaffected by radiometric distortions: almost no improvement.
56 Color Helps in Radiometrically Distorted Regions Average error percentage in all regions: the overall improvement is largely due to the considerable improvement in radiometrically distorted regions.
57 Why Not Directly Use a Radiometric Insensitive Measure? If color only helps in radiometrically distorted regions, we can directly use radiometric insensitive measures instead of color. Three radiometric insensitive measures are tested: Mutual Information (MI), Zero-mean Normalized Cross-Correlation (NCC), and Census (CENSUS). All three match measures are applied on grey-scale images.
58 Why Not Directly Use a Radiometric Insensitive Measure? NCC and CENSUS seem to perform best. This result is consistent with [Hirschmueller,PAMI09]. NCC and CENSUS have the same effect as using color (AbsDif (LUV)): they improve performance in radiometrically distorted regions (blue line). NCC and CENSUS are considerably better than color in this respect.
59 Using Color with Radiometric Insensitive Measures NCC used with grey-scale and 8 color spaces; CENSUS used with grey-scale and 8 color spaces. It seems to be a bad idea to use color in conjunction with radiometric insensitive measures: grey-scale matching performs better than all 8 color spaces. How can this happen? The increased robustness of color in radiometrically distorted regions is not important anymore (NCC and CENSUS do a better job). You practically do not lose texture when deleting color. Intensity is probably captured more robustly by today's cameras (less noise in the intensity channel).
60 Using Color with Radiometric Insensitive Measures My advice: You should not use color. You should definitely use a radiometric insensitive match measure.
61 Support Aggregation in Global Stereo Matching In the session on local stereo methods, we have learned about support aggregation: We do not match single pixels, but windows Until recently, using windows in global stereo was considered a bad idea: You get artefacts at disparity discontinuities!
62 Support Aggregation in Global Stereo Matching We have also learned about new segmentation-based aggregation schemes: They deliver excellent performance In general, they will even improve the performance near depth discontinuities Apart from increased computational costs, there is relatively little that speaks against using these aggregation methods for implementing your pixel dissimilarity function m(). Standard Support Weights Geodesic Support Weights
63 Summary Data Term: Standard dissimilarity measures: Absolute / Squared intensity differences Sampling insensitive measures Radiometric insensitive measures: Mutual information ZNCC Census The role of color Segmentation-based aggregation schemes
64 References
[Birchfield,PAMI98] S. Birchfield, C. Tomasi, A Pixel Dissimilarity Measure That Is Insensitive to Image Sampling, PAMI, vol. 20, 1998.
[Bleyer,3DPVT10] M. Bleyer, S. Chambon, Does Color Really Help in Dense Stereo Matching?, 3DPVT, 2010.
[Hirschmueller,PAMI08] H. Hirschmueller, Stereo Processing by Semi-Global Matching and Mutual Information, PAMI, vol. 30, no. 2, 2008.
[Hirschmueller,PAMI09] H. Hirschmueller, D. Scharstein, Evaluation of Stereo Matching Costs on Images with Radiometric Differences, PAMI, vol. 31, no. 9, 2009.
Fast Stereo Matching of Feature Links 011.05.19 Chang-il, Kim Introduction Stereo matching? interesting topics of computer vision researches To determine a disparity between stereo images A fundamental
More informationData-driven Depth Inference from a Single Still Image
Data-driven Depth Inference from a Single Still Image Kyunghee Kim Computer Science Department Stanford University kyunghee.kim@stanford.edu Abstract Given an indoor image, how to recover its depth information
More informationCOS Lecture 10 Autonomous Robot Navigation
COS 495 - Lecture 10 Autonomous Robot Navigation Instructor: Chris Clark Semester: Fall 2011 1 Figures courtesy of Siegwart & Nourbakhsh Control Structure Prior Knowledge Operator Commands Localization
More informationDepth Discontinuities by
Depth Discontinuities by Pixel-to-Pixel l Stereo Stan Birchfield, Carlo Tomasi Proceedings of the 1998 IEEE International Conference on Computer Vision, i Bombay, India - Introduction Cartoon artists Known
More informationComparison of Graph Cuts with Belief Propagation for Stereo, using Identical MRF Parameters
Comparison of Graph Cuts with Belief Propagation for Stereo, using Identical MRF Parameters Marshall F. Tappen William T. Freeman Computer Science and Artificial Intelligence Laboratory Massachusetts Institute
More informationEmbedded real-time stereo estimation via Semi-Global Matching on the GPU
Embedded real-time stereo estimation via Semi-Global Matching on the GPU Daniel Hernández Juárez, Alejandro Chacón, Antonio Espinosa, David Vázquez, Juan Carlos Moure and Antonio M. López Computer Architecture
More informationStructured Light II. Thanks to Ronen Gvili, Szymon Rusinkiewicz and Maks Ovsjanikov
Structured Light II Johannes Köhler Johannes.koehler@dfki.de Thanks to Ronen Gvili, Szymon Rusinkiewicz and Maks Ovsjanikov Introduction Previous lecture: Structured Light I Active Scanning Camera/emitter
More informationStereo Matching! Christian Unger 1,2, Nassir Navab 1!! Computer Aided Medical Procedures (CAMP), Technische Universität München, Germany!!
Stereo Matching Christian Unger 12 Nassir Navab 1 1 Computer Aided Medical Procedures CAMP) Technische Universität München German 2 BMW Group München German Hardware Architectures. Microprocessors Pros:
More informationImage Segmentation Via Iterative Geodesic Averaging
Image Segmentation Via Iterative Geodesic Averaging Asmaa Hosni, Michael Bleyer and Margrit Gelautz Institute for Software Technology and Interactive Systems, Vienna University of Technology Favoritenstr.
More informationPerformance of Stereo Methods in Cluttered Scenes
Performance of Stereo Methods in Cluttered Scenes Fahim Mannan and Michael S. Langer School of Computer Science McGill University Montreal, Quebec H3A 2A7, Canada { fmannan, langer}@cim.mcgill.ca Abstract
More informationsegments. The geometrical relationship of adjacent planes such as parallelism and intersection is employed for determination of whether two planes sha
A New Segment-based Stereo Matching using Graph Cuts Daolei Wang National University of Singapore EA #04-06, Department of Mechanical Engineering Control and Mechatronics Laboratory, 10 Kent Ridge Crescent
More information3D RECONSTRUCTION FROM STEREO/ RANGE IMAGES
University of Kentucky UKnowledge University of Kentucky Master's Theses Graduate School 2007 3D RECONSTRUCTION FROM STEREO/ RANGE IMAGES Qingxiong Yang University of Kentucky, qyang2@uky.edu Click here
More informationMACHINE VISION APPLICATIONS. Faculty of Engineering Technology, Technology Campus, Universiti Teknikal Malaysia Durian Tunggal, Melaka, Malaysia
Journal of Fundamental and Applied Sciences ISSN 1112-9867 Research Article Special Issue Available online at http://www.jfas.info DISPARITY REFINEMENT PROCESS BASED ON RANSAC PLANE FITTING FOR MACHINE
More informationDirect Methods in Visual Odometry
Direct Methods in Visual Odometry July 24, 2017 Direct Methods in Visual Odometry July 24, 2017 1 / 47 Motivation for using Visual Odometry Wheel odometry is affected by wheel slip More accurate compared
More informationImproved depth map estimation in Stereo Vision
Improved depth map estimation in Stereo Vision Hajer Fradi and and Jean-Luc Dugelay EURECOM, Sophia Antipolis, France ABSTRACT In this paper, we present a new approach for dense stereo matching which is
More informationStereo imaging ideal geometry
Stereo imaging ideal geometry (X,Y,Z) Z f (x L,y L ) f (x R,y R ) Optical axes are parallel Optical axes separated by baseline, b. Line connecting lens centers is perpendicular to the optical axis, and
More informationGeometric Reconstruction Dense reconstruction of scene geometry
Lecture 5. Dense Reconstruction and Tracking with Real-Time Applications Part 2: Geometric Reconstruction Dr Richard Newcombe and Dr Steven Lovegrove Slide content developed from: [Newcombe, Dense Visual
More information3D Computer Vision. Structured Light II. Prof. Didier Stricker. Kaiserlautern University.
3D Computer Vision Structured Light II Prof. Didier Stricker Kaiserlautern University http://ags.cs.uni-kl.de/ DFKI Deutsches Forschungszentrum für Künstliche Intelligenz http://av.dfki.de 1 Introduction
More informationSegment-based Stereo Matching Using Graph Cuts
Segment-based Stereo Matching Using Graph Cuts Li Hong George Chen Advanced System Technology San Diego Lab, STMicroelectronics, Inc. li.hong@st.com george-qian.chen@st.com Abstract In this paper, we present
More informationChaplin, Modern Times, 1936
Chaplin, Modern Times, 1936 [A Bucket of Water and a Glass Matte: Special Effects in Modern Times; bonus feature on The Criterion Collection set] Multi-view geometry problems Structure: Given projections
More informationA Comparative Study of Stereovision Algorithms
A Comparative Study of Stereovision Algorithms Elena Bebeşelea-Sterp NTT DATA ROMANIA Sibiu, Romania Raluca Brad Faculty of Engineering Lucian Blaga University of Sibiu Sibiu, Romania Remus Brad Faculty
More informationDense 3D Reconstruction. Christiano Gava
Dense 3D Reconstruction Christiano Gava christiano.gava@dfki.de Outline Previous lecture: structure and motion II Structure and motion loop Triangulation Wide baseline matching (SIFT) Today: dense 3D reconstruction
More informationMotion Analysis. Motion analysis. Now we will talk about. Differential Motion Analysis. Motion analysis. Difference Pictures
Now we will talk about Motion Analysis Motion analysis Motion analysis is dealing with three main groups of motionrelated problems: Motion detection Moving object detection and location. Derivation of
More informationA Statistical Consistency Check for the Space Carving Algorithm.
A Statistical Consistency Check for the Space Carving Algorithm. A. Broadhurst and R. Cipolla Dept. of Engineering, Univ. of Cambridge, Cambridge, CB2 1PZ aeb29 cipolla @eng.cam.ac.uk Abstract This paper
More informationStereo Matching.
Stereo Matching Stereo Vision [1] Reduction of Searching by Epipolar Constraint [1] Photometric Constraint [1] Same world point has same intensity in both images. True for Lambertian surfaces A Lambertian
More informationFusing Color and Texture Features for Stereo Matching
Proceeding of the IEEE International Conference on Information and Automation Yinchuan, China, August 2013 Fusing Color and Texture Features for Stereo Matching Yaolin Hou, Jian Yao, Bing Zhou, and Yaping
More informationCS664 Lecture #16: Image registration, robust statistics, motion
CS664 Lecture #16: Image registration, robust statistics, motion Some material taken from: Alyosha Efros, CMU http://www.cs.cmu.edu/~efros Xenios Papademetris http://noodle.med.yale.edu/~papad/various/papademetris_image_registration.p
More informationPublic Library, Stereoscopic Looking Room, Chicago, by Phillips, 1923
Public Library, Stereoscopic Looking Room, Chicago, by Phillips, 1923 Teesta suspension bridge-darjeeling, India Mark Twain at Pool Table", no date, UCR Museum of Photography Woman getting eye exam during
More informationEdge and corner detection
Edge and corner detection Prof. Stricker Doz. G. Bleser Computer Vision: Object and People Tracking Goals Where is the information in an image? How is an object characterized? How can I find measurements
More informationA layered stereo matching algorithm using image segmentation and global visibility constraints
ISPRS Journal of Photogrammetry & Remote Sensing 59 (2005) 128 150 www.elsevier.com/locate/isprsjprs A layered stereo matching algorithm using image segmentation and global visibility constraints Michael
More informationComputer Vision I - Basics of Image Processing Part 1
Computer Vision I - Basics of Image Processing Part 1 Carsten Rother 28/10/2014 Computer Vision I: Basics of Image Processing Link to lectures Computer Vision I: Basics of Image Processing 28/10/2014 2
More informationUsing temporal seeding to constrain the disparity search range in stereo matching
Using temporal seeding to constrain the disparity search range in stereo matching Thulani Ndhlovu Mobile Intelligent Autonomous Systems CSIR South Africa Email: tndhlovu@csir.co.za Fred Nicolls Department
More informationTowards a Simulation Driven Stereo Vision System
Towards a Simulation Driven Stereo Vision System Martin Peris Cyberdyne Inc., Japan Email: martin peris@cyberdyne.jp Sara Martull University of Tsukuba, Japan Email: info@martull.com Atsuto Maki Toshiba
More informationFast Stereo Matching using Adaptive Window based Disparity Refinement
Avestia Publishing Journal of Multimedia Theory and Applications (JMTA) Volume 2, Year 2016 Journal ISSN: 2368-5956 DOI: 10.11159/jmta.2016.001 Fast Stereo Matching using Adaptive Window based Disparity
More informationSegmentation-based Disparity Plane Fitting using PSO
, pp.141-145 http://dx.doi.org/10.14257/astl.2014.47.33 Segmentation-based Disparity Plane Fitting using PSO Hyunjung, Kim 1, Ilyong, Weon 2, Youngcheol, Jang 3, Changhun, Lee 4 1,4 Department of Computer
More informationDiscrete Optimization Methods in Computer Vision CSE 6389 Slides by: Boykov Modified and Presented by: Mostafa Parchami Basic overview of graph cuts
Discrete Optimization Methods in Computer Vision CSE 6389 Slides by: Boykov Modified and Presented by: Mostafa Parchami Basic overview of graph cuts [Yuri Boykov, Olga Veksler, Ramin Zabih, Fast Approximation
More informationCS5670: Computer Vision
CS5670: Computer Vision Noah Snavely Light & Perception Announcements Quiz on Tuesday Project 3 code due Monday, April 17, by 11:59pm artifact due Wednesday, April 19, by 11:59pm Can we determine shape
More informationComputer Vision I. Announcements. Random Dot Stereograms. Stereo III. CSE252A Lecture 16
Announcements Stereo III CSE252A Lecture 16 HW1 being returned HW3 assigned and due date extended until 11/27/12 No office hours today No class on Thursday 12/6 Extra class on Tuesday 12/4 at 6:30PM in
More informationColorado School of Mines. Computer Vision. Professor William Hoff Dept of Electrical Engineering &Computer Science.
Professor William Hoff Dept of Electrical Engineering &Computer Science http://inside.mines.edu/~whoff/ 1 Stereo Vision 2 Inferring 3D from 2D Model based pose estimation single (calibrated) camera > Can
More informationOptimized Progressive Coding of Stereo Images Using Discrete Wavelet Transform
Optimized Progressive Coding of Stereo Images Using Discrete Wavelet Transform Torsten Palfner, Alexander Mali and Erika Müller Institute of Telecommunications and Information Technology, University of
More informationCorrespondence and Stereopsis. Original notes by W. Correa. Figures from [Forsyth & Ponce] and [Trucco & Verri]
Correspondence and Stereopsis Original notes by W. Correa. Figures from [Forsyth & Ponce] and [Trucco & Verri] Introduction Disparity: Informally: difference between two pictures Allows us to gain a strong
More informationVirtual Rephotography: Novel View Prediction Error for 3D Reconstruction
Supplemental Material for ACM Transactions on Graphics 07 paper Virtual Rephotography: Novel View Prediction Error for 3D Reconstruction Michael Waechter, Mate Beljan, Simon Fuhrmann, Nils Moehrle, Johannes
More informationCS 4495 Computer Vision A. Bobick. Motion and Optic Flow. Stereo Matching
Stereo Matching Fundamental matrix Let p be a point in left image, p in right image l l Epipolar relation p maps to epipolar line l p maps to epipolar line l p p Epipolar mapping described by a 3x3 matrix
More informationSPM-BP: Sped-up PatchMatch Belief Propagation for Continuous MRFs. Yu Li, Dongbo Min, Michael S. Brown, Minh N. Do, Jiangbo Lu
SPM-BP: Sped-up PatchMatch Belief Propagation for Continuous MRFs Yu Li, Dongbo Min, Michael S. Brown, Minh N. Do, Jiangbo Lu Discrete Pixel-Labeling Optimization on MRF 2/37 Many computer vision tasks
More informationA novel heterogeneous framework for stereo matching
A novel heterogeneous framework for stereo matching Leonardo De-Maeztu 1, Stefano Mattoccia 2, Arantxa Villanueva 1 and Rafael Cabeza 1 1 Department of Electrical and Electronic Engineering, Public University
More informationModel-Based Stereo. Chapter Motivation. The modeling system described in Chapter 5 allows the user to create a basic model of a
96 Chapter 7 Model-Based Stereo 7.1 Motivation The modeling system described in Chapter 5 allows the user to create a basic model of a scene, but in general the scene will have additional geometric detail
More informationSymStereo: Stereo Matching using Induced Symmetry
Int J Computer Vision manuscript No. (will be inserted by the editor) SymStereo: Stereo Matching using Induced Symmetry Michel Antunes João P. Barreto Received: date / Accepted: date Abstract Stereo methods
More informationAccurate Disparity Estimation Based on Integrated Cost Initialization
Sensors & Transducers 203 by IFSA http://www.sensorsportal.com Accurate Disparity Estimation Based on Integrated Cost Initialization Haixu Liu, 2 Xueming Li School of Information and Communication Engineering,
More informationProbabilistic Correspondence Matching using Random Walk with Restart
C. OH, B. HAM, K. SOHN: PROBABILISTIC CORRESPONDENCE MATCHING 1 Probabilistic Correspondence Matching using Random Walk with Restart Changjae Oh ocj1211@yonsei.ac.kr Bumsub Ham mimo@yonsei.ac.kr Kwanghoon
More informationFast, Unconstrained Camera Motion Estimation from Stereo without Tracking and Robust Statistics
Fast, Unconstrained Camera Motion Estimation from Stereo without Tracking and Robust Statistics Heiko Hirschmüller, Peter R. Innocent and Jon M. Garibaldi Centre for Computational Intelligence, De Montfort
More informationDense 3D Reconstruction. Christiano Gava
Dense 3D Reconstruction Christiano Gava christiano.gava@dfki.de Outline Previous lecture: structure and motion II Structure and motion loop Triangulation Today: dense 3D reconstruction The matching problem
More informationCS5670: Computer Vision
CS5670: Computer Vision Noah Snavely, Zhengqi Li Stereo Single image stereogram, by Niklas Een Mark Twain at Pool Table", no date, UCR Museum of Photography Stereo Given two images from different viewpoints
More informationAdaptive Support-Weight Approach for Correspondence Search
650 IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, VOL. 28, NO. 4, APRIL 2006 Adaptive Support-Weight Approach for Correspondence Search Kuk-Jin Yoon, Student Member, IEEE, and In So Kweon,
More informationPanoramic Image Stitching
Mcgill University Panoramic Image Stitching by Kai Wang Pengbo Li A report submitted in fulfillment for the COMP 558 Final project in the Faculty of Computer Science April 2013 Mcgill University Abstract
More informationME/CS 132: Introduction to Vision-based Robot Navigation! Low-level Image Processing" Larry Matthies"
ME/CS 132: Introduction to Vision-based Robot Navigation! Low-level Image Processing" Larry Matthies" lhm@jpl.nasa.gov, 818-354-3722" Announcements" First homework grading is done! Second homework is due
More informationStatic Scene Reconstruction
GPU supported Real-Time Scene Reconstruction with a Single Camera Jan-Michael Frahm, 3D Computer Vision group, University of North Carolina at Chapel Hill Static Scene Reconstruction 1 Capture on campus
More informationRecognition of Object Contours from Stereo Images: an Edge Combination Approach
Recognition of Object Contours from Stereo Images: an Edge Combination Approach Margrit Gelautz and Danijela Markovic Institute for Software Technology and Interactive Systems, Vienna University of Technology
More information