Filter Flow: Supplemental Material

Steven M. Seitz, University of Washington
Simon Baker, Microsoft Research

We include larger images and a number of additional results obtained using Filter Flow [5].

1 Focus

In Figure 1 we include a larger version of our linear depth-from-defocus results (Figure 4 in [5]), along with the corresponding results from [6] and [2]. In Figure 2 we include similar results on another dataset from [6], along with the corresponding result from [6] ([2] does not include results on this sequence). In Figure 3 we include a larger version of our combined depth-from-defocus and motion results (Figure 5 in [5]). In Figure 4 we include results that illustrate robustness in the presence of blur.

2 Intensity Stereo

In Figure 5 we include larger images of our results on the Rocks1 images from [3]. We include similar results on the Cloth4 sequence in Figure 6 and on the Wood1 sequence in Figure 7.

3 Optical Flow

In Tables 1 and 2 we include our results on the Middlebury Flow benchmark [1], both with and without the kernel filters. Table 1 contains the end-point error measure. Table 2 contains the interpolation intensity error. See http://vision.middlebury.edu/flow/ for the other measures and a full set of images of our results with the kernel filters. In Figure 8 we include our results on the Urban sequence, which is challenging because of its large search range; our results on it are amongst the best to date. In Figure 9 we include our interpolation results on the Backyard sequence. Our algorithm is one of the few (if any) to render the orange ball reasonably well. In Figure 10 we include a comparison of our results with and without a kernel filter to explain the relative performance of the algorithms in Table 1.
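As a point of reference for Table 1, the end-point error is the Euclidean distance between the estimated and ground-truth flow vectors at each pixel, averaged over the image. A minimal sketch of the metric (not code from [5]; it assumes flow fields stored as H x W x 2 numpy arrays):

    import numpy as np

    def endpoint_error(flow, flow_gt):
        """Average end-point error in pixels between two H x W x 2 flow fields."""
        diff = flow - flow_gt  # per-pixel (du, dv) difference
        epe = np.sqrt(diff[..., 0] ** 2 + diff[..., 1] ** 2)  # Euclidean distance per pixel
        return float(epe.mean())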

[Figure 1 panels: Input (one of two) | Kernel Filter Width^2 (Computed) | Median Filtered | Results from [6] | Results from [2]]

Figure 1: Defocus result on an image pair from [6] using our linear MRF-based formulation, showing the estimated kernel width (W^2) for each pixel (white: small, black: large). Light values correspond to sharpening, dark values to blur. We also include the corresponding results from [6] and [2].
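The width^2 maps above summarize each recovered per-pixel kernel by a scalar spread. The exact definition is given in [5]; as a rough illustration, one plausible choice is the second moment of the kernel about its centroid. A hedged numpy sketch, assuming each kernel is available as a small 2D weight patch:

    import numpy as np

    def kernel_width_sq(weights):
        """Second moment of a 2D kernel about its centroid (one plausible width^2 measure).
        `weights`: small 2D array of filter coefficients, assumed to sum to ~1."""
        ys, xs = np.mgrid[0:weights.shape[0], 0:weights.shape[1]]
        w = weights / weights.sum()
        cy, cx = (w * ys).sum(), (w * xs).sum()
        return float((w * ((ys - cy) ** 2 + (xs - cx) ** 2)).sum())

Note that a sharpening kernel (negative side lobes) can make this quantity small or even negative, which is consistent with the light/dark coding described in the caption.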

[Figure 2 panels: Input (one of two) | Kernel Filter Width^2 (Computed) | Raw, Unfiltered | Results from [6]]

Figure 2: Defocus result on another image pair from [6] using our linear MRF-based formulation, showing the estimated kernel width (W^2) for each pixel (white: small, black: large). Light values correspond to sharpening, dark values to blur. We also include the corresponding results from [6].

[Figure 3 panels: (a) Far Focused Image | (b) Near Focused Image | (c) The Affine Alignment | (d) Flow Color Coding | (e) The Symmetric Kernel Filters | (f) The Filter Width^2]

Figure 3: Defocus and Motion Estimation. Two images (a, b) at different focus settings yield an affine flow field, which is roughly a zoom (c), and symmetric per-pixel kernels (e). The filter width^2 (f) visually correlates with scene depth. (d) shows the color coding of the flow, from [1].

[Figure 4 panels: Left Image | Blurred Right Image | Without Kernel Filter | With Kernel Filter]

Figure 4: Robustness Results in the Presence of Blur. (a) The left Venus image from [4]. (b) The right Venus image, blurred with a Gaussian, σ = 1.4 pixels. (c) Filter Flow results with no kernel filter. (d) Filter Flow results with a kernel filter to model the blur.
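The blurred input in (b) can be reproduced with an off-the-shelf isotropic Gaussian filter. A minimal sketch (not code from [5]), assuming an RGB image loaded as an H x W x 3 numpy array:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def blur_image(image, sigma=1.4):
        """Blur each color channel with an isotropic Gaussian of the given sigma (pixels)."""
        # sigma = 0 on the channel axis leaves the color channels unmixed
        return gaussian_filter(image.astype(np.float32), sigma=(sigma, sigma, 0))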

[Figure 5 panels: (a) input image | (b) input, different illumination | (c) true depth | (d) true shading change | (e) disparity with illum. | (f) estimated shading change | (g) disparity without illum.]

Figure 5: Stereo with Illumination Changes (Rocks1). (d) shows the shading change (the ratio of intensities) between images (a) and (b) that results from moving the light source. Without illumination compensation, stereo performs poorly (g). Adding an MRF over shading changes greatly improves the results (e), and the estimated shading (f) closely matches (a regularized version of) the ground truth (d).
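The "shading change" in panels (d) and (f) is simply a per-pixel intensity ratio. A minimal sketch of how such a map can be computed for visualization (an illustration, not code from [5]):

    import numpy as np

    def shading_change(image_a, image_b, eps=1e-6):
        """Per-pixel intensity ratio between two registered grayscale images."""
        a = image_a.astype(np.float64)
        b = image_b.astype(np.float64)
        return b / (a + eps)  # eps guards against division by zero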

[Figure 6 panels: (a) input image | (b) input, different illumination | (c) true depth | (d) true shading change | (e) disparity with illum. | (f) estimated shading change | (g) disparity without illum.]

Figure 6: Stereo with Illumination Changes (Cloth4). (d) shows the shading change (the ratio of intensities) between images (a) and (b) that results from moving the light source. Without illumination compensation, stereo performs poorly (g). Adding an MRF over shading changes greatly improves the results (e), and the estimated shading (f) closely matches (a regularized version of) the ground truth (d).

[Figure 7 panels: (a) input image | (b) input, different illumination | (c) true depth | (d) true shading change | (e) disparity with illum. | (f) estimated shading change | (g) disparity without illum.]

Figure 7: Stereo with Illumination Changes (Wood1). (d) shows the shading change (the ratio of intensities) between images (a) and (b) that results from moving the light source. Without illumination compensation, stereo performs poorly (g). Adding an MRF over shading changes greatly improves the results (e), and the estimated shading (f) closely matches (a regularized version of) the ground truth (d).

Table 1: Quantitative Results on the Middlebury Flow Benchmark: End-Point Error in Pixels.

                        Army   Mequon   Schefflera   Wooden   Grove   Urban   Yosemite   Teddy
    No Kernel Filter    0.20   0.94     0.93         1.23     1.13    0.71    0.22       1.16
    With Kernel Filter  0.17   0.43     0.75         0.70     1.13    0.57    0.22       0.96

Table 2: Quantitative Results on the Middlebury Flow Benchmark: Interpolation Error in Grey-Levels.

                        Army   Mequon   Urban   Teddy   Backyard   Basketball   Dumptruck   Evergreen
    No Kernel Filter    1.84   3.00     4.17    5.71    10.1       5.82         7.68        7.09
    With Kernel Filter  1.82   2.93     4.10    5.58    10.2       5.70         7.63        7.11

4 Higher Order Smoothness

In Figure 11 we include a larger version of Figure 2 in [5], which compares four local MRF smoothness terms: centroid smoothness, 2nd order smoothness, affine smoothness, and filter smoothness.

5 Linear Global Affine

Figure 12 contains the ground-truth and computed flow fields for our results in Section 6.1 of [5].

[Figure 8 panels: Urban Result | Urban GT]

Figure 8: Our results on the challenging Urban sequence from the Middlebury Flow benchmark [1] are amongst the best to date.

[Figure 9 panels: Backyard Result | Backyard GT]

Figure 9: Our results on the Backyard sequence from the Middlebury Flow benchmark [1] are one of the few (if any) that render the orange ball anything close to correctly.

[Figure 10 panels: (a) Mequon Image 10 | (b) Mequon Image 11 | (c) Mequon GT | (d) Mequon Without Kernel | (e) Mequon With Kernel]

Figure 10: Results with/without a Kernel Filter on the Mequon image pair from [1]. The input images (a) and (b) contain a shadow, to the left of the leftmost figure, that moves across the cloth background. Comparing with the ground-truth flow (c), our estimate of the flow without the kernel filter (d) is erroneous in that region. With the addition of a 1 × 1 kernel filter, our estimate of the flow (e) is much better.
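Why a 1 × 1 kernel filter helps here: with a single-tap kernel, Filter Flow can model a per-pixel multiplicative gain on top of the motion, which is exactly what a moving shadow induces. Schematically (our notation, not necessarily that of [5]):

    I_1(p) \approx g_p \, I_0(p + u_p),

where u_p is the flow at pixel p and the scalar gain g_p absorbs the brightness change under the shadow.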

[Figure 11 panels: (a) Centroid Smoothness | (b) 2nd Order Smoothness | (c) Affine Smoothness | (d) Filter Smoothness]

Figure 11: Comparison of MRF smoothness terms. 2nd order smoothness produces smoother disparities than centroid smoothness, but over-smooths discontinuities. Affine smoothness yields both smooth disparities and relatively clean discontinuities. Filter smoothness is not quite as smooth as affine, but does best at discontinuities, due to the implicit truncation. These results were computed using the filter flow stereo approach presented in Section 6.2 of [5].
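The precise forms of these terms are defined in [5]. As a rough illustration of the first two, writing u_p for the centroid of the filter at pixel p, centroid smoothness penalizes first differences across neighboring pixels, while 2nd order smoothness penalizes second differences along each image row and column:

    E_{\mathrm{centroid}} = \sum_{(p,q) \in \mathcal{N}} \| u_p - u_q \|,
    \qquad
    E_{\mathrm{2nd}} = \sum_{p} \| u_{p-1} - 2\, u_p + u_{p+1} \|.

Since the centroid u_p is linear in the filter coefficients, both penalties remain convex in the unknowns.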

[Figure 12 panels: (a) Input (1 of 2) | (b) GT Motion | (c) Result | (d) Flow Color Coding]

Figure 12: Linear affine alignment results. (a) The input image, which was synthetically warped with an affine warp to generate the other input. (b) The ground-truth motion. (c) The motion computed by our linear algorithm. (d) The color coding of the flow, from [1].
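The synthetic second input in (a) can be generated with a standard affine resampling step. A minimal sketch, assuming a grayscale float image; the specific affine parameters below are illustrative placeholders, not the warp used in [5]:

    import numpy as np
    from scipy.ndimage import affine_transform

    # Illustrative affine warp: a small zoom/rotation plus a sub-pixel translation.
    A = np.array([[1.02, 0.01],
                  [-0.01, 0.98]])  # linear part of the warp
    t = np.array([1.5, -0.5])      # translation (pixels)

    def warp_affine(image, A, t):
        """Resample: output(x) = input(A @ x + t), with bilinear interpolation."""
        return affine_transform(image, A, offset=t, order=1, mode='nearest')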

References

[1] S. Baker, D. Scharstein, J. Lewis, S. Roth, M. Black, and R. Szeliski. A database and evaluation methodology for optical flow. In Proc. ICCV, 2007.
[2] P. Favaro, S. Soatto, M. Burger, and S. Osher. Shape from defocus via diffusion. PAMI, 30(3):518-531, 2008.
[3] H. Hirschmuller and D. Scharstein. Evaluation of stereo matching costs on images with radiometric differences. PAMI, 31(9):1582-1599, 2009.
[4] D. Scharstein and R. Szeliski. A taxonomy and evaluation of dense two-frame stereo correspondence algorithms. IJCV, 47(1-3):7-42, 2002.
[5] S. Seitz and S. Baker. Filter Flow. In Proc. ICCV, 2009.
[6] M. Watanabe and S. Nayar. Rational filters for passive depth from defocus. IJCV, 27(3):203-225, 1998.