Technion - Computer Science Department - Technical Report CIS

Over-Parameterized Variational Optical Flow

Tal Nir, Alfred M. Bruckstein, Ron Kimmel
{taln, freddy, ron}@cs.technion.ac.il
Department of Computer Science
Technion - Israel Institute of Technology
Technion City, Haifa 32000, Israel

Abstract

We introduce a novel optical flow estimation process based on a spatio-temporal model with varying coefficients multiplying a set of basis functions at each pixel. Previous optical flow estimation methodologies did not use such an over-parameterized representation of the flow field, since the problem is ill-posed even without introducing any additional parameters: neighborhood based methods like Lucas-Kanade determine the flow at each pixel by constraining the flow to be constant within a small area, while modern variational methods represent the optic flow directly via its x and y components at each pixel. The benefit of over-parametrization becomes evident in the smoothness term, which, instead of directly penalizing changes in the optic flow, integrates a cost on the deviation from the assumed optic flow model. Previous variational optical flow techniques are special cases of the proposed method, used in conjunction with a constant flow basis function. Experimental results with the novel flow estimation process yielded significant improvements with respect to the best results published so far.

1. Introduction

Despite much research effort invested in addressing optical flow computation, it remains a challenging task in the field of computer vision. It is a necessary step in various applications like stereo matching, video compression, object tracking, and motion based segmentation. Hence, many approaches have been used for optical flow computation. Most methods assume brightness constancy and introduce further assumptions on the optical flow in order to deal with the inherent aperture problem. Lucas and Kanade [1] tackled the aperture problem by solving for the parameters of a constant motion model over image patches. Subsequently, Irani, Rousso, and Peleg [2, 3] used motion models in a region in conjunction with Lucas-Kanade in order to recover the camera ego-motion. Spline based motion models were suggested by Szeliski and Coughlan [4]. Horn and Schunck [5] sought smooth flow fields and were the first to use functional minimization for solving optical flow problems, employing mathematical tools from the calculus of variations. Their pioneering work put forth the basic

idea for solving dense optical flow fields over the whole image by introducing a quality functional with two terms: a data term penalizing for deviations from the brightness constancy equation, and a smoothness term penalizing for variations in the flow field. Several important improvements have been proposed following their work. Nagel [6, 7] proposed an oriented smoothness term that penalizes anisotropically for variations in the flow field according to the direction of the intensity gradients. Ben-Ari and Sochen [8] used a functional with two alignment terms composed of the flow and image gradients. Replacing quadratic penalty terms by robust statistics integral measures was proposed in [9, 10] in order to allow sharp discontinuities in the optical flow solution along motion boundaries. Extensions of the initial two-frame formulation to multi-frame formulations allowed spatio-temporal smoothness terms to replace the original spatial smoothness term [11, 12, 6, 13]. Brox, Bruhn, Papenberg, and Weickert [14] demonstrated the importance of using the exact brightness constancy equation instead of its linearized version, and added a gradient constancy term to the data term, which is important when the scene illumination changes over time. Recently, Amiaz and Kiryati [15], followed by Brox et al. [16], introduced a variational approach for joint optical flow computation and motion segmentation. In Farneback's work [17, 12], constant and affine motion models are employed; the motion model is assumed to act on a region, and optic-flow-based segmentation is performed by a region growing algorithm. Adiv [18] used optical flow in order to determine the motion and structure of several rigid objects moving in the scene.

In this paper, we propose to represent the optical flow vector at each pixel by different coefficients of the same motion model in a variational framework. Such a grossly over-parameterized representation has the advantage that the smoothness term may now penalize deviations from the motion model instead of penalizing the change of the flow. For example, with an affine motion model, if the flow in a region can be accurately represented by an affine model, then in this region there will be no flow regularization penalty, while in the usual setting there is a cost resulting from the changes in the flow induced by the affine model. This over-parameterized model thereby offers a richer means for optical flow representation. For segmentation purposes, the over-parametrization has the benefit of segmenting in a more appropriate space (e.g. the parameters of the affine flow) rather than in a simple constant motion model space.

The paper is organized as follows: Section 2 introduces the over-parameterized optical flow representation model and the corresponding functional, together with the resulting Euler-Lagrange equations, and describes examples of specific optical flow models. Section 3 discusses the numerical solution method. Section 4 describes the parameter settings and the experiments conducted to evaluate our method. Finally, Section 5 concludes the paper.

2. Over-parametrization model

We propose to use for optic flow computation, given an image video sequence, the general space-time optical flow model described as follows:

$$u = \sum_{i=1}^{n} A_i(x, y, t)\, \phi_i(x, y, t), \qquad v = \sum_{i=1}^{n} A_i(x, y, t)\, \eta_i(x, y, t), \qquad (1)$$

where we regard φ_i(x, y, t) and η_i(x, y, t), i = 1...n, as n basis functions of the flow model, while the A_i are space and time varying coefficients of the model. This is an obviously heavily over-parameterized model since, for more than two basis functions, there are typically many ways to express the same flow at any specific location. This redundancy, however, will be adequately resolved by a regularization assumption applied to the coefficients of the model. The coefficients and basis functions may be general functions of space-time. However, they play different roles in the functional minimization process:

1. The basis functions are selected a priori, while the coefficients are estimated.
2. The regularization is applied only to the coefficients.

In our model, appropriate basis functions are such that the true flow can be described by approximately piecewise constant coefficients, so that most of the local spatio-temporal changes of the flow are induced by changes in the basis functions and not by variations of the coefficients. This way, regularization applied to the coefficients (as will be described later on) becomes meaningful, since major parts of the optic flow variations can be described without changes of the coefficients. For example, rigid body motion has a specific optical flow structure which can explain the flow using only six parameters in locations with approximately constant depth.

Let us start from conventional optical flow functionals, which include a data term E_D(u, v) that measures the deviation from the optical flow equation, and a regularization (or smoothness) term E_S(u, v) that quantifies the smoothness of the flow field. The solution is the minimizer of the sum of the data and smoothness terms

$$E(u, v) = E_D(u, v) + \alpha E_S(u, v). \qquad (2)$$

The main difference between the various variational methods is in the choice of data and smoothness terms, and in the numerical methods used for solving for the minimizing flow field {u(x, y, t), v(x, y, t)}. For the data term we use the functional

$$E_D(u, v) = \int \Psi\big( (I(\mathbf{x} + \mathbf{w}) - I(\mathbf{x}))^2 \big)\, d\mathbf{x}, \qquad (3)$$

where $\mathbf{x} = (x, y, t)^T$ and $\mathbf{w} = (u, v, 1)^T$. This is the integral measure used, for example, in [14] (omitting the gradient constancy term). The function $\Psi(s^2) = \sqrt{s^2 + \varepsilon^2}$, by now widely used, induces an approximate L_1 metric on the data term for small ε.
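
To make the representation of Equation (1) and the penalty function concrete, here is a minimal sketch in Python/NumPy (the function names are ours, not from the paper): it evaluates the flow from stacks of coefficient fields and basis functions, and implements Ψ together with its derivative Ψ', which appears in the Euler-Lagrange equations below.

```python
import numpy as np

def psi(s2, eps=1e-3):
    # Robust penalty Psi(s^2) = sqrt(s^2 + eps^2): approximately L1 for small eps.
    return np.sqrt(s2 + eps**2)

def dpsi(s2, eps=1e-3):
    # Derivative with respect to s^2: Psi'(s^2) = 1 / (2 sqrt(s^2 + eps^2)).
    return 0.5 / np.sqrt(s2 + eps**2)

def flow_from_coefficients(A, phi, eta):
    """Equation (1): u = sum_i A_i * phi_i, v = sum_i A_i * eta_i.

    A, phi, eta are (n, H, W) stacks of coefficient fields and basis
    functions sampled on the pixel grid.
    """
    return np.sum(A * phi, axis=0), np.sum(A * eta, axis=0)
```

Storing A, φ, η as (n, H, W) arrays keeps the per-pixel independence of the coefficients explicit.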

The smoothness term used in [14] is given by

$$E_S(u, v) = \int \Psi\big( |\nabla u|^2 + |\nabla v|^2 \big)\, d\mathbf{x}, \qquad (4)$$

where $\nabla f := (f_x, f_y, \omega_t f_t)^T$ denotes the weighted spatio-temporal gradient, and ω_t indicates the weight of the temporal axis relative to the spatial axes in the context of the smoothness term. Inserting the over-parameterized model into the data term, we have

$$E_D = \int \Psi\Big( \big( I\big(x + \sum_i A_i \phi_i,\; y + \sum_i A_i \eta_i,\; t + 1\big) - I(x, y, t) \big)^2 \Big)\, d\mathbf{x}. \qquad (5)$$

The proposed smoothness term penalizes for spatio-temporal changes in the coefficient functions,

$$E_S = \int \Psi\Big( \sum_{i=1}^{n} |\nabla A_i|^2 \Big)\, d\mathbf{x}. \qquad (6)$$

Notice that in Equation (6), constant parameters of the model can describe changes of the flow field according to the chosen model, as described in Equation (1) (e.g. Euclidean, affine, etc.), without any smoothness penalty, whereas in Equation (4), any change in the flow field is penalized by the smoothness term.

2.1 Euler-Lagrange equations

For an over-parametrization model with n coefficients, there are n Euler-Lagrange equations, one for each coefficient. The Euler-Lagrange equation for A_q (q = 1,..., n) is given by

$$\Psi'\big( I_z^2 \big)\, I_z \big( I_x^+ \phi_q + I_y^+ \eta_q \big) - \alpha\, \mathrm{div}\Big( \Psi'\Big( \sum_{i=1}^{n} |\nabla A_i|^2 \Big)\, \tilde\nabla A_q \Big) = 0, \qquad (7)$$

where $\tilde\nabla f := (f_x, f_y, \omega_t^2 f_t)^T$, and

$$I_x^+ := I_x\big(x + \sum_i A_i \phi_i,\; y + \sum_i A_i \eta_i,\; t + 1\big),$$
$$I_y^+ := I_y\big(x + \sum_i A_i \phi_i,\; y + \sum_i A_i \eta_i,\; t + 1\big),$$
$$I_z := I\big(x + \sum_i A_i \phi_i,\; y + \sum_i A_i \eta_i,\; t + 1\big) - I(x, y, t). \qquad (8)$$
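
The warped quantities of Equation (8) are straightforward to compute with an off-the-shelf interpolator. Below is a sketch for a single frame pair, assuming SciPy is available and using bilinear interpolation for the warp (the paper does not specify the interpolation used at this step):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warped_derivatives(I0, I1, u, v):
    """Compute I_x^+, I_y^+ and I_z of Equation (8) for one frame pair.

    I0, I1: frames at times t and t+1; u, v: current flow estimate (H, W).
    """
    H, W = I0.shape
    y, x = np.mgrid[0:H, 0:W].astype(np.float64)
    coords = [y + v, x + u]              # sample locations in frame t+1
    I1y, I1x = np.gradient(I1)           # spatial derivatives of I(., t+1)
    Ix_plus = map_coordinates(I1x, coords, order=1, mode='nearest')
    Iy_plus = map_coordinates(I1y, coords, order=1, mode='nearest')
    Iz = map_coordinates(I1, coords, order=1, mode='nearest') - I0
    return Ix_plus, Iy_plus, Iz
```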

2.2 Affine over-parametrization model

The affine model is a good approximation of the flow in large regions in many real world scenarios. We therefore first focus on the affine model for our method. Note that we do not force the affine model over image patches, as in typical image registration techniques, since each pixel has its own independent affine model parameters. The affine model has n = 6 parameters, where

$$\phi_1 = 1;\; \phi_2 = \hat x;\; \phi_3 = \hat y;\; \phi_4 = 0;\; \phi_5 = 0;\; \phi_6 = 0;$$
$$\eta_1 = 0;\; \eta_2 = 0;\; \eta_3 = 0;\; \eta_4 = 1;\; \eta_5 = \hat x;\; \eta_6 = \hat y, \qquad (9)$$

with the normalized coordinates

$$\hat x = \frac{\rho\,(x - x_0)}{x_0}, \qquad \hat y = \frac{\rho\,(y - y_0)}{y_0}, \qquad (10)$$

where x_0 and y_0 are half the image width and height, respectively. ρ is a constant that would have no meaning in an unconstrained optimization such as the Lucas-Kanade method. In our variational formulation, ρ is a parameter which weighs the penalty of the x̂ and ŷ coefficients relative to the coefficient of the constant term in the regularization. An equivalent alternative is to add a different weight for each coefficient in Equation (6).

2.3 Rigid motion model

The optic flow of an object moving in rigid motion, or of a static scene viewed by a moving camera, is described by

$$u = \theta_1 + \theta_3 \hat x + \Omega_1 \hat x \hat y - \Omega_2 (1 + \hat x^2) + \Omega_3 \hat y,$$
$$v = \theta_2 + \theta_3 \hat y + \Omega_1 (1 + \hat y^2) - \Omega_2 \hat x \hat y - \Omega_3 \hat x, \qquad (11)$$

where $(\theta_1, \theta_2, \theta_3)^T$ is the translation vector divided by the z component (depth) and $(\Omega_1, \Omega_2, \Omega_3)^T$ is the rotation vector. Here, the number of coefficients is n = 6. The coefficients A_i represent the translation and rotation variables,

$$A_1 = \theta_1;\; A_2 = \theta_2;\; A_3 = \theta_3;\; A_4 = \Omega_1;\; A_5 = \Omega_2;\; A_6 = \Omega_3,$$

and the basis functions are

$$\phi_1 = 1;\; \phi_2 = 0;\; \phi_3 = \hat x;\; \phi_4 = \hat x \hat y;\; \phi_5 = -(1 + \hat x^2);\; \phi_6 = \hat y;$$
$$\eta_1 = 0;\; \eta_2 = 1;\; \eta_3 = \hat y;\; \eta_4 = 1 + \hat y^2;\; \eta_5 = -\hat x \hat y;\; \eta_6 = -\hat x.$$

The application of similar constraints on the optical flow of rigid motion was first introduced in [18]. However, in [18], the optical flow computation is a pre-processing step followed by rigid structure and motion estimation, while our formulation uses the rigid motion model directly within the solution of the optical flow.
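
For concreteness, here is a sketch that builds the basis-function stacks of Equations (9) and (11) on the pixel grid, using the normalized coordinates of Equation (10); the helper names are ours:

```python
import numpy as np

def normalized_grid(H, W, rho=1.0):
    # Equation (10): x_hat = rho*(x - x0)/x0, y_hat = rho*(y - y0)/y0.
    x0, y0 = W / 2.0, H / 2.0
    y, x = np.mgrid[0:H, 0:W].astype(np.float64)
    return rho * (x - x0) / x0, rho * (y - y0) / y0

def affine_basis(H, W, rho=1.0):
    # Equation (9): six basis functions of the affine model.
    xh, yh = normalized_grid(H, W, rho)
    one, zero = np.ones((H, W)), np.zeros((H, W))
    phi = np.stack([one, xh, yh, zero, zero, zero])
    eta = np.stack([zero, zero, zero, one, xh, yh])
    return phi, eta

def rigid_basis(H, W, rho=1.0):
    # Basis functions read off Equation (11) for the rigid motion model.
    xh, yh = normalized_grid(H, W, rho)
    one, zero = np.ones((H, W)), np.zeros((H, W))
    phi = np.stack([one, zero, xh, xh * yh, -(1 + xh**2), yh])
    eta = np.stack([zero, one, yh, 1 + yh**2, -xh * yh, -xh])
    return phi, eta
```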

2.4 Pure translation motion model

A special case of the rigid motion scenario arises when we limit the motion to simple translation. In this case we have

$$u = \theta_1 + \theta_3 \hat x, \qquad v = \theta_2 + \theta_3 \hat y. \qquad (12)$$

The Euler-Lagrange equations of the rigid motion model still apply in this case when considering only the first n = 3 coefficients and corresponding basis functions.

2.5 Constant motion model

The constant motion model includes only n = 2 coefficients, with

$$\phi_1 = 1;\; \phi_2 = 0; \qquad \eta_1 = 0;\; \eta_2 = 1 \qquad (13)$$

as basis functions. For this model there are two coefficients to solve for, A_1 and A_2, which are the optic flow components u and v, respectively. In this case we obtain the familiar variational formulation in which we solve directly for the u and v components. This fact can also be seen by substituting Equation (13) into Equation (7), which yields the Euler-Lagrange equations used, for example, in [14] (without the gradient constancy term).

3. Numerical scheme

In our numerical scheme we use a multi-resolution solver, downsampling the image data by a standard factor of 0.5 along the x and y axes between the different resolutions. The solution is interpolated from the coarse to the fine resolution. Similar techniques to overcome the intrinsic non-convex nature of the resulting optimization problem were used, for example, in [14], which, to the best of our knowledge, reports the most accurate flow field results published so far for the Yosemite without clouds sequence; we therefore compare our model and results to that work. At the lowest resolution, we start with the initial guess A_i = 0, i = 1,..., n. From this guess, the solution is iteratively refined, and the coefficients are interpolated to become the initial guess at the next higher resolution; the process is repeated until the solution at the finest resolution is reached. At each resolution, three nested loops of iterations are applied.
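
A skeleton of this coarse-to-fine strategy might look as follows; `refine` stands for the three nested iteration loops described next, and the zoom-based resampling is an assumption of this sketch rather than the paper's exact interpolation:

```python
import numpy as np
from scipy.ndimage import zoom

def solve_pyramid(I0, I1, n, refine, levels=4, factor=0.5):
    """Coarse-to-fine solver: A_i = 0 at the coarsest level, refine,
    then interpolate the coefficients as the next level's initial guess."""
    pyramid = [(I0, I1)]
    for _ in range(levels - 1):
        I0, I1 = zoom(I0, factor), zoom(I1, factor)    # downsample by 0.5
        pyramid.append((I0, I1))
    A = np.zeros((n,) + pyramid[-1][0].shape)          # initial guess A_i = 0
    for J0, J1 in reversed(pyramid):                   # coarse to fine
        scale = (J0.shape[0] / A.shape[1], J0.shape[1] / A.shape[2])
        A = np.stack([zoom(a, scale) for a in A])      # interpolate coefficients
        A = refine(J0, J1, A)                          # inner iteration loops
    return A
```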

At the outer loop, indexed by k, we solve

$$\Psi'\big( (I_z^{k+1})^2 \big)\, I_z^{k+1} \big( (I_x^+)^k \phi_q + (I_y^+)^k \eta_q \big) - \alpha\, \mathrm{div}\Big( \Psi'\Big( \sum_{i=1}^{n} |\nabla A_i^{k+1}|^2 \Big)\, \tilde\nabla A_q^{k+1} \Big) = 0, \qquad (14)$$

while inserting the first terms of the Taylor expansion

$$I_z^{k+1} \approx I_z^k + (I_x^+)^k \sum_{i=1}^{n} dA_i^k \phi_i + (I_y^+)^k \sum_{i=1}^{n} dA_i^k \eta_i, \qquad (15)$$

where $A_i^{k+1} = A_i^k + dA_i^k$. The second, inner loop of fixed point iterations, with iteration variable l, deals with the nonlinearity of Ψ':

$$(\Psi')^{k,l}_{\mathrm{Data}}\, d_q \Big( I_z^k + \sum_{i=1}^{n} d_i\, dA_i^{k,l+1} \Big) - \alpha\, \mathrm{div}\Big( (\Psi')^{k,l}_{\mathrm{Smooth}}\, \tilde\nabla\big( A_q^k + dA_q^{k,l+1} \big) \Big) = 0, \qquad (16)$$

where

$$d_m := (I_x^+)^k \phi_m + (I_y^+)^k \eta_m, \qquad (17)$$

$$(\Psi')^{k,l}_{\mathrm{Data}} := \Psi'\Big( \big( I_z^k + \sum_{i=1}^{n} dA_i^{k,l} d_i \big)^2 \Big), \qquad (18)$$

$$(\Psi')^{k,l}_{\mathrm{Smooth}} := \Psi'\Big( \sum_{i=1}^{n} \big| \nabla\big( A_i^k + dA_i^{k,l} \big) \big|^2 \Big). \qquad (19)$$

At this point we have, for each pixel, n linear equations with n unknowns: the increments of the coefficients of the model parameters. The linear system of equations is solved over the sequence volume using Gauss-Seidel iterations; each Gauss-Seidel iteration involves the solution of n linear equations for each pixel, as described by Equation (16).
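
As an illustration, a single Gauss-Seidel sweep over a purely spatial (2D smoothness) field could be sketched as below. The 4-neighbor discretization of the divergence term is a standard choice we assume here, not necessarily the paper's exact stencil, and the temporal coupling (ω_t) is omitted for brevity:

```python
import numpy as np

def gauss_seidel_sweep(dA, A, d, Iz, psi_data, psi_smooth, alpha):
    """One Gauss-Seidel sweep solving Equation (16) pointwise.

    dA, A, d : (n, H, W) increments, coefficients, and the d_m of (17);
    Iz : (H, W) brightness residual;
    psi_data, psi_smooth : (H, W) weights of Equations (18)-(19).
    """
    n, H, W = A.shape
    for yy in range(H):
        for xx in range(W):
            dp = d[:, yy, xx]
            wsum, nbr = 0.0, np.zeros(n)
            for ny, nx in ((yy - 1, xx), (yy + 1, xx), (yy, xx - 1), (yy, xx + 1)):
                if 0 <= ny < H and 0 <= nx < W:
                    w = 0.5 * (psi_smooth[yy, xx] + psi_smooth[ny, nx])
                    wsum += w
                    nbr += w * (A[:, ny, nx] + dA[:, ny, nx] - A[:, yy, xx])
            # Per-pixel n x n system: (Psi'_D d d^T + alpha*wsum*I) dA = rhs.
            M = psi_data[yy, xx] * np.outer(dp, dp) + alpha * wsum * np.eye(n)
            rhs = -psi_data[yy, xx] * Iz[yy, xx] * dp + alpha * nbr
            dA[:, yy, xx] = np.linalg.solve(M, rhs)
    return dA
```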

4. Experimental Results

In this section we compare our optical flow computation results to the best published results. We use the standard measures of Average Angular Error (AAE) and Standard Deviation (STD). Our results are measured over all the pixels (100% dense).

4.1 Parameter settings

The parameter settings were determined experimentally by an optimization process. We found slightly different parameter settings for the 2D and 3D smoothness cases, as shown in Table 1, where 2D refers to a smoothness term with only spatial derivatives, while 3D refers to a spatio-temporal smoothness term that couples the solution of the optic flow field at different frames (two-frame versus multi-frame formulation). σ denotes the standard deviation of the 2D Gaussian pre-filter used for pre-processing the image sequence. We used 60 iterations of the outer loop in the 3D method and 80 iterations in the 2D method, 5 inner loop iterations, and 10 Gauss-Seidel iterations.

Table 1. Parameter settings.

Method                 | α    | ρ     | ω_t²  | ε     | σ
Affine (2D)            | 58.3 | 0.858 | 0     | 0.001 | 0.8
Affine (3D)            | 32.9 | 1.44  | 0.474 | 0.001 | 0.8
Rigid motion (3D)      | 54.6 | 1.42  | 0.429 | 0.001 | 0.8
Pure translation (3D)  | 23.6 | 1.22  | 0.688 | 0.001 | 0.8

4.2 Yosemite sequence

We applied our method to the Yosemite sequence without clouds, with four resolution levels. Table 2 shows our results relative to the best published ones. Our method achieves a better reconstructed solution compared to all other reported results for the Yosemite sequence without clouds. Figures 4 and 5 show the solution of the affine parameters, from which one can observe the trends of the flow changes with respect to the x and y axes. The depth discontinuities of the scene are sharp and evident in all the parameters. Figure 1 shows the ground truth, Figure 2 shows the image of the angular errors, and Figure 3 shows the histogram of the angular errors, both demonstrating the typically lower angular errors of our method. Table 3 shows the noise sensitivity of our method, coupling the affine over-parameterized model with our previous work on joint optic flow computation and denoising presented in [19]. This coupling provides a model with robust behavior under noise, which obtains a better AAE measure under all noise levels compared to the best published results.

Table 2. Yosemite sequence without clouds.

Method                                      | AAE   | STD
Papenberg et al., 2D smoothness [20]        | 1.64° | 1.43°
Brox et al., 2D smoothness [14]             | 1.59° | 1.39°
Memin-Perez [21]                            | 1.58° | 1.21°
Weickert et al. [22]                        | 1.46° | 1.50°
Farneback [17]                              | 1.40° | 2.57°
Liu et al. [23]                             | 1.39° | 2.83°
Our method, affine, 2D smoothness           | 1.18° | 1.31°
Govindu [24]                                | 1.16° | 1.17°
Farneback [12]                              | 1.14° | 2.14°
Papenberg et al., 3D smoothness [20]        | 0.99° | 1.17°
Brox et al., 3D smoothness [14]             | 0.98° | 1.17°
Our method, rigid motion, 3D smoothness     | 0.96° | 1.25°
Our method, affine, 3D smoothness           | 0.91° | 1.18°
Our method, pure translation, 3D smoothness | 0.85° | 1.18°
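
The AAE and STD figures in these tables follow the standard angular-error convention of Barron et al., measuring the angle between the space-time direction vectors (u, v, 1) and (u_gt, v_gt, 1); a minimal sketch:

```python
import numpy as np

def angular_error(u, v, u_gt, v_gt):
    """Per-pixel angular error (degrees) between (u, v, 1) and ground truth."""
    num = u * u_gt + v * v_gt + 1.0
    den = np.sqrt((u**2 + v**2 + 1.0) * (u_gt**2 + v_gt**2 + 1.0))
    ang = np.degrees(np.arccos(np.clip(num / den, -1.0, 1.0)))
    return ang.mean(), ang.std()   # AAE and STD over all (100%) pixels
```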

Table 3. Yosemite without clouds - noise sensitivity results.

σ_n | Our method, affine | Our method, affine, coupled with [19] | Results reported in [14]
0   | 0.91° ± 1.18°      | 0.93° ± 1.20°                         | 0.98° ± 1.17°
20  | 1.59° ± 1.67°      | 1.52° ± 1.48°                         | 1.63° ± 1.39°
40  | 2.45° ± 2.29°      | 2.02° ± 1.76°                         | 2.40° ± 1.71°

Fig. 1. Ground truth optic flow for the Yosemite sequence.

Fig. 2. Images of the angular error; white indicates zero error and black indicates an error of 3 degrees or above. Left - our method with the pure translation model. Right - the method of [14], optic flow field courtesy of the authors.

Fig. 3. Histogram of the angular errors (number of pixels versus angular error in degrees). Solid - optic flow calculated with the pure translation model. Dashed - optic flow obtained in [14], courtesy of the authors.

Fig. 4. Solution of the affine parameters for the Yosemite sequence - from left to right: A_1, A_2, A_3.

Fig. 5. Solution of the affine parameters for the Yosemite sequence - from left to right: A_4, A_5, A_6.

4.3 Synthetic piecewise constant affine flow example

Consider a piecewise affine flow over an image of size 100 × 100 with given ground truth (see Figures 7 and 10). For x < 40,

$$u = 0.8 - 1.6(x - 50)/50 + 0.8(y - 50)/50,$$
$$v = 1.0 + 0.65(x - 50)/50 - 0.35(y - 50)/50.$$

For x ≥ 40,

$$u = 0.48 - 0.36(x - 50)/50 - 0.6(y - 50)/50,$$
$$v = 0.3 - 0.75(x - 50)/50 - 0.75(y - 50)/50.$$

The two images used for this test were obtained by sampling a grid of size 100 × 100 from frame 8 of the Yosemite sequence (denoted $I_{yosemite}$): the second image is $I_2(x, y) = I_{yosemite}(x + \Delta x, y + \Delta y)$, and the first image is sampled at warped locations, $I_1(x, y) = I_{yosemite}(x + \Delta x + u, y + \Delta y + v)$, using bilinear interpolation. The constant shift values are Δx = 79 and Δy = 69. The two images are shown in Figure 6.

Table 4 shows that our method with affine over-parametrization outperforms the method of [14]. Since the true flow is not piecewise constant, the smoothness term in [14] penalizes for changes from the constant flow model, whereas the affine over-parametrization model solves the optimization problem in the affine space, in which it accurately finds the piecewise constant affine parameters solution of the problem, as shown in Figures 8 and 9. One can notice that the discontinuity at pixel x = 40 is well preserved.
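
This test pair is easy to reproduce; here is a sketch, assuming frame 8 of the Yosemite sequence is available as a NumPy array (helper names such as `make_test_pair` are ours):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def ground_truth_flow(H=100, W=100):
    # The piecewise affine flow defined above, split at x = 40.
    y, x = np.mgrid[0:H, 0:W].astype(np.float64)
    xs, ys = (x - 50) / 50, (y - 50) / 50
    u = np.where(x < 40, 0.8 - 1.6 * xs + 0.8 * ys, 0.48 - 0.36 * xs - 0.6 * ys)
    v = np.where(x < 40, 1.0 + 0.65 * xs - 0.35 * ys, 0.3 - 0.75 * xs - 0.75 * ys)
    return u, v

def make_test_pair(I_yosemite, dx=79, dy=69):
    u, v = ground_truth_flow()
    y, x = np.mgrid[0:100, 0:100].astype(np.float64)
    # Second image: shifted crop; first image: sampled at warped locations
    # with bilinear interpolation (order=1).
    I2 = map_coordinates(I_yosemite, [y + dy, x + dx], order=1)
    I1 = map_coordinates(I_yosemite, [y + dy + v, x + dx + u], order=1)
    return I1, I2, u, v
```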

The resulting optical flow, shown in Figure 10, accurately matches the ground truth.

Fig. 6. The two images: Left - first frame. Right - second frame.

Fig. 7. Synthetic piecewise affine flow - ground truth. Left - u component. Right - v component.

5. Summary

We introduced a novel over-parameterized variational framework for accurately solving the optical flow problem. The flow field is represented by a general space-time model. The proposed approach is unique in that each pixel has the freedom to choose its own set of model parameters. Thereby, the decision on the discontinuity locations of the model parameters is resolved within the variational framework for each sequence. Previous variational approaches can be regarded as a special case of our method in which one chooses a constant motion model. In most scenarios, the optical flow is better represented by piecewise constant affine model parameters than by a piecewise constant flow; therefore, compared to existing variational techniques, the smoothness penalty term is better

Table 4. Synthetic piecewise affine flow test.

Method                                    | AAE   | STD
Our implementation of the method of [14] | 1.48° | 2.28°
Our method with the affine flow model    | 0.88° | 1.67°

Fig. 8. Solution of the affine parameters - from left to right: A_1, A_2, A_3.

Fig. 9. Solution of the affine parameters - from left to right: A_4, A_5, A_6.

Fig. 10. Ground truth for the synthetic test (left) and computed flow by the affine over-parametrization model (right).

modelled with the proposed over-parametrization models, as demonstrated by our experiments.

References

1. Lucas, B., Kanade, T.: An iterative image registration technique with an application to stereo vision. In: Proc. 7th International Joint Conference on Artificial Intelligence (1981)
2. Irani, M., Rousso, B., Peleg, S.: Robust recovery of ego-motion. In: Computer Analysis of Images and Patterns (1993) 371-378
3. Irani, M., Rousso, B., Peleg, S.: Recovery of ego-motion using region alignment. IEEE Transactions on Pattern Analysis and Machine Intelligence 19(3) (1997) 268-272
4. Szeliski, R., Coughlan, J.: Hierarchical spline-based image registration. International Journal of Computer Vision 22(3) (1997) 199-218
5. Horn, B.K.P., Schunck, B.: Determining optical flow. Artificial Intelligence 17 (1981)
6. Nagel, H.H.: Extending the oriented smoothness constraint into the temporal domain and the estimation of derivatives of optical flow. In: Proc. ECCV, Lecture Notes in Computer Science, Volume 427, Springer, Berlin (1990)
7. Nagel, H.H., Enkelmann, W.: An investigation of smoothness constraints for the estimation of displacement vector fields from image sequences. IEEE Trans. on Pattern Analysis and Machine Intelligence 8 (1986)
8. Ben-Ari, R., Sochen, N.: A general framework and new alignment criterion for dense optical flow. In: Proc. IEEE Conference on Computer Vision and Pattern Recognition, Volume 1 (2006) 529-536
9. Black, M.J., Anandan, P.: The robust estimation of multiple motions: parametric and piecewise smooth flow fields. Computer Vision and Image Understanding 63(1) (1996)
10. Deriche, R., Kornprobst, P., Aubert, G.: Optical flow estimation while preserving its discontinuities: a variational approach. In: Proc. Second Asian Conference on Computer Vision, Volume 2 (1995) 290-295
11. Black, M., Anandan, P.: Robust dynamic motion estimation over time. In: Proc. IEEE Computer Society Conference on Computer Vision and Pattern Recognition, IEEE Computer Society Press (1991) 292-302
12. Farneback, G.: Very high accuracy velocity estimation using orientation tensors, parametric motion and simultaneous segmentation of the motion field. In: Proc. 8th International Conference on Computer Vision, Volume 1, IEEE Computer Society Press (2001)
13. Weickert, J., Schnörr, C.: Variational optic flow computation with a spatio-temporal smoothness constraint. Journal of Mathematical Imaging and Vision 14(3) (2001) 245-255
14. Brox, T., Bruhn, A., Papenberg, N., Weickert, J.: High accuracy optical flow estimation based on a theory for warping. In: Proc. ECCV 2004, Prague, Czech Republic, T. Pajdla, J. Matas (Eds.), Lecture Notes in Computer Science, Springer, Berlin (2004) 25-36
15. Amiaz, T., Kiryati, N.: Dense discontinuous optical flow via contour-based segmentation. In: Proc. International Conference on Image Processing, Volume 3 (Sept. 2005)

16. Brox, T., Bruhn, A., Weickert, J.: Variational motion segmentation with level sets. In: Proc. ECCV 2006, Part 1, LNCS 3951 (2006)
17. Farneback, G.: Fast and accurate motion estimation using orientation tensors and parametric motion models. In: Proc. 15th International Conference on Pattern Recognition (2000)
18. Adiv, G.: Determining three-dimensional motion and structure from optical flow generated by several moving objects. IEEE Trans. on Pattern Analysis and Machine Intelligence (PAMI) 7(4) (1985) 384-401
19. Nir, T., Kimmel, R., Bruckstein, A.M.: Variational approach for joint optic-flow computation and video restoration. Technical Report CIS-2005-03, Department of Computer Science, Technion
20. Papenberg, N., Bruhn, A., Brox, T., Didas, S., Weickert, J.: Highly accurate optic flow computation with theoretically justified warping. International Journal of Computer Vision (to appear)
21. Memin, E., Perez, P.: Hierarchical estimation and segmentation of dense motion fields. International Journal of Computer Vision 46(2) (2002) 129-155
22. Bruhn, A., Weickert, J., Schnörr, C.: Lucas/Kanade meets Horn/Schunck: combining local and global optic flow methods. International Journal of Computer Vision 61 (2005)
23. Liu, H., Chellappa, R., Rosenfeld, A.: Accurate dense optical flow estimation using adaptive structure tensors and a parametric model. IEEE Transactions on Image Processing 12 (2003)
24. Govindu, V.M.: Revisiting the brightness constraint: probabilistic formulation and algorithms. In: Proc. ECCV 2006, Part 3, LNCS 3953 (2006)