The 2D/3D Differential Optical Flow


The 2D/3D Differential Optical Flow
Prof. John Barron, Dept. of Computer Science, University of Western Ontario, London, Ontario, Canada, N6A 5B7
Email: barron@csd.uwo.ca, Phone: 519-661-2111 x86896
Canadian Conference on Computer and Robot Vision (CRV2009), Kelowna, British Columbia, May 24th, 2009

2D Optical Flow: An Example

Figure 1: (a) The middle frame from the Yosemite Fly-Through sequence and (b) its correct flow field.

2D Optical Flow: An Overview

Figure 2: V is the 3D velocity of the 3D point P(t) = (X, Y, Z). v = (u, v) is the 2D image of V, i.e. v is the perspective projection of V. If P(t) moves with displacement Vδt to P(t′) from time t to time t′, then its image Y(t) = (x, y, f) moves with displacement vδt to Y(t′) = (x′, y′, f) from time t to time t′. f is the sensor focal length. v is known as the image velocity or optical flow.

The Image Velocity Equations

The 3D instantaneous velocity V = (U, V, W) of a 3D point P = (X, Y, Z), where the sensor moves relative to P, is

V = (U, V, W) = -T - ω × P,   (1)

where T = (T_0, T_1, T_2) is the instantaneous sensor translation and ω = (ω_0, ω_1, ω_2) is the sensor's instantaneous rotation. The components of V can be written out in full as:

U = -T_0 - ω_1 Z + ω_2 Y   (2)
V = -T_1 - ω_2 X + ω_0 Z   (3)
W = -T_2 - ω_0 Y + ω_1 X   (4)

For a rigid object under perspective projection we can write

x = fX/Z,  y = fY/Z  and  z = fZ/Z = f,   (5)

where f is the focal length of the sensor. The time derivatives of (x, y, z), (ẋ, ẏ, ż) = (u, v, 0), yield the two non-zero components of instantaneous image velocity:

u = ẋ = f(ẊZ − XŻ)/Z²   (6)
v = ẏ = f(ẎZ − YŻ)/Z²   (7)

Substituting equations (2), (3) and (4) into equations (6) and (7) and using the definition of perspective projection in equation (5), we obtain the standard image velocity equations (see Longuet-Higgins and Prazdny, Proc. R. Soc. London B208, 1981):

u = (1/Z)(−fT_0 + xT_2) + ω_0(xy/f) − ω_1(f + x²/f) + ω_2 y

and

v = (1/Z)(−fT_1 + yT_2) + ω_0(f + y²/f) − ω_1(xy/f) − ω_2 x

Given the sensor's instantaneous 3D translation and 3D rotation, plus an image point's 2D coordinates (x, y) and its 3D depth Z [hence we know X and Y as well, via equations (5)], we can specify the correct 2D image velocity (u, v) for this image point. Conversely, if we can measure (u, v) values at image points, we can recover the 3D sensor translation scaled by 3D depth, and the 3D sensor rotation. We can also recover relative depth at image points in a scene. So, while we cannot know from a single monocular sensor that object 1 is 4m away while object 2 is 8m away (absolute depth), we can tell that object 1 is twice as close as object 2 (relative depth). Of course, with a binocular sensor setup we can recover absolute motion and depth parameters.
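As a quick numerical check, the two image velocity equations can be evaluated directly. This is a minimal sketch (the function name and test values are my own, not from the talk):

```python
def image_velocity(x, y, Z, T, omega, f):
    """Longuet-Higgins/Prazdny image velocity (u, v) at image point (x, y)
    with depth Z, sensor translation T = (T0, T1, T2),
    rotation omega = (w0, w1, w2) and focal length f."""
    T0, T1, T2 = T
    w0, w1, w2 = omega
    u = (-f * T0 + x * T2) / Z + w0 * (x * y / f) - w1 * (f + x**2 / f) + w2 * y
    v = (-f * T1 + y * T2) / Z + w0 * (f + y**2 / f) - w1 * (x * y / f) - w2 * x
    return u, v

# Pure translation along the optical axis gives a radially expanding field:
u, v = image_velocity(x=1.0, y=1.0, Z=4.0, T=(0.0, 0.0, 2.0),
                      omega=(0.0, 0.0, 0.0), f=1.0)
```

Note that the translational terms are divided by Z while the rotational terms are not, which is exactly why only depth-scaled translation (and relative depth) is recoverable from measured flow.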

The 2D Aperture Problem

We can estimate optical flow locally from spatial and temporal image intensity derivatives measured in local neighbourhoods. The aperture problem (Marr and Ullman 1981) is relevant when image velocities are measured locally.

Figure 3: n̂ is the unit normal vector in the normal velocity direction, i.e. it is perpendicular to the local contour structure and has length 1. Thus the normal velocity v_n is the component of the full velocity v projected in the normal direction n̂: v_n = (v · n̂)n̂.

The Significance of the Aperture Problem

So, due to the aperture problem, we can measure only v_n, and not v_t (the tangential velocity) or v (the full velocity), locally. The aperture problem does not depend on the size of the aperture but rather on the local contour structure: is there enough local structure to recover the full image velocity?

Figure 4: We can recover full image velocity in neighbourhoods of corner points but not in neighbourhoods where the local contour is straight.

The Motion Constraint Equation

Assume the intensity pattern at I(x, y, t) moves by (δx, δy) in time δt to I(x + δx, y + δy, t + δt). Since I(x, y, t) and I(x + δx, y + δy, t + δt) are images of the same point:

I(x, y, t) = I(x + δx, y + δy, t + δt).

We can perform a 1st order Taylor series expansion about I(x, y, t):

I(x + δx, y + δy, t + δt) = I(x, y, t) + (∂I/∂x)δx + (∂I/∂y)δy + (∂I/∂t)δt + H.O.T.

The Motion Constraint Equation, Continued

Because I(x, y, t) = I(x + δx, y + δy, t + δt), we obtain:

(∂I/∂x)δx + (∂I/∂y)δy + (∂I/∂t)δt = 0.

Dividing by δt (and noting δt/δt = 1):

I_x(δx/δt) + I_y(δy/δt) + I_t = 0,

so

I_x u + I_y v + I_t = 0,  i.e.  ∇I · v + I_t = 0.

Here u = δx/δt and v = δy/δt are the x and y components of image velocity, and I_x = ∂I/∂x, I_y = ∂I/∂y and I_t = ∂I/∂t are image intensity derivatives at (x, y, t).
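The constraint can be sanity-checked numerically: for a smoothly translating pattern, central-difference estimates of I_x, I_y and I_t should nearly satisfy I_x u + I_y v + I_t = 0. A small sketch under assumed test values (the sinusoidal pattern and velocities are my own):

```python
import numpy as np

# A sinusoidal pattern translating with velocity (u, v) = (0.3, 0.1).
u_true, v_true = 0.3, 0.1
X, Y = np.meshgrid(np.arange(64, dtype=float), np.arange(64, dtype=float))

def frame(t):
    return np.sin(0.2 * (X - u_true * t) + 0.3 * (Y - v_true * t))

I = frame(0)
Iy, Ix = np.gradient(I)            # np.gradient returns d/drow (y), d/dcol (x)
It = (frame(1) - frame(-1)) / 2.0  # central temporal difference

# Motion constraint residual (interior pixels, where differences are central):
residual = (Ix * u_true + Iy * v_true + It)[1:-1, 1:-1]
```

The residual is small but not exactly zero: finite differences only approximate the derivatives in the Taylor expansion.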

Motion Constraint Line

∇I · v + I_t = 0 is 1 equation in 2 unknowns (a line); the correct velocity is some unknown point on this line. The velocity with the smallest magnitude is the normal velocity v_n.

Figure 5: The motion constraint line. v_n = (u_nx, v_ny) is the velocity with the smallest magnitude that is on this line.

The Relationship between Normal Velocity and the Motion Constraint Equation

Consider a straight contour moving up/down and to the right with true full velocity v. Since it is viewed through an aperture, we can only see the motion perpendicular to the contour. Since the normal velocity is the smallest of all potential velocities, it is the point on the motion constraint line closest to the origin. We can compute v_n (the magnitude of v_n) and n̂ (its direction), and hence the normal velocity v_n = v_n n̂, very simply using the motion constraint equation ∇I · v = −I_t and the fact that v · n̂ = v_n:

∇I · v = −I_t
(∇I/‖∇I‖) · v = −I_t/‖∇I‖
n̂ · v = v_n,

where n̂ = ∇I/‖∇I‖ and v_n = −I_t/‖∇I‖.
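In code, the normal velocity follows directly from the three derivatives at a pixel. A minimal sketch (the function name and numbers are my own):

```python
import numpy as np

def normal_velocity(Ix, Iy, It):
    """n_hat = grad(I)/|grad(I)|, v_n = -It/|grad(I)|; returns v_n * n_hat."""
    grad = np.array([Ix, Iy])
    mag = np.linalg.norm(grad)
    n_hat = grad / mag
    v_n = -It / mag
    return v_n * n_hat

vn_vec = normal_velocity(Ix=3.0, Iy=4.0, It=-5.0)   # |grad I| = 5, v_n = 1
```

By construction the returned vector lies on the motion constraint line, i.e. it satisfies I_x u + I_y v + I_t = 0.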

Resolving the Aperture Problem

We need to impose additional constraints to recover the full velocity v everywhere. We could assume that each local image neighbourhood has constant velocity [Lucas and Kanade IJCAI1981]; then 2 or more different normal velocities yield the true full velocity (in the least squares sense). Alternatively, we could assume that velocity varies smoothly everywhere [Horn and Schunck AI1981] and regularize with a smoothness term across the image.

2D Lucas and Kanade 1981

Given I_x, I_y and I_t at a single pixel, we can compute v_n as v_n = v · n̂, where v_n = −I_t/‖∇I‖ and n̂ = ∇I/‖∇I‖ are as before. Given a k = n × n neighbourhood with the same velocity v = (u, v), we can write

[ n_x1  n_y1 ]           [ v_n1 ]
[ n_x2  n_y2 ] [ u ]     [ v_n2 ]
[  ...   ... ] [ v ]  =  [  ... ]
[ n_xk  n_yk ]           [ v_nk ]

which we can write as N v = B, with N of size k × 2, v of size 2 × 1 and B of size k × 1. For k ≥ 2 we can solve this system in the least squares sense as v = (NᵀN)⁻¹NᵀB.
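The least-squares solve of N v = B can be sketched as follows. The synthetic derivatives are my own construction, built so the neighbourhood exactly obeys the constraint for a known velocity:

```python
import numpy as np

def lucas_kanade_2d(Ix, Iy, It):
    """Solve N v = B in the least squares sense: rows of N are the unit
    normals grad(I)/|grad(I)|, entries of B are the speeds -It/|grad(I)|."""
    grads = np.stack([Ix, Iy], axis=1)        # k x 2
    mags = np.linalg.norm(grads, axis=1)
    N = grads / mags[:, None]
    B = -It / mags
    return np.linalg.solve(N.T @ N, N.T @ B)  # v = (N^T N)^-1 N^T B, k >= 2

rng = np.random.default_rng(0)
v_true = np.array([1.0, -0.5])
Ix, Iy = rng.normal(size=(2, 9))              # a 3x3 neighbourhood, flattened
It = -(Ix * v_true[0] + Iy * v_true[1])       # exact constraint, no noise
v_est = lucas_kanade_2d(Ix, Iy, It)
```

With noisy derivatives NᵀN can be ill-conditioned (e.g. at straight contours, where all normals are parallel), which is why implementations typically threshold its smaller eigenvalue before solving.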

2D Horn and Schunck 1981

Horn and Schunck combined the motion constraint equation with a global smoothness term to constrain the estimated velocity field v = (u, v), minimizing

f = ∬_D (∇I · v + I_t)² + α²(u_x² + u_y² + v_x² + v_y²) dx dy

defined over a domain D (the image), where the magnitude of α² reflects the relative influence of the smoothness term. We use the Euler-Lagrange equations:

∂f/∂u − (d/dx)(∂f/∂u_x) − (d/dy)(∂f/∂u_y) = 0
∂f/∂v − (d/dx)(∂f/∂v_x) − (d/dy)(∂f/∂v_y) = 0,

where:

f_u = 2I_x(I_x u + I_y v + I_t)    f_v = 2I_y(I_x u + I_y v + I_t)
f_{u_x} = 2α²u_x    f_{u_y} = 2α²u_y    f_{v_x} = 2α²v_x    f_{v_y} = 2α²v_y
(d/dx)f_{u_x} = 2α²u_xx    (d/dy)f_{u_y} = 2α²u_yy    (d/dx)f_{v_x} = 2α²v_xx    (d/dy)f_{v_y} = 2α²v_yy.

Since ∇²u = u_xx + u_yy and ∇²v = v_xx + v_yy, we can rewrite the Euler-Lagrange equations as:

I_x²u + I_xI_y v + I_xI_t = α²∇²u
I_xI_y u + I_y²v + I_yI_t = α²∇²v.

Using the approximations ∇²u ≈ ū − u and ∇²v ≈ v̄ − v (where ū and v̄ are local neighbourhood averages) we get:

(α² + I_x²)u + I_xI_y v = α²ū − I_xI_t
I_xI_y u + (α² + I_y²)v = α²v̄ − I_yI_t.

We can solve for u and v as:

(α² + I_x² + I_y²)u = (α² + I_y²)ū − I_xI_y v̄ − I_xI_t
(α² + I_x² + I_y²)v = −I_xI_y ū + (α² + I_x²)v̄ − I_yI_t,

which can be written as:

(α² + I_x² + I_y²)(u − ū) = −I_x[I_x ū + I_y v̄ + I_t]
(α² + I_x² + I_y²)(v − v̄) = −I_y[I_x ū + I_y v̄ + I_t].

Gauss-Seidel iterative equations that minimize these equations are:

u^{k+1} = ū^k − I_x[I_x ū^k + I_y v̄^k + I_t]/(α² + I_x² + I_y²)
v^{k+1} = v̄^k − I_y[I_x ū^k + I_y v̄^k + I_t]/(α² + I_x² + I_y²).

Here k denotes the iteration number, u⁰ and v⁰ denote the initial velocity estimates, which are typically set to zero, and ū^k and v̄^k denote neighbourhood averages of u^k and v^k. Iterations are stopped when k reaches a preset value, e.g. 100, or when the norm of the difference between the velocity fields at iterations k and k + 1 is less than some preset threshold.
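The iterative scheme can be sketched in a few lines. The 4-neighbour average used for ū and v̄ is one common choice; the averaging kernel, array sizes and test derivatives below are my own assumptions:

```python
import numpy as np

def neighbour_avg(a):
    """4-neighbour average with replicated borders (one common choice)."""
    p = np.pad(a, 1, mode='edge')
    return 0.25 * (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:])

def horn_schunck_2d(Ix, Iy, It, alpha=1.0, n_iters=100):
    u = np.zeros_like(Ix)          # u^0 = v^0 = 0
    v = np.zeros_like(Ix)
    denom = alpha**2 + Ix**2 + Iy**2
    for _ in range(n_iters):
        ubar, vbar = neighbour_avg(u), neighbour_avg(v)
        resid = Ix * ubar + Iy * vbar + It
        u = ubar - Ix * resid / denom
        v = vbar - Iy * resid / denom
    return u, v

# Uniform vertical edge moving in x: Ix = 1, Iy = 0, It = -0.5 everywhere,
# so the flow should converge to the normal velocity (0.5, 0).
u, v = horn_schunck_2d(np.ones((8, 8)), np.zeros((8, 8)), -0.5 * np.ones((8, 8)))
```

On this degenerate input (a single straight structure) the smoothness term cannot add information, so the iterations settle on the normal velocity, illustrating that Horn-Schunck fills in full velocity only where differently oriented gradients exist somewhere in the image.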

Intensity Differentiation

One way to compute the intensity derivatives I_x, I_y and I_t is via convolution with Simoncelli's matched/balanced filters [ICIP1994]. The filter coefficients are:

n    p5       d5
0    0.036   -0.108
1    0.249   -0.283
2    0.431    0.0
3    0.249    0.283
4    0.036    0.108

Table 1: Simoncelli's 5-point matched/balanced kernels.

Convolution Steps for Computing I_x

To compute I_x in 2D for frame i in some sequence, we first convolve the smoothing kernel p5 in the t dimension to reduce the 5 images to 1 image, then convolve the smoothing kernel p5 on that result in the y dimension, and finally convolve the differentiation kernel d5 on that 2nd result in the x dimension to obtain I_x. I_y is computed in a similar way, with the roles of x and y exchanged.

Convolution Steps for Computing I_t

To compute I_t in 2D for frame i in some sequence, we first convolve p5 in the x dimension and then, on that result, in the y dimension for each of frames i−2, i−1, i, i+1 and i+2. This yields 5 images smoothed in x and y. We then differentiate these images in the t dimension using d5 to get I_t at frame i.
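The I_t pipeline above can be sketched with the Table 1 kernels. The helper and the constant-ramp test sequence are my own; the kernels are applied as 'valid' correlations, so five frames of 9 × 9 spatial extent reduce to one 5 × 5 derivative image:

```python
import numpy as np

p5 = np.array([0.036, 0.249, 0.431, 0.249, 0.036])     # smoothing kernel
d5 = np.array([-0.108, -0.283, 0.0, 0.283, 0.108])     # derivative kernel

def conv5(vol, k, axis):
    """'valid' correlation of a 5-tap kernel along one axis of a volume."""
    v = np.moveaxis(vol, axis, -1)
    out = sum(k[i] * v[..., i:v.shape[-1] - 4 + i] for i in range(5))
    return np.moveaxis(out, -1, axis)

# Frames i-2..i+2 whose intensity grows linearly in time => I_t should be ~1.
seq = np.stack([np.full((9, 9), float(t)) for t in range(5)])   # (t, y, x)
smoothed = conv5(conv5(seq, p5, axis=2), p5, axis=1)            # p5 in x, then y
It = conv5(smoothed, d5, axis=0)[0]                             # d5 in t
```

The result is close to but not exactly 1 because the published 5-tap coefficients only approximately sum to 1 (p5) and to a unit-slope derivative (d5).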

3D Optical Flow

Usually, by 3D optical flow we mean 3D volumetric flow: at voxel (x, y, z), what is the 3D velocity (U, V, W)? An example of volumetric data: 20 volumes of gated MRI data of 1 beat of a human heart. The 3D motion constraint equation uses intensity derivatives in x, y and z, as well as t. 3D optical flow on a moving surface (such as a growing leaf) is called range flow and also produces a 3D velocity (U, V, W) at all surface points of some scanned object that is moving over time. In that case the 3D constraint equation uses x and y derivatives of Z, the depth value (either scanned or computed) at each surface point. Range flow is also often called scene flow.

The 3D Intensity Motion Constraint Equation

The 3D intensity motion constraint equation is a simple extension of the 2D motion constraint equation. Consider a small n × n × n 3D block at (x, y, z) at time t moving to (x + δx, y + δy, z + δz) at time t + δt.

Figure 6: A small n × n × n 3D neighbourhood of voxels centered at (x, y, z) at time t moving to (x + δx, y + δy, z + δz) at time t + δt.

3D Intensity Motion Constraint Equation and the 3D Aperture Problem

We assume a 3D voxel I(x, y, z, t) at time t moves with a displacement (δx, δy, δz) over time δt. Since I(x, y, z, t) and I(x + δx, y + δy, z + δz, t + δt) are the same, we can perform a 1st order Taylor series expansion and obtain (as in the 2D case):

I_x U + I_y V + I_z W = ∇I · V = −I_t,

where I_x, I_y, I_z and I_t are 3D spatio-temporal intensity derivatives computed via Simoncelli convolution, and V = (U, V, W) is the 3D velocity. This equation describes a plane in 3D velocity space; any point on that plane is possibly the correct 3D velocity. The velocity on the plane that is closest to the origin is called the plane normal velocity (the velocity normal to a local planar intensity structure). The line normal velocity is the velocity closest to the origin on the line formed by the intersection of 2 such planes. Of course, if three planes intersect at a single point, that point is the full 3D velocity.

Figure 7: Graphical illustrations of the 3D plane and line normal velocities.

3D Lucas and Kanade

From the 3D motion constraint equation we have:

I_x U + I_y V + I_z W = −I_t,

where I_x, I_y, I_z and I_t are the 3D intensity derivatives in an n × n × n neighbourhood centered at voxel (x, y, z) and V = (U, V, W) is that neighbourhood's (assumed) constant 3D velocity. Given I_x, I_y, I_z and I_t at a single voxel, we can compute V_n as V_n = V · n̂, where V_n = −I_t/‖∇I‖, n̂ = ∇I/‖∇I‖ and ∇I = (I_x, I_y, I_z). Given a k = n × n × n neighbourhood with the same velocity V, we can write

[ n_x1  n_y1  n_z1 ] [ U ]     [ V_n1 ]
[ n_x2  n_y2  n_z2 ] [ V ]  =  [ V_n2 ]
[  ...   ...   ... ] [ W ]     [  ... ]
[ n_xk  n_yk  n_zk ]           [ V_nk ]

which we can write as N V = B, with N of size k × 3, V of size 3 × 1 and B of size k × 1. For k ≥ 3 we can solve this system in the least squares sense as V = (NᵀN)⁻¹NᵀB.
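The 3D least-squares solve mirrors the 2D case. This sketch again builds synthetic derivatives (my own construction) that exactly satisfy the constraint for a known 3D velocity:

```python
import numpy as np

def lucas_kanade_3d(Ix, Iy, Iz, It):
    """Solve N V = B in the least squares sense for V = (U, V, W)."""
    grads = np.stack([Ix, Iy, Iz], axis=1)    # k x 3
    mags = np.linalg.norm(grads, axis=1)
    N = grads / mags[:, None]                 # unit plane normals
    B = -It / mags                            # plane normal speeds
    return np.linalg.solve(N.T @ N, N.T @ B)  # V = (N^T N)^-1 N^T B, k >= 3

rng = np.random.default_rng(1)
V_true = np.array([0.5, -1.0, 0.25])
Ix, Iy, Iz = rng.normal(size=(3, 27))         # a 3x3x3 neighbourhood, flattened
It = -(Ix * V_true[0] + Iy * V_true[1] + Iz * V_true[2])
V_est = lucas_kanade_3d(Ix, Iy, Iz, It)
```

When the local intensity structure is planar (all normals parallel) or line-like (normals spanning only two directions), NᵀN is rank-deficient and only the plane or line normal velocity is recoverable, exactly as described above.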

3D Horn and Schunck

The 3D Horn and Schunck regularization becomes:

F = ∫_D (I_x U + I_y V + I_z W + I_t)² + α²(U_x² + U_y² + U_z² + U_t² + V_x² + V_y² + V_z² + V_t² + W_x² + W_y² + W_z² + W_t²),

where the smoothness term now includes temporal as well as spatial derivatives. We use the Euler-Lagrange equations:

∂F/∂U − (d/dx)F_{U_x} − (d/dy)F_{U_y} − (d/dz)F_{U_z} − (d/dt)F_{U_t} = 0
∂F/∂V − (d/dx)F_{V_x} − (d/dy)F_{V_y} − (d/dz)F_{V_z} − (d/dt)F_{V_t} = 0
∂F/∂W − (d/dx)F_{W_x} − (d/dy)F_{W_y} − (d/dz)F_{W_z} − (d/dt)F_{W_t} = 0,

where:

F_U = 2I_x(I_x U + I_y V + I_z W + I_t)
F_V = 2I_y(I_x U + I_y V + I_z W + I_t)
F_W = 2I_z(I_x U + I_y V + I_z W + I_t)
F_{U_x} = 2α²U_x    F_{U_y} = 2α²U_y    F_{U_z} = 2α²U_z    F_{U_t} = 2α²U_t
F_{V_x} = 2α²V_x    F_{V_y} = 2α²V_y    F_{V_z} = 2α²V_z    F_{V_t} = 2α²V_t
F_{W_x} = 2α²W_x    F_{W_y} = 2α²W_y    F_{W_z} = 2α²W_z    F_{W_t} = 2α²W_t

and

(d/dx)F_{U_x} = 2α²U_xx,  (d/dy)F_{U_y} = 2α²U_yy,  (d/dz)F_{U_z} = 2α²U_zz,  (d/dt)F_{U_t} = 2α²U_tt,
(d/dx)F_{V_x} = 2α²V_xx,  (d/dy)F_{V_y} = 2α²V_yy,  (d/dz)F_{V_z} = 2α²V_zz,  (d/dt)F_{V_t} = 2α²V_tt,
(d/dx)F_{W_x} = 2α²W_xx,  (d/dy)F_{W_y} = 2α²W_yy,  (d/dz)F_{W_z} = 2α²W_zz,  (d/dt)F_{W_t} = 2α²W_tt.

Since ∇²U = U_xx + U_yy + U_zz + U_tt, ∇²V = V_xx + V_yy + V_zz + V_tt and ∇²W = W_xx + W_yy + W_zz + W_tt, we can rewrite the Euler-Lagrange equations as:

I_x²U + I_xI_y V + I_xI_z W + I_xI_t = α²∇²U
I_xI_y U + I_y²V + I_yI_z W + I_yI_t = α²∇²V
I_xI_z U + I_yI_z V + I_z²W + I_zI_t = α²∇²W.

Using ∇²U ≈ Ū − U, ∇²V ≈ V̄ − V and ∇²W ≈ W̄ − W we can write:

(α² + I_x²)U + I_xI_y V + I_xI_z W = α²Ū − I_xI_t
I_xI_y U + (α² + I_y²)V + I_yI_z W = α²V̄ − I_yI_t
I_xI_z U + I_yI_z V + (α² + I_z²)W = α²W̄ − I_zI_t.

The Gauss-Seidel iterative equations can be written as:

U^{k+1} = Ū^k − I_x[I_x Ū^k + I_y V̄^k + I_z W̄^k + I_t]/(α² + I_x² + I_y² + I_z²)
V^{k+1} = V̄^k − I_y[I_x Ū^k + I_y V̄^k + I_z W̄^k + I_t]/(α² + I_x² + I_y² + I_z²)
W^{k+1} = W̄^k − I_z[I_x Ū^k + I_y V̄^k + I_z W̄^k + I_t]/(α² + I_x² + I_y² + I_z²).

Ū^k, V̄^k and W̄^k are n × n × n neighbourhood averages of the velocities at iteration k. Ū⁰, V̄⁰ and W̄⁰ are typically set to 0.0.
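The 3D iterations are a direct extension of the 2D scheme. This sketch uses a 6-neighbour average for Ū, V̄ and W̄ (my own choice of averaging, block size and test derivatives):

```python
import numpy as np

def neighbour_avg_3d(a):
    """6-neighbour average with replicated borders."""
    p = np.pad(a, 1, mode='edge')
    return (p[:-2, 1:-1, 1:-1] + p[2:, 1:-1, 1:-1] +
            p[1:-1, :-2, 1:-1] + p[1:-1, 2:, 1:-1] +
            p[1:-1, 1:-1, :-2] + p[1:-1, 1:-1, 2:]) / 6.0

def horn_schunck_3d(Ix, Iy, Iz, It, alpha=1.0, n_iters=100):
    U, V, W = (np.zeros_like(Ix) for _ in range(3))   # initial estimates 0.0
    denom = alpha**2 + Ix**2 + Iy**2 + Iz**2
    for _ in range(n_iters):
        Ub, Vb, Wb = (neighbour_avg_3d(a) for a in (U, V, W))
        resid = Ix * Ub + Iy * Vb + Iz * Wb + It
        U = Ub - Ix * resid / denom
        V = Vb - Iy * resid / denom
        W = Wb - Iz * resid / denom
    return U, V, W

# Uniform gradient in x only, It = -0.5: flow should converge to (0.5, 0, 0).
shape = (6, 6, 6)
U, V, W = horn_schunck_3d(np.ones(shape), np.zeros(shape),
                          np.zeros(shape), -0.5 * np.ones(shape))
```

As in 2D, this degenerate planar input converges to the plane normal velocity; differently oriented gradients elsewhere in the volume are needed for the smoothness term to propagate full 3D velocity.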

3D L&K OF for Gated MRI Cardiac Data

(Panels: LK-XY-9-36, LK-XZ-9-36 (5phase); LK-XY-16-36, LK-XZ-16-36 (10phase).)

Figure 8: The Lucas and Kanade XY and XZ flow fields superimposed on the 36th slice of the 9th and 16th volumes of the 5phase and 10phase datasets. The eigenvalue threshold λ_1 was 1.0.

3D H&S OF for Gated MRI Cardiac Data

(Panels: HS-XY-9-36, HS-XZ-9-36 (5phase); HS-XY-16-36, HS-XZ-16-36 (10phase).)

Figure 9: The Horn and Schunck XY and XZ flow fields superimposed on the 36th slice of the 9th and 16th volumes of the 5phase and 10phase datasets for 100 iterations. α = 1.0.

3D Range Motion Constraint Equation

We can compute 3D range flow (3D optical flow on a 3D surface) from 3D depth data measured by a ShapeGrabber range scanner, on rigid and non-rigid surfaces [CVIU2002]. One example is the range flow of a plant leaf (the surface can be deformable) [ECCV2000]. The 3D range constraint equation is

Z_x U + Z_y V − W + Z_t = 0.

Here Z_x and Z_y are depth derivatives, not intensity derivatives. Lucas and Kanade-like and Horn and Schunck-like 3D surface optical flow can be computed [DAGM1999]. Range flow (from time-varying depth data) can be fused with 2D optical flow (from time-varying intensity data) to produce the full 3D velocity on moving planar surfaces, where neither range flow nor optical flow by itself can do this [ICIP2000].
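As a tiny illustration of the range constraint (with my own hypothetical numbers): on a surface patch with known depth derivatives, any in-plane motion (U, V) determines the out-of-plane component W:

```python
def range_flow_w(U, V, Zx, Zy, Zt):
    """Solve the range constraint Z_x U + Z_y V - W + Z_t = 0 for W."""
    return Zx * U + Zy * V + Zt

# A tilted surface with depth slopes Zx = 0.2, Zy = 0.1, temporal depth
# change Zt = 0.05, and in-surface motion (U, V) = (1.0, -2.0):
W = range_flow_w(1.0, -2.0, Zx=0.2, Zy=0.1, Zt=0.05)
```

Like its intensity counterpart, this is one equation in three unknowns, so neighbourhood (Lucas-Kanade-like) or smoothness (Horn-Schunck-like) assumptions are still needed to recover (U, V, W).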

Figure 10: Example 3D range flow for Castor oil plant leaves.

3D Scene Flow

A typical approach: in a stereo image sequence, compute the time-varying depth maps using a stereo algorithm, then compute range/scene flow from these time-varying depth maps. One point of view: optical flow is not practical on real imagery but stereo is much better, so use stereo to get depth maps and then compute range flow on these depth maps and their derivative values. One example of such work: "Efficient Dense Scene Flow from Sparse or Dense Stereo Data", Andreas Wedel, Clemens Rabe, Tobi Vaudrey, Thomas Brox, Uwe Franke and Daniel Cremers, ECCV, October 2008.