Computer Vision Lecture 18


Computer Vision Lecture 18: Motion and Optical Flow
Bastian Leibe, RWTH Aachen
http://www.umic.rwth-aachen.de/multimedia, leibe@umic.rwth-aachen.de
Many slides adapted from K. Grauman, S. Seitz, R. Szeliski, M. Pollefeys, S. Lazebnik

Course Outline: Image Processing Basics; Segmentation & Grouping; Object Recognition; Local Features & Matching; Object Categorization; 3D Reconstruction; Motion and Tracking (Motion and Optical Flow; Tracking with Linear Dynamic Models; Articulated Tracking); Repetition.

Recap: Structure from Motion. Given m images of n fixed 3D points X_j, with projections x_ij = P_i X_j for i = 1, ..., m and j = 1, ..., n. Problem: estimate the m projection matrices P_i and the n 3D points X_j from the m·n correspondences x_ij.

Recap: Structure from Motion Ambiguity. If we scale the entire scene by some factor k and, at the same time, scale the camera matrices by the factor 1/k, the projections of the scene points in the image remain exactly the same. More generally, if we transform the scene using a transformation Q and apply the inverse transformation to the camera matrices, the images do not change: x = PX = (P Q^-1)(Q X).

Recap: Hierarchy of 3D Transformations. Projective, 15 dof, [A t; v^T v]: preserves intersection and tangency. Affine, 12 dof, [A t; 0^T 1]: preserves parallelism and volume ratios. Similarity, 7 dof, [sR t; 0^T 1]: preserves angles and ratios of lengths. Euclidean, 6 dof, [R t; 0^T 1]: preserves angles and lengths. With no constraints on the camera calibration matrix or on the scene, we get a projective reconstruction; additional information is needed to upgrade the reconstruction to affine, similarity, or Euclidean.

Recap: Affine Structure from Motion. Create a 2m × n data (measurement) matrix D from the centered image measurements x̂_ij and ŷ_ij of all n points in all m views. The measurement matrix must factor as D = MS with rank 3, where M stacks the affine cameras (2m × 3) and S = [X_1 ... X_n] holds the points (3 × n). C. Tomasi and T. Kanade. Shape and Motion from Image Streams under Orthography: A Factorization Method. IJCV, 9(2):137-154, November 1992.
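As a concrete illustration of this recap, here is a minimal NumPy sketch of the rank-3 factorization step, assuming the measurement matrix D has already been built from centered image coordinates (the function and variable names are ours, not from the slides):

```python
import numpy as np

def affine_factorization(D):
    """Factor a centered 2m x n measurement matrix D into affine cameras
    M (2m x 3) and 3D points S (3 x n) via a rank-3 SVD, in the spirit of
    Tomasi & Kanade (1992)."""
    U, w, Vt = np.linalg.svd(D, full_matrices=False)
    # Keep the top 3 singular values; split sqrt(W3) between both factors.
    W3 = np.sqrt(np.diag(w[:3]))
    M = U[:, :3] @ W3          # stacked affine cameras, 2m x 3
    S = W3 @ Vt[:3, :]         # 3D points, 3 x n
    return M, S

# Usage: rows of D hold the x- and y-coordinates of all n points in each
# view, each row centered by subtracting its mean (the image centroid).
```

Note that the factorization is only defined up to an affine ambiguity, as the slides above discuss; the SVD gives one representative of that equivalence class.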

Recap: Affine Factorization. Obtaining a factorization from SVD: D = U W V^T; keep the top three singular values and set M = U_3 W_3^(1/2), S = W_3^(1/2) V_3^T. This decomposition minimizes ||D - MS||^2. (Slide credit: Martial Hebert)

Recap: Projective Factorization. D = [z_11 x_11 ... z_1n x_1n; ...; z_m1 x_m1 ... z_mn x_mn] = MS, with cameras M = [P_1; ...; P_m] (3m × 4) and points S = [X_1 ... X_n] (4 × n); D = MS has rank 4. If we knew the projective depths z_ij, we could factorize D to estimate M and S; if we knew M and S, we could solve for the z_ij. Solution: iterative approach, alternating between these two steps.

Recap: Sequential Projective SfM. Initialize motion from two images using the fundamental matrix; initialize structure. For each additional view: determine the projection matrix of the new camera using all the known 3D points that are visible in its image (calibration); refine and extend structure, i.e., compute new 3D points and re-optimize existing points that are also seen by this camera (triangulation). Finally, refine structure and motion jointly: bundle adjustment.

Recap: Bundle Adjustment. Non-linear method for refining structure and motion together by minimizing the mean-square reprojection error E(P, X) = Σ_{i=1..m} Σ_{j=1..n} D(x_ij, P_i X_j)^2 over all cameras and points. (A toy sketch of this objective appears at the end of this recap.)

Topics of This Lecture. Introduction to Motion: applications, uses. Motion Field: derivation. Optical Flow: brightness constancy constraint; aperture problem; Lucas-Kanade flow; iterative refinement; global parametric motion; coarse-to-fine estimation; motion segmentation. KLT Feature Tracking.

Video. A video is a sequence of frames captured over time. Now our image data is a function of space (x, y) and time (t).
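Returning to the bundle adjustment recap above: a toy version of the reprojection-error objective can be handed to a generic non-linear least-squares routine. This is only a sketch; real bundle adjusters exploit the sparse problem structure, and the flat parameterization and the `obs` format are assumptions made here:

```python
import numpy as np
from scipy.optimize import least_squares

def reprojection_residuals(params, n_cams, n_pts, obs):
    """obs: list of (cam_idx, pt_idx, x, y) observations. Cameras are
    flattened 3x4 projection matrices; points are 3D (homogenized here)."""
    P = params[:12 * n_cams].reshape(n_cams, 3, 4)
    X = params[12 * n_cams:].reshape(n_pts, 3)
    res = []
    for i, j, x, y in obs:
        Xh = np.append(X[j], 1.0)
        proj = P[i] @ Xh
        # Residual = difference between predicted and observed projection
        res.extend([proj[0] / proj[2] - x, proj[1] / proj[2] - y])
    return np.array(res)

# result = least_squares(reprojection_residuals, x0,
#                        args=(n_cams, n_pts, obs))
```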

Applications of Segmentation to Video.
Background subtraction: a static camera is observing a scene. Goal: separate the static background from the moving foreground. How can we come up with a background frame estimate without access to an empty scene? (Slide credit: Kristen Grauman)
Shot boundary detection: commercial video is usually composed of shots, or sequences showing the same objects or scene. Goal: segment the video into shots for summarization and browsing (each shot can be represented by a single keyframe in a user interface). Difference from background subtraction: the camera is not necessarily stationary. For each frame, compute the distance between the current frame and the previous one, using pixel-by-pixel differences, differences of color histograms, or block comparison. If the distance is greater than some threshold, classify the frame as a shot boundary (see the sketch below).
Motion segmentation: segment the video into multiple coherently moving objects.

Motion and Perceptual Organization. Sometimes, motion is the only cue; sometimes, motion is the foremost cue. (Slide credit: Kristen Grauman)
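A minimal sketch of the shot-boundary test described above, using OpenCV color histograms; the threshold value and the histogram binning are illustrative choices, not values from the lecture:

```python
import cv2

def shot_boundaries(video_path, threshold=0.4):
    """Flag frames whose color-histogram distance to the previous frame
    exceeds a threshold: the simple shot-boundary test from the slides."""
    cap = cv2.VideoCapture(video_path)
    boundaries, prev_hist, idx = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hist = cv2.calcHist([frame], [0, 1, 2], None, [8, 8, 8],
                            [0, 256, 0, 256, 0, 256])
        hist = cv2.normalize(hist, hist).flatten()
        if prev_hist is not None:
            # Chi-square distance between consecutive color histograms
            d = cv2.compareHist(prev_hist, hist, cv2.HISTCMP_CHISQR)
            if d > threshold:
                boundaries.append(idx)
        prev_hist, idx = hist, idx + 1
    cap.release()
    return boundaries
```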

Motion and Perceptual Organization. Even impoverished motion data can evoke a strong percept.

Uses of Motion: estimating 3D structure (directly from optic flow, or indirectly by creating correspondences for SfM); segmenting objects based on motion cues; learning dynamical models; recognizing events and activities; improving video quality (motion stabilization). (Slide adapted from Svetlana Lazebnik)

Motion Estimation Techniques. Direct methods: directly recover image motion at each pixel from spatio-temporal image brightness variations; dense motion fields, but sensitive to appearance variations; suitable for video and when the image motion is small. Feature-based methods: extract visual features (corners, textured areas) and track them over multiple frames; sparse motion fields, but more robust tracking; suitable when the image motion is large (tens of pixels).

Topics of This Lecture: next, the motion field and its derivation.

Motion Field. The motion field is the projection of the 3D scene motion into the image.

Motion Field and Parallax. P(t) is a moving 3D point; the velocity of the scene point is V = dP/dt = (V_x, V_y, V_z). p(t) = (x(t), y(t)) is the projection of P in the image. The apparent velocity v in the image is given by the components v_x = dx/dt and v_y = dy/dt; these components are known as the motion field of the image.

To find the image velocity v, differentiate the projection p = f P / Z with respect to t, using the quotient rule D(f/g) = (g f' - g' f)/g^2:

v_x = (f V_x - x V_z) / Z,  v_y = (f V_y - y V_z) / Z

Image motion is thus a function of both the 3D motion V and the depth Z of the 3D point.

Pure translation: V is constant everywhere, so v = (1/Z)(v_0 - V_z p), where v_0 = (f V_x, f V_y). If V_z is nonzero, every motion vector points toward (or away from) v_0, the vanishing point of the translation direction. If V_z is zero, the motion is parallel to the image plane and all the motion vectors are parallel; their length is inversely proportional to depth. (A small numeric sketch follows below.)

Topics of This Lecture: next, optical flow (brightness constancy constraint, aperture problem, Lucas-Kanade flow, iterative refinement, global parametric motion, coarse-to-fine estimation, motion segmentation).
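The pure-translation motion field above is easy to evaluate numerically. A small sketch, with our own function name and a unit focal length as assumptions:

```python
import numpy as np

def motion_field_translation(x, y, V, Z, f=1.0):
    """Image velocity for a purely translating scene:
    v = (1/Z) * (v0 - Vz * p), with v0 = (f*Vx, f*Vy).
    x, y, Z are arrays of image coordinates and depths; V = (Vx, Vy, Vz)."""
    Vx, Vy, Vz = V
    vx = (f * Vx - Vz * x) / Z
    vy = (f * Vy - Vz * y) / Z
    return vx, vy

# Example: forward translation (Vz > 0) makes all vectors point away from
# the vanishing point v0/Vz = (f*Vx/Vz, f*Vy/Vz), with magnitude ~ 1/Z.
```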

Optical Flow: Apparent Motion vs. Motion Field. Definition: optical flow is the apparent motion of brightness patterns in the image. Ideally, optical flow would be the same as the motion field, but we have to be careful: apparent motion can be caused by lighting changes without any actual motion. Think of a uniform rotating sphere under fixed lighting vs. a stationary sphere under moving illumination. (Slide credit: Kristen Grauman; figure from Horn's book)

Estimating Optical Flow. Given two subsequent frames I(x, y, t-1) and I(x, y, t), estimate the apparent motion field u(x, y) and v(x, y) between them. Key assumptions: brightness constancy (the projection of the same point looks the same in every frame); small motion (points do not move very far); spatial coherence (points move like their neighbors).

The Brightness Constancy Constraint. Brightness constancy equation: I(x, y, t-1) = I(x + u(x, y), y + v(x, y), t). Linearizing the right-hand side using a Taylor expansion gives I(x + u, y + v, t) ≈ I(x, y, t-1) + I_x u(x, y) + I_y v(x, y) + I_t. Hence, I_x u + I_y v + I_t ≈ 0.

How many equations and unknowns per pixel? One equation, two unknowns. Intuitively, the constraint means that the component of the flow perpendicular to the gradient (i.e., parallel to the edge) is unknown: writing it as ∇I · (u, v) + I_t = 0, if (u, v) satisfies the equation, then so does (u + u', v + v') for any (u', v') with ∇I · (u', v') = 0.

The Aperture Problem. Perceived motion.
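A short NumPy sketch of the quantities in the brightness constancy constraint, including the normal flow, which is the only flow component a single pixel's constraint determines; the simple finite differences used here are one choice among many:

```python
import numpy as np

def gradients(I0, I1):
    """Spatial and temporal derivatives for the brightness constancy
    constraint Ix*u + Iy*v + It ~ 0, from two grayscale float frames."""
    Ix = np.gradient(I0, axis=1)   # derivative along x (columns)
    Iy = np.gradient(I0, axis=0)   # derivative along y (rows)
    It = I1 - I0                   # temporal difference
    return Ix, Iy, It

def normal_flow(Ix, Iy, It, eps=1e-6):
    """Flow component along the gradient: u_n = -It * grad(I) / |grad(I)|^2.
    The component parallel to the edge stays undetermined (aperture problem)."""
    g2 = Ix**2 + Iy**2 + eps
    return -It * Ix / g2, -It * Iy / g2
```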

The Aperture Problem. Actual motion.

The Barber Pole Illusion. http://en.wikipedia.org/wiki/Barberpole_illusion

Solving the Aperture Problem. How do we get more equations for a pixel? Spatial coherence constraint: pretend the pixel's neighbors have the same (u, v). If we use a 5×5 window, that gives us 25 equations per pixel: for each pixel p_i in the window, I_x(p_i) u + I_y(p_i) v = -I_t(p_i). B. Lucas and T. Kanade. An Iterative Image Registration Technique with an Application to Stereo Vision. In Proc. IJCAI, pp. 674-679, 1981.

Least-squares problem: A d = b, with A the 25×2 matrix of stacked gradients [I_x(p_i) I_y(p_i)] and b = -[I_t(p_i)]. The minimum least-squares solution is given by the normal equations A^T A d = A^T b:

[Σ I_x I_x, Σ I_x I_y; Σ I_x I_y, Σ I_y I_y] [u; v] = -[Σ I_x I_t; Σ I_y I_t]

(the summations are over all pixels in the K × K window). (Slide adapted from Svetlana Lazebnik)
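The windowed least-squares solve can be written directly from the normal equations above. A sketch assuming precomputed gradient images; the conditioning threshold is an illustrative value:

```python
import numpy as np

def lucas_kanade_at(Ix, Iy, It, r, c, win=5):
    """Solve the 2x2 Lucas-Kanade normal equations in a win x win window
    centered at pixel (r, c): (A^T A) [u, v]^T = A^T b."""
    h = win // 2
    ix = Ix[r - h:r + h + 1, c - h:c + h + 1].ravel()
    iy = Iy[r - h:r + h + 1, c - h:c + h + 1].ravel()
    it = It[r - h:r + h + 1, c - h:c + h + 1].ravel()
    A = np.stack([ix, iy], axis=1)      # 25 x 2 for a 5x5 window
    b = -it                             # 25 equations, 2 unknowns
    ATA = A.T @ A
    # Solvability check: A^T A must be well-conditioned (see next slides)
    if np.linalg.cond(ATA) > 1e4:
        return None
    u, v = np.linalg.solve(ATA, A.T @ b)
    return u, v
```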

Conditions for Solvability. The optimal (u, v) satisfies the Lucas-Kanade equation A^T A d = A^T b. When is this solvable? A^T A should be invertible; its entries should not be too small (noise); and it should be well-conditioned.

Eigenvectors of A^T A. Haven't we seen an equation like this before? Recall the Harris corner detector: M = A^T A is the second moment matrix. The eigenvectors and eigenvalues of M relate to edge direction and magnitude: the eigenvector associated with the larger eigenvalue points in the direction of fastest intensity change, and the other eigenvector is orthogonal to it.

Interpreting the Eigenvalues. Classification of image points using the eigenvalues λ1 ≥ λ2 of the second moment matrix: flat region, λ1 and λ2 both small; edge, λ1 >> λ2; corner, λ1 and λ2 both large with λ1 ~ λ2. (Slide credit: Kristen Grauman)

Edge: gradients very large or very small, all along one direction; large λ1, small λ2. Low-texture region: gradients have small magnitude; small λ1, small λ2. High-texture region: gradients are different and have large magnitude; large λ1, large λ2.
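The eigenvalue classification above can be computed densely with the closed-form eigendecomposition of a 2×2 symmetric matrix; the result is the minimum-eigenvalue score that the Shi-Tomasi tracker (later in this lecture) uses for feature selection. A sketch, with the window size as an assumption:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def min_eigenvalue_map(Ix, Iy, win=5):
    """Smaller eigenvalue of the 2x2 second moment matrix at every pixel.
    Large values mean both eigenvalues are large -> trackable corners."""
    Sxx = uniform_filter(Ix * Ix, win)
    Sxy = uniform_filter(Ix * Iy, win)
    Syy = uniform_filter(Iy * Iy, win)
    # Closed-form eigenvalues of [[Sxx, Sxy], [Sxy, Syy]]:
    # lambda = tr/2 +- sqrt(tr^2/4 - det)
    tr, det = Sxx + Syy, Sxx * Syy - Sxy**2
    disc = np.sqrt(np.maximum(tr**2 / 4 - det, 0))
    return tr / 2 - disc   # lambda_min
```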

Per-Pixel Estimation Procedure. Let M = Σ (∇I)(∇I)^T and b = -Σ I_t ∇I. Algorithm: at each pixel, compute U = (u, v) by solving M U = b. M is singular if all gradient vectors point in the same direction, e.g., along an edge, and trivially singular if the summation is over a single pixel or there is no texture; i.e., only the normal flow is available (aperture problem). Corners and textured areas are OK.

Iterative Refinement. 1. Estimate the velocity at each pixel using one iteration of Lucas-Kanade estimation. 2. Warp one image toward the other using the estimated flow field (easier said than done). 3. Refine the estimate by repeating the process.

Optical Flow: Iterative Refinement. [Sequence of slides illustrating the estimate/update iterations on a 1D signal: initial guess d_0 = 0, then repeatedly estimate and update the displacement, using d for displacement here instead of u.]
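A sketch of the iterative refinement loop, assuming some one-shot dense LK solver `solve_flow` (a hypothetical helper, e.g., the windowed solve above applied at every pixel):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def iterative_lk(I0, I1, solve_flow, n_iters=5):
    """Iterative refinement: estimate flow, warp I1 back toward I0 with the
    current estimate, re-estimate on the residual motion, and accumulate.
    `solve_flow(I0, Iw)` is any one-shot dense LK solver returning (u, v)."""
    h, w = I0.shape
    yy, xx = np.mgrid[0:h, 0:w].astype(np.float64)
    u = np.zeros((h, w))
    v = np.zeros((h, w))
    for _ in range(n_iters):
        # Warp I1 by the current flow estimate (inverse warp to I0's grid)
        Iw = map_coordinates(I1, [yy + v, xx + u], order=1, mode='nearest')
        du, dv = solve_flow(I0, Iw)    # residual motion
        u += du
        v += dv
    return u, v
```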

Optic Flow: Iterative Refinement, Some Implementation Issues. Warping is not easy (ensure that errors in warping are smaller than the estimate refinement). Warp one image and take derivatives of the other, so you don't need to re-compute the gradient after each iteration. It is often useful to low-pass filter the images before motion estimation (for better derivative estimation and better linear approximations to image intensity).

Global Parametric Motion Models. Translation: 2 unknowns. Affine: 6 unknowns. Perspective: 8 unknowns. 3D rotation: 3 unknowns.

Affine Motion. u(x, y) = a1 + a2 x + a3 y, v(x, y) = a4 + a5 x + a6 y. Substituting into the brightness constancy equation I_x u + I_y v + I_t ≈ 0 gives

I_x (a1 + a2 x + a3 y) + I_y (a4 + a5 x + a6 y) + I_t ≈ 0.

Each pixel provides one linear constraint in the six unknowns. Least-squares minimization: Err(a) = Σ [I_x (a1 + a2 x + a3 y) + I_y (a4 + a5 x + a6 y) + I_t]^2. (A sketch of this fit follows below.)

Problem Cases in Lucas-Kanade. The motion is large (larger than a pixel): iterative refinement and coarse-to-fine estimation. A point does not move like its neighbors: motion segmentation. Brightness constancy does not hold: do an exhaustive neighborhood search with normalized correlation.
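The affine least-squares fit above takes only a few lines, since every pixel contributes one linear constraint in the six parameters. A sketch assuming precomputed gradients:

```python
import numpy as np

def fit_affine_flow(Ix, Iy, It):
    """Least-squares fit of the 6-parameter affine flow
    u = a1 + a2*x + a3*y, v = a4 + a5*x + a6*y
    from the brightness constancy constraint over all pixels."""
    h, w = Ix.shape
    yy, xx = np.mgrid[0:h, 0:w]
    ix, iy, it = Ix.ravel(), Iy.ravel(), It.ravel()
    x, y = xx.ravel(), yy.ravel()
    # One row per pixel: Ix*(a1+a2x+a3y) + Iy*(a4+a5x+a6y) = -It
    A = np.stack([ix, ix * x, ix * y, iy, iy * x, iy * y], axis=1)
    a, *_ = np.linalg.lstsq(A, -it, rcond=None)
    return a   # (a1, ..., a6)
```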

Dealing with Large Motions: Temporal Aliasing. Temporal aliasing causes ambiguities in optical flow because images can have many pixels with the same intensity; i.e., how do we know which correspondence is correct? For small actual shifts, the nearest match is correct (no aliasing); for large shifts, the nearest match can be incorrect (aliasing). To overcome aliasing: coarse-to-fine estimation.

Idea: Reduce the Resolution! Coarse-to-fine Optical Flow Estimation. Build Gaussian pyramids of image 1 and image 2: a motion of u = 10 pixels at full resolution becomes u = 5, u = 2.5, and u = 1.25 pixels at successively coarser levels. Run iterative Lucas-Kanade at the coarsest level, then warp & upsample, run iterative Lucas-Kanade again, and repeat down the pyramid (sketched below).

Motion Segmentation. How do we represent the motion in this scene? Break the image sequence into layers, each of which has a coherent motion. J. Wang and E. Adelson. Layered Representation for Motion Analysis. CVPR 1993.
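A sketch of the coarse-to-fine scheme with OpenCV pyramids; `solve_flow` is the same hypothetical one-shot solver as before, and the flow is doubled at each upsampling step because the pixel grid doubles:

```python
import cv2
import numpy as np

def coarse_to_fine_flow(I0, I1, solve_flow, levels=4):
    """Solve on the coarsest pyramid level, then repeatedly upsample the
    flow (scaling it by 2), warp, and refine at the next finer level."""
    pyr0, pyr1 = [I0], [I1]
    for _ in range(levels - 1):
        pyr0.append(cv2.pyrDown(pyr0[-1]))
        pyr1.append(cv2.pyrDown(pyr1[-1]))
    u = v = None
    for J0, J1 in zip(reversed(pyr0), reversed(pyr1)):  # coarsest first
        h, w = J0.shape
        if u is None:
            u, v = np.zeros((h, w), np.float32), np.zeros((h, w), np.float32)
        else:
            u = 2 * cv2.resize(u, (w, h))
            v = 2 * cv2.resize(v, (w, h))
        # Warp J1 by the current flow, then add the residual LK estimate
        yy, xx = np.mgrid[0:h, 0:w].astype(np.float32)
        Jw = cv2.remap(J1.astype(np.float32), xx + u, yy + v,
                       cv2.INTER_LINEAR)
        du, dv = solve_flow(J0, Jw)
        u += du
        v += dv
    return u, v
```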

What Are Layers? Each layer is defined by an alpha mask and an affine motion model. (J. Wang and E. Adelson, CVPR 1993)

Motion Segmentation with an Affine Model. Fit the local flow estimates with u(x, y) = a1 + a2 x + a3 y and v(x, y) = a4 + a5 x + a6 y; each flow component is the equation of a plane, whose parameters a1, a2, a3 can be found by least squares. 1D example: fitting lines to the local flow estimates separates foreground, background, and occluded regions, so that the segmented estimate approximates the true flow.

How Do We Estimate the Layers? 1. Compute local flow in a coarse-to-fine fashion. 2. Obtain a set of initial affine motion hypotheses: divide the image into blocks and estimate affine motion parameters in each block by least squares; eliminate hypotheses with high residual error; perform k-means clustering on the affine motion parameters; merge clusters that are close, and retain the largest clusters to obtain a smaller set of hypotheses describing all the motions in the scene. 3. Iterate until convergence: assign each pixel to the best hypothesis (pixels with high residual error remain unassigned); perform region filtering to enforce spatial constraints; re-estimate the affine motions in each region. Example result. (J. Wang and E. Adelson, CVPR 1993)
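A rough sketch of step 2 of this procedure: block-wise affine fits to a dense flow field followed by k-means on the parameter vectors. Block size, cluster count, and the residual threshold are illustrative values, and this is in the spirit of Wang & Adelson rather than their exact algorithm:

```python
import numpy as np
from sklearn.cluster import KMeans

def affine_hypotheses(u, v, block=16, k=4, max_residual=0.5):
    """Fit an affine model to the flow (u, v) in each block, drop bad fits,
    then k-means cluster the 6-vectors of affine parameters."""
    h, w = u.shape
    params = []
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            yy, xx = np.mgrid[r:r + block, c:c + block]
            A = np.stack([np.ones(block * block),
                          xx.ravel(), yy.ravel()], axis=1)
            au, ru, *_ = np.linalg.lstsq(A, u[r:r+block, c:c+block].ravel(),
                                         rcond=None)
            av, rv, *_ = np.linalg.lstsq(A, v[r:r+block, c:c+block].ravel(),
                                         rcond=None)
            # RMS residual of the block fit; skip blocks that fit poorly
            if ru.size and rv.size:
                rms = np.sqrt((ru[0] + rv[0]) / block**2)
                if rms > max_residual:
                    continue
            params.append(np.concatenate([au, av]))
    km = KMeans(n_clusters=k, n_init=10).fit(np.array(params))
    return km.cluster_centers_   # k affine motion hypotheses
```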

Topics of This Lecture: finally, KLT Feature Tracking.

Feature Tracking. So far, we have only considered optical flow estimation in a pair of images. If we have more than two images, we can compute the optical flow from each frame to the next. Given a point in the first image, we can in principle reconstruct its path by simply following the arrows.

Tracking Challenges. Ambiguity of optical flow: find good features to track. Large motions: discrete search instead of Lucas-Kanade. Changes in shape, orientation, color: allow some matching flexibility. Occlusions, disocclusions: need a mechanism for deleting and adding features. Drift: errors may accumulate over time, so we need to know when to terminate a track.

Handling Large Displacements. Define a small area around a pixel as the template, and match the template against each pixel within a search area in the next image, just like stereo matching! Use a match measure such as SSD or correlation. After finding the best discrete location, Lucas-Kanade can be used to get a sub-pixel estimate.

Tracking Over Many Frames. Select features in the first frame. For each frame: update the positions of tracked features (discrete search or Lucas-Kanade); terminate inconsistent tracks (compute the similarity with the corresponding feature in the previous frame, or in the first frame where it is visible); start new tracks if needed (typically, new features are added every ~10 frames to refill the ranks).

Shi-Tomasi Feature Tracker. Find good features using the eigenvalues of the second-moment matrix; the key idea is that good features to track are the ones that can be tracked reliably. From frame to frame, track with Lucas-Kanade and a pure translation model, which is more robust for small displacements and can be estimated from smaller neighborhoods. Check the consistency of tracks by affine registration to the first observed instance of each feature: the affine model is more accurate for larger displacements, and comparing to the first frame helps to minimize drift. J. Shi and C. Tomasi. Good Features to Track. CVPR 1994.
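In practice, this whole pipeline is available in OpenCV: Shi-Tomasi corner selection plus pyramidal Lucas-Kanade tracking. A minimal sketch (the parameter values are typical defaults, not values from the lecture):

```python
import cv2

def klt_track(frames):
    """Minimal KLT pipeline: Shi-Tomasi corners in the first frame,
    pyramidal Lucas-Kanade tracking frame-to-frame, dropping points
    whose status flag indicates the track was lost."""
    prev = cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY)
    pts = cv2.goodFeaturesToTrack(prev, maxCorners=500,
                                  qualityLevel=0.01, minDistance=7)
    tracks = [pts]
    for frame in frames[1:]:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        nxt, status, err = cv2.calcOpticalFlowPyrLK(prev, gray, pts, None,
                                                    winSize=(21, 21),
                                                    maxLevel=3)
        pts = nxt[status.ravel() == 1].reshape(-1, 1, 2)
        tracks.append(pts)
        prev = gray
    return tracks
```

Note that this sketch only drops lost points; a fuller tracker would also re-detect features every few frames and check track consistency against the first observation, as the Shi-Tomasi slide above describes.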

Tracking Example. This basic feature tracking framework (Lucas-Kanade + Shi-Tomasi) is commonly referred to as KLT tracking. J. Shi and C. Tomasi. Good Features to Track. CVPR 1994.

Real-Time GPU Implementations. KLT tracking is used as a preprocessing step for many applications (recall the boujou demo yesterday). It lends itself to easy parallelization, and very fast GPU implementations are available: C. Zach, D. Gallup, J.-M. Frahm. Fast Gain-Adaptive KLT Tracking on the GPU. In CVGPU'08 Workshop, Anchorage, USA, 2008; 216 fps with automatic gain adaptation, 260 fps without gain adaptation. http://www.cs.unc.edu/~ssinha/research/gpu_klt/ http://cs.unc.edu/~cmzach/opensource.html

Example Use of Optical Flow: Motion Paint. Use optical flow to track brush strokes, in order to animate them to follow the underlying scene motion. "What Dreams May Come", http://www.fxguide.com/article333.html (Slide credit: Kristen Grauman)

Motion vs. Stereo: Similarities. Both involve solving for correspondence (disparities, motion vectors) and for reconstruction. (Slide credit: Kristen Grauman)

Motion vs. Stereo: Differences. Motion uses velocity: consecutive frames must be close together to get a good approximate time derivative, and the 3D movement between camera and scene is not necessarily a single 3D rigid transformation. With stereo, by contrast, we can have any disparity value, and the view pair is separated by a single 3D transformation. (Slide credit: Kristen Grauman)

Summary. Motion field: 3D motions projected to 2D images, with a dependency on depth. Solving for motion with sparse feature matches or dense optical flow. Optical flow: brightness constancy assumption, aperture problem, and a solution based on the spatial coherence assumption; extensions to segmentation into motion layers. (Slide credit: Kristen Grauman)

References and Further Reading. Here is the original paper by Lucas & Kanade: B. Lucas and T. Kanade. An Iterative Image Registration Technique with an Application to Stereo Vision. In Proc. IJCAI, pp. 674-679, 1981. And the original paper by Shi & Tomasi: J. Shi and C. Tomasi. Good Features to Track. CVPR 1994. Read the story of how optical flow was used for special effects in a number of recent movies: http://www.fxguide.com/article333.html