Overview (4/7/2008): Video, Optical Flow, Motion Magnification, Colorization

Transcription:

Lecture 9 Overview: Video, Optical flow, Motion Magnification, Colorization

Optical flow
(Combination of slides from Rick Szeliski, Steve Seitz, Alyosha Efros and Bill Freeman.)

Motion estimation: Optical flow
Why estimate motion? Lots of uses:
- Track object behavior
- Correct for camera jitter (stabilization)
- Align images (mosaics)
- 3D shape reconstruction
- Special effects
We will start by estimating the motion of each pixel separately, then consider the motion of the entire image.

Problem definition: optical flow
How do we estimate pixel motion from image H to image I? Solve the pixel correspondence problem: given a pixel in H, look for nearby pixels of the same color in I.

Key assumptions:
- Color constancy: a point in H looks the same in I. For grayscale images, this is brightness constancy.
- Small motion: points do not move very far.
This is called the optical flow problem.

Optical flow constraints (grayscale images). Let's look at these constraints more closely.
- Brightness constancy: $H(x, y) = I(x + u, y + v)$
- Small motion ($u$ and $v$ are less than 1 pixel): take the Taylor series expansion of $I$: $I(x + u, y + v) \approx I(x, y) + I_x u + I_y v$

Optical flow equation. Combining these two equations gives
$$I_x u + I_y v + I_t \approx 0, \qquad I_t = I(x, y) - H(x, y).$$
Q: how many unknowns and equations per pixel? Two unknowns $(u, v)$, one equation (a code sketch follows after this slide block).

Intuitively, what does this constraint mean?
- The component of the flow in the gradient direction is determined.
- The component of the flow parallel to an edge is unknown.
In the limit as $u$ and $v$ go to zero, this becomes exact. This is the aperture problem, and it explains the Barber Pole illusion:
http://www.sandlotscience.com/ambiguous/barberpole_illusion.htm
http://www.liv.ac.uk/~marcob/trieste/barberpole.html
http://en.wikipedia.org/wiki/barber's_pol
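To make the gradient constraint concrete, here is a minimal NumPy sketch (the helper name and the synthetic example are illustrative, not from the slides): it estimates $I_x$, $I_y$, $I_t$ with finite differences and checks that the residual $I_x u + I_y v + I_t$ is small for the true shift.

```python
import numpy as np

def flow_constraint_residual(H, I, u, v):
    """Residual of I_x*u + I_y*v + I_t for two grayscale frames H and I."""
    Iy, Ix = np.gradient(I.astype(float))    # spatial gradients of the second frame
    It = I.astype(float) - H.astype(float)   # temporal derivative: I(x, y) - H(x, y)
    return Ix * u + Iy * v + It

# Smooth synthetic example: I is H shifted right by one pixel, so (u, v) = (1, 0).
x = np.linspace(0, 2 * np.pi, 128, endpoint=False)
H = np.tile(np.sin(x), (128, 1))
I = np.roll(H, shift=1, axis=1)
print(np.abs(flow_constraint_residual(H, I, u=1.0, v=0.0)).mean())  # close to zero
```

For a noisy or high-frequency image the linearization breaks down, which is exactly the small-motion limitation revisited later in the lecture.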

Solving the aperture problem
How do we get more equations for a pixel? Basic idea: impose additional constraints. The most common is to assume that the flow field is locally smooth; one method is to pretend the pixel's neighbors have the same (u, v). If we use a 5x5 window, that gives us 25 equations per pixel.

RGB version: with a 5x5 window and three color channels we get 75 equations per pixel. Note that RGB alone is not enough to disambiguate, because R, G and B are correlated; it just provides a better gradient.

Aperture problem and normal flow. The gradient constraint
$$I_x u + I_y v + I_t = 0, \qquad \text{i.e. } \nabla I \cdot \mathbf{u} + I_t = 0,$$
defines a line in (u, v) space. A single constraint only determines the normal flow, the component of the flow along the gradient direction.

Lucas-Kanade flow. Problem: we now have more equations than unknowns. Solution: solve a least squares problem. Combining the local constraints $\nabla I(\mathbf{p}_i) \cdot \mathbf{u} = -I_t(\mathbf{p}_i)$ for every pixel $\mathbf{p}_i$ in the window, the minimum least squares solution is given by the solution (in $d = (u, v)$) of the Lucas-Kanade equation
$$
\begin{bmatrix} \sum I_x^2 & \sum I_x I_y \\ \sum I_x I_y & \sum I_y^2 \end{bmatrix}
\begin{bmatrix} u \\ v \end{bmatrix}
= - \begin{bmatrix} \sum I_x I_t \\ \sum I_y I_t \end{bmatrix},
$$
where the summations are over all pixels in the K x K window. This technique was first proposed by Lucas & Kanade (1981).

Conditions for solvability. When is this solvable?
- $A^T A$ should be invertible.
- $A^T A$ should not be too small due to noise: the eigenvalues $\lambda_1$ and $\lambda_2$ of $A^T A$ should not be too small.
- $A^T A$ should be well-conditioned: $\lambda_1 / \lambda_2$ should not be too large ($\lambda_1$ = larger eigenvalue).
The system is solvable when there is no aperture problem.
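A minimal sketch of this least-squares solve for a single window (the function name and thresholds are assumptions, not from the slides): it stacks one gradient constraint per pixel, forms $A^T A$ and $A^T b$, and checks the eigenvalue conditions before solving.

```python
import numpy as np

def lucas_kanade_window(Ix, Iy, It, eig_thresh=1e-2, cond_thresh=1e4):
    """Solve for (u, v) in one window from gradient patches Ix, Iy, It.

    Returns None when A^T A is ill-conditioned (aperture problem / low texture).
    """
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)   # one equation per pixel
    b = -It.ravel()
    AtA = A.T @ A
    Atb = A.T @ b
    # Solvability check: both eigenvalues of A^T A large enough,
    # and their ratio not too extreme (well-conditioned).
    lam = np.linalg.eigvalsh(AtA)                    # ascending: lam[0] <= lam[1]
    if lam[0] < eig_thresh or lam[1] / max(lam[0], 1e-12) > cond_thresh:
        return None                                  # untrackable window
    return np.linalg.solve(AtA, Atb)                 # (u, v)
```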

Local patch analysis
- Edge: large gradients, all in the same direction; large $\lambda_1$, small $\lambda_2$.
- Low-texture region: gradients have small magnitude; small $\lambda_1$, small $\lambda_2$.
- High-texture region: gradients differ and have large magnitudes; large $\lambda_1$, large $\lambda_2$.

Observation: this is a two-image problem, BUT we can measure sensitivity by just looking at one of the images! This tells us which pixels are easy to track and which are hard, which is very useful later on when we do feature tracking (a sketch follows after this slide block).

Errors in Lucas-Kanade. What are the potential causes of errors in this procedure? Suppose $A^T A$ is easily invertible and there is not much noise in the image. Errors remain when our assumptions are violated:
- Brightness constancy is not satisfied.
- The motion is not small.
- A point does not move like its neighbors: the window size is too large. What is the ideal window size?
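Because trackability depends only on the gradients of one image, a per-pixel map of the smaller eigenvalue $\lambda_2$ of $A^T A$ already classifies patches as corner-like, edge-like, or flat. A hedged NumPy/SciPy sketch (the window size and the use of windowed averages rather than sums are illustrative choices):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def min_eigenvalue_map(I, window=5):
    """Smaller eigenvalue of the 2x2 structure matrix A^T A at every pixel.

    Large values -> textured / corner-like (trackable);
    one large, one small -> edge (aperture problem); both small -> flat.
    """
    Iy, Ix = np.gradient(I.astype(float))
    # Windowed averages of the structure-matrix entries (a constant factor
    # off the windowed sums, which does not change the relative ranking).
    Sxx = uniform_filter(Ix * Ix, window)
    Syy = uniform_filter(Iy * Iy, window)
    Sxy = uniform_filter(Ix * Iy, window)
    # Closed-form eigenvalues of [[Sxx, Sxy], [Sxy, Syy]].
    trace = Sxx + Syy
    delta = np.sqrt((Sxx - Syy) ** 2 + 4 * Sxy ** 2)
    return 0.5 * (trace - delta)    # lambda_2, the smaller eigenvalue
```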

Iterative refinement
Iterative Lucas-Kanade algorithm (a sketch follows after this slide block):
1. Estimate the velocity at each pixel by solving the Lucas-Kanade equations.
2. Warp H towards I using the estimated flow field (use image warping techniques).
3. Repeat until convergence.

Optical Flow: Iterative Estimation. [Figure: successive estimate/update steps, starting from an initial guess $x_0$ and converging to the estimate $x$; $d$ is used for the displacement here instead of $u$.]

Some implementation issues:
- Warping is not easy (ensure that errors in warping are smaller than the estimate refinement).
- Warp one image and take derivatives of the other, so you don't need to re-compute the gradient after each iteration.
- It is often useful to low-pass filter the images before motion estimation (for better derivative estimation and better linear approximations to image intensity).
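A sketch of the refinement loop under the assumptions above, using dense per-pixel flow and bilinear backward warping via SciPy; `estimate_flow` is a placeholder for a one-step Lucas-Kanade estimator and is not the slides' code.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def iterative_lk(H, I, estimate_flow, n_iters=5):
    """Iterative Lucas-Kanade: estimate flow, warp H towards I, repeat.

    `estimate_flow(A, B)` is any single-step dense flow estimator returning
    per-pixel (u, v) arrays, e.g. the window solve sketched earlier.
    """
    Hf = H.astype(float)
    h, w = Hf.shape
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    u_total = np.zeros((h, w))
    v_total = np.zeros((h, w))
    for _ in range(n_iters):
        # Backward-warp H by the current flow estimate (bilinear interpolation).
        warped = map_coordinates(Hf, [yy - v_total, xx - u_total],
                                 order=1, mode='nearest')
        # Estimate the residual motion and accumulate it (first-order composition).
        du, dv = estimate_flow(warped, I)
        u_total += du
        v_total += dv
    return u_total, v_total
```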

Revisiting the small motion assumption
Optical Flow: Aliasing. Temporal aliasing causes ambiguities in optical flow because images can have many pixels with the same intensity, i.e., how do we know which correspondence is correct? [Figure: actual shift vs. estimated shift; the nearest match is correct when there is no aliasing and incorrect when there is aliasing.]
Is this motion small enough? Probably not: it's much larger than one pixel (2nd-order terms dominate). How might we solve this problem? To overcome aliasing: coarse-to-fine estimation. Reduce the resolution!

Coarse-to-fine optical flow estimation (a sketch follows after this slide block). [Figure: Gaussian pyramids of image H and image I; a motion of u = 10 pixels at full resolution becomes u = 5, u = 2.5, and u = 1.25 pixels at successively coarser levels.] Run iterative L-K at the coarsest level, then warp & upsample and run iterative L-K again at each finer level.

Beyond translation. So far, our patch can only translate in (u, v). What about other motion models (rotation, affine, perspective)? Same thing, but we need to add an appropriate Jacobian $J$:
$$
A^T A = \sum_i J_i^T \, \nabla I_i (\nabla I_i)^T J_i, \qquad
A^T b = -\sum_i J_i^T \, \nabla I_i \, I_{t,i}.
$$
See Szeliski's survey of panorama stitching.
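A hedged sketch of the coarse-to-fine scheme ("run iterative L-K, warp & upsample, run iterative L-K..."): build Gaussian pyramids, solve at the coarsest level, then upsample the flow, double it, and refine. The pyramid depth, smoothing, and the `estimate_flow` refiner are assumptions, not the slides' implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def coarse_to_fine_flow(H, I, estimate_flow, n_levels=4):
    """Coarse-to-fine flow estimation over Gaussian pyramids.

    `estimate_flow(A, B, u0, v0)` refines an initial flow (u0, v0), e.g. an
    iterative Lucas-Kanade step; it is a placeholder here.
    """
    # Build Gaussian pyramids (level 0 = full resolution).
    pyr_H, pyr_I = [H.astype(float)], [I.astype(float)]
    for _ in range(n_levels - 1):
        pyr_H.append(gaussian_filter(pyr_H[-1], 1.0)[::2, ::2])
        pyr_I.append(gaussian_filter(pyr_I[-1], 1.0)[::2, ::2])

    u = np.zeros_like(pyr_H[-1])
    v = np.zeros_like(pyr_H[-1])
    for level in reversed(range(n_levels)):
        Hl, Il = pyr_H[level], pyr_I[level]
        if u.shape != Hl.shape:
            # Upsample the flow to this level and double its magnitude.
            factors = [hs / us for hs, us in zip(Hl.shape, u.shape)]
            u = 2.0 * zoom(u, factors, order=1)
            v = 2.0 * zoom(v, factors, order=1)
        u, v = estimate_flow(Hl, Il, u, v)
    return u, v
```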

Recap: classes of techniques
- Feature-based methods (e.g. SIFT + RANSAC + regression): extract visual features (corners, textured areas) and track them over multiple frames. Sparse motion fields, but possibly robust tracking. Suitable especially when the image motion is large (tens of pixels).
- Direct methods (e.g. optical flow): directly recover image motion from spatio-temporal image brightness variations. Global motion parameters are recovered directly, without an intermediate feature motion calculation. Dense motion fields, but more sensitive to appearance variations. Suitable for video and when the image motion is small (< 10 pixels).

Overview: Optical flow, Motion Magnification, Colorization.

Motion Magnification (Motion Microscopy)
How can we see all the subtle motions in a video sequence?
Ce Liu, Antonio Torralba, William T. Freeman, Frédo Durand, Edward H. Adelson
Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology
[Video: original sequence vs. magnified sequence.]

Naïve approach: magnify the estimated optical flow field, then render by warping (sketched after this slide block). [Video: stationary camera, stationary background; original sequence vs. magnified by the naïve approach.]

Layer-based Motion Magnification processing pipeline:
Input raw video sequence → video registration → feature point tracking → trajectory clustering → layer-based motion analysis (dense optical flow interpolation, layer segmentation, user interaction) → magnification, texture fill-in, rendering → output magnified video sequence.
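A sketch of the naïve approach for a single frame, assuming a dense flow field is already available; the amplification factor and helper name are illustrative, not from the paper. The artifacts this produces at occlusions and where the flow is noisy are what motivate the layer-based pipeline.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def naive_magnify(reference, u, v, alpha=10.0):
    """Naive motion magnification: scale the estimated flow (u, v) by `alpha`
    and re-render the frame by backward-warping the reference with it."""
    h, w = reference.shape
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    # Sample the reference at positions displaced by -alpha*(u, v).
    coords = [yy - alpha * v, xx - alpha * u]
    return map_coordinates(reference.astype(float), coords, order=1, mode='nearest')
```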

Motion Magnification pipeline: video registration.
Robust video registration (a least-squares sketch follows after this slide block):
- Find feature points with the Harris corner detector on the reference frame.
- Track the feature points by brute force.
- Select a set of robust feature points with inlier and outlier estimation (most come from the rigid background).
- Warp each frame to the reference frame with a global affine transform.
[Video: stationary camera, stationary background.]

Motion Magnification pipeline: feature point tracking.
Challenges (1): brute-force search is confused by occlusion! Adaptive region of support: learn an adaptive region of support for each feature using the expectation-maximization (EM) algorithm. [Figure: region of support over time.]
Challenges (2): tracking still breaks down at full occlusion (handled by trajectory pruning, next).
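The registration step boils down to fitting a global affine transform from the inlier (background) feature correspondences and warping each frame into the reference frame. A minimal least-squares sketch; the array shapes and names are assumptions.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2D affine transform mapping src points to dst points.

    src, dst: (N, 2) arrays of corresponding (x, y) feature locations,
    e.g. inlier background tracks in the current frame and the reference frame.
    Returns a 2x3 matrix M such that dst ~= [x, y, 1] @ M.T
    """
    N = src.shape[0]
    X = np.hstack([src, np.ones((N, 1))])           # (N, 3) homogeneous coordinates
    # Solve X @ M.T = dst in the least-squares sense (two independent columns).
    M_T, *_ = np.linalg.lstsq(X, dst, rcond=None)   # (3, 2)
    return M_T.T                                     # (2, 3)

# The fitted transform can then be used to warp the frame into the reference
# frame (e.g. with scipy.ndimage.affine_transform) so the background is still.
```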

Trajectory pruning
Even with an adaptive region of support, tracking produces nonsense at full occlusion. Outliers are detected from an inlier probability over time, removed, and the occluded portions are filled in by interpolation. [Figure comparison: tracking with the adaptive region of support and trajectory pruning vs. without.]

Motion Magnification pipeline: trajectory clustering.
Normalized complex correlation. The similarity metric between two trajectories should be independent of phase and magnitude. Writing each trajectory as a complex signal $C(t) = x(t) + i\,y(t)$, the normalized complex correlation is
$$
S(C_1, C_2) = \frac{\left|\sum_t C_1(t)\,\overline{C_2(t)}\right|^2}{\sum_t C_1(t)\,\overline{C_1(t)} \;\; \sum_t C_2(t)\,\overline{C_2(t)}}.
$$

Spectral clustering. Build the affinity matrix between trajectories from this similarity and cluster it. [Figure: affinity matrix before and after reordering by the clustering; the trajectories split into two clusters.]
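A sketch of the two clustering ingredients following the formula above: the phase- and magnitude-invariant similarity between complex trajectories, and a simple two-way spectral split of the resulting affinity matrix. The Laplacian-based bisection here is a stand-in, not necessarily the paper's exact clustering step.

```python
import numpy as np
from scipy.linalg import eigh

def normalized_complex_correlation(C1, C2):
    """Similarity between two complex trajectories C(t) = x(t) + i*y(t),
    independent of overall phase and magnitude."""
    num = np.abs(np.sum(C1 * np.conj(C2))) ** 2
    den = np.sum(C1 * np.conj(C1)).real * np.sum(C2 * np.conj(C2)).real
    return num / den

def spectral_cluster_two(affinity):
    """Split trajectories into two clusters from an affinity matrix using the
    sign of the second eigenvector of the normalized Laplacian."""
    d = affinity.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d + 1e-12))
    L = np.eye(len(d)) - D_inv_sqrt @ affinity @ D_inv_sqrt   # normalized Laplacian
    vals, vecs = eigh(L)                                      # ascending eigenvalues
    return (vecs[:, 1] > 0).astype(int)                       # labels 0 / 1
```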

Motion Magnification pipeline: dense optical flow interpolation.
From sparse feature points to a dense optical flow field: interpolate the dense optical flow field using locally weighted linear regression (a sketch follows after this slide block). [Figure: flow vectors of the clustered sparse feature points; dense optical flow fields for cluster 1 (leaves) and cluster 2 (swing).]

Motion Magnification pipeline: layer segmentation.
Motion layer assignment: assign each pixel to a motion cluster (layer) using four cues:
- Motion likelihood: consistency of a pixel's intensity if it moves with the motion of a given layer (from the dense optical flow field).
- Color likelihood: consistency of the color within a layer.
- Spatial connectivity: adjacent pixels are favored to belong to the same group.
- Temporal coherence: label assignments stay constant over time.
Energy minimization using graph cuts.
Segmentation results: two additional layers are used, a static background layer and an outlier layer.

Motion Magnification pipeline: editing and rendering (magnification, texture fill-in, rendering).
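A hedged sketch of the locally weighted linear regression used for dense flow interpolation: the flow at a pixel is modeled as an affine function of position, fitted to the sparse feature flows of one cluster with locality weights. The Gaussian kernel and bandwidth are assumptions.

```python
import numpy as np

def interpolate_flow_at(p, points, flows, sigma=20.0):
    """Dense flow at pixel p from sparse feature flows by locally weighted
    linear (affine) regression: nearby features get higher weight.

    p: (2,) pixel position; points: (N, 2) feature positions of one cluster;
    flows: (N, 2) their (u, v) displacements.
    """
    p = np.asarray(p, dtype=float)
    d2 = np.sum((points - p) ** 2, axis=1)
    w = np.exp(-d2 / (2 * sigma ** 2))                  # locality weights
    X = np.hstack([points, np.ones((len(points), 1))])  # affine basis [x, y, 1]
    W = np.diag(w)
    # Weighted least squares: (X^T W X) beta = X^T W flows
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ flows)   # (3, 2)
    return np.array([p[0], p[1], 1.0]) @ beta               # interpolated (u, v)
```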

Layered motion representation for motion processing: the video is decomposed into a background layer and moving layers (layer 1, layer 2), each with a layer mask handling occluding layers, and an appearance for each layer before and after texture filling-in (with optional user editing) for motion magnification.

Results
- Is the baby breathing? [Video: original vs. magnified.]
- Are the motions real? [Figure: x-t slices of the original and magnified sequences.]

Applications: education, entertainment, mechanical engineering, medical diagnosis. [Videos: original vs. magnified over time.]

Conclusion
Motion magnification: a motion microscopy technique. A layer-based motion processing system:
- Robust feature point tracking
- Reliable trajectory clustering
- Dense optical flow field interpolation
- Layer segmentation combining multiple cues
Thank you!
Motion Magnification. Ce Liu, Antonio Torralba, William T. Freeman, Frédo Durand, Edward H. Adelson. Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology.

Overview: Optical flow, Motion Magnification, Colorization.

Colorization Using Optimization
Anat Levin, Dani Lischinski, Yair Weiss
School of Computer Science and Engineering, The Hebrew University of Jerusalem, Israel

Colorization: a computer-assisted process of adding color to a monochrome image or movie. (Invented by Wilson Markle, 1970.)

Motivation: colorizing black and white movies and TV shows.
Earl Glick (Chairman, Hal Roach Studios), 1984: "You couldn't make Wyatt Earp today for $1 million an episode. But for $50,000 a segment, you can turn it into color and have a brand new series with no residuals to pay."

Motivation
- Colorizing black and white movies and TV shows.
- Recoloring color images for special effects.

Typical colorization process:
- Delineate region boundaries.
- Choose each region's color from a palette.
(Images from: Yet Another Colorization Tutorial, http://www.worth1000.com/tutorial.asp?sid=161018)

Video colorization process:
- Delineate region boundaries.
- Choose each region's color from a palette.
- Track regions across video frames.
(Images from: Yet Another Colorization Tutorial, http://www.worth1000.com/tutorial.asp?sid=161018)

Colorization process: discussion. Time consuming and labor intensive; fine boundaries are hard to delineate; tracking can fail.

Colorization by analogy (A : A' :: B : B'?), Hertzmann et al. 2001, Welsh et al. 2002.
Discussion: indirect artistic control, and no spatial continuity constraint.

Our approach
The artist scribbles the desired colors inside regions, and the colors are propagated to all pixels: nearby pixels with similar intensities should have similar colors.

Propagation using optimization. Work in YUV space: Y is the intensity channel, U and V are the color channels. Neighboring pixels with similar intensities should have similar colors.

Affinity functions:
$$
w_{rs} \propto e^{-(Y(r) - Y(s))^2 / 2\sigma_r^2},
$$
where $\sigma_r$ is proportional to the local variance around pixel $r$.

Cost function:
$$
J(U) = \sum_r \Big( U(r) - \sum_{s \in N(r)} w_{rs}\, U(s) \Big)^2
$$
Minimize the difference between the color at a pixel and the affinity-weighted average of its neighbors.
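A small sketch of the affinity weights, following the formula above; taking $\sigma_r$ as the local standard deviation of Y over the neighborhood is one common choice and an assumption here, as is the normalization of the weights to sum to one.

```python
import numpy as np

def affinity_weights(Y, r, neighbors, eps=1e-6):
    """Weights w_rs between pixel r and its neighbors, based on intensity.

    Y: 2D luminance image; r: (row, col) tuple; neighbors: list of (row, col).
    """
    Yr = Y[r]
    Ys = np.array([Y[s] for s in neighbors], dtype=float)
    sigma_r = Ys.std() + eps                       # local variability around r
    w = np.exp(-((Yr - Ys) ** 2) / (2 * sigma_r ** 2))
    return w / w.sum()                             # normalize so weights sum to 1
```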

Affinity functions in space-time: the same affinity $w_{rs} \propto e^{-(Y(r) - Y(s))^2 / 2\sigma_r^2}$ is used, with neighborhoods that extend across adjacent frames $i - 1$, $i$, $i + 1$.

Minimizing the cost function: minimize
$$
J(U) = \sum_r \Big( U(r) - \sum_{s \in N(r)} w_{rs}\, U(s) \Big)^2
$$
subject to the labeling (scribble) constraints. Since the cost is quadratic, the minimum can be found by solving a sparse system of linear equations (a sketch follows after this slide block). The result is a smooth color interpolation that respects intensity boundaries.

Coloring stills. [Figures: original grayscale images with scribbles, and the colorized results.]
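A sketch of the sparse solve for one color channel. One common simplification (an assumption here, not necessarily the authors' exact implementation) is to require $U(r) = \sum_s w_{rs} U(s)$ at every unconstrained pixel and $U(r)$ equal to the scribbled value at constrained pixels, which gives one sparse linear equation per pixel. The 4-neighborhood is also an assumption; `weight_fn` can be the affinity sketch above.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def propagate_channel(Y, scribble_mask, scribble_vals, weight_fn):
    """Propagate one color channel U from scribbles over luminance image Y."""
    h, w = Y.shape
    n = h * w
    idx = lambda i, j: i * w + j
    rows, cols, vals, b = [], [], [], np.zeros(n)
    for i in range(h):
        for j in range(w):
            r = idx(i, j)
            if scribble_mask[i, j]:
                # Hard constraint row: U(r) = scribbled value.
                rows.append(r); cols.append(r); vals.append(1.0)
                b[r] = scribble_vals[i, j]
                continue
            nbrs = [(i + di, j + dj)
                    for di, dj in [(-1, 0), (1, 0), (0, -1), (0, 1)]
                    if 0 <= i + di < h and 0 <= j + dj < w]
            wts = weight_fn((i, j), nbrs)
            # Row encodes U(r) - sum_s w_rs U(s) = 0.
            rows.append(r); cols.append(r); vals.append(1.0)
            for (ni, nj), wrs in zip(nbrs, wts):
                rows.append(r); cols.append(idx(ni, nj)); vals.append(-wrs)
    A = sp.csr_matrix((vals, (rows, cols)), shape=(n, n))
    return spsolve(A, b).reshape(h, w)
```

Solving this system once for U and once for V, with Y kept from the input image, yields the colorized result.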

Coloring stills. [Further still-image colorization results.]

Colorization challenges: could segmentation do it? NCuts segmentation (Shi & Malik 97) and segmentation-aided colorization are compared with our result.

Recoloring: for recoloring, the affinity between pixels is based on intensity AND color similarities.

Recoloring. [Recoloring results; c.f. Poisson image editing, Pérez et al., SIGGRAPH 2003.]

Colorizing video: the artist scribbles on only a small subset of the frames (13 frames in one example, 16 out of 101 frames in another) and the colors propagate through the space-time volume.

Matting as colorization: the red channel can be treated like the matte (red channel <-> matte).

Future work: import advantages from image segmentation (affinity functions, optimization techniques); alternative color spaces, propagating hue and saturation differently.

Summary
- Interface: the user scribbles color on a small number of pixels.
- Colors propagate in the space-time volume, respecting intensity boundaries.
- Convincing colorization with a small amount of user effort.
Code & examples available: www.cs.huji.ac.il/~yweiss/colorization/