Computer Vision Lecture 20


1 Computer Vision Lecture 20: Motion and Optical Flow. Bastian Leibe, RWTH Aachen, leibe@vision.rwth-aachen.de. Many slides adapted from K. Grauman, S. Seitz, R. Szeliski, M. Pollefeys, S. Lazebnik.

Course Outline: Image Processing Basics; Segmentation & Grouping; Object Recognition (Local Features & Matching, Object Categorization); 3D Reconstruction (Epipolar Geometry and Stereo Basics, Camera Calibration & Uncalibrated Reconstruction, Active Stereo); Motion (Motion and Optical Flow); 3D Reconstruction (Reprise), Structure-from-Motion.

Recap: Epipolar Geometry, Calibrated Case. For normalized image coordinates x̂ and x̂', the epipolar constraint is x̂'^T E x̂ = 0 with E = [t]_x R, the Essential Matrix (Longuet-Higgins, 1981).

Recap: Epipolar Geometry, Uncalibrated Case. For pixel coordinates x = K x̂ and x' = K' x̂', the constraint is x'^T F x = 0 with F = K'^{-T} E K^{-1}, the Fundamental Matrix (Faugeras and Luong, 1992).

Recap: The Eight-Point Algorithm. Writing x = (u, v, 1)^T and x' = (u', v', 1)^T, each correspondence gives one linear equation [u'u, u'v, u', v'u, v'v, v', u, v, 1] · (F11, F12, F13, F21, F22, F23, F31, F32, F33)^T = 0. Stacking eight or more correspondences yields a homogeneous system A f = 0, which is solved with the SVD; this minimizes Σ_i (x'_i^T F x_i)^2 subject to ||f|| = 1. Slide adapted from Svetlana Lazebnik.

Recap: Normalized Eight-Point Algorithm [Hartley, 1995].
1. Center the image data at the origin and scale it so that the mean squared distance between the origin and the data points is 2 pixels.
2. Use the eight-point algorithm to compute F from the normalized points.
3. Enforce the rank-2 constraint using the SVD F = U D V^T: set the smallest singular value d33 to zero and reconstruct F.
4. Transform the fundamental matrix back to the original units: if T and T' are the normalizing transformations in the two images, then the fundamental matrix in the original coordinates is T'^T F T.
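To make the recap concrete, here is a minimal NumPy sketch of the normalized eight-point algorithm as summarized above (the function names and the exact normalization constant follow the slide's description; they are illustrative assumptions, not code from the lecture):

```python
import numpy as np

def normalize_points(pts):
    """Translate points to zero mean and scale so the mean squared distance
    from the origin is 2 (the normalization described on the slide).
    Returns the normalized points and the 3x3 transformation T."""
    mean = pts.mean(axis=0)
    centered = pts - mean
    msd = (centered ** 2).sum(axis=1).mean()          # mean squared distance
    s = np.sqrt(2.0 / msd)                            # scale so msd becomes 2
    T = np.array([[s, 0, -s * mean[0]],
                  [0, s, -s * mean[1]],
                  [0, 0, 1.0]])
    return centered * s, T

def eight_point(x, x_prime):
    """Normalized eight-point algorithm; x, x_prime are Nx2 arrays of
    corresponding pixel coordinates (N >= 8)."""
    xn, T = normalize_points(x)
    xpn, Tp = normalize_points(x_prime)
    u, v = xn[:, 0], xn[:, 1]
    up, vp = xpn[:, 0], xpn[:, 1]
    # one row [u'u, u'v, u', v'u, v'v, v', u, v, 1] per correspondence
    A = np.stack([up*u, up*v, up, vp*u, vp*v, vp, u, v, np.ones_like(u)], axis=1)
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)          # least-squares solution of A f = 0
    U, D, Vt2 = np.linalg.svd(F)
    D[2] = 0.0                        # enforce the rank-2 constraint
    F = U @ np.diag(D) @ Vt2
    return Tp.T @ F @ T               # back to the original (unnormalized) coordinates
```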

2 Practical Considerations: Role of the Baseline. Small baseline: large depth error. Large baseline: difficult search problem. Solution: track features between frames until the baseline is sufficient.

Topics of This Lecture: Introduction to Motion (applications and uses; motion field derivation); Optical Flow (brightness constancy constraint, aperture problem, Lucas-Kanade flow, iterative refinement, global parametric motion, coarse-to-fine estimation, motion segmentation); KLT Feature Tracking. Slide adapted from Steve Seitz.

Video. A video is a sequence of frames captured over time. Now our image data is a function of space (x, y) and time (t).

Motion and Perceptual Organization. Sometimes, motion is the only cue. Sometimes, motion is the foremost cue. Even impoverished motion data can evoke a strong percept. Slide credit: Kristen Grauman.

3 Uses of Motion. Estimating 3D structure: directly from optic flow, or indirectly by creating correspondences for structure from motion. Segmenting objects based on motion cues. Learning dynamical models. Recognizing events and activities. Improving video quality (motion stabilization). Slide adapted from Svetlana Lazebnik.

Motion Estimation Techniques. Direct methods: directly recover image motion at each pixel from spatiotemporal image brightness variations; dense motion fields, but sensitive to appearance variations; suitable for video and when the image motion is small. Feature-based methods: extract visual features (corners, textured areas) and track them over multiple frames; sparse motion fields, but more robust tracking; suitable when the image motion is large (10s of pixels).

Motion Field. The motion field is the projection of the 3D scene motion into the image.

Motion Field and Parallax. P(t) is a moving 3D point with velocity V = dP/dt. p(t) = (x(t), y(t)) is the projection of P in the image. The apparent velocity v in the image is given by the components v_x = dx/dt and v_y = dy/dt. These components are known as the motion field of the image.

4 Motion Field and Parallax. To find the image velocity v, differentiate p with respect to t using the quotient rule (f/g)' = (g f' − f g')/g². With perspective projection p = f P / Z, this gives v = (f V − V_z p) / Z: image motion is a function of both the 3D motion V and the depth Z of the 3D point.

Pure translation: V is constant everywhere. If V_z is nonzero, every motion vector points toward (or away from) the vanishing point of the translation direction. If V_z is zero, the motion is parallel to the image plane and all the motion vectors are parallel. The length of the motion vectors is inversely proportional to the depth Z.

Optical Flow. Definition: optical flow is the apparent motion of brightness patterns in the image. Ideally, optical flow would be the same as the motion field. One has to be careful: apparent motion can be caused by lighting changes without any actual motion. Think of a uniform rotating sphere under fixed lighting vs. a stationary sphere under moving illumination.
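The pure-translation case above can be visualized with a short NumPy sketch (the focal length, the image size, and the constant-depth example are illustrative assumptions, not values from the slides):

```python
import numpy as np

def translational_motion_field(V, Z, f=1.0, width=640, height=480):
    """Motion field v = (f*V_xy - V_z*p) / Z for a 3D point velocity
    V = (Vx, Vy, Vz) that is constant everywhere; Z is a per-pixel depth map."""
    xs = np.arange(width) - width / 2.0          # image coordinates relative
    ys = np.arange(height) - height / 2.0        # to the principal point
    x, y = np.meshgrid(xs, ys)
    Vx, Vy, Vz = V
    vx = (f * Vx - Vz * x) / Z                   # vector lengths shrink with depth Z
    vy = (f * Vy - Vz * y) / Z
    return vx, vy

# Example: translation along the optical axis over a constant-depth plane;
# all vectors point toward (or away from) the vanishing point, here the image center.
vx, vy = translational_motion_field(V=(0.0, 0.0, 1.0), Z=np.full((480, 640), 5.0))
```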

5 Apparent Motion vs. Motion Field. Estimating Optical Flow: given two subsequent frames I(x, y, t−1) and I(x, y, t), estimate the apparent motion field u(x, y) and v(x, y) between them. Key assumptions: Brightness constancy — the projection of the same point looks the same in every frame. Small motion — points do not move very far. Spatial coherence — points move like their neighbors. Slide credit: Kristen Grauman. Figure from Horn book.

The Brightness Constancy Constraint. Brightness constancy equation: I(x, y, t−1) = I(x + u(x, y), y + v(x, y), t). Linearizing the right-hand side using a Taylor expansion: I(x + u, y + v, t) ≈ I(x, y, t) + I_x·u + I_y·v. Hence the constraint becomes I_x·u + I_y·v + I_t ≈ 0, where I_x, I_y are the spatial derivatives and I_t is the temporal derivative.

How many equations and unknowns per pixel? One equation, two unknowns. Intuitively, what does this constraint mean? The component of the flow perpendicular to the gradient (i.e., parallel to the edge) is unknown: if (u, v) satisfies the equation ∇I·(u, v) + I_t = 0, so does (u + u', v + v') whenever ∇I·(u', v') = 0.

The Aperture Problem: the locally perceived motion can differ from the actual motion.
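A minimal sketch of the constraint terms, assuming simple finite differences for the derivatives (the function names are illustrative, not from the slides). The second function returns the only component that a single pixel can determine, i.e., the normal flow along the gradient:

```python
import numpy as np

def brightness_constancy_terms(I0, I1):
    """Spatial and temporal derivatives for the constraint Ix*u + Iy*v + It = 0.
    I0, I1 are two consecutive grayscale frames as float arrays."""
    Ix = np.gradient(I1, axis=1)      # derivative along x (columns)
    Iy = np.gradient(I1, axis=0)      # derivative along y (rows)
    It = I1 - I0                      # temporal derivative
    return Ix, Iy, It

def normal_flow(Ix, Iy, It, eps=1e-6):
    """Only the flow component along the gradient is determined by one pixel
    (aperture problem); this returns that normal-flow component per pixel."""
    grad_mag2 = Ix**2 + Iy**2 + eps
    return -It * Ix / grad_mag2, -It * Iy / grad_mag2
```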

6 The Barber Pole Illusion (a classic demonstration of the aperture problem).

Solving the Aperture Problem. How to get more equations for a pixel? Spatial coherence constraint: pretend the pixel's neighbors have the same (u, v). If we use a 5×5 window, that gives us 25 equations per pixel. B. Lucas and T. Kanade. An iterative image registration technique with an application to stereo vision. In Proceedings of the International Joint Conference on Artificial Intelligence, 1981.

Least squares problem: stacking the constraints I_x(p_i)·u + I_y(p_i)·v = −I_t(p_i) for all pixels p_i in the window gives an overdetermined system A d = b with d = (u, v)^T. The minimum least squares solution is given by the Lucas-Kanade equation A^T A d = A^T b, i.e.,
[ Σ I_x I_x   Σ I_x I_y ] [u]   =  − [ Σ I_x I_t ]
[ Σ I_x I_y   Σ I_y I_y ] [v]        [ Σ I_y I_t ]
where the summations are over all pixels in the K×K window.

Conditions for Solvability. When is this solvable? A^T A should be invertible. The entries of A^T A should not be too small (noise). A^T A should be well-conditioned. Slide adapted from Svetlana Lazebnik.
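A per-pixel windowed Lucas-Kanade solve corresponding to the equation above might look as follows (a sketch under the 5×5-window assumption from the slide; the conditioning threshold is an illustrative choice):

```python
import numpy as np

def lucas_kanade_at(Ix, Iy, It, y, x, half=2):
    """Solve the windowed least-squares system A^T A d = A^T b for the flow
    d = (u, v) at pixel (x, y), using a (2*half+1)^2 window (5x5 by default)."""
    win = np.s_[y - half:y + half + 1, x - half:x + half + 1]
    ix, iy, it = Ix[win].ravel(), Iy[win].ravel(), It[win].ravel()
    A = np.stack([ix, iy], axis=1)            # one row [Ix, Iy] per window pixel
    b = -it
    AtA = A.T @ A
    # Guard against the aperture problem: A^T A must be well-conditioned.
    if np.linalg.cond(AtA) > 1e4:
        return None                           # only normal flow is available here
    u, v = np.linalg.solve(AtA, A.T @ b)
    return u, v
```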

7 Eigenvectors of A^T A. Haven't we seen an equation like this before? Recall the Harris corner detector: M = A^T A is the second moment matrix. The eigenvectors and eigenvalues of M relate to edge direction and magnitude. The eigenvector associated with the larger eigenvalue points in the direction of fastest intensity change, and the other eigenvector is orthogonal to it.

Interpreting the Eigenvalues. Classification of image points using the eigenvalues λ1, λ2 of the second moment matrix: Corner — λ1 and λ2 are large, λ1 ~ λ2. Edge — one eigenvalue is much larger than the other (λ1 >> λ2 or λ2 >> λ1). Flat region — λ1 and λ2 are small. Slide credit: Kristen Grauman.

Edge: gradients are very large or very small; large λ1, small λ2. Low-texture region: gradients have small magnitude; small λ1, small λ2. High-texture region: gradients are different and of large magnitude; large λ1, large λ2. Corners and textured areas are OK.

Per-Pixel Estimation Procedure. Let M = Σ ∇I ∇I^T and b = −Σ ∇I I_t. Algorithm: at each pixel, compute U by solving M U = b. M is singular if all gradient vectors point in the same direction, e.g., along an edge; it is trivially singular if the summation is over a single pixel or if there is no texture. In those cases only normal flow is available (the aperture problem).
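A sketch of this eigenvalue-based reliability check (the thresholds tau and ratio are illustrative assumptions, not values from the lecture):

```python
import numpy as np

def second_moment_eigenvalues(Ix, Iy, y, x, half=2):
    """Eigenvalues of the 2x2 second moment matrix M = sum(grad I * grad I^T)
    over a window; used to judge whether the flow at (x, y) is reliable."""
    win = np.s_[y - half:y + half + 1, x - half:x + half + 1]
    ix, iy = Ix[win].ravel(), Iy[win].ravel()
    M = np.array([[np.sum(ix * ix), np.sum(ix * iy)],
                  [np.sum(ix * iy), np.sum(iy * iy)]])
    lam = np.linalg.eigvalsh(M)        # ascending order: lam[0] <= lam[1]
    return lam[0], lam[1]

def classify(lam_min, lam_max, tau=1e-2, ratio=10.0):
    """Heuristic classification following the slide's eigenvalue criteria."""
    if lam_max < tau:
        return "flat region"           # both eigenvalues small
    if lam_max / max(lam_min, 1e-12) > ratio:
        return "edge"                  # one eigenvalue dominates: aperture problem
    return "corner / textured"         # both large: flow is well constrained
```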

8 Optical Flow: Iterative Refinement.
1. Estimate the velocity at each pixel using one iteration of Lucas-Kanade estimation.
2. Warp one image toward the other using the estimated flow field (easier said than done).
3. Refine the estimate by repeating the process.
(The slides use d for the displacement here instead of u.) Slide adapted from Steve Seitz.

Some implementation issues: Warping is not easy (ensure that the errors in warping are smaller than the estimate refinement). Warp one image and take derivatives of the other, so you don't need to re-compute the gradient after each iteration. It is often useful to low-pass filter the images before motion estimation, for better derivative estimation and better linear approximations to the image intensity.
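A sketch of the refinement loop, assuming a hypothetical dense Lucas-Kanade step lk_step(I0, I1) that returns an incremental flow field; the backward-warping convention used here is one common choice, not necessarily the one on the slides:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp(I, u, v):
    """Sample image I at (x + u, y + v) with bilinear interpolation."""
    h, w = I.shape
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    return map_coordinates(I, [yy + v, xx + u], order=1, mode='nearest')

def iterative_refinement(I0, I1, lk_step, n_iters=5):
    """Repeat: warp I1 back toward I0 with the current flow estimate, estimate
    the residual displacement with one Lucas-Kanade step, and accumulate."""
    u = np.zeros_like(I0, dtype=float)
    v = np.zeros_like(I0, dtype=float)
    for _ in range(n_iters):
        I1_warped = warp(I1, u, v)        # I1 brought toward I0 by the current flow
        du, dv = lk_step(I0, I1_warped)   # residual flow between I0 and warped I1
        u, v = u + du, v + dv
    return u, v
```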

9 Extension: Global Parametric Motion Models. Translation: 2 unknowns. Affine: 6 unknowns. Perspective: 8 unknowns. 3D rotation: 3 unknowns.

Example: Affine Motion. u(x, y) = a1 + a2·x + a3·y, v(x, y) = a4 + a5·x + a6·y. Substituting into the brightness constancy equation I_x·u + I_y·v + I_t ≈ 0 gives I_x·(a1 + a2·x + a3·y) + I_y·(a4 + a5·x + a6·y) + I_t ≈ 0. Each pixel provides one linear constraint in 6 unknowns. Least squares minimization: Err(a) = Σ [I_x·(a1 + a2·x + a3·y) + I_y·(a4 + a5·x + a6·y) + I_t]².

Problem Cases in Lucas-Kanade. The motion is large (larger than a pixel): iterative refinement, coarse-to-fine estimation. A point does not move like its neighbors: motion segmentation. Brightness constancy does not hold: do an exhaustive neighborhood search with normalized correlation.

Dealing with Large Motions: Temporal Aliasing. Temporal aliasing causes ambiguities in optical flow because images can have many pixels with the same intensity, i.e., how do we know which correspondence is correct? The nearest match may be correct (no aliasing) or incorrect (aliasing). To overcome aliasing: coarse-to-fine estimation.
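A least-squares fit of the six affine parameters from these per-pixel constraints could be sketched as follows (illustrative, using dense image gradients as in the brightness constancy sketch above):

```python
import numpy as np

def affine_flow_parameters(Ix, Iy, It):
    """Least-squares fit of the 6 affine motion parameters
    u = a1 + a2*x + a3*y, v = a4 + a5*x + a6*y
    from the brightness constancy constraints of all pixels."""
    h, w = Ix.shape
    y, x = np.mgrid[0:h, 0:w].astype(float)
    ix, iy, it = Ix.ravel(), Iy.ravel(), It.ravel()
    x, y = x.ravel(), y.ravel()
    # One row per pixel: Ix*(a1 + a2*x + a3*y) + Iy*(a4 + a5*x + a6*y) = -It
    A = np.stack([ix, ix * x, ix * y, iy, iy * x, iy * y], axis=1)
    a, *_ = np.linalg.lstsq(A, -it, rcond=None)
    return a        # (a1, ..., a6)
```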

10 Idea: Reduce the Resolution! Coarse-to-fine Optical Flow Estimation. At coarser levels of the Gaussian pyramids of image 1 and image 2, the displacement shrinks accordingly (e.g., u = 10 pixels at full resolution becomes u = 5, 2.5, and 1.25 pixels at successively coarser levels). Procedure: run iterative L-K at the coarsest level, then warp & upsample the flow to the next finer level, run iterative L-K again, and repeat down to full resolution.

Dense Optical Flow. Dense measurements can be obtained by adding smoothness constraints. T. Brox, C. Bregler, J. Malik. Large displacement optical flow. CVPR'09, Miami, USA, June 2009.

Summary. Motion field: 3D motions projected to 2D images; dependency on depth. Solving for motion with sparse feature matches or dense optical flow. Optical flow: brightness constancy assumption, aperture problem, solution with the spatial coherence assumption. Slide credit: Kristen Grauman.

References and Further Reading. Here is the original paper by Lucas & Kanade: B. Lucas and T. Kanade. An iterative image registration technique with an application to stereo vision. In Proc. IJCAI, 1981. And the original paper by Shi & Tomasi: J. Shi and C. Tomasi. Good Features to Track. CVPR 1994.
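A coarse-to-fine sketch tying the pyramid idea to the iterative refinement above; lk_refine is an assumed iterative Lucas-Kanade routine that accepts an initial flow, and the factor-of-two flow rescaling is the usual approximation when moving one pyramid level finer:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def gaussian_pyramid(I, n_levels=4):
    """Finest-to-coarsest Gaussian pyramid: blur, then subsample by 2."""
    pyr = [I.astype(float)]
    for _ in range(n_levels - 1):
        pyr.append(gaussian_filter(pyr[-1], sigma=1.0)[::2, ::2])
    return pyr

def coarse_to_fine_flow(I0, I1, lk_refine, n_levels=4):
    """Run iterative L-K at the coarsest level, then repeatedly warp & upsample
    the flow and refine at the next finer level."""
    pyr0 = gaussian_pyramid(I0, n_levels)
    pyr1 = gaussian_pyramid(I1, n_levels)
    u = np.zeros_like(pyr0[-1])
    v = np.zeros_like(pyr0[-1])
    for level in reversed(range(n_levels)):           # coarsest -> finest
        if level < n_levels - 1:
            scale = (pyr0[level].shape[0] / u.shape[0],
                     pyr0[level].shape[1] / u.shape[1])
            u = zoom(u, scale, order=1) * 2.0         # upsample and rescale the flow
            v = zoom(v, scale, order=1) * 2.0
        u, v = lk_refine(pyr0[level], pyr1[level], u, v)
    return u, v
```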
