Lecture 14: Tracking motion features - optical flow


1 Lecture 14: Tracking motion features - optical flow. Dr. Juan Carlos Niebles, Stanford AI Lab. Professor Fei-Fei Li, Stanford Vision Lab.

2 What we will learn today: Introduction; Optical flow; Feature tracking; Applications; (Supplementary) Technical note. Reading: [Szeliski] Chapters 8.4, 8.5; [Fleet & Weiss, 2005].

3 From images to videos. A video is a sequence of frames captured over time. Now our image data is a function of space (x, y) and time (t).

4 Motion estimation techniques. Optical flow: recover image motion at each pixel from spatio-temporal image brightness variations. Feature tracking: extract visual features (corners, textured areas) and track them over multiple frames.

5 Optical flow: a vector field function of the spatio-temporal image brightness variations. Picture courtesy of Selim Temizer - Learning and Intelligent Systems (LIS) Group, MIT.

6-7 Feature tracking. Courtesy of Jean-Yves Bouguet, Vision Lab, California Institute of Technology.

8 Optical flow. Definition: optical flow is the apparent motion of brightness patterns in the image. Note: apparent motion can be caused by lighting changes without any actual motion; think of a uniform rotating sphere under fixed lighting vs. a stationary sphere under moving illumination. GOAL: recover image motion at each pixel from optical flow.

9 (technical note) Estimating optical flow. Given two subsequent frames I(x, y, t-1) and I(x, y, t), estimate the apparent motion field u(x, y), v(x, y) between them. Key assumptions: Brightness constancy: the projection of the same point looks the same in every frame. Small motion: points do not move very far. Spatial coherence: points move like their neighbors.

10 (technical note) The brightness constancy constraint. Given frames I(x, y, t-1) and I(x, y, t), the Brightness Constancy Equation is I(x, y, t-1) = I(x + u(x, y), y + v(x, y), t). Linearizing the right side using a first-order Taylor expansion: I(x + u, y + v, t) ≈ I(x, y, t-1) + I_x u(x, y) + I_y v(x, y) + I_t, where I_x is the image derivative along x. Hence I(x + u, y + v, t) - I(x, y, t-1) ≈ I_x u(x, y) + I_y v(x, y) + I_t, which gives I_x u + I_y v + I_t ≈ 0, i.e. ∇I · [u v]^T + I_t = 0.
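To make the terms of this constraint concrete, here is a minimal sketch (not from the lecture) of how I_x, I_y, and I_t might be estimated from two grayscale frames with simple finite differences; the names frame0/frame1 and the use of NumPy gradients are assumptions for illustration.

```python
import numpy as np

def brightness_constancy_terms(frame0, frame1):
    """Estimate I_x, I_y, I_t for the constraint I_x*u + I_y*v + I_t = 0.

    frame0, frame1: grayscale float images at times t-1 and t (hypothetical inputs).
    Spatial derivatives use central differences on the first frame;
    the temporal derivative is a simple frame difference.
    """
    I_x = np.gradient(frame0, axis=1)   # derivative along x (columns)
    I_y = np.gradient(frame0, axis=0)   # derivative along y (rows)
    I_t = frame1 - frame0               # derivative along t
    return I_x, I_y, I_t
```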

11 (technical note) The brightness constancy constraint. Can we use this equation to recover image motion (u, v) at each pixel? ∇I · [u v]^T + I_t = 0. How many equations and unknowns per pixel? One equation (this is a scalar equation!) and two unknowns (u, v). The component of the flow perpendicular to the gradient (i.e., parallel to the edge) cannot be measured: if (u, v) satisfies the equation, so does (u + u', v + v') whenever ∇I · [u' v']^T = 0.

12 The aperture problem: actual motion.

13 The aperture problem: perceived motion.

14-15 The barber pole illusion. http://en.wikipedia.org/wiki/barberpole_illusion

16 Aperture problem, cont'd. * From Marc Pollefeys, COMP.

17 (technical note) Solving the ambiguity. B. Lucas and T. Kanade. An iterative image registration technique with an application to stereo vision. In Proceedings of the International Joint Conference on Artificial Intelligence, 1981. How do we get more equations for a pixel? Spatial coherence constraint: assume the pixel's neighbors have the same (u, v). If we use a 5x5 window, that gives us 25 equations per pixel.

18 (technical note) Lucas-Kanade flow. Stacking the 25 constraints gives an overconstrained linear system A d = b, where each row of A is [I_x(p_i) I_y(p_i)], d = [u v]^T, and b_i = -I_t(p_i).

19 (technical note) Lucas-Kanade flow. The overconstrained linear system A d = b has the least squares solution for d given by the normal equations (A^T A) d = A^T b, i.e. [Σ I_x I_x, Σ I_x I_y; Σ I_x I_y, Σ I_y I_y] [u; v] = -[Σ I_x I_t; Σ I_y I_t]. The summations are over all pixels in the K x K window.
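A minimal single-window sketch of this least squares solution, assuming I_x, I_y, I_t have already been computed (e.g. as in the earlier sketch); the window size, conditioning threshold, and function name are illustrative choices rather than part of the lecture.

```python
import numpy as np

def lucas_kanade_window(I_x, I_y, I_t, x, y, k=5):
    """Solve the Lucas-Kanade normal equations (A^T A) d = A^T b
    for the flow d = [u, v] in a k x k window centered at (x, y)."""
    r = k // 2
    ix = I_x[y - r:y + r + 1, x - r:x + r + 1].ravel()
    iy = I_y[y - r:y + r + 1, x - r:x + r + 1].ravel()
    it = I_t[y - r:y + r + 1, x - r:x + r + 1].ravel()

    A = np.stack([ix, iy], axis=1)      # one row [I_x, I_y] per pixel
    b = -it                             # right-hand side
    ATA = A.T @ A                       # 2x2 second moment matrix
    ATb = A.T @ b

    # Guard against the aperture problem: ATA may be (nearly) singular.
    if np.linalg.cond(ATA) > 1e6:       # illustrative threshold
        return None
    u, v = np.linalg.solve(ATA, ATb)
    return u, v
```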

20 (technical note) Conditions for solvability. The optimal (u, v) satisfies the Lucas-Kanade equation. When is this solvable? A^T A should be invertible; A^T A should not be too small due to noise (the eigenvalues λ1 and λ2 of A^T A should not be too small); and A^T A should be well-conditioned (λ1/λ2 should not be too large, where λ1 is the larger eigenvalue). Does this remind you of anything?

21 (technical note) M = A^T A is the second moment matrix (the Harris corner detector!). The eigenvectors and eigenvalues of A^T A relate to edge direction and magnitude: the eigenvector associated with the larger eigenvalue points in the direction of fastest intensity change, and the other eigenvector is orthogonal to it.

22 (technical note) Interpreting the eigenvalues. Classification of image points using the eigenvalues of the second moment matrix: corner if λ1 and λ2 are both large and λ1 ~ λ2; edge if λ1 >> λ2 (or λ2 >> λ1); flat region if λ1 and λ2 are both small.
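A small illustrative sketch of this classification, assuming precomputed gradients; the threshold tau and the eigenvalue-ratio test are placeholder values chosen for illustration, not standard settings.

```python
import numpy as np

def classify_window(I_x, I_y, y0, y1, x0, x1, tau=1e-2):
    """Classify an image window using the eigenvalues of the second
    moment matrix M = A^T A built from the gradients in the window."""
    ix = I_x[y0:y1, x0:x1].ravel()
    iy = I_y[y0:y1, x0:x1].ravel()
    M = np.array([[np.sum(ix * ix), np.sum(ix * iy)],
                  [np.sum(ix * iy), np.sum(iy * iy)]])
    lam1, lam2 = sorted(np.linalg.eigvalsh(M), reverse=True)  # lam1 >= lam2

    if lam2 > tau and lam1 / (lam2 + 1e-12) < 10:
        return "corner"      # both eigenvalues large and comparable
    if lam1 > tau:
        return "edge"        # one dominant eigenvalue
    return "flat"            # both eigenvalues small
```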

23 (technical note) Edge: gradients are very large or very small; large λ1, small λ2.

24 (technical note) Low-texture region: gradients have small magnitude; small λ1, small λ2.

25 (technical note) High-texture region: gradients are different, with large magnitudes; large λ1, large λ2.

26 What we will learn today: Introduction; Optical flow; Feature tracking; Applications; (Supplementary) Technical note.

27 What are good features to track? We can measure the quality of features from just a single image. Hence, tracking Harris corners (or equivalent) guarantees small error sensitivity! → Implemented in OpenCV.
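Since the slide points to OpenCV, one possible way to select such features is cv2.goodFeaturesToTrack (the Shi-Tomasi minimum-eigenvalue criterion, closely related to Harris); the filename and parameter values below are illustrative assumptions.

```python
import cv2

# Detect good features to track: corners whose smaller eigenvalue of the
# second moment matrix exceeds a quality threshold.
gray = cv2.imread("frame0.png", cv2.IMREAD_GRAYSCALE)  # hypothetical filename
corners = cv2.goodFeaturesToTrack(gray,
                                  maxCorners=200,     # keep at most 200 points
                                  qualityLevel=0.01,  # relative eigenvalue threshold
                                  minDistance=10)     # minimum spacing between corners
```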

28 Recap: key assumptions (errors in Lucas-Kanade). Small motion: points do not move very far. Brightness constancy: the projection of the same point looks the same in every frame. Spatial coherence: points move like their neighbors.

29 Revisiting the small motion assumption. Is this motion small enough? Probably not; it's much larger than one pixel (2nd-order terms dominate). How might we solve this problem? * From Khurram Hassan-Shafique, CAP5415 Computer Vision 2003.

30 Reduce the resolution! * From Khurram Hassan-Shafique, CAP5415 Computer Vision 2003.

31 Coarse-to-fine optical flow estimation. Gaussian pyramids of image 1 (H) and image 2 (I): a motion of u = 10 pixels at the finest level becomes u = 5, 2.5, and 1.25 pixels at successively coarser levels.

32 Coarse-to-fine optical flow estimation. Using the Gaussian pyramids of image 1, J (t), and image 2, I (t+1): run iterative L-K at the coarsest level, then warp & upsample and run iterative L-K again at each finer level.
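OpenCV's cv2.calcOpticalFlowPyrLK implements this pyramidal, iterative Lucas-Kanade scheme; below is a sketch with assumed filenames and illustrative window and pyramid settings.

```python
import cv2

prev = cv2.imread("frame0.png", cv2.IMREAD_GRAYSCALE)   # hypothetical filenames
curr = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)
p0 = cv2.goodFeaturesToTrack(prev, maxCorners=200, qualityLevel=0.01, minDistance=10)

# Pyramidal (coarse-to-fine) iterative Lucas-Kanade tracking of the points.
p1, status, err = cv2.calcOpticalFlowPyrLK(
    prev, curr, p0, None,
    winSize=(15, 15),   # K x K integration window at each pyramid level
    maxLevel=3)         # number of extra pyramid levels (0 = no pyramid)

flow = (p1 - p0)[status.ravel() == 1]  # displacements of successfully tracked points
```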

33 Optical Flow Results. * From Khurram Hassan-Shafique, CAP5415 Computer Vision 2003.

34 Optical Flow Results (OpenCV). * From Khurram Hassan-Shafique, CAP5415 Computer Vision 2003.
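For dense flow fields like the ones shown here, OpenCV also provides cv2.calcOpticalFlowFarneback (a polynomial-expansion method, not Lucas-Kanade); the sketch below uses assumed filenames and illustrative parameters.

```python
import cv2

prev = cv2.imread("frame0.png", cv2.IMREAD_GRAYSCALE)   # hypothetical filenames
curr = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)

# Dense optical flow: flow[y, x] = (u, v) for every pixel.
flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                    pyr_scale=0.5, levels=3, winsize=15,
                                    iterations=3, poly_n=5, poly_sigma=1.2,
                                    flags=0)
```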

35 Recap: key assumptions (errors in Lucas-Kanade). Small motion: points do not move very far. Brightness constancy: the projection of the same point looks the same in every frame. Spatial coherence: points move like their neighbors.

36 Reminder: Gestalt common fate.

37 Motion segmentation. How do we represent the motion in this scene?

38 Motion segmentation. J. Wang and E. Adelson. Layered Representation for Motion Analysis. CVPR 1993. Break the image sequence into layers, each of which has a coherent (affine) motion.

39 (technical note) Affine motion: u(x, y) = a1 + a2 x + a3 y, v(x, y) = a4 + a5 x + a6 y. Substituting into the brightness constancy equation: I_x u + I_y v + I_t ≈ 0.

40 (technical note) Affine motion: u(x, y) = a1 + a2 x + a3 y, v(x, y) = a4 + a5 x + a6 y. Substituting into the brightness constancy equation, each pixel provides 1 linear constraint in 6 unknowns: I_x (a1 + a2 x + a3 y) + I_y (a4 + a5 x + a6 y) + I_t ≈ 0. Least squares minimization: Err(a) = Σ [ I_x (a1 + a2 x + a3 y) + I_y (a4 + a5 x + a6 y) + I_t ]^2.
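A minimal sketch of this least squares fit over one region, assuming precomputed I_x, I_y, I_t and the pixel coordinates of the region; the function name and inputs are illustrative, not the paper's implementation.

```python
import numpy as np

def fit_affine_motion(I_x, I_y, I_t, ys, xs):
    """Least-squares fit of affine flow u = a1 + a2*x + a3*y,
    v = a4 + a5*x + a6*y over the pixels (ys, xs) of a region.

    Each pixel contributes one row of the constraint
    I_x*(a1 + a2*x + a3*y) + I_y*(a4 + a5*x + a6*y) + I_t = 0.
    """
    ix, iy, it = I_x[ys, xs], I_y[ys, xs], I_t[ys, xs]
    x, y = xs.astype(float), ys.astype(float)
    A = np.stack([ix, ix * x, ix * y, iy, iy * x, iy * y], axis=1)
    b = -it
    a, residual, rank, _ = np.linalg.lstsq(A, b, rcond=None)
    return a  # [a1, a2, a3, a4, a5, a6]
```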

41-43 (technical note) How do we estimate the layers? 1. Obtain a set of initial affine motion hypotheses: divide the image into blocks and estimate affine motion parameters in each block by least squares; eliminate hypotheses with high residual error; map into motion parameter space; perform k-means clustering on the affine motion parameters; merge clusters that are close and retain the largest clusters to obtain a smaller set of hypotheses that describe all the motions in the scene (a clustering sketch follows below). 2. Iterate until convergence: assign each pixel to the best hypothesis (pixels with high residual error remain unassigned); perform region filtering to enforce spatial constraints; re-estimate the affine motions in each region.
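A rough sketch of step 1 (block-wise affine fits followed by k-means on the 6-D parameter vectors), reusing the fit_affine_motion sketch above; it omits the cluster-merging step, and the block size, cluster count, residual threshold, and use of scikit-learn are assumptions for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans  # assumed available; any k-means implementation works

def initial_affine_hypotheses(I_x, I_y, I_t, block=16, k=6, max_residual=1.0):
    """Fit affine motion per block, drop high-residual fits, then cluster
    the remaining parameter vectors into k motion hypotheses."""
    H, W = I_x.shape
    params = []
    for y0 in range(0, H - block + 1, block):
        for x0 in range(0, W - block + 1, block):
            ys, xs = np.mgrid[y0:y0 + block, x0:x0 + block]
            ys, xs = ys.ravel(), xs.ravel()
            a = fit_affine_motion(I_x, I_y, I_t, ys, xs)  # from the earlier sketch
            u = a[0] + a[1] * xs + a[2] * ys
            v = a[3] + a[4] * xs + a[5] * ys
            resid = np.mean((I_x[ys, xs] * u + I_y[ys, xs] * v + I_t[ys, xs]) ** 2)
            if resid < max_residual:          # eliminate hypotheses with high residual
                params.append(a)
    params = np.array(params)
    centers = KMeans(n_clusters=k, n_init=10).fit(params).cluster_centers_
    return centers                            # k affine motion hypotheses
```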

44 Example result. J. Wang and E. Adelson. Layered Representation for Motion Analysis. CVPR 1993.

45 What we will learn today: Introduction; Optical flow; Feature tracking; Applications; (Supplementary) Technical note.

46 Uses of motion: tracking features; segmenting objects based on motion cues; learning dynamical models; improving video quality (motion stabilization, super-resolution); tracking objects; recognizing events and activities.

47 Estimating 3D structure.

48 Segmenting objects based on motion cues. Background subtraction: a static camera is observing a scene. Goal: separate the static background from the moving foreground.
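A minimal running-average background subtraction sketch for a static camera (one common approach, not necessarily the method shown on the slide); the video filename, update rate, and threshold are illustrative, and OpenCV also provides ready-made subtractors such as cv2.createBackgroundSubtractorMOG2.

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("static_camera.mp4")   # hypothetical video file
background = None
alpha, thresh = 0.05, 25                      # illustrative update rate and threshold

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
    if background is None:
        background = gray.copy()
    # Foreground = pixels that differ strongly from the slowly updated background model.
    foreground = np.abs(gray - background) > thresh
    background = (1 - alpha) * background + alpha * gray  # running-average update
cap.release()
```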

49 Segmenting objects based on motion cues. Motion segmentation: segment the video into multiple coherently moving objects. S. J. Pundlik and S. T. Birchfield, Motion Segmentation at Any Speed, Proceedings of the British Machine Vision Conference (BMVC) 2006.

50 Tracking objects. Z. Yin and R. Collins, "On-the-fly Object Modeling while Tracking," IEEE Computer Vision and Pattern Recognition (CVPR '07), Minneapolis, MN, June 2007.

51 Synthesizing dynamic textures.

52 Super-resolution. Example: a set of low-quality images.

53 Super-resolution. Each of these images looks like this:

54 Super-resolution. The recovery result:

55 Recognizing events and activities. Tracker: D. Ramanan, D. Forsyth, and A. Zisserman. Tracking People by Learning their Appearance. PAMI.

56 Recognizing events and activities. Juan Carlos Niebles, Hongcheng Wang and Li Fei-Fei, Unsupervised Learning of Human Action Categories Using Spatial-Temporal Words, BMVC, Edinburgh, 2006.

57 Recognizing events and activities: crossing, talking, queuing, dancing, jogging. W. Choi, K. Shahid & S. Savarese, WMC 2010.

58 W. Choi, K. Shahid, S. Savarese, "What are they doing?: Collective Activity Classification Using Spatio-Temporal Relationship Among People", 9th International Workshop on Visual Surveillance (VSWS09), in conjunction with ICCV '09.

59 Optical flow without motion!

60 What we have learned today: Introduction; Optical flow; Feature tracking; Applications; (Supplementary) Technical note. Reading: [Szeliski] Chapters 8.4, 8.5; [Fleet & Weiss, 2005].

61-72 (technical note) Supplementary material from Fleet & Weiss, 2005.
