Computer Vision II Lecture 4: Color-based Tracking
Bastian Leibe, RWTH Aachen (leibe@vision.rwth-aachen.de), 29.04.2014
Course Outline
Single-Object Tracking: background modeling, template-based tracking, color-based tracking, contour-based tracking, tracking by online classification, tracking-by-detection, Bayesian filtering. Multi-Object Tracking. Articulated Tracking. (Image source: Robert Collins)

Recap: Estimating Optical Flow
Given two subsequent frames I(x,y,t-1) and I(x,y,t), estimate the apparent motion field u(x,y) and v(x,y) between them. Key assumptions: brightness constancy (the projection of the same point looks the same in every frame), small motion (points do not move very far), and spatial coherence (points move like their neighbors). (Slide credit: Svetlana Lazebnik)

Recap: Lucas-Kanade Optical Flow
Use all pixels in a K x K window to get more equations. Brightness constancy gives one equation I_x u + I_y v + I_t = 0 per pixel; stacking them yields the least-squares problem A d = b with A = [I_x I_y], d = (u, v), and b = -I_t. The minimum least-squares solution is given by the solution of the normal equations:

  [ Σ I_x I_x   Σ I_x I_y ] [u]   [ -Σ I_x I_t ]
  [ Σ I_x I_y   Σ I_y I_y ] [v] = [ -Σ I_y I_t ]

Recall the Harris detector: A^T A is the second-moment matrix, so the flow is well-defined exactly where both of its eigenvalues are large. (Slide adapted from Svetlana Lazebnik)

Recap: Iterative Refinement
1. Estimate the velocity at each pixel using one iteration of LK estimation.
2. Warp one image toward the other using the estimated flow field.
3. Refine the estimate by repeating the process.
This iterative procedure results in subpixel-accurate localization and converges for small displacements. (Slide adapted from Steve Seitz)

Recap: Coarse-to-fine Optical Flow Estimation
Build Gaussian pyramids of image 1 and image 2. A displacement of u = 10 pixels at full resolution shrinks to u = 5, 2.5, and 1.25 pixels at successively coarser levels, small enough for the LK assumptions to hold. (Slide credit: Steve Seitz)
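The per-window least-squares step above can be sketched in a few lines. This is an illustrative toy implementation, not the lecture's code; the function name and the singularity check are my own:

```python
import numpy as np

def lucas_kanade_window(Ix, Iy, It):
    """Solve the Lucas-Kanade least-squares system for one K x K window.

    Ix, Iy, It hold the spatial and temporal image derivatives inside
    the window.  Brightness constancy gives Ix*u + Iy*v + It = 0 per
    pixel; stacking all pixels yields A d = b with A = [Ix Iy] and
    b = -It, solved via the normal equations (A^T A) d = A^T b.
    """
    A = np.stack([np.ravel(Ix), np.ravel(Iy)], axis=1)  # (K*K, 2)
    b = -np.ravel(It)                                   # (K*K,)
    ATA = A.T @ A          # the same second-moment matrix as in Harris
    if abs(np.linalg.det(ATA)) < 1e-9:
        return None        # aperture problem: flow not well-defined here
    u, v = np.linalg.solve(ATA, A.T @ b)
    return float(u), float(v)
```

Flow is reliable only where both eigenvalues of A^T A are large, which is exactly the Harris/Shi-Tomasi cornerness criterion referred to above.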
Recap: Coarse-to-fine Optical Flow Estimation (cont'd)
Run iterative LK at the coarsest pyramid level, warp and upsample the result, run iterative LK at the next level, and repeat down to full resolution. (Slide credit: Steve Seitz)

Recap: Shi-Tomasi Feature Tracker (KLT)
Idea: find good features using the eigenvalues of the second-moment matrix. Key idea: good features to track are the ones that can be tracked reliably.
Frame-to-frame tracking: track with LK and a pure translation motion model; this is more robust for small displacements and can be estimated from smaller neighborhoods (e.g., 5 x 5 pixels).
Checking consistency of tracks: affine registration to the first observed feature instance. The affine model is more accurate for larger displacements, and comparing to the first frame helps to minimize drift.
J. Shi and C. Tomasi, Good Features to Track, CVPR 1994. (Slide credit: Svetlana Lazebnik)

Recap: General LK Image Registration
Goal: find the warping parameters p that minimize the sum-of-squares intensity difference between the template image T(x) and the warped input image I(W(x;p)). LK formulates this as an optimization problem:

  arg min_p Σ_x [ I(W(x; p)) - T(x) ]²

We assume that an initial estimate of p is known and iteratively solve for increments Δp to the parameters:

  arg min_Δp Σ_x [ I(W(x; p + Δp)) - T(x) ]²

Recap: Step-by-Step Derivation
Key to the derivation is a Taylor expansion around p:

  I(W(x; p + Δp)) ≈ I(W(x; p)) + ∇I (∂W/∂p) Δp + O(Δp²)

where ∇I is the image gradient, ∂W/∂p is the Jacobian of the warp W([x,y]; p_1, ..., p_n) with respect to the parameters, and Δp is the parameter increment to solve for.

Recap: LK Algorithm
Iterate:
1. Warp I to obtain I(W([x,y]; p)).
2. Compute the error image T([x,y]) - I(W([x,y]; p)).
3. Warp the gradient ∇I with W([x,y]; p).
4. Evaluate the Jacobian ∂W/∂p at ([x,y]; p).
5. Compute the steepest-descent images ∇I ∂W/∂p.
6. Compute the Hessian matrix H = Σ_x [∇I ∂W/∂p]^T [∇I ∂W/∂p].
7. Compute Δp = H^(-1) Σ_x [∇I ∂W/∂p]^T [T([x,y]) - I(W([x,y]; p))].
8. Update the parameters: p ← p + Δp.
Until the magnitude of Δp is negligible. [S. Baker, I. Matthews, IJCV'04]

Recap: LK Algorithm Visualization [S. Baker, I. Matthews, IJCV'04]
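The Shi-Tomasi "good features" criterion above (threshold the smaller eigenvalue of the second-moment matrix) can be sketched like this; the helper name and the use of `np.gradient` for the derivatives are my own simplifications:

```python
import numpy as np

def shi_tomasi_score(patch):
    """Cornerness of an image patch: the smaller eigenvalue of the
    second-moment matrix of its gradients.  Large score = trackable."""
    gy, gx = np.gradient(patch.astype(float))   # image derivatives
    M = np.array([[np.sum(gx * gx), np.sum(gx * gy)],
                  [np.sum(gx * gy), np.sum(gy * gy)]])
    return np.linalg.eigvalsh(M)[0]             # min eigenvalue
```

A flat patch scores 0, a straight edge also scores near 0 (aperture problem), and only a corner-like patch with gradients in two directions scores high.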
Example of a More Complex Warping Function
Encode geometric constraints into region tracking with a constrained homography transformation model: translation parallel to the ground plane and rotation around the ground plane normal, W(x) = W_obj P W_t W_α Q. This serves as input for a high-level tracker with a car steering model. [E. Horbert, D. Mitzel, et al., DAGM'10]

Today: Color-based Tracking (Image source: Robert Collins)

Topics of This Lecture
- Mean-Shift: mean-shift mode estimation; using mean-shift on color images
- Mean-Shift with Explicit Weight Images: histogram backprojection; the CAMshift approach
- Mean-Shift with Implicit Weight Images: Comaniciu's approach; Bhattacharyya distance; gradient ascent
- Comparison: qualitative intuition

Mean-Shift Tracking
An efficient approach to tracking objects whose appearance is defined by color. Actually, the approach is not limited to color; it can also use texture, motion, etc. Why it is popular for object tracking:
- Very simple to implement.
- Non-parametric: does not make strong assumptions about the shape of the distribution.
- Suitable for non-static distributions (as typical in tracking).
- Can be combined with dynamic models (Kalman filters, etc.).
- Good performance in practice.
Using Mean-Shift on Color Models
Two main approaches:
1. Explicit weight images: create a color likelihood image, with pixels weighted by their similarity to the desired color (best for unicolored objects), and use mean-shift to find spatial modes of the likelihood.
2. Implicit weight images: represent the color distribution by a histogram and use mean-shift to find the region that has the most similar color distribution.
Mean-Shift on Weight Images
Ideal case: we want an indicator function that returns 1 for pixels on the tracked object and 0 for all other pixels. Instead, we compute likelihood maps, where the value at a pixel is proportional to the likelihood that the pixel comes from the tracked object. The likelihood can be based on color, texture, shape (boundary), or predicted location.

Mean-Shift Tracking
Idea: let the pixels form a uniform grid of data points, each with a weight proportional to the likelihood that the pixel is on the object we want to track. Then perform standard mean-shift using this weighted set of points. (Image source: Robert Collins)

A closer look at the procedure: the new window location is the weighted mean

  x' = Σ_a K(a - x) w(a) a / Σ_a K(a - x) w(a)

where the sum runs over all pixels a under the kernel K, K(a - x) is the kernel weight evaluated at the offset of pixel a from the kernel center x, w(a) is the weight from the likelihood image at pixel a, and the denominator is the normalization term. Mean-shift thus computes the weighted mean of all shifts (offsets), weighted by the likelihood under the kernel function.

Duality Property
Running mean-shift with kernel K on weight image w is equivalent to performing gradient ascent in a (virtual) image formed by convolving w with some shadow kernel H. Note: the mode we are looking for is the mode of the location (x,y) likelihood, NOT the mode of the color distribution. (Image source: Robert Collins)

Example: Face Tracking using Mean-Shift
G. Bradski, Computer Vision Face Tracking for use in a Perceptual User Interface, IEEE Workshop on Applications of Computer Vision, Princeton, NJ, 1998. (Image source: Gary Bradski)
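The weighted-mean update above can be sketched as follows, here with a flat circular kernel; the function name, the convergence threshold, and the iteration cap are my own choices:

```python
import numpy as np

def mean_shift(weights, center, radius, n_iter=100, eps=1e-3):
    """Iterate x' = sum_a K(a-x) w(a) a / sum_a K(a-x) w(a) on a 2-D
    weight image, with K a flat kernel of the given radius.

    `center` is (x, y); returns the located mode of the weight image.
    """
    ys, xs = np.indices(weights.shape)
    for _ in range(n_iter):
        mask = (xs - center[0]) ** 2 + (ys - center[1]) ** 2 <= radius ** 2
        w = weights * mask                 # kernel weight times likelihood
        total = w.sum()
        if total == 0:                     # no support under the window
            return center
        new = (float((xs * w).sum() / total), float((ys * w).sum() / total))
        if (new[0] - center[0]) ** 2 + (new[1] - center[1]) ** 2 < eps ** 2:
            return new                     # shift is negligible: converged
        center = new
    return center
```

Run on a likelihood image with a single blob, the window climbs onto the blob and stops at its weighted center of mass.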
Explicit Weight Images
Histogram backprojection: the histogram is an empirical estimate of p(color | object) = p(c | o). Bayes' rule says

  p(o | c) = p(c | o) p(o) / p(c)

A simplistic approximation is to assume p(o)/p(c) is constant. Then we can use the histogram h as a lookup table to set pixel values in the weight image: if a pixel maps to histogram bucket i, set the weight for that pixel to h(i). (Image source: Gary Bradski)

Side Note: Color Histograms for Recognition
Using color histograms for recognition works surprisingly well: in the first paper (1991), 66 objects could be recognized almost without errors. [Swain & Ballard, 1991]

Localization by Histogram Backprojection
Where in the image are the colors we are looking for? Query: object with histogram M. Given: image with histogram I. Compute the ratio histogram

  R_i = min(M_i / I_i, 1)

R reveals how important an object color is relative to the current image: a color frequent on the object has a large M_i, while a color frequent everywhere in the image has a large I_i. This value is projected back into the image (i.e., the image values are replaced by the values of R that they index). The result image is convolved with a circular mask the size of the target object, and peaks in the convolved image indicate detected objects. Does this sound familiar?

Object Localization Results
Example result after backprojection, looking for a blue pullover. [Swain & Ballard, 1991]

Bradski's CAMshift
Idea: find the (x, y) location of the mode by mean-shift, then determine the scale z and roll angle θ by fitting an ellipse to the mode found using mean-shift. (Image source: Gary Bradski)

Visualization: Bradski's CAMshift in Action
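A minimal sketch of ratio-histogram backprojection (the function name is mine; it assumes pixel colors have already been quantized to histogram bin indices):

```python
import numpy as np

def ratio_backproject(pixel_bins, model_hist, image_hist):
    """Swain & Ballard backprojection: compute the ratio histogram
    R_i = min(M_i / I_i, 1) and look it up per pixel.

    pixel_bins: 2-D array of per-pixel histogram bin indices.
    model_hist (M), image_hist (I): 1-D histograms over the same bins.
    Returns the weight image.
    """
    R = np.zeros_like(model_hist, dtype=float)
    nonzero = image_hist > 0
    R[nonzero] = np.minimum(model_hist[nonzero] / image_hist[nonzero], 1.0)
    return R[pixel_bins]     # backprojection = table lookup per pixel
```

Convolving the returned weight image with a circular mask the size of the target, as the slide describes, then turns peak finding into object localization.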
Problem: Scale Changes
The window always has the same size. When the object size changes, the window does not fit anymore and tracking soon diverges.

Visualization: Scale Adaptation in CAMshift

CAMshift Results
Face tracking using a skin color model in HSV color space. (Video source: Gary Bradski)

Applications: Perceptual User Interfaces
Head tracking as an input modality, e.g., controlling a flight simulator by head gestures. (Video source: Gary Bradski)
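The ellipse fit that CAMshift uses to adapt window scale and orientation can be sketched from image moments of the weight image. These are the standard moment-based ellipse formulas; the exact scaling Bradski uses in CAMshift may differ:

```python
import numpy as np

def fit_ellipse(weights):
    """Fit an oriented ellipse to a 2-D weight image via its first and
    second central moments.  Returns (theta, major, minor): orientation
    and axis lengths of the equivalent ellipse."""
    ys, xs = np.indices(weights.shape)
    m00 = weights.sum()
    xc, yc = (xs * weights).sum() / m00, (ys * weights).sum() / m00
    mu20 = (weights * (xs - xc) ** 2).sum() / m00
    mu02 = (weights * (ys - yc) ** 2).sum() / m00
    mu11 = (weights * (xs - xc) * (ys - yc)).sum() / m00
    theta = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)   # roll angle
    d = np.sqrt((mu20 - mu02) ** 2 + 4 * mu11 ** 2)
    major = np.sqrt(2 * (mu20 + mu02 + d))            # axis lengths
    minor = np.sqrt(2 * (mu20 + mu02 - d))
    return float(theta), float(major), float(minor)
```

Resizing the search window from these axis lengths after each mean-shift convergence is what lets the tracker follow scale changes instead of diverging.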
Implicit Weight Images
Sometimes the weight image is not explicitly created. Example: mean-shift tracking by Comaniciu et al. The weight is embedded into the matching procedure and comes out as a side effect of matching two pdfs. Main idea: match the pdf of the target object. Interesting consequence: the implicit weight image changes between iterations of mean-shift, as compared to iterating to convergence on an explicit weight image. We will take a look at their approach and see how this works.
D. Comaniciu, V. Ramesh, P. Meer, Kernel-Based Object Tracking, PAMI, Vol. 25(5), 2003.

Mean-Shift Object Tracking: Approach
Use a color histogram representation and measure distances between histograms. The distance as a function of window location y is

  d(y) = sqrt(1 - ρ[p(y), q])

where ρ[p, q] = Σ_u sqrt(p_u q_u) is the Bhattacharyya coefficient between the model histogram q and the candidate histogram p(y).

Compute the histograms via Parzen estimation:

  q_u = C Σ_i k(||x_i||²) δ[b(x_i) - u]

where k(·) is some radially symmetric smoothing kernel profile, x_i is the pixel at location i, and b(x_i) is the index of its bin in the quantized feature space. Consequence of this formulation: it gathers a histogram over a neighborhood and also allows interpolation of histograms centered around an off-lattice location.

Finding the Object
Goal: find the location y that maximizes the Bhattacharyya coefficient. A Taylor expansion around the current values p_u(y_0) gives

  ρ[p(y), q] ≈ (1/2) Σ_u sqrt(p_u(y_0) q_u) + (1/2) Σ_u p_u(y) sqrt(q_u / p_u(y_0))

The first term does not depend on y, so we just need to maximize the second term. Note: it's a KDE!
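The Bhattacharyya coefficient and the derived distance are easy to state in code; this is a direct transcription of the formulas above:

```python
import math

def bhattacharyya_coeff(p, q):
    """rho[p, q] = sum_u sqrt(p_u * q_u) for two normalized histograms:
    1 for identical distributions, 0 for non-overlapping ones."""
    return sum(math.sqrt(pu * qu) for pu, qu in zip(p, q))

def bhattacharyya_distance(p, q):
    """d = sqrt(1 - rho[p, q]), the distance the tracker minimizes."""
    return math.sqrt(max(0.0, 1.0 - bhattacharyya_coeff(p, q)))
```

Minimizing d(y) over window locations y is equivalent to maximizing the coefficient, which is what the Taylor expansion above makes tractable by mean-shift.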
Finding the Object (cont'd)
Substituting the Parzen estimate of p_u(y) into the second term gives, at each iteration,

  (1/2) Σ_u p_u(y) sqrt(q_u / p_u(y_0)) = (C/2) Σ_i w_i k(||(y - x_i)/h||²)
  with  w_i = Σ_u sqrt(q_u / p_u(y_0)) δ[b(x_i) - u]

Find the mode of this term by mean-shift iterations, which is just standard mean-shift on the (implicit) weight image w_i. Let's look at the weight image more closely. For each pixel i, the delta function δ[b(x_i) - u] is 1 for exactly one term of the summation, so if pixel i's value maps to histogram bucket B, then w_i = sqrt(q_B / p_B).

Summary: if the model histogram is q_1, q_2, ..., q_m and the current data histogram is p_1, p_2, ..., p_m, form the weights sqrt(q_1/p_1), sqrt(q_2/p_2), ..., sqrt(q_m/p_m) and do histogram backprojection of these values into the image to get the weight image w_i (note: this is done implicitly). In each iteration, p_1, p_2, ..., p_m change, and therefore so does the weight image w_i. This is different from applying mean-shift to a fixed likelihood image!

Results: Mean-Shift Tracking
Configuration: feature space is quantized RGB; the target is manually selected in the 1st frame; average mean-shift iterations per frame: 4.
D. Comaniciu, V. Ramesh, P. Meer, Kernel-Based Object Tracking, PAMI, Vol. 25(5), 2003. (Slide adapted from Ukrainitz & Sarel; video source: Dorin Comaniciu)

Difficulties: partial occlusion, distraction, motion blur. Mean-shift still performs robustly despite those. (Slide adapted from Ukrainitz & Sarel)
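The implicit weight image of the summary above reduces to a per-pixel table lookup; a minimal sketch (the helper name is mine):

```python
import math

def comaniciu_weights(pixel_bins, q, p):
    """Weight image for Comaniciu's tracker: a pixel whose color falls
    in histogram bucket b gets w = sqrt(q_b / p_b), where q is the model
    histogram and p the current candidate histogram.  Recomputed every
    mean-shift iteration, since p changes with the window location."""
    return [math.sqrt(q[b] / p[b]) if p[b] > 0 else 0.0 for b in pixel_bins]
```

Because p is re-estimated at each window position, the weight image differs between iterations, which is exactly the difference from running mean-shift on a fixed explicit likelihood image.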
Qualitative Intuition
Bradski's mean-shift procedure: assume that an object is 60% red and 40% green, i.e., q_1 = 0.6, q_2 = 0.4, and q_i = 0 for all other i. If we just did histogram backprojection of these likelihood values (a la Bradski), red pixels would get weight 0.6 and green pixels weight 0.4 in the weight image. Mean-shift does a weighted center-of-mass computation at each iteration, so the window will be biased towards the region of red pixels, since they have higher weight.

Comaniciu's approach: let's assume the data histogram is perfectly located, i.e., p_1 = 0.6, p_2 = 0.4, and p_i = 0 for all other i. Then w_1 = sqrt(0.6/0.6) = 1, w_2 = sqrt(0.4/0.4) = 1, and w_i = 0 for all other i. The resulting weight image is a perfect object indicator function. Much better!

References and Further Reading
The original CAMshift paper: G. Bradski, Computer Vision Face Tracking for use in a Perceptual User Interface, IEEE Workshop on Applications of Computer Vision, Princeton, NJ, 1998.
The mean-shift tracking paper: D. Comaniciu, V. Ramesh, P. Meer, Kernel-Based Object Tracking, PAMI, Vol. 25(5), 2003.
More informationRobert Collins CSE598G. Robert Collins CSE598G
Recall: Kernel Density Estimation Given a set of data samples x i ; i=1...n Convolve with a kernel function H to generate a smooth function f(x) Equivalent to superposition of multiple kernels centered
More informationComputer Vision Lecture 20
Computer Vision Lecture 2 Motion and Optical Flow Bastian Leibe RWTH Aachen http://www.vision.rwth-aachen.de leibe@vision.rwth-aachen.de 28.1.216 Man slides adapted from K. Grauman, S. Seitz, R. Szeliski,
More informationComputer Vision Lecture 20
Computer Perceptual Vision and Sensory WS 16/76 Augmented Computing Many slides adapted from K. Grauman, S. Seitz, R. Szeliski, M. Pollefeys, S. Lazebnik Computer Vision Lecture 20 Motion and Optical Flow
More informationComputer Vision Lecture 20
Computer Perceptual Vision and Sensory WS 16/17 Augmented Computing Computer Perceptual Vision and Sensory WS 16/17 Augmented Computing Computer Perceptual Vision and Sensory WS 16/17 Augmented Computing
More informationVisual Tracking. Image Processing Laboratory Dipartimento di Matematica e Informatica Università degli studi di Catania.
Image Processing Laboratory Dipartimento di Matematica e Informatica Università degli studi di Catania 1 What is visual tracking? estimation of the target location over time 2 applications Six main areas:
More informationFeature Tracking and Optical Flow
Feature Tracking and Optical Flow Prof. D. Stricker Doz. G. Bleser Many slides adapted from James Hays, Derek Hoeim, Lana Lazebnik, Silvio Saverse, who in turn adapted slides from Steve Seitz, Rick Szeliski,
More informationVisual Tracking. Antonino Furnari. Image Processing Lab Dipartimento di Matematica e Informatica Università degli Studi di Catania
Visual Tracking Antonino Furnari Image Processing Lab Dipartimento di Matematica e Informatica Università degli Studi di Catania furnari@dmi.unict.it 11 giugno 2015 What is visual tracking? estimation
More informationFeature Tracking and Optical Flow
Feature Tracking and Optical Flow Prof. D. Stricker Doz. G. Bleser Many slides adapted from James Hays, Derek Hoeim, Lana Lazebnik, Silvio Saverse, who 1 in turn adapted slides from Steve Seitz, Rick Szeliski,
More informationTracking Computer Vision Spring 2018, Lecture 24
Tracking http://www.cs.cmu.edu/~16385/ 16-385 Computer Vision Spring 2018, Lecture 24 Course announcements Homework 6 has been posted and is due on April 20 th. - Any questions about the homework? - How
More informationKanade Lucas Tomasi Tracking (KLT tracker)
Kanade Lucas Tomasi Tracking (KLT tracker) Tomáš Svoboda, svoboda@cmp.felk.cvut.cz Czech Technical University in Prague, Center for Machine Perception http://cmp.felk.cvut.cz Last update: November 26,
More informationCS 4495 Computer Vision Motion and Optic Flow
CS 4495 Computer Vision Aaron Bobick School of Interactive Computing Administrivia PS4 is out, due Sunday Oct 27 th. All relevant lectures posted Details about Problem Set: You may *not* use built in Harris
More informationOPPA European Social Fund Prague & EU: We invest in your future.
OPPA European Social Fund Prague & EU: We invest in your future. Patch tracking based on comparing its piels 1 Tomáš Svoboda, svoboda@cmp.felk.cvut.cz Czech Technical University in Prague, Center for Machine
More informationPeripheral drift illusion
Peripheral drift illusion Does it work on other animals? Computer Vision Motion and Optical Flow Many slides adapted from J. Hays, S. Seitz, R. Szeliski, M. Pollefeys, K. Grauman and others Video A video
More informationVisual Tracking (1) Tracking of Feature Points and Planar Rigid Objects
Intelligent Control Systems Visual Tracking (1) Tracking of Feature Points and Planar Rigid Objects Shingo Kagami Graduate School of Information Sciences, Tohoku University swk(at)ic.is.tohoku.ac.jp http://www.ic.is.tohoku.ac.jp/ja/swk/
More informationOptical flow and tracking
EECS 442 Computer vision Optical flow and tracking Intro Optical flow and feature tracking Lucas-Kanade algorithm Motion segmentation Segments of this lectures are courtesy of Profs S. Lazebnik S. Seitz,
More informationMidterm Wed. Local features: detection and description. Today. Last time. Local features: main components. Goal: interest operator repeatability
Midterm Wed. Local features: detection and description Monday March 7 Prof. UT Austin Covers material up until 3/1 Solutions to practice eam handed out today Bring a 8.5 11 sheet of notes if you want Review
More informationCOMPUTER VISION > OPTICAL FLOW UTRECHT UNIVERSITY RONALD POPPE
COMPUTER VISION 2017-2018 > OPTICAL FLOW UTRECHT UNIVERSITY RONALD POPPE OUTLINE Optical flow Lucas-Kanade Horn-Schunck Applications of optical flow Optical flow tracking Histograms of oriented flow Assignment
More informationMatching. Compare region of image to region of image. Today, simplest kind of matching. Intensities similar.
Matching Compare region of image to region of image. We talked about this for stereo. Important for motion. Epipolar constraint unknown. But motion small. Recognition Find object in image. Recognize object.
More informationMotion and Optical Flow. Slides from Ce Liu, Steve Seitz, Larry Zitnick, Ali Farhadi
Motion and Optical Flow Slides from Ce Liu, Steve Seitz, Larry Zitnick, Ali Farhadi We live in a moving world Perceiving, understanding and predicting motion is an important part of our daily lives Motion
More informationVisual motion. Many slides adapted from S. Seitz, R. Szeliski, M. Pollefeys
Visual motion Man slides adapted from S. Seitz, R. Szeliski, M. Pollefes Motion and perceptual organization Sometimes, motion is the onl cue Motion and perceptual organization Sometimes, motion is the
More informationEE795: Computer Vision and Intelligent Systems
EE795: Computer Vision and Intelligent Systems Spring 2012 TTh 17:30-18:45 FDH 204 Lecture 14 130307 http://www.ee.unlv.edu/~b1morris/ecg795/ 2 Outline Review Stereo Dense Motion Estimation Translational
More informationAugmented Reality VU. Computer Vision 3D Registration (2) Prof. Vincent Lepetit
Augmented Reality VU Computer Vision 3D Registration (2) Prof. Vincent Lepetit Feature Point-Based 3D Tracking Feature Points for 3D Tracking Much less ambiguous than edges; Point-to-point reprojection
More informationComputer Vision Lecture 6
Computer Vision Lecture 6 Segmentation 12.11.2015 Bastian Leibe RWTH Aachen http://www.vision.rwth-aachen.de leibe@vision.rwth-aachen.de Course Outline Image Processing Basics Structure Extraction Segmentation
More informationECE Digital Image Processing and Introduction to Computer Vision
ECE592-064 Digital Image Processing and Introduction to Computer Vision Depart. of ECE, NC State University Instructor: Tianfu (Matt) Wu Spring 2017 Recap, SIFT Motion Tracking Change Detection Feature
More informationLecture 16: Computer Vision
CS4442/9542b: Artificial Intelligence II Prof. Olga Veksler Lecture 16: Computer Vision Motion Slides are from Steve Seitz (UW), David Jacobs (UMD) Outline Motion Estimation Motion Field Optical Flow Field
More informationComparison between Motion Analysis and Stereo
MOTION ESTIMATION The slides are from several sources through James Hays (Brown); Silvio Savarese (U. of Michigan); Octavia Camps (Northeastern); including their own slides. Comparison between Motion Analysis
More informationComputer Vision Lecture 18
Course Outline Computer Vision Lecture 8 Motion and Optical Flow.0.009 Bastian Leibe RWTH Aachen http://www.umic.rwth-aachen.de/multimedia leibe@umic.rwth-aachen.de Man slides adapted from K. Grauman,
More informationVC 11/12 T11 Optical Flow
VC 11/12 T11 Optical Flow Mestrado em Ciência de Computadores Mestrado Integrado em Engenharia de Redes e Sistemas Informáticos Miguel Tavares Coimbra Outline Optical Flow Constraint Equation Aperture
More informationVisual Tracking (1) Pixel-intensity-based methods
Intelligent Control Systems Visual Tracking (1) Pixel-intensity-based methods Shingo Kagami Graduate School of Information Sciences, Tohoku University swk(at)ic.is.tohoku.ac.jp http://www.ic.is.tohoku.ac.jp/ja/swk/
More informationLecture 16: Computer Vision
CS442/542b: Artificial ntelligence Prof. Olga Veksler Lecture 16: Computer Vision Motion Slides are from Steve Seitz (UW), David Jacobs (UMD) Outline Motion Estimation Motion Field Optical Flow Field Methods
More informationNinio, J. and Stevens, K. A. (2000) Variations on the Hermann grid: an extinction illusion. Perception, 29,
Ninio, J. and Stevens, K. A. (2000) Variations on the Hermann grid: an extinction illusion. Perception, 29, 1209-1217. CS 4495 Computer Vision A. Bobick Sparse to Dense Correspodence Building Rome in
More informationLecture 7: Segmentation. Thursday, Sept 20
Lecture 7: Segmentation Thursday, Sept 20 Outline Why segmentation? Gestalt properties, fun illusions and/or revealing examples Clustering Hierarchical K-means Mean Shift Graph-theoretic Normalized cuts
More informationLecture: k-means & mean-shift clustering
Lecture: k-means & mean-shift clustering Juan Carlos Niebles and Ranjay Krishna Stanford Vision and Learning Lab 1 Recap: Image Segmentation Goal: identify groups of pixels that go together 2 Recap: Gestalt
More informationCS6670: Computer Vision
CS6670: Computer Vision Noah Snavely Lecture 19: Optical flow http://en.wikipedia.org/wiki/barberpole_illusion Readings Szeliski, Chapter 8.4-8.5 Announcements Project 2b due Tuesday, Nov 2 Please sign
More informationLecture 8: Fitting. Tuesday, Sept 25
Lecture 8: Fitting Tuesday, Sept 25 Announcements, schedule Grad student extensions Due end of term Data sets, suggestions Reminder: Midterm Tuesday 10/9 Problem set 2 out Thursday, due 10/11 Outline Review
More informationLocal features: detection and description May 12 th, 2015
Local features: detection and description May 12 th, 2015 Yong Jae Lee UC Davis Announcements PS1 grades up on SmartSite PS1 stats: Mean: 83.26 Standard Dev: 28.51 PS2 deadline extended to Saturday, 11:59
More informationLecture: k-means & mean-shift clustering
Lecture: k-means & mean-shift clustering Juan Carlos Niebles and Ranjay Krishna Stanford Vision and Learning Lab Lecture 11-1 Recap: Image Segmentation Goal: identify groups of pixels that go together
More informationCS201: Computer Vision Introduction to Tracking
CS201: Computer Vision Introduction to Tracking John Magee 18 November 2014 Slides courtesy of: Diane H. Theriault Question of the Day How can we represent and use motion in images? 1 What is Motion? Change
More informationImage processing and features
Image processing and features Gabriele Bleser gabriele.bleser@dfki.de Thanks to Harald Wuest, Folker Wientapper and Marc Pollefeys Introduction Previous lectures: geometry Pose estimation Epipolar geometry
More informationVisual Tracking (1) Feature Point Tracking and Block Matching
Intelligent Control Systems Visual Tracking (1) Feature Point Tracking and Block Matching Shingo Kagami Graduate School of Information Sciences, Tohoku University swk(at)ic.is.tohoku.ac.jp http://www.ic.is.tohoku.ac.jp/ja/swk/
More informationLecture 19: Motion. Effect of window size 11/20/2007. Sources of error in correspondences. Review Problem set 3. Tuesday, Nov 20
Lecture 19: Motion Review Problem set 3 Dense stereo matching Sparse stereo matching Indexing scenes Tuesda, Nov 0 Effect of window size W = 3 W = 0 Want window large enough to have sufficient intensit
More informationMotion Estimation. There are three main types (or applications) of motion estimation:
Members: D91922016 朱威達 R93922010 林聖凱 R93922044 謝俊瑋 Motion Estimation There are three main types (or applications) of motion estimation: Parametric motion (image alignment) The main idea of parametric motion
More informationSegmentation and Tracking of Partial Planar Templates
Segmentation and Tracking of Partial Planar Templates Abdelsalam Masoud William Hoff Colorado School of Mines Colorado School of Mines Golden, CO 800 Golden, CO 800 amasoud@mines.edu whoff@mines.edu Abstract
More informationC18 Computer vision. C18 Computer Vision. This time... Introduction. Outline.
C18 Computer Vision. This time... 1. Introduction; imaging geometry; camera calibration. 2. Salient feature detection edges, line and corners. 3. Recovering 3D from two images I: epipolar geometry. C18
More informationElliptical Head Tracker using Intensity Gradients and Texture Histograms
Elliptical Head Tracker using Intensity Gradients and Texture Histograms Sriram Rangarajan, Dept. of Electrical and Computer Engineering, Clemson University, Clemson, SC 29634 srangar@clemson.edu December
More informationOptical flow. Cordelia Schmid
Optical flow Cordelia Schmid Motion field The motion field is the projection of the 3D scene motion into the image Optical flow Definition: optical flow is the apparent motion of brightness patterns in
More informationLucas-Kanade Motion Estimation. Thanks to Steve Seitz, Simon Baker, Takeo Kanade, and anyone else who helped develop these slides.
Lucas-Kanade Motion Estimation Thanks to Steve Seitz, Simon Baker, Takeo Kanade, and anyone else who helped develop these slides. 1 Why estimate motion? We live in a 4-D world Wide applications Object
More informationLocal features and image matching. Prof. Xin Yang HUST
Local features and image matching Prof. Xin Yang HUST Last time RANSAC for robust geometric transformation estimation Translation, Affine, Homography Image warping Given a 2D transformation T and a source
More informationKey properties of local features
Key properties of local features Locality, robust against occlusions Must be highly distinctive, a good feature should allow for correct object identification with low probability of mismatch Easy to etract
More informationEdge and corner detection
Edge and corner detection Prof. Stricker Doz. G. Bleser Computer Vision: Object and People Tracking Goals Where is the information in an image? How is an object characterized? How can I find measurements
More informationFundamental matrix. Let p be a point in left image, p in right image. Epipolar relation. Epipolar mapping described by a 3x3 matrix F
Fundamental matrix Let p be a point in left image, p in right image l l Epipolar relation p maps to epipolar line l p maps to epipolar line l p p Epipolar mapping described by a 3x3 matrix F Fundamental
More informationCSE 473/573 Computer Vision and Image Processing (CVIP) Ifeoma Nwogu. Lectures 21 & 22 Segmentation and clustering
CSE 473/573 Computer Vision and Image Processing (CVIP) Ifeoma Nwogu Lectures 21 & 22 Segmentation and clustering 1 Schedule Last class We started on segmentation Today Segmentation continued Readings
More informationOptical Flow-Based Motion Estimation. Thanks to Steve Seitz, Simon Baker, Takeo Kanade, and anyone else who helped develop these slides.
Optical Flow-Based Motion Estimation Thanks to Steve Seitz, Simon Baker, Takeo Kanade, and anyone else who helped develop these slides. 1 Why estimate motion? We live in a 4-D world Wide applications Object
More informationFinal Exam Study Guide
Final Exam Study Guide Exam Window: 28th April, 12:00am EST to 30th April, 11:59pm EST Description As indicated in class the goal of the exam is to encourage you to review the material from the course.
More informationSegmentation and Grouping
CS 1699: Intro to Computer Vision Segmentation and Grouping Prof. Adriana Kovashka University of Pittsburgh September 24, 2015 Goals: Grouping in vision Gather features that belong together Obtain an intermediate
More informationMulti-stable Perception. Necker Cube
Multi-stable Perception Necker Cube Spinning dancer illusion, Nobuyuki Kayahara Multiple view geometry Stereo vision Epipolar geometry Lowe Hartley and Zisserman Depth map extraction Essential matrix
More informationThe goals of segmentation
Image segmentation The goals of segmentation Group together similar-looking pixels for efficiency of further processing Bottom-up process Unsupervised superpixels X. Ren and J. Malik. Learning a classification
More informationComputer Vision Lecture 6
Course Outline Computer Vision Lecture 6 Segmentation Image Processing Basics Structure Extraction Segmentation Segmentation as Clustering Graph-theoretic Segmentation 12.11.2015 Recognition Global Representations
More informationAnnouncements, schedule. Lecture 8: Fitting. Weighted graph representation. Outline. Segmentation by Graph Cuts. Images as graphs
Announcements, schedule Lecture 8: Fitting Tuesday, Sept 25 Grad student etensions Due of term Data sets, suggestions Reminder: Midterm Tuesday 10/9 Problem set 2 out Thursday, due 10/11 Outline Review
More informationCapturing, Modeling, Rendering 3D Structures
Computer Vision Approach Capturing, Modeling, Rendering 3D Structures Calculate pixel correspondences and extract geometry Not robust Difficult to acquire illumination effects, e.g. specular highlights
More informationHow and what do we see? Segmentation and Grouping. Fundamental Problems. Polyhedral objects. Reducing the combinatorics of pose estimation
Segmentation and Grouping Fundamental Problems ' Focus of attention, or grouping ' What subsets of piels do we consider as possible objects? ' All connected subsets? ' Representation ' How do we model
More informationRobert Collins CSE598G. Intro to Template Matching and the Lucas-Kanade Method
Intro to Template Matching and the Lucas-Kanade Method Appearance-Based Tracking current frame + previous location likelihood over object location current location appearance model (e.g. image template,
More informationTracking in image sequences
CENTER FOR MACHINE PERCEPTION CZECH TECHNICAL UNIVERSITY Tracking in image sequences Lecture notes for the course Computer Vision Methods Tomáš Svoboda svobodat@fel.cvut.cz March 23, 2011 Lecture notes
More information: Clustering
Clustering: Discover groups such that samples within a group are more similar to each other than samples across groups.
More information: Kanade Lucas Tomasi Tracking (KLT tracker)
Kanade Lucas Tomasi Tracking (KLT tracker) Tomáš Svoboda, svoboda@cmp.felk.cvut.cz Czech Technical University in Prague, Center for Machine Perception http://cmp.felk.cvut.cz Last update: November 26,
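The KLT tracker pairs Lucas-Kanade alignment with Shi-Tomasi feature selection: as the recap above puts it, good features to track are windows whose second-moment matrix has two large eigenvalues. A NumPy sketch of the minimum-eigenvalue score (the window handling, function name, and toy image are my own simplifications, not lecture code):

```python
import numpy as np

def shi_tomasi_score(image, radius=2):
    """Smaller eigenvalue of the 2x2 second-moment matrix per pixel.
    High values mark windows that can be tracked reliably (corners)."""
    Iy, Ix = np.gradient(image.astype(float))  # np.gradient returns (d/dy, d/dx)

    def window_sum(a):
        # Sum over a (2*radius+1)^2 neighborhood via shifted copies
        out = np.zeros_like(a)
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                out += np.roll(np.roll(a, dy, axis=0), dx, axis=1)
        return out

    Sxx = window_sum(Ix * Ix)
    Syy = window_sum(Iy * Iy)
    Sxy = window_sum(Ix * Iy)
    # Smaller eigenvalue of [[Sxx, Sxy], [Sxy, Syy]] in closed form
    return 0.5 * (Sxx + Syy - np.sqrt((Sxx - Syy) ** 2 + 4 * Sxy ** 2))

# Toy image: a bright square; the score peaks at its corners,
# stays near zero along its edges, and is zero in flat regions
img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0
score = shi_tomasi_score(img)
```

An edge window has only one large eigenvalue (gradient in a single direction), so the minimum-eigenvalue score correctly rejects it even though a Harris-style corner response might not.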
More information: Local features: detection and description. Local invariant features
Local features: detection and description Local invariant features Detection of interest points Harris corner detection Scale invariant blob detection: LoG Description of local patches SIFT : Histograms
More information: Autonomous Navigation for Flying Robots
Computer Vision Group Prof. Daniel Cremers Autonomous Navigation for Flying Robots Lecture 7.1: 2D Motion Estimation in Images Jürgen Sturm Technische Universität München 3D to 2D Perspective Projections
More information: Particle Tracking. For Bulk Material Handling Systems Using DEM Models. By: Jordan Pease
Particle Tracking For Bulk Material Handling Systems Using DEM Models By: Jordan Pease Introduction Motivation for project Particle Tracking Application to DEM models Experimental Results Future Work References
More information: Automatic Image Alignment (direct) with a lot of slides stolen from Steve Seitz and Rick Szeliski
Automatic Image Alignment (direct) with a lot of slides stolen from Steve Seitz and Rick Szeliski 15-463: Computational Photography Alexei Efros, CMU, Fall 2005 Today Go over Midterm Go over Project #3
More information: Optical flow. Cordelia Schmid
Optical flow Cordelia Schmid Motion field The motion field is the projection of the 3D scene motion into the image Optical flow Definition: optical flow is the apparent motion of brightness patterns in
More information: Computer Vision. Recap: Smoothing with a Gaussian. Recap: Effect of σ on derivatives. Computer Science Tripos Part II. Dr Christopher Town
Recap: Smoothing with a Gaussian Computer Vision Computer Science Tripos Part II Dr Christopher Town Recall: parameter σ is the scale / width / spread of the Gaussian kernel, and controls the amount of
More information: EE795: Computer Vision and Intelligent Systems
EE795: Computer Vision and Intelligent Systems Spring 2012 TTh 17:30-18:45 FDH 204 Lecture 11 140311 http://www.ee.unlv.edu/~b1morris/ecg795/ 2 Outline Motion Analysis Motivation Differential Motion Optical
More information: CEE598 - Visual Sensing for Civil Infrastructure Eng. & Mgmt.
CEE598 - Visual Sensing for Civil Infrastructure Eng. & Mgmt. Section 10 - Detectors part II Descriptors Mani Golparvar-Fard Department of Civil and Environmental Engineering 3129D, Newmark Civil Engineering
More information: Computer Vision Lecture 17
Computer Vision Lecture 17 Epipolar Geometry & Stereo Basics 13.01.2015 Bastian Leibe RWTH Aachen http://www.vision.rwth-aachen.de leibe@vision.rwth-aachen.de Announcements Seminar in the summer semester
More information: Deformable Part Models
CS 1674: Intro to Computer Vision Deformable Part Models Prof. Adriana Kovashka University of Pittsburgh November 9, 2016 Today: Object category detection Window-based approaches: Last time: Viola-Jones
More information: Computer Vision Lecture 17
Announcements Computer Vision Lecture 17 Epipolar Geometry & Stereo Basics Seminar in the summer semester Current Topics in Computer Vision and Machine Learning Block seminar, presentations in 1 st week
More information: Segmentation and Grouping, April 19th, 2018
Segmentation and Grouping, April 19th, 2018. Yong Jae Lee, UC Davis. Features and filters: transforming and describing images; textures, edges. Grouping and fitting [fig from Shi et al]: Clustering, segmentation,
More information: Overview. Video. Optical flow. Why estimate motion? Motion estimation: Optical flow. Motion Magnification. Colorization.
Overview: Video. Lecture 9: Optical flow, Motion Magnification, Colorization. Combination of slides from Rick Szeliski, Steve Seitz, Alyosha
More information: The Lucas & Kanade Algorithm
The Lucas & Kanade Algorithm Instructor - Simon Lucey 16-423 - Designing Computer Vision Apps Today Registration, Registration, Registration. Linearizing Registration. Lucas & Kanade Algorithm. 3 Biggest
More information: Applications. Foreground / background segmentation. Finding skin-colored regions. Finding the moving objects. Intelligent scissors
Segmentation I Goal Separate image into coherent regions Berkeley segmentation database: http://www.eecs.berkeley.edu/research/projects/cs/vision/grouping/segbench/ Slide by L. Lazebnik Applications Intelligent
More information: CS 2770: Computer Vision. Edges and Segments. Prof. Adriana Kovashka University of Pittsburgh February 21, 2017
CS 2770: Computer Vision, Edges and Segments. Prof. Adriana Kovashka, University of Pittsburgh, February 21, 2017. Edges vs Segments (figure adapted from J. Hays): Edges are more low-level. Don't
More information: CS 4495 Computer Vision. Features 2: SIFT descriptor. Aaron Bobick School of Interactive Computing
CS 4495 Computer Vision, Features 2: SIFT descriptor. Aaron Bobick, School of Interactive Computing. Administrivia: PS 3 out, due Oct 6th. Features recap: Goal is to find corresponding locations in two images.
More information: Local invariant features
Local invariant features Tuesday, Oct 28 Kristen Grauman UT-Austin Today Some more Pset 2 results Pset 2 returned, pick up solutions Pset 3 is posted, due 11/11 Local invariant features Detection of interest
More information: EECS150 - Digital Design Lecture 14 FIFO 2 and SIFT. Recap and Outline
EECS150 - Digital Design Lecture 14 FIFO 2 and SIFT Oct. 15, 2013 Prof. Ronald Fearing Electrical Engineering and Computer Sciences University of California, Berkeley (slides courtesy of Prof. John Wawrzynek)
More information: Outline
Outline: Pattern recognition in computer vision. Background on the development of SIFT. SIFT algorithm and some of its variations. Computational considerations (SURF). Potential improvement. Summary. Pattern
More information: Previously. Part-based and local feature models for generic object recognition. Bag-of-words model 4/20/2011
Previously Part-based and local feature models for generic object recognition Wed, April 20 UT-Austin Discriminative classifiers Boosting Nearest neighbors Support vector machines Useful for object recognition
More information: Chapter 9 Object Tracking an Overview
Chapter 9 Object Tracking an Overview The output of the background subtraction algorithm, described in the previous chapter, is a classification (segmentation) of pixels into foreground pixels (those belonging
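The chapter preview above describes background subtraction feeding the tracker: pixels are classified as foreground where the current frame deviates from a background model. A minimal running-average version (the threshold, learning rate, and toy frames are illustrative assumptions, not the book's algorithm):

```python
import numpy as np

def background_subtract(background, frame, alpha=0.05, threshold=0.25):
    """Classify pixels as foreground where the frame deviates from the
    background model, then update the model with a running average."""
    foreground = np.abs(frame - background) > threshold
    background = (1 - alpha) * background + alpha * frame
    return background, foreground

# Toy sequence: a static mid-gray scene with a bright object entering
bg = np.full((16, 16), 0.5)
frame = bg.copy()
frame[4:8, 4:8] = 1.0  # the moving object
bg, fg = background_subtract(bg, frame)
```

The running average lets slow illumination changes be absorbed into the model while fast-moving objects stay foreground; a stationary object will eventually be absorbed too, which is why tracking takes over from segmentation.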
More information: Grouping and Segmentation
03/17/15 Grouping and Segmentation Computer Vision CS 543 / ECE 549 University of Illinois Derek Hoiem Today s class Segmentation and grouping Gestalt cues By clustering (mean-shift) By boundaries (watershed)
More information: Segmentation & Grouping Kristen Grauman UT Austin. Announcements
Segmentation & Grouping Kristen Grauman UT Austin Tues Feb 7 A0 on Canvas Announcements No office hours today TA office hours this week as usual Guest lecture Thursday by Suyog Jain Interactive segmentation
More information: Feature descriptors. Alain Pagani Prof. Didier Stricker. Computer Vision: Object and People Tracking
Feature descriptors. Alain Pagani, Prof. Didier Stricker. Computer Vision: Object and People Tracking. Overview: Previous lectures: feature extraction. Today: gradient/edge points (Kanade-Tomasi + Harris)
More information: Feature Detection. Raul Queiroz Feitosa
Feature Detection. Raul Queiroz Feitosa. Objective: This chapter discusses the correspondence problem and presents approaches to solve it. Outline
More information: SCALE INVARIANT FEATURE TRANSFORM (SIFT)
1 SCALE INVARIANT FEATURE TRANSFORM (SIFT) OUTLINE SIFT Background SIFT Extraction Application in Content Based Image Search Conclusion 2 SIFT BACKGROUND Scale-invariant feature transform SIFT: to detect
More information: (10) Image Segmentation
(10) Image Segmentation. Image analysis: Low-level image processing: inputs and outputs are all images. Mid-/High-level image processing: inputs are images; outputs are information or attributes of the images
More information: Lecture 13: k-means and mean-shift clustering
Lecture 13: k-means and mean-shift clustering. Juan Carlos Niebles, Stanford AI Lab Professor, Stanford Vision Lab. Recap: Image Segmentation. Goal: identify groups of pixels that go together
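The k-means half of that lecture reduces to alternating two steps: assign each sample to its nearest center, then recompute each center as the mean of its assigned samples. A NumPy sketch on toy "pixel colors", using a deterministic farthest-point initialization (my own choice for reproducibility, not necessarily the lecture's):

```python
import numpy as np

def kmeans(X, k, iters=20):
    """Plain k-means on row-vector samples X of shape (n, d)."""
    # Farthest-point initialization: deterministic and well spread out
    centers = [X[0]]
    for _ in range(k - 1):
        dists = np.min([np.linalg.norm(X - c, axis=1) for c in centers], axis=0)
        centers.append(X[np.argmax(dists)])
    centers = np.array(centers, dtype=float)
    for _ in range(iters):
        # Assignment step: nearest center per sample
        labels = np.linalg.norm(
            X[:, None, :] - centers[None, :, :], axis=2).argmin(axis=1)
        # Update step: each center becomes the mean of its members
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

# Two well-separated clusters of RGB-like colors
rng = np.random.default_rng(0)
colors = np.vstack([rng.normal(0.2, 0.02, (50, 3)),
                    rng.normal(0.8, 0.02, (50, 3))])
labels, centers = kmeans(colors, 2)
```

Mean-shift replaces the fixed k and the mean-update with hill-climbing on a kernel density estimate, so the number of clusters falls out of the bandwidth instead of being chosen in advance.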
More information: Finally: Motion and tracking. Motion 4/20/2011. CS 376 Lecture 24 Motion 1. Video. Uses of motion. Motion parallax. Motion field
Finally: Motion and tracking Tracking objects, video analysis, low level motion Motion Wed, April 20 Kristen Grauman UT-Austin Many slides adapted from S. Seitz, R. Szeliski, M. Pollefeys, and S. Lazebnik
More information: Academic year 2006/2007
Robotics, academic year 2006/2007. Davide Migliore migliore@elet.polimi.it Today: What is a feature? Some useful information. The world of features: Detectors. Edge detection. Corner/point detection. Descriptors?!?!?
More information: Motion. 1 Introduction. 2 Optical Flow. Sohaib A Khan. 2.1 Brightness Constancy Equation
Motion. Sohaib A Khan. 1 Introduction: So far, we have been dealing with single images of a static scene taken by a fixed camera. Here we will deal with sequences of images taken at different time intervals. Motion
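The brightness constancy equation I_x u + I_y v + I_t = 0 previewed here, accumulated over all pixels as in the Lucas-Kanade recap at the top of this page, yields a 2x2 least-squares system for a single translational flow. A NumPy sketch under the small-motion assumption (the synthetic blob and the function name are mine, for illustration only):

```python
import numpy as np

def lk_flow(I0, I1):
    """One-shot Lucas-Kanade: solve sum(g g^T) [u, v]^T = -sum(g * It),
    where g = (Ix, Iy). Valid for small displacements only."""
    Iy, Ix = np.gradient(I0.astype(float))  # np.gradient returns (d/dy, d/dx)
    It = I1.astype(float) - I0.astype(float)
    A = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                  [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
    b = -np.array([np.sum(Ix * It), np.sum(Iy * It)])
    return np.linalg.solve(A, b)  # (u, v)

# Synthetic pair: a Gaussian blob shifted right by 0.4 pixels
yy, xx = np.mgrid[0:64, 0:64]
def blob(cx):
    return np.exp(-((xx - cx) ** 2 + (yy - 32) ** 2) / (2 * 8.0 ** 2))
u, v = lk_flow(blob(32.0), blob(32.4))
```

For larger displacements the linearization breaks down, which is exactly why the head's slides iterate the solve and embed it in a coarse-to-fine Gaussian pyramid.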
More information: CS 4495 Computer Vision. Segmentation. Aaron Bobick (slides by Tucker Hermans) School of Interactive Computing
CS 4495 Computer Vision Aaron Bobick (slides by Tucker Hermans) School of Interactive Computing Administrivia PS 4: Out but I was a bit late so due date pushed back to Oct 29. OpenCV now has real SIFT