16-720 Computer Vision: Homework 3 Template Tracking and Layered Motion
Instructor: Martial Hebert
TAs: Varun Ramakrishna and Tomas Simon
Due Date: October 24th

1 Instructions

Submit your completed homework by placing all your files in a directory named after your Andrew ID and uploading it to your submission directory on Blackboard. The submission location is the same section from which you downloaded your assignment. Please include a line in your write-up indicating how many hours you spent on this homework. Do not include the handout materials in your submission. All questions marked with a Q require a submission. For the implementation part of the homework, please stick to the function headers described.

Final reminder: Start early! Start early! Start early! This homework consists mostly of implementation questions, which can get tricky and may take a while to debug.

2 Lucas-Kanade Tracking

The first groundbreaking work on template tracking was the Lucas-Kanade tracker. It assumes that the template undergoes constant motion within a small region. The Lucas-Kanade tracker works on two frames at a time and does not assume any statistical motion model throughout the sequence. The algorithm estimates the deformation between two image frames under the assumption that the intensity of the objects has not changed significantly between the two frames. A good reference for the Lucas-Kanade tracker is given below; see Figure 2 in that paper for a good overview of the algorithm.

    Lucas-Kanade 20 Years On - Baker, S. and Matthews, I. view.html?pub_id= (Developed here at CMU!)
Starting with a rectangle R_t on frame I_t, the Lucas-Kanade tracker aims to move it by an offset (u, v) to obtain another rectangle R_{t+1} on frame I_{t+1}, so that the pixel squared difference between the two rectangles is minimized:

    min_{u,v} J(u, v) = \sum_{(x,y) \in R_t} (I_{t+1}(x + u, y + v) - I_t(x, y))^2    (1)

Starting with an initial guess of (u, v) (usually (0, 0)), we can compute the optimal (u*, v*) iteratively. In each iteration, the objective function is locally linearized by a first-order Taylor expansion and optimized by solving a linear system.

Q1. (15 points) Upload the following function in your submission directory:

    [u, v] = LucasKanade(It, It1, rect)

It computes the optimal local motion from frame I_t to frame I_{t+1} that minimizes Eqn. 1. Here It is the image frame I_t, It1 is the image frame I_{t+1}, and rect is the 4-by-1 vector [x1, y1, x2, y2] representing a rectangle on frame I_t, where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner. The rectangle is inclusive, i.e., it includes all four corners. To deal with fractional movement of the template, you will need to interpolate the image using the MATLAB command interp2. You will also need to iterate the estimation until the change in (u, v) is below a threshold.

Q2. (15 points) Load the video sequence (carsequence.mat) and run the Lucas-Kanade tracker to track the speeding car. The rectangle in the first frame is [x1, y1, x2, y2] = [328, 213, 419, 265]; in other words, the rectangle starts at (328, 213) (row 213 and column 328 in the image) and ends at (419, 265). In your answer sheet, print the tracking result (image + rectangle) at frames 20, 50, and 100. What are possible reasons the tracker could fail? How could you fix some of these problems?
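The assignment calls for a MATLAB implementation with the header above. Purely to illustrate the linearize-and-solve iteration, here is a hedged Python/NumPy sketch of translation-only Lucas-Kanade; function names, defaults, and the hand-rolled bilinear sampler (standing in for interp2) are my own, not the required deliverable:

```python
import numpy as np

def bilinear(img, ys, xs):
    """Bilinear interpolation of img at fractional (row, col) locations;
    plays the role interp2 plays in the MATLAB version. Assumes all
    sample points stay strictly inside the image."""
    y0 = np.floor(ys).astype(int)
    x0 = np.floor(xs).astype(int)
    dy, dx = ys - y0, xs - x0
    return ((1 - dy) * (1 - dx) * img[y0, x0] + (1 - dy) * dx * img[y0, x0 + 1] +
            dy * (1 - dx) * img[y0 + 1, x0] + dy * dx * img[y0 + 1, x0 + 1])

def lucas_kanade(It, It1, rect, num_iters=100, tol=1e-4):
    """Translation-only Lucas-Kanade: iterate the linearized least-squares
    update until the change in (u, v) drops below tol (illustrative sketch)."""
    x1, y1, x2, y2 = rect
    X, Y = np.meshgrid(np.arange(x1, x2 + 1), np.arange(y1, y2 + 1))  # inclusive
    template = It[Y, X].astype(float).ravel()          # template pixels from I_t
    gy_img = np.gradient(It1.astype(float), axis=0)    # spatial gradients of I_{t+1}
    gx_img = np.gradient(It1.astype(float), axis=1)

    u = v = 0.0
    for _ in range(num_iters):
        ys, xs = Y.ravel() + v, X.ravel() + u
        warped = bilinear(It1.astype(float), ys, xs)
        # Linearized objective: [gx gy] [du dv]^T = template - warped
        A = np.stack([bilinear(gx_img, ys, xs), bilinear(gy_img, ys, xs)], axis=1)
        d, *_ = np.linalg.lstsq(A, template - warped, rcond=None)
        u += d[0]
        v += d[1]
        if np.hypot(d[0], d[1]) < tol:   # converged: update close to zero
            break
    return u, v
```

On a synthetic pair where the template really does translate, the loop should recover the sub-pixel offset to within interpolation error.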
3 Tracking with Dominant Motion Estimation

In this section, you will implement a tracker for estimating dominant affine motion in a sequence of images and subsequently identify pixels corresponding to moving objects in the scene. You will be using the images in the file Sequence1.tar.gz, consisting of aerial views of moving vehicles from a non-stationary camera.

3.1 Dominant Motion Estimation

You will start by implementing a tracker for affine motion using the equations for affine flow described in class and in the Motion slides from class. As in the previous section, the tracker will only work on two frames at a time. To estimate dominant motion, the entire image I(t) will serve as the template to be tracked in image I(t+1), i.e., I(t+1) is assumed to be approximately an affine-warped version of I(t). This approach is reasonable under the assumption that a majority of the pixels correspond to stationary objects in the scene whose depth variation is small relative to their distance from the camera.

Using the equations for the affine model of flow, you can recover the 6-vector [a b c d e f]^T of affine flow parameters. They will be related to the equivalent affine transformation matrix as:

    M = [ 1+a    b     c
           d    1+e    f ]    (2)

relating the homogeneous image coordinates of I(t) to I(t+1) according to x_{t+1} = M x_t (where x = [x y 1]^T). For the next pair of consecutive images in the sequence, image I(t+1) will serve as the template to be tracked in image I(t+2), and so on through the rest of the sequence. Note that M will differ between successive image pairs.

Each update of the affine parameter vector p is computed via a least-squares method using the pseudo-inverse, as described in class:

    p = (A^T A)^{-1} A^T I_t

where the matrix A of image derivatives contains one row per pixel, each of the form

    [ x I_x   y I_x   I_x   x I_y   y I_y   I_y ]

Note: Unlike previous examples, where the template to be tracked is usually small in comparison with the size of the image, image I(t) will almost always not be contained fully in the warped version of I(t+1). Hence the matrix of image derivatives A and the temporal derivatives I_t must be computed only on the pixels lying in the region common to I(t) and the warped version of I(t+1) to be meaningful.

3.2 Moving Object Detection

Once you are able to compute the transformation matrix M relating an image pair I(t) and I(t+1), a naive way of determining pixels lying on moving objects is as follows: warp the image I(t) using M so that it is registered to I(t+1), and subtract it from I(t+1). The locations where the absolute difference exceeds a threshold can then be declared as locations of moving objects. For better results, hysteresis thresholding may be used. We have provided MATLAB code for hysteresis thresholding (hysthresh).
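As an illustration of that least-squares update (not the required MATLAB deliverable), here is a minimal NumPy sketch; it assumes the brightness-constancy sign convention I_x u + I_y v + I_t = 0, so the system solved is A p = -I_t, and the optional mask plays the role of the "active rows" over the overlap region:

```python
import numpy as np

def affine_flow_params(Ix, Iy, It, mask=None):
    """Least-squares affine flow: each valid pixel contributes the row
    [x Ix, y Ix, Ix, x Iy, y Iy, Iy]. Brightness constancy gives A p = -It;
    the sign depends on your temporal-derivative convention (an assumption here)."""
    h, w = Ix.shape
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    if mask is None:
        mask = np.ones((h, w), dtype=bool)   # rows active in the overlap region
    x, y = xx[mask], yy[mask]
    ix, iy, it = Ix[mask], Iy[mask], It[mask]
    A = np.stack([x * ix, y * ix, ix, x * iy, y * iy, iy], axis=1)
    p, *_ = np.linalg.lstsq(A, -it, rcond=None)
    return p    # [a, b, c, d, e, f]
```

Feeding it derivatives manufactured from a known affine flow field recovers the parameters exactly, which is a handy sanity check for your own implementation.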
Q3. (30 points) Using the above method for dominant motion estimation, write a MATLAB function

    [moving_image] = SubtractDominantMotion(image1, image2)

where image1 and image2 (in uint8 format) form the input image pair and moving_image is a binary image of the same size as the input images, in which the non-zero pixels correspond to locations of moving objects. The script test_motion.m that we will use for grading this section has been provided to you for testing your function.
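The assignment's function estimates M internally from the image pair; purely to illustrate the warp-subtract-threshold step, here is a hedged Python/NumPy sketch that takes M as an argument and uses nearest-neighbour sampling to stay short (names, the threshold, and the sampling choice are assumptions):

```python
import numpy as np

def warp_affine(img, M):
    """Inverse-warp img by the 2x3 affine M (x_{t+1} = M x_t). NaN marks pixels
    with no pre-image, i.e. outside the region common to both frames."""
    h, w = img.shape
    Ainv = np.linalg.inv(np.vstack([M, [0.0, 0.0, 1.0]]))
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    xs = Ainv[0, 0] * xx + Ainv[0, 1] * yy + Ainv[0, 2]
    ys = Ainv[1, 0] * xx + Ainv[1, 1] * yy + Ainv[1, 2]
    out = np.full((h, w), np.nan)
    valid = (xs >= 0) & (xs <= w - 1) & (ys >= 0) & (ys <= h - 1)
    # nearest-neighbour sampling keeps the sketch short; use interp2-style
    # bilinear interpolation in a real implementation
    out[valid] = img[np.rint(ys[valid]).astype(int), np.rint(xs[valid]).astype(int)]
    return out

def subtract_dominant_motion(image1, image2, M, thresh=0.5):
    """Register image1 to image2 via M, subtract, and threshold the absolute
    difference; pixels outside the common region are never flagged."""
    warped = warp_affine(image1.astype(float), M)
    valid = ~np.isnan(warped)
    moving = np.zeros(image1.shape, dtype=bool)
    moving[valid] = np.abs(image2.astype(float)[valid] - warped[valid]) > thresh
    return moving
```

With an identity M (static camera) and a square that shifts between frames, only the symmetric difference of the two squares should be flagged.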
This script simply makes repeated calls to SubtractDominantMotion on every consecutive image pair in the file Sequence1.tar.gz, makes an AVI movie out of the moving_image returned for every image pair processed, and saves it as a file motion.avi for your offline viewing pleasure. An example of such a movie, produced using the methods described above, is also provided. You will notice that the estimates of moving pixels are a bit noisy [2]. We do not expect perfect results: this is, after all, real-world data! You are free to implement any algorithm of your choice for estimating the moving_image following the affine registration step [3]. We will be awarding extra credit (10 points) to the best-looking results as judged by the instructors.

4 Motion Layers

[Figure 1: (a) Optical flow between the first two frames. (b) Computed motion segmentation.]

Optical flow is the deformation undergone by each pixel in the current frame relative to the previous frame. Optical flow has found varied applications in computer vision, and in this question we will explore the task of extracting motion layers. In many situations, motion in the image is caused by the movement of objects at different depths in the scene. In such situations, the image sequence can be represented compactly if pixels are grouped into appropriate objects or layers. Segmentation of a scene into foreground objects and background is crucial for scene understanding and recognition, as we will see later in the course. To extract motion layers we will mainly follow the method outlined in the following paper:

    Representing Moving Images with Layers - Wang, J. Y. A., and Adelson, E. H., IEEE Transactions on Image Processing, 3(5), wang_tr279.pdf

[2] For de-noising your output, you may also want to look at the imerode and imdilate morphological operators in MATLAB.
[3] A basic version of this would involve computing the difference between the warped version of the first image and the second image and thresholding.
You will need to read Sections 4 and 5 of the above paper. The method proceeds by generating hypotheses by fitting affine motion models to the optical flow in overlapping square regions. Regions are then merged using a clustering algorithm, and each pixel is assigned a region satisfying a minimum-distortion criterion. Regions that are spatially disconnected are split, and small regions are filtered out. The process is iterated until the pixel memberships do not change.

We have provided you with precomputed flow in the folder /flow/ in the form of .mat files. Each .mat file contains two variables, vx and vy, which contain the x-component and the y-component, respectively, of the flow at each pixel location. We have also provided you with a function viewflow to visualize the optical flow.

4.1 Generating hypotheses

Q4. (20 points) Read Section 4 of the paper and upload the following function in your submission directory:

    [A] = generateaffinehypotheses(vx, vy, I)

It generates affine parameter hypotheses A (an N x 6 matrix, where N is the number of hypotheses) for the regions specified in the matrix I, which is a mask of indices the same size as the image. In the first iteration of the algorithm, I will be empty; during this iteration you will generate hypotheses by dividing the image into small square regions and computing the affine warp parameters for each of the regions. IMPORTANT: This can be very slow if you do not use a vectorized implementation and convolutions. In subsequent iterations you will proceed by computing the affine warps for each of the regions indexed by I.

4.2 Clustering hypotheses

To merge regions, we cluster the hypothesized affine warps computed in 4.1.

Q5. (10 points) Write and upload the function

    [C] = mergeregions(A, I, distancethreshold)

This function clusters the hypotheses and returns the cluster centres in the matrix C (NumClusters x 6) and the assigned indices for the hypotheses in I.
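For the first iteration of Q4, the per-block least-squares fit might look like the following NumPy sketch (illustrative only: the block size and function names are assumptions, and the real function must also handle the region mask I and be vectorized):

```python
import numpy as np

def fit_affine_to_flow(vx, vy, xx, yy):
    """Least-squares fit of u = a x + b y + c, v = d x + e y + f to a flow patch."""
    G = np.stack([xx.ravel(), yy.ravel(), np.ones(xx.size)], axis=1)
    abc, *_ = np.linalg.lstsq(G, vx.ravel(), rcond=None)
    def_, *_ = np.linalg.lstsq(G, vy.ravel(), rcond=None)
    return np.concatenate([abc, def_])        # [a, b, c, d, e, f]

def generate_affine_hypotheses(vx, vy, block=10):
    """First-iteration hypotheses: one affine fit per square region (N x 6).
    Tiled (non-overlapping) blocks here; the paper uses overlapping ones."""
    h, w = vx.shape
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    hyps = []
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            sl = np.s_[r:r + block, c:c + block]
            hyps.append(fit_affine_to_flow(vx[sl], vy[sl], xx[sl], yy[sl]))
    return np.array(hyps)
```

On a globally affine flow field, every block should recover the same six parameters, which makes for an easy correctness check.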
To perform the adaptive k-means described in the paper, we provide you with a simple version that performs k-means and, as a post-processing step, merges clusters if the distance between them is less than distancethreshold. As an alternative, you could also use MATLAB's clusterdata function.

4.3 Assigning Regions

We now associate each pixel location with a cluster centre. Follow the method outlined in Section 5.3 of the paper, and assign each pixel to the affine model that minimizes the distortion at that pixel location.

Q6. (20 points) Create and upload the function
    [I] = assignregions(vx, vy, C)

4.4 Splitting and filtering

As described in the paper, regions that are spatially disconnected can get assigned to the same region during the clustering step. We also want to get rid of small regions, which can throw off the estimation of the affine warps. We post-process the segmentation by separating the regions that are disconnected and filtering out small regions.

Q7. (10 points) Create and upload the function:

    [I, C] = splitandfilter(I, C)

where I and C are the index mask and cluster centres, respectively. The function should recompute the index mask and return the new index mask and the new set of cluster centres. For this function you might want to use some of MATLAB's image processing tools, such as bwconncomp and regionprops. The goal here is to familiarize you with some of the post-processing that is required in many computer vision applications but is usually not taught in detail.

4.5 Iterative Motion Segmentation

Now apply the above procedure iteratively in a loop until the pixel-region memberships do not change [4].

Q8. (15 points) Create and upload a function

    [I, C] = motionsegment(vx, vy)

where vx and vy are the optical flow precomputed for a pair of frames and I is the matrix of pixel-region memberships. Compute the motion segmentation for the flow corresponding to the first two frames of the sequence given in the folder, and visualize your I matrix using the imagesc function.

4.6 Extra Credit: Video Segmentation

So far we have estimated the motion segmentation for a pair of frames. To obtain a consistent segmentation of an entire video sequence, you will need to associate the generated segments across the sequence. Estimate the motion segmentation for each pair of frames in the video. Associate the segments across the frames in the sequence by assigning segments to the same id as in the first frame, by comparing the respective motion models [5], warping and checking overlap consistency, etc.
QX1. (20 points) Write and upload the script motionlayers.m, which reads in the flow for each frame sequentially, performs motion segmentation, and associates segments across frames. Upload a video of your motion segmentation.

[4] In practice, 20 iterations were enough for convergence.
[5] Euclidean distance between the affine warp parameters.
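To make the pixel-assignment (Q6) and split-and-filter (Q7) steps of the pipeline concrete, here is a hedged Python/NumPy sketch (the deliverables are MATLAB functions; the 4-connected labelling below is a tiny stand-in for bwconncomp, and min_size is an assumed parameter):

```python
import numpy as np

def assign_regions(vx, vy, C):
    """Assign each pixel to the affine model (row of C, K x 6) that minimizes
    the squared flow distortion at that pixel (paper, Section 5.3)."""
    h, w = vx.shape
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    u = C[:, 0, None, None] * xx + C[:, 1, None, None] * yy + C[:, 2, None, None]
    v = C[:, 3, None, None] * xx + C[:, 4, None, None] * yy + C[:, 5, None, None]
    distortion = (u - vx) ** 2 + (v - vy) ** 2       # K x h x w
    return np.argmin(distortion, axis=0)             # h x w index mask

def connected_components(mask):
    """4-connected components of a boolean mask (tiny bwconncomp stand-in)."""
    labels = np.zeros(mask.shape, dtype=int)
    n = 0
    for seed in zip(*np.nonzero(mask)):
        if labels[seed]:
            continue
        n += 1
        labels[seed] = n
        stack = [seed]
        while stack:
            r, c = stack.pop()
            for rr, cc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                if (0 <= rr < mask.shape[0] and 0 <= cc < mask.shape[1]
                        and mask[rr, cc] and not labels[rr, cc]):
                    labels[rr, cc] = n
                    stack.append((rr, cc))
    return labels, n

def split_and_filter(I, C, min_size=50):
    """Split spatially disconnected regions into separate labels and drop
    regions smaller than min_size pixels (marked -1 = unassigned)."""
    new_I = np.full(I.shape, -1)
    keep, next_id = [], 0
    for k in range(C.shape[0]):
        comps, n = connected_components(I == k)
        for c in range(1, n + 1):
            m = comps == c
            if m.sum() >= min_size:
                new_I[m] = next_id
                keep.append(C[k])
                next_id += 1
    return new_I, np.array(keep)
```

On a flow field built from two constant-motion halves, the assignment step should split the image cleanly and the filtering step should leave both large regions intact.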
5 Implementation hints

5.1 Lucas-Kanade Tracking

Please make a .mat file containing the top-left and bottom-right corners of your tracked object so that we can evaluate your tracking result. The .mat file should contain a variable called TrackedObject, an n x 4 matrix, where n is the number of frames in CarSequence and each row contains the 4 numbers x1 y1 x2 y2, the coordinates of the top-left and bottom-right corners of the object you tracked. Please name the file track_coordinates.mat.

Recall that determining the motion vector (u, v) for each frame is an iterative procedure that should be repeated until convergence, i.e., until the updates to u and v are close to zero. You may want to set a maximum number of iterations to try in case your template is lost between frames.

Because the car will not be moving by integer amounts, you will need to interpolate values of the image or its gradient within the moving template window. Using interp2 for this can be helpful. The meshgrid function will likely be useful for creating arrays of (x, y) coordinates for all the pixels within the car's window.

Because of the necessity of iterating over image frames, and iterating until convergence, efficiency will be important to prevent your code from running painfully slowly. Please try to vectorize your image operations.

5.2 Dominant Motion Estimation

Scaling image coordinates can be crucial for this type of problem. You should scale coordinates to be between 0 and 1 when computing the A matrix, but be sure to undo that scaling whenever you need to use coordinates to actually reference pixels in an image. Also note that Ix and Iy will vary depending on your scaling.

Note that transforming by an affine matrix M1 and then by M2 is the same as transforming by M12 = M2 M1. It may be easier to keep track of the total accumulated warp (e.g., M12 here) as you iterate towards convergence, and warp the original I(t + 1) using that cumulative warp at each iteration.
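The warp-composition rule can be checked numerically; a small NumPy sketch (helper names are mine):

```python
import numpy as np

def to3x3(M):
    """Lift a 2x3 affine matrix to 3x3 homogeneous form."""
    return np.vstack([M, [0.0, 0.0, 1.0]])

def compose_affine(M2, M1):
    """Warping by M1 and then by M2 equals a single warp by M12 = M2 M1."""
    return (to3x3(M2) @ to3x3(M1))[:2]
```

Applying M1 then M2 to a homogeneous point gives the same result as one application of the composed matrix, which is why accumulating the total warp is safe.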
As before, note that the spatial derivatives remain the same between iterations as you converge to the correct affine parameters p. The only thing that changes is It, as you warp I(t + 1) more and more to better match I(t).

Related to the previous point, the matrix A contains one row for every pixel in the image. But as discussed above, when we compute the temporal derivative It, we should only consider those pixels of the warped version of I(t + 1) that overlap with I(t). Thus, at a given iteration, we need to ignore some of the rows of A, specifically those that correspond to pixels that do not overlap with the current warped version of I(t + 1). Note that this still does not require completely rebuilding A. Instead, you only need to keep track of which of its rows are currently active and index those accordingly when computing p.

5.3 Motion Layers

As in the previous problem, you will need to scale your coordinates to be between 0 and 1. You do not need to undo your scaling as long as you are consistent when you perform the pixel assignment step. The MATLAB value NaN and the function isnan are useful for filtering out pixel locations you do not want to consider. While estimating affine models in your first iteration, some of the linear systems you solve might be singular or have a large condition number; you will have to filter out these hypotheses. You might need to experiment with different values of distancethreshold in order to correctly merge hypotheses.
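Two tiny helpers in the same illustrative Python/NumPy vein (the names and the condition-number cutoff are assumptions, not part of the assignment) for the NaN filtering and singular-system rejection mentioned above:

```python
import numpy as np

def valid_flow_mask(vx, vy):
    """Mask of pixel locations whose flow is defined (cf. isnan in MATLAB)."""
    return ~(np.isnan(vx) | np.isnan(vy))

def is_usable_system(G, max_cond=1e6):
    """Reject singular or badly conditioned least-squares systems; the
    1e6 cutoff is an arbitrary assumed default worth tuning."""
    c = np.linalg.cond(G)
    return bool(np.isfinite(c) and c < max_cond)
```

A hypothesis whose design matrix fails the condition check is simply dropped before clustering.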
More informationImage processing and features
Image processing and features Gabriele Bleser gabriele.bleser@dfki.de Thanks to Harald Wuest, Folker Wientapper and Marc Pollefeys Introduction Previous lectures: geometry Pose estimation Epipolar geometry
More informationOPPA European Social Fund Prague & EU: We invest in your future.
OPPA European Social Fund Prague & EU: We invest in your future. Patch tracking based on comparing its piels 1 Tomáš Svoboda, svoboda@cmp.felk.cvut.cz Czech Technical University in Prague, Center for Machine
More informationComputer Vision 2. SS 18 Dr. Benjamin Guthier Professur für Bildverarbeitung. Computer Vision 2 Dr. Benjamin Guthier
Computer Vision 2 SS 18 Dr. Benjamin Guthier Professur für Bildverarbeitung Computer Vision 2 Dr. Benjamin Guthier 1. IMAGE PROCESSING Computer Vision 2 Dr. Benjamin Guthier Content of this Chapter Non-linear
More informationModule 7 VIDEO CODING AND MOTION ESTIMATION
Module 7 VIDEO CODING AND MOTION ESTIMATION Version ECE IIT, Kharagpur Lesson Block based motion estimation algorithms Version ECE IIT, Kharagpur Lesson Objectives At the end of this less, the students
More informationCS201: Computer Vision Introduction to Tracking
CS201: Computer Vision Introduction to Tracking John Magee 18 November 2014 Slides courtesy of: Diane H. Theriault Question of the Day How can we represent and use motion in images? 1 What is Motion? Change
More information9.913 Pattern Recognition for Vision. Class 8-2 An Application of Clustering. Bernd Heisele
9.913 Class 8-2 An Application of Clustering Bernd Heisele Fall 2003 Overview Problem Background Clustering for Tracking Examples Literature Homework Problem Detect objects on the road: Cars, trucks, motorbikes,
More informationDeep Learning: Image Registration. Steven Chen and Ty Nguyen
Deep Learning: Image Registration Steven Chen and Ty Nguyen Lecture Outline 1. Brief Introduction to Deep Learning 2. Case Study 1: Unsupervised Deep Homography 3. Case Study 2: Deep LucasKanade What is
More informationOutline. Data Association Scenarios. Data Association Scenarios. Data Association Scenarios
Outline Data Association Scenarios Track Filtering and Gating Global Nearest Neighbor (GNN) Review: Linear Assignment Problem Murthy s k-best Assignments Algorithm Probabilistic Data Association (PDAF)
More informationTake Home Exam # 2 Machine Vision
1 Take Home Exam # 2 Machine Vision Date: 04/26/2018 Due : 05/03/2018 Work with one awesome/breathtaking/amazing partner. The name of the partner should be clearly stated at the beginning of your report.
More informationCapturing, Modeling, Rendering 3D Structures
Computer Vision Approach Capturing, Modeling, Rendering 3D Structures Calculate pixel correspondences and extract geometry Not robust Difficult to acquire illumination effects, e.g. specular highlights
More informationFace Tracking : An implementation of the Kanade-Lucas-Tomasi Tracking algorithm
Face Tracking : An implementation of the Kanade-Lucas-Tomasi Tracking algorithm Dirk W. Wagener, Ben Herbst Department of Applied Mathematics, University of Stellenbosch, Private Bag X1, Matieland 762,
More informationTwo-view geometry Computer Vision Spring 2018, Lecture 10
Two-view geometry http://www.cs.cmu.edu/~16385/ 16-385 Computer Vision Spring 2018, Lecture 10 Course announcements Homework 2 is due on February 23 rd. - Any questions about the homework? - How many of
More informationClass 3: Advanced Moving Object Detection and Alert Detection Feb. 18, 2008
Class 3: Advanced Moving Object Detection and Alert Detection Feb. 18, 2008 Instructor: YingLi Tian Video Surveillance E6998-007 Senior/Feris/Tian 1 Outlines Moving Object Detection with Distraction Motions
More informationCOMPARATIVE STUDY OF DIFFERENT APPROACHES FOR EFFICIENT RECTIFICATION UNDER GENERAL MOTION
COMPARATIVE STUDY OF DIFFERENT APPROACHES FOR EFFICIENT RECTIFICATION UNDER GENERAL MOTION Mr.V.SRINIVASA RAO 1 Prof.A.SATYA KALYAN 2 DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING PRASAD V POTLURI SIDDHARTHA
More informationOptical Flow-Based Motion Estimation. Thanks to Steve Seitz, Simon Baker, Takeo Kanade, and anyone else who helped develop these slides.
Optical Flow-Based Motion Estimation Thanks to Steve Seitz, Simon Baker, Takeo Kanade, and anyone else who helped develop these slides. 1 Why estimate motion? We live in a 4-D world Wide applications Object
More informationSegmentation and Tracking of Partial Planar Templates
Segmentation and Tracking of Partial Planar Templates Abdelsalam Masoud William Hoff Colorado School of Mines Colorado School of Mines Golden, CO 800 Golden, CO 800 amasoud@mines.edu whoff@mines.edu Abstract
More informationLucas-Kanade Motion Estimation. Thanks to Steve Seitz, Simon Baker, Takeo Kanade, and anyone else who helped develop these slides.
Lucas-Kanade Motion Estimation Thanks to Steve Seitz, Simon Baker, Takeo Kanade, and anyone else who helped develop these slides. 1 Why estimate motion? We live in a 4-D world Wide applications Object
More informationEN1610 Image Understanding Lab # 4: Corners, Interest Points, Hough Transform
EN1610 Image Understanding Lab # 4: Corners, Interest Points, Hough Transform The goal of this fourth lab is to ˆ Learn how to detect corners, and use them in a tracking application ˆ Learn how to describe
More informationSegmentation by Clustering Reading: Chapter 14 (skip 14.5)
Segmentation by Clustering Reading: Chapter 14 (skip 14.5) Data reduction - obtain a compact representation for interesting image data in terms of a set of components Find components that belong together
More informationEE 264: Image Processing and Reconstruction. Image Motion Estimation I. EE 264: Image Processing and Reconstruction. Outline
1 Image Motion Estimation I 2 Outline 1. Introduction to Motion 2. Why Estimate Motion? 3. Global vs. Local Motion 4. Block Motion Estimation 5. Optical Flow Estimation Basics 6. Optical Flow Estimation
More informationSegmentation by Clustering. Segmentation by Clustering Reading: Chapter 14 (skip 14.5) General ideas
Reading: Chapter 14 (skip 14.5) Data reduction - obtain a compact representation for interesting image data in terms of a set of components Find components that belong together (form clusters) Frame differencing
More informationReal-time Detection of Illegally Parked Vehicles Using 1-D Transformation
Real-time Detection of Illegally Parked Vehicles Using 1-D Transformation Jong Taek Lee, M. S. Ryoo, Matthew Riley, and J. K. Aggarwal Computer & Vision Research Center Dept. of Electrical & Computer Engineering,
More informationGlobal Flow Estimation. Lecture 9
Global Flow Estimation Lecture 9 Global Motion Estimate motion using all pixels in the image. Parametric flow gives an equation, which describes optical flow for each pixel. Affine Projective Global motion
More informationCPSC 425: Computer Vision
1 / 49 CPSC 425: Computer Vision Instructor: Fred Tung ftung@cs.ubc.ca Department of Computer Science University of British Columbia Lecture Notes 2015/2016 Term 2 2 / 49 Menu March 10, 2016 Topics: Motion
More informationOccluded Facial Expression Tracking
Occluded Facial Expression Tracking Hugo Mercier 1, Julien Peyras 2, and Patrice Dalle 1 1 Institut de Recherche en Informatique de Toulouse 118, route de Narbonne, F-31062 Toulouse Cedex 9 2 Dipartimento
More informationIntroduction to Digital Image Processing
Fall 2005 Image Enhancement in the Spatial Domain: Histograms, Arithmetic/Logic Operators, Basics of Spatial Filtering, Smoothing Spatial Filters Tuesday, February 7 2006, Overview (1): Before We Begin
More informationComparison Between The Optical Flow Computational Techniques
Comparison Between The Optical Flow Computational Techniques Sri Devi Thota #1, Kanaka Sunanda Vemulapalli* 2, Kartheek Chintalapati* 3, Phanindra Sai Srinivas Gudipudi* 4 # Associate Professor, Dept.
More informationAssignment: Backgrounding and Optical Flow.
Assignment: Backgrounding and Optical Flow. April 6, 00 Backgrounding In this part of the assignment, you will develop a simple background subtraction program.. In this assignment, you are given two videos.
More informationhuman vision: grouping k-means clustering graph-theoretic clustering Hough transform line fitting RANSAC
COS 429: COMPUTER VISON Segmentation human vision: grouping k-means clustering graph-theoretic clustering Hough transform line fitting RANSAC Reading: Chapters 14, 15 Some of the slides are credited to:
More informationENGN2911I: 3D Photography and Geometry Processing Assignment 1: 3D Photography using Planar Shadows
ENGN2911I: 3D Photography and Geometry Processing Assignment 1: 3D Photography using Planar Shadows Instructor: Gabriel Taubin Assignment written by: Douglas Lanman 29 January 2009 Figure 1: 3D Photography
More informationVisual motion. Many slides adapted from S. Seitz, R. Szeliski, M. Pollefeys
Visual motion Man slides adapted from S. Seitz, R. Szeliski, M. Pollefes Motion and perceptual organization Sometimes, motion is the onl cue Motion and perceptual organization Sometimes, motion is the
More informationMotion Detection and Segmentation Using Image Mosaics
Research Showcase @ CMU Institute for Software Research School of Computer Science 2000 Motion Detection and Segmentation Using Image Mosaics Kiran S. Bhat Mahesh Saptharishi Pradeep Khosla Follow this
More informationAutonomous Navigation for Flying Robots
Computer Vision Group Prof. Daniel Cremers Autonomous Navigation for Flying Robots Lecture 7.1: 2D Motion Estimation in Images Jürgen Sturm Technische Universität München 3D to 2D Perspective Projections
More informationSchool of Computing University of Utah
School of Computing University of Utah Presentation Outline 1 2 3 4 Main paper to be discussed David G. Lowe, Distinctive Image Features from Scale-Invariant Keypoints, IJCV, 2004. How to find useful keypoints?
More informationTextureless Layers CMU-RI-TR Qifa Ke, Simon Baker, and Takeo Kanade
Textureless Layers CMU-RI-TR-04-17 Qifa Ke, Simon Baker, and Takeo Kanade The Robotics Institute Carnegie Mellon University 5000 Forbes Avenue Pittsburgh, PA 15213 Abstract Layers are one of the most well
More informationPerception IV: Place Recognition, Line Extraction
Perception IV: Place Recognition, Line Extraction Davide Scaramuzza University of Zurich Margarita Chli, Paul Furgale, Marco Hutter, Roland Siegwart 1 Outline of Today s lecture Place recognition using
More informationParticle Tracking. For Bulk Material Handling Systems Using DEM Models. By: Jordan Pease
Particle Tracking For Bulk Material Handling Systems Using DEM Models By: Jordan Pease Introduction Motivation for project Particle Tracking Application to DEM models Experimental Results Future Work References
More informationAgenda. Rotations. Camera calibration. Homography. Ransac
Agenda Rotations Camera calibration Homography Ransac Geometric Transformations y x Transformation Matrix # DoF Preserves Icon translation rigid (Euclidean) similarity affine projective h I t h R t h sr
More informationEXAM SOLUTIONS. Image Processing and Computer Vision Course 2D1421 Monday, 13 th of March 2006,
School of Computer Science and Communication, KTH Danica Kragic EXAM SOLUTIONS Image Processing and Computer Vision Course 2D1421 Monday, 13 th of March 2006, 14.00 19.00 Grade table 0-25 U 26-35 3 36-45
More informationFundamentals of Digital Image Processing
\L\.6 Gw.i Fundamentals of Digital Image Processing A Practical Approach with Examples in Matlab Chris Solomon School of Physical Sciences, University of Kent, Canterbury, UK Toby Breckon School of Engineering,
More information