Global motion model based on B-spline wavelets: application to motion estimation and video indexing


E. Bruno 1, D. Pellerin 1,2
Laboratoire des Images et des Signaux (LIS)
1 INPG, 46 Av. Félix Viallet, Grenoble Cedex, France
2 ISTG, Université Joseph Fourier, Grenoble, France
eric.bruno, denis.pellerin@lis.inpg.fr

Abstract

This paper describes a framework for estimating a global motion model based on B-spline wavelets. The wavelet-based model allows optical flow to be recovered at different resolution levels from image derivatives. By combining estimates from different resolution levels in a coarse-to-fine scheme, our algorithm is able to recover a large range of velocity magnitudes. The algorithm is evaluated on artificial and real image sequences and provides accurate optical flows. The wavelet coefficients of the model at low resolution levels also provide features to index video or to recognize activities. Preliminary experiments on video indexing based on coarse wavelet coefficients are presented. As an example, we consider video sequences containing different human activities; the wavelet coefficients prove efficient at yielding a database partition related to the kind of human activity.

1 Introduction

Motion estimation is a fundamental problem in video analysis. Knowledge of motion is required in many applications, such as video compression, video indexing, activity recognition and 3D reconstruction. Parametric models of optical flow aid estimation by enforcing strong constraints on the spatial variation of motion within a region or over the whole image. Many authors have used affine or quadratic basis sets [2, 8] to model motion. These models are very effective at representing planar transformations but fail when there are two or more differently moving regions in the image sequence. Such methods generally aim at determining a partition into moving regions and then fitting a motion model to each region.
Other works in activity recognition use motion specificities to generate basis functions. In [3], Black et al. model motion as a linear combination of eigenvectors computed from a training set of flow fields (motion discontinuities and non-rigid mouth motion). These methods are very effective at modeling velocity fields close to the training set, but cannot be used to estimate arbitrary motions. In our previous work [4], global motion was expressed as a Fourier series expansion. Such a model allows a large range of motions to be approximated with a few harmonic basis functions. Following the same principle, we now use a set of B-spline wavelet functions. A wavelet series expansion is better suited than Fourier analysis to approximating piecewise smooth functions, such as optical flow, and using B-spline wavelets, which are smooth, symmetric and compactly supported, further improves motion estimation accuracy. In addition, the model parameters provide a concise description of the global motion. Srinivasan and Chellappa have recently proposed to model global motion by a set of overlapped cosine windows [10]; their approach is close to our algorithm. Our work can also be compared to Szeliski's approach [11], which is based on a sparse velocity vector field estimation interpolated by spline functions.

This paper is organized as follows: Section 2 outlines the problem of estimating a global motion model from the image sequence brightness function and presents the motion model based on B-spline wavelets. Section 3 details the robust estimation of the model parameters, based on a sparse system inversion. This procedure is then embedded in a coarse-to-fine scheme so as to handle a large range of motion magnitudes. Section 4 presents experimental results on the accuracy of the estimated motion model and a performance comparison with the algorithms of Srinivasan and Chellappa and of Szeliski. This section also presents preliminary experiments on video indexing based on motion wavelet coefficients.

2 Global motion model

2.1 General remarks on motion model estimation

Let us consider an image sequence I(p_i, t), with p_i = (x_i, y_i) ∈ Ω the location of each pixel in the image. The brightness constancy assumption states that the image brightness I(p_i, t + 1) is a simple deformation of the image at time t:

    I(p_i, t) = I(p_i + V(p_i, t), t + 1)    (1)

V(p_i, t) = (u, v) is the optical flow (also named velocity field or global motion) between the two frames I(p_i, t) and I(p_i, t + 1), and is also defined over Ω. This velocity field can be globally modeled as a linear combination of basis functions:

    V(p_i, t) = Σ_{k=0}^{N} c_k φ_k(p_i),  with c_k = (c_k^x, c_k^y)^T    (2)

where the φ_k(p_i) are basis functions. The motion parameter vector θ = [c_0 ... c_N]^T is estimated by minimizing an objective function [8]:

    E = Σ_{p_i ∈ Ω} ρ(I(p_i + V(p_i, t), t + 1) − I(p_i, t), σ) = Σ_{p_i ∈ Ω} ρ(r(p_i + V), σ)    (3)

where ρ(·, σ) is a robust error norm (or M-estimator) applied to the difference of warped images r(p_i + V) = I(p_i + V(p_i, t), t + 1) − I(p_i, t). A robust estimator is needed to reduce the weight of outlying data in the estimation process. The solution for θ is then given by:

    θ = argmin_θ E    (4)

The success of the minimization stage depends on the model's ability to fit the real global motion between I(t) and I(t + 1). Wavelet basis functions are reputed to give the best approximation of piecewise smooth functions, such as optical flow, and are an effective solution to our problem.

2.2 Wavelet series expansion

Any function f(x) ∈ L²(R) can be expanded as a weighted sum of basis functions:

    f(x) = Σ_k c_{l,k} φ_{l,k}(x) + Σ_{j ≥ l} Σ_k d_{j,k} ψ_{j,k}(x)    (5)

where φ_{j,k}(x) = φ(2^j x − k) and ψ_{j,k}(x) = ψ(2^j x − k) are respectively the scaling and wavelet functions, dilated to level j and translated by k. Let V_j, with j ∈ Z, be a set of closed subspaces of L²(R). The scaling functions {φ_{j,k}(x)} are a basis for V_j, which contains all functions of L²(R) at resolution level j.
The new details at level j are represented by the wavelet set {ψ_{j,k}(x)}, which is a basis for W_j. The subspace V_{j+1} is then defined as:

    V_{j+1} = V_j ⊕ W_j,  V_j ⊂ V_{j+1}    (6)

Relation (6) leads to:

    Σ_k c_{j+1,k} φ_{j+1,k}(x) = Σ_k c_{j,k} φ_{j,k}(x) + Σ_k d_{j,k} ψ_{j,k}(x)    (7)

Estimating the {c_{j+1,k}} coefficients therefore also provides the scaling and wavelet coefficients for all lower resolution levels. The approximation of f(x) at level j can then be expressed as a linear combination of scaling functions:

    f_j(x) = Σ_k c_{j,k} φ_{j,k}(x)    (8)

We want to model the velocity field V(x, y) using two-dimensional scaling basis functions. A natural extension of the one-dimensional scaling basis functions to two dimensions is the tensor product:

    Φ_{j,k1,k2}(x, y) = φ(2^j x − k1) φ(2^j y − k2)    (9)

The subscripts j, k1, k2 represent respectively the resolution scale and the horizontal and vertical translations. The global motion can then be approximated at resolution scale j by:

    V_j(p_i, t) = Σ_{k1,k2=0}^{2^j − 1} c_{j,k1,k2} Φ_{j,k1,k2}(p_i)    (10)

2.3 B-spline basis functions

To recover a smooth and regular optical flow, the scaling functions must be as smooth and symmetric as possible. The B-spline wavelet has maximum regularity and symmetry, and is thus a good candidate to model motion. The B-spline function of degree N − 1 is the convolution of N box functions:

    β_0^{N−1}(x) = (B * B * ... * B)(x)    (11)

with B = 1 on (−1/2, 1/2). The B-spline scaling function at resolution level j + 1 is defined by the dilation equation:

    β_{j+1}(x) = 2^{1−N} Σ_{k=0}^{N} (N choose k) β_j(2x − k)    (12)
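As a concrete illustration of equations (2), (9) and (11), the sketch below builds a sampled B-spline by repeated convolution of box functions, forms a 2-D tensor-product basis function, and synthesizes one flow component as a weighted sum of basis functions. This is a minimal numerical sketch: the sampling step, grid handling and function names are our own choices, not from the paper.

```python
import numpy as np

def bspline_1d(degree, step=0.01):
    """Sample the B-spline of the given degree by convolving degree + 1
    box functions B = 1 on (-1/2, 1/2), as in eq. (11)."""
    box = np.ones(int(round(1.0 / step)))   # sampled box function
    beta = box.copy()
    for _ in range(degree):                 # one convolution per extra box
        beta = np.convolve(beta, box) * step
    return beta

def scaling_2d(phi):
    """Tensor-product 2-D scaling function Phi(x, y) = phi(x) phi(y), eq. (9)."""
    return np.outer(phi, phi)

def synthesize_component(coeffs, basis):
    """One flow component as a weighted sum of 2-D basis functions, eq. (2);
    coeffs has shape (N,), basis has shape (N, H, W)."""
    return np.tensordot(coeffs, basis, axes=1)
```

For degree 1 this produces the familiar triangle (hat) function; dilated and translated copies of `scaling_2d(phi)` would play the role of the Φ_{j,k1,k2} in eq. (10).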

β_0^{N−1}(x) is supported on [0, N]. Since we work on a finite sampled signal, the scaling function β_0^{N−1}(x) is mapped into the signal support. The two-dimensional basis functions are computed at different resolution levels using the product (9). Figure 1 represents the 1-D B-spline functions of degree 1, 2 and 3 and the corresponding 2-D scaling functions used to model motion.

Figure 1. a) 1-dimensional B-spline scaling functions of degree 1, 2 and 3 and b), c), d) the corresponding 2-dimensional scaling functions Φ1(x), Φ2(x), Φ3(x).

3 Global motion estimation

3.1 Incremental robust estimation

To estimate the global motion V, the robust error E(V) defined in (3) has to be minimized. This step is achieved using an incremental scheme:

    E(V + δV) = Σ_{p_i ∈ Ω} ρ(r(p_i + V + δV), σ)    (13)

Given an estimate V (initially zero), the goal is to estimate the increment δV that minimizes equation (13). A first-order Taylor series expansion of I(t + 1) with respect to δV provides a new error quantity:

    Ẽ(V + δV) = Σ_{p_i ∈ Ω} ρ(r(p_i + V) + δV^T ∇I(p_i + V, t + 1), σ)    (14)

where ∇I = [I_x, I_y]^T represents the spatial derivatives of I. The M-estimation problem can be converted into an iterated reweighted least squares problem [7]:

    Ẽ(V + δV) = Σ_{p_i ∈ Ω} w(r(p_i + V)) (r(p_i + V) + δV^T ∇I(p_i + V, t + 1))²    (15)

with w(x) = (1/x) ∂ρ(x)/∂x. The ρ-function is the Tukey biweight:

    ρ(x) = (σ²/6) (1 − (1 − (x/σ)²)³)   if |x| ≤ σ
    ρ(x) = σ²/6                          if |x| > σ    (16)

Using the global motion model defined in (10) for V and δV at a resolution level j, the robust error in (15) becomes, in matrix notation:

    Ẽ(V + δV) = (M_j δθ_j + B)^T W (M_j δθ_j + B)    (17)

with, for (k1, k2) ∈ [0, 2^j − 1]² and p_i ∈ Ω (N = (2^j)² and M = card(Ω)):

    M_j = [I_x(p_i + V) Φ_{j,k1,k2}(p_i), I_y(p_i + V) Φ_{j,k1,k2}(p_i)],  M_j ∈ R^{M×2N}
    δθ_j = [δc_{j,k1,k2}]^T,  δθ_j ∈ R^{2N}
    W = diag(w(p_i + V)),  W ∈ R^{M×M}
    B = r(p_i + V),  B ∈ R^M    (18)

The minimum of (17) with respect to δθ_j leads to the linear system:

    M_j^T W M_j δθ_j = −M_j^T W B    (19)

and we obtain the solution:

    δθ_j = −(M_j^T W M_j)^{−1} M_j^T W B    (20)

The matrix M_j^T W M_j can be large when the resolution level j is high, and computing its inverse with numerical techniques is expensive in time and memory. Fortunately, because we use localized scaling functions (Fig. 1) as basis functions, the system (19) is sparse. Figure 2 represents the zero and nonzero entries of M_j^T W M_j for the first three B-spline scaling function degrees at level j = 4. Note that the sparsity depends on the spatial extent of the scaling function used.
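The Tukey biweight of eq. (16), together with the weight w(x) = ρ'(x)/x that it induces in the reweighted least squares (15), can be transcribed directly; a small sketch with function names of our own:

```python
import numpy as np

def tukey_rho(x, sigma):
    """Tukey biweight M-estimator of eq. (16)."""
    x = np.asarray(x, dtype=float)
    body = (sigma**2 / 6.0) * (1.0 - (1.0 - (x / sigma)**2)**3)
    return np.where(np.abs(x) <= sigma, body, sigma**2 / 6.0)

def tukey_weight(x, sigma):
    """w(x) = rho'(x)/x: quartic inside [-sigma, sigma], zero outside,
    so gross outliers receive zero weight in the least-squares system."""
    x = np.asarray(x, dtype=float)
    return np.where(np.abs(x) <= sigma, (1.0 - (x / sigma)**2)**2, 0.0)
```

Residuals larger than σ are thus ignored entirely, which is what makes the estimator robust to occlusions and secondary motions.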
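Because the normal matrix is sparse, the system can be handed to an iterative Krylov solver; the paper uses the generalized minimum residual method of [9]. The sketch below performs one reweighted least-squares increment with SciPy's GMRES. It is illustrative only: the function name, the Gauss-Newton sign convention (δθ solving MᵀWM δθ = −MᵀWB) and the toy system in the usage are ours.

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import gmres

def irls_step(M, b, weights):
    """One reweighted least-squares increment: assemble the sparse normal
    equations M^T W M dtheta = -M^T W b and solve them with GMRES."""
    W = sparse.diags(weights)
    A = (M.T @ W @ M).tocsr()        # sparse normal matrix
    rhs = -M.T @ (weights * b)
    dtheta, info = gmres(A, rhs, atol=1e-12)
    if info != 0:
        raise RuntimeError("GMRES did not converge")
    return dtheta
```

In the full algorithm one would start with unit weights, re-evaluate them with the Tukey weight on the new residuals, and iterate until the increment becomes negligible.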

Figure 2. M_j^T W M_j matrix for B-spline scaling functions of degree a) 1, b) 2 and c) 3 at level j = 4. White and black regions represent zero and non-zero entries, respectively.

δθ_j is then iteratively estimated in (20) using the generalized minimum residual method [9], which is effective and fast on such a sparse system. The first step of the reweighted least squares consists in obtaining a first estimate δθ_j^0 with the weight matrix W equal to the identity. New weights are then evaluated and a new estimate δθ_j^1 is obtained. This process is repeated until the incremental estimate δθ_j^i becomes too small, or after a predefined number of iterations. The motion parameter vector at level j is then:

    θ_j = Σ_i δθ_j^i    (21)

3.2 Coarse-to-fine estimation

At a coarse resolution level, the wavelet coefficients are estimated on large image regions and can recover large but coarse motion. So as to provide an accurate global motion estimate for both large and fine displacements, the incremental scheme is embedded in a coarse-to-fine refinement algorithm. A first estimate θ_l is obtained at the coarsest level l. Then θ_l is transmitted to the next finer level, where a new incremental estimation is performed. This is repeated until the finest level L is reached. The final motion parameter vector θ_L then contains all the wavelet coefficients that describe the global motion V_L at resolution level L.

4 Results

The estimated global motion parameter vector not only provides an optical flow estimate, but also describes the activity of the sequence by means of a few wavelet coefficients. In this section, we present results on motion estimation accuracy and some results on video indexing based on wavelet coefficients.

4.1 Performances on synthetic and real image sequences

We test our algorithm on both synthetic and real sequences. The angular error [1] is used to evaluate the estimation when the true motion field (u_r, v_r) is known (for the Yosemite sequence (Fig. 3a), downloaded from Barron et al.'s FTP site 1). This angular error is defined at each location by:

    α_e = arccos( (u u_r + v v_r + 1) / (sqrt(u² + v² + 1) sqrt(u_r² + v_r² + 1)) )    (22)

This provides a value in degrees which takes account of both velocity magnitude and orientation errors.

Figure 3. Optical flow result: a) frame from the Yosemite sequence, b) real flow field, c) estimated flow field. The estimated optical flow is modeled by B-spline basis functions of degree 1 with a coarse-to-fine estimation from level 2 to level 4.

Table 1 compares the angular error average and standard deviation for B-spline scaling functions of degree 1, 2 and 3. The motion models estimated with the different B-splines are close to the real optical flow. The B-splines of degree 2 and 3 perform slightly better than the B-spline of degree 1, but at a larger computational cost. Whatever the degree of the B-spline used, our algorithm outperforms the approach of Srinivasan and Chellappa [10], who use overlapped cosine windows to model global motion. With a lower density of estimated velocity vectors, the algorithm of Szeliski and Coughlan [11] performs slightly better.

The next example, the Baltrain sequence (Fig. 4a), is a real sequence whose motion is more complex than Yosemite's. Figures 4b), c) and d) display the progress of the estimation scheme across the resolution levels. The estimated optical flows are close to the perceptual view, although they are defined by only a small number of parameters (only the wavelet coefficient vectors at level 4). To sum up, a motion model based on B-spline wavelets efficiently approximates the global optical flow with very few parameters.

1 ftp.csd.uwo.ca
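The angular error of eq. (22) translates directly into code; a small sketch (the function name is ours):

```python
import numpy as np

def angular_error_deg(u, v, u_r, v_r):
    """Angular error of eq. (22) (Barron et al. [1]), in degrees, between an
    estimated flow (u, v) and the true flow (u_r, v_r); the appended third
    component 1 penalizes magnitude as well as direction errors."""
    num = u * u_r + v * v_r + 1.0
    den = np.sqrt(u**2 + v**2 + 1.0) * np.sqrt(u_r**2 + v_r**2 + 1.0)
    return np.degrees(np.arccos(np.clip(num / den, -1.0, 1.0)))
```

The clip guards against arguments marginally outside [-1, 1] caused by rounding; the function broadcasts over whole flow fields, so averaging it gives the table statistics directly.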
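The coarse-to-fine logic of Section 3.2 can be illustrated with a deliberately simplified stand-in: below, a pyramid of Gauss-Newton refinements estimates a single global translation in place of the wavelet coefficient vector, and plain least squares stands in for the robust weighting of Section 3.1. All names and the synthetic setup in the usage are ours; only the pyramid structure (estimate coarse, propagate, refine) mirrors the paper.

```python
import numpy as np
from scipy.ndimage import map_coordinates, sobel, zoom

def translation_increment(A, B, d):
    """One Gauss-Newton increment for a global translation d = (dx, dy):
    linearize B(p + d) - A(p) around d, in the spirit of eqs. (13)-(14)."""
    H, W = A.shape
    yy, xx = np.mgrid[0:H, 0:W].astype(float)
    warped = map_coordinates(B, [yy + d[1], xx + d[0]], order=1, mode='nearest')
    r = (warped - A).ravel()
    Ix = (sobel(warped, axis=1) / 8.0).ravel()   # spatial derivatives of the
    Iy = (sobel(warped, axis=0) / 8.0).ravel()   # warped image
    J = np.stack([Ix, Iy], axis=1)
    dd, *_ = np.linalg.lstsq(J, -r, rcond=None)
    return dd

def coarse_to_fine_translation(A, B, n_levels=3, n_iter=10):
    """Refine the estimate from the coarsest pyramid level down to full
    resolution, propagating it to each finer level (Section 3.2)."""
    d = np.zeros(2)                      # displacement in full-res pixels
    for lev in range(n_levels - 1, -1, -1):
        s = 2 ** lev
        Al = zoom(A, 1.0 / s, order=1)
        Bl = zoom(B, 1.0 / s, order=1)
        for _ in range(n_iter):
            d += translation_increment(Al, Bl, d / s) * s
    return d
```

At the coarsest level a multi-pixel displacement shrinks below one pixel, where the linearization (14) is valid; the refined estimate then seeds each finer level, exactly the role θ_l plays for θ_{l+1} in the paper.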

Figure 4. Frame from the Baltrain sequence and global motion estimated at levels b) 2, c) 3 and d) 4. B-splines of degree 3 were used to model motion.

    B-spline          Level   Avg.   Std     Density
    Φ1                j = 2   –      9.6°    100%
                      j = 3   –      8.5°    100%
                      j = 4   –      9.7°    100%
    Φ2                j = 2   –      9.6°    100%
                      j = 3   –      7.5°    100%
                      j = 4   –      9.0°    100%
    Φ3                j = 2   –      10.7°   100%
                      j = 3   –      8.0°    100%
                      j = 4   –      8.7°    100%
    Srinivasan [10]           8.9°   10.6°   100%
    Szeliski [11]             3.1°   7.6°    39.6%

Table 1. Motion estimation angular errors (average and standard deviation) for the Yosemite sequence, using different B-spline scaling functions. The coarse-to-fine scheme was performed from resolution level j = 2 to j = 4.

With only 16 basis functions (j = 2), it is possible to obtain a coarse motion estimate and hence information about motion activity along a video sequence.

4.2 Video indexing based on wavelet coefficients

In a motion-based video indexing problem, the first stage is to define relevant motion features. These features have to contain enough information to distinguish activities, while being coarse enough to be invariant to closely similar motion activities. We show in this section that a wavelet model estimated at coarse levels can provide relevant motion features. In the following, sequences are considered as elementary video shots.

4.2.1 Definition of the motion features

The wavelet components extracted from different sequences provide, for each frame i of a sequence S, a motion parameter vector θ_i = [c_1 ... c_N]^T ∈ R^{2N}. The motion-based feature vector associated with a sequence S of M frames is then defined as the center of gravity of the reduced motion parameter vectors of S:

    Θ_S = (1/M) Σ_{i=1}^{M} θ_i^T σ^{−1}    (23)

where σ is a diagonal matrix containing the standard deviations of the components of θ over the whole sequence. Computing Θ_S for K videos provides a motion-based feature space:

    Ω = {Θ_{S_i}},  i = 1 ... K    (24)

The dimension of Ω depends on the number of wavelets used to model the global motion. Curvilinear component analysis (CCA) [6] is an algorithm for dimensionality reduction and representation of multidimensional data sets. Applying CCA to the motion feature space Ω gives a revealing low-dimensional representation of it, preparing a basis for further clustering and classification.

4.2.2 Experimental results

We built Ω from a video database representing six human activities (Figure 5a): up, down, left, right, come and go. These sequences were acquired for activity recognition [5] (acquisition rate: 10 Hz). Each sequence includes ten frames, and there are five sequences for each activity, with different persons, so the video database contains 30 sequences. B-splines of degree 1 at resolution level 2 were used to model coarse motion. Figure 5b) represents the 2D mapping of Ω. The projection shows that our motion features are relevant for clustering the six activities.
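The shot descriptor of eq. (23), i.e. each parameter component divided by its standard deviation over the sequence and then averaged over the frames, is a few lines of NumPy; a minimal sketch (the array layout and the small epsilon guard are our choices):

```python
import numpy as np

def motion_feature(thetas, eps=1e-8):
    """Motion-based feature of a shot, eq. (23): reduce each component of the
    per-frame parameter vectors by its standard deviation over the sequence,
    then take the center of gravity over the M frames.

    thetas : (M, 2N) array, one motion parameter vector theta_i per frame.
    """
    std = thetas.std(axis=0)
    return (thetas / (std + eps)).mean(axis=0)
```

Stacking `motion_feature` over K shots yields the feature space Ω of eq. (24), on which a dimensionality reduction such as CCA can then be run.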

The activities are also grouped by their main global motion characteristics. The sequences up, right and come have a global motion toward the right of the scene, whereas the other sequences move toward the left. The same division appears in the projection map.

Figure 5. a) Typical video sequences in the database (up, down, left, right, come, go) and b) 2D mapping of the motion feature space obtained by CCA.

5 Conclusion

We have presented a new motion model based on B-spline wavelets. Using this framework, it is possible to estimate motion accurately in image sequences and also to extract global motion features from video sequences. The method is based on a B-spline wavelet series expansion of the optical flow between two consecutive frames. This kind of expansion has the advantage of modeling various types of motion, and it provides relevant information about the motion structure. It is important to note that the motion wavelet coefficients are estimated directly from the image derivatives and require no prior dense motion estimate or image segmentation. We have illustrated the relevance of the motion model with regard to motion estimation accuracy, and shown that the wavelet coefficients can be used for video indexing or activity recognition. Future work will focus on video indexing: how to combine wavelet coefficients along video sequences, and how to classify these motion features to obtain a relevant video database representation.

References

[1] J. Barron, D. Fleet, and S. Beauchemin. Performance of optical flow techniques. International Journal of Computer Vision, 12(1):43-77, 1994.
[2] M. Black and A. Jepson. Estimating optical flow in segmented images using variable-order parametric models with local deformations. IEEE Transactions on Pattern Analysis and Machine Intelligence, 18(10):972-986, October 1996.
[3] M. Black, Y. Yacoob, A. Jepson, and D. Fleet. Learning parametrized models of image motion. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, CVPR'97, Puerto Rico, June 1997.
[4] E. Bruno and D. Pellerin. Global motion Fourier series expansion for video indexing and retrieval. In Advances in Visual Information Systems, VISUAL, Lyon, November 2000.
[5] O. Chomat and J. Crowley. Probabilistic recognition of activity using local appearance. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, CVPR'99, June 1999.
[6] P. Demartines and J. Herault. Curvilinear component analysis: A self-organising neural network for non linear mapping of data sets. IEEE Transactions on Neural Networks, 8(1), 1997.
[7] P. Holland and R. Welsch. Robust regression using iteratively reweighted least squares. Communications in Statistics - Theory and Methods, A6:813-827, 1977.
[8] J.-M. Odobez and P. Bouthemy. Robust multiresolution estimation of parametric motion models. Journal of Visual Communication and Image Representation, 6(4):348-365, December 1995.
[9] W. H. Press, S. Teukolsky, W. Vetterling, and B. Flannery. Numerical Recipes in C, Second Edition. Cambridge University Press, 1992.
[10] S. Srinivasan and R. Chellappa. Noise-resilient estimation of optical flow by use of overlapped basis functions. Journal of the Optical Society of America A, 16(3), March 1999.
[11] R. Szeliski and J. Coughlan. Spline-based image registration. International Journal of Computer Vision, 22(3):199-218, 1997.


More information

Matching. Compare region of image to region of image. Today, simplest kind of matching. Intensities similar.

Matching. Compare region of image to region of image. Today, simplest kind of matching. Intensities similar. Matching Compare region of image to region of image. We talked about this for stereo. Important for motion. Epipolar constraint unknown. But motion small. Recognition Find object in image. Recognize object.

More information

Optical flow and tracking

Optical flow and tracking EECS 442 Computer vision Optical flow and tracking Intro Optical flow and feature tracking Lucas-Kanade algorithm Motion segmentation Segments of this lectures are courtesy of Profs S. Lazebnik S. Seitz,

More information

Face Detection and Recognition in an Image Sequence using Eigenedginess

Face Detection and Recognition in an Image Sequence using Eigenedginess Face Detection and Recognition in an Image Sequence using Eigenedginess B S Venkatesh, S Palanivel and B Yegnanarayana Department of Computer Science and Engineering. Indian Institute of Technology, Madras

More information

Enhancing Gradient Sparsity for Parametrized Motion Estimation

Enhancing Gradient Sparsity for Parametrized Motion Estimation HAN et al.: ENHANCING GRADIENT SPARSITY FOR MOTION ESTIMATION 1 Enhancing Gradient Sparsity for Parametrized Motion Estimation Junyu Han pengjie.han@gmail.com Fei Qi fred.qi@ieee.org Guangming Shi gmshi@xidian.edu.cn

More information

Ninio, J. and Stevens, K. A. (2000) Variations on the Hermann grid: an extinction illusion. Perception, 29,

Ninio, J. and Stevens, K. A. (2000) Variations on the Hermann grid: an extinction illusion. Perception, 29, Ninio, J. and Stevens, K. A. (2000) Variations on the Hermann grid: an extinction illusion. Perception, 29, 1209-1217. CS 4495 Computer Vision A. Bobick Sparse to Dense Correspodence Building Rome in

More information

Locally Weighted Least Squares Regression for Image Denoising, Reconstruction and Up-sampling

Locally Weighted Least Squares Regression for Image Denoising, Reconstruction and Up-sampling Locally Weighted Least Squares Regression for Image Denoising, Reconstruction and Up-sampling Moritz Baecher May 15, 29 1 Introduction Edge-preserving smoothing and super-resolution are classic and important

More information

Comparison between Motion Analysis and Stereo

Comparison between Motion Analysis and Stereo MOTION ESTIMATION The slides are from several sources through James Hays (Brown); Silvio Savarese (U. of Michigan); Octavia Camps (Northeastern); including their own slides. Comparison between Motion Analysis

More information

IMAGE ANALYSIS, CLASSIFICATION, and CHANGE DETECTION in REMOTE SENSING

IMAGE ANALYSIS, CLASSIFICATION, and CHANGE DETECTION in REMOTE SENSING SECOND EDITION IMAGE ANALYSIS, CLASSIFICATION, and CHANGE DETECTION in REMOTE SENSING ith Algorithms for ENVI/IDL Morton J. Canty с*' Q\ CRC Press Taylor &. Francis Group Boca Raton London New York CRC

More information

Chapter 3 Image Registration. Chapter 3 Image Registration

Chapter 3 Image Registration. Chapter 3 Image Registration Chapter 3 Image Registration Distributed Algorithms for Introduction (1) Definition: Image Registration Input: 2 images of the same scene but taken from different perspectives Goal: Identify transformation

More information

Multi-scale 3D Scene Flow from Binocular Stereo Sequences

Multi-scale 3D Scene Flow from Binocular Stereo Sequences Boston University OpenBU Computer Science http://open.bu.edu CAS: Computer Science: Technical Reports 2004-11-02 Multi-scale 3D Scene Flow from Binocular Stereo Sequences Li, Rui Boston University Computer

More information

Occluded Facial Expression Tracking

Occluded Facial Expression Tracking Occluded Facial Expression Tracking Hugo Mercier 1, Julien Peyras 2, and Patrice Dalle 1 1 Institut de Recherche en Informatique de Toulouse 118, route de Narbonne, F-31062 Toulouse Cedex 9 2 Dipartimento

More information

COMPUTER VISION > OPTICAL FLOW UTRECHT UNIVERSITY RONALD POPPE

COMPUTER VISION > OPTICAL FLOW UTRECHT UNIVERSITY RONALD POPPE COMPUTER VISION 2017-2018 > OPTICAL FLOW UTRECHT UNIVERSITY RONALD POPPE OUTLINE Optical flow Lucas-Kanade Horn-Schunck Applications of optical flow Optical flow tracking Histograms of oriented flow Assignment

More information

Lecture 19: Motion. Effect of window size 11/20/2007. Sources of error in correspondences. Review Problem set 3. Tuesday, Nov 20

Lecture 19: Motion. Effect of window size 11/20/2007. Sources of error in correspondences. Review Problem set 3. Tuesday, Nov 20 Lecture 19: Motion Review Problem set 3 Dense stereo matching Sparse stereo matching Indexing scenes Tuesda, Nov 0 Effect of window size W = 3 W = 0 Want window large enough to have sufficient intensit

More information

Image Analysis, Classification and Change Detection in Remote Sensing

Image Analysis, Classification and Change Detection in Remote Sensing Image Analysis, Classification and Change Detection in Remote Sensing WITH ALGORITHMS FOR ENVI/IDL Morton J. Canty Taylor &. Francis Taylor & Francis Group Boca Raton London New York CRC is an imprint

More information

Computer Vision Lecture 20

Computer Vision Lecture 20 Computer Vision Lecture 2 Motion and Optical Flow Bastian Leibe RWTH Aachen http://www.vision.rwth-aachen.de leibe@vision.rwth-aachen.de 28.1.216 Man slides adapted from K. Grauman, S. Seitz, R. Szeliski,

More information

Multi-Scale 3D Scene Flow from Binocular Stereo Sequences

Multi-Scale 3D Scene Flow from Binocular Stereo Sequences Boston University OpenBU Computer Science http://open.bu.edu CAS: Computer Science: Technical Reports 2007-06 Multi-Scale 3D Scene Flow from Binocular Stereo Sequences Li, Rui Boston University Computer

More information

Visual Tracking (1) Tracking of Feature Points and Planar Rigid Objects

Visual Tracking (1) Tracking of Feature Points and Planar Rigid Objects Intelligent Control Systems Visual Tracking (1) Tracking of Feature Points and Planar Rigid Objects Shingo Kagami Graduate School of Information Sciences, Tohoku University swk(at)ic.is.tohoku.ac.jp http://www.ic.is.tohoku.ac.jp/ja/swk/

More information

IMPROVED MOTION-BASED LOCALIZED SUPER RESOLUTION TECHNIQUE USING DISCRETE WAVELET TRANSFORM FOR LOW RESOLUTION VIDEO ENHANCEMENT

IMPROVED MOTION-BASED LOCALIZED SUPER RESOLUTION TECHNIQUE USING DISCRETE WAVELET TRANSFORM FOR LOW RESOLUTION VIDEO ENHANCEMENT 17th European Signal Processing Conference (EUSIPCO 009) Glasgow, Scotland, August 4-8, 009 IMPROVED MOTION-BASED LOCALIZED SUPER RESOLUTION TECHNIQUE USING DISCRETE WAVELET TRANSFORM FOR LOW RESOLUTION

More information

Visual Tracking. Image Processing Laboratory Dipartimento di Matematica e Informatica Università degli studi di Catania.

Visual Tracking. Image Processing Laboratory Dipartimento di Matematica e Informatica Università degli studi di Catania. Image Processing Laboratory Dipartimento di Matematica e Informatica Università degli studi di Catania 1 What is visual tracking? estimation of the target location over time 2 applications Six main areas:

More information

Unsupervised learning in Vision

Unsupervised learning in Vision Chapter 7 Unsupervised learning in Vision The fields of Computer Vision and Machine Learning complement each other in a very natural way: the aim of the former is to extract useful information from visual

More information

Learning based face hallucination techniques: A survey

Learning based face hallucination techniques: A survey Vol. 3 (2014-15) pp. 37-45. : A survey Premitha Premnath K Department of Computer Science & Engineering Vidya Academy of Science & Technology Thrissur - 680501, Kerala, India (email: premithakpnath@gmail.com)

More information

Real-Time Dense and Accurate Parallel Optical Flow using CUDA

Real-Time Dense and Accurate Parallel Optical Flow using CUDA Real-Time Dense and Accurate Parallel Optical Flow using CUDA Julien Marzat INRIA Rocquencourt - ENSEM Domaine de Voluceau BP 105, 78153 Le Chesnay Cedex, France julien.marzat@gmail.com Yann Dumortier

More information

Comparison Between The Optical Flow Computational Techniques

Comparison Between The Optical Flow Computational Techniques Comparison Between The Optical Flow Computational Techniques Sri Devi Thota #1, Kanaka Sunanda Vemulapalli* 2, Kartheek Chintalapati* 3, Phanindra Sai Srinivas Gudipudi* 4 # Associate Professor, Dept.

More information

CS-465 Computer Vision

CS-465 Computer Vision CS-465 Computer Vision Nazar Khan PUCIT 9. Optic Flow Optic Flow Nazar Khan Computer Vision 2 / 25 Optic Flow Nazar Khan Computer Vision 3 / 25 Optic Flow Where does pixel (x, y) in frame z move to in

More information

Optic Flow and Basics Towards Horn-Schunck 1

Optic Flow and Basics Towards Horn-Schunck 1 Optic Flow and Basics Towards Horn-Schunck 1 Lecture 7 See Section 4.1 and Beginning of 4.2 in Reinhard Klette: Concise Computer Vision Springer-Verlag, London, 2014 1 See last slide for copyright information.

More information

Human Detection. A state-of-the-art survey. Mohammad Dorgham. University of Hamburg

Human Detection. A state-of-the-art survey. Mohammad Dorgham. University of Hamburg Human Detection A state-of-the-art survey Mohammad Dorgham University of Hamburg Presentation outline Motivation Applications Overview of approaches (categorized) Approaches details References Motivation

More information

Optical Flow Estimation with CUDA. Mikhail Smirnov

Optical Flow Estimation with CUDA. Mikhail Smirnov Optical Flow Estimation with CUDA Mikhail Smirnov msmirnov@nvidia.com Document Change History Version Date Responsible Reason for Change Mikhail Smirnov Initial release Abstract Optical flow is the apparent

More information

CHAPTER 3 DIFFERENT DOMAINS OF WATERMARKING. domain. In spatial domain the watermark bits directly added to the pixels of the cover

CHAPTER 3 DIFFERENT DOMAINS OF WATERMARKING. domain. In spatial domain the watermark bits directly added to the pixels of the cover 38 CHAPTER 3 DIFFERENT DOMAINS OF WATERMARKING Digital image watermarking can be done in both spatial domain and transform domain. In spatial domain the watermark bits directly added to the pixels of the

More information

Dense Motion Field Reduction for Motion Estimation

Dense Motion Field Reduction for Motion Estimation Dense Motion Field Reduction for Motion Estimation Aaron Deever Center for Applied Mathematics Cornell University Ithaca, NY 14853 adeever@cam.cornell.edu Sheila S. Hemami School of Electrical Engineering

More information

Lecture 16: Computer Vision

Lecture 16: Computer Vision CS4442/9542b: Artificial Intelligence II Prof. Olga Veksler Lecture 16: Computer Vision Motion Slides are from Steve Seitz (UW), David Jacobs (UMD) Outline Motion Estimation Motion Field Optical Flow Field

More information

Lecture 16: Computer Vision

Lecture 16: Computer Vision CS442/542b: Artificial ntelligence Prof. Olga Veksler Lecture 16: Computer Vision Motion Slides are from Steve Seitz (UW), David Jacobs (UMD) Outline Motion Estimation Motion Field Optical Flow Field Methods

More information

Face Tracking. Synonyms. Definition. Main Body Text. Amit K. Roy-Chowdhury and Yilei Xu. Facial Motion Estimation

Face Tracking. Synonyms. Definition. Main Body Text. Amit K. Roy-Chowdhury and Yilei Xu. Facial Motion Estimation Face Tracking Amit K. Roy-Chowdhury and Yilei Xu Department of Electrical Engineering, University of California, Riverside, CA 92521, USA {amitrc,yxu}@ee.ucr.edu Synonyms Facial Motion Estimation Definition

More information

Overview. Video. Overview 4/7/2008. Optical flow. Why estimate motion? Motion estimation: Optical flow. Motion Magnification Colorization.

Overview. Video. Overview 4/7/2008. Optical flow. Why estimate motion? Motion estimation: Optical flow. Motion Magnification Colorization. Overview Video Optical flow Motion Magnification Colorization Lecture 9 Optical flow Motion Magnification Colorization Overview Optical flow Combination of slides from Rick Szeliski, Steve Seitz, Alyosha

More information

EE795: Computer Vision and Intelligent Systems

EE795: Computer Vision and Intelligent Systems EE795: Computer Vision and Intelligent Systems Spring 2012 TTh 17:30-18:45 FDH 204 Lecture 11 140311 http://www.ee.unlv.edu/~b1morris/ecg795/ 2 Outline Motion Analysis Motivation Differential Motion Optical

More information

Wikipedia - Mysid

Wikipedia - Mysid Wikipedia - Mysid Erik Brynjolfsson, MIT Filtering Edges Corners Feature points Also called interest points, key points, etc. Often described as local features. Szeliski 4.1 Slides from Rick Szeliski,

More information

Mixture Models and EM

Mixture Models and EM Mixture Models and EM Goal: Introduction to probabilistic mixture models and the expectationmaximization (EM) algorithm. Motivation: simultaneous fitting of multiple model instances unsupervised clustering

More information

Mariya Zhariy. Uttendorf Introduction to Optical Flow. Mariya Zhariy. Introduction. Determining. Optical Flow. Results. Motivation Definition

Mariya Zhariy. Uttendorf Introduction to Optical Flow. Mariya Zhariy. Introduction. Determining. Optical Flow. Results. Motivation Definition to Constraint to Uttendorf 2005 Contents to Constraint 1 Contents to Constraint 1 2 Constraint Contents to Constraint 1 2 Constraint 3 Visual cranial reflex(vcr)(?) to Constraint Rapidly changing scene

More information

CS201: Computer Vision Introduction to Tracking

CS201: Computer Vision Introduction to Tracking CS201: Computer Vision Introduction to Tracking John Magee 18 November 2014 Slides courtesy of: Diane H. Theriault Question of the Day How can we represent and use motion in images? 1 What is Motion? Change

More information

EE 264: Image Processing and Reconstruction. Image Motion Estimation I. EE 264: Image Processing and Reconstruction. Outline

EE 264: Image Processing and Reconstruction. Image Motion Estimation I. EE 264: Image Processing and Reconstruction. Outline 1 Image Motion Estimation I 2 Outline 1. Introduction to Motion 2. Why Estimate Motion? 3. Global vs. Local Motion 4. Block Motion Estimation 5. Optical Flow Estimation Basics 6. Optical Flow Estimation

More information

Lucas-Kanade Motion Estimation. Thanks to Steve Seitz, Simon Baker, Takeo Kanade, and anyone else who helped develop these slides.

Lucas-Kanade Motion Estimation. Thanks to Steve Seitz, Simon Baker, Takeo Kanade, and anyone else who helped develop these slides. Lucas-Kanade Motion Estimation Thanks to Steve Seitz, Simon Baker, Takeo Kanade, and anyone else who helped develop these slides. 1 Why estimate motion? We live in a 4-D world Wide applications Object

More information

VIDEO OBJECT SEGMENTATION BY EXTENDED RECURSIVE-SHORTEST-SPANNING-TREE METHOD. Ertem Tuncel and Levent Onural

VIDEO OBJECT SEGMENTATION BY EXTENDED RECURSIVE-SHORTEST-SPANNING-TREE METHOD. Ertem Tuncel and Levent Onural VIDEO OBJECT SEGMENTATION BY EXTENDED RECURSIVE-SHORTEST-SPANNING-TREE METHOD Ertem Tuncel and Levent Onural Electrical and Electronics Engineering Department, Bilkent University, TR-06533, Ankara, Turkey

More information

Capturing, Modeling, Rendering 3D Structures

Capturing, Modeling, Rendering 3D Structures Computer Vision Approach Capturing, Modeling, Rendering 3D Structures Calculate pixel correspondences and extract geometry Not robust Difficult to acquire illumination effects, e.g. specular highlights

More information

Image Segmentation and Registration

Image Segmentation and Registration Image Segmentation and Registration Dr. Christine Tanner (tanner@vision.ee.ethz.ch) Computer Vision Laboratory, ETH Zürich Dr. Verena Kaynig, Machine Learning Laboratory, ETH Zürich Outline Segmentation

More information

smooth coefficients H. Köstler, U. Rüde

smooth coefficients H. Köstler, U. Rüde A robust multigrid solver for the optical flow problem with non- smooth coefficients H. Köstler, U. Rüde Overview Optical Flow Problem Data term and various regularizers A Robust Multigrid Solver Galerkin

More information

A MOTION MODEL BASED VIDEO STABILISATION ALGORITHM

A MOTION MODEL BASED VIDEO STABILISATION ALGORITHM A MOTION MODEL BASED VIDEO STABILISATION ALGORITHM N. A. Tsoligkas, D. Xu, I. French and Y. Luo School of Science and Technology, University of Teesside, Middlesbrough, TS1 3BA, UK E-mails: tsoligas@teihal.gr,

More information

Face Recognition Based on LDA and Improved Pairwise-Constrained Multiple Metric Learning Method

Face Recognition Based on LDA and Improved Pairwise-Constrained Multiple Metric Learning Method Journal of Information Hiding and Multimedia Signal Processing c 2016 ISSN 2073-4212 Ubiquitous International Volume 7, Number 5, September 2016 Face Recognition ased on LDA and Improved Pairwise-Constrained

More information

Accurate Image Registration from Local Phase Information

Accurate Image Registration from Local Phase Information Accurate Image Registration from Local Phase Information Himanshu Arora, Anoop M. Namboodiri, and C.V. Jawahar Center for Visual Information Technology, IIIT, Hyderabad, India { himanshu@research., anoop@,

More information

Application of wavelet theory to the analysis of gravity data. P. Hornby, F. Boschetti* and F. Horowitz, Division of Exploration and Mining, CSIRO,

Application of wavelet theory to the analysis of gravity data. P. Hornby, F. Boschetti* and F. Horowitz, Division of Exploration and Mining, CSIRO, Application of wavelet theory to the analysis of gravity data. P. Hornby, F. Boschetti* and F. Horowitz, Division of Exploration and Mining, CSIRO, Australia. Summary. The fundamental equations of potential

More information

VC 11/12 T11 Optical Flow

VC 11/12 T11 Optical Flow VC 11/12 T11 Optical Flow Mestrado em Ciência de Computadores Mestrado Integrado em Engenharia de Redes e Sistemas Informáticos Miguel Tavares Coimbra Outline Optical Flow Constraint Equation Aperture

More information

Scale-Invariance of Support Vector Machines based on the Triangular Kernel. Abstract

Scale-Invariance of Support Vector Machines based on the Triangular Kernel. Abstract Scale-Invariance of Support Vector Machines based on the Triangular Kernel François Fleuret Hichem Sahbi IMEDIA Research Group INRIA Domaine de Voluceau 78150 Le Chesnay, France Abstract This paper focuses

More information

MOTION ESTIMATION WITH THE REDUNDANT WAVELET TRANSFORM.*

MOTION ESTIMATION WITH THE REDUNDANT WAVELET TRANSFORM.* MOTION ESTIMATION WITH THE REDUNDANT WAVELET TRANSFORM.* R. DeVore A. Petukhov R. Sharpley Department of Mathematics University of South Carolina Columbia, SC 29208 Abstract We present a fast method for

More information

Image Coding with Active Appearance Models

Image Coding with Active Appearance Models Image Coding with Active Appearance Models Simon Baker, Iain Matthews, and Jeff Schneider CMU-RI-TR-03-13 The Robotics Institute Carnegie Mellon University Abstract Image coding is the task of representing

More information

Fast trajectory matching using small binary images

Fast trajectory matching using small binary images Title Fast trajectory matching using small binary images Author(s) Zhuo, W; Schnieders, D; Wong, KKY Citation The 3rd International Conference on Multimedia Technology (ICMT 2013), Guangzhou, China, 29

More information

Ruch (Motion) Rozpoznawanie Obrazów Krzysztof Krawiec Instytut Informatyki, Politechnika Poznańska. Krzysztof Krawiec IDSS

Ruch (Motion) Rozpoznawanie Obrazów Krzysztof Krawiec Instytut Informatyki, Politechnika Poznańska. Krzysztof Krawiec IDSS Ruch (Motion) Rozpoznawanie Obrazów Krzysztof Krawiec Instytut Informatyki, Politechnika Poznańska 1 Krzysztof Krawiec IDSS 2 The importance of visual motion Adds entirely new (temporal) dimension to visual

More information

Highly Symmetric Bi-frames for Triangle Surface Multiresolution Processing

Highly Symmetric Bi-frames for Triangle Surface Multiresolution Processing Highly Symmetric Bi-frames for Triangle Surface Multiresolution Processing Qingtang Jiang and Dale K. Pounds Abstract In this paper we investigate the construction of dyadic affine (wavelet) bi-frames

More information

The Lucas & Kanade Algorithm

The Lucas & Kanade Algorithm The Lucas & Kanade Algorithm Instructor - Simon Lucey 16-423 - Designing Computer Vision Apps Today Registration, Registration, Registration. Linearizing Registration. Lucas & Kanade Algorithm. 3 Biggest

More information

A Novel Image Super-resolution Reconstruction Algorithm based on Modified Sparse Representation

A Novel Image Super-resolution Reconstruction Algorithm based on Modified Sparse Representation , pp.162-167 http://dx.doi.org/10.14257/astl.2016.138.33 A Novel Image Super-resolution Reconstruction Algorithm based on Modified Sparse Representation Liqiang Hu, Chaofeng He Shijiazhuang Tiedao University,

More information

EE795: Computer Vision and Intelligent Systems

EE795: Computer Vision and Intelligent Systems EE795: Computer Vision and Intelligent Systems Spring 2012 TTh 17:30-18:45 WRI C225 Lecture 02 130124 http://www.ee.unlv.edu/~b1morris/ecg795/ 2 Outline Basics Image Formation Image Processing 3 Intelligent

More information

Visual Tracking. Antonino Furnari. Image Processing Lab Dipartimento di Matematica e Informatica Università degli Studi di Catania

Visual Tracking. Antonino Furnari. Image Processing Lab Dipartimento di Matematica e Informatica Università degli Studi di Catania Visual Tracking Antonino Furnari Image Processing Lab Dipartimento di Matematica e Informatica Università degli Studi di Catania furnari@dmi.unict.it 11 giugno 2015 What is visual tracking? estimation

More information

Note Set 4: Finite Mixture Models and the EM Algorithm

Note Set 4: Finite Mixture Models and the EM Algorithm Note Set 4: Finite Mixture Models and the EM Algorithm Padhraic Smyth, Department of Computer Science University of California, Irvine Finite Mixture Models A finite mixture model with K components, for

More information