Visual Saliency Based Object Tracking


Geng Zhang 1, Zejian Yuan 1, Nanning Zheng 1, Xingdong Sheng 1, and Tie Liu 2

1 Institute of Artificial Intelligence and Robotics, Xi'an Jiaotong University, China
{gzhang, zjyuan, nnzheng, xdsheng}@aiar.xjtu.edu.cn
2 IBM China Research Lab
liultie@cn.ibm.com

Abstract. This paper presents a novel method of on-line object tracking with static and motion saliency features extracted from the video frames locally, regionally and globally. When detecting the salient object, the saliency features are effectively combined in a Conditional Random Field (CRF). A particle filter is then used to track the detected object. Like the attention-shifting mechanism of human vision, when the object being tracked disappears, our tracking algorithm can shift its target to another object automatically, even without re-detection. Unlike many existing tracking methods, our algorithm depends little on the surface appearance of the object, so it can detect any category of object as long as it is salient, and the tracking is robust to changes of global illumination and object shape. Experiments on video clips of various objects show the reliable results of our algorithm.

1 Introduction

Object detection and tracking is an essential technology used in computer vision systems to actively sense the environment. As robotic and unmanned technology develops, automatically detecting and tracking interesting objects in unknown environments with little prior knowledge becomes more and more important. The main challenge of object tracking is the unpredictability of the environment, which often makes it hard to estimate the state of the object. Changing illumination, background clutter and occlusion also badly affect tracking robustness. To overcome these difficulties, a variety of tracking algorithms have been proposed and implemented.
The representative ones include Condensation [3], mean-shift [4], the probabilistic data association filter [5] and so on. Generally speaking, most tracking algorithms have two major components: the representation model of the object and the algorithm framework. The existing frameworks can be classified into two categories: deterministic methods and stochastic methods. Deterministic methods iteratively search for the optimal solution of a similarity cost function between the template and the current image. The cost functions widely used are the sum of squared differences (SSD) between the template and the current image [6] and kernel-based

cost functions [4]. In contrast, the stochastic methods use a state space to model the underlying dynamics of the tracking system and view tracking as a Bayesian inference problem. Among them, the sequential Monte Carlo method, also known as the particle filter [7], is the most popular approach. There are many models to represent the target, including image patches [6], color histograms [4] and so on. However, color-based models are too sensitive to illumination changes and are often confused with background colors. Contour-based features [8][9] are more robust to illumination variation, but they are sensitive to background clutter and are restricted to simple shape models. When humans sense the environment, they mostly pay attention to objects that are visually salient. Saliency measures the difference between object and background; it does not depend on the object's intrinsic properties and is robust to illumination and shape changes. One of the representative visual attention approaches is visual surprise analysis [10], which shows that static and motion features are both important for video attention detection. Itti [11] proposed a set of static features in his saliency model, and more static features have been proposed in recent years [1]. For video sequences, [2] introduces a method to detect salient objects, which combines static and motion features in a Dynamic Conditional Random Field (DCRF) under the constraint of a global topic model. This approach achieves good results on many challenging videos, but it needs the whole video sequence to compute the global topic model, so it can only be used offline. In this paper, we integrate static and motion saliency features into the particle filter framework to formulate an online salient object tracking method.
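As a concrete illustration of the deterministic approach mentioned above, the SSD cost between a fixed template and candidate image windows can be sketched as follows. The function names and the exhaustive search are our own illustration, not part of the tracker described in this paper:

```python
import numpy as np

def ssd_cost(template, image, x, y):
    """Sum of squared differences between a template and the image
    window whose top-left corner is (x, y)."""
    h, w = template.shape
    window = image[y:y + h, x:x + w]
    return float(np.sum((window.astype(float) - template.astype(float)) ** 2))

def best_match(template, image):
    """Exhaustively search for the window minimizing the SSD cost."""
    h, w = template.shape
    H, W = image.shape
    costs = {(x, y): ssd_cost(template, image, x, y)
             for y in range(H - h + 1) for x in range(W - w + 1)}
    return min(costs, key=costs.get)

# Toy example: embed the template at (x=3, y=2) in an otherwise blank image.
image = np.zeros((10, 10), dtype=np.uint8)
template = np.array([[9, 8], [7, 6]], dtype=np.uint8)
image[2:4, 3:5] = template
match = best_match(template, image)
```

In practice such a search is run only in a neighborhood of the previous state, and the template is often updated over time.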
When computing the color spatial distribution feature, we use a graph-based segmentation algorithm [12] as the color clustering method instead of the Gaussian Mixture Model (GMM) used in [1]. Sparse optical flow [13] is used to obtain the motion field for computing the motion saliency features. All these features are adaptively selected and combined. The main contributions of our approach are summarized as follows. First, we propose a novel method to track salient objects online, which is robust to illumination and shape changes and can automatically rebuild attention when the object being tracked disappears. Second, a segmentation-based feature is proposed as the global static feature, which is more effective than the GMM-based feature. This paper is organized as follows. We introduce the framework of our algorithm in Section 2. Section 3 details the computation and combination of the saliency features, as well as the spatial and temporal coherence constraints. Section 4 presents the particle filter tracking. Experimental results are shown in Section 5.

2 Problem Description and Modeling

Object tracking is an important procedure for humans to sense and understand the environment. For humans, this procedure can be roughly divided into three

parts: attention establishing, attention following and attention shifting.

Fig. 1. The flow chart of salient object tracking.

To apply these parts to computer vision, for the input video sequence I_1, ..., I_t, ..., a detection algorithm is used to find the interesting object and build a description model for it. Usually, when no high-level knowledge is available, the objects we are interested in are those that are visually salient. After attention is established, the object's appearance description X_1 is built and the initial object state is obtained in a short time. The state can be the shape, position or scale of the object. Tracking is to estimate the state of the object at time t, X_t, given the initial state X_1 and the observations up to time t, I^t = (I_1, ..., I_t). This process is also called filtering. The flow chart of our method is shown in Fig. 1. The tracking model is usually built under the probabilistic framework of the Hidden Markov Model (HMM):

X_t = G(X_{t-1}) + v_{t-1},    I_t = H(X_t) + n_t,    (1)

where G(·) and H(·) are the system transition function and observation function, while v_{t-1} and n_t are the system noise and observation noise. When tracking a single object, we formulate the problem as computing the maximum a posteriori (MAP) estimate of X_t. We predict the posterior at time t as

P(X_t | I^t) ∝ P(I_t | X_t) ∫ P(X_t | X_{t-1}) P(X_{t-1} | I^{t-1}) dX_{t-1},    (2)

According to (2), we can use statistical filtering to solve the problem. But the state space is extremely large, so computing the integral in (2) exactly is intractable, and we resort to the sequential Monte Carlo method [7]. When filtering, the state X_t can be defined in various forms. Some approaches track the contour of the object [8].
But contour tracking is easily disturbed by background clutter and is time-consuming, so the state is often simplified to a rectangle surrounding the object: X_t = (x_t, y_t, w_t, h_t) ∈ R^4, where x_t, y_t, w_t, h_t are the position and size of the rectangle at time t. In our method, the observation at time t is the image frame I_t. The observation model includes the saliency features and the spatial and temporal coherence

constraints. So the observation likelihood P(I_t | X_t) can be formulated as

P(I_t | X_t) ∝ exp{ -(F_t(X_t, I_t) + Sc_t(X_t, I_t) + Tc_t(X_{t-1,t}, I_{t-1,t})) },    (3)

where F_t is the description of the object, which in our method comes from the final saliency feature. Sc_t and Tc_t are the spatial and temporal coherence constraints. The details of these features and constraints are described in the following section.

3 The Saliency Features and Constraints

3.1 Static Saliency Features

Visual saliency can be seen as a binary labeling problem which separates the object from the background. Each kind of saliency feature provides a normalized feature map f(p, I) ∈ [0, 1] which indicates the probability that the pixel p belongs to the salient object. We compute the local and regional features using the method in [1].

Multi-scale Contrast: Contrast is the most common local feature used for attention detection. Without knowing the size of the object, we compute the contrast feature f_sc(p, I) over a Gaussian image pyramid as

f_sc(p, I) = Σ_{l=1}^{L} Σ_{p'∈N(p)} ||I_l(p) - I_l(p')||²,    (4)

where I_l is the image in the l-th level of the pyramid and N(p) is the 8-neighborhood of pixel p.

Center-Surround Histogram: The salient object can often be distinguished by the difference between it and its context. Suppose the object is enclosed by a rectangle. The regional center-surround histogram feature f_sh(p, I) is defined as

f_sh(p, I) ∝ Σ_{p': p∈R*(p')} w_{pp'} χ²(R*(p'), R*_S(p')),    (5)

where R*(p') is the most distinct rectangle centered at pixel p' and containing the pixel p, R*_S(p') is the surrounding region of R*(p') with the same area, and the weight w_{pp'} = exp(-0.5 σ_{p'}^{-2} ||p - p'||²) is a Gaussian fall-off weight with variance σ_{p'}².

Color Spatial Distribution: The salient object usually has a color distinct from the background, so the wider a color is distributed in the image, the less likely the salient object is to contain this color.
The global spatial distribution of a specific color can be used to describe the saliency of an object. We propose a novel and more effective method to compute this feature.
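The idea behind this feature, that a color spread widely over the image is unlikely to belong to the salient object, can be sketched as follows. The sketch works directly on an index-color map standing in for the segment-averaged image, and the normalization scheme is our own simplification rather than the paper's exact formula:

```python
import numpy as np

def color_spatial_distribution(index_map):
    """For each index color, measure the spatial variance of its pixel
    positions; widely spread colors get low saliency (cf. Eq. (6))."""
    ys, xs = np.indices(index_map.shape)
    saliency = np.zeros(index_map.shape, dtype=float)
    spreads = {}
    for color in np.unique(index_map):
        mask = index_map == color
        spreads[color] = xs[mask].var() + ys[mask].var()
    max_spread = max(spreads.values()) or 1.0  # guard the single-color case
    for color, spread in spreads.items():
        # Normalize so the most spread-out color scores 0, compact colors ~1.
        saliency[index_map == color] = 1.0 - spread / max_spread
    return saliency

# Toy index image: a compact color-1 blob on a widespread color-0 background.
idx = np.zeros((6, 6), dtype=int)
idx[2:4, 2:4] = 1
sal = color_spatial_distribution(idx)
```

The compact blob ends up bright while the background, whose pixels span the whole frame, scores near zero.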

Fig. 2. The left image is the original one, the middle is the segmentation map, and the right is the map of the color spatial distribution feature.

The first step in computing this feature is color clustering. We use a fast image segmentation algorithm instead of a Gaussian Mixture Model to improve speed and robustness to noise. This algorithm fuses pixels with similar properties, for example color, in a graph-based way [12]. Given the segmentation result, we unify the RGB values in the i-th image segment seg_i to their average color. Then we convert the image to an index-color representation and compute the spatial distribution variance of every color. The color spatial distribution feature f_sd(p, I) is defined as

f_sd(p, I) = 1 - Σ_{(x,y): ind(x,y)=ind(p)} (|x - x̄| + |y - ȳ|),    (6)

where ind(x, y) and ind(p) are the index colors of point (x, y) and point p, (x̄, ȳ) is the mean position of that color, and the sum is normalized over all colors. The segmentation result and the feature map are shown in Fig. 2.

3.2 Motion Saliency Features

Compared with static objects, human attention is more sensitive to moving objects. The static saliency features can be extended to the motion field. In this paper, we use Lucas-Kanade motion estimation under a pyramidal implementation [14]. The computed motion field is a 2-D map M with the displacement of every pixel in the X and Y directions. In order to compute features from the motion map with the methods used for the static features, we apply a lighting operation on M to make the moving area connected. The lighting operation is a Gaussian weighting of the spot areas centered at every sparse point in M. The motion saliency features are computed on the motion field as follows.

Multi-scale Motion Contrast: This local feature f_Mc shows the difference of motion. It is computed from the Gaussian pyramid of the motion field map:

f_Mc(p, M) = Σ_{l=1}^{L} Σ_{p'∈N(p)} ||M_l(p) - M_l(p')||²,    (7)

where M_l is the motion map in the l-th level of the pyramid.
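The multi-scale contrast of Eq. (4) and its motion analogue Eq. (7) share the same structure: squared differences to the 8-neighborhood, accumulated over a Gaussian pyramid. A minimal sketch, with a box blur standing in for the Gaussian kernel and nearest-neighbor upsampling, both our own simplifications:

```python
import numpy as np

def pyramid(img, levels=3):
    """Blur-and-subsample pyramid; a 3x3 box blur approximates the Gaussian."""
    pyr = [img.astype(float)]
    for _ in range(levels - 1):
        prev = pyr[-1]
        padded = np.pad(prev, 1, mode='edge')
        blurred = sum(padded[dy:dy + prev.shape[0], dx:dx + prev.shape[1]]
                      for dy in range(3) for dx in range(3)) / 9.0
        pyr.append(blurred[::2, ::2])
    return pyr

def multiscale_contrast(img, levels=3):
    """Sum of squared differences to the 8-neighborhood over all levels,
    each level upsampled back to full resolution by repetition."""
    out = np.zeros(img.shape, dtype=float)
    for l, level in enumerate(pyramid(img, levels)):
        padded = np.pad(level, 1, mode='edge')
        contrast = np.zeros_like(level)
        for dy in range(3):
            for dx in range(3):
                if dy == 1 and dx == 1:
                    continue  # skip the center pixel itself
                shifted = padded[dy:dy + level.shape[0], dx:dx + level.shape[1]]
                contrast += (level - shifted) ** 2
        up = np.repeat(np.repeat(contrast, 2 ** l, axis=0), 2 ** l, axis=1)
        out += up[:img.shape[0], :img.shape[1]]
    m = out.max()
    return out / m if m > 0 else out

# A bright square on a dark background: its boundary has high contrast.
img = np.zeros((8, 8))
img[3:5, 3:5] = 10.0
sal = multiscale_contrast(img)
```

Feeding a motion-field channel instead of an intensity image yields the motion variant of Eq. (7).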

Center-Surround Motion Histogram: This is the regional feature which represents the motion of a block. f_Mh is defined as

f_Mh(p, M) ∝ Σ_{p': p∈R*(p')} w_{pp'} χ²(R*_M(p'), R*_MS(p')),    (8)

where the weight w_{pp'} has the same definition as in the regional static feature.

Motion Spatial Distribution: The global static feature is also extended to the motion field. Motion is first represented using a GMM. Then we compute the distribution variance V_M(m) of each component m. The spatial distribution of motion f_Md is defined as

f_Md(p, M) ∝ Σ_m P(m | M_p)(1 - V_M(m)),    (9)

where P(m | M_p) represents the probability that pixel p belongs to component m. See details of the motion saliency features in [2].

3.3 Feature Selection and Combination

During tracking, the feature space that best distinguishes between object and background is the best feature space to use for tracking [15]. To achieve the best performance with the features mentioned above, we adaptively select and combine them to get the final saliency map F_t. We notice that when the background of the video is nearly still and the object is moving, the motion features are decisive. In contrast, when the object and background have a similar form of movement, both still or both moving, we can hardly separate them in the motion field; in this case, static features are more discriminative. Frame difference is used to decide whether the object moves similarly to the background. First, we smooth the adjacent frames I_{t-1}, I_t by Gaussian filtering. Then the frame differences of the image margin and of the whole image are computed to judge the movement of the background and of the whole scene. If the background and the whole scene are both moving or both still, we use static features; otherwise, motion features are selected.
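The frame-difference selection rule above can be sketched as follows; the border-margin width and the motion threshold are assumed values, not taken from the paper:

```python
import numpy as np

def select_feature_type(prev, curr, margin=2, thresh=1.0):
    """Compare motion in the image border (a background proxy) against the
    whole frame. If background and scene agree (both moving or both still),
    static features are more discriminative; otherwise choose motion."""
    diff = np.abs(curr.astype(float) - prev.astype(float))
    border = np.ones(diff.shape, dtype=bool)
    border[margin:-margin, margin:-margin] = False
    background_moving = diff[border].mean() > thresh
    scene_moving = diff.mean() > thresh
    return 'static' if background_moving == scene_moving else 'motion'

# Still background with a moving interior object -> motion features win.
prev = np.zeros((10, 10))
curr = np.zeros((10, 10))
curr[4:6, 4:6] = 50.0  # object appears in the interior only
choice = select_feature_type(prev, curr)
```

With identical frames (nothing moves anywhere) the rule falls back to static features.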
The final saliency map F_t is defined as a linear combination of the selected features:

F_t = Σ_k w_tk f_tk,    (10)

where f_tk is the k-th selected feature at time t, which can be any of the features mentioned above. The weight w_tk represents the distinguishing ability of f_tk, which is measured by the Saliency Ratio (SR) of the feature:

w_tk ∝ SR_tk = Σ_{p∈X̃_t} f_tk(p) / Σ_{p∉X̃_t} f_tk(p),    (11)

where X̃_t is a rough estimate of X_t obtained by extending the area of X_{t-1}, and p ∈ X̃_t means that pixel p lies in the corresponding rectangle of X̃_t. We normalize SR_tk to get w_tk. Given the final saliency map, the object description F_t(X_t, I_t) is defined by the normalized sum of squares of F_t:

F_t(X_t, I_t) = (1/(w_t h_t)) Σ_{p∈X_t} (F(I_t, p))²,    (12)

where F(I_t, p) = F_t(p) denotes the feature computation.

3.4 Coherence Constraints

For the tracking problem, the rectangle should fit the object; in our method, that means the border of the rectangle should be close to the edge of the salient object. So we define the spatial coherence as the sum of edge values near the border of the rectangle:

Sc_t(X_t, I_t) = λ_S Σ_{p∈N(X_t)} E(I_t, p),    (13)

where N(X_t) is the area near the corresponding rectangle of X_t, E(I_t, p) is the edge value of I_t at pixel p, and λ_S = α/(w_L h_L) is the normalizing factor.

Temporal coherence models the similarity between two consecutive salient objects. We use the coherence defined in [2]:

Tc_t(X_{t-1,t}, I_{t-1,t}) = β_1 ||X_t - X_{t-1}|| + β_2 χ²(h(X_t), h(X_{t-1})),    (14)

where χ²(h(X_t), h(X_{t-1})) is the χ² distance between the histograms of the two adjacent states, and β_1 and β_2 are normalizing factors.

4 Particle Filter Tracking

The particle filter [7] tracker consists of an initialization of the template model and a sequential Monte Carlo implementation of Bayesian filtering for the stochastic tracking system. We use the method in [1] to initialize the system. In order to track moving objects, the saliency features we organize in the CRF are not only static but also motion features. The initial state obtained from detection is X_1 = (x_1, y_1, h_1, w_1). In the prediction stage, the samples in the state space are propagated through a dynamic model. In our algorithm, we use a first-order regressive process model:

X_t = X_{t-1} + v_{t-1},    (15)

where v_{t-1} is a multivariate Gaussian random variable.
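A minimal sketch of the resulting particle filter loop: prediction by the first-order model of Eq. (15), reweighting by an observation likelihood, and resampling. The Gaussian toy likelihood (standing in for the saliency-and-coherence weight of the update stage) and all parameter values are our own illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def track_step(particles, likelihood, noise_std=1.0):
    """One predict/update/resample cycle. Each particle is a rectangle
    state (x, y, w, h); `likelihood` scores a state against the frame."""
    # Prediction: first-order model X_t = X_{t-1} + v_{t-1}, Eq. (15).
    particles = particles + rng.normal(0.0, noise_std, particles.shape)
    # Update: importance weights from the observation likelihood.
    weights = np.array([likelihood(p) for p in particles])
    weights /= weights.sum()
    # Resample to counter degeneracy; weights become uniform again.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx]

# Toy setup: the "object" sits at x=20, y=30 with a 10x10 rectangle.
target = np.array([20.0, 30.0, 10.0, 10.0])

def likelihood(state):
    # Gaussian score peaking at the true state (a stand-in, not Eq. (16)).
    return np.exp(-0.05 * np.sum((state - target) ** 2))

particles = rng.normal(target, 5.0, size=(100, 4))
for _ in range(10):
    particles = track_step(particles, likelihood)
estimate = particles.mean(axis=0)
```

After a few iterations the particle cloud concentrates around the state that the likelihood favors, and its mean serves as the MAP-style point estimate.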

Fig. 3. The results of the color spatial distribution feature computed with our approach and with the approach in [1]. The top row shows the original images, the middle row the results of our approach, and the bottom row the comparison results.

In the update stage, the particles' importance weights are defined by the object description and the coherence constraints. The weight of the i-th particle at time t is

w_t^i = F_t(X_t^i, I_t) · Sc_t(X_t^i, I_t) · Tc_t(X_{t-1,t}^i, I_{t-1,t}),    (16)

where X_t^i is the system state of the i-th particle at time t. During the update, a direct version of Monte Carlo importance sampling [7] is applied to avoid degeneracy.

5 Experiments

We show here the saliency maps of the color spatial distribution feature computed with our method, and the results of salient object tracking under a variety of situations, including tracking objects of various categories, tracking with object appearance changes, and automatic attention rebuilding.

5.1 Color Spatial Distribution

When computing the color spatial distribution feature, a graph-based segmentation algorithm is used to cluster adjacent pixels with similar colors. The segmentation is applied to every channel of the RGB image and the results are merged to obtain the final segmentation map. In Fig. 3, we compare our feature maps with maps computed using the approach in [1]. The lighter an area, the higher its probability of being salient. As can be seen, our approach shows the salient area more clearly than the comparison results. We use images from the Berkeley segmentation dataset [16] for convenient comparison.

5.2 Tracking Results

Our approach is implemented and experiments are performed on video sequences of various topics. We have collected a video dataset of different object topics, including people, bicycles, cars, animals and so on. Most of our test videos are real-life data collected with a hand-held camera, while others were downloaded from the internet. The objects of interest in our experiments are initialized using the detection approach in [1]. Different from the original detection process, we manually set the feature weights to 0.3, 0.45 and 0.25 for the local, regional and global features. During tracking with the sequential Monte Carlo method, we sample 100 particles for each filter.

Fig. 4 shows tracking results for objects of various categories, including a bicycle, a car, people and a bird. Instead of relying on an object's intrinsic properties, the saliency-based detection and tracking method depends only on the distinction between object and background, so we can track any object as long as it is salient.

Fig. 5 shows the tracking results of our approach when the appearance color feature of the object changes. In this experiment, we manually alter the global illumination. Besides, the red car in the video is occluded by tree leaves in some frames, which also changes its appearance feature. We track the red car using our approach and mean-shift [4] for comparison. For the mean-shift tracker, we manually set the initial position of the object. As Fig. 5 shows, our approach gives good results in spite of illumination changes and partial occlusion, while mean-shift fails when the appearance of the object changes.

Fig. 6 shows the results when the shape of the object changes. In this experiment, the girl comes nearer and nearer to the camera while making different gestures, which causes obvious changes of the object shape. As can be seen, our approach achieves good results under this condition.
In Fig. 7 we show that our tracking method can automatically rebuild attention when the object being tracked goes out of sight. In this experiment, the detection algorithm sets attention on the white car as the initial state. When this car leaves the scene, attention is rebuilt on the bicycle. Finally, when the bicycle disappears and another car arrives, that car becomes the salient object and draws the attention.

6 Conclusion

This paper presents a novel approach to online salient object tracking. In this method, the object's appearance is described by its difference from the background, which is computed from the static and motion saliency features locally, regionally and globally. A new segmentation-based color spatial distribution feature is proposed which discriminates better between object and background. Features are adaptively selected and combined, and the sequential Monte Carlo technique is used to track the salient object. Our approach can track any salient object and is robust to illumination changes and partial occlusion. Moreover, attention can be automatically rebuilt without re-detection. We are

now preparing to extend this approach to multi-object tracking, which involves the modeling of object interactions.

Acknowledgment

This research is supported in part by the National Basic Research Program of China under Grant No. 2007CB and the National Natural Science Foundation of China under Grant No.

References

1. T. Liu, J. Sun, N.N. Zheng, X. Tang, H.Y. Shum: Learning to Detect A Salient Object. In: Proc. IEEE Conf. on Computer Vision and Pattern Recognition (2007)
2. T. Liu, N.N. Zheng, W. Ding, Z.J. Yuan: Video Attention: Learning to Detect A Salient Object Sequence. In: 19th International Conference on Pattern Recognition (2008)
3. M. Isard, A. Blake: Condensation: conditional density propagation for visual tracking. International Journal of Computer Vision, 29(1), 5-28 (1998)
4. D. Comaniciu, V. Ramesh, P. Meer: Kernel-based object tracking. IEEE Trans. Pattern Anal. Mach. Intell., 25(5) (2003)
5. C. Rasmussen, G.D. Hager: Probabilistic Data Association Methods for Tracking Complex Visual Objects. IEEE Trans. Pattern Analysis Machine Intell., 23(6) (2001)
6. G.D. Hager, P.N. Belhumeur: Efficient region tracking with parametric models of geometry and illumination. IEEE Trans. Pattern Analysis Machine Intell., 20(10) (1998)
7. A. Doucet, N. de Freitas, N. Gordon (eds.): Sequential Monte Carlo Methods in Practice. Springer-Verlag, New York (2001)
8. M. Isard, A. Blake: Contour tracking by stochastic propagation of conditional density. In: Proc. European Conf. on Computer Vision, 1 (1996)
9. F. Leymarie, M. Levine: Tracking deformable objects in the plane using an active contour model. IEEE Trans. Pattern Analysis Machine Intell., 15(6) (1993)
10. R. Carmi, L. Itti: Visual causes versus correlates of attentional selection in dynamic scenes. Vision Research, 46(26) (2006)
11. L. Itti, C. Koch, E. Niebur: A model of saliency-based visual attention for rapid scene analysis. IEEE Trans. Pattern Analysis Machine Intell., 20(11) (1998)
12. P.F. Felzenszwalb, D.F. Huttenlocher: Efficient Graph-Based Image Segmentation. International Journal of Computer Vision, 59(2) (2004)
13. S.M. Smith, J.M. Brady: ASSET-2: Real-Time Motion Segmentation and Shape Tracking. IEEE Trans. Pattern Analysis Machine Intell., 17(8) (1995)
14. J.Y. Bouguet: Pyramidal Implementation of the Lucas-Kanade Feature Tracker. Tech. Rep., Intel Corporation, Microprocessor Research Labs (1999)
15. R.T. Collins, Y. Liu: On-Line Selection of Discriminative Tracking Features. In: Proc. IEEE Int. Conf. on Computer Vision (2003)
16. D. Martin, C. Fowlkes, D. Tal, J. Malik: A Database of Human Segmented Natural Images and its Application to Evaluating Segmentation Algorithms and Measuring Ecological Statistics. In: Proc. IEEE Int. Conf. on Computer Vision (2001)

Fig. 4. The results of tracking objects of various categories.

Fig. 5. The results of tracking under illumination changes and occlusion. The top row shows the results of our approach; the bottom row the results of mean-shift.

Fig. 6. The results of tracking while the shape of the object changes.

Fig. 7. The results of attention rebuilding.


Leow Wee Kheng CS4243 Computer Vision and Pattern Recognition. Motion Tracking. CS4243 Motion Tracking 1

Leow Wee Kheng CS4243 Computer Vision and Pattern Recognition. Motion Tracking. CS4243 Motion Tracking 1 Leow Wee Kheng CS4243 Computer Vision and Pattern Recognition Motion Tracking CS4243 Motion Tracking 1 Changes are everywhere! CS4243 Motion Tracking 2 Illumination change CS4243 Motion Tracking 3 Shape

More information

NIH Public Access Author Manuscript Proc Int Conf Image Proc. Author manuscript; available in PMC 2013 May 03.

NIH Public Access Author Manuscript Proc Int Conf Image Proc. Author manuscript; available in PMC 2013 May 03. NIH Public Access Author Manuscript Published in final edited form as: Proc Int Conf Image Proc. 2008 ; : 241 244. doi:10.1109/icip.2008.4711736. TRACKING THROUGH CHANGES IN SCALE Shawn Lankton 1, James

More information

Keeping flexible active contours on track using Metropolis updates

Keeping flexible active contours on track using Metropolis updates Keeping flexible active contours on track using Metropolis updates Trausti T. Kristjansson University of Waterloo ttkri stj @uwater l oo. ca Brendan J. Frey University of Waterloo f r ey@uwater l oo. ca

More information

CLASSIFICATION WITH RADIAL BASIS AND PROBABILISTIC NEURAL NETWORKS

CLASSIFICATION WITH RADIAL BASIS AND PROBABILISTIC NEURAL NETWORKS CLASSIFICATION WITH RADIAL BASIS AND PROBABILISTIC NEURAL NETWORKS CHAPTER 4 CLASSIFICATION WITH RADIAL BASIS AND PROBABILISTIC NEURAL NETWORKS 4.1 Introduction Optical character recognition is one of

More information

Image Classification based on Saliency Driven Nonlinear Diffusion and Multi-scale Information Fusion Ms. Swapna R. Kharche 1, Prof.B.K.

Image Classification based on Saliency Driven Nonlinear Diffusion and Multi-scale Information Fusion Ms. Swapna R. Kharche 1, Prof.B.K. Image Classification based on Saliency Driven Nonlinear Diffusion and Multi-scale Information Fusion Ms. Swapna R. Kharche 1, Prof.B.K.Chaudhari 2 1M.E. student, Department of Computer Engg, VBKCOE, Malkapur

More information

Pairwise Threshold for Gaussian Mixture Classification and its Application on Human Tracking Enhancement

Pairwise Threshold for Gaussian Mixture Classification and its Application on Human Tracking Enhancement Pairwise Threshold for Gaussian Mixture Classification and its Application on Human Tracking Enhancement Daegeon Kim Sung Chun Lee Institute for Robotics and Intelligent Systems University of Southern

More information

A Boosted Particle Filter: Multitarget Detection and Tracking

A Boosted Particle Filter: Multitarget Detection and Tracking A Boosted Particle Filter: Multitarget Detection and Tracking Kenji Okuma, Ali Taleghani, Nando De Freitas, James J. Little, and David G. Lowe University of British Columbia, Vancouver B.C V6T 1Z4, CANADA,

More information

Local Image Registration: An Adaptive Filtering Framework

Local Image Registration: An Adaptive Filtering Framework Local Image Registration: An Adaptive Filtering Framework Gulcin Caner a,a.murattekalp a,b, Gaurav Sharma a and Wendi Heinzelman a a Electrical and Computer Engineering Dept.,University of Rochester, Rochester,

More information

Video Surveillance System for Object Detection and Tracking Methods R.Aarthi, K.Kiruthikadevi

Video Surveillance System for Object Detection and Tracking Methods R.Aarthi, K.Kiruthikadevi IJISET - International Journal of Innovative Science, Engineering & Technology, Vol. 2 Issue 11, November 2015. Video Surveillance System for Object Detection and Tracking Methods R.Aarthi, K.Kiruthikadevi

More information

Evaluation of Moving Object Tracking Techniques for Video Surveillance Applications

Evaluation of Moving Object Tracking Techniques for Video Surveillance Applications International Journal of Current Engineering and Technology E-ISSN 2277 4106, P-ISSN 2347 5161 2015INPRESSCO, All Rights Reserved Available at http://inpressco.com/category/ijcet Research Article Evaluation

More information

Robotics Programming Laboratory

Robotics Programming Laboratory Chair of Software Engineering Robotics Programming Laboratory Bertrand Meyer Jiwon Shin Lecture 8: Robot Perception Perception http://pascallin.ecs.soton.ac.uk/challenges/voc/databases.html#caltech car

More information

Introduction to behavior-recognition and object tracking

Introduction to behavior-recognition and object tracking Introduction to behavior-recognition and object tracking Xuan Mo ipal Group Meeting April 22, 2011 Outline Motivation of Behavior-recognition Four general groups of behaviors Core technologies Future direction

More information

Active Monte Carlo Recognition

Active Monte Carlo Recognition Active Monte Carlo Recognition Felix v. Hundelshausen 1 and Manuela Veloso 2 1 Computer Science Department, Freie Universität Berlin, 14195 Berlin, Germany hundelsh@googlemail.com 2 Computer Science Department,

More information

Target Tracking Based on Mean Shift and KALMAN Filter with Kernel Histogram Filtering

Target Tracking Based on Mean Shift and KALMAN Filter with Kernel Histogram Filtering Target Tracking Based on Mean Shift and KALMAN Filter with Kernel Histogram Filtering Sara Qazvini Abhari (Corresponding author) Faculty of Electrical, Computer and IT Engineering Islamic Azad University

More information

Tracking of Human Body using Multiple Predictors

Tracking of Human Body using Multiple Predictors Tracking of Human Body using Multiple Predictors Rui M Jesus 1, Arnaldo J Abrantes 1, and Jorge S Marques 2 1 Instituto Superior de Engenharia de Lisboa, Postfach 351-218317001, Rua Conselheiro Emído Navarro,

More information

People Tracking and Segmentation Using Efficient Shape Sequences Matching

People Tracking and Segmentation Using Efficient Shape Sequences Matching People Tracking and Segmentation Using Efficient Shape Sequences Matching Junqiu Wang, Yasushi Yagi, and Yasushi Makihara The Institute of Scientific and Industrial Research, Osaka University 8-1 Mihogaoka,

More information

Dense Image-based Motion Estimation Algorithms & Optical Flow

Dense Image-based Motion Estimation Algorithms & Optical Flow Dense mage-based Motion Estimation Algorithms & Optical Flow Video A video is a sequence of frames captured at different times The video data is a function of v time (t) v space (x,y) ntroduction to motion

More information

Mixture Models and EM

Mixture Models and EM Mixture Models and EM Goal: Introduction to probabilistic mixture models and the expectationmaximization (EM) algorithm. Motivation: simultaneous fitting of multiple model instances unsupervised clustering

More information

Efficient Particle Filter-Based Tracking of Multiple Interacting Targets Using an MRF-based Motion Model

Efficient Particle Filter-Based Tracking of Multiple Interacting Targets Using an MRF-based Motion Model Efficient Particle Filter-Based Tracking of Multiple Interacting Targets Using an MRF-based Motion Model Zia Khan, Tucker Balch, and Frank Dellaert {zkhan,tucker,dellaert}@cc.gatech.edu College of Computing,

More information

Kernel-Bayesian Framework for Object Tracking

Kernel-Bayesian Framework for Object Tracking Kernel-Bayesian Framework for Object Tracking Xiaoqin Zhang 1,WeimingHu 1, Guan Luo 1, and Steve Maybank 2 1 National Laboratory of Pattern Recognition, Institute of Automation, Beijing, China {xqzhang,wmhu,gluo}@nlpr.ia.ac.cn

More information

Video Based Moving Object Tracking by Particle Filter

Video Based Moving Object Tracking by Particle Filter Video Based Moving Object Tracking by Particle Filter Md. Zahidul Islam, Chi-Min Oh and Chil-Woo Lee Chonnam National University, Gwangju, South Korea zahid@image.chonnam.ac.kr Abstract Usually, the video

More information

Lecture 20: Tracking. Tuesday, Nov 27

Lecture 20: Tracking. Tuesday, Nov 27 Lecture 20: Tracking Tuesday, Nov 27 Paper reviews Thorough summary in your own words Main contribution Strengths? Weaknesses? How convincing are the experiments? Suggestions to improve them? Extensions?

More information

Why study Computer Vision?

Why study Computer Vision? Why study Computer Vision? Images and movies are everywhere Fast-growing collection of useful applications building representations of the 3D world from pictures automated surveillance (who s doing what)

More information

Moving Object Segmentation Method Based on Motion Information Classification by X-means and Spatial Region Segmentation

Moving Object Segmentation Method Based on Motion Information Classification by X-means and Spatial Region Segmentation IJCSNS International Journal of Computer Science and Network Security, VOL.13 No.11, November 2013 1 Moving Object Segmentation Method Based on Motion Information Classification by X-means and Spatial

More information

A Spatio-Spectral Algorithm for Robust and Scalable Object Tracking in Videos

A Spatio-Spectral Algorithm for Robust and Scalable Object Tracking in Videos A Spatio-Spectral Algorithm for Robust and Scalable Object Tracking in Videos Alireza Tavakkoli 1, Mircea Nicolescu 2 and George Bebis 2,3 1 Computer Science Department, University of Houston-Victoria,

More information

Image Segmentation Using Iterated Graph Cuts Based on Multi-scale Smoothing

Image Segmentation Using Iterated Graph Cuts Based on Multi-scale Smoothing Image Segmentation Using Iterated Graph Cuts Based on Multi-scale Smoothing Tomoyuki Nagahashi 1, Hironobu Fujiyoshi 1, and Takeo Kanade 2 1 Dept. of Computer Science, Chubu University. Matsumoto 1200,

More information

Human Detection and Motion Tracking

Human Detection and Motion Tracking Human Detection and Motion Tracking Technical report - FI - VG20102015006-2011 04 Ing. Ibrahim Nahhas Ing. Filip Orság, Ph.D. Faculty of Information Technology, Brno University of Technology December 9,

More information

ROBUST OBJECT TRACKING BY SIMULTANEOUS GENERATION OF AN OBJECT MODEL

ROBUST OBJECT TRACKING BY SIMULTANEOUS GENERATION OF AN OBJECT MODEL ROBUST OBJECT TRACKING BY SIMULTANEOUS GENERATION OF AN OBJECT MODEL Maria Sagrebin, Daniel Caparròs Lorca, Daniel Stroh, Josef Pauli Fakultät für Ingenieurwissenschaften Abteilung für Informatik und Angewandte

More information

Using temporal seeding to constrain the disparity search range in stereo matching

Using temporal seeding to constrain the disparity search range in stereo matching Using temporal seeding to constrain the disparity search range in stereo matching Thulani Ndhlovu Mobile Intelligent Autonomous Systems CSIR South Africa Email: tndhlovu@csir.co.za Fred Nicolls Department

More information

New Models For Real-Time Tracking Using Particle Filtering

New Models For Real-Time Tracking Using Particle Filtering New Models For Real-Time Tracking Using Particle Filtering Ng Ka Ki and Edward J. Delp Video and Image Processing Laboratories (VIPER) School of Electrical and Computer Engineering Purdue University West

More information

Online Figure-ground Segmentation with Edge Pixel Classification

Online Figure-ground Segmentation with Edge Pixel Classification Online Figure-ground Segmentation with Edge Pixel Classification Zhaozheng Yin Robert T. Collins Department of Computer Science and Engineering The Pennsylvania State University, USA {zyin,rcollins}@cse.psu.edu,

More information

Keywords:- Object tracking, multiple instance learning, supervised learning, online boosting, ODFS tracker, classifier. IJSER

Keywords:- Object tracking, multiple instance learning, supervised learning, online boosting, ODFS tracker, classifier. IJSER International Journal of Scientific & Engineering Research, Volume 5, Issue 2, February-2014 37 Object Tracking via a Robust Feature Selection approach Prof. Mali M.D. manishamali2008@gmail.com Guide NBNSCOE

More information

Scale Invariant Segment Detection and Tracking

Scale Invariant Segment Detection and Tracking Scale Invariant Segment Detection and Tracking Amaury Nègre 1, James L. Crowley 1, and Christian Laugier 1 INRIA, Grenoble, France firstname.lastname@inrialpes.fr Abstract. This paper presents a new feature

More information

Representing Moving Images with Layers. J. Y. Wang and E. H. Adelson MIT Media Lab

Representing Moving Images with Layers. J. Y. Wang and E. H. Adelson MIT Media Lab Representing Moving Images with Layers J. Y. Wang and E. H. Adelson MIT Media Lab Goal Represent moving images with sets of overlapping layers Layers are ordered in depth and occlude each other Velocity

More information

Online Spatial-temporal Data Fusion for Robust Adaptive Tracking

Online Spatial-temporal Data Fusion for Robust Adaptive Tracking Online Spatial-temporal Data Fusion for Robust Adaptive Tracking Jixu Chen Qiang Ji Department of Electrical, Computer, and Systems Engineering Rensselaer Polytechnic Institute, Troy, NY 12180-3590, USA

More information

Graph-Based Superpixel Labeling for Enhancement of Online Video Segmentation

Graph-Based Superpixel Labeling for Enhancement of Online Video Segmentation Graph-Based Superpixel Labeling for Enhancement of Online Video Segmentation Alaa E. Abdel-Hakim Electrical Engineering Department Assiut University Assiut, Egypt alaa.aly@eng.au.edu.eg Mostafa Izz Cairo

More information

Probabilistic Tracking in Joint Feature-Spatial Spaces

Probabilistic Tracking in Joint Feature-Spatial Spaces Probabilistic Tracking in Joint Feature-Spatial Spaces Ahmed Elgammal Department of Computer Science Rutgers University Piscataway, J elgammal@cs.rutgers.edu Ramani Duraiswami UMIACS University of Maryland

More information

Optical Flow Estimation

Optical Flow Estimation Optical Flow Estimation Goal: Introduction to image motion and 2D optical flow estimation. Motivation: Motion is a rich source of information about the world: segmentation surface structure from parallax

More information

C. Premsai 1, Prof. A. Kavya 2 School of Computer Science, School of Computer Science Engineering, Engineering VIT Chennai, VIT Chennai

C. Premsai 1, Prof. A. Kavya 2 School of Computer Science, School of Computer Science Engineering, Engineering VIT Chennai, VIT Chennai Traffic Sign Detection Via Graph-Based Ranking and Segmentation Algorithm C. Premsai 1, Prof. A. Kavya 2 School of Computer Science, School of Computer Science Engineering, Engineering VIT Chennai, VIT

More information

Announcements. Computer Vision I. Motion Field Equation. Revisiting the small motion assumption. Visual Tracking. CSE252A Lecture 19.

Announcements. Computer Vision I. Motion Field Equation. Revisiting the small motion assumption. Visual Tracking. CSE252A Lecture 19. Visual Tracking CSE252A Lecture 19 Hw 4 assigned Announcements No class on Thursday 12/6 Extra class on Tuesday 12/4 at 6:30PM in WLH Room 2112 Motion Field Equation Measurements I x = I x, T: Components

More information

Supplementary Materials for Salient Object Detection: A

Supplementary Materials for Salient Object Detection: A Supplementary Materials for Salient Object Detection: A Discriminative Regional Feature Integration Approach Huaizu Jiang, Zejian Yuan, Ming-Ming Cheng, Yihong Gong Nanning Zheng, and Jingdong Wang Abstract

More information

Self Lane Assignment Using Smart Mobile Camera For Intelligent GPS Navigation and Traffic Interpretation

Self Lane Assignment Using Smart Mobile Camera For Intelligent GPS Navigation and Traffic Interpretation For Intelligent GPS Navigation and Traffic Interpretation Tianshi Gao Stanford University tianshig@stanford.edu 1. Introduction Imagine that you are driving on the highway at 70 mph and trying to figure

More information

Colorado School of Mines. Computer Vision. Professor William Hoff Dept of Electrical Engineering &Computer Science.

Colorado School of Mines. Computer Vision. Professor William Hoff Dept of Electrical Engineering &Computer Science. Professor William Hoff Dept of Electrical Engineering &Computer Science http://inside.mines.edu/~whoff/ 1 Image Segmentation Some material for these slides comes from https://www.csd.uwo.ca/courses/cs4487a/

More information

Automatic Shadow Removal by Illuminance in HSV Color Space

Automatic Shadow Removal by Illuminance in HSV Color Space Computer Science and Information Technology 3(3): 70-75, 2015 DOI: 10.13189/csit.2015.030303 http://www.hrpub.org Automatic Shadow Removal by Illuminance in HSV Color Space Wenbo Huang 1, KyoungYeon Kim

More information

Learning to Detect A Salient Object

Learning to Detect A Salient Object Learning to Detect A Salient Object Tie Liu Jian Sun Nan-Ning Zheng Xiaoou Tang Heung-Yeung Shum Xi an Jiaotong University Microsoft Research Asia Xi an, P.R. China Beijing, P.R. China Abstract We study

More information

Tracking Soccer Ball Exploiting Player Trajectory

Tracking Soccer Ball Exploiting Player Trajectory Tracking Soccer Ball Exploiting Player Trajectory Kyuhyoung Choi and Yongdeuk Seo Sogang University, {Kyu, Yndk}@sogang.ac.kr Abstract This paper proposes an algorithm for tracking the ball in a soccer

More information

Scale-invariant visual tracking by particle filtering

Scale-invariant visual tracking by particle filtering Scale-invariant visual tracing by particle filtering Arie Nahmani* a, Allen Tannenbaum a,b a Dept. of Electrical Engineering, Technion - Israel Institute of Technology, Haifa 32000, Israel b Schools of

More information

Video Processing for Judicial Applications

Video Processing for Judicial Applications Video Processing for Judicial Applications Konstantinos Avgerinakis, Alexia Briassouli, Ioannis Kompatsiaris Informatics and Telematics Institute, Centre for Research and Technology, Hellas Thessaloniki,

More information

Computer Vision II Lecture 4

Computer Vision II Lecture 4 Computer Vision II Lecture 4 Color based Tracking 29.04.2014 Bastian Leibe RWTH Aachen http://www.vision.rwth-aachen.de leibe@vision.rwth-aachen.de Course Outline Single-Object Tracking Background modeling

More information

Motion Estimation. There are three main types (or applications) of motion estimation:

Motion Estimation. There are three main types (or applications) of motion estimation: Members: D91922016 朱威達 R93922010 林聖凱 R93922044 謝俊瑋 Motion Estimation There are three main types (or applications) of motion estimation: Parametric motion (image alignment) The main idea of parametric motion

More information

IMAGE SEGMENTATION. Václav Hlaváč

IMAGE SEGMENTATION. Václav Hlaváč IMAGE SEGMENTATION Václav Hlaváč Czech Technical University in Prague Faculty of Electrical Engineering, Department of Cybernetics Center for Machine Perception http://cmp.felk.cvut.cz/ hlavac, hlavac@fel.cvut.cz

More information

Video Inter-frame Forgery Identification Based on Optical Flow Consistency

Video Inter-frame Forgery Identification Based on Optical Flow Consistency Sensors & Transducers 24 by IFSA Publishing, S. L. http://www.sensorsportal.com Video Inter-frame Forgery Identification Based on Optical Flow Consistency Qi Wang, Zhaohong Li, Zhenzhen Zhang, Qinglong

More information

Image Segmentation Using Iterated Graph Cuts BasedonMulti-scaleSmoothing

Image Segmentation Using Iterated Graph Cuts BasedonMulti-scaleSmoothing Image Segmentation Using Iterated Graph Cuts BasedonMulti-scaleSmoothing Tomoyuki Nagahashi 1, Hironobu Fujiyoshi 1, and Takeo Kanade 2 1 Dept. of Computer Science, Chubu University. Matsumoto 1200, Kasugai,

More information

Lecture 28 Intro to Tracking

Lecture 28 Intro to Tracking Lecture 28 Intro to Tracking Some overlap with T&V Section 8.4.2 and Appendix A.8 Recall: Blob Merge/Split merge occlusion occlusion split When two objects pass close to each other, they are detected as

More information

Probabilistic Tracking and Recognition of Non-Rigid Hand Motion

Probabilistic Tracking and Recognition of Non-Rigid Hand Motion Probabilistic Tracking and Recognition of Non-Rigid Hand Motion Huang Fei and Ian Reid Department of Engineering Science University of Oxford, Parks Road, OX1 3PJ, UK [fei,ian]@robots.ox.ac.uk Abstract

More information

Recall: Blob Merge/Split Lecture 28

Recall: Blob Merge/Split Lecture 28 Recall: Blob Merge/Split Lecture 28 merge occlusion Intro to Tracking Some overlap with T&V Section 8.4.2 and Appendix A.8 occlusion split When two objects pass close to each other, they are detected as

More information

DETECTION OF IMAGE PAIRS USING CO-SALIENCY MODEL

DETECTION OF IMAGE PAIRS USING CO-SALIENCY MODEL DETECTION OF IMAGE PAIRS USING CO-SALIENCY MODEL N S Sandhya Rani 1, Dr. S. Bhargavi 2 4th sem MTech, Signal Processing, S. J. C. Institute of Technology, Chickballapur, Karnataka, India 1 Professor, Dept

More information

AUTONOMOUS IMAGE EXTRACTION AND SEGMENTATION OF IMAGE USING UAV S

AUTONOMOUS IMAGE EXTRACTION AND SEGMENTATION OF IMAGE USING UAV S AUTONOMOUS IMAGE EXTRACTION AND SEGMENTATION OF IMAGE USING UAV S Radha Krishna Rambola, Associate Professor, NMIMS University, India Akash Agrawal, Student at NMIMS University, India ABSTRACT Due to the

More information

Bi-directional Tracking using Trajectory Segment Analysis

Bi-directional Tracking using Trajectory Segment Analysis Bi-directional Tracking using Trajectory Segment Analysis Jian Sun Weiwei Zhang Xiaoou Tang Heung-Yeung Shum Microsoft Research Asia, Beijing, P. R. China {jiansun, weiweiz, xitang, and hshum}@microsoft.com

More information

Last week. Multi-Frame Structure from Motion: Multi-View Stereo. Unknown camera viewpoints

Last week. Multi-Frame Structure from Motion: Multi-View Stereo. Unknown camera viewpoints Last week Multi-Frame Structure from Motion: Multi-View Stereo Unknown camera viewpoints Last week PCA Today Recognition Today Recognition Recognition problems What is it? Object detection Who is it? Recognizing

More information

Ensemble Tracking. Abstract. 1 Introduction. 2 Background

Ensemble Tracking. Abstract. 1 Introduction. 2 Background Ensemble Tracking Shai Avidan Mitsubishi Electric Research Labs 201 Broadway Cambridge, MA 02139 avidan@merl.com Abstract We consider tracking as a binary classification problem, where an ensemble of weak

More information

Fragment-based Visual Tracking with Multiple Representations

Fragment-based Visual Tracking with Multiple Representations American Journal of Engineering and Applied Sciences Original Research Paper ragment-based Visual Tracking with Multiple Representations 1 Junqiu Wang and 2 Yasushi Yagi 1 AVIC Intelligent Measurement,

More information

A Robust and Efficient Motion Segmentation Based on Orthogonal Projection Matrix of Shape Space

A Robust and Efficient Motion Segmentation Based on Orthogonal Projection Matrix of Shape Space A Robust and Efficient Motion Segmentation Based on Orthogonal Projection Matrix of Shape Space Naoyuki ICHIMURA Electrotechnical Laboratory 1-1-4, Umezono, Tsukuba Ibaraki, 35-8568 Japan ichimura@etl.go.jp

More information

CORRELATION BASED CAR NUMBER PLATE EXTRACTION SYSTEM

CORRELATION BASED CAR NUMBER PLATE EXTRACTION SYSTEM CORRELATION BASED CAR NUMBER PLATE EXTRACTION SYSTEM 1 PHYO THET KHIN, 2 LAI LAI WIN KYI 1,2 Department of Information Technology, Mandalay Technological University The Republic of the Union of Myanmar

More information

Class 3: Advanced Moving Object Detection and Alert Detection Feb. 18, 2008

Class 3: Advanced Moving Object Detection and Alert Detection Feb. 18, 2008 Class 3: Advanced Moving Object Detection and Alert Detection Feb. 18, 2008 Instructor: YingLi Tian Video Surveillance E6998-007 Senior/Feris/Tian 1 Outlines Moving Object Detection with Distraction Motions

More information

CS 664 Segmentation. Daniel Huttenlocher

CS 664 Segmentation. Daniel Huttenlocher CS 664 Segmentation Daniel Huttenlocher Grouping Perceptual Organization Structural relationships between tokens Parallelism, symmetry, alignment Similarity of token properties Often strong psychophysical

More information

Superpixel Tracking. The detail of our motion model: The motion (or dynamical) model of our tracker is assumed to be Gaussian distributed:

Superpixel Tracking. The detail of our motion model: The motion (or dynamical) model of our tracker is assumed to be Gaussian distributed: Superpixel Tracking Shu Wang 1, Huchuan Lu 1, Fan Yang 1 abnd Ming-Hsuan Yang 2 1 School of Information and Communication Engineering, University of Technology, China 2 Electrical Engineering and Computer

More information

Object Recognition Using Pictorial Structures. Daniel Huttenlocher Computer Science Department. In This Talk. Object recognition in computer vision

Object Recognition Using Pictorial Structures. Daniel Huttenlocher Computer Science Department. In This Talk. Object recognition in computer vision Object Recognition Using Pictorial Structures Daniel Huttenlocher Computer Science Department Joint work with Pedro Felzenszwalb, MIT AI Lab In This Talk Object recognition in computer vision Brief definition

More information

MR IMAGE SEGMENTATION

MR IMAGE SEGMENTATION MR IMAGE SEGMENTATION Prepared by : Monil Shah What is Segmentation? Partitioning a region or regions of interest in images such that each region corresponds to one or more anatomic structures Classification

More information

Image Resizing Based on Gradient Vector Flow Analysis

Image Resizing Based on Gradient Vector Flow Analysis Image Resizing Based on Gradient Vector Flow Analysis Sebastiano Battiato battiato@dmi.unict.it Giovanni Puglisi puglisi@dmi.unict.it Giovanni Maria Farinella gfarinellao@dmi.unict.it Daniele Ravì rav@dmi.unict.it

More information

IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 11, NO. 5, MAY Baoxin Li, Member, IEEE, and Rama Chellappa, Fellow, IEEE.

IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 11, NO. 5, MAY Baoxin Li, Member, IEEE, and Rama Chellappa, Fellow, IEEE. TRANSACTIONS ON IMAGE PROCESSING, VOL. 11, NO. 5, MAY 2002 1 A Generic Approach to Simultaneous Tracking and Verification in Video Baoxin Li, Member,, and Rama Chellappa, Fellow, Abstract In this paper,

More information

Colour Segmentation-based Computation of Dense Optical Flow with Application to Video Object Segmentation

Colour Segmentation-based Computation of Dense Optical Flow with Application to Video Object Segmentation ÖGAI Journal 24/1 11 Colour Segmentation-based Computation of Dense Optical Flow with Application to Video Object Segmentation Michael Bleyer, Margrit Gelautz, Christoph Rhemann Vienna University of Technology

More information

Learning and Inferring Depth from Monocular Images. Jiyan Pan April 1, 2009

Learning and Inferring Depth from Monocular Images. Jiyan Pan April 1, 2009 Learning and Inferring Depth from Monocular Images Jiyan Pan April 1, 2009 Traditional ways of inferring depth Binocular disparity Structure from motion Defocus Given a single monocular image, how to infer

More information

MODEL-FREE, STATISTICAL DETECTION AND TRACKING OF MOVING OBJECTS

MODEL-FREE, STATISTICAL DETECTION AND TRACKING OF MOVING OBJECTS MODEL-FREE, STATISTICAL DETECTION AND TRACKING OF MOVING OBJECTS University of Koblenz Germany 13th Int. Conference on Image Processing, Atlanta, USA, 2006 Introduction Features of the new approach: Automatic

More information