A Bayesian Approach to Object Contour Tracking Using Level Sets
Alper Yilmaz (Computer Science Dept.), Xin Li (Mathematics Dept.), Mubarak Shah (Computer Science Dept.), University of Central Florida

Abstract

High-level vision tasks for video processing (recognition, understanding, etc.) require tracking the complete contour of objects. In general, objects undergo non-rigid deformations, which limits the applicability of motion models (e.g., affine, projective) that impose rigidity constraints on the objects. In this paper, we propose a contour tracking algorithm for video captured from moving cameras of different modalities. The proposed tracking algorithm takes a Bayesian approach using the probability density functions (PDFs) of texture and color features. These feature PDFs are fused in an independent opinion polling strategy, where the contribution of each feature is determined by its discrimination power. We formulate the evolution of the object contour as a variational calculus problem and solve the system using level sets. The associated energy functional combines region-based and boundary-based object segmentation approaches into one framework for object tracking in video, where the search space for the solution is reduced. In this regard, it can be viewed as a generalization of formerly proposed methods in which the shortcomings of other methods (color, shape, gradient constraints, etc.) are overcome. The robustness of the proposed algorithm is demonstrated on real sequences. Several video sequences are given in the supplementary media file.

1 Introduction

In recent years, a great deal of effort has been expended on object tracking. There are two broad classes of methods for tracking multiple objects:

- Detection-based trackers: detect the objects in every frame, then find the correspondence.
- Transformation-based trackers: detect the object only in the first frame, then track it in consecutive frames.
For detecting objects, Wren et al. [1] used a single Gaussian model for each pixel in the reference (background) image to find moving pixels. Stauffer and Grimson [2] generalized this scheme by modeling the pixel color in the reference image with a Gaussian mixture model. The most obvious limitation of these approaches, however, is that the camera must be stationary. Another approach to detecting objects is to segment the frames. Although there are various ways to perform segmentation, due to the relevance to our approach we will discuss only active contours (snakes). Snakes define the contour of the object by a number of points [3]. Generally, they segment the object by minimizing a gradient-based boundary condition. Compared to background difference schemes, active contours produce tighter object boundaries and are more suitable for higher-level tasks such as action recognition, object retrieval, and surveillance. The main problems of these methods are contour initialization and their reliance on image gradients alone. Recently, detection schemes that fuse snakes with background subtraction and change detection were proposed by Besson et al. [4] and by Paragios and Deriche [5], respectively. These methods overcome the initialization problem by evolving the contour to the moving regions. Specifically, in [5], after initialization, a gradient-based approach similar to [6] is applied to obtain the true object region. In contrast, in [4] the contour is evolved based on the flow of the centroid of the same object in two consecutive frames. Although this approach overcomes the problems of the image gradient, its performance for non-rigid objects degrades due to the centroid flow. Note that both of these methods are suitable only for tracking a single object. In detection-based trackers, tracking is accomplished by finding the corresponding objects in consecutive frames.
The simplest case is point correspondence, where each object is represented by a point and a cost function based on the spatial positions of the objects is minimized to determine correspondence. Point correspondence can be generalized to region correspondence, where in addition to the centroid of a region, other properties such as shape, color, and texture can be employed in the cost function. Transformation-based trackers, on the other hand, require object detection only in the first frame. Once the object is detected, the task is to find the motion of the object based on the object representation. The object representations used by this class of trackers are circular, rectangular, or elliptical object patches, or the full object contours. According to the object representation chosen, different transformation models are used; commonly used transformations are translation, translation+scaling, affine, projective, etc. Patch representations impose rigidity constraints on the object. For instance, Comaniciu and Meer [7] use a circular region around the object and compute the translation of the object by maximizing the likelihood of the model and the object density estimates. Similarly, Jepson et al. [8] used an ellipsoidal patch for object representation and computed the affine motion parameters using a probabilistic model that captures stable features, outliers, and object structure. Although the performance of patch-based trackers is acceptable, they cannot provide much information beyond the centroid or the orientation of the region. Contour representation, on the other hand, relaxes the rigidity constraint and allows the objects to undergo non-rigid deformations. In contour tracking, the contour is initialized with the previous position and is evolved to the correct position by minimizing an associated energy functional. There are several issues in the active contour framework: selection of the energy functional, representation of the object boundary, and the boundary evolution method. Traditional energy functionals are based on image gradients, and the curve is represented and evolved in a Lagrangian methodology. Although we postpone the detailed discussion to Section 4, it is worth mentioning that the Lagrangian approach does not allow topology changes (splitting or merging) and is thus not suitable for tracking non-rigid objects [9]. To overcome the limitations of the Lagrangian approach, Caselles et al. [6] used an Eulerian approach, called the level set method, which allows topology changes. A detailed discussion of level sets is given in Section 4. In their approach, the object contour converges to the object boundary, which is defined by the gradient magnitude. Bertalmio et al.
[9] proposed a tracking scheme using the temporal gradient of consecutive images via level set partial differential equations. However, their approach requires small motion and no changes in the local intensities. They also noted that the performance of the method degrades for objects with non-uniform intensities on a complex background. To overcome the shortcomings of gradient information, Zhu and Yuille proposed a region-based segmentation method using an active contour framework [10]. They selected a number of seed regions and performed a region growing step defined by an intensity homogeneity criterion. In contrast to [4], [5], [6], and [9], the topology change of the segmented objects is obtained offline by merging regions. Although the paper represents pioneering work in this area, the method is not suitable for tracking objects due to the region growing step and the offline region merging step. In a recent paper, Mansouri proposed a Bayesian approach to contour tracking [11]. He modeled the possible transformations between the regions in two frames by pixel-based translations: every pixel in the image is allowed to move within a circular window, in which the probability of object or background membership is computed. Mansouri implemented the energy functional using level sets. The performance of this algorithm degrades in the presence of illumination changes and self-shadowing, and the manually selected local window size drastically affects tracking performance: a small neighborhood tolerates only small motion, whereas in a large neighborhood, similar object and background intensities produce undesired tracking results. In the context of segmentation, Paragios and Deriche [12] proposed a Bayesian approach based on region competition. Their approach uses a mixture model for the magnitude of the Gabor responses. The algorithm computes a combination of boundary-based and region-based forces to evolve the contour.
The boundary term used by the authors is essentially the region term evaluated in four directional support regions around the boundary pixel. One problem with this approach is that the magnitudes of the complex-valued Gabor filter outputs do not constitute an orthonormal basis, which is necessary for the independence constraint imposed on the features. Also, as discussed in [8], the magnitude does not provide a unique set of features. In this paper, we focus on tracking regions using a variational framework, which can be viewed as a generalization of [10], [11], and [12]. The proposed framework contributes to the field in several respects. First, the objective functional is motivated by the Bayesian framework and is derived without imposing any constraints on the shape, color, or gradient of the objects. Second, an independent opinion polling strategy is used to fuse different sources of information computed from the regions (the object and the background). Finally, the method uses a free band-size parameter, which builds a bridge between boundary-based and region-based variational tracking schemes and generalizes the previously proposed methods. Our approach differs from the generic active contour models of [3], [4], [5], [6], and [13] in terms of the object boundary definition: we define the boundary in a multi-modal Bayesian framework instead of through image gradients. In contrast to [10] and [11], the curve is evolved to the object boundary by a region-motivated boundary force which uses higher-order statistics. The proposed method fuses texture and color information intelligently (in contrast to active contour methods including [5], [11], and [12]). Due to modeling of the region instead of individual pixels, and due to the use of texture cues, it is less prone to lighting changes (in contrast to [11]). Also, the proposed method uses more reliable modeling of both color and texture information (compared to [12]).
The paper is organized as follows: Section 2 outlines how the texture and color features are modeled. In Section 3, a Bayesian approach to object tracking is outlined and the associated energy functional related to the current boundary hypothesis is derived. Section 4 deals with curve evolution strategies and the proposed speed function for evolving the boundary. Finally, the experimental results and conclusions are sketched in Sections 5 and 6, respectively.

2 Feature Selection

The performance of tracking and segmentation approaches depends on the features used. During the last two decades, two classes of features have been widely considered in the field of computer vision for tracking and segmentation purposes. Color features are obtained from the raw color values in an image. In general, their performance degrades under changes in illumination; in practice, however, color-based approaches perform reasonably well. Texture features code the details in an image that have repetitive structure. To compute texture features, an appropriate representation is needed; the most popular texture representation is filter banks. The color histogram, which is the most widely used color representation, performs better than parametric models in the context of skin detection [14]. Histograms can be generalized to multivariate kernel density estimates with kernel K and bandwidth h for d-dimensional data:

\hat{f}(x) = \frac{1}{n h^d} \sum_{i=1}^{n} K\left(\frac{x - x_i}{h}\right),

where the uniform kernel reduces the estimate to a histogram. Among the various kernels, the Epanechnikov kernel,

K(x) = \begin{cases} \frac{1}{2} c_d^{-1} (d+2)\,(1 - \|x\|^2) & \text{if } \|x\| < 1, \\ 0 & \text{otherwise,} \end{cases}

where c_d is the volume of the unit d-dimensional sphere, yields the minimum average global error between the estimate and the true density [7]. In contrast to color, texture features require a preprocessing step to generate texture representations.
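As an illustration of the estimator above, the following sketch implements the Epanechnikov kernel density estimate for scalar (d = 1) data; the function names and test data are ours, not the paper's.

```python
import numpy as np
from math import gamma, pi

def epanechnikov(u, d=1):
    # K(u) = (1/2) c_d^{-1} (d + 2) (1 - ||u||^2) for ||u|| < 1, else 0,
    # where c_d is the volume of the unit d-dimensional sphere.
    c_d = pi ** (d / 2.0) / gamma(d / 2.0 + 1.0)
    u2 = np.asarray(u) ** 2
    return np.where(u2 < 1.0, 0.5 / c_d * (d + 2) * (1.0 - u2), 0.0)

def kde(x, samples, h):
    # f_hat(x) = 1/(n h^d) sum_i K((x - x_i) / h), here for d = 1.
    x = np.atleast_1d(x).astype(float)
    u = (x[:, None] - samples[None, :]) / h
    return epanechnikov(u).sum(axis=1) / (len(samples) * h)
```

With a uniform kernel in place of `epanechnikov`, this reduces to the (smoothed) histogram mentioned in the text.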
In this framework, we select a multi-scale, multi-oriented linear basis (specifically, steerable pyramids [15]) for representing textures. Filter selection for subband analysis in steerable pyramids plays a critical role in statistical modeling. In order to have a disjoint feature space for creating independent PDFs based on texture analysis, we adopt Gabor wavelets, which create an orthonormal subband basis in the steerable pyramid representation. Gabor wavelets are Gaussian-modulated sine and cosine gratings:

G_i(x, y) = e^{-\pi[x^2/\alpha^2 + y^2/\beta^2]} \, e^{-2\pi i[u_0 x + v_0 y]}, \quad (1)

where α and β specify the width and the height, and u_0 and v_0 specify the modulation of the filter. Similar to the modeling of color information, PDFs for orthonormal texture features can be generated by kernel density estimates or parametric models. For the filter-based texture representation, a non-parametric density estimate is not efficient for two main reasons: the size of the non-parametric model is not known, and post-normalization of the filter responses degrades performance. Instead, we use a Gaussian mixture model similar to [8] and [12]. In a mixture model, the probability of observing a value x is computed from the model parameters (μ_k and σ_k for a Gaussian) and the a priori probabilities P_k:

p(x \mid \Theta) = \sum_{k=1}^{C_N} P_k \, p_k(x \mid \mu_k, \sigma_k), \quad (2)

where Θ = {P_k, μ_k, σ_k : k = 1, ..., C_N}. Fixing the number of components at C_N = 3, the unknowns P_k, μ_k, and σ_k in Eq. 2 are computed using the Expectation Maximization (EM) algorithm. In Fig. 1, a sample texture and the contour plot of its Gaussian mixture model are shown. We believe an ideal tracking system should use both color and texture features. In Figs. 2a and 2b, we present selected frames from two synthetic sequences, which show the importance of the color and texture features, respectively. In (a), a synthetic object which has the same texture pattern as, but a different color from, the background is tracked using color (row i) and texture (row ii).
In contrast to the color-based tracker, the texture-based tracker fails due to the texture similarity around the boundary. In part (b), the object and background have different textures but similar colors. In this case, the texture-based approach (row i) correctly tracks the object, whereas the color-based tracker (row ii) fails.

2.1 Fusing Color and Texture Features

Fusing different sources of information can be achieved in various ways: cascaded integration (one cue at a time), supervised late integration (linear opinion polling: a weighted linear combination), or unsupervised early integration, which is evaluated prior to object membership (independent opinion polling) [16]. Among these, the independent opinion polling strategy is the only scheme that is statistically stable [17]. In independent opinion polling, the likelihood of the observations under the object or background model is given by

p(x \mid M_\alpha) = \prod_{\beta} p_\beta(x \mid M_{\alpha,\beta}),
Figure 1: Sample texture (left) and contour plots of the 2-dimensional (magnitude and phase) PDFs of Gabor wavelets in 4 directions (0, 45, 90, 135 degrees). As seen, for each direction only two Gaussians are fitted.

Figure 2: Necessity of using the color cue (a) and the texture cue (b). (a) The results of i) a color-based tracker (correct tracking) and ii) a texture-based tracker (incorrect tracking due to texture similarity) for a green object with a texture pattern similar to its brown background. (b) The results of i) a texture-based tracker (correct tracking) and ii) a color-based tracker (incorrect tracking due to color similarity) for an object whose color is similar to the background. We recommend a color printer for viewing this figure.

where x is the spatial variable, α ∈ {object, background}, and β ∈ {color, {steerable subbands}}. Using Bayes' rule, the a posteriori estimate of membership can be computed:

p(\alpha \mid x) = \frac{\prod_\beta p_\beta(x \mid M_{\alpha,\beta}) \, P(\alpha)}{\sum_\gamma \prod_\beta p_\beta(x \mid M_{\gamma,\beta}) \, P(\gamma)}, \quad (3)

where γ ∈ {object, background} and α and β are as defined above. It can easily be observed from Eq. 3 that features which have no discrimination power yield equal membership probabilities, whereas discriminant features yield higher probabilities.

3 Bayesian Approach to Tracking

Tracking an object in an image sequence I_i can be treated as discriminant analysis of pixels into two non-overlapping classes, where the classes correspond to the object region (foreground), R_obj, and the background region, R_bck. In the spatial domain, R = R_obj ∪ R_bck. The boundary (front), Γ, between these two regions is defined by the intersection of the directional curves which define each region, such that Γ = Γ_obj ∩ Γ_bck, where Γ_obj and Γ_bck are, respectively, the borders of the foreground and background regions traversed in the counterclockwise direction.
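A minimal sketch of the fusion rule in Eq. 3, assuming the per-feature likelihoods have already been evaluated at a pixel (the function name and toy numbers are ours):

```python
import numpy as np

def fuse_memberships(likelihoods, priors):
    # Independent opinion polling (Eq. 3): multiply the per-feature
    # likelihoods for each class, weight by the class prior, and
    # normalize over the classes (object, background).
    scores = {c: float(np.prod(likelihoods[c])) * priors[c] for c in likelihoods}
    z = sum(scores.values())
    return {c: s / z for c, s in scores.items()}
```

Note how a neutral feature (equal likelihood under both models) cancels out of the posterior, which is exactly the behavior observed after Eq. 3.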
Considering contour tracking or segmentation methods, the likelihood of observing the front between two regions is equal to the likelihood of partitioning the regions:

P(\Gamma) = P(\varphi(R) = \{R_{obj}, R_{bck}\}), \quad (4)

where φ is the partitioning operator [10]. The a posteriori probability of the boundary (the left side of Eq. 4) can be used interchangeably with the a posteriori probability of partitioning the space. Thus, for frame n, we formulate the object tracking problem in terms of the boundary probability, P_Γ, defined using the probabilities of the regions given the current image, I^n, and the previous object boundary, Γ^{n-1}, as P_Γ = P(φ(R^n) | I^n, Γ^{n-1}). Using Bayes' rule and discarding the constant terms, the probability of the object boundary is approximated as:

P_\Gamma \propto P(I^n \mid R^n_{obj}, \Gamma^{n-1}) \, P(I^n \mid R^n_{bck}, \Gamma^{n-1}) \, P(\varphi(R^n) \mid \Gamma^{n-1}). \quad (5)

The last term in Eq. 5 is the probability of the partitioning in the current image based only on the previous information, and can be dropped. Thus, the front probability becomes P_Γ = P(I^n | R^n_{obj}, Γ^{n-1}) P(I^n | R^n_{bck}, Γ^{n-1}). Let there be two subregions, R^Γ_obj and R^Γ_bck, defined in the neighborhood of the curve, such that R^Γ_obj ⊂ R^n_obj and R^Γ_bck ⊂ R^n_bck. Following the independent opinion polling strategy and accounting for the noise process present in the observations, the front probability P_Γ can also be defined in terms of the probabilities of these subregions:

P_\Gamma \propto P(I^n \mid R^\Gamma_{obj}, \Gamma^{n-1}) \, P(I^n \mid R^\Gamma_{bck}, \Gamma^{n-1}). \quad (6)

Throughout the paper, we assume that the pixels are independent, so that a region probability is the product of the probabilities of its individual pixels: P(I^n | R_α) = \prod_{x \in R_\alpha} P(I^n(x)), where α ∈ {obj, bck}. The probability conditioned on Γ^{n-1} in Eq. 6 can be written in terms of the probability of the subregion R_A defined by the curve:

P(I^n \mid R_A, \Gamma) = \prod_{x_1 \in \Gamma} \; \prod_{x_2 \in R_A(x_1)} P_{R_A}(I^n(x_2)), \quad (7)

where R_A(x_1) denotes the square region around boundary point x_1. Thus, the a posteriori front probability becomes:

P_\Gamma = \prod_{x_1} \Big( \prod_{x_2} P_{R_{obj}}(I^n(x_2)) \; \prod_{x_3} P_{R_{bck}}(I^n(x_3)) \Big), \quad (8)

where x_1 ∈ Γ, x_2 ∈ R_obj(x_1), and x_3 ∈ R_bck(x_1); the two inner products correspond to P(I^n | R_obj, Γ) and P(I^n | R_bck, Γ), respectively. The maximum a posteriori (MAP) estimate, Γ̂^n, of the front Γ^n is found by maximizing the probability P_Γ over the subsets Γ ∈ Ω, where Ω is the space of all possible object boundaries. Based on Eq. 8, the MAP estimate for Γ can be written as the MAP estimate of the front based on the subregions. Thus, we have the following tracking scheme:

\hat{\Gamma}^n = \arg\max_{\Gamma \in \Omega} \prod_{x_1} \Big( \prod_{x_2} P_{R^\Gamma_{obj}}(I^n(x_2)) \; \prod_{x_3} P_{R^\Gamma_{bck}}(I^n(x_3)) \Big), \quad (9)

where x_1 ∈ Γ, x_2 ∈ R^Γ_obj(x_1), and x_3 ∈ R^Γ_bck(x_1). In Eq. 9, R^Γ_α(x_1), x_1 ∈ Γ, defines the band region with respect to the front variable x_1, such that R^Γ_α(x_1) ⊂ R^Γ_α ⊂ R_α.
This new region definition based on the front serves both as a boundary constraint (as in boundary-based schemes [3], [6], [13]) and as a region constraint (similar to [10], [12]). The advantages of using the band around the boundary, compared to using the complete region, are:

- the boundary search space is reduced;
- since the region boundary is a closed curve, pixels that do not belong to the object but lie inside the boundary are not considered in the estimation;
- the effect of the random noise process on the front estimation step is minimized; and
- it intelligently fuses the boundary-based and region-based methodologies into one framework.

Figure 3: Subregions around the front Γ defined by rectangles.

3.1 Energy Functional

A convenient way of converting a MAP estimation into a minimization problem is to take the negative log-likelihood of the probabilities. Thus, the tracking scheme proposed in Eq. 9 can be formulated as a minimization problem with the energy functional:

E(\Gamma) = \oint_{x_1 \in \Gamma} \Big( \underbrace{\int_{x_2 \in R^\Gamma_{obj}(x_1)} \Psi_{obj}(x_2)\, dx_2}_{E_A} + \underbrace{\int_{x_2 \in R^\Gamma_{bck}(x_1)} \Psi_{bck}(x_2)\, dx_2}_{E_B} \Big)\, dx_1, \quad (10)

where Ψ_obj(x) = −log P_{R_obj}(I^n(x)) and Ψ_bck(x) = −log P_{R_bck}(I^n(x)). However, the integrals defined on the plane by the subregion around the front Γ are not definite. For the sake of practicality and implementability, the subregion where these integrals are defined, R^Γ(x_i) ⊂ R_obj ∪ R_bck, x_i ∈ Γ, is selected to be the square region [−m, m] × [−m, m] centered around x_i, as shown in Fig. 3, where m is the limit of the squares. The pixels inside the square region are defined for each front position (f(s), g(s)) by x = x̃ + f(s) and y = ỹ + g(s), where f and g are the parametric curve functions with parameter s, x̃, ỹ ∈ [−m, m], and (x, y) ∈ R^Γ_obj ∪ R^Γ_bck. To compute the object and background probabilities inside the rectangular region, we define an indicator function of the form:

\mathbf{1}^\Gamma_\alpha\{x \in R_\alpha\} = \begin{cases} 1 & x \in R_\alpha, \\ 0 & \text{otherwise.} \end{cases}

Using the definitions outlined above, the functional E_A in Eq. 10 can be converted to its new form by changing the variables from (x_1, y_1) and (x_2, y_2) to s and (x̃, ỹ), respectively:

E_A(s) = \int_{-m}^{m} \int_{-m}^{m} \Psi_{obj}(\tilde{x} + f(s), \tilde{y} + g(s)) \, \mathbf{1}^\Gamma_{obj}\{\tilde{x} + f(s), \tilde{y} + g(s)\} \, |J| \, d\tilde{x}\, d\tilde{y},

where J is the Jacobian introduced by the change of variables. Because the square window only translates around the boundary, the Jacobian is 1 and is dropped. Once E_B is written following the same idea, Eq. 10 yields the tracking functional:

E = -\int_0^{l} \int_{-m}^{m} \int_{-m}^{m} \log P_{R_{obj}}(I(\mathbf{x})) \, \mathbf{1}^\Gamma_{obj}\{\mathbf{x}\} \, d\tilde{x}\, d\tilde{y}\, ds \; - \; \int_0^{l} \int_{-m}^{m} \int_{-m}^{m} \log P_{R_{bck}}(I(\mathbf{x})) \, \mathbf{1}^\Gamma_{bck}\{\mathbf{x}\} \, d\tilde{x}\, d\tilde{y}\, ds, \quad (11)

where the first and second terms are the a posteriori object and background log-likelihoods, l is the length of the front, x = (x, y)^T, and 1^Γ_bck = 1 − 1^Γ_obj. The functional proposed in Eq. 11 has similarities with the functionals proposed by Zhu and Yuille [10], Mansouri [11], and Paragios and Deriche [5], [12]:

- The proposed objective function generalizes to the two-region case of [10] by increasing m until the square window covers the object and background regions. In this case, the front-dependent subregions R^Γ_α of Eq. 10 become the regional terms R_α.
- Replacing the probabilities of Eq. 11 with a Gaussian of the temporal gradients results in the motion detection step of [5] (similar to the tracking framework of [4]). For the tracking step, the same probabilities can be replaced by a Gaussian of the image gradient (similar to the segmentation framework of [6]).
- Limiting the discriminant analysis to the pixels inside the region, setting the probabilities of Eq. 11 to

P_\alpha(x) = \max_{z : \|z\| \le m} e^{-\frac{(I^{n-1}(x) - I^n(x+z))^2}{2\sigma^2}},

and dropping the plane integrals (due to the max operation) results in the tracking scheme proposed in [11].
- Similarly, in Eq. 11, the convex combination of a boundary force (defined in a small boundary neighborhood) and a regional force (similar to [10]) proposed in [5] is unified through the regional probabilities attracted by the front which defines the regions.

4 Curve Evolution

An initial curve, Γ_0, can be defined as a front evolving in the direction of its normal vector field with speed F by introducing an additional parameter t, which constructs a family of curves [18]. The curve Γ can be represented in parametric form, implicitly, etc. The parametric form has problems during the evolution phase due to the stability of the solution, singularities, and the inability to handle topology changes. These limitations can be dealt with by using implicit representations. Among the various implicit representations, the level set is numerically the most stable [18]. In a level set scheme, the curve is implicitly represented on a fixed discrete grid by a function φ : R² → R with φ(Γ(s, t), t) = 0, and the regions inside and outside the curve are defined by φ(x, t) < 0 and φ(x, t) > 0, respectively. Thus, evolving the curve corresponds to updating the level set function φ, which defines new zero crossings. The deformation of φ as the front evolves is given by

\frac{\partial \phi(\Gamma(s,t), t)}{\partial t} = F(x, t) \, |\nabla \phi(\Gamma(s,t), t)|, \quad (12)

where F is the speed in the normal direction, n, of the curve.

4.1 Minimizing the Energy Functional

Evolution of the curve using the Bayesian approach given in Eq. 11 can be related to Eq. 12 by defining the speed F so as to maximize the front probability. This leads to moving the front in the steepest ascent direction with respect to the front Γ.
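Under the sign convention used here (φ < 0 inside the contour, positive speed expanding), a bare-bones grid update of the level set evolution might look as follows. The forward-Euler step and central differences via `np.gradient` are our simplifications; a practical implementation would use the upwind, narrow band scheme of [18].

```python
import numpy as np

def evolve_level_set(phi, speed, dt=0.5, steps=10):
    # Discrete deformation phi^{n} = phi^{n-1} - dt * F * |grad phi^{n-1}|:
    # phi   : 2-D level set function, negative inside the contour
    # speed : 2-D speed field F in the normal direction
    for _ in range(steps):
        gy, gx = np.gradient(phi)
        phi = phi - dt * speed * np.sqrt(gx ** 2 + gy ** 2)
    return phi
```

With F = +1 everywhere the φ < 0 region grows (an expanding force); F = −1 shrinks it, matching the shrinking/expanding interpretation of the speed terms.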
Taking the functional derivative of the energy functional (computing the Euler-Lagrange equations and using Green's theorem [6], [10]), the motion equation for the front is given by:

\frac{d\Gamma}{dt} = \Big( \int_{-m}^{m} \int_{-m}^{m} \log P_{R_{obj}}(I(\mathbf{x})) \, \mathbf{1}^\Gamma_{obj}\{\mathbf{x}\} \, d\tilde{x}\, d\tilde{y} \Big) \vec{n}_{obj} + \Big( \int_{-m}^{m} \int_{-m}^{m} \log P_{R_{bck}}(I(\mathbf{x})) \, \mathbf{1}^\Gamma_{bck}\{\mathbf{x}\} \, d\tilde{x}\, d\tilde{y} \Big) \vec{n}_{bck}, \quad (13)

where n_obj is the normal of the object region and n_bck is the normal of the background region. Note that the counterclockwise curves of the object and background regions have normals in opposite directions, so n_obj = −n_bck. An interesting observation about the speed function proposed in Eq. 13 is that it is a differential equation containing integral terms; had we followed a different approach for defining the neighborhood, the resulting energy functional might have led to unstable schemes. A closer look at Eq. 13 shows that it indeed encodes the speed of the front point, x, in the normal direction n_obj. Thus, the deformation of φ given in Eq. 12, upon discretization, is

\phi^n_{x,y} = \phi^{n-1}_{x,y} - \Delta t \, F_{x,y} \, |\nabla \phi^{n-1}_{x,y}|,

where

F_{x,y} = \sum_{i=-m}^{m} \sum_{j=-m}^{m} \log P_{R_{obj}}(I_{x'}) \, \mathbf{1}^\Gamma_{obj}\{x'\} \; - \; \sum_{i=-m}^{m} \sum_{j=-m}^{m} \log P_{R_{bck}}(I_{x'}) \, \mathbf{1}^\Gamma_{bck}\{x'\}, \quad (14)

with x' = (x + i, y + j). Note that the negative and positive terms correspond to the shrinking and expanding forces and arise from the opposite normal directions discussed for Eq. 13. We can interpret this speed function as follows:

- When the boundary hypothesis for a pixel x ∈ Γ is correct, so that the square region defined by x is correctly partitioned, the motion of the front pixel is 0.
- If the front is not correctly positioned, the background (object) probability of the front pixel will be higher than the object (background) probability, and the speed in the normal direction will be negative (positive).

5 Experiments

To demonstrate the performance of the proposed tracking approach, we have tested our algorithm on various sequences captured with infrared and electro-optical cameras. We implemented Eq. 14 using the narrow band level set implementation outlined in [18]. The algorithm is initialized with the boundaries of the objects in the first frame. The selection of m (the limit of the square region in Eq. 14) is not sequence dependent and is fixed at 6 for all sequences. In contrast to [11], the maximum range of motion is not constrained. The algorithm runs at different frame rates depending on the motion and the size of the objects. For the video sequences, we refer the reader to the media file submitted with this paper. In Fig. 4a, results are shown for the tennis player sequence captured with a camcorder. The sequence is composed of 940 frames, and the player exhibits highly non-rigid motion.
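The discrete speed of Eq. 14 can be sketched as below. The per-pixel log-probability maps and the inside/outside partition mask are assumed to be precomputed from the fused color and texture models; the names and the toy data are ours.

```python
import numpy as np

def front_speed(x, y, log_p_obj, log_p_bck, inside, m=6):
    # Eq. 14 at front pixel (x, y): object log-likelihoods summed over
    # window pixels hypothesized inside the contour, minus background
    # log-likelihoods over those hypothesized outside. Zero when the
    # window is correctly partitioned under well-matched models; negative
    # (shrink) when the hypothesis reaches into the background.
    ys = slice(max(y - m, 0), y + m + 1)
    xs = slice(max(x - m, 0), x + m + 1)
    e_obj = float((log_p_obj[ys, xs] * inside[ys, xs]).sum())
    e_bck = float((log_p_bck[ys, xs] * ~inside[ys, xs]).sum())
    return e_obj - e_bck
```

The paper fixes m = 6 for all sequences, which we mirror in the default argument.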
The texture of the player and the background are very similar in this particular case, and the resulting inside and outside probabilities in Eq. 14 are both very close to 0.5, so only the color cue is effectively used. The performance of the algorithm throughout the sequence is very good, and the player is correctly tracked, in contrast to the object and background segmentation method proposed in [17], where the inter-frame object segmentation had problems. In another experiment, shown in Fig. 4b, the approach is tested on a low-quality sequence obtained from a stationary surveillance camera. The sequence is composed of 300 frames, and the tracked person undergoes highly non-rigid motion. Due to the similarities between the background and object models, both features (color and texture) contribute to the tracking function given in Eq. 14. The object contour is correctly tracked throughout the sequence. As shown in Fig. 4c, we also extended the method to track multiple objects. To the best of our knowledge, this is the first approach to track multiple objects in the context of active contours; however, due to space limitations, the discussion is excluded. Currently, we do not handle occlusions, but for the given sequence the objects are correctly tracked after the occlusion. The algorithm was also tested on IR sequences. In addition to the problems due to intensity and texture similarity of the targets, the targets are sometimes not visible or distinguishable due to background clutter. In Fig. 5, we show two different closing sequences, in which the target size increases. Since the imagery is of low quality and atmospheric effects cause changes in the features, in contrast to the EO imagery we used dynamic models that reflect the changes in the scene. The dynamic models are obtained by updating the models for the color and texture features at every frame using the a priori models.
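The paper does not specify the update rule for these dynamic models; one common choice, sketched here under that assumption, is an exponential blend of the a priori model with the feature histogram observed in the current frame (the function name and alpha are ours).

```python
import numpy as np

def update_model(prior_hist, frame_hist, alpha=0.1):
    # Blend the a priori feature histogram with the histogram from the
    # current frame; alpha controls how fast the model adapts to scene
    # changes, then renormalize to a valid distribution.
    updated = (1.0 - alpha) * prior_hist + alpha * frame_hist
    return updated / updated.sum()
```

A small alpha keeps the model stable under clutter; a larger alpha tracks the atmospheric changes mentioned above more aggressively.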
6 Conclusions

We proposed a contour tracking method for non-rigid objects motivated by a Bayesian framework. The color and texture features of objects are modeled using kernel density estimates and mixture models, respectively. The color and texture features are then combined in an independent opinion polling strategy, where the features contribute to the energy functional according to their discrimination power. The energy functional is minimized by the level set method, which represents the contour implicitly. The level set implementation of the contour evolution supports topology changes for objects, which can split or merge. The results presented show the robustness of the tracking algorithm for EO and IR imagery. More results are presented in the accompanying media file submitted with this paper.

References

[1] C.R. Wren, A. Azarbayejani, T. Darrell, and A. Pentland, Pfinder: real-time tracking of the human body, PAMI, 19/7.
[2] C. Stauffer and W.E.L. Grimson, Learning patterns of activity using real time tracking, PAMI, 22/8.
Figure 4: Contour tracking results: (a) a tennis player in a sequence captured by a moving camera; (b) a person from a stationary surveillance camera; (c) occluding multiple objects. We suggest that reviewers view the results using a color printout.

Figure 5: Contour tracking of targets in closing infrared sequences taken from an airborne vehicle. (a) Sequence RNG14 1, every 30th frame; (b) sequence RNG16 18, every 30th frame. Video sequences are available in the accompanying media file.

[3] M. Kass, A. Witkin, and D. Terzopoulos, Snakes: active contour models, IJCV, vol. 1.
[4] S. Besson, M. Barlaud, and G. Aubert, Detection and tracking of moving objects using a level set based method, in ICPR, 2000, v. 3.
[5] N. Paragios and R. Deriche, Geodesic active contours and level sets for the detection and tracking of moving objects, PAMI, 22/3.
[6] V. Caselles, R. Kimmel, and G. Sapiro, Geodesic active contours, in ICCV, 1995.
[7] D. Comaniciu, V. Ramesh, and P. Meer, Real-time tracking of non-rigid objects using mean shift, in CVPR, 2000, v. 2.
[8] A.D. Jepson, D.J. Fleet, and T.F. El-Maraghi, Robust online appearance models for visual tracking, in CVPR.
[9] M. Bertalmio, G. Sapiro, and G. Randall, Morphing active contours, PAMI, 22/7.
[10] S.C. Zhu and A. Yuille, Region competition: unifying snakes, region growing, and Bayes/MDL for multiband image segmentation, PAMI, 18/9.
[11] A.R. Mansouri, Region tracking via level set PDEs without motion computation, PAMI, 24/7.
[12] N. Paragios and R. Deriche, Geodesic active regions and level set methods for supervised texture segmentation, IJCV, 46/3.
[13] F. Leymarie, Tracking deformable objects in the plane using an active contour model, PAMI, 15/6.
[14] M.J. Jones and J.M. Rehg, Statistical color models with application to skin detection, IJCV, 46/1.
[15] W.T. Freeman and E.H. Adelson, The design and use of steerable filters, PAMI, 13/9.
[16] P. Torr, R. Szeliski, and P. Anandan, An integrated Bayesian approach to layer extraction from image sequences, PAMI, 23/3.
9 [17] E. Hayman and J. Eklundh, Probabilistic and voting approaches to cue integration for figure ground segmentation, in ECCV, 2002, pp [18] J.A. Sethian, Level set methods: evolving interfaces in geometry,fluid mechanics computer vision and material sciences, Cambridge University Press,