Online visual tracking by integrating spatio-temporal cues


Published in IET Computer Vision. Received on 12th September 2013, revised on 27th March 2014, accepted on 14th May 2014.

Online visual tracking by integrating spatio-temporal cues

Yang He, Mingtao Pei, Min Yang, Yuwei Wu, Yunde Jia
Beijing Laboratory of Intelligent Information Technology, Beijing Institute of Technology, Beijing, People's Republic of China
E-mail: peimt@bit.edu.cn

Abstract: The performance of online visual trackers has improved significantly, but designing an effective appearance-adaptive model is still a challenging task because errors accumulate while the model is updated with newly obtained results, which causes tracker drift. In this study, the authors propose a novel online tracking algorithm that integrates spatio-temporal cues to alleviate the drift problem. The authors' goal is to develop a more robust way of updating an adaptive appearance model. The model consists of multiple modules called temporal cues, and these modules are updated in an alternating way that keeps both the historical and the current information of the tracked object in order to handle drastic appearance changes. Each module is represented by several fragments called spatial cues. To incorporate all the spatial and temporal cues, the authors develop an efficient cue quality evaluation criterion that combines appearance and motion information. The tracking results are then obtained by a two-stage dynamic integration mechanism. Both qualitative and quantitative evaluations on challenging video sequences demonstrate that the proposed algorithm performs favourably against state-of-the-art methods.

1 Introduction

As one of the fundamental problems in the computer vision community, visual tracking has received much attention in recent decades. The goal of online visual tracking is to estimate the state of an object in each frame given only its initial state. Designing a robust online tracker remains challenging because the object's appearance may change drastically owing to illumination variations, occlusions, pose changes, motion blur and so on.

Numerous adaptive tracking algorithms have been developed to address appearance variations [1-5]. In these methods, the appearance model is updated with the tracker's own results. As the appearance model is updated, errors in the tracking results accumulate, which can degrade the appearance model and cause drift [6]. If a tracker updates its appearance model slowly, it keeps more historical information and is less likely to drift, but it cannot follow an object whose appearance changes drastically, and vice versa. A suitable appearance update scheme (or strategy) is therefore crucial for tracking an object.

Many attempts have been made to update an appearance model effectively, for example multiple instance learning (MIL) [7], semi-supervised learning [8] and P-N learning [9, 10]. However, these methods update the appearance model as soon as newly obtained results are available, and hence fail to keep the historical appearance information. In practice, both historical and current information are helpful for tracking an object. For example, current information plays the main role when tracking a moving person without occlusion, but trackers become confused when the tracked person is severely occluded by another person, and may lose the target even when the tracked person reappears in the scene. This happens because the appearance model has been degraded by the occluded tracking results.
Apparently, one can easily locate the person with an appearance model built before he was occluded. It is easy to generalise from this situation: retaining the historical information of an object's appearance benefits tracking under drastic and dynamic appearance changes. Based on this observation, we aim to develop a robust appearance model that treats the historical information of the object appearance as a temporal cue. To further improve the tracking performance, we use a fragments-based representation [11, 12] to model the object and consider the fragments of the object as a spatial cue. We thus formulate visual tracking as a spatio-temporal cue integration problem.

In this paper, we propose a discriminative visual tracking algorithm based on spatio-temporal cue integration. Tensor subspaces are adopted to model the target object and the background; with the background information, our tracker achieves more robust results. For the target object, we present an adaptive spatio-temporal appearance model consisting of a set of adaptive modules and a stable module. Each module is represented by four tensor subspaces of the local features inside the object region. In addition, we design a simple alternating updating strategy: only one module is updated with the new tracking results while the other modules remain unchanged, and the modules are updated alternately.

The strategy makes our tracker retain the object's appearance at different moments, so that it can handle occlusion and drift effectively. Finally, we develop an efficient cue quality evaluation criterion that combines appearance and motion information. Our criterion rests on the principle that a reliable cue should offer a result with high similarity to the appearance model, and should also give a new location that keeps the tracking trajectory smooth. Our tracker then operates within the Bayesian inference framework, in which the observation likelihood is calculated by spatio-temporal cue integration under this criterion.

The rest of this paper is organised as follows. In Section 2, the most closely related tracking methods are reviewed. In Section 3, the proposed spatio-temporal appearance model is described. In Section 4, our tracking algorithm is presented. Experimental results of our tracker on challenging video sequences, together with discussions, are reported in Section 5, and Section 6 concludes the paper.

2 Related work

We review the most closely related methods from three aspects: object appearance modelling, appearance model updating, and information fusion for tracking. Thorough discussions of this topic can be found in [13-15].

Object appearance modelling constructs a visual representation or a statistical model of the object. It can be roughly classified into two types: generative [3, 4, 11, 16] and discriminative [7, 8, 10, 17-19]. Generative methods usually formulate tracking as searching for the image region most similar to a template, using, for example, histograms [11, 12], linear subspaces [3, 4, 20], Riemannian subspaces [21], the 3D-DCT [22] or sparse representations [23, 24]. Discriminative methods aim to separate the target from the background by training an object/background classifier, for example with boosting [8, 18], MIL [7] or SVMs [25]. Adam et al. [11] presented a fragments-based representation with intensity histograms to handle occlusion. Ross et al. [3] proposed an adaptive tracking algorithm that is robust to large variations in pose, illumination and scale using incremental principal component analysis. Hu et al. [4] proposed an incremental tensor decomposition based representation which is more effective than vector-based subspace learning [3]. Recently, Wang et al. [26, 27] modelled the object appearance as the sum of PCA basis and error terms, where the error terms are represented as a linear combination of trivial templates [26], or as the sum of Gaussian and Laplacian noise [27].

Appearance model updating focuses on how to use newly obtained results to update the object model. Grabner et al. [8] dealt with the drift problem by a semi-supervised learning procedure in which labelled examples come only from the first frame and subsequent training examples are left unlabelled. Babenko et al. [7] applied MIL, which bootstraps the classifier using a positive bag consisting of several image patches instead of several positive examples; the MIL tracker is robust to slight inaccuracies because it reduces the degradation of the classifier. Kalal et al. [10] introduced P-N learning to improve classifier performance; this learning process is guided by positive (P) and negative (N) constraints which restrict the labelling of the unlabelled set. Wang et al. [26] adopted an occlusion detection scheme that explores the coefficients of the trivial templates.
They updated the appearance model according to the detected occlusion, choosing among three kinds of operation: full update, partial update and no update.

Some algorithms utilise multiple complementary visual cues or trackers to improve tracking performance. Kwon and Lee [5] described visual tracking decomposition, where a set of basic observation models and basic motion models is used to construct many compound trackers, and various appearance changes are handled by fusing those compound trackers. Erdem et al. [12] partitioned the object into four non-overlapping parts and tracked the object by integrating those parts; they estimated the cue quality dynamically and updated the patches at different speeds. Fan et al. [28] modelled the appearance by two interacting subspaces of different visual cues. Nickel and Stiefelhagen [29] presented a 3D person tracker with motion, colour, detector and stereo cues. Stenger et al. [30] reported a framework to evaluate the robustness and precision of different trackers in a given scenario, and found a trade-off between precision and robustness by using multiple complementary trackers. Santner et al. [31] combined a stable template, a highly adaptive optical-flow-based mean-shift tracker and a moderately adaptive random forest classifier in a cascade to increase stability and plasticity at the same time. Li et al. [32] proposed a binary code learning method based on random forest hashing that fuses different features by reducing multiple feature descriptors to a single binary code vector. To find a trade-off between adaptivity and stability, Lu et al. [33] presented a hybrid tracking algorithm which contains three complementary modules with different updating speeds and fused those modules with a biased multiplicative criterion.

Our work is motivated by tensor subspace appearance modelling [4, 34], fragments-based representation [11, 12] and tracking with multiple cues or trackers [5, 29]. In [34], Elden and Savas solved the handwritten digit classification problem using the higher order singular value decomposition; here, we extend their work to incremental learning for online visual tracking. In contrast to the incremental tensor subspace tracking method [4], we flatten an ith-order tensor only along mode i, which is more appropriate for visual tracking because of its lower computational cost. Compared with the fragments-based methods [11, 12], our tracker updates all the fragments at the same speed during tracking, and is thus more robust to variations in pose, view, illumination and background. Distinct from the multi-feature trackers [5, 29], we focus on integrating multiple cues at different moments and positions (the features of the cues are the same) rather than obtaining a powerful representation from multiple complementary features.

3 Adaptive spatio-temporal appearance model via tensor subspace

3.1 Spatio-temporal appearance model and alternate updating strategy

Our spatio-temporal appearance model contains multiple adaptive modules (termed M_1 to M_n). These modules are the temporal cues of our model. Each module consists of four local representations (up, down, left and right), which are the spatial cues of our model. We design an alternate updating strategy to update the model.

The spatio-temporal appearance model and its updating strategy are presented in Fig. 1. The alternate updating strategy is simple but effective, and makes our tracker robust to drift: only one module is updated with the newly added data while the others stay fixed. Concretely, if module M_i (i < n) is updated this time, then module M_{i+1} will be updated next time. Moreover, the alternate updating strategy can easily be adopted by other online visual tracking algorithms; in the experimental section, we test the strategy on other online trackers, and the results are promising. To further enhance robustness to drift, we add a stable module to our appearance model. The stable module is initialised at the beginning and stays fixed until the end of tracking.

The advantages of our model can be summarised as follows. Benefitting from the spatial and temporal cues, our method handles occlusion effectively: the spatial cues let it track an object under partial occlusion, and the temporal cues help it track an object under heavy occlusion. The alternate updating strategy lets our tracker keep the appearance model at different moments. First, there is always one module being updated to adapt to new appearance variations. Second, if the current tracking results are inaccurate, they affect only one module; the alternate updating strategy therefore helps to alleviate drift. Finally, although our model is made up of multiple modules, appearance updating is not expensive because only one module is updated each time.

3.2 Object representation

We apply tensor subspaces to model each part of a module. An image part is regarded as a second-order tensor, and all the images are stacked into a third-order tensor. Denote a third-order tensor by $\mathcal{A} \in \mathbb{R}^{N_1 \times N_2 \times N_3}$, constructed from $N_3$ images of size $N_1 \times N_2$. It can be decomposed by the higher order singular value decomposition (HOSVD; see [35] for details):

$$\mathcal{A} = \mathcal{V} \times_1 U_1 \times_2 U_2 \times_3 U_3 = \sum_{j=1}^{N_3} \big( \mathcal{V}(:,:,j) \times_1 U_1 \times_2 U_2 \big) \times_3 U_3 \quad (1)$$

Let $S_j = \mathcal{V}(:,:,j) \times_1 U_1 \times_2 U_2$; the decomposition can then be written as

$$\mathcal{A} = \sum_{j=1}^{N_3} S_j \times_3 U_3 = \mathcal{S} \times_3 U_3 \quad (2)$$

where $S_i$ and $S_j$ are orthogonal for $i \neq j$ [35], and $\mathcal{S}(:,:,i) = S_i$. Equation (2) is called the mode-3 tensor decomposition in this paper. Given an image $I$ and a subspace $\{S_1, S_2, \ldots, S_n\}$, the image can be reconstructed, and the reconstruction error (RE) is defined as

$$\mathrm{RE} = \Big\| I - \sum_{i=1}^{n} \langle I, S_i \rangle S_i \Big\|^2 \quad (3)$$

where $\langle \mathcal{A}, \mathcal{B} \rangle$ is the scalar product of two tensors $\mathcal{A}$ and $\mathcal{B}$.

We extend the decomposition to incremental learning, which we call incremental mode-3 tensor subspace learning, for online visual tracking. Denote $\mathcal{A} = \mathcal{S} \times_3 U$, where $\mathcal{A} = \{A_1, A_2, \ldots, A_m\}$, $\mathcal{S} = \{S_1, S_2, \ldots, S_N\}$ and $U \in \mathbb{R}^{m \times N}$. Let $\mathcal{X} = \{X_1, X_2, \ldots, X_n\}$ be the additional data, which can be decomposed as $\mathcal{X} = \tilde{\mathcal{S}} \times_3 \tilde{U}$; this can be done by mode-3 unfolding and QR decomposition. The augmented tensor $[\mathcal{A}; \mathcal{X}]$ can then be expressed as

$$[\mathcal{A}; \mathcal{X}] = [\mathcal{S} \; \tilde{\mathcal{S}}] \times_3 \begin{bmatrix} U & 0 \\ 0 & \tilde{U} \end{bmatrix} \quad (4)$$

Performing the mode-3 tensor decomposition on $[\mathcal{S} \; \tilde{\mathcal{S}}]$, i.e. $[\mathcal{S} \; \tilde{\mathcal{S}}] = \mathcal{S}' \times_3 U'$, the augmented tensor can be decomposed as

$$[\mathcal{A}; \mathcal{X}] = \mathcal{S}' \times_3 \left( \begin{bmatrix} U & 0 \\ 0 & \tilde{U} \end{bmatrix} U' \right) \quad (5)$$

The incremental mode-3 tensor subspace learning is summarised in Algorithm 1 (see Fig. 2).
Fig. 1 Structure of our spatio-temporal appearance model and its updating strategy
Each block is a module; there are n adaptive modules (M_1, M_2, ..., M_n) as well as a stable module M_S
Each module consists of four parts of local representations
The arrows indicate that the modules are updated alternately
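To make the decomposition concrete, the following is a minimal NumPy sketch of the mode-3 decomposition in (2), the reconstruction error in (3) and the incremental update in (4) and (5). The function names are ours, the sketch uses an SVD of the mode-3 unfolding (the paper mentions QR for the new data) and omits the forgetting factor used in the experiments; it is an illustration under these assumptions, not the authors' implementation.

```python
import numpy as np

def mode3_decompose(A, n_basis):
    # Mode-3 decomposition of an image stack A of shape (N1, N2, N3):
    # unfold along mode 3, truncate an SVD, and fold the left factors
    # back into orthogonal basis slices S_i, so that A ~ S x_3 U (eq. (2)).
    n1, n2, n3 = A.shape
    A3 = A.reshape(n1 * n2, n3, order='F')        # columns are vectorised images
    Q, sv, Vt = np.linalg.svd(A3, full_matrices=False)
    k = min(n_basis, sv.size)
    S = Q[:, :k].reshape(n1, n2, k, order='F')    # orthonormal slices S_1..S_k
    U = Vt[:k].T * sv[:k]                         # mode-3 factor, rows index images
    return S, U

def reconstruction_error(I, S):
    # RE = || I - sum_i <I, S_i> S_i ||^2  (eq. (3)).
    coeffs = np.tensordot(S, I, axes=([0, 1], [0, 1]))   # <I, S_i> per slice
    recon = np.tensordot(S, coeffs, axes=([2], [0]))
    return float(np.sum((I - recon) ** 2))

def incremental_update(S, U, X, n_basis):
    # Incremental mode-3 update, following the structure of (4) and (5):
    # decompose the new data X, stack the old and new bases, and
    # re-orthogonalise the stacked basis with another mode-3 decomposition.
    S_new, U_new = mode3_decompose(X, n_basis)
    S2, U2 = mode3_decompose(np.concatenate([S, S_new], axis=2), n_basis)
    # Block-diagonal mode-3 factor of [A; X] (eq. (4)), folded in via (5).
    m, N = U.shape
    n, Np = U_new.shape
    blk = np.zeros((m + n, N + Np))
    blk[:m, :N] = U
    blk[m:, N:] = U_new
    return S2, blk @ U2
```

In the tracker, each of the four parts of every module would carry its own (S, U) pair, refreshed through such an update when that module's turn comes.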

Fig. 2 Algorithm 1: incremental mode-3 tensor subspace learning

4 Online visual tracking by spatio-temporal cue integration

We carry out tracking in the probabilistic Bayesian inference framework [36], a Markov model with a hidden state variable $Z_t$ [3]. The state variable $Z_t$ denotes the affine motion parameters of the object at time $t$. Given an observation set $O_t = \{o_1, \ldots, o_t\}$ up to time $t$, the hidden state variable $Z_t$ is estimated by Bayesian inference:

$$p(Z_t \mid O_t) \propto p(o_t \mid Z_t) \int p(Z_t \mid Z_{t-1}) \, p(Z_{t-1} \mid O_{t-1}) \, \mathrm{d}Z_{t-1} \quad (6)$$

and the optimal object state $Z_t^{*}$ is determined by solving the maximum a posteriori (MAP) problem

$$Z_t^{*} = \arg\max_{Z_t} p(Z_t \mid O_t) \quad (7)$$

The tracking process is thus governed by the motion model $p(Z_t \mid Z_{t-1})$, which describes the correlation of the object states in consecutive frames, and the observation model (or likelihood) $p(o_t \mid Z_t)$.

The location of the object is represented by an affine image warp, so the state at time $t$ consists of six parameters, $Z_t = (x_t, y_t, s_t, \theta_t, \alpha_t, \phi_t)$, denoting the x, y coordinates of the centre location, scale, rotation angle, aspect ratio and skew direction at time $t$, respectively. The motion model is

$$p(Z_t \mid Z_{t-1}) = \mathcal{N}(Z_t; Z_{t-1}, \Sigma) \quad (8)$$

where $\Sigma$ is a diagonal covariance matrix whose elements are the variances of the affine parameters: $\sigma_x^2, \sigma_y^2, \sigma_s^2, \sigma_\theta^2, \sigma_\alpha^2$ and $\sigma_\phi^2$. In the particle filter framework with MAP estimation, tracking is accomplished by selecting the best candidate in the current particle set. In this inference framework, the most important component is the observation model (likelihood), which reflects the probability that a sample is the object. It requires measuring the similarity between a candidate and the object template, so the observation model is defined as

$$p(o_t \mid Z_t) \propto \exp(-D) \quad (9)$$

where $D$ denotes the dissimilarity (or distance); the likelihood is larger when $D$ is smaller.
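As a rough illustration of (6)-(9), the sketch below propagates particles with the Gaussian motion model and picks the MAP candidate under the exp(-D) likelihood. The function names and the omission of resampling are our simplifications, not part of the paper.

```python
import numpy as np

def propagate(particles, sigmas, rng):
    # p(Z_t | Z_{t-1}) = N(Z_t; Z_{t-1}, Sigma) with diagonal Sigma (eq. (8)).
    # particles: (n, 6) affine states (x, y, scale, rotation, aspect, skew).
    return particles + rng.normal(scale=sigmas, size=particles.shape)

def map_estimate(particles, dissimilarity):
    # Likelihood p(o_t | Z_t) ~ exp(-D) (eq. (9)); MAP selection as in (7).
    D = np.array([dissimilarity(z) for z in particles])
    best = int(np.argmin(D))   # smallest dissimilarity = largest likelihood
    return particles[best], np.exp(-D)
```

Here `dissimilarity` stands for the joint dissimilarity D of Section 4.1, evaluated on the image patch warped out by the candidate state.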

4.1 Cue quality evaluation and dynamic integration

The key issue in multi-cue integration tracking is estimating the reliability of each cue. We develop an efficient cue quality evaluation criterion based on the following principles: a good cue should make the current tracking result have low dissimilarity to the appearance model, and a good cue should give a new location that keeps the tracking trajectory smooth. Locations in adjacent frames should obey the motion model, and a Gaussian distribution is used as the motion model; in other words, when two candidates have the same dissimilarity to the appearance model, our method selects the one closer to the previous location. We use the RE as the dissimilarity measurement, so we also use the RE to evaluate the cue quality, combining it with the motion information. Let

$$\frac{\mathrm{RE}}{p(Z'_t \mid Z'_{t-1})} \quad (10)$$

be the cue quality score; the cue is more reliable when this score is smaller. The most important parameters of an object state are the centre location $(x_t, y_t)$, so only the centre location parameters are considered in cue quality evaluation. In (10), $Z'_t = (x_t, y_t)$ and the probability is $p(Z'_t \mid Z'_{t-1}) = \mathcal{N}(Z'_t; Z'_{t-1}, \Sigma')$, where

$$\Sigma' = \begin{bmatrix} a\,\sigma_x^2 & 0 \\ 0 & a\,\sigma_y^2 \end{bmatrix} \quad (11)$$

In (11), $a$ is a parameter which controls the influence of motion in the cue quality evaluation, and $\sigma_x^2, \sigma_y^2$ are the variances in (8); the larger $a$ is set, the smaller the influence of the motion information.

Fig. 3 Illustration of our cue integration steps
D_i is the dissimilarity function calculated by the ith cue and w_i is the weight of the ith cue
First, obtain the dissimilarity function using (3) and the MAP-estimated result of each cue
Second, evaluate each cue's quality score using (12) and calculate the weights using (13)
Finally, calculate the joint dissimilarity and obtain the final result

Fig. 3 illustrates the cue integration procedure. For each cue, we calculate the dissimilarity of each candidate and obtain a result by MAP estimation. We then calculate the quality score of each estimated result using (10); the quality score of the ith cue is

$$Q_i = \frac{\mathrm{RE}_i}{p(Z'_{t,i} \mid Z'_{t-1})} \quad (12)$$

where $Z'_{t,i}$ is the centre location of the MAP result of the ith cue at frame $t$. The weights are updated by

$$w^i_t = (1-\tau)\, w^i_{t-1} + \tau \, \frac{\exp(-k Q_i)}{\sum_{n=1}^{N} \exp(-k Q_n)} \quad (13)$$

The weight update factor $\tau$ controls the speed of adaptation. The parameter $k$ controls the smoothness: when $k = 0$, the update is equivalent to the average scheme, and when $k = +\infty$, it reduces to the choose-max scheme. To avoid setting this parameter separately for different videos, $k$ is adjusted so that $k \max(Q_1, Q_2, \ldots, Q_N) = \lambda$, where $\lambda$ is a constant. (In the experimental section, we use $\lambda_s$ and $\lambda_t$ to denote this constant for the spatial and temporal cues, respectively.)

The joint likelihood is computed by a two-stage dynamic integration mechanism. Let $D_{ij}$ ($\mathrm{RE}_{ij}$) be the dissimilarity of the jth patch in the ith module. The first stage integrates all the patches (spatial cues) of a module; the weighted dissimilarity of the ith module is

$$D_i = \sum_{j=1}^{4} w_{ij} D_{ij} \quad (14)$$

The second stage integrates the modules (temporal cues); the weighted dissimilarity of our model is

$$D = \sum_{i=1}^{N} w_i D_i \quad (15)$$

Finally, the joint likelihood is calculated using (9).

4.2 Overall tracking algorithm

The object appearance is represented with the aforementioned tensor-subspace-based spatio-temporal model. In the initialisation step, we collect positive examples around the initial bounding box with slight movements to form $N_m$ modules, which are identical at the beginning. To make use of background information, we also collect negative examples in the area around the current tracker location to form a background subspace. Fig. 4 illustrates how the training examples are sampled. The object appearance model is updated every $W$ frames, and the background subspace is updated every frame. The overall tracking algorithm is described in Algorithm 2 (see Fig. 5).
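The criterion and two-stage integration of Section 4.1, together with the alternate module update of Sections 3.1 and 4.2, can be sketched as follows. The variable names, the explicit Gaussian density and the scheduling helper are our own choices, offered as a sketch rather than the authors' code.

```python
import numpy as np

def cue_quality(re, z, z_prev, sig_x, sig_y, a):
    # Q = RE / p(z'_t | z'_{t-1}) over the centre location only (eqs. (10)-(12)),
    # with the widened covariance of eq. (11).
    var = a * np.array([sig_x ** 2, sig_y ** 2])
    d2 = np.sum((np.asarray(z, float) - z_prev) ** 2 / var)
    p = np.exp(-0.5 * d2) / (2.0 * np.pi * np.sqrt(var.prod()))
    return re / p

def update_weights(w_prev, Q, lam, tau):
    # Eq. (13); k is set so that k * max(Q) = lambda (lambda_s or lambda_t).
    k = lam / max(Q.max(), 1e-12)
    e = np.exp(-k * Q)
    return (1.0 - tau) * w_prev + tau * e / e.sum()

def fuse(D_patches, w_spatial, w_temporal):
    # Two-stage integration: D_i = sum_j w_ij D_ij (14), D = sum_i w_i D_i (15).
    # D_patches: (n_modules, 4) per-patch dissimilarities; w_spatial same shape.
    D_modules = (w_spatial * D_patches).sum(axis=1)
    return float((w_temporal * D_modules).sum())

def module_to_update(frame_idx, n_modules, W):
    # Alternate updating: every W frames exactly one adaptive module is
    # refreshed, cycling M_1, ..., M_n; the stable module is never touched.
    return (frame_idx // W) % n_modules
```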

5 Experiments

5.1 Data description and experimental setting

Fig. 4 Illustration of training sample collection
The left subfigure shows the positive examples collected at the first frame
In the right subfigure, the dashed bounding box marks the location of the current object, and the marked points are the centre positions of negative examples; the direction and size of the negative examples are the same as the current tracking result

To evaluate the performance of the proposed tracker, we collect 20 challenging video sequences in which the appearance changes drastically because of illumination variation, heavy occlusion, pose variation and motion blur, and compare our tracker with nine state-of-the-art trackers: IVT [3], VTD [5], MIL [7], Struck [17], TLD [10], FragT [11], AFT [12], DIKT [16] and LSST [27]. We use the source code provided by the authors and run the trackers with adjusted parameters. Our tracker is implemented in MATLAB on a PC with an Intel Core 2 Duo 3.0 GHz processor and 2.00 GB RAM; the average running speed is about 3 frames per second with 400 particles. The target candidate patches and the background image patches are resized to 32 x 32, and the number of subspace bases N_b is set to 10 in all experiments. The number of modules N_m is set to 5. The weight factor α is set to 0.1 (step 7 of Algorithm 2, Fig. 5). The forgetting factor is empirically set to 0.9. The cue weight update factor τ in (13) is set to 1. The parameters for cue quality evaluation, λ_s and λ_t (Section 4.1) and a (in (11)), are set to 5, 1 and 1, respectively. The object appearance model is updated every 5 frames and the background appearance model is updated every frame. All the parameters, except the covariance matrix of the motion model, are kept constant across all video sequences.

Fig. 5 Algorithm 2: adaptive spatio-temporal visual tracker
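For convenience, the parameter values reported above can be gathered in one place; the names below are ours, chosen for readability, and the values are as stated in Section 5.1.

```python
# Experimental settings from Section 5.1 (names are ours, values as reported).
CONFIG = {
    'n_particles': 400,
    'patch_size': (32, 32),     # target and background patches
    'n_basis': 10,              # number of subspace bases N_b
    'n_modules': 5,             # number of adaptive modules N_m
    'alpha': 0.1,               # weight factor (Algorithm 2, step 7)
    'forgetting': 0.9,          # forgetting factor
    'tau': 1.0,                 # cue weight update factor in (13)
    'lambda_s': 5.0,            # spatial cue constant (Section 4.1)
    'lambda_t': 1.0,            # temporal cue constant (Section 4.1)
    'a': 1.0,                   # motion influence in (11)
    'update_every': 5,          # object model update period, in frames
}
```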

5.2 Evaluation of our cue integration method

To evaluate the effect of the proposed cue quality evaluation criterion, we compare different cue integration schemes on four video sequences (Animal, Caviar, Woman and Football): the choose-max scheme (λ_s = λ_t = +∞), the average scheme (λ_s = λ_t = 0), Nickel's scheme [29] and our adaptive scheme.

Fig. 6 Quantitative tracking performance of our tracking algorithm using different cue integration methods on four video sequences; our adaptive integration method clearly outperforms the other fusion schemes

Fig. 7 Tracking performance comparison of three trackers using our alternate updating strategy

Fig. 8 Bar graph of tracking results on twenty video sequences

Fig. 6 plots the tracking centre error curves, showing that our adaptive scheme obtains the best tracking results. The average scheme cannot select the better cues and is therefore not robust to the heavy occlusion in Caviar and Woman. Although the tracker with the choose-max scheme successfully tracks the object before the 220th frame of the Woman sequence, it loses the object afterwards because it is sensitive to drastic appearance variations. Nickel's fusion scheme achieves better results than the average and choose-max schemes, but it is still not effective enough in our framework and cannot successfully track the objects in these four sequences, whereas our adaptive integration method obtains the most satisfactory results.

5.3 Evaluation of our alternate updating strategy

To evaluate the effectiveness of our alternate updating strategy, we test IVT and MIL with our strategy on nine sequences (Basketball, Caviar, DavidIndoor, DavidOutdoor, FaceOcc1, FaceOcc2, Girl, Jumping and Singer1), and we reduce the number of modules in our tracker to 1 (in this case the alternate updating strategy cannot be applied, and our tracker degenerates to tracking with four adaptive parts). Fig. 7 shows the comparison results. The performance of our tracker clearly decreases with only one module: although the template is divided into four parts, it cannot handle the occlusion in Caviar and DavidOutdoor effectively. In contrast, with our alternate updating strategy, IVT, MIL and our method are all significantly improved in handling occlusion (e.g. Caviar, DavidOutdoor, FaceOcc1, FaceOcc2 and Girl). The strategy also helps with other challenges such as pose variation: in the Basketball sequence all three trackers are improved, and in the Jumping sequence MIL can successfully track the object with our strategy whereas the original MIL cannot. This experiment shows that (1) the alternate updating strategy plays a key role in our tracker, and (2) the strategy helps to alleviate drift and improves tracking performance under occlusion and other challenges.

Fig. 9 Overlap rate curves of the test sequences

5.4 Effect of background information and the stable module

To further analyse our tracker, we first test it without background information; the corresponding method is termed BaseLine1. Second, we test it without the stable module introduced in Section 3.1; the corresponding method is termed BaseLine2. Fig. 8 shows the comparison results. The Caviar, Woman, Faceocc1 and Faceocc2 sequences contain heavy occlusion. Even without the background information, our tracker successfully tracks the object in these four sequences because of our appearance update strategy. However, it fails in Box and Bolt, whereas the tracker with the background subspace succeeds in these sequences. In addition, in Singer2, Basketball and Girl, the overlap rate of our method with background information is nearly 10% higher than without it. This experiment demonstrates that (1) employing background information improves robustness when the appearance changes drastically or the background is complex, and (2) even without background information, our tracker can still track objects under many drastic appearance changes, including heavy occlusion. Even without the stable module, our tracker performs well in most cases, yet adding a stable module improves it in some cases; for example, in Basketball there is little occlusion but the pose of the target athlete changes drastically, and in our observation the stable module makes the tracker more robust and accurate there.

5.5 Quantitative comparison

We compute the tracking overlap rate and centre location error between the ground-truth region $\Omega_t^{\mathrm{gt}}$ and the estimated region $\Omega_t$. The overlap rate is defined as $|\Omega_t^{\mathrm{gt}} \cap \Omega_t| / |\Omega_t^{\mathrm{gt}} \cup \Omega_t|$ [37]. The centre location error is defined as the Euclidean distance between the centre locations of $\Omega_t^{\mathrm{gt}}$ and $\Omega_t$. Fig. 9 plots the overlap rate curves. Table 1 reports the average overlap rates of the nine compared trackers and our tracker on all twenty video sequences, and Table 2 reports their average centre location errors. Our method clearly achieves favourable results against the state-of-the-art methods, especially in the presence of heavy occlusion.
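The two evaluation measures are straightforward to compute; a small sketch for axis-aligned boxes (x, y, w, h) follows, with names of our choosing.

```python
import numpy as np

def overlap_rate(gt, est):
    # PASCAL-style overlap |gt ∩ est| / |gt ∪ est| [37] for (x, y, w, h) boxes.
    x1 = max(gt[0], est[0]); y1 = max(gt[1], est[1])
    x2 = min(gt[0] + gt[2], est[0] + est[2])
    y2 = min(gt[1] + gt[3], est[1] + est[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    union = gt[2] * gt[3] + est[2] * est[3] - inter
    return inter / union

def centre_error(gt, est):
    # Euclidean distance between box centres, in pixels.
    c = lambda b: np.array([b[0] + b[2] / 2.0, b[1] + b[3] / 2.0])
    return float(np.linalg.norm(c(gt) - c(est)))
```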
5.6 Qualitative comparison

Owing to space limits, we report representative tracking results on only 12 video sequences.

5.6.1 Motion blur: In Fig. 10, we test two sequences, Animal and Jumping, whose appearance is severely degraded by motion blur. In Animal, several deer run and jump rapidly; FragT, MIL and TLD start to lose the deer head after the 12th, 13th and 11th frames, respectively. In Jumping, the face of the jumping man is blurred by his abrupt motion and by camera motion; FragT, MIL, VTD and DIKT fail to track the face after the 31st, 96th, 52nd and 16th frames, respectively, and AFT, LSST and IVT track the head inaccurately, as shown in the 96th and 276th frames. Under motion blur, AFT, IVT, LSST and our tracker are able to track the deer head and the blurred face, and our approach is the most accurate.

5.6.2 Heavy occlusion: The video sequences Caviar, Woman, DavidOutdoor, Faceocc1 and Faceocc2 are used to test performance under heavy occlusion; we show the first three sequences in Fig. 11. In Caviar, AFT, IVT, MIL, VTD, TLD and DIKT lose the object after the first occlusion, and FragT and Struck start to fail after the 265th and 124th frames. Only LSST and our tracker are able to track the object throughout this sequence, and our tracker is more accurate, as shown in the 78th, 104th and 128th frames. In Woman, the walking woman is occluded by cars and the environment changes as she walks to different places; the challenge of this sequence is the long-term partial occlusion. FragT, AFT, IVT, MIL, VTD, TLD, DIKT and LSST fail to track the woman after the 122nd, 120th, 118th, 120th, 120th, 121st, 148th and 117th frames, respectively.

Table 1 Quantitative evaluation: average overlap rate for IVT, VTD, MIL, Struck, TLD, FragT, AFT, DIKT, LSST and our tracker on the sequences Animal, Basketball, Bolt, Box, Car4, Caviar, DavidIndoor, DavidOutdoor, Dollar, Faceocc1, Faceocc2, Football, Girl, Jumping, Singer1, Singer2, Stone, Surfer, Twinnings and Woman. Bold fonts indicate the best performance, italic fonts the second best. (Numeric entries are not recoverable from this transcription.)

Table 2 Quantitative evaluation: average centre location error in pixels, for the same trackers and sequences as Table 1. Bold fonts indicate the best performance, italic fonts the second best. (Numeric entries are not recoverable from this transcription.)

In DavidOutdoor, affected by severe occlusion, pose variation and background change, FragT, AFT, IVT, MIL, VTD and TLD fail to track the pedestrian after the 46th, 181st, 193rd, 117th, 105th and 127th frames, respectively. It can be observed that our tracker is robust to occlusion and successfully tracks the object in these sequences. FragT is a typical patch-based tracking approach and is robust to occlusion, but it cannot adapt to environment and pose changes because it never updates its template.

Fig. 10 Representative tracking results on representative frames of Animal and Jumping with motion blur

Fig. 11 Representative tracking results on representative frames of Caviar, Woman and DavidOutdoor with heavy occlusion

Although MIL and TLD apply different techniques to improve the performance of the classifier, they frequently lose the object under heavy occlusion. The holistic tracker LSST applies a Laplacian noise term to handle occlusion, but it cannot track the object in Woman because of the long-term partial occlusion. Our approach handles occlusion from the perspective of spatio-temporal cue integration, and thus achieves more accurate and robust results.

5.6.3 Illumination change: The video sequences Singer1, Singer2, Car4 and DavidIndoor are used to test performance under drastic illumination variation; we show the first two sequences in Fig. 12. In both Singer1 and Singer2, the stage lighting changes drastically and the viewpoint of the camera varies. VTD, DIKT and our approach perform well on these two sequences, whereas the other methods drift to the background when drastic illumination changes occur. Although IVT and LSST work well in Singer1, they lose the target in Singer2 because of pose change in addition to the illumination variation.

5.6.4 Background distraction: The video sequences Box, Football, Dollar and Stone are used to test performance in the presence of background distraction; we show the first two sequences in Fig. 13.

Fig. 12 Representative tracking results on representative frames of Singer1 and Singer2 with drastic illumination variation

Fig. 13 Representative tracking results on representative frames of Box and Football with background distraction

Fig. 14 Representative tracking results on representative frames of Basketball, Bolt and Twinnings with pose variation

In Box, a suspended box moves quickly against a complex background, and there are also severe occlusions. AFT, IVT, MIL, VTD, Struck, DIKT and LSST fail to track the box after the 30th, 452nd, 33rd, 932nd, 615th, 330th and 305th frames, respectively. Although FragT does not lose the object, its results are unsatisfactory. Only TLD and our method work well on this sequence, and our method achieves more accurate results. In Football, there are many regions with appearances similar to the target object; FragT, AFT, IVT, TLD, DIKT and LSST are confused by another player with a helmet similar to the target's when the two players collide at the 293rd frame. VTD, MIL and our tracker overcome this problem and track the object successfully.

5.6.5 Pose variation: The video sequences Basketball, Bolt, Twinnings, Girl and Surfer are used to test performance under pose variation; we show the first three sequences in Fig. 14. Basketball and Bolt are challenging because the appearance of the target changes drastically and both sequences feature other athletes whose appearance is similar to the target's. Most trackers drift away from the target in these two sequences.

Although VTD and MIL achieve the best performance in Basketball and Bolt, respectively, our tracker is able to track the objects in both sequences. In the Twinnings sequence, suffering from out-of-plane rotation and scale variations, IVT, TLD, VTD, FragT, AFT, DIKT and LSST fail to track the box after the 256th, 288th, 133rd, 255th, 241st, 200th and 293rd frames, respectively. Only MIL, Struck and our method work well on this sequence.

6 Conclusion

This paper has presented a novel online visual tracking algorithm based on spatio-temporal cue integration within the Bayesian inference framework. The drift problem caused by straightforward model updating is alleviated by alternately updating multiple fragments-based appearance modules. To integrate the spatial and temporal cues, we have developed a simple but effective cue quality evaluation criterion that considers both appearance and motion information. Qualitative and quantitative evaluations on 20 challenging video sequences demonstrate that the proposed tracker performs favourably against several state-of-the-art methods.

7 Acknowledgment

This work was supported in part by the National Basic Research Program of China (973 Program) (2012CB720000) and the Natural Science Foundation of China (NSFC) under Grant No.

8 References

1 Comaniciu, D., Ramesh, V., Meer, P.: Kernel-based object tracking, IEEE Trans. Pattern Anal. Mach. Intell., 2003, 25, pp.
2 Jepson, A.D., Fleet, D.J., El-Maraghi, T.F.: Robust online appearance models for visual tracking, IEEE Trans. Pattern Anal. Mach. Intell., 2003, 25, pp.
3 Ross, D.A., Lim, J., Lin, R., Yang, M.: Incremental learning for robust visual tracking, Int. J. Comput. Vis., 2008, 77, pp.
4 Hu, W., Li, X., Zhang, X., Shi, X., Maybank, S., Zhang, Z.: Incremental tensor subspace learning and its applications to foreground segmentation and tracking, Int. J. Comput. Vis., 2011, 91, pp.
5 Kwon, J., Lee, K.M.: Visual tracking decomposition. Proc. IEEE Conf. Computer Vision and Pattern Recognition, San Francisco, USA, June 2010, pp.
6 Matthews, I., Ishikawa, T., Baker, S.: The template update problem, IEEE Trans. Pattern Anal. Mach. Intell., 2004, 26, pp.
7 Babenko, B., Yang, M., Belongie, S.: Robust object tracking with online multiple instance learning, IEEE Trans. Pattern Anal. Mach. Intell., 2011, 33, pp.
8 Grabner, H., Leistner, C., Bischof, H.: Semi-supervised on-line boosting for robust tracking. European Conf. Computer Vision, Marseille, France, October 2008, pp.
9 Kalal, Z., Mikolajczyk, K., Matas, J.: P-N learning: bootstrapping binary classifiers by structural constraints. Proc. IEEE Conf. Computer Vision and Pattern Recognition, San Francisco, USA, June 2010, pp.
10 Kalal, Z., Mikolajczyk, K., Matas, J.: Tracking-learning-detection, IEEE Trans. Pattern Anal. Mach. Intell., 2012, 34, pp.
11 Adam, A., Rivlin, E., Shimshoni, I.: Robust fragments-based tracking using the integral histogram. Proc. IEEE Conf. Computer Vision and Pattern Recognition, New York, USA, June 2006, pp.
12 Erdem, E., Dubuisson, S., Bloch, I.: Fragments based tracking with adaptive cue integration, Comput. Vis. Image Underst., 2012, 116, pp.
13 Yilmaz, A., Javed, O., Shah, M.: Object tracking: a survey, ACM Comput. Surv., 2006, 38, pp.
14 Zhang, S., Yao, H., Sun, X., Lu, X.: Sparse coding based visual tracking: review and experimental comparison, Pattern Recognit., 2013, 46, pp.
15 Li, X., Hu, W., Shen, C., Zhang, Z., Dick, A., Hengel, A.: A survey of appearance models in visual object tracking, ACM Trans. Intell. Syst. Technol., 2013, 4, (4), pp.
16 Liwicki, S., Zafeiriou, S., Tzimiropoulos, G., Pantic, M.: Efficient online subspace learning with an indefinite kernel for visual tracking and recognition, IEEE Trans. Neural Netw. Learn. Syst., 2012, 23, pp.
17 Hare, S., Saffari, A., Torr, P.H.S.: Struck: structured output tracking with kernels. Proc. IEEE Int. Conf. Computer Vision, 2011, pp.
18 Grabner, H., Grabner, M., Bischof, H.: Real-time tracking via on-line boosting. Proc. British Machine Vision Conf., Edinburgh, UK, September 2006, pp.
19 Zhang, S., Yao, H., Zhou, H., Sun, X., Liu, S.: Robust visual tracking based on online learning sparse representation, Neurocomputing, 2013, 100, pp.
20 Wang, Q., Chen, F., Xu, W.: Tracking by third-order tensor representation, IEEE Trans. Syst. Man Cybern. B, 2011, 41, pp.
21 Li, X., Hu, W., Zhang, Z., Zhang, X., Zhu, M., Cheng, J.: Visual tracking via incremental log-Euclidean Riemannian subspace learning. Proc. IEEE Conf. Computer Vision and Pattern Recognition, Alaska, USA, June 2008, pp.
22 Li, X., Dick, A., Shen, C., Hengel, A., Wang, H.: Incremental learning of 3D-DCT compact representations for robust visual tracking, IEEE Trans. Pattern Anal. Mach. Intell., 2013, 35, pp.
23 Mei, X., Ling, H.: Robust visual tracking and vehicle classification via sparse representation, IEEE Trans. Pattern Anal. Mach. Intell., 2011, 33, pp.
24 Zhang, S., Yao, H., Sun, X., Liu, S.: Robust visual tracking using an effective appearance model based on sparse coding, ACM Trans. Intell. Syst. Technol., 2012, 3, pp.
25 Avidan, S.: Support vector tracking, IEEE Trans. Pattern Anal. Mach. Intell., 2004, 26, pp.
26 Wang, D., Lu, H., Yang, M.: Online object tracking with sparse prototypes, IEEE Trans. Image Process., 2013, 22, pp.
27 Wang, D., Lu, H., Yang, M.: Least soft-threshold squares tracking. Proc. IEEE Conf. Computer Vision and Pattern Recognition, Portland, USA, June 2013, pp.
28 Fan, J., Yang, M., Wu, Y.: A bi-subspace model for robust visual tracking. Proc. IEEE Int. Conf. Image Processing, San Diego, USA, October 2008, pp.
29 Nickel, K., Stiefelhagen, R.: Dynamic integration of generalized cues for person tracking. European Conf. Computer Vision, Marseille, France, October 2008, pp.
30 Stenger, B., Woodley, T., Cipolla, R.: Learning to track with multiple observers. Proc. IEEE Conf. Computer Vision and Pattern Recognition, Florida, USA, June 2009, pp.
31 Santner, J., Leistner, C., Saffari, A., Pock, T., Bischof, H.: PROST: parallel robust online simple tracking. Proc. IEEE Conf. Computer Vision and Pattern Recognition, San Francisco, USA, June 2010, pp.
32 Li, X., Shen, C., Dick, A.R., van den Hengel, A.: Learning compact binary codes for visual tracking. Proc. IEEE Conf. Computer Vision and Pattern Recognition, Portland, USA, 2013, pp.
33 Lu, H., Lu, S., Wang, D., Wang, S., Leung, H.: Pixel-wise spatial pyramid-based hybrid tracking, IEEE Trans. Circuits Syst. Video Technol., 2012, 22, pp.
34 Elden, L., Savas, B.: Handwritten digit classification using higher order singular value decomposition, Pattern Recognit., 2007, 40, pp.
35 De Lathauwer, L., De Moor, B., Vandewalle, J.: A multilinear singular value decomposition, SIAM J. Matrix Anal. Appl., 2000, 21, pp.
36 Isard, M., Blake, A.: Condensation: conditional density propagation for visual tracking, Int. J. Comput. Vis., 1998, 29, pp.
37 Everingham, M., Van Gool, L., Williams, C., Winn, J., Zisserman, A.: The Pascal visual object classes (VOC) challenge, Int. J. Comput. Vis., 2010, 88, pp.


More information

Moving Object Tracking in Video Sequence Using Dynamic Threshold

Moving Object Tracking in Video Sequence Using Dynamic Threshold Moving Object Tracking in Video Sequence Using Dynamic Threshold V.Elavarasi 1, S.Ringiya 2, M.Karthiga 3 Assistant professor, Dept. of ECE, E.G.S.pillay Engineering College, Nagapattinam, Tamilnadu, India

More information

Online Tracking Parameter Adaptation based on Evaluation

Online Tracking Parameter Adaptation based on Evaluation 2013 10th IEEE International Conference on Advanced Video and Signal Based Surveillance Online Tracking Parameter Adaptation based on Evaluation Duc Phu Chau Julien Badie François Brémond Monique Thonnat

More information

Face detection in a video sequence - a temporal approach

Face detection in a video sequence - a temporal approach Face detection in a video sequence - a temporal approach K. Mikolajczyk R. Choudhury C. Schmid INRIA Rhône-Alpes GRAVIR-CNRS, 655 av. de l Europe, 38330 Montbonnot, France {Krystian.Mikolajczyk,Ragini.Choudhury,Cordelia.Schmid}@inrialpes.fr

More information

Implementation of Robust Visual Tracker by using Weighted Spatio-Temporal Context Learning Algorithm

Implementation of Robust Visual Tracker by using Weighted Spatio-Temporal Context Learning Algorithm Available Online at www.ijcsmc.com International Journal of Computer Science and Mobile Computing A Monthly Journal of Computer Science and Information Technology ISSN 2320 088X IMPACT FACTOR: 5.258 IJCSMC,

More information

Image Classification based on Saliency Driven Nonlinear Diffusion and Multi-scale Information Fusion Ms. Swapna R. Kharche 1, Prof.B.K.

Image Classification based on Saliency Driven Nonlinear Diffusion and Multi-scale Information Fusion Ms. Swapna R. Kharche 1, Prof.B.K. Image Classification based on Saliency Driven Nonlinear Diffusion and Multi-scale Information Fusion Ms. Swapna R. Kharche 1, Prof.B.K.Chaudhari 2 1M.E. student, Department of Computer Engg, VBKCOE, Malkapur

More information

Self Lane Assignment Using Smart Mobile Camera For Intelligent GPS Navigation and Traffic Interpretation

Self Lane Assignment Using Smart Mobile Camera For Intelligent GPS Navigation and Traffic Interpretation For Intelligent GPS Navigation and Traffic Interpretation Tianshi Gao Stanford University tianshig@stanford.edu 1. Introduction Imagine that you are driving on the highway at 70 mph and trying to figure

More information

Eye Detection by Haar wavelets and cascaded Support Vector Machine

Eye Detection by Haar wavelets and cascaded Support Vector Machine Eye Detection by Haar wavelets and cascaded Support Vector Machine Vishal Agrawal B.Tech 4th Year Guide: Simant Dubey / Amitabha Mukherjee Dept of Computer Science and Engineering IIT Kanpur - 208 016

More information

FAST HUMAN DETECTION USING TEMPLATE MATCHING FOR GRADIENT IMAGES AND ASC DESCRIPTORS BASED ON SUBTRACTION STEREO

FAST HUMAN DETECTION USING TEMPLATE MATCHING FOR GRADIENT IMAGES AND ASC DESCRIPTORS BASED ON SUBTRACTION STEREO FAST HUMAN DETECTION USING TEMPLATE MATCHING FOR GRADIENT IMAGES AND ASC DESCRIPTORS BASED ON SUBTRACTION STEREO Makoto Arie, Masatoshi Shibata, Kenji Terabayashi, Alessandro Moro and Kazunori Umeda Course

More information

A Modified Mean Shift Algorithm for Visual Object Tracking

A Modified Mean Shift Algorithm for Visual Object Tracking A Modified Mean Shift Algorithm for Visual Object Tracking Shu-Wei Chou 1, Chaur-Heh Hsieh 2, Bor-Jiunn Hwang 3, Hown-Wen Chen 4 Department of Computer and Communication Engineering, Ming-Chuan University,

More information

Coupling Semi-supervised Learning and Example Selection for Online Object Tracking

Coupling Semi-supervised Learning and Example Selection for Online Object Tracking Coupling Semi-supervised Learning and Example Selection for Online Object Tracking Min Yang (B), Yuwei Wu, Mingtao Pei, Bo Ma, and Yunde Jia Beijing Laboratory of Intelligent Information Technology, School

More information

Action recognition in videos

Action recognition in videos Action recognition in videos Cordelia Schmid INRIA Grenoble Joint work with V. Ferrari, A. Gaidon, Z. Harchaoui, A. Klaeser, A. Prest, H. Wang Action recognition - goal Short actions, i.e. drinking, sit

More information

[2006] IEEE. Reprinted, with permission, from [Wenjing Jia, Huaifeng Zhang, Xiangjian He, and Qiang Wu, A Comparison on Histogram Based Image

[2006] IEEE. Reprinted, with permission, from [Wenjing Jia, Huaifeng Zhang, Xiangjian He, and Qiang Wu, A Comparison on Histogram Based Image [6] IEEE. Reprinted, with permission, from [Wenjing Jia, Huaifeng Zhang, Xiangjian He, and Qiang Wu, A Comparison on Histogram Based Image Matching Methods, Video and Signal Based Surveillance, 6. AVSS

More information

Segmentation and Tracking of Partial Planar Templates

Segmentation and Tracking of Partial Planar Templates Segmentation and Tracking of Partial Planar Templates Abdelsalam Masoud William Hoff Colorado School of Mines Colorado School of Mines Golden, CO 800 Golden, CO 800 amasoud@mines.edu whoff@mines.edu Abstract

More information

Context Tracker: Exploring Supporters and Distracters in Unconstrained Environments

Context Tracker: Exploring Supporters and Distracters in Unconstrained Environments Context Tracker: Exploring Supporters and Distracters in Unconstrained Environments Thang Ba Dinh 1, Nam Vo 2, and Gérard Medioni 1 1 Institute of Robotics and Intelligent Systems University of Southern

More information

Efficient Visual Object Tracking with Online Nearest Neighbor Classifier

Efficient Visual Object Tracking with Online Nearest Neighbor Classifier Efficient Visual Object Tracking with Online Nearest Neighbor Classifier Steve Gu and Ying Zheng and Carlo Tomasi Department of Computer Science, Duke University Abstract. A tracking-by-detection framework

More information

Combining Deep Learning and Preference Learning for Object Tracking

Combining Deep Learning and Preference Learning for Object Tracking Combining Deep Learning and Preference Learning for Object Tracking Shuchao Pang 1,2, Juan José del Coz 2, Zhezhou Yu 1, Oscar Luaces 2, and Jorge Díez 2 1 Coll. Computer Science and Technology, Jilin

More information

Multidirectional 2DPCA Based Face Recognition System

Multidirectional 2DPCA Based Face Recognition System Multidirectional 2DPCA Based Face Recognition System Shilpi Soni 1, Raj Kumar Sahu 2 1 M.E. Scholar, Department of E&Tc Engg, CSIT, Durg 2 Associate Professor, Department of E&Tc Engg, CSIT, Durg Email:

More information

arxiv: v1 [cs.cv] 12 Sep 2012

arxiv: v1 [cs.cv] 12 Sep 2012 Visual Tracking with Similarity Matching Ratio Aysegul Dundar 1, Jonghoon Jin 2 and Eugenio Culurciello 1 1 Weldon School of Biomedical Engineering, Purdue University, West Lafayette, IN, USA 2 Electrical

More information

Image and Vision Computing

Image and Vision Computing Image and Vision Computing 29 (2011) 787 796 Contents lists available at SciVerse ScienceDirect Image and Vision Computing journal homepage: www.elsevier.com/locate/imavis Object tracking via appearance

More information

Structural Local Sparse Tracking Method Based on Multi-feature Fusion and Fractional Differential

Structural Local Sparse Tracking Method Based on Multi-feature Fusion and Fractional Differential Journal of Information Hiding and Multimedia Signal Processing c 28 ISSN 273-422 Ubiquitous International Volume 9, Number, January 28 Structural Local Sparse Tracking Method Based on Multi-feature Fusion

More information

Moving Object Segmentation Method Based on Motion Information Classification by X-means and Spatial Region Segmentation

Moving Object Segmentation Method Based on Motion Information Classification by X-means and Spatial Region Segmentation IJCSNS International Journal of Computer Science and Network Security, VOL.13 No.11, November 2013 1 Moving Object Segmentation Method Based on Motion Information Classification by X-means and Spatial

More information

Fast trajectory matching using small binary images

Fast trajectory matching using small binary images Title Fast trajectory matching using small binary images Author(s) Zhuo, W; Schnieders, D; Wong, KKY Citation The 3rd International Conference on Multimedia Technology (ICMT 2013), Guangzhou, China, 29

More information

Object detection using non-redundant local Binary Patterns

Object detection using non-redundant local Binary Patterns University of Wollongong Research Online Faculty of Informatics - Papers (Archive) Faculty of Engineering and Information Sciences 2010 Object detection using non-redundant local Binary Patterns Duc Thanh

More information

Using temporal seeding to constrain the disparity search range in stereo matching

Using temporal seeding to constrain the disparity search range in stereo matching Using temporal seeding to constrain the disparity search range in stereo matching Thulani Ndhlovu Mobile Intelligent Autonomous Systems CSIR South Africa Email: tndhlovu@csir.co.za Fred Nicolls Department

More information

Object tracking using a convolutional network and a structured output SVM

Object tracking using a convolutional network and a structured output SVM Computational Visual Media DOI 10.1007/s41095-017-0087-3 Vol. 3, No. 4, December 2017, 325 335 Research Article Object tracking using a convolutional network and a structured output SVM Junwei Li 1, Xiaolong

More information

Real Time Unattended Object Detection and Tracking Using MATLAB

Real Time Unattended Object Detection and Tracking Using MATLAB Real Time Unattended Object Detection and Tracking Using MATLAB Sagar Sangale 1, Sandip Rahane 2 P.G. Student, Department of Electronics Engineering, Amrutvahini College of Engineering, Sangamner, Maharashtra,

More information

Robust False Positive Detection for Real-Time Multi-Target Tracking

Robust False Positive Detection for Real-Time Multi-Target Tracking Robust False Positive Detection for Real-Time Multi-Target Tracking Henrik Brauer, Christos Grecos, and Kai von Luck University of the West of Scotland University of Applied Sciences Hamburg Henrik.Brauer@HAW-Hamburg.de

More information

Shape Descriptor using Polar Plot for Shape Recognition.

Shape Descriptor using Polar Plot for Shape Recognition. Shape Descriptor using Polar Plot for Shape Recognition. Brijesh Pillai ECE Graduate Student, Clemson University bpillai@clemson.edu Abstract : This paper presents my work on computing shape models that

More information

Human Motion Detection and Tracking for Video Surveillance

Human Motion Detection and Tracking for Video Surveillance Human Motion Detection and Tracking for Video Surveillance Prithviraj Banerjee and Somnath Sengupta Department of Electronics and Electrical Communication Engineering Indian Institute of Technology, Kharagpur,

More information

SCENE TEXT RECOGNITION IN MULTIPLE FRAMES BASED ON TEXT TRACKING

SCENE TEXT RECOGNITION IN MULTIPLE FRAMES BASED ON TEXT TRACKING SCENE TEXT RECOGNITION IN MULTIPLE FRAMES BASED ON TEXT TRACKING Xuejian Rong 1, Chucai Yi 2, Xiaodong Yang 1 and Yingli Tian 1,2 1 The City College, 2 The Graduate Center, City University of New York

More information

arxiv: v1 [cs.cv] 19 May 2018

arxiv: v1 [cs.cv] 19 May 2018 Long-term face tracking in the wild using deep learning Kunlei Zhang a, Elaheh Barati a, Elaheh Rashedi a,, Xue-wen Chen a a Dept. of Computer Science, Wayne State University, Detroit, MI arxiv:1805.07646v1

More information

Visual Tracking with Online Multiple Instance Learning

Visual Tracking with Online Multiple Instance Learning Visual Tracking with Online Multiple Instance Learning Boris Babenko University of California, San Diego bbabenko@cs.ucsd.edu Ming-Hsuan Yang University of California, Merced mhyang@ucmerced.edu Serge

More information

Mixture Models and EM

Mixture Models and EM Mixture Models and EM Goal: Introduction to probabilistic mixture models and the expectationmaximization (EM) algorithm. Motivation: simultaneous fitting of multiple model instances unsupervised clustering

More information

A Novel Target Algorithm based on TLD Combining with SLBP

A Novel Target Algorithm based on TLD Combining with SLBP Available online at www.ijpe-online.com Vol. 13, No. 4, July 2017, pp. 458-468 DOI: 10.23940/ijpe.17.04.p13.458468 A Novel Target Algorithm based on TLD Combining with SLBP Jitao Zhang a, Aili Wang a,

More information

Colour Segmentation-based Computation of Dense Optical Flow with Application to Video Object Segmentation

Colour Segmentation-based Computation of Dense Optical Flow with Application to Video Object Segmentation ÖGAI Journal 24/1 11 Colour Segmentation-based Computation of Dense Optical Flow with Application to Video Object Segmentation Michael Bleyer, Margrit Gelautz, Christoph Rhemann Vienna University of Technology

More information

A Comparative Study of Object Trackers for Infrared Flying Bird Tracking

A Comparative Study of Object Trackers for Infrared Flying Bird Tracking A Comparative Study of Object Trackers for Infrared Flying Bird Tracking Ying Huang, Hong Zheng, Haibin Ling 2, Erik Blasch 3, Hao Yang 4 School of Automation Science and Electrical Engineering, Beihang

More information

Multi-Camera Occlusion and Sudden-Appearance-Change Detection Using Hidden Markovian Chains

Multi-Camera Occlusion and Sudden-Appearance-Change Detection Using Hidden Markovian Chains 1 Multi-Camera Occlusion and Sudden-Appearance-Change Detection Using Hidden Markovian Chains Xudong Ma Pattern Technology Lab LLC, U.S.A. Email: xma@ieee.org arxiv:1610.09520v1 [cs.cv] 29 Oct 2016 Abstract

More information

Graph Embedding Based Semi-Supervised Discriminative Tracker

Graph Embedding Based Semi-Supervised Discriminative Tracker 2013 IEEE International Conference on Computer Vision Workshops Graph Embedding Based Semi-Supervised Discriminative Tracker Jin Gao 1, Junliang Xing 1, Weiming Hu 1, and Xiaoqin Zhang 2 1 National Laboratory

More information

I. INTRODUCTION. Figure-1 Basic block of text analysis

I. INTRODUCTION. Figure-1 Basic block of text analysis ISSN: 2349-7637 (Online) (RHIMRJ) Research Paper Available online at: www.rhimrj.com Detection and Localization of Texts from Natural Scene Images: A Hybrid Approach Priyanka Muchhadiya Post Graduate Fellow,

More information

Extracting Spatio-temporal Local Features Considering Consecutiveness of Motions

Extracting Spatio-temporal Local Features Considering Consecutiveness of Motions Extracting Spatio-temporal Local Features Considering Consecutiveness of Motions Akitsugu Noguchi and Keiji Yanai Department of Computer Science, The University of Electro-Communications, 1-5-1 Chofugaoka,

More information

Towards Robust Visual Object Tracking: Proposal Selection & Occlusion Reasoning Yang HUA

Towards Robust Visual Object Tracking: Proposal Selection & Occlusion Reasoning Yang HUA Towards Robust Visual Object Tracking: Proposal Selection & Occlusion Reasoning Yang HUA PhD Defense, June 10 2016 Rapporteur Rapporteur Examinateur Examinateur Directeur de these Co-encadrant de these

More information

Selection of Scale-Invariant Parts for Object Class Recognition

Selection of Scale-Invariant Parts for Object Class Recognition Selection of Scale-Invariant Parts for Object Class Recognition Gy. Dorkó and C. Schmid INRIA Rhône-Alpes, GRAVIR-CNRS 655, av. de l Europe, 3833 Montbonnot, France fdorko,schmidg@inrialpes.fr Abstract

More information

IMPROVED FACE RECOGNITION USING ICP TECHNIQUES INCAMERA SURVEILLANCE SYSTEMS. Kirthiga, M.E-Communication system, PREC, Thanjavur

IMPROVED FACE RECOGNITION USING ICP TECHNIQUES INCAMERA SURVEILLANCE SYSTEMS. Kirthiga, M.E-Communication system, PREC, Thanjavur IMPROVED FACE RECOGNITION USING ICP TECHNIQUES INCAMERA SURVEILLANCE SYSTEMS Kirthiga, M.E-Communication system, PREC, Thanjavur R.Kannan,Assistant professor,prec Abstract: Face Recognition is important

More information

Structure-adaptive Image Denoising with 3D Collaborative Filtering

Structure-adaptive Image Denoising with 3D Collaborative Filtering , pp.42-47 http://dx.doi.org/10.14257/astl.2015.80.09 Structure-adaptive Image Denoising with 3D Collaborative Filtering Xuemei Wang 1, Dengyin Zhang 2, Min Zhu 2,3, Yingtian Ji 2, Jin Wang 4 1 College

More information

COSC160: Detection and Classification. Jeremy Bolton, PhD Assistant Teaching Professor

COSC160: Detection and Classification. Jeremy Bolton, PhD Assistant Teaching Professor COSC160: Detection and Classification Jeremy Bolton, PhD Assistant Teaching Professor Outline I. Problem I. Strategies II. Features for training III. Using spatial information? IV. Reducing dimensionality

More information

A Novel Image Super-resolution Reconstruction Algorithm based on Modified Sparse Representation

A Novel Image Super-resolution Reconstruction Algorithm based on Modified Sparse Representation , pp.162-167 http://dx.doi.org/10.14257/astl.2016.138.33 A Novel Image Super-resolution Reconstruction Algorithm based on Modified Sparse Representation Liqiang Hu, Chaofeng He Shijiazhuang Tiedao University,

More information

Evaluation of Local Space-time Descriptors based on Cuboid Detector in Human Action Recognition

Evaluation of Local Space-time Descriptors based on Cuboid Detector in Human Action Recognition International Journal of Innovation and Applied Studies ISSN 2028-9324 Vol. 9 No. 4 Dec. 2014, pp. 1708-1717 2014 Innovative Space of Scientific Research Journals http://www.ijias.issr-journals.org/ Evaluation

More information