Robust Object Tracking Based on Motion Consistency

sensors — Article

Lijun He, Xiaoya Qiao, Shuai Wen and Fan Li *

Department of Information and Communication Engineering, School of Electronic and Information Engineering, Xi'an Jiaotong University, Xi'an, China; jzb @mail.xjtu.edu.cn (L.H.); qxy0212@stu.xjtu.edu.cn (X.Q.); wen201388@stu.xjtu.edu.cn (S.W.)

* Correspondence: lifan@mail.xjtu.edu.cn; Tel.:

Received: 28 December 2017; Accepted: 10 February 2018; Published: 13 February 2018

Abstract: Object tracking is an important research direction in computer vision and is widely used in video surveillance, security monitoring, video analysis and other fields. Conventional tracking algorithms perform poorly in specific scenes, such as a target with fast motion and occlusion. The candidate samples may lose the true target due to its fast motion. Moreover, the appearance of the target may change with its movement. In this paper, we propose an object tracking algorithm based on motion consistency. In the state transition model, candidate samples are obtained by the target state, which is predicted according to the temporal correlation. In the appearance model, we define a position factor to represent the different importance of candidate samples in different positions using a double Gaussian probability model. The candidate sample with the highest likelihood is selected as the tracking result by combining the holistic and local responses with the position factor. Moreover, an adaptive template updating scheme is proposed to adapt to the target's appearance changes, especially those caused by fast motion. The experimental results on a 2013 benchmark dataset demonstrate that the proposed algorithm performs better in scenes with fast motion and partial or full occlusion compared to state-of-the-art algorithms.

Keywords: object tracking; motion consistency; target state prediction; position factor; occlusion factor

1. Introduction

Object tracking is an important application of video sensor signal and information processing, which is widely applied in video surveillance, security monitoring, video analysis, and other areas.
Although numerous methods have been proposed, it is still a challenging problem to implement object tracking in particular scenes, such as sports scenes for player tracking and security scenes for criminal tracking. These scenes are characterized by fast motion, occlusion and illumination variation. Improving the accuracy and robustness of tracking in these particular scenes is an open problem. Object tracking is the determination of the target state in continuous video frames. The particle filter is widely used in object tracking; it uses the Monte Carlo method to simulate the probability distribution and is effective in estimating non-Gaussian and nonlinear states [1]. In the particle filter, the state transition model and the appearance model are two important types of probabilistic models. The state transition model is used to predict the current state based on previous states, and can be divided into the Gaussian distribution model and the constant velocity model. The Gaussian distribution model assumes that the velocities between adjacent frames are uncorrelated. Thus, candidate samples are predicted by simply adding a Gaussian disturbance to the previous state [2-8]. This method has been widely used due to its simplicity and effectiveness. The method performs well when the target moves short distances in random directions. However, in the presence of fast motion, a large variance and more particles are needed to avoid the loss of the true target, which results in a higher computational burden. The constant velocity model assumes a strong correlation

Sensors 2018, 18, 572; doi: /s

between the velocity of the current state and those of previous states, and candidate samples are predicted from previous states and velocities with the addition of a stochastic disturbance [9-15]. This effect can be regarded as adding a disturbance to a new state that is far away from the previous state in terms of velocity. Thus, when the velocity of the current frame is notably smaller than that of the previous frame, an improper variance of the disturbance will also cause the loss of the target. The appearance model is used to represent the similarity between the real state observation and the candidates. It can be divided into the generative model [8,15-20], the discriminative model [21-27] and the collaboration model [2-4,6,28-30]. Tracking based on the generative model focuses on foreground information and ignores background information. The candidates are represented sparsely by a foreground template set and the candidate with the minimum reconstruction error can be selected as the target. Tracking based on the discriminative model considers the tracking process as a classification process, and a classifier is trained and updated online to distinguish the foreground from the background. Tracking based on the collaboration model takes advantage of the generative and discriminative models to make joint decisions. Most existing tracking methods have obvious disadvantages in complex scenes. The candidate samples selected by a state transition model without prediction may not include the real target, which might cause tracking drift or even tracking failure, especially when the target has fast motion in the video. Moreover, if the target's appearance changes drastically during movement, only considering the target's appearance feature in the appearance model may reduce the matching accuracy between candidate samples. In this paper, we propose an object-tracking algorithm based on motion consistency. The algorithm utilizes the temporal correlation between continuous video frames to describe the motion consistency. In the state transition model, candidate samples are selected based on the predicted target state. In the appearance model, candidate samples are represented by the position factor, which is defined to describe the motion consistency. The tracking result is determined by combining the position factor with the local and holistic responses. The main contributions of our proposed algorithm are summarized as follows:
1. Candidate sampling based on target state prediction

A conventional transition model may result in the loss of the target if the number of candidate samples is limited. This may lead to tracking drift or even tracking failure. In this paper, candidate sampling based on target state prediction is proposed. Considering the motion consistency, the target state of the current frame is predicted based on the motion characteristics of previous frames. Then, candidate samples are selected according to the predicted state to overcome tracking drift.

2. Joint decision by combining the position factor with the local and holistic responses

The appearance probability of the target may differ in different positions due to the motion consistency. Different from conventional algorithms, which neglect the temporal correlation of video frames in the appearance model, the position factor is defined to characterize the importance of candidate samples in different positions according to a double Gaussian probability model. Meanwhile, the occlusion factor is proposed to rectify the similarity computed in the local representation to obtain the local response, and the candidates are represented sparsely by positive and negative templates in the holistic representation to obtain the holistic response. Then, the tracking decision is made by combining the position factor with the responses of the local and holistic representations. This approach makes full use of the temporal correlation of the target state and the spatial similarity of the local and holistic characteristics.

3. Adaptive template updating based on target velocity prediction

The target's fast motion will inevitably bring about appearance changes of the target and the background. To adapt to the changes, an adaptive template updating approach is proposed, which is based on the predicted target velocity. The positive template set is divided into dynamic

and static template sets. When the target moves fast, the dynamic positive template set is updated to account for appearance changes. To keep the primitive characteristics of the target, the static positive template set is retained. Moreover, the negative holistic template set is updated continuously to adapt to background changes.

The rest of the paper is organized as follows: Section 2 reviews the related work. In Section 3, an object tracking algorithm based on motion consistency is proposed. Section 4 gives the experimental results with quantitative and qualitative evaluations. This paper's conclusions are presented in Section 5.

2. Related Work

A large number of algorithms have been proposed for object tracking, and an extensive review is beyond the scope of this paper. Most existing algorithms mainly focus on both the state transition model and the appearance model based on the particle filter. In the state transition model, the change of the target state is estimated and candidate samples are predicted under various assumptions. Most state transition models based on the particle filter can be divided into the Gaussian distribution model and the constant velocity model, according to the assumption on whether the velocity is temporally correlated with previous frames. Appearance models, which are used to represent the similarity between the real state observation and the candidates, can be categorized into generative models, discriminative models and combinations of these two types of models.

2.1. State Transition Models

The state transition model simulates the target state changes with time by particle prediction. Most algorithms use a Gaussian distribution model to predict candidate samples. The Gaussian distribution model regards the previous state as the current state, and then obtains candidate samples by adding a Gaussian noise dispersion in the state space. For example, in [2,3], particles are predicted by a Gaussian function with a fixed mean and variance, thus the coverage of the particles is limited. This describes the motion best for short-distance motion with random directions, but it performs poorly when the target has a long-distance motion in a certain direction. Furthermore, in [5], three types of particles whose distributions have different variances are prepared to adapt to different tasks.
In visual tracking decomposition [7], the motion model is represented by a combination of multiple basic motion models, each of which covers a different type of motion using a Gaussian disturbance with a different variance. To a certain extent, the model can cover abrupt motion with a large variance. However, fixed models cannot adapt to complex practical situations. In addition, there are other algorithms which ignore the temporal correlation of continuous frames. For example, the multi-instance learning tracker (MIL) [22] utilizes a simplified particle filtering method which sets a searching radius around the previous frame and uses dense sampling to obtain samples. A small searching radius cannot adapt to fast motion; on the other hand, a large searching radius will bring a high computation burden. Since the algorithms [20,31] based on local optimum search obtain candidates without prediction either, a larger searching radius is also necessary when fast motion occurs. In subsequent studies, some algorithms use the constant velocity model to predict candidate samples, which assumes that the velocity of the current frame is correlated with that of the previous frame. In [9,10], the motion is modeled by adding the motion velocity with the frame rate (the motion distance) of the previous frame to the previous state and then disturbing it with Gaussian noise. In [11,12], the average velocity of previous frames is considered to obtain the translation parameters to model the object motion. This is suitable for the situation in which the velocity of the current frame is strongly correlated with that of the previous frame. However, actual motion is complex. When the velocity of the current frame is not correlated with that of the previous frame, an improper variance of the Gaussian disturbance and a finite number of particles will also lead to tracking drift or even failure.

2.2. Appearance Models

Tracking based on generative models focuses more on the foreground information. The candidates are represented by the appearance model and the candidate with the minimum reconstruction error can be selected as the target. In [8], multi-task tracking (MTT) represents each candidate by the same dictionary and considers each representation as a single task, then formulates object tracking as a multi-task sparse learning problem by joint sparsity. In [19], a local sparse appearance model with histograms and mean shift is used to track the target. Tracking based on L1 minimization (L1APG) [12] models the appearance by target templates and trivial templates with sparse representation. However, it neglects the background information. Thus, in the presence of background clutter or occlusion, tracking based on the generative model cannot distinguish the target from the background. Tracking based on discriminative models, in essence, considers the tracking process as a classification task and distinguishes the foreground from the background using a classifier with the maximum classifier response. In [24,25], an online boosting classifier is proposed to select the most discriminating features from a feature pool. In [26], supervised and semi-supervised classifiers are trained on-line, in which the labeled first frame sample and unlabeled subsequent training samples are used. The discriminative model considers the background and foreground information to completely distinguish the target. However, a classifier with an improper updating scheme or limited training samples will lead to bad classification. Tracking methods that combine generative and discriminative models exploit the advantages of these two models, fusing them or using them together to make decisions. Tracking via the sparse collaborative appearance model (SCM) [2] combines a sparse discriminative classifier with a sparse generative model to track the object. With the sparse discriminative classifier, candidate samples are represented sparsely by holistic templates and the confidence value is computed. In the sparse generative model, candidate samples are divided into patches and represented by a dictionary to compute the similarity between each candidate sample and the template. However, the positive templates remain unchanged; thus the discriminative classifier cannot adapt to drastic appearance variations.
In object tracking via key patch sparse representation [4], an overlapped patch sampling strategy is adopted to capture the local structure of the target. In the occlusion prediction scheme, positive and negative bags are obtained by sampling patches within and outside the bounding box. Then an SVM (support vector machine) classifier is trained by the bags with labels, by which the patches are predicted to be occluded or not. Meanwhile, the contribution factor is computed via sparse representation. However, the SVM classifier is not updated in a timely manner, which may cause error accumulation in occlusion decisions. In object tracking via a cooperative appearance model [6], an object tracking algorithm based on the cooperative appearance model is proposed. The discriminative model is established in the local representation with positive and negative dictionaries. The reconstruction error of each candidate is computed in the holistic representation, and impact regions are proposed to combine the local and holistic responses. In this paper, we propose a tracking algorithm based on motion consistency, in which candidate samples are predicted by a state transition model based on target state prediction. In addition, in the appearance model, the position factor is proposed to assist the target selection. As revealed in [2,3,5,7], candidate samples can be predicted simply by using the Gaussian distribution model, which is effective for short-distance motion with random direction. Then, the algorithms in [9-12] take the velocity into consideration, which, to a certain extent, can adapt to motion that is correlated with the previous state, even long-distance motion. Inspired by the above work, in this paper, the predicted state information is added to the conventional Gaussian distribution model. However, the proposed model is different from the constant velocity model in that not only are dynamic candidate samples predicted with motion prediction; some basic candidate samples around the previous state are also predicted with a Gaussian disturbance. In this way, the varied candidate samples can adapt to complex motions in addition to fast motion. Conversely, most tracking algorithms focus on the similarity between candidate samples and templates, such as that in [2].
However, when the target appearance has a large variation, the true target cannot be selected by only considering the similarity. Thus, in the appearance model,

considering the temporal correlation of the target state, the position factor is proposed to characterize the importance of candidate samples in different positions and collaborates with the representation responses to select the target.

3. Robust Object Tracking Based on Motion Consistency

Conventional algorithms which ignore the temporal correlation of states in the state transition model may lose the true target, especially when fast motion occurs. If the appearance changes drastically, algorithms that only focus on the target's appearance feature in the appearance model will struggle to separate the foreground target from the background. Meanwhile, an appropriate updating scheme is necessary to maintain the robustness of the algorithm. Thus, we propose a robust object tracking algorithm based on motion consistency. The tracking process of our proposed algorithm is shown in Figure 1. The target is assumed to be known in the first frame. In the state transition model, the target state, including the motion direction and distance of the current frame, is predicted according to the target states of previous frames. The candidate samples are predicted by the state transition model according to the predicted state. Next, all candidate samples are put into the appearance model. In the appearance model, we propose the position factor according to the double Gaussian probability model to assign weights to candidate samples located in different positions. Meanwhile, in the local representation, the similarity of each candidate is computed with sparse representation by the dictionary to obtain the local response of each candidate. In the holistic representation, the holistic response of each candidate sample is computed with sparse representation by the positive and negative templates.
Finally, the candidate sample with the highest likelihood is decided as the tracking result by combining the holistic and local responses with the position factor. To adapt to the appearance changes of the target, especially those caused by fast motion, the template sets should be updated adaptively based on the target velocity prediction. In summary, the candidates are predicted by the target state prediction based state transition model. In the appearance model, the position factor is proposed to characterize the importance of candidate samples in different positions. In the velocity prediction based updating model, the template sets are updated adaptively to ensure the robustness and effectiveness of the algorithm.
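The overall loop described above can be summarized in code. The sketch below is a hypothetical skeleton, not the authors' implementation; all helper functions (`candidates_fn`, `position_factor_fn`, `local_response_fn`, `holistic_response_fn`) are placeholder names for the models defined in the following subsections.

```python
import numpy as np

def track_frame(history, candidates_fn, position_factor_fn,
                local_response_fn, holistic_response_fn):
    """One tracking step: sample candidates from the state transition model,
    score each candidate by position factor x local response x holistic
    response, and return the most likely candidate."""
    candidates = candidates_fn(history)          # state transition model
    best, best_p = None, -np.inf
    for c in candidates:
        p = (position_factor_fn(c, history)      # motion-consistency weight
             * local_response_fn(c)              # patch-based similarity
             * holistic_response_fn(c))          # discriminative confidence
        if p > best_p:
            best, best_p = c, p
    return best
```

In the full algorithm, the template sets behind the local and holistic responses would also be updated adaptively after each frame (Section 3.3).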

3.1. State Transition Model Based on Target State Prediction

The state transition model is used to select candidate samples in the current video frame. Conventional methods based on the particle filter usually use the Gaussian distribution model to obtain candidate samples around the position where the target appears in the previous frame. When the target has fast motion, the selection of candidate samples without prediction may result in the loss of the true target. Therefore, considering the characteristics of motion consistency, we propose a state transition model based on target state prediction to predict candidate samples.

3.1.1. Target State Prediction

Target state prediction is the estimation of the target position at frame t. We predict a set of motion directions and distances {(θ̂_t^1, l̂_t^1), (θ̂_t^2, l̂_t^2), ..., (θ̂_t^u, l̂_t^u), ..., (θ̂_t^U, l̂_t^U)} at frame t according to the motion consistency, as shown in Figure 2. The relative motion of the previous frame, including the motion direction and the motion distance, is already known. Then the motion direction of the current frame is predicted by adding a prediction range γ to the previous motion direction, and the motion distance is predicted according to the previous motion distance with a variation rate. It is assumed that the target has a steady motion direction and velocity. We predict the motion direction by using the motion direction at frame t-1.
As shown in Figure 2, the relative motion direction from frame t-2 to frame t-1 is θ_{t-1}. Then, the motion direction is predicted as a set of values based on the value θ_{t-1}. Taking θ_{t-1} as the center, we select U angles uniformly in the interval [θ_{t-1} - γ, θ_{t-1} + γ], where γ is a constant, to determine the prediction range and reduce the estimation error. Then, the predicted values of the motion directions are:

$$\hat{\theta}_t^u = \theta_{t-1} - \gamma + \frac{2\gamma}{U-1}(u-1),\quad u = 1, 2, \ldots, U, \tag{1}$$

where U is the number of predicted directions.

We also predict the motion distance based on the relative motion distances within the three frames before the frame. The predicted distance in each direction is:

$$\hat{l}_t^u = \hat{w}_t l_{t-1}, \tag{2}$$

where l_{t-1} is the relative motion distance from frame t-2 to frame t-1, and ŵ_t describes the variation rate of the motion distance over the previous frames, defined as:

$$\hat{w}_t = \alpha \frac{l_{t-1}}{l_{t-2}} + (1 - \alpha) \frac{l_{t-2}}{l_{t-3}}, \tag{3}$$

where α is a positive constant that is less than 1.
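As a concrete illustration of Equations (1)-(3), the direction and distance prediction can be sketched as below; the function names and parameter values are illustrative only.

```python
import numpy as np

def predict_directions(theta_prev, gamma, U):
    """Eq. (1): U angles spaced uniformly in [theta_prev - gamma, theta_prev + gamma]."""
    u = np.arange(1, U + 1)
    return theta_prev - gamma + 2.0 * gamma * (u - 1) / (U - 1)

def predict_distance(l_t1, l_t2, l_t3, alpha):
    """Eqs. (2)-(3): predicted distance from the three previous relative
    distances, with l_t1 = l_{t-1}, l_t2 = l_{t-2}, l_t3 = l_{t-3}."""
    w_hat = alpha * l_t1 / l_t2 + (1.0 - alpha) * l_t2 / l_t3   # Eq. (3)
    return w_hat * l_t1                                          # Eq. (2)
```

For example, `predict_directions(0.0, 0.5, 5)` yields five angles from -0.5 to 0.5 rad centered on the previous direction; each predicted direction then shares the same predicted distance from Equation (2).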

Thus, we obtain a set of motion directions and distances {(θ̂_t^1, l̂_t^1), (θ̂_t^2, l̂_t^2), ..., (θ̂_t^u, l̂_t^u), ..., (θ̂_t^U, l̂_t^U)}.

3.1.2. Three-Step Candidate Sampling

The conventional Gaussian distribution model generates candidate samples near the target position of the previous frame. For random motion over a short distance, the true target position can be successfully covered by the samples. However, when fast motion occurs, these predicted samples may lose the true target position, which results in tracking drift or tracking failure. To avoid the loss of the true target position, a three-step candidate sampling scheme is proposed according to the target state prediction, which is illustrated in Figure 3. First, basic candidate samples (the blue dots) are obtained around the target location at frame t-1. Second, the basic candidate sample with the largest relative distance in each predicted direction is selected as the benchmark candidate sample (the yellow dots). Third, more dynamic candidate samples (the green dots) are added in each predicted direction, beginning from the benchmark with a predicted distance.

1. First step: Basic candidate sampling

The basic candidate samples of the current frame are predicted by the Gaussian distribution model.
The current state X_t^i is predicted from the state of the previous frame X_{t-1}^i disturbed by Gaussian noise X_Gauss as follows:

$$X_t^i = X_{t-1}^i + X_{Gauss}, \tag{4}$$

where X_t^i represents the ith candidate in frame t, and the state of each candidate is represented by an affine transformation model with six parameters X_t^i = (α_1^i, α_2^i, α_3^i, α_4^i, α_5^i, α_6^i), in which (α_1^i, α_2^i) represents the translation, (α_3^i, α_4^i) represents the scaling and (α_5^i, α_6^i) represents the rotation. Each parameter of the state vector (α_1^i, α_2^i, α_3^i, α_4^i, α_5^i, α_6^i) is assumed to obey a Gaussian distribution with a different mean and variance according to the actual situation. Next, the relative translation between the N basic candidate samples and the target of the previous frame is computed and converted into a polar coordinate form {(θ_t^1, l_t^1), (θ_t^2, l_t^2), ..., (θ_t^n, l_t^n), ..., (θ_t^N, l_t^N)}, which covers the direction and distance information of each candidate sample.

2. Second step: Benchmark candidate sampling

To make the candidate samples cover as large a region as possible where the target possibly appears, considering
both efficiency and effectiveness, we simplify the problem by selecting the benchmark with the largest relative distance in each predicted direction and then adding more dynamic candidate samples in the same direction with a predicted distance, beginning from the benchmark.

The benchmark candidate samples {(θ̂_t^1, l_max^1), (θ̂_t^2, l_max^2), ..., (θ̂_t^u, l_max^u), ..., (θ̂_t^U, l_max^U)} are selected from the basic candidate samples in accordance with the motion directions predicted by Equation (1) as follows:

$$(\hat{\theta}_t^u, l_{\max}^u) = (\theta_t^j, l_t^j),\quad j = \underset{i \in \{i \mid \theta_t^i = \hat{\theta}_t^u,\ 1 \le i \le N\}}{\arg\max}\, l_t^i,\quad u = 1, 2, \ldots, U, \tag{5}$$

In the θ̂_t^u direction, the basic candidate sample with the largest relative distance will be selected as the benchmark candidate, as shown in Figure 3. There is a rule that if a benchmark is lacking in the θ̂_t^u direction, the benchmark candidate sample is defined as (θ̂_t^u, 0) with a relative distance of zero.

3. Third step: Dynamic candidate sampling

Beginning from the benchmark, in each predicted direction, a total of M dynamic candidate samples are placed evenly at a distance of l̂_t^u/M apart, as shown in Figure 3. There are UM dynamic candidate samples:

$$\left(\hat{\theta}_t^u,\ l_{\max}^u + \frac{m}{M}\hat{l}_t^u\right),\quad u = 1, 2, \ldots, U,\ m = 1, 2, \ldots, M, \tag{6}$$

Thus, we obtain the relative position information of all candidate samples, including the N basic candidate samples and the UM dynamic candidate samples, {(θ_t^1, l_t^1), (θ_t^2, l_t^2), ..., (θ_t^N, l_t^N), (θ_t^{N+1}, l_t^{N+1}), ..., (θ_t^{N+UM}, l_t^{N+UM})}, at frame t. Moreover, these relative polar coordinates are converted into translation parameters, and the remaining four parameters are assumed to obey the Gaussian distribution. All the candidate samples predicted at frame t are represented by X_t = {X_t^1, X_t^2, ..., X_t^{N+UM}}.

3.2. Position Factor Based Cooperative Appearance Model

In the presence of fast motion, the target's appearance changes drastically. Therefore, algorithms that only consider the changes of the appearance feature in the appearance model find it difficult to distinguish the foreground target from a complex background. In view of this, we propose the position factor to reduce the effect of appearance changes caused by movement. Due to the motion consistency, the target has a different probability of appearing in each position. Generally, the candidate sample in the location where the target may appear has a larger probability of being selected as the target. Thus, candidate samples of this kind should be assigned larger weights, while other candidate samples should be assigned smaller weights.
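A minimal sketch of the three-step sampling of Section 3.1.2 (basic, benchmark, dynamic) might look as follows; the tolerance-based direction matching and all default parameter values are simplifying assumptions, not the paper's settings.

```python
import numpy as np

def sample_candidates(rng, pred_dirs, pred_dists, N=10, M=3, sigma=5.0, tol=0.2):
    """Return (theta, l) pairs for N basic + U*M dynamic candidate samples."""
    # Step 1: basic candidates from a Gaussian disturbance around the
    # previous target position, converted to polar form (cf. Eq. (4)).
    dxdy = rng.normal(0.0, sigma, size=(N, 2))
    basic = [(float(np.arctan2(dy, dx)), float(np.hypot(dx, dy)))
             for dx, dy in dxdy]
    dynamic = []
    for th_hat, l_hat in zip(pred_dirs, pred_dists):
        # Step 2: benchmark = basic candidate with the largest relative
        # distance near the predicted direction (Eq. (5)); distance 0 if none.
        near = [l for th, l in basic if abs(th - th_hat) < tol]
        l_max = max(near) if near else 0.0
        # Step 3: M dynamic candidates spaced l_hat / M apart (Eq. (6)).
        dynamic += [(th_hat, l_max + m * l_hat / M) for m in range(1, M + 1)]
    return basic + dynamic
```

Note the design choice hinted at in the text: even when no basic candidate falls in a predicted direction, dynamic candidates are still placed there, starting from a zero-distance benchmark.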
Therefore, a quantitative parameter named the position factor is proposed to assign different weights to candidate samples in different positions and to rectify the representation error. The likelihood of a candidate sample is defined by combining the position factor with the responses of the local and holistic representations.

3.2.1. Position Factor Based on the Double Gaussian Probability Model

The position factor of the ith candidate sample is computed by multiplying the direction score S_A^i and the distance score S_l^i:

$$F^i = S_A^i S_l^i,\quad i = 1, 2, \ldots, N + MU, \tag{7}$$

To compute the direction and distance scores, a double Gaussian probability model is used according to the target states of previous frames, which includes the direction Gaussian probability model and the distance Gaussian probability model.

1. Direction score

The motion direction of the target at frame t is consistent with that of the previous frame. It is assumed that the probability of the target appearing in the θ_{t-1} direction is the largest, and the probability of the target appearing in the direction opposite to θ_{t-1} is the smallest. Thus, a Gaussian probability model with θ_{t-1} as the mean is proposed to simulate the distribution of the target's motion

direction. Accordingly, the probability of the candidate sample lying in the θ_{t-1} direction being selected as the target is large and it should be assigned a large weight. In contrast, the candidate sample lying in the direction opposite to θ_{t-1} should be assigned a small weight. Then, the weight assignment model is transformed according to the trend of the direction Gaussian probability model, from which the direction score of the ith candidate sample is derived as follows:

$$S_A^i = \frac{T_1}{1 - e^{-\pi}}\, e^{-(\theta_t^i - \theta_{t-1})^2} + \left(1 - \frac{T_1}{1 - e^{-\pi}}\right),\quad i = 1, 2, \ldots, N + MU, \tag{8}$$

where θ_t^i denotes the relative direction between the ith candidate at frame t and the target at frame t-1, θ_{t-1} is the motion direction of the target at frame t-1, and T_1 is a constant that depends inversely on (θ_{t-1}, p_max^A) and (θ_{t-1} - π, p_min^A).

2. Distance score

The motion velocities of the target within two continuous frames are consistent. In the θ_{t-1} direction, the velocity of the target at frame t is consistent with that of the previous frame. Thus, according to the distance predicted by Equation (2), it is assumed that the probability of the target appearing near the relative distance l̂_t^u is the largest. Conversely, in the direction opposite to θ_{t-1}, the motion velocity is small, and the probability of the target at frame t appearing near the target of frame t-1 is the largest. Accordingly, in each direction there is a distance with the largest probability of target appearance, which depends on the direction where the candidate samples are located:

$$l^i = \left(\frac{T_2}{1 - e^{-\pi}}\, e^{-(\theta_t^i - \theta_{t-1})^2} + \left(1 - \frac{T_2}{1 - e^{-\pi}}\right)\right)\hat{l}_t^u,\quad i = 1, 2, \ldots, N + MU, \tag{9}$$

where T_2 is a constant that depends inversely on (θ_{t-1}, l̂_t^u) and (θ_{t-1} - π, k·l̂_t^u), and k is a positive constant that is less than 1. It is assumed that in the θ_t^i direction, the probability of the target appearing at the relative distance l^i is the largest, and it is the smallest for infinite relative distances. Thus, the probability of the candidate sample at (θ_t^i, l^i) being selected as the target is large and the weight of this candidate is defined as p_max^l, whereas the probability of the candidate sample at (θ_t^i, ∞) being selected as the target is small and the weight of this candidate is defined as p_min^l. According to this distribution feature, in the θ_t^i direction, the motion distance can be assumed to obey a Gaussian probability model with l^i as the mean.
Thus, the weight assignment model is transformed according to the trend of the distance Gaussian probability model, by which the distance score of the ith candidate sample is computed as follows:

$$S_l^i = \frac{T_3}{1 - e^{-\hat{l}_t^u}}\, e^{-(l_t^i - l^i)^2} + \left(1 - \frac{T_3}{1 - e^{-\hat{l}_t^u}}\right),\quad i = 1, 2, \ldots, N + MU, \tag{10}$$

where l_t^i represents the relative distance of the ith candidate sample; l^i is the corresponding distance with the largest probability in the θ_t^i direction, which is computed by Equation (9); and T_3 is a constant that depends inversely on (l^i, p_max^l) and (∞, p_min^l).

3.2.2. Local Representation with the Occlusion Factor

In the local representation, candidate samples are divided into patches and are sparsely represented by local features, from which the local information of the candidate samples is obtained. The local information of the candidates, obtained from the sparse representation, is used for adapting to occlusion or other situations that may cause local appearance changes. Thus, in this paper, based on the sparse generative model (SGM) [2], the occlusion information of each patch is defined according to the reconstruction error of the local sparse representation. Furthermore, to adapt to occlusions, the occlusion factor is defined to rectify the similarity computed by the SGM. Thus, the local response L of each candidate is represented as follows:

$$L = L_c \beta_c, \tag{11}$$

where L_c represents the similarity and β_c represents the occlusion factor.

1. Similarity function

First, the target image is normalized, from which M patches are derived by overlapping sliding windows. With the sparse representation, the patches are represented by a dictionary generated from the first frame by K-Means [32] to compute the reconstruction error. A patch with a large reconstruction error is regarded as an occluded part and the corresponding element of the occlusion indicator o is set to zero, as follows:

$$o_m = \begin{cases} 1 & \varepsilon_m < \varepsilon_0 \\ 0 & \text{otherwise} \end{cases}\quad m = 1, 2, \ldots, M, \tag{12}$$

where ε_m denotes the reconstruction error of the mth patch and ε_0 is a constant used to determine whether the patch is occluded or not. Moreover, a weighted histogram is generated by concatenating the sparse coefficients with the indicator o. The similarity L_c of each candidate is computed on the basis of the weighted histograms between the candidate and the template.

2. Occlusion factor

To further adapt to occlusion in the appearance model, we propose the occlusion factor to rectify the similarity function. The number of patches whose reconstruction errors are larger than ε_0 is computed statistically by:

$$N_c = \sum_{m=1}^{M} (1 - o_m), \tag{13}$$

where a larger value of N_c represents more information on the background or occlusion. The occlusion factor is defined as follows:

$$\beta_c = \sigma e^{-(1 - \omega_c)}, \tag{14}$$

where σ is a constant and ω_c = N_c/M is the statistical average of N_c. According to Equation (14), there is a positive correlation between the occlusion factor and ω_c. When occlusion occurs, the value of the occlusion factor increases with the increase of ω_c.

3.2.3. Holistic Representation Based on the Sparse Discriminative Classifier

In the holistic representation, the foreground target is distinguished from the background by the sparse discriminative classifier (SDC) [2]. Initially, n_p positive templates are drawn around the target location and n_n negative templates are drawn further away from the target location. In this way, the label +1 is assigned to the positive templates and the label -1 is assigned to the negative templates. In order to reduce the redundancy of the grayscale template sets, a feature selection scheme is proposed in SDC. First, the labels are represented sparsely by the positive and negative template sets and a sparse vector s is generated from the sparse coefficient vector.
Then the sparse vector s is constructed into a diagonal matrix S, and a projection matrix is obtained by removing the all-zero rows from S. Finally, both the templates and the candidates are reconstructed in projected form by combining them with the projection matrix. With sparse representation, the projected candidates are represented by the projected template sets, and the reconstruction error of each projected candidate is used to compute its confidence value H_c.
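The feature selection step described above can be sketched as follows. This is a minimal illustration, assuming a sparse coefficient vector `s` has already been solved for; the function name and the toy data are hypothetical:

```python
import numpy as np

def project_templates(s, templates, candidates):
    """Build the projection from the sparse vector s by diagonalizing it
    and dropping the all-zero rows, then project templates and candidates."""
    s = np.asarray(s, dtype=float)
    S = np.diag(s)            # diagonal matrix built from the sparse vector
    S_proj = S[s != 0, :]     # keep only the rows with nonzero entries
    # Each column of `templates` / `candidates` is one grayscale vector;
    # the projection selects and scales the discriminative features.
    return S_proj @ templates, S_proj @ candidates

# Toy example: only features 0 and 2 are selected by the sparse vector.
s = np.array([0.5, 0.0, 1.0])
T = np.eye(3)
C = np.arange(6, dtype=float).reshape(3, 2)
T_p, C_p = project_templates(s, T, C)
```

The confidence value H_c would then be computed from the reconstruction errors of the projected candidates against the projected template sets.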

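Looking back at the local response, the occlusion bookkeeping of Equations (12)–(14) can be sketched in the same way. This is a minimal illustration, assuming the per-patch reconstruction errors are already available from the sparse coding step; `errors`, `eps0`, and `sigma` stand in for \varepsilon_m, \varepsilon_0, and \sigma:

```python
import numpy as np

def occlusion_factor(errors, eps0, sigma=1.0):
    """Compute the occlusion indicator o (Eq. 12), the occluded-patch
    count N_c (Eq. 13), and the occlusion factor beta_c (Eq. 14)."""
    errors = np.asarray(errors, dtype=float)
    M = errors.size
    # Eq. (12): o_m = 1 if the patch is well reconstructed, 0 otherwise.
    o = (errors < eps0).astype(int)
    # Eq. (13): number of patches whose reconstruction error exceeds eps0.
    N_c = int(M - o.sum())
    # Eq. (14): beta_c grows as the occluded fraction omega_c grows.
    omega_c = N_c / M
    beta_c = sigma * np.exp(-(1.0 - omega_c))
    return o, N_c, beta_c

# A candidate with half of its patches badly reconstructed gets a larger
# occlusion factor than a cleanly reconstructed one.
o, N_c, beta_c = occlusion_factor([0.1, 0.9, 0.2, 0.8], eps0=0.5)
```

The local response of Equation (11) is then the similarity L_c scaled by `beta_c`.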
Joint Decision by Combining Position Factors with the Local and Holistic Responses

By combining the position factor with the local and holistic responses, the likelihood function of the ith candidate sample is computed as follows:

p^i = F^i L^i H_c^i, \quad i = 1, 2, \ldots, N + MU, \quad (15)

For each candidate, the position factor is large if it is located in a position where the target appears with the largest probability. Moreover, the local and holistic responses are large if the candidate is similar to the target. Thus, the candidate with the largest likelihood, calculated by Equation (15), will most likely be identified as the target. The tracking result X_t of the current frame is chosen from the candidate samples by:

X_t = X_t^j, \quad j = \arg\max_{i=1,2,\ldots,N+MU} p^i, \quad (16)

3.3. Adaptive Template Updating Scheme Based on Target Velocity Prediction

It is necessary to update the templates and the dictionary to capture appearance changes and to ensure the robustness and effectiveness of the appearance model. In SCM [2], the negative templates are updated in the SDC model every several frames from image regions away from the current tracking result, but the positive templates remain unchanged. In the SGM model, instead of updating the dictionary, the template histogram is updated by taking the most recent tracking result into account. However, the fixed positive templates in the SDC model cannot match the true target effectively when the target's appearance changes drastically, which will inevitably occur in the presence of fast motion. To adapt to changes in the target's appearance, the positive templates also need to be updated to capture the appearance changes. In view of this, an adaptive template updating scheme based on the prediction of the target velocity is proposed. The positive template set is divided into dynamic and static template sets, as follows:

n_p = \{p_d, p_s\}, \quad (17)

where p_d represents the dynamic template set and p_s represents the static template set. Every several frames, we determine whether the target has fast motion by comparing the relative motion distance with a threshold of p pixels. The variable p denotes the fast motion measure in the tracking process. When the relative motion distance l_t is larger than p, we deem that the target moves fast in the current frame, and hence in the next frame the target will move with a large velocity according to motion consistency. Thus, the target may have an appearance change at the same time.
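The per-frame decision above can be sketched as follows. This is a minimal illustration, where `l_t` is the relative motion distance of the current frame and `p` the fast-motion threshold in pixels; the sampling of new dynamic templates is stubbed out, and the function name is hypothetical:

```python
def update_positive_templates(templates, l_t, p, sample_dynamic):
    """Keep the static set p_s always; refresh the dynamic set p_d only
    when the relative motion distance l_t exceeds the threshold p."""
    if l_t > p:
        # Fast motion: the target's appearance is likely changing, so
        # redraw the dynamic templates around the current target location.
        templates = {"p_d": sample_dynamic(), "p_s": templates["p_s"]}
    return templates  # otherwise both sets are kept unchanged

# Toy usage: the dynamic set is replaced only on the fast-motion branch.
n_p = {"p_d": ["old_dyn"], "p_s": ["static"]}
n_p = update_positive_templates(n_p, l_t=25, p=20,
                                sample_dynamic=lambda: ["new_dyn"])
```

Keeping the static set untouched preserves the initial target appearance, so the tracker can recover if the appearance reverts after a burst of fast motion.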
Next, the dynamic positive template set is updated to account for the appearance changes. To keep the initial information and avoid incorrect representation in the case of recovery of the target's appearance, the static positive template set is retained, as follows:

n_p = \begin{cases} \{p_d', p_s\}, & l_t > p \\ \{p_d, p_s\}, & \text{otherwise} \end{cases} \quad (18)

where p_d' represents the updated dynamic template set, which is drawn around the target location in the current frame. By combining the updated dynamic set and the static template set, the target can be well matched with the positive template set. Besides, the negative templates are also updated in the SDC model every several frames in order to distinguish the target from the complex and changeable background in the tracking process.

4. Experiments

To evaluate the performance of our proposed algorithm, we test our tracker on the CVPR2013 (OTB2013) dataset, which has been widely used in evaluation [33]. OTB contains 50 video sequences,

the source codes for 29 trackers, and their experimental results on the 50 video sequences. We compared our tracker with eight state-of-the-art tracking algorithms: compressive tracking (CT) [23], distribution fields for tracking (DFT) [31], online robust image alignment (ORIA) [20], MTT [8], visual tracking via the adaptive structural local sparse appearance model (ASLA) [18], L1APG [12], SCM [2], and MIL [22]. The basic information of these trackers is listed in Table 1. As described in the related work, most algorithms ignore the temporal correlation of target states. MTT, ASLA, and SCM obtain candidates by the Gaussian distribution model; MIL and CT obtain candidates by dense sampling; and DFT and ORIA obtain candidates by local optimum search. Only L1APG considers the motion speed of the previous frame and obtains candidates by the constant velocity model. Besides, the competing algorithms cover all types of appearance model, but all of these appearance models are designed from the perspective of the target's appearance features. We evaluate the CT, DFT, ORIA, MTT, ASLA, L1APG, SCM, and MIL trackers by using the available source codes. We conduct experiments on all 50 video sequences and perform a concrete analysis on eight specific video sequences with different problems. All of the problems we are concerned with are covered in these video sequences; thus, the effectiveness of the algorithm can be illustrated. The basic information of the eight test video sequences and their primary problems is shown in Table 2 [33].

Table 1. Basic information of the eight state-of-the-art tracking algorithms.

Algorithm   State Transition Model        Appearance Model
L1APG       Constant velocity model       Holistic representation
MTT         Gaussian distribution model   Holistic representation
ASLA        Gaussian distribution model   Local representation
SCM         Gaussian distribution model   Holistic & local representation
MIL         Dense sampling search         Holistic representation
CT          Dense sampling search         Holistic representation
DFT         Local optimum search          Holistic representation
ORIA        Local optimum search          Holistic representation

Table 2. Basic information of the eight test videos.

Video        Main Confronted Problems
Coke         Fast motion
Boy          Fast motion
Football     Occlusion
Jogging      Occlusion
Girl         Scale variation and rotation
Skating1     Scale variation and rotation
Basketball   Deformation and illumination variation
Sylvester    Illumination variation

In the experiment, for each test video sequence, all information is unknown except for the initial position of the target in the first frame. The number of candidate samples is 600, including 250 basic candidate samples and 350 dynamic candidate samples. The number of predicted motion directions is seven. In each direction, a total of 50 candidate samples is obtained. In the update, five positive templates are updated every five frames from image regions more than 20 pixels away from the current tracking result.

Quantitative Evaluation

Evaluation Metrics

We choose three popular evaluation metrics: center location error, tracking precision, and success rate.

The center location error (CLE) is defined as the Euclidean distance between the center position of the tracking result and that of the ground truth. A lower average center location error indicates a better performance of the tracker. The CLE is calculated by:

CLE = \frac{1}{N} \sum_{i=1}^{N} d(C_t^i, C_{gt}^i), \quad (19)

where N is the total number of frames and d(\cdot,\cdot) represents the distance between the center position C_t^i of the tracking result and the ground truth C_{gt}^i in the ith frame.

The tracking precision is defined as the ratio of the number of frames for which the error between the center point calculated by the tracking algorithm and the ground truth is smaller than a threshold th to the total number of frames, as follows:

precision(th) = \frac{1}{N} \sum_{i=1}^{N} F(d(C_t^i, C_{gt}^i) < th), \quad (20)

where F(\cdot) is a Boolean function whose output is 1 when the expression inside the brackets is true and 0 otherwise, and the threshold th \in [0, 100]. The tracking precision depends on the choice of th. The threshold th is set to 20 to compute the tracking precision in the following experiments.

The success rate is defined as the ratio of the number of frames for which the overlap rate is larger than \lambda to the total number of frames, as follows:

success(\lambda) = \frac{1}{N} \sum_{i=1}^{N} F(overlap_i \geq \lambda), \quad (21)

where the overlap rate is defined as overlap = |\tau_{gt} \cap \tau_t| / |\tau_{gt} \cup \tau_t|, the threshold \lambda \in [0, 1] is fixed, \tau_{gt} represents the region inside the true tracking box generated by the ground truth, \tau_t represents the region inside the bounding box produced by the tracker, |\cdot| denotes the number of pixels in a region, and \cap and \cup denote the intersection and the union of two regions, respectively. In the following experiments, instead of using the success rate value for a specific threshold \lambda, the area under the curve (AUC) is utilized to evaluate the algorithms.

Experiment on the CVPR2013 Benchmark

To evaluate the performance of our algorithm, we conduct an experiment to test the tracker on the OTB dataset. OTB contains 50 videos and 29 algorithms, along with their experimental results on the 50 video sequences. We compared our tracker with the eight trackers in terms of overall performance, and we especially focus on the performance in the presence of fast motion (FM) and occlusion (OCC).

1. Comparison results for fast motion and occlusion

Fast motion (FM) and occlusion (OCC) are two scenarios that we examine. The results of precision and success rate for FM and OCC are shown in Figure 4.
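The three evaluation metrics defined above can be sketched in code as follows. This is a minimal illustration; boxes are assumed to be `(x, y, w, h)` tuples and centers `(x, y)` pairs:

```python
import numpy as np

def cle(centers_t, centers_gt):
    """Eq. (19): mean Euclidean distance between result and ground truth."""
    d = np.linalg.norm(np.asarray(centers_t) - np.asarray(centers_gt), axis=1)
    return d.mean()

def precision(centers_t, centers_gt, th=20):
    """Eq. (20): fraction of frames with center error below th pixels."""
    d = np.linalg.norm(np.asarray(centers_t) - np.asarray(centers_gt), axis=1)
    return (d < th).mean()

def overlap(box_a, box_b):
    """Intersection-over-union of two (x, y, w, h) boxes."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union

def success(boxes_t, boxes_gt, lam=0.5):
    """Eq. (21): fraction of frames whose overlap rate reaches lam."""
    return np.mean([overlap(a, b) >= lam for a, b in zip(boxes_t, boxes_gt)])
```

Sweeping `th` over [0, 100] and `lam` over [0, 1] yields the precision and success plots, and averaging the success values over the sweep gives the AUC score used below.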
Our algorithm has the highest precision and success rate compared with the others. In the presence of FM, the tracking precision is 0.433, which is 10% higher than that of the SCM algorithm. The success rate is 0.384, which is improved by 8.8% compared with the SCM algorithm. According to motion consistency, the target state is estimated and merged into the state transition model, and the appearance model and the updating scheme promote the accuracy in the presence of fast motion. In the presence of OCC, the tracking precision is 0.655, which is 6% higher than that of the SCM algorithm. The success rate is 0.539, which is 5.2% higher than that of the SCM algorithm. The performance is still the best in the presence of OCC. This finding is attributable to the occlusion factor, which is proposed to rectify the holistic and local responses to account for OCC.

Figure 4. Precision plot (a) and success rate plot (b) of the tracking results on OTB for FM; precision plot (c) and success rate plot (d) of the tracking results on OTB for OCC.

2. Comparison results for overall performance

The results of precision and success rate for overall performance on the 50 video sequences are shown in Figure 5. Compared with the other algorithms, the precision and success rate of our algorithm are the highest. The precision is 0.673, which is 6.5% higher than that of SCM. The success rate is 0.546, which is improved by 4.7% compared with the SCM algorithm. Compared with the other eight trackers, our algorithm has the best performance.

Figure 5. Precision plot (a) and success rate plot (b) of the tracking results on OTB for overall performance.

Quantitative Evaluation on Eight Specific Video Sequences

The following experiment is conducted on eight specific video sequences, namely, Basketball, Boy, Football, Jogging-1, Girl, Skating1, Coke, and Sylvester. The performances of the algorithms are evaluated on the following metrics: center location error, tracking precision, and success rate.

1. Center location error comparison

The comparison of center location errors is shown in Figure 6. For Basketball, the CLE of ORIA increases at approximately the 50th frame, and those of CT, MTT, L1APG, and MIL increase at approximately the 200th frame. The CLEs of ASLA and SCM increase at approximately the 500th frame. DFT performs similarly to our algorithm before the 650th frame, but after the 650th frame, DFT has a sharply increasing CLE. Almost all of the other methods fluctuate violently with a high CLE because the basketball player moves in a random manner with deformation and illumination variation. The CLE of our algorithm fluctuates between the 80th frame and the 200th frame, but it recovers soon after with a low CLE. For Boy, the CLE of ORIA increases at approximately the 100th frame because the motion blur begins. The CLEs of DFT, ASLA, and SCM increase at approximately the 260th frame and cannot resume tracking again because the boy moves fast with deformation, motion blur, and scale variation. Our algorithm performs better with smaller fluctuations compared with CT, MTT, L1APG, and MIL. For Football, almost all of the algorithms have a dramatic fluctuation at approximately the 170th frame because of deformation, but the CLE of our algorithm is the lowest. For Jogging-1, only our algorithm is stable with the lowest CLE. For Coke, partial and full occlusions appear from approximately the 30th frame, and the accuracies of the algorithms decrease, except for our algorithm. For Skating1, Girl, and Sylvester, our algorithm maintains a stable performance with the lowest CLE.

Figure 6. Center location error comparison of the algorithms.

Table 3 lists the average center location errors of the eight comparison algorithms and ours. Red indicates that an algorithm has the lowest CLE with the best performance, followed by blue and green. We can see that our algorithm has the lowest CLE on seven of these tests and the lowest average CLE over all of the tests. To sum up, from the CLE comparison with the other algorithms, we can determine that our proposed algorithm outperforms the others under different challenges.

Table 3. Average center location error of the algorithms (columns: CT, DFT, ORIA, MTT, ASLA, L1APG, SCM, MIL, and ours; rows: Basketball, Boy, Football, Jogging-1, Skating1, Girl, Coke, Sylvester, and Average). Red is best, blue is second, and green is third.

2. Tracking precision comparison

Figure 7 shows the precision results of each algorithm for various values of the threshold th within [0, 100]. Our algorithm is the first to reach 100% precision, except on Coke and Sylvester. However, for Coke, the precision of our algorithm is the highest in the range of [0, 80]. For Boy, Jogging-1, Girl, Football, and Basketball, our algorithm is stable and is still the first algorithm to reach 100% precision. In the range of approximately [0, 15], the precision of our algorithm is lower than those of DFT and SCM on Basketball and lower than those of DFT, CT, MTT, ORIA, and L1APG on Football, while the other algorithms suffer from drifting and struggle to reach 100% precision. In particular, for Football, when the threshold is larger than 18, our algorithm is the first to reach 100% precision, with its curve rising at a high rate and then increasing steadily. For Skating1, in the range of approximately [0, 20], our algorithm has a performance similar to those of ASLA and SCM, but its precision is higher than those of the other algorithms. However, when the threshold is larger than 20, the precision of our algorithm is the highest. Table 4 lists the average precisions of the eight comparison algorithms and our algorithm for a threshold of 20. The average precision of our algorithm is higher than those of the other eight algorithms on most of the test sequences. Thus, our algorithm outperforms the comparison algorithms.

Figure 7. Precision comparison of the algorithms.

Table 4. Average precision of the algorithms (columns: CT, DFT, ORIA, MTT, ASLA, L1APG, SCM, MIL, and ours; rows: Basketball, Boy, Football, Jogging-1, Skating1, Girl, Coke, Sylvester, and Average). Red is best, blue is second, and green is third.

3. Success rate comparison

Figure 8 shows the success rates of each algorithm at different values of the threshold \lambda within [0, 1]. The area under the curve (AUC) of Figure 8 is utilized to evaluate the algorithms. For Boy, Jogging-1, Skating1, Coke, and Girl, the AUC of our algorithm is the highest compared with the other algorithms. Especially for Boy, Jogging-1, and Girl, the success rate of our algorithm decreases stably at a low rate. For Basketball, the performance of our algorithm is similar to that of the DFT algorithm, but our performance is better than that of DFT at thresholds lower than approximately 0.3. For Sylvester, the success rate of our algorithm is the highest at thresholds in [0, 0.3] and is close to 1, and the performance is similar to those of CT, SCM, and ORIA. Table 5 lists the average success rate (AUC) of the eight comparison algorithms and ours. Our algorithm achieves the highest average success rate on Boy, Jogging-1, Skating1, Coke, and Girl. According to the comparison results, our algorithm outperforms the other algorithms on the metric of success rate.

Figure 8. Success rate comparison of the algorithms.

Table 5. Average success rate of the algorithms (columns: CT, DFT, ORIA, MTT, ASLA, L1APG, SCM, MIL, and ours; rows: Basketball, Boy, Football, Jogging-1, Skating1, Girl, Coke, Sylvester, and Average). Red is best, blue is second, and green is third.

Qualitative Evaluation

The qualitative analysis results of the algorithms on the eight test videos are shown in Figures 9–16. All of the problems we are concerned with are covered in these video sequences: fast motion and scale variation, partial or full occlusion, deformation, and illumination variation. We analyze the performance of the algorithms on the eight specific video sequences with different problems one by one.

1. Fast motion

As shown in Figure 9, the content of the video sequence Boy is a dancing boy. In the process of dancing, the boy shakes randomly with fast motion, as shown from the 88th to the 386th frame. The boy's appearance blurs during the fast motion. The tracking box of ORIA cannot keep up with the fast motion, and tracking drift occurs in the 108th frame. In the 160th frame, the target moves backwards, and the tracking box of ORIA loses the target completely. In the 293rd frame, the boy is squatting down, and the tracking boxes of SCM and ASLA cannot keep up with the variation, which leads to tracking failure after the 293rd frame. Besides, the tracking boxes of MTT, L1APG, and MIL have tracking drift in the 293rd frame. As the tracking continues, CT, MTT, L1APG, SCM, MIL, and the other algorithms also cannot keep up with the motion of the target. Only our algorithm is able to properly track the target.

Figure 9. Tracking results on Boy.

For Coke, a can held by a man moves fast within a small area against a complex background with dense bushes. Figure 10 shows the tracking results on Coke. In the 42nd frame, because of occlusion by the plants, the DFT, L1APG, MIL, and ASLA algorithms have tracking drifts to different extents, while MTT and our algorithm perform well. The can then keeps moving fast from the 42nd to the 67th frame. The motion directions of the target vary rapidly, which results in tracking drifts of the L1APG, MTT,

and CT algorithms in the 67th frame. The tracking boxes of DFT, ORIA, and ASLA lose the target completely in the 67th frame, while the MIL algorithm and ours can accurately track the target in the presence of fast motion. After the 194th frame, all of the algorithms except ours fail to track the target. Throughout the entire process of fast motion, our algorithm can stably track the target.

Figure 10. Tracking results on Coke.

Our algorithm performs notably well on videos that involve fast motion. This is because the algorithm takes motion consistency into account, so that the needed candidate samples can be calculated after predicting the target's possible motion state during fast motion, which can possibly cover the region where the target may appear. In addition, the position factor is proposed to decrease the influence of secondary samples, and an adaptive strategy is utilized to address the change in the target's appearance brought about by fast motion. All of these factors guarantee highly stable and accurate tracking.

2. Partial and full occlusion

Two ladies are jogging in the video Jogging-1. As shown in Figure 11, partial and full occlusions are caused by the pole during the jogging from the 69th to the 80th frame. In the 69th frame, the DFT algorithm is the first to fail to track the target when the partial occlusion by the pole occurs, and the DFT algorithm turns to track the jogger with the white shirt. In the 80th frame, after the jogger we tracked reappears without partial or full occlusion, all of the algorithms lose the target completely except ours. In the following frames, while the other algorithms fail to track the target, our algorithm can stably track the target until the occlusion disappears completely in the 129th frame. The tracking box of DFT has tracking drift in the 221st frame. In the 297th frame, all of the algorithms except ours have no overlap with the true target.

The tracking results on Football are shown in Figure 12. For Football, occlusion occurs when the football players compete in the game. The players are crowded in the presence of ball robbing, and partial or full occlusion will occur in such a situation. In the 118th frame, the players are crowded in the same place and the target we tracked is in the middle of the crowd; then partial occlusion occurs at the target's head, which causes confusion between different football players. As the competition goes on, the other algorithms have a tracking drift from frame 194. In the 292nd frame, only DFT and our algorithm can accurately track the target under full occlusion. The tracking boxes of L1APG, ASLA, MIL, and the other algorithms lose the target and turn to another player close to the one we tracked. When the occlusion disappears in the 309th frame, only our algorithm can still track the target. In our algorithm, the occlusion factor is proposed in the local representation. When occlusion occurs, the background information increases, and the occlusion factor changes correspondingly. Partial and full occlusions can thus be rectified based on the occlusion factor in the appearance model.

Figure 11. Tracking results on Jogging-1.

Figure 12. Tracking results on Football.

3. Deformation and illumination variation

For Basketball, the No. 9 basketball player's appearance varies in the game because of his constant motion. Moreover, the illumination changes because reporters take photos, especially when the players are shooting. The tracking results of the nine algorithms are shown in Figure 13. As we can see, when the player starts to run, almost all of the tracking boxes have tracking drift from frame 140, and ORIA loses the target completely. When deformation occurs in the 247th frame, ORIA, MTT, L1APG, CT, and MIL fail to track the target. Only DFT and our algorithm can track the target, with DFT showing a slight tracking drift. In the 344th frame, the player turns around; our algorithm, SCM, DFT, and ASLA succeed, while the other algorithms lose the true target. In the 510th frame, when the player moves to the spectator area, all of the algorithms fail to track the target except DFT and ours. However, DFT has tracking drifts. When the illumination changes in the 650th frame, only DFT and our algorithm can track the target.

For Sylvester, a man shakes a toy under a light. Within the whole process, as shown in Figure 14, the illumination and the deformation of the toy change, and in-plane and out-of-plane rotations occur. The tracking boxes of the other algorithms drift when the toy begins to move under the light from frame 264. In the 467th frame, the toy lowers its head; in such a situation, the DFT algorithm almost loses the target. Then, in the 613rd frame, when the toy has a rotation, the L1APG algorithm fails to track the target and the other algorithms have tracking drift. In the 956th frame, with the toy rotating under the light, more algorithms, such as DFT and L1APG, lose the target. However, our algorithm performs well under circumstances with deformation and illumination variation, and it especially shows robustness with the illumination decreasing between the 264th and the 467th frame.

Figure 13. Tracking results on Basketball.

Figure 14. Tracking results on Sylvester.

First, ours is based on the particle filtering framework, so that to some extent the diversity of the candidate samples can resolve elastic deformation. Secondly, the holistic representation and the local representation are utilized in the appearance model, which shows great robustness by enabling the holistic and local representations to capture changes in the target's deformation and illumination variation. Finally, when the models are being updated, appropriately updating the positive and negative templates can also adapt to changes in the appearance of the target and the environment.

4. Scale Variation and Rotation

For Skating1, a female athlete is skating on the ice. The athlete continuously slides, so the relative distance to the camera changes continuously, and thus the athlete has a scale variation in the scene. Besides, the athlete rotates during the dancing process, so the target also has rotation in the scene. As can be seen in Figure 15, from the 24th frame to the 176th frame the size of the target becomes smaller, and the algorithms SCM, ASLA and ours can track the target, while CT, DFT, ORIA, L1APG and MIL lose the true target before the 176th frame. From the 176th frame to the 371st frame, the target moves to the front of the stage and becomes larger while rotating on the stage; almost all the algorithms fail to track the target except SCM and ours, but SCM has tracking drift. In the 389th frame, SCM loses the athlete, and only ours can track the target.
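As a sketch of how such a tracker selects its result, the combination of the position factor with the holistic and local responses described in the text can be illustrated as follows; the function name and the exact multiplicative form of the combination are assumptions for illustration, not the paper's formulation.

```python
def candidate_likelihood(pos_factor, holistic_resp, local_resp):
    """Score one candidate sample.

    pos_factor    : weight from the position model
    holistic_resp : similarity under the holistic sparse representation
    local_resp    : occlusion-corrected similarity of the local patches

    Multiplying the position factor into the summed responses is an
    illustrative assumption about how the three terms are combined.
    """
    return pos_factor * (holistic_resp + local_resp)

# The candidate with the highest likelihood is taken as the result.
candidates = [(0.9, 0.6, 0.5), (0.4, 0.8, 0.7), (1.0, 0.3, 0.2)]
scores = [candidate_likelihood(p, h, l) for p, h, l in candidates]
best = max(range(len(scores)), key=scores.__getitem__)
```

Under this scoring, a candidate that sits where the motion model expects the target (high position factor) can outrank a candidate with a slightly better appearance match in an unlikely position.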

Figure 15. Tracking results on Skating1.

For Girl, a woman sitting on a chair varies her position and facing direction by rotating the chair. As can be seen in Figure 16, the size of the target changes from the 11th frame to the 392nd frame with the rotation of the chair. Because of the chair's rotation, the size of the target becomes smaller in the 101st frame compared with that in the 11th frame. After that, the girl turns around and the target becomes larger in the 134th frame. In this process, all the algorithms have tracking drift, and ORIA loses the target completely from the 101st frame to the 487th frame. When the target turns around from the 101st frame to the 134th frame, the tracking boxes of ORIA, CT and MIL lose the target. Although the appearance of the target changes promiscuously from the 227th frame to the 487th frame, ours performs well under this circumstance and shows robustness and accuracy compared with the other algorithms.

Based on the particle filter, six affine parameters, including parameters that indicate scaling variation and rotation, are used to represent the candidate samples. Thus, the diversity of scaling and rotation can be ensured in the prediction of candidate samples, which could adapt to scenes with scale variation.

Figure 16. Tracking results on Girl.

Discussion

In this part, we further illustrate the selection of related means and the comparison of tracking results and computational complexity between ours and SCM.
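The six-parameter affine sampling described above can be sketched as below; the particular parameter order (x, y, scale, rotation, aspect, skew) and the Gaussian standard deviations are assumptions for illustration, not the paper's exact settings.

```python
import random

def sample_affine_candidates(state, n=600,
                             sigma=(4.0, 4.0, 0.01, 0.02, 0.002, 0.001)):
    """Draw candidate states by perturbing six affine parameters.

    state : (x, y, scale, rotation, aspect, skew) of the previous target
    sigma : per-parameter Gaussian std; the values here are illustrative

    Perturbing scale and rotation as well as position is what lets the
    candidate set adapt to scale variation and in-plane rotation.
    """
    rng = random.Random(0)
    return [tuple(s + rng.gauss(0.0, sd) for s, sd in zip(state, sigma))
            for _ in range(n)]

cands = sample_affine_candidates((120.0, 80.0, 1.0, 0.0, 1.0, 0.0))
```

Because every candidate carries its own scale and rotation, the appearance model evaluates warped image patches rather than fixed-size boxes, which is why sequences such as Skating1 and Girl remain trackable as the target shrinks, grows or turns.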

Discussion about the Number of Particles

Most tracking methods within the particle filter framework exploit the Gaussian distribution model to predict candidate samples. Our algorithm instead predicts candidate samples by estimating the motion direction and distance; in addition, the position factor is defined to assign weights to candidate samples in different positions. To prove the superiority of our algorithm in terms of resource saving, we show that the samples predicted by our method can achieve better performance with fewer particles compared with other algorithms. To do so, we conduct an experiment on ours and SCM with 200, 400 and 600 particles. The test sequence consists of 17 videos from the CVPR2013 benchmark datasets with fast motion properties. The results are shown in Figure 17.

Figure 17. Comparison between ours and SCM with different numbers of particles.

The success rate and tracking precision of ours are higher than those of SCM with 600 particles; even the success rate and tracking precision of ours with 200 particles are higher than those of SCM with more particles. Ours can achieve better performance with fewer particles compared with SCM. In this paper, we predict the target state and incorporate the state information into the algorithm. More candidate samples are obtained in the position where the target may appear with a larger probability, and weights are assigned to all candidates according to the position factor. Thus, the impact of invalid samples on computation decreases and better performance can be achieved with fewer particles.

Discussion of the Position Factor

According to the motion consistency, the states of the target in successive frames are correlated. Therefore, the probability of the target appearing at each position is different, and the importances of candidate samples located at different positions also differ. Thus, a double Gaussian probability model is proposed, based on the motion consistency of successive frames, to simulate the target state; based on this model, the position factor is proposed to assign weights to the candidate samples and represent the differences between candidate samples at different locations. To demonstrate the significant impact of the position factor, an experiment is conducted with 600 particles on ours and ours without the position factor (ours-W). The test sequence consists of 17 videos from the CVPR2013 benchmark datasets with fast motion properties. The results are shown in Figure 18. According to the experimental results, the precision of ours is 6.8% higher than that of ours-W, and the success rate of ours is 6.7% higher than that of ours-W. Thus, the position factor improves the tracking precision and success rate.
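A minimal sketch of the motion-consistency prediction and the double-Gaussian position factor follows. The constant-velocity extrapolation matches the motion-direction-and-distance idea in the text; the equal 0.5/0.5 mixture weight and the shared sigma are assumptions for illustration.

```python
import math

def predict_position(prev, prev2):
    """Extrapolate the target centre from the two previous centres,
    i.e. keep the motion direction and distance (motion consistency)."""
    return (2 * prev[0] - prev2[0], 2 * prev[1] - prev2[1])

def position_factor(cand, prev, predicted, sigma=10.0, w=0.5):
    """Double-Gaussian weight for a candidate centre: one Gaussian is
    centred on the previous position, one on the predicted position.
    The equal mixture weight and the shared sigma are illustrative
    assumptions, not the paper's exact parametrisation."""
    def gauss(c, mu):
        d2 = (c[0] - mu[0]) ** 2 + (c[1] - mu[1]) ** 2
        return math.exp(-d2 / (2.0 * sigma ** 2))
    return w * gauss(cand, prev) + (1.0 - w) * gauss(cand, predicted)

# Candidates near the previous or predicted position get larger weights.
pred = predict_position((110.0, 80.0), (100.0, 80.0))
near = position_factor((119.0, 80.0), (110.0, 80.0), pred)
far = position_factor((160.0, 80.0), (110.0, 80.0), pred)
```

Weighting candidates this way is what lets the valid samples dominate the computation: candidates far from both Gaussian modes contribute almost nothing, so fewer particles suffice.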

Figure 18. Precision plot (a) and success rate plot (b) of ours and ours-W for FM.

Discussion of Computational Complexity

The computational complexity is an important evaluation factor, so we also evaluate the computational complexity of our algorithm. Since both ours and the SCM algorithm are based on the particle filter, we provide a complexity analysis between these two algorithms. The computational complexity is evaluated by the tracking speed: the higher the tracking speed, the lower the computational complexity of the algorithm. The experiments on computational complexity are all implemented on a computer in our lab with a 3.20 GHz CPU, 4.0 GB of RAM and the Windows 10 operating system. The results on eight specific videos are shown in Table 6. The tracking speed of ours is close to that of SCM. Simultaneously, the tracking precision of ours achieves a 10% increment on fast motion compared with SCM, as shown in the results of the quantitative evaluation. This is because the state prediction, the position factor and the velocity prediction were developed in the state transition model, the appearance model and the updating scheme, respectively, compared with SCM.
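The tracking-speed evaluation described above can be sketched as below; `track_frame` is a hypothetical stand-in for one iteration of any tracker under test, not a function from the paper.

```python
import time

def measure_fps(track_frame, frames):
    """Average tracking speed in frames per second; under this
    evaluation a higher FPS means lower computational complexity."""
    t0 = time.perf_counter()
    for frame in frames:
        track_frame(frame)
    elapsed = time.perf_counter() - t0
    return len(frames) / elapsed

# Hypothetical per-frame step standing in for a real tracker iteration.
def dummy_tracker(frame):
    time.sleep(0.001)  # pretend per-frame cost (e.g. sparse coding)

fps = measure_fps(dummy_tracker, range(20))
```

Measured this way, any per-frame cost that scales with the number of local patches (and hence with the target size) shows up directly as a lower FPS, which is the effect discussed below for Table 6.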
Table 6. Tracking speed comparison of ours and other algorithms (the target size decreases from Coke to Girl). The speed (FPS) of SCM and ours is reported on Coke, Sylvester, Skating1, Basketball, Jogging-1, Football, Boy and Girl, together with the average.

In addition, the experimental results show that the computational complexity of the tracker is related to the target size. This is because, in the appearance model, the target is divided into patches, which are represented sparsely by the dictionary. In fact, the tracking speed is also influenced by other factors, such as the different update frequencies for videos with different scenes. Accordingly, the computational complexity of the tracker decreases if the target size becomes smaller, while some fluctuation will still exist.

5. Conclusions

In this paper, we propose an object tracking algorithm based on motion consistency. The target state is predicted based on the motion direction and distance and is later merged into the state transition model to predict additional candidate samples. The appearance model first exploits the position factor to assign weights to the candidate samples to characterize the different importance of candidate samples in different positions. At the same time, the occlusion factor is utilized to rectify the similarity of the local representation to obtain the local response, and the candidates are represented sparsely by the positive and negative templates of the holistic representation to obtain the holistic response. The tracking result is determined by the position factor together with the local and holistic responses of each candidate. An adaptive template updating scheme based on velocity prediction is proposed to adapt to appearance changes when the object moves quickly by updating

the dynamic positive templates and negative templates continuously. According to the qualitative and quantitative experimental results, the proposed algorithm has good performance against state-of-the-art tracking algorithms, especially in dealing with fast motion and occlusion, on several challenging video sequences.

Supplementary Materials: The following are available online. Video S1: 9-Boy, Video S2: 10-Coke, Video S3: 11-Jogging-1, Video S4: 12-Football, Video S5: 13-Basketball, Video S6: 14-Sylvester, Video S7: 15-Skating1, Video S8: 16-Girl.

Acknowledgments: The authors would like to thank the National Science Foundation of China, the Joint Foundation of the Ministry of Education of China: 6141A020223, and the Natural Science Basic Research Plan of Shaanxi Province of China: 2017JM6018.

Author Contributions: In this work, Lijun He and Fan Li conceived the main idea, designed the main algorithms and wrote the manuscript. Xiaoya Qiao analyzed the data and performed the simulation experiments. Shuai Wen analyzed the data and provided important suggestions.

Conflicts of Interest: The authors declare no conflict of interest.

References

1. Chor, A.J.; Tu, X. A tutorial on particle filters for online nonlinear/non-Gaussian Bayesian tracking. ESAIM Math. Model. Numer. Anal. 2012, 46. [CrossRef]
2. Zhong, W.; Lu, H.; Yang, M. Robust object tracking via sparse collaborative appearance model. IEEE Trans. Image Process. 2014, 23. [CrossRef] [PubMed]
3. Liu, H.; Li, S.; Fang, L. Robust object tracking based on principal component analysis and local sparse representation. IEEE Trans. Instrum. Meas. 2015, 64. [CrossRef]
4. He, Z.; Yi, S.; Cheung, Y.M.; You, X.; Yang, Y. Robust object tracking via key patch sparse representation. IEEE Trans. Cybern. 2016, 47. [CrossRef] [PubMed]
5. Hotta, K.
Adaptive weighting of local classifiers by particle filters for robust tracking. Pattern Recognit. 2009, 42. [CrossRef]
6. Li, F.; Liu, S. Object tracking via a cooperative appearance model. Knowl.-Based Syst. 2017, 129. [CrossRef]
7. Kwon, J.; Lee, K.M. Visual tracking decomposition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), San Francisco, CA, USA, June 2010.
8. Ahuja, N. Robust visual tracking via multi-task sparse learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Providence, RI, USA, June 2012.
9. Breitenstein, M.D.; Reichl, F.; Leibe, B.; Koller-Meier, E. Robust tracking-by-detection using a detector confidence particle filter. In Proceedings of the IEEE Conference on Computer Vision (ICCV), Kyoto, Japan, 29 September–2 October 2009.
10. Shan, C.; Tan, T.; Wei, Y. Real-time hand tracking using a mean shift embedded particle filter. Pattern Recognit. 2007, 40. [CrossRef]
11. Mei, X.; Ling, H. Robust visual tracking and vehicle classification via sparse representation. IEEE Trans. Pattern Anal. Mach. Intell. 2011, 33. [PubMed]
12. Bao, C.; Wu, Y.; Ling, H.; Ji, H. Real time robust L1 tracker using accelerated proximal gradient approach. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Providence, RI, USA, June 2012.
13. Cai, Y.; de Freitas, N.; Little, J.J. Robust visual tracking for multiple targets. In Proceedings of the European Conference on Computer Vision (ECCV), Graz, Austria, 7–13 May 2006.
14. Wang, J.; Yagi, Y. Adaptive mean-shift tracking with auxiliary particles. IEEE Trans. Syst. Man Cybern. Part B (Cybern.) 2009, 39. [CrossRef] [PubMed]
15. Bellotto, N.; Hu, H. Multisensor-based human detection and tracking for mobile service robots. IEEE Trans. Syst. Man Cybern. Part B (Cybern.) 2009, 39. [CrossRef] [PubMed]

16. Zhang, T.; Liu, S.; Ahuja, N.; Yang, M.H. Robust visual tracking via consistent low-rank sparse learning. Int. J. Comput. Vis. 2015, 111. [CrossRef]
17. Wang, Z.; Wang, J.; Zhang, S.; Gong, Y. Visual tracking based on online sparse feature learning. Image Vis. Comput. 2015, 38. [CrossRef]
18. Jia, X.; Lu, H.; Yang, M.H. Visual tracking via adaptive structural local sparse appearance model. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Providence, RI, USA, June 2012.
19. Liu, B.; Huang, J.; Yang, L.; Kulikowski, C. Robust tracking using local sparse appearance model and K-selection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Colorado Springs, CO, USA, June 2011.
20. Wu, Y.; Shen, B.; Ling, H. Online robust image alignment via iterative convex optimization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Providence, RI, USA, June 2012.
21. Hare, S.; Saffari, A.; Vineet, V.; Cheng, M.M.; Hicks, S.L.; Torr, P.H.S. Struck: Structured output tracking with kernels. IEEE Trans. Pattern Anal. Mach. Intell. 2016, 38. [CrossRef] [PubMed]
22. Babenko, B.; Yang, M.H.; Belongie, S. Robust object tracking with online multiple instance learning. IEEE Trans. Pattern Anal. Mach. Intell. 2011, 33. [CrossRef] [PubMed]
23. Zhang, K.; Zhang, L.; Yang, M.H. Real-time compressive tracking. In Proceedings of the European Conference on Computer Vision (ECCV), Firenze, Italy, 7–13 October 2012.
24. Grabner, H.; Grabner, M.; Bischof, H. Real-time tracking via on-line boosting. In Proceedings of the British Machine Vision Conference, UK, September 2006.
25. Grabner, H.; Bischof, H. On-line boosting and vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), New York, NY, USA, June 2006.
26. Sa, K.; Sekiguchi, S.; Fukumori, T.; Kawaguchi, N. Beyond semi-supervised tracking: Tracking should be as simple as detection, but not simpler than recognition. In Proceedings of the IEEE Conference on Computer Vision (ICCV), Kyoto, Japan, 29 September–2 October 2009.
27. Li, F.; Zhang, S.; Qiao, X. Scene-aware adaptive updating for visual tracking via correlation filters. Sensors 2017, 17. [CrossRef] [PubMed]
28. Li, J.; Fan, X. Outdoor augmented reality tracking using 3D city models and game engine. In Proceedings of the International Congress on Image and Signal Processing, Dalian, China.
29. Dudek, D. Collaborative detection of traffic anomalies using first order Markov chains. In Proceedings of the International Conference on Networked Sensing Systems, Antwerp, Belgium.
30. Chen, S.; Li, S.; Ji, R.; Yan, Y. Discriminative local collaborative representation for online object tracking. Knowl.-Based Syst. 2016, 100. [CrossRef]
31. Sevilla-Lara, L.; Learned-Miller, E. Distribution fields for tracking. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Providence, RI, USA, June 2012.
32. Kanungo, T.; Mount, D.M.; Netanyahu, N.S.; Piatko, C.D. An efficient k-means clustering algorithm: Analysis and implementation. IEEE Trans. Pattern Anal. Mach. Intell. 2002, 24. [CrossRef]
33. Wu, Y.; Lim, J.; Yang, M.H. Online object tracking: A benchmark. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Portland, OR, USA, June 2013.

© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.


More information

Keywords:- Object tracking, multiple instance learning, supervised learning, online boosting, ODFS tracker, classifier. IJSER

Keywords:- Object tracking, multiple instance learning, supervised learning, online boosting, ODFS tracker, classifier. IJSER International Journal of Scientific & Engineering Research, Volume 5, Issue 2, February-2014 37 Object Tracking via a Robust Feature Selection approach Prof. Mali M.D. manishamali2008@gmail.com Guide NBNSCOE

More information

arxiv: v1 [cs.cv] 14 Dec 2016

arxiv: v1 [cs.cv] 14 Dec 2016 Temporal-Needle : A view and appearance invariant video descriptor Michal Yarom Michal Irani The Weizmann Institute of Science, Israel arxiv:1612.04854v1 [cs.cv] 14 Dec 2016 Abstract The ability to detect

More information

A Comparison of Alternative Distributed Dynamic Cluster Formation Techniques for Industrial Wireless Sensor Networks

A Comparison of Alternative Distributed Dynamic Cluster Formation Techniques for Industrial Wireless Sensor Networks sensors Article A Comparon Alternative Dtributed Dynamic Cluster Formation Techniques for Industrial Wireless Sensor Networks Mohammad Gholami Robert W. Brennan * Received: 23 October 2015; Accepted: 29

More information

IMAGE RESTORATION VIA EFFICIENT GAUSSIAN MIXTURE MODEL LEARNING

IMAGE RESTORATION VIA EFFICIENT GAUSSIAN MIXTURE MODEL LEARNING IMAGE RESTORATION VIA EFFICIENT GAUSSIAN MIXTURE MODEL LEARNING Jianzhou Feng Li Song Xiaog Huo Xiaokang Yang Wenjun Zhang Shanghai Digital Media Processing Transmission Key Lab, Shanghai Jiaotong University

More information

VISUAL tracking plays an important role in signal processing

VISUAL tracking plays an important role in signal processing IEEE TRANSACTIONS ON CYBERNETICS 1 Correlation Filter Learning Toward Peak Strength for Visual Tracking Yao Sui, Guanghui Wang, and Li Zhang Abstract This paper presents a novel visual tracking approach

More information

WP1: Video Data Analysis

WP1: Video Data Analysis Leading : UNICT Participant: UEDIN Fish4Knowledge Final Review Meeting - November 29, 2013 - Luxembourg Workpackage 1 Objectives Fish Detection: Background/foreground modeling algorithms able to deal with

More information

Spatial Outlier Detection

Spatial Outlier Detection Spatial Outlier Detection Chang-Tien Lu Department of Computer Science Northern Virginia Center Virginia Tech Joint work with Dechang Chen, Yufeng Kou, Jiang Zhao 1 Spatial Outlier A spatial data point

More information

PROBLEM FORMULATION AND RESEARCH METHODOLOGY

PROBLEM FORMULATION AND RESEARCH METHODOLOGY PROBLEM FORMULATION AND RESEARCH METHODOLOGY ON THE SOFT COMPUTING BASED APPROACHES FOR OBJECT DETECTION AND TRACKING IN VIDEOS CHAPTER 3 PROBLEM FORMULATION AND RESEARCH METHODOLOGY The foregoing chapter

More information

Research on PF-SLAM Indoor Pedestrian Localization Algorithm Based on Feature Point Map

Research on PF-SLAM Indoor Pedestrian Localization Algorithm Based on Feature Point Map micromaches Article Research on PF-SLAM Indoor Pedestrian Localization Algorithm Based on Feature Pot Map Jgjg Shi 1,2,3, Mgrong Ren 1,2,3, * ID, Pu Wang 1,2,3 Juan Meng 1,2,3 1 College Automation, Faculty

More information

Enhanced Laplacian Group Sparse Learning with Lifespan Outlier Rejection for Visual Tracking

Enhanced Laplacian Group Sparse Learning with Lifespan Outlier Rejection for Visual Tracking Enhanced Laplacian Group Sparse Learning with Lifespan Outlier Rejection for Visual Tracking Behzad Bozorgtabar 1 and Roland Goecke 1,2 1 Vision & Sensing, HCC Lab, ESTeM University of Canberra 2 IHCC,

More information

Component-Based Cartoon Face Generation

Component-Based Cartoon Face Generation electronics Article Component-Based Caron Face Generation Saman Sepehri Nejad Mohammad Ali Balafar *, Department IT, Faculty Electrical Computer Engeerg, University Tabriz, 29 Bahman Boulevard, Tabriz

More information

Performance Evaluation Metrics and Statistics for Positional Tracker Evaluation

Performance Evaluation Metrics and Statistics for Positional Tracker Evaluation Performance Evaluation Metrics and Statistics for Positional Tracker Evaluation Chris J. Needham and Roger D. Boyle School of Computing, The University of Leeds, Leeds, LS2 9JT, UK {chrisn,roger}@comp.leeds.ac.uk

More information

A Parameter Communication Optimization Strategy for Distributed Machine Learning in Sensors

A Parameter Communication Optimization Strategy for Distributed Machine Learning in Sensors sensors Article A Parameter Communication Optimization Strategy for Distributed Mache Learng Sensors Jil Zhang 1,2,3,4,5,, Hangdi Tu 1,2,, Yongjian Ren 1,2, *, Jian Wan 1,2,4,5, Li Zhou 1,2, Mgwei Li 1,2,

More information

AS a fundamental component in computer vision system,

AS a fundamental component in computer vision system, 18 IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 7, NO. 3, MARCH 018 Exploiting Spatial-Temporal Locality of Tracking via Structured Dictionary Learning Yao Sui, Guanghui Wang, Senior Member, IEEE, Li Zhang,

More information

Deep Supervision with Shape Concepts for Occlusion-Aware 3D Object Parsing Supplementary Material

Deep Supervision with Shape Concepts for Occlusion-Aware 3D Object Parsing Supplementary Material Deep Supervision with Shape Concepts for Occlusion-Aware 3D Object Parsing Supplementary Material Chi Li, M. Zeeshan Zia 2, Quoc-Huy Tran 2, Xiang Yu 2, Gregory D. Hager, and Manmohan Chandraker 2 Johns

More information

Learning the Three Factors of a Non-overlapping Multi-camera Network Topology

Learning the Three Factors of a Non-overlapping Multi-camera Network Topology Learning the Three Factors of a Non-overlapping Multi-camera Network Topology Xiaotang Chen, Kaiqi Huang, and Tieniu Tan National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy

More information

A Novel Image Super-resolution Reconstruction Algorithm based on Modified Sparse Representation

A Novel Image Super-resolution Reconstruction Algorithm based on Modified Sparse Representation , pp.162-167 http://dx.doi.org/10.14257/astl.2016.138.33 A Novel Image Super-resolution Reconstruction Algorithm based on Modified Sparse Representation Liqiang Hu, Chaofeng He Shijiazhuang Tiedao University,

More information

10-701/15-781, Fall 2006, Final

10-701/15-781, Fall 2006, Final -7/-78, Fall 6, Final Dec, :pm-8:pm There are 9 questions in this exam ( pages including this cover sheet). If you need more room to work out your answer to a question, use the back of the page and clearly

More information

15-451/651: Design & Analysis of Algorithms November 20, 2018 Lecture #23: Closest Pairs last changed: November 13, 2018

15-451/651: Design & Analysis of Algorithms November 20, 2018 Lecture #23: Closest Pairs last changed: November 13, 2018 15-451/651: Design & Analysis of Algorithms November 20, 2018 Lecture #23: Closest Pairs last changed: November 13, 2018 1 Prelimaries We ll give two algorithms for the followg closest pair problem: Given

More information

DARGS: Dynamic AR Guiding System for Indoor Environments

DARGS: Dynamic AR Guiding System for Indoor Environments computers Article DARGS: Dynamic AR Guidg System for Indoor Environments Georg Gerstweiler *, Karl Platzer Hannes Kaufmann ID Institute Stware Technology Interactive Systems, Vienna University Technology,

More information

Link Prediction for Social Network

Link Prediction for Social Network Link Prediction for Social Network Ning Lin Computer Science and Engineering University of California, San Diego Email: nil016@eng.ucsd.edu Abstract Friendship recommendation has become an important issue

More information

International Journal of Advance Engineering and Research Development

International Journal of Advance Engineering and Research Development Scientific Journal of Impact Factor (SJIF): 4.72 International Journal of Advance Engineering and Research Development Volume 4, Issue 11, November -2017 e-issn (O): 2348-4470 p-issn (P): 2348-6406 Comparative

More information

Mobile Human Detection Systems based on Sliding Windows Approach-A Review

Mobile Human Detection Systems based on Sliding Windows Approach-A Review Mobile Human Detection Systems based on Sliding Windows Approach-A Review Seminar: Mobile Human detection systems Njieutcheu Tassi cedrique Rovile Department of Computer Engineering University of Heidelberg

More information

OBJECT tracking has been extensively studied in computer

OBJECT tracking has been extensively studied in computer IEEE TRANSAION ON IMAGE PROCESSING 1 Real-time Object Tracking via Online Discriminative Feature Selection Kaihua Zhang, Lei Zhang, and Ming-Hsuan Yang Abstract Most tracking-by-detection algorithms train

More information

A Low Power, High Throughput, Fully Event-Based Stereo System: Supplementary Documentation

A Low Power, High Throughput, Fully Event-Based Stereo System: Supplementary Documentation A Low Power, High Throughput, Fully Event-Based Stereo System: Supplementary Documentation Alexander Andreopoulos, Hirak J. Kashyap, Tapan K. Nayak, Arnon Amir, Myron D. Flickner IBM Research March 25,

More information

An ICA based Approach for Complex Color Scene Text Binarization

An ICA based Approach for Complex Color Scene Text Binarization An ICA based Approach for Complex Color Scene Text Binarization Siddharth Kherada IIIT-Hyderabad, India siddharth.kherada@research.iiit.ac.in Anoop M. Namboodiri IIIT-Hyderabad, India anoop@iiit.ac.in

More information

Struck: Structured Output Tracking with Kernels. Presented by Mike Liu, Yuhang Ming, and Jing Wang May 24, 2017

Struck: Structured Output Tracking with Kernels. Presented by Mike Liu, Yuhang Ming, and Jing Wang May 24, 2017 Struck: Structured Output Tracking with Kernels Presented by Mike Liu, Yuhang Ming, and Jing Wang May 24, 2017 Motivations Problem: Tracking Input: Target Output: Locations over time http://vision.ucsd.edu/~bbabenko/images/fast.gif

More information

Robotics. Lecture 5: Monte Carlo Localisation. See course website for up to date information.

Robotics. Lecture 5: Monte Carlo Localisation. See course website  for up to date information. Robotics Lecture 5: Monte Carlo Localisation See course website http://www.doc.ic.ac.uk/~ajd/robotics/ for up to date information. Andrew Davison Department of Computing Imperial College London Review:

More information

A New Approach for Train Driver Tracking in Real Time. Ming-yu WANG, Si-le WANG, Li-ping CHEN, Xiang-yang CHEN, Zhen-chao CUI, Wen-zhu YANG *

A New Approach for Train Driver Tracking in Real Time. Ming-yu WANG, Si-le WANG, Li-ping CHEN, Xiang-yang CHEN, Zhen-chao CUI, Wen-zhu YANG * 2018 International Conference on Modeling, Simulation and Analysis (ICMSA 2018) ISBN: 978-1-60595-544-5 A New Approach for Train Driver Tracking in Real Time Ming-yu WANG, Si-le WANG, Li-ping CHEN, Xiang-yang

More information

Accelerometer Gesture Recognition

Accelerometer Gesture Recognition Accelerometer Gesture Recognition Michael Xie xie@cs.stanford.edu David Pan napdivad@stanford.edu December 12, 2014 Abstract Our goal is to make gesture-based input for smartphones and smartwatches accurate

More information

Probabilistic Tracking and Reconstruction of 3D Human Motion in Monocular Video Sequences

Probabilistic Tracking and Reconstruction of 3D Human Motion in Monocular Video Sequences Probabilistic Tracking and Reconstruction of 3D Human Motion in Monocular Video Sequences Presentation of the thesis work of: Hedvig Sidenbladh, KTH Thesis opponent: Prof. Bill Freeman, MIT Thesis supervisors

More information

Received: 28 December 2017; Accepted: 5 February 2018; Published: 28 February 2018

Received: 28 December 2017; Accepted: 5 February 2018; Published: 28 February 2018 sensors Article Real-Time Spaceborne Syntic Aperture Radar Float-Pot Imagg System Usg Optimized Mappg Methodology a Multi-Node Parallel Acceleratg Technique Bgyi Li 1, Hao Shi 1,2, * ID, Liang Chen 1,

More information

CoE4TN4 Image Processing. Chapter 5 Image Restoration and Reconstruction

CoE4TN4 Image Processing. Chapter 5 Image Restoration and Reconstruction CoE4TN4 Image Processing Chapter 5 Image Restoration and Reconstruction Image Restoration Similar to image enhancement, the ultimate goal of restoration techniques is to improve an image Restoration: a

More information

COSC160: Detection and Classification. Jeremy Bolton, PhD Assistant Teaching Professor

COSC160: Detection and Classification. Jeremy Bolton, PhD Assistant Teaching Professor COSC160: Detection and Classification Jeremy Bolton, PhD Assistant Teaching Professor Outline I. Problem I. Strategies II. Features for training III. Using spatial information? IV. Reducing dimensionality

More information

Integrated Detection and Tracking of Multiple Faces Using Particle Filtering and Optical Flow-based Elastic

Integrated Detection and Tracking of Multiple Faces Using Particle Filtering and Optical Flow-based Elastic Integrated Detection and Tracking of Multiple Faces Using Particle Filtering and Optical Flow-based Elastic Matching 1 Abstract The design and implementation of a multiple face tracking framework that

More information

Learning to Grasp Objects: A Novel Approach for Localizing Objects Using Depth Based Segmentation

Learning to Grasp Objects: A Novel Approach for Localizing Objects Using Depth Based Segmentation Learning to Grasp Objects: A Novel Approach for Localizing Objects Using Depth Based Segmentation Deepak Rao, Arda Kara, Serena Yeung (Under the guidance of Quoc V. Le) Stanford University Abstract We

More information

Image Resizing Based on Gradient Vector Flow Analysis

Image Resizing Based on Gradient Vector Flow Analysis Image Resizing Based on Gradient Vector Flow Analysis Sebastiano Battiato battiato@dmi.unict.it Giovanni Puglisi puglisi@dmi.unict.it Giovanni Maria Farinella gfarinellao@dmi.unict.it Daniele Ravì rav@dmi.unict.it

More information

CHAPTER 5 GLOBAL AND LOCAL FEATURES FOR FACE RECOGNITION

CHAPTER 5 GLOBAL AND LOCAL FEATURES FOR FACE RECOGNITION 122 CHAPTER 5 GLOBAL AND LOCAL FEATURES FOR FACE RECOGNITION 5.1 INTRODUCTION Face recognition, means checking for the presence of a face from a database that contains many faces and could be performed

More information

Adaptive Background Mixture Models for Real-Time Tracking

Adaptive Background Mixture Models for Real-Time Tracking Adaptive Background Mixture Models for Real-Time Tracking Chris Stauffer and W.E.L Grimson CVPR 1998 Brendan Morris http://www.ee.unlv.edu/~b1morris/ecg782/ 2 Motivation Video monitoring and surveillance

More information

Structural Local Sparse Tracking Method Based on Multi-feature Fusion and Fractional Differential

Structural Local Sparse Tracking Method Based on Multi-feature Fusion and Fractional Differential Journal of Information Hiding and Multimedia Signal Processing c 28 ISSN 273-422 Ubiquitous International Volume 9, Number, January 28 Structural Local Sparse Tracking Method Based on Multi-feature Fusion

More information

Hand Tracking and Gesture Recognition Using Lensless Smart Sensors

Hand Tracking and Gesture Recognition Using Lensless Smart Sensors sensors Article H Trackg Gesture Recognition Usg Lensless Smart Sensors Lizy Abraham *, Andrea Urru, Niccolò Normani, Mariusz P. Wilk ID, Michael Walsh Brendan O Flynn Micro Nano Systems Centre Tyndall

More information

Supplementary Figure 1. Decoding results broken down for different ROIs

Supplementary Figure 1. Decoding results broken down for different ROIs Supplementary Figure 1 Decoding results broken down for different ROIs Decoding results for areas V1, V2, V3, and V1 V3 combined. (a) Decoded and presented orientations are strongly correlated in areas

More information

Generic Face Alignment Using an Improved Active Shape Model

Generic Face Alignment Using an Improved Active Shape Model Generic Face Alignment Using an Improved Active Shape Model Liting Wang, Xiaoqing Ding, Chi Fang Electronic Engineering Department, Tsinghua University, Beijing, China {wanglt, dxq, fangchi} @ocrserv.ee.tsinghua.edu.cn

More information

Layered Scene Decomposition via the Occlusion-CRF Supplementary material

Layered Scene Decomposition via the Occlusion-CRF Supplementary material Layered Scene Decomposition via the Occlusion-CRF Supplementary material Chen Liu 1 Pushmeet Kohli 2 Yasutaka Furukawa 1 1 Washington University in St. Louis 2 Microsoft Research Redmond 1. Additional

More information

Motion Estimation. There are three main types (or applications) of motion estimation:

Motion Estimation. There are three main types (or applications) of motion estimation: Members: D91922016 朱威達 R93922010 林聖凱 R93922044 謝俊瑋 Motion Estimation There are three main types (or applications) of motion estimation: Parametric motion (image alignment) The main idea of parametric motion

More information

Moving Object Segmentation Method Based on Motion Information Classification by X-means and Spatial Region Segmentation

Moving Object Segmentation Method Based on Motion Information Classification by X-means and Spatial Region Segmentation IJCSNS International Journal of Computer Science and Network Security, VOL.13 No.11, November 2013 1 Moving Object Segmentation Method Based on Motion Information Classification by X-means and Spatial

More information

Local Feature Detectors

Local Feature Detectors Local Feature Detectors Selim Aksoy Department of Computer Engineering Bilkent University saksoy@cs.bilkent.edu.tr Slides adapted from Cordelia Schmid and David Lowe, CVPR 2003 Tutorial, Matthew Brown,

More information

Spatio-temporal Feature Classifier

Spatio-temporal Feature Classifier Spatio-temporal Feature Classifier Send Orders for Reprints to reprints@benthamscience.ae The Open Automation and Control Systems Journal, 2015, 7, 1-7 1 Open Access Yun Wang 1,* and Suxing Liu 2 1 School

More information

Computer Vision I - Filtering and Feature detection

Computer Vision I - Filtering and Feature detection Computer Vision I - Filtering and Feature detection Carsten Rother 30/10/2015 Computer Vision I: Basics of Image Processing Roadmap: Basics of Digital Image Processing Computer Vision I: Basics of Image

More information

Evaluation Measures. Sebastian Pölsterl. April 28, Computer Aided Medical Procedures Technische Universität München

Evaluation Measures. Sebastian Pölsterl. April 28, Computer Aided Medical Procedures Technische Universität München Evaluation Measures Sebastian Pölsterl Computer Aided Medical Procedures Technische Universität München April 28, 2015 Outline 1 Classification 1. Confusion Matrix 2. Receiver operating characteristics

More information

Specular Reflection Separation using Dark Channel Prior

Specular Reflection Separation using Dark Channel Prior 2013 IEEE Conference on Computer Vision and Pattern Recognition Specular Reflection Separation using Dark Channel Prior Hyeongwoo Kim KAIST hyeongwoo.kim@kaist.ac.kr Hailin Jin Adobe Research hljin@adobe.com

More information

IMPROVING SPATIO-TEMPORAL FEATURE EXTRACTION TECHNIQUES AND THEIR APPLICATIONS IN ACTION CLASSIFICATION. Maral Mesmakhosroshahi, Joohee Kim

IMPROVING SPATIO-TEMPORAL FEATURE EXTRACTION TECHNIQUES AND THEIR APPLICATIONS IN ACTION CLASSIFICATION. Maral Mesmakhosroshahi, Joohee Kim IMPROVING SPATIO-TEMPORAL FEATURE EXTRACTION TECHNIQUES AND THEIR APPLICATIONS IN ACTION CLASSIFICATION Maral Mesmakhosroshahi, Joohee Kim Department of Electrical and Computer Engineering Illinois Institute

More information

On Secure Distributed Data Storage Under Repair Dynamics

On Secure Distributed Data Storage Under Repair Dynamics On Secure Distributed Data Storage Under Repair Dynamics Sameer Pawar Salim El Rouayheb Kannan Ramchandran Electrical Engeerg and Computer Sciences University of California at Berkeley Technical Report

More information

Institute of Electronics, Chinese Academy of Sciences, Beijing , China * Correspondence: Tel.:

Institute of Electronics, Chinese Academy of Sciences, Beijing , China * Correspondence: Tel.: remote sensg Article Identification Stable Backscatterg Features, Suitable for Matag Absolute Syntic Aperture Radar (SAR) Radiometric Calibration Sentel-1 Jtao Yang 1,2,3, Xiaolan Qiu 2,3 ID, Chibiao Dg

More information

MIXED-SIGNAL VLSI DESIGN OF ADAPTIVE FUZZY SYSTEMS

MIXED-SIGNAL VLSI DESIGN OF ADAPTIVE FUZZY SYSTEMS IE-SIGNAL VLSI ESIGN OF AAPTIVE FZZY SYSTES I. Baturone, S. Sánchez-Solano, J. L. Huertas Instituto de icroelectrónica de Sevilla - Centro Nacional de icroelectrónica Avda. Rea ercedes s/n, (Edif. CICA)

More information

Textural Features for Image Database Retrieval

Textural Features for Image Database Retrieval Textural Features for Image Database Retrieval Selim Aksoy and Robert M. Haralick Intelligent Systems Laboratory Department of Electrical Engineering University of Washington Seattle, WA 98195-2500 {aksoy,haralick}@@isl.ee.washington.edu

More information