Pairwise Identity Verification via Linear Concentrative Metric Learning


Lilei Zheng, Stefan Duffner, Khalid Idrissi, Christophe Garcia, Atilla Baskurt

To cite this version: Lilei Zheng, Stefan Duffner, Khalid Idrissi, Christophe Garcia, Atilla Baskurt. Pairwise Identity Verification via Linear Concentrative Metric Learning. IEEE Transactions on Cybernetics, IEEE, 2016. <10.1109/TCYB…>. <hal-…>

HAL Id: hal-… Submitted on 13 Jan 2017

HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.

Pairwise Identity Verification via Linear Concentrative Metric Learning

Lilei Zheng, Student Member, IEEE, Stefan Duffner, Khalid Idrissi, Christophe Garcia, Atilla Baskurt

Abstract - This paper presents a study of metric learning systems on pairwise identity verification, including pairwise face verification and pairwise speaker verification. These problems are challenging because the individuals in training and testing are mutually exclusive, and also because of the probable setting of limited training data. For such pairwise verification problems, we present a general framework of metric learning systems and employ the stochastic gradient descent algorithm as the optimization solution. We have studied both similarity metric learning and distance metric learning systems, with either a linear or a shallow nonlinear model, under both restricted and unrestricted training settings. Extensive experiments demonstrate that with limited training pairs, learning a linear system on similar pairs only is preferable due to its simplicity and superiority, i.e. it generally achieves competitive performance on both the LFW face dataset and the NIST speaker dataset. It is also found that a pre-trained deep nonlinear model helps to improve the face verification results significantly.

Index Terms - metric learning, siamese neural networks, face verification, speaker verification, identity verification, pairwise metric

(All the authors are with Université de Lyon, CNRS, INSA-Lyon, LIRIS, UMR5205, F-69621, France; e-mail: lilei.zheng@liris.cnrs.fr; lzheng@nwpu-aslp.org; stefan.duffner@liris.cnrs.fr; khalid.idrissi@liris.cnrs.fr; christophe.garcia@liris.cnrs.fr; atilla.baskurt@liris.cnrs.fr. Manuscript received April 19, XXXX; revised September 17, XXXX.)

1 INTRODUCTION

THE task of pairwise identity verification is to verify whether a pair of biometric identity samples corresponds to the same person or not, where the identity samples can be face images, speech utterances or any other biometric information from individuals. Formally, in such pairwise verification problems, two identity samples of the same person are called a similar pair, and two samples of two different persons are called a dissimilar pair or a different pair. Compared with the traditional identity classification task, in which a decision of acceptance or rejection is made by comparing an identity sample to models (or templates) of each individual [1], [2], [3], pairwise identity verification is more challenging because of the impossibility of building robust identity models with enough training data [4] for all the individuals. Actually, there may be only one identity sample available for some individuals in pairwise identity verification. Besides, individuals in training and testing should be mutually exclusive, i.e. the testing set comprises only samples from unknown persons that are not part of the training set.

Face images and speech utterances may be the most accessible and widely used identity information. As a result, face verification [1] and speaker verification [2] have been well studied over the last two decades. Especially, pairwise face verification has drawn much attention in recent years thanks to the popularity of the Labeled Faces in the Wild (LFW) dataset [4]. Originally, the LFW dataset proposed a restricted training protocol where only a few specified data pairs are allowed for training, a challenging setting that requires effective learning algorithms to discover principles from a small number of training examples, just as human beings do [5]. On the other hand, in the NIST Speaker Recognition Evaluations (SREs) held since 1996, various speaker verification protocols have been investigated [6], [7]. In order to follow the pair generation scheme of the LFW standard protocol, we establish a pairwise speaker verification protocol based on the data from the NIST i-Vector Machine Learning Challenge [7].
The definition of pairwise identity verification reveals the need of measuring the difference or similarity between a pair of samples, which naturally leads us to the study of metric learning [8], i.e. methods that automatically learn a metric from a set of data pairs. A metric learning framework is implemented with a siamese architecture [9] which consists of two identical sub-systems sharing the same set of parameters. For a given input data pair, the two samples are processed by the two sub-systems respectively. The overall system includes a cost function parameterizing the pairwise relationship between data and a mapping function allowing the system to learn high-level features from the training data.

In terms of the cost function, one can divide metric learning methods into distance metric learning and similarity metric learning, where the cost function is defined based on a distance metric and a similarity measurement, respectively. The objective of such a cost function is to increase the similarity value or to decrease the distance between a similar pair, and to reduce the similarity value or to increase the distance between two dissimilar data samples. In this paper, we investigate two kinds of metric learning methods, namely Triangular Similarity Metric Learning (TSML) [10] and Discriminative Distance Metric Learning (DDML) [11].

In terms of the mapping function, one can divide metric learning methods into two main families: linear metric learning and nonlinear metric learning.

Up to now, work in metric learning has focused on linear methods because they are more convenient to optimize and less prone to over-fitting. For instance, the best approaches, such as Within Class Covariance Normalization (WCCN) and Cosine Similarity Metric Learning (CSML), have shown their effectiveness on the problem of pairwise face verification [12], [13]. Also, a few approaches have investigated nonlinear metric learning and have shown competitive performance on some classification problems [11], [14], [15]. Moreover, comparing linear systems with their nonlinear variants on a common ground helps to study the effect of nonlinearity on pairwise verification. For example, the nonlinear transformation Diffusion Maps (DM) has been introduced to face verification [13] and speaker verification [16], respectively. However, no clear evidence in the comparisons validated a universal effectiveness of DM over the linear systems [13]. Analogously, we present the TSML and DDML methods in both linear and nonlinear formulations for the sake of a thorough evaluation. Note that the nonlinear formulations are developed from the linear ones by adding nonlinear activation functions or stacking one more layer of transformation, thus the implemented nonlinearity is shallow.

Overall, on the problem of pairwise identity verification via metric learning, this paper presents a comprehensive study including two kinds of verification applications (i.e. face verification and speaker verification), two kinds of training settings (i.e. data-restricted and data-unrestricted), two kinds of metric learning cost functions (i.e. TSML and DDML), and three kinds of mapping functions (i.e. linear function, single-layer nonlinear function and multi-layer nonlinear function). We will show that under the setting of limited training data, a linear metric learning system trained on similar pairs only generally yields competitive verification results. Either linear TSML or linear DDML achieves state-of-the-art performance on both the LFW image dataset and the NIST speaker dataset.

The contributions of this paper with respect to previous works are the following:

- We establish a pairwise speaker verification protocol based on the data from the NIST i-Vector Machine Learning Challenge, which has mutually exclusive training and test sets of speakers. Both the pairwise face verification protocol of the LFW dataset and this speaker verification task aim at verifying identity information by individuals' biometric features. Another objective of using the two datasets is to show the effectiveness of the proposed metric learning systems on different kinds of data, i.e. images and speech.
- We present the TSML and DDML methods in both linear and nonlinear formulations for pairwise identity verification problems. A thorough evaluation comparing the different formulations shows that with limited training data, the linear models are preferable due to their superior performance and their simplicity.
- We study the influence of limited training data. Generally, compared with unlimited training, the limited case suffers from over-fitting. However, we find that training the linear models on similar pairs only considerably reduces the effect of over-fitting to limited training data.
- We also integrate the proposed linear and shallow nonlinear metric learning models with a pre-trained deep Convolutional Neural Network (CNN) model to improve the performance of pairwise face verification. We find that the linear model serves as an effective verification layer stacked on the deep CNN.

The remainder of this paper is organized as follows: Section 2 briefly summarizes the related work on metric learning and feature representations for images and speech. Section 3 presents the objective of metric learning by illustrating the cost functions of TSML and DDML.
Section 4 introduces the linear and nonlinear formulations and explains the details of our stochastic gradient descent algorithm for optimization. Section 5 describes the datasets and experiments for pairwise face verification and pairwise speaker verification. Finally, we draw our conclusions in Section 6.

2 RELATED WORK

2.1 Metric Learning and Siamese Neural Networks

Most linear metric learning methods employ two types of metrics: the Mahalanobis distance or a more general similarity metric. In both cases, a linear transformation matrix W is learnt to project input features into a target space. Typically, distance metric learning concerns the Mahalanobis distance [17], [18]:

    d_W(x, y) = \sqrt{(x - y)^T W (x - y)},

where x and y are two sample vectors, and W is the matrix that needs to be learnt. Note that when W is the identity matrix, d_W(x, y) is the Euclidean distance. In contrast, similarity metric learning methods learn a function of the following form:

    s_W(x, y) = \frac{x^T W y}{N(x, y)},

where N(x, y) is a normalization term [19]. Specifically, when N(x, y) = 1, s_W(x, y) is the bilinear similarity function [20]; when N(x, y) = \sqrt{x^T W x} \sqrt{y^T W y}, s_W(x, y) is the generalized cosine similarity function [12].

Nonlinear metric learning methods are constructed by simply substituting the above linear projection with a nonlinear transformation [11], [14], [15], [21]. For example, [11] and [14] employed neural networks to accomplish the nonlinear transformation. These nonlinear methods are subject to local optima and more inclined to over-fit to the training data, but have the potential to outperform linear methods on some problems [8], [15]. Compared with linear models, nonlinear models are usually preferred on a redundant training set to well capture the underlying distribution of the data [22].

Since neural networks are the most commonly used nonlinear models, nonlinear metric learning has a natural connection with siamese neural networks [9], [14]. Actually, siamese neural networks can also be linear if the neurons have a linear activation function. From this point of view, siamese neural networks and metric learning denote the same technique of optimizing a metric-based cost function via a linear or nonlinear mapping. The difference exists in their names: siamese neural networks concern the symmetric structure of the neural networks used for data mapping, while the term metric learning emphasizes the pairwise relationship (i.e. the metric) in the data space.
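To make the two metric families concrete, below is a minimal numpy sketch (ours, not from the paper; the function names are illustrative) of the Mahalanobis distance and the two normalizations of the bilinear similarity discussed above. With W = I they reduce to the Euclidean distance and the standard cosine similarity.

    import numpy as np

    def mahalanobis_distance(x, y, W):
        # d_W(x, y) = sqrt((x - y)^T W (x - y)); W should be positive semi-definite
        d = x - y
        return np.sqrt(d @ W @ d)

    def bilinear_similarity(x, y, W):
        # s_W(x, y) = x^T W y, i.e. the case N(x, y) = 1
        return x @ W @ y

    def generalized_cosine(x, y, W):
        # s_W(x, y) = x^T W y / (sqrt(x^T W x) * sqrt(y^T W y))
        return (x @ W @ y) / (np.sqrt(x @ W @ x) * np.sqrt(y @ W @ y))

    # with W = I, these are the Euclidean distance and the cosine similarity
    x, y, W = np.array([1.0, 0.0]), np.array([1.0, 1.0]), np.eye(2)
    print(mahalanobis_distance(x, y, W))  # 1.0
    print(generalized_cosine(x, y, W))    # ~0.707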

For readers interested in a broader scope on metric learning in the literature, we recommend a recent survey which has provided an up-to-date and critical review of existing metric learning methods [8]. For those who prefer experimental analysis, an overview and empirical comparison is given in [23].

Fig. 1. The siamese structure used in metric learning approaches: a pair of inputs x and y is processed by two mapping functions f(·) sharing the same parameters W, giving outputs a = f(x, W) and b = f(y, W) that are constrained by a cost function J(·) which attracts similar pairs and separates dissimilar pairs. The objective is to find an optimal mapping, making a similar pair closer and a dissimilar pair further apart.

2.2 Feature Representation for Face and Speaker

For face recognition, tremendous efforts have been put into developing robust face descriptors [13], [24], [25], [26], [27], [28], [29], [30], [31], [32]. Popular face descriptors include eigenfaces [24], Gabor wavelets [27], SIFT [26], Local Binary Patterns (LBP) [25], etc. Especially, LBP and its variants, such as center-symmetric LBP (CS-LBP) [33], multi-block LBP (MB-LBP) [34], three-patch LBP (TPLBP) [28] and over-complete LBP (OCLBP) [13], have been proven to be effective at describing facial texture. The high-dimensional variants usually perform better, for example OCLBP [13]. Recently, another high-dimensional candidate, the Fisher Vector (FV) face, which combines dense feature sampling with improved Fisher Vector encoding, has achieved striking results on pairwise face verification [30]. Besides, compared with the above handcrafted descriptors, automatic feature learning using Convolutional Neural Networks (CNN) has attracted a lot of interest in Computer Vision during the past decade [35], [36], [37]. In contrast to the handcrafted features, these CNN-based approaches usually rely on large training data to learn a lot of parameters, but they have substantially raised the state-of-the-art records on almost all the challenges in Computer Vision [38].

For speaker recognition, the most popular features are developed on generative models such as the Gaussian Mixture Model-Universal Background Model (GMM-UBM) [39]. Building on the success of GMM-UBM, Joint Factor Analysis (JFA) proposes powerful tools to model the inter-speaker variability and to compensate for channel/session variability in the context of GMMs [40]. Moreover, inspired by JFA, a new feature called the i-vector was developed [41], [42], [43]. JFA models the speaker variability in the high-dimensional space of GMM supervectors, whereas i-vectors are extracted in a low-dimensional space named the total variability space. Taking advantage of the low dimensionality of the total variability space, many machine learning techniques can be applied to speaker verification [44]. Probabilistic Linear Discriminant Analysis (PLDA) [45] is one of the most popular techniques used for speaker verification: different variants such as the Gaussian PLDA (G-PLDA) [16], [46], Heavy-Tailed PLDA (HT-PLDA) [47], [48], [49] and Nonlinear PLDA [50] have been studied. In addition, Pairwise Support Vector Machines (PSVM) [51], [52] have been proposed to verify utterance pairs of different speakers; and fusing PSVM with PLDA can further improve the verification performance [46]. Recently, the metric learning framework DDML [11] was also shown to be helpful for PLDA-based speaker verification [50].

In our experiments, instead of studying the CNN for face verification or the PLDA for speaker verification, we focus on investigating the same metric learning models on the two verification tasks. In terms of feature representations, we choose Fisher Vector faces as the face descriptors and i-vectors as the speech utterance descriptors.
3 METRIC LEARNING OBJECTIVES

Metric learning algorithms usually employ the siamese architecture [9] to compare a pair of data inputs. Figure 1 shows the principal approach. A pair of data is given at the input, and two outputs are produced respectively with the current mapping function f(·). These outputs are constrained by a metric-based cost function J(·). By minimizing this cost function, we can achieve the objective of attracting similar pairs and separating dissimilar pairs. Concretely, if the pair of inputs is similar (i.e. from the same individual), the objective is to make the outputs more similar than the inputs; otherwise, the objective is to make the outputs more dissimilar/different. Popular choices of the measurement on the output vectors include the Euclidean distance [11], [18] and the Cosine Similarity [12], [20]. Therefore, we apply a distance metric learning method, DDML [11], and a similarity metric learning method, TSML [10], to the problem of pairwise identity verification.

Representing the face images or speech utterances as numerical vectors, we use a triplet (x_i, y_i, s_i) to represent a pair of training input instances, where x_i and y_i are two vectors, and s_i = 1 (respectively s_i = -1) means that the two vectors are similar (respectively dissimilar). Taking a projection f(z, W) on the inputs, we obtain a new pair (a_i, b_i) in the target space, where a_i = f(x_i, W) and b_i = f(y_i, W). Then, the TSML or DDML cost function is constructed to define the pairwise relationship between a_i and b_i. Finally, the procedure of learning the metric is carried out by minimizing the cost on a set of training pairs.

3.1 Triangular Similarity Metric Learning

TSML concerns the Triangular Similarity, which is equivalent to the Cosine Similarity [53].

On the two outputs a_i and b_i, the cost function of TSML is defined as:

    J_i = \frac{1}{2}\|a_i\|^2 + \frac{1}{2}\|b_i\|^2 - \|c_i\| + 1,    (1)

where c_i = a_i + s_i b_i can be regarded as one of the two diagonals of the parallelogram formed by a_i and b_i (Fig. 2(a)). Moreover, this cost function can be rewritten as:

    J_i = \frac{1}{2}(\|a_i\| - 1)^2 + \frac{1}{2}(\|b_i\| - 1)^2 + \|a_i\| + \|b_i\| - \|c_i\|.    (2)

We can see that minimizing the first part aims at making the vectors a_i and b_i have unit length 1; the second part concerns the well-known triangle inequality theorem: the sum of the lengths of two sides of a triangle must always be greater than the length of the third side, i.e. \|a_i\| + \|b_i\| - \|c_i\| > 0. More interestingly, with the length constraints imposed by the first part, minimizing the second part is equivalent to minimizing the angle θ inside a similar pair (s_i = 1) or maximizing the angle θ inside a dissimilar pair (s_i = -1), in other words, maximizing the Cosine Similarity between a_i and s_i b_i:

    \cos(a_i, s_i b_i) = \frac{s_i a_i^T b_i}{\|a_i\| \|b_i\|}.    (3)

The gradient of the cost function (Equation (1)) with respect to the parameters W is:

    \frac{\partial J_i}{\partial W} = (a_i - \hat{c}_i)^T \frac{\partial a_i}{\partial W} + (b_i - s_i \hat{c}_i)^T \frac{\partial b_i}{\partial W},    (4)

where \hat{c}_i = c_i / \|c_i\|. We can obtain the optimal cost at the zero gradient: a_i - \hat{c}_i = 0 and b_i - s_i \hat{c}_i = 0. In other words, the gradient function has \hat{c}_i and s_i \hat{c}_i as targets for a_i and b_i, respectively. Fig. 2(b) illustrates this: for a similar pair, a_i and b_i are mapped to the same target vector along one diagonal; for a dissimilar pair, a_i and b_i are mapped to opposite unit vectors along the other diagonal. This perfectly reveals the objective of attracting similar pairs and separating dissimilar pairs.

Fig. 2. Geometrical interpretation of the TSML cost and gradient. (a) Minimizing the cost means making similar vectors parallel and dissimilar vectors opposite. (b) The gradient function suggests unit vectors on the diagonals as targets for a_i and b_i: the same target vector for a similar pair (s_i = 1); or opposite target vectors for a dissimilar pair (s_i = -1).

3.2 Discriminative Distance Metric Learning

In contrast, DDML focuses on the pairwise distance between feature vectors. Unlike the Cosine Similarity, which naturally defines a minimum of -1 and a maximum of 1, the Euclidean distance has only a minimum of 0 and no maximum. Hence a margin is usually defined in distance metric learning, assuming that two vectors with a distance larger than the margin are well separated. Typically, for a pair of outputs a_i and b_i, DDML defines the cost function as:

    J_i = \frac{1}{2} g\left(1 - s_i (1 - \|a_i - b_i\|^2)\right),    (5)

where g(z) = \frac{1}{T}\log(1 + \exp(Tz)) is the generalized logistic loss function [54], and T is a sharpness parameter usually set to 10. Minimizing the logistic loss function means minimizing the value of

    z = 1 - s_i (1 - \|a_i - b_i\|^2).    (6)

Specifically, for a similar pair (s_i = 1), z can be simplified as \|a_i - b_i\|^2, and minimizing z requires a_i and b_i to be identical; for a dissimilar pair (s_i = -1), the equation suggests maximizing \|a_i - b_i\|^2 - 2 = -z, that is, separating a dissimilar pair with a squared distance of 2. An illustration of the objective is shown in Fig. 3.

Fig. 3. Illustration of the DDML cost function, whose objective is to find an optimal mapping that makes a similar pair closer and separates a dissimilar pair with a distance margin of 2.

The gradient of the DDML cost function (Equation (5)) with respect to the parameters W is:

    \frac{\partial J_i}{\partial W} = \frac{s_i (a_i - b_i)^T}{1 + \exp\left(-T(1 - s_i + s_i \|a_i - b_i\|^2)\right)} \frac{\partial (a_i - b_i)}{\partial W}.    (7)
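As a concrete reference, the following numpy sketch (our own; the function names are illustrative) computes the two per-pair costs and their gradients with respect to the outputs a_i and b_i, following Equations (1), (4), (5) and (7). Propagating these output gradients down to the parameters W is the subject of Section 4.

    import numpy as np

    def tsml_cost_grad(a, b, s):
        # Equation (1): J = 0.5*||a||^2 + 0.5*||b||^2 - ||c|| + 1, with c = a + s*b
        c = a + s * b
        c_hat = c / np.linalg.norm(c)           # unit target vector along the diagonal
        J = 0.5 * (a @ a) + 0.5 * (b @ b) - np.linalg.norm(c) + 1.0
        return J, a - c_hat, b - s * c_hat      # cost, dJ/da, dJ/db (Equation (4))

    def ddml_cost_grad(a, b, s, T=10.0):
        # Equation (5): J = 0.5*g(1 - s*(1 - ||a - b||^2)), g(z) = log(1 + exp(T*z))/T
        d = a - b
        z = 1.0 - s * (1.0 - d @ d)
        J = 0.5 * np.logaddexp(0.0, T * z) / T  # numerically stable g(z)
        sigma = 1.0 / (1.0 + np.exp(-T * z))    # g'(z), the factor in Equation (7)
        return J, s * sigma * d, -s * sigma * d # cost, dJ/da, dJ/db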

3.3 Cost and Gradient for Batch Training

In practice, we may consider a few data pairs as a small batch in each training iteration; the overall cost and gradient of a batch is simply the average over all the training pairs in the batch:

    J = \frac{1}{n} \sum_{i=1}^{n} J_i,    (8a)

    \frac{\partial J}{\partial W} = \frac{1}{n} \sum_{i=1}^{n} \frac{\partial J_i}{\partial W},    (8b)

where n is the number of training pairs in a batch, J_i is the TSML cost in Equation (1) or the DDML cost in Equation (5), and the corresponding gradient ∂J_i/∂W is calculated by Equation (4) or Equation (7). Finally, the gradient can be used in the Backpropagation algorithm [55] to perform gradient descent and search for an optimal solution.

4 LINEAR AND NONLINEAR MAPPINGS

While a cost function defines the pairwise relationship between data in the target space, a mapping function represents the system's ability to learn to achieve the goal of the cost function. From the point of view of neural networks, different mapping functions can be considered as different combinations of neurons in network layers. We study three kinds of mapping functions here.

Single layer of linear neurons. The simplest neurons are linear neurons without a bias term, which only involve a parameter matrix W. For a given input z ∈ R^d, the output is simply f(z, W) = Wz. For instance, the TSML gradient of the i-th pair with respect to the parameter matrix W is:

    \frac{\partial J_i}{\partial W} = (a_i - \hat{c}_i) x_i^T + (b_i - s_i \hat{c}_i) y_i^T.    (9)

Single layer of nonlinear neurons. Besides the parameter matrix W, nonlinear neurons involve a bias term and a nonlinear activation function, e.g. the tanh function [22]. For a given input z ∈ R^d, the output is:

    f(z, W) = \tanh(W z + h),    (10)

where h denotes the bias term of the neurons. This equation can be rewritten as:

    f(z', W') = \tanh(W' z'),    (11)

where z' = [z; 1] and W' = [W h]. Recall that the derivative of the tanh function is tanh'(z) = 1 - tanh^2(z). Based on the linear case in Equation (9), the derivative of the TSML cost function with respect to the parameters W' : {W, h} is:

    \frac{\partial J_i}{\partial W'} = \frac{\partial J_i}{\partial a_i} \frac{\partial a_i}{\partial W'} + \frac{\partial J_i}{\partial b_i} \frac{\partial b_i}{\partial W'}
                                     = (1 - a_i ⊙ a_i) ⊙ (a_i - \hat{c}_i) [x_i; 1]^T + (1 - b_i ⊙ b_i) ⊙ (b_i - s_i \hat{c}_i) [y_i; 1]^T,    (12)

where the notation ⊙ means element-wise multiplication. The derivation of this equation is easily obtained with the chain rule used in the Backpropagation algorithm [22].

Multiple layers of nonlinear neurons. By combining several interconnected nonlinear neurons together, Multi-Layer Perceptrons (MLP) are able to approximate arbitrary nonlinear mappings and have thus been the most popular kind of neural networks since the 1980s [55]. We adopt a 3-layer MLP, containing one input layer and two layers of nonlinear neurons, to realize the nonlinear mapping. Similarly to Equation (12) and according to the Backpropagation chain rule, we can calculate the derivatives with respect to each parameter of the MLP for a given training pair.

For the DDML cost function, we can obtain the derivatives with respect to the weights of the neuron layers in the same way as for the TSML method. For all the linear and shallow nonlinear systems, we employ the same stochastic gradient descent optimization to update the weights until reaching an optimal solution.
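The first two mappings can be sketched as forward/backward passes that reuse the output gradients (dJ/da, dJ/db) from the cost functions above; since the matrix W is shared by the two branches, their gradient contributions simply add up, as in Equations (9) and (12). The helper names below are ours.

    import numpy as np

    def forward_linear(z, W):
        return W @ z                              # f(z, W) = Wz, no bias term

    def grad_linear(da, db, x, y):
        # Equation (9): dJ/dW = (dJ/da) x^T + (dJ/db) y^T
        return np.outer(da, x) + np.outer(db, y)

    def forward_tanh(z, W, h):
        return np.tanh(W @ z + h)                 # single layer of nonlinear neurons

    def grad_tanh(da, db, a, b, x, y):
        # Equation (12), using tanh'(u) = 1 - tanh(u)^2 element-wise
        ga = (1.0 - a * a) * da
        gb = (1.0 - b * b) * db
        dW = np.outer(ga, x) + np.outer(gb, y)
        dh = ga + gb                              # gradient of the bias term
        return dW, dh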
4.1 Stochastic Gradient Descent

Since all three types of mapping functions have similar cost and gradient functions, we employ the same algorithm to perform optimization. The proposed method is based on stochastic gradient descent and is summarized in Algorithm 1. More advanced optimization algorithms such as conjugate gradient descent or L-BFGS [10], [56] could be used as well, but their analysis would go beyond the scope of this paper. We adopt early stopping [57] to prevent over-fitting: a small set is separated from the training data for validation, and the model with the best performance on the validation set is retained for evaluation on the test set. In addition, we use a momentum term [22] to speed up training. The momentum λ is empirically set to 0.99 for all the experiments.

Following [30], [58], the input vectors are passed through L2 normalization before training, i.e. the length of each input vector is normalized to 1.

Initializing the weights. For the linear mapping, like in [10], [12], [58], we initialize the transformation matrix with the identity matrix. For the nonlinear mappings, we use the normalized random initialization [59] that is considered to be helpful for tanh networks. Concretely, the weights of each layer are initialized with a uniform distribution:

    \{W^{(j)}, h^{(j)}\} \sim U\left[-\frac{\sqrt{6}}{\sqrt{n_j + n_{j+1}}}, \frac{\sqrt{6}}{\sqrt{n_j + n_{j+1}}}\right],    (13)

where \{W^{(j)}, h^{(j)}\} denotes the parameters between the j-th and (j+1)-th layers, and n_j and n_{j+1} represent the number of nodes in the two layers, respectively.
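A one-function sketch of the normalized random initialization in Equation (13) (the helper name is ours; the bias h is drawn from the same interval, as in the equation):

    import numpy as np

    def normalized_init(n_in, n_out, rng=None):
        # U[-sqrt(6)/sqrt(n_j + n_{j+1}), +sqrt(6)/sqrt(n_j + n_{j+1})]
        rng = rng if rng is not None else np.random.default_rng()
        limit = np.sqrt(6.0 / (n_in + n_out))
        W = rng.uniform(-limit, limit, size=(n_out, n_in))
        h = rng.uniform(-limit, limit, size=n_out)
        return W, h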

Algorithm 1: Stochastic Gradient Descent for TSML

    input : training set; validation set
    output: parameter set W
    parameters: learning rate α = 10^-4; momentum λ = 0.99; iteration limit P_t = …; validation frequency F_t = 10^3
    % initialization
    if linear mapping then W_0 ← I  (I is the identity matrix)
    if nonlinear mapping then randomly initialize W_0 according to Equation (13)
    ΔW_0 ← 0
    perform L2 normalization on the training set
    perform L2 normalization on the validation set
    % optimization by Backpropagation
    for t = 1, 2, ..., P_t do
        % select training data for each epoch
        randomly select a similar pair and a dissimilar pair from the training set
        % forward propagation
        calculate the cost J on the selected training pairs
        % backward propagation
        calculate the corresponding gradient ∂J/∂W_{t-1}
        % update using momentum
        ΔW_t ← λ ΔW_{t-1} + ∂J/∂W_{t-1}
        W_t ← W_{t-1} - α ΔW_t
        % check the validation set regularly
        if (t mod F_t) == 0 then compute the Decision Accuracy according to Equation (14)
    % output the best matrix on the validation set
    W ← the W_t giving the best result on the validation set
    return W
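Putting the pieces together, here is a condensed sketch of Algorithm 1 for the linear TSML case. It assumes the helpers sketched above, L2-normalized input pairs (x, y, s), and a decision_accuracy routine implementing Equation (14) (sketched in Section 5.2 below); the sign convention of the momentum update follows standard gradient descent.

    import numpy as np

    def train_tsml_linear(pairs, val_pairs, dim, alpha=1e-4, lam=0.99,
                          n_iters=100_000, val_every=1_000):
        W = np.eye(dim)                    # identity initialization (linear case)
        V = np.zeros_like(W)               # momentum buffer
        best_W, best_acc = W.copy(), -1.0
        rng = np.random.default_rng(0)
        sim = [p for p in pairs if p[2] == 1]
        dis = [p for p in pairs if p[2] == -1]
        for t in range(1, n_iters + 1):
            grad = np.zeros_like(W)
            # one similar and one dissimilar pair per iteration, as in Algorithm 1
            for x, y, s in (sim[rng.integers(len(sim))], dis[rng.integers(len(dis))]):
                a, b = W @ x, W @ y
                _, da, db = tsml_cost_grad(a, b, s)
                grad += grad_linear(da, db, x, y) / 2.0   # batch average, Equation (8b)
            V = lam * V + grad             # momentum accumulation
            W = W - alpha * V              # descent step
            if t % val_every == 0:         # early stopping on the validation set
                acc = decision_accuracy(W, val_pairs)
                if acc > best_acc:
                    best_acc, best_W = acc, W.copy()
        return best_W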
5 EXPERIMENTS AND ANALYSIS

5.1 Datasets

In order to validate the generality of the proposed approaches, we carried out pairwise identity verification experiments on two datasets in different domains: the LFW image dataset for pairwise face verification [4] and the NIST i-vector dataset for pairwise speaker verification [7].

TABLE 1
Distribution of individuals and images in the 10 subsets, where the individuals are mutually exclusive (columns: subset index, number of individuals, number of images).

TABLE 2
Distribution of individuals and speech utterances in the 10 subsets, where the individuals are mutually exclusive (columns: subset index, number of individuals, number of utterances).

5.1.1 LFW dataset

The LFW dataset contains numerous annotated images from the web. For all the images, we used the cropped funneled version of LFW [4]. We only used the View 2 subset of LFW for performance evaluation. In View 2, to do 10-fold cross validation, all the 5749 persons in the dataset are divided into 10 subsets where the individuals are mutually exclusive. The total number of images for all the persons is 13,233; however, the number of images for each individual varies from 1 to 530. Table 1 summarizes the data distribution of individuals and images in the 10 subsets.

We used Fisher Vector faces as the vector representation of face images, where the vectors are directly provided by [30]¹ (data for setting 3), and the dimension of a Fisher Vector face is 67,584. However, directly taking the original facial vectors for learning causes computational problems, i.e. the time required for multiplications of the 67,584-dimensional vectors would be unacceptable. Therefore, following [12], [13], we apply Whitened Principal Component Analysis (WPCA) to reduce the vector dimension to 500.

5.1.2 NIST i-vector dataset

We used the data of the NIST 2014 Speaker i-Vector Challenge [7], which consist of i-vectors derived from conversational telephone speech data in the NIST speaker recognition evaluations from 2004 to 2012. Each i-vector, the identity vector, is a vector of 600 components. Along with each i-vector, the amount of speech (in seconds) used to compute the i-vector is supplied as metadata; segment durations were sampled from a log-normal distribution with a mean of … seconds. This dataset consists of a development set for building models and a test set for evaluation. We only used the development data of this Challenge and established an experimental protocol of pairwise speaker verification. There are 36,572 speech utterances in total in this experiment, belonging to 4958 different speakers. The number of utterances for a single speaker varies from 1 to 75. Like in LFW, we also split the data into 10 subsets to do 10-fold cross validation. Table 2 shows the distribution of individuals and speech utterances in the 10 subsets.

5.2 Experimental Setup

On both datasets, we performed cross-validation on the 10 folds: there are overall 10 experiments; in each repetition, sample pairs from 9 folds are used for training, and sample pairs from the remaining fold are used for testing. As announced in Section 4.1, some training data are separated out as an independent validation set for early stopping.

5.2.1 Fixed testing

To perform evaluation on the test set of each experiment, it is better to fix the sample pairs in each fold so that we can fairly compare different approaches on the same test data. Specifically, 600 image pairs are provided in each fold of the LFW dataset, where 300 are similar and the other 300 are dissimilar [4]. In the NIST i-vector dataset, there are more samples for each individual than in the LFW dataset, so we generate more sample pairs for each fold, namely 1200 similar pairs and 1200 dissimilar pairs.

¹ vgg/software/face_desc/
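For illustration, a sketch of how fixed pairs can be generated for one fold, assuming fold is a dict mapping each identity to its list of feature vectors (identities across folds are mutually exclusive); the names and sampling details are ours, not the exact protocol script.

    import numpy as np

    def generate_pairs(fold, n_pairs, rng=None):
        rng = rng if rng is not None else np.random.default_rng(0)
        ids_multi = [k for k, v in fold.items() if len(v) >= 2]  # identities with >= 2 samples
        ids_all = list(fold)
        similar, dissimilar = [], []
        while len(similar) < n_pairs:          # two samples of one identity
            k = ids_multi[rng.integers(len(ids_multi))]
            i, j = rng.choice(len(fold[k]), size=2, replace=False)
            similar.append((fold[k][i], fold[k][j], 1))
        while len(dissimilar) < n_pairs:       # one sample each of two identities
            k1, k2 = rng.choice(len(ids_all), size=2, replace=False)
            x = fold[ids_all[k1]][rng.integers(len(fold[ids_all[k1]]))]
            y = fold[ids_all[k2]][rng.integers(len(fold[ids_all[k2]]))]
            dissimilar.append((x, y, -1))
        return similar, dissimilar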

5.2.2 Restricted and unrestricted training

Following [4], we defined two training settings in our experiments: the restricted setting, in which only the fixed sample pairs in each fold can be collected for training, e.g. the specified 300 similar and 300 dissimilar pairs in each fold of the LFW dataset; and the unrestricted setting, which allows generating more sample pairs for training by using the identity information of all the samples. As mentioned previously, the test sample pairs are the same for both restricted and unrestricted settings.

5.2.3 Maximal decision accuracy

Like the minimal Decision Cost Function (minDCF) in [7], we define a Decision Accuracy (DA) function to measure the overall verification performance on a set of data pairs:

    DA(\gamma) = \frac{\text{number of right decisions}(\gamma)}{\text{total number of pairs}},    (14)

where the threshold γ is used to make a decision on the final distance or similarity values: for the TSML system, cos(a, b) > γ means that (a, b) is a similar pair, otherwise it is dissimilar; for the DDML system, \|a - b\|^2 < γ denotes a similar pair, otherwise it is dissimilar. The maximal DA (maxDA) over all possible threshold values is the final score recorded. We report the mean maxDA scores (± standard error of the mean) over the 10 experiments. For the speaker verification results, we also measure the mean Equal Error Rate (EER), as it is commonly used in the speaker recognition field [47], [52].
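A direct sketch of Equation (14) for the TSML (cosine) case, sweeping the threshold γ over all observed scores; this also provides the decision_accuracy routine assumed in the Algorithm 1 sketch above. For DDML, the same sweep applies to squared distances with the comparison reversed. Function names are ours.

    import numpy as np

    def max_da(scores, labels):
        # maxDA: best accuracy over all thresholds gamma; labels are +1 / -1
        best = 0.0
        for gamma in np.unique(scores):
            best = max(best, np.mean(np.where(scores >= gamma, 1, -1) == labels))
        return best

    def decision_accuracy(W, pairs):
        scores, labels = [], []
        for x, y, s in pairs:
            a, b = W @ x, W @ y
            scores.append((a @ b) / (np.linalg.norm(a) * np.linalg.norm(b)))
            labels.append(s)
        return max_da(np.array(scores), np.array(labels))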
5.3 Experimental Results

At the beginning, we directly calculated maxDA scores on the whitened feature vectors, i.e. the 500-dimensional FV vectors for the LFW dataset and the 600-dimensional i-vectors for the NIST i-vector dataset. We consider this evaluation as the baseline. According to the different neuron models defined in Section 4, we evaluated three kinds of metric learning approaches in the experiments:

- TSML-Linear and DDML-Linear: using a single layer of linear neurons without a bias term;
- TSML-Nonlinear and DDML-Nonlinear: using a single layer of nonlinear neurons with a bias term;
- TSML-MLP and DDML-MLP: using two layers of nonlinear neurons with bias terms.

All these models are trained on both similar and dissimilar pairs. Results on the LFW-funneled dataset and the NIST i-vector dataset are summarized in Tables 3-6. We also re-implement the state-of-the-art WCCN method [13], [58] as a comparison.

Learning on Similar Pairs Only: comparing WCCN with the proposed six metric learning models, we find that WCCN achieves better performance under restricted training. The major difference between WCCN and the other models is that WCCN concerns only the intra-personal variance but ignores the inter-personal information [13], [58]. In other words, WCCN performs learning on similar pairs only, while the above TSML and DDML systems take into account both similar and dissimilar pairs. To clarify this issue, we train the proposed models on similar pairs only, as six new models: TSML-Linear-Sim, TSML-Nonlinear-Sim and TSML-MLP-Sim; DDML-Linear-Sim, DDML-Nonlinear-Sim and DDML-MLP-Sim. The results are also shown in Tables 3-6.

TABLE 3
Mean maxDA scores (± standard error of the mean) of pairwise face verification by the TSML systems on the LFW-funneled image dataset. "-Sim" means learning on similar pairs only.

    Approaches           Restricted Training   Unrestricted Training
    Baseline             84.83±0.38            84.83±0.38
    WCCN                 91.10±0.45            …±0.36
    TSML-Linear          87.95±…               …±0.38
    TSML-Nonlinear       86.23±…               …±0.52
    TSML-MLP             84.10±…               …±0.73
    TSML-Linear-Sim      91.90±0.52            …±0.48
    TSML-Nonlinear-Sim   90.58±…               …±0.37
    TSML-MLP-Sim         88.98±…               …±0.58

TABLE 4
Mean maxDA scores (± standard error of the mean) and mean EER of pairwise speaker verification by the TSML systems on the NIST i-vector speaker dataset. "-Sim" means learning on similar pairs only. Each cell reports maxDA / EER.

    Approaches           Restricted Training   Unrestricted Training
    Baseline             87.78±0.39 / …        87.78±0.39 / …
    WCCN                 91.69±0.29 / …        …±0.33 / …
    TSML-Linear          89.78±0.25 / …        …±0.20 / …
    TSML-Nonlinear       87.43±0.31 / …        …±0.20 / …
    TSML-MLP             84.88±0.24 / …        …±0.36 / …
    TSML-Linear-Sim      92.94±0.15 / …        …±0.24 / …
    TSML-Nonlinear-Sim   91.29±0.25 / …        …±0.23 / …
    TSML-MLP-Sim         89.59±0.45 / …        …±0.30 / …

5.3.1 More training data

The first phenomenon we can observe is that unrestricted training produces better results than restricted training: more training data generally bring an accuracy improvement to each model. It has been known since the mid-seventies [5], [38], [60] that many methods increase in accuracy with increasing training data until they reach their optimal performance. Indeed, more training data better capture the underlying distribution of the whole dataset and thus reduce the over-fitting gap between training and test. This holds especially for the pairwise verification problem, which requires learning on data pairs: compared with restricted training, which only allows using a few specified training pairs of a dataset, unrestricted training covers enough data pairs and thus protects the models from over-fitting to a small portion of the training data.

TABLE 5
Mean maxDA scores (± standard error of the mean) of pairwise face verification by the DDML systems on the LFW-funneled image dataset. "-Sim" means learning on similar pairs only.

    Approaches           Restricted Training   Unrestricted Training
    Baseline             84.83±0.38            84.83±0.38
    WCCN                 91.10±0.45            …±0.36
    DDML-Linear          88.27±…               …±0.35
    DDML-Nonlinear       88.12±…               …±0.36
    DDML-MLP             88.60±…               …±0.42
    DDML-Linear-Sim      91.03±0.61            …±0.29
    DDML-Nonlinear-Sim   90.82±…               …±0.40
    DDML-MLP-Sim         89.57±…               …±0.44

TABLE 6
Mean maxDA scores (± standard error of the mean) and mean EER of pairwise speaker verification by the DDML systems on the NIST i-vector speaker dataset. "-Sim" means learning on similar pairs only. Each cell reports maxDA / EER.

    Approaches           Restricted Training   Unrestricted Training
    Baseline             87.78±0.39 / …        87.78±0.39 / …
    WCCN                 91.69±0.29 / …        …±0.33 / …
    DDML-Linear          89.77±0.21 / …        …±0.23 / …
    DDML-Nonlinear       87.98±0.29 / …        …±0.23 / …
    DDML-MLP             89.11±0.27 / …        …±0.25 / …
    DDML-Linear-Sim      92.95±0.29 / …        …±0.24 / …
    DDML-Nonlinear-Sim   91.98±0.25 / …        …±0.22 / …
    DDML-MLP-Sim         89.08±0.27 / …        …±0.36 / …

5.3.2 Linear vs. nonlinear

The second observation is that the linear models generally perform better than the shallow nonlinear models. Specifically, more parameters (i.e. additional bias terms or/and more layers of neurons) and the nonlinearity make the nonlinear models more powerful at adapting themselves to the training data. However, without any additional techniques to prevent over-fitting, generalization to the test data is not guaranteed. Figure 4 shows the learning curves of TSML-Linear, TSML-Nonlinear and TSML-MLP in restricted training; we can see that all of them easily fit the training data. Especially, with the most parameters, TSML-MLP is the strongest learning machine, reaching an accuracy of 100% on the training data within the fewest iterations, but it performs the worst on the test data. More regularization techniques, such as weight decay [22] and dropout [61], could be introduced to reduce the risk of over-fitting for such a slightly deeper nonlinear model, but their analysis would go beyond the scope of this paper. In contrast, with the same experimental setting, linearity naturally indicates the property of generalization and thus makes TSML-Linear fit better to the unseen data, i.e. the validation and test sets.

Fig. 4. Learning curves (maxDA vs. iteration number) of different TSML models: (a) TSML-Linear, (b) TSML-Nonlinear, (c) TSML-MLP. Curves on the training, validation and test sets are represented by black, blue and red lines, respectively. All the models are trained on the LFW data under the restricted setting. According to early stopping, a vertical line indicates the model having the best performance on the validation set. Without any additional regularization techniques, the more complex the learning model is, i.e. having more parameters, the larger the over-fitting gap is.

5.3.3 Concentrative training on limited data pairs

Figure 5 compares the performance of the linear models of both TSML and DDML on the LFW-funneled dataset and the NIST i-vector dataset, respectively. In general, under restricted training, the models trained on similar pairs only, i.e. TSML-Linear-Sim and DDML-Linear-Sim, yield significantly better results; under unrestricted training, all the linear models perform comparably well.
Fig. 5. Performance comparison (maxDA) of the linear models (TSML-Linear, TSML-Linear-Sim, DDML-Linear, DDML-Linear-Sim) under the restricted and unrestricted settings: (a) results on the LFW-funneled dataset; (b) results on the NIST i-vector dataset.

In general, a linear concentrative model² should be adopted for restricted training because of its superior performance. Moreover, it should also be preferred for unrestricted training due to its faster training: compared with the models trained on both similar and dissimilar pairs, the linear concentrative models only take into account half of the training data but yield comparable verification results.

Concretely, the setting of an equal quantity of similar and dissimilar pairs is problematic for restricted training.

² We use the term "concentrative" to indicate learning on similar pairs only, since it concerns closing a similar pair rather than separating a dissimilar pair.

Assuming an n-class problem with two samples in each class, the number of all possible similar pairs is n, but the number of all possible dissimilar pairs is 2n(n-1), which is much larger than the number of similar pairs. However, the restricted configuration requires the number of dissimilar pairs to be the same as the number of similar pairs; for example, only 300 similar pairs and 300 dissimilar pairs are provided in each subset of the LFW dataset. As a consequence, learning on such a limited number of dissimilar pairs causes serious over-fitting problems for the normal models, which is why they perform worse than the linear concentrative models. In contrast, when the training is unrestricted, enough dissimilar pairs can be covered during training and the risk of over-fitting is reduced. Hence the normal models trained on both similar and dissimilar pairs perform well in unrestricted training.

In short, restricted training on an equal quantity of similar and dissimilar pairs does not accord with the ratio of similar and dissimilar pairs in practice. The similar pairs indeed deliver more positive contributions for learning a better metric. Apart from our suggestion of learning on similar pairs only, this goal can be achieved by other techniques such as shifting the Cosine Similarity boundary [62], using hinge loss functions to filter invalid gradient descent from dissimilar pairs [11], or weighting the gradient contributions from similar and dissimilar pairs [12], [63]. Overall, our proposed concentrative training is a competitive choice due to its simplicity.

5.3.4 TSML vs. DDML

Comparing the two metric learning methods, TSML and DDML, we find comparable performance records in Tables 3-6. This is reasonable because the Euclidean distance is naturally related to the Cosine Similarity. For the square of the Euclidean distance between two vectors, we have

    \|a - b\|^2 = (a - b)^T (a - b) = \|a\|^2 + \|b\|^2 - 2 a^T b.

When the vectors are normalized to unit length, i.e. \|a\|^2 = \|b\|^2 = 1, the previous equation can be written as

    \|a - b\|^2 = 2 - 2 \cos(a, b).

That means that, in our situation, minimizing the distance between data pairs is equivalent to maximizing the pairwise similarity value.
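This identity is easy to verify numerically; a quick check with random unit vectors:

    import numpy as np

    rng = np.random.default_rng(0)
    a = rng.normal(size=5); a /= np.linalg.norm(a)
    b = rng.normal(size=5); b /= np.linalg.norm(b)
    print(np.isclose(np.sum((a - b) ** 2), 2.0 - 2.0 * (a @ b)))  # True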
5.4 Comparison with the State-of-the-Art

We compared the proposed TSML-Linear-Sim method with several state-of-the-art methods on the LFW dataset under the image-restricted configuration with no outside data [64]. The comparison is summarized in Table 7, and the corresponding ROC curves are shown in Fig. 6. The curves of MRF-MLBP [65] and MRF-Fusion-CSKDA [66] are missing because the curve data are not provided on the public result page³. We can see that MRF-Fusion-CSKDA occupies the first place and the proposed TSML-Linear-Sim takes the second one, with a relatively large gap (91.90% vs. 95.89%). This is because MRF-Fusion-CSKDA employed multi-scale binarized statistical image features and made a fusion of multiple features [66]. However, the proposed TSML-Linear-Sim method is much simpler, as it has only utilized a single feature, the FV vectors.

TABLE 7
Comparison of TSML-Linear-Sim with other state-of-the-art results under the restricted configuration with no outside data on LFW-funneled.

    Method                            Accuracy
    V1-like/MKL [67]                  79.35±0.55
    APEM (fusion) [68]                79.06±1.51
    MRF-MLBP [65] (no ROC)            79.08±0.14
    SVM-Fisher vector faces [30]      87.47±1.49
    Eigen-PEP (fusion) [69]           88.97±1.32
    Hierarchical-PEP (fusion) [70]    91.10±1.47
    MRF-Fusion-CSKDA [66] (no ROC)    95.89±1.94
    TSML-Linear-Sim (this work)       91.90±0.52

Thus we collected the results of methods using a single feature in Table 8. Especially, we also applied another state-of-the-art approach, WCCN [13], on the FV vectors as a comparison. We can see that the proposed TSML-Linear-Sim method achieves the best performance (91.90%) among all the methods using a single feature. Especially, TSML-Linear-Sim significantly surpasses the conventional Support Vector Machines (SVM) method [30] on the FV vectors by 4.43 percentage points (from 87.47% to 91.90%).

TABLE 8
Comparison of TSML-Linear-Sim with other methods using a single face descriptor under the restricted configuration with no outside data on LFW-funneled.

    Method                  Feature               Accuracy
    MRF-MLBP [65]           multi-scale LBP       79.08±0.14
    APEM [68]               SIFT                  81.88±0.94
    APEM [68]               LBP                   81.97±1.90
    Eigen-PEP [69]          PEP                   88.47±0.91
    Hierarchical-PEP [70]   PEP                   90.40±1.35
    SVM [30]                Fisher Vector faces   87.47±1.49
    DDML-Linear-Sim         Fisher Vector faces   91.03±0.61
    WCCN [13]               Fisher Vector faces   91.10±0.45
    TSML-Linear-Sim         Fisher Vector faces   91.90±0.52

5.5 Stacked on Pre-trained Deep Nonlinearity

As we have mentioned, the proposed shallow nonlinearity was constrained due to a lack of proper generalization strategies and more training data. At present, the success of deep learning in speech recognition and visual object recognition shows that deep nonlinearity is able to learn discriminative representations of data [38]. To release the power of nonlinearity, deep learning approaches require large datasets and perform training in a supervised way [35], [36], [37]. However, it is difficult to directly train a deep metric learning system on a large dataset having hundreds of thousands or even millions of data samples [35], [71], because the number of sample pairs would be dramatically raised. Actually, training semi-supervised siamese neural networks is much slower than training supervised neural networks [53]. Recent empirical work showed that training siamese neural networks on carefully chosen triplets instead of data pairs is helpful for fast convergence [72], [73]. Besides, it was also found that even a simple classifier can make good decisions on the features produced by the learned deep models [35], [36], [71].

Fig. 6. ROC curves of TSML-Linear-Sim (red dashed line) and other state-of-the-art methods (Hierarchical-PEP, Eigen-PEP (fusion), SVM-Fisher vector faces, APEM (fusion), V1-like/MKL) on the LFW dataset under the restricted configuration with no outside data.

Therefore, we stack the proposed linear and nonlinear metric learning models on a deep CNN [71] pre-trained on the CASIA-WebFace dataset [74]. There are 493,456 labeled images of 10,575 identities in CASIA-WebFace, and [71] provides two deep models trained on these data. We use the model A to extract features from each face image in the LFW dataset, resulting in a 256-dimensional vector. Then the process of metric learning is similar to that on the Fisher Vectors under the unrestricted training setting. All the TSML and DDML models are tested.

TABLE 9
Mean maxDA scores (± standard error of the mean) of pairwise face verification by stacking the metric learning systems on the pre-trained deep CNN model on the LFW-funneled image dataset.

    Approaches       -Linear      -Nonlinear   -MLP
    Deep CNN         97.93±0.22
    Deep CNN-TSML    98.25±…      …±…          …±0.22
    Deep CNN-DDML    98.18±…      …±…          …±0.26

Table 9 summarizes the results of the deep CNN model and the stacked models. It is not surprising that the deep CNN brings a significant verification improvement to our shallow models. With the discriminative feature representations learned from the CASIA-WebFace images, the deep CNN itself achieves an accuracy of 97.93%. We can see that the linear models, TSML-Linear and DDML-Linear, further improve the verification performance to 98.25% and 98.18%, respectively. This improvement is guaranteed by the identity initialization and early stopping applied to the linear models: the deep CNN results are taken as the initial status for metric learning, and early stopping marks the best record on the validation set. In contrast, the shallow nonlinear metric learning models obtain slightly worse results because they take random initialization and degrade the good deep CNN baseline. A probable reason is that we have restricted the input/output size of the nonlinear models to the size of the linear models; it might be possible to improve the nonlinear models by tuning the size of the layers, trying different initialization methods or adding regularization techniques. However, the simple linear metric learning model is indeed a good and quick option that demands less effort on hyperparameter tuning than the shallow nonlinear ones. Thus we suggest the deep nonlinearity for robust feature learning on large datasets and the shallow linearity for classification [37].

6 CONCLUSION

In this paper, we have evaluated two metric learning methods, TSML and DDML, for pairwise face verification on the LFW dataset and pairwise speaker verification on the NIST i-vector dataset. Under the setting of limited training pairs, we found that learning a linear model on similar pairs only is a simple but effective solution for identity verification. When labeled outside data are available, a pre-trained deep CNN model helps the linear TSML and DDML systems to reach competitive performance on face verification.

We presented several strategies and confirmed their effectiveness in reducing the risk of over-fitting. These strategies include: using more training pairs; using a linear model to keep generalization; learning on similar pairs only for restricted training; separating out a validation set to perform early stopping; and introducing a deep CNN model pre-trained on a large dataset. With these strategies, the nature of learning a good metric makes the TSML and DDML methods effective on the two different pairwise verification tasks.
The defined pairwise verification task is not limited to human identities only; the objects can be documents, audio, images or individuals of any other categories. For any pairwise verification problem with objects that can be represented as numerical vectors, we believe that the proposed methods are applicable and the observed phenomena are repeatable.

ACKNOWLEDGMENTS

Thanks to the China Scholarship Council (CSC) and the Laboratoire d'InfoRmatique en Image et Systèmes d'information (LIRIS) for supporting this work.

REFERENCES

[1] J. Matas, M. Hamouz, K. Jonsson, J. Kittler, Y. Li, C. Kotropoulos, A. Tefas, I. Pitas, T. Tan, H. Yan, et al., "Comparison of face verification results on the XM2VTS database," in Proc. ICPR. IEEE, 2000, vol. 4.
[2] D. A. Reynolds, "Speaker identification and verification using Gaussian mixture speaker models," Speech Communication, vol. 17, no. 1, 1995.
[3] Dapeng Tao, Lianwen Jin, Yongfei Wang, and Xuelong Li, "Person reidentification by minimum classification error-based KISS metric learning," IEEE Transactions on Cybernetics, vol. 45, no. 2, 2015.
[4] G. B. Huang, M. Ramesh, T. Berg, and E. Learned-Miller, "Labeled faces in the wild: A database for studying face recognition in unconstrained environments," Tech. Rep., University of Massachusetts, Amherst, 2007.
[5] Erik Learned-Miller, Gary Huang, Arun RoyChowdhury, Haoxiang Li, and Gang Hua, "Labeled faces in the wild: A survey," 2015.

[6] Alvin F. Martin and Craig S. Greenberg, "The NIST 2010 speaker recognition evaluation," in Eleventh Annual Conference of the International Speech Communication Association, 2010.
[7] C. S. Greenberg, D. Bansé, G. R. Doddington, D. Garcia-Romero, J. J. Godfrey, T. Kinnunen, A. F. Martin, A. McCree, M. Przybocki, and D. A. Reynolds, "The NIST 2014 speaker recognition i-vector machine learning challenge," in Odyssey: The Speaker and Language Recognition Workshop, 2014.
[8] A. Bellet, A. Habrard, and M. Sebban, "A survey on metric learning for feature vectors and structured data," Computing Research Repository, vol. abs/1306.6709, 2013.
[9] J. Bromley, J. W. Bentz, L. Bottou, I. Guyon, Y. LeCun, C. Moore, E. Säckinger, and R. Shah, "Signature verification using a Siamese time delay neural network," International Journal of Pattern Recognition and Artificial Intelligence, vol. 7, no. 04, 1993.
[10] L. Zheng, K. Idrissi, C. Garcia, S. Duffner, and A. Baskurt, "Triangular similarity metric learning for face verification," in Proc. FG, 2015.
[11] J. Hu, J. Lu, and Y.-P. Tan, "Discriminative deep metric learning for face verification in the wild," in Proc. CVPR, 2014.
[12] N. V. Hieu and B. Li, "Cosine similarity metric learning for face verification," in Proc. ACCV. Springer, 2011.
[13] O. Barkan, J. Weill, L. Wolf, and H. Aronowitz, "Fast high dimensional vector multiplication face recognition," in Proc. ICCV, 2013.
[14] S. Chopra, R. Hadsell, and Y. LeCun, "Learning a similarity metric discriminatively, with application to face verification," in Proc. CVPR. IEEE, 2005, vol. 1.
[15] D. Kedem, S. Tyree, F. Sha, G. R. Lanckriet, and K. Q. Weinberger, "Non-linear metric learning," in Advances in Neural Information Processing Systems, 2012.
[16] Oren Barkan and Hagai Aronowitz, "Diffusion maps for PLDA-based speaker verification," in 2013 IEEE International Conference on Acoustics, Speech and Signal Processing. IEEE, 2013.
[17] E. P. Xing, A. Y. Ng, M. I. Jordan, and S. Russell, "Distance metric learning with application to clustering with side-information," Advances in Neural Information Processing Systems.
[18] K. Weinberger, J. Blitzer, and L. Saul, "Distance metric learning for large margin nearest neighbor classification," Advances in Neural Information Processing Systems, vol. 18, pp. 1473-1480, 2006.
[19] A. M. Qamar, E. Gaussier, J. P. Chevallet, and J. H. Lim, "Similarity learning for nearest neighbor classification," in Proc. ICDM. IEEE, 2008.
[20] G. Chechik, V. Sharma, U. Shalit, and S. Bengio, "Large scale online learning of image similarity through ranking," The Journal of Machine Learning Research, vol. 11, 2010.
[21] Jun Yu, Xiaokang Yang, Fei Gao, and Dacheng Tao, "Deep multimodal distance metric learning using click constraints for image ranking," IEEE Transactions on Cybernetics.
[22] Y. A. LeCun, L. Bottou, G. B. Orr, and K.-R. Müller, "Efficient backprop," in Neural Networks: Tricks of the Trade. Springer.
[23] Panagiotis Moutafis, Mengjun Leng, and Ioannis A. Kakadiaris, "An overview and empirical comparison of distance metric learning methods," IEEE Transactions on Cybernetics.
[24] M. A. Turk and A. P. Pentland, "Face recognition using eigenfaces," in Proc. CVPR. IEEE, 1991.
[25] T. Ahonen, A. Hadid, and M. Pietikäinen, "Face recognition with local binary patterns," in Proc. ECCV. Springer, 2004.
[26] Y. Ke and R. Sukthankar, "PCA-SIFT: A more distinctive representation for local image descriptors," in Proc. CVPR. IEEE, 2004, vol. 2, pp. II-506.
[27] J. G. Daugman, "Complete discrete 2-D Gabor transforms by neural networks for image analysis and compression," IEEE Transactions on Acoustics, Speech and Signal Processing, vol. 36, no. 7, 1988.
[28] L. Wolf, T. Hassner, and Y. Taigman, "Descriptor based methods in the wild," in Workshop on Faces in Real-Life Images: Detection, Alignment, and Recognition, 2008.
[29] S. U. Hussain, T. Napoléon, and F. Jurie, "Face recognition using local quantized patterns," in Proc. BMVC, 2012.
[30] K. Simonyan, O. M. Parkhi, A. Vedaldi, and A. Zisserman, "Fisher vector faces in the wild," in Proc. BMVC, 2013, vol. 1, p. 7.
[31] D. Chen, X. Cao, F. Wen, and J. Sun, "Blessing of dimensionality: High-dimensional feature and its efficient compression for face verification," in Proc. CVPR. IEEE, 2013.
[32] Chuan-Xian Ren, Zhen Lei, Dao-Qing Dai, and Stan Z. Li, "Enhanced local gradient order features and discriminant analysis for face recognition," IEEE Transactions on Cybernetics, vol. PP, no. 99, pp. 1-14.
[33] M. Heikkilä, M. Pietikäinen, and C. Schmid, "Description of interest regions with center-symmetric local binary patterns," in Computer Vision, Graphics and Image Processing. Springer.
[34] L. Zhang, R. Chu, S. Xiang, S. Liao, and S. Z. Li, "Face detection based on multi-block LBP representation," in Advances in Biometrics. Springer, 2007.
[35] Y. Taigman, M. Yang, M. A. Ranzato, and L. Wolf, "DeepFace: Closing the gap to human-level performance in face verification," in Proc. CVPR. IEEE, 2014.
[36] Y. Sun, Y. Chen, X. Wang, and X. Tang, "Deep learning face representation by joint identification-verification," in Advances in Neural Information Processing Systems, 2014.
[37] K. Chatfield, K. Simonyan, A. Vedaldi, and A. Zisserman, "Return of the devil in the details: Delving deep into convolutional nets," in Proc. BMVC, 2014.
[38] Yann A. LeCun, Yoshua Bengio, and Geoffrey E. Hinton, "Deep learning," Nature, vol. 521, pp. 436-444, 2015.
[39] D. A. Reynolds, T. F. Quatieri, and R. B. Dunn, "Speaker verification using adapted Gaussian mixture models," Digital Signal Processing, vol. 10, no. 1, 2000.
[40] P. Kenny, P. Ouellet, N. Dehak, V. Gupta, and P. Dumouchel, "A study of interspeaker variability in speaker verification," IEEE Transactions on Audio, Speech, and Language Processing, vol. 16, no. 5, 2008.
[41] N. Dehak, P. Kenny, R. Dehak, P. Dumouchel, and P. Ouellet, "Front-end factor analysis for speaker verification," IEEE Transactions on Audio, Speech, and Language Processing, vol. 19, no. 4, 2011.
[42] Daniel Garcia-Romero and Carol Y. Espy-Wilson, "Analysis of i-vector length normalization in speaker recognition systems," in Interspeech, 2011.
[43] Yun Lei, Nicolas Scheffer, Luciana Ferrer, and Mitchell McLaren, "A novel scheme for speaker recognition using a phonetically-aware deep neural network," in 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2014.
[44] A. Larcher, K. A. Lee, B. Ma, and H. Li, "Text-dependent speaker verification: Classifiers, databases and RSR2015," Speech Communication, vol. 60, 2014.
[45] Simon J. D. Prince and James H. Elder, "Probabilistic linear discriminant analysis for inferences about identity," in 2007 IEEE 11th International Conference on Computer Vision. IEEE, 2007.
[46] Sibel Yaman and Jason Pelecanos, "Using polynomial kernel support vector machines for speaker verification," IEEE Signal Processing Letters, vol. 20, no. 9, 2013.
[47] Patrick Kenny, "Bayesian speaker verification with heavy-tailed priors," in Odyssey, 2010, p. 14.
[48] Lukáš Burget, Oldřich Plchot, Sandro Cumani, Ondřej Glembek, Pavel Matějka, and Niko Brümmer, "Discriminatively trained probabilistic linear discriminant analysis for speaker verification," in 2011 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2011.
[49] Pavel Matějka, Ondřej Glembek, Fabio Castaldo, Md Jahangir Alam, Oldřich Plchot, Patrick Kenny, Lukáš Burget, and Jan Černocký, "Full-covariance UBM and heavy-tailed PLDA in i-vector speaker verification," in 2011 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2011.
[50] Sergey Novoselov, Timur Pekhovsky, Oleg Kudashev, Valentin Mendelev, and Alexey Prudnikov, "Non-linear PLDA for i-vector speaker verification," ISCA Interspeech, 2015.
[51] Sandro Cumani, Niko Brümmer, Lukáš Burget, and Pietro Laface, "Fast discriminative speaker verification in the i-vector space," in 2011 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2011.
[52] Sandro Cumani and Pietro Laface, "Large-scale training of pairwise support vector machines for speaker recognition," IEEE/ACM Transactions on Audio, Speech and Language Processing (TASLP), vol. 22, no. 11, 2014.

[53] Lilei Zheng, "Triangular Similarity Metric Learning: a Siamese Architecture Approach," Ph.D. thesis, University of Lyon, 2016.
[54] Alexis Mignon and Frédéric Jurie, "PCCA: A new approach for distance learning from sparse pairwise constraints," in Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on. IEEE, 2012.
[55] David E. Rumelhart, Geoffrey E. Hinton, and Ronald J. Williams, "Learning internal representations by error propagation," Tech. Rep., DTIC Document, 1985.
[56] D. C. Liu and J. Nocedal, "On the limited memory BFGS method for large scale optimization," Mathematical Programming, vol. 45, no. 1-3, 1989.
[57] Lutz Prechelt, "Early stopping - but when?," in Neural Networks: Tricks of the Trade. Springer, 1998.
[58] Q. Cao, Y. Ying, and P. Li, "Similarity metric learning for face recognition," in Proc. ICCV, 2013.
[59] Xavier Glorot and Yoshua Bengio, "Understanding the difficulty of training deep feedforward neural networks," in International Conference on Artificial Intelligence and Statistics, 2010.
[60] Charles J. Stone, "Consistent nonparametric regression," The Annals of Statistics, 1977.
[61] Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov, "Dropout: A simple way to prevent neural networks from overfitting," The Journal of Machine Learning Research, vol. 15, no. 1, 2014.
[62] Lilei Zheng, Khalid Idrissi, Christophe Garcia, Stefan Duffner, and Atilla Baskurt, "Logistic similarity metric learning for face verification," in Acoustics, Speech and Signal Processing, 2015 IEEE International Conference on. IEEE, 2015.
[63] Junlin Hu, Jiwen Lu, and Yap-Peng Tan, "Deep transfer metric learning," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015.
[64] G. B. Huang and E. Learned-Miller, "Labeled faces in the wild: Updates and new reporting procedures," 2014.
[65] S. R. Arashloo and J. Kittler, "Efficient processing of MRFs for unconstrained-pose face recognition," in Biometrics: Theory, Applications and Systems (BTAS), 2013 IEEE Sixth International Conference on. IEEE, 2013.
[66] Shervin Rahimzadeh Arashloo and Josef Kittler, "Class-specific kernel fusion of multiple descriptors for face verification using multiscale binarised statistical image features," IEEE Transactions on Information Forensics and Security, vol. 9, no. 12, 2014.
[67] N. Pinto, J. J. DiCarlo, and D. D. Cox, "How far can you get with a modern face recognition test set using only simple features?," in Proc. CVPR. IEEE, 2009.
[68] H. Li, G. Hua, Z. Lin, J. Brandt, and J. Yang, "Probabilistic elastic matching for pose variant face verification," in Proc. CVPR. IEEE, 2013.
[69] Haoxiang Li, Gang Hua, Xiaohui Shen, Zhe Lin, and Jonathan Brandt, "Eigen-PEP for video face recognition," in Computer Vision - ACCV 2014. Springer.
[70] H. Li and G. Hua, "Hierarchical-PEP model for real-world face recognition," in Proc. CVPR. IEEE, 2015.
[71] Xiang Wu, Ran He, and Zhenan Sun, "A lightened CNN for deep face representation," arXiv preprint, 2015.
[72] Florian Schroff, Dmitry Kalenichenko, and James Philbin, "FaceNet: A unified embedding for face recognition and clustering," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015.
[73] Omkar M. Parkhi, Andrea Vedaldi, and Andrew Zisserman, "Deep face recognition," in British Machine Vision Conference, 2015, vol. 1, p. 6.
[74] Dong Yi, Zhen Lei, Shengcai Liao, and Stan Z. Li, "Learning face representation from scratch," arXiv preprint, 2014.

Lilei Zheng received a Bachelor's degree in 2009 and a Master's degree in 2012, both in computer science and technology, from Northwestern Polytechnical University, Xi'an, China. He is currently a Ph.D. student in the school of information and mathematics, University of Lyon. His current research interests include machine learning and computer vision.
Stefan Duffner received a Bachelor's degree in Computer Science from the University of Applied Sciences Konstanz, Germany, in 2002, and a Master's degree in Applied Computer Science from the University of Freiburg, Germany. He performed his dissertation research at Orange Labs in Rennes, France, on face image analysis with statistical machine learning methods, and in 2008 he obtained a Ph.D. degree in Computer Science from the University of Freiburg. He then worked for four years as a post-doctoral researcher at the Idiap Research Institute in Martigny, Switzerland, in the field of computer vision, mainly on face tracking. He is currently an associate professor in the IMAGINE team of the LIRIS research lab at the National Institute of Applied Sciences (INSA) of Lyon, France.

Khalid Idrissi received his B.S. and M.S. degrees in electrical engineering from INSA-Lyon, France, in 1984. From 1985 to 1991, he worked as an engineer and then as a project leader in industry. He received the Agrégation in electrical engineering in 1994 and served as professeur agrégé until 2003, first at the University of French Guiana and then at INSA-Lyon. He received his Ph.D. degree in 2003, followed by the HDR. He is currently working as an Associate Professor at the Telecommunications Department of INSA-Lyon. He works mainly on image analysis and segmentation for image compression, image retrieval, shape detection and identification, and facial analysis.

Christophe Garcia received his Ph.D. degree in computer vision from Université Claude Bernard Lyon I, France, in 1994, and his Habilitation à Diriger des Recherches (HDR) from INSA Lyon / University of Lyon I. Since 2010, he has been a Full Professor at INSA de Lyon and the deputy Director of the LIRIS laboratory. He holds 17 industrial patents and has published more than 140 articles in international conferences and journals. He has served on more than 30 program committees of international conferences and is an active reviewer for 15 international journals, where he has co-organized several special issues. His current technical and research activities are in the areas of deep learning, neural networks, pattern recognition, and computer vision.

Atilla Baskurt received the B.S. degree in 1984, the M.S. in 1985, and the Ph.D. in 1989, all in electrical engineering from INSA-Lyon, France. From 1989 to 1998, he was Maître de Conférences at INSA-Lyon. Since 1998, he has been a Professor in Electrical and Computer Engineering, first at the University Claude Bernard of Lyon and now at INSA-Lyon. From 2003 to 2008, he was the Director of the Telecommunications Department of INSA-Lyon. From September 2006 to December 2008, he was Chargé de mission on Information and Communication Technologies (ICT) at the French Research Ministry (MESR). Currently, he is Director of the LIRIS Research Lab. He leads his research activities in two teams of LIRIS: the IMAGINE team and the M2DisCo team. These teams work on image and 3D data analysis and segmentation for image compression, image retrieval, and shape detection and identification. His technical research and experience include digital image processing, 2D-3D data analysis for segmentation, compression and retrieval, and video content analysis for action recognition and object tracking.
