Robust Kernel Representation with Statistical Local Features for Face Recognition


Meng Yang, Student Member, IEEE, Lei Zhang¹, Member, IEEE, Simon C. K. Shiu, Member, IEEE, and David Zhang, Fellow, IEEE
Dept. of Computing, The Hong Kong Polytechnic University, Hong Kong, China

Abstract. Factors such as misalignment, pose variation and occlusion make robust face recognition a difficult problem. It is known that statistical features such as LBP are effective for local feature extraction, while the recently proposed sparse or collaborative representation based classification has shown interesting results in robust face recognition. In this paper, we propose a novel robust kernel representation model with statistical local features (SLF) for robust face recognition. First, multi-partition max pooling is used to enhance the SLF's invariance to image registration error. Then, a kernel based representation model is proposed to fully exploit the discrimination information embedded in the SLF, and robust regression is adopted to effectively handle the occlusion in face images. Extensive experiments are conducted on benchmark face databases, including Extended Yale B, AR, Multi-PIE, FERET, FRGC and LFW, which have various variations of lighting, expression, pose and occlusion, demonstrating the promising performance of the proposed method.

Keywords: robust kernel representation, sparse representation, collaborative representation, face recognition

¹ Corresponding author. Email: cslzhang@comp.polyu.edu.hk. This work is supported by the HK RGC PPR grant (PolyU5019-PPR-11).

1. Introduction

Automatic face recognition (FR) is one of the most active and visible research topics in computer vision, machine learning and biometrics [11] due to its wide range of applications such as access control, video surveillance, and the like. After many years of investigation, FR is still very challenging due to the low quality of face images [1] and the rich variations of facial images from the same or different subjects, e.g., lighting, expression, occlusion, misalignment, etc. [11]. In order for different communities to benchmark and verify their FR methods, many large scale face databases, such as FERET [-3], FRGC [35], LFW [4][3] and PubFig [5], have been established and used as evaluation platforms.

Although facial images have a high dimensionality, their discriminative characteristics usually lie in, or can be extracted in, lower dimensional subspaces or sub-manifolds. Therefore, subspace and manifold learning methods have been dominantly used in appearance based FR [-9][4]. Classical methods such as the Eigenface and Fisherface [2-3][4] mainly consider the global scatter of training samples and may fail to reveal the essential data structures nonlinearly embedded in the high dimensional space. Manifold learning methods were proposed to overcome this limitation [5-6], and representative manifold learning methods include locality preserving projection (LPP) [7], local discriminant embedding (LDE) [8], unsupervised discriminant projection (UDP) [9], etc. In addition, kernel based subspace learning was also proposed for FR. For instance, Yang et al. [60] presented a kernel Fisher discriminant framework for feature extraction and recognition; Zafeiriou et al. [4] proposed a robust approach to discriminant kernel-based feature extraction for face recognition and verification. The subspace or manifold learning methods only consider the holistic features of face images, which are usually very sensitive to the variations of misalignment, pose, and occlusion.
Recent research has shown that local feature based methods [16-18][43-48][6] are very promising in object recognition, texture classification and uncontrolled FR. Gabor filters, which can effectively extract local directional features on multiple scales, have been successfully used in FR [17-18]. Compared to holistic feature based approaches such as Eigenface [2] and Fisherface [3], Gabor filtering is less sensitive to image variations (e.g., illumination, expression). Another type of local feature widely used in FR is the statistical local feature (SLF), such as the histogram of local binary patterns (LBP) [43]. The main idea is that a face image can be seen as a composition of micro-patterns [6]. By partitioning the face image into several blocks, the statistical feature (e.g., histogram of LBP) of these blocks is extracted, and finally the description of the image is formed by concatenating the extracted features of all blocks. Zhang et al. [45-46] proposed to use the Gabor magnitude or phase map instead of the intensity map to generate LBP features. New coding technologies on Gabor features have also been proposed. In [47], Zhang et al. extracted and encoded the global and local variations of the real and imaginary parts in a multi-scale Gabor representation. Xie et al. [48] proposed local Gabor XOR patterns (LGXP), which utilize XOR (exclusive or) to encode the local variation of Gabor phase, to fuse Gabor magnitude and phase information. These local pattern based statistical features have shown very promising results on large scale face databases, such as FERET [-3] and FRGC [35].

Apart from the employed features, the employed classifier is also important to the performance of FR. Nearest Neighbor (NN), SVM and Hidden Markov Models are the widely used classifiers in face recognition [43][45-48][59][7]. Moreover, in order to better exploit the prior knowledge that face images from the same subject construct a subspace, nearest subspace (NS) classifiers [19][36-38][51][58] were also developed, which are usually superior to the popular NN classifier. Recently an interesting classifier, namely sparse representation based classification (SRC), was proposed by Wright et al. [10] for robust FR. In Wright et al.'s work, a testing image is sparsely coded on the whole training set by ℓ1-norm minimization, and then classified to the class that yields the least coding residual. By assuming that the outlier pixels in the face image are sparse and by using an identity matrix to code the outliers, SRC shows good robustness to face occlusion and corruption. SRC has been attracting much interest and has been widely studied in the computer vision research community [8-31]. Very recently, Zhang et al. [33] indicated that the ℓ1-norm sparsity may not be the key to the success of SRC, and they proposed the collaborative representation based classification (CRC), which uses the ℓ2-norm instead of the time consuming ℓ1-norm to regularize the coding coefficients, for FR, and achieved results similar to SRC but with much lower time complexity.

Although statistical local features (SLF) and SRC/CRC have shown powerful abilities in the fields of feature extraction and signal classification, few works have been proposed to integrate them for better performance. Many works either use NN/NS/SVM as the classifier with SLF as inputs (e.g., NN in [43][45-48]) or use SRC/CRC to do classification with holistic features [10][31][33]. Although the methods in [1-13] aim to combine LBP and sparse representation, no effective representation model was proposed to deal with variations such as occlusion and misalignment.
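As a concrete illustration of the CRC scheme just described (ℓ2-regularized coding over all classes, followed by class-wise residual comparison), here is a minimal numerical sketch. The function name `crc_classify`, the regularization weight `lam` and the toy data are our own illustrative choices, not the authors' implementation:

```python
import numpy as np

def crc_classify(y, X, labels, lam=0.01):
    """Collaborative representation based classification (CRC), sketched.

    y: (m,) query vector; X: (m, n) matrix of training samples as columns,
    with per-column class labels in `labels`; lam: l2 regularization weight.
    Returns the label of the class with the smallest coding residual.
    """
    n = X.shape[1]
    # l2-regularized coding: alpha = (X^T X + lam*I)^{-1} X^T y
    alpha = np.linalg.solve(X.T @ X + lam * np.eye(n), X.T @ y)
    best, best_label = np.inf, None
    for c in np.unique(labels):
        mask = labels == c
        # residual of reconstructing y from class c's samples only
        resid = np.linalg.norm(y - X[:, mask] @ alpha[mask])
        if resid < best:
            best, best_label = resid, c
    return best_label
```

The key point of CRC is that the coding step has a closed form, so classification requires no iterative ℓ1 solver.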

In this paper, we propose a novel SLF based robust kernel representation (RKR) model for FR. First, we propose a multi-partition max pooling technology to enhance the invariance of local features to image registration error (e.g., misalignment). Second, we propose a robust kernel representation model, which not only uses kernel representation to fully exploit the discrimination information embedded in the local features, but also adopts a robust regression function as the measure to effectively handle the occlusion in facial images. Compared to previous classification methods, e.g., NN with SLF features and SRC with holistic features, the proposed SLF based RKR model shows much stronger robustness to various face image variations (e.g., illumination, expression, occlusion and misalignment), as demonstrated in our extensive experiments conducted on benchmark face databases.

The rest of the paper is organized as follows. Section 2 briefly reviews some related work. Section 3 presents the proposed SLF based robust kernel representation algorithm. Section 4 presents the experimental results. Section 5 summarizes the paper.

2. Related Work

2.1. Statistical Local Feature

The extraction of statistical local features (SLF) has three steps: feature map generation, pattern map coding, and histogram computing. The commonly used feature maps include the original intensity map [43] and Gabor feature maps (e.g., magnitude [45], phase [46]). LBP [43][45-46], the local XOR (exclusive or) operator [48] or others [47][49] can be adopted for pattern map coding. Finally, the encoded pattern map is partitioned into non-overlapping blocks, in which the local histogram feature is computed. The descriptor of the input face image is the concatenation of the histograms computed in all blocks.

2.2. Sparse Representation or Collaborative Representation based Classifier

Different from the Nearest Neighbor (NN) and Nearest Subspace (NS) classifiers [19][36-38][51][58], which forbid representing the query sample across classes, the recently developed ℓ1-regularized sparse representation [10] and ℓ2-regularized collaborative representation [33] represent the query image by the training samples from all classes, which can effectively overcome the small-sample-size or overfitting problem of NN and NS. Let X_i = [s_{i,1}, s_{i,2}, ..., s_{i,n_i}] ∈ R^{m×n_i} denote the set of training samples of the i-th object class, where s_{i,j}, j = 1, 2, ..., n_i, is an m-dimensional vector stretched from the j-th sample of the i-th class.² Let y ∈ R^m be a query sample to be classified. The representation model of the sparse representation based classifier (SRC) or collaborative representation based classifier (CRC) can be written as

    α̂ = arg min_α { ||y − Xα||₂² + λ||α||_{ℓp} }    (1)

where X = [X_1, X_2, ..., X_c] and c is the number of classes; ||·||_{ℓp} is the ℓp-norm, and p = 1 for SRC in [10], while p = 2 for CRC in [33]. The classification of y is done by

    identity(y) = arg min_i { ||y − X_i δ_i(α̂)||₂ }    (2)

where δ_i(·): R^n → R^{n_i} is the characteristic function that selects from α̂ the coefficients associated with the i-th class [10]. It is shown in [33] that CRC has very competitive accuracy with SRC in FR without occlusion but with much faster speed. In the case of occlusion or corruption, Robust-SRC [10] classifies the occluded face image y by

    identity(y) = arg min_i { ||y − X_i δ_i(α̂) − X_e α̂_e||₂ }    (3)

where

    [α̂; α̂_e] = arg min_{α, α_e} { ||y − Xα − X_e α_e||₂² + λ||[α; α_e]||₁ }    (4)

and X_e is an occlusion dictionary to code the outliers. X_e is simply set as the identity matrix I in [10].

2.3. Robust Sparse Coding

The representation model of Robust-SRC [10] is equivalent to

    min_α ||y − Xα||₁  s.t.  ||α||₁ ≤ σ    (5)

which is actually a Maximum Likelihood Estimation (MLE) of α when the representation residual y − Xα follows a Laplacian distribution. However, for the real occlusions and disguises in practical facial images, the representation residual rarely follows a Laplacian model, making Robust-SRC less effective in handling occlusions in FR. Yang et al. [31] proposed a robust sparse coding model to achieve robust face recognition with outliers.

² More generally, s_{i,j}, j = 1, 2, ..., n_i, could be the feature vector extracted from the j-th sample of the i-th class.

Instead of using the ℓ1-norm for the data fidelity term in the coding model, Yang et al. formulated the signal representation as an MLE-like estimator:

    min_α Σ_{i=1}^m ρ_θ(y_i − r_i α)  s.t.  ||α||₁ ≤ σ    (6)

where r_i is the i-th row vector of X and y_i is the i-th element of y. This robust sparse coding can be efficiently solved by an iteratively reweighted sparse coding algorithm. In each iteration, the original robust sparse coding model becomes

    min_α ||W^{1/2}(y − Xα)||₂²  s.t.  ||α||₁ ≤ σ    (7)

where W is a diagonal matrix with W_{i,i} = ω(e_i) = 1/(1 + exp(μe_i² − μδ)), e_i = y_i − r_i α, and μ and δ are two automatically updated scalar parameters in the weight function [31]. After the representation coefficient α̂ is obtained, the weighted representation residual is used for classification, i.e., identity(y) = arg min_i ||W^{1/2}(y − X_i δ_i(α̂))||₂.

3. Statistical Local Feature based Robust Kernel Representation

3.1. Multi-partition max pooling (MPMP)

Facial image misalignment caused by factors such as scaling, translation and rotation can cause much trouble in less-controlled face recognition systems. Even when an advanced face detector (e.g., the Viola and Jones face detector [53]) is used to crop and align the query face image, there are still registration errors of several pixels, which will much deteriorate the FR performance [54]. Although there are some preprocessing methods [55][5] to align the query face image to the well cropped training images, it is more interesting to improve the robustness of the feature extraction step to face misalignment. In this section, we propose a simple but very effective pooling technique to this end.

Pooling techniques are widely used in object and image classification to extract invariant features. In general, there are two categories of pooling methods: sum pooling [50][57] and max pooling [39][56-57]. Denote by f_i the i-th feature vector in a pool, and by {f}_j the j-th element of the feature vector f. In the case of sum pooling, the output feature vector f_s is computed by {f_s}_j = {f_1}_j + {f_2}_j + ... + {f_n}_j, while in the case of max pooling the output feature f_m is {f_m}_j = max{{f_1}_j, {f_2}_j, ..., {f_n}_j}. A simple 1-D example with f_1 ∈ [0,1] and f_2 ∈ [0,1] is shown in Fig. 1. It can be seen that the domain of (f_1, f_2) with f_m = 1 is larger than the domain of (f_1, f_2) with f_s = 1, which indicates that max pooling is more robust to the changes of f_1 or f_2. The experiments in [39][56-57] also show that max pooling is more robust than sum pooling to image spatial variations. In addition, spatial discrimination information can be introduced by using a spatial pyramid, which divides the image into multi-scale regions (e.g., 1×1, 2×2, and 4×4 for a total of 21 regions in [39][56]).

Figure 1: A simple 1-D example illustrating sum pooling and max pooling.

In this paper we propose a multi-partition max pooling (MPMP) scheme for the statistical features of local patterns. The main differences between the proposed MPMP and previous max pooling methods lie in the image partition and the feature generation. Different from the partitions of the spatial pyramid, such as 1×1, 2×2, and 4×4, we adopt a more flexible partition. As shown in the first row of Fig. 2, for example, the partition of the pattern map (e.g., LBP) can be made as 2×2, 3×3 and 4×4, respectively, with 29 blocks of three different sizes in total. This kind of partition can flexibly set the number of blocks in each scale and is expected to capture more spatial discrimination information than the spatial pyramid.

In the proposed MPMP based statistical local feature (SLF) extraction, we adopt an (S+1)-level block partition, where s = 0, 1, ..., S. That is to say, in the s-th level, the whole image is divided into P_s × Q_s blocks, each of which is further partitioned into p_s × q_s sub-blocks. The pooling operation is performed on a series of local features generated in each partitioned sub-block. Different from the feature generation (e.g., the coding coefficients of local patch descriptors, such as SIFT or raw intensity) in previous works on pooling [39][50][56], here we extract a sequence of statistical local features (SLF), which are simpler and widely used in face recognition. As shown in the second row of Fig. 2, in each sub-block we first create a sequence of sliding boxes (e.g., the red box shown in Fig. 2), and then compute the histogram of each box's local feature (e.g., LBP). Here the size of the box is smaller than the block, and usually the height and width of the box are set as ratio_s (ratio_s < 1) times those of the sub-block in the s-th scale partition. In this paper, MPMP is defined with the following setting: p_s = 2 and q_s = 2 for partition scales s = 0 and 1; p_s = 1 and q_s = 1 for s > 1; ratio_s = 1 for s = 0; and ratio_s = 0.5 for the other values of s.

Take the feature generation in one sub-block as an example. Denote by f_i the feature vector (e.g., the histogram feature) extracted from the i-th sliding box, and suppose that there are n feature vectors, f_1, f_2, ..., f_n, extracted from all possible sliding boxes in this sub-block. Then the final output feature vector, denoted by f, after max pooling is

    {f}_j = max{{f_1}_j, {f_2}_j, ..., {f_n}_j}    (8)

Figure 2: Illustration of the proposed multi-partition max pooling. First row: multi-size-block partition on the pattern map (e.g., LBP) of the original image. Second row: the red box slides in the blue sub-block, the feature f_i is extracted from the i-th box, and {f_1, f_2, ..., f_n} are max pooled into f.

Suppose that the image is partitioned into B blocks in total. In each block, after extracting the MPMP based SLF of every sub-block, we concatenate the SLFs of all sub-blocks as the output feature vector. Denote by y_i the output feature vector of the i-th block. Then the concatenation of all feature vectors extracted from all blocks, i.e., y = [y_1; y_2; ...; y_B], can be taken as the descriptor of the image. The proposed MPMP based SLF not only introduces more spatial information to SLF due to its use of multi-partition, but also enhances the robustness of SLF to image misalignment due to its use of max pooling.
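To make the LBP-plus-max-pooling pipeline of Section 3.1 concrete, the following self-contained sketch computes a basic 8-neighbor LBP map and then max pools the histograms of sliding boxes inside one sub-block, as in Eq. (8). The function names, the `box_h`/`box_w`/`step` parameters and the 16-bin quantization are our own simplified choices, not the paper's exact configuration:

```python
import numpy as np

def lbp_map(img):
    """Basic 8-neighbor LBP code for each interior pixel of a 2-D array."""
    c = img[1:-1, 1:-1]
    shifts = [(-1,-1), (-1,0), (-1,1), (0,1), (1,1), (1,0), (1,-1), (0,-1)]
    code = np.zeros(c.shape, dtype=int)
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1+dy:img.shape[0]-1+dy, 1+dx:img.shape[1]-1+dx]
        # set one bit per neighbor that is >= the center pixel
        code |= (nb >= c).astype(int) << bit
    return code

def mpmp_feature(pattern, box_h, box_w, step=1, bins=16):
    """Max pooling of sliding-box LBP histograms inside one sub-block (Eq. (8))."""
    hists = []
    H, W = pattern.shape
    for top in range(0, H - box_h + 1, step):
        for left in range(0, W - box_w + 1, step):
            box = pattern[top:top+box_h, left:left+box_w]
            h, _ = np.histogram(box, bins=bins, range=(0, 256))
            hists.append(h)
    # element-wise maximum over all sliding-box histograms
    return np.max(np.stack(hists), axis=0)
```

In the full method this pooled histogram would be computed per sub-block and concatenated over all blocks to form the descriptor y.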

3.2. Robust kernel representation

How to measure the similarity of two features is an important issue in pattern classification. The commonly used classifiers, such as the linear SVM, NN and NS classifiers [19][36-38][51][58], as well as the SRC and CRC classifiers [10][33], often adopt the ℓ2-norm to measure the distance (i.e., the Euclidean distance). Apart from ℓ2-norm based measurement, kernel methods have become increasingly popular for pattern classification, especially face recognition [4][60]. The kernel trick maps non-linearly separable features into a high dimensional feature space, in which the features of different classes can be more easily separated by linear classifiers. From the view of kernel representation, the ℓ2-norm measurement, which can be regarded as a linear kernel, is effective for linearly separable problems. For SLF, more specifically the local histogram feature, it has been shown that histogram intersection and Chi-square distances are more powerful than the ℓ2-norm distance in classification [6][43-48]. Therefore, more discriminant information embedded in SLF can be exploited if the histogram intersection kernel [34] or Chi-square kernel is adopted in the ℓ2-norm distance based classifiers such as SRC and CRC. However, directly applying these kernels to SLF based representation may not be robust to facial occlusions. In this section, we propose a new model, namely robust kernel representation, to improve the robustness of SLF based face representation and classification.

Suppose that there exists a kernel function κ(ν_j, ν_k) = ⟨φ(ν_j), φ(ν_k)⟩, where ⟨·,·⟩ is the inner product operator and φ: R^d → R^h is a feature mapping function, which maps the feature vectors ν_j and ν_k to a higher dimensional feature space. For a matrix Z = [z_1, z_2, ..., z_q] ∈ R^{p×q}, we define K_ZZ as a q×q matrix with {K_ZZ}_{j,k} = κ(z_j, z_k) and k_Zν as a q×1 vector with {k_Zν}_j = κ(z_j, ν). Denoting φ(Z) = [φ(z_1), φ(z_2), ..., φ(z_q)], we have

    K_ZZ = φ(Z)^T φ(Z);   k_Zν = φ(Z)^T φ(ν)    (9)

After the MPMP based SLF extraction on the query image, B blocks of multiple partitions are obtained, and B sub-feature vectors, denoted by y_1, y_2, ..., y_B, are extracted. Similarly, for each of the training samples we can extract the sub-feature vectors, and we denote by A_i the matrix formed by all the sub-feature vectors of the i-th block from all training samples. Taking the i-th block as an example, the kernel representation of y_i over the matrix A_i can be formulated as

    min_{α_i} ||φ(y_i) − φ(A_i)α_i||₂²  s.t.  ||α_i||_{ℓp} ≤ σ    (10)

where α_i is the coding coefficient vector in the high dimensional feature space mapped by the kernel function φ. If we enforce α_i = α_j for different blocks i ≠ j, i.e., we assume that the different blocks y_i extracted from the same test sample have the same representation over their associated matrices A_i, then the kernel representation of the query image combining all the block features can be written as

    min_α ||[φ(y_1); φ(y_2); ...; φ(y_B)] − [φ(A_1); φ(A_2); ...; φ(A_B)]α||₂²  s.t.  ||α||_{ℓp} ≤ σ    (11)

where α is the coding coefficient vector of the query sample. The above model seeks the regularized representation of a mapped feature under the mapped basis in the high dimensional space.

In the kernel representation model of Eq. (11), the ℓ2-norm is used to measure the representation residual. Such a kernel representation is effective when there are no outliers in the query image. However, in FR facial occlusions and facial disguises (e.g., sunglasses and scarf) often appear in the query face image. In such cases, the blocks in which outliers appear will have big representation residuals, reducing the role of the clean blocks in the final classification. In short, the representation model in Eq. (11) is very sensitive to outliers [31].

In order to make the kernel representation robust to block occlusion and disguise, we propose to adopt a robust fidelity term in the modeling. Denote by e = [e_1; e_2; ...; e_B] the representation residual vector, where e_i is the kernel representation residual of the i-th block, i.e., e_i = ||φ(y_i) − φ(A_i)α||₂. We assume that e_i is independent from e_j for i ≠ j since they represent the representation residuals of different blocks. The proposed robust kernel representation can then be formulated as

    min_α ρ(e)  s.t.  ||α||_{ℓp} ≤ σ    (12)

where ρ(e) = Σ_{i=1}^B ρ(e_i) and the cost function ρ(·) is expected to be insensitive to the outliers in the query sample. Usually, we require that ρ(0) is the global minimum of ρ(x) and ρ(x_1) > ρ(x_2) if |x_1| > |x_2|. Without loss of generality, we let ρ(0) = 0.
Obviously, if we define the cost as ρ(e_i) = (e_i)² (i.e., ρ(e) = ||e||₂²), the robust kernel representation in Eq. (12) reduces to the normal kernel representation in Eq. (11). However, as shown in Fig. 3, this simple setting of ρ(x) makes the representation very sensitive to outliers because the costs (i.e., ρ(e_i)) of the representation residuals corresponding to outliers are often very big. We can also set ρ(e_i) = |e_i| (i.e., ρ(e) = ||e||₁). As can be seen in Fig. 3, ρ(e_i) = |e_i| is much less sensitive to outliers than ρ(e_i) = (e_i)², since the absolute value of an outlier's representation residual is less significant than its square. However, with ρ(e_i) = |e_i| Eq. (12) is difficult to solve because |e_i| is not differentiable, while |e_i| is not bounded with e_i, making ρ(e_i) not robust enough to large outliers. Intuitively, if we can find a function ρ(e_i) such as the blue curve in Fig. 3, which is differentiable and bounded when |e_i| is big, then a good instantiation of the robust kernel representation in Eq. (12) can be implemented.

Figure 3: Three typical settings of the cost function ρ(e_i): ρ(e_i) = (e_i)², ρ(e_i) = |e_i|, and the desired differentiable and bounded cost function.

3.3. Solution of the robust kernel representation

After a Taylor expansion of ρ(e) in the neighborhood of e_0, an approximation of ρ(e) can be written as

    ρ̃(e) = (1/2) ||W^{1/2} e||₂² + b_{e_0}    (13)

where b_{e_0} is a scalar constant determined by e_0, and W is a diagonal matrix whose i-th diagonal element is W_{i,i} = ω(e_{0,i}) = ρ'(e_{0,i})/e_{0,i}, with ρ' the derivative of ρ and e_{0,i} the i-th element of e_0. According to the property of ρ (i.e., ρ(x_1) > ρ(x_2) if |x_1| > |x_2|), we can see that W_{i,i} is a positive scalar. Clearly, ω(·) can be viewed as a weight function applied to e_i. A good weight function should be robust to outliers, i.e., ω(e_i) should have a big value when |e_i| is small (e.g., blocks without outliers) and a small value when |e_i| is big (e.g., blocks with outliers). The widely used logistic function can be chosen as the weight function:

    ω(e_i) = 1 / (1 + exp(μe_i² − μδ))    (14)

The above weight function effectively assigns low weights to outliers with big representation residuals and high weights to inliers with small representation residuals (here the weight value is normalized to the range [0, 1]). It should be noted that the weight values of each testing sample are estimated online, and there is no training phase for them. The cost function ρ corresponding to the weight function in Eq. (14) is differentiable and bounded, like the blue curve shown in Fig. 3. With the above development, the original robust kernel representation in Eq. (12) can be approximated by

    min_α ||W^{1/2} e||₂²  s.t.  ||α||_{ℓp} ≤ σ    (15)

After some derivation, Eq. (15) can be rewritten as

    min_α Σ_{i=1}^B ω_i ||φ(y_i) − φ(A_i)α||₂²  s.t.  ||α||_{ℓp} ≤ σ    (16)

where ω_i is computed by Eq. (14) with e_i = ||φ(y_i) − φ(A_i)α_0||₂, and α_0 is a known coding coefficient vector. Here μ and δ are scalar parameters, which can be set as constant values or automatically updated. μδ is usually set as 8 to make the weight close to 1 when e_i = 0, and δ is set as the ⌊τB⌋-th largest element of the set {e_i², i = 1, 2, ..., B}, where ⌊τB⌋ outputs the largest integer smaller than τB, and τ is discussed in Section 4.1. With the kernel matrix K_ZZ and kernel vector k_Zν defined in Eq. (9), Eq. (16) can be rewritten as

    α̂ = arg min_α { Σ_{i=1}^B ω_i κ(y_i, y_i) + α^T (Σ_{i=1}^B ω_i K_{A_iA_i}) α − 2α^T (Σ_{i=1}^B ω_i k_{A_iy_i}) }  s.t.  ||α||_{ℓp} ≤ σ    (17)

where y = [y_1; y_2; ...; y_B]. From Eq. (17) we can see that the weighted-sum kernel terms, including Σ_{i=1}^B ω_i K_{A_iA_i} and Σ_{i=1}^B ω_i k_{A_iy_i}, can exploit the discrimination information in the mapped higher dimensional feature space; at the same time, the weights ω_i can effectively remove the outliers' effect on computing the coding vector. The coding vector α is regularized by the ℓp-norm. In this paper, we discuss two important cases: p = 1 for sparse regularization and p = 2 for non-sparse regularization. When p = 1, ℓ1-norm minimization methods such as the efficient feature-sign search algorithm [40] can be used to solve the sparse coding problem of Eq. (17). When p = 2, a closed-form solution of Eq. (17) can be derived as

    α̂ = (Σ_{i=1}^B ω_i K_{A_iA_i} + λI)^{-1} Σ_{i=1}^B ω_i k_{A_iy_i}

where λ is the Lagrange multiplier.

Because the approximation ρ̃(e) (i.e., Eq. (13)) is the Taylor expansion of ρ(e) in the neighborhood of e_0, solving the robust kernel representation (Eq. (12)) is an iterative and alternating process: the weight values (i.e., ω_i in Eq. (17)) are estimated via Eq. (14) with the coding coefficients known, and then the coding coefficients are computed via Eq. (17) with the weight values known. After getting the solution α̂ after some iterations, the classification of the query sample is done via

    identity(y) = arg min_j { Σ_{i=1}^B ω_i ε_{i,j} }    (18)

where ε_{i,j} = ||φ(y_i) − φ(A_{i,j})α̂_j||₂² is the i-th-block kernel representation residual associated with the j-th class, A_i = [A_{i,1}, A_{i,2}, ..., A_{i,c}] with A_{i,j} being the sub-matrix of A_i associated with the j-th class, and α̂ = [α̂_1; α̂_2; ...; α̂_c] with α̂_j being the representation coefficient vector associated with the j-th class. From Eq. (18) it can be seen that the classification criterion is based on a weighted sum of kernel representation residuals, which utilizes both the discrimination power of kernel representation in the high dimensional feature space and the insensitiveness of robust representation to outliers. In addition, the kernel representation residual ε_{i,j} can be rewritten as ε_{i,j} = κ(y_i, y_i) + α̂_j^T K_{A_{i,j}A_{i,j}} α̂_j − 2α̂_j^T k_{A_{i,j}y_i}.
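Assuming precomputed kernel quantities, the p = 2 iteration (weight update via Eq. (14), closed-form coding via Eq. (17)) can be sketched as follows. The function name `slf_rkr_l2` and the default parameters are illustrative; the class-wise classification of Eq. (18) would then reuse the returned weights with per-class kernel sub-matrices:

```python
import numpy as np

def slf_rkr_l2(kyy, K_list, k_list, lam=0.1, tau=0.6, n_iter=10):
    """Iteratively reweighted kernel ridge coding (p = 2 case), sketched.

    kyy: (B,) values kappa(y_i, y_i), one per block.
    K_list: list of B (n, n) kernel matrices K_{A_i A_i}.
    k_list: list of B (n,) kernel vectors k_{A_i y_i}.
    Returns the coding vector alpha and the final block weights.
    """
    B, n = len(K_list), K_list[0].shape[0]
    w = np.ones(B)  # initialize all block weights to 1
    alpha = np.zeros(n)
    for _ in range(n_iter):
        # Eq. (17), p = 2: closed-form weighted kernel ridge solution
        Kw = sum(wi * Ki for wi, Ki in zip(w, K_list))
        kw = sum(wi * ki for wi, ki in zip(w, k_list))
        alpha = np.linalg.solve(Kw + lam * np.eye(n), kw)
        # block residuals e_i^2 = kappa(y_i, y_i) + a^T K a - 2 a^T k
        e2 = np.array([kyy[i] + alpha @ K_list[i] @ alpha
                       - 2 * alpha @ k_list[i] for i in range(B)])
        # Eq. (14): logistic weights, with mu * delta = 8 as in the paper
        delta = np.sort(e2)[::-1][max(int(tau * B) - 1, 0)]
        mu = 8.0 / max(delta, 1e-12)
        w = 1.0 / (1.0 + np.exp(mu * e2 - mu * delta))
    return alpha, w
```

Note how occluded blocks (large e_i²) receive weights near 0 and so barely influence the next ridge solve, which is the mechanism behind the model's robustness.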
3.4. The algorithm

The whole algorithm of the proposed statistical local feature based robust kernel representation (SLF-RKR) is summarized in Table 1. It includes three steps. The first step extracts the SLF using the proposed MPMP. The second step performs robust kernel representation, and the last step performs classification. Given the feature type (e.g., histogram of LBP) and the partition parameters of MPMP (e.g., S, ratio_s, P_s and Q_s), the algorithm of SLF-RKR can be run. The second step is an iterative process. By experiments, we found that this process converges fast. For instance, when there is no occlusion, only 2 or 3 iterations are needed, and when there is occlusion in the query image, about 10 iterations lead to a good solution. We denote by SLF-RKR_ℓ1 and SLF-RKR_ℓ2 the implementations of the SLF-RKR model with ℓ1-norm regularization and ℓ2-norm regularization, respectively.

The time complexity of SLF-RKR mainly lies in the MPMP based SLF extraction and in solving the robust kernel representation. According to the characteristics of the histogram feature, we can adopt the integral image method [53] to speed up the MPMP based SLF extraction. For each pixel in a sub-block, only 2 additions are needed in computing the integral image and 3 additions are needed in computing a histogram bin value. So the computation of each histogram bin for a sub-block needs 3hw(1−ratio)² additions and 1 max operation, where h and w are the height and width of the sub-block, and ratio is the parameter of the sliding box.

For the robust kernel representation, in the case of FR without occlusion the weight ω_i of each block can be fixed as 1. Since the matrix inverse in the closed-form solution (i.e., α̂ = (Σ_{i=1}^B ω_i K_{A_iA_i} + λI)^{-1} Σ_{i=1}^B ω_i k_{A_iy_i}) of SLF-RKR_ℓ2 can be computed offline, SLF-RKR_ℓ2 with ω_i = 1 has a time complexity of O(n²), where n is the number of training samples. The solution to SLF-RKR_ℓ1 with ω_i = 1 can be obtained by standard sparse coding. The time complexity of ℓ1-norm sparse coding with an m×n dictionary is about O(m²n^1.5) [6], while ℓ1-norm minimizers such as the efficient feature-sign search algorithm [40] used in this paper can be much faster in practice. Therefore, for FR without occlusion, SLF-RKR_ℓ2 with ω_i = 1 is much faster than SRC [10], while the time complexity of SLF-RKR_ℓ1 with ω_i = 1 is similar to that of SRC [10].

For FR with occlusion or disguise, the weight ω_i of each block needs to be updated online. In this case, the time complexity of SLF-RKR_ℓ2 increases to about T times that of SLF-RKR_ℓ2 with ω_i = 1, where T is the total number of iterations used to update ω_i. For SLF-RKR_ℓ1 with updated weights, step a) (i.e., weighted kernel representation with p = 1) is an iterative process itself, and steps b), c) and d) can be operated in each iteration of step a). Overall, the time complexity of SLF-RKR_ℓ1 with updated weights is almost the same as that of SLF-RKR_ℓ1 with ω_i = 1, since the former has almost the same solving procedure as the latter with only an additional step to update the weights in each iteration. In FR with occlusion/disguise, SRC needs an additional occlusion matrix to code the occlusion, and thus its time complexity is very high.

The running speed of SLF-RKR is very fast. Under the programming environment of Matlab version R2011a on a desktop with a 1.86GHz CPU and 2.99G RAM, the running times of SRC (executed by a fast ℓ1-norm minimizer such as the feature-sign search algorithm [40] or Dual ALM [61]) and SLF-RKR are compared in Table 2. In the experiment on the AR database with 7 training samples per class (refer to Section 4.2 for the detailed experimental setting), the average running times of SLF-RKR_ℓ2, SLF-RKR_ℓ1 and SRC are listed in Table 2. In the experiment on Extended Yale B with 50% occlusion (refer to Section 4.4 for the detailed experimental setting), the average running times of SLF-RKR_ℓ2 and SLF-RKR_ℓ1 are much less than that of SRC.
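The integral image trick mentioned above (2 additions per pixel to build, 3 additions per bin to query any box) can be sketched generically as follows, with one integral image per histogram bin. This is our own illustrative implementation, not the authors' code:

```python
import numpy as np

def bin_integral_images(pattern, bins=16, vmax=256):
    """Build one 2-D integral image per histogram bin of a pattern map."""
    H, W = pattern.shape
    ii = np.zeros((bins, H + 1, W + 1))
    # quantize pattern codes [0, vmax) into `bins` histogram bins
    bin_idx = (pattern.astype(int) * bins) // vmax
    for b in range(bins):
        ii[b, 1:, 1:] = np.cumsum(np.cumsum(bin_idx == b, axis=0), axis=1)
    return ii

def box_histogram(ii, top, left, h, w):
    """Histogram of any h-by-w box via 3 additions per bin (inclusion-exclusion)."""
    return (ii[:, top + h, left + w] - ii[:, top, left + w]
            - ii[:, top + h, left] + ii[:, top, left])
```

Once the per-bin integral images are built, every sliding-box histogram in MPMP costs only O(bins) regardless of the box size.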

Table 1: Algorithm of statistical local feature based robust kernel representation (SLF-RKR).

Statistical Local Feature based Robust Kernel Representation (SLF-RKR)

1. Extract statistical local features via multi-partition max pooling.

2. Robust kernel representation:
   Initialize the weight of each block as 1: ω_i = 1.
   While not converged, Do
     a) Weighted kernel representation:
        α̂ = arg min_α { Σ_{i=1}^B ω_i κ(y_i, y_i) + α^T (Σ_{i=1}^B ω_i K_{A_iA_i}) α − 2α^T (Σ_{i=1}^B ω_i k_{A_iy_i}) }  s.t.  ||α||_{ℓp} ≤ σ
     b) Compute the reconstruction residual of each block:
        e_i² = ||φ(y_i) − φ(A_i)α̂||₂² = κ(y_i, y_i) + α̂^T K_{A_iA_i} α̂ − 2α̂^T k_{A_iy_i}
     c) Estimate the weight value as
        ω_i = 1 / (1 + exp(μe_i² − μδ)),
        where μ = 8/δ, δ = ψ(e)_{⌊τB⌋}, ⌊τB⌋ outputs the largest integer smaller than τB, and ψ(e)_k is the k-th largest element of the set {e_j², j = 1, ..., B} [31].
     d) Check the convergence condition:
        ||ω^{(t)} − ω^{(t−1)}||₂ / ||ω^{(t−1)}||₂ < γ,
        where γ is a small positive scalar and ω^{(t)} is the vector of weight values in iteration t.
   End While

3. Do classification:
   identity(y) = arg min_j { Σ_{i=1}^B ω_i (κ(y_i, y_i) + α̂_j^T K_{A_{i,j}A_{i,j}} α̂_j − 2α̂_j^T k_{A_{i,j}y_i}) }
   where A_{i,j} is the sub-matrix of A_i associated with the j-th class and α̂_j is the representation coefficient vector associated with the j-th class.

Table 2: Average running time (seconds) of SLF-RKR and SRC.

Method          AR database    Extended Yale B with 50% occlusion
SLF+SRC
SLF-RKR_ℓ1
SLF-RKR_ℓ2

4. Experimental Results

In this section, we present experimental results on benchmark face databases to illustrate the effectiveness of our method. In Section 4.1, we discuss the parameter setting. In Section 4.2 we present the experimental results on the Extended Yale B [58][20] and AR [1] databases captured in controlled environments. In Section 4.3 we demonstrate the robustness of SLF-RKR to pose variation and misalignment. Then in Section 4.4, we test FR against block occlusion and real disguise. Finally, comprehensive evaluations on large-scale face databases, including FERET [-3], FRGC [35] and LFW [3], are presented in Section 4.5.

4.1. Parameter setting

The proposed SLF-RKR consists of two main procedures: feature extraction and robust kernel representation. Unless otherwise specified, the parameters of SLF-RKR are set as shown in Table 3. In feature extraction, the histogram of LBP encoded on the raw image is used as the SLF, and the number of histogram bins for each sub-block is set to 16. In the proposed MPMP based SLF extraction, we set S = 0, P_0 = 5, and Q_0 = 4 for FR with well aligned images. For FR with registration error (e.g., misalignment and pose), we set S = 3 and (P_s, Q_s) = {(5,4), (3,2), (4,2), (2,1)} for s = {0, 1, 2, 3}. In the robust kernel representation procedure, the histogram intersection kernel [34] (i.e., κ(ν_j, ν_k) = Σ_l min{ν_{j,l}, ν_{k,l}}, with ν_{j,l} and ν_{k,l} the l-th entries of ν_j and ν_k, respectively) is used as the kernel function. In the online updating of the weights, we set τ = 0.6 for FR with occlusion and τ = 0.8 for FR without occlusion. The Lagrange multiplier λ of SLF-RKR_ℓ1 (refer to Eq. (17)) is set as 0.005, while the Lagrange multiplier λ of SLF-RKR_ℓ2 is usually set as a larger value (e.g., 0.1) since ℓ2-norm regularization is weaker than ℓ1-norm regularization.

Table 3: Parameter setting of SLF-RKR.

Procedure                       Parameter            Setting
Feature extraction              MPMP                 P_0 = 5, Q_0 = 4 when S = 0; (P_0,Q_0) = (5,4), (P_1,Q_1) = (3,2), (P_2,Q_2) = (4,2), (P_3,Q_3) = (2,1) when S = 3
                                histogram bin number 16
Robust kernel representation    kernel function      histogram intersection kernel
                                weight update        τ = 0.6 for occlusion; τ = 0.8 for non-occlusion
                                Lagrange multiplier  λ = 0.005 (SLF-RKR_ℓ1); λ = 0.1 (SLF-RKR_ℓ2)

4.2. Face recognition on Extended Yale B and AR

We first evaluate the performance of the proposed algorithm on two representative face image databases captured in controlled environments: Extended Yale B [58][20] and AR [1].
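The histogram intersection kernel specified in Section 4.1 can be computed for whole sets of histograms at once. A small vectorized sketch (illustrative, using broadcasting over rows of histograms):

```python
import numpy as np

def hist_intersection_kernel(U, V):
    """Histogram intersection kernel matrix.

    U: (p, d) and V: (q, d) arrays whose rows are histograms; returns the
    (p, q) matrix K with K[j, k] = sum_l min(U[j, l], V[k, l]).
    """
    # broadcast to (p, q, d), take element-wise minima, sum over bins
    return np.minimum(U[:, None, :], V[None, :, :]).sum(axis=2)
```

In the SLF-RKR pipeline this would be applied per block to form K_{A_iA_i} (training vs. training histograms) and k_{A_iy_i} (training vs. query histogram).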
The original SRC with the holistic Eigenface feature [10] is used as the baseline method. We then apply the proposed MPMP based SLF feature to SRC [10], CRC [33], Linear Regression for Classification (LRC) [38], the histogram intersection kernel based Support Vector Machine (HISVM), and Nearest Neighbor (NN) with histogram intersection as its similarity measurement, and compare them with SLF-RKR.
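For concreteness, the iteration of Table 1 and the final classification rule can be sketched in NumPy for the l2-regularized variant, where step (a) has the closed-form solution α̂ = (K_ω + λI)^(-1) k_ω. This is a minimal sketch under our own names, taking precomputed per-block kernel matrices as input and clipping the exponent for numerical safety; it is a reading of the algorithm as summarized above, not the authors' implementation.

```python
import numpy as np

def rkr_l2(K_blocks, k_blocks, kyy_blocks, lam=0.1, tau=0.8, gamma=1e-3, max_iter=50):
    """Iteratively reweighted kernel coding over B feature blocks (l2 variant).

    K_blocks   : list of (n, n) block kernel matrices K_AA^(i)
    k_blocks   : list of (n,) block kernel vectors k_Ay^(i)
    kyy_blocks : list of scalars k(y_i, y_i)
    """
    B, n = len(K_blocks), K_blocks[0].shape[0]
    w = np.ones(B)
    alpha = np.zeros(n)
    for _ in range(max_iter):
        # a) weighted kernel representation: closed form for the l2 regularizer
        K_w = sum(wi * Ki for wi, Ki in zip(w, K_blocks))
        k_w = sum(wi * ki for wi, ki in zip(w, k_blocks))
        alpha = np.linalg.solve(K_w + lam * np.eye(n), k_w)
        # b) reconstruction residual of each block in the kernel-induced space
        e = np.array([kyy + alpha @ Ki @ alpha - 2.0 * alpha @ ki
                      for Ki, ki, kyy in zip(K_blocks, k_blocks, kyy_blocks)])
        # c) logistic weights; delta is the floor(tau*B)-th largest residual
        delta = np.sort(e)[::-1][max(int(tau * B) - 1, 0)]
        mu = 8.0 / max(delta, 1e-12)
        w_new = 1.0 / (1.0 + np.exp(np.clip(mu * (e - delta), -60.0, 60.0)))
        # d) stop when the relative change of the weight vector is small
        if np.linalg.norm(w_new - w) < gamma * max(np.linalg.norm(w), 1e-12):
            w = w_new
            break
        w = w_new
    return alpha, w

def classify(alpha, w, K_blocks, k_blocks, kyy_blocks, labels):
    """Step 3 of Table 1: weighted kernel residual using each class's coefficients only."""
    labels = np.asarray(labels)
    scores = {}
    for c in np.unique(labels):
        a = np.where(labels == c, alpha, 0.0)   # keep only class-c coefficients
        scores[c] = sum(wi * (kyy + a @ Ki @ a - 2.0 * a @ ki)
                        for wi, Ki, ki, kyy in zip(w, K_blocks, k_blocks, kyy_blocks))
    return min(scores, key=scores.get)
```

On a small synthetic problem with a linear kernel and a query near a class-0 training sample, classify() should return class 0; blocks with large residuals (e.g., occluded blocks) receive weights near 0 and stop influencing the code.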

1) Extended Yale B Database: The Extended Yale B database consists of 2,432 frontal-face images of 38 individuals (each subject has 64 samples), captured under various laboratory-controlled lighting conditions [58][0]. For each subject, N_tr samples are randomly chosen as training samples and 3 of the remaining images are randomly chosen as the testing data. Here the images are normalized in size, and the experiment for each N_tr runs 10 times. The FR results, including the mean recognition accuracy and standard deviation, of all the competing methods are listed in Table 4. The proposed SLF-RKR achieves the best performance, with more than 2% improvement over all the others when N_tr is small (e.g., 5 and 10). When 20 training samples are selected, an accuracy of 99.5% is achieved by SLF-RKR. It can also be seen that the methods based on collaborative representation (e.g., SLF-RKR, SLF+CRC, SLF+SRC and the original SRC) are more powerful than the other kinds of linear representation methods (e.g., SLF+LRC, SLF+NN).

Table 4: Face recognition results (%) on the Extended Yale B database.
N_tr | 5 | 10 | 20
Original SRC [10] | 80.0± | ± | ±0.49
SLF+NN | 59.7± | ± | ±0.87
SLF+LRC | 59.0± | ± | ±0.83
SLF+HISVM | 7.0± | ± | ±0.3
SLF+CRC | 83.0± | ± | ±0.3
SLF+SRC | 8.8± | ± | ±0.30
SLF-RKR_l1 | ± | ± | ±0.18
SLF-RKR_l2 | 85.8± | ± | ±0.18

2) AR database: The AR database consists of over 4,000 frontal images from 126 individuals [1]. For each individual, 26 pictures were taken in two separate sessions. As in [10], in the experiment we chose a subset of the dataset consisting of 50 male subjects and 50 female subjects. For each subject, the seven images with illumination change and expressions from Session 1 were used for training, and the other seven images with only illumination change and expression from Session 2 were used for testing. The recognition rates of all the competing methods versus different numbers of training samples are listed in Table 5. In each test we selected the first N_tr training samples as the training data set.
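The evaluation protocol above (a random selection of N_tr training samples per subject, repeated over several runs, reporting mean accuracy and standard deviation) can be sketched as follows; the helper names are ours, and `run_once` stands for any single evaluation of a classifier.

```python
import numpy as np

def random_split(n_per_subject, n_train, rng):
    """Randomly choose n_train training indices for one subject; the rest can serve as test."""
    idx = rng.permutation(n_per_subject)
    return idx[:n_train], idx[n_train:]

def mean_std_over_runs(run_once, n_runs=10, seed=0):
    """Repeat a stochastic evaluation n_runs times; report mean accuracy and standard deviation."""
    rng = np.random.default_rng(seed)
    accs = np.array([run_once(rng) for _ in range(n_runs)])
    return accs.mean(), accs.std()
```

This mirrors the "runs 10 times" setting of the Extended Yale B experiment; the AR experiment instead fixes the first N_tr samples of Session 1, so no repetition is needed there.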
We can see that SLF-RKR achieves the highest recognition rates, followed by SLF+SRC and SLF+CRC. In all cases with fewer than 6 training samples, at least 2% improvement of SLF-RKR over the other methods is achieved. In this experiment, the original SRC gets the worst results, because the holistic

features (e.g., Eigenfaces) have much less discriminative information than the statistical local features (e.g., the histogram of LBP) in dealing with variations of expression and time.

Table 5: Face recognition results (%) on the AR database.
N_tr
Original SRC [10]
SLF+NN
SLF+LRC
SLF+HISVM
SLF+CRC
SLF+SRC
SLF-RKR_l1
SLF-RKR_l2

Apart from LBP, recently Tzimiropoulos et al. [63] utilized image gradient orientation as a local feature to perform subspace learning for face recognition. The results in [63] show that image gradient orientation can lead to better performance than LBP. For example, with the image gradient orientation feature, the recognition rate is 95.65% on Extended Yale B using 5 training samples per subject, and 98.66% on AR using the first 4 training samples of Session 1 per subject. From Table 4 and Table 5, we can see that the recognition rate of the proposed SLF-RKR reaches 99.5% on Extended Yale B and 99.4% on AR using LBP as its SLF. This clearly shows that the use of robust kernel representation significantly increases the recognition rates. Further improvement could be achieved for SLF-RKR if image gradient orientation were used to design the statistical local feature. In addition, in this experiment the l1-norm regularization and l2-norm regularization in SLF-RKR lead to little difference in the recognition rates, but the latter has much lower time complexity.

4.3 Robustness to misalignment and pose

In this section, we test the robustness of the proposed method to local deformation, including image misalignment introduced by the face detector and pose variation. Here the number of histogram bins in each sub-block is set to

Table 6: Face recognition rates (%) on the Multi-PIE database with misalignment. The numbers in round brackets show the recognition rates of SLF-RKR without MPMP.
Method | Session 2 | Session 3 | Session 4
Original SRC
SLF+NN
SLF+LRC
SLF+HISVM
SLF+CRC
SLF+SRC
SLF-RKR_l1 | (81.4) | 83.3 (80.2) | 82.6 (79.0)
SLF-RKR_l2 | 84.3 (79.5) | 82.1 (78.4) | 80.8 (75.0)

1) Large-scale Multi-PIE database: The CMU Multi-PIE database [41] contains images of 337 subjects captured in four sessions with simultaneous variations in pose, expression, and illumination. In the experiments, all the 249 subjects in Session 1 were used. For the training set, we used the 7 frontal images with illuminations {0,1,7,13,14,16,18} and neutral expression. For the testing sets, 10 typical frontal images with even-numbered illuminations and neutral expression from Session 2 to Session 4 were used. Here the training samples are cropped and normalized to 90×72 based on the coordinates of manually located eye centers, while the testing samples are automatically detected using Viola and Jones' face detector [53] without manual intervention, so there are often some misalignments in the testing samples. Table 6 lists the results of all the competing methods. It can be seen that the proposed SLF-RKR achieves the highest recognition rates, with at least 4%, 5% and 3% improvement over all the other methods in Session 2, Session 3 and Session 4, respectively. The original SRC with Eigenfaces gets the worst recognition rates, much lower than SLF+SRC. This validates that SLF is robust to misalignment to some extent. Collaborative representations (e.g., CRC and SRC) combined with SLF achieve about 10% improvement over the other kinds of classifiers (e.g., HISVM, LRC and NN). In addition, SLF-RKR_l1 slightly outperforms SLF-RKR_l2 in this experiment. In order to show the effectiveness of MPMP, we also give the recognition rates of SLF-RKR without the MPMP step in Table 6. One can see that even without MPMP, SLF-RKR_l1 still outperforms SLF+SRC by 1.9% on average, while SLF-RKR_l2 outperforms SLF+CRC by 2.3%.
It can also be observed that the improvement introduced by MPMP is over 3% in each session, which clearly shows the effectiveness of the proposed MPMP in dealing with misalignment.

2) FERET pose database: In this experiment we use the FERET pose dataset [-3], which includes 1,400 images from 198 subjects (about 7 each). This subset is composed of the images marked with ba, bd, be, bf, bg, bj, and bk. Some sample images of one person are shown in Fig. 4. Four tests with

different pose angles were performed, with the images marked ba, bj and bk as the gallery, and the images marked bg, bf, be and bd as the probes, respectively. Here the images are cropped and normalized in size. Because the width and height of the face image are equal, we reset the parameters in the MPMP based SLF extraction as follows: (P_s, Q_s) = {(4,4), (2,2), (2,2), (1,1)} for s = {0, 1, 2, 3}. The experimental results on the tests with different pose variations are listed in Table 7. The proposed SLF-RKR significantly outperforms all the other methods. In particular, it has at least 6.5%, 7%, and 17.5% improvement over all the other methods on the testing data with -25 degree, ±15 degree and +25 degree pose variations, respectively. The original SRC and SLF+HISVM have the worst performance, since the Eigenface feature is sensitive to pose variation and HISVM cannot learn pose variation from a frontal training set. We also give in Table 7 the results of SLF-RKR without MPMP on all poses. A similar conclusion to that on Multi-PIE can be made, i.e., significant improvements (e.g., over 13% when the pose degrees are ±25) are achieved by using MPMP.

Figure 4: Some samples of a subject in the pose subset of the FERET database (ba: gallery; bj: expression; bk: illumination; be: +15; bf: -15; bg: -25; bd: +25).

Table 7: Face recognition rates (%) on the FERET pose subset. The numbers in round brackets show the recognition rates of SLF-RKR without MPMP.
Pose (degree) | -25
Original SRC | 3.5
SLF+NN | 48.5
SLF+LRC | 45.0
SLF+HISVM | 1.5
SLF+CRC | 34.5
SLF+SRC | 31.5
SLF-RKR_l1 | (4.0)
SLF-RKR_l2 | 56.5 (38.5)
(99.5) 99.5 (99.5) 95.0 ( ( (88.0) 57.0 (39.0) (88.0) 54.5 (36.0)

From the above experiments of FR with misalignment and pose variation, we can conclude that the proposed SLF-RKR not only increases the discrimination and invariance of the local features, but also has powerful classification ability due to the use of robust kernel representation. In addition, we find that SLF-RKR_l1 is slightly better than SLF-RKR_l2, but with higher computational cost.

4.4 Robustness to occlusion and disguise

Facial occlusion and disguise are very challenging issues in FR. One interesting property of SRC [10] is its robustness to face occlusion. In this section, we test the performance of SLF-RKR under various occlusions, including block occlusion and real disguise. In SLF-RKR, the robustness to occlusion mainly comes from its iteratively reweighted kernel robust representation; in this section the weight of each block is automatically updated. The state-of-the-art methods for dealing with face occlusion, including the robust version of SRC [10] (i.e., using the l1-norm to characterize the representation residuals), the kernel version of SRC (KSRC) [8] (i.e., using the RBF kernel to map the original feature to a higher dimensional feature space), the kernel version of CRC (KCRC), and Robust Sparse Coding (RSC) [31], are employed for comparison with SLF-RKR.

1) FR with random block occlusion: From the Extended Yale B database [58][0], we chose Subsets 1 and 2 (717 images, normal-to-moderate lighting conditions) for training, and Subset 3 (453 images, more extreme lighting conditions) for testing. Similar to the settings in [10], we simulate various levels of contiguous occlusion, from 0% to 60%, by replacing a randomly located square block of each testing image with an unrelated image, as illustrated in Fig. 5, where (a) shows a face image with 30% block occlusion and (b) shows a face image with 40% block occlusion. The location of the occlusion is randomly chosen for each image and is unknown to each algorithm, and the images are normalized in size. Table 8 lists the FR results versus the various levels of occlusion. Here λ of SLF-RKR_l1 is set to 0.1. From Table 8, we can see that almost all methods can correctly classify all the testing samples when the occlusion level is from 0% to 20%. However, when the occlusion percentage is larger than 20%, the advantage of SLF-RKR over the other methods becomes significant. For instance, with 50% occlusion, SLF-RKR achieves at least 94% recognition accuracy, compared to at most 87.4% for the other methods.
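The block-occlusion protocol described above (a randomly located square covering a given fraction of the test image, filled with an unrelated image) can be simulated with a few lines of NumPy; a sketch under our own function name, assuming grayscale images stored as 2-D arrays and an occluder at least as large as the block.

```python
import numpy as np

def occlude(img, occluder, level, rng):
    """Replace a randomly located square block covering roughly `level` of the
    image area with pixels from an unrelated occluder image."""
    h, w = img.shape
    side = int(round((level * h * w) ** 0.5))   # square block of ~level coverage
    top = int(rng.integers(0, h - side + 1))
    left = int(rng.integers(0, w - side + 1))
    out = img.copy()
    out[top:top + side, left:left + side] = occluder[:side, :side]
    return out
```

As in the protocol, the block location is drawn independently per image and is never revealed to the classifier.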
For SLF-RKR_l1, even with 60% block occlusion, a recognition rate of over 84% is still achieved. This clearly demonstrates the effectiveness of the proposed SLF-RKR method in dealing with face occlusion. In addition, both KCRC and KSRC achieve better performance than CRC, but worse performance than SRC and SLF-RKR. This is because the l1-norm fidelity term of the robust version of SRC can deal with outliers to some extent, whereas the RBF kernel is sensitive to outliers in the signal.

Figure 5: Examples of face images with occlusion: (a) 30% block occlusion; (b) 40% block occlusion.

Table 8: Face recognition rates (%) of different methods under different levels of block occlusion.
Occlusion | 0% | 10% | 20% | 30% | 40% | 50% | 60%
Robust SRC [10]
RSC [31]
SLF+NN
SLF+LRC
SLF+HISVM
SLF+CRC
SLF+KCRC
SLF+SRC
SLF+KSRC
SLF-RKR_l1
SLF-RKR_l2

2) FR with disguise: A subset of 50 males and 50 females is selected from the AR database [1]. For each subject, the 7 samples without occlusion from Session 1 are used for training, and all the remaining samples with disguises are used for testing. These testing samples (including 3 samples with sunglasses in Session 1, 3 samples with sunglasses in Session 2, 3 samples with a scarf in Session 1 and 3 samples with a scarf in Session 2 per subject) not only have disguises, but also have variations of time and illumination. The images are normalized in size. Table 9 lists the FR results on the four test sets with disguise. It can be seen that in the two tests of Session 1, the proposed methods achieve 100% recognition accuracy, much higher than the state-of-the-art results reported in the literature, for example, 83.3% (Sunglass-S1) and 48.7% (Scarf-S1) for the original SRC, and 94.7% (Sunglass-S1) and 91.0% (Scarf-S1) for RSC. In the two tests of Session 2, the improvement of SLF-RKR over all the other methods is at least 6%, which clearly shows the superior classification ability of SLF-RKR. SLF-RKR_l1 is slightly better than SLF-RKR_l2 in the tests of Session 2, which again shows that l1-norm regularization can introduce more discrimination into the coding coefficients, but at the price of speed. For a large-scale database, SLF-RKR_l2 can be a good candidate to balance recognition accuracy and running speed. In addition, as in the experiments of FR with block occlusion, one can see that KCRC and KSRC have lower recognition rates than SLF-RKR and SRC, since the standard RBF kernel is not robust to outliers.

Table 9: Face recognition rates (%) on the challenging datasets with real disguise.
Method | Sunglass-S1 | Scarf-S1 | Sunglass-S2 | Scarf-S2
Robust SRC [10]
RSC [31]
SLF+NN
SLF+LRC
SLF+HISVM
SLF+CRC
SLF+KCRC
SLF+SRC
SLF+KSRC
SLF-RKR_l1
SLF-RKR_l2

4.5 Face recognition on large-scale face databases

Finally, we verify the performance of SLF-RKR on three large-scale face databases: FERET [-3], FRGC [35] and LFW [3]. To demonstrate the effectiveness of SLF-RKR, we also report the results of KSRC and KCRC with the RBF kernel. Considering that SLF-RKR_l2 has similar recognition accuracy to SLF-RKR_l1 but much lower time complexity, in this section we only report the results of SLF-RKR_l2. We update the weights of SLF-RKR_l2 and set the number of histogram bins in each sub-block to 30.

1) FERET database: The FERET database [-3] is often used to validate an algorithm's effectiveness because it contains many kinds of image variations. Taking the Fa subset as the gallery, the probe subsets Fb and Fc were captured with expression and illumination variations (the images in Fc were captured by a different camera). In particular, Dup1 and Dup2 consist of images that were taken at different times; for some people, more than two years elapsed between the gallery set and the Dup1 or Dup2 set. The images are normalized in size. Table 10 lists the FR results of the competing methods. Because each subject in the gallery set has only one training sample, LRC is equivalent to NN, so we only report the result of the NN classifier. The proposed SLF-RKR_l2 achieves the best performance in all tests. In particular, it achieves much higher performance than the competitors on Dup1 and Dup2. The proposed RKR has 7.5% and 5.1% average improvement over CRC and SRC, respectively. A standard kernel (e.g., RBF) can improve the performance of CRC and SRC, but is still 6.1% and 3.5% lower than SLF-RKR_l2 on average.
It is also interesting that the collaborative representation based classifiers (e.g., SRC, CRC, KSRC, KCRC, and RKR) still achieve much higher recognition rates than NN and HISVM in this case where each subject has only one training sample.

Table 10: Face recognition rates (%) on the FERET database.
Method | Fb | Fc | Dup1 | Dup2
SLF+NN
SLF+HISVM
SLF+CRC
SLF+KCRC
SLF+SRC
SLF+KSRC
SLF-RKR_l2

Another widely used statistical local feature is the histogram of LBP encoded on the Gabor magnitude [45]. In addition, the block based Fisher's linear discriminant (BFLD) proposed in [48] has shown a powerful ability to extract a discriminative low-dimensional feature in each block. Therefore, here we compare the proposed SLF-RKR (using the Gabor magnitude based SLF) with the state-of-the-art methods on the FERET database. The feature dimensionality extracted by BFLD in each block is set to 400, and the Gaussian kernel κ(ν_j, ν_k) = exp{−||ν_j − ν_k||²/ξ} is used in SLF-RKR. The results of SLF-RKR_l2, SLF+NN, SLF+SVM and other state-of-the-art methods are listed in Table 11. They show that the proposed SLF-RKR_l2 not only outperforms SLF+NN and SLF+SVM in all cases, but also performs better than the best methods reported in the literature. In particular, SLF-RKR_l2 achieves recognition accuracies of 96.3% and 94.4% on Dup1 and Dup2, respectively, which may be the best results so far.

Table 11: Face recognition rates (%) of SLF-RKR and other state-of-the-art methods on the FERET database.
Method | Fb | Fc | Dup1 | Dup2
SLF+NN
SLF+SVM
Tan's [14]
Zou's [15]
Xie's [48]
SLF-RKR_l2

2) FRGC 2.0: FRGC version 2.0 [35] is a large-scale face database designed with uncontrolled indoor and outdoor settings. We use the subset (35 subjects having no less than 15 samples in the original target set) of Experiment 4, which is the most challenging dataset in FRGC 2.0, with large lighting variations, ageing and image blur. Some examples are shown in Fig. 6. The selected target set contains 5,80 samples, and the query set has 7,606 samples. The images are normalized in size. The feature dimensionality extracted by BFLD in each block is set to 20, and the Gaussian kernel is used in SLF-RKR.
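The Gaussian kernel used here, κ(ν_j, ν_k) = exp{−||ν_j − ν_k||²/ξ}, can be evaluated for all pairs of feature vectors at once via the usual squared-distance expansion; a minimal sketch (function name and vectorization ours):

```python
import numpy as np

def gaussian_kernel(U, V, xi):
    """kappa(u, v) = exp(-||u - v||^2 / xi) for all pairs of rows of U and V."""
    # squared pairwise distances via ||u||^2 + ||v||^2 - 2 u.v, clamped at 0
    sq = (U ** 2).sum(1)[:, None] + (V ** 2).sum(1)[None, :] - 2.0 * U @ V.T
    return np.exp(-np.maximum(sq, 0.0) / xi)
```

Unlike the histogram intersection kernel used earlier, this kernel decays with Euclidean distance, which is one reason the text notes its sensitivity to outliers.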

Three tests, with 5, 10 and 15 target samples for each subject, are made in the experiments. The recognition rates of SLF+NN, SLF+LRC, SLF+HISVM, SLF+CRC, SLF+SRC, SLF+KCRC, SLF+KSRC, and the proposed SLF-RKR are listed in Table 12. Again, SLF-RKR performs the best, though the improvement is not significant, since there are no occlusion, misalignment or pose variations in the query set.

Figure 6: Samples of FRGC 2.0. (a) and (b) are samples in the target and query sets, respectively.

Table 12: Face recognition results (%) with the Gabor magnitude based SLF on the FRGC database.
Method | SLF+NN | SLF+SVM | SLF+LRC | SLF+CRC | SLF+KCRC | SLF+SRC | SLF+KSRC | SLF-RKR_l2

3) LFW: Labeled Faces in the Wild (LFW) [3] is a large-scale database of face photographs designed for unconstrained FR, with variations of pose, illumination, expression, misalignment and occlusion, etc. Some examples are shown in Fig. 7. Two subsets of aligned LFW [4] are used in the experiments. In subset 1 (LFW6), which consists of 311 subjects with no less than 6 samples per subject, we use the first 5 samples as training data and the remaining samples as testing data. In subset 2 (LFW11), which consists of 143 subjects with no less than 11 samples per subject, we use the first 10 samples as training data and the remaining samples as testing data.

Figure 7: Samples of LFW. (a) and (b) are samples in the training and testing sets, respectively.


Corner-Based Image Alignment using Pyramid Structure with Gradient Vector Similarity

Corner-Based Image Alignment using Pyramid Structure with Gradient Vector Similarity Journal of Sgnal and Informaton Processng, 013, 4, 114-119 do:10.436/jsp.013.43b00 Publshed Onlne August 013 (http://www.scrp.org/journal/jsp) Corner-Based Image Algnment usng Pyramd Structure wth Gradent

More information

Analysis of Continuous Beams in General

Analysis of Continuous Beams in General Analyss of Contnuous Beams n General Contnuous beams consdered here are prsmatc, rgdly connected to each beam segment and supported at varous ponts along the beam. onts are selected at ponts of support,

More information

An Optimal Algorithm for Prufer Codes *

An Optimal Algorithm for Prufer Codes * J. Software Engneerng & Applcatons, 2009, 2: 111-115 do:10.4236/jsea.2009.22016 Publshed Onlne July 2009 (www.scrp.org/journal/jsea) An Optmal Algorthm for Prufer Codes * Xaodong Wang 1, 2, Le Wang 3,

More information

A Binarization Algorithm specialized on Document Images and Photos

A Binarization Algorithm specialized on Document Images and Photos A Bnarzaton Algorthm specalzed on Document mages and Photos Ergna Kavalleratou Dept. of nformaton and Communcaton Systems Engneerng Unversty of the Aegean kavalleratou@aegean.gr Abstract n ths paper, a

More information

Course Introduction. Algorithm 8/31/2017. COSC 320 Advanced Data Structures and Algorithms. COSC 320 Advanced Data Structures and Algorithms

Course Introduction. Algorithm 8/31/2017. COSC 320 Advanced Data Structures and Algorithms. COSC 320 Advanced Data Structures and Algorithms Course Introducton Course Topcs Exams, abs, Proects A quc loo at a few algorthms 1 Advanced Data Structures and Algorthms Descrpton: We are gong to dscuss algorthm complexty analyss, algorthm desgn technques

More information

Term Weighting Classification System Using the Chi-square Statistic for the Classification Subtask at NTCIR-6 Patent Retrieval Task

Term Weighting Classification System Using the Chi-square Statistic for the Classification Subtask at NTCIR-6 Patent Retrieval Task Proceedngs of NTCIR-6 Workshop Meetng, May 15-18, 2007, Tokyo, Japan Term Weghtng Classfcaton System Usng the Ch-square Statstc for the Classfcaton Subtask at NTCIR-6 Patent Retreval Task Kotaro Hashmoto

More information

WIRELESS CAPSULE ENDOSCOPY IMAGE CLASSIFICATION BASED ON VECTOR SPARSE CODING.

WIRELESS CAPSULE ENDOSCOPY IMAGE CLASSIFICATION BASED ON VECTOR SPARSE CODING. WIRELESS CAPSULE ENDOSCOPY IMAGE CLASSIFICATION BASED ON VECTOR SPARSE CODING Tao Ma 1, Yuexan Zou 1 *, Zhqang Xang 1, Le L 1 and Y L 1 ADSPLAB/ELIP, School of ECE, Pekng Unversty, Shenzhen 518055, Chna

More information

Human Face Recognition Using Generalized. Kernel Fisher Discriminant

Human Face Recognition Using Generalized. Kernel Fisher Discriminant Human Face Recognton Usng Generalzed Kernel Fsher Dscrmnant ng-yu Sun,2 De-Shuang Huang Ln Guo. Insttute of Intellgent Machnes, Chnese Academy of Scences, P.O.ox 30, Hefe, Anhu, Chna. 2. Department of

More information

Gender Classification using Interlaced Derivative Patterns

Gender Classification using Interlaced Derivative Patterns Gender Classfcaton usng Interlaced Dervatve Patterns Author Shobernejad, Ameneh, Gao, Yongsheng Publshed 2 Conference Ttle Proceedngs of the 2th Internatonal Conference on Pattern Recognton (ICPR 2) DOI

More information

Detection of an Object by using Principal Component Analysis

Detection of an Object by using Principal Component Analysis Detecton of an Object by usng Prncpal Component Analyss 1. G. Nagaven, 2. Dr. T. Sreenvasulu Reddy 1. M.Tech, Department of EEE, SVUCE, Trupath, Inda. 2. Assoc. Professor, Department of ECE, SVUCE, Trupath,

More information

Scale Selective Extended Local Binary Pattern For Texture Classification

Scale Selective Extended Local Binary Pattern For Texture Classification Scale Selectve Extended Local Bnary Pattern For Texture Classfcaton Yutng Hu, Zhlng Long, and Ghassan AlRegb Multmeda & Sensors Lab (MSL) Georga Insttute of Technology 03/09/017 Outlne Texture Representaton

More information

Machine Learning. Support Vector Machines. (contains material adapted from talks by Constantin F. Aliferis & Ioannis Tsamardinos, and Martin Law)

Machine Learning. Support Vector Machines. (contains material adapted from talks by Constantin F. Aliferis & Ioannis Tsamardinos, and Martin Law) Machne Learnng Support Vector Machnes (contans materal adapted from talks by Constantn F. Alfers & Ioanns Tsamardnos, and Martn Law) Bryan Pardo, Machne Learnng: EECS 349 Fall 2014 Support Vector Machnes

More information

Robust Mean Shift Tracking with Corrected Background-Weighted Histogram

Robust Mean Shift Tracking with Corrected Background-Weighted Histogram Robust Mean Shft Trackng wth Corrected Background-Weghted Hstogram Jfeng Nng, Le Zhang, Davd Zhang and Chengke Wu Abstract: The background-weghted hstogram (BWH) algorthm proposed n [] attempts to reduce

More information

An Image Fusion Approach Based on Segmentation Region

An Image Fusion Approach Based on Segmentation Region Rong Wang, L-Qun Gao, Shu Yang, Yu-Hua Cha, and Yan-Chun Lu An Image Fuson Approach Based On Segmentaton Regon An Image Fuson Approach Based on Segmentaton Regon Rong Wang, L-Qun Gao, Shu Yang 3, Yu-Hua

More information

CS 534: Computer Vision Model Fitting

CS 534: Computer Vision Model Fitting CS 534: Computer Vson Model Fttng Sprng 004 Ahmed Elgammal Dept of Computer Scence CS 534 Model Fttng - 1 Outlnes Model fttng s mportant Least-squares fttng Maxmum lkelhood estmaton MAP estmaton Robust

More information

Unsupervised Learning and Clustering

Unsupervised Learning and Clustering Unsupervsed Learnng and Clusterng Why consder unlabeled samples?. Collectng and labelng large set of samples s costly Gettng recorded speech s free, labelng s tme consumng 2. Classfer could be desgned

More information

Histogram of Template for Pedestrian Detection

Histogram of Template for Pedestrian Detection PAPER IEICE TRANS. FUNDAMENTALS/COMMUN./ELECTRON./INF. & SYST., VOL. E85-A/B/C/D, No. xx JANUARY 20xx Hstogram of Template for Pedestran Detecton Shaopeng Tang, Non Member, Satosh Goto Fellow Summary In

More information

Discriminative classifiers for object classification. Last time

Discriminative classifiers for object classification. Last time Dscrmnatve classfers for object classfcaton Thursday, Nov 12 Krsten Grauman UT Austn Last tme Supervsed classfcaton Loss and rsk, kbayes rule Skn color detecton example Sldng ndo detecton Classfers, boostng

More information

Reducing Frame Rate for Object Tracking

Reducing Frame Rate for Object Tracking Reducng Frame Rate for Object Trackng Pavel Korshunov 1 and We Tsang Oo 2 1 Natonal Unversty of Sngapore, Sngapore 11977, pavelkor@comp.nus.edu.sg 2 Natonal Unversty of Sngapore, Sngapore 11977, oowt@comp.nus.edu.sg

More information

Combination of Local Multiple Patterns and Exponential Discriminant Analysis for Facial Recognition

Combination of Local Multiple Patterns and Exponential Discriminant Analysis for Facial Recognition Sensors & ransducers 203 by IFSA http://.sensorsportal.com Combnaton of Local Multple Patterns and Exponental Dscrmnant Analyss for Facal Recognton, 2 Lfang Zhou, 2 Bn Fang, 3 Wesheng L, 3 Ldou Wang College

More information

Face Recognition using 3D Directional Corner Points

Face Recognition using 3D Directional Corner Points 2014 22nd Internatonal Conference on Pattern Recognton Face Recognton usng 3D Drectonal Corner Ponts Xun Yu, Yongsheng Gao School of Engneerng Grffth Unversty Nathan, QLD, Australa xun.yu@grffthun.edu.au,

More information

Skew Angle Estimation and Correction of Hand Written, Textual and Large areas of Non-Textual Document Images: A Novel Approach

Skew Angle Estimation and Correction of Hand Written, Textual and Large areas of Non-Textual Document Images: A Novel Approach Angle Estmaton and Correcton of Hand Wrtten, Textual and Large areas of Non-Textual Document Images: A Novel Approach D.R.Ramesh Babu Pyush M Kumat Mahesh D Dhannawat PES Insttute of Technology Research

More information

Object-Based Techniques for Image Retrieval

Object-Based Techniques for Image Retrieval 54 Zhang, Gao, & Luo Chapter VII Object-Based Technques for Image Retreval Y. J. Zhang, Tsnghua Unversty, Chna Y. Y. Gao, Tsnghua Unversty, Chna Y. Luo, Tsnghua Unversty, Chna ABSTRACT To overcome the

More information

Multi-view 3D Position Estimation of Sports Players

Multi-view 3D Position Estimation of Sports Players Mult-vew 3D Poston Estmaton of Sports Players Robbe Vos and Wlle Brnk Appled Mathematcs Department of Mathematcal Scences Unversty of Stellenbosch, South Afrca Emal: vosrobbe@gmal.com Abstract The problem

More information

Robust Dictionary Learning with Capped l 1 -Norm

Robust Dictionary Learning with Capped l 1 -Norm Proceedngs of the Twenty-Fourth Internatonal Jont Conference on Artfcal Intellgence (IJCAI 205) Robust Dctonary Learnng wth Capped l -Norm Wenhao Jang, Fepng Ne, Heng Huang Unversty of Texas at Arlngton

More information

A Background Subtraction for a Vision-based User Interface *

A Background Subtraction for a Vision-based User Interface * A Background Subtracton for a Vson-based User Interface * Dongpyo Hong and Woontack Woo KJIST U-VR Lab. {dhon wwoo}@kjst.ac.kr Abstract In ths paper, we propose a robust and effcent background subtracton

More information

BOOSTING CLASSIFICATION ACCURACY WITH SAMPLES CHOSEN FROM A VALIDATION SET

BOOSTING CLASSIFICATION ACCURACY WITH SAMPLES CHOSEN FROM A VALIDATION SET 1 BOOSTING CLASSIFICATION ACCURACY WITH SAMPLES CHOSEN FROM A VALIDATION SET TZU-CHENG CHUANG School of Electrcal and Computer Engneerng, Purdue Unversty, West Lafayette, Indana 47907 SAUL B. GELFAND School

More information

Classification / Regression Support Vector Machines

Classification / Regression Support Vector Machines Classfcaton / Regresson Support Vector Machnes Jeff Howbert Introducton to Machne Learnng Wnter 04 Topcs SVM classfers for lnearly separable classes SVM classfers for non-lnearly separable classes SVM

More information

Tsinghua University at TAC 2009: Summarizing Multi-documents by Information Distance

Tsinghua University at TAC 2009: Summarizing Multi-documents by Information Distance Tsnghua Unversty at TAC 2009: Summarzng Mult-documents by Informaton Dstance Chong Long, Mnle Huang, Xaoyan Zhu State Key Laboratory of Intellgent Technology and Systems, Tsnghua Natonal Laboratory for

More information

High-Boost Mesh Filtering for 3-D Shape Enhancement

High-Boost Mesh Filtering for 3-D Shape Enhancement Hgh-Boost Mesh Flterng for 3-D Shape Enhancement Hrokazu Yagou Λ Alexander Belyaev y Damng We z Λ y z ; ; Shape Modelng Laboratory, Unversty of Azu, Azu-Wakamatsu 965-8580 Japan y Computer Graphcs Group,

More information

Hermite Splines in Lie Groups as Products of Geodesics

Hermite Splines in Lie Groups as Products of Geodesics Hermte Splnes n Le Groups as Products of Geodescs Ethan Eade Updated May 28, 2017 1 Introducton 1.1 Goal Ths document defnes a curve n the Le group G parametrzed by tme and by structural parameters n the

More information

Range images. Range image registration. Examples of sampling patterns. Range images and range surfaces

Range images. Range image registration. Examples of sampling patterns. Range images and range surfaces Range mages For many structured lght scanners, the range data forms a hghly regular pattern known as a range mage. he samplng pattern s determned by the specfc scanner. Range mage regstraton 1 Examples

More information

A Modified Median Filter for the Removal of Impulse Noise Based on the Support Vector Machines

A Modified Median Filter for the Removal of Impulse Noise Based on the Support Vector Machines A Modfed Medan Flter for the Removal of Impulse Nose Based on the Support Vector Machnes H. GOMEZ-MORENO, S. MALDONADO-BASCON, F. LOPEZ-FERRERAS, M. UTRILLA- MANSO AND P. GIL-JIMENEZ Departamento de Teoría

More information

Image Deblurring and Super-resolution by Adaptive Sparse Domain Selection and Adaptive Regularization

Image Deblurring and Super-resolution by Adaptive Sparse Domain Selection and Adaptive Regularization Image Deblurrng and Super-resoluton by Adaptve Sparse Doman Selecton and Adaptve Regularzaton Wesheng Dong a,b, Le Zhang b,1, Member, IEEE, Guangmng Sh a, Senor Member, IEEE, and Xaoln Wu c, Senor Member,

More information

Brushlet Features for Texture Image Retrieval

Brushlet Features for Texture Image Retrieval DICTA00: Dgtal Image Computng Technques and Applcatons, 1 January 00, Melbourne, Australa 1 Brushlet Features for Texture Image Retreval Chbao Chen and Kap Luk Chan Informaton System Research Lab, School

More information

Nearest. Points between Image Sets. Meng Yang. Electrical Engineering/IBBT, K.U. Leuven, Belgium

Nearest. Points between Image Sets. Meng Yang. Electrical Engineering/IBBT, K.U. Leuven, Belgium Face Recognton based on Regularzed Ponts between Image Sets Nearest Meng Yang, Pengfe Zhu, Luc Van Gool,3, Le Zhang Department of Informatonn echnology and Electrcal Engneerng, EH Zurch, Swtzerland Department

More information

A Fast Visual Tracking Algorithm Based on Circle Pixels Matching

A Fast Visual Tracking Algorithm Based on Circle Pixels Matching A Fast Vsual Trackng Algorthm Based on Crcle Pxels Matchng Zhqang Hou hou_zhq@sohu.com Chongzhao Han czhan@mal.xjtu.edu.cn Ln Zheng Abstract: A fast vsual trackng algorthm based on crcle pxels matchng

More information

The Research of Support Vector Machine in Agricultural Data Classification

The Research of Support Vector Machine in Agricultural Data Classification The Research of Support Vector Machne n Agrcultural Data Classfcaton Le Sh, Qguo Duan, Xnmng Ma, Me Weng College of Informaton and Management Scence, HeNan Agrcultural Unversty, Zhengzhou 45000 Chna Zhengzhou

More information

SLAM Summer School 2006 Practical 2: SLAM using Monocular Vision

SLAM Summer School 2006 Practical 2: SLAM using Monocular Vision SLAM Summer School 2006 Practcal 2: SLAM usng Monocular Vson Javer Cvera, Unversty of Zaragoza Andrew J. Davson, Imperal College London J.M.M Montel, Unversty of Zaragoza. josemar@unzar.es, jcvera@unzar.es,

More information

SVM-based Learning for Multiple Model Estimation

SVM-based Learning for Multiple Model Estimation SVM-based Learnng for Multple Model Estmaton Vladmr Cherkassky and Yunqan Ma Department of Electrcal and Computer Engneerng Unversty of Mnnesota Mnneapols, MN 55455 {cherkass,myq}@ece.umn.edu Abstract:

More information

Scale and Orientation Adaptive Mean Shift Tracking

Scale and Orientation Adaptive Mean Shift Tracking Scale and Orentaton Adaptve Mean Shft Trackng Jfeng Nng, Le Zhang, Davd Zhang and Chengke Wu Abstract A scale and orentaton adaptve mean shft trackng (SOAMST) algorthm s proposed n ths paper to address

More information

Unsupervised Learning

Unsupervised Learning Pattern Recognton Lecture 8 Outlne Introducton Unsupervsed Learnng Parametrc VS Non-Parametrc Approach Mxture of Denstes Maxmum-Lkelhood Estmates Clusterng Prof. Danel Yeung School of Computer Scence and

More information

Hybrid Non-Blind Color Image Watermarking

Hybrid Non-Blind Color Image Watermarking Hybrd Non-Blnd Color Image Watermarkng Ms C.N.Sujatha 1, Dr. P. Satyanarayana 2 1 Assocate Professor, Dept. of ECE, SNIST, Yamnampet, Ghatkesar Hyderabad-501301, Telangana 2 Professor, Dept. of ECE, AITS,

More information

Robust Face Alignment for Illumination and Pose Invariant Face Recognition

Robust Face Alignment for Illumination and Pose Invariant Face Recognition Robust Face Algnment for Illumnaton and Pose Invarant Face Recognton Fath Kahraman 1, Bnnur Kurt 2, Muhttn Gökmen 2 Istanbul Techncal Unversty, 1 Informatcs Insttute, 2 Computer Engneerng Department 34469

More information

Facial Expression Recognition Based on Local Binary Patterns and Local Fisher Discriminant Analysis

Facial Expression Recognition Based on Local Binary Patterns and Local Fisher Discriminant Analysis WSEAS RANSACIONS on SIGNAL PROCESSING Shqng Zhang, Xaomng Zhao, Bcheng Le Facal Expresson Recognton Based on Local Bnary Patterns and Local Fsher Dscrmnant Analyss SHIQING ZHANG, XIAOMING ZHAO, BICHENG

More information

Two-Dimensional Supervised Discriminant Projection Method For Feature Extraction

Two-Dimensional Supervised Discriminant Projection Method For Feature Extraction Appl. Math. Inf. c. 6 No. pp. 8-85 (0) Appled Mathematcs & Informaton cences An Internatonal Journal @ 0 NP Natural cences Publshng Cor. wo-dmensonal upervsed Dscrmnant Proecton Method For Feature Extracton

More information

CS434a/541a: Pattern Recognition Prof. Olga Veksler. Lecture 15

CS434a/541a: Pattern Recognition Prof. Olga Veksler. Lecture 15 CS434a/541a: Pattern Recognton Prof. Olga Veksler Lecture 15 Today New Topc: Unsupervsed Learnng Supervsed vs. unsupervsed learnng Unsupervsed learnng Net Tme: parametrc unsupervsed learnng Today: nonparametrc

More information

2x x l. Module 3: Element Properties Lecture 4: Lagrange and Serendipity Elements

2x x l. Module 3: Element Properties Lecture 4: Lagrange and Serendipity Elements Module 3: Element Propertes Lecture : Lagrange and Serendpty Elements 5 In last lecture note, the nterpolaton functons are derved on the bass of assumed polynomal from Pascal s trangle for the fled varable.

More information

Determining the Optimal Bandwidth Based on Multi-criterion Fusion

Determining the Optimal Bandwidth Based on Multi-criterion Fusion Proceedngs of 01 4th Internatonal Conference on Machne Learnng and Computng IPCSIT vol. 5 (01) (01) IACSIT Press, Sngapore Determnng the Optmal Bandwdth Based on Mult-crteron Fuson Ha-L Lang 1+, Xan-Mn

More information

Categorizing objects: of appearance

Categorizing objects: of appearance Categorzng objects: global and part-based models of appearance UT Austn Generc categorzaton problem 1 Challenges: robustness Realstc scenes are crowded, cluttered, have overlappng objects. Generc category

More information

Development of an Active Shape Model. Using the Discrete Cosine Transform

Development of an Active Shape Model. Using the Discrete Cosine Transform Development of an Actve Shape Model Usng the Dscrete Cosne Transform Kotaro Yasuda A Thess n The Department of Electrcal and Computer Engneerng Presented n Partal Fulfllment of the Requrements for the

More information

Lecture 13: High-dimensional Images

Lecture 13: High-dimensional Images Lec : Hgh-dmensonal Images Grayscale Images Lecture : Hgh-dmensonal Images Math 90 Prof. Todd Wttman The Ctadel A grayscale mage s an nteger-valued D matrx. An 8-bt mage takes on values between 0 and 55.

More information

Wavefront Reconstructor

Wavefront Reconstructor A Dstrbuted Smplex B-Splne Based Wavefront Reconstructor Coen de Vsser and Mchel Verhaegen 14-12-201212 2012 Delft Unversty of Technology Contents Introducton Wavefront reconstructon usng Smplex B-Splnes

More information

Online Detection and Classification of Moving Objects Using Progressively Improving Detectors

Online Detection and Classification of Moving Objects Using Progressively Improving Detectors Onlne Detecton and Classfcaton of Movng Objects Usng Progressvely Improvng Detectors Omar Javed Saad Al Mubarak Shah Computer Vson Lab School of Computer Scence Unversty of Central Florda Orlando, FL 32816

More information

AUTOMATED personal identification using biometrics

AUTOMATED personal identification using biometrics A 3D Feature Descrptor Recovered from a Sngle 2D Palmprnt Image Qan Zheng,2, Ajay Kumar, and Gang Pan 2 Abstract Desgn and development of effcent and accurate feature descrptors s crtcal for the success

More information