Feature Extraction Based on Maximum Nearest Subspace Margin Criterion


Neural Processing Letters, Springer Science+Business Media New York 2012

Feature Extraction Based on Maximum Nearest Subspace Margin Criterion

Yi Chen, Zhenzhen Li, Zhong Jin

Y. Chen, Z. Jin: School of Computer Science and Technology, Nanjing University of Science and Technology, Nanjing, People's Republic of China. E-mail: cystory@qq.com
Z. Li: School of Information Engineering, Jiangxi Manufacturing Technology College, Nanchang, China

Abstract  Based on the classification rule of sparse representation-based classification (SRC) and linear regression classification (LRC), we propose the maximum nearest subspace margin criterion for feature extraction. The proposed method can be seen as a preprocessing step for SRC and LRC. By maximizing the inter-class reconstruction error and minimizing the intra-class reconstruction error simultaneously, the proposed method significantly improves the performance of SRC and LRC. Compared with linear discriminant analysis, the proposed method avoids the small sample size problem and can extract more features. Moreover, we extend LRC to overcome its potential singularity problem. Experimental results on the extended Yale B (YALE-B), AR, PolyU finger knuckle print and CENPARMI handwritten numeral databases demonstrate the effectiveness of the proposed method.

Keywords  Feature extraction · Dimensionality reduction · Face recognition · Finger knuckle print recognition · Linear regression classification

1 Introduction

Recently, classification based on reconstruction errors [1-6] has attracted many researchers. Among the existing reconstruction-error-based classifiers, the most popular ones are SRC [5] and LRC [6]. The main difference between SRC and LRC is the reconstruction strategy. Take face recognition as an example. SRC, which is based on sparse representation, represents a face image as a sparse combination of all the face images. In contrast, based on a linear regression model, LRC represents a face image as a linear combination of the face images from one class. Although SRC and LRC have different reconstruction strategies, their classification rules are based on the same assumption: the probe image belongs to the class with the minimum reconstruction error.

Suppose $x$ is a probe image and $\hat{x}_i$ ($i = 1, 2, \ldots, c$) is the image reconstructed by the $i$-th class. The distance from $x$ to the $i$-th class is defined as $d_i = \|x - \hat{x}_i\|^2$, and $x$ is assigned to the class with the minimum reconstruction error, $\min_i d_i(x)$. Unfortunately, although SRC and LRC have been successfully applied to face recognition, we notice that their performance degrades severely under varying illumination and noise. That is, the intra-class reconstruction error is likely to be larger than the inter-class reconstruction errors when the images contain illumination variations and noise. In other words, the classification rule of SRC and LRC may not hold well in the original space.

To achieve high performance, a good classification rule should take the characteristics of the feature space into account; a classifier works effectively only if the feature space fits its classification rule. This raises a question: what is the optimal feature space for SRC and LRC? Intuitively, the optimal feature space should fit the classification rule of SRC and LRC as well as possible. To the best of our knowledge, most existing feature extraction methods, such as [7-15], are designed according to the data structure rather than the classifier. Without considering the classification rule of SRC and LRC, the feature subspaces learned by these methods may not satisfy the assumption of SRC and LRC well, so the performance of SRC and LRC potentially degrades. To enhance the performance of SRC and LRC, we propose the maximum nearest subspace margin criterion (MNSMC) according to their classification rules. Based on MNSMC, a new feature extractor is developed to find the optimal feature subspace for SRC and LRC.

The rest of the paper is organized as follows. Related works are reviewed in Sect. 2. In Sect. 3, MNSMC is described in detail. In Sect. 4, experiments on well-known databases demonstrate the effectiveness of the proposed method. Finally, conclusions are drawn in Sect. 5.

2 Related Works

In this section, we briefly review SRC and LRC. Suppose $x_i^j \in \mathbb{R}^n$ ($i = 1, 2, \ldots, c$, $j = 1, 2, \ldots, n_i$) is the $j$-th sample from the $i$-th class and $X_i = [x_i^1, x_i^2, \ldots, x_i^{n_i}]$ is the set of all samples from the $i$-th class. Let $X = [X_1, X_2, \ldots, X_c]$ be the set of all training samples.

2.1 Sparse Representation-Based Classification

SRC considers that sparse representation has natural discriminating power: taking face images as an example, the most compact expression of a certain face image is generally given by the face images from the same class [5]. Denote by $z$ a probe image. We represent $z$ in an overcomplete dictionary whose basis vectors are the training samples themselves, i.e.,

$z = X\beta$    (1)

The sparsest solution to Eq. (1) can be sought by solving the following optimization problem:

$\hat{\beta} = \arg\min \|\beta\|_0, \quad \text{subject to } z = X\beta$    (2)

where $\|\cdot\|_0$ denotes the $L_0$-norm, which counts the number of nonzero entries in a vector.
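The following is a minimal sketch of the SRC decision rule in Eqs. (1)-(4), not the authors' implementation. Since exact $L_0$/$L_1$ minimization is not spelled out here, scikit-learn's Lasso is used as a stand-in sparse coder; the function name, the `alpha` parameter and the array layout are assumptions.

```python
# SRC sketch: sparse-code the probe over all training samples, then pick the
# class whose coefficients give the smallest reconstruction error (Eq. (4)).
import numpy as np
from sklearn.linear_model import Lasso

def src_classify(z, X, labels, alpha=0.01):
    """z: probe (n,); X: training matrix (n, N); labels: (N,) class label per column."""
    coder = Lasso(alpha=alpha, fit_intercept=False, max_iter=10000)
    coder.fit(X, z)                 # relaxed stand-in for arg min ||beta||_1 s.t. z = X beta
    beta = coder.coef_              # sparse coefficient vector of length N
    errors = {}
    for c in np.unique(labels):
        delta = np.where(labels == c, beta, 0.0)    # keep only the coefficients of class c
        errors[c] = np.linalg.norm(z - X @ delta)   # class-wise reconstruction error
    return min(errors, key=errors.get)
```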

Recent research reveals that, for certain dictionaries, if the solution $\hat{\beta}$ is sparse enough, solving the $L_0$ optimization problem is equivalent to solving the following $L_1$ optimization problem [16-18]:

$\hat{\beta} = \arg\min \|\beta\|_1, \quad \text{subject to } z = X\beta$    (3)

The class with the minimum reconstruction error is then selected:

$\text{identity}(z) = \arg\min_i \{\varepsilon_i\}$    (4)

where $\varepsilon_i = \|z - X_i\hat{\beta}_i\|^2$, $\hat{\beta} = [\hat{\beta}_1; \hat{\beta}_2; \ldots; \hat{\beta}_c]$, and $\hat{\beta}_i$ is the coefficient vector associated with class $i$.

2.2 Linear Regression Classification

LRC is based on the assumption that samples from a specific object class lie on a linear subspace. Using this concept, a linear model is developed in which a probe image is represented as a linear combination of class-specific samples, so the task of recognition is cast as a linear regression problem. Least-squares estimation (LSE) [19-21] is used to estimate the reconstruction coefficients of a given probe image against all class models. Finally, the label is assigned as the class with the most precise estimation.

Suppose $z$ is a probe sample from the $i$-th class; it should be represented as a linear combination of the images from the same class (lying on the same subspace), i.e.,

$z = X_i\beta_i$    (5)

where $\beta_i \in \mathbb{R}^{n_i \times 1}$ is the vector of reconstruction coefficients. Given that $n \geq n_i$, the system of equations in Eq. (5) is well conditioned and $\beta_i$ can be estimated by LSE:

$\hat{\beta}_i = (X_i^T X_i)^{-1} X_i^T z$    (6)

The probe sample can then be reconstructed by Eq. (7):

$\hat{z}_i = X_i\hat{\beta}_i = X_i(X_i^T X_i)^{-1} X_i^T z$    (7)

The label is assigned as the class with the minimum reconstruction error, i.e.,

$\text{identity}(z) = \arg\min_i \|z - X_i\hat{\beta}_i\|^2$    (8)
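A minimal sketch of the LRC rule in Eqs. (5)-(8) follows; the function name, the dictionary-of-class-matrices layout, and the use of `numpy.linalg.lstsq` for the least-squares step are assumptions of this sketch, not the authors' code.

```python
# LRC sketch: reconstruct the probe with each class-specific sample matrix and
# assign the label of the class with the smallest reconstruction error.
import numpy as np

def lrc_classify(z, class_matrices):
    """z: probe (n,); class_matrices: dict {label: X_i of shape (n, n_i)}."""
    errors = {}
    for label, Xi in class_matrices.items():
        beta, *_ = np.linalg.lstsq(Xi, z, rcond=None)   # LSE coefficients, Eq. (6)
        z_hat = Xi @ beta                               # class-specific reconstruction, Eq. (7)
        errors[label] = np.linalg.norm(z - z_hat) ** 2
    return min(errors, key=errors.get)                  # decision rule, Eq. (8)
```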

Fig. 1 The reconstruction errors of four images of the first class under different illumination conditions on the YALE-B database (panels a-d: reconstruction error versus class label)

3 Feature Extraction by the Maximum Nearest Subspace Margin Criterion

3.1 Motivation

In face recognition, SRC and LRC demonstrate impressive results. When we investigate the misclassifications, we find that illumination variation can significantly affect the classification results. An intuitive example is provided in Fig. 1. On the YALE-B face database, we select four face images of the first class under different illumination conditions. According to the classification rule of SRC and LRC, a face image is classified correctly when the reconstruction error of the first class is minimal. However, taking LRC as an example, as can be seen in Fig. 1, only the face image with frontal illumination (Fig. 1a) has the minimum intra-class reconstruction error. In an extreme case, i.e., the face image with backlight illumination (Fig. 1d), the intra-class reconstruction error is even larger than most of the inter-class reconstruction errors.

The above example indicates that the original image space is not suitable for classification due to illumination variations. To solve this problem, we aim to find a feature subspace that reduces the impact of illumination and enhances the performance of SRC and LRC. According to the classification rule of SRC and LRC, a feature subspace with larger inter-class reconstruction errors and smaller intra-class reconstruction errors will lead to better performance. To find the optimal feature subspace, we develop MNSMC for feature extraction.

3.2 Maximum Nearest Subspace Margin Criterion

In this section we introduce MNSMC in detail. First, let us introduce some notation and preliminary definitions. Let $x_i^j \in \mathbb{R}^n$ ($i = 1, 2, \ldots, c$, $j = 1, 2, \ldots, n_i$) be the $j$-th sample from the $i$-th class and $X_i = [x_i^1, x_i^2, \ldots, x_i^{n_i}]$ be the set of all samples from the $i$-th class. Based on reconstruction errors, we first define the nearest subspace.

Definition 1 (Nearest subspace). For a given sample $x_i^j$, its nearest subspace is the subspace spanned by the class with the minimum reconstruction error of $x_i^j$.

To introduce label information, we further define two types of nearest subspaces.

Definition 2 (Homogeneous nearest subspace). For a given sample $x_i^j$, its homogeneous nearest subspace is the subspace spanned by the $i$-th class.

Definition 3 (Heterogeneous nearest subspace). For a given sample $x_i^j$, its heterogeneous nearest subspace is the subspace spanned by the class with the minimum inter-class reconstruction error of $x_i^j$.

Before describing the nearest subspace margin, we provide a way to compute two types of distances: the point-to-intra-class distance and the point-to-inter-class distance. In this paper, we focus on reconstruction errors. Thus, the point-to-intra-class distance is defined as the intra-class reconstruction error:

$\varepsilon_i^j = \|x_i^j - \tilde{X}_i^j \beta_i^j\|^2$    (9)

where $\tilde{X}_i^j = [x_i^1, x_i^2, \ldots, x_i^{j-1}, x_i^{j+1}, \ldots, x_i^{n_i}]$ indicates that $x_i^j$ is excluded from $X_i$, and $\beta_i^j$ is the reconstruction coefficient vector obtained by LSE, i.e.,

$\beta_i^j = \big((\tilde{X}_i^j)^T \tilde{X}_i^j\big)^{-1} (\tilde{X}_i^j)^T x_i^j$    (10)

For each $x_i^j$, we can find its $k$ heterogeneous nearest subspaces, collected in the set $N_{ij}^e$. The point-to-inter-class distance is defined as:

$\eta_i^j = \frac{1}{|N_{ij}^e|} \sum_{X_m \in N_{ij}^e} \|x_i^j - X_m \beta_m^j\|^2$    (11)

where $|\cdot|$ denotes the cardinality of a set, $X_m$ is one of the $k$ heterogeneous nearest subspaces and $\beta_m^j$ is the corresponding reconstruction coefficient vector. Based on the point-to-intra-class and point-to-inter-class distances, we can define the nearest subspace margin.

Definition 4 (Nearest subspace margin). The nearest subspace margin $\gamma_i^j$ for $x_i^j$ is defined as

$\gamma_i^j = \eta_i^j - \varepsilon_i^j$    (12)

This margin measures the distances from $x_i^j$ to its own class and to its most similar heterogeneous classes. According to the classification rule of SRC and LRC, a large $\eta_i^j$ and a small $\varepsilon_i^j$, i.e., a large $\gamma_i^j$, indicate that $x_i^j$ is easy to classify correctly. The total nearest subspace margin for the whole data set is then defined as follows.

Definition 5 (Total nearest subspace margin). The total nearest subspace margin $\gamma$ over all samples is defined as:

$\gamma = \sum_i \sum_j \gamma_i^j = \sum_i \sum_j \big(\eta_i^j - \varepsilon_i^j\big) = \sum_i \sum_j \eta_i^j - \sum_i \sum_j \varepsilon_i^j$    (13)

Geometrically, $\sum_i \sum_j \eta_i^j$ measures the class separability and $\sum_i \sum_j \varepsilon_i^j$ measures the compactness of the intra-class samples. From the definitions, a large $\sum_i \sum_j \eta_i^j$ and a small $\sum_i \sum_j \varepsilon_i^j$, i.e., a large $\gamma$, lead to good separability.
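The two distances in Eqs. (9)-(11) can be computed with plain least squares, as in the sketch below. The helper names, the list-based handling of the other classes, and the choice to select the $k$ smallest inter-class errors directly are assumptions made for illustration.

```python
# Sketch of the point-to-intra-class error (Eqs. (9)-(10)) and the
# point-to-inter-class distance (Eq. (11)) for one sample x_i^j.
import numpy as np

def intra_class_error(Xi, j):
    """Leave-one-out reconstruction error of the j-th column of class matrix Xi."""
    x = Xi[:, j]
    Xi_minus = np.delete(Xi, j, axis=1)                 # exclude x_i^j from X_i
    beta, *_ = np.linalg.lstsq(Xi_minus, x, rcond=None) # Eq. (10)
    return np.linalg.norm(x - Xi_minus @ beta) ** 2     # Eq. (9)

def inter_class_distance(x, other_class_matrices, k):
    """Average error over the k heterogeneous nearest subspaces, Eq. (11)."""
    errs = []
    for Xm in other_class_matrices:                     # each Xm: (n, n_m) sample matrix
        beta, *_ = np.linalg.lstsq(Xm, x, rcond=None)
        errs.append(np.linalg.norm(x - Xm @ beta) ** 2)
    return float(np.mean(sorted(errs)[:k]))             # k smallest inter-class errors

# Nearest subspace margin, Eq. (12): gamma = inter_class_distance(...) - intra_class_error(...)
```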

3.3 Linear Feature Extraction

When performing dimensionality reduction, we aim to find a mapping $W = [w_1, w_2, \ldots, w_d] \in \mathbb{R}^{n \times d}$ from the original space to a feature space such that $\gamma$ is maximized after the transformation, i.e.,

$W^* = \arg\max_W \gamma(W)$    (14)

Let $y_i^j = W^T x_i^j$ be the image of $x_i^j$ in the projected subspace, and let $Y_m = W^T X_m$ and $\tilde{Y}_i^j = W^T \tilde{X}_i^j$. Then

$\sum_i \sum_j \frac{1}{|N_{ij}^e|} \sum_{X_m \in N_{ij}^e} \|y_i^j - Y_m \beta_m^j\|^2
= \sum_i \sum_j \frac{1}{|N_{ij}^e|} \sum_{X_m \in N_{ij}^e} \|W^T x_i^j - W^T X_m \beta_m^j\|^2
= \operatorname{tr}\Big(W^T \Big[\sum_i \sum_j \frac{1}{|N_{ij}^e|} \sum_{X_m \in N_{ij}^e} (x_i^j - X_m \beta_m^j)(x_i^j - X_m \beta_m^j)^T\Big] W\Big)
= \operatorname{tr}(W^T S_b^R W)$

where $\operatorname{tr}(\cdot)$ denotes the trace operator and

$S_b^R = \sum_i \sum_j \frac{1}{|N_{ij}^e|} \sum_{X_m \in N_{ij}^e} (x_i^j - X_m \beta_m^j)(x_i^j - X_m \beta_m^j)^T$    (15)

Similarly,

$\sum_i \sum_j \|y_i^j - \tilde{Y}_i^j \beta_i^j\|^2
= \sum_i \sum_j \|W^T x_i^j - W^T \tilde{X}_i^j \beta_i^j\|^2
= \operatorname{tr}\Big(W^T \Big[\sum_i \sum_j (x_i^j - \tilde{X}_i^j \beta_i^j)(x_i^j - \tilde{X}_i^j \beta_i^j)^T\Big] W\Big)
= \operatorname{tr}(W^T S_w^R W)$

where

$S_w^R = \sum_i \sum_j (x_i^j - \tilde{X}_i^j \beta_i^j)(x_i^j - \tilde{X}_i^j \beta_i^j)^T$    (16)

The total nearest subspace margin in the projected subspace becomes:

$\gamma(W) = \operatorname{tr}\big(W^T (S_b^R - S_w^R) W\big) = \sum_{k=1}^{d} w_k^T (S_b^R - S_w^R) w_k$    (17)

To eliminate the freedom of scaling, we add the constraint $w_k^T w_k = 1$. Thus the goal of MNSMC is to solve the following optimization problem:

$\max \sum_{k=1}^{d} w_k^T (S_b^R - S_w^R) w_k, \quad \text{subject to } w_k^T w_k = 1$    (18)

It is easy to prove that the optimal solution $W$ is composed of the $d$ eigenvectors corresponding to the $d$ largest eigenvalues of Eq. (19):

$(S_b^R - S_w^R) w = \lambda w$    (19)

Compared with LDA, the proposed method does not need to compute the inverse of $S_w^R$. Therefore, MNSMC avoids the small sample size (SSS) problem and can extract more features [9].

To show the effectiveness of MNSMC, we recalculate the reconstruction errors of the four images from Fig. 1 in the MNSMC subspace and illustrate the results in Fig. 2. As can be seen in Fig. 1, only one face image has the smallest intra-class reconstruction error. In Fig. 2, however, the intra-class reconstruction errors are significantly smaller and three of the four face images have the minimum intra-class reconstruction error. These results indicate that MNSMC can reduce the impact of illumination and improve the performance of SRC and LRC.

Fig. 2 The reconstruction errors of four images of the first class under different illumination conditions in the MNSMC subspace (panels a-d: reconstruction error versus class label)
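A sketch of MNSMC training (Eqs. (15), (16) and (19)) is given below; function and variable names are assumptions. For numerical stability the leave-one-out coefficients are computed with a small ridge term (cf. the ridge regression discussion that follows), which is an implementation choice of this sketch rather than a requirement of the criterion.

```python
# MNSMC sketch: build the two reconstruction scatter matrices and keep the
# eigenvectors of (S_b^R - S_w^R) with the d largest eigenvalues (Eq. (19)).
import numpy as np

def mnsmc_fit(class_matrices, k=3, d=50, lam=1e-3):
    """class_matrices: dict {label: X_i of shape (n, n_i)}; returns W of shape (n, d)."""
    labels = list(class_matrices)
    n = next(iter(class_matrices.values())).shape[0]
    Sb, Sw = np.zeros((n, n)), np.zeros((n, n))

    def residual(A, x):
        # ridge-regularized reconstruction residual x - A beta
        beta = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ x)
        return x - A @ beta

    for i in labels:
        Xi = class_matrices[i]
        for j in range(Xi.shape[1]):
            x = Xi[:, j]
            r_w = residual(np.delete(Xi, j, axis=1), x)     # intra-class residual, Eq. (16)
            Sw += np.outer(r_w, r_w)
            # k heterogeneous nearest subspaces: classes with the smallest errors, Eq. (15)
            res = [residual(class_matrices[m], x) for m in labels if m != i]
            res.sort(key=lambda r: float(np.dot(r, r)))
            for r_b in res[:k]:
                Sb += np.outer(r_b, r_b) / k
    vals, vecs = np.linalg.eigh(Sb - Sw)                     # symmetric eigenproblem, Eq. (19)
    return vecs[:, np.argsort(vals)[::-1][:d]]               # top-d eigenvectors as columns of W
```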

Note that, in Eq. (10), when the number of training samples $n_i$ of the $i$-th class is larger than the dimension of $x_i^j$, the matrix $(\tilde{X}_i^j)^T \tilde{X}_i^j$ is singular, so its inverse cannot be computed directly. In this case, we can apply ridge regression (RR) [22,23] to solve the singularity problem. Different from LSE, RR minimizes the following cost function:

$\min \big(\|x_i^j - \tilde{X}_i^j \beta_i^j\|^2 + \lambda \|\beta_i^j\|^2\big)$    (20)

where $\lambda$ is a positive factor that restricts the solution space. The cost function can be rewritten as:

$\|x_i^j - \tilde{X}_i^j \beta_i^j\|^2 + \lambda \|\beta_i^j\|^2
= (x_i^j - \tilde{X}_i^j \beta_i^j)^T (x_i^j - \tilde{X}_i^j \beta_i^j) + \lambda (\beta_i^j)^T \beta_i^j
= (x_i^j)^T x_i^j - 2(\beta_i^j)^T (\tilde{X}_i^j)^T x_i^j + (\beta_i^j)^T (\tilde{X}_i^j)^T \tilde{X}_i^j \beta_i^j + \lambda (\beta_i^j)^T \beta_i^j$

Taking the derivative with respect to $\beta_i^j$ and setting it to zero, the optimal solution of the cost function is obtained as follows:

$\beta_i^j = \big((\tilde{X}_i^j)^T \tilde{X}_i^j + \lambda I\big)^{-1} (\tilde{X}_i^j)^T x_i^j$    (21)

where $I$ is the identity matrix. It is easy to prove that $(\tilde{X}_i^j)^T \tilde{X}_i^j + \lambda I$ is nonsingular. As can be seen from Eq. (6), LRC suffers from the same singularity problem as MNSMC. Using RR instead of LSE, Eq. (6) can be rewritten as:

$\hat{\beta}_i = (X_i^T X_i + \lambda I)^{-1} X_i^T z$    (22)
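A minimal sketch of the resulting ridge regression classifier (Eq. (22)) follows; the function name and the dictionary-of-class-matrices layout are assumptions carried over from the LRC sketch above.

```python
# RRC sketch: LRC with ridge-regularized coefficients so that
# X_i^T X_i + lambda*I is always invertible (Eq. (22)).
import numpy as np

def rrc_classify(z, class_matrices, lam=1e-3):
    """z: probe (n,); class_matrices: dict {label: X_i of shape (n, n_i)}."""
    errors = {}
    for label, Xi in class_matrices.items():
        G = Xi.T @ Xi + lam * np.eye(Xi.shape[1])   # nonsingular Gram matrix
        beta = np.linalg.solve(G, Xi.T @ z)         # ridge coefficients, Eq. (22)
        errors[label] = np.linalg.norm(z - Xi @ beta) ** 2
    return min(errors, key=errors.get)
```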

Based on RR, we extend LRC to handle such cases without restriction. The modified LRC is called ridge regression classification (RRC) in this paper.

3.4 The Algorithm of MNSMC

The main steps of MNSMC are summarized in Table 1.

Table 1 The algorithm of MNSMC
Input: Column sample matrix X, the nearest subspace size k
Output: Transform matrix W
Step 1: Construct $S_w^R$ and $S_b^R$ using X.
Step 2: Solve the eigenvectors of $(S_b^R - S_w^R)w = \lambda w$ and construct $W = [w_1, w_2, \ldots, w_d]$ from the eigenvectors corresponding to the d largest eigenvalues.
Step 3: Output W.

4 Experiments

To evaluate the performance of the proposed method, we compare it with three feature extraction methods, i.e., principal component analysis (PCA) [24], LDA and the maximum margin criterion (MMC) [25], over three classifiers, i.e., the nearest neighbor classifier (NNC) [26], SRC and LRC, on four well-known databases. The details of the databases are summarized in Table 2.

Table 2 The details of the four databases

Database    Size    Number of classes    Number of samples per class    Number of training samples per class
YALE-B
AR
FKP
CENPARMI

As a baseline, we directly employ NNC, SRC and LRC to classify the raw data. We then investigate whether MNSMC can enhance the performance of SRC and LRC. We also compare MNSMC with PCA, LDA and MMC to find which feature subspace is more suitable for SRC and LRC. NNC is compared with SRC and LRC to determine which classifier is more suitable for MNSMC. Note that LDA can extract at most c-1 features (c is the total number of classes); therefore, the dimension of LDA is limited in the figures.

4.1 Parameter Selection

For efficiency, on the YALE-B, AR and FKP databases, PCA is first applied to reduce the dimensionality, and the experiments are performed in the 150-dimensional PCA subspaces. On the YALE-B, AR and FKP databases, there is only one model parameter, i.e., the number k of heterogeneous nearest subspaces. On the CENPARMI database, the parameter λ is introduced to overcome the singularity problem. In the experiments, the parameter values are set empirically as in Table 3.

4.2 Face Recognition

The YALE-B database [27-29] consists of 2432 frontal face images of 38 subjects under various lighting conditions. The database was divided into five subsets: subset 1, consisting of 266 images (seven images per subject) under nominal lighting conditions, was used as the gallery.
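To illustrate how the pieces fit together, the following usage sketch runs the steps of Table 1 and then classifies a probe in the learned subspace, assuming the `mnsmc_fit` and `rrc_classify` sketches above are in scope. The synthetic random data is a placeholder for illustration only, not one of the databases used in the experiments.

```python
# End-to-end sketch: learn W with MNSMC, project class matrices and probe, classify with RRC.
import numpy as np

rng = np.random.default_rng(0)
train = {c: rng.normal(size=(150, 10)) for c in range(5)}   # 5 classes, 10 samples of dim 150
probe = train[2][:, 0] + 0.05 * rng.normal(size=150)        # noisy copy of a class-2 sample

W = mnsmc_fit(train, k=3, d=20)                             # Steps 1-3 of Table 1
train_proj = {c: W.T @ X for c, X in train.items()}         # project each class matrix
pred = rrc_classify(W.T @ probe, train_proj)                # classify in the MNSMC subspace
print("predicted class:", pred)
```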

Table 3 The parameter settings of the four databases

Database    PCA dimensions    k    λ
YALE-B      150                    None
AR          150                    None
FKP         150                    None
CENPARMI

Table 4 The average maximal recognition rates on the YALE-B database

        Baseline    PCA      LDA      MMC      MNSMC
NNC     44.3 ± -    - ± -    - ± -    - ± -    - ± 2.0
SRC     86.2 ± -    - ± -    - ± -    - ± -    - ± 1.1
LRC     81.7 ± -    - ± -    - ± -    - ± -    - ± 0.9

Bold values are the highest recognition rates of the corresponding classifiers.

Fig. 3 Sample images of one person from the YALE-B face database

Subsets 2 and 3, each consisting of 12 images per subject, characterize slight-to-moderate luminance variations, while subset 4 (14 images per person) and subset 5 (19 images per person) depict severe light variations. The images are grayscale and normalized to the same resolution. Sample images of one person from the YALE-B face database are shown in Fig. 3. We randomly choose 10 sample images per subject from all the subsets and use the remaining sample images for testing. This procedure is repeated 50 times. The average recognition rates and the corresponding standard deviations are reported in Table 4, and the recognition rates versus the dimensions are illustrated in Figs. 4, 5 and 6.

The AR face database [30,31] contains over 4,000 color face images of 126 people (70 men and 56 women), including frontal views of faces with different facial expressions, lighting conditions and occlusions. The pictures of most persons were taken in two sessions (separated by two weeks). Each session contains 13 color images, and 120 individuals (65 men and 55 women) participated in both sessions. The images of these 120 individuals were selected and used in our experiment. We manually cropped the face portion of each image and then normalized it to a fixed size. Sample images of one person from the AR face database are shown in Fig. 7. We randomly choose 5 sample images from each person and use the remaining sample images for testing. This procedure is repeated 50 times. The average recognition rates and the corresponding standard deviations are reported in Table 5, and the recognition rates versus the dimensions are illustrated in Figs. 8, 9 and 10.

Fig. 4 The recognition rates of the 4 methods plus LRC on the YALE-B database (recognition rate versus dimension; curves: PCA+LRC, LDA+LRC, MMC+LRC, MNSMC+LRC)

Fig. 5 The recognition rates of the 4 methods plus SRC on the YALE-B database (recognition rate versus dimension; curves: PCA+SRC, LDA+SRC, MMC+SRC, MNSMC+SRC)

Table 5 The average maximal recognition rates on the AR database

        Baseline    PCA      LDA      MMC      MNSMC
NNC     59.6 ± -    - ± -    - ± -    - ± -    - ± 1.3
SRC     87.5 ± -    - ± -    - ± -    - ± -    - ± 0.5
LRC     74.8 ± -    - ± -    - ± -    - ± -    - ± 0.9

Bold values are the highest recognition rates of the corresponding classifiers.

Fig. 6 The recognition rates of MNSMC plus SRC and LRC versus the baselines of SRC and LRC on the YALE-B database (curves: SRC, LRC, MNSMC+LRC, MNSMC+SRC)

Fig. 7 Sample images of one person from the AR face database

4.3 Finger Knuckle Print Recognition

In the PolyU FKP database [32-34], FKP images were collected from 165 volunteers, including 125 males and 40 females. The samples were collected in two separate sessions. In each session, the subject was asked to provide six images for each of the left index finger, the left middle finger, the right index finger and the right middle finger. Therefore, 48 images from four fingers were collected from each subject, and in total the database contains 7,920 images from 660 different fingers. The average time interval between the first and the second sessions was about 25 days; the maximum and minimum time intervals were 96 days and 14 days, respectively. All the samples in the database are histogram equalized and resized to a common size. Sample images of one person from the FKP database are shown in Fig. 11. We randomly choose five sample images from each person and use the remaining sample images for testing. This procedure is repeated 50 times. The average recognition rates and the corresponding standard deviations are reported in Table 6, and the recognition rates versus the dimensions are illustrated in Figs. 12, 13 and 14.

Fig. 8 The recognition rates of the 4 methods plus LRC on the AR database (recognition rate versus dimension; curves: PCA+LRC, LDA+LRC, MMC+LRC, MNSMC+LRC)

Fig. 9 The recognition rates of the 4 methods plus SRC on the AR database (recognition rate versus dimension; curves: PCA+SRC, LDA+SRC, MMC+SRC, MNSMC+SRC)

Table 6 The average maximal recognition rates on the FKP database

        Baseline    PCA      LDA      MMC      MNSMC
NNC     86.6 ± -    - ± -    - ± -    - ± -    - ± 1.5
SRC     86.6 ± -    - ± -    - ± -    - ± -    - ± 0.9
LRC     86.1 ± -    - ± -    - ± -    - ± -    - ± 1.2

Bold values are the highest recognition rates of the corresponding classifiers.

Fig. 10 The recognition rates of MNSMC plus SRC and LRC versus the baselines of SRC and LRC on the AR database (curves: SRC, LRC, MNSMC+LRC, MNSMC+SRC)

Fig. 11 Sample images of the right index finger from one individual

4.4 Handwritten Numeral Recognition

The Concordia University CENPARMI handwritten numeral database [35] is used to test the performance of MNSMC plus RRC. The database contains 10 numeral classes and each class has 600 samples. In our experiment, we randomly choose 200 samples of each class for training and the remaining 400 samples for testing. Thus, the total number of training samples is 2,000 while the total number of testing samples is 4,000. Since the dimensionality of one digit (121) is less than the number of training samples (200) of one class, LRC fails under this circumstance, so RRC is employed in this experiment. The recognition rates versus the dimensions are illustrated in Figs. 15, 16 and 17.

Table 7 The recognition rates on the four subsets using different methods and classifiers

          Subset 2           Subset 3           Subset 4           Subset 5
          NNC  SRC  LRC      NNC  SRC  LRC      NNC  SRC  LRC      NNC  SRC  LRC
PCA
LDA
MMC
MNSMC

Bold values are the highest recognition rates of the corresponding classifiers.

Fig. 12 The recognition rates of the 4 methods plus LRC on the FKP database (recognition rate versus dimension; curves: PCA+LRC, LDA+LRC, MMC+LRC, MNSMC+LRC)

Fig. 13 The recognition rates of the 4 methods plus SRC on the FKP database (recognition rate versus dimension; curves: PCA+SRC, LDA+SRC, MMC+SRC, MNSMC+SRC)

Fig. 14 The recognition rates of MNSMC plus SRC and LRC versus the baselines of SRC and LRC on the FKP database (curves: LRC, SRC, MNSMC+LRC, MNSMC+SRC)

Fig. 15 The recognition rates of the 4 methods plus RRC on the CENPARMI database (recognition rate versus dimension; curves: PCA+RRC, LDA+RRC, MMC+RRC, MNSMC+RRC)

4.5 Face Recognition Under Illumination and Noise Conditions

Further experiments on the YALE-B face database are conducted to investigate the performance of MNSMC under illumination and noise conditions. The YALE-B database was divided into five subsets according to the illumination directions. Sample images of one person from the subsets are shown in Fig. 18. We follow the evaluation protocol reported in [28,36]: training is conducted using subset 1 and the system is validated on the remaining subsets. The experimental results are listed in Table 7.

Fig. 16 The recognition rates of the 4 methods plus SRC on the CENPARMI database (recognition rate versus dimension; curves: PCA+SRC, LDA+SRC, MMC+SRC, MNSMC+SRC)

Fig. 17 The recognition rates of MNSMC plus SRC and RRC versus the baselines of SRC and RRC on the CENPARMI database (curves: RRC, SRC, MNSMC+RRC, MNSMC+SRC)

In the next set of experiments we contaminate the face images in subset 2 with salt and pepper noise [37]. Figure 19 shows face images distorted with various degrees of salt and pepper noise. The experimental results can be found in Table 8.

4.6 Discussions

From the experimental results, we can draw the following conclusions:

(1) Since the YALE-B and AR databases contain variations of illumination, occlusion and expression, the recognition rates are not very good when the classifiers are directly applied to the raw data. More importantly, we find that not all feature extraction methods are helpful to the classifiers. As can be seen in Tables 4 and 9, the recognition rates of LDA plus SRC are lower than the corresponding baselines. That means selecting an appropriate feature extraction method is very important for a specific classifier.

Table 8 The recognition rates on subset 2 using different methods and classifiers

          Density 20 %       Density 40 %       Density 60 %       Density 80 %
          NNC  SRC  LRC      NNC  SRC  LRC      NNC  SRC  LRC      NNC  SRC  LRC
PCA
LDA
MMC
MNSMC

Bold values are the highest recognition rates of the corresponding classifiers.

Table 9 The average maximal recognition rates on the CENPARMI database

        Baseline    PCA      LDA      MMC      MNSMC
NNC     88.3 ± -    - ± -    - ± -    - ± -    - ± 0.5
SRC     93.2 ± -    - ± -    - ± -    - ± -    - ± 0.4
RRC     - ± -       - ± -    - ± -    - ± -    - ± 0.5

Bold values are the highest recognition rates of the corresponding classifiers.

Fig. 18 Each row represents typical images from subsets 1, 2, 3, 4 and 5, respectively

Fig. 19 Face images with a 20 %, b 40 %, c 60 %, d 80 % salt and pepper noise density

(2) Since MNSMC is designed according to the classification rule of SRC and LRC, it matches SRC and LRC well. As can be seen in Figs. 6, 10, 14 and 17, the recognition rates of MNSMC plus SRC or LRC are consistently higher than the corresponding baselines. Meanwhile, we observe that MNSMC plus LRC outperforms MNSMC plus SRC on the YALE-B, AR and FKP databases. As introduced above, SRC and LRC are based on different reconstruction strategies; MNSMC shares the same reconstruction strategy as LRC, so MNSMC plus LRC performs better in most of the experiments.

(3) Compared with PCA, LDA and MMC, MNSMC is the best feature extraction method for SRC and LRC. As can be seen in Figs. 4, 5, 8, 9, 12, 13, 15 and 16, MNSMC plus SRC or LRC consistently outperforms the other combinations. Because MNSMC considers the reconstruction error, it fits the classification rule of SRC and LRC well, and therefore performs better than the other feature extraction methods.

(4) The experimental results also indicate that, compared with SRC and LRC, NNC is not very suitable for MNSMC. Technically, NNC assumes that a sample and its nearest neighbor are in the same class, whereas MNSMC considers the nearest subspace rather than the nearest neighbor. Therefore, MNSMC may not match NNC well.

5 Conclusions

In this paper, according to the classification rule of SRC and LRC, a new feature extractor based on MNSMC is proposed. By maximizing the inter-class reconstruction error and minimizing the intra-class reconstruction error simultaneously, the proposed method improves the performance of SRC and LRC significantly. Our method avoids the SSS problem and can extract more features than LDA. Moreover, based on RR, we develop RRC to overcome the potential singularity problem of LRC. The experimental results on the YALE-B, AR, FKP and CENPARMI handwritten numeral databases show the effectiveness of the proposed method.

Acknowledgements This work is partially supported by the National Science Foundation of China.

References

1. Li SZ (1998) Face recognition based on nearest linear combinations. In: Proceedings of IEEE international conference on computer vision and pattern recognition

2. Li SZ, Lu J (1999) Face recognition using nearest feature line method. IEEE Trans Neural Netw 10(2)
3. Li SZ, Chan KL, Wang CL (2000) Performance evaluation of the nearest feature line method in image classification and retrieval. IEEE Trans Pattern Anal Mach Intell 22(11)
4. Chien J-T, Wu C-C (2002) Discriminant waveletfaces and nearest feature classifiers for face recognition. IEEE Trans Pattern Anal Mach Intell 24(12)
5. Wright J, Yang A, Ganesh A, Sastry S, Ma Y (2009) Robust face recognition via sparse representation. IEEE Trans Pattern Anal Mach Intell 31(2)
6. Naseem I, Togneri R, Bennamoun M (2010) Linear regression for face recognition. IEEE Trans Pattern Anal Mach Intell 32(11)
7. Jolliffe IT (1986) Principal component analysis. Springer, New York
8. Belhumeur PN, Hespanha J, Kriegman D (1997) Eigenfaces vs. Fisherfaces: recognition using class specific linear projection. IEEE Trans Pattern Anal Mach Intell 19(7)
9. Li H, Jiang T, Zhang K (2003) Efficient and robust feature extraction by maximum margin criterion. In: Proceedings of advances in neural information processing systems
10. He X, Yan S, Hu Y, Niyogi P, Zhang H-J (2005) Face recognition using Laplacianfaces. IEEE Trans Pattern Anal Mach Intell 27(3)
11. He X, Cai D, Yan S, Zhang HJ (2005) Neighborhood preserving embedding. In: Proceedings of the 10th IEEE international conference on computer vision
12. Yan S, Xu D, Zhang B, Zhang H, Yang Q, Lin S (2007) Graph embedding and extensions: a general framework for dimensionality reduction. IEEE Trans Pattern Anal Mach Intell 29(1)
13. Yang J, Zhang D, Yang J, Niu B (2007) Globally maximizing, locally minimizing: unsupervised discriminant projection with applications to face and palm biometrics. IEEE Trans Pattern Anal Mach Intell 29(4)
14. Chen H-T, Chang H-W, Liu T-L (2005) Local discriminant embedding and its variants. In: IEEE conference on computer vision and pattern recognition (CVPR 2005)
15. Wang F, Wang X, Zhang D, Zhang CS, Li T (2009) Marginface: a novel face recognition method by average neighborhood margin maximization. Pattern Recognit 42(11)
16. Candès E, Tao T (2006) Near optimal signal recovery from random projections: universal encoding strategies? IEEE Trans Inf Theory 52(12)
17. Donoho D (2006) For most large underdetermined systems of linear equations the minimal l1-norm solution is also the sparsest solution. Commun Pure Appl Math 59(6)
18. Candès E, Romberg J, Tao T (2006) Stable signal recovery from incomplete and inaccurate measurements. Commun Pure Appl Math 59(8)
19. Hastie T, Tibshirani R, Friedman J (2001) The elements of statistical learning: data mining, inference and prediction. Springer, New York
20. Seber GAF (2003) Linear regression analysis. Wiley-Interscience, Hoboken
21. Ryan TP (1997) Modern regression methods. Wiley-Interscience, Hoboken
22. Hoerl AE, Kennard RW (1970) Ridge regression: applications to nonorthogonal problems. Technometrics 12(1)
23. Hoerl AE, Kennard RW (1970) Ridge regression: biased estimation for nonorthogonal problems. Technometrics 12(1)
24. Jolliffe IT (1986) Principal component analysis. Springer, New York
25. Li H, Jiang T, Zhang K (2003) Efficient and robust feature extraction by maximum margin criterion. In: Proceedings of advances in neural information processing systems
26. Cover TM, Hart PE (1967) Nearest neighbor pattern classification. IEEE Trans Inf Theory 13(1)
27. Lee K, Ho J, Kriegman D (2005) Acquiring linear subspaces for face recognition under variable lighting. IEEE Trans Pattern Anal Mach Intell 27(5)
28. Georghiades A, Belhumeur P, Kriegman D (2001) From few to many: illumination cone models for face recognition under variable lighting and pose. IEEE Trans Pattern Anal Mach Intell 23(6)
29. The extended YALE-B database.
30. Martinez AM, Benavente R (1998) The AR face database. CVC Technical Report
31. Martinez AM, Benavente R (2003) The AR face database. alex_face_db.html
32. Zhang L, Zhang L, Zhang D, Zhu H (2010) Online finger-knuckle-print verification for personal authentication. Pattern Recognit 43(7)
33. Zhang L, Zhang L, Zhang D (2009) Finger-knuckle-print: a new biometric identifier. In: Proceedings of the IEEE international conference on image processing
34. The FKP database.

35. Liao SX, Pawlak M (1996) On image analysis by moments. IEEE Trans Pattern Anal Mach Intell 18(3)
36. Weilong C, Joo ME, Wu S (2006) Illumination compensation and normalization for robust face recognition using discrete cosine transform in logarithm domain. IEEE Trans Syst Man Cybern 36(2)
37. Gonzalez RC, Woods RE (2007) Digital image processing. Pearson Prentice Hall, Upper Saddle River


Positive Semi-definite Programming Localization in Wireless Sensor Networks Postve Sem-defnte Programmng Localzaton n Wreless Sensor etworks Shengdong Xe 1,, Jn Wang, Aqun Hu 1, Yunl Gu, Jang Xu, 1 School of Informaton Scence and Engneerng, Southeast Unversty, 10096, anjng Computer

More information

An Optimal Algorithm for Prufer Codes *

An Optimal Algorithm for Prufer Codes * J. Software Engneerng & Applcatons, 2009, 2: 111-115 do:10.4236/jsea.2009.22016 Publshed Onlne July 2009 (www.scrp.org/journal/jsea) An Optmal Algorthm for Prufer Codes * Xaodong Wang 1, 2, Le Wang 3,

More information

Problem Definitions and Evaluation Criteria for Computational Expensive Optimization

Problem Definitions and Evaluation Criteria for Computational Expensive Optimization Problem efntons and Evaluaton Crtera for Computatonal Expensve Optmzaton B. Lu 1, Q. Chen and Q. Zhang 3, J. J. Lang 4, P. N. Suganthan, B. Y. Qu 6 1 epartment of Computng, Glyndwr Unversty, UK Faclty

More information

An Efficient Illumination Normalization Method with Fuzzy LDA Feature Extractor for Face Recognition

An Efficient Illumination Normalization Method with Fuzzy LDA Feature Extractor for Face Recognition www.mer.com Vol.2, Issue.1, pp-060-065 ISS: 2249-6645 An Effcent Illumnaton ormalzaton Meod w Fuzzy LDA Feature Extractor for Face Recognton Behzad Bozorgtabar 1, Hamed Azam 2 (Department of Electrcal

More information

Outline. Type of Machine Learning. Examples of Application. Unsupervised Learning

Outline. Type of Machine Learning. Examples of Application. Unsupervised Learning Outlne Artfcal Intellgence and ts applcatons Lecture 8 Unsupervsed Learnng Professor Danel Yeung danyeung@eee.org Dr. Patrck Chan patrckchan@eee.org South Chna Unversty of Technology, Chna Introducton

More information

Compiler Design. Spring Register Allocation. Sample Exercises and Solutions. Prof. Pedro C. Diniz

Compiler Design. Spring Register Allocation. Sample Exercises and Solutions. Prof. Pedro C. Diniz Compler Desgn Sprng 2014 Regster Allocaton Sample Exercses and Solutons Prof. Pedro C. Dnz USC / Informaton Scences Insttute 4676 Admralty Way, Sute 1001 Marna del Rey, Calforna 90292 pedro@s.edu Regster

More information

On Modeling Variations For Face Authentication

On Modeling Variations For Face Authentication On Modelng Varatons For Face Authentcaton Xaomng Lu Tsuhan Chen B.V.K. Vjaya Kumar Department of Electrcal and Computer Engneerng, Carnege Mellon Unversty Abstract In ths paper, we present a scheme for

More information

Appearance-based Statistical Methods for Face Recognition

Appearance-based Statistical Methods for Face Recognition 47th Internatonal Symposum ELMAR-2005, 08-10 June 2005, Zadar, Croata Appearance-based Statstcal Methods for Face Recognton Kresmr Delac 1, Mslav Grgc 2, Panos Latss 3 1 Croatan elecom, Savsa 32, Zagreb,

More information

Machine Learning. Support Vector Machines. (contains material adapted from talks by Constantin F. Aliferis & Ioannis Tsamardinos, and Martin Law)

Machine Learning. Support Vector Machines. (contains material adapted from talks by Constantin F. Aliferis & Ioannis Tsamardinos, and Martin Law) Machne Learnng Support Vector Machnes (contans materal adapted from talks by Constantn F. Alfers & Ioanns Tsamardnos, and Martn Law) Bryan Pardo, Machne Learnng: EECS 349 Fall 2014 Support Vector Machnes

More information

Sixth Indian Conference on Computer Vision, Graphics & Image Processing

Sixth Indian Conference on Computer Vision, Graphics & Image Processing Sxth Indan Conference on Computer Vson, Graphcs & Image Processng Incorporatng Cohort Informaton for Relable Palmprnt Authentcaton Ajay Kumar Bometrcs Research Laboratory, Department of Electrcal Engneerng

More information

RECOGNITION AND AGE PREDICTION WITH DIGITAL IMAGES OF MISSING CHILDREN

RECOGNITION AND AGE PREDICTION WITH DIGITAL IMAGES OF MISSING CHILDREN RECOGNIION AND AGE PREDICION WIH DIGIAL IMAGES OF MISSING CHILDREN A Wrtng Project Presented to he Faculty of the Department of Computer Scence San Jose State Unversty In Partal Fulfllment of the Requrements

More information

Neurocomputing 101 (2013) Contents lists available at SciVerse ScienceDirect. Neurocomputing

Neurocomputing 101 (2013) Contents lists available at SciVerse ScienceDirect. Neurocomputing Neurocomputng (23) 4 5 Contents lsts avalable at ScVerse ScenceDrect Neurocomputng journal homepage: www.elsever.com/locate/neucom Localty constraned representaton based classfcaton wth spatal pyramd patches

More information

A Background Subtraction for a Vision-based User Interface *

A Background Subtraction for a Vision-based User Interface * A Background Subtracton for a Vson-based User Interface * Dongpyo Hong and Woontack Woo KJIST U-VR Lab. {dhon wwoo}@kjst.ac.kr Abstract In ths paper, we propose a robust and effcent background subtracton

More information

Performance Assessment and Fault Diagnosis for Hydraulic Pump Based on WPT and SOM

Performance Assessment and Fault Diagnosis for Hydraulic Pump Based on WPT and SOM Performance Assessment and Fault Dagnoss for Hydraulc Pump Based on WPT and SOM Be Jkun, Lu Chen and Wang Zl PERFORMANCE ASSESSMENT AND FAULT DIAGNOSIS FOR HYDRAULIC PUMP BASED ON WPT AND SOM. Be Jkun,

More information

Robust Kernel Representation with Statistical Local Features. for Face Recognition

Robust Kernel Representation with Statistical Local Features. for Face Recognition Robust Kernel Representaton wth Statstcal Local Features for Face Recognton Meng Yang, Student Member, IEEE, Le Zhang 1, Member, IEEE Smon C. K. Shu, Member, IEEE, and Davd Zhang, Fellow, IEEE Dept. of

More information

Scale Selective Extended Local Binary Pattern For Texture Classification

Scale Selective Extended Local Binary Pattern For Texture Classification Scale Selectve Extended Local Bnary Pattern For Texture Classfcaton Yutng Hu, Zhlng Long, and Ghassan AlRegb Multmeda & Sensors Lab (MSL) Georga Insttute of Technology 03/09/017 Outlne Texture Representaton

More information

Fuzzy Filtering Algorithms for Image Processing: Performance Evaluation of Various Approaches

Fuzzy Filtering Algorithms for Image Processing: Performance Evaluation of Various Approaches Proceedngs of the Internatonal Conference on Cognton and Recognton Fuzzy Flterng Algorthms for Image Processng: Performance Evaluaton of Varous Approaches Rajoo Pandey and Umesh Ghanekar Department of

More information

Skew Angle Estimation and Correction of Hand Written, Textual and Large areas of Non-Textual Document Images: A Novel Approach

Skew Angle Estimation and Correction of Hand Written, Textual and Large areas of Non-Textual Document Images: A Novel Approach Angle Estmaton and Correcton of Hand Wrtten, Textual and Large areas of Non-Textual Document Images: A Novel Approach D.R.Ramesh Babu Pyush M Kumat Mahesh D Dhannawat PES Insttute of Technology Research

More information

Object Recognition Based on Photometric Alignment Using Random Sample Consensus

Object Recognition Based on Photometric Alignment Using Random Sample Consensus Vol. 44 No. SIG 9(CVIM 7) July 2003 3 attached shadow photometrc algnment RANSAC RANdom SAmple Consensus Yale Face Database B RANSAC Object Recognton Based on Photometrc Algnment Usng Random Sample Consensus

More information

Shape Representation Robust to the Sketching Order Using Distance Map and Direction Histogram

Shape Representation Robust to the Sketching Order Using Distance Map and Direction Histogram Shape Representaton Robust to the Sketchng Order Usng Dstance Map and Drecton Hstogram Department of Computer Scence Yonse Unversty Kwon Yun CONTENTS Revew Topc Proposed Method System Overvew Sketch Normalzaton

More information