Computational and Theoretical Analysis of Null Space and Orthogonal Linear Discriminant Analysis
Journal of Machine Learning Research. Submitted 12/05; Revised 3/06; Published 7/06

Computational and Theoretical Analysis of Null Space and Orthogonal Linear Discriminant Analysis

Jieping Ye, Department of Computer Science and Engineering, Arizona State University, Tempe, AZ 85287, USA. JIEPING.YE@ASU.EDU
Tao Xiong, Department of Electrical and Computer Engineering, University of Minnesota, Minneapolis, MN 55455, USA. TXIONG@ECE.UMN.EDU

Editor: David Madigan

Abstract

Dimensionality reduction is an important pre-processing step in many applications. Linear discriminant analysis (LDA) is a classical statistical approach for supervised dimensionality reduction. It aims to maximize the ratio of the between-class distance to the within-class distance, thus maximizing the class discrimination. It has been used widely in many applications. However, the classical LDA formulation requires the nonsingularity of the scatter matrices involved. For undersampled problems, where the data dimensionality is much larger than the sample size, all scatter matrices are singular and classical LDA fails. Many extensions, including null space LDA (NLDA) and orthogonal LDA (OLDA), have been proposed in the past to overcome this problem. NLDA aims to maximize the between-class distance in the null space of the within-class scatter matrix, while OLDA computes a set of orthogonal discriminant vectors via the simultaneous diagonalization of the scatter matrices. They have been applied successfully in various applications. In this paper, we present a computational and theoretical analysis of NLDA and OLDA. Our main result shows that under a mild condition which holds in many applications involving high-dimensional data, NLDA is equivalent to OLDA. We have performed extensive experiments on various types of data, and the results are consistent with our theoretical analysis. We further apply regularization to OLDA. The algorithm is called regularized OLDA (or ROLDA for short). An efficient algorithm is presented to estimate the regularization value in ROLDA.
A comparative study on classification shows that ROLDA is very competitive with OLDA. This confirms the effectiveness of the regularization in ROLDA.

Keywords: linear discriminant analysis, dimensionality reduction, null space, orthogonal matrix, regularization

1. Introduction

Dimensionality reduction is important in many applications of data mining, machine learning, and bioinformatics, due to the so-called curse of dimensionality (Bellman, 1961; Duda et al., 2000; Fukunaga, 1990; Hastie et al., 2001). Many methods have been proposed for dimensionality reduction, such as principal component analysis (PCA) (Jolliffe, 1986) and linear discriminant analysis

© 2006 Jieping Ye and Tao Xiong.
(LDA) (Fukunaga, 1990). LDA aims to find the optimal discriminant vectors (transformation) by maximizing the ratio of the between-class distance to the within-class distance, thus achieving the maximum class discrimination. It has been applied successfully in many applications including information retrieval (Berry et al., 1995; Deerwester et al., 1990), face recognition (Belhumeur et al., 1997; Swets and Weng, 1996; Turk and Pentland, 1991), and microarray gene expression data analysis (Dudoit et al., 2002). However, classical LDA requires the so-called total scatter matrix to be nonsingular. In many applications such as those mentioned above, all scatter matrices in question can be singular, since the data points are from a very high-dimensional space and in general the sample size does not exceed this dimensionality. This is known as the singularity or undersampled problem (Krzanowski et al., 1995). In recent years, many approaches have been proposed to deal with such high-dimensional, undersampled problems, including null space LDA (NLDA) (Chen et al., 2000; Huang et al., 2002), orthogonal LDA (OLDA) (Ye, 2005), uncorrelated LDA (ULDA) (Ye et al., 2004a; Ye, 2005), subspace LDA (Belhumeur et al., 1997; Swets and Weng, 1996), regularized LDA (Friedman, 1989), and pseudo-inverse LDA (Raudys and Duin, 1998; Skurichina and Duin, 1996). Null space LDA computes the discriminant vectors in the null space of the within-class scatter matrix. Uncorrelated LDA and orthogonal LDA are among a family of algorithms for generalized discriminant analysis proposed in (Ye, 2005). The features in ULDA are uncorrelated, while the discriminant vectors in OLDA are orthogonal to each other. Subspace LDA (or PCA+LDA) applies an intermediate dimensionality reduction stage such as PCA to reduce the dimensionality of the original data before classical LDA is applied. Regularized LDA uses a scaled multiple of the identity matrix to make the scatter matrix nonsingular. Pseudo-inverse LDA employs the pseudo-inverse to overcome the singularity problem.
More details on these methods, as well as their relationships, can be found in (Ye, 2005). In this paper, we present a detailed computational and theoretical analysis of null space LDA and orthogonal LDA. In (Chen et al., 2000), null space LDA (NLDA) was proposed, where the between-class distance is maximized in the null space of the within-class scatter matrix. The singularity problem is thus implicitly avoided. A similar idea was mentioned briefly in (Belhumeur et al., 1997). Huang et al. (2002) improved the efficiency of the algorithm by first removing the null space of the total scatter matrix, based on the observation that the null space of the total scatter matrix is the intersection of the null space of the between-class scatter matrix and the null space of the within-class scatter matrix. In orthogonal LDA (OLDA), a set of orthogonal discriminant vectors is computed, based on a generalized optimization criterion (Ye, 2005). The optimal transformation is computed through the simultaneous diagonalization of the scatter matrices, while the singularity problem is overcome implicitly. Discriminant analysis with orthogonal transformations has been studied in (Duchene and Leclercq, 1988; Foley and Sammon, 1975). By a close examination of the computations involved in OLDA, we can decompose the OLDA algorithm into three steps: first remove the null space of the total scatter matrix; then apply classical uncorrelated LDA (ULDA), a variant of classical LDA (details can be found in Section 2.1); and finally apply an orthogonalization step to the transformation. Both the NLDA algorithm (Huang et al., 2002) and the OLDA algorithm (Ye, 2005) result in orthogonal transformations. However, they apply different schemes in deriving the optimal transformations. NLDA computes an orthogonal transformation in the null space of the within-class scatter matrix, while OLDA computes an orthogonal transformation through the simultaneous diagonalization
of the scatter matrices. Interestingly, we show in Section 5 that NLDA is equivalent to OLDA under a mild condition C1,¹ which holds in many applications involving high-dimensional data (see Section 7). Based on the equivalence result, an improved algorithm for NLDA, called iNLDA, is presented, which further reduces the computational cost of the original NLDA algorithm. We extend the OLDA algorithm by applying the regularization technique, which is commonly used to stabilize the sample covariance matrix estimation and improve the classification performance (Friedman, 1989). The algorithm is called regularized OLDA (or ROLDA for short). The key idea in ROLDA is to add a constant λ to the diagonal elements of the total scatter matrix. Here λ > 0 is known as the regularization parameter. Choosing an appropriate regularization value is a critical issue in ROLDA, as a large λ may significantly disturb the information in the scatter matrix, while a small λ may not be effective in improving the classification performance. Cross-validation is commonly used to estimate the optimal λ from a finite set of candidates. Selecting an optimal value for a parameter such as λ is called model selection (Hastie et al., 2001). The computational cost of model selection for ROLDA can be expensive, especially when the candidate set is large, since it requires expensive matrix computations for each λ. We show in Section 6 that the computations in ROLDA can be decomposed into two components: the first component involves matrices of high dimensionality but is independent of λ, while the second component involves matrices of low dimensionality. When searching for the optimal λ from a set of candidates via cross-validation, we repeat the computations involved in the second component only, thus reducing the computational cost of model selection in ROLDA.
We have conducted experiments using 14 data sets from various data sources, including low-dimensional data from the UCI Machine Learning Repository² and high-dimensional data such as text documents, face images, and gene expression data. (Details on these data sets can be found in Section 7.) We did a comparative study of NLDA, iNLDA, OLDA, ULDA, ROLDA, and Support Vector Machines (SVM) (Schölkopf and Smola, 2002; Vapnik, 1998) in classification. Experimental results show that:

- For all low-dimensional data sets, the null space of the within-class scatter matrix is empty, and neither NLDA nor iNLDA applies. However, OLDA is applicable, and the reduced dimensionality of OLDA is in general k - 1, where k is the number of classes.
- Condition C1 holds for most high-dimensional data sets (eight out of nine). NLDA, iNLDA, and OLDA achieve the same classification performance in all cases where condition C1 holds. For cases where condition C1 does not hold, OLDA outperforms NLDA and iNLDA, as OLDA retains a larger number of reduced dimensions. These empirical results are consistent with our theoretical analysis. iNLDA and NLDA achieve similar performance in all cases.
- OLDA is very competitive with ULDA. This confirms the effectiveness of the final orthogonalization step in OLDA.
- ROLDA achieves better classification performance than OLDA, which shows the effectiveness of the regularization in ROLDA. Overall, ROLDA and SVM are very competitive with the other methods in classification.

The rest of the paper is organized as follows. An overview of classical LDA and classical uncorrelated LDA is given in Section 2. NLDA and OLDA are discussed in Sections 3 and 4, respectively. The relationship between NLDA and OLDA is studied in Section 5. The ROLDA algorithm is presented in Section 6. Section 7 includes the experimental results. We conclude in Section 8. For convenience, Table 1 lists the important notation used in the rest of this paper.

1. Condition C1 requires that the rank of the total scatter matrix equal the sum of the ranks of the between-class and within-class scatter matrices. More details will be given in Section 5.
2. mlearn/mlrepository.html

Table 1: Notation.
A: data matrix; n: number of training data points; m: data dimensionality; l: reduced dimensionality; k: number of classes; S_b: between-class scatter matrix; S_w: within-class scatter matrix; S_t: total scatter matrix; G: transformation matrix; S_i: covariance matrix of the i-th class; c_i: centroid of the i-th class; n_i: sample size of the i-th class; c: global centroid; K: number of neighbors in K-NN; t: rank of S_t; q: rank of S_b.

2. Classical Linear Discriminant Analysis

Given a data set consisting of n data points {a_j}_{j=1}^n in IR^m, classical LDA computes a linear transformation G ∈ IR^{m×l} (l < m) that maps each a_j in the m-dimensional space to a vector â_j in the l-dimensional space by â_j = G^T a_j. Define three matrices H_w, H_b, and H_t as follows:

H_w = (1/√n) [A_1 - c_1 e^T, ..., A_k - c_k e^T],   (1)
H_b = (1/√n) [√n_1 (c_1 - c), ..., √n_k (c_k - c)],   (2)
H_t = (1/√n) (A - c e^T),   (3)

where A = [a_1, ..., a_n] is the data matrix; A_i, c_i, S_i, and n_i are the data matrix, the centroid, the covariance matrix, and the sample size of the i-th class, respectively; c is the global centroid; k is the number of classes; and e is the vector of all ones (of the appropriate length). The between-class scatter matrix S_b, the within-class scatter matrix S_w, and the total scatter matrix S_t are then defined as follows (Fukunaga, 1990): S_w = H_w H_w^T, S_b = H_b H_b^T, and S_t = H_t H_t^T. It follows from the definitions (Ye, 2005) that trace(S_w) measures the within-class cohesion, trace(S_b) measures the between-class separation, and trace(S_t) measures the variance of the data set, where the trace of a square matrix is the sum of its diagonal entries (Golub and Van Loan, 1996). It is easy to verify that S_t = S_b + S_w.
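The definitions in Eqs. (1)-(3) translate directly into a few lines of linear algebra. Below is a minimal NumPy sketch (our own illustration; the helper name `scatter_matrices` and the random data are assumptions, not from the paper) that builds the three scatter matrices and confirms the identity S_t = S_b + S_w:

```python
import numpy as np

def scatter_matrices(A, labels):
    """Build H_w, H_b, H_t as in Eqs. (1)-(3), with columns of A as
    data points, and return S_w = H_w H_w^T, S_b = H_b H_b^T,
    S_t = H_t H_t^T."""
    m, n = A.shape
    c = A.mean(axis=1, keepdims=True)          # global centroid
    Hw_cols, Hb_cols = [], []
    for ci in np.unique(labels):
        Ai = A[:, labels == ci]                # data matrix of class i
        ni = Ai.shape[1]
        cent = Ai.mean(axis=1, keepdims=True)  # class centroid c_i
        Hw_cols.append(Ai - cent)              # A_i - c_i e^T
        Hb_cols.append(np.sqrt(ni) * (cent - c))
    Hw = np.hstack(Hw_cols) / np.sqrt(n)
    Hb = np.hstack(Hb_cols) / np.sqrt(n)
    Ht = (A - c) / np.sqrt(n)
    return Hw @ Hw.T, Hb @ Hb.T, Ht @ Ht.T

# synthetic data: 12 points in IR^5, three classes
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 12))
labels = np.repeat(np.arange(3), 4)
Sw, Sb, St = scatter_matrices(A, labels)
```

With these definitions S_t = S_b + S_w holds exactly (up to floating-point roundoff), which is the identity used throughout the paper.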
In the lower-dimensional space resulting from the linear transformation G, the scatter matrices become

S_w^L = G^T S_w G, S_b^L = G^T S_b G, and S_t^L = G^T S_t G.

An optimal transformation G would maximize trace(S_b^L) and minimize trace(S_w^L). Classical LDA
aims to compute the optimal G by solving the following optimization problem:

G* = argmax_{G ∈ IR^{m×l}: G^T S_w G = I_l} trace((G^T S_w G)^{-1} G^T S_b G).   (4)

Other optimization criteria, including those based on the determinant, could also be used instead (Duda et al., 2000; Fukunaga, 1990). The solution to the optimization problem in Eq. (4) is given by the eigenvectors of S_w^{-1} S_b corresponding to the nonzero eigenvalues, provided that the within-class scatter matrix S_w is nonsingular (Fukunaga, 1990). The columns of G form the discriminant vectors of classical LDA. Since the rank of the between-class scatter matrix is bounded from above by k - 1, there are at most k - 1 discriminant vectors in classical LDA. Note that classical LDA does not handle singular scatter matrices, which limits its applicability to low-dimensional data. Several methods, including null space LDA, orthogonal LDA, and subspace LDA, were proposed in the past to deal with this singularity problem, as discussed in Section 1.

2.1 Classical Uncorrelated LDA

Classical uncorrelated LDA (cULDA) is an extension of classical LDA. A key property of cULDA is that the features in the transformed space are uncorrelated, thus reducing the redundancy in the transformed space. cULDA aims to find the optimal discriminant vectors that are S_t-orthogonal.³ Specifically, suppose the vectors φ_1, φ_2, ..., φ_r have been obtained; then the (r+1)-th vector φ_{r+1} is the one that maximizes the Fisher criterion function (Jin et al., 2001):

f(φ) = (φ^T S_b φ) / (φ^T S_w φ),   (5)

subject to the constraints φ_{r+1}^T S_t φ_i = 0, for i = 1, ..., r. The algorithm in (Jin et al., 2001) finds the discriminant vectors φ_i successively by solving a sequence of generalized eigenvalue problems, which is expensive for large and high-dimensional data sets.
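When S_w is nonsingular, the closed-form solution of Eq. (4) can be sketched directly: take the top eigenvectors of S_w^{-1} S_b. The NumPy illustration below (our own; the well-sampled synthetic data are an assumption, chosen so that S_w is nonsingular) also shows that only k - 1 eigenvalues are nonzero, since rank(S_b) ≤ k - 1:

```python
import numpy as np

rng = np.random.default_rng(1)
m, k = 4, 3
# well-sampled data (n >> m), so S_w is nonsingular
A = np.hstack([rng.standard_normal((m, 30)) + 3 * i for i in range(k)])
labels = np.repeat(np.arange(k), 30)
n = A.shape[1]

c = A.mean(axis=1, keepdims=True)
Sw = np.zeros((m, m))
Sb = np.zeros((m, m))
for i in range(k):
    Ai = A[:, labels == i]
    ci = Ai.mean(axis=1, keepdims=True)
    Sw += (Ai - ci) @ (Ai - ci).T / n
    Sb += Ai.shape[1] * (ci - c) @ (ci - c).T / n

# eigenvectors of S_w^{-1} S_b; at most k - 1 nonzero eigenvalues
evals, evecs = np.linalg.eig(np.linalg.solve(Sw, Sb))
order = np.argsort(-evals.real)
G = evecs[:, order[:k - 1]].real   # the k - 1 discriminant vectors
```

The remaining m - (k - 1) eigenvalues vanish (up to roundoff), which is why classical LDA produces at most k - 1 discriminant vectors.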
However, it has been shown (Ye et al., 2004a) that the discriminant vectors of cULDA can be computed efficiently by solving the following optimization problem:

G* = argmax_{G ∈ IR^{m×l}: G^T S_t G = I_l} trace((G^T S_w G)^{-1} G^T S_b G),   (6)

where G = [φ_1, ..., φ_l], if there exist l discriminant vectors in cULDA. Note that in Eq. (6), all discriminant vectors in G are computed simultaneously. The optimization problem above is a variant of the one in Eq. (4). The optimal G is given by the eigenvectors of S_t^{-1} S_b.

3. Null Space LDA

Chen et al. (2000) proposed null space LDA (NLDA) for dimensionality reduction, where the between-class distance is maximized in the null space of the within-class scatter matrix. The basic idea behind this algorithm is that the null space of S_w may contain significant discriminant information if the projection of S_b is not zero in that direction (Chen et al., 2000; Lu et al., 2003).

3. Two vectors x and y are S_t-orthogonal if x^T S_t y = 0.
The singularity problem is thus overcome implicitly. The optimal transformation of NLDA can be computed by solving the following optimization problem:

G* = argmax_{G: G^T S_w G = 0} trace(G^T S_b G).   (7)

The computation of the optimal G involves the computation of the null space of S_w, which may be large for high-dimensional data. Indeed, the dimensionality of the null space of S_w is at least m + k - n, where m is the data dimensionality, k is the number of classes, and n is the sample size. In (Chen et al., 2000), a pixel grouping method was used to extract geometric features and reduce the dimensionality of the samples, and then NLDA was applied in the new feature space. Huang et al. (2002) improved the efficiency of the algorithm in (Chen et al., 2000) by first removing the null space of the total scatter matrix S_t. It is based on the observation that the null space of S_t is the intersection of the null space of S_b and the null space of S_w, as S_t = S_w + S_b. We can efficiently remove the null space of S_t as follows. Let H_t = UΣV^T be the Singular Value Decomposition (SVD) (Golub and Van Loan, 1996) of H_t, where H_t is defined in Eq. (3), U and V are orthogonal,

Σ = [Σ_t 0; 0 0],

Σ_t ∈ IR^{t×t} is diagonal with the diagonal entries sorted in non-increasing order, and t = rank(S_t). Then

S_t = H_t H_t^T = U Σ V^T V Σ^T U^T = U Σ Σ^T U^T = U [Σ_t² 0; 0 0] U^T.   (8)

Let U = (U_1, U_2) be a partition of U with U_1 ∈ IR^{m×t} and U_2 ∈ IR^{m×(m-t)}. Then the null space of S_t can be removed by projecting the data onto the subspace spanned by the columns of U_1. Let S̃_b, S̃_w, and S̃_t be the scatter matrices after the removal of the null space of S_t. That is,

S̃_b = U_1^T S_b U_1, S̃_w = U_1^T S_w U_1, and S̃_t = U_1^T S_t U_1.

Note that only U_1 is involved in the projection. We can thus apply the reduced SVD computation (Golub and Van Loan, 1996) on H_t with time complexity O(mn²), instead of O(m²n).
When the data dimensionality m is much larger than the sample size n, this leads to a big reduction in the computational cost. With the computed U_1, the optimal transformation of NLDA is given by G = U_1 N, where N is obtained by solving the following optimization problem:

N* = argmax_{N: N^T S̃_w N = 0} trace(N^T S̃_b N).   (9)

That is, the columns of N lie in the null space of S̃_w, while maximizing trace(N^T S̃_b N). Let W be the matrix whose columns span the null space of S̃_w. Then N = WM, for some matrix M, which is to be determined next. Since the constraint in Eq. (9) is satisfied with N = WM for any M, the optimal M can be computed by maximizing trace(M^T W^T S̃_b W M). By imposing the orthogonality constraint on M (Huang et al., 2002), the optimal M is given by the eigenvectors of W^T S̃_b W corresponding to the nonzero eigenvalues. With the computed U_1, W, and M above, the optimal transformation of NLDA is given by G = U_1 W M.
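The NLDA steps just described (remove the null space of S_t via the reduced SVD of H_t, compute the null space W of the reduced S̃_w, then maximize the between-class scatter there; cf. Algorithm 1 below) can be sketched in NumPy as follows. This is our own illustration: the function name `nlda`, the numerical tolerances, and the synthetic undersampled data are assumptions, not from the paper:

```python
import numpy as np

def nlda(A, labels, tol=1e-8):
    """NLDA sketch: G = U_1 W M, with W spanning null(S~_w)."""
    m, n = A.shape
    c = A.mean(axis=1, keepdims=True)
    Ht = (A - c) / np.sqrt(n)                 # Eq. (3)
    Hw_cols, Hb_cols = [], []
    for ci in np.unique(labels):
        Ai = A[:, labels == ci]
        cent = Ai.mean(axis=1, keepdims=True)
        Hw_cols.append(Ai - cent)
        Hb_cols.append(np.sqrt(Ai.shape[1]) * (cent - c))
    Hw = np.hstack(Hw_cols) / np.sqrt(n)
    Hb = np.hstack(Hb_cols) / np.sqrt(n)

    # reduced SVD of H_t; U_1 spans the range of S_t
    U, s, _ = np.linalg.svd(Ht, full_matrices=False)
    t = int(np.sum(s > tol))
    U1 = U[:, :t]
    Sw_r = (U1.T @ Hw) @ (U1.T @ Hw).T        # S~_w, t x t
    Sb_r = (U1.T @ Hb) @ (U1.T @ Hb).T        # S~_b, t x t

    # null space W of S~_w via eigen-decomposition
    evals, evecs = np.linalg.eigh(Sw_r)
    W = evecs[:, evals < tol]
    # M: eigenvectors of W^T S~_b W for the nonzero eigenvalues
    evals_b, M = np.linalg.eigh(W.T @ Sb_r @ W)
    M = M[:, evals_b > tol][:, ::-1]          # descending order
    return U1 @ W @ M

# undersampled synthetic data: m = 50 >> n = 15, three classes
rng = np.random.default_rng(3)
A = np.hstack([rng.standard_normal((50, 5)) + 2 * i for i in range(3)])
labels = np.repeat(np.arange(3), 5)
G = nlda(A, labels)

# full within-class scatter, for checking the constraint of Eq. (7)
Sw_full = np.zeros((50, 50))
for i in range(3):
    Ai = A[:, labels == i]
    ci = Ai.mean(axis=1, keepdims=True)
    Sw_full += (Ai - ci) @ (Ai - ci).T / A.shape[1]
```

For this data the null space of S̃_w has dimension (n - 1) - (n - k) = k - 1 = 2, so G has two orthonormal columns, and G^T S_w G = 0 as required by Eq. (7).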
Algorithm 1: NLDA (Null Space LDA)
Input: data matrix A
Output: transformation matrix G
1. Form the matrix H_t as in Eq. (3);
2. Compute the reduced SVD of H_t as H_t = U_1 Σ_t V_1^T;
3. Form the matrices S̃_b = U_1^T S_b U_1 and S̃_w = U_1^T S_w U_1;
4. Compute the null space, W, of S̃_w, via the eigen-decomposition;
5. Construct the matrix M, consisting of the top eigenvectors of W^T S̃_b W;
6. G ← U_1 W M.

In (Huang et al., 2002), the matrix W is computed via the eigen-decomposition of S̃_w. More specifically, let

S̃_w = [W, W̄] [0 0; 0 Δ_w] [W, W̄]^T

be its eigen-decomposition, where [W, W̄] is orthogonal and Δ_w is diagonal with positive diagonal entries. Then W spans the null space of S̃_w. The pseudo-code for the NLDA algorithm is given in Algorithm 1.

4. Orthogonal LDA

Orthogonal LDA (OLDA) was proposed in (Ye, 2005) as an extension of classical LDA. The discriminant vectors in OLDA are orthogonal to each other. Furthermore, OLDA is applicable even when all scatter matrices are singular, thus overcoming the singularity problem. It has been applied successfully in many applications, including document classification, face recognition, and gene expression data classification. The optimal transformation in OLDA can be computed by solving the following optimization problem:

G* = argmax_{G ∈ IR^{m×l}: G^T G = I_l} trace((G^T S_t G)^+ G^T S_b G),   (10)

where M^+ denotes the pseudo-inverse of a matrix M (Golub and Van Loan, 1996). The orthogonality condition is imposed in the constraint. The computation of the optimal transformation of OLDA is based on the simultaneous diagonalization of the three scatter matrices, as follows (Ye, 2005). From Eq. (8), U_2 lies in the null space of both S_b and S_w. Thus,

U^T S_b U = [U_1^T S_b U_1, 0; 0, 0], U^T S_w U = [U_1^T S_w U_1, 0; 0, 0].   (11)

Denote B = Σ_t^{-1} U_1^T H_b and let B = P Σ̂ Q̂^T be the SVD of B, where P and Q̂ are orthogonal and Σ̂ is diagonal. Define the matrix X as

X = U [Σ_t^{-1} P, 0; 0, I_{m-t}].   (12)

It can be shown (Ye, 2005) that X simultaneously diagonalizes S_b, S_w, and S_t.
That is,

X^T S_b X = D_b, X^T S_w X = D_w, and X^T S_t X = D_t,   (13)
where D_b, D_w, and D_t are diagonal, with the diagonal entries in D_b sorted in non-increasing order. The main result in (Ye, 2005) shows that the optimal transformation of OLDA can be computed through the orthogonalization of the columns in X, as summarized in the following theorem:

Theorem 4.1 Let X be the matrix defined in Eq. (12) and let X_q be the matrix consisting of the first q columns of X, where q = rank(S_b). Let X_q = QR be the QR decomposition of X_q, where Q has orthonormal columns and R is upper triangular. Then G = Q solves the optimization problem in Eq. (10).

From Theorem 4.1, only the first q columns of X are used in computing the optimal G. From Eq. (12), the first q columns of X are given by

X_q = U_1 Σ_t^{-1} P_q,   (14)

where P_q consists of the first q columns of the matrix P. We can observe that U_1 corresponds to the removal of the null space of S_t, as in NLDA, while Σ_t^{-1} P_q is the optimal transformation when classical ULDA is applied to the (intermediate-dimensionality) reduced space resulting from the projection by U_1. The OLDA algorithm can thus be decomposed into three steps: (1) remove the null space of S_t; (2) apply classical ULDA as an intermediate step, since the reduced total scatter matrix is nonsingular; and (3) apply an orthogonalization step to the transformation, which corresponds to the QR decomposition of X_q in Theorem 4.1. The pseudo-code for the OLDA algorithm is given in Algorithm 2.

Algorithm 2: OLDA (Orthogonal LDA)
Input: data matrix A
Output: transformation matrix G
1. Compute U_1, Σ_t, and P;
2. X_q ← U_1 Σ_t^{-1} P_q, where q = rank(S_b);
3. Compute the QR decomposition of X_q as X_q = QR;
4. G ← Q.

Remark 1 The ULDA algorithm in (Ye et al., 2004a; Ye, 2005) consists of Steps 1 and 2 above, without the final orthogonalization step. Experimental results in Section 7 show that OLDA is competitive with ULDA.
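Algorithm 2 admits an equally short sketch. The NumPy code below is our own illustration (the function name `olda`, the tolerances, and the synthetic data are assumptions): it computes G via the reduced SVD of H_t, the SVD of B = Σ_t^{-1} U_1^T H_b, and a final QR step:

```python
import numpy as np

def olda(A, labels, tol=1e-10):
    """OLDA sketch: remove null(S_t), apply classical ULDA in the
    reduced space, then orthogonalize the result via QR."""
    m, n = A.shape
    c = A.mean(axis=1, keepdims=True)
    Ht = (A - c) / np.sqrt(n)                          # Eq. (3)
    Hb = np.hstack([np.sqrt((labels == ci).sum()) *
                    (A[:, labels == ci].mean(axis=1, keepdims=True) - c)
                    for ci in np.unique(labels)]) / np.sqrt(n)

    U, s, _ = np.linalg.svd(Ht, full_matrices=False)   # Step 1
    t = int(np.sum(s > tol))
    U1, St_inv = U[:, :t], np.diag(1.0 / s[:t])        # Sigma_t^{-1}
    B = St_inv @ (U1.T @ Hb)                           # B = Sigma_t^{-1} U_1^T H_b
    P, sb, _ = np.linalg.svd(B)
    q = int(np.sum(sb > tol))                          # q = rank(S_b)
    Xq = U1 @ St_inv @ P[:, :q]                        # Step 2, Eq. (14)
    Q, R = np.linalg.qr(Xq)                            # Step 3
    return Q                                           # Step 4

# undersampled data: m = 80, four classes of six points each
rng = np.random.default_rng(4)
A = np.hstack([rng.standard_normal((80, 6)) + 2 * i for i in range(4)])
labels = np.repeat(np.arange(4), 6)
G = olda(A, labels)
```

For generic data q = k - 1 here, so G has k - 1 = 3 orthonormal columns, matching the constraint G^T G = I_l of Eq. (10).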
The rationale behind this may be that ULDA involves the minimum redundancy in the transformed space and is susceptible to overfitting, while OLDA removes the R matrix through the QR decomposition in the final orthogonalization step, which introduces redundancy in the reduced space but may be less susceptible to overfitting.

5. Relationship Between NLDA and OLDA

Both the NLDA algorithm and the OLDA algorithm result in orthogonal transformations. Our empirical results show that they often lead to similar performance, especially for high-dimensional data. This implies there may exist an intrinsic relationship between these two algorithms. In this section, we take a close look at the relationship between NLDA and OLDA. More specifically, we show that NLDA is equivalent to OLDA under a mild condition

C1: rank(S_t) = rank(S_b) + rank(S_w),   (15)
which holds in many applications involving high-dimensional data (see Section 7). It is easy to verify from the definitions of the scatter matrices that rank(S_t) ≤ rank(S_b) + rank(S_w). From Eqs. (8) and (11), the null space, U_2, of S_t can be removed, as follows:

S̃_t = U_1^T S_t U_1 = U_1^T S_b U_1 + U_1^T S_w U_1 = S̃_w + S̃_b ∈ IR^{t×t}.

Since the null space of S_t is the intersection of the null space of S_b and the null space of S_w, the following equalities hold: rank(S̃_t) = rank(S_t) = t, rank(S̃_b) = rank(S_b), and rank(S̃_w) = rank(S_w). Thus condition C1 is equivalent to rank(S̃_t) = rank(S̃_b) + rank(S̃_w). The null space of S̃_b and the null space of S̃_w are critical in our analysis. The relationship between these two null spaces is studied in the following lemma.

Lemma 5.1 Let S̃_t, S̃_b, and S̃_w be defined as above and t = rank(S̃_t). Let {w_1, ..., w_r} form an orthonormal basis for the null space of S̃_w, and let {b_1, ..., b_s} form an orthonormal basis for the null space of S̃_b. Then {w_1, ..., w_r, b_1, ..., b_s} are linearly independent.

Proof We prove by contradiction. Assume there exist α_i's and β_j's, not all zero, such that

Σ_{i=1}^r α_i w_i + Σ_{j=1}^s β_j b_j = 0.

Since the w_i lie in the null space of S̃_w, it follows that

0 = (Σ_i α_i w_i + Σ_j β_j b_j)^T S̃_w (Σ_i α_i w_i + Σ_j β_j b_j) = (Σ_j β_j b_j)^T S̃_w (Σ_j β_j b_j).

Hence,

(Σ_j β_j b_j)^T S̃_t (Σ_j β_j b_j) = (Σ_j β_j b_j)^T S̃_w (Σ_j β_j b_j) + (Σ_j β_j b_j)^T S̃_b (Σ_j β_j b_j) = 0.

Since S̃_t is nonsingular, we have Σ_j β_j b_j = 0. Thus β_j = 0 for all j, since {b_1, ..., b_s} forms an orthonormal basis for the null space of S̃_b. Similarly, since the b_j lie in the null space of S̃_b, we have

0 = (Σ_i α_i w_i + Σ_j β_j b_j)^T S̃_b (Σ_i α_i w_i + Σ_j β_j b_j) = (Σ_i α_i w_i)^T S̃_b (Σ_i α_i w_i),

and

(Σ_i α_i w_i)^T S̃_t (Σ_i α_i w_i) = (Σ_i α_i w_i)^T S̃_w (Σ_i α_i w_i) + (Σ_i α_i w_i)^T S̃_b (Σ_i α_i w_i) = 0.
Hence Σ_{i=1}^r α_i w_i = 0, and α_i = 0 for all i, since {w_1, ..., w_r} forms an orthonormal basis for the null space of S̃_w. This contradicts our assumption that not all of the α_i's and β_j's are zero. Thus, {w_1, ..., w_r, b_1, ..., b_s} are linearly independent.

Next, we show how to compute the optimal transformation of NLDA using these two null spaces. Recall that in NLDA, the null space of S_t may be removed first. In the following discussion, we work on the reduced scatter matrices S̃_w, S̃_b, and S̃_t directly, as in Lemma 5.1. The main result is summarized in the following theorem.

Theorem 5.1 Let U_1, S̃_t, S̃_b, and S̃_w be defined as above and t = rank(S̃_t). Let R = [W, B], where W = [w_1, ..., w_r], B = [b_1, ..., b_s], and {w_1, ..., w_r, b_1, ..., b_s} are defined as in Lemma 5.1. Assume that condition C1: rank(S_t) = rank(S_b) + rank(S_w) holds. Then G = U_1 W M solves the optimization problem in Eq. (9), where the matrix M, consisting of the eigenvectors of W^T S̃_b W, is orthogonal.

Proof From Lemma 5.1, {w_1, ..., w_r, b_1, ..., b_s} ⊂ IR^t is linearly independent. Condition C1 implies that t = r + s. Thus {w_1, ..., w_r, b_1, ..., b_s} forms a basis for IR^t, that is, R = [W, B] is nonsingular. It follows that

R^T S̃_t R = R^T S̃_b R + R^T S̃_w R
= [W^T S̃_b W, W^T S̃_b B; B^T S̃_b W, B^T S̃_b B] + [W^T S̃_w W, W^T S̃_w B; B^T S̃_w W, B^T S̃_w B]
= [W^T S̃_b W, 0; 0, B^T S̃_w B].

Since the matrix R^T S̃_t R has full rank, W^T S̃_b W, the projection of S̃_b onto the null space of S̃_w, is nonsingular. Let W^T S̃_b W = M Δ_b M^T be the eigen-decomposition of W^T S̃_b W, where M is orthogonal and Δ_b is diagonal with positive diagonal entries (note that W^T S̃_b W is positive definite). Then, from Section 3, the optimal transformation G of NLDA is given by G = U_1 W M.

Recall that the matrix M in NLDA is computed so that trace(M^T W^T S̃_b W M) is maximized. Since trace(Q A Q^T) = trace(A) for any orthogonal Q, the solution in NLDA is invariant under an arbitrary orthogonal transformation. Thus G = U_1 W is also a solution to NLDA, since M is orthogonal, as summarized in the following corollary.
Corollary 5.1 Assume condition C1: rank(S_t) = rank(S_b) + rank(S_w) holds. Let U_1 and W be defined as in Theorem 5.1. Then G = U_1 W solves the optimization problem in Eq. (9). That is, G = U_1 W is an optimal transformation of NLDA.

Corollary 5.1 implies that when condition C1 holds, Step 5 in Algorithm 1 may be removed, as well as the formation of S̃_b in Step 3 and the multiplication of U_1 W with M in Step 6. This improves the efficiency of the NLDA algorithm. The improved NLDA (iNLDA) algorithm is given in Algorithm 3. Note that it is recommended in (Liu et al., 2004) that the maximization of the between-class distance in Step 5 of Algorithm 1 be removed to avoid possible overfitting. However, Corollary 5.1 shows that under condition C1, the removal of Step 5 has no effect on the performance of the NLDA algorithm. Next, we show the equivalence relationship between NLDA and OLDA when condition C1 holds. The main result is summarized in the following theorem.
Algorithm 3: iNLDA (improved NLDA)
Input: data matrix A
Output: transformation matrix G
1. Form the matrix H_t as in Eq. (3);
2. Compute the reduced SVD of H_t as H_t = U_1 Σ_t V_1^T;
3. Construct the matrix S̃_w = U_1^T S_w U_1;
4. Compute the null space, W, of S̃_w, via the eigen-decomposition;
5. G ← U_1 W.

Theorem 5.2 Assume that condition C1: rank(S_t) = rank(S_b) + rank(S_w) holds. Let U_1 and W be defined as in Theorem 5.1. Then G = U_1 W solves the optimization problem in Eq. (10). That is, under the given assumption, OLDA and NLDA are equivalent.

Proof Recall that the optimization involved in OLDA is

G* = argmax_{G ∈ IR^{m×l}: G^T G = I_l} trace((S_t^L)^+ S_b^L),   (16)

where S_t^L = G^T S_t G and S_b^L = G^T S_b G. From Section 4, the maximum number, l, of discriminant vectors is no larger than q, the rank of S_b. Recall that q = rank(S_b) = rank(S̃_b) = rank(S̃_t) - rank(S̃_w) = r, where r is the dimension of the null space of S̃_w. Based on properties of the trace, we have

trace((S_t^L)^+ S_b^L) + trace((S_t^L)^+ S_w^L) = trace((S_t^L)^+ S_t^L) = rank(S_t^L) ≤ q = r,

where the second equality follows since trace(A^+ A) = rank(A) for any square matrix A, and the inequality follows since the rank of S_t^L ∈ IR^{l×l} is at most l ≤ q. It follows that

trace((S_t^L)^+ S_b^L) ≤ r,

since trace((S_t^L)^+ S_w^L), the trace of the product of two positive semi-definite matrices, is always nonnegative. Next, we show that the maximum is achieved when G = U_1 W. Recall that the dimension of the null space, W, of S̃_w is r; that is, W ∈ IR^{t×r}. It follows that (U_1 W)^T S_t (U_1 W) ∈ IR^{r×r} and rank((U_1 W)^T S_t (U_1 W)) = r. Furthermore, (U_1 W)^T S_w (U_1 W) = W^T S̃_w W = 0, as W spans the null space of S̃_w. It follows that

trace( ((U_1 W)^T S_t (U_1 W))^+ (U_1 W)^T S_w (U_1 W) ) = 0.

Hence,

trace( ((U_1 W)^T S_t (U_1 W))^+ (U_1 W)^T S_b (U_1 W) )
= rank((U_1 W)^T S_t (U_1 W)) - trace( ((U_1 W)^T S_t (U_1 W))^+ (U_1 W)^T S_w (U_1 W) ) = r.
Thus G = U_1 W solves the optimization problem in Eq. (10). That is, OLDA and NLDA are equivalent.

Theorem 5.2 above shows that under condition C1, OLDA and NLDA are equivalent. Next, we show that condition C1 holds when the data points are linearly independent, as summarized below.

Theorem 5.3 Assume that condition C2 holds, that is, the n data points in the data matrix A ∈ IR^{m×n} are linearly independent. Then condition C1: rank(S_t) = rank(S_b) + rank(S_w) holds.

Proof Since the n columns of A are linearly independent, H_t = (1/√n)(A - c e^T) is of rank n - 1. That is, rank(S_t) = n - 1. Next we show that rank(S_b) = k - 1 and rank(S_w) = n - k, so that condition C1 holds. It is easy to verify that rank(S_b) ≤ k - 1 and rank(S_w) ≤ n - k. We have

n - 1 = rank(S_t) ≤ rank(S_b) + rank(S_w) ≤ (k - 1) + (n - k) = n - 1.   (17)

It follows that all inequalities in Eq. (17) become equalities. That is,

rank(S_b) = k - 1, rank(S_w) = n - k, and rank(S_t) = rank(S_b) + rank(S_w).   (18)

Thus, condition C1 holds.

Our experimental results in Section 7 show that for high-dimensional data, the linear independence condition C2 holds in many cases, while condition C1 is satisfied in most cases. This explains why NLDA and OLDA often achieve the same performance in many applications involving high-dimensional data, such as text documents, face images, and gene expression data.

6. Regularized Orthogonal LDA

Recall that OLDA involves the pseudo-inverse of the total scatter matrix, whose estimation may not be reliable, especially for undersampled data, where the number of dimensions exceeds the sample size. In such cases, the parameter estimates can be highly unstable, giving rise to high variance. By employing a method of regularization, one attempts to improve the estimates by regulating this bias-variance trade-off (Friedman, 1989). We apply the regularization technique to OLDA by adding a constant λ to the diagonal elements of the total scatter matrix. Here λ > 0 is known as the regularization parameter. The algorithm is called regularized OLDA (ROLDA).
The optimal transformation, G, of ROLDA can be computed by solving the following optimization problem:

G* = argmax_{G ∈ IR^{m×l}: G^T G = I_l} trace( (G^T (S_t + λ I_m) G)^+ G^T S_b G ).   (19)

The optimal G can be computed by solving an eigenvalue problem, as summarized in the following theorem (the proof follows Theorem 3.1 in (Ye, 2005) and is thus omitted):

Theorem 6.1 Let X_q be the matrix consisting of the first q eigenvectors of the matrix

(S_t + λ I_m)^{-1} S_b   (20)

corresponding to the nonzero eigenvalues, where q = rank(S_b). Let X_q = QR be the QR decomposition of X_q, where Q has orthonormal columns and R is upper triangular. Then G = Q solves the optimization problem in Eq. (19).
Theorem 6.1 implies that the main computation involved in ROLDA is the eigen-decomposition of the matrix (S_t + λ I_m)^{-1} S_b. Direct formation of this matrix is expensive for high-dimensional data, as it is of size m by m. In the following, we present an efficient way of computing the eigen-decomposition. Denote

B̂ = (Σ_t² + λ I_t)^{-1/2} U_1^T H_b   (21)

and let

B̂ = P̂ Σ̂ Q̂^T   (22)

be the SVD of B̂. From Eqs. (8) and (11), we have

(S_t + λ I_m)^{-1} S_b
= U [ (Σ_t² + λ I_t)^{-1}, 0; 0, λ^{-1} I_{m-t} ] U^T U [ U_1^T S_b U_1, 0; 0, 0 ] U^T
= U [ (Σ_t² + λ I_t)^{-1} U_1^T H_b H_b^T U_1, 0; 0, 0 ] U^T
= U [ (Σ_t² + λ I_t)^{-1/2} B̂ B̂^T (Σ_t² + λ I_t)^{1/2}, 0; 0, 0 ] U^T
= U [ (Σ_t² + λ I_t)^{-1/2} P̂ Σ̂ Σ̂^T P̂^T (Σ_t² + λ I_t)^{1/2}, 0; 0, 0 ] U^T.

It follows that the columns of the matrix

U_1 (Σ_t² + λ I_t)^{-1/2} P̂_q

form the eigenvectors of (S_t + λ I_m)^{-1} S_b corresponding to the top q nonzero eigenvalues, where P̂_q denotes the first q columns of P̂. That is, X_q in Theorem 6.1 is given by

X_q = U_1 (Σ_t² + λ I_t)^{-1/2} P̂_q.   (23)

The pseudo-code for the ROLDA algorithm is given in Algorithm 4. The computations in ROLDA can be decomposed into two components: the first component involves the matrix U_1 ∈ IR^{m×t}, of high dimensionality but independent of λ, while the second component involves the matrix (Σ_t² + λ I_t)^{-1/2} P̂_q ∈ IR^{t×q}, of low dimensionality. When we apply cross-validation to search for the optimal λ from a set of candidates, we repeat the computations involved in the second component only, thus making the computational cost of model selection small. More specifically, let

Λ = {λ_1, ..., λ_{|Λ|}}   (24)

be the candidate set for the regularization parameter λ, where |Λ| denotes the size of the candidate set. We apply v-fold cross-validation for model selection (we choose v = 5 in our experiments), where the data is divided into v subsets of (approximately) equal size. All subsets are mutually exclusive, and in the i-th fold, the i-th subset is held out for testing and all other subsets are used for training.
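The reduction from the m x m eigenproblem of Eq. (20) to the small matrix B̂ can be checked numerically. In the sketch below (our own illustration; the synthetic data, tolerance, and variable names are assumptions), the columns of X_q from Eq. (23) are verified to be eigenvectors of (S_t + λ I_m)^{-1} S_b, with eigenvalues equal to the squared singular values of B̂:

```python
import numpy as np

rng = np.random.default_rng(6)
m, k, per, lam = 120, 3, 5, 0.1
A = np.hstack([rng.standard_normal((m, per)) + 2 * i for i in range(k)])
labels = np.repeat(np.arange(k), per)
n = A.shape[1]
c = A.mean(axis=1, keepdims=True)
Ht = (A - c) / np.sqrt(n)                     # Eq. (3)
Hb = np.hstack([np.sqrt(per) *
                (A[:, labels == i].mean(axis=1, keepdims=True) - c)
                for i in range(k)]) / np.sqrt(n)

U, s, _ = np.linalg.svd(Ht, full_matrices=False)
t = int(np.sum(s > 1e-10))
U1, s = U[:, :t], s[:t]
D = np.diag(1.0 / np.sqrt(s**2 + lam))        # (Sigma_t^2 + lam I)^(-1/2)
B = D @ (U1.T @ Hb)                            # B-hat of Eq. (21), t x k
P, sb, _ = np.linalg.svd(B)
q = int(np.sum(sb > 1e-10))                    # q = rank(S_b)
Xq = U1 @ D @ P[:, :q]                         # Eq. (23)
G = np.linalg.qr(Xq)[0]                        # G = Q from X_q = QR

# direct m x m formation, for verification only
St, Sb = Ht @ Ht.T, Hb @ Hb.T
Mmat = np.linalg.solve(St + lam * np.eye(m), Sb)
```

Note that only the second component, D @ P[:, :q] of size t x q, depends on λ; U_1 can be reused across all candidate regularization values.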
For each \lambda_j (j = 1, \ldots, |\Lambda|), we compute the cross-validation accuracy, Accu(j), defined as the mean of the accuracies over all folds. The optimal regularization value \lambda_{j^*} is the one with

j^* = \arg\max_j Accu(j). (25)
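The model-selection procedure just described can be sketched as follows. This is our own NumPy rendering under assumed standard definitions (H_t from the centered training data, H_b from weighted centered class means), with a brute-force 1-NN classifier; all names are illustrative, not from the paper:

```python
import numpy as np

def one_nn_accuracy(Xtr, ytr, Xte, yte):
    """Brute-force 1-Nearest-Neighbor accuracy (Euclidean distance)."""
    d = ((Xte[:, None, :] - Xtr[None, :, :]) ** 2).sum(axis=2)
    return float(np.mean(ytr[np.argmin(d, axis=1)] == yte))

def select_lambda(X, y, lams, v=5, seed=0, tol=1e-10):
    """v-fold cross-validation over the candidate set `lams`: the expensive
    SVD of H_t and the projection by U_1 are done once per fold; only the
    low-dimensional computations are repeated for each lambda."""
    rng = np.random.default_rng(seed)
    fold = rng.permutation(len(y)) % v           # balanced random fold labels
    acc = np.zeros(len(lams))
    for i in range(v):
        tr, te = fold != i, fold == i
        A = X[tr].T                              # m-by-n_train training data
        c = A.mean(axis=1, keepdims=True)
        H_t = A - c                              # centered data
        H_b = np.column_stack(                   # weighted centered class means
            [np.sqrt((y[tr] == l).sum()) * (A[:, y[tr] == l].mean(axis=1) - c[:, 0])
             for l in np.unique(y[tr])])
        U, s, _ = np.linalg.svd(H_t, full_matrices=False)
        t = int((s > tol * s[0]).sum())
        U1, s = U[:, :t], s[:t]
        Hb_L = U1.T @ H_b                        # projection by U_1, done once
        Atr_L, Ate_L = U1.T @ A, U1.T @ X[te].T
        for j, lam in enumerate(lams):           # cheap low-dimensional part
            d = 1.0 / np.sqrt(s**2 + lam)
            P, sig, _ = np.linalg.svd(d[:, None] * Hb_L)
            q = int((sig > tol * sig[0]).sum())
            Q, _ = np.linalg.qr(d[:, None] * P[:, :q])
            acc[j] += one_nn_accuracy((Q.T @ Atr_L).T, y[tr],
                                      (Q.T @ Ate_L).T, y[te])
    return float(lams[int(np.argmax(acc / v))])  # best mean accuracy
```

Note how \lambda enters only through the diagonal rescaling `d`, so the per-candidate cost is independent of the data dimensionality m, which is the point of the two-component decomposition.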
Algorithm 4: ROLDA (Regularized OLDA)
Input: data matrix A and regularization value \lambda
Output: transformation matrix G
1. Compute U_1, \Sigma_t, and \tilde{P}_q, where q = rank(S_b);
2. X_q <- U_1 (\Sigma_t^2 + \lambda I_t)^{-1/2} \tilde{P}_q;
3. Compute the QR decomposition of X_q as X_q = QR;
4. G <- Q.

The K-Nearest-Neighbor algorithm with K = 1, called 1-NN, is used for computing the accuracy. The pseudo-code for the model selection procedure in ROLDA is given in Algorithm 5. Note that we apply the QR decomposition to

(\Sigma_t^2 + \lambda I_t)^{-1/2} \tilde{P}_q \in \mathbb{R}^{t \times q} (26)

instead of

X_q = U_1 (\Sigma_t^2 + \lambda I_t)^{-1/2} \tilde{P}_q \in \mathbb{R}^{m \times q}, (27)

as done in Theorem 6.1, since U_1 has orthonormal columns.

Algorithm 5: Model selection for ROLDA
Input: data matrix A and candidate set \Lambda = {\lambda_1, ..., \lambda_{|\Lambda|}}
Output: optimal regularization value \lambda_{j^*}
1. For i = 1 : v   /* v-fold cross-validation */
2.   Construct A^i and A^{i'};   /* A^i = training set and A^{i'} = held-out test set of the i-th fold */
3.   Construct H_b and H_t using A^i as in Eqs. (2) and (3), respectively;
4.   Compute the reduced SVD of H_t as H_t = U_1 \Sigma_t V_1^T; t <- rank(H_t);
5.   H_{b,L} <- U_1^T H_b; q <- rank(H_b);
6.   A^i_L <- U_1^T A^i; A^{i'}_L <- U_1^T A^{i'};   /* Projection by U_1 */
7.   For j = 1 : |\Lambda|   /* \lambda_1, ..., \lambda_{|\Lambda|} */
8.     D_j <- (\Sigma_t^2 + \lambda_j I_t)^{-1/2}; B <- D_j H_{b,L};
9.     Compute the SVD of B as B = \tilde{P} \tilde{\Sigma} \tilde{Q}^T;
10.    D_{q,P} <- D_j \tilde{P}_q; compute the QR decomposition of D_{q,P} as D_{q,P} = QR;
11.    A^{i}_{L,j} <- Q^T A^i_L; A^{i'}_{L,j} <- Q^T A^{i'}_L;
12.    Run 1-NN on (A^{i}_{L,j}, A^{i'}_{L,j}) and compute the accuracy, denoted as Accu(i, j);
13.  EndFor
14. EndFor
15. Accu(j) <- (1/v) \sum_{i=1}^{v} Accu(i, j);
16. j^* <- \arg\max_j Accu(j);
17. Output \lambda_{j^*} as the optimal regularization value.

6.1 Time Complexity

We conclude this section by analyzing the time complexity of the model selection procedure described above.
Line 4 in Algorithm 5 takes O(n^2 m) time for the reduced SVD computation. Lines 5 and 6 take O(mtk) = O(mnk) and O(tmn) = O(mn^2) time, respectively, for the matrix multiplications. For each \lambda_j, for j = 1, \ldots, |\Lambda|, of the For loop, Lines 9 and 10 take O(tk^2) = O(nk^2) time for the SVD and QR decompositions and the matrix multiplication. Line 11 takes O(ktn) = O(kn^2) time for the matrix multiplication. The computation of the classification accuracy by 1-NN in Line 12 takes O(n^2 k / v) time, as the size of the test set, A^{i'}_L, is about n/v. Thus, the time complexity, T(|\Lambda|), of the model selection procedure is

T(|\Lambda|) = O\left( v \left( n^2 m + mn^2 + mnk + |\Lambda| \left( nk^2 + kn^2 + n^2 k / v \right) \right) \right).

For high-dimensional and undersampled data, where the sample size, n, is much smaller than the dimensionality, m, the time complexity simplifies to

T(|\Lambda|) = O\left( v \left( n^2 m + |\Lambda| n^2 k \right) \right) = O\left( v n^2 m \left( 1 + \frac{k}{m} |\Lambda| \right) \right).

When the number, k, of classes in the data set is much smaller than the dimensionality, m, the overhead of estimating the optimal regularization value over a large candidate set may be small. Our experiments on a collection of high-dimensional and undersampled data (see Section 7) show that the computational cost of the model selection procedure in ROLDA grows slowly as |\Lambda| increases.

7. Experimental Studies

In this section, we perform extensive experimental studies to evaluate the theoretical results and the ROLDA algorithm presented in this paper. Section 7.1 describes our test data sets. We perform a detailed comparison of NLDA, iNLDA, and OLDA in Section 7.2; the results are consistent with our theoretical analysis. In Section 7.3, we compare the classification performance of NLDA, iNLDA, OLDA, ULDA, ROLDA, and SVM. The K-Nearest-Neighbor (K-NN) algorithm with K = 1 is used as the classifier for all LDA-based algorithms.

7.1 Data Sets

We used 14 data sets from various data sources in our experimental studies. The statistics of our test data sets are summarized in Table 2.
The first five data sets, including spambase,^4 balance, wine, waveform, and vowel, are low-dimensional data from the UCI Machine Learning Repository. The next nine data sets, including text documents, face images, and gene expression data, have high dimensionality: re1, re0, and tr41 are three text document data sets, where re1 and re0 are derived from the Reuters text categorization test collection, Distribution 1.0,^5 and tr41 is derived from the TREC-5, TREC-6, and TREC-7 collections;^6 ORL,^7 AR,^8 and PIX^9 are three face image data sets; GCM, colon, and ALLAML4 are three gene expression data sets (Ye et al., 2004b).

4. Only a subset of the original spambase data set is used in our study.
Table 2: Statistics of our test data sets. [Columns: sample size n (training, test, total), number of dimensions m, and number of classes k; rows: spambase, balance, wine, waveform, vowel, re1, re0, tr41, ORL, AR, PIX, GCM, colon, ALLAML4. Numeric entries not recovered.]

For the first five data sets, we used the given partition of training and test sets, while for the last nine data sets, we did random splittings into training and test sets of ratio 2:1.

7.2 Comparison of NLDA, iNLDA, and OLDA

In this experiment, we did a comparative study of NLDA, iNLDA, and OLDA. For the first five low-dimensional data sets from the UCI Machine Learning Repository, we used the given splitting of training and test sets. The result is summarized in Table 3.

Table 3: Comparison of NLDA, iNLDA, and OLDA on classification accuracy (in percentage) using five low-dimensional data sets from the UCI Machine Learning Repository. The ranks of the three scatter matrices are reported. [Numeric entries not recovered.]

For the next nine high-dimensional data sets, we performed our study by repeated random splittings into training and test sets. The data was partitioned randomly into a training set, where each class consists of two-thirds of the whole class, and a test set, with each class consisting of one-third of the whole class. The splitting was repeated 20 times, and the resulting accuracies of the different algorithms for the first ten splittings are summarized in Table 4. Note that the mean accuracy over the 20 different splittings will be reported in the next section. The rank of the three scatter matrices, S_b, S_w, and S_t, for each of the splittings is also reported. The main observations from Table 3 and Table 4 include:

- For the first five low-dimensional data sets, we have rank(S_b) = k - 1 and rank(S_w) = rank(S_t) = m, where m is the data dimensionality. Thus the null space of S_w is empty, and neither NLDA nor iNLDA applies. However, OLDA is applicable, and the reduced dimensionality of OLDA is k - 1.

- For the next nine high-dimensional data sets, condition C1: rank(S_t) = rank(S_b) + rank(S_w) is satisfied in all cases except the re0 data set. For the re0 data set, either rank(S_t) = rank(S_b) + rank(S_w) or rank(S_t) = rank(S_b) + rank(S_w) - 1 holds; that is, condition C1 is not severely violated for re0. Note that re0 has the smallest number of dimensions among the nine high-dimensional data sets. From the experiments, we may infer that condition C1 is more likely to hold for high-dimensional data.

- NLDA, iNLDA, and OLDA achieve the same classification performance in all cases when condition C1 holds. This empirical result confirms the theoretical analysis in Section 5 and explains why NLDA and OLDA often achieve similar performance for high-dimensional data. We can also observe that NLDA and iNLDA achieve similar performance in all cases.

- The numbers of training data points for the nine high-dimensional data sets (in the same order as in the table) are 325, 212, 140, 280, 450, 210, 125, 68, and 48, respectively. By examining the rank of S_t in Table 4, we can observe that the training data in six out of the nine data sets, including tr41, ORL, AR, GCM, colon, and ALLAML4, are linearly independent. That is, the independence assumption C2 from Theorem 5.3 holds for these data sets. It is clear from the table that for these six data sets, condition C1 holds and NLDA, iNLDA, and OLDA achieve the same performance. These results are consistent with the theoretical analysis in Section 5.

- For the re0 data set, where condition C1 does not hold, i.e., rank(S_t) < rank(S_b) + rank(S_w), OLDA achieves higher classification accuracy than NLDA and iNLDA. Recall that the reduced dimensionality of OLDA equals rank(S_b) = q, while the reduced dimensionality in NLDA and iNLDA equals the dimension of the null space of S_w, which equals rank(S_t) - rank(S_w) < rank(S_b). That is, OLDA keeps more dimensions in the transformed space than NLDA and iNLDA. The experimental results on re0 show that these extra dimensions used in OLDA improve its classification performance.
7.3 Comparative Studies on Classification

In this experiment, we conducted a comparative study of NLDA, iNLDA, OLDA, ULDA, ROLDA, and SVM in terms of classification. For ROLDA, the optimal \lambda is estimated through cross-validation on a candidate set, \Lambda = \{\lambda_j\}_{j=1}^{|\Lambda|}. Recall that T(|\Lambda|) denotes the computational cost of the model selection procedure in ROLDA, where |\Lambda| is the size of the candidate set of regularization values. We have performed model selection on all nine high-dimensional data sets using different values of
Table 4: Comparison of classification accuracy (in percentage) for NLDA, iNLDA, and OLDA using nine high-dimensional data sets. Ten different splittings into training and test sets of ratio 2:1 (for each of the k classes) are applied. The rank of the three scatter matrices for each splitting is reported. [Rows: re1, re0, tr41, ORL, AR, PIX, GCM, colon, ALLAML4; numeric entries not recovered.]
|\Lambda|. We have observed that T(|\Lambda|) grows slowly as |\Lambda| increases, and the ratio T(1024)/T(1) on all nine data sets ranges from 1 to 5. Thus, we can run model selection using a large candidate set of regularization values without dramatically increasing the cost. In the following experiments, we apply model selection to ROLDA with a candidate set of size |\Lambda| = 1024, where

\lambda_j = \alpha_j / (1 - \alpha_j), (28)

with \{\alpha_j\}_{j=1}^{|\Lambda|} uniformly distributed between 0 and 1. As for SVM, we employed cross-validation to estimate the optimal parameter using a candidate set of size 50. To compare the different classification algorithms, we applied the same experimental setting as in Section 7.2. The splitting into training and test sets of ratio 2:1 (for each of the k classes) was repeated 20 times, and the final accuracy reported is the average over the 20 different runs. The standard deviation for each data set is also reported. The results on the nine high-dimensional data sets are summarized in Table 5.

Table 5: Comparison of classification accuracy (in percentage) for six different methods: NLDA, iNLDA, OLDA, ULDA, ROLDA, and SVM, using nine high-dimensional data sets. The mean accuracy and standard deviation (in parentheses) from 20 different runs are reported. [Rows: re1, re0, tr41, ORL, AR, PIX, GCM, colon, ALLAML4; numeric entries not recovered.]

As observed in Section 7.2, OLDA has the same performance as NLDA and iNLDA in all cases except the re0 data set, while NLDA and iNLDA achieve similar performance in all cases. Overall, ROLDA and SVM are very competitive with the other methods. SVM performs well in all cases except GCM; the poor performance of SVM on GCM has also been observed in (Li et al., 2004). ROLDA outperforms OLDA for re0, AR, and GCM, while it is comparable to OLDA in all other cases. This confirms the effectiveness of the regularization applied in ROLDA. Note that, from Remark 1, ULDA is closely related to OLDA.
However, unlike OLDA, ULDA does not apply the final orthogonalization step. The experimental results in Table 5 confirm the effectiveness of the orthogonalization step in OLDA, especially for the three face image data sets and GCM.

8. Conclusions

In this paper, we present a computational and theoretical analysis of two LDA-based algorithms: null space LDA and orthogonal LDA. NLDA computes the discriminant vectors in the null space of the within-class scatter matrix, while OLDA computes a set of orthogonal discriminant vectors via the simultaneous diagonalization of the scatter matrices. They have been applied successfully in many applications, such as document classification, face recognition, and gene expression data classification.
Both NLDA and OLDA result in orthogonal transformations; however, they apply different schemes in deriving the optimal transformation. Our theoretical analysis in this paper shows that under a mild condition C1, which holds in many applications involving high-dimensional data, NLDA is equivalent to OLDA. Based on the theoretical analysis, an improved null space LDA algorithm, called iNLDA, is proposed. We have performed extensive experimental studies on 14 data sets, including both low-dimensional and high-dimensional data. The results show that condition C1 holds for eight out of the nine high-dimensional data sets, while the null space of S_w is empty for all five low-dimensional data sets. Thus, NLDA may not be applicable for low-dimensional data, while OLDA is still applicable in this case. The results are also consistent with our theoretical analysis: in all cases where condition C1 holds, NLDA, iNLDA, and OLDA achieve the same classification performance. We also observe that in the remaining cases, where condition C1 is violated, OLDA outperforms NLDA and iNLDA, due to the extra number of dimensions used in OLDA. We also compare NLDA, iNLDA, and OLDA with uncorrelated LDA (ULDA), which does not perform the final orthogonalization step. The results show that OLDA is very competitive with ULDA, which confirms the effectiveness of the orthogonalization step used in OLDA. Our empirical and theoretical results provide further insights into the nature of these two LDA-based algorithms. We also present the ROLDA algorithm, which extends OLDA by applying the regularization technique. Regularization may stabilize the sample covariance matrix estimation and improve the classification performance. ROLDA involves a regularization parameter \lambda, which is commonly estimated via cross-validation.
To speed up the cross-validation process, we decompose the computations in ROLDA into two components: the first component involves matrices of high dimensionality but is independent of \lambda, while the second component involves matrices of low dimensionality. When searching for the optimal \lambda from a candidate set, we repeat the computations involved in the second component only. A comparative study on classification shows that ROLDA is very competitive with OLDA, which demonstrates the effectiveness of the regularization applied in ROLDA.

Our extensive experimental studies have shown that condition C1 holds for most high-dimensional data sets. We plan to carry out a theoretical analysis of this property in the future. Some of the theoretical results in (Hall et al., 2005) may be useful for our analysis. The algorithms in (Yang et al., 2005; Yu and Yang, 2001) are closely related to the null space LDA algorithm discussed in this paper. The analysis presented here may be useful in understanding why these algorithms perform well in many applications, especially in face recognition. We plan to explore this further in the future.

Acknowledgements

We thank the reviewers for helpful comments. Research of JY is sponsored, in part, by the Center for Evolutionary Functional Genomics of the Biodesign Institute at Arizona State University.

References

P. N. Belhumeur, J. P. Hespanha, and D. J. Kriegman. Eigenfaces vs. Fisherfaces: Recognition using class specific linear projection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(7):711-720, 1997.
Tansmission Lines Modeling Based on Vecto Fitting Algoithm and RLC Active/Passive Filte Design Ahmed Qasim Tuki a,*, Nashien Fazilah Mailah b, Mohammad Lutfi Othman c, Ahmad H. Saby d Cente fo Advanced
More informationLifetime and Energy Hole Evolution Analysis in Data-Gathering Wireless Sensor Networks
788 IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS, VOL. 12, NO. 2, APRIL 2016 Lifetime and Enegy Hole Evolution Analysis in Data-Gatheing Wieless Senso Netwoks Ju Ren, Student Membe, IEEE, Yaoxue Zhang,
More informationa Not yet implemented in current version SPARK: Research Kit Pointer Analysis Parameters Soot Pointer analysis. Objectives
SPARK: Soot Reseach Kit Ondřej Lhoták Objectives Spak is a modula toolkit fo flow-insensitive may points-to analyses fo Java, which enables expeimentation with: vaious paametes of pointe analyses which
More informationGeophysical inversion with a neighbourhood algorithm I. Searching a parameter space
Geophys. J. Int. (1999) 138, 479 494 Geophysical invesion with a neighbouhood algoithm I. Seaching a paamete space Malcolm Sambidge Reseach School of Eath Sciences, Institute of Advanced studies, Austalian
More informationMULTI-TEMPORAL AND MULTI-SENSOR IMAGE MATCHING BASED ON LOCAL FREQUENCY INFORMATION
Intenational Achives of the Photogammety Remote Sensing and Spatial Infomation Sciences Volume XXXIX-B3 2012 XXII ISPRS Congess 25 August 01 Septembe 2012 Melboune Austalia MULTI-TEMPORAL AND MULTI-SENSOR
More informationExtract Object Boundaries in Noisy Images using Level Set. Final Report
Extact Object Boundaies in Noisy Images using Level Set by: Quming Zhou Final Repot Submitted to Pofesso Bian Evans EE381K Multidimensional Digital Signal Pocessing May 10, 003 Abstact Finding object contous
More informationAccurate Diffraction Efficiency Control for Multiplexed Volume Holographic Gratings. Xuliang Han, Gicherl Kim, and Ray T. Chen
Accuate Diffaction Efficiency Contol fo Multiplexed Volume Hologaphic Gatings Xuliang Han, Gichel Kim, and Ray T. Chen Micoelectonic Reseach Cente Depatment of Electical and Compute Engineeing Univesity
More informationImprovement of First-order Takagi-Sugeno Models Using Local Uniform B-splines 1
Impovement of Fist-ode Takagi-Sugeno Models Using Local Unifom B-splines Felipe Fenández, Julio Gutiéez, Gacián Tiviño and Juan Calos Cespo Dep. Tecnología Fotónica, Facultad de Infomática Univesidad Politécnica
More informationModeling spatially-correlated data of sensor networks with irregular topologies
This full text pape was pee eviewed at the diection of IEEE Communications Society subject matte expets fo publication in the IEEE SECON 25 poceedings Modeling spatially-coelated data of senso netwoks
More informationModule 6 STILL IMAGE COMPRESSION STANDARDS
Module 6 STILL IMAE COMPRESSION STANDARDS Lesson 17 JPE-2000 Achitectue and Featues Instuctional Objectives At the end of this lesson, the students should be able to: 1. State the shotcomings of JPE standad.
More informationDirectional Stiffness of Electronic Component Lead
Diectional Stiffness of Electonic Component Lead Chang H. Kim Califonia State Univesit, Long Beach Depatment of Mechanical and Aeospace Engineeing 150 Bellflowe Boulevad Long Beach, CA 90840-830, USA Abstact
More informationPOMDP: Introduction to Partially Observable Markov Decision Processes Hossein Kamalzadeh, Michael Hahsler
POMDP: Intoduction to Patially Obsevable Makov Decision Pocesses Hossein Kamalzadeh, Michael Hahsle 2019-01-02 The R package pomdp povides an inteface to pomdp-solve, a solve (witten in C) fo Patially
More informationThe EigenRumor Algorithm for Ranking Blogs
he EigenRumo Algoithm fo Ranking Blogs Ko Fujimua N Cybe Solutions Laboatoies N Copoation akafumi Inoue N Cybe Solutions Laboatoies N Copoation Masayuki Sugisaki N Resonant Inc. ABSRAC he advent of easy
More informationShortest Paths for a Two-Robot Rendez-Vous
Shotest Paths fo a Two-Robot Rendez-Vous Eik L Wyntes Joseph S B Mitchell y Abstact In this pape, we conside an optimal motion planning poblem fo a pai of point obots in a plana envionment with polygonal
More informationLink Prediction in Heterogeneous Networks Based on Tensor Factorization
Send Odes fo Repints to epints@benthamscience.ae 36 The Open Cybenetics & Systemics Jounal, 204, 8, 36-32 Open Access Link Pediction in Heteogeneous Netwoks Based on Tenso Factoization Piao Yong,2*, Li
More informationModeling Spatially Correlated Data in Sensor Networks
Modeling Spatially Coelated Data in Senso Netwoks Apoova Jindal and Konstantinos Psounis Univesity of Southen Califonia The physical phenomena monitoed by senso netwoks, e.g. foest tempeatue, wate contamination,
More informationResolution and stability analysis of offset VSP acquisition scenarios with applications to fullwaveform
Resolution and stability analysis of offset VSP acquisition scenaios with applications to fullwavefom invesion I. Silvestov, IPGG SB RAS, D. Neklyudov, IPGG SB RAS, M. Puckett, Schlumbege, V. Tcheveda,
More information3D Periodic Human Motion Reconstruction from 2D Motion Sequences
3D Peiodic Human Motion Reconstuction fom D Motion Sequences Zonghua Zhang and Nikolaus F. Toje BioMotionLab, Depatment of Psychology Queen s Univesity, Canada zhang, toje@psyc.queensu.ca Abstact In this
More informationResearch Article. Regularization Rotational motion image Blur Restoration
Available online www.jocp.com Jounal of Chemical and Phamaceutical Reseach, 6, 8(6):47-476 Reseach Aticle ISSN : 975-7384 CODEN(USA) : JCPRC5 Regulaization Rotational motion image Blu Restoation Zhen Chen
More informationStrictly as per the compliance and regulations of:
Global Jounal of HUMAN SOCIAL SCIENCE Economics Volume 13 Issue Vesion 1.0 Yea 013 Type: Double Blind Pee Reviewed Intenational Reseach Jounal Publishe: Global Jounals Inc. (USA) Online ISSN: 49-460x &
More informationSpiral Recognition Methodology and Its Application for Recognition of Chinese Bank Checks
Spial Recognition Methodology and Its Application fo Recognition of Chinese Bank Checks Hanshen Tang 1, Emmanuel Augustin 2, Ching Y. Suen 1, Olivie Baet 2, Mohamed Cheiet 3 1 Cente fo Patten Recognition
More informationADDING REALISM TO SOURCE CHARACTERIZATION USING A GENETIC ALGORITHM
ADDING REALISM TO SOURCE CHARACTERIZATION USING A GENETIC ALGORITHM Luna M. Rodiguez*, Sue Ellen Haupt, and Geoge S. Young Depatment of Meteoology and Applied Reseach Laboatoy The Pennsylvania State Univesity,
More informationA Full-mode FME VLSI Architecture Based on 8x8/4x4 Adaptive Hadamard Transform For QFHD H.264/AVC Encoder
20 IEEE/IFIP 9th Intenational Confeence on VLSI and System-on-Chip A Full-mode FME VLSI Achitectue Based on 8x8/ Adaptive Hadamad Tansfom Fo QFHD H264/AVC Encode Jialiang Liu, Xinhua Chen College of Infomation
More informationDEADLOCK AVOIDANCE IN BATCH PROCESSES. M. Tittus K. Åkesson
DEADLOCK AVOIDANCE IN BATCH PROCESSES M. Tittus K. Åkesson Univesity College Boås, Sweden, e-mail: Michael.Tittus@hb.se Chalmes Univesity of Technology, Gothenbug, Sweden, e-mail: ka@s2.chalmes.se Abstact:
More informationTissue Classification Based on 3D Local Intensity Structures for Volume Rendering
160 IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS, VOL. 6, NO., APRIL-JUNE 000 Tissue Classification Based on 3D Local Intensity Stuctues fo Volume Rendeing Yoshinobu Sato, Membe, IEEE, Cal-Fedik
More information= dv 3V (r + a 1) 3 r 3 f(r) = 1. = ( (r + r 2
Random Waypoint Model in n-dimensional Space Esa Hyytiä and Joma Vitamo Netwoking Laboatoy, Helsinki Univesity of Technology, Finland Abstact The andom waypoint model (RWP) is one of the most widely used
More informationE.g., movie recommendation
Recommende Systems Road Map Intodction Content-based ecommendation Collaboative filteing based ecommendation K-neaest neighbo Association les Matix factoization 2 Intodction Recommende systems ae widely
More informationA New and Efficient 2D Collision Detection Method Based on Contact Theory Xiaolong CHENG, Jun XIAO a, Ying WANG, Qinghai MIAO, Jian XUE
5th Intenational Confeence on Advanced Mateials and Compute Science (ICAMCS 2016) A New and Efficient 2D Collision Detection Method Based on Contact Theoy Xiaolong CHENG, Jun XIAO a, Ying WANG, Qinghai
More informationCoordinate Systems. Ioannis Rekleitis
Coodinate Systems Ioannis ekleitis Position epesentation Position epesentation is: P p p p x y z P CS-417 Intoduction to obotics and Intelligent Systems Oientation epesentations Descibes the otation of
More informationFree Viewpoint Action Recognition using Motion History Volumes
Fee Viewpoint Action Recognition using Motion Histoy Volumes Daniel Weinland 1, Remi Ronfad, Edmond Boye Peception-GRAVIR, INRIA Rhone-Alpes, 38334 Montbonnot Saint Matin, Fance. Abstact Action ecognition
More informationAnalysis of Wired Short Cuts in Wireless Sensor Networks
Analysis of Wied Shot Cuts in Wieless Senso Netwos ohan Chitaduga Depatment of Electical Engineeing, Univesity of Southen Califonia, Los Angeles 90089, USA Email: chitadu@usc.edu Ahmed Helmy Depatment
More informationAdaptation of Motion Capture Data of Human Arms to a Humanoid Robot Using Optimization
ICCAS25 June 2-5, KINTEX, Gyeonggi-Do, Koea Adaptation of Motion Captue Data of Human Ams to a Humanoid Robot Using Optimization ChangHwan Kim and Doik Kim Intelligent Robotics Reseach Cente, Koea Institute
More informationPositioning of a robot based on binocular vision for hand / foot fusion Long Han
2nd Intenational Confeence on Advances in Mechanical Engineeing and Industial Infomatics (AMEII 26) Positioning of a obot based on binocula vision fo hand / foot fusion Long Han Compute Science and Technology,
More informationScaling Location-based Services with Dynamically Composed Location Index
Scaling Location-based Sevices with Dynamically Composed Location Index Bhuvan Bamba, Sangeetha Seshadi and Ling Liu Distibuted Data Intensive Systems Laboatoy (DiSL) College of Computing, Geogia Institute
More informationGravitational Shift for Beginners
Gavitational Shift fo Beginnes This pape, which I wote in 26, fomulates the equations fo gavitational shifts fom the elativistic famewok of special elativity. Fist I deive the fomulas fo the gavitational
More informationConversion Functions for Symmetric Key Ciphers
Jounal of Infomation Assuance and Secuity 2 (2006) 41 50 Convesion Functions fo Symmetic Key Ciphes Deba L. Cook and Angelos D. Keomytis Depatment of Compute Science Columbia Univesity, mail code 0401
More informationCLUSTERED BASED TAKAGI-SUGENO NEURO-FUZZY MODELING OF A MULTIVARIABLE NONLINEAR DYNAMIC SYSTEM
Asian Jounal of Contol, Vol. 7, No., pp. 63-76, June 5 63 -Bief Pape- CLUSTERED BASED TAKAGI-SUGENO NEURO-FUZZY MODELING OF A MULTIVARIABLE NONLINEAR DYNAMIC SYSTEM E. A. Al-Gallaf ABSTRACT This eseach
More information