Geometry-aware Metric Learning


Zhengdong Lu, Prateek Jain, Inderjit S. Dhillon
Dept. of Computer Science, University of Texas at Austin, University Station C5, Austin, TX

(Appearing in Proceedings of the 26th International Conference on Machine Learning, Montreal, Canada, 2009. Copyright 2009 by the author(s)/owner(s).)

Abstract

In this paper, we introduce a generic framework for semi-supervised kernel learning. Given pairwise (dis-)similarity constraints, we learn a kernel matrix over the data that respects the provided side-information as well as the local geometry of the data. Our framework is based on metric learning methods, where we jointly model the metric/kernel over the data along with the underlying manifold. Furthermore, we show that for some important parameterized forms of the underlying manifold model, we can estimate the model parameters and the kernel matrix efficiently. Our resulting algorithm is able to incorporate local geometry into the metric learning task; at the same time it can handle a wide class of constraints. Finally, our algorithm is fast and scalable: unlike most of the existing methods, it is able to exploit the low-dimensional manifold structure and does not require semi-definite programming. We demonstrate the wide applicability and effectiveness of our framework by applying it to various machine learning tasks such as semi-supervised classification, colored dimensionality reduction, and manifold alignment. On each of the tasks our method performs competitively or better than the respective state-of-the-art method.

1. Introduction

Over the years, kernel methods have become an important tool in many machine learning tasks. The success of these methods is critically dependent on selecting an appropriate kernel for the given task at hand using the provided side-information. To this end, there have been several recent approaches to learn a kernel function, e.g., (Zhu et al., 2005; Lanckriet et al., 2004; Davis et al., 2007; Ong et al., 2005). In most real-life applications, the volume of available data is huge but the amount of supervision available is very limited. This necessitates semi-supervised methods for kernel learning that also exploit the geometry of the data. Additionally, the side-information can be provided in a variety of forms, e.g., labels, (dis-)similarity constraints, and click-through feedback. Thus, a generic framework is required for semi-supervised kernel learning that is able to handle different types of supervision while exploiting the intrinsic structure of the unsupervised data. Furthermore, the learning algorithm should be fast and scalable to handle large volumes of data. While existing kernel learning algorithms have been shown to perform well across various applications, most fail to satisfy some of these basic requirements.

In this paper, we propose a framework for semi-supervised kernel learning that is based on a generalization of existing work on metric learning (Davis et al., 2007; Weinberger et al., 2006; Globerson & Roweis, 2005), as well as data-dependent kernels (Zhu et al., 2005; Sindhwani et al., 2005). Metric learning provides a flexible method to learn a task-dependent distance function (kernel) over the data points using the provided distance constraints. However, it is critically dependent on the existing feature representation or a pre-defined similarity function, and does not take into account the manifold structure of the provided data. On the other hand, data-dependent kernel learning approaches exploit the intrinsic structure of the provided data, but typically do not specialize to a given task. Our framework incorporates the intrinsic structure in the data while learning a task-dependent kernel.
Specifically, we jointly model a task-dependent kernel as well as a data-dependent kernel that reflects the local geometry or manifold structure of the data. We show that for some important parameterizations of the set of data-dependent kernels, our formulation admits convexity, and the proposed optimization algorithm efficiently learns an appropriate kernel function for the given task. Our algorithm is fast, scalable, does not involve semi-definite programming, and, crucially, is able to exploit the low-dimensional structure of the underlying manifold that is often present in real-world datasets. Our proposed framework is generic and can be easily tailored for a variety of tasks.

In this paper, we apply our method to the task of classification (in both inductive and transductive settings), automatic model selection for standard kernel functions, and semi-supervised manifold learning. For each application, we empirically demonstrate that our method can achieve comparable or better performance than the respective state-of-the-art.

2. Previous Work

Existing kernel learning methods can be broadly divided into two categories. The first category includes primarily task-dependent approaches, where the intrinsic structure in the data is assumed, and the goal is to maximally tune the kernel to the provided side-information for the given task, e.g., class labels for classification, or must-link (cannot-link) constraints for semi-supervised clustering. Prominent methods include metric learning (Davis et al., 2007), multiple kernel learning (Lanckriet et al., 2004), hyper-kernels (Ong et al., 2005), and hyper-parameter cross-validation (Seeger, 2008). The other category of kernel learning methods consists of data-dependent approaches, which explicitly model the geometry of the data, e.g., the underlying manifold structure. These methods appear in both unsupervised and semi-supervised learning scenarios. For the unsupervised case, (Weinberger et al., 2004) proposed a method to recover the underlying low-dimensional manifold by learning a kernel over it. More generally, (Bengio et al., 2004) show that a large class of manifold learning methods are equivalent to learning certain types of kernels. For the semi-supervised setting, data-dependent kernels are used to enforce smoothness on a graph or a similar structure composed from all of the data. As in the unsupervised case, the kernel captures the manifold and/or cluster structure of the data, and once integrated into a regularized classification model it often provides good generalization performance (Sindhwani et al., 2005; Chapelle et al., 2003).

Our proposed method combines the two kernel learning paradigms, thereby exploiting the geometry of the data while retaining the task-specific features. Related work in this direction is limited and largely focuses on learning parameters for a specific family of data-dependent kernels, e.g., spectral kernels (Zhu et al., 2005; Lafferty & Lebanon, 2005). In comparison, our method is based on a non-parametric information-theoretic metric/kernel learning method and is more flexible. Furthermore, existing methods are typically designed for a particular application only, e.g., semi-supervised classification, and are not able to handle different types of constraints, such as distance constraints. In contrast, our proposed framework can handle a variety of constraints and is applicable to various machine learning tasks (see Section 6).

3. Methodology

Given a set of n points {x_1, x_2, ..., x_n} in R^d, we seek a positive semi-definite kernel matrix K that can later be used for various tasks, e.g., classification and retrieval. Our goal is two-fold: 1) use the provided supervision over the data, and 2) exploit the unlabeled or unsupervised data, i.e., we want to learn a kernel that respects the underlying manifold structure in the data while also incorporating the provided side-information. Previous kernel learning approaches typically handle this problem by learning a spectral kernel K = Σ_i α_i v_i v_i^T, where the vectors v_i are the low-frequency eigenvectors of the Laplacian of a k-NN graph. However, constraining the eigenvectors to be unchanged severely restricts the class of kernels that can be learned. A contrasting task-dependent approach to kernel learning is based on the metric learning paradigm, where the goal is to learn a kernel K that is close to a pre-defined baseline kernel K_0 and satisfies the provided pairwise (or relative) constraints that are specific to the task at hand.
Formally, K is obtained by solving the following problem:

min_K D(K, K_0),  s.t.  K ∈ 𝒦,

where 𝒦 is the convex set of kernels K that satisfy

K_ii + K_jj − 2K_ij ≤ u  ∀ (i, j) ∈ S,
K_ii + K_jj − 2K_ij ≥ l  ∀ (i, j) ∈ D,
K ⪰ 0.    (1)

In the above, S is the given set of similar pairs, D is the given set of dissimilar pairs, and D(·,·) is a distance function for comparing two kernel matrices. We will call the set of kernels that satisfy (1) the set of task-dependent kernels. Although flexible and effective for various problems, this framework does not account for the unlabeled data and its geometry. As a result, a large amount of supervision is required to capture the intrinsic structure in the data.

In this paper, we propose a geometry-aware metric learning (G-ML) framework that combines both the data-dependent and task-dependent kernel learning approaches. Our model maintains the flexibility of the metric learning based approach while exploiting the intrinsic structure in the data and, as we shall show later, engenders multiple competitive machine learning models.

3.1. Geometry-aware Metric Learning

In this section, we describe our geometry-aware metric learning (G-ML) model, where we learn the kernel K, as well as a kernel M that explicitly exploits the intrinsic structure in the data, through the optimization problem:

min_{K,M} D(K, M),  s.t.  K ∈ 𝒦, M ∈ ℳ,    (2)

where the set ℳ is a parametric set of kernels that capture the intrinsic geometry of the labeled as well as unlabeled data, and D(·,·) is a distance function over matrices.
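
To make the feasible set 𝒦 concrete, the following minimal sketch (our illustration in NumPy, not code from the paper; the helper names are ours) checks which pairwise constraints from (1) a candidate kernel K satisfies, using the kernelized squared distance d^2(i, j) = K_ii + K_jj − 2K_ij:

    import numpy as np

    def kernel_sq_dist(K, i, j):
        # Squared distance between points i and j induced by the kernel K:
        # d^2(i, j) = K_ii + K_jj - 2 K_ij.
        return K[i, i] + K[j, j] - 2.0 * K[i, j]

    def constraint_violations(K, S, D, u, l):
        # Return the pairs in S and D whose constraints from (1) are violated:
        # similar pairs must satisfy d^2(i, j) <= u, dissimilar pairs d^2(i, j) >= l.
        bad_S = [(i, j) for (i, j) in S if kernel_sq_dist(K, i, j) > u]
        bad_D = [(i, j) for (i, j) in D if kernel_sq_dist(K, i, j) < l]
        return bad_S, bad_D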

Problem (2) computes a kernel K that satisfies the task-specific constraints in (1) and is also close to the kernel M, thus incorporating the data geometry into the kernel K (see Figure 1). Later in the section, we give a few interesting examples of the set ℳ.

A key component of our framework is the distance function D(K, M) that is being used. In this work, we use the LogDet matrix divergence, D_ld(K, M), as the distance function, where

D_ld(K, M) = tr(K M^{-1}) − log det(K M^{-1}) − n.

The LogDet divergence is a Bregman matrix divergence that is typically defined over positive definite matrices, and its definition can be extended to the case when the range space of the matrix K is the same as that of M. Previously, (Davis et al., 2007) showed the efficacy of LogDet as a matrix divergence for kernel learning, and pointed out its equivalence to metric learning (called information-theoretic metric learning, or ITML).

Now, we give an important example of the set ℳ, based on spectral learning methods (Zhu et al., 2005), that captures the underlying structure in the data. First, define a graph G over the data points that captures the local structure of the data, e.g., a k-NN graph or an ε-ball graph. Let W be the adjacency matrix of G, D be the degree matrix, L be the graph Laplacian¹ L = D − W, and V = [v_1, v_2, ..., v_r] be the r eigenvectors of L (typically r ≪ n) corresponding to the smallest eigenvalues of L: λ_1 ≤ ... ≤ λ_r. Then the set ℳ we consider is given by:

ℳ = { Σ_{i=1}^r α_i v_i v_i^T : α_1 ≥ α_2 ≥ ... ≥ α_r ≥ 0 },    (3)

where the order constraints α_1 ≥ α_2 ≥ ... ≥ α_r further ensure smoothness (the eigenvector v_i is known to be smoother than v_{i+1}). For this particular choice of ℳ, the kernel K is obtained by solving the following optimization problem:

min_{K, α_1, α_2, ..., α_r} D_ld(K, M)
s.t. K ∈ 𝒦,  M = Σ_{i=1}^r α_i v_i v_i^T,  α_1 ≥ α_2 ≥ ... ≥ α_r ≥ 0.    (4)

Solving the above problem yields {α_1, α_2, ..., α_r} in the cone α_1 ≥ α_2 ≥ ... ≥ α_r and a feasible kernel K that is close to M = Σ_i α_i v_i v_i^T (see Figure 1). Slack variables can be incorporated in our framework to ensure that the set 𝒦 is always feasible, even under noisy constraints.

¹ We can also use the normalized Laplacian I − D^{-1/2} W D^{-1/2}.

Figure 1. Illustration of G-ML. The shaded polygon stands for the feasible set of kernels 𝒦 specified by the task-dependent pairwise constraints. The cone stands for the data-dependent kernels that exploit the intrinsic geometry of the data. Using a fixed M would lead to a sub-optimal kernel, while the joint optimization (as in (2)) over both M and K leads to a better solution K.

3.2. Alternative ℳ

In the above subsection, we discussed an example of ℳ as a particular subset of spectral kernels. However, our framework is general, and depending on the application it can admit other parametric sets as well. For example, consider the set:

ℳ = { S − S(I + TS)^{-1} TS : T = Σ_{i=1}^r θ_i v_i v_i^T, θ_1 ≤ ... ≤ θ_r },    (5)

where S is a fixed given kernel and the vectors v_i are eigenvectors of the graph Laplacian L. This set generalizes the data-dependent kernel proposed by (Sindhwani et al., 2005) by replacing the graph Laplacian with a more flexible T. Note that ℳ given by (5) reduces to ℳ given by (3) in the limit S^{-1} → 0. This set of kernels is interesting in that, unlike most spectral kernels, which are usually evaluated in a transductive setting, the kernel value can be naturally extended to unseen samples as

M(x, x′) = S(x, x′) − S(x, ·)(I + TS)^{-1} T S(·, x′).

As will be shown in Section 4, the sets ℳ given by (3) and (5) both lead to convex sub-problems for finding T with fixed K. In general, the convexity holds if {v_1, ..., v_r} are orthogonal, which allows us to extend our model to other manifold learning models (Bengio et al., 2004), such as Isomap or LLE.
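
The following self-contained sketch illustrates the spectral family (3) and the LogDet objective in (4) (our illustration in NumPy, not the authors' code). For simplicity it assumes M is full rank so that D_ld is finite; the paper's rank-r case replaces the inverse with the Moore-Penrose pseudo-inverse:

    import numpy as np

    def knn_graph_laplacian(X, k=10):
        # X is a d x n data matrix. Build a symmetrized unweighted k-NN graph
        # and return its combinatorial Laplacian L = D - W.
        n = X.shape[1]
        sq = ((X[:, :, None] - X[:, None, :]) ** 2).sum(axis=0)  # n x n squared distances
        W = np.zeros((n, n))
        for i in range(n):
            nbrs = np.argsort(sq[i])[1:k + 1]  # skip the point itself
            W[i, nbrs] = 1.0
        W = np.maximum(W, W.T)                 # symmetrize
        return np.diag(W.sum(axis=1)) - W

    def spectral_M(L, alpha):
        # Member of the set (3): M = sum_i alpha_i v_i v_i^T, where the v_i are
        # the eigenvectors of L with the smallest eigenvalues and alpha is
        # non-increasing (more weight on the smoother eigenvectors).
        vals, vecs = np.linalg.eigh(L)         # eigenvalues in ascending order
        V = vecs[:, :len(alpha)]
        return (V * alpha) @ V.T

    def logdet_div(K, M):
        # D_ld(K, M) = tr(K M^{-1}) - log det(K M^{-1}) - n, for K, M positive definite.
        n = K.shape[0]
        A = K @ np.linalg.inv(M)
        sign, logdet = np.linalg.slogdet(A)
        return np.trace(A) - logdet - n

With a non-increasing choice such as alpha_i = 1/(λ_i + ε), spectral_M returns a member of ℳ, and logdet_div(K, M) is exactly the objective minimized in (4).
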
The set ℳ can also be adapted to perform automatic model selection for supervised learning. For example, we can tune the parameters of the RBF kernel by letting

ℳ = { M : M_ij = α exp(−‖x_i − x_j‖² / (2σ²)), α > 0, σ > 0 },    (6)

where α and σ are the parameters to be learned by G-ML.
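
Each member of the set (6) is just a scaled RBF kernel evaluated on the data; a minimal sketch (our illustration in NumPy):

    import numpy as np

    def rbf_M(X, alpha, sigma):
        # Member of the set (6): M_ij = alpha * exp(-||x_i - x_j||^2 / (2 sigma^2)).
        # X is a d x n data matrix; alpha > 0 and sigma > 0 are the two
        # parameters learned by G-ML in the model-selection setting.
        sq = ((X[:, :, None] - X[:, None, :]) ** 2).sum(axis=0)
        return alpha * np.exp(-sq / (2.0 * sigma ** 2))

Because only the two scalars (α, σ) are free, the corresponding subproblem can be handed to a generic derivative-free optimizer, as the experiments in Section 7.1 do with Matlab's fminsearch.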

4. Algorithm

In this section, we analyze properties of the proposed optimization problem (4) and propose a fast and scalable algorithm. First, note that although the constraints specified in (4) are all linear, the objective function D_ld(K, M) is not jointly convex in K and M. However, the problem can be shown to be convex individually in K and M^{-1}. Here and in the remainder of the paper, whenever the inverse of a matrix does not exist, we use its Moore-Penrose inverse.

Algorithm 1: Geometry-aware Metric Learning (G-ML)
Optimization procedure when ℳ is given by (3)
Input: X: input d × n matrix, S: similarity constraints, D: dissimilarity constraints, α⁰: initial α, γ: slack parameter, r: number of eigenvectors
Output: K, M
1: G = kNN-graph(X), W = affinity matrix of G
2: L = D − W, L = Σ_{i=1}^n μ_i v_i v_i^T
3: M = Σ_{i=1}^r α_i⁰ v_i v_i^T
4: repeat
5:   K = ITML(M, S, D, γ)   // (Step A)
6:   α = FindAlpha(K, v_1, v_2, ..., v_r)   // (Step B)
7:   M = Σ_i α_i v_i v_i^T
8: until convergence

function α = FindAlpha(K, v_1, v_2, ..., v_r)
Cyclic projection method to solve (4) with fixed K
1: α_i = v_i^T K v_i, 1 ≤ i ≤ r
2: ν_i = 0, i = 0
3: repeat
4:   c = min(ν_i, (α_{i+1} − α_i)/2)
5:   ν_i = ν_i − c, α_{i+1} = α_{i+1} − c, α_i = α_i + c
6:   i = mod(i + 1, r)
7: until convergence

It is easy to see that upon fixing M, the problem is strictly convex in K, as D_ld(K, M) is known to be convex in K (Davis et al., 2007). The following lemma shows that (4) is also convex in the parameters 1/α_i, 1 ≤ i ≤ r, when K is fixed.

Lemma 1. Assuming K to be fixed, Problem (4) is convex in β_1 = 1/α_1, β_2 = 1/α_2, ..., β_r = 1/α_r.

Proof. Since M^{-1} = Σ_i β_i v_i v_i^T, where β_i = 1/α_i, the fact that D_ld(K, M) = D_ld(M^{-1}, K^{-1}) is convex in M^{-1} implies convexity in the β_i. Furthermore, the constraints α_1 ≥ α_2 ≥ ... ≥ α_r can be equivalently written as a set of linear constraints β_r ≥ ... ≥ β_2 ≥ β_1 > 0.

Now, we describe our proposed alternating minimization algorithm for solving (4). Our algorithm is based on the individual convexity of (4) w.r.t. K and M^{-1}. It iterates by fixing M (or equivalently α_1, α_2, ..., α_r) to solve for K (denoted Step A), and then fixing K to solve for α_1, α_2, ..., α_r (Step B). In Step A, to find K, we use the cyclic projection algorithm, where at each step we project the current solution onto one of the constraints. The projection problem that needs to be solved at each step is:

min_K D_ld(K, K_t),  s.t.  K_ii + K_jj − 2K_ij ≤ u,

i.e., a projection w.r.t. a single (dis-)similarity constraint. As shown in (Davis et al., 2007), the above problem can be solved in closed form using a rank-one update to K_t. Furthermore, the update can be computed in just O(nr) operations, where r ≪ n is the rank of the kernel M. Now in Step B, to obtain α_1, α_2, ..., α_r, we solve the equivalent optimization problem:

min_{β_1, β_2, ..., β_r} D_ld( Σ_i β_i v_i v_i^T, K^{-1} ),  s.t.  β_r ≥ β_{r−1} ≥ ... ≥ β_1 > 0,

where β_i = 1/α_i. This problem can also be solved using cyclic projection, where at each step the current solution is projected onto one of the inequality constraints. Every projection step can be performed in just O(r) operations. In summary, we have presented a highly scalable and easy-to-implement algorithm (Algorithm 1) for solving (4). Furthermore, the objective function value achieved by our algorithm is guaranteed to converge.
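
Both steps of Algorithm 1 are short enough to sketch. The snippet below is our illustration, not the authors' released code: Step A shows the closed-form rank-one Bregman projection for a single similarity constraint (without ITML's slack variables), and Step B solves the fixed-K subproblem exactly. For Step B we substitute a pool-adjacent-violators (PAV) pass for the paper's cyclic projection: the subproblem is separable in β_i = 1/α_i with unconstrained minimizer α_i = v_i^T K v_i, and the optimum over a pooled block is the block mean, so PAV applied to the values v_i^T K v_i returns the optimal ordered α.

    import numpy as np

    def project_similarity(K, i, j, u):
        # Step A ingredient: exact LogDet (Bregman) projection of K onto the single
        # constraint K_ii + K_jj - 2 K_ij = u, via a rank-one update (Davis et al.,
        # 2007). A dissimilarity pair uses the same formula with target l.
        d = np.zeros(K.shape[0])
        d[i], d[j] = 1.0, -1.0
        Kd = K @ d
        p = d @ Kd                   # current squared distance; assumed > 0
        beta = (u - p) / (p * p)     # coefficient of the rank-one correction
        return K + beta * np.outer(Kd, Kd)

    def find_alpha(K, V):
        # Step B: minimize D_ld(K, sum_i alpha_i v_i v_i^T) subject to
        # alpha_1 >= ... >= alpha_r >= 0. Start from the unconstrained optimum
        # alpha_i = v_i^T K v_i and enforce the ordering by pooling adjacent
        # violators; each pooled block takes the mean of its entries.
        s = np.einsum('ni,nm,mi->i', V, K, V)   # s_i = v_i^T K v_i
        blocks = []                             # each block is [sum, count]
        for val in s:
            blocks.append([val, 1.0])
            while len(blocks) > 1 and \
                    blocks[-1][0] / blocks[-1][1] > blocks[-2][0] / blocks[-2][1]:
                top = blocks.pop()
                blocks[-1][0] += top[0]
                blocks[-1][1] += top[1]
        alpha = np.concatenate([np.full(int(c), t / c) for t, c in blocks])
        return np.maximum(alpha, 0.0)

A full iteration of Algorithm 1 then cycles project_similarity over all constraints in S and D (Step A) and rebuilds M = (V * find_alpha(K, V)) @ V.T (Step B).
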
Alternative ℳ. As mentioned in Section 3.2, the alternative set ℳ given by (5) induces a natural out-of-sample extension. Although it is not further pursued in this paper, we would like to point out that, similar to (4), this alternative set ℳ also leads to a convex optimization problem for computing M when K is fixed.

Lemma 2. Assuming K to be fixed, Problem (4) is convex in θ_1, θ_2, ..., θ_r.

Proof. Restricting the kernel function to the provided samples, we get M = S − S(I + TS)^{-1} TS. Using the Sherman-Morrison-Woodbury formula, M^{-1} = S^{-1} + T. Now, D_ld(K, M) is convex in M^{-1}. Using the property that the function g(X) = f(A + X) is convex if f is convex, D_ld(K, M) is convex in T. As T is a linear function of θ_i, 1 ≤ i ≤ r, D_ld(K, M) is convex in θ_1, ..., θ_r.

Using the above lemma, we can adapt Algorithm 1 to obtain a suboptimal solution to (2) when ℳ is given by (5). Unlike the kernels in (3) and (5), the set ℳ given by (6) does not admit a convex subproblem when fixing K. However, since only two parameters are involved, we can still adapt our alternating minimization framework to obtain a reasonably efficient method for optimizing (2) using ℳ as specified in (6).

5. Discussion

5.1. Connection to Regularization Theory

Now, we present a regularization-theory-based interpretation of our methodology for estimating the kernel K (Problem (4)). Using duality theory, it can be shown that the general form of the solution to (4) is given by:

K = ( Σ_i (1/α_i) v_i v_i^T + Σ_{(i,j)∈S} γ^S_ij (e_i − e_j)(e_i − e_j)^T − Σ_{(i,j)∈D} γ^D_ij (e_i − e_j)(e_i − e_j)^T )^{-1},    (7)

with γ^S_ij, γ^D_ij ≥ 0, and e_i being the vector with the i-th entry one and the rest zeros. Let f : X → R be a real-valued function over the feature space and let f = [f(x_1), f(x_2), ..., f(x_n)]^T. We then have

f^T K^{-1} f = f^T ( Σ_i (1/α_i) v_i v_i^T ) f + Σ_{(i,j)∈S} γ^S_ij (f_i − f_j)² − Σ_{(i,j)∈D} γ^D_ij (f_i − f_j)²,    (8)

where the first term addresses the overall smoothness of the function f on the graph, while the last two terms measure the violation of the pairwise constraints. Formulation (8) generalizes the joint regularization framework proposed by (Sindhwani et al., 2005) to include a non-positive-definite term (the dissimilarity term −Σ_{(i,j)∈D} γ^D_ij (f_i − f_j)² in our case) in the regularization, while the overall positive definiteness is still ensured either explicitly through another constraint (K ⪰ 0) or implicitly through the particular optimization algorithm (Bregman projection in our case).

5.2. Connection to Gaussian Processes (GP)

Next, we present an interesting connection of our method to that of GP-based methods for estimating M. Let K = Φ(X)Φ(X)^T, where Φ(X) = [φ(x_1) φ(x_2) ... φ(x_n)]^T and φ(x_i) ∈ R^m is the feature space representation of the point x_i. As in standard GP-based methods, assume that each of the feature dimensions of the φ(x_i) is jointly Gaussian with mean 0 and covariance M that needs to be estimated. Thus, the likelihood of the data is given by:

L = (2π)^{-n/2} |M|^{-1/2} exp( −(1/2) tr( Φ(X)^T M^{-1} Φ(X) ) ).

It is easy to see that maximizing the above likelihood is equivalent to minimizing D_ld(K, M) with fixed K. Assuming a parametric form M = Σ_i α_i v_i v_i^T, GP-based spectral kernel learning is equivalent to learning M using our method. Furthermore, typical GP-based methods use the rank-one target alignment kernel K = yy^T, where y_i is the label of the i-th point. In contrast, we use a more robust learned kernel K that accounts not only for the labels, but also for the similarity in the data points themselves, i.e., our learned kernel K is less likely to overfit to the provided labels and is applicable to a wider class of problems where the supervision need not be in the form of labels.
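
To spell out the stated equivalence (our derivation; constants independent of M are dropped): since tr(Φ(X)^T M^{-1} Φ(X)) = tr(M^{-1} Φ(X)Φ(X)^T) = tr(M^{-1} K), the negative log-likelihood is, in LaTeX,

    \[
    -2\log L \;=\; n\log(2\pi) + \log\det M + \operatorname{tr}(M^{-1}K)
             \;=\; n\log(2\pi) + n + \log\det K + D_{\mathrm{ld}}(K, M),
    \]

and only the last term depends on M, so maximizing L over M is the same as minimizing D_ld(K, M) over M.
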
6. Applications

In this section, we describe a few applications of our geometry-aware metric learning (G-ML) framework for kernel learning. Besides enhancing existing metric/kernel learning methods, our method also extends the application of kernel learning to a few previously inapplicable tasks, e.g., manifold learning tasks.

6.1. Classification

First, we describe the application of our method to the task of classification in two scenarios: 1) the supervised case, where the test points are unknown in the training phase, and 2) the semi-supervised case, where the test/unlabeled points are also part of the training. For both cases, pairwise similarity/dissimilarity constraints are obtained using the provided labels over the data, and the k-nearest-neighbor classifier with the learned kernel K is used for predicting the labels. In the supervised learning case, we apply G-ML to the task of automatic model selection by learning the parameters for the baseline kernel M. For semi-supervised learning, G-ML jointly learns the kernel K and the eigenvalues of the spectral kernel M, thereby taking into account the geometry of the unlabeled data. Note that the optimization step for M (Step B) is similar to the kernel-target alignment technique for selecting a spectral kernel (Zhu et al., 2005). However, (Zhu et al., 2005) treat the kernel as a long vector, while our method respects the two-dimensional structure and positive definiteness of the matrix M.

6.2. Manifold Learning

G-ML is applicable to semi-supervised manifold learning, where the task is to learn and exploit the underlying manifold structure using the provided supervision (pairwise (dis-)similarity constraints). In particular, we apply G-ML to the tasks of non-linear dimensionality reduction and manifold alignment. In contrast to other metric learning methods (Xing et al., 2002; Davis et al., 2007) that learn the metric over the ambient space, G-ML learns the metric on the manifold, where {v_i} are the approximate coordinates of the data on the manifold (Belkin & Niyogi, 2003).

Colored Dimensionality Reduction. Here we consider the semi-supervised dimensionality reduction task, where we want to retain both the intrinsic manifold structure of the data and the (partial) label information. G-ML naturally merges the two sources of information; the learned kernel K incorporates the manifold structure (as expressed in {α_i} and {v_i}) while reflecting the provided side-information (expressed through constraints). Hence, the leading eigenvectors of K should provide a better low-dimensional representation of the data. In the absence of any constraints, this dimensionality reduction model degenerates to Laplacian Eigenmaps (Belkin & Niyogi, 2003). Furthermore, compared to (Song et al., 2007), our model is able to learn a more accurate embedding of the data (Figure 4).

Manifold Alignment. Finally, we apply our method to the task of manifold alignment, where the goal is to align previously disconnected (or weakly connected) manifolds according to some common property. For example, consider images of different objects under a particular transformation, e.g., rotation, illumination, or scaling, which will form a low-dimensional manifold called a Lie group. The goal is to estimate information about the transformation of the object in the image, rather than the object itself. We show that G-ML accurately represents the corresponding Lie group manifold by aligning the image manifolds of different objects under the same transformation (captured by a joint graph Laplacian). This alignment is achieved by learning the kernel K while constraining a small subset of images with similar transformations to have small distance.

7. Experimental Results

In this section, we evaluate our method for geometry-aware metric learning (G-ML) on the applications mentioned in the previous section. Specifically, we apply our method to the tasks of classification, semi-supervised classification, non-linear dimensionality reduction, and manifold alignment. For each task we compare our method with the respective state-of-the-art methods.

7.1. Classification: Supervised Learning

First, we apply our G-ML framework to the task of classification in a supervised learning scenario (Section 6.1). For this task, we consider the feasible set ℳ for M to be scaled Gaussian RBF kernels with unknown scale α and kernel width σ, as in (6). Unlike the spectral kernel case, the sub-problem for finding α and σ is non-convex, and a local optimum of the non-convex subproblem is found with direct (simplex) search (Matlab function fminsearch). The resulting K is then used for k-NN classification. We evaluate our method (G-ML) on four standard UCI datasets (iris, wine, balance-scale, and ionosphere). For each dataset we use 20 points for training and the rest for testing. Figure 2 compares the 4-NN classification error incurred by our method to that of the state-of-the-art ITML method (Davis et al., 2007). For ITML, the kernel width of the Gaussian RBF M is selected using leave-one-out cross-validation. Clearly, G-ML is able to automatically select a good kernel width, while ITML requires slower cross-validation to obtain a similar width parameter.

Figure 2. 4-NN classification error via kernels learned using our method (G-ML) and ITML (Davis et al., 2007), on the iris, wine, ionosphere, and balance-scale datasets (Gaussian kernel; error vs. number of iterations, compared with the ITML + cross-validation baseline). The data-dependent kernel M is the RBF kernel. Clearly, G-ML achieves a competitive error rate while learning the kernel width for M, whereas ITML requires cross-validation.

7.2. Classification: Semi-supervised Learning

Next, we evaluate our method for classification in the semi-supervised setting (Section 6.1). We evaluate our method on four datasets that fall into two broad categories: a) text classification: two standard subsets of the 20-newsgroups dataset, namely baseball-hockey (1993 instances / 2 classes) and pc-mac (1943/2); b) digit classification: two subsets of the USPS digits dataset, odd-even (4000/2) and ten digits (4000/10). Odd-even involves classifying odd (1, 3, 5, 7, 9) vs. even (0, 2, 4, 6, 8) digits, while ten digits is the standard 10-way digit classification. To form the k-NN graph, we use cosine similarity over the tf-idf representation for the text classification datasets and the RBF kernel function over gray-scale pixel values for the digits datasets. We compare G-ML (k-NN classifier with k = 4) with three state-of-the-art semi-supervised kernels: the non-parametric spectral kernel (Zhu et al., 2005), the diffusion kernel (Kondor & Lafferty, 2002), and the maximal-alignment kernel (Lanckriet et al., 2004). For all four semi-supervised learning models we use 10-NN unweighted graphs on all the datasets. The non-parametric spectral kernel uses the first 200 eigenvectors (Zhu et al., 2005), whereas G-ML uses the first 200 eigenvectors to form ℳ. For the three competitor semi-supervised kernels, we use support vector machines (one-vs-all classification). We also compare against three standard kernels: the RBF kernel (bandwidth learned using 5-fold cross-validation), the linear kernel, and the quadratic kernel. We use the diffusion kernel K = exp(−tL) with t = 0.1 for initializing our alternating minimization algorithm.
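
This initialization is easy to reproduce (a sketch of ours in NumPy, not the authors' code): the matrix exponential exp(−tL) follows from the eigendecomposition of the graph Laplacian, which also supplies the eigenvectors v_i used to form ℳ:

    import numpy as np

    def diffusion_kernel(L, t=0.1):
        # Diffusion kernel K = exp(-t L) (Kondor & Lafferty, 2002), computed via the
        # eigendecomposition L = V diag(mu) V^T, so exp(-t L) = V diag(exp(-t mu)) V^T.
        mu, V = np.linalg.eigh(L)
        return (V * np.exp(-t * mu)) @ V.T
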
Note that the various parameter values are set arbitrarily, without optimizing, and do not give an unfair advantage to the proposed method. We report the classification error of G-ML averaged over 30 random training/testing splits; the results of the competing methods are from (Zhu et al., 2005). The first row of Figure 3 compares the error incurred by the various methods on each of the four datasets; the second row shows the test error rate at each iteration of G-ML using 30 labeled examples (except for the digits datasets, where we use 50 examples), while the third row shows the same for 70 labeled examples (100 examples for digits). Clearly, on all four datasets, G-ML gives comparable or better performance than the state-of-the-art semi-supervised learning algorithms and significantly outperforms the supervised learning algorithms.

Figure 3. Top row: Classification error for the various methods (non-parametric spectral, diffusion, maximal-alignment, Gaussian, linear, and quadratic kernels) on the four standard datasets (ten digits, odd vs. even, pc vs. mac, baseball vs. hockey) using different numbers of labeled samples. Note that G-ML consistently performs comparably to or better than the best semi-supervised learning methods and significantly outperforms the supervised learning methods. Middle row and bottom row: Classification error rate (test error vs. number of iterations) with 30 and 70 labeled samples respectively (50 and 100 for digits). In both cases G-ML improves over the initial (diffusion) kernel.

7.3. Colored Dimensionality Reduction

Next, we apply our method to the task of semi-supervised non-linear dimensionality reduction. We evaluate our method on the standard USPS digits dataset, and compare it to the state-of-the-art colored Maximum Variance Unfolding (colored MVU) (Song et al., 2007) method, which also performs dimensionality reduction for labeled data. We also compare our method to ITML (Davis et al., 2007), which does not take the local geometry into account, and Laplacian Eigenmaps (Belkin & Niyogi, 2003), which does not exploit the label information. For visualization, we reduce the dimensionality of the data to two and plot each of the classes of digits with a different color (Figure 4). For the proposed G-ML method, we use 200 samples to generate the pairwise constraints, while colored MVU is supplied with all the labels. Note that other than digit 5, G-ML is able to separate the manifolds of all the digits in the two-dimensional embedding. In contrast, colored MVU is unable to clearly separate the manifolds of digits 4, 5, 8, and 2, while using more labels than the proposed G-ML method.

Figure 4. Two-dimensional embeddings of USPS digits using different methods: G-ML, colored MVU (Song et al., 2007), ITML (Davis et al., 2007), and LE (Belkin & Niyogi, 2003). The color of the dots represents the different classes of digits (the color coding is provided in the top row). We observe that, compared to the other methods, our method separates the respective manifolds of the different digits more accurately, e.g., digit 4. (Better viewed in color.)

7.4. Manifold Alignment

In this experiment, we evaluate our method on the task of manifold alignment (Section 6.2) on two datasets, each associated with a different type of transformation. The first dataset consists of images of two subjects sampled from the Yale Face B dataset, each with 64 different illumination conditions (varying angles of two illumination sources). Note that the images of each of the subjects lie on an arbitrarily oriented two-dimensional manifold. In order to align the two manifolds, we randomly sample must-links for the images with the same illumination conditions. The top row of Figure 5 shows the three-dimensional embeddings of the images using Laplacian Eigenmaps (Belkin & Niyogi, 2003), the proposed G-ML method at various iterations, and the ITML method with the RBF kernel as the baseline kernel (Davis et al., 2007). We observe that G-ML is able to capture the manifold structure of the Lie group and successfully align the manifolds within five iterations. Next, we apply our method to the task of illumination estimation, where the goal is to retrieve the image with the illumination most similar to that of a given query image. As shown in the middle row of Figure 5, G-ML is able to accurately retrieve similar-illumination images irrespective of the identity of the person. The ITML method, which does not capture the local geometry of the unsupervised data, is unable to align the data points w.r.t. the illumination transform and hence unable to accurately retrieve similar-illumination images.

Figure 5. Manifold alignment results (Yale Face B). Top row: 3-dimensional embeddings of the images of the two subjects with different illuminations, for LE (Belkin & Niyogi, 2003), G-ML at various iterations, and ITML (Davis et al., 2007). Middle row: the retrieval results for two queries based on the kernel learned using G-ML. Bottom row: the retrieval results for the same two queries using the ITML kernel. We observe that G-ML is able to capture the local geometry of the manifold, which is further confirmed by the illumination retrieval results, where, unlike ITML, G-ML is able to retrieve similar-illumination images irrespective of the subject. (Better viewed in color.)

To give a quantitative evaluation of manifold alignment, we also performed a similar experiment on a subset of the COIL-20 dataset, which contains images of three subjects with different degrees of rotation (72 points uniformly sampled from 360 degrees).

The images of each subject should lie on a circular one-dimensional manifold. We apply our method to retrieve images with a rotation angle similar to that of a given query image. Figure 6 shows that, with randomly chosen similarity constraints, our method is able to obtain a recall of 0.47, significantly outperforming ITML (0.24) and the diffusion kernel (Kondor & Lafferty, 2002) method (0.23).

Figure 6. Recall as a function of the number of retrieved images, for G-ML, ITML, and the Diffusion Kernel (DK), on the COIL-20 subset.

Acknowledgements

This research is supported by NSF grant CCF. ZL is supported by the ICES postdoctoral fellowship from the University of Texas at Austin.

References

Belkin, M., & Niyogi, P. (2003). Laplacian eigenmaps for dimensionality reduction and data representation. Neural Computation.

Bengio, Y., Delalleau, O., Le Roux, N., Paiement, J., Vincent, P., & Ouimet, M. (2004). Learning eigenfunctions links spectral embedding and kernel PCA. Neural Computation, 16.

Chapelle, O., Weston, J., & Schölkopf, B. (2003). Cluster kernels for semi-supervised learning. Advances in Neural Information Processing Systems.

Davis, J., Kulis, B., Jain, P., Sra, S., & Dhillon, I. (2007). Information-theoretic metric learning. International Conference on Machine Learning.

Globerson, A., & Roweis, S. (2005). Metric learning by collapsing classes. Advances in Neural Information Processing Systems.

Kondor, R. I., & Lafferty, J. D. (2002). Diffusion kernels on graphs and other discrete input spaces. International Conference on Machine Learning.

Lafferty, J., & Lebanon, G. (2005). Diffusion kernels on statistical manifolds. Journal of Machine Learning Research, 6.

Lanckriet, G. R. G., Cristianini, N., Bartlett, P. L., El Ghaoui, L., & Jordan, M. I. (2004). Learning the kernel matrix with semidefinite programming. Journal of Machine Learning Research, 5.

Ong, C. S., Smola, A. J., & Williamson, R. C. (2005). Learning the kernel with hyperkernels. Journal of Machine Learning Research, 6.

Seeger, M. (2008). Cross-validation optimization for large scale structured classification kernel methods. Journal of Machine Learning Research, 9.

Sindhwani, V., Niyogi, P., & Belkin, M. (2005). Beyond the point cloud: from transductive to semi-supervised learning. International Conference on Machine Learning.

Song, L., Smola, A., Borgwardt, K. M., & Gretton, A. (2007). Colored maximum variance unfolding. Advances in Neural Information Processing Systems.

Weinberger, K., Blitzer, J., & Saul, L. (2006). Distance metric learning for large margin nearest neighbor classification. Advances in Neural Information Processing Systems.

Weinberger, K., Sha, F., & Saul, L. (2004). Learning a kernel matrix for nonlinear dimensionality reduction. International Conference on Machine Learning.

Xing, E., Ng, A., Jordan, M., & Russell, S. (2002). Distance metric learning, with application to clustering with side-information. Advances in Neural Information Processing Systems.

Zhu, X., Kandola, J., Ghahramani, Z., & Lafferty, J. (2005). Nonparametric transforms of graph kernels for semi-supervised learning. Advances in Neural Information Processing Systems.


More information

Classifying Acoustic Transient Signals Using Artificial Intelligence

Classifying Acoustic Transient Signals Using Artificial Intelligence Classfyng Acoustc Transent Sgnals Usng Artfcal Intellgence Steve Sutton, Unversty of North Carolna At Wlmngton (suttons@charter.net) Greg Huff, Unversty of North Carolna At Wlmngton (jgh7476@uncwl.edu)

More information

Parallel matrix-vector multiplication

Parallel matrix-vector multiplication Appendx A Parallel matrx-vector multplcaton The reduced transton matrx of the three-dmensonal cage model for gel electrophoress, descrbed n secton 3.2, becomes excessvely large for polymer lengths more

More information

New Extensions of the 3-Simplex for Exterior Orientation

New Extensions of the 3-Simplex for Exterior Orientation New Extensons of the 3-Smplex for Exteror Orentaton John M. Stenbs Tyrone L. Vncent Wllam A. Hoff Colorado School of Mnes jstenbs@gmal.com tvncent@mnes.edu whoff@mnes.edu Abstract Object pose may be determned

More information

NUMERICAL SOLVING OPTIMAL CONTROL PROBLEMS BY THE METHOD OF VARIATIONS

NUMERICAL SOLVING OPTIMAL CONTROL PROBLEMS BY THE METHOD OF VARIATIONS ARPN Journal of Engneerng and Appled Scences 006-017 Asan Research Publshng Network (ARPN). All rghts reserved. NUMERICAL SOLVING OPTIMAL CONTROL PROBLEMS BY THE METHOD OF VARIATIONS Igor Grgoryev, Svetlana

More information

Sum of Linear and Fractional Multiobjective Programming Problem under Fuzzy Rules Constraints

Sum of Linear and Fractional Multiobjective Programming Problem under Fuzzy Rules Constraints Australan Journal of Basc and Appled Scences, 2(4): 1204-1208, 2008 ISSN 1991-8178 Sum of Lnear and Fractonal Multobjectve Programmng Problem under Fuzzy Rules Constrants 1 2 Sanjay Jan and Kalash Lachhwan

More information

Corner-Based Image Alignment using Pyramid Structure with Gradient Vector Similarity

Corner-Based Image Alignment using Pyramid Structure with Gradient Vector Similarity Journal of Sgnal and Informaton Processng, 013, 4, 114-119 do:10.436/jsp.013.43b00 Publshed Onlne August 013 (http://www.scrp.org/journal/jsp) Corner-Based Image Algnment usng Pyramd Structure wth Gradent

More information

The Codesign Challenge

The Codesign Challenge ECE 4530 Codesgn Challenge Fall 2007 Hardware/Software Codesgn The Codesgn Challenge Objectves In the codesgn challenge, your task s to accelerate a gven software reference mplementaton as fast as possble.

More information

LOCALIZING USERS AND ITEMS FROM PAIRED COMPARISONS. Matthew R. O Shaughnessy and Mark A. Davenport

LOCALIZING USERS AND ITEMS FROM PAIRED COMPARISONS. Matthew R. O Shaughnessy and Mark A. Davenport 2016 IEEE INTERNATIONAL WORKSHOP ON MACHINE LEARNING FOR SIGNAL PROCESSING, SEPT. 13 16, 2016, SALERNO, ITALY LOCALIZING USERS AND ITEMS FROM PAIRED COMPARISONS Matthew R. O Shaughnessy and Mark A. Davenport

More information

IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY 1. SSDH: Semi-supervised Deep Hashing for Large Scale Image Retrieval

IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY 1. SSDH: Semi-supervised Deep Hashing for Large Scale Image Retrieval IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY SSDH: Sem-supervsed Deep Hashng for Large Scale Image Retreval Jan Zhang, and Yuxn Peng arxv:607.08477v2 [cs.cv] 8 Jun 207 Abstract Hashng

More information

Small Network Segmentation with Template Guidance

Small Network Segmentation with Template Guidance Small Network Segmentaton wth Template Gudance Krstn Dane Lu Department of Mathematcs Unversty of Calforna, Davs Davs, CA 95616 kdlu@math.ucdavs.edu Ian Davdson Department of Computer Scence Unversty of

More information

BOOSTING CLASSIFICATION ACCURACY WITH SAMPLES CHOSEN FROM A VALIDATION SET

BOOSTING CLASSIFICATION ACCURACY WITH SAMPLES CHOSEN FROM A VALIDATION SET 1 BOOSTING CLASSIFICATION ACCURACY WITH SAMPLES CHOSEN FROM A VALIDATION SET TZU-CHENG CHUANG School of Electrcal and Computer Engneerng, Purdue Unversty, West Lafayette, Indana 47907 SAUL B. GELFAND School

More information

Lobachevsky State University of Nizhni Novgorod. Polyhedron. Quick Start Guide

Lobachevsky State University of Nizhni Novgorod. Polyhedron. Quick Start Guide Lobachevsky State Unversty of Nzhn Novgorod Polyhedron Quck Start Gude Nzhn Novgorod 2016 Contents Specfcaton of Polyhedron software... 3 Theoretcal background... 4 1. Interface of Polyhedron... 6 1.1.

More information

Semi-supervised Mixture of Kernels via LPBoost Methods

Semi-supervised Mixture of Kernels via LPBoost Methods Sem-supervsed Mxture of Kernels va LPBoost Methods Jnbo B Glenn Fung Murat Dundar Bharat Rao Computer Aded Dagnoss and Therapy Solutons Semens Medcal Solutons, Malvern, PA 19355 nbo.b, glenn.fung, murat.dundar,

More information

Related-Mode Attacks on CTR Encryption Mode

Related-Mode Attacks on CTR Encryption Mode Internatonal Journal of Network Securty, Vol.4, No.3, PP.282 287, May 2007 282 Related-Mode Attacks on CTR Encrypton Mode Dayn Wang, Dongda Ln, and Wenlng Wu (Correspondng author: Dayn Wang) Key Laboratory

More information

Fuzzy Modeling of the Complexity vs. Accuracy Trade-off in a Sequential Two-Stage Multi-Classifier System

Fuzzy Modeling of the Complexity vs. Accuracy Trade-off in a Sequential Two-Stage Multi-Classifier System Fuzzy Modelng of the Complexty vs. Accuracy Trade-off n a Sequental Two-Stage Mult-Classfer System MARK LAST 1 Department of Informaton Systems Engneerng Ben-Guron Unversty of the Negev Beer-Sheva 84105

More information

FEATURE EXTRACTION. Dr. K.Vijayarekha. Associate Dean School of Electrical and Electronics Engineering SASTRA University, Thanjavur

FEATURE EXTRACTION. Dr. K.Vijayarekha. Associate Dean School of Electrical and Electronics Engineering SASTRA University, Thanjavur FEATURE EXTRACTION Dr. K.Vjayarekha Assocate Dean School of Electrcal and Electroncs Engneerng SASTRA Unversty, Thanjavur613 41 Jont Intatve of IITs and IISc Funded by MHRD Page 1 of 8 Table of Contents

More information

A MOVING MESH APPROACH FOR SIMULATION BUDGET ALLOCATION ON CONTINUOUS DOMAINS

A MOVING MESH APPROACH FOR SIMULATION BUDGET ALLOCATION ON CONTINUOUS DOMAINS Proceedngs of the Wnter Smulaton Conference M E Kuhl, N M Steger, F B Armstrong, and J A Jones, eds A MOVING MESH APPROACH FOR SIMULATION BUDGET ALLOCATION ON CONTINUOUS DOMAINS Mark W Brantley Chun-Hung

More information

Accounting for the Use of Different Length Scale Factors in x, y and z Directions

Accounting for the Use of Different Length Scale Factors in x, y and z Directions 1 Accountng for the Use of Dfferent Length Scale Factors n x, y and z Drectons Taha Soch (taha.soch@kcl.ac.uk) Imagng Scences & Bomedcal Engneerng, Kng s College London, The Rayne Insttute, St Thomas Hosptal,

More information

On Multiple Kernel Learning with Multiple Labels

On Multiple Kernel Learning with Multiple Labels On Multple Kernel Learnng wth Multple Labels Le Tang Department of CSE Arzona State Unversty L.Tang@asu.edu Janhu Chen Department of CSE Arzona State Unversty Janhu.Chen@asu.edu Jepng Ye Department of

More information

MULTI-VIEW ANCHOR GRAPH HASHING

MULTI-VIEW ANCHOR GRAPH HASHING MULTI-VIEW ANCHOR GRAPH HASHING Saehoon Km 1 and Seungjn Cho 1,2 1 Department of Computer Scence and Engneerng, POSTECH, Korea 2 Dvson of IT Convergence Engneerng, POSTECH, Korea {kshkawa, seungjn}@postech.ac.kr

More information

Machine Learning: Algorithms and Applications

Machine Learning: Algorithms and Applications 14/05/1 Machne Learnng: Algorthms and Applcatons Florano Zn Free Unversty of Bozen-Bolzano Faculty of Computer Scence Academc Year 011-01 Lecture 10: 14 May 01 Unsupervsed Learnng cont Sldes courtesy of

More information

Lecture 4: Principal components

Lecture 4: Principal components /3/6 Lecture 4: Prncpal components 3..6 Multvarate lnear regresson MLR s optmal for the estmaton data...but poor for handlng collnear data Covarance matrx s not nvertble (large condton number) Robustness

More information

Robust and Effective Metric Learning Using Capped Trace Norm

Robust and Effective Metric Learning Using Capped Trace Norm Robust and Effectve Metrc Learnng Usng Capped Trace Norm Zhouyuan Huo Department of Computer Scence and Engneerng Unversty of Texas at Arlngton Texas, USA huozhouyuan@gmal.com Fepng Ne Department of Computer

More information

Local Quaternary Patterns and Feature Local Quaternary Patterns

Local Quaternary Patterns and Feature Local Quaternary Patterns Local Quaternary Patterns and Feature Local Quaternary Patterns Jayu Gu and Chengjun Lu The Department of Computer Scence, New Jersey Insttute of Technology, Newark, NJ 0102, USA Abstract - Ths paper presents

More information

IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 24, NO. 1, JANUARY

IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 24, NO. 1, JANUARY IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 24, NO. 1, JANUARY 2015 189 Dscrmnatve Shared Gaussan Processes for Multvew and Vew-Invarant Facal Expresson Recognton Stefanos Eleftherads, Student Member,

More information

A New Approach For the Ranking of Fuzzy Sets With Different Heights

A New Approach For the Ranking of Fuzzy Sets With Different Heights New pproach For the ankng of Fuzzy Sets Wth Dfferent Heghts Pushpnder Sngh School of Mathematcs Computer pplcatons Thapar Unversty, Patala-7 00 Inda pushpndersnl@gmalcom STCT ankng of fuzzy sets plays

More information

Categories and Subject Descriptors B.7.2 [Integrated Circuits]: Design Aids Verification. General Terms Algorithms

Categories and Subject Descriptors B.7.2 [Integrated Circuits]: Design Aids Verification. General Terms Algorithms 3. Fndng Determnstc Soluton from Underdetermned Equaton: Large-Scale Performance Modelng by Least Angle Regresson Xn L ECE Department, Carnege Mellon Unversty Forbs Avenue, Pttsburgh, PA 3 xnl@ece.cmu.edu

More information