Semi-supervised Classification Using Local and Global Regularization
Proceedings of the Twenty-Third AAAI Conference on Artificial Intelligence (2008)

Fei Wang 1, Tao Li 2, Gang Wang 3, Changshui Zhang 1
1 Department of Automation, Tsinghua University, Beijing, China
2 School of Computing and Information Sciences, Florida International University, Miami, FL, USA
3 Microsoft China Research, Beijing, China

Abstract

In this paper, we propose a semi-supervised learning (SSL) algorithm based on local and global regularization. In the local regularization part, our algorithm constructs a regularized classifier for each data point using its neighborhood, while the global regularization part adopts a Laplacian regularizer to smooth the data labels predicted by those local classifiers. We show that some existing SSL algorithms can be derived from our framework. Finally, we present experimental results to show the effectiveness of our method.

Introduction

Semi-supervised learning (SSL), which aims at learning from partially labeled data sets, has received considerable interest from the machine learning and data mining communities in recent years (Chapelle et al., 2006b). One reason for the popularity of SSL is that in many real-world applications the acquisition of sufficient labeled data is quite expensive and time consuming, while large amounts of unlabeled data are far easier to obtain. Many SSL methods have been proposed in recent decades (Chapelle et al., 2006b), among which the graph-based approaches, such as Gaussian Random Fields (Zhu et al., 2003), Learning with Local and Global Consistency (Zhou et al., 2004) and Tikhonov Regularization (Belkin et al., 2004), have become one of the most active research areas in the SSL field. The common denominator of those algorithms is to model the whole data set as an undirected weighted graph, whose vertices correspond to the data points, and whose edges reflect the relationships between pairwise data points.
In the SSL setting, some of the vertices on the graph are labeled, while the remainder are unlabeled, and the goal of graph-based SSL is to predict the labels of those unlabeled data points (and even of new testing data that are not in the graph) such that the predicted labels are sufficiently smooth with respect to the data graph. One common strategy for realizing graph-based SSL is to minimize a criterion composed of two parts: the first part is a loss that measures the difference between the predictions and the initial data labels, and the second part is a smoothness penalty measuring the smoothness of the predicted labels over the whole data graph. Most past works concentrate on the derivation of different forms of smoothness regularizers, such as the ones using the combinatorial graph Laplacian (Zhu et al., 2003)(Belkin et al., 2006), the normalized graph Laplacian (Zhou et al., 2004), the exponential/iterative graph Laplacian (Belkin et al., 2004), local linear regularization (Wang & Zhang, 2006) and local learning regularization (Wu & Schölkopf, 2007), but they rarely touch the problem of how to derive a more effective loss function. In this paper, we argue that rather than applying a global loss function based on the construction of a global predictor over the whole data set, it would be more desirable to measure such loss locally, by building local predictors for different regions of the input data space. According to (Vapnik, 1995), it is usually difficult to find a predictor that holds good predictability over the entire input data space, but it is much easier to find a good predictor restricted to a local region of the input space. Such a divide-and-conquer scheme has been shown to be much more effective in some real-world applications (Bottou & Vapnik, 1992).

Copyright © 2008, Association for the Advancement of Artificial Intelligence. All rights reserved.
One problem with this local strategy is that the number of data points in each region is usually too small to train a good predictor; therefore we also apply a global smoother to make the predicted data labels comply better with the intrinsic data distribution.

A Brief Review of Manifold Regularization

Before going into the details of our algorithm, we first review the basic idea of manifold regularization (Belkin et al., 2006), since it is closely related to this paper. In semi-supervised learning, we are given a set of data points X = \{x_1, \dots, x_l, x_{l+1}, \dots, x_n\}, where X_l = \{x_i\}_{i=1}^{l} are labeled and X_u = \{x_j\}_{j=l+1}^{n} are unlabeled. Each x_i \in X is drawn from a fixed but usually unknown distribution p(x). Belkin et al. (2006) proposed a general geometric framework for semi-supervised learning called manifold regularization, which seeks an optimal classification function f by minimizing the objective

J_g = \sum_{i=1}^{l} L(y_i, f(x_i, w)) + \gamma_A \|f\|_F^2 + \gamma_I \|f\|_I^2,   (1)

where y_i represents the label of x_i, f(x_i, w) denotes the classification function f with its parameter w, \|f\|_F penalizes the complexity of f in the functional space F, and \|f\|_I reflects
the intrinsic geometric information of the marginal distribution p(x); \gamma_A and \gamma_I are the regularization parameters. The reason why we should penalize the geometric information of f is that in semi-supervised learning we only have a small portion of labeled data (i.e., l is small), which is not enough to train a good learner by purely minimizing the structural loss of f. Therefore, we need some prior knowledge to guide us toward a good f, and p(x) reflects exactly this type of prior information. Moreover, it is usually assumed (Belkin et al., 2006) that there is a direct relationship between p(x) and p(y|x), i.e., if two points x_1 and x_2 are close in the intrinsic geometry of p(x), then the conditional distributions p(y|x_1) and p(y|x_2) should be similar. In other words, p(y|x) should vary smoothly along the geodesics in the intrinsic geometry of p(x). Specifically, (Belkin et al., 2006) also showed that \|f\|_I^2 can be approximated by

\hat{S} = \sum_{i,j} (f(x_i) - f(x_j))^2 W_{ij} = f^T L f,   (2)

where n is the total number of data points, W_{ij} are the edge weights in the data adjacency graph, and f = (f(x_1), \dots, f(x_n))^T. Here L = D - W \in R^{n \times n} is the graph Laplacian, where W is the graph weight matrix with (i, j)-th entry W(i, j) = W_{ij}, and D is the diagonal degree matrix with D(i, i) = \sum_j W_{ij}. There has been extensive discussion showing that, under certain conditions, choosing Gaussian weights for the adjacency graph leads to convergence of the graph Laplacian to the Laplace-Beltrami operator (or its weighted version) on the underlying manifold M (Belkin & Niyogi, 2005)(Hein et al., 2005).

The Algorithm

In this section we introduce our learning with local and global regularization approach in detail. First, let us see the motivation of this work.

Why Local Learning

Although (Belkin et al., 2006) provides an excellent framework for learning from labeled and unlabeled data, the loss in J_g is defined in a global way, i.e., for the whole data set we only need to pursue one classification function f that can minimize J_g.
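Before moving on, note that the smoothness penalty of Eq. (2) used by these global methods is straightforward to compute once the adjacency graph is built. The following is a minimal NumPy sketch; the function name and the k-nearest-neighbor sparsification are illustrative choices, not prescribed by the paper:

```python
import numpy as np

def gaussian_graph_laplacian(X, sigma=1.0, k=10):
    """Build a k-nearest-neighbor graph with Gaussian edge weights and
    return its combinatorial Laplacian L = D - W."""
    n = X.shape[0]
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    W = np.exp(-sq_dists / (2.0 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    for i in range(n):                      # keep only the k strongest edges
        W[i, np.argsort(-W[i])[k:]] = 0.0
    W = np.maximum(W, W.T)                  # symmetrize the adjacency matrix
    D = np.diag(W.sum(axis=1))
    return D - W

# With this L, f^T L f = (1/2) * sum_{i,j} W_ij (f_i - f_j)^2, i.e. the
# smoothness penalty of Eq. (2) up to the summation convention.
```

With this construction, a label vector f that changes sharply across heavily weighted edges incurs a large penalty f^T L f, which is exactly what the regularizer discourages.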
According to (Vapnik, 1995), selecting a good f in such a global way might not be a good strategy, because the function set f(x, w), w \in W, may not contain a good predictor for the entire input space. However, it is much easier for the set to contain some functions that produce good predictions on specific regions of the input space. Therefore, if we split the whole input space into C local regions, it is usually more effective to minimize a local cost function within each region. Nevertheless, purely local learning algorithms still have a problem: there might not be enough data points in each local region for training the local classifiers. Therefore, we propose to also apply a global smoother to smooth the predicted data labels with respect to the intrinsic data manifold, such that the predicted data labels can be more reasonable and accurate.

The Construction of Local Classifiers

In this subsection, we introduce how to construct the local classifiers. Specifically, in our method, we split the whole input data space into n overlapping regions \{R_i\}_{i=1}^{n}, such that R_i is just the k-nearest neighborhood of x_i. We further construct a classification function g_i for region R_i, which, for simplicity, is assumed to be linear. Then g_i predicts the label of x by

g_i(x) = w_i^T (x - x_i) + b_i,   (3)

where w_i and b_i are the weight vector and bias term of g_i.^1 A general approach for getting the optimal parameter set \{(w_i, b_i)\}_{i=1}^{n} is to minimize the following structural loss:

\hat{J}_l = \sum_{i=1}^{n} \left[ \sum_{x_j \in R_i} (w_i^T (x_j - x_i) + b_i - y_j)^2 + \gamma_A \|w_i\|^2 \right].

However, in the semi-supervised learning scenario we only have a few labeled points, i.e., we do not know the corresponding y_j for most of the points. To alleviate this problem, we associate each y_i with a hidden label f_i, such that y_i is directly determined by f_i. Then we can minimize the following loss function instead to get the optimal parameters.
J l = (y f ) 2 + λj ˆ l (4) = =1 n =1 x j R (w T (x j x ) + b f j ) 2 + γ A w 2 Let Jl = x j R (w T(x j x ) + b f ) 2 + γ A w 2, whch can be rewrtten n ts matrx form as [ ] Jl = G w b f 2 where G = x T 1 x T 1 x T 2 x T 1 1.., f = x T n x T 1 γa I d 0 f 1 f 2. f n 0. where x j represents the j-th neghbor of x, n s the cardnalty of R, and 0 s a d 1 zero vector, d s the dmensonalty of the data vectors. By takng (w,b ) Jl = 0, we can get that [ ] w = (G b T G ) 1 G T f (5) Then the total loss we want to mnmze becomes ˆ J l = J l = f GT G f, (6) 1 Snce there s only a few data ponts n each neghborhood, then the structural penalty term w wll pull the weght vector w toward some arbtrary orgn. For sotropy reasons, we translate the orgn of the nput space to the neghborhood medod x, by subtractng x from the tranng ponts x j R 727
where \hat{G}_i = I - G_i (G_i^T G_i)^{-1} G_i^T. If we partition \hat{G}_i into four blocks as

\hat{G}_i = \begin{bmatrix} A_i^{n_i \times n_i} & B_i^{n_i \times d} \\ C_i^{d \times n_i} & D_i^{d \times d} \end{bmatrix}

and let f_i = [f_{i1}, f_{i2}, \dots, f_{in_i}]^T, then, since \hat{G}_i is symmetric and idempotent (so \hat{G}_i^T \hat{G}_i = \hat{G}_i),

f^{iT} \hat{G}_i f^i = [f_i^T \; 0] \begin{bmatrix} A_i & B_i \\ C_i & D_i \end{bmatrix} \begin{bmatrix} f_i \\ 0 \end{bmatrix} = f_i^T A_i f_i.

Thus

\hat{J}_l = \sum_i f_i^T A_i f_i.   (7)

Furthermore, we have the following theorem.

Theorem 1.

A_i = I_{n_i} - X_i^T H_i^{-1} X_i - \frac{X_i^T H_i^{-1} X_i \mathbf{1}\mathbf{1}^T X_i^T H_i^{-1} X_i - X_i^T H_i^{-1} X_i \mathbf{1}\mathbf{1}^T - \mathbf{1}\mathbf{1}^T X_i^T H_i^{-1} X_i + \mathbf{1}\mathbf{1}^T}{n_i - c_i},

where X_i = [x_{i1} - x_i, \dots, x_{in_i} - x_i] \in R^{d \times n_i} stacks the centered neighbors, H_i = X_i X_i^T + \gamma_A I_d, c_i = \mathbf{1}^T X_i^T H_i^{-1} X_i \mathbf{1}, \mathbf{1} \in R^{n_i \times 1} is an all-one vector, and A_i \mathbf{1} = 0.

Proof. See the supplemental material.

Then we can define the label vector f = [f_1, f_2, \dots, f_n]^T \in R^{n \times 1}, the concatenated label vector \hat{f} = [f_1^T, f_2^T, \dots, f_n^T]^T, and the concatenated block-diagonal matrix

\hat{G} = \begin{bmatrix} A_1 & 0 & \cdots & 0 \\ 0 & A_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & A_n \end{bmatrix},

which is of size (\sum_i n_i) \times (\sum_i n_i). Then from Eq. (7) we can derive that \hat{J}_l = \hat{f}^T \hat{G} \hat{f}. Define the selection matrix S \in \{0, 1\}^{(\sum_i n_i) \times n}, a 0-1 matrix with exactly one 1 in each row, such that \hat{f} = S f. Then \hat{J}_l = f^T S^T \hat{G} S f. Let

M = S^T \hat{G} S \in R^{n \times n},   (8)

which is a square matrix, so we can rewrite \hat{J}_l as

\hat{J}_l = f^T M f.   (9)

SSL with Local & Global Regularization

As stated in the motivation above, we also need to apply a global smoother to smooth the predicted hidden labels \{f_i\}. Here we apply the same smoothness regularizer as in Eq. (2), so the predicted labels can be obtained by minimizing

J = \sum_{i=1}^{l} (y_i - f_i)^2 + \lambda f^T M f + \frac{\gamma_I}{n^2} f^T L f.   (10)

By setting \partial J / \partial f = 0 we get

f^* = \left( J + \lambda M + \frac{\gamma_I}{n^2} L \right)^{-1} J y,   (11)

where J \in R^{n \times n} is a diagonal matrix with (i, i)-th entry

J(i, i) = 1 if x_i is labeled, and 0 otherwise,   (12)

and y is an n \times 1 column vector with i-th entry y(i) = y_i if x_i is labeled, and 0 otherwise.

Induction

To predict the label of an unseen testing data point x, which has not appeared in X, we propose a three-step approach:

Step 1. Solve for the optimal label vector f^* using Eq. (11).
Step 2. Solve for the parameters \{w_i, b_i\} of the optimal local classification functions using Eq. (5).
Step 3. For a new testing point x, first identify the local regions that x falls in (e.g.
by computing the Euclidean distance from x to the region medoids and selecting the nearest one), then apply the local prediction functions of the corresponding regions to predict its label.

Discussions

In this section, we discuss the relationships between the proposed framework and some existing related approaches, and present another, mixed-regularization view of the algorithm presented above.

Relationship with Related Approaches

There have already been several semi-supervised learning algorithms based on different regularizations. In this subsection, we discuss the relationships between our algorithm and those existing approaches.

Relationship with Gaussian-Laplacian Regularized Approaches. Most traditional graph-based SSL algorithms (e.g. (Belkin et al., 2004; Zhou et al., 2004; Zhu et al., 2003)) are based on the following framework:

f^* = \arg\min_f \sum_{i=1}^{l} (f_i - y_i)^2 + \zeta f^T L f,   (13)

where f = [f_1, f_2, \dots, f_n]^T and L is the graph Laplacian constructed with Gaussian functions. Clearly, the above framework is just a special case of our algorithm if we set \lambda = 0 and \gamma_I = n^2 \zeta in Eq. (10).

Relationship with Local Learning Regularized Approaches. Recently, Wu & Schölkopf (2007) proposed a novel transduction method based on local learning, which aims to solve the following optimization problem:

f^* = \arg\min_f \sum_{i=1}^{l} (f_i - y_i)^2 + \zeta \sum_{i=1}^{n} \|f_i - o_i\|^2,   (14)

where o_i is the label of x_i predicted by the local classifier constructed on the neighborhood of x_i, and the parameters of the local classifier can be represented by f via minimizing some local structural loss functions as in Eq. (5).
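As a concrete illustration, the Gaussian-Laplacian framework of Eq. (13) also has a closed-form minimizer, f^* = (J + \zeta L)^{-1} J y, with J the diagonal labeled-point indicator of Eq. (12). A minimal NumPy sketch (the function name is our own, not from the paper):

```python
import numpy as np

def laplacian_regularized_labels(L, y, labeled, zeta=1.0):
    """Minimize sum over labeled i of (f_i - y_i)^2 + zeta * f^T L f,
    i.e. the Gaussian-Laplacian framework of Eq. (13). The minimizer is
    f* = (J + zeta * L)^{-1} J y, where J = diag(labeled indicator)."""
    J = np.diag(labeled.astype(float))
    return np.linalg.solve(J + zeta * L, J @ y)
```

On a four-node path graph with the endpoints labeled +1 and -1, for example, the two interior points receive smoothly interpolated labels. The local-learning framework of Eq. (14) differs only in the second term, replacing \zeta f^T L f with the penalty on deviations from the local predictions o_i.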
This approach can be understood as a two-step approach for optimizing Eq. (10) with \gamma_I = 0: in the first step, it optimizes the classifier parameters by minimizing the local structural loss (Eq. (4)); in the second step, it minimizes the prediction loss incurred at each data point by the local classifier constructed on its neighborhood.

A Mixed-Regularization Viewpoint

We have stated that our algorithm aims to minimize

J = \sum_{i=1}^{l} (y_i - f_i)^2 + \lambda f^T M f + \frac{\gamma_I}{n^2} f^T L f,   (15)

where M is defined in Eq. (8) and L is the conventional graph Laplacian constructed with Gaussian functions. It is easy to prove that M has the following property.

Theorem 2. M \mathbf{1} = 0, where \mathbf{1} \in R^{n \times 1} is a column vector with all its elements equal to 1.

Proof. From the definition of M (Eq. (8)), we have M \mathbf{1} = S^T \hat{G} S \mathbf{1} = S^T \hat{G} \hat{\mathbf{1}} = 0, where \hat{\mathbf{1}} = S \mathbf{1} is the all-one vector of length \sum_i n_i and the last step uses A_i \mathbf{1} = 0 from Theorem 1.

Therefore, M can also be viewed as a Laplacian matrix. That is, the last two terms of Eq. (15) can both be viewed as regularization terms with different Laplacians: one derived from local learning, the other derived from the heat kernel. Hence our algorithm can also be understood from a mixed-regularization viewpoint (Chapelle et al., 2006a)(Zhu & Goldberg, 2007). Just like multiview learning, which trains the same type of classifier using different data features, our method trains different classifiers using the same data features. Different types of Laplacians may better reveal different (possibly complementary) information and thus provide a more powerful classifier.

Experiments

In this section, we present a set of experiments to show the effectiveness of our method. First, we describe the basic information of the data sets.

The Data Sets

We adopt 12 data sets in our experiments, including two artificial data sets g241c and g241n, three image data sets USPS, COIL and digit1, one BCI data set,^2 four text data sets cornell, texas, wisconsin and washington from the WebKB data set,^3 and two UCI data sets diabetes and ionosphere.^4 Table 1 summarizes the characteristics of the data sets.

^2 The former six data sets can be downloaded from benchmarks.html.
^3 WebKB/.
^4 MLRepository.html.
Table 1: Descriptions of the data sets (size, number of classes, and dimensionality for each of g241c, g241n, USPS, COIL, digit1, cornell, texas, wisconsin, washington, BCI, diabetes, and ionosphere).

Methods & Parameter Settings

Besides our method, we also implement several competing methods for experimental comparison. For all methods, the hyperparameters were set by 5-fold cross-validation over the grids introduced below.

Local and Global Regularization (our method). In the implementation, the neighborhood size is searched from \{5, 10, 50\}; \gamma_A and \lambda are searched from \{4^{-3}, 4^{-2}, 4^{-1}, 1, 4, 4^2, 4^3\}, and we set \lambda + \gamma_I / n^2 = 1. The width of the Gaussian similarity used when constructing the graph is set by the method in (Zhu et al., 2003).

Local Learning Regularization. The implementation of this algorithm is the same as in (Wu & Schölkopf, 2007), in which we also adopt the mutual neighborhood with its size searched from \{5, 10, 50\}. The regularization parameter of the local classifier and the tradeoff parameter between the loss and the local regularization term are searched from \{4^{-3}, 4^{-2}, 4^{-1}, 1, 4, 4^2, 4^3\}.

Laplacian Regularized Least Squares. The implementation code is downloaded from manifold_regularization/software.html, in which the width of the Gaussian similarity is also set by the method in (Zhu et al., 2003), and the extrinsic and intrinsic regularization parameters are searched from \{4^{-3}, 4^{-2}, 4^{-1}, 1, 4, 4^2, 4^3\}. We adopt the linear kernel since our algorithm is locally linear.

Learning with Local and Global Consistency. The implementation of the algorithm is the same as in (Zhou et al., 2004), in which the width of the Gaussian similarity is also set by the method in (Zhu et al., 2003), and the regularization parameter is searched from \{4^{-3}, 4^{-2}, 4^{-1}, 1, 4, 4^2, 4^3\}.

Gaussian Random Fields. The implementation of the algorithm is the same as in (Zhu et al., 2003).

Support Vector Machine (SVM). We use LIBSVM (Fan et al., 2005) to implement the algorithm with a linear kernel, and the cost parameter is searched from \{10^{-4}, 10^{-3}, 10^{-2}, 10^{-1}, 1, 10, 10^2, 10^3, 10^4\}.
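Putting the pieces of the method together, training reduces to plain linear algebra: the per-neighborhood blocks A_i (Theorem 1, Eq. (7)) are accumulated into M (Eq. (8)), and the labels follow from the closed form of Eq. (11). The sketch below (NumPy; the function names and the use of plain k-NN neighborhoods are illustrative assumptions) computes each A_i directly as the top-left block of I - G_i (G_i^T G_i)^{-1} G_i^T rather than via the explicit formula of Theorem 1:

```python
import numpy as np

def local_regularizer(X, neighborhoods, gamma_A=0.1):
    """Assemble M = S^T Ghat S (Eq. (8)). For each region R_i, form
    G_i = [[X_i^T, 1], [sqrt(gamma_A) I_d, 0]] on the centered neighbors,
    take A_i as the top-left n_i x n_i block of I - G_i (G_i^T G_i)^{-1} G_i^T,
    and add it into the rows/columns indexed by R_i (the effect of S)."""
    n, d = X.shape
    M = np.zeros((n, n))
    for i, idx in enumerate(neighborhoods):
        Xi = X[idx] - X[i]                       # center on the medoid x_i
        ni = len(idx)
        G = np.vstack([
            np.hstack([Xi, np.ones((ni, 1))]),
            np.hstack([np.sqrt(gamma_A) * np.eye(d), np.zeros((d, 1))]),
        ])
        P = G @ np.linalg.solve(G.T @ G, G.T)    # projection onto col(G)
        A = (np.eye(ni + d) - P)[:ni, :ni]       # block A_i of Theorem 1
        M[np.ix_(idx, idx)] += A
    return M

def lgr_labels(M, L, y, labeled, lam=1.0, gamma_I=1.0):
    """Closed-form solution of Eq. (11):
    f* = (J + lam * M + gamma_I / n^2 * L)^{-1} J y."""
    n = len(y)
    J = np.diag(labeled.astype(float))
    return np.linalg.solve(J + lam * M + (gamma_I / n ** 2) * L, J @ y)
```

A quick sanity check of Theorem 2 is that the assembled M annihilates the all-one vector: each G_i contains the column [1; 0], so each A_i \mathbf{1} = 0.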
Figure 1: Experimental results of the different algorithms on each data set: (a) g241c, (b) g241n, (c) USPS, (d) COIL, (e) digit1, (f) cornell, (g) texas, (h) wisconsin, (i) washington, (j) BCI, (k) diabetes, (l) ionosphere. In each panel, the x-axis is the number of randomly labeled points.
Table 2: Experimental results with 10% of the data points randomly labeled: mean classification accuracies and standard deviations over 50 independent runs on g241c, g241n, USPS, COIL, digit1, cornell, texas, wisconsin, washington, BCI, diabetes, and ionosphere.

Experimental Results

The experimental results are shown in Figure 1. In all the figures, the x-axis represents the percentage of randomly labeled points, and the y-axis is the average classification accuracy over 50 independent runs. From the figures we can observe the following:
- One algorithm works very well on the toy and text data sets, but not very well on the image and UCI data sets.
- Two of the algorithms work well on the image data sets, but not very well on the other data sets.
- One algorithm works well on the image and text data sets, but not very well on the BCI and toy data sets.
- One algorithm works well when the data sets are not well structured, e.g., the toy, UCI and BCI data sets.
- Our algorithm works very well on almost all the data sets, except for the toy data sets.

To better illustrate the experimental results, we also provide the numerical results of those algorithms on all the data sets with 10% of the points randomly labeled. The values in Table 2 are the mean classification accuracies and standard deviations over 50 independent runs, from which we can also see the superiority of our algorithm.

Conclusions

In this paper we proposed a general learning framework based on local and global regularization. We showed that many existing learning algorithms can be derived from our framework. Finally, experiments were conducted to demonstrate the effectiveness of our method.

References

Belkin, M., Matveeva, I., and Niyogi, P. (2004). Regularization and Semi-supervised Learning on Large Graphs. In COLT 17.

Belkin, M., and Niyogi, P. (2005). Towards a Theoretical Foundation for Laplacian-Based Manifold Methods. In COLT 18.

Belkin, M., Niyogi, P., and Sindhwani, V. (2006). Manifold Regularization: A Geometric Framework for Learning from Labeled and Unlabeled Examples. Journal of Machine Learning Research 7(Nov).

Bottou, L.
and Vapnik, V. (1992). Local Learning Algorithms. Neural Computation, 4.

Chapelle, O., Chi, M., and Zien, A. (2006a). A Continuation Method for Semi-Supervised SVMs. ICML 23.

Chapelle, O., Schölkopf, B., and Zien, A. (2006b). Semi-Supervised Learning. MIT Press, Cambridge, USA.

Fan, R.-E., Chen, P.-H., and Lin, C.-J. (2005). Working Set Selection Using Second Order Information for Training Support Vector Machines. Journal of Machine Learning Research 6.

Lal, T. N., Schröder, M., Hinterberger, T., Weston, J., Bogdan, M., Birbaumer, N., and Schölkopf, B. (2004). Support Vector Channel Selection in BCI. IEEE Transactions on Biomedical Engineering, 51(6).

Golub, G. H., and Van Loan, C. F. (1983). Matrix Computations. Johns Hopkins University Press, Baltimore.

Hein, M., Audibert, J.-Y., and von Luxburg, U. (2005). From Graphs to Manifolds: Weak and Strong Pointwise Consistency of Graph Laplacians. In COLT 18.

Schölkopf, B., and Smola, A. (2002). Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond. The MIT Press, Cambridge, MA.

Smola, A. J., Bartlett, P. L., Schölkopf, B., and Schuurmans, D. (2000). Advances in Large Margin Classifiers. The MIT Press.

Vapnik, V. N. (1995). The Nature of Statistical Learning Theory. Berlin: Springer-Verlag.

Wang, F., and Zhang, C. (2006). Label Propagation Through Linear Neighborhoods. ICML 23.

Wu, M., and Schölkopf, B. (2007). Transductive Classification via Local Learning Regularization. AISTATS 11.

Zhou, D., Bousquet, O., Lal, T. N., Weston, J., and Schölkopf, B. (2004). Learning with Local and Global Consistency. In NIPS 16.

Zhu, X., Ghahramani, Z., and Lafferty, J. (2003). Semi-Supervised Learning Using Gaussian Fields and Harmonic Functions. In ICML 20.

Zhu, X., and Goldberg, A. (2007). Kernel Regression with Order Preferences. In AAAI.
Recommended Items Ratng Predcton based on RBF Neural Network Optmzed by PSO Algorthm Chengfang Tan, Cayn Wang, Yuln L and Xx Q Abstract In order to mtgate the data sparsty and cold-start problems of recommendaton
More informationUnsupervised Learning
Pattern Recognton Lecture 8 Outlne Introducton Unsupervsed Learnng Parametrc VS Non-Parametrc Approach Mxture of Denstes Maxmum-Lkelhood Estmates Clusterng Prof. Danel Yeung School of Computer Scence and
More informationProblem Definitions and Evaluation Criteria for Computational Expensive Optimization
Problem efntons and Evaluaton Crtera for Computatonal Expensve Optmzaton B. Lu 1, Q. Chen and Q. Zhang 3, J. J. Lang 4, P. N. Suganthan, B. Y. Qu 6 1 epartment of Computng, Glyndwr Unversty, UK Faclty
More informationAdaptive Transfer Learning
Adaptve Transfer Learnng Bn Cao, Snno Jaln Pan, Yu Zhang, Dt-Yan Yeung, Qang Yang Hong Kong Unversty of Scence and Technology Clear Water Bay, Kowloon, Hong Kong {caobn,snnopan,zhangyu,dyyeung,qyang}@cse.ust.hk
More informationA Binarization Algorithm specialized on Document Images and Photos
A Bnarzaton Algorthm specalzed on Document mages and Photos Ergna Kavalleratou Dept. of nformaton and Communcaton Systems Engneerng Unversty of the Aegean kavalleratou@aegean.gr Abstract n ths paper, a
More informationMachine Learning: Algorithms and Applications
14/05/1 Machne Learnng: Algorthms and Applcatons Florano Zn Free Unversty of Bozen-Bolzano Faculty of Computer Scence Academc Year 011-01 Lecture 10: 14 May 01 Unsupervsed Learnng cont Sldes courtesy of
More informationEdge Detection in Noisy Images Using the Support Vector Machines
Edge Detecton n Nosy Images Usng the Support Vector Machnes Hlaro Gómez-Moreno, Saturnno Maldonado-Bascón, Francsco López-Ferreras Sgnal Theory and Communcatons Department. Unversty of Alcalá Crta. Madrd-Barcelona
More informationLearning an Image Manifold for Retrieval
Learnng an Image Manfold for Retreval Xaofe He*, We-Yng Ma, and Hong-Jang Zhang Mcrosoft Research Asa Bejng, Chna, 100080 {wyma,hjzhang}@mcrosoft.com *Department of Computer Scence, The Unversty of Chcago
More informationS1 Note. Basis functions.
S1 Note. Bass functons. Contents Types of bass functons...1 The Fourer bass...2 B-splne bass...3 Power and type I error rates wth dfferent numbers of bass functons...4 Table S1. Smulaton results of type
More informationSupport Vector Machines. CS534 - Machine Learning
Support Vector Machnes CS534 - Machne Learnng Perceptron Revsted: Lnear Separators Bnar classfcaton can be veed as the task of separatng classes n feature space: b > 0 b 0 b < 0 f() sgn( b) Lnear Separators
More informationEfficient Text Classification by Weighted Proximal SVM *
Effcent ext Classfcaton by Weghted Proxmal SVM * Dong Zhuang 1, Benyu Zhang, Qang Yang 3, Jun Yan 4, Zheng Chen, Yng Chen 1 1 Computer Scence and Engneerng, Bejng Insttute of echnology, Bejng 100081, Chna
More informationQuality Improvement Algorithm for Tetrahedral Mesh Based on Optimal Delaunay Triangulation
Intellgent Informaton Management, 013, 5, 191-195 Publshed Onlne November 013 (http://www.scrp.org/journal/m) http://dx.do.org/10.36/m.013.5601 Qualty Improvement Algorthm for Tetrahedral Mesh Based on
More informationData Mining: Model Evaluation
Data Mnng: Model Evaluaton Aprl 16, 2013 1 Issues: Evaluatng Classfcaton Methods Accurac classfer accurac: predctng class label predctor accurac: guessng value of predcted attrbutes Speed tme to construct
More informationLecture 5: Multilayer Perceptrons
Lecture 5: Multlayer Perceptrons Roger Grosse 1 Introducton So far, we ve only talked about lnear models: lnear regresson and lnear bnary classfers. We noted that there are functons that can t be represented
More informationRelated-Mode Attacks on CTR Encryption Mode
Internatonal Journal of Network Securty, Vol.4, No.3, PP.282 287, May 2007 282 Related-Mode Attacks on CTR Encrypton Mode Dayn Wang, Dongda Ln, and Wenlng Wu (Correspondng author: Dayn Wang) Key Laboratory
More informationSVM-based Learning for Multiple Model Estimation
SVM-based Learnng for Multple Model Estmaton Vladmr Cherkassky and Yunqan Ma Department of Electrcal and Computer Engneerng Unversty of Mnnesota Mnneapols, MN 55455 {cherkass,myq}@ece.umn.edu Abstract:
More informationSum of Linear and Fractional Multiobjective Programming Problem under Fuzzy Rules Constraints
Australan Journal of Basc and Appled Scences, 2(4): 1204-1208, 2008 ISSN 1991-8178 Sum of Lnear and Fractonal Multobjectve Programmng Problem under Fuzzy Rules Constrants 1 2 Sanjay Jan and Kalash Lachhwan
More informationCHAPTER 3 SEQUENTIAL MINIMAL OPTIMIZATION TRAINED SUPPORT VECTOR CLASSIFIER FOR CANCER PREDICTION
48 CHAPTER 3 SEQUENTIAL MINIMAL OPTIMIZATION TRAINED SUPPORT VECTOR CLASSIFIER FOR CANCER PREDICTION 3.1 INTRODUCTION The raw mcroarray data s bascally an mage wth dfferent colors ndcatng hybrdzaton (Xue
More informationPERFORMANCE EVALUATION FOR SCENE MATCHING ALGORITHMS BY SVM
PERFORMACE EVALUAIO FOR SCEE MACHIG ALGORIHMS BY SVM Zhaohu Yang a, b, *, Yngyng Chen a, Shaomng Zhang a a he Research Center of Remote Sensng and Geomatc, ongj Unversty, Shangha 200092, Chna - yzhac@63.com
More informationHermite Splines in Lie Groups as Products of Geodesics
Hermte Splnes n Le Groups as Products of Geodescs Ethan Eade Updated May 28, 2017 1 Introducton 1.1 Goal Ths document defnes a curve n the Le group G parametrzed by tme and by structural parameters n the
More informationThe Shortest Path of Touring Lines given in the Plane
Send Orders for Reprnts to reprnts@benthamscence.ae 262 The Open Cybernetcs & Systemcs Journal, 2015, 9, 262-267 The Shortest Path of Tourng Lnes gven n the Plane Open Access Ljuan Wang 1,2, Dandan He
More informationLearning to Project in Multi-Objective Binary Linear Programming
Learnng to Project n Mult-Objectve Bnary Lnear Programmng Alvaro Serra-Altamranda Department of Industral and Management System Engneerng, Unversty of South Florda, Tampa, FL, 33620 USA, amserra@mal.usf.edu,
More informationSimulation: Solving Dynamic Models ABE 5646 Week 11 Chapter 2, Spring 2010
Smulaton: Solvng Dynamc Models ABE 5646 Week Chapter 2, Sprng 200 Week Descrpton Readng Materal Mar 5- Mar 9 Evaluatng [Crop] Models Comparng a model wth data - Graphcal, errors - Measures of agreement
More informationSupervised Nonlinear Dimensionality Reduction for Visualization and Classification
IEEE Transactons on Systems, Man, and Cybernetcs Part B: Cybernetcs 1 Supervsed Nonlnear Dmensonalty Reducton for Vsualzaton and Classfcaton Xn Geng, De-Chuan Zhan, and Zh-Hua Zhou, Member, IEEE Abstract
More information6.854 Advanced Algorithms Petar Maymounkov Problem Set 11 (November 23, 2005) With: Benjamin Rossman, Oren Weimann, and Pouya Kheradpour
6.854 Advanced Algorthms Petar Maymounkov Problem Set 11 (November 23, 2005) Wth: Benjamn Rossman, Oren Wemann, and Pouya Kheradpour Problem 1. We reduce vertex cover to MAX-SAT wth weghts, such that the
More informationAnnouncements. Supervised Learning
Announcements See Chapter 5 of Duda, Hart, and Stork. Tutoral by Burge lnked to on web page. Supervsed Learnng Classfcaton wth labeled eamples. Images vectors n hgh-d space. Supervsed Learnng Labeled eamples
More informationA Fast Visual Tracking Algorithm Based on Circle Pixels Matching
A Fast Vsual Trackng Algorthm Based on Crcle Pxels Matchng Zhqang Hou hou_zhq@sohu.com Chongzhao Han czhan@mal.xjtu.edu.cn Ln Zheng Abstract: A fast vsual trackng algorthm based on crcle pxels matchng
More informationNetwork Intrusion Detection Based on PSO-SVM
TELKOMNIKA Indonesan Journal of Electrcal Engneerng Vol.1, No., February 014, pp. 150 ~ 1508 DOI: http://dx.do.org/10.11591/telkomnka.v1.386 150 Network Intruson Detecton Based on PSO-SVM Changsheng Xang*
More informationOutline. Discriminative classifiers for image recognition. Where in the World? A nearest neighbor recognition example 4/14/2011. CS 376 Lecture 22 1
4/14/011 Outlne Dscrmnatve classfers for mage recognton Wednesday, Aprl 13 Krsten Grauman UT-Austn Last tme: wndow-based generc obect detecton basc ppelne face detecton wth boostng as case study Today:
More informationLearning a Class-Specific Dictionary for Facial Expression Recognition
BULGARIAN ACADEMY OF SCIENCES CYBERNETICS AND INFORMATION TECHNOLOGIES Volume 16, No 4 Sofa 016 Prnt ISSN: 1311-970; Onlne ISSN: 1314-4081 DOI: 10.1515/cat-016-0067 Learnng a Class-Specfc Dctonary for
More informationFrom Comparing Clusterings to Combining Clusterings
Proceedngs of the Twenty-Thrd AAAI Conference on Artfcal Intellgence (008 From Comparng Clusterngs to Combnng Clusterngs Zhwu Lu and Yuxn Peng and Janguo Xao Insttute of Computer Scence and Technology,
More informationAn Evolvable Clustering Based Algorithm to Learn Distance Function for Supervised Environment
IJCSI Internatonal Journal of Computer Scence Issues, Vol. 7, Issue 5, September 2010 ISSN (Onlne): 1694-0814 www.ijcsi.org 374 An Evolvable Clusterng Based Algorthm to Learn Dstance Functon for Supervsed
More informationCS434a/541a: Pattern Recognition Prof. Olga Veksler. Lecture 15
CS434a/541a: Pattern Recognton Prof. Olga Veksler Lecture 15 Today New Topc: Unsupervsed Learnng Supervsed vs. unsupervsed learnng Unsupervsed learnng Net Tme: parametrc unsupervsed learnng Today: nonparametrc
More informationA Unified Framework for Semantics and Feature Based Relevance Feedback in Image Retrieval Systems
A Unfed Framework for Semantcs and Feature Based Relevance Feedback n Image Retreval Systems Ye Lu *, Chunhu Hu 2, Xngquan Zhu 3*, HongJang Zhang 2, Qang Yang * School of Computng Scence Smon Fraser Unversty
More informationSynthesizer 1.0. User s Guide. A Varying Coefficient Meta. nalytic Tool. Z. Krizan Employing Microsoft Excel 2007
Syntheszer 1.0 A Varyng Coeffcent Meta Meta-Analytc nalytc Tool Employng Mcrosoft Excel 007.38.17.5 User s Gude Z. Krzan 009 Table of Contents 1. Introducton and Acknowledgments 3. Operatonal Functons
More informationCompiler Design. Spring Register Allocation. Sample Exercises and Solutions. Prof. Pedro C. Diniz
Compler Desgn Sprng 2014 Regster Allocaton Sample Exercses and Solutons Prof. Pedro C. Dnz USC / Informaton Scences Insttute 4676 Admralty Way, Sute 1001 Marna del Rey, Calforna 90292 pedro@s.edu Regster
More informationNAG Fortran Library Chapter Introduction. G10 Smoothing in Statistics
Introducton G10 NAG Fortran Lbrary Chapter Introducton G10 Smoothng n Statstcs Contents 1 Scope of the Chapter... 2 2 Background to the Problems... 2 2.1 Smoothng Methods... 2 2.2 Smoothng Splnes and Regresson
More informationA Robust LS-SVM Regression
PROCEEDIGS OF WORLD ACADEMY OF SCIECE, EGIEERIG AD ECHOLOGY VOLUME 7 AUGUS 5 ISS 37- A Robust LS-SVM Regresson József Valyon, and Gábor Horváth Abstract In comparson to the orgnal SVM, whch nvolves a quadratc
More informationResearch of Neural Network Classifier Based on FCM and PSO for Breast Cancer Classification
Research of Neural Network Classfer Based on FCM and PSO for Breast Cancer Classfcaton Le Zhang 1, Ln Wang 1, Xujewen Wang 2, Keke Lu 2, and Ajth Abraham 3 1 Shandong Provncal Key Laboratory of Network
More informationFast Computation of Shortest Path for Visiting Segments in the Plane
Send Orders for Reprnts to reprnts@benthamscence.ae 4 The Open Cybernetcs & Systemcs Journal, 04, 8, 4-9 Open Access Fast Computaton of Shortest Path for Vstng Segments n the Plane Ljuan Wang,, Bo Jang
More informationAn Application of the Dulmage-Mendelsohn Decomposition to Sparse Null Space Bases of Full Row Rank Matrices
Internatonal Mathematcal Forum, Vol 7, 2012, no 52, 2549-2554 An Applcaton of the Dulmage-Mendelsohn Decomposton to Sparse Null Space Bases of Full Row Rank Matrces Mostafa Khorramzadeh Department of Mathematcal
More informationFeature-Based Matrix Factorization
Feature-Based Matrx Factorzaton arxv:1109.2271v3 [cs.ai] 29 Dec 2011 Tanq Chen, Zhao Zheng, Quxa Lu, Wenan Zhang, Yong Yu {tqchen,zhengzhao,luquxa,wnzhang,yyu}@apex.stu.edu.cn Apex Data & Knowledge Management
More informationLocal Quaternary Patterns and Feature Local Quaternary Patterns
Local Quaternary Patterns and Feature Local Quaternary Patterns Jayu Gu and Chengjun Lu The Department of Computer Scence, New Jersey Insttute of Technology, Newark, NJ 0102, USA Abstract - Ths paper presents
More informationA fast algorithm for color image segmentation
Unersty of Wollongong Research Onlne Faculty of Informatcs - Papers (Arche) Faculty of Engneerng and Informaton Scences 006 A fast algorthm for color mage segmentaton L. Dong Unersty of Wollongong, lju@uow.edu.au
More informationFuzzy Modeling of the Complexity vs. Accuracy Trade-off in a Sequential Two-Stage Multi-Classifier System
Fuzzy Modelng of the Complexty vs. Accuracy Trade-off n a Sequental Two-Stage Mult-Classfer System MARK LAST 1 Department of Informaton Systems Engneerng Ben-Guron Unversty of the Negev Beer-Sheva 84105
More informationAPPLICATION OF MULTIVARIATE LOSS FUNCTION FOR ASSESSMENT OF THE QUALITY OF TECHNOLOGICAL PROCESS MANAGEMENT
3. - 5. 5., Brno, Czech Republc, EU APPLICATION OF MULTIVARIATE LOSS FUNCTION FOR ASSESSMENT OF THE QUALITY OF TECHNOLOGICAL PROCESS MANAGEMENT Abstract Josef TOŠENOVSKÝ ) Lenka MONSPORTOVÁ ) Flp TOŠENOVSKÝ
More informationRandom Kernel Perceptron on ATTiny2313 Microcontroller
Random Kernel Perceptron on ATTny233 Mcrocontroller Nemanja Djurc Department of Computer and Informaton Scences, Temple Unversty Phladelpha, PA 922, USA nemanja.djurc@temple.edu Slobodan Vucetc Department
More informationTPL-Aware Displacement-driven Detailed Placement Refinement with Coloring Constraints
TPL-ware Dsplacement-drven Detaled Placement Refnement wth Colorng Constrants Tao Ln Iowa State Unversty tln@astate.edu Chrs Chu Iowa State Unversty cnchu@astate.edu BSTRCT To mnmze the effect of process
More informationParallel matrix-vector multiplication
Appendx A Parallel matrx-vector multplcaton The reduced transton matrx of the three-dmensonal cage model for gel electrophoress, descrbed n secton 3.2, becomes excessvely large for polymer lengths more
More informationConstructing Minimum Connected Dominating Set: Algorithmic approach
Constructng Mnmum Connected Domnatng Set: Algorthmc approach G.N. Puroht and Usha Sharma Centre for Mathematcal Scences, Banasthal Unversty, Rajasthan 304022 usha.sharma94@yahoo.com Abstract: Connected
More informationRobust Dictionary Learning with Capped l 1 -Norm
Proceedngs of the Twenty-Fourth Internatonal Jont Conference on Artfcal Intellgence (IJCAI 205) Robust Dctonary Learnng wth Capped l -Norm Wenhao Jang, Fepng Ne, Heng Huang Unversty of Texas at Arlngton
More informationGSLM Operations Research II Fall 13/14
GSLM 58 Operatons Research II Fall /4 6. Separable Programmng Consder a general NLP mn f(x) s.t. g j (x) b j j =. m. Defnton 6.. The NLP s a separable program f ts objectve functon and all constrants are
More informationRange images. Range image registration. Examples of sampling patterns. Range images and range surfaces
Range mages For many structured lght scanners, the range data forms a hghly regular pattern known as a range mage. he samplng pattern s determned by the specfc scanner. Range mage regstraton 1 Examples
More informationFace Recognition University at Buffalo CSE666 Lecture Slides Resources:
Face Recognton Unversty at Buffalo CSE666 Lecture Sldes Resources: http://www.face-rec.org/algorthms/ Overvew of face recognton algorthms Correlaton - Pxel based correspondence between two face mages Structural
More information