On Multiple Kernel Learning with Multiple Labels
Lei Tang, Department of CSE, Arizona State University
Jianhui Chen, Department of CSE, Arizona State University
Jieping Ye, Department of CSE, Arizona State University

Abstract

For classification with multiple labels, a common approach is to learn a classifier for each label. With a kernel-based classifier, there are two options for setting up kernels: select a specific kernel for each label, or use the same kernel for all labels. In this work, we present a unified framework for multi-label multiple kernel learning, in which the above two approaches can be considered as two extreme cases. Moreover, our framework allows kernels to be shared partially among multiple labels, enabling flexible degrees of label commonality. We systematically study how the sharing of kernels among multiple labels affects performance, based on extensive experiments on various benchmark data including images and microarray data. Interesting findings concerning efficacy and efficiency are reported.

1 Introduction

With the proliferation of kernel-based methods like support vector machines (SVM), kernel learning has been attracting increasing attention. As is widely known, the kernel function or matrix plays an essential role in kernel methods. For practical learning problems, different kernels are usually prespecified to characterize the data: for instance, Gaussian kernels with different width parameters, or data fusion with heterogeneous representations [Lanckriet et al., 2004b]. Traditionally, an appropriate kernel can be estimated through cross-validation. Recent multiple kernel learning (MKL) methods [Lanckriet et al., 2004a] manipulate the Gram (kernel) matrix directly by formulating the problem as a semi-definite program (SDP), or alternatively search for an optimal convex combination of multiple user-specified kernels via a quadratically constrained quadratic program (QCQP). Both the SDP and QCQP formulations can only handle data of medium size and a small number of kernels.
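As a concrete illustration of the convex-combination setup discussed above, the sketch below builds a few Gaussian base Gram matrices and combines them with fixed weights. The toy data, function names, and weights are our own illustration (NumPy assumed), not taken from the paper:

```python
import numpy as np

def gaussian_kernel(X, width):
    # Gram matrix of a Gaussian (RBF) kernel with the given width parameter.
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-d2 / (2.0 * width ** 2))

def combine_kernels(grams, theta):
    # Convex combination G = sum_i theta_i * G_i of base Gram matrices.
    theta = np.asarray(theta, dtype=float)
    assert np.all(theta >= 0) and np.isclose(theta.sum(), 1.0)
    return sum(t * G for t, G in zip(theta, grams))

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 3))
grams = [gaussian_kernel(X, w) for w in (0.5, 1.0, 2.0)]
G = combine_kernels(grams, [0.2, 0.5, 0.3])
# A convex combination of PSD Gram matrices is itself PSD.
min_eig = float(np.min(np.linalg.eigvalsh(G)))
```

Because a non-negative combination of positive semi-definite matrices stays positive semi-definite, the combined matrix remains a valid kernel, which is what makes the convex-combination parameterization attractive.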
To address large-scale kernel learning, various methods have been developed, including an SMO-like algorithm [Bach et al., 2004], semi-infinite linear programming (SILP) [Sonnenburg et al., 2007], and a projected gradient method [Rakotomamonjy et al., 2007]. Most existing work on MKL focuses on binary classification. In this work, we instead explore MKL (learning the weights of the base kernels) for classification with multiple labels, i.e., classification with more than two categories in the output space. Commonly, the problem is decomposed into multiple binary classification tasks, which are learned independently or jointly. Some works attempt to address the kernel learning problem with multiple labels. In [Jebara, 2004], all binary classification tasks share the same Bernoulli prior for each kernel, leading to a sparse kernel combination. [Zien, 2007] discusses kernel learning for multi-class SVM, and [Ji et al., 2008] studies the multi-label case. Both works exploit the same kernel directly for all classes, yet no empirical result has been formally reported on whether the same kernel across labels performs better than a specific kernel for each label. The same-kernel-across-tasks setup seems reasonable at first glimpse but needs more investigation. Mostly, the multiple labels are within the same domain, and the classification tasks naturally share some commonality. On the other hand, a kernel is more informative for classification when it is aligned with the target label. Some tasks (say, recognizing sunsets and animals in images) are quite distinct, so a specific kernel for each label should be encouraged. Given these considerations, a question arises naturally: which approach is better, the same kernel for all labels or a specific kernel for each label? To the best of our knowledge, no work has formally studied this issue. A natural extension is to develop kernels that capture the similarity and the difference among labels simultaneously.
This matches the relationship among labels more reasonably, but is it effective in practice? The questions above motivate us to develop a novel framework that models task similarity and difference simultaneously when handling multiple related classification tasks. We show that the framework can be solved via QCQP with proper regularization on kernel differences. To be scalable, an SILP-like algorithm is provided. In this framework, selecting the same kernel for all labels and selecting a specific kernel for each label are the two extreme cases. Moreover, the framework allows varying degrees of kernel sharing with proper parameter setup, enabling us to study different strategies of kernel sharing systematically. Based on extensive experiments on benchmark data, we report some interesting findings and explanations concerning the aforementioned questions.
2 A Unified Framework

To systematically study the effect of kernel sharing among multiple labels, we present a unified framework that allows flexible degrees of kernel sharing. We focus on the well-known kernel-based algorithm SVM, learning $k$ binary classification tasks $\{f^t\}_{t=1}^k$ based on $n$ training samples $\{x_i, y_i^t\}_{i=1}^n$, where $t$ is the index of a specific label. Let $\mathcal{H}_K^t$ be the feature space and $\phi_K^t$ the mapping function defined as $\phi_K^t : x \mapsto \phi_K^t(x) \in \mathcal{H}_K^t$ for a kernel function $K^t$. Let $G^t$ be the kernel (Gram) matrix for the $t$-th task, namely $G_{ij}^t = K^t(x_i, x_j) = \phi_K^t(x_i) \cdot \phi_K^t(x_j)$. Under the setting of learning multiple labels $\{f^t\}_{t=1}^k$ using SVM, each label $f^t$ can be seen as learning a linear function in the feature space $\mathcal{H}_K^t$, such that $f^t(x) = \mathrm{sgn}(\langle w^t, \phi_K^t(x) \rangle + b^t)$, where $w^t$ is the feature weight and $b^t$ is the bias term. Typically, the dual formulation of SVM is considered. Let $D(\alpha^t, G^t)$ denote the dual objective of the $t$-th task given kernel matrix $G^t$:

$$D(\alpha^t, G^t) = [\alpha^t]^T e - \tfrac{1}{2} [\alpha^t]^T \big( G^t \circ y^t [y^t]^T \big)\, \alpha^t \quad (1)$$

where, for task $f^t$, $G^t \in S_+^n$ denotes the kernel matrix and $S_+^n$ is the set of positive semi-definite matrices; $\circ$ denotes element-wise matrix multiplication; and $\alpha^t \in \mathbb{R}^n$ denotes the vector of dual variables. Mathematically, multi-label learning with $k$ labels can be formulated in the dual form as:

$$\max_{\{\alpha^t\}_{t=1}^k} \; \sum_{t=1}^k D(\alpha^t, G^t) \quad (2)$$
$$\text{s.t. } [\alpha^t]^T y^t = 0, \; 0 \le \alpha^t \le C, \; t = 1, \ldots, k$$

Here, $C$ is the penalty parameter allowing misclassification. Given $\{G^t\}_{t=1}^k$, the optimal $\{\alpha^t\}_{t=1}^k$ in Eq. (2) can be found by solving a convex problem. Note that, due to convexity, the dual objective equals the primal objective of SVM, i.e., the empirical classification loss plus model complexity. Following [Lanckriet et al., 2004a], multiple kernel learning with $k$ labels and $p$ base kernels $G_1, G_2, \ldots, G_p$ can be formulated as:

$$\min_{\{G^t\}_{t=1}^k} \; \lambda\, \Omega(\{G^t\}_{t=1}^k) + \sum_{t=1}^k \max_{\alpha^t} D(\alpha^t, G^t) \quad (3)$$
$$\text{s.t. } [\alpha^t]^T y^t = 0, \; 0 \le \alpha^t \le C, \; t = 1, \ldots, k$$
$$G^t = \sum_{i=1}^p \theta_i^t G_i, \; t = 1, \ldots, k \quad (4)$$
$$\sum_{i=1}^p \theta_i^t = 1, \; \theta_i^t \ge 0, \; t = 1, \ldots, k \quad (5)$$

where $\Omega(\{G^t\}_{t=1}^k)$ is a regularization term representing the cost associated with kernel differences among labels.
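For concreteness, the dual objective $D(\alpha^t, G^t)$ of Eq. (1), which appears in all the formulations above, can be evaluated directly from a Gram matrix, a label vector, and dual variables. A minimal NumPy sketch; the toy numbers are our own illustration:

```python
import numpy as np

def dual_objective(alpha, G, y):
    """SVM dual objective D(alpha, G) = alpha^T e - 1/2 alpha^T (G o y y^T) alpha,
    where 'o' is element-wise multiplication, as in Eq. (1)."""
    Yy = np.outer(y, y)  # the rank-one matrix y y^T
    return alpha.sum() - 0.5 * alpha @ ((G * Yy) @ alpha)

# Tiny check with an identity Gram matrix and labels in {-1, +1}.
y = np.array([1.0, -1.0, 1.0])
G = np.eye(3)
alpha = np.array([0.1, 0.1, 0.1])
val = dual_objective(alpha, G, y)  # 0.3 - 0.5 * 0.03 = 0.285
```

At $\alpha = 0$ the objective is 0, and it grows with the linear term before the quadratic penalty dominates, which is the usual shape of the SVM dual.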
To capture the commonality among labels, $\Omega$ should be a monotonically increasing function of kernel difference, and $\lambda$ is the trade-off parameter between kernel difference and classification loss. Clearly, if $\lambda$ is set to 0, the objective decouples into $k$ sub-problems, each selecting a kernel independently (Independent Model); when $\lambda$ is sufficiently large, the regularization term dominates and forces all labels to select the same kernel (Same Model). In between, there are infinitely many Partial Models, which control the degree of kernel difference among tasks. The larger $\lambda$ is, the more similar the kernels of the labels are.

3 Regularization on Kernel Difference

Here, we develop one regularization scheme such that formulation (3) can be solved via convex programming. Since the optimal kernel for each label is expressed as a convex combination of multiple base kernels as in Eqs. (4) and (5), each $\theta^t$ essentially represents the kernel associated with the $t$-th label. We decouple the kernel weights of each label into two non-negative parts:

$$\theta_i^t = \zeta_i + \gamma_i^t, \quad \zeta_i, \gamma_i^t \ge 0 \quad (6)$$

where $\zeta$ denotes the kernel weights shared across labels and $\gamma^t$ is the label-specific part. The kernel difference can then be defined as:

$$\Omega(\{G^t\}_{t=1}^k) = \frac{1}{2} \sum_{t=1}^k \sum_{i=1}^p \gamma_i^t \quad (7)$$

For presentation convenience, we denote

$$G_i^t(\alpha) = [\alpha^t]^T \big( G_i \circ y^t [y^t]^T \big)\, \alpha^t \quad (8)$$

It follows that the MKL problem can be solved via QCQP.¹

Theorem 3.1. Given the regularization presented in (6) and (7), the problem in (3) is equivalent to a quadratically constrained quadratic program (QCQP):

$$\max \; \sum_{t=1}^k [\alpha^t]^T e - \tfrac{1}{2} s$$
$$\text{s.t. } s \ge s_0, \quad s \ge \sum_{t=1}^k s^t - k\lambda$$
$$s_0 \ge \sum_{t=1}^k G_i^t(\alpha), \; i = 1, \ldots, p$$
$$s^t \ge G_i^t(\alpha), \; i = 1, \ldots, p, \; t = 1, \ldots, k$$
$$[\alpha^t]^T y^t = 0, \; 0 \le \alpha^t \le C, \; t = 1, \ldots, k$$

The kernel weights of each label ($\zeta, \gamma^t$) can be obtained via the dual variables of the constraints. The QCQP formulation involves $nk + k + 2$ variables, $(k+1)p$ quadratic constraints, and $O(nk)$ linear constraints. Though this QCQP can be solved efficiently by general optimization software, the quadratic constraints might exhaust memory resources if $k$ or $p$ is large. Next, we present a more scalable algorithm that solves the problem efficiently.
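Before turning to the algorithm, the decomposition in Eqs. (6)-(7) can be illustrated numerically. In the sketch below the shared part $\zeta$ is taken as the element-wise minimum of the per-label weights; that particular split is our illustration only, since in the actual formulation $\zeta$ and $\gamma^t$ are optimization variables:

```python
import numpy as np

# Per-label kernel weights theta^t (rows: labels t = 1..k, cols: base kernels i = 1..p).
theta = np.array([[0.6, 0.3, 0.1],
                  [0.5, 0.4, 0.1],
                  [0.7, 0.2, 0.1]])

# One feasible split theta_i^t = zeta_i + gamma_i^t with zeta, gamma^t >= 0:
# take the shared part as the element-wise minimum over labels.
zeta = theta.min(axis=0)
gamma = theta - zeta  # label-specific parts, all non-negative by construction

# Regularizer Omega = 1/2 * sum_t sum_i gamma_i^t, as in Eq. (7).
omega = 0.5 * gamma.sum()
```

When all labels use identical weights, $\gamma^t = 0$ and $\Omega = 0$ (the Same Model); the more the per-label weights diverge, the larger $\Omega$ becomes, which is exactly the monotonicity the regularizer is meant to have.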
4 Algorithm

Given $\lambda$, the objective in (3) is equivalent to the following problem with a proper $\beta$ and the other constraints specified in (3):

$$\min_{\{G^t\}} \max_{\{\alpha^t\}} \; \sum_{t=1}^k \Big\{ [\alpha^t]^T e - \tfrac{1}{2} [\alpha^t]^T \big( G^t \circ y^t [y^t]^T \big)\, \alpha^t \Big\} \quad (9)$$
$$\text{s.t. } \Omega(\{G^t\}_{t=1}^k) \le \beta \quad (10)$$

¹ Please refer to the appendix for the proof.
Compared with $\lambda$, $\beta$ has an explicit meaning: the maximum difference among the kernels of the labels. Since $\sum_{i=1}^p \theta_i^t = 1$, the min-max problem in (9), akin to [Sonnenburg et al., 2007], can be expressed as:

$$\min \; \sum_{t=1}^k g^t \quad (11)$$
$$\text{s.t. } \sum_{i=1}^p \theta_i^t D(\alpha^t, G_i) \le g^t, \; \forall\, \alpha^t \in S(t) \quad (12)$$

with

$$S(t) = \big\{ \alpha^t \;\big|\; 0 \le \alpha^t \le C, \; [\alpha^t]^T y^t = 0 \big\} \quad (13)$$

Note that the max operation with respect to $\alpha^t$ is transformed into a constraint over all possible $\alpha^t$ in the set $S(t)$ defined in (13). An algorithm similar to cutting planes can be utilized to solve the problem, which essentially adds constraints in terms of $\alpha^t$ iteratively. In the $J$-th iteration, we perform the following:

1. Given $\theta_i^t$ and $g^t$ from the previous iteration, find, for each label, the new $\alpha_J^t$ in the set (13) that violates the constraints (12) most. Essentially, we need to find $\alpha^t$ such that $\sum_{i=1}^p \theta_i^t D(\alpha^t, G_i)$ is maximized, which boils down to an SVM problem with a fixed kernel for each label:

$$\max_{\alpha^t} \; [\alpha^t]^T e - \tfrac{1}{2} [\alpha^t]^T \Big( \big( \textstyle\sum_{i=1}^p \theta_i^t G_i \big) \circ y^t [y^t]^T \Big)\, \alpha^t$$

Here, each label's SVM problem can be solved independently, and typical SVM acceleration techniques and existing SVM implementations can be used directly.

2. Given the $\alpha_J^t$ obtained in Step 1, add $k$ linear constraints

$$\sum_{i=1}^p \theta_i^t D(\alpha_J^t, G_i) \le g^t, \; t = 1, \ldots, k$$

and find the new $\theta_i^t$ and $g^t$ via the problem below:

$$\min \; \sum_{t=1}^k g^t$$
$$\text{s.t. } \sum_{i=1}^p \theta_i^t D(\alpha_j^t, G_i) \le g^t, \; j = 1, \ldots, J, \; t = 1, \ldots, k$$
$$\theta_i^t \ge 0, \; \sum_{i=1}^p \theta_i^t = 1, \; t = 1, \ldots, k$$
$$\frac{1}{2} \sum_{t=1}^k \sum_{i=1}^p \gamma_i^t \le \beta, \quad \theta_i^t = \zeta_i + \gamma_i^t, \; \zeta_i, \gamma_i^t \ge 0$$

Note that both the constraints and the objective are linear, so the problem can be solved efficiently by a general optimization package.

3. Set $J = J + 1$. Repeat the above procedure until no $\alpha$ is found to violate the constraints in Step 1.

In each iteration, we thus alternately solve $k$ SVM problems and a linear program of size $O(kp)$.

5 Experimental Study

5.1 Experiment Setup

Four multi-label data sets are selected, as shown in Table 1. We also include five multi-class data sets, as they are special cases of multi-label classification and the one-vs-all approach performs reasonably well [Rifkin and Klautau, 2004].

[Table 1: Data Description. Columns: data set, #samples, #labels, #kernels. Multi-label: Ligand, Bio, Scene, Yeast; multi-class: USPS, Letter, Yaleface, 20news, Segment.]
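Step 1 of the algorithm in Section 4 reduces to training one standard SVM per label with a fixed, pre-combined Gram matrix. The sketch below uses scikit-learn's precomputed-kernel interface; the choice of solver, the toy data, and the kernel weights are all our own assumptions, not the paper's implementation:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.standard_normal((40, 2))
# Two binary label columns (tasks), as in one-vs-all multi-label training.
Y = np.stack([np.sign(X[:, 0]), np.sign(X[:, 0] + X[:, 1])], axis=1)

# Base Gram matrices: a linear kernel and a quadratic polynomial kernel.
grams = [X @ X.T, (1.0 + X @ X.T) ** 2]

def train_per_label(grams, theta_per_label, Y, C=1.0):
    """Train one SVM per label on its combined kernel sum_i theta_i^t G_i."""
    models = []
    for t in range(Y.shape[1]):
        G = sum(th * Gi for th, Gi in zip(theta_per_label[t], grams))
        clf = SVC(C=C, kernel="precomputed").fit(G, Y[:, t])
        models.append((clf, G))
    return models

theta = [[0.5, 0.5], [0.8, 0.2]]  # illustrative per-label kernel weights
models = train_per_label(grams, theta, Y)
train_acc = [clf.score(G, Y[:, t]) for t, (clf, G) in enumerate(models)]
```

Any SVM implementation that accepts a Gram matrix works here, which is precisely why Step 1 can reuse existing SVM accelerations off the shelf.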
We report average AUC and accuracy for the multi-label and multi-class data, respectively. A portion of the data is sampled from USPS, Letter, and Yaleface, as they are too large to handle directly. Various types of base kernels are generated: we generate 15 diffusion kernels with the parameter varying from 0.1 to 6 for Ligand [Tsuda and Noble, 2004]; the 8 kernels of Bio are generated following [Lanckriet et al., 2004b]; 20news uses diverse text representations [Kolda, 1997], leading to 62 different kernels; for the other data, Gaussian kernels with different widths are constructed. The trade-off parameter C of SVM is set to a sensible value based on cross-validation. We vary the number of samples for training and randomly sample 30 different subsets in each setup. The average performance is recorded.

5.2 Experiment Results

Due to the space limit, we report only some representative results in Tables 3-5; the details are presented in an extended technical report [Tang et al., 2009]. "Tr Ratio" in the first row denotes the training ratio, the percentage of samples used for training. The first column denotes the portion of kernels shared among labels, obtained by varying the parameter β in (10), with the Independent and Same models being the extremes. The last row ("Diff") shows the performance difference between the best model and the worst one. Boldface denotes the best in each column unless there is no significant difference. Below, we seek to address the questions raised in the introduction.

Does kernel regularization yield any effect? The maximal differences between the various models on all data sets are plotted in Figure 1. The x-axis denotes increasing training data and the y-axis denotes the maximal performance difference. Clearly, when the training ratio is small, there is a difference between the models, especially for USPS, Yaleface, and Ligand. For instance, the difference can be as large as 9% when only 2% of the USPS data is used for training. This kind of classification with rare samples is common in applications like object recognition. The Bio and Letter data demonstrate a medium difference, between 1-2%.
But for other data sets like Yeast, the difference (< 1%) is negligible. The difference diminishes as training data increases; this is common across all data sets. When the training set is moderately large, kernel regularization has little effect; it matters only when training samples are few.

[Figure 1: Performance Difference]

Which model excels? Here, we study which model (Independent, Partial, or Same) excels when there is a difference. In Tables 3-5, the entries in bold denote the best one in each setting. It is noticed that the Same Model tends to be the winner or a close runner-up most of the time, a trend observed for almost all the data. Figure 2 shows the average performance and standard deviation of the different models when 10% of the Ligand data is used for training. Clearly, the general trend is that sharing the same kernel is likely to be more robust than the Independent Model. Note that the variance is large because of the small training sample. In fact, the Same Model performs best or close to best in 25 out of 30 trials. So it is a wiser choice to share the same kernel even if the binary classification tasks are quite different; the Independent Model is more likely to overfit the data. The Partial Model, on the other hand, wins only when it is close to the Same Model (sharing 90% of the kernel, as in Table 4). Mostly, its performance stays between the Independent and Same Models.

Why is the Same Model better? As demonstrated above, the Same Model outperforms the Partial and Independent Models. In Table 2, we show the average kernel weights over 30 runs when 2% of the USPS data is employed for training. Each column stands for a kernel; the weights for kernels 8-10 are not presented, as they are all 0. The first two blocks represent the kernel weights of each class obtained via the Independent Model and the Partial Model sharing 50% of the kernel, respectively. The row that follows gives the weights produced by the Same Model. All the models, no matter which class, prefer to choose K5 and K6. However, the Same Model assigns a very large weight to the 6th kernel. In the last row, we also present the weights obtained when 60% of the data is used for training, in which case the Independent Model and the Same Model tend to select almost the same kernels. Compared with the others, the weights of the Same Model obtained using 2% of the data are closest to the solution obtained with 60% of the data.
In other words, forcing all the binary classification tasks to share the same kernel is tantamount to increasing the number of training samples, resulting in a more robust kernel.

Which model is more scalable? Regularization on kernel difference matters only marginally once the samples are more than a few, so a method requiring less computational cost is favorable. Our algorithm consists of multiple iterations. In each iteration, we need to solve $k$ SVMs given fixed kernels. For each binary classification problem, combining the kernel matrices costs $O(pn^2)$. The time complexity of SVM is $O(n^\eta)$ with $\eta \in [1, 2.3]$ [Platt, 1999]. After that, an LP with $O(pk)$ variables (the kernel weights) and an increasing number of constraints needs to be solved. We notice that the algorithm terminates within dozens of iterations, and SVM computation dominates the time in each iteration if $p \ll n$. Hence, the total time complexity is approximately $O(Ikpn^2) + O(Ikn^\eta)$, where $I$ is the number of iterations. The Same Model uses the same kernel for all the binary classification problems and thus requires less time for kernel combination. Moreover, compared with the Partial Model, only $O(p)$ instead of $O(pk)$ variables (kernel weights) need to be determined, resulting in less time to solve the LP. With the Independent Model, the total time for SVM training and kernel combination remains almost the same as for Partial. Rather than one LP with $O(pk)$ variables, Independent solves $k$ LPs, each with only $O(p)$ variables, in each iteration, potentially saving some computation time. One advantage of the Independent Model is that it decomposes the problem into multiple independent kernel selection problems, which can be parallelized seamlessly on a multi-core CPU or a cluster. In Figure 3, we plot the average computation time of the various models on the Ligand data, on a PC with an Intel P4 2.8 GHz CPU and 1.5 GB memory.

[Figure 2: Performance on Ligand]
[Figure 3: Efficiency Comparison]
[Table 2: Kernel Weights of Different Models. Columns K1-K7; rows: per-class weights of the Independent (I) and Partial (P) models, the Same Model (S), and the Same Model at Tr = 60%.]
We only plot the Partial Model sharing 50% of the kernel, to keep the figure legible.

[Table 3: Ligand Result. Tr Ratio 10%-80%; rows: Independent, Partial models, Same, Diff.]
[Table 4: Bio Result. Tr Ratio 1%-30%; rows: Independent, Partial models, Same, Diff.]
[Table 5: USPS Result. Tr Ratio 2%-60%; rows: Independent, Partial models, Same, Diff.]

All the models' computation times are of similar magnitude with respect to the number of samples, as we have analyzed, but the Same Model takes less time to arrive at a solution. A similar trend is observed for the other data sets. The Same Model, with stricter constraints, is indeed more efficient than the Independent and Partial Models if parallel computation is not considered. So, in terms of both classification performance and efficiency, the Same Model should be preferred. The Partial Model, though seemingly more reasonable for matching the relationships between labels, should not be considered, given its marginal improvement and additional computational cost. This conclusion, we believe, will be helpful and suggestive for other practitioners.

A special case: Average Kernel. Here we examine one special case of the Same Model: the average of the base kernels. The first block of Table 6 shows the performance of MKL with the Same Model compared with the average kernel on Ligand over 30 runs. Clearly, the Same Model is almost always the winner. It should be emphasized that the simple average actually performs reasonably well, especially when the base kernels are good. The effect is mostly observable when samples are few (say, only 10% training data). Interestingly, as the training ratio increases to 60%-80%, the performance of the Average Kernel decreases, while the Same Model's performance improves consistently. This is because the Same Model can learn an optimal kernel, while Average does not exploit the increasing label information for kernel combination. A key difference between the Same Model and the Average Kernel is that the solution of the former is sparse. For instance, the Same Model on the Ligand data picks 2-3 base kernels for the final solution, while Average has to consider all 15 base kernels.
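The Average Kernel baseline is simply the Same Model with fixed uniform weights $\theta_i = 1/p$, whereas the learned solution is sparse. A small sketch of the two weightings, using stand-in PSD matrices (real Gram matrices would come from the data; the chosen indices and weights are illustrative):

```python
import numpy as np

p = 15  # number of base kernels, as for the Ligand data
rng = np.random.default_rng(1)
# Stand-in base Gram matrices, PSD by construction (A A^T is always PSD).
grams = [(lambda A: A @ A.T)(rng.standard_normal((6, 4))) for _ in range(p)]

avg_weights = np.full(p, 1.0 / p)  # Average Kernel: every base kernel contributes
sparse_weights = np.zeros(p)       # e.g. a Same Model solution picking 2 kernels
sparse_weights[[4, 5]] = [0.3, 0.7]

G_avg = sum(w * G for w, G in zip(avg_weights, grams))
G_sparse = sum(w * G for w, G in zip(sparse_weights, grams))
# Only the kernels with nonzero weight ever need to be computed.
n_needed = int(np.count_nonzero(sparse_weights))
```

The practical consequence is the one noted above: with a sparse solution only the selected Gram matrices need to be evaluated, which matters when individual kernels (diffusion, graph, or structure kernels) are expensive to compute.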
For data like microarrays, graphs, and structures, some specialized kernels are computationally expensive; this is even worse if we have hundreds of base kernels. Thus, it is desirable to select only the relevant kernels for prediction. Another potential disadvantage of the Average Kernel is robustness. To verify this, we add an additional linear kernel with random noise. The corresponding performance is presented in the second block of Table 6. The performance of the Average Kernel deteriorates, whereas the Same Model's performance remains nearly unchanged. This implies that the Average Kernel can be affected by noisy base kernels, whereas the Same Model is capable of picking the right ones.

6 Conclusions

Kernel learning for multi-label or multi-class classification problems is important for kernel parameter tuning and heterogeneous data fusion, yet it has not been clear whether a specific kernel or the same kernel should be employed in practice. In this work, we systematically study the effects of different kernel sharing strategies. We present a unified framework with kernel regularization such that flexible degrees of kernel sharing are viable. Under this framework, three models are compared: the Independent, Partial, and Same Models. It turns out that the same kernel is preferred for classification even if the labels are quite different. When samples are few, the Same Model tends to yield a more robust kernel. The Independent Model, on the contrary, is likely to learn a bad kernel due to overfitting. The Partial Model, though occasionally better, lies in between most of the time. However, the differences between these models vanish quickly with increasing training samples.

[Table 6: Same Model compared with Average Kernel on Ligand data (Tr ratio 10%-80%). The first block is the performance when all the kernels are reasonably good; the second block is the performance when a noisy kernel is included in the base kernel set. Rows: Same, Average.]

All the models yield similar classification performance when samples are large; in this case, Independent and Same are more efficient. The Same Model, somewhat surprisingly, is the most efficient method. The Partial Model, though asymptotically of the same time complexity, often needs more computation time. It is observed that for some data, simply using the average kernel (a special case of the Same Model), with proper parameter tuning for SVM, occasionally gives reasonably good performance. This also confirms our conclusion that selecting the same kernel for all labels is more robust in reality. However, the average kernel is not sparse and can be sensitive to noisy kernels. In this work, we only consider kernels in the input space. It could be interesting to explore the construction of kernels in the output space as well.

Acknowledgments

This work was supported by NSF IIS , IIS , CCF , NIH R01-HG002516, and NGA HM . We thank Dr. Rong Jin for helpful suggestions.

A Proof for Theorem 3.1

Proof. Based on Eqs. (4), (5) and (6), we can assume $\sum_i \zeta_i = c_1$, $\sum_i \gamma_i^t = c_2$, with $c_1 + c_2 = 1$, $c_1, c_2 \ge 0$. Let $G_i^t(\alpha) = [\alpha^t]^T (G_i \circ y^t [y^t]^T)\, \alpha^t$ as in Eq. (8). Then Eq. (3) can be reformulated as:

$$\max_{\{\alpha^t\}} \min_{\{G^t\}} \; \sum_{t=1}^k [\alpha^t]^T e - \frac{1}{2} \Big( \sum_{t=1}^k [\alpha^t]^T \big( G^t \circ y^t [y^t]^T \big)\, \alpha^t - 2\lambda\, \Omega \Big)$$
$$= \max_{\{\alpha^t\}} \; \sum_{t=1}^k [\alpha^t]^T e - \frac{1}{2} \max_{\zeta, \gamma^t} \Big( \sum_{t=1}^k \sum_{i=1}^p (\zeta_i + \gamma_i^t)\, G_i^t(\alpha) - k\lambda c_2 \Big)$$

It follows that the second term can be further reformulated as:

$$\max_{c_1, c_2, \zeta, \gamma^t} \; \sum_{i=1}^p \zeta_i \sum_{t=1}^k G_i^t(\alpha) + \sum_{t=1}^k \sum_{i=1}^p \gamma_i^t\, G_i^t(\alpha) - k\lambda c_2$$
$$= \max_{c_1, c_2} \Big( \max_{\sum_i \zeta_i = c_1} \sum_{i=1}^p \zeta_i \sum_{t=1}^k G_i^t(\alpha) + \sum_{t=1}^k \max_{\sum_i \gamma_i^t = c_2} \sum_{i=1}^p \gamma_i^t\, G_i^t(\alpha) - k\lambda c_2 \Big)$$
$$= \max_{c_1 + c_2 = 1} \Big( c_1 \max_i \sum_{t=1}^k G_i^t(\alpha) + c_2 \big( \sum_{t=1}^k \max_i G_i^t(\alpha) - k\lambda \big) \Big)$$
$$= \max \Big( \max_i \sum_{t=1}^k G_i^t(\alpha), \;\; \sum_{t=1}^k \max_i G_i^t(\alpha) - k\lambda \Big)$$

By adding the constraints

$$s \ge s_0, \quad s \ge \sum_{t=1}^k s^t - k\lambda,$$
$$s_0 \ge \sum_{t=1}^k G_i^t(\alpha), \; i = 1, \ldots, p, \quad s^t \ge G_i^t(\alpha), \; i = 1, \ldots, p, \; t = 1, \ldots, k,$$

we thus prove the theorem.

References

[Bach et al., 2004] Francis R. Bach, Gert R. G. Lanckriet, and Michael I. Jordan. Multiple kernel learning, conic duality, and the SMO algorithm. In ICML, 2004.
[Jebara, 2004] Tony Jebara. Multi-task feature and kernel selection for SVMs. In ICML, 2004.
[Ji et al., 2008] S. Ji, L. Sun, R. Jin, and J. Ye. Multi-label multiple kernel learning. In NIPS, 2008.
[Kolda, 1997] Tamara G. Kolda. Limited-memory matrix methods with applications. PhD thesis, 1997.
[Lanckriet et al., 2004a] Gert R. G. Lanckriet, Nello Cristianini, Peter L. Bartlett, Laurent El Ghaoui, and Michael I. Jordan. Learning the kernel matrix with semidefinite programming. JMLR, 5, 2004.
[Lanckriet et al., 2004b] Gert R. G. Lanckriet et al. Kernel-based data fusion and its application to protein function prediction in yeast. In PSB, 2004.
[Platt, 1999] John C. Platt. Fast training of support vector machines using sequential minimal optimization. 1999.
[Rakotomamonjy et al., 2007] Alain Rakotomamonjy, Francis R. Bach, Stéphane Canu, and Yves Grandvalet. More efficiency in multiple kernel learning. In ICML, 2007.
[Rifkin and Klautau, 2004] Ryan Rifkin and Aldebaro Klautau. In defense of one-vs-all classification. JMLR, 5, 2004.
[Sonnenburg et al., 2007] Sören Sonnenburg, Gunnar Rätsch, Christin Schäfer, and Bernhard Schölkopf. Large scale multiple kernel learning. JMLR, 7.
[Tang et al., 2009] Lei Tang, Jianhui Chen, and Jieping Ye. On multiple kernel learning with multiple labels. Technical report, Arizona State University, 2009.
[Tsuda and Noble, 2004] Koji Tsuda and William Stafford Noble. Learning kernels from biological networks by maximizing entropy. Bioinformatics, 20, 2004.
[Zien, 2007] Alexander Zien. Multiclass multiple kernel learning. In ICML, 2007.
More informationCS246: Mining Massive Datasets Jure Leskovec, Stanford University
CS46: Mnng Massve Datasets Jure Leskovec, Stanford Unversty http://cs46.stanford.edu /19/013 Jure Leskovec, Stanford CS46: Mnng Massve Datasets, http://cs46.stanford.edu Perceptron: y = sgn( x Ho to fnd
More informationCategories and Subject Descriptors B.7.2 [Integrated Circuits]: Design Aids Verification. General Terms Algorithms
3. Fndng Determnstc Soluton from Underdetermned Equaton: Large-Scale Performance Modelng by Least Angle Regresson Xn L ECE Department, Carnege Mellon Unversty Forbs Avenue, Pttsburgh, PA 3 xnl@ece.cmu.edu
More informationLECTURE : MANIFOLD LEARNING
LECTURE : MANIFOLD LEARNING Rta Osadchy Some sldes are due to L.Saul, V. C. Raykar, N. Verma Topcs PCA MDS IsoMap LLE EgenMaps Done! Dmensonalty Reducton Data representaton Inputs are real-valued vectors
More informationAn Iterative Solution Approach to Process Plant Layout using Mixed Integer Optimisation
17 th European Symposum on Computer Aded Process Engneerng ESCAPE17 V. Plesu and P.S. Agach (Edtors) 2007 Elsever B.V. All rghts reserved. 1 An Iteratve Soluton Approach to Process Plant Layout usng Mxed
More informationUnsupervised Learning
Pattern Recognton Lecture 8 Outlne Introducton Unsupervsed Learnng Parametrc VS Non-Parametrc Approach Mxture of Denstes Maxmum-Lkelhood Estmates Clusterng Prof. Danel Yeung School of Computer Scence and
More informationLearning to Project in Multi-Objective Binary Linear Programming
Learnng to Project n Mult-Objectve Bnary Lnear Programmng Alvaro Serra-Altamranda Department of Industral and Management System Engneerng, Unversty of South Florda, Tampa, FL, 33620 USA, amserra@mal.usf.edu,
More informationSVM-based Learning for Multiple Model Estimation
SVM-based Learnng for Multple Model Estmaton Vladmr Cherkassky and Yunqan Ma Department of Electrcal and Computer Engneerng Unversty of Mnnesota Mnneapols, MN 55455 {cherkass,myq}@ece.umn.edu Abstract:
More informationMathematics 256 a course in differential equations for engineering students
Mathematcs 56 a course n dfferental equatons for engneerng students Chapter 5. More effcent methods of numercal soluton Euler s method s qute neffcent. Because the error s essentally proportonal to the
More informationPrivate Information Retrieval (PIR)
2 Levente Buttyán Problem formulaton Alce wants to obtan nformaton from a database, but she does not want the database to learn whch nformaton she wanted e.g., Alce s an nvestor queryng a stock-market
More information12/2/2009. Announcements. Parametric / Non-parametric. Case-Based Reasoning. Nearest-Neighbor on Images. Nearest-Neighbor Classification
Introducton to Artfcal Intellgence V22.0472-001 Fall 2009 Lecture 24: Nearest-Neghbors & Support Vector Machnes Rob Fergus Dept of Computer Scence, Courant Insttute, NYU Sldes from Danel Yeung, John DeNero
More informationComplex Numbers. Now we also saw that if a and b were both positive then ab = a b. For a second let s forget that restriction and do the following.
Complex Numbers The last topc n ths secton s not really related to most of what we ve done n ths chapter, although t s somewhat related to the radcals secton as we wll see. We also won t need the materal
More informationMeta-heuristics for Multidimensional Knapsack Problems
2012 4th Internatonal Conference on Computer Research and Development IPCSIT vol.39 (2012) (2012) IACSIT Press, Sngapore Meta-heurstcs for Multdmensonal Knapsack Problems Zhbao Man + Computer Scence Department,
More informationThe Greedy Method. Outline and Reading. Change Money Problem. Greedy Algorithms. Applications of the Greedy Strategy. The Greedy Method Technique
//00 :0 AM Outlne and Readng The Greedy Method The Greedy Method Technque (secton.) Fractonal Knapsack Problem (secton..) Task Schedulng (secton..) Mnmum Spannng Trees (secton.) Change Money Problem Greedy
More informationDetermining the Optimal Bandwidth Based on Multi-criterion Fusion
Proceedngs of 01 4th Internatonal Conference on Machne Learnng and Computng IPCSIT vol. 5 (01) (01) IACSIT Press, Sngapore Determnng the Optmal Bandwdth Based on Mult-crteron Fuson Ha-L Lang 1+, Xan-Mn
More informationHermite Splines in Lie Groups as Products of Geodesics
Hermte Splnes n Le Groups as Products of Geodescs Ethan Eade Updated May 28, 2017 1 Introducton 1.1 Goal Ths document defnes a curve n the Le group G parametrzed by tme and by structural parameters n the
More informationS1 Note. Basis functions.
S1 Note. Bass functons. Contents Types of bass functons...1 The Fourer bass...2 B-splne bass...3 Power and type I error rates wth dfferent numbers of bass functons...4 Table S1. Smulaton results of type
More informationTowards Semantic Knowledge Propagation from Text to Web Images
Guoun Q (Unversty of Illnos at Urbana-Champagn) Charu C. Aggarwal (IBM T. J. Watson Research Center) Thomas Huang (Unversty of Illnos at Urbana-Champagn) Towards Semantc Knowledge Propagaton from Text
More informationTerm Weighting Classification System Using the Chi-square Statistic for the Classification Subtask at NTCIR-6 Patent Retrieval Task
Proceedngs of NTCIR-6 Workshop Meetng, May 15-18, 2007, Tokyo, Japan Term Weghtng Classfcaton System Usng the Ch-square Statstc for the Classfcaton Subtask at NTCIR-6 Patent Retreval Task Kotaro Hashmoto
More informationMachine Learning: Algorithms and Applications
14/05/1 Machne Learnng: Algorthms and Applcatons Florano Zn Free Unversty of Bozen-Bolzano Faculty of Computer Scence Academc Year 011-01 Lecture 10: 14 May 01 Unsupervsed Learnng cont Sldes courtesy of
More informationSENSITIVITY ANALYSIS IN LINEAR PROGRAMMING USING A CALCULATOR
SENSITIVITY ANALYSIS IN LINEAR PROGRAMMING USING A CALCULATOR Judth Aronow Rchard Jarvnen Independent Consultant Dept of Math/Stat 559 Frost Wnona State Unversty Beaumont, TX 7776 Wnona, MN 55987 aronowju@hal.lamar.edu
More informationUser Authentication Based On Behavioral Mouse Dynamics Biometrics
User Authentcaton Based On Behavoral Mouse Dynamcs Bometrcs Chee-Hyung Yoon Danel Donghyun Km Department of Computer Scence Department of Computer Scence Stanford Unversty Stanford Unversty Stanford, CA
More informationMessage-Passing Algorithms for Quadratic Programming Formulations of MAP Estimation
Message-Passng Algorthms for Quadratc Programmng Formulatons of MAP Estmaton Akshat Kumar Department of Computer Scence Unversty of Massachusetts Amherst akshat@cs.umass.edu Shlomo Zlbersten Department
More informationKent State University CS 4/ Design and Analysis of Algorithms. Dept. of Math & Computer Science LECT-16. Dynamic Programming
CS 4/560 Desgn and Analyss of Algorthms Kent State Unversty Dept. of Math & Computer Scence LECT-6 Dynamc Programmng 2 Dynamc Programmng Dynamc Programmng, lke the dvde-and-conquer method, solves problems
More informationToday s Outline. Sorting: The Big Picture. Why Sort? Selection Sort: Idea. Insertion Sort: Idea. Sorting Chapter 7 in Weiss.
Today s Outlne Sortng Chapter 7 n Wess CSE 26 Data Structures Ruth Anderson Announcements Wrtten Homework #6 due Frday 2/26 at the begnnng of lecture Proect Code due Mon March 1 by 11pm Today s Topcs:
More informationy and the total sum of
Lnear regresson Testng for non-lnearty In analytcal chemstry, lnear regresson s commonly used n the constructon of calbraton functons requred for analytcal technques such as gas chromatography, atomc absorpton
More informationActive Contours/Snakes
Actve Contours/Snakes Erkut Erdem Acknowledgement: The sldes are adapted from the sldes prepared by K. Grauman of Unversty of Texas at Austn Fttng: Edges vs. boundares Edges useful sgnal to ndcate occludng
More informationA Unified Framework for Semantics and Feature Based Relevance Feedback in Image Retrieval Systems
A Unfed Framework for Semantcs and Feature Based Relevance Feedback n Image Retreval Systems Ye Lu *, Chunhu Hu 2, Xngquan Zhu 3*, HongJang Zhang 2, Qang Yang * School of Computng Scence Smon Fraser Unversty
More informationParallelism for Nested Loops with Non-uniform and Flow Dependences
Parallelsm for Nested Loops wth Non-unform and Flow Dependences Sam-Jn Jeong Dept. of Informaton & Communcaton Engneerng, Cheonan Unversty, 5, Anseo-dong, Cheonan, Chungnam, 330-80, Korea. seong@cheonan.ac.kr
More informationAnnouncements. Supervised Learning
Announcements See Chapter 5 of Duda, Hart, and Stork. Tutoral by Burge lnked to on web page. Supervsed Learnng Classfcaton wth labeled eamples. Images vectors n hgh-d space. Supervsed Learnng Labeled eamples
More informationModule Management Tool in Software Development Organizations
Journal of Computer Scence (5): 8-, 7 ISSN 59-66 7 Scence Publcatons Management Tool n Software Development Organzatons Ahmad A. Al-Rababah and Mohammad A. Al-Rababah Faculty of IT, Al-Ahlyyah Amman Unversty,
More informationClassifier Selection Based on Data Complexity Measures *
Classfer Selecton Based on Data Complexty Measures * Edth Hernández-Reyes, J.A. Carrasco-Ochoa, and J.Fco. Martínez-Trndad Natonal Insttute for Astrophyscs, Optcs and Electroncs, Lus Enrque Erro No.1 Sta.
More informationReview of approximation techniques
CHAPTER 2 Revew of appromaton technques 2. Introducton Optmzaton problems n engneerng desgn are characterzed by the followng assocated features: the objectve functon and constrants are mplct functons evaluated
More informationParallel matrix-vector multiplication
Appendx A Parallel matrx-vector multplcaton The reduced transton matrx of the three-dmensonal cage model for gel electrophoress, descrbed n secton 3.2, becomes excessvely large for polymer lengths more
More informationThe Codesign Challenge
ECE 4530 Codesgn Challenge Fall 2007 Hardware/Software Codesgn The Codesgn Challenge Objectves In the codesgn challenge, your task s to accelerate a gven software reference mplementaton as fast as possble.
More informationNAG Fortran Library Chapter Introduction. G10 Smoothing in Statistics
Introducton G10 NAG Fortran Lbrary Chapter Introducton G10 Smoothng n Statstcs Contents 1 Scope of the Chapter... 2 2 Background to the Problems... 2 2.1 Smoothng Methods... 2 2.2 Smoothng Splnes and Regresson
More informationA Modified Median Filter for the Removal of Impulse Noise Based on the Support Vector Machines
A Modfed Medan Flter for the Removal of Impulse Nose Based on the Support Vector Machnes H. GOMEZ-MORENO, S. MALDONADO-BASCON, F. LOPEZ-FERRERAS, M. UTRILLA- MANSO AND P. GIL-JIMENEZ Departamento de Teoría
More informationFuzzy Modeling of the Complexity vs. Accuracy Trade-off in a Sequential Two-Stage Multi-Classifier System
Fuzzy Modelng of the Complexty vs. Accuracy Trade-off n a Sequental Two-Stage Mult-Classfer System MARK LAST 1 Department of Informaton Systems Engneerng Ben-Guron Unversty of the Negev Beer-Sheva 84105
More informationCS434a/541a: Pattern Recognition Prof. Olga Veksler. Lecture 15
CS434a/541a: Pattern Recognton Prof. Olga Veksler Lecture 15 Today New Topc: Unsupervsed Learnng Supervsed vs. unsupervsed learnng Unsupervsed learnng Net Tme: parametrc unsupervsed learnng Today: nonparametrc
More informationA MOVING MESH APPROACH FOR SIMULATION BUDGET ALLOCATION ON CONTINUOUS DOMAINS
Proceedngs of the Wnter Smulaton Conference M E Kuhl, N M Steger, F B Armstrong, and J A Jones, eds A MOVING MESH APPROACH FOR SIMULATION BUDGET ALLOCATION ON CONTINUOUS DOMAINS Mark W Brantley Chun-Hung
More informationBiostatistics 615/815
The E-M Algorthm Bostatstcs 615/815 Lecture 17 Last Lecture: The Smplex Method General method for optmzaton Makes few assumptons about functon Crawls towards mnmum Some recommendatons Multple startng ponts
More informationEECS 730 Introduction to Bioinformatics Sequence Alignment. Luke Huan Electrical Engineering and Computer Science
EECS 730 Introducton to Bonformatcs Sequence Algnment Luke Huan Electrcal Engneerng and Computer Scence http://people.eecs.ku.edu/~huan/ HMM Π s a set of states Transton Probabltes a kl Pr( l 1 k Probablty
More informationCompiler Design. Spring Register Allocation. Sample Exercises and Solutions. Prof. Pedro C. Diniz
Compler Desgn Sprng 2014 Regster Allocaton Sample Exercses and Solutons Prof. Pedro C. Dnz USC / Informaton Scences Insttute 4676 Admralty Way, Sute 1001 Marna del Rey, Calforna 90292 pedro@s.edu Regster
More informationDiscriminative classifiers for object classification. Last time
Dscrmnatve classfers for object classfcaton Thursday, Nov 12 Krsten Grauman UT Austn Last tme Supervsed classfcaton Loss and rsk, kbayes rule Skn color detecton example Sldng ndo detecton Classfers, boostng
More informationOutline. Self-Organizing Maps (SOM) US Hebbian Learning, Cntd. The learning rule is Hebbian like:
Self-Organzng Maps (SOM) Turgay İBRİKÇİ, PhD. Outlne Introducton Structures of SOM SOM Archtecture Neghborhoods SOM Algorthm Examples Summary 1 2 Unsupervsed Hebban Learnng US Hebban Learnng, Cntd 3 A
More informationSemi-supervised Mixture of Kernels via LPBoost Methods
Sem-supervsed Mxture of Kernels va LPBoost Methods Jnbo B Glenn Fung Murat Dundar Bharat Rao Computer Aded Dagnoss and Therapy Solutons Semens Medcal Solutons, Malvern, PA 19355 nbo.b, glenn.fung, murat.dundar,
More informationTaxonomy of Large Margin Principle Algorithms for Ordinal Regression Problems
Taxonomy of Large Margn Prncple Algorthms for Ordnal Regresson Problems Amnon Shashua Computer Scence Department Stanford Unversty Stanford, CA 94305 emal: shashua@cs.stanford.edu Anat Levn School of Computer
More informationRandom Kernel Perceptron on ATTiny2313 Microcontroller
Random Kernel Perceptron on ATTny233 Mcrocontroller Nemanja Djurc Department of Computer and Informaton Scences, Temple Unversty Phladelpha, PA 922, USA nemanja.djurc@temple.edu Slobodan Vucetc Department
More informationWavefront Reconstructor
A Dstrbuted Smplex B-Splne Based Wavefront Reconstructor Coen de Vsser and Mchel Verhaegen 14-12-201212 2012 Delft Unversty of Technology Contents Introducton Wavefront reconstructon usng Smplex B-Splnes
More informationHelsinki University Of Technology, Systems Analysis Laboratory Mat Independent research projects in applied mathematics (3 cr)
Helsnk Unversty Of Technology, Systems Analyss Laboratory Mat-2.08 Independent research projects n appled mathematcs (3 cr) "! #$&% Antt Laukkanen 506 R ajlaukka@cc.hut.f 2 Introducton...3 2 Multattrbute
More informationIncremental Learning with Support Vector Machines and Fuzzy Set Theory
The 25th Workshop on Combnatoral Mathematcs and Computaton Theory Incremental Learnng wth Support Vector Machnes and Fuzzy Set Theory Yu-Mng Chuang 1 and Cha-Hwa Ln 2* 1 Department of Computer Scence and
More informationFast Feature Value Searching for Face Detection
Vol., No. 2 Computer and Informaton Scence Fast Feature Value Searchng for Face Detecton Yunyang Yan Department of Computer Engneerng Huayn Insttute of Technology Hua an 22300, Chna E-mal: areyyyke@63.com
More informationMachine Learning 9. week
Machne Learnng 9. week Mappng Concept Radal Bass Functons (RBF) RBF Networks 1 Mappng It s probably the best scenaro for the classfcaton of two dataset s to separate them lnearly. As you see n the below
More informationFace Recognition University at Buffalo CSE666 Lecture Slides Resources:
Face Recognton Unversty at Buffalo CSE666 Lecture Sldes Resources: http://www.face-rec.org/algorthms/ Overvew of face recognton algorthms Correlaton - Pxel based correspondence between two face mages Structural
More informationCluster Analysis of Electrical Behavior
Journal of Computer and Communcatons, 205, 3, 88-93 Publshed Onlne May 205 n ScRes. http://www.scrp.org/ournal/cc http://dx.do.org/0.4236/cc.205.350 Cluster Analyss of Electrcal Behavor Ln Lu Ln Lu, School
More informationHierarchical clustering for gene expression data analysis
Herarchcal clusterng for gene expresson data analyss Gorgo Valentn e-mal: valentn@ds.unm.t Clusterng of Mcroarray Data. Clusterng of gene expresson profles (rows) => dscovery of co-regulated and functonally
More informationLOOP ANALYSIS. The second systematic technique to determine all currents and voltages in a circuit
LOOP ANALYSS The second systematic technique to determine all currents and voltages in a circuit T S DUAL TO NODE ANALYSS - T FRST DETERMNES ALL CURRENTS N A CRCUT AND THEN T USES OHM S LAW TO COMPUTE
More informationAPPLICATION OF MULTIVARIATE LOSS FUNCTION FOR ASSESSMENT OF THE QUALITY OF TECHNOLOGICAL PROCESS MANAGEMENT
3. - 5. 5., Brno, Czech Republc, EU APPLICATION OF MULTIVARIATE LOSS FUNCTION FOR ASSESSMENT OF THE QUALITY OF TECHNOLOGICAL PROCESS MANAGEMENT Abstract Josef TOŠENOVSKÝ ) Lenka MONSPORTOVÁ ) Flp TOŠENOVSKÝ
More information3. CR parameters and Multi-Objective Fitness Function
3 CR parameters and Mult-objectve Ftness Functon 41 3. CR parameters and Mult-Objectve Ftness Functon 3.1. Introducton Cogntve rados dynamcally confgure the wreless communcaton system, whch takes beneft
More informationUSING GRAPHING SKILLS
Name: BOLOGY: Date: _ Class: USNG GRAPHNG SKLLS NTRODUCTON: Recorded data can be plotted on a graph. A graph s a pctoral representaton of nformaton recorded n a data table. t s used to show a relatonshp
More informationMulti-objective Optimization Using Adaptive Explicit Non-Dominated Region Sampling
11 th World Congress on Structural and Multdscplnary Optmsaton 07 th -12 th, June 2015, Sydney Australa Mult-objectve Optmzaton Usng Adaptve Explct Non-Domnated Regon Samplng Anrban Basudhar Lvermore Software
More informationFeature-Based Matrix Factorization
Feature-Based Matrx Factorzaton arxv:1109.2271v3 [cs.ai] 29 Dec 2011 Tanq Chen, Zhao Zheng, Quxa Lu, Wenan Zhang, Yong Yu {tqchen,zhengzhao,luquxa,wnzhang,yyu}@apex.stu.edu.cn Apex Data & Knowledge Management
More informationSimulation: Solving Dynamic Models ABE 5646 Week 11 Chapter 2, Spring 2010
Smulaton: Solvng Dynamc Models ABE 5646 Week Chapter 2, Sprng 200 Week Descrpton Readng Materal Mar 5- Mar 9 Evaluatng [Crop] Models Comparng a model wth data - Graphcal, errors - Measures of agreement
More informationMOBILE Cloud Computing (MCC) extends the capabilities
1 Resource Sharng of a Computng Access Pont for Mult-user Moble Cloud Offloadng wth Delay Constrants Meng-Hs Chen, Student Member, IEEE, Mn Dong, Senor Member, IEEE, Ben Lang, Fellow, IEEE arxv:1712.00030v2
More informationRange images. Range image registration. Examples of sampling patterns. Range images and range surfaces
Range mages For many structured lght scanners, the range data forms a hghly regular pattern known as a range mage. he samplng pattern s determned by the specfc scanner. Range mage regstraton 1 Examples
More informationUsing Neural Networks and Support Vector Machines in Data Mining
Usng eural etworks and Support Vector Machnes n Data Mnng RICHARD A. WASIOWSKI Computer Scence Department Calforna State Unversty Domnguez Hlls Carson, CA 90747 USA Abstract: - Multvarate data analyss
More informationTitle: A Novel Protocol for Accuracy Assessment in Classification of Very High Resolution Images
2009 IEEE. Personal use of ths materal s permtted. Permsson from IEEE must be obtaned for all other uses, n any current or future meda, ncludng reprntng/republshng ths materal for advertsng or promotonal
More information