A Hill-climbing Landmarker Generation Algorithm Based on Efficiency and Correlativity Criteria
Daren Ler, Irena Koprinska, and Sanjay Chawla
School of Information Technologies, University of Sydney
Madsen Building F09, University of Sydney, NSW 2006, Australia
{ler, irena,

Abstract

For a given classification task, there are typically several learning algorithms available. The question then arises: which is the most appropriate algorithm to apply? Recently, we proposed a new algorithm for making such a selection based on landmarking - a meta-learning strategy that utilises meta-features that are measurements based on efficient learning algorithms. This algorithm, which creates a set of landmarkers that each utilise subsets of the algorithms being landmarked, was shown to be able to estimate accuracy well, even when employing a small fraction of the given algorithms. However, that version of the algorithm has exponential computational complexity for training. In this paper, we propose a hill-climbing version of the landmarker generation algorithm, which requires only polynomial training time complexity. Our experiments show that the landmarkers formed have similar results to the more complex version of the algorithm.

1. Introduction

The choice of machine learning algorithm can be a vital factor in determining the success or failure of a learning solution. Theoretical (Wolpert, 2001), as well as empirical results (Michie et al., 1994) suggest that no one algorithm is generically superior; this means that we cannot always rely on one particular algorithm to solve all our learning problems. Accordingly, selection is typically done via some form of holdout testing. However, with the growing plethora of machine learning algorithms, and because several different representations are usually reasonable for a given problem, such evaluation is often computationally unviable. As an alternative, some meta-learning (Giraud-Carrier et al., 2004; Vilalta and Drissi, 2002) techniques utilise past experience or meta-knowledge in order to learn when to use which learning algorithm.
Copyright 2005, American Association for Artificial Intelligence. All rights reserved.

As with standard machine learning, the success of meta-learning is greatly dependent upon the quality of the features chosen. And while various strategies for defining such meta-features have been proposed (e.g. Kalousis and Hilario, 2001; Brazdil et al., 1994; Michie et al., 1994), there has been no consensus on what constitutes good features. Landmarking (Fürnkranz and Petrak, 2001; Pfahringer et al., 2000) is a new and promising approach that characterises datasets by directly measuring the performance of simple and fast learning algorithms called landmarkers. However, the selection of landmarkers is typically done in an ad hoc fashion, with the landmarkers generated focused on characterising an arbitrary set of algorithms. In previous work (Ler et al., 2004b, 2004c), we reinterpreted the role of landmarkers, defining each as a set of learning algorithms that characterises the pattern of performance of one specific learning algorithm. As criteria, we specified that these landmarkers should each be both efficient and correlated (as compared with the landmarked algorithm). However, the landmarker generation algorithm proposed (based on the all-subsets regression technique (Miller, 2002)) has an exponential training time computational complexity, making it unwieldy. In this paper, we propose an alternate hill-climbing landmarker generation algorithm based on the forward selection (Miller, 2002) method for variable selection in regression. We show that this version of the landmarker generation algorithm performs almost as well as the all-subsets version, while requiring polynomial (as opposed to exponential) training time complexity.

2. Landmarking and Landmarkers

In the literature (Fürnkranz and Petrak, 2001; Pfahringer et al., 2000), a landmarker is typically associated with a single algorithm with low computational complexity. The general idea is that the performance of a learning algorithm on a dataset reveals some characteristics of that dataset.
However, in previous landmarking work (Pfahringer et al., 2000), despite the presence of two landmarker criteria (i.e. efficiency and bias diversity), no actual mechanism for generating appropriate landmarkers was defined, and the choice of landmarkers was made in an ad hoc fashion. In (Ler et al., 2004b, 2004c), landmarking and landmarkers were redefined. Let: (i) a landmarking element be an algorithm whose performance is utilised as a dataset characteristic, and (ii) a landmarker be a function over the performance of a set of landmarking algorithms,
which resembles the performance (e.g. accuracy) of one specific learning algorithm; landmarking is thus the process of generating a set of algorithm estimators, i.e. landmarkers. Consequently, we also suggest new landmarker criteria. Previously (Pfahringer et al., 2000), it was suggested that each landmarking element be (i) efficient, and (ii) bias diverse with respect to the other chosen landmarkers. However, as landmarker (and not landmarking element) criteria, there is no guarantee that the landmarkers selected via these criteria will be able to characterise the performance of one, and even less so, a set of candidate algorithms (Ler et al., 2004b). Instead, in (Ler et al., 2004b, 2004c), we proposed that each landmarker be: (i) correlated, i.e. each landmarker should as closely as possible have its output resemble the performance of the candidate algorithm being landmarked, and (ii) efficient.

2.1 Using Criteria to Select Landmarkers

Intuitively, two algorithms with similar performance will be closer to each other in a space of algorithm performance. Conceptually, the distance between two algorithms a and l can be regarded as ‖a − l‖. Also, we may express ‖a − l‖² as ‖a‖² + ‖l‖² − 2a·l. Thus, if a is close to l, this implies that ‖a − l‖² is small, and thus that a·l is relatively large. This pertains to the correlativity criterion we use to check if a landmarker (e.g. l) is representative of the performance of some candidate algorithm (e.g. a). Accordingly, correlation may be measured based on r (Wickens, 1995):

    r = cos(a, l) = (a·l) / (‖a‖ ‖l‖) = cov(a, l) / (σ_a σ_l)

Several other heuristics similarly apply, including r² (i.e. the squared Pearson's correlation coefficient), Mallow's C_p criterion, and the Akaike and Bayesian Information Criteria (i.e. AIC and BIC respectively) (Miller, 2002). In terms of efficiency, simply utilising the (computational) time taken to run a given landmarker would suffice. However, aggregating the two into a single heuristic is more difficult; several methods to do this have been proposed in (Ler et al., 2004a).
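The correlativity criterion above can be sketched in a few lines of Python. This is an illustrative example, not the authors' code: it treats each algorithm's per-dataset accuracies as a vector and computes r as the Pearson correlation, i.e. the cosine of the angle between the mean-centred vectors; the accuracy values are invented.

```python
import math

def pearson_r(a, l):
    """Pearson correlation between two performance vectors:
    the cosine of the angle between the mean-centred vectors."""
    ma = sum(a) / len(a)
    ml = sum(l) / len(l)
    ca = [x - ma for x in a]
    cl = [x - ml for x in l]
    dot = sum(x * y for x, y in zip(ca, cl))
    return dot / (math.sqrt(sum(x * x for x in ca)) *
                  math.sqrt(sum(y * y for y in cl)))

# Hypothetical accuracies of a candidate algorithm and a cheap landmarker
# on five datasets.
acc_a = [0.81, 0.74, 0.90, 0.66, 0.79]
acc_l = [0.72, 0.65, 0.84, 0.58, 0.70]
r = pearson_r(acc_a, acc_l)
r2 = r * r   # the squared coefficient used as the selection criterion
```

Note that the landmarker's raw accuracies may sit well below the candidate's; only the pattern across datasets matters, which is exactly what the mean-centred cosine captures.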
2.2 Potential Landmarking Elements

Essentially, any learning algorithm could be utilised as a landmarking element. Therefore, the initial search space for landmarking elements is extremely large. One remedy is to only consider algorithms that are less complex versions of the candidate algorithms. Such modifications may be categorised into two groups: Algorithm-specific reductions, which relate to the actual inner workings of one particular algorithm, and could include: limiting the structure formed (e.g. decision stumps for decision trees), and/or limiting the internal search mechanisms within the algorithm (e.g. using randomness (Dietterich, 1997)). Algorithm-generic reductions, which relate to generic modifications that may be applied to any learning algorithm. As per [4], such modifications could be similar to the sub-sampling techniques used in the ensemble literature (Dietterich, 1997). Another method of doing this is to utilise a subset of the given candidate algorithms itself (Ler et al., 2004b, 2004c). This is described in further detail in Section 3.

3. The Proposed Hill-climbing Landmarker Generation Algorithm

In previous work (Ler et al., 2004a, 2004b, 2004c), we reinterpreted the role of landmarkers and proposed to use a subset of the available algorithms as landmarking elements. More specifically, given a set of candidate algorithms A = {a_1, …, a_n}, we seek to select a set of landmarkers L = {l_1, …, l_n} (where each l_i is the landmarker for a_i and utilises the landmarking elements l_i.elements) such that: (i) ∀ l_i ∈ L, l_i.elements ⊆ A, and (ii) the union of all l_i.elements (i.e. L.elements) is a (proper) subset of A. Thus, given the potential landmarking element sets (i.e. each subset of A), the all-subsets landmarker generation algorithm would generate L by selecting the landmarking element set (i.e. the subset of A) that achieved the highest heuristic score. This method required the computation of Σ_{i=1..n−1} C(n, i)·(n − i) linear regression functions, where n = |A|. Given that the computation of each regression function has complexity O(m), where m is the number of performance estimates (i.e.
datasets), the complexity of the all-subsets landmarker generation algorithm is O(nm·2^n), which makes the method unwieldy for high n. (However, note that this complexity is associated with training time and not runtime execution, i.e. application to a new dataset.) As a more efficient alternative, we propose a hill-climbing version of the landmarker generation algorithm, which relates to the previous version of the algorithm in the same manner that standard forward selection (Miller, 2002) relates to all-subsets regression (Miller, 2002). The standard forward selection procedure begins by assigning no independents, i.e. using only the mean value of the dependent for estimation. It then iteratively attempts to add landmarking elements (i.e. independents) one by one. In each iteration, it first computes the regression functions that each utilise a set of independents consisting of the union of the current set of chosen independents and one of the remaining unused independents. Subsequently, the best model new_model is selected using some heuristic or criterion measure (e.g. r², C_p), and is compared with the previous model (i.e. the model including all the previously selected independents) old_model via an F-test. This F-test checks if the computed F = (SSE(old_model) − SSE(new_model)) / MSE(new_model) is greater than 4.0 (which is roughly the F value associated with confidence = 0.95, dfn = 1, and dfd (i.e. m − i − 1) > 10). If this inequality holds, then: (i) the new model is adopted, (ii) the
newly applied independent is removed from the potential set of independents, and (iii) we proceed to the next iteration. Else, the procedure stops and the final adopted model is old_model. For more details on the forward selection procedure see (Miller, 2002). Essentially, the standard forward selection procedure restricts the amount of computation by excluding models that do not include the already chosen independents. More specifically, given a total number of independents n, in any iteration i, only n − (i − 1) regression functions need be constructed and evaluated, each pertaining to the union of the previously chosen independents and one of the remaining independents that had not been chosen in any previous iteration. Thus, instead of evaluating all possible subsets of A and computing Σ_{i=1..n−1} C(n, i)·(n − i) regression functions, the new hill-climbing landmarker generation algorithm at most requires the computation of Σ_{i=1..n−1} (n − i + 1)·(n − i) regression functions. This translates to a (training time) complexity of O(mn³), reducing a previously exponential complexity to a polynomial one. However, this is a hill-climbing approach, and thus may not find the optimal model with respect to the chosen criterion (e.g. r²).

3.1 Algorithm Description

The hill-climbing version of the landmarker generation algorithm is essentially a slightly more complicated version of the standard forward selection procedure described in the previous section; it is described in Figure 1. As per the previous all-subsets version of the algorithm, the general idea is to use a subset of the available algorithms as potential landmarking elements. Loosely, this idea assumes that: (i) several of the available algorithms will have similar performance patterns, such that one pattern is representative of several; and (ii) if a given algorithm has a very dissimilar performance pattern (as compared to the other algorithms), that performance pattern can be correlated to the conjunction of several others.
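The F-test stopping rule and the regression-count comparison above can be sketched as follows. This is an illustrative example, not the authors' code; the SSE/MSE numbers are invented, and the hill-climbing count is a reconstruction of the garbled formula (per-iteration (n − i + 1) dependents times (n − i) candidate independents), consistent with the stated O(mn³).

```python
from math import comb

def f_test_accept(sse_old, sse_new, mse_new, threshold=4.0):
    """Forward-selection stopping rule: accept the extended model when
    F = (SSE(old) - SSE(new)) / MSE(new) exceeds ~4.0 (roughly the 95%
    critical value for 1 numerator df and >10 denominator df)."""
    f = (sse_old - sse_new) / mse_new
    return f > threshold, f

def all_subsets_fits(n):
    """Regressions fitted by the all-subsets version: C(n, i) element
    sets of each size i, times (n - i) dependents to regress for each."""
    return sum(comb(n, i) * (n - i) for i in range(1, n))

def hill_climbing_fits(n):
    """Upper bound for the hill-climbing version: iteration i pairs
    (n - i + 1) dependents with (n - i) candidate independents each."""
    return sum((n - i + 1) * (n - i) for i in range(1, n))

# Adding an independent drops SSE from 2.4 to 1.1 with MSE 0.11, so
# F clearly exceeds 4.0 and the extended model is adopted.
accept, f = f_test_accept(sse_old=2.4, sse_new=1.1, mse_new=0.11)
```

For n = 10 candidate algorithms (the size used in the experiments of Section 4), the counts are 5110 regressions for all-subsets versus at most 330 for hill-climbing.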
To reiterate, given a set of candidate algorithms A = {a_1, …, a_n}, we seek to select a set of landmarkers L = {l_1, …, l_n} (where each l_i is the landmarker for a_i and utilises the landmarking elements l_i.elements) such that: (i) ∀ l_i ∈ L, l_i.elements ⊆ A, and (ii) the union of all l_i.elements (i.e. L.elements) is a (proper) subset of A. For example, given A = {a_1, a_2, a_3, a_4}, a possible set of selected landmarkers may correspond to L, such that L.elements = {a_1, a_2}, and l_1.elements = {a_1}, l_2.elements = {a_2}, l_3.elements = {a_1, a_2}, l_4.elements = {a_2}. As in the all-subsets version, we require that L.elements ⊂ A because our objective is to incur less computational cost than actually evaluating A; without this condition, there is a chance that L.elements = A, which would invalidate the efficiency criterion. Referring back to the example above, we see that to obtain the performance of each a ∈ A on some new dataset s_new, we only have to evaluate a_1 and a_2 on s_new (i.e. we evaluate a_1 and a_2, and use those values to estimate the performance of a_3 and a_4). Thus, if L.elements = A, we would not have made any gains in efficiency, as all a ∈ A would have to be evaluated. This means that in any iteration of the hill-climbing algorithm, we must ensure that the union of the current landmarking algorithms selected must not equal A. (Note that throughout the remainder of this paper, we use the term landmarking elements interchangeably with the term independents, and the term dependents for the remaining algorithms, i.e. A \ independents.)

Inputs:
- A = {a_1, …, a_n}, a set of n candidate algorithms; and
- for a_i ∈ A, φ_{a_i} = {φ_{a_i,1}, …, φ_{a_i,m}}, the performance estimates for algorithm a_i on a corresponding set of datasets S = {s_1, …, s_m}.

Output:
- L = {l_1, …, l_n}, a set of landmarkers, where each l_i is the chosen landmarker for a_i.

Let:
- get_regression_fn(dependent, independents) return the coefficients of the linear regression function corresponding to the input dependent and independents.
- find_heuristic(coefficients) return the heuristic (e.g. r²) for the regression function corresponding to coefficients.
- SSE(coefficients) return the sum of squared errors associated with the regression function corresponding to coefficients.
- MSE(coefficients) return the mean squared error associated with the regression function corresponding to coefficients.
- ∀ a_i ∈ A, φ_{a_i} = {φ_{a_i,1}, …, φ_{a_i,m}} be globally accessible.
- max_independents = the maximum number of algorithms to evaluate.

The hill-climbing landmarker generation algorithm:

generate_landmarkers(A) returns L
 1  L = nil
 2  for a_i ∈ A
 3      L[a_i] = get_regression_fn(a_i, φ)
 4  chosen = φ
 5  valid = true
 6  while valid
 7      fns = nil
 8      sum_heuristic = nil
 9      for a_i ∈ A \ chosen
10          for a_j ∈ A \ (chosen ∪ {a_i})
11              fns[a_i][a_j] = get_regression_fn(a_i, chosen ∪ {a_j})
12              sum_heuristic[a_j] += find_heuristic(fns[a_i][a_j])
13      best_independent = argmax_{a_x ∈ A \ chosen} sum_heuristic[a_x]
14      chosen = chosen ∪ {best_independent}
15      valid_update = false
16      for a_i ∈ A
17          if a_i ∈ chosen
18              L[a_i] = a_i
19          else if (SSE(L[a_i]) − SSE(fns[a_i][best_independent])) / MSE(fns[a_i][best_independent]) > 4.0
20              L[a_i] = fns[a_i][best_independent]
21              valid_update = true
22      if !valid_update or |chosen| == max_independents
23          valid = false

Figure 1. Pseudo-code for the proposed hill-climbing landmarker generation algorithm.
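The procedure of Figure 1 can be sketched in Python as follows. This is an illustrative reconstruction, not the authors' implementation: the helper names (lstsq, fit), the dictionary-based model records, and the synthetic performance data are all my own; the regression is ordinary least squares via the normal equations.

```python
import random

def lstsq(X, y):
    """Solve the normal equations (X^T X) b = (X^T y) by Gauss-Jordan elimination."""
    k = len(X[0])
    A = [[sum(X[r][i] * X[r][j] for r in range(len(X))) for j in range(k)]
         + [sum(X[r][i] * y[r] for r in range(len(X)))] for i in range(k)]
    for i in range(k):
        p = max(range(i, k), key=lambda r: abs(A[r][i]))   # partial pivoting
        A[i], A[p] = A[p], A[i]
        for r in range(k):
            if r != i and A[i][i] != 0.0:
                f = A[r][i] / A[i][i]
                A[r] = [x - f * z for x, z in zip(A[r], A[i])]
    return [A[i][k] / A[i][i] for i in range(k)]

def fit(dep, indeps, perf):
    """Regress perf[dep] on an intercept plus perf[a] for each a in indeps;
    return the model together with its SSE and MSE (used by the F-test)."""
    m = len(perf[dep])
    X = [[1.0] + [perf[a][r] for a in indeps] for r in range(m)]
    coef = lstsq(X, perf[dep])
    pred = [sum(c * x for c, x in zip(coef, row)) for row in X]
    sse = sum((p - t) ** 2 for p, t in zip(pred, perf[dep]))
    return {"elements": list(indeps), "sse": sse,
            "mse": sse / max(m - len(indeps) - 1, 1)}

def r2(model, dep, perf):
    mean = sum(perf[dep]) / len(perf[dep])
    sst = sum((v - mean) ** 2 for v in perf[dep]) or 1e-12
    return 1.0 - model["sse"] / sst

def generate_landmarkers(A, perf, max_independents):
    # max_independents must be < len(A) so L.elements stays a proper subset of A.
    L = {a: fit(a, [], perf) for a in A}          # intercept-only models (line 3)
    chosen = []
    while True:
        fns, total = {}, {}
        for a_i in [a for a in A if a not in chosen]:             # dependents
            for a_j in [a for a in A if a not in chosen and a != a_i]:
                fns[(a_i, a_j)] = fit(a_i, chosen + [a_j], perf)
                total[a_j] = total.get(a_j, 0.0) + r2(fns[(a_i, a_j)], a_i, perf)
        if not total:
            break
        best = max(total, key=total.get)          # highest summed r^2 utility
        chosen.append(best)
        updated = False
        for a_i in A:
            if a_i in chosen:
                L[a_i] = {"elements": [a_i]}      # evaluated directly, not estimated
            else:
                new = fns[(a_i, best)]
                if (L[a_i]["sse"] - new["sse"]) / new["mse"] > 4.0:   # F-test
                    L[a_i], updated = new, True
        if not updated or len(chosen) == max_independents:
            break
    return L, chosen

# Synthetic performance estimates over 12 datasets: a3 and a4 are near-linear
# functions of a1 and a2 respectively, so two evaluations should suffice.
random.seed(7)
algs = ["a1", "a2", "a3", "a4"]
base1 = [random.random() for _ in range(12)]
base2 = [random.random() for _ in range(12)]
perf = {"a1": base1,
        "a2": base2,
        "a3": [0.8 * v + 0.05 + 0.001 * random.random() for v in base1],
        "a4": [0.9 * v + 0.02 + 0.001 * random.random() for v in base2]}
L, chosen = generate_landmarkers(algs, perf, max_independents=2)
```

With this data the algorithm selects at most two algorithms to evaluate, and every remaining algorithm's landmarker draws only on those chosen elements, mirroring the a_1/a_2 example in the text.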
It also becomes evident that any chosen landmarking element a_i must be evaluated, and therefore need not be estimated. In fact, from our example, we see that l_1 and l_2 are really not landmarkers; they are merely evaluations of the algorithms a_1 and a_2. Thus, with each iteration, two points become clear. The first is that two subsets of A may be defined: (i) the algorithms to be estimated, I = {a_i | a_i ∈ A \ current_L.elements} (i.e. the dependents - the algorithms that require landmarkers; and correspondingly, the algorithms yet to be chosen - the potential independents or landmarking elements), and (ii) the algorithms to be evaluated, J = {a_i | a_i ∈ current_L.elements} (i.e. the chosen landmarking algorithms - represented by the variable chosen in the pseudo-code in Figure 1). Based on L.elements from our example, these are: I = {a_3, a_4} and J = {a_1, a_2}. Secondly, we seek to move one of the algorithms from I to J. Note that essentially any number of algorithms (less than |A|) may be moved in a single iteration. However, doing this increases the complexity of the algorithm. In fact, when attempting to move |A| − 1 algorithms in a single iteration, we are actually executing the original all-subsets version of the algorithm. With either version of the landmarker generation algorithm, we begin with I = A and J = φ. To shift k < |A| algorithms from I to J, we essentially select the k algorithms that yield the greatest overall utility (i.e. measured based on the heuristic chosen, e.g. r² or r² + efficiency gained). As an independent, each algorithm attains a certain amount of correlation to each of its applicable dependents. Thus, we gauge the utility of each independent based on the mean correlation with its applicable dependents (note that the number of applicable dependents for any potential independent will always be the same; i.e. in iteration i, there will be |A| − |J| − 1 dependents for each independent).
For example, given A = {a_1, a_2, a_3, a_4}, and assuming we wish to choose a single algorithm (k = 1) based solely on r², then given the r² values for A in Table 1, we see that as an independent (i.e. landmarking algorithm), a_4 has the highest overall utility. The calculation for cases where k > 1 is slightly more complicated, and since the proposed hill-climbing version only requires the calculation for k = 1, we will not elaborate further; for a description of the selection method for k > 1, see (Ler et al., 2004c). Another issue that arises when adapting standard forward selection to our landmarker generation scenario is that the original stopping criterion (i.e. the F-test) must be modified. Essentially, the new stopping criteria require that: (i) the F-test be adapted to the case where a single independent is chosen for multiple dependents, and (ii) a limit be imposed on the total number of independents so that L.elements ⊂ A is ensured (and so that a dynamic limit on the training time computational cost may be imposed). The hill-climbing algorithm seeks to mimic standard forward selection, which in each iteration selects the independent with the highest criterion measure (e.g. r²). However, since the proposed algorithm makes this selection based on an overall utility, it is likely that, at least for some dependents, the independent with the highest criterion measure is not chosen. To (partially) account for this, we do not stop attempting to utilise more independents for any remaining dependent even when an F-test for that dependent fails. Instead, our stopping criterion (over all remaining dependents) applies whenever: (i) the selected independents J reach a certain limit, i.e. |J| > the maximum number of algorithms we wish to evaluate; or (ii) all the F-tests (i.e. over all remaining dependents) fail for the currently selected independent.

Table 1. Example r² values and overall utility of A = {a_1, a_2, a_3, a_4} for the one-algorithm case. [The individual r² entries are not recoverable from the source; the surviving overall utilities - the mean r² of each independent over its three dependents - were: a_1: 1.5/3 = 0.5; a_2: 1.8/3 = 0.6; a_3: 1.8/3 = 0.6; a_4: 2.1/3 = 0.7.]
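The k = 1 selection illustrated by Table 1 can be sketched as follows. The individual r² cell values here are made up (the table's originals did not survive), but they are chosen to reproduce the surviving row utilities of 0.5, 0.6, 0.6, and 0.7.

```python
# Hypothetical r2[independent][dependent] values; the NA diagonal is omitted.
r2 = {
    "a1": {"a2": 0.4, "a3": 0.5, "a4": 0.6},   # mean 1.5/3 = 0.5
    "a2": {"a1": 0.5, "a3": 0.6, "a4": 0.7},   # mean 1.8/3 = 0.6
    "a3": {"a1": 0.5, "a2": 0.6, "a4": 0.7},   # mean 1.8/3 = 0.6
    "a4": {"a1": 0.6, "a2": 0.7, "a3": 0.8},   # mean 2.1/3 = 0.7
}

# Utility of each independent = mean r^2 over its applicable dependents;
# the independent with the highest utility is moved from I to J.
utility = {indep: sum(deps.values()) / len(deps) for indep, deps in r2.items()}
best = max(utility, key=utility.get)
```

As in the text, a_4 attains the highest overall utility and would be the algorithm chosen in this iteration.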
3.2 Heuristics for Correlativity and Efficiency

To choose between the various potential landmarkers (i.e. the regression functions corresponding to the various independent set options in each iteration), we require a criterion measure or heuristic that measures the utility of each remaining dependent (i.e. a_i ∈ A \ J) to potential independent set (i.e. J ∪ {a_j}, where a_j ∈ A \ (J ∪ {a_i})) pairing. Given the specified correlativity and efficiency landmarker criteria, two heuristics are evaluated. The first is the squared Pearson product-moment coefficient, or coefficient of determination, r². By adopting this basic measure of correlation, we rely on the stopping criterion L.elements ⊂ A to ensure the efficiency criterion is met. This means that in each iteration, the choice of independent is purely guided by its correlativity utility, i.e. the (computational) cost of the potential independents is ignored. The second is a semi cost-sensitive variant of the plain r² criterion: r²(a_i, l_k) + ((eff(A) − eff(l_k)) / eff(A)), where a_i is the dependent, l_k is the potential landmarker, and A is the set of all candidate algorithms; thus r²(a_i, l_k) is the r² value observed over the a_i-l_k pairing, and eff(·) is the estimated overall computational cost (i.e. the time taken for training and testing) of its subject. Essentially, (eff(A) − eff(l_k)) / eff(A) corresponds to the amount of efficiency gained (or time/computation saved) when l_k.elements is evaluated instead of A. Note that we have proposed several approaches for estimating eff(·) (Ler et al., 2004a, 2004b). In this paper, we adopt the simpler and less space-intensive method specified in (Ler et al., 2004b), which simply takes the mean training and runtime computational cost over the training datasets for each relevant a_x ∈ A.
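The semi cost-sensitive heuristic can be sketched as below. The accuracies and timings are invented, and eff is taken here as total train-plus-test seconds; the efficiency term compares the landmarker's cost against evaluating all of A, as in the text.

```python
def cost_sensitive_utility(r2_val, eff_all, eff_landmarker):
    """The semi cost-sensitive heuristic: r^2 plus the fraction of
    computation saved by running the landmarker's elements instead of
    evaluating every algorithm in A."""
    return r2_val + (eff_all - eff_landmarker) / eff_all

eff_A = 120.0    # hypothetical seconds to evaluate every candidate algorithm
eff_lk = 18.0    # hypothetical seconds to evaluate the landmarker's elements
u = cost_sensitive_utility(0.72, eff_A, eff_lk)   # 0.72 + 102/120 = 1.57
```

Because the efficiency term lies in [0, 1), a very cheap landmarker can outrank a slightly better-correlated but expensive one, which is exactly the trade-off this variant is meant to encode.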
4. Experiments, Results and Analysis

For our experiments we utilise 10 classification learning algorithms from WEKA (Witten and Frank, 2000) as A (i.e. naïve Bayes, k-nearest neighbour (with k = 1 and 7), polynomial support vector machine, decision stump, J4.8 (a WEKA implementation of C4.5), random forest, decision table, Ripper, and ZeroR) and 34 classification datasets randomly chosen from the UCI repository (Blake and Merz, 1998). Leave-one-out cross-validation is used to test the proposed landmarker generation algorithm.

Table 2. Results for the all-subsets version (plain r²): efficiency gained, r_s, and pair ordering for each number of algorithms evaluated, |L.elements|. [Values not recoverable from the source.]

Table 3. Results for the all-subsets version (r² + efficiency gained). [Values not recoverable from the source.]

For each fold we use 33 of the datasets to generate landmarkers as described in Section 3. The resultant set of landmarkers indicates which algorithms must be evaluated and which estimated (i.e. the algorithms in L.elements, and the remaining A \ L.elements respectively). On the dataset left out, we first run the algorithms that must be evaluated and then use their accuracy results (attained via stratified ten-fold cross-validation) to estimate the performance of the other algorithms using the regression functions computed during landmarker generation. The version of the algorithm described in Section 3.1 will attempt to find a set of landmarkers L such that |L.elements| ≤ max_independents (the user-specified parameter) and where the chosen landmarkers only include algorithms (i.e. independents) that have not failed all F-tests. Thus it is possible that the hill-climbing landmarker generation algorithm returns an L such that |L.elements| < max_independents. Correspondingly, some modifications to the proposed algorithm were made in order to make a direct comparison with the results from the all-subsets version, which (in those experiments - see (Ler et al., 2004b)) produces a set of landmarkers for each possible size of |L.elements| < |A|. In order to ensure that a specific number of independents are utilised (i.e.
a specific number of landmarking algorithms are run), the hill-climbing version will, when it encounters the situation where |L.elements| < max_independents and all F-tests for the current independent fail, continue to move algorithms from I to J without changing the chosen models (i.e. independents) for the remaining algorithms to be estimated in I. In each such iteration, the dependent algorithm (i.e. landmarked algorithm) with the lowest-utility model is moved. Thus, for each dataset/fold, 9 sets of landmarkers and, correspondingly, 9 sets of accuracy estimates for A are attained (since |A| = 10). The following three evaluations are reported for each of these sets:

- Efficiency gained: the mean percentage of computation saved (across the 34 datasets or folds) by running the corresponding L.elements instead of A.
- Rank order correlation (r_s): the mean Spearman's rank order correlation coefficient r_s (across the 34 datasets or folds), measuring the rank correlation between: (i) the rank order of the accuracies estimated via the landmarkers and regression, and (ii) the rank order of the accuracies evaluated via ten-fold cross-validation.
- Algorithm-pair ordering: the mean percentage of algorithm pairings in which the order predicted via the landmarkers is the same as that observed via ten-fold cross-validation. Note, with |A| = 10, there are C(10, 2) = 45 pairings. Given that x = |L.elements| algorithms are evaluated (via ten-fold cross-validation), C(x, 2) algorithm pairs will correspond to the ordering that is found via ten-fold cross-validation. We denote these as assured: the pairings that are guaranteed to be correct.

The results of the all-subsets version of the landmarker generation algorithm using the plain r² and r² + efficiency gained heuristics are given in Tables 2 and 3 respectively, while the results from the hill-climbing (forward selection) version are given in Tables 4 and 5. From Tables 2 through 5, we observe that the hill-climbing landmarker generation algorithm quite closely matches the predictive performance of the all-subsets version.
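The two ranking-based evaluation measures can be sketched as follows. This is an illustrative reconstruction with invented accuracy vectors; the Spearman computation uses the closed form for the no-ties case.

```python
def spearman_rs(x, y):
    """Spearman rank order correlation (no-ties case):
    r_s = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1)), where d_i is the
    difference between the two ranks of item i."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order):
            r[i] = rank + 1
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

def pair_ordering(est, true):
    """Fraction of algorithm pairs whose predicted order matches
    the order observed via cross-validation."""
    n = len(est)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    same = sum((est[i] - est[j]) * (true[i] - true[j]) > 0 for i, j in pairs)
    return same / len(pairs)

estimated = [0.81, 0.74, 0.90, 0.66, 0.79]   # landmarker-based estimates
observed  = [0.80, 0.76, 0.88, 0.69, 0.75]   # ten-fold cross-validation results
rs = spearman_rs(estimated, observed)
po = pair_ordering(estimated, observed)
```

In this toy example one adjacent pair is swapped relative to the observed ranking, giving r_s = 0.9 and a pair-ordering score of 9/10.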
Correspondingly, we find that even when few landmarking algorithms are utilised, the generated landmarkers are still able to produce a reasonable estimate while making a sizeable gain in efficiency. And as with the all-subsets result, when we allow the generation algorithm to utilise larger sets of candidate algorithms as landmarkers (i.e. as we allow larger |L.elements|), the r², r_s, and pair-ordering values all increase, and the efficiency gained decreases, with all the values approaching the ten-fold cross-validation result.

Table 4. Results for the hill-climbing version (plain r²). [Values not recoverable from the source.]

Table 5. Results for the hill-climbing version (r² + efficiency gained). [Values not recoverable from the source.]

Acknowledgements

The authors would like to acknowledge the support of the Smart Internet Technology CRC in this research.

References

Blake, C.; and Merz, C. 1998. UCI Repository of Machine Learning Databases. University of California, Irvine, Department of Information and Computer Sciences.

Brazdil, P.; Gama, J.; and Henery, R. 1994. Characterising the applicability of classification algorithms using meta-level learning. In Proceedings of the European Conference on Machine Learning.

Dietterich, T. 1997. Machine learning research: four current directions. Artificial Intelligence Magazine, 18(4).

Fürnkranz, J.; and Petrak, J. 2001. An evaluation of landmarking variants. In Proceedings of the European Conference on Machine Learning, Workshop on Integrating Aspects of Data Mining, Decision Support and Meta-learning.

Giraud-Carrier, C.; Vilalta, R.; and Brazdil, P. 2004. Introduction to the special issue on meta-learning. Machine Learning, 54(3).

Kalousis, A.; and Hilario, M. 2001. Model selection via meta-learning: a comparative study. International Journal on Artificial Intelligence Tools, 10(4).

Ler, D.; Koprinska, I.; and Chawla, S. 2004a. Comparisons between heuristics based on correlativity and efficiency for landmarker generation. In Proceedings of the 4th International Conference on Hybrid Intelligent Systems, in press.

Ler, D.; Koprinska, I.; and Chawla, S. 2004b. A landmarker selection algorithm based on correlation and efficiency criteria. In Proceedings of the 17th Australian Joint Conference on Artificial Intelligence.

Ler, D.; Koprinska, I.; and Chawla, S. 2004c. A new landmarker generation algorithm based on correlativity. In Proceedings of the International Conference on Machine Learning and Applications.

Michie, D.; Spiegelhalter, D.; and Taylor, C. 1994. Machine Learning, Neural and Statistical Classification. Ellis Horwood.
Miller, A. 2002. Subset Selection in Regression (2nd Edition). Chapman and Hall/CRC.

Pfahringer, B.; Bensusan, H.; and Giraud-Carrier, C. 2000. Meta-learning by landmarking various learning algorithms. In Proceedings of the International Conference on Machine Learning.

Vilalta, R.; and Drissi, Y. 2002. A perspective view and survey of meta-learning. Artificial Intelligence Review, 18(2).

Wickens, T. 1995. The Geometry of Multivariate Statistics. LEA Publishers.

Witten, I.; and Frank, E. 2000. Data Mining: Practical Machine Learning Tools with Java Implementations. Morgan Kaufmann.

Wolpert, D. 2001. The supervised learning no-free-lunch theorems. In Proceedings of Soft Computing in Industry - Recent Applications.
More informationEECS 730 Introduction to Bioinformatics Sequence Alignment. Luke Huan Electrical Engineering and Computer Science
EECS 730 Introducton to Bonformatcs Sequence Algnment Luke Huan Electrcal Engneerng and Computer Scence http://people.eecs.ku.edu/~huan/ HMM Π s a set of states Transton Probabltes a kl Pr( l 1 k Probablty
More informationSynthesizer 1.0. User s Guide. A Varying Coefficient Meta. nalytic Tool. Z. Krizan Employing Microsoft Excel 2007
Syntheszer 1.0 A Varyng Coeffcent Meta Meta-Analytc nalytc Tool Employng Mcrosoft Excel 007.38.17.5 User s Gude Z. Krzan 009 Table of Contents 1. Introducton and Acknowledgments 3. Operatonal Functons
More informationEdge Detection in Noisy Images Using the Support Vector Machines
Edge Detecton n Nosy Images Usng the Support Vector Machnes Hlaro Gómez-Moreno, Saturnno Maldonado-Bascón, Francsco López-Ferreras Sgnal Theory and Communcatons Department. Unversty of Alcalá Crta. Madrd-Barcelona
More informationParallelism for Nested Loops with Non-uniform and Flow Dependences
Parallelsm for Nested Loops wth Non-unform and Flow Dependences Sam-Jn Jeong Dept. of Informaton & Communcaton Engneerng, Cheonan Unversty, 5, Anseo-dong, Cheonan, Chungnam, 330-80, Korea. seong@cheonan.ac.kr
More informationRelated-Mode Attacks on CTR Encryption Mode
Internatonal Journal of Network Securty, Vol.4, No.3, PP.282 287, May 2007 282 Related-Mode Attacks on CTR Encrypton Mode Dayn Wang, Dongda Ln, and Wenlng Wu (Correspondng author: Dayn Wang) Key Laboratory
More informationFuzzy Modeling of the Complexity vs. Accuracy Trade-off in a Sequential Two-Stage Multi-Classifier System
Fuzzy Modelng of the Complexty vs. Accuracy Trade-off n a Sequental Two-Stage Mult-Classfer System MARK LAST 1 Department of Informaton Systems Engneerng Ben-Guron Unversty of the Negev Beer-Sheva 84105
More informationThree supervised learning methods on pen digits character recognition dataset
Three supervsed learnng methods on pen dgts character recognton dataset Chrs Flezach Department of Computer Scence and Engneerng Unversty of Calforna, San Dego San Dego, CA 92093 cflezac@cs.ucsd.edu Satoru
More informationTN348: Openlab Module - Colocalization
TN348: Openlab Module - Colocalzaton Topc The Colocalzaton module provdes the faclty to vsualze and quantfy colocalzaton between pars of mages. The Colocalzaton wndow contans a prevew of the two mages
More informationAssignment # 2. Farrukh Jabeen Algorithms 510 Assignment #2 Due Date: June 15, 2009.
Farrukh Jabeen Algorthms 51 Assgnment #2 Due Date: June 15, 29. Assgnment # 2 Chapter 3 Dscrete Fourer Transforms Implement the FFT for the DFT. Descrbed n sectons 3.1 and 3.2. Delverables: 1. Concse descrpton
More informationNUMERICAL SOLVING OPTIMAL CONTROL PROBLEMS BY THE METHOD OF VARIATIONS
ARPN Journal of Engneerng and Appled Scences 006-017 Asan Research Publshng Network (ARPN). All rghts reserved. NUMERICAL SOLVING OPTIMAL CONTROL PROBLEMS BY THE METHOD OF VARIATIONS Igor Grgoryev, Svetlana
More informationCS246: Mining Massive Datasets Jure Leskovec, Stanford University
CS46: Mnng Massve Datasets Jure Leskovec, Stanford Unversty http://cs46.stanford.edu /19/013 Jure Leskovec, Stanford CS46: Mnng Massve Datasets, http://cs46.stanford.edu Perceptron: y = sgn( x Ho to fnd
More informationX- Chart Using ANOM Approach
ISSN 1684-8403 Journal of Statstcs Volume 17, 010, pp. 3-3 Abstract X- Chart Usng ANOM Approach Gullapall Chakravarth 1 and Chaluvad Venkateswara Rao Control lmts for ndvdual measurements (X) chart are
More informationWishing you all a Total Quality New Year!
Total Qualty Management and Sx Sgma Post Graduate Program 214-15 Sesson 4 Vnay Kumar Kalakband Assstant Professor Operatons & Systems Area 1 Wshng you all a Total Qualty New Year! Hope you acheve Sx sgma
More informationThe Research of Support Vector Machine in Agricultural Data Classification
The Research of Support Vector Machne n Agrcultural Data Classfcaton Le Sh, Qguo Duan, Xnmng Ma, Me Weng College of Informaton and Management Scence, HeNan Agrcultural Unversty, Zhengzhou 45000 Chna Zhengzhou
More informationLoad Balancing for Hex-Cell Interconnection Network
Int. J. Communcatons, Network and System Scences,,, - Publshed Onlne Aprl n ScRes. http://www.scrp.org/journal/jcns http://dx.do.org/./jcns.. Load Balancng for Hex-Cell Interconnecton Network Saher Manaseer,
More informationSequential search. Building Java Programs Chapter 13. Sequential search. Sequential search
Sequental search Buldng Java Programs Chapter 13 Searchng and Sortng sequental search: Locates a target value n an array/lst by examnng each element from start to fnsh. How many elements wll t need to
More informationAn Entropy-Based Approach to Integrated Information Needs Assessment
Dstrbuton Statement A: Approved for publc release; dstrbuton s unlmted. An Entropy-Based Approach to ntegrated nformaton Needs Assessment June 8, 2004 Wllam J. Farrell Lockheed Martn Advanced Technology
More informationThe Greedy Method. Outline and Reading. Change Money Problem. Greedy Algorithms. Applications of the Greedy Strategy. The Greedy Method Technique
//00 :0 AM Outlne and Readng The Greedy Method The Greedy Method Technque (secton.) Fractonal Knapsack Problem (secton..) Task Schedulng (secton..) Mnmum Spannng Trees (secton.) Change Money Problem Greedy
More informationA Statistical Model Selection Strategy Applied to Neural Networks
A Statstcal Model Selecton Strategy Appled to Neural Networks Joaquín Pzarro Elsa Guerrero Pedro L. Galndo joaqun.pzarro@uca.es elsa.guerrero@uca.es pedro.galndo@uca.es Dpto Lenguajes y Sstemas Informátcos
More informationThis module is part of the. Memobust Handbook. on Methodology of Modern Business Statistics
Ths module s part of the Memobust Handbook on Methodology of Modern Busness Statstcs 26 March 2014 Theme: Donor Imputaton Contents General secton... 3 1. Summary... 3 2. General descrpton... 3 2.1 Introducton
More information6.854 Advanced Algorithms Petar Maymounkov Problem Set 11 (November 23, 2005) With: Benjamin Rossman, Oren Weimann, and Pouya Kheradpour
6.854 Advanced Algorthms Petar Maymounkov Problem Set 11 (November 23, 2005) Wth: Benjamn Rossman, Oren Wemann, and Pouya Kheradpour Problem 1. We reduce vertex cover to MAX-SAT wth weghts, such that the
More informationRange images. Range image registration. Examples of sampling patterns. Range images and range surfaces
Range mages For many structured lght scanners, the range data forms a hghly regular pattern known as a range mage. he samplng pattern s determned by the specfc scanner. Range mage regstraton 1 Examples
More informationAn Iterative Solution Approach to Process Plant Layout using Mixed Integer Optimisation
17 th European Symposum on Computer Aded Process Engneerng ESCAPE17 V. Plesu and P.S. Agach (Edtors) 2007 Elsever B.V. All rghts reserved. 1 An Iteratve Soluton Approach to Process Plant Layout usng Mxed
More informationLecture 5: Multilayer Perceptrons
Lecture 5: Multlayer Perceptrons Roger Grosse 1 Introducton So far, we ve only talked about lnear models: lnear regresson and lnear bnary classfers. We noted that there are functons that can t be represented
More informationData Mining: Model Evaluation
Data Mnng: Model Evaluaton Aprl 16, 2013 1 Issues: Evaluatng Classfcaton Methods Accurac classfer accurac: predctng class label predctor accurac: guessng value of predcted attrbutes Speed tme to construct
More informationCS 534: Computer Vision Model Fitting
CS 534: Computer Vson Model Fttng Sprng 004 Ahmed Elgammal Dept of Computer Scence CS 534 Model Fttng - 1 Outlnes Model fttng s mportant Least-squares fttng Maxmum lkelhood estmaton MAP estmaton Robust
More informationA New Approach For the Ranking of Fuzzy Sets With Different Heights
New pproach For the ankng of Fuzzy Sets Wth Dfferent Heghts Pushpnder Sngh School of Mathematcs Computer pplcatons Thapar Unversty, Patala-7 00 Inda pushpndersnl@gmalcom STCT ankng of fuzzy sets plays
More informationCS1100 Introduction to Programming
Factoral (n) Recursve Program fact(n) = n*fact(n-) CS00 Introducton to Programmng Recurson and Sortng Madhu Mutyam Department of Computer Scence and Engneerng Indan Insttute of Technology Madras nt fact
More informationConcurrent Apriori Data Mining Algorithms
Concurrent Apror Data Mnng Algorthms Vassl Halatchev Department of Electrcal Engneerng and Computer Scence York Unversty, Toronto October 8, 2015 Outlne Why t s mportant Introducton to Assocaton Rule Mnng
More information12/2/2009. Announcements. Parametric / Non-parametric. Case-Based Reasoning. Nearest-Neighbor on Images. Nearest-Neighbor Classification
Introducton to Artfcal Intellgence V22.0472-001 Fall 2009 Lecture 24: Nearest-Neghbors & Support Vector Machnes Rob Fergus Dept of Computer Scence, Courant Insttute, NYU Sldes from Danel Yeung, John DeNero
More informationSum of Linear and Fractional Multiobjective Programming Problem under Fuzzy Rules Constraints
Australan Journal of Basc and Appled Scences, 2(4): 1204-1208, 2008 ISSN 1991-8178 Sum of Lnear and Fractonal Multobjectve Programmng Problem under Fuzzy Rules Constrants 1 2 Sanjay Jan and Kalash Lachhwan
More informationParallel Branch and Bound Algorithm - A comparison between serial, OpenMP and MPI implementations
Journal of Physcs: Conference Seres Parallel Branch and Bound Algorthm - A comparson between seral, OpenMP and MPI mplementatons To cte ths artcle: Luco Barreto and Mchael Bauer 2010 J. Phys.: Conf. Ser.
More informationHelsinki University Of Technology, Systems Analysis Laboratory Mat Independent research projects in applied mathematics (3 cr)
Helsnk Unversty Of Technology, Systems Analyss Laboratory Mat-2.08 Independent research projects n appled mathematcs (3 cr) "! #$&% Antt Laukkanen 506 R ajlaukka@cc.hut.f 2 Introducton...3 2 Multattrbute
More informationOutline. Type of Machine Learning. Examples of Application. Unsupervised Learning
Outlne Artfcal Intellgence and ts applcatons Lecture 8 Unsupervsed Learnng Professor Danel Yeung danyeung@eee.org Dr. Patrck Chan patrckchan@eee.org South Chna Unversty of Technology, Chna Introducton
More informationAn Efficient Genetic Algorithm with Fuzzy c-means Clustering for Traveling Salesman Problem
An Effcent Genetc Algorthm wth Fuzzy c-means Clusterng for Travelng Salesman Problem Jong-Won Yoon and Sung-Bae Cho Dept. of Computer Scence Yonse Unversty Seoul, Korea jwyoon@sclab.yonse.ac.r, sbcho@cs.yonse.ac.r
More informationBOOSTING CLASSIFICATION ACCURACY WITH SAMPLES CHOSEN FROM A VALIDATION SET
1 BOOSTING CLASSIFICATION ACCURACY WITH SAMPLES CHOSEN FROM A VALIDATION SET TZU-CHENG CHUANG School of Electrcal and Computer Engneerng, Purdue Unversty, West Lafayette, Indana 47907 SAUL B. GELFAND School
More informationDynamic Integration of Regression Models
Dynamc Integraton of Regresson Models Nall Rooney 1, Davd Patterson 1, Sarab Anand 1, Alexey Tsymbal 2 1 NIKEL, Faculty of Engneerng,16J27 Unversty Of Ulster at Jordanstown Newtonabbey, BT37 OQB, Unted
More informationOutline. Discriminative classifiers for image recognition. Where in the World? A nearest neighbor recognition example 4/14/2011. CS 376 Lecture 22 1
4/14/011 Outlne Dscrmnatve classfers for mage recognton Wednesday, Aprl 13 Krsten Grauman UT-Austn Last tme: wndow-based generc obect detecton basc ppelne face detecton wth boostng as case study Today:
More informationBackpropagation: In Search of Performance Parameters
Bacpropagaton: In Search of Performance Parameters ANIL KUMAR ENUMULAPALLY, LINGGUO BU, and KHOSROW KAIKHAH, Ph.D. Computer Scence Department Texas State Unversty-San Marcos San Marcos, TX-78666 USA ae049@txstate.edu,
More informationPerformance Evaluation of Information Retrieval Systems
Why System Evaluaton? Performance Evaluaton of Informaton Retreval Systems Many sldes n ths secton are adapted from Prof. Joydeep Ghosh (UT ECE) who n turn adapted them from Prof. Dk Lee (Unv. of Scence
More informationCollaboratively Regularized Nearest Points for Set Based Recognition
Academc Center for Computng and Meda Studes, Kyoto Unversty Collaboratvely Regularzed Nearest Ponts for Set Based Recognton Yang Wu, Mchhko Mnoh, Masayuk Mukunok Kyoto Unversty 9/1/013 BMVC 013 @ Brstol,
More informationHermite Splines in Lie Groups as Products of Geodesics
Hermte Splnes n Le Groups as Products of Geodescs Ethan Eade Updated May 28, 2017 1 Introducton 1.1 Goal Ths document defnes a curve n the Le group G parametrzed by tme and by structural parameters n the
More informationHybrid Heuristics for the Maximum Diversity Problem
Hybrd Heurstcs for the Maxmum Dversty Problem MICAEL GALLEGO Departamento de Informátca, Estadístca y Telemátca, Unversdad Rey Juan Carlos, Span. Mcael.Gallego@urjc.es ABRAHAM DUARTE Departamento de Informátca,
More informationTerm Weighting Classification System Using the Chi-square Statistic for the Classification Subtask at NTCIR-6 Patent Retrieval Task
Proceedngs of NTCIR-6 Workshop Meetng, May 15-18, 2007, Tokyo, Japan Term Weghtng Classfcaton System Usng the Ch-square Statstc for the Classfcaton Subtask at NTCIR-6 Patent Retreval Task Kotaro Hashmoto
More informationMachine Learning. Topic 6: Clustering
Machne Learnng Topc 6: lusterng lusterng Groupng data nto (hopefully useful) sets. Thngs on the left Thngs on the rght Applcatons of lusterng Hypothess Generaton lusters mght suggest natural groups. Hypothess
More informationLS-TaSC Version 2.1. Willem Roux Livermore Software Technology Corporation, Livermore, CA, USA. Abstract
12 th Internatonal LS-DYNA Users Conference Optmzaton(1) LS-TaSC Verson 2.1 Wllem Roux Lvermore Software Technology Corporaton, Lvermore, CA, USA Abstract Ths paper gves an overvew of LS-TaSC verson 2.1,
More informationExercises (Part 4) Introduction to R UCLA/CCPR. John Fox, February 2005
Exercses (Part 4) Introducton to R UCLA/CCPR John Fox, February 2005 1. A challengng problem: Iterated weghted least squares (IWLS) s a standard method of fttng generalzed lnear models to data. As descrbed
More informationImprovement of Spatial Resolution Using BlockMatching Based Motion Estimation and Frame. Integration
Improvement of Spatal Resoluton Usng BlockMatchng Based Moton Estmaton and Frame Integraton Danya Suga and Takayuk Hamamoto Graduate School of Engneerng, Tokyo Unversty of Scence, 6-3-1, Nuku, Katsuska-ku,
More informationIntelligent Information Acquisition for Improved Clustering
Intellgent Informaton Acquston for Improved Clusterng Duy Vu Unversty of Texas at Austn duyvu@cs.utexas.edu Mkhal Blenko Mcrosoft Research mblenko@mcrosoft.com Prem Melvlle IBM T.J. Watson Research Center
More informationIncremental Learning with Support Vector Machines and Fuzzy Set Theory
The 25th Workshop on Combnatoral Mathematcs and Computaton Theory Incremental Learnng wth Support Vector Machnes and Fuzzy Set Theory Yu-Mng Chuang 1 and Cha-Hwa Ln 2* 1 Department of Computer Scence and
More informationSENSITIVITY ANALYSIS IN LINEAR PROGRAMMING USING A CALCULATOR
SENSITIVITY ANALYSIS IN LINEAR PROGRAMMING USING A CALCULATOR Judth Aronow Rchard Jarvnen Independent Consultant Dept of Math/Stat 559 Frost Wnona State Unversty Beaumont, TX 7776 Wnona, MN 55987 aronowju@hal.lamar.edu
More informationFor instance, ; the five basic number-sets are increasingly more n A B & B A A = B (1)
Secton 1.2 Subsets and the Boolean operatons on sets If every element of the set A s an element of the set B, we say that A s a subset of B, or that A s contaned n B, or that B contans A, and we wrte A
More information2x x l. Module 3: Element Properties Lecture 4: Lagrange and Serendipity Elements
Module 3: Element Propertes Lecture : Lagrange and Serendpty Elements 5 In last lecture note, the nterpolaton functons are derved on the bass of assumed polynomal from Pascal s trangle for the fled varable.
More informationProgramming in Fortran 90 : 2017/2018
Programmng n Fortran 90 : 2017/2018 Programmng n Fortran 90 : 2017/2018 Exercse 1 : Evaluaton of functon dependng on nput Wrte a program who evaluate the functon f (x,y) for any two user specfed values
More informationComparison of Heuristics for Scheduling Independent Tasks on Heterogeneous Distributed Environments
Comparson of Heurstcs for Schedulng Independent Tasks on Heterogeneous Dstrbuted Envronments Hesam Izakan¹, Ath Abraham², Senor Member, IEEE, Václav Snášel³ ¹ Islamc Azad Unversty, Ramsar Branch, Ramsar,
More informationA Fast Visual Tracking Algorithm Based on Circle Pixels Matching
A Fast Vsual Trackng Algorthm Based on Crcle Pxels Matchng Zhqang Hou hou_zhq@sohu.com Chongzhao Han czhan@mal.xjtu.edu.cn Ln Zheng Abstract: A fast vsual trackng algorthm based on crcle pxels matchng
More informationCMPS 10 Introduction to Computer Science Lecture Notes
CPS 0 Introducton to Computer Scence Lecture Notes Chapter : Algorthm Desgn How should we present algorthms? Natural languages lke Englsh, Spansh, or French whch are rch n nterpretaton and meanng are not
More informationSupport Vector Machines
Support Vector Machnes Decson surface s a hyperplane (lne n 2D) n feature space (smlar to the Perceptron) Arguably, the most mportant recent dscovery n machne learnng In a nutshell: map the data to a predetermned
More informationCorrelative features for the classification of textural images
Correlatve features for the classfcaton of textural mages M A Turkova 1 and A V Gadel 1, 1 Samara Natonal Research Unversty, Moskovskoe Shosse 34, Samara, Russa, 443086 Image Processng Systems Insttute
More informationCHAPTER 3 SEQUENTIAL MINIMAL OPTIMIZATION TRAINED SUPPORT VECTOR CLASSIFIER FOR CANCER PREDICTION
48 CHAPTER 3 SEQUENTIAL MINIMAL OPTIMIZATION TRAINED SUPPORT VECTOR CLASSIFIER FOR CANCER PREDICTION 3.1 INTRODUCTION The raw mcroarray data s bascally an mage wth dfferent colors ndcatng hybrdzaton (Xue
More informationComparing High-Order Boolean Features
Brgham Young Unversty BYU cholarsarchve All Faculty Publcatons 2005-07-0 Comparng Hgh-Order Boolean Features Adam Drake adam_drake@yahoo.com Dan A. Ventura ventura@cs.byu.edu Follow ths and addtonal works
More informationProblem Set 3 Solutions
Introducton to Algorthms October 4, 2002 Massachusetts Insttute of Technology 6046J/18410J Professors Erk Demane and Shaf Goldwasser Handout 14 Problem Set 3 Solutons (Exercses were not to be turned n,
More informationLearning to Project in Multi-Objective Binary Linear Programming
Learnng to Project n Mult-Objectve Bnary Lnear Programmng Alvaro Serra-Altamranda Department of Industral and Management System Engneerng, Unversty of South Florda, Tampa, FL, 33620 USA, amserra@mal.usf.edu,
More informationAgenda & Reading. Simple If. Decision-Making Statements. COMPSCI 280 S1C Applications Programming. Programming Fundamentals
Agenda & Readng COMPSCI 8 SC Applcatons Programmng Programmng Fundamentals Control Flow Agenda: Decsonmakng statements: Smple If, Ifelse, nested felse, Select Case s Whle, DoWhle/Untl, For, For Each, Nested
More informationTsinghua University at TAC 2009: Summarizing Multi-documents by Information Distance
Tsnghua Unversty at TAC 2009: Summarzng Mult-documents by Informaton Dstance Chong Long, Mnle Huang, Xaoyan Zhu State Key Laboratory of Intellgent Technology and Systems, Tsnghua Natonal Laboratory for
More informationReview of approximation techniques
CHAPTER 2 Revew of appromaton technques 2. Introducton Optmzaton problems n engneerng desgn are characterzed by the followng assocated features: the objectve functon and constrants are mplct functons evaluated
More informationHybridization of Expectation-Maximization and K-Means Algorithms for Better Clustering Performance
BULGARIAN ACADEMY OF SCIENCES CYBERNETICS AND INFORMATION TECHNOLOGIES Volume 16, No 2 Sofa 2016 Prnt ISSN: 1311-9702; Onlne ISSN: 1314-4081 DOI: 10.1515/cat-2016-0017 Hybrdzaton of Expectaton-Maxmzaton
More informationVirtual Memory. Background. No. 10. Virtual Memory: concept. Logical Memory Space (review) Demand Paging(1) Virtual Memory
Background EECS. Operatng System Fundamentals No. Vrtual Memory Prof. Hu Jang Department of Electrcal Engneerng and Computer Scence, York Unversty Memory-management methods normally requres the entre process
More informationA Powerful Feature Selection approach based on Mutual Information
6 IJCN Internatonal Journal of Computer cence and Network ecurty, VOL.8 No.4, Aprl 008 A Powerful Feature electon approach based on Mutual Informaton Al El Akad, Abdelall El Ouardgh, and Drss Aboutadne
More informationParameter estimation for incomplete bivariate longitudinal data in clinical trials
Parameter estmaton for ncomplete bvarate longtudnal data n clncal trals Naum M. Khutoryansky Novo Nordsk Pharmaceutcals, Inc., Prnceton, NJ ABSTRACT Bvarate models are useful when analyzng longtudnal data
More informationCS434a/541a: Pattern Recognition Prof. Olga Veksler. Lecture 15
CS434a/541a: Pattern Recognton Prof. Olga Veksler Lecture 15 Today New Topc: Unsupervsed Learnng Supervsed vs. unsupervsed learnng Unsupervsed learnng Net Tme: parametrc unsupervsed learnng Today: nonparametrc
More informationSorting Review. Sorting. Comparison Sorting. CSE 680 Prof. Roger Crawfis. Assumptions
Sortng Revew Introducton to Algorthms Qucksort CSE 680 Prof. Roger Crawfs Inserton Sort T(n) = Θ(n 2 ) In-place Merge Sort T(n) = Θ(n lg(n)) Not n-place Selecton Sort (from homework) T(n) = Θ(n 2 ) In-place
More informationTECHNIQUE OF FORMATION HOMOGENEOUS SAMPLE SAME OBJECTS. Muradaliyev A.Z.
TECHNIQUE OF FORMATION HOMOGENEOUS SAMPLE SAME OBJECTS Muradalyev AZ Azerbajan Scentfc-Research and Desgn-Prospectng Insttute of Energetc AZ1012, Ave HZardab-94 E-mal:aydn_murad@yahoocom Importance of
More informationA Fast Content-Based Multimedia Retrieval Technique Using Compressed Data
A Fast Content-Based Multmeda Retreval Technque Usng Compressed Data Borko Furht and Pornvt Saksobhavvat NSF Multmeda Laboratory Florda Atlantc Unversty, Boca Raton, Florda 3343 ABSTRACT In ths paper,
More informationCSE 326: Data Structures Quicksort Comparison Sorting Bound
CSE 326: Data Structures Qucksort Comparson Sortng Bound Steve Setz Wnter 2009 Qucksort Qucksort uses a dvde and conquer strategy, but does not requre the O(N) extra space that MergeSort does. Here s the
More informationLobachevsky State University of Nizhni Novgorod. Polyhedron. Quick Start Guide
Lobachevsky State Unversty of Nzhn Novgorod Polyhedron Quck Start Gude Nzhn Novgorod 2016 Contents Specfcaton of Polyhedron software... 3 Theoretcal background... 4 1. Interface of Polyhedron... 6 1.1.
More information