
Improvements to SMO Algorithm for SVM Regression

S.K. Shevade, S.S. Keerthi, C. Bhattacharyya and K.R.K. Murthy
shirish@csa.iisc.ernet.in, mpessk@guppy.mpe.nus.edu.sg, cbchiru@csa.iisc.ernet.in, murthy@csa.iisc.ernet.in

Author for Correspondence: Prof. S.S. Keerthi, Dept. of Mechanical and Production Engineering, National University of Singapore, Singapore

Abstract

This paper points out an important source of inefficiency in Smola and Scholkopf's Sequential Minimal Optimization (SMO) algorithm for SVM regression that is caused by the use of a single threshold value. Using clues from the KKT conditions for the dual problem, two threshold parameters are employed to derive modifications of SMO for regression. These modified algorithms perform significantly faster than the original SMO on the datasets tried.

1 Introduction

The Support Vector Machine (SVM) is an elegant tool for solving pattern-recognition and regression problems. Over the past few years it has attracted a lot of researchers from the neural network and mathematical programming communities, the main reason being its ability to provide excellent generalization performance. SVMs have also been demonstrated to be valuable in several real-world applications. In this paper we address the SVM regression problem. Recently, Smola and Scholkopf [7, 8] proposed an iterative algorithm, called Sequential Minimal Optimization (SMO), for solving the regression problem using SVMs. This algorithm is an extension of the SMO algorithm proposed by Platt [5] for SVM classifier design. The remarkable feature of the SMO algorithms is that they are fast as well as very easy to implement. In a recent paper [4] we suggested some improvements to Platt's SMO algorithm for SVM classifier design. In this paper we extend those ideas to Smola and Scholkopf's SMO algorithm for regression. The improvements suggested here enhance the value of SMO for regression even further. In particular, we point out an important source of inefficiency caused by the way SMO maintains and updates a single threshold value. Getting clues from the optimality criteria associated with the Karush-Kuhn-Tucker (KKT) conditions for the dual problem, we suggest the use of two threshold parameters and devise two modified versions of SMO for regression that are more efficient than the original SMO. Computational comparison on datasets shows that the modifications perform significantly better than the original SMO.

The paper is organized as follows. In section 2 we briefly discuss the SVM regression problem formulation, the dual problem and the associated KKT optimality conditions. We also point out how these conditions lead to proper criteria for terminating algorithms for designing SVMs for regression. Section 3 gives a brief overview of Smola and Scholkopf's SMO algorithm for regression.

In section 4 we point out the inefficiency associated with the way SMO uses a single threshold value, and we describe the modified algorithms in section 5. Computational comparison is done in section 6.

2 The SVM Regression Problem and Optimality Conditions

The basic problem addressed in this paper is the regression problem. The tutorial by Smola and Scholkopf [7] gives a good overview of the solution of this problem using SVMs. Throughout the paper we will use x to denote the input vector of the support vector machine and z to denote the feature space vector, which is related to x by a transformation, z = φ(x). Let the training set, {(x_i, d_i)}, consist of m data points, where x_i is the i-th input pattern and d_i is the corresponding target value, d_i ∈ R. The goal of SVM regression is to estimate a function f(x) that is as "close" as possible to the target values d_i for every x_i and, at the same time, is as "flat" as possible for good generalization. The function f is represented using a linear function in the feature space: f(x) = w·φ(x) + b, where b denotes the bias. As in all SVM designs, we define the kernel function k(x, x̂) = φ(x)·φ(x̂), where "·" denotes the inner product in the z space. Thus, all computations will be done using only the kernel function. This inner-product kernel helps in taking the dot product of two vectors in the feature space without having to construct the feature space explicitly. Mercer's theorem [2] gives the conditions under which this kernel operator is useful for SVM designs.

For SVM regression purposes, Vapnik [9] suggested the use of the ε-insensitive loss function, where the error is not penalized as long as it is less than ε. It is assumed here that ε is known a priori. Using this error function together with a regularizing term, and letting z_i = φ(x_i), the optimization problem solved by the support vector machine can be formulated as:

(P)    min   (1/2)‖w‖² + C Σ_i (ξ_i + ξ_i*)
       s.t.  d_i − w·z_i − b ≤ ε + ξ_i,
             w·z_i + b − d_i ≤ ε + ξ_i*,
             ξ_i ≥ 0, ξ_i* ≥ 0  ∀i

The above problem is referred to as the primal problem. The constant C > 0 determines the trade-off between the smoothness of f and the amount up to which deviations larger than ε are tolerated.

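The ε-insensitive loss appearing in the objective of (P) is easy to state in code. The following is a minimal sketch (the function name and the NumPy dependency are our own choices, not part of the paper):

```python
import numpy as np

def eps_insensitive_loss(d, f, eps):
    """Vapnik's eps-insensitive loss: residuals smaller than eps are
    not penalized at all; larger residuals are penalized linearly."""
    return np.maximum(0.0, np.abs(d - f) - eps)

# A residual of 0.05 lies inside an eps = 0.1 tube, so it costs nothing;
# a residual of 0.5 costs 0.5 - 0.1 = 0.4.
print(eps_insensitive_loss(np.array([1.0, 2.0]), np.array([1.05, 2.5]), 0.1))
```

In (P), the slack variables ξ_i and ξ_i* measure exactly these one-sided excesses over the ε-tube.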
Let us define w(α, α*) = Σ_i (α_i − α_i*) z_i. We will refer to the α_i's and α_i*'s as Lagrange multipliers. Using Wolfe duality theory, it can be shown that they are obtained by solving the following dual problem:

(D)    max   Σ_i d_i (α_i − α_i*) − ε Σ_i (α_i + α_i*) − (1/2) w(α, α*)·w(α, α*)
       s.t.  Σ_i (α_i − α_i*) = 0,   α_i, α_i* ∈ [0, C]  ∀i

Once the α_i's and α_i*'s are obtained, the primal variables w, b, ξ_i and ξ_i* can easily be determined by using the KKT conditions mentioned earlier. The feature space (and hence w) can be infinite dimensional. This makes it computationally difficult to solve the primal problem (P). The numerical approach in SVM design is to solve the dual problem, since it is a finite-dimensional optimization problem. (Note that w(α, α*)·w(α, α*) = Σ_i Σ_j (α_i − α_i*)(α_j − α_j*) k(x_i, x_j).) To derive proper stopping conditions for algorithms which solve the dual, it is important to write down the optimality conditions for the dual. The Lagrangian for the dual is:

L_D = (1/2) w(α, α*)·w(α, α*) − Σ_i d_i (α_i − α_i*) + ε Σ_i (α_i + α_i*)
      + β Σ_i (α_i − α_i*) − Σ_i δ_i α_i − Σ_i δ_i* α_i*
      − Σ_i μ_i (C − α_i) − Σ_i μ_i* (C − α_i*)

Let F_i = d_i − w(α, α*)·z_i. The KKT conditions for the dual problem are:

    −F_i + ε + β − δ_i + μ_i = 0
     F_i + ε − β − δ_i* + μ_i* = 0
    δ_i α_i = 0,   δ_i ≥ 0,   α_i ≥ 0,   μ_i (C − α_i) = 0,   μ_i ≥ 0,   α_i ≤ C
    δ_i* α_i* = 0,  δ_i* ≥ 0,  α_i* ≥ 0,  μ_i* (C − α_i*) = 0,  μ_i* ≥ 0,  α_i* ≤ C

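The quantity F_i depends on w only through kernel evaluations, F_i = d_i − Σ_j (α_j − α_j*) k(x_j, x_i). A minimal sketch, assuming a precomputed kernel matrix K with K[j][i] = k(x_j, x_i) (the names are ours, not from the paper):

```python
import numpy as np

def compute_F(d, alpha, alpha_star, K):
    """F_i = d_i - w(alpha, alpha*) . z_i, evaluated via the kernel
    expansion w . z_i = sum_j (alpha_j - alpha_star_j) * k(x_j, x_i)."""
    return d - K.T @ (alpha - alpha_star)

# With all multipliers at zero, w = 0 and hence F_i = d_i.
```

In an actual solver one would not recompute this sum from scratch at every step; section 5 describes how the F_i values are cached and updated incrementally.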
These conditions can be simplified by considering the following five cases. (The KKT conditions are both necessary and sufficient for optimality; hereafter we will simply refer to them as optimality conditions.) It is easy to check that at optimality, for every i, α_i and α_i* cannot both be non-zero at the same time. Hence cases corresponding to α_i ≠ 0, α_i* ≠ 0 have been left out. (It is worth noting here that in the SMO regression algorithm and its modifications discussed in this paper, the condition α_i α_i* = 0 ∀i is maintained throughout.)

    Case 1: α_i = 0, α_i* = 0:    −ε ≤ F_i − β ≤ ε    (1a)
    Case 2: α_i = C:              F_i − β ≥ ε          (1b)
    Case 3: α_i* = C:             F_i − β ≤ −ε         (1c)
    Case 4: α_i ∈ (0, C):         F_i − β = ε          (1d)
    Case 5: α_i* ∈ (0, C):        F_i − β = −ε         (1e)

Define the following index sets at a given (α, α*): I0a = {i : 0 < α_i < C}; I0b = {i : 0 < α_i* < C}; I1 = {i : α_i = 0, α_i* = 0}; I2 = {i : α_i = 0, α_i* = C}; I3 = {i : α_i = C, α_i* = 0}. Also, let I0 = I0a ∪ I0b. Let us also define F̃_i and F̄_i as

    F̃_i = F_i + ε  if i ∈ I0b ∪ I2;    F̃_i = F_i − ε  if i ∈ I0a ∪ I1;
    F̄_i = F_i − ε  if i ∈ I0a ∪ I3;    F̄_i = F_i + ε  if i ∈ I0b ∪ I1.

Using these definitions we can rewrite the necessary conditions mentioned in (1a)-(1e) as

    F̃_i ≤ β  ∀ i ∈ I0 ∪ I1 ∪ I2;    F̄_i ≥ β  ∀ i ∈ I0 ∪ I1 ∪ I3.    (2)

Let us define

    b_up = min{F̄_i : i ∈ I0 ∪ I1 ∪ I3}  and  b_low = max{F̃_i : i ∈ I0 ∪ I1 ∪ I2}.    (3)

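Definition (3) translates directly into code. The sketch below (our own naming, not the paper's pseudo-code) assumes, as the text notes, that α_i α_i* = 0 is maintained for every i:

```python
def threshold_bounds(alpha, alpha_star, F, C, eps):
    """Return (b_low, b_up) per definition (3), classifying each index
    into I0a, I0b, I1, I2, I3 and forming F~_i and F-_i accordingly."""
    F_tilde, F_bar = [], []
    for a, a_s, f in zip(alpha, alpha_star, F):
        in_I0a = 0.0 < a < C
        in_I0b = 0.0 < a_s < C
        in_I1 = a == 0.0 and a_s == 0.0
        in_I2 = a == 0.0 and a_s == C
        in_I3 = a == C and a_s == 0.0
        if in_I0b or in_I2:           # F~_i = F_i + eps on I0b u I2
            F_tilde.append(f + eps)
        elif in_I0a or in_I1:         # F~_i = F_i - eps on I0a u I1
            F_tilde.append(f - eps)
        if in_I0a or in_I3:           # F-_i = F_i - eps on I0a u I3
            F_bar.append(f - eps)
        elif in_I0b or in_I1:         # F-_i = F_i + eps on I0b u I1
            F_bar.append(f + eps)
    return max(F_tilde), min(F_bar)   # (b_low, b_up)
```

By (2), any admissible threshold β must lie in [b_low, b_up], which is non-empty exactly when the optimality condition (4) below holds.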
Then the optimality conditions will hold at some β if and only if

    b_low ≤ b_up.    (4)

It is easy to see the close relationship between the threshold parameter b in the primal problem and the multiplier β. In particular, at optimality, β and b are identical. Therefore, in the rest of the paper, β and b will denote one and the same quantity. We will say that an index pair (i, j) defines a violation at (α, α*) if one of the following two sets of conditions holds:

    i ∈ I0 ∪ I1 ∪ I2,  j ∈ I0 ∪ I1 ∪ I3  and  F̃_i > F̄_j    (5a)
    i ∈ I0 ∪ I1 ∪ I3,  j ∈ I0 ∪ I1 ∪ I2  and  F̄_i < F̃_j    (5b)

Note that the optimality conditions hold at (α, α*) if and only if there does not exist any index pair (i, j) that defines a violation. Since, in a numerical solution, it is usually not possible to achieve optimality exactly, there is a need to define approximate optimality conditions. The condition (4) can be replaced by

    b_low ≤ b_up + 2τ    (6)

where τ is a positive tolerance parameter. (In the pseudo-codes given in the appendix of this paper, this parameter is referred to as tol.) Correspondingly, the definition of violation can be altered by replacing (5a) and (5b) respectively by:

    i ∈ I0 ∪ I1 ∪ I2,  j ∈ I0 ∪ I1 ∪ I3  and  F̃_i > F̄_j + 2τ    (7a)
    i ∈ I0 ∪ I1 ∪ I3,  j ∈ I0 ∪ I1 ∪ I2  and  F̄_i < F̃_j − 2τ    (7b)

Hereafter in the paper, when optimality is mentioned it will mean approximate optimality. Let E_i = F_i − b. Using (1) it is easy to check that optimality holds if and only if there exists a b such that the following hold for every i:

    α_i > 0   ⇒  E_i ≥ ε − τ     (8a)
    α_i < C   ⇒  E_i ≤ ε + τ     (8b)
    α_i* > 0  ⇒  E_i ≤ −ε + τ    (8c)
    α_i* < C  ⇒  E_i ≥ −ε − τ    (8d)

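Conditions (6)-(7) give a stopping test that needs no threshold b at all. A small sketch (names are ours; the F̃ and F̄ values and the set memberships are as defined above):

```python
def approx_optimal(b_low, b_up, tau):
    """Approximate optimality test (6): b_low <= b_up + 2*tau."""
    return b_low <= b_up + 2.0 * tau

def defines_violation(F_tilde_i, F_bar_j, tau):
    """Condition (7a): an index i in I0 u I1 u I2 and an index j in
    I0 u I1 u I3 form a violating pair when F~_i exceeds F-_j by more
    than 2*tau.  (7b) is the same test with the roles of i, j swapped."""
    return F_tilde_i > F_bar_j + 2.0 * tau
```

Since b_low and b_up are the extreme F̃ and F̄ values from (3), the whole problem is approximately solved precisely when the pair achieving them is not a violating pair, i.e. when (6) holds.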
These conditions are used in [7, 8], together with a special choice of b, to check if an example violates the KKT conditions. However, unless the choice of b turns out to be right, using the above conditions for checking optimality can be incorrect. We will say more about this in section 4, after a brief discussion of Smola and Scholkopf's SMO algorithm in the next section.

3 Smola and Scholkopf's SMO Algorithm for Regression

A number of algorithms have been suggested for solving the dual problem. Smola and Scholkopf [7, 8] give a detailed view of these algorithms and their implementations. Traditional quadratic programming algorithms, such as interior point algorithms, are not suitable for large-size problems for the following reasons. First, they require that the kernel matrix k(x_i, x_j) be computed and stored in memory, which requires extremely large memory. Second, these methods involve expensive matrix operations such as Cholesky decomposition of a large sub-matrix of the kernel matrix. Third, coding of these algorithms is difficult. Attempts have been made to develop methods that overcome some or all of these problems. One such method is chunking. The idea here is to operate on a fixed-size subset of the training set at a time. This subset is called the working set; the optimization subproblem is solved with respect to the variables corresponding to the examples in the working set, and a set of support vectors for the current working set is found. These current support vectors are then used to determine the new working set: the data on which the current estimator would make errors. The new optimization subproblem is solved and this process is repeated until the KKT conditions are satisfied for all the examples.

Platt [5] proposed an algorithm, called Sequential Minimal Optimization (SMO), for SVM classifier design. This algorithm takes chunking to the extreme by iteratively selecting working sets of size two and optimizing the target function with respect to them. One advantage of using working sets of size 2 is that the optimization subproblem can be solved analytically.
The chunking process is repeated till all the training examples satisfy the KKT conditions. Smola and Scholkopf [7, 8] extended these ideas to solve the regression problem using SVMs. We describe this algorithm very briefly below. The details, together with pseudo-code, can be found in [7, 8]; we assume that the reader is familiar with them. To convey our ideas compactly we employ the notations used in [7, 8].

The basic step in the SMO algorithm consists of choosing a pair of indices, (i1, i2), and optimizing the dual objective function in (D) by varying the Lagrange multipliers corresponding to i1 and i2 only. We make one important comment here on the role of the threshold parameter β. As in [7, 8], define the output error on the i-th pattern as E_i = F_i − β. Let us call the indices of the two multipliers chosen for joint optimization in one step i2 and i1. To take a step by varying the Lagrange multipliers of examples i1 and i2, we only need to know E_i1 − E_i2 = F_i1 − F_i2. Therefore, knowledge of the value of β is not needed to take a step.

The method followed to choose i1 and i2 at each step is crucial for finding the solution of the problem efficiently. The SMO algorithm employs a two-loop approach: the outer loop chooses i2 and, for a chosen i2, the inner loop chooses i1. The outer loop iterates over all patterns violating the optimality conditions, first only over those with Lagrange multipliers neither on the upper nor on the lower boundary (in Smola and Scholkopf's pseudo-code this looping is indicated by examineAll = 0), and, once all of them are satisfied, over all patterns violating the optimality conditions (examineAll = 1), to ensure that the problem has indeed been solved. For efficient implementation a cache of E_i is maintained and updated for those indices corresponding to non-boundary Lagrange multipliers. The remaining E_i are computed as and when needed.

Let us now see how the SMO algorithm chooses i1. The aim is to make a large increase in the objective function. Since it is expensive to try out all possible choices of i1 and choose the one that gives the best increase in the objective function, the index i1 is chosen to maximize |E_i2 − E_i1| or |E_i2 − E_i1 ± 2ε|, depending on the multipliers of i1 and i2. Since E_i is available in cache for non-boundary multiplier indices, only such indices are initially used in the above choice of i1. If such a choice of i1 does not yield sufficient progress, then the following steps are taken.
Starting from a randomly chosen index, all indices corresponding to non-bound multipliers are tried as a choice for i1, one by one. If sufficient progress is still not possible, all indices are tried as choices for i1, one by one, again starting from a randomly chosen index. Thus the choice of the random seed affects the running time of SMO.

Although a value of β is not needed to take a step, it is needed if (8a)-(8d) are employed for checking optimality. In the SMO algorithm, β is updated after each step. A value of β is chosen so as to satisfy (1) for i ∈ {i1, i2}. If, after a step involving (i1, i2), one of the Lagrange multipliers (or both) takes a non-boundary value, then (1d) or (1e) is exploited to update the value of β. In the rare case that this does not happen, there exists a whole interval, say [β_low, β_up], of admissible thresholds. In this situation SMO simply chooses β to be the midpoint of this interval.

4 Inefficiency of the SMO algorithm

The SMO algorithm for regression, discussed above, is very simple and easy to implement. However, it can become inefficient, typically near a solution point, because of its way of computing and maintaining a single threshold value. At any instant, the SMO algorithm fixes β based on the current two indices used for joint optimization. However, while checking whether the remaining examples violate optimality or not, it is quite possible that a different, shifted choice of β would do a better job. So, in the SMO algorithm it is quite possible that, even though (α, α*) has reached a value where optimality is satisfied (i.e., (6) holds), SMO has not detected this because it has not identified the correct choice of β. It is also quite possible that a particular index may appear to violate the optimality conditions because (8) is employed using an "incorrect" value of β, although this index may not be able to pair with another to make progress in the objective function. In such a situation the SMO algorithm does an expensive and wasteful search looking for a second index so as to take a step. We believe that this is a major source of inefficiency in the SMO algorithm.

There is one simple alternate way of choosing β that involves all indices. By duality theory, the objective function value in (P) of a primal feasible solution is greater than or equal to the objective function value in (D) of a dual feasible solution. The difference between these two values is referred to as the duality gap. The duality gap is zero only at optimality. Suppose (α, α*) is given and w = w(α, α*). The slacks ξ_i, ξ_i* can be chosen optimally (as a function of b). The result is that the duality gap is expressed as a function of b only.
One possible way of improving the SMO algorithm is to always choose β so as to minimize the duality gap. This corresponds to the subproblem

    min_β  Σ_i max(0, F_i − β − ε, −F_i + β − ε)

Let m denote the number of examples. In an increasing-order arrangement of the combined values {F_i − ε} and {F_i + ε}, let f_m and f_(m+1) be the m-th and (m+1)-th values. Then any β in the interval [f_m, f_(m+1)] is a minimizer. The determination of f_m and f_(m+1) can be done efficiently using a "median-finding" technique. Since all F_i are not typically available at a given stage of the algorithm, it is appropriate to apply the above idea to that subset of indices for which F_i are available. This set is nothing but I0. We implemented this idea and tested it on some benchmark problems, but it did not fare well. See section 6 for the performance of the resulting algorithm.

5 Modifications of the SMO Algorithm

In this section we suggest two modified versions of the SMO algorithm for regression, each of which overcomes the problems mentioned in the last section. As we will see in the computational evaluation of section 6, these modifications are always better than the original SMO algorithm for regression and, in most situations, they also give quite a remarkable improvement in efficiency. In short, the modifications avoid the use of a single threshold value b and the use of (8) for checking optimality. Instead, two threshold parameters, b_up and b_low, are maintained and (6) (or (7)) is employed for checking optimality. Assuming that the reader is familiar with [7] and the pseudo-codes for SMO given there, we only give a set of pointers that describe the changes made to Smola and Scholkopf's SMO algorithm for regression. Pseudo-codes that fully describe these can be found in [6].

1. Suppose, at any instant, F_i is available for all i. Let i_low and i_up be indices such that

    F̃_i_low = b_low = max{F̃_i : i ∈ I0 ∪ I1 ∪ I2}    (9a)

and

    F̄_i_up = b_up = min{F̄_i : i ∈ I0 ∪ I1 ∪ I3}    (9b)

Then checking a particular i for optimality is easy. For example, suppose i ∈ I3. We only have to check if F̄_i < b_low − 2τ. If this condition holds, then there is a violation, and in that case SMO's takeStep procedure can be applied to the index pair (i, i_low). Similar steps can be given for indices in the other sets. Thus, in our approach, the checking of optimality of the first index, i2, and the choice of the second index, i1, go hand in hand, unlike in the original SMO algorithm. As we will see below, we compute and use (i_low, b_low) and (i_up, b_up) via an efficient updating process.

2. To be efficient, we would, as in the SMO algorithm, spend much of the effort altering α_i, α_i*, i ∈ I0; a cache of F_i, i ∈ I0, is maintained and updated to do this efficiently. And, when optimality holds for all i ∈ I0, only then are all indices examined for optimality.

3. The procedure takeStep is modified. After a successful step using a pair of indices (i2, i1), let Î = I0 ∪ {i1, i2}. We compute, partially, (i_low, b_low) and (i_up, b_up) using Î only (i.e., use only i ∈ Î in (9)). Note that these extra steps are inexpensive because the cache for {F_i, i ∈ I0} is available and updates of F_i1, F_i2 are easily done. A careful look shows that, since i2 and i1 have just been involved in a successful step, each of the two sets, Î ∩ (I0 ∪ I1 ∪ I2) and Î ∩ (I0 ∪ I1 ∪ I3), is non-empty; hence the partially computed (i_low, b_low) and (i_up, b_up) will not be null elements. Since i_low and i_up could take values from {i1, i2}, and they are used as choices for i1 in the subsequent step (see item 1 above), we keep the values of F_i1 and F_i2 in cache as well.

4. When working with only α_i, α_i*, i ∈ I0 (i.e., a loop with examineAll = 0), one should note that, if (6) holds at some point, then optimality holds as far as I0 is concerned. (This is because, as mentioned in item 3 above, the choices of b_low and b_up are influenced by all indices in I0.) This gives an easy way of exiting this loop.

5. There are two ways of implementing the loop involving indices in I0 only (examineAll = 0).

Method 1. This is similar to what is done in SMO. Loop through all i2 ∈ I0. For each i2, check optimality and, if violated, choose i1 appropriately. For example, if F̄_i2 < b_low − 2τ then there is a violation, and in that case choose i1 = i_low.

Method 2. Always work with the worst violating pair, i.e., choose i2 = i_low and i1 = i_up.

Depending on which of these methods is used, we call the resulting overall modifications of SMO, SMO-Modification 1 and SMO-Modification 2. SMO and SMO-Modification 1 are identical except in the way the bias is maintained and optimality is tested. On the other hand, SMO-Modification 2 can be thought of as a further improvement of SMO-Modification 1 in which the cache is effectively used to choose the violating pair when examineAll = 0.

6. When optimality on I0 holds, as already said, we come back to check optimality on all indices (examineAll = 1). Here we loop through all indices, one by one.
Since (b_low, i_low) and (b_up, i_up) have been partially computed using I0 only, we update these quantities as each i is examined. For a given i, F_i is computed first and optimality is checked using the current (b_low, i_low) and (b_up, i_up); if there is no violation, F_i is used to update these quantities. For example, if i ∈ I3 and F̄_i < b_low − 2τ, then there is a violation, in which case we take a step using (i, i_low). On the other hand, if there is no violation, then (i_up, b_up) is modified using F̄_i, i.e., if F̄_i < b_up then we set: i_up := i and b_up := F̄_i.

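The check-and-update pass over all indices described in item 6 can be sketched as follows. This is our own simplified rendering (set classification and the takeStep optimization are abstracted away as callables), not the pseudo-code of [6]:

```python
def examine_all_pass(indices, f_tilde_bar, take_step, state, tau):
    """One examineAll = 1 pass: for each i, check optimality against the
    current (b_low, i_low) and (b_up, i_up); on a violation, take a step
    with the appropriate partner; otherwise fold F~_i / F-_i into the
    running extrema so the true b_low, b_up of (3) emerge by loop's end."""
    for i in indices:
        # f_tilde_bar(i) returns (F~_i, F-_i); an entry is None when i is
        # outside the corresponding union of index sets.
        F_tilde_i, F_bar_i = f_tilde_bar(i)
        if F_bar_i is not None and F_bar_i < state['b_low'] - 2.0 * tau:
            take_step(i, state['i_low'])       # violation, e.g. i in I3
        elif F_tilde_i is not None and F_tilde_i > state['b_up'] + 2.0 * tau:
            take_step(i, state['i_up'])        # symmetric violation
        else:
            # no violation: update the two thresholds with this index
            if F_tilde_i is not None and F_tilde_i > state['b_low']:
                state['b_low'], state['i_low'] = F_tilde_i, i
            if F_bar_i is not None and F_bar_i < state['b_up']:
                state['b_up'], state['i_up'] = F_bar_i, i
```

A real implementation would also refresh the thresholds after each successful takeStep, as item 3 describes; this sketch only shows the per-index logic.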
7. Suppose we do as described above. What happens if there is no violation for any i in a loop having examineAll = 1? Can we conclude that optimality holds for all i? The answer is: YES. This is easy to see from the following argument. Suppose, by contradiction, there does exist one pair (i, j) that defines a violation, i.e., that satisfies (7). Let us say i < j. Then j would not have passed the optimality check in the implementation described above, because either F̄_i or F̃_i would have, before j is seen, affected the calculation of b_up and/or b_low. In other words, even if i is mistakenly taken as having satisfied optimality earlier in the loop, j will be detected as violating optimality when it is analysed. Only when (6) holds is it possible for all indices to satisfy the optimality checks. Furthermore, when (6) holds and the loop over all indices has been completed, the true values of b_up and b_low, as defined in (3), will have been computed, since all indices have been encountered. As a final choice of b (for later use in doing inference) it is appropriate to set b = 0.5(b_up + b_low).

6 Computational Comparison

In this section we compare the performance of our modifications against Smola and Scholkopf's SMO algorithm for regression on three datasets. We implemented all these methods in C and ran them using gcc on a P3 450 MHz Linux machine. The value τ = 0.1 was used for all experiments.

The first dataset is a toy dataset in which the function to be approximated is a cubic polynomial, 0.2x³ + 0.5x² − x. The domain of this function was fixed to [−1, 1]. Gaussian noise of mean zero and variance 1 was added to the training set outputs. A hundred training samples were chosen randomly. The performance of the four algorithms for the polynomial kernel k(x_i, x_j) = (1 + x_i·x_j)^p, where p was chosen to be 3, is shown in Fig. 1. The second dataset is the Boston housing dataset, which is a standard benchmark for testing regression algorithms. This dataset is available at the UCI Repository [1]. The dimension of the input is 13. We used a training set of size 406.
A Gaussian noise of mean zero and standard deviation 6 was added to the training data. ε = 0.56 was used in this case. Fig. 2 shows the performance of the four algorithms on this dataset. For this as well as the third dataset the Gaussian kernel k(x_i, x_j) = exp(−‖x_i − x_j‖²/σ) was used, and the value of σ employed was 5.0.

Figure 1: Toy data: CPU time (in seconds) shown as a function of C, for "smo", "smo_duality_gap", "smo_mod_1" and "smo_mod_2".

Figure 2: Boston Housing data: CPU time (in seconds) shown as a function of C.

The third dataset, Comp-Activ, is available at the Delve website [3]. This dataset contains 8192 data points, of which we used a subset. We implemented the "cpusmall" prototask, which involves using 12 attributes to predict the fraction of time (in percentage) the CPU runs in user mode. Gaussian noise of mean zero and standard deviation 1 was added to this training set. We used ε = 0.48 for this dataset. The performance of the four algorithms on this dataset is shown in Fig. 3.

It is very clear that both modifications outperform the original SMO algorithm. In many situations the improvement in efficiency is remarkable; in particular, at large values of C the improvement is by an order of magnitude. Between the two modifications, it is difficult to say which one is better. We have not reported a comparison of the generalization abilities of the three methods, since all three apply to the same problem formulation and are terminated at the same training set accuracy, and hence give very close generalization performance.

7 Conclusion

In this paper we have pointed out an important source of inefficiency in Smola and Scholkopf's SMO algorithm that is caused by its operation with a single threshold value. We have suggested two modifications of the SMO algorithm that overcome the problem by efficiently maintaining and updating two threshold parameters. Our computational experiments show that these modifications speed up the SMO algorithm significantly in most situations.

References

[1] C.L. Blake and C.J. Merz, UCI repository of machine learning databases, University of California, Department of Information and Computer Science, Irvine, CA, USA.
[2] C.J.C. Burges, A tutorial on support vector machines for pattern recognition, Data Mining and Knowledge Discovery, 3(2).
[3] Delve: Data for evaluating learning in valid experiments.
[4] S.S. Keerthi, S.K. Shevade, C. Bhattacharyya and K.R.K. Murthy, Improvements to Platt's SMO algorithm for SVM classifier design, Technical Report CD-99-14, Control Division, Dept. of Mechanical and Production Engineering, National University of Singapore, Singapore, August 1999.
[5] J.C. Platt, Fast training of support vector machines using sequential minimal optimization, in B. Scholkopf, C. Burges and A. Smola (eds.), Advances in Kernel Methods: Support Vector Learning, MIT Press, Cambridge, MA.
[6] S.K. Shevade, S.S. Keerthi, C. Bhattacharyya and K.R.K. Murthy, Improvements to SMO algorithm for SVM regression, Technical Report CD-99-16, Control Division, Dept. of Mechanical and Production Engineering, National University of Singapore, Singapore, August 1999.
[7] A.J. Smola, Learning with kernels, PhD Thesis, GMD, Birlinghoven, Germany, 1998.
[8] A.J. Smola and B. Scholkopf, A tutorial on support vector regression, NeuroCOLT Technical Report, Royal Holloway College, London, UK, 1998.
[9] V. Vapnik, The Nature of Statistical Learning Theory, Springer, New York, 1995.


More information

Lecture 5: Multilayer Perceptrons

Lecture 5: Multilayer Perceptrons Lecture 5: Multlayer Perceptrons Roger Grosse 1 Introducton So far, we ve only talked about lnear models: lnear regresson and lnear bnary classfers. We noted that there are functons that can t be represented

More information

An Application of the Dulmage-Mendelsohn Decomposition to Sparse Null Space Bases of Full Row Rank Matrices

An Application of the Dulmage-Mendelsohn Decomposition to Sparse Null Space Bases of Full Row Rank Matrices Internatonal Mathematcal Forum, Vol 7, 2012, no 52, 2549-2554 An Applcaton of the Dulmage-Mendelsohn Decomposton to Sparse Null Space Bases of Full Row Rank Matrces Mostafa Khorramzadeh Department of Mathematcal

More information

Taxonomy of Large Margin Principle Algorithms for Ordinal Regression Problems

Taxonomy of Large Margin Principle Algorithms for Ordinal Regression Problems Taxonomy of Large Margn Prncple Algorthms for Ordnal Regresson Problems Amnon Shashua Computer Scence Department Stanford Unversty Stanford, CA 94305 emal: shashua@cs.stanford.edu Anat Levn School of Computer

More information

NUMERICAL SOLVING OPTIMAL CONTROL PROBLEMS BY THE METHOD OF VARIATIONS

NUMERICAL SOLVING OPTIMAL CONTROL PROBLEMS BY THE METHOD OF VARIATIONS ARPN Journal of Engneerng and Appled Scences 006-017 Asan Research Publshng Network (ARPN). All rghts reserved. NUMERICAL SOLVING OPTIMAL CONTROL PROBLEMS BY THE METHOD OF VARIATIONS Igor Grgoryev, Svetlana

More information

NAG Fortran Library Chapter Introduction. G10 Smoothing in Statistics

NAG Fortran Library Chapter Introduction. G10 Smoothing in Statistics Introducton G10 NAG Fortran Lbrary Chapter Introducton G10 Smoothng n Statstcs Contents 1 Scope of the Chapter... 2 2 Background to the Problems... 2 2.1 Smoothng Methods... 2 2.2 Smoothng Splnes and Regresson

More information

Mathematics 256 a course in differential equations for engineering students

Mathematics 256 a course in differential equations for engineering students Mathematcs 56 a course n dfferental equatons for engneerng students Chapter 5. More effcent methods of numercal soluton Euler s method s qute neffcent. Because the error s essentally proportonal to the

More information

Helsinki University Of Technology, Systems Analysis Laboratory Mat Independent research projects in applied mathematics (3 cr)

Helsinki University Of Technology, Systems Analysis Laboratory Mat Independent research projects in applied mathematics (3 cr) Helsnk Unversty Of Technology, Systems Analyss Laboratory Mat-2.08 Independent research projects n appled mathematcs (3 cr) "! #$&% Antt Laukkanen 506 R ajlaukka@cc.hut.f 2 Introducton...3 2 Multattrbute

More information

Tsinghua University at TAC 2009: Summarizing Multi-documents by Information Distance

Tsinghua University at TAC 2009: Summarizing Multi-documents by Information Distance Tsnghua Unversty at TAC 2009: Summarzng Mult-documents by Informaton Dstance Chong Long, Mnle Huang, Xaoyan Zhu State Key Laboratory of Intellgent Technology and Systems, Tsnghua Natonal Laboratory for

More information

Outline. Type of Machine Learning. Examples of Application. Unsupervised Learning

Outline. Type of Machine Learning. Examples of Application. Unsupervised Learning Outlne Artfcal Intellgence and ts applcatons Lecture 8 Unsupervsed Learnng Professor Danel Yeung danyeung@eee.org Dr. Patrck Chan patrckchan@eee.org South Chna Unversty of Technology, Chna Introducton

More information

Parallel Sequential Minimal Optimization for the Training. of Support Vector Machines

Parallel Sequential Minimal Optimization for the Training. of Support Vector Machines Parallel Sequental Mnmal Optmzaton for the Tranng of Sport Vector Machnes 1 L.J. Cao a, S.S. Keerth b, C.J. Ong b, P. Uvaraj c, X.J. Fu c and H.P. Lee c, J.Q. Zhang a a Fnancal Studes of Fudan Unversty,

More information

Sum of Linear and Fractional Multiobjective Programming Problem under Fuzzy Rules Constraints

Sum of Linear and Fractional Multiobjective Programming Problem under Fuzzy Rules Constraints Australan Journal of Basc and Appled Scences, 2(4): 1204-1208, 2008 ISSN 1991-8178 Sum of Lnear and Fractonal Multobjectve Programmng Problem under Fuzzy Rules Constrants 1 2 Sanjay Jan and Kalash Lachhwan

More information

Assignment # 2. Farrukh Jabeen Algorithms 510 Assignment #2 Due Date: June 15, 2009.

Assignment # 2. Farrukh Jabeen Algorithms 510 Assignment #2 Due Date: June 15, 2009. Farrukh Jabeen Algorthms 51 Assgnment #2 Due Date: June 15, 29. Assgnment # 2 Chapter 3 Dscrete Fourer Transforms Implement the FFT for the DFT. Descrbed n sectons 3.1 and 3.2. Delverables: 1. Concse descrpton

More information

12/2/2009. Announcements. Parametric / Non-parametric. Case-Based Reasoning. Nearest-Neighbor on Images. Nearest-Neighbor Classification

12/2/2009. Announcements. Parametric / Non-parametric. Case-Based Reasoning. Nearest-Neighbor on Images. Nearest-Neighbor Classification Introducton to Artfcal Intellgence V22.0472-001 Fall 2009 Lecture 24: Nearest-Neghbors & Support Vector Machnes Rob Fergus Dept of Computer Scence, Courant Insttute, NYU Sldes from Danel Yeung, John DeNero

More information

Array transposition in CUDA shared memory

Array transposition in CUDA shared memory Array transposton n CUDA shared memory Mke Gles February 19, 2014 Abstract Ths short note s nspred by some code wrtten by Jeremy Appleyard for the transposton of data through shared memory. I had some

More information

LECTURE NOTES Duality Theory, Sensitivity Analysis, and Parametric Programming

LECTURE NOTES Duality Theory, Sensitivity Analysis, and Parametric Programming CEE 60 Davd Rosenberg p. LECTURE NOTES Dualty Theory, Senstvty Analyss, and Parametrc Programmng Learnng Objectves. Revew the prmal LP model formulaton 2. Formulate the Dual Problem of an LP problem (TUES)

More information

An Entropy-Based Approach to Integrated Information Needs Assessment

An Entropy-Based Approach to Integrated Information Needs Assessment Dstrbuton Statement A: Approved for publc release; dstrbuton s unlmted. An Entropy-Based Approach to ntegrated nformaton Needs Assessment June 8, 2004 Wllam J. Farrell Lockheed Martn Advanced Technology

More information

Compiler Design. Spring Register Allocation. Sample Exercises and Solutions. Prof. Pedro C. Diniz

Compiler Design. Spring Register Allocation. Sample Exercises and Solutions. Prof. Pedro C. Diniz Compler Desgn Sprng 2014 Regster Allocaton Sample Exercses and Solutons Prof. Pedro C. Dnz USC / Informaton Scences Insttute 4676 Admralty Way, Sute 1001 Marna del Rey, Calforna 90292 pedro@s.edu Regster

More information

6.854 Advanced Algorithms Petar Maymounkov Problem Set 11 (November 23, 2005) With: Benjamin Rossman, Oren Weimann, and Pouya Kheradpour

6.854 Advanced Algorithms Petar Maymounkov Problem Set 11 (November 23, 2005) With: Benjamin Rossman, Oren Weimann, and Pouya Kheradpour 6.854 Advanced Algorthms Petar Maymounkov Problem Set 11 (November 23, 2005) Wth: Benjamn Rossman, Oren Wemann, and Pouya Kheradpour Problem 1. We reduce vertex cover to MAX-SAT wth weghts, such that the

More information

Content Based Image Retrieval Using 2-D Discrete Wavelet with Texture Feature with Different Classifiers

Content Based Image Retrieval Using 2-D Discrete Wavelet with Texture Feature with Different Classifiers IOSR Journal of Electroncs and Communcaton Engneerng (IOSR-JECE) e-issn: 78-834,p- ISSN: 78-8735.Volume 9, Issue, Ver. IV (Mar - Apr. 04), PP 0-07 Content Based Image Retreval Usng -D Dscrete Wavelet wth

More information

y and the total sum of

y and the total sum of Lnear regresson Testng for non-lnearty In analytcal chemstry, lnear regresson s commonly used n the constructon of calbraton functons requred for analytcal technques such as gas chromatography, atomc absorpton

More information

Unsupervised Learning

Unsupervised Learning Pattern Recognton Lecture 8 Outlne Introducton Unsupervsed Learnng Parametrc VS Non-Parametrc Approach Mxture of Denstes Maxmum-Lkelhood Estmates Clusterng Prof. Danel Yeung School of Computer Scence and

More information

The Research of Support Vector Machine in Agricultural Data Classification

The Research of Support Vector Machine in Agricultural Data Classification The Research of Support Vector Machne n Agrcultural Data Classfcaton Le Sh, Qguo Duan, Xnmng Ma, Me Weng College of Informaton and Management Scence, HeNan Agrcultural Unversty, Zhengzhou 45000 Chna Zhengzhou

More information

Kent State University CS 4/ Design and Analysis of Algorithms. Dept. of Math & Computer Science LECT-16. Dynamic Programming

Kent State University CS 4/ Design and Analysis of Algorithms. Dept. of Math & Computer Science LECT-16. Dynamic Programming CS 4/560 Desgn and Analyss of Algorthms Kent State Unversty Dept. of Math & Computer Scence LECT-6 Dynamc Programmng 2 Dynamc Programmng Dynamc Programmng, lke the dvde-and-conquer method, solves problems

More information

R s s f. m y s. SPH3UW Unit 7.3 Spherical Concave Mirrors Page 1 of 12. Notes

R s s f. m y s. SPH3UW Unit 7.3 Spherical Concave Mirrors Page 1 of 12. Notes SPH3UW Unt 7.3 Sphercal Concave Mrrors Page 1 of 1 Notes Physcs Tool box Concave Mrror If the reflectng surface takes place on the nner surface of the sphercal shape so that the centre of the mrror bulges

More information

An Optimal Algorithm for Prufer Codes *

An Optimal Algorithm for Prufer Codes * J. Software Engneerng & Applcatons, 2009, 2: 111-115 do:10.4236/jsea.2009.22016 Publshed Onlne July 2009 (www.scrp.org/journal/jsea) An Optmal Algorthm for Prufer Codes * Xaodong Wang 1, 2, Le Wang 3,

More information

Quality Improvement Algorithm for Tetrahedral Mesh Based on Optimal Delaunay Triangulation

Quality Improvement Algorithm for Tetrahedral Mesh Based on Optimal Delaunay Triangulation Intellgent Informaton Management, 013, 5, 191-195 Publshed Onlne November 013 (http://www.scrp.org/journal/m) http://dx.do.org/10.36/m.013.5601 Qualty Improvement Algorthm for Tetrahedral Mesh Based on

More information

SVM-based Learning for Multiple Model Estimation

SVM-based Learning for Multiple Model Estimation SVM-based Learnng for Multple Model Estmaton Vladmr Cherkassky and Yunqan Ma Department of Electrcal and Computer Engneerng Unversty of Mnnesota Mnneapols, MN 55455 {cherkass,myq}@ece.umn.edu Abstract:

More information

Lecture 4: Principal components

Lecture 4: Principal components /3/6 Lecture 4: Prncpal components 3..6 Multvarate lnear regresson MLR s optmal for the estmaton data...but poor for handlng collnear data Covarance matrx s not nvertble (large condton number) Robustness

More information

A One-Sided Jacobi Algorithm for the Symmetric Eigenvalue Problem

A One-Sided Jacobi Algorithm for the Symmetric Eigenvalue Problem P-Q- A One-Sded Jacob Algorthm for the Symmetrc Egenvalue Problem B. B. Zhou, R. P. Brent E-mal: bng,rpb@cslab.anu.edu.au Computer Scences Laboratory The Australan Natonal Unversty Canberra, ACT 000, Australa

More information

Virtual Memory. Background. No. 10. Virtual Memory: concept. Logical Memory Space (review) Demand Paging(1) Virtual Memory

Virtual Memory. Background. No. 10. Virtual Memory: concept. Logical Memory Space (review) Demand Paging(1) Virtual Memory Background EECS. Operatng System Fundamentals No. Vrtual Memory Prof. Hu Jang Department of Electrcal Engneerng and Computer Scence, York Unversty Memory-management methods normally requres the entre process

More information

Channel 0. Channel 1 Channel 2. Channel 3 Channel 4. Channel 5 Channel 6 Channel 7

Channel 0. Channel 1 Channel 2. Channel 3 Channel 4. Channel 5 Channel 6 Channel 7 Optmzed Regonal Cachng for On-Demand Data Delvery Derek L. Eager Mchael C. Ferrs Mary K. Vernon Unversty of Saskatchewan Unversty of Wsconsn Madson Saskatoon, SK Canada S7N 5A9 Madson, WI 5376 eager@cs.usask.ca

More information

Computer models of motion: Iterative calculations

Computer models of motion: Iterative calculations Computer models o moton: Iteratve calculatons OBJECTIVES In ths actvty you wll learn how to: Create 3D box objects Update the poston o an object teratvely (repeatedly) to anmate ts moton Update the momentum

More information

Course Introduction. Algorithm 8/31/2017. COSC 320 Advanced Data Structures and Algorithms. COSC 320 Advanced Data Structures and Algorithms

Course Introduction. Algorithm 8/31/2017. COSC 320 Advanced Data Structures and Algorithms. COSC 320 Advanced Data Structures and Algorithms Course Introducton Course Topcs Exams, abs, Proects A quc loo at a few algorthms 1 Advanced Data Structures and Algorthms Descrpton: We are gong to dscuss algorthm complexty analyss, algorthm desgn technques

More information

Feature Reduction and Selection

Feature Reduction and Selection Feature Reducton and Selecton Dr. Shuang LIANG School of Software Engneerng TongJ Unversty Fall, 2012 Today s Topcs Introducton Problems of Dmensonalty Feature Reducton Statstc methods Prncpal Components

More information

CMPS 10 Introduction to Computer Science Lecture Notes

CMPS 10 Introduction to Computer Science Lecture Notes CPS 0 Introducton to Computer Scence Lecture Notes Chapter : Algorthm Desgn How should we present algorthms? Natural languages lke Englsh, Spansh, or French whch are rch n nterpretaton and meanng are not

More information

Outline. Discriminative classifiers for image recognition. Where in the World? A nearest neighbor recognition example 4/14/2011. CS 376 Lecture 22 1

Outline. Discriminative classifiers for image recognition. Where in the World? A nearest neighbor recognition example 4/14/2011. CS 376 Lecture 22 1 4/14/011 Outlne Dscrmnatve classfers for mage recognton Wednesday, Aprl 13 Krsten Grauman UT-Austn Last tme: wndow-based generc obect detecton basc ppelne face detecton wth boostng as case study Today:

More information

Using Neural Networks and Support Vector Machines in Data Mining

Using Neural Networks and Support Vector Machines in Data Mining Usng eural etworks and Support Vector Machnes n Data Mnng RICHARD A. WASIOWSKI Computer Scence Department Calforna State Unversty Domnguez Hlls Carson, CA 90747 USA Abstract: - Multvarate data analyss

More information

Column-Generation Boosting Methods for Mixture of Kernels

Column-Generation Boosting Methods for Mixture of Kernels Column-Generaton Boostng Methods for Mxture of Kernels (KDD-4 464) Jnbo B Computer-Aded Dagnoss & Therapy Group Semens Medcal Solutons Malvern, A 9355 nbo.b@semens.com Tong Zhang IBM T.J. Watson Research

More information

Meta-heuristics for Multidimensional Knapsack Problems

Meta-heuristics for Multidimensional Knapsack Problems 2012 4th Internatonal Conference on Computer Research and Development IPCSIT vol.39 (2012) (2012) IACSIT Press, Sngapore Meta-heurstcs for Multdmensonal Knapsack Problems Zhbao Man + Computer Scence Department,

More information

Proper Choice of Data Used for the Estimation of Datum Transformation Parameters

Proper Choice of Data Used for the Estimation of Datum Transformation Parameters Proper Choce of Data Used for the Estmaton of Datum Transformaton Parameters Hakan S. KUTOGLU, Turkey Key words: Coordnate systems; transformaton; estmaton, relablty. SUMMARY Advances n technologes and

More information

On Some Entertaining Applications of the Concept of Set in Computer Science Course

On Some Entertaining Applications of the Concept of Set in Computer Science Course On Some Entertanng Applcatons of the Concept of Set n Computer Scence Course Krasmr Yordzhev *, Hrstna Kostadnova ** * Assocate Professor Krasmr Yordzhev, Ph.D., Faculty of Mathematcs and Natural Scences,

More information

Related-Mode Attacks on CTR Encryption Mode

Related-Mode Attacks on CTR Encryption Mode Internatonal Journal of Network Securty, Vol.4, No.3, PP.282 287, May 2007 282 Related-Mode Attacks on CTR Encrypton Mode Dayn Wang, Dongda Ln, and Wenlng Wu (Correspondng author: Dayn Wang) Key Laboratory

More information

Complex Numbers. Now we also saw that if a and b were both positive then ab = a b. For a second let s forget that restriction and do the following.

Complex Numbers. Now we also saw that if a and b were both positive then ab = a b. For a second let s forget that restriction and do the following. Complex Numbers The last topc n ths secton s not really related to most of what we ve done n ths chapter, although t s somewhat related to the radcals secton as we wll see. We also won t need the materal

More information

A MOVING MESH APPROACH FOR SIMULATION BUDGET ALLOCATION ON CONTINUOUS DOMAINS

A MOVING MESH APPROACH FOR SIMULATION BUDGET ALLOCATION ON CONTINUOUS DOMAINS Proceedngs of the Wnter Smulaton Conference M E Kuhl, N M Steger, F B Armstrong, and J A Jones, eds A MOVING MESH APPROACH FOR SIMULATION BUDGET ALLOCATION ON CONTINUOUS DOMAINS Mark W Brantley Chun-Hung

More information

A Unified Framework for Semantics and Feature Based Relevance Feedback in Image Retrieval Systems

A Unified Framework for Semantics and Feature Based Relevance Feedback in Image Retrieval Systems A Unfed Framework for Semantcs and Feature Based Relevance Feedback n Image Retreval Systems Ye Lu *, Chunhu Hu 2, Xngquan Zhu 3*, HongJang Zhang 2, Qang Yang * School of Computng Scence Smon Fraser Unversty

More information

Subspace clustering. Clustering. Fundamental to all clustering techniques is the choice of distance measure between data points;

Subspace clustering. Clustering. Fundamental to all clustering techniques is the choice of distance measure between data points; Subspace clusterng Clusterng Fundamental to all clusterng technques s the choce of dstance measure between data ponts; D q ( ) ( ) 2 x x = x x, j k = 1 k jk Squared Eucldean dstance Assumpton: All features

More information

ON SOME ENTERTAINING APPLICATIONS OF THE CONCEPT OF SET IN COMPUTER SCIENCE COURSE

ON SOME ENTERTAINING APPLICATIONS OF THE CONCEPT OF SET IN COMPUTER SCIENCE COURSE Yordzhev K., Kostadnova H. Інформаційні технології в освіті ON SOME ENTERTAINING APPLICATIONS OF THE CONCEPT OF SET IN COMPUTER SCIENCE COURSE Yordzhev K., Kostadnova H. Some aspects of programmng educaton

More information

1 Introducton Gven a graph G = (V; E), a non-negatve cost on each edge n E, and a set of vertces Z V, the mnmum Stener problem s to nd a mnmum cost su

1 Introducton Gven a graph G = (V; E), a non-negatve cost on each edge n E, and a set of vertces Z V, the mnmum Stener problem s to nd a mnmum cost su Stener Problems on Drected Acyclc Graphs Tsan-sheng Hsu y, Kuo-Hu Tsa yz, Da-We Wang yz and D. T. Lee? September 1, 1995 Abstract In ths paper, we consder two varatons of the mnmum-cost Stener problem

More information

Polyhedral Compilation Foundations

Polyhedral Compilation Foundations Polyhedral Complaton Foundatons Lous-Noël Pouchet pouchet@cse.oho-state.edu Dept. of Computer Scence and Engneerng, the Oho State Unversty Feb 8, 200 888., Class # Introducton: Polyhedral Complaton Foundatons

More information

Cluster Analysis of Electrical Behavior

Cluster Analysis of Electrical Behavior Journal of Computer and Communcatons, 205, 3, 88-93 Publshed Onlne May 205 n ScRes. http://www.scrp.org/ournal/cc http://dx.do.org/0.4236/cc.205.350 Cluster Analyss of Electrcal Behavor Ln Lu Ln Lu, School

More information

A New Approach For the Ranking of Fuzzy Sets With Different Heights

A New Approach For the Ranking of Fuzzy Sets With Different Heights New pproach For the ankng of Fuzzy Sets Wth Dfferent Heghts Pushpnder Sngh School of Mathematcs Computer pplcatons Thapar Unversty, Patala-7 00 Inda pushpndersnl@gmalcom STCT ankng of fuzzy sets plays

More information

Exercises (Part 4) Introduction to R UCLA/CCPR. John Fox, February 2005

Exercises (Part 4) Introduction to R UCLA/CCPR. John Fox, February 2005 Exercses (Part 4) Introducton to R UCLA/CCPR John Fox, February 2005 1. A challengng problem: Iterated weghted least squares (IWLS) s a standard method of fttng generalzed lnear models to data. As descrbed

More information

Alternating Direction Method of Multipliers Implementation Using Apache Spark

Alternating Direction Method of Multipliers Implementation Using Apache Spark Alternatng Drecton Method of Multplers Implementaton Usng Apache Spark Deterch Lawson June 4, 2014 1 Introducton Many applcaton areas n optmzaton have benefted from recent trends towards massve datasets.

More information

Complex System Reliability Evaluation using Support Vector Machine for Incomplete Data-set

Complex System Reliability Evaluation using Support Vector Machine for Incomplete Data-set Internatonal Journal of Performablty Engneerng, Vol. 7, No. 1, January 2010, pp.32-42. RAMS Consultants Prnted n Inda Complex System Relablty Evaluaton usng Support Vector Machne for Incomplete Data-set

More information

Edge Detection in Noisy Images Using the Support Vector Machines

Edge Detection in Noisy Images Using the Support Vector Machines Edge Detecton n Nosy Images Usng the Support Vector Machnes Hlaro Gómez-Moreno, Saturnno Maldonado-Bascón, Francsco López-Ferreras Sgnal Theory and Communcatons Department. Unversty of Alcalá Crta. Madrd-Barcelona

More information

Wishing you all a Total Quality New Year!

Wishing you all a Total Quality New Year! Total Qualty Management and Sx Sgma Post Graduate Program 214-15 Sesson 4 Vnay Kumar Kalakband Assstant Professor Operatons & Systems Area 1 Wshng you all a Total Qualty New Year! Hope you acheve Sx sgma

More information

Wavefront Reconstructor

Wavefront Reconstructor A Dstrbuted Smplex B-Splne Based Wavefront Reconstructor Coen de Vsser and Mchel Verhaegen 14-12-201212 2012 Delft Unversty of Technology Contents Introducton Wavefront reconstructon usng Smplex B-Splnes

More information

CLASSIFICATION OF ULTRASONIC SIGNALS

CLASSIFICATION OF ULTRASONIC SIGNALS The 8 th Internatonal Conference of the Slovenan Socety for Non-Destructve Testng»Applcaton of Contemporary Non-Destructve Testng n Engneerng«September -3, 5, Portorož, Slovena, pp. 7-33 CLASSIFICATION

More information

Synthesizer 1.0. User s Guide. A Varying Coefficient Meta. nalytic Tool. Z. Krizan Employing Microsoft Excel 2007

Synthesizer 1.0. User s Guide. A Varying Coefficient Meta. nalytic Tool. Z. Krizan Employing Microsoft Excel 2007 Syntheszer 1.0 A Varyng Coeffcent Meta Meta-Analytc nalytc Tool Employng Mcrosoft Excel 007.38.17.5 User s Gude Z. Krzan 009 Table of Contents 1. Introducton and Acknowledgments 3. Operatonal Functons

More information

S1 Note. Basis functions.

S1 Note. Basis functions. S1 Note. Bass functons. Contents Types of bass functons...1 The Fourer bass...2 B-splne bass...3 Power and type I error rates wth dfferent numbers of bass functons...4 Table S1. Smulaton results of type

More information

and NSF Engineering Research Center Abstract Generalized speedup is dened as parallel speed over sequential speed. In this paper

and NSF Engineering Research Center Abstract Generalized speedup is dened as parallel speed over sequential speed. In this paper Shared Vrtual Memory and Generalzed Speedup Xan-He Sun Janpng Zhu ICASE NSF Engneerng Research Center Mal Stop 132C Dept. of Math. and Stat. NASA Langley Research Center Msssspp State Unversty Hampton,

More information

Implementation Naïve Bayes Algorithm for Student Classification Based on Graduation Status

Implementation Naïve Bayes Algorithm for Student Classification Based on Graduation Status Internatonal Journal of Appled Busness and Informaton Systems ISSN: 2597-8993 Vol 1, No 2, September 2017, pp. 6-12 6 Implementaton Naïve Bayes Algorthm for Student Classfcaton Based on Graduaton Status

More information

CSCI 104 Sorting Algorithms. Mark Redekopp David Kempe

CSCI 104 Sorting Algorithms. Mark Redekopp David Kempe CSCI 104 Sortng Algorthms Mark Redekopp Davd Kempe Algorthm Effcency SORTING 2 Sortng If we have an unordered lst, sequental search becomes our only choce If we wll perform a lot of searches t may be benefcal

More information

An Iterative Solution Approach to Process Plant Layout using Mixed Integer Optimisation

An Iterative Solution Approach to Process Plant Layout using Mixed Integer Optimisation 17 th European Symposum on Computer Aded Process Engneerng ESCAPE17 V. Plesu and P.S. Agach (Edtors) 2007 Elsever B.V. All rghts reserved. 1 An Iteratve Soluton Approach to Process Plant Layout usng Mxed

More information

Variant Multi Objective Tsp Model

Variant Multi Objective Tsp Model Volume Issue 06 Pages-656-670 June-06 ISSN e): 395-70 Varant Mult Objectve Tsp Model K.Vjaya Kumar, P.Madhu Mohan Reddy, C. Suresh Babu, M.Sundara Murthy Department of Mathematcs, Sr Venkateswara Unversty,

More information

Design and Analysis of Algorithms

Design and Analysis of Algorithms Desgn and Analyss of Algorthms Heaps and Heapsort Reference: CLRS Chapter 6 Topcs: Heaps Heapsort Prorty queue Huo Hongwe Recap and overvew The story so far... Inserton sort runnng tme of Θ(n 2 ); sorts

More information

Data Mining For Multi-Criteria Energy Predictions

Data Mining For Multi-Criteria Energy Predictions Data Mnng For Mult-Crtera Energy Predctons Kashf Gll and Denns Moon Abstract We present a data mnng technque for mult-crtera predctons of wnd energy. A mult-crtera (MC) evolutonary computng method has

More information

Adaptive Virtual Support Vector Machine for the Reliability Analysis of High-Dimensional Problems

Adaptive Virtual Support Vector Machine for the Reliability Analysis of High-Dimensional Problems Proceedngs of the ASME 2 Internatonal Desgn Engneerng Techncal Conferences & Computers and Informaton n Engneerng Conference IDETC/CIE 2 August 29-3, 2, Washngton, D.C., USA DETC2-47538 Adaptve Vrtual

More information

Learning to Project in Multi-Objective Binary Linear Programming

Learning to Project in Multi-Objective Binary Linear Programming Learnng to Project n Mult-Objectve Bnary Lnear Programmng Alvaro Serra-Altamranda Department of Industral and Management System Engneerng, Unversty of South Florda, Tampa, FL, 33620 USA, amserra@mal.usf.edu,

More information

A Robust LS-SVM Regression

A Robust LS-SVM Regression PROCEEDIGS OF WORLD ACADEMY OF SCIECE, EGIEERIG AD ECHOLOGY VOLUME 7 AUGUS 5 ISS 37- A Robust LS-SVM Regresson József Valyon, and Gábor Horváth Abstract In comparson to the orgnal SVM, whch nvolves a quadratc

More information

an assocated logc allows the proof of safety and lveness propertes. The Unty model nvolves on the one hand a programmng language and, on the other han

an assocated logc allows the proof of safety and lveness propertes. The Unty model nvolves on the one hand a programmng language and, on the other han UNITY as a Tool for Desgn and Valdaton of a Data Replcaton System Phlppe Quennec Gerard Padou CENA IRIT-ENSEEIHT y Nnth Internatonal Conference on Systems Engneerng Unversty of Nevada, Las Vegas { 14-16

More information

Classifying Acoustic Transient Signals Using Artificial Intelligence

Classifying Acoustic Transient Signals Using Artificial Intelligence Classfyng Acoustc Transent Sgnals Usng Artfcal Intellgence Steve Sutton, Unversty of North Carolna At Wlmngton (suttons@charter.net) Greg Huff, Unversty of North Carolna At Wlmngton (jgh7476@uncwl.edu)

More information

Solving the SVM Problem. Christopher Sentelle, Ph.D. Candidate L-3 CyTerra Corporation

Solving the SVM Problem. Christopher Sentelle, Ph.D. Candidate L-3 CyTerra Corporation Solvng the SVM Problem Chrstopher Sentelle, Ph.D. Canddate L-3 Cyerra Corporaton Introducton SVM Background Kernel Methods Generalzaton and Structural Rsk Mnmzaton Solvng the SVM QP Problem Actve Set Method

More information

Non-Split Restrained Dominating Set of an Interval Graph Using an Algorithm

Non-Split Restrained Dominating Set of an Interval Graph Using an Algorithm Internatonal Journal of Advancements n Research & Technology, Volume, Issue, July- ISS - on-splt Restraned Domnatng Set of an Interval Graph Usng an Algorthm ABSTRACT Dr.A.Sudhakaraah *, E. Gnana Deepka,

More information

Parallel Numerics. 1 Preconditioning & Iterative Solvers (From 2016)

Parallel Numerics. 1 Preconditioning & Iterative Solvers (From 2016) Technsche Unverstät München WSe 6/7 Insttut für Informatk Prof. Dr. Thomas Huckle Dpl.-Math. Benjamn Uekermann Parallel Numercs Exercse : Prevous Exam Questons Precondtonng & Iteratve Solvers (From 6)

More information