Column-Generation Boosting Methods for Mixture of Kernels


Jinbo Bi, Computer-Aided Diagnosis & Therapy Group, Siemens Medical Solutions, Malvern, PA 19355, jinbo.bi@siemens.com
Tong Zhang, IBM T.J. Watson Research Center, Yorktown Heights, NY 10598, tzhang@watson.ibm.com
Kristin P. Bennett, Department of Mathematical Sciences, Rensselaer Polytechnic Institute, Troy, NY 12180, bennek@rpi.edu

ABSTRACT
We devise a boosting approach to classification and regression based on column generation using a mixture of kernels. Traditional kernel methods construct models based on a single positive semi-definite kernel, with the type of kernel predefined and the kernel parameters chosen from a set of possible choices according to cross-validation performance. Our approach creates models that are mixtures of a library of kernel models, and our algorithm automatically determines the kernels to be used in the final model. The 1-norm and 2-norm regularization methods are employed to restrict the ensemble of kernel models. The proposed method produces sparser solutions; hence it can handle larger problems and significantly reduces the testing time. By extending the column generation (CG) optimization, which existed for linear programs with 1-norm regularization, to quadratic programs that use 2-norm regularization, we are able to solve many learning formulations by leveraging various algorithms for constructing single-kernel models. Computational issues that we encountered when applying CG boosting are addressed. In particular, by giving different priorities to the columns to be generated, we are able to scale CG boosting to large datasets. Experimental results on benchmark problems are included to analyze the performance of the proposed CG approach and to demonstrate its effectiveness.

Keywords
Kernel methods, Boosting, Column generation

1. INTRODUCTION
Kernel-based algorithms have proven to be very effective for solving classification, regression and other inference problems in many applications. By introducing a positive semi-definite kernel K, nonlinear models can be created using linear learning algorithms, as in support vector machines (SVMs), kernel ridge regression, kernel logistic regression, etc. The idea is to map the data into a feature space and construct optimal linear functions in the feature space that correspond to nonlinear functions in the original space. The key property is that the resulting model can be expressed as a kernel expansion. For example, the model (or the decision boundary) f obtained by SVM classification is expressed as

    f(x) = Σ_i α_i K(x, x_i),    (1)

where the α_i are the model coefficients and x_i is the i-th training input vector. Here (x_1, y_1), (x_2, y_2), ..., (x_l, y_l) are the training examples, drawn i.i.d. from an underlying distribution. Throughout this article, vectors are presumed to be column vectors unless otherwise stated and are denoted by bold-face lower-case letters; matrices are denoted by bold-face upper-case letters.

In such kernel methods, the choice of the kernel mapping is of crucial importance. Usually, the choice of kernel is determined by predefining the type of kernel (e.g., RBF or polynomial kernels) and tuning the kernel parameters using cross-validation performance. Cross-validation is expensive, and the resulting kernel is not guaranteed to be an excellent choice. Recent work [2, 8, 4, 10, 7] has attempted to form kernels that adapt to the data of the particular task to be solved. For example, Lanckriet et al. [12] and Hamers et al. [10] proposed the use of a linear combination of kernels K = Σ_p µ_p K_p from a family of various kernel functions K_p.
To ensure positive semi-definiteness, the combination coefficients µ_p are either simply required to be nonnegative or determined in such a way that the composite kernel is positive semi-definite, for instance by solving a semi-definite program as in [12] or by boosting as in [7]. The decision boundary f thus becomes

    f(x) = Σ_i α_i ( Σ_p µ_p K_p(x, x_i) ).    (2)

In our approach, we do not make an effort to form a new kernel or a kernel matrix (the Gram matrix induced by a kernel on the training data).

Figure 1: A two-dimensional toy regression example. Left, the target function with the training points; mid-left, linear kernel; mid, RBF kernel; mid-right, composite (linear+RBF); right, mixture (linear and RBF). Each fitted panel is labeled with its MSE in the original figure.

We construct models that are a mixture of models, where each model is based on one kernel choice from a library of kernels. Our algorithm automatically determines the kernels to be used in the mixture model. The decision boundary represented by this approach is

    f(x) = Σ_p Σ_i α_i^p K_p(x_i, x).    (3)

Previous kernel methods have employed similar strategies to improve generalization and to reduce training and prediction computational costs. MARK [2] optimized a heterogeneous kernel using a gradient descent algorithm in function space. GSVC and POKER [16, 15] grew mixtures of kernel functions from a family of RBF kernels; they were, however, designed to be used only with weighted least squares SVMs. The proposed approach can be used in conjunction with any linear or quadratic generalized SVM formulation and algorithm.

In this article, we devise algorithms that boost on kernel columns via column generation (CG) techniques. Our goal is to develop approaches that construct inference models making use of the varied geometry of the feature spaces introduced by a family of kernels, rather than the less expressive feature space induced by a single kernel. The 1-norm and 2-norm regularization methods can be employed to restrict the capacity of mixture models of form (3). We expect the resulting solutions of mixture models to be sparser than models with composite kernels, and the CG-based techniques are adopted to obtain this sparsity. We extend LPBoost, a CG boosting algorithm originally proposed for linear programs (LPs) [9], to quadratic programs (QPs). Our algorithm therefore becomes more general and is suitable for constructing a much larger variety of inference models.

We outline this paper now. In Section 2, we discuss the proposed mixture-of-kernels model and describe its characteristics. Section 3 extends LPBoost to QPs that use 2-norm regularization. Various CG-based algorithms are presented in Section 4, including classification and regression SVMs. In Section 5, we address some computational issues encountered when applying CG-based boosting methods. Experimental results on benchmark problems are included in Section 6 to examine the performance of the proposed method. Section 7 concludes the paper.

2. MIXTURE OF KERNELS
In this section, we discuss the characteristics of the mixture-of-kernels method and compare it with other approaches.

2.1 Approximation capability
Models (3) based on a mixture of kernels are not necessarily equivalent to models (2) based on a composite kernel. Rewriting a composite kernel model (2) as f(x) = Σ_p Σ_i α_i µ_p K_p(x, x_i), we see that it is equivalent to a mixture-of-kernels model (3) with α_i^p = α_i µ_p. The opposite is not necessarily true, however, since for any composite kernel model (2) and any two kernels K_p and K_q, the ratio α_i^p / α_i^q is fixed to µ_p / µ_q for all i (assuming µ_q ≠ 0 without loss of generality). For a mixture-of-kernels model (3) we do not have this restriction. It follows that a mixture model of form (3) can potentially give a larger hypothesis space than a model that uses either a single kernel or a composite kernel. The hypothesis space tends to be more expressive, i.e., a mixture-of-kernels model can better approximate the target functions of practical problems. Although models using a single RBF kernel are known to be capable of approximating any continuous function in the limit, these single-kernel models can give very poor approximations for many target functions.
For example, suppose the target function can be expressed as a linear combination of several RBF kernels with different bandwidths. If we approximate it using a single RBF kernel (with a fixed bandwidth), then many basis functions have to be used. This leads to a dense and poor approximation, which is also difficult to learn. Therefore, using a single RBF kernel, we may have to use much more training data than necessary to learn a target function that can be better approximated by a mixture model with different kernels.

Due to its better approximation capability, a mixture model (3) tends to give sparser solutions, which has two important consequences: the generalization performance tends to improve, since it is much easier to learn a sparse model due to the Occam's Razor principle (the simplest solution is likely to be the best solution among all possible alternatives); and the scalability is enhanced, since sparse models require both less training and less testing time.

We use a simple regression problem to illustrate this point. The target function is defined as a linear+RBF function f(x) = x_1 + 2 x_2 + Σ_k exp(−‖x − c_k‖² / σ_k), where the c_k, σ_k are four fixed centers and bandwidths. Figure 1 (left) shows the target function together with the training points, each generated as y_i = f(x_i) + ε_i, where ε_i is a small Gaussian noise. The other four graphs in Figure 1 demonstrate the following: if we use the linear kernel, there is no way to adapt to the local RBF behavior; if an RBF kernel is used, the linear part of the function cannot be learned well. Using either a composite kernel model or a mixture kernel model achieves a better approximation of the target function. However, the mixture-of-kernels model is sparser (which means a better approximation to the target) and has better generalization (as seen in Figure 1). Moreover, the composite kernel method has significantly greater training and testing computational costs. The sketch below reproduces this toy setting.
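For concreteness, data for a toy problem of this kind can be generated as follows. The centers, bandwidths, sample size and noise level below are illustrative placeholders rather than the exact values used for Figure 1.

import numpy as np

rng = np.random.default_rng(0)

# Illustrative centers c_k and bandwidths sigma_k for the four RBF bumps;
# the values used in the paper's figure are not reproduced here.
centers = np.array([[1.0, 1.0], [-2.0, -2.0], [2.0, -2.0], [-2.0, 2.0]])
widths = np.array([0.5, 0.5, 0.5, 0.5])

def target(x):
    """Linear + RBF target f(x) = x1 + 2*x2 + sum_k exp(-||x - c_k||^2 / sigma_k)."""
    lin = x[:, 0] + 2.0 * x[:, 1]
    bumps = sum(np.exp(-np.sum((x - c) ** 2, axis=1) / s)
                for c, s in zip(centers, widths))
    return lin + bumps

# Sample training points and add small Gaussian noise, y_i = f(x_i) + eps_i.
X = rng.uniform(-3.0, 3.0, size=(50, 2))
y = target(X) + rng.normal(scale=0.1, size=X.shape[0])

Fitting this data with a single linear or single RBF kernel, versus a mixture of both, reproduces the qualitative behavior shown in Figure 1.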

2.2 Connection to RBF nets
Considering the case of sets of RBF functions (kernels), we discuss the relationship among different basis expansion models, such as the models obtained by SVMs, our approach, and RBF nets [5]. In basis expansion methods [11], the model takes the form f(x) = Σ_i α_i φ_i(x), where each φ_i is a basis function of the form exp(−‖x − c_i‖² / σ_i). In RBF networks, the centers c_i and the bandwidths σ_i of the basis functions are heuristically determined using unsupervised techniques. Therefore, to construct the decision rule, one has to estimate: 1. the number of RBF centers; 2. the estimates of the centers; 3. the linear model parameters α; and 4. the bandwidth parameter σ. Compared with RBF networks, the benefits of using SVMs have been studied [19, 18]: the first three sets of parameters can be automatically determined by SVMs when learning the support vectors, and the last parameter is usually obtained by cross-validation tuning. Classic SVMs, however, use only a single parameter σ, which means that all centers (support vectors) are associated with a single choice of σ. Contrary to SVMs, RBF networks estimate a different σ_i for every center c_i of the RBF basis; more generally, the bandwidth can differ along different directions. Our model has a flexibility in between SVMs and RBF networks. It still benefits from SVM-like algorithms, so all parameters except σ can be learned by solving an optimization problem. In addition, our model allows the RBF bases positioned at different centers (support vectors) to be associated with different bandwidths.

2.3 Regularization
To achieve good generalization performance, it is important to introduce appropriate regularization conditions on the model class. For a mixture model of form (3), the natural extension of the reproducing kernel Hilbert space (RKHS) regularization commonly used by single-kernel methods such as SVMs is the generalized RKHS regularization

    R(f) = Σ_p Σ_{i,j} α_i^p α_j^p K_p(x_i, x_j).

This regularization condition, however, requires positive semi-definiteness of the kernel matrices K_p. This requirement can be removed by introducing other regularization conditions that are equally suitable for capacity control [13]. In particular, we consider penalizing the 1-norm or the 2-norm of α. These regularization methods are more generally applicable since they do not require the kernel matrix to be positive semi-definite, which can be an important advantage for certain applications. Moreover, it is well known that the 1-norm regularization ‖α‖_1 = Σ_j |α_j| leads to sparse solutions, which, as explained earlier, is very desirable.

The mixture-of-kernels method investigated in this work has interesting properties concerning its learning and approximation behavior. Due to space limitations, these theoretical issues will be addressed elsewhere. This paper focuses on algorithmic design issues such as achieving sparsity of the solutions and scalability to large-scale problems.
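To make form (3) concrete, the following sketch evaluates a fitted mixture-of-kernels model over a small illustrative kernel library. The kernel choices and coefficient layout are assumptions of the sketch, not the exact library used in the experiments.

import numpy as np

# A small kernel library S = {K_1, ..., K_P}: linear, quadratic and RBF kernels.
def linear_kernel(X, Z):
    return X @ Z.T

def quadratic_kernel(X, Z):
    return (X @ Z.T + 1.0) ** 2

def rbf_kernel(X, Z, sigma=1.0):
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / sigma)

kernel_library = [linear_kernel, quadratic_kernel, rbf_kernel]

def mixture_predict(X_train, alphas, X_test):
    """Evaluate f(x) = sum_p sum_i alpha_i^p K_p(x_i, x), the mixture model (3).

    alphas is a list with one coefficient vector per kernel in the library;
    under 1-norm or 2-norm regularization most entries end up exactly zero,
    so only a few kernel columns contribute to the prediction.
    """
    f = np.zeros(X_test.shape[0])
    for K_p, alpha_p in zip(kernel_library, alphas):
        f += K_p(X_test, X_train) @ alpha_p  # sum_i alpha_i^p K_p(x_i, x)
    return f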
3. COLUMN GENERATION FOR QPs
Column generation (CG) techniques have been widely used for solving large-scale LPs or difficult integer programs since the 1950s [14]. In the primal space, the CG method solves LPs on a subset of the variables, which means that not all columns of the kernel matrix are generated at once and used to construct the function f; more columns are generated and added to the problem to achieve optimality. In the dual space, the columns in the primal problem correspond to constraints in the dual problem. When a column is not included in the primal, the corresponding constraint does not appear in the dual. If a constraint absent from the dual problem is violated by the solution to the restricted problem, this constraint (a cutting plane) needs to be included in the dual problem to further restrict its feasible region. These techniques are therefore also referred to as cutting-plane methods [1]. We first briefly review the existing LPBoost with 1-norm regularization, then formulate the QP optimization problem with 2-norm regularization and discuss the extension of CG techniques to QPs.

For notational convenience, we re-index the columns of the different kernels to form a single multi-kernel. Given a library of kernels S = {K_1, K_2, ..., K_P}, a Gram matrix K_p can be calculated for each kernel in S on the sample data, with column K_p(·, x_i) corresponding to the i-th training example. We line up all these kernel matrices together, K = [K_1 K_2 ... K_P], and let index j run through the columns and index i run along the rows. Hence K_i denotes the i-th row of K, and K_j denotes its j-th column. There are d = lP columns in total.

3.1 LP formulation
If the hypothesis K_j α_j based on a single column of the matrix K is regarded as a weak model or base classifier, we can rewrite LPBoost in our notation, following the statement in [3, 9]:

    min_{α,ξ}  Σ_{j=1}^d α_j + C Σ_{i=1}^l ξ_i
    s.t.  y_i (K_i α) + ξ_i ≥ 1,  ξ_i ≥ 0,  i = 1, ..., l,    (4)
          α_j ≥ 0,  j = 1, ..., d,

for a regularization parameter C > 0. The dual of LP (4) is

    max_u  Σ_{i=1}^l u_i
    s.t.  Σ_{i=1}^l u_i y_i K_ij ≤ 1,  j = 1, ..., d,    (5)
          0 ≤ u_i ≤ C,  i = 1, ..., l.

These problems are referred to as the master problems.
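For illustration, the restricted master LP over a working set W of columns can be solved with any off-the-shelf LP solver; the experiments in this paper use CPLEX. A minimal sketch with scipy.optimize.linprog is given below; the function name, variable ordering [α_W; ξ], and the dual-extraction step (which relies on SciPy's HiGHS interface) are assumptions of this sketch.

import numpy as np
from scipy.optimize import linprog

def solve_restricted_lp(K_W, y, C):
    """Solve the restricted master LP (4) over the columns in the working set.

    K_W : (l, |W|) matrix of selected kernel columns
    y   : (l,) labels in {-1, +1}
    C   : regularization parameter
    Returns (alpha_W, xi, u), where u holds the dual multipliers of the
    margin constraints, used afterwards for pricing new columns.
    """
    l, w = K_W.shape
    # Variables z = [alpha_W (w); xi (l)], objective sum(alpha) + C * sum(xi).
    c = np.concatenate([np.ones(w), C * np.ones(l)])
    # Constraints y_i (K_i alpha) + xi_i >= 1  ->  -(y_i K_iW) alpha - xi_i <= -1.
    A_ub = np.hstack([-(y[:, None] * K_W), -np.eye(l)])
    b_ub = -np.ones(l)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None), method="highs")
    alpha_W, xi = res.x[:w], res.x[w:]
    u = -res.ineqlin.marginals  # dual values of the margin constraints
    return alpha_W, xi, u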

The CG method solves LPs by incrementally selecting a subset of columns from the simplex tableau and optimizing the tableau restricted to the corresponding subset of variables (each corresponding to a selected column). After a primal-dual solution (α̂, ξ̂, û) to the restricted problem is obtained, we solve

    τ = max_j Σ_{i=1}^l û_i y_i K_ij,    (6)

where j runs over all columns of K. If τ ≤ 1, the solution to the restricted problem is already optimal for the master problems. If τ > 1, then the solution to (6) provides a column to be included in the restricted problem.

3.2 QP formulation
Using the 2-norm regularization approach, constructing a model as in LP (4) corresponds to solving the QP:

    min_{α,ξ}  ½ Σ_{j=1}^d α_j² + C Σ_{i=1}^l ξ_i
    s.t.  y_i (K_i α) + ξ_i ≥ 1,  ξ_i ≥ 0,  i = 1, ..., l,    (7)
          α_j ≥ 0,  j = 1, ..., d.

The Lagrangian function is

    L = ½ Σ_j α_j² + C Σ_i ξ_i − Σ_i u_i (y_i K_i α − 1 + ξ_i) − Σ_i s_i ξ_i − Σ_j t_j α_j,

where u, s and t are nonnegative Lagrange multipliers. Taking the derivatives of the Lagrangian with respect to the primal variables yields

    ∂L/∂α_j:  α_j − Σ_i u_i y_i K_ij = t_j,    ∂L/∂ξ_i:  C − u_i = s_i.    (8)

Substituting (8) into the Lagrangian yields the dual problem:

    max_{u,α}  Σ_{i=1}^l u_i − ½ Σ_{j=1}^d α_j²
    s.t.  Σ_{i=1}^l u_i y_i K_ij ≤ α_j,  j = 1, ..., d,    (9)
          0 ≤ u_i ≤ C,  i = 1, ..., l.

The CG method partitions the variables α_j into two sets: the working set W that is used to build the model, and the remaining set, denoted N, that is eliminated from the model because the corresponding columns are not generated. Each CG step optimizes a subproblem over the working set W of variables and then selects a column from N, based on the current solution, to add to W. At each iteration, the α_j in N can be interpreted as α_j = 0. Hence once a solution α̂_W to the restricted problem is obtained, α̂ = (α̂_W, α̂_N = 0) is feasible for the master QP (7). The following theorem states when an optimal solution to the master problem is obtained in the CG procedure.

Theorem 3.1 (Optimality of QP CG). Let (α̂, ξ̂, û) be the primal-dual solution to the current restricted problems. If Σ_i û_i y_i K_ij ≤ 0 for all j ∈ N, then (α̂, ξ̂, û) is optimal for the corresponding master primal (7) and dual (9) problems.

Proof. By the KKT conditions, to show optimality we need to confirm primal feasibility, dual feasibility and the equality of the primal and dual objectives. Recall how we defined α̂ = (α̂_W, α̂_N = 0), so (α̂, ξ̂) is feasible for QP (7). The primal objective must equal the dual objective since α_j = 0 for j ∈ N. Now the key issue is dual feasibility. Since (α̂_W, ξ̂, û) is optimal for the restricted problem, it satisfies all constraints of the restricted dual problem, including Σ_i û_i y_i K_ij ≤ α̂_j for j ∈ W. Hence dual feasibility is validated if Σ_i û_i y_i K_ij ≤ α̂_j = 0 for j ∈ N.

Any column that violates dual feasibility can be added. For LPs, a common heuristic is to choose the column K_j that maximizes Σ_i u_i y_i K_ij over all j ∈ N. Similar to LPBoost, CG boosting for QPs can also use the magnitude of the violation to pick kernel columns or kernel basis functions. In other words, the column K_j with the maximal score Σ_i u_i y_i K_ij will be included in the restricted problem.

Remark 3.1 (Column generation when α ≥ 0). Let (α̂, ξ̂, û) be the solution to the restricted QPs (7) and (9), and let W, N be the current working and non-working sets. Solve

    τ = max_{j∈N} Σ_{i=1}^l û_i y_i K_ij,    (10)

and let the solution be K_ĵ. If τ ≤ 0, optimality is achieved; otherwise, the solution K_ĵ is a column for inclusion in the primal problem and also provides a cutting plane for the dual problem.
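A minimal sketch of the pricing step of Remark 3.1, assuming the stacked kernel matrix K and the dual vector u of the restricted problem are available as NumPy arrays (the helper name is illustrative):

import numpy as np

def price_column(u, y, K, in_working, tol=1e-8):
    """Pricing step of Remark 3.1 for the nonnegative-alpha QP (7)/(9).

    Computes tau = max_{j in N} sum_i u_i y_i K_ij over the columns not yet in
    the working set. If tau <= 0 the current restricted solution is optimal
    for the master problem; otherwise the maximizing column is returned.
    """
    scores = K.T @ (u * y)        # score_j = sum_i u_i y_i K_ij
    scores[in_working] = -np.inf  # only columns j in N are candidates
    j = int(np.argmax(scores))
    tau = scores[j]
    return (None, tau) if tau <= tol else (j, tau)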
After the column has been generated, we can either solve the updated primal problem or the updated dual problem. The original LPBoost [9] solves the dual problem at each iteration. Any suitable algorithm can be used to solve the primal or dual master problem; from the optimization point of view, it is not clear which will be computationally cheaper. In our implementation, we solve the primal problem, which has the advantage of reducing the cost of finding the first feasible solution for the updated primal. For LPs, the tableau is optimized starting from the current solution after the new column is generated. Similarly, the QP solver can have a warm start by setting the initial guess for the updated primal to the current solution. Therefore solving each restricted primal can be cheap in CG algorithms. Since we extend column-generation boosting for LPs to QPs (7), we name this family of column-generation approaches CG-Boost.
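Putting the pieces together, a CG-Boost iteration alternates a restricted solve with a pricing step. The following skeleton is only a sketch of that flow under the assumptions of the two snippets above (hypothetical helper names; warm starts and the bias column are omitted for brevity):

import numpy as np

def cg_boost(K, y, C, solve_restricted, price, max_iter=100):
    """Skeleton of the CG-Boost outer loop (column generation on kernel columns).

    K : (l, d) stacked multi-kernel matrix [K_1 K_2 ... K_P]
    solve_restricted(K_W, y, C) -> (alpha_W, xi, u)   restricted master solver
    price(u, y, K, in_working)  -> (j, tau)           most violating column in N
    """
    l, d = K.shape
    in_working = np.zeros(d, dtype=bool)
    in_working[0] = True                    # seed the working set with one column
    for _ in range(max_iter):
        W_mask = in_working.copy()
        alpha_W, xi, u = solve_restricted(K[:, W_mask], y, C)
        j, tau = price(u, y, K, W_mask)
        if j is None:                       # no violated dual constraint: optimal
            break
        in_working[j] = True                # generate the column and repeat
    alpha = np.zeros(d)
    alpha[W_mask] = alpha_W                 # alpha_j = 0 for the ungenerated columns
    return alpha, u

Each restricted solve could pass the previous (alpha_W, xi) as a starting point to realize the warm start discussed above.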

4. VARIANTS OF CG-BOOST
We have extended CG optimization from LPs to QPs, where a 1-norm error metric with 2-norm regularization is minimized to construct models of the form Σ_p Σ_i α_i^p K_p(x_i, x), α ≥ 0. However, typical kernel methods do not require the model parameters α to be nonnegative. Moreover, the model may contain a bias term b, i.e., Σ_p Σ_i α_i^p K_p(x_i, x) + b. In this section, we investigate these various models and devise variants of CG-Boost, including those suitable for classification and regression SVMs. Note that this general approach can be applied to other support vector methods, including novelty detection and the ν-SVM formulations.

4.1 Add an offset b
If the decision boundary model or the regression model has an offset b, we need to distinguish two cases. First, if the variable b is regularized, an l-vector of ones can be added to the kernel matrix to correspond to b; CG-Boost then works exactly as for the model without an explicit b, except for the constant column added to the mixture of kernels. Second, if b is not regularized, then there is an extra equality constraint Σ_i u_i y_i = 0 in the dual problem (5) or (9). In the latter case, we include b in the initial restricted problem and always keep b in the working set W, so that the equality constraint is always present in the restricted dual problem. To evaluate dual feasibility and assure optimality, we still proceed as in Remark 3.1.

4.2 Remove bound constraints
The lower bound constraint α ≥ 0 on the model parameters is not a necessary condition for using the CG approach. Removing the lower bound from QP (7), we obtain the problem:

    min_{α,ξ}  ½ Σ_{j=1}^d α_j² + C Σ_{i=1}^l ξ_i
    s.t.  y_i (K_i α) + ξ_i ≥ 1,  ξ_i ≥ 0,  i = 1, ..., l.    (11)

The corresponding dual can be obtained through the Lagrangian duality analysis usually done for SVMs. In the resulting dual, the inequality constraints of (9) become equality constraints, i.e., α_j = Σ_i u_i y_i K_ij. Usually we substitute them into the Lagrangian and eliminate the primal variables from the dual. The dual objective becomes

    Σ_{i=1}^l u_i − ½ Σ_{j=1}^d ( Σ_{i=1}^l u_i y_i K_ij )².

In CG optimization iterations, dual feasibility is the key to verifying optimality. To evaluate the dual feasibility of (11), the equality constraints should hold for all j. For the columns in the working set W, the corresponding constraints are satisfied by the current solution. For the columns K_j that do not appear in the current restricted problem, the corresponding α_j = 0. Therefore, if Σ_i u_i y_i K_ij = 0 for all j ∈ N, the current solution is optimal for the master problem. Optimality can be verified by the following method:

Remark 4.1 (Column generation for free α). Let (α̂, ξ̂, û) be the solution to the current restricted QPs (7) and (9) without bound constraints on α, and let W, N be the current working and non-working sets. Solve

    τ− = min_{j∈N} Σ_{i=1}^l û_i y_i K_ij,    τ+ = max_{j∈N} Σ_{i=1}^l û_i y_i K_ij,    (12)

and let K_ĵ−, K_ĵ+ be the respective solutions. If τ− = τ+ = 0, the current solution (α̂, ξ̂, û) is optimal for the master problem; otherwise, (a) if |τ−| > τ+, add K_ĵ− to the restricted problem, or otherwise (b) add K_ĵ+ to the restricted problem.

Unlike for problem (7) with α ≥ 0, choosing the column with score Σ_i u_i y_i K_ij farthest from 0, i.e., generating columns using (a) and (b), is not simply a heuristic for problems with free α. Instead it is the optimal greedy step for CG optimization, as shown in the following proposition.

Proposition 4.1. The column generated using (a) and (b) is the column in N that decreases the duality gap the most at the current solution.

Proof. Let z be the column generated using (a) and (b). The dual objective value at the current solution (α̂, ξ̂, û) is

    Σ_i û_i − ½ Σ_j ( Σ_i û_i y_i K_ij )²
    = Σ_i û_i − ½ Σ_{j∈W} ( Σ_i û_i y_i K_ij )² − ½ Σ_{j∈N} ( Σ_i û_i y_i K_ij )²
    = ½ Σ_{j∈W} α̂_j² + C Σ_i ξ̂_i − ½ Σ_{j∈N} ( Σ_i û_i y_i K_ij )²
    = ½ Σ_{j=1}^d α̂_j² + C Σ_i ξ̂_i − ½ Σ_{j∈N} ( Σ_i û_i y_i K_ij )².

The last equality holds because α̂ = (α̂_W, α̂_N = 0). Hence the duality gap at (α̂, ξ̂, û) is ½ Σ_{j∈N} ( Σ_i û_i y_i K_ij )². Finding the single column in N that most decreases the duality gap therefore yields the solution z.
4.3 Apply to SVM regression
The CG-Boost approach can easily be generalized to regression problems with the ε-insensitive loss function [19]. To construct regression models f, we penalize as errors the points that are predicted by f at least ε away from the true response. The ε-insensitive loss is defined as max{ |y − f(x)| − ε, 0 }. Using the 2-norm regularization (see [17] for 1-norm regularization), the optimization problem becomes:

    min_{α,ξ,η}  ½ Σ_{j=1}^d α_j² + C Σ_{i=1}^l (ξ_i + η_i)
    s.t.  K_i α + ξ_i ≥ y_i − ε,  i = 1, ..., l,
          −K_i α + η_i ≥ −y_i − ε,  i = 1, ..., l,    (13)
          ξ_i, η_i ≥ 0,  i = 1, ..., l.

Note that in the primal, the column generated at each CG iteration doubles in size compared with the classification case: the j-th column becomes K_j concatenated with −K_j. Let u and v be the Lagrange multipliers corresponding to the first and second sets of constraints, respectively. The dual constraints are then α_j = Σ_i (u_i − v_i) K_ij. Thus solving the dual problem is computationally more attractive. Also, as analyzed in Section 4.2, optimality can be verified by assessing whether Σ_i (u_i − v_i) K_ij = 0 for all j ∈ N, as in the following remark.

Remark 4.2 (Column generation for regression). Let (α̂, ξ̂, η̂, û, v̂) be the solution to the current restricted QP (13) and its corresponding dual, and let W, N be the current working and non-working sets. Solve

    τ− = min_{j∈N} Σ_{i=1}^l (û_i − v̂_i) K_ij,    τ+ = max_{j∈N} Σ_{i=1}^l (û_i − v̂_i) K_ij,    (14)

and let K_ĵ−, K_ĵ+ be the respective solutions. If τ− = τ+ = 0, the current solution (α̂, ξ̂, η̂, û, v̂) is optimal for the master problem; otherwise, (a) if |τ−| > τ+, add K_ĵ− to the restricted problem, or otherwise (b) add K_ĵ+ to the restricted problem.
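A sketch of the corresponding pricing step for regression, assuming the dual vectors u and v of the restricted problem (13) are available (the helper name is illustrative):

import numpy as np

def price_column_regression(u, v, K, in_working, tol=1e-8):
    """Pricing step of Remark 4.2 for the epsilon-insensitive regression QP (13).

    Scores the columns in N by sum_i (u_i - v_i) K_ij; the dual equality
    constraints require this to be 0 for every column outside the model,
    so the column with the largest violation is generated.
    """
    scores = K.T @ (u - v)
    scores = np.where(in_working, 0.0, scores)  # only columns in N can violate
    j_min, j_max = int(np.argmin(scores)), int(np.argmax(scores))
    # Rule (a)/(b) of Remark 4.2: pick K_j- if |tau-| > tau+, else K_j+.
    j = j_min if abs(scores[j_min]) > scores[j_max] else j_max
    return (None, 0.0) if abs(scores[j]) <= tol else (j, scores[j])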

5. STRATIFIED COLUMN GENERATION
When the CG method is applied as a general boosting approach, following the statement in [3], we need to solve max_{h∈H} Σ_i û_i y_i h(x_i) to generate a good column, or base classifier h, at each iteration. It is commonly difficult to estimate the entire hypothesis space H (often infinite) that can be implemented by the base learning algorithm; typically we assume that an oracle exists to tell us the optimal column h. Even when we relax the problem and look for a weak column that does not necessarily have the largest score, some heuristic strategy is still required. In a mixture of kernels, the kernel library contains finitely many choices, so the matrix K comprises a finite number of columns. A column can be generated by scanning all columns of K once, which requires access to the entire kernel matrix at each iteration. When the amount of training data is large, this is not desirable. To alleviate this difficulty and make CG-Boost more efficient, we design schemes that reduce the number of columns scanned in order to identify a violated dual constraint.

Examining the following complementarity conditions of QPs (7) and (9) is insightful:

    u_i ( y_i K_i α − 1 + ξ_i ) = 0,  ∀i,    (15)
    α_j ( α_j − Σ_i u_i y_i K_ij ) = 0,  ∀j,    (16)
    ξ_i ( u_i − C ) = 0,  ∀i.    (17)

The first set of conditions tells us that u_i = 0 whenever y_i K_i α > 1. Since u_i can be interpreted as the misclassification cost, this implies that only examples with a tight margin can have non-zero costs. The second group of conditions shows that whenever α_j > 0, it is exactly equal to Σ_i u_i y_i K_ij. From the last group of conditions, a point that produces a margin error, i.e., ξ_i > 0, gets the largest misclassification cost C.

Based on this complementarity analysis, more effort is required to fit the error points (ξ_i > 0) in order to decrease the misclassification cost. Thus it is reasonable to give high priority to the columns, from the various kernels in the library, that are centered at (or correspond to) the error points. The toy regression example depicted in Figure 1 serves as a lucid illustration of this idea. Suppose we first fit a linear regression model; then the 4 points close to the centers of the 4 RBF functions will introduce large errors. This exhibits a need for nonlinear structure in order to improve the prediction on these points, since the linear function cannot fit them well. When the 4 columns (corresponding to these 4 points) of the Gram matrix induced by the RBF kernel are added to the primal problem, the resulting model improves dramatically.

Different kernels correspond to feature spaces with different geometric characteristics. Certain tasks may favor one kernel over others within the library, and domain insight may help identify good candidate kernels for a specific problem. Users can give high priority to more interpretable kernels by generating columns from these kernels first. If no such prior information exists, one philosophy is to prefer less complex and computationally cheap kernels. For example, assume the kernel library contains two types of kernels, linear and RBF. At an iteration, if a dual constraint based on a column from the linear kernel is violated, we choose a linear kernel column for inclusion in the restricted problem and do not continue considering columns from the RBF kernel, thus giving higher priority to the simpler kernel. This stratification biases the solution to be sparser on the more complex kernel basis functions (columns) than on the simple linear ones and serves as an extra means of controlling the capacity of the mixture models.

5.1 The algorithm
We use the classification formulation (11) with an explicit bias term b as an example to describe the algorithm. Let E denote the set of columns in K corresponding to the error points (ξ_i > 0) at an iteration.
Then E ∩ K_p indicates the columns, from kernel K_p, that correspond to error points. Without loss of generality, we assume the kernels in S = {K_1, ..., K_P} are ordered with their priority decreasing. With a slight abuse of notation, we use N to denote all columns in K that are not included in the current restricted problem. In the CG-Boost algorithm, TR1, TR2, and TR4 represent termination rules which are discussed in the following section.

Algorithm 5.1 (CG-Boost).
1. Initialize the first column K^1 = e (the vector of ones corresponding to the bias term b).
2. Let the working set consist of this single column.
3. For t = 1 to T, do
4.   Solve problem (11) with the current kernel matrix K^t, and obtain the solution (α^t, ξ^t, u^t).
5.   For p = 1 to P (do a scan on E): solve problem (12) on E ∩ K_p, obtain τ = max{ |τ−|, τ+ } and the corresponding column z; if τ > 0, set K^{t+1} = [K^t, z] and break from the loop on p.
6.   If no column is found, i.e., τ ≤ 0, terminate (TR1), or
7.   For p = 1 to P (do a full scan on N): solve problem (12) on N ∩ K_p, obtain τ = max{ |τ−|, τ+ } and the corresponding column z; if τ > 0, set K^{t+1} = [K^t, z] and break from the loop on p.
8.   If no column is found, optimality is achieved; terminate (TR2).
9. End of the loop on t (TR4).

A sketch of the stratified scan in steps 5 and 7 is given below.
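The following sketch implements one pass of the stratified scan (steps 5 and 7), assuming the free-α pricing of Remark 4.1 and the stacked column layout K = [K_1 ... K_P]; the function and argument names are illustrative, and the bias column is omitted.

import numpy as np

def stratified_scan(u, y, gram_list, in_working, error_cols=None, tol=1e-8):
    """One pass of steps 5/7 of Algorithm 5.1 (sketch).

    u          : (l,) dual variables of the restricted problem
    y          : (l,) labels in {-1, +1}
    gram_list  : list of (l, l) Gram matrices, ordered by decreasing priority
    in_working : (P*l,) boolean mask of columns already in the working set
    error_cols : optional (P*l,) boolean mask restricting the scan to columns
                 that correspond to error points (xi_i > 0); None = full scan
    Returns (kernel index, column index) of the selected column, or None.
    """
    l = y.shape[0]
    r = u * y                                  # residual vector u_i * y_i
    for p, G in enumerate(gram_list):          # high-priority kernels are scanned first
        scores = G.T @ r                       # score_j = sum_i u_i y_i K_ij for kernel p
        candidate = ~in_working[p * l:(p + 1) * l]
        if error_cols is not None:
            candidate &= error_cols[p * l:(p + 1) * l]
        if not candidate.any():
            continue
        idx = np.where(candidate)[0]
        j_min = idx[np.argmin(scores[idx])]
        j_max = idx[np.argmax(scores[idx])]
        # Rule (a)/(b) of Remark 4.1: pick the larger violation.
        j = j_min if abs(scores[j_min]) > scores[j_max] else j_max
        if abs(scores[j]) > tol:
            return p, int(j)                   # stop at the first kernel with a violation
    return None

Because the scan returns as soon as a violating column is found in a high-priority kernel, the Gram matrices of the more complex kernels often never need to be touched in that iteration.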

5.2 Termination rules
Several termination strategies can be used to stop the optimization process, depending on the desired quality of the solution. Four schemes can be designed:

TR1. Stop when no columns corresponding to error examples can be added; in other words, all dual constraints formed by such columns are satisfied by the current solution. This termination rule is weak, since there is no guarantee that a good model has been obtained, but it requires a very small amount of computation and no full scan of the kernel matrix is needed.

TR2. Stop when the problem is completely optimized, i.e., the algorithm converges to an optimal solution. This often requires several full scans of the kernel matrix until all dual constraints are satisfied by the current solution to the restricted problem.

TR3. The first and second termination rules may be either too weak or too strong. CG-Boost allows us to stop in the middle, when a suboptimal model achieves the desired performance. Especially for large databases, a validation set can be created independent of the training and test sets; monitoring the prediction performance of each model generated in the CG iterations on the validation set can produce insight into a reasonable termination point.

TR4. Another simple way to stop the CG boosting process is to pre-specify a maximal number of iterations. When the limit is reached, the program is terminated.

We evaluate the different termination strategies in our computational studies. Based on the specific problem at hand, we can choose the termination rule most suitable for the task.

6. COMPUTATIONAL RESULTS
The experiments were designed to evaluate the performance of the proposed approach in terms of prediction accuracy, sparsity of solutions and scalability to large data sets, and to compare it to other approaches such as composite-kernel methods. Small UCI datasets [6] were used primarily to evaluate generalization performance. Two additional large databases were used: the Forest data from the UCI KDD Archive, and the NIST hand-written digit database. Table 1 provides the basic statistics of the datasets. The data was preprocessed by normalizing each original feature to have mean 0 and standard deviation 1.

Table 1: Datasets summary. The columns are the dimension (dim), the number of points (n pts), the number of positive examples (n positive) and the number of negative examples (n negative); the rows are the Breast, Ionosphere, Pima, Digits and Forest datasets.

Three kernel types were used in the kernel library: K_1 is the linear kernel, K_2 is the quadratic kernel, and K_3 is an RBF kernel, denoted respectively as L, Q and R in Tables 3 and 4. We employed a fixed strategy to find a σ for the RBF kernel in the experiments on all datasets: we first calculate the mean of ‖x_i − x_j‖², where i, j run over the training examples, and then set σ equal to this mean value. We use the commercial optimization package ILOG CPLEX 8 to solve the restricted problems, but note that any suitable algorithm for the corresponding single-kernel problem can be extended to the mixture of kernels by using it to solve the restricted problems in CG-Boost.
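The RBF bandwidth heuristic just described can be computed directly. The following sketch uses the convention K(x, z) = exp(−‖x − z‖²/σ) from Section 2.2; for very large training sets one would estimate the mean over a subsample of pairs rather than the full pairwise matrix.

import numpy as np

def pairwise_sq_dists(X, Z):
    """Squared Euclidean distances between the rows of X and the rows of Z."""
    return ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)

def rbf_sigma_heuristic(X):
    """sigma = mean of ||x_i - x_j||^2 over the training examples (Section 6)."""
    return pairwise_sq_dists(X, X).mean()

def rbf_gram(X, sigma):
    """Gram matrix of the RBF kernel K(x, z) = exp(-||x - z||^2 / sigma)."""
    return np.exp(-pairwise_sq_dists(X, X) / sigma)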

6.1 Performance evaluation
For the small datasets, we performed 5-fold cross-validation for the various algorithms using the linear kernel, the RBF kernel, the composite kernel of linear and RBF, and our mixture-of-kernels model. The parameter C was tuned within each training dataset, from a small set of candidate values, in the first fold; the optimized value of C was then used in the successive folds. In our experiments we did not tune the parameters µ_p of the composite kernel, since efficient methods were not available; each µ_p is simply fixed to 1. CG-Boost was trained until optimality, i.e., termination rule TR2.

Table 2 lists the results obtained by the algorithms using the 2-norm regularization method. Since these datasets are quite imbalanced, we provide the FP rates and FN rates together with the training time (T_trn) required to completely optimize the QPs (TR2), the execution time (T_tst) in the test phase, and the number of columns from each kernel (C_l for the linear kernel, C_r for the RBF kernel).

Table 2: Five-fold CV results of the various methods (with 2-norm regularization) on the small datasets. For each of the Breast, Ionosphere and Pima datasets, the rows correspond to the linear, RBF, composite and mixture models, and the columns report FP_tst, FN_tst, T_trn, T_tst, C_l and C_r.

From Table 2, the composite model and the mixture model have similar performance, but the mixture models use significantly less time in the testing phase since their solutions are sparse. Solutions are especially sparse on the computationally expensive kernels such as RBF kernels. Sparsity for nonlinear kernels is computationally important since it allows kernel methods to scale well to large problems; the sparsity of the linear kernel is less crucial since we can simplify Σ_i α_i K_1(x_i, x) = K_1(Σ_i α_i x_i, x). Note that the Ionosphere dataset inherently favors the RBF kernel; since we emphasize sparsity on the RBF kernel columns, we obtained slightly worse performance there.

Results of the experiments using the large databases are reported in Tables 3 and 4. We generated these datasets by randomly choosing examples for training, a separate validation set and a test set; the training, validation and test sets are mutually exclusive.

Table 3: Training and test classification accuracies on the datasets created from the Forest database (top, 2-norm regularization; bottom, 1-norm regularization). The rows correspond to the single kernels L, Q, R, the composite kernels L+Q, L+R, L+Q+R, and the mixture L,Q,R; the columns report R_trn, R_tst, T_trn, T_tst, C_l, C_q and C_r.

Table 4: Training and test classification accuracies on the datasets created from the NIST hand-written digit database (top, 2-norm regularization; bottom, 1-norm regularization), in the same format as Table 3.

Since the amount of test data is increased, the difference in execution time between the mixture and composite models is magnified. This is due to the sparsity on the RBF kernel basis achieved by the mixture models, as can be seen from the last three columns of the tables: only 3 RBF kernel columns were selected for the Forest data in the mixture models. Due to the characteristics of composite kernel models, all kernels have the same number of columns selected, even though some columns might not be necessary. Notice that solving the mixture kernel models usually consumed more time according to T_trn (if we do not count the time needed to determine the parameters µ_p in the composite-kernel models). However, when we investigate the behavior of CG-Boost, we see that we do not have to solve the problem completely; we can terminate the CG algorithm once an acceptable solution is obtained, and then the training time is also dramatically reduced.

6.2 Behavior of CG-Boost
First, we examine the different termination rules using the experimental results obtained on the digit datasets by QPs with 2-norm regularization. Figure 2 shows the error percentage versus the number of columns; the three curves represent the training, validation and test performance. All three curves are drawn together only to illustrate the results; no test data was used in training.

Figure 2: Comparison of termination strategies (error percentage versus the number of generated columns for the training, validation and test sets).

If TR1 is used, we stop at the iteration marked by * in the figure, and if TR3 is used, we stop at the iteration marked by +. Completely solving the QPs (TR2) requires more iterations, as also marked in the figure.
Since using TR3 we obtain models with performance similar to the optimal model while the training time is reduced by more than half, TR3 appears to be a good choice of termination criterion, and we adopted TR3 in the later experiments.

Second, we validate the effectiveness of the stratified process for generating columns. Recall that the stratification strategy gives different priorities to different training examples and to different types of kernels. As Figure 3 shows, more columns have to be generated if we do not stratify the column generation, and the convergence is slower for the regular CG process. Moreover, using the stratified process, whenever a column from a simple kernel is identified for inclusion in the QPs, the columns from the more complex kernels are not calculated at all. Hence the number of kernel columns that must be computed in order to generate one column is significantly reduced in comparison with the regular CG process: on average the stratified CG needed 255 kernel columns to be calculated at each iteration, while the regular CG required a full scan over all lP columns at each iteration.

Third, we look at the parameter tuning problem.

Figure 3: Comparison of regular CG boosting and stratified CG boosting (error percentage versus the number of generated columns; curves: stratified train, stratified validation, regular train, regular validation).

Figure 4: An improper regularization parameter value can be monitored during the CG iterations (error percentage versus the number of generated columns; training and validation curves for two choices of C).

Table 5: The number of iterations and the running time as functions of the amount of training data (rows: Iter, Time; columns: the training-set sizes).

For classification problems, prediction accuracy depends on an appropriate choice of the regularization parameter C. In the usual tuning process, we choose a value for C, solve the QPs completely, and then examine the validation performance of the obtained model to decide whether the choice is appropriate. In CG-Boost, by monitoring the validation performance at each iteration, we can assess whether the model is overfitting due to an improper choice of C without having to fully solve the QPs, as shown in Figure 4.

Fourth, we assess the scalability of the proposed CG-Boost. CG-Boost is an extension of the original LPBoost, so the scalability analysis for LPBoost in [9] also applies to CG-Boost. We conducted experiments using the Forest database, randomly taking training sets of several increasing sizes from the database and using the same test examples as in the previous experiments. The numbers of columns and the running times are included in Table 5. The number of columns that were generated does not increase as the number of examples increases, and the rate of increase of the running time is not even linear. In addition, the testing time has a very flat rate of growth, since the numbers of columns included in the models are similar for datasets of different sizes and most of the terms are linear.

7. CONCLUSIONS
We proposed a CG boosting method for classification and regression using mixture-of-kernels models. We argued that the mixture-of-kernels approach leads to models that are more expressive than models based on the traditional single-kernel or composite-kernel approaches. This means that for many problems we are able to better approximate the target function with a fixed number of basis functions; a simple example was given to demonstrate this phenomenon. Due to the better approximation behavior, the mixture-of-kernels method often leads to sparser solutions. This property is computationally important since, with a sparser solution, it is possible to handle larger problems and the testing time can be significantly reduced. Moreover, sparse models tend to have better generalization performance due to the Occam's Razor principle. Experiments on various datasets were presented to support our claims. In addition, by extending the LPBoost column-generation boosting method to handle quadratic programming, we are able to solve many learning formulations by leveraging existing and future algorithms for constructing single-kernel models. Our method is also computationally significantly more efficient than the composite-kernel method, which requires semi-definite programming techniques to find appropriate composite kernels.

8. REFERENCES
[1] M. S. Bazaraa, H. D. Sherali, and C. M. Shetty. Nonlinear Programming: Theory and Algorithms. John Wiley & Sons, Inc., New York, NY, 1993.
[2] K. Bennett, M. Momma, and M. Embrechts. MARK: A boosting algorithm for heterogeneous kernel models. In Proceedings of the SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 24-31, 2002.
[3] K. P. Bennett, A. Demiriz, and J. Shawe-Taylor. A column generation algorithm for boosting. In Proceedings of the 17th International Conference on Machine Learning. Morgan Kaufmann, San Francisco, CA, 2000.
[4] J. Bi. Multi-objective programming in SVMs. In T. Fawcett and N. Mishra, editors, Proceedings of the Twentieth International Conference on Machine Learning, pages 35-42, Menlo Park, CA, 2003. AAAI Press.

[5] C. M. Bishop. Neural Networks for Pattern Recognition. Oxford University Press, New York, 1995.
[6] C. Blake and C. Merz. UCI repository of machine learning databases, 1998.
[7] K. Crammer, J. Keshet, and Y. Singer. Kernel design using boosting. In S. Thrun, S. Becker, and K. Obermayer, editors, Advances in Neural Information Processing Systems 15. MIT Press, Cambridge, MA, 2003.
[8] N. Cristianini, A. Elisseeff, and J. Shawe-Taylor. On optimizing kernel alignment. Technical Report NC2-TR-2001-087, NeuroCOLT, 2001.
[9] A. Demiriz, K. P. Bennett, and J. Shawe-Taylor. Linear programming boosting via column generation. Machine Learning, 46(1-3):225-254, 2002.
[10] B. Hamers, J. Suykens, V. Leemans, and B. De Moor. Ensemble learning of coupled parameterised kernel models. In Supplementary Proceedings of the International Conference on Artificial Neural Networks and International Conference on Neural Information Processing (ICANN/ICONIP), pages 130-133, 2003.
[11] T. Hastie, R. Tibshirani, and J. Friedman. The Elements of Statistical Learning: Data Mining, Inference and Prediction. Springer-Verlag, New York, 2001.
[12] G. Lanckriet, N. Cristianini, P. Bartlett, L. El Ghaoui, and M. I. Jordan. Learning the kernel matrix with semidefinite programming. Journal of Machine Learning Research, 5:27-72, 2004.
[13] O. L. Mangasarian. Generalized support vector machines. In P. Bartlett, B. Schölkopf, D. Schuurmans, and A. Smola, editors, Advances in Large Margin Classifiers. MIT Press, 2000.
[14] S. G. Nash and A. Sofer. Linear and Nonlinear Programming. McGraw-Hill, New York, NY, 1996.
[15] E. Parrado-Hernandez, J. Arenas-Garcia, I. Mora-Jimenez, and A. Navia-Vazquez. On problem oriented kernel refining. Neurocomputing, 55:135-150, 2003.
[16] E. Parrado-Hernandez, I. Mora-Jimenez, J. Arenas-Garcia, A. R. Figueiras-Vidal, and A. Navia-Vazquez. Growing support vector classifiers with controlled complexity. Pattern Recognition, 36:1479-1488, 2003.
[17] G. Rätsch, A. Demiriz, and K. Bennett. Sparse regression ensembles in infinite and finite hypothesis spaces. Machine Learning, 48(1-3):193-221, 2002.
[18] B. Schölkopf, K.-K. Sung, C. Burges, F. Girosi, P. Niyogi, T. Poggio, and V. Vapnik. Comparing support vector machines with Gaussian kernels to radial basis function classifiers. IEEE Transactions on Signal Processing, 45(11):2758-2765, 1997.
[19] V. N. Vapnik. Statistical Learning Theory. John Wiley & Sons, Inc., New York, 1998.


More information

6.854 Advanced Algorithms Petar Maymounkov Problem Set 11 (November 23, 2005) With: Benjamin Rossman, Oren Weimann, and Pouya Kheradpour

6.854 Advanced Algorithms Petar Maymounkov Problem Set 11 (November 23, 2005) With: Benjamin Rossman, Oren Weimann, and Pouya Kheradpour 6.854 Advanced Algorthms Petar Maymounkov Problem Set 11 (November 23, 2005) Wth: Benjamn Rossman, Oren Wemann, and Pouya Kheradpour Problem 1. We reduce vertex cover to MAX-SAT wth weghts, such that the

More information

An Iterative Solution Approach to Process Plant Layout using Mixed Integer Optimisation

An Iterative Solution Approach to Process Plant Layout using Mixed Integer Optimisation 17 th European Symposum on Computer Aded Process Engneerng ESCAPE17 V. Plesu and P.S. Agach (Edtors) 2007 Elsever B.V. All rghts reserved. 1 An Iteratve Soluton Approach to Process Plant Layout usng Mxed

More information

LECTURE NOTES Duality Theory, Sensitivity Analysis, and Parametric Programming

LECTURE NOTES Duality Theory, Sensitivity Analysis, and Parametric Programming CEE 60 Davd Rosenberg p. LECTURE NOTES Dualty Theory, Senstvty Analyss, and Parametrc Programmng Learnng Objectves. Revew the prmal LP model formulaton 2. Formulate the Dual Problem of an LP problem (TUES)

More information

The Codesign Challenge

The Codesign Challenge ECE 4530 Codesgn Challenge Fall 2007 Hardware/Software Codesgn The Codesgn Challenge Objectves In the codesgn challenge, your task s to accelerate a gven software reference mplementaton as fast as possble.

More information

X- Chart Using ANOM Approach

X- Chart Using ANOM Approach ISSN 1684-8403 Journal of Statstcs Volume 17, 010, pp. 3-3 Abstract X- Chart Usng ANOM Approach Gullapall Chakravarth 1 and Chaluvad Venkateswara Rao Control lmts for ndvdual measurements (X) chart are

More information

SLAM Summer School 2006 Practical 2: SLAM using Monocular Vision

SLAM Summer School 2006 Practical 2: SLAM using Monocular Vision SLAM Summer School 2006 Practcal 2: SLAM usng Monocular Vson Javer Cvera, Unversty of Zaragoza Andrew J. Davson, Imperal College London J.M.M Montel, Unversty of Zaragoza. josemar@unzar.es, jcvera@unzar.es,

More information

Taxonomy of Large Margin Principle Algorithms for Ordinal Regression Problems

Taxonomy of Large Margin Principle Algorithms for Ordinal Regression Problems Taxonomy of Large Margn Prncple Algorthms for Ordnal Regresson Problems Amnon Shashua Computer Scence Department Stanford Unversty Stanford, CA 94305 emal: shashua@cs.stanford.edu Anat Levn School of Computer

More information

Complex System Reliability Evaluation using Support Vector Machine for Incomplete Data-set

Complex System Reliability Evaluation using Support Vector Machine for Incomplete Data-set Internatonal Journal of Performablty Engneerng, Vol. 7, No. 1, January 2010, pp.32-42. RAMS Consultants Prnted n Inda Complex System Relablty Evaluaton usng Support Vector Machne for Incomplete Data-set

More information

An Application of the Dulmage-Mendelsohn Decomposition to Sparse Null Space Bases of Full Row Rank Matrices

An Application of the Dulmage-Mendelsohn Decomposition to Sparse Null Space Bases of Full Row Rank Matrices Internatonal Mathematcal Forum, Vol 7, 2012, no 52, 2549-2554 An Applcaton of the Dulmage-Mendelsohn Decomposton to Sparse Null Space Bases of Full Row Rank Matrces Mostafa Khorramzadeh Department of Mathematcal

More information

Using Neural Networks and Support Vector Machines in Data Mining

Using Neural Networks and Support Vector Machines in Data Mining Usng eural etworks and Support Vector Machnes n Data Mnng RICHARD A. WASIOWSKI Computer Scence Department Calforna State Unversty Domnguez Hlls Carson, CA 90747 USA Abstract: - Multvarate data analyss

More information

User Authentication Based On Behavioral Mouse Dynamics Biometrics

User Authentication Based On Behavioral Mouse Dynamics Biometrics User Authentcaton Based On Behavoral Mouse Dynamcs Bometrcs Chee-Hyung Yoon Danel Donghyun Km Department of Computer Scence Department of Computer Scence Stanford Unversty Stanford Unversty Stanford, CA

More information

Lecture 4: Principal components

Lecture 4: Principal components /3/6 Lecture 4: Prncpal components 3..6 Multvarate lnear regresson MLR s optmal for the estmaton data...but poor for handlng collnear data Covarance matrx s not nvertble (large condton number) Robustness

More information

Content Based Image Retrieval Using 2-D Discrete Wavelet with Texture Feature with Different Classifiers

Content Based Image Retrieval Using 2-D Discrete Wavelet with Texture Feature with Different Classifiers IOSR Journal of Electroncs and Communcaton Engneerng (IOSR-JECE) e-issn: 78-834,p- ISSN: 78-8735.Volume 9, Issue, Ver. IV (Mar - Apr. 04), PP 0-07 Content Based Image Retreval Usng -D Dscrete Wavelet wth

More information

Cluster Analysis of Electrical Behavior

Cluster Analysis of Electrical Behavior Journal of Computer and Communcatons, 205, 3, 88-93 Publshed Onlne May 205 n ScRes. http://www.scrp.org/ournal/cc http://dx.do.org/0.4236/cc.205.350 Cluster Analyss of Electrcal Behavor Ln Lu Ln Lu, School

More information

Review of approximation techniques

Review of approximation techniques CHAPTER 2 Revew of appromaton technques 2. Introducton Optmzaton problems n engneerng desgn are characterzed by the followng assocated features: the objectve functon and constrants are mplct functons evaluated

More information

Kent State University CS 4/ Design and Analysis of Algorithms. Dept. of Math & Computer Science LECT-16. Dynamic Programming

Kent State University CS 4/ Design and Analysis of Algorithms. Dept. of Math & Computer Science LECT-16. Dynamic Programming CS 4/560 Desgn and Analyss of Algorthms Kent State Unversty Dept. of Math & Computer Scence LECT-6 Dynamc Programmng 2 Dynamc Programmng Dynamc Programmng, lke the dvde-and-conquer method, solves problems

More information

Collaboratively Regularized Nearest Points for Set Based Recognition

Collaboratively Regularized Nearest Points for Set Based Recognition Academc Center for Computng and Meda Studes, Kyoto Unversty Collaboratvely Regularzed Nearest Ponts for Set Based Recognton Yang Wu, Mchhko Mnoh, Masayuk Mukunok Kyoto Unversty 9/1/013 BMVC 013 @ Brstol,

More information

A New Approach For the Ranking of Fuzzy Sets With Different Heights

A New Approach For the Ranking of Fuzzy Sets With Different Heights New pproach For the ankng of Fuzzy Sets Wth Dfferent Heghts Pushpnder Sngh School of Mathematcs Computer pplcatons Thapar Unversty, Patala-7 00 Inda pushpndersnl@gmalcom STCT ankng of fuzzy sets plays

More information

Outline. Type of Machine Learning. Examples of Application. Unsupervised Learning

Outline. Type of Machine Learning. Examples of Application. Unsupervised Learning Outlne Artfcal Intellgence and ts applcatons Lecture 8 Unsupervsed Learnng Professor Danel Yeung danyeung@eee.org Dr. Patrck Chan patrckchan@eee.org South Chna Unversty of Technology, Chna Introducton

More information

Network Intrusion Detection Based on PSO-SVM

Network Intrusion Detection Based on PSO-SVM TELKOMNIKA Indonesan Journal of Electrcal Engneerng Vol.1, No., February 014, pp. 150 ~ 1508 DOI: http://dx.do.org/10.11591/telkomnka.v1.386 150 Network Intruson Detecton Based on PSO-SVM Changsheng Xang*

More information

Sum of Linear and Fractional Multiobjective Programming Problem under Fuzzy Rules Constraints

Sum of Linear and Fractional Multiobjective Programming Problem under Fuzzy Rules Constraints Australan Journal of Basc and Appled Scences, 2(4): 1204-1208, 2008 ISSN 1991-8178 Sum of Lnear and Fractonal Multobjectve Programmng Problem under Fuzzy Rules Constrants 1 2 Sanjay Jan and Kalash Lachhwan

More information

Categories and Subject Descriptors B.7.2 [Integrated Circuits]: Design Aids Verification. General Terms Algorithms

Categories and Subject Descriptors B.7.2 [Integrated Circuits]: Design Aids Verification. General Terms Algorithms 3. Fndng Determnstc Soluton from Underdetermned Equaton: Large-Scale Performance Modelng by Least Angle Regresson Xn L ECE Department, Carnege Mellon Unversty Forbs Avenue, Pttsburgh, PA 3 xnl@ece.cmu.edu

More information

Adaptive Virtual Support Vector Machine for the Reliability Analysis of High-Dimensional Problems

Adaptive Virtual Support Vector Machine for the Reliability Analysis of High-Dimensional Problems Proceedngs of the ASME 2 Internatonal Desgn Engneerng Techncal Conferences & Computers and Informaton n Engneerng Conference IDETC/CIE 2 August 29-3, 2, Washngton, D.C., USA DETC2-47538 Adaptve Vrtual

More information

Efficient Distributed Linear Classification Algorithms via the Alternating Direction Method of Multipliers

Efficient Distributed Linear Classification Algorithms via the Alternating Direction Method of Multipliers Effcent Dstrbuted Lnear Classfcaton Algorthms va the Alternatng Drecton Method of Multplers Caoxe Zhang Honglak Lee Kang G. Shn Department of EECS Unversty of Mchgan Ann Arbor, MI 48109, USA caoxezh@umch.edu

More information

Biostatistics 615/815

Biostatistics 615/815 The E-M Algorthm Bostatstcs 615/815 Lecture 17 Last Lecture: The Smplex Method General method for optmzaton Makes few assumptons about functon Crawls towards mnmum Some recommendatons Multple startng ponts

More information

Parallel matrix-vector multiplication

Parallel matrix-vector multiplication Appendx A Parallel matrx-vector multplcaton The reduced transton matrx of the three-dmensonal cage model for gel electrophoress, descrbed n secton 3.2, becomes excessvely large for polymer lengths more

More information

Classifying Acoustic Transient Signals Using Artificial Intelligence

Classifying Acoustic Transient Signals Using Artificial Intelligence Classfyng Acoustc Transent Sgnals Usng Artfcal Intellgence Steve Sutton, Unversty of North Carolna At Wlmngton (suttons@charter.net) Greg Huff, Unversty of North Carolna At Wlmngton (jgh7476@uncwl.edu)

More information

Active Contours/Snakes

Active Contours/Snakes Actve Contours/Snakes Erkut Erdem Acknowledgement: The sldes are adapted from the sldes prepared by K. Grauman of Unversty of Texas at Austn Fttng: Edges vs. boundares Edges useful sgnal to ndcate occludng

More information

Term Weighting Classification System Using the Chi-square Statistic for the Classification Subtask at NTCIR-6 Patent Retrieval Task

Term Weighting Classification System Using the Chi-square Statistic for the Classification Subtask at NTCIR-6 Patent Retrieval Task Proceedngs of NTCIR-6 Workshop Meetng, May 15-18, 2007, Tokyo, Japan Term Weghtng Classfcaton System Usng the Ch-square Statstc for the Classfcaton Subtask at NTCIR-6 Patent Retreval Task Kotaro Hashmoto

More information

SUMMARY... I TABLE OF CONTENTS...II INTRODUCTION...

SUMMARY... I TABLE OF CONTENTS...II INTRODUCTION... Summary A follow-the-leader robot system s mplemented usng Dscrete-Event Supervsory Control methods. The system conssts of three robots, a leader and two followers. The dea s to get the two followers to

More information

For instance, ; the five basic number-sets are increasingly more n A B & B A A = B (1)

For instance, ; the five basic number-sets are increasingly more n A B & B A A = B (1) Secton 1.2 Subsets and the Boolean operatons on sets If every element of the set A s an element of the set B, we say that A s a subset of B, or that A s contaned n B, or that B contans A, and we wrte A

More information

CMPS 10 Introduction to Computer Science Lecture Notes

CMPS 10 Introduction to Computer Science Lecture Notes CPS 0 Introducton to Computer Scence Lecture Notes Chapter : Algorthm Desgn How should we present algorthms? Natural languages lke Englsh, Spansh, or French whch are rch n nterpretaton and meanng are not

More information

Learning to Project in Multi-Objective Binary Linear Programming

Learning to Project in Multi-Objective Binary Linear Programming Learnng to Project n Mult-Objectve Bnary Lnear Programmng Alvaro Serra-Altamranda Department of Industral and Management System Engneerng, Unversty of South Florda, Tampa, FL, 33620 USA, amserra@mal.usf.edu,

More information

Sorting Review. Sorting. Comparison Sorting. CSE 680 Prof. Roger Crawfis. Assumptions

Sorting Review. Sorting. Comparison Sorting. CSE 680 Prof. Roger Crawfis. Assumptions Sortng Revew Introducton to Algorthms Qucksort CSE 680 Prof. Roger Crawfs Inserton Sort T(n) = Θ(n 2 ) In-place Merge Sort T(n) = Θ(n lg(n)) Not n-place Selecton Sort (from homework) T(n) = Θ(n 2 ) In-place

More information

SENSITIVITY ANALYSIS IN LINEAR PROGRAMMING USING A CALCULATOR

SENSITIVITY ANALYSIS IN LINEAR PROGRAMMING USING A CALCULATOR SENSITIVITY ANALYSIS IN LINEAR PROGRAMMING USING A CALCULATOR Judth Aronow Rchard Jarvnen Independent Consultant Dept of Math/Stat 559 Frost Wnona State Unversty Beaumont, TX 7776 Wnona, MN 55987 aronowju@hal.lamar.edu

More information

Programming in Fortran 90 : 2017/2018

Programming in Fortran 90 : 2017/2018 Programmng n Fortran 90 : 2017/2018 Programmng n Fortran 90 : 2017/2018 Exercse 1 : Evaluaton of functon dependng on nput Wrte a program who evaluate the functon f (x,y) for any two user specfed values

More information

Wavefront Reconstructor

Wavefront Reconstructor A Dstrbuted Smplex B-Splne Based Wavefront Reconstructor Coen de Vsser and Mchel Verhaegen 14-12-201212 2012 Delft Unversty of Technology Contents Introducton Wavefront reconstructon usng Smplex B-Splnes

More information

Tsinghua University at TAC 2009: Summarizing Multi-documents by Information Distance

Tsinghua University at TAC 2009: Summarizing Multi-documents by Information Distance Tsnghua Unversty at TAC 2009: Summarzng Mult-documents by Informaton Dstance Chong Long, Mnle Huang, Xaoyan Zhu State Key Laboratory of Intellgent Technology and Systems, Tsnghua Natonal Laboratory for

More information

A mathematical programming approach to the analysis, design and scheduling of offshore oilfields

A mathematical programming approach to the analysis, design and scheduling of offshore oilfields 17 th European Symposum on Computer Aded Process Engneerng ESCAPE17 V. Plesu and P.S. Agach (Edtors) 2007 Elsever B.V. All rghts reserved. 1 A mathematcal programmng approach to the analyss, desgn and

More information

Face Recognition University at Buffalo CSE666 Lecture Slides Resources:

Face Recognition University at Buffalo CSE666 Lecture Slides Resources: Face Recognton Unversty at Buffalo CSE666 Lecture Sldes Resources: http://www.face-rec.org/algorthms/ Overvew of face recognton algorthms Correlaton - Pxel based correspondence between two face mages Structural

More information

Radial Basis Functions

Radial Basis Functions Radal Bass Functons Mesh Reconstructon Input: pont cloud Output: water-tght manfold mesh Explct Connectvty estmaton Implct Sgned dstance functon estmaton Image from: Reconstructon and Representaton of

More information

Chapter 6 Programmng the fnte element method Inow turn to the man subject of ths book: The mplementaton of the fnte element algorthm n computer programs. In order to make my dscusson as straghtforward

More information

A MOVING MESH APPROACH FOR SIMULATION BUDGET ALLOCATION ON CONTINUOUS DOMAINS

A MOVING MESH APPROACH FOR SIMULATION BUDGET ALLOCATION ON CONTINUOUS DOMAINS Proceedngs of the Wnter Smulaton Conference M E Kuhl, N M Steger, F B Armstrong, and J A Jones, eds A MOVING MESH APPROACH FOR SIMULATION BUDGET ALLOCATION ON CONTINUOUS DOMAINS Mark W Brantley Chun-Hung

More information

11. APPROXIMATION ALGORITHMS

11. APPROXIMATION ALGORITHMS Copng wth NP-completeness 11. APPROXIMATION ALGORITHMS load balancng center selecton prcng method: vertex cover LP roundng: vertex cover generalzed load balancng knapsack problem Q. Suppose I need to solve

More information

A Saturation Binary Neural Network for Crossbar Switching Problem

A Saturation Binary Neural Network for Crossbar Switching Problem A Saturaton Bnary Neural Network for Crossbar Swtchng Problem Cu Zhang 1, L-Qng Zhao 2, and Rong-Long Wang 2 1 Department of Autocontrol, Laonng Insttute of Scence and Technology, Benx, Chna bxlkyzhangcu@163.com

More information

A Modified Median Filter for the Removal of Impulse Noise Based on the Support Vector Machines

A Modified Median Filter for the Removal of Impulse Noise Based on the Support Vector Machines A Modfed Medan Flter for the Removal of Impulse Nose Based on the Support Vector Machnes H. GOMEZ-MORENO, S. MALDONADO-BASCON, F. LOPEZ-FERRERAS, M. UTRILLA- MANSO AND P. GIL-JIMENEZ Departamento de Teoría

More information

CHAPTER 2 PROPOSED IMPROVED PARTICLE SWARM OPTIMIZATION

CHAPTER 2 PROPOSED IMPROVED PARTICLE SWARM OPTIMIZATION 24 CHAPTER 2 PROPOSED IMPROVED PARTICLE SWARM OPTIMIZATION The present chapter proposes an IPSO approach for multprocessor task schedulng problem wth two classfcatons, namely, statc ndependent tasks and

More information