Learning to Project in Multi-Objective Binary Linear Programming


Alvaro Sierra-Altamiranda, Department of Industrial and Management Systems Engineering, University of South Florida, Tampa, FL, USA, amserra@mail.usf.edu

Hadi Charkhgard, Department of Industrial and Management Systems Engineering, University of South Florida, Tampa, FL, USA, hcharkhgard@usf.edu

Iman Dayarian, Culverhouse College of Business, The University of Alabama, Tuscaloosa, AL, USA, idayarian@cba.ua.edu

Ali Eshragh, School of Mathematical and Physical Sciences, The University of Newcastle, Callaghan, NSW 2308, Australia, ali.eshragh@newcastle.edu.au

Sorna Javadi, Department of Industrial and Management Systems Engineering, University of South Florida, Tampa, FL, USA, javadis@mail.usf.edu

In this paper, we investigate the possibility of improving the performance of multi-objective optimization solution approaches using machine learning techniques. Specifically, we focus on multi-objective binary linear programs and employ one of the most effective and recently developed criterion space search algorithms, the so-called KSA, during our study. This algorithm computes all nondominated points of a problem with p objectives by searching on a projected criterion space, i.e., a (p-1)-dimensional criterion space. We present an effective and fast learning approach to identify on which projected space the KSA should work. We also present several generic features/variables that can be used in machine learning techniques for identifying the best projected space. Finally, we present an effective bi-objective optimization based heuristic for selecting the best subset of the features to overcome the issue of overfitting in learning. Through an extensive computational study over 2000 instances of tri-objective Knapsack and Assignment problems, we demonstrate that an improvement of up to 12% in time can be achieved by the proposed learning method compared to a random selection of the projected space.

Key words: Multi-objective optimization, machine learning, binary linear program, criterion space search algorithm, learning to project

History:

1. Introduction
Many real-life optimization problems involve multiple objective functions, and they can be stated as follows:

    \min_{x \in X} \{ z_1(x), \ldots, z_p(x) \},    (1)

where X \subseteq R^n represents the set of feasible solutions of the problem, and z_1(x), ..., z_p(x) are p objective functions. Because the objectives of a multi-objective optimization problem are often competing, an ideal feasible solution that optimizes all objectives at the same time does not often exist in practice. Hence, when solving such a problem, the goal is often to generate some (if not all) efficient solutions, i.e., feasible solutions for which it is impossible to improve the value of one objective without worsening the value of another. The focus of this study is on Multi-Objective Binary Linear Programs (MOBLPs), i.e., multi-objective optimization problems in which all decision variables are binary and all objective functions and constraints are linear.

In the last few years, significant advances have been made in the development of effective algorithms for solving MOBLPs; see, for instance, Boland et al. (2015a,b, 2016, 2017b), Dächert et al. (2012), Dächert and Klamroth (2015), Fattahi and Turkay (2017), Kirlik and Sayın (2014), Özpeynirci and Köksalan (2010), Lokman and Köksalan (2013), Özlen et al. (2013), Przybylski and Gandibleux (2017), Przybylski et al. (2010), Soylu and Yıldız (2016), and Vincent et al. (2013). Many of the recently developed algorithms fall into the category of criterion space search algorithms, i.e., those that work in the space of objective function values. Hence, such algorithms are specifically designed to find all nondominated points of a multi-objective optimization problem, where the image of an efficient solution in the criterion space is referred to as a nondominated point. After computing each nondominated point, criterion space search algorithms remove the portion of the criterion space dominated by the obtained nondominated point and search for not-yet-found nondominated points in the remaining space.

In general, to solve a multi-objective optimization problem, criterion space search algorithms solve a sequence of single-objective optimization problems. Specifically, when solving a problem with p objective functions, many criterion space search algorithms first attempt to transform the problem into a sequence of problems with (p-1) objectives (Boland et al. 2017b). In other words, they attempt to compute all nondominated points by discovering

their projections in a (p-1)-dimensional criterion space. Evidently, the same process can be applied recursively until a sequence of single-objective optimization problems is generated. For example, to solve each problem with (p-1) objectives, a sequence of problems with (p-2) objectives can be solved. Overall, there are at least two possible ways to apply the projection from a higher-dimensional criterion space (for example, p) to a criterion space with one less dimension (for example, p-1):

Weighted Sum Projection: A typical approach used in the literature (see, for instance, Özlen and Azizoğlu 2009) is to select one of the objective functions available in the higher dimension (for example, z_1(x)) and remove it after adding it, with some strictly positive weight, to the other objective functions. In this case, by imposing different bounds on z_1(x) and/or the values of the other objective functions, a sequence of optimization problems with p-1 objectives is generated.

Lexicographical Projection: We first note that a lexicographical optimization problem is a two-stage optimization problem that attempts to optimize a set of objectives, the so-called secondary objectives, over the set of solutions that are optimal for another objective, the so-called primary objective. The first stage of a lexicographical optimization problem is a single-objective optimization problem, as it optimizes the primary goal. The second stage, however, can be a multi-objective optimization problem, as it optimizes the secondary objectives. Based on this definition, another typical approach for projection (see, for instance, Özlen et al. (2013)) is to select one of the objective functions available in the higher dimension (for example, z_1(x)) and simply remove it. In this case, by imposing different bounds on z_1(x) and/or the values of the other objective functions, a sequence of lexicographical optimization problems should be solved, in which z_1(x) is the primary objective and the remaining p-1 objectives are the secondary objectives.
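To make the two schemes concrete, removing z_1(x) can be written as follows (a sketch in our own notation, not taken verbatim from the cited references):

\textbf{Weighted sum:}\quad \min_{x \in X} \bigl\{\, z_2(x) + \lambda\, z_1(x),\ \ldots,\ z_p(x) + \lambda\, z_1(x) \,\bigr\}, \qquad \lambda > 0,

\textbf{Lexicographic:}\quad \hat{x} \in \arg\min_{x \in X} z_1(x), \quad\text{then}\quad \min_{x \in X} \bigl\{\, z_2(x), \ldots, z_p(x) \ :\ z_1(x) \le z_1(\hat{x}) \,\bigr\},

where, in both cases, bounds on z_1(x) and/or the other objective values are imposed to generate the sequence of subproblems with p-1 objectives.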

In light of the above, which objective function should be selected for a projection and how the projection should be done are two typical questions that arise when developing a criterion space search algorithm. By this observation, there are many possible ways to develop a criterion space search algorithm, and some of them may perform better on some instances. So, the underlying research question of this study is whether Machine Learning (ML) techniques can help answer these questions for a given class of instances of a multi-objective optimization problem.

It is worth mentioning that, in recent years, similar questions have been asked in the field of single-objective optimization. For example, ML techniques have been successfully implemented for the purpose of variable selection and node selection in branch-and-bound algorithms (see, for instance, Khalil et al. (2016), Alvarez et al. (2017), Sabharwal et al. (2012), He et al. (2014), Khalil et al. (2017)). However, the majority of the algorithmic/theoretical studies in the field of ML are still focused on using optimization models and algorithms to enhance ML techniques, and not the other way around (see, for instance, Roth and Yih (2005), Bottou (2010), Le et al. (2011), Sra et al. (2012), Snoek et al. (2012), Bertsimas et al. (2016)). In general, to the best of our knowledge, there are no studies in the literature that address the problem of enhancing multi-objective optimization algorithms using ML. In this study, as a first attempt, we focus only on the simplest and most high-level question that can be asked: for a given instance of a MOBLP with p objective functions, which objective should be removed, when reducing the dimension of the criterion space to p-1, in order to minimize the solution time? It is evident that if one can show that ML is valuable even for such a high-level question, then deeper questions can be asked and explored that can possibly improve the solution time significantly.

In order to answer the above question, we employ one of the effective state-of-the-art algorithms in the literature of multi-objective optimization, the so-called KSA, which was developed by Kirlik and Sayın (2014). This algorithm uses the lexicographical projection for reducing the p-dimensional criterion space to p-1 dimensions, and then recursively reduces the dimension from p-1 to 1 by using a special case of the weighted sum projection in which all the weights are equal to one. Currently, the default objective function for conducting the projection from the p-dimensional criterion space to p-1 dimensions is the first objective function (or, effectively, a random one, because one can change the order of the objective functions in an input file). So, a natural question is: does it really matter which objective function is selected for such a projection? To answer this question, we conducted a set of experiments using a C++ implementation of the KSA, which is publicly available online (~moolibrary), and recorded the number of ILPs solved (#ILPs) and the computational time (in seconds). We generated 1000 instances (200 per class) of the tri-objective Assignment Problem (AP) and 1000 instances (200 per class) of the tri-objective Knapsack Problem (KP) based on the

procedure described by Kirlik and Sayın (2014). Table 1 shows the impact of projecting based on the worst and the best objective function using the KSA, where #ILPs is the number of single-objective integer linear programs solved. Numbers reported in this table are averages over 200 instances.

Table 1: Projecting based on different objectives using the KSA.

                                Projecting worst objective    Projecting best objective           %Decrease
Type  #Objectives  #Variables   Run time (s.)    #ILPs        Run time (s.)    #ILPs      Run time (s.)    #ILPs
AP    3            -            -                -            -                -          -                4.61%
      3            -            -                -            -                -          -                4.40%
      3            -            -                -            -                -          -                4.01%
      3            -            -                -            -                -          -                4.27%
      3            -            -                -            -                -          -                3.97%
KP    3            -            -                -            -                -          -                1.05%
      3            -            -                -            -                -          -                0.82%
      3            80           -                -            -                -          -                0.74%
      3            90           -                -            -                -          -                0.63%
      3            -            -                -            -                -          -                0.57%

We observe that, on average, the running time can be reduced by up to 34%, while the #ILPs can be improved by up to 4%. This numerical study clearly shows the importance of the projection for the solution time. Hence, it is certainly worth studying ML techniques for predicting the best objective function to project on, the so-called learning to project. Our main contribution in this study is to introduce an ML framework to simulate the selection of the best objective function to project. We collect data from each objective function and its interactions with the decision space to create features. Based on the created features, an easy-to-evaluate function is learned to emulate the classification of the projections. Another contribution of this study is the development of a simple but effective bi-objective optimization based heuristic approach for selecting the best subset of features to overcome the issue of overfitting. We show that the accuracy of the proposed prediction model can reach up to around 72%, which represents up to a 12% improvement in solution time.

The rest of this paper is organized as follows. In Section 2, some useful concepts and notation of multi-objective optimization are introduced, and a high-level description of the KSA is given. In Section 3, we provide a high-level description of our proposed

machine learning framework and its three main components. In Section 4, the first component of the framework, a pre-ordering approach for changing the order of the objective functions in an input file, is explained. In Section 5, the second component of the framework, which includes the features and the labels, is explained. In Section 6, the third and last component of the framework, a bi-objective optimization based heuristic for selecting the best subset of features, is introduced. In Section 7, we provide a comprehensive computational study. Finally, in Section 8, we provide some concluding remarks.

2. Preliminaries
A Multi-Objective Binary Linear Program (MOBLP) is a problem of the form (1) in which X := {x ∈ {0,1}^n : Ax ≤ b} represents the feasible set in the decision space, A ∈ R^{m×n}, and b ∈ R^m. It is assumed that X is bounded and that z_i(x) = c_i^T x, where c_i ∈ R^n for i = 1, 2, ..., p, represents a linear objective function. The image Y of X under the vector-valued function z := (z_1, z_2, ..., z_p) represents the feasible set in the objective/criterion space, that is, Y := {o ∈ R^p : o = z(x) for some x ∈ X}. Throughout this article, vectors are always column vectors and are denoted in bold font.

Definition 1. A feasible solution x ∈ X is called efficient or Pareto optimal if there is no other x' ∈ X such that z_i(x') ≤ z_i(x) for i = 1, ..., p and z(x') ≠ z(x). If x is efficient, then z(x) is called a nondominated point. The set of all efficient solutions x ∈ X is denoted by X_E. The set of all nondominated points z(x) ∈ Y for x ∈ X_E is denoted by Y_N and is referred to as the nondominated frontier.

Overall, multi-objective optimization is concerned with finding all nondominated points, i.e., an exact representation of the elements of Y_N. The set of nondominated points of a MOBLP is finite (since, by assumption, X is bounded). However, due to the existence of unsupported nondominated points, i.e., nondominated points that cannot be obtained by optimizing any positive weighted summation of the objective functions over the feasible set, computing all nondominated points is challenging.
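To illustrate Definition 1, the following minimal Python sketch (ours, not part of the paper or the KSA code) filters the nondominated points out of a finite list of criterion vectors, assuming minimization:

def nondominated(points):
    # A point y is dominated if some other point o satisfies o[i] <= y[i]
    # for every component i, with o != y (componentwise, minimization).
    frontier = []
    for y in points:
        dominated = any(
            o != y and all(o[i] <= y[i] for i in range(len(y)))
            for o in points
        )
        if not dominated:
            frontier.append(y)
    return frontier

# (3, 4) is dominated by (1, 4); the other two points form the frontier.
print(nondominated([(1, 4), (2, 2), (3, 4)]))  # -> [(1, 4), (2, 2)]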

One of the effective criterion space search algorithms for MOBLPs is the KSA, and a high-level description of it is provided next. The KSA is basically a variation of the ε-constraint method for generating the entire nondominated frontier of multi-objective integer linear programs. In each iteration, this algorithm solves a lexicographical optimization problem in which the first stage is

    \hat{x} \in \arg\min \{\, z_1(x) : x \in X,\ z_i(x) \le u_i \ \ \forall i \in \{2,\ldots,p\} \,\},

where u_2, ..., u_p are user-defined upper bounds. If \hat{x} exists, i.e., the first stage is feasible, then the following second-stage problem will be solved:

    \hat{x} \in \arg\min \Bigl\{\, \sum_{i=2}^{p} z_i(x) : x \in X,\ z_1(x) \le z_1(\hat{x}),\ z_i(x) \le u_i \ \ \forall i \in \{2,\ldots,p\} \,\Bigr\}.

The algorithm computes all nondominated points by imposing different values on u_2, ..., u_p in each iteration. Interested readers may refer to Kirlik and Sayın (2014) for further details about how the values of u_2, ..., u_p are updated in each iteration. It is important to note that in the first stage users can replace the objective function z_1(x) with any other objective function, i.e., z_j(x) where j ∈ {1, ..., p}, and change the objective function of the second stage accordingly, i.e., to \sum_{i \ne j} z_i(x). As shown in the Introduction, on average, the running time can decrease by up to 34% by choosing the right objective function for the first stage. So, the goal of the proposed machine learning technique in this study is to identify the best choice.

As an aside, we note that, to be consistent with our explanation of the lexicographic and/or weighted sum projections in the Introduction, the lexicographic optimization problem of the KSA is presented slightly differently in this section. Specifically, Kirlik and Sayın (2014) use the following optimization problem instead of the second-stage problem mentioned above:

    \hat{x} \in \arg\min \Bigl\{\, \sum_{i=1}^{p} z_i(x) : x \in X,\ z_1(x) = z_1(\hat{x}),\ z_i(x) \le u_i \ \ \forall i \in \{2,\ldots,p\} \,\Bigr\}.

However, one can easily observe that these two formulations are equivalent. In other words, the lexicographic optimization problem introduced in this section is just a different representation of the one proposed by Kirlik and Sayın (2014).
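The two-stage structure above can be sketched in a few lines of Python. The snippet below is ours; it uses the open-source PuLP modeler with made-up tri-objective data, whereas the paper itself uses the authors' C++ KSA implementation with CPLEX. It solves one first-stage/second-stage pair:

import pulp

# Made-up tri-objective data (illustration only): 3 binary variables.
c = [[3, 5, 2], [4, 1, 6], [2, 2, 2]]   # c[i][j]: objective i, variable j
w, rhs = [4, 3, 5], 6                    # a single covering constraint: w.x >= rhs
u = {1: 10, 2: 10}                       # upper bounds u_2, u_3 (0-based keys)

x = [pulp.LpVariable(f"x{j}", cat="Binary") for j in range(3)]

def z(i):
    return pulp.lpSum(c[i][j] * x[j] for j in range(3))

def add_common(prob):
    prob += pulp.lpSum(w[j] * x[j] for j in range(3)) >= rhs   # x in X
    for i in (1, 2):
        prob += z(i) <= u[i]                                    # z_i(x) <= u_i

# First stage: minimize z_1 over X subject to the bounds.
stage1 = pulp.LpProblem("stage1", pulp.LpMinimize)
stage1 += z(0)
add_common(stage1)
stage1.solve(pulp.PULP_CBC_CMD(msg=False))

if pulp.LpStatus[stage1.status] == "Optimal":
    z1_star = pulp.value(z(0))
    # Second stage: minimize z_2 + z_3 without degrading z_1.
    stage2 = pulp.LpProblem("stage2", pulp.LpMinimize)
    stage2 += z(1) + z(2)
    add_common(stage2)
    stage2 += z(0) <= z1_star
    stage2.solve(pulp.PULP_CBC_CMD(msg=False))
    print([int(pulp.value(v)) for v in x], pulp.value(z(0)), pulp.value(z(1) + z(2)))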

3. Machine learning framework
We now introduce our ML framework for learning to project in MOBLPs. Our proposed framework is based on the Multi-class Support Vector Machine (MSVM). In this application, MSVM learns a function f : Φ → Ω from a training set to predict which objective function will have the best performance in the first stage of the KSA (for a MOBLP instance), where Φ is the feature-map domain describing the MOBLP instance and Ω := {1, 2, ..., p} is the domain of the labels. A label y ∈ Ω indicates the index of the objective function that should be selected. We do not explain MSVM in this study, but interested readers may refer to Crammer and Singer (2001) and Tsochantaridis et al. (2004) for details. We used a publicly available implementation of MSVM in this study. It is worth mentioning that we used MSVM mainly because it was performing well during the course of this study. In Section 7.4, we also report results obtained by replacing MSVM with Random Forest (Breiman 2001, Prinzie and Van den Poel 2008) to show the performance of another learning technique in our proposed framework; we also provide more reasons in Section 7.4 for why MSVM is used in this study. Overall, the proposed ML framework contains three main components:

Component 1: It is evident that changing the order of the objective functions of a MOBLP instance in an input file leaves the instance unchanged. Therefore, in order to increase the stability of the predictions of MSVM, we propose an approach to pre-order the objective functions of each MOBLP instance in an input file before feeding it to MSVM (see Section 4).

Component 2: We propose several generic features that can be used to describe each MOBLP instance. A high-level description of the features can be found in Section 5, and their detailed descriptions can be found in Appendix A.

Component 3: We propose a bi-objective heuristic approach (see Section 6) for selecting the best subset of features for each class of MOBLP instances (which are AP and KP in this study). Our numerical results show that, in practice, our approach selects around 15% of the features based on the training set for each class of MOBLP instances. Note that identifying the best subset of features helps overcome the issue of overfitting and improves the prediction accuracy (Charkhgard and Eshragh 2019, Tibshirani 1996).

The proposed ML framework uses the above components for training purposes. A detailed discussion of the accuracy of the proposed framework on a testing set (for each class of MOBLP instances) is given in Section 7.

4. A pre-ordering approach for objective functions
It is obvious that changing the order of the objective functions in the input file corresponding to an instance does not generate a new instance. In other words, the instance is merely represented differently in that case, and hence its nondominated frontier will remain the

same. This suggests that the vector of features extracted for any instance should be independent of the order of the objective functions. To address this issue, we propose to perform a pre-ordering (heuristic) approach before giving an instance to MSVM for training or testing purposes. That is, when users provide an instance, we first change its input file by re-ordering the objective functions before feeding it to MSVM. This helps stabilize the prediction accuracy of the proposed ML framework.

In light of the above, let

    \bar{x} := \left( \frac{1}{\sum_{i=1}^{p} c_{i1} + 1},\ \ldots,\ \frac{1}{\sum_{i=1}^{p} c_{in} + 1} \right).

In the proposed approach, we re-order the objective functions in the input file in nondecreasing order of c_1^T \bar{x}, c_2^T \bar{x}, ..., c_p^T \bar{x}. Intuitively, c_i^T \bar{x} can be viewed as the normalization score of objective function i ∈ {1, ..., p}. In the rest of this paper, the vectors c_i for i = 1, 2, ..., p are assumed to be ordered according to the proposed approach.
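A minimal Python sketch of this pre-ordering rule (ours; it assumes the normalization vector \bar{x} defined above and a p-by-n matrix of objective coefficients):

import numpy as np

def preorder_objectives(C):
    # C is the p-by-n matrix whose rows are the objective vectors c_1, ..., c_p.
    C = np.asarray(C, dtype=float)
    x_bar = 1.0 / (C.sum(axis=0) + 1.0)          # the normalization vector above
    scores = C @ x_bar                            # normalization score per objective
    return C[np.argsort(scores, kind="stable")]   # rows in nondecreasing score order

# Objective 2 gets the smallest score here, so it is placed first.
print(preorder_objectives([[4, 2], [1, 1], [2, 3]]))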

5. Features and labels describing a MOBLP instance
This section provides a high-level explanation of the features that we create to describe a MOBLP instance and of how each instance is labeled. To the best of our knowledge, there are no studies that introduce features to describe multi-objective instances, and hence the proposed features are new.

5.1. Features
The efficiency of our proposed ML approach relies on the features describing a MOBLP instance. In other words, the features should be easy to compute and effective. Based on this observation, we create only static features, i.e., those that are computed once, using just the information provided by the MOBLP instance. Note that we only consider static features because the learning process and the decision on which objective function to select for projection (in the KSA) have to take place before solving the MOBLP instance. Overall, due to the nature of our research, most of our features describe the objective functions of the instances. We understand that the objective functions by themselves are a limited source of information for describing an instance. Therefore, we also consider establishing relationships between the objective functions and the other characteristics of the instance in order to improve the reliability of our features.

In light of the above, a number of features that grows with p is introduced for describing each instance of a MOBLP; since in our computational study p = 3, we have 313 features in total. Some of these features rely on characteristics of the objective functions, such as the magnitude and the number of positive, negative, and zero coefficients. We also consider features that establish a relationship between the objective functions using normalization techniques, e.g., the pre-ordering approach used to order the objective functions (see Section 4). Other features are created based on mathematical and statistical computations that link the objective functions with the technological coefficients and the right-hand-side values. We also define features based on the area of the projected criterion space, i.e., the corresponding (p-1)-dimensional criterion space, that needs to be explored when one of the objectives is selected for conducting the projection. Note that, to compute such an area, several single-objective binary linear programming problems would need to be solved. However, in order to reduce the complexity of the feature extraction, we compute an approximation of the area to explore by optimizing the linear relaxations of those problems. Additionally, we create features that describe the basic characteristics of an instance, e.g., its size, number of variables, and number of constraints. The main idea of the features is to generate as much information as possible in a simple way; we accomplish this by computing all the proposed features in polynomial time for a given instance. The features are normalized using a t-statistic score. Normalization is performed by aggregating subsets of features computed from a similar source. Finally, the values of the normalized feature matrix lie approximately between -1 and 1. Interested readers can find a detailed explanation of the features in Appendix A.
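As an illustration of the grouped t-statistic normalization (ours; the paper's exact feature groups are specified in its Appendix A):

import numpy as np

def t_normalize(F, groups):
    # Normalize each group of columns of the feature matrix F by the mean and
    # standard deviation computed over that group (a t-statistic-style score).
    F = np.asarray(F, dtype=float)
    out = np.empty_like(F)
    for cols in groups:
        block = F[:, cols]
        mu, sigma = block.mean(), block.std()
        out[:, cols] = (block - mu) / (sigma if sigma > 0 else 1.0)
    return out

# Example: columns 0-1 come from one source, column 2 from another.
F = [[1.0, 2.0, 100.0], [3.0, 4.0, 300.0]]
print(t_normalize(F, groups=[[0, 1], [2]]))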

5.2. Labels
Based on our research goal, i.e., simulating the selection of the best objective, we propose a multi-class integer labeling scheme in which y ∈ Ω is the label of an instance and Ω = {1, 2, ..., p} is the domain of the labels. The value of y classifies the instance with a label that indicates the index of the best objective function for projection based on the running time of the KSA (when generating the entire nondominated frontier of the instance). The label of each instance is assigned as follows:

    y \in \arg\min_{j \in \{1,\ldots,p\}} \{\text{RunningTime}_j\},    (2)

where RunningTime_j is the running time for the instance when objective function j is used for projecting the criterion space.
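A minimal Python sketch of rule (2) (ours), given the p recorded running times of the KSA for one instance:

def label_instance(running_times):
    # Equation (2): the label is the 1-based index of the fastest projection choice.
    return min(range(len(running_times)), key=running_times.__getitem__) + 1

print(label_instance([12.7, 9.3, 15.1]))  # objective 2 was fastest -> label 2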

6. Best subset selection of features
It is easy to observe that, by introducing more (linearly independent) features and re-training an ML model (to optimality), its prediction error, i.e., error = 1 - accuracy, on the training set decreases and eventually becomes zero. This is because we are providing a larger degree of freedom to the ML model. However, this is not necessarily the case for the testing set. In other words, by introducing more features, the resulting ML model is often overfitted to the training set and does not perform well on the testing set. So, the underlying idea of the best subset selection of features is to avoid the issue of overfitting. The key point is that in a real-world scenario we do not have access to the testing set, so selecting the best subset of features must be done based on the information obtained from the training set.

In light of the above, studying the trade-off between the number of features and the prediction error of an ML model on the training set is helpful for selecting the best subset of features (Charkhgard and Eshragh 2019). However, computing such a trade-off using exact methods is difficult in practice, since the total number of subsets (of features) is an exponential function of the number of features. Therefore, in this section, we introduce a bi-objective optimization-based heuristic for selecting the best subset of features. The proposed approach has two phases:

Phase I: In the first phase, the algorithm attempts to approximate the trade-off. Specifically, the algorithm computes an approximated nondominated frontier of a bi-objective optimization problem whose conflicting objectives are minimizing the number of features and minimizing the prediction error on the training set.

Phase II: In the second phase, the algorithm selects one of the approximated nondominated points, and its corresponding MSVM model is used for prediction on the testing set.

We first explain Phase I. To compute the approximated nondominated frontier, we run MSVM iteratively on the training set. In each iteration, one approximated nondominated point is generated. The approximated nondominated point obtained in iteration t is denoted by (k_t, e_t), where k_t is the number of features in the corresponding prediction model (obtained by MSVM) and e_t is the prediction error of the corresponding model on the training set. To compute the first nondominated point, the proposed algorithm assumes that all features are available and runs MSVM to obtain the parameters of the prediction model. We denote by W_t the parameter matrix of the prediction model obtained by MSVM in iteration t. Note that W_t is a p × k_t matrix, where p is the number of objectives. Now consider an arbitrary iteration t. The algorithm explores the parameters of the prediction model obtained by MSVM in the previous iteration, i.e., W_{t-1}, and removes the least important feature based on W_{t-1}; hence, because one feature is removed, we have that k_t = k_{t-1} - 1. Specifically, each column of the matrix W_{t-1} is associated with a feature, so the algorithm computes the standard deviation of each column independently. The feature with the minimum standard deviation is selected and removed in iteration t. Note that MSVM creates a model for each objective function, which is why the matrix W_{t-1} has p rows. So, if the standard deviation of a column of the matrix is zero, then we know that the corresponding feature contributes exactly the same amount to all p models, and therefore it can be removed. We thus observe that the standard deviation plays an important role in identifying the least important feature. Overall, after removing the least important feature, MSVM is run again to compute W_t and e_t. The algorithm for computing the approximated nondominated frontier terminates as soon as k_t = 0. A detailed description of the algorithm can be found in Algorithm 1.

In the second phase, we select an approximated nondominated point. Before doing so, it is worth mentioning that MSVM can take a long time to compute W_t in each iteration of Algorithm 1. To avoid this issue, users usually terminate MSVM before it reaches an optimal solution by imposing termination conditions, including a relative optimality gap tolerance, and by adjusting the so-called regularization parameter (see Crammer and Singer (2001) and Tsochantaridis et al. (2004) for details). In this study, we set the tolerance to 0.1 and set the regularization parameter to a value for which we numerically observed that MSVM performs better. Such limitations obviously impact the prediction error obtained on the training set, i.e., e_t. So, some of the points reported by Algorithm 1 may dominate each other.
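The column-elimination rule just described can be sketched as follows (ours; W stands for the p-by-k_t parameter matrix returned by MSVM):

import numpy as np

def least_important_feature(W):
    # One column of W per feature. The column whose entries vary least across
    # the p per-objective models (minimum standard deviation) is removed.
    return int(np.argmin(np.asarray(W, dtype=float).std(axis=0)))

W = [[0.9, 0.50, -0.3],
     [0.1, 0.51,  0.4],
     [-0.5, 0.49, 0.2]]
print(least_important_feature(W))  # -> 1 (near-constant column across the 3 models)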

Algorithm 1: Phase I: computing an approximated nondominated frontier
input: training set, the set of features
1   Queue.create(Q)
2   t ← 1
3   k_t ← the initial number of features
4   while k_t ≠ 0 do
5       if t ≠ 1 then
6           find the least important feature from W_{t-1} and remove it from the set of features
7           k_t ← k_{t-1} - 1
8       compute W_t by applying MSVM on the training set using the current set of features
9       compute e_t by applying the obtained prediction model associated with W_t on the training set
10      Q.add((k_t, e_t))
11      t ← t + 1
12  return Q

Therefore, in Phase II, we first remove the dominated points. In the remainder of this section, we assume that there are no dominated points in the approximated nondominated frontier. Next, the proposed approach selects the approximated nondominated point that has the minimum Euclidean distance from the (imaginary) ideal point, i.e., an imaginary point in the criterion space that has both the minimum number of features and the minimum prediction error. Such a technique is a special case of optimization over the frontier (Abbas and Chaabane 2006, Jorge 2009, Boland et al. 2017a, Sierra-Altamiranda and Charkhgard 2019). We note that in bi-objective optimization the ideal point can be computed easily from the endpoints of the (approximated) nondominated frontier. Let (k^1, e^1) and (k^2, e^2) be the two endpoints, where k^1 < k^2 and e^1 > e^2. In this case, the ideal point is (k^1, e^2). Note too that, because the first and second objectives have different scales, in this study we first normalize all approximated nondominated points before selecting a point. Let (k, e) be an arbitrary approximated nondominated point. After normalization, this point becomes

    \left( \frac{k - k^1}{k^2 - k^1},\ \frac{e - e^2}{e^1 - e^2} \right).

Observe that the proposed normalization technique ensures that the value of each component of a point lies between 0 and 1. As a consequence, the normalized ideal point is always (0, 0). We discuss the effectiveness of our proposed best subset selection approach in the next section.
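A minimal Python sketch of Phase II (ours), combining the normalization above with the minimum-Euclidean-distance rule:

import math

def select_point(frontier):
    # frontier: approximated nondominated points (k, e), with distinct extreme
    # values in each coordinate. Normalize to [0, 1]^2 and return the point
    # closest to the normalized ideal point (0, 0).
    k1, k2 = min(k for k, _ in frontier), max(k for k, _ in frontier)
    e2, e1 = min(e for _, e in frontier), max(e for _, e in frontier)
    def dist(point):
        k, e = point
        return math.hypot((k - k1) / (k2 - k1), (e - e2) / (e1 - e2))
    return min(frontier, key=dist)

print(select_point([(5, 0.45), (20, 0.38), (40, 0.30), (300, 0.29)]))  # -> (40, 0.30)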

7. A computational study
In this section, we conduct an extensive computational study to evaluate the performance of the KSA when the proposed ML technique is used for learning to project. We generate 1000 tri-objective AP instances and 1000 tri-objective KP instances based on the procedures described by Kirlik and Sayın (2014). Since there are three objectives, we compute the entire nondominated frontier of each instance three times using the KSA; each time, a different objective function is selected for projection. We employ CPLEX 12.7 as the single-objective binary linear programming solver. All computational experiments are carried out on a Dell PowerEdge R630 with two 12-core Intel Xeon processors (30MB cache), 128GB RAM, and the RedHat Enterprise Linux 6.8 operating system. We use only a single thread for all experiments.

Our experiments are divided into three parts. In the first part, we run our approach over the entire set of instances, using 80% of the data as the training set and 20% of the data as the testing set. The second part evaluates the prediction models obtained in the first part on a reduced testing set; in other words, the training set is the same as in the first part, but the testing set is smaller. Specifically, if, for an instance in the testing set of the first part, it does not really matter (in terms of solution time) which objective function is selected for projection, then we remove that instance. One can think of such instances as tie cases. In the third part of our experiments, we extend the concept of the reduced testing set to the training set; that is, we remove the tie cases not only from the testing set but also from the training set. In general, the goal of reducing the testing set and/or the training set is to improve the overall accuracy of the prediction model. At the end of the computational study, we replace MSVM by Random Forest in the proposed ML framework to show the performance of another learning technique. We note that in this computational study we do not report any time for our proposed ML framework, because the aggregated time of generating the features, the learning process, and the predictions for all 1000 instances of a class of optimization problem, i.e., AP or KP, is around 50 seconds. This implies that, on average, almost 0.05 seconds are spent on each instance.

7.1. Complete training and testing sets
The first part of our experiments is done on the entire training and testing sets. For each class of optimization problems, i.e., KP and AP, the proposed subset selection approach is run on its corresponding training set. The proposed approach obtains the best subset of

features and its corresponding prediction model for each class of instances. Before providing detailed explanations about the accuracy of such a prediction model on the testing set, it is necessary to show that the proposed bi-objective optimization approach for selecting the best subset of features is indeed effective. Figure 1 shows the approximated nondominated frontier (obtained during the course of the proposed best subset selection approach) for each class of optimization problems. In this figure, small (red) plus symbols are the outputs of Algorithm 1, the (black) square on the vertical axis shows the ideal point, and the (yellow) square on the approximated nondominated frontier shows the point selected by the proposed method. First note that we introduced 313 generic features in this paper, but the tail of the approximated nondominated frontier in Figure 1 clearly shows that not all 313 features are used. This is because some of the 313 features are not applicable to all classes and are removed automatically before running the proposed best subset selection approach. We observe from Figure 1 that, overall, introducing more features decreases the prediction error on the training set. Evidently, when all features are included, the accuracy, i.e., 1 - error, on the training set is around 59.5% and 70% for AP and KP instances, respectively. Of course, this is not surprising, because the learning is done on the training set and introducing more features gives a larger degree of freedom to the learning model. However, this is not necessarily the case for the testing set. Basically, by introducing more features, we may raise the issue of overfitting, i.e., the prediction error is small for the training set but large for the testing set.

Figure 1: An illustration of the performance of the proposed approach for selecting the best subset of features on the complete training set; (a) training set of AP, (b) training set of KP (error versus number of features).

To show this, for each of the points (other than the ideal point) in Figure 1, we have plotted its corresponding point for the testing set in Figure 2. Specifically, for each point in Figure 1, we run its corresponding model on the testing set to compute the error. From Figure 2 we observe that the error fluctuates highly. In fact, it is evident that for AP instances the prediction model with around 40 features is the best prediction model; similarly, for KP instances, the prediction model with around 25 features is the best. Note that in practice we do not have access to the testing set, so we should select the best subset of features based only on the training set. Therefore, the goal of any best subset selection technique is to identify the prediction model that is (almost) optimal for the testing set based on the existing data set, i.e., the training set. From Figure 2, we observe that our proposed best subset selection heuristic has such a desirable characteristic in practice: the selected model, i.e., the (yellow) square, is nearly optimal. In fact, the proposed approach has selected prediction models with accuracies of around 50% and 55% for AP and KP instances, respectively. This implies that the absolute difference between the accuracy of the model selected by the proposed subset selection approach and the accuracy of the optimal model is almost 3% and 5% for AP and KP instances, respectively. We note that, for both classes of instances, fewer than 50 features appear in the model selected by the proposed approach. Overall, these results are promising, given that the (expected) probability of randomly picking the correct objective function to project is 1/p, i.e., around 33.3% for tri-objective instances.

Table 2: Accuracy and average time decrease on the testing set when using the proposed ML technique (for the case of the complete training and testing sets)

                                        Time Decrease
Type  Vars  Accuracy   ML vs. Rand   Best vs. Rand   (ML vs. Rand)/(Best vs. Rand)
AP    -     -          1.29%         2.33%           55.43%
      -     -          0.67%         1.65%           40.57%
      -     -          0.18%         1.48%           12.20%
      -     -          0.44%         1.69%           25.81%
      -     -          1.07%         1.65%           64.99%
      Avg   49.50%     0.68%         1.74%           38.90%
KP    -     -          9.12%         11.03%          82.68%
      -     -          2.01%         10.14%          19.79%
      -     -          4.59%         11.42%          40.22%
      -     -          8.05%         13.27%          60.68%
      -     -          5.09%         10.34%          49.24%
      Avg   55.00%     5.67%         11.17%          50.77%

Figure 2: An illustration of the performance of the proposed approach for the best subset selection of features on the complete testing set; (a) testing set of AP, (b) testing set of KP (error versus number of features).

We now discuss the performance of the selected model in detail for each class of optimization problems. Table 2 summarizes our findings. In this table, the column labeled Accuracy shows the average percentage of prediction accuracy of the selected model for different subclasses of instances; note that, as mentioned in the Introduction, each subclass has 200 instances. The column labeled ML vs. Rand shows the average percentage decrease in solution time when the ML technique is used, compared to randomly picking an objective function for projection. The column labeled Best vs. Rand shows the average percentage decrease in solution time when the best objective function is selected for projection, compared to randomly picking an objective function. Finally, the column labeled (ML vs. Rand)/(Best vs. Rand) shows the ratio of ML vs. Rand to Best vs. Rand, as a percentage.

Overall, we observe that our ML method improves the computational time on all testing sets. For AP instances, the improvement is around 0.68% on average, which is small. However, we should note that even in the ideal case we could obtain only around a 1.74% improvement in time on average for such instances, so the improvement obtained with the proposed ML technique is 38.9% of the ideal case; for the largest subclass of AP instances, this number is around 64.99%. For the KP instances, the results are even more promising, since the improvement in solution time is around 5.67% on average. In the ideal case, we could obtain an average improvement of 11.17% for such instances, so the improvement obtained with the proposed ML technique is 50.77% of the ideal case.
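For concreteness, the three time-decrease columns can be computed from per-instance running times roughly as follows (a sketch under our own averaging convention, which may differ in detail from the paper's; "Rand" is taken as the expected time of a uniformly random pick):

import numpy as np

def time_decrease_columns(times, predicted):
    # times[i][j]: KSA running time of instance i when objective j is projected;
    # predicted[i]: 0-based objective index chosen by the ML model for instance i.
    times = np.asarray(times, dtype=float)
    rand = times.mean(axis=1)                        # expected time of a random pick
    ml = times[np.arange(len(times)), predicted]     # time of the ML pick
    best = times.min(axis=1)                         # time of the best pick
    ml_vs_rand = 100 * (rand - ml).mean() / rand.mean()
    best_vs_rand = 100 * (rand - best).mean() / rand.mean()
    return ml_vs_rand, best_vs_rand, 100 * ml_vs_rand / best_vs_rand

print(time_decrease_columns([[10, 12, 14], [9, 8, 10]], predicted=[0, 1]))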

7.2. Complete training set and reduced testing set
In this section, we test the performance of the model obtained in Section 7.1 on a reduced testing set. Specifically, we remove the instances that can be considered tie cases, i.e., those in which the solution time does not change significantly (relative to other instances) when different objective functions are selected for projection. To reduce the testing set, we apply the following steps (a sketch follows the list):

Step 1: We compute the time range of each instance, i.e., the difference between the best and the worst solution times obtained for the instance when different objective functions are considered for projection.

Step 2: For each subclass of instances, i.e., those with the same number of decision variables, we compute the standard deviation and the minimum of the time ranges in that subclass.

Step 3: We eliminate an instance, i.e., consider it a tie case, if its time range is not greater than the sum of the minimum and the standard deviation of the time ranges in its associated subclass.
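A minimal Python sketch of Steps 1-3 for one subclass (ours; times is an n-by-p array of per-instance running times):

import numpy as np

def drop_tie_cases(times):
    # Step 1: time range of each instance (worst minus best solution time).
    times = np.asarray(times, dtype=float)
    ranges = times.max(axis=1) - times.min(axis=1)
    # Steps 2-3: eliminate instances whose range is at most min(range) + std(range).
    keep = ranges > ranges.min() + ranges.std()
    return np.flatnonzero(keep)   # indices of the instances that remain

print(drop_tie_cases([[10, 10.5, 10.2], [9, 16, 12], [8, 8.4, 8.1]]))  # -> [1]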

As a result of the procedure explained above, the testing set was reduced by 35.5% for AP instances and by 17.5% for KP instances. Table 3 summarizes our findings for the reduced testing set.

Table 3: Accuracy and average time decrease on the testing set when using the proposed ML technique (for the case of the complete training set and the reduced testing set)

                                        Time Decrease
Type  Vars  Accuracy   ML vs. Rand   Best vs. Rand   (ML vs. Rand)/(Best vs. Rand)
AP    -     -          1.48%         2.54%           58.10%
      -     -          0.92%         2.23%           41.32%
      -     -          0.12%         1.96%           6.02%
      -     -          0.54%         1.97%           27.17%
      -     -          1.64%         2.15%           76.09%
      Avg   56.59%     0.90%         2.16%           41.73%
KP    -     -          10.11%        11.96%          84.52%
      -     -          2.03%         11.48%          17.67%
      -     -          5.30%         13.48%          39.31%
      -     -          10.21%        15.57%          65.59%
      -     -          6.32%         11.33%          55.77%
      Avg   59.39%     6.66%         12.63%          52.74%

Observe that the accuracy of the prediction models has increased significantly on the reduced testing set. Specifically, it has reached around 56.59% and 59.39% on average for AP and KP instances, respectively. Since the eliminated instances are considered tie cases, we can assume that they are also success cases for the prediction model. By counting such success cases, the prediction accuracy increases to (0.645 × 56.59% + 35.5%) ≈ 72% and (0.825 × 59.39% + 17.5%) ≈ 66.5% for AP and KP instances, respectively. In terms of computational time, we also observe (from Table 3) an improvement of around 0.90% and 6.66% on average for AP and KP instances, respectively. This improvement is about 41.73% and 52.74% of the ideal scenario (on average) for AP and KP instances, respectively.

7.3. Reduced training and testing sets
Given the promising results obtained in Section 7.2, it is natural to ask whether we can see even more improvement if we reduce not only the testing set but also the training set. Therefore, in this section, we eliminate the tie cases, using the same procedure discussed in Section 7.2, from both the training and testing sets. By doing so, the size of the training+testing set was reduced by 37% and 18% for AP and KP instances, respectively. Evidently, due to the change in the training set, we need to apply our proposed approach for the best subset selection of features again. So, similarly to Section 7.1, Figure 3 shows the approximated nondominated frontier for each class of optimization problems based on the reduced training set. By comparing the ideal points in Figures 1 and 3, an immediate improvement in the (ideal) accuracy can be observed: the absolute difference between the errors of the ideal points (in these figures) is around 12% and 7% for AP and KP instances, respectively. Similar improvements can be observed by comparing the selected approximated nondominated points in Figures 1 and 3.

As in Section 7.1, for each of the points (other than the ideal point) in Figure 3, we have plotted its corresponding point for the testing set in Figure 4. We again observe that the selected model, i.e., the (yellow) square, is nearly optimal for both classes of optimization problems. In fact, the proposed approach has selected prediction models with accuracies of around 52% and 62% for AP and KP instances, respectively. This implies that the absolute difference between the accuracy of the model selected by the proposed approach and the accuracy of the optimal model is almost 5% and 3% for AP and KP instances, respectively.

Figure 3: An illustration of the performance of the proposed approach for selecting the best subset of features on the reduced training set; (a) training set of AP, (b) training set of KP (error versus number of features).

Figure 4: An illustration of the performance of the proposed approach for the best subset selection of features on the reduced testing set; (a) testing set of AP, (b) testing set of KP (error versus number of features).

A summary of the results of this last experiment can be found in Table 4. Observe that the average prediction accuracy on the testing set for the experimental setting of this section, i.e., reduced training and testing sets, has improved significantly for KP instances compared to the results given in Sections 7.1 and 7.2. For the instances of the AP problem, however, the average prediction accuracy in this section is only better than the one presented in Section 7.1. Overall, the average prediction accuracy is 52.38% and 61.59% for AP and KP instances when using the reduced training and testing sets. By considering the tie cases as success events, the projected accuracy increases up to 70% and 68.5% for AP and KP instances, respectively. The importance of such an increase in accuracy is highlighted by the time decrease percentages given in Table 4, which are over 1% for AP instances and near 8% for KP instances. In fact, for the largest subclass of AP instances, the average time improvement of 1.6% is equivalent to almost 110 seconds on average. Similarly, for

the largest subclass of KP instances, the time improvement of 4.7% is around 490 seconds on average.

Table 4: Accuracy and average time decrease on the testing set when using the proposed ML technique (for the case of the reduced training and testing sets)

                                        Time Decrease
Type  Vars  Accuracy   ML vs. Rand   Best vs. Rand   (ML vs. Rand)/(Best vs. Rand)
AP    -     -          0.82%         2.14%           38.41%
      -     -          1.17%         2.39%           48.76%
      -     -          0.67%         1.93%           34.46%
      -     -          1.10%         2.53%           43.35%
      -     -          1.60%         2.87%           55.61%
      Avg   52.38%     1.03%         2.35%           43.99%
KP    -     -          6.75%         12.16%          55.51%
      -     -          6.79%         12.03%          56.43%
      -     -          12.14%        16.91%          71.79%
      -     -          9.22%         12.26%          75.17%
      -     -          4.70%         10.56%          44.50%
      Avg   61.59%     7.61%         12.64%          60.18%

7.4. Replacing MSVM by Random Forest
One main reason that we used MSVM in this study is that (as shown in the previous sections) it performs well in practice for the purpose of this study. Another critical reason, however, is the fact that MSVM creates a matrix of parameters, denoted by W_t, in each iteration. This matrix has p rows, where p is the number of objective functions; in other words, for each objective function, MSVM creates a specific model for predicting whether that objective should be used for projection in the KSA. This characteristic is desirable because it allowed us to develop a custom-built bi-objective heuristic for selecting the best subset of features. Specifically, as discussed in Section 6, this characteristic is essential for identifying the least important feature in each iteration of Algorithm 1. Applying such a procedure to other ML techniques is not trivial.

In light of the above, in this section we replace MSVM by Random Forest within the proposed machine learning framework. We simply use the best subset of features selected by MSVM and feed it to Random Forest for training and prediction. To implement Random Forest, we use the scikit-learn library in Python (Pedregosa et al. 2011). Table 5 shows a comparison between the prediction accuracy of MSVM and Random Forest under the three experimental settings described in Sections 7.1-7.3: Setting 1 corresponds to the complete training and testing sets; Setting 2 corresponds to the complete training set and the reduced testing set; finally, Setting 3 refers to the reduced training and testing sets.
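A minimal sketch of this substitution (ours; the hyperparameters shown are assumptions, as the paper does not report the Random Forest settings):

from sklearn.ensemble import RandomForestClassifier

def random_forest_predict(X_train, y_train, X_test):
    # X_*: feature matrices restricted to the subset of features selected by MSVM;
    # y_train: labels from equation (2). n_estimators/random_state are our guesses.
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X_train, y_train)
    return clf.predict(X_test)  # predicted objective index (1..p) per instance

print(random_forest_predict([[0.1, 0.9], [0.8, 0.2], [0.2, 0.8]], [1, 2, 1], [[0.15, 0.85]]))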

In this table, the columns labeled Increase show the average percentage increase in the prediction accuracy of Random Forest compared to MSVM. Observe from these columns that the reported numbers are mostly negative. This implies that, in general, MSVM outperforms Random Forest in terms of prediction accuracy. For example, in Setting 3, we observe that the accuracy of Random Forest is around 18.63% and 22.88% worse than the accuracy of MSVM on average for AP and KP instances, respectively. This experiment clearly shows the advantage of using MSVM in the proposed ML framework.

Table 5: A performance comparison between MSVM and Random Forest on the testing set

                  Setting 1               Setting 2               Setting 3
Type  Vars   Accuracy   Increase    Accuracy   Increase    Accuracy   Increase
AP    -      -          9.99%       64.52%     11.12%      36.36%     -
      -      -          -5.89%      50.00%     -           26.67%     -
      -      -          10.54%      37.04%     -9.09%      52.63%     -4.60%
      -      -          20.01%      65.79%     13.65%      70.59%     55.31%
      -      -          -           73.91%     6.24%       50.00%     -5.00%
      Avg    52.50%     6.06%       59.69%     5.48%       42.62%     -18.63%
KP    -      -          -           47.37%     -           41.18%     -
      -      -          9.53%       56.76%     23.52%      50.00%     -
      -      -          -           54.55%     -5.27%      61.90%     3.17%
      -      -          -           55.56%     -           47.06%     -
      -      -          -           43.33%     -           45.95%     -
      Avg    46.00%     -16.36%     51.52%     -13.25%     47.50%     -22.88%

8. Conclusions and future research
We presented a multi-class support vector machine based approach to enhance exact multi-objective binary linear programming algorithms. Our approach simulates the selection of the best objective function to be used for projection in the KSA in order to improve its computational time. We introduced a pre-ordering approach for the objective functions in the input file for the purpose of standardizing the vector of features. Moreover, we introduced a bi-objective optimization approach for selecting the best subset of features in order to overcome overfitting. By conducting an extensive computational study, we showed that reaching a prediction accuracy of around 70% is possible for instances of tri-objective AP and KP. It was shown that such a prediction accuracy results in a decrease of over 12% in computational time for some instances.

Overall, we hope that the simplicity of our proposed ML technique and its promising results encourage more researchers to use ML techniques for improving multi-objective

optimization solvers. Note that, in this paper, we studied the problem of learning to project in a static setting, i.e., before solving an instance we predict the best objective function and use it during the course of the search. One future research direction of this study would be to find a way to employ the proposed learning-to-project technique in a dynamic setting, i.e., at each iteration of the search process we predict the best projected objective and use it. Evidently, this may result in the development of new algorithms that have not yet been studied in the literature of multi-objective optimization.

References

Abbas M, Chaabane D (2006) Optimizing a linear function over an integer efficient set. European Journal of Operational Research 174(2).

Alvarez AM, Louveaux Q, Wehenkel L (2017) A machine learning-based approximation of strong branching. INFORMS Journal on Computing 29(1).

Bertsimas D, King A, Mazumder R (2016) Best subset selection via a modern optimization lens. The Annals of Statistics 44(2).

Boland N, Charkhgard H, Savelsbergh M (2015a) A criterion space search algorithm for biobjective integer programming: The balanced box method. INFORMS Journal on Computing 27(4).

Boland N, Charkhgard H, Savelsbergh M (2015b) A criterion space search algorithm for biobjective mixed integer programming: The triangle splitting method. INFORMS Journal on Computing 27(4).

Boland N, Charkhgard H, Savelsbergh M (2016) The L-shape search method for triobjective integer programming. Mathematical Programming Computation 8(2).

Boland N, Charkhgard H, Savelsbergh M (2017a) A new method for optimizing a linear function over the efficient set of a multiobjective integer program. European Journal of Operational Research 260(3).

Boland N, Charkhgard H, Savelsbergh M (2017b) The quadrant shrinking method: A simple and efficient algorithm for solving tri-objective integer programs. European Journal of Operational Research 260(3).

Bottou L (2010) Large-scale machine learning with stochastic gradient descent. Proceedings of COMPSTAT 2010 (Springer).

Breiman L (2001) Random forests. Machine Learning 45(1):5-32.

Charkhgard H, Eshragh A (2019) A new approach to select the best subset of predictors in linear regression modeling: bi-objective mixed integer linear programming. ANZIAM Journal. Available online.

Crammer K, Singer Y (2001) On the algorithmic implementation of multiclass kernel-based vector machines. Journal of Machine Learning Research 2(Dec).


More information

Parallel matrix-vector multiplication

Parallel matrix-vector multiplication Appendx A Parallel matrx-vector multplcaton The reduced transton matrx of the three-dmensonal cage model for gel electrophoress, descrbed n secton 3.2, becomes excessvely large for polymer lengths more

More information

Problem Set 3 Solutions

Problem Set 3 Solutions Introducton to Algorthms October 4, 2002 Massachusetts Insttute of Technology 6046J/18410J Professors Erk Demane and Shaf Goldwasser Handout 14 Problem Set 3 Solutons (Exercses were not to be turned n,

More information

6.854 Advanced Algorithms Petar Maymounkov Problem Set 11 (November 23, 2005) With: Benjamin Rossman, Oren Weimann, and Pouya Kheradpour

6.854 Advanced Algorithms Petar Maymounkov Problem Set 11 (November 23, 2005) With: Benjamin Rossman, Oren Weimann, and Pouya Kheradpour 6.854 Advanced Algorthms Petar Maymounkov Problem Set 11 (November 23, 2005) Wth: Benjamn Rossman, Oren Wemann, and Pouya Kheradpour Problem 1. We reduce vertex cover to MAX-SAT wth weghts, such that the

More information

X- Chart Using ANOM Approach

X- Chart Using ANOM Approach ISSN 1684-8403 Journal of Statstcs Volume 17, 010, pp. 3-3 Abstract X- Chart Usng ANOM Approach Gullapall Chakravarth 1 and Chaluvad Venkateswara Rao Control lmts for ndvdual measurements (X) chart are

More information

NUMERICAL SOLVING OPTIMAL CONTROL PROBLEMS BY THE METHOD OF VARIATIONS

NUMERICAL SOLVING OPTIMAL CONTROL PROBLEMS BY THE METHOD OF VARIATIONS ARPN Journal of Engneerng and Appled Scences 006-017 Asan Research Publshng Network (ARPN). All rghts reserved. NUMERICAL SOLVING OPTIMAL CONTROL PROBLEMS BY THE METHOD OF VARIATIONS Igor Grgoryev, Svetlana

More information

Edge Detection in Noisy Images Using the Support Vector Machines

Edge Detection in Noisy Images Using the Support Vector Machines Edge Detecton n Nosy Images Usng the Support Vector Machnes Hlaro Gómez-Moreno, Saturnno Maldonado-Bascón, Francsco López-Ferreras Sgnal Theory and Communcatons Department. Unversty of Alcalá Crta. Madrd-Barcelona

More information

Subspace clustering. Clustering. Fundamental to all clustering techniques is the choice of distance measure between data points;

Subspace clustering. Clustering. Fundamental to all clustering techniques is the choice of distance measure between data points; Subspace clusterng Clusterng Fundamental to all clusterng technques s the choce of dstance measure between data ponts; D q ( ) ( ) 2 x x = x x, j k = 1 k jk Squared Eucldean dstance Assumpton: All features

More information

Classification / Regression Support Vector Machines

Classification / Regression Support Vector Machines Classfcaton / Regresson Support Vector Machnes Jeff Howbert Introducton to Machne Learnng Wnter 04 Topcs SVM classfers for lnearly separable classes SVM classfers for non-lnearly separable classes SVM

More information

Sum of Linear and Fractional Multiobjective Programming Problem under Fuzzy Rules Constraints

Sum of Linear and Fractional Multiobjective Programming Problem under Fuzzy Rules Constraints Australan Journal of Basc and Appled Scences, 2(4): 1204-1208, 2008 ISSN 1991-8178 Sum of Lnear and Fractonal Multobjectve Programmng Problem under Fuzzy Rules Constrants 1 2 Sanjay Jan and Kalash Lachhwan

More information

A Fast Content-Based Multimedia Retrieval Technique Using Compressed Data

A Fast Content-Based Multimedia Retrieval Technique Using Compressed Data A Fast Content-Based Multmeda Retreval Technque Usng Compressed Data Borko Furht and Pornvt Saksobhavvat NSF Multmeda Laboratory Florda Atlantc Unversty, Boca Raton, Florda 3343 ABSTRACT In ths paper,

More information

Collaboratively Regularized Nearest Points for Set Based Recognition

Collaboratively Regularized Nearest Points for Set Based Recognition Academc Center for Computng and Meda Studes, Kyoto Unversty Collaboratvely Regularzed Nearest Ponts for Set Based Recognton Yang Wu, Mchhko Mnoh, Masayuk Mukunok Kyoto Unversty 9/1/013 BMVC 013 @ Brstol,

More information

Quality Improvement Algorithm for Tetrahedral Mesh Based on Optimal Delaunay Triangulation

Quality Improvement Algorithm for Tetrahedral Mesh Based on Optimal Delaunay Triangulation Intellgent Informaton Management, 013, 5, 191-195 Publshed Onlne November 013 (http://www.scrp.org/journal/m) http://dx.do.org/10.36/m.013.5601 Qualty Improvement Algorthm for Tetrahedral Mesh Based on

More information

Machine Learning: Algorithms and Applications

Machine Learning: Algorithms and Applications 14/05/1 Machne Learnng: Algorthms and Applcatons Florano Zn Free Unversty of Bozen-Bolzano Faculty of Computer Scence Academc Year 011-01 Lecture 10: 14 May 01 Unsupervsed Learnng cont Sldes courtesy of

More information

CHAPTER 3 SEQUENTIAL MINIMAL OPTIMIZATION TRAINED SUPPORT VECTOR CLASSIFIER FOR CANCER PREDICTION

CHAPTER 3 SEQUENTIAL MINIMAL OPTIMIZATION TRAINED SUPPORT VECTOR CLASSIFIER FOR CANCER PREDICTION 48 CHAPTER 3 SEQUENTIAL MINIMAL OPTIMIZATION TRAINED SUPPORT VECTOR CLASSIFIER FOR CANCER PREDICTION 3.1 INTRODUCTION The raw mcroarray data s bascally an mage wth dfferent colors ndcatng hybrdzaton (Xue

More information

For instance, ; the five basic number-sets are increasingly more n A B & B A A = B (1)

For instance, ; the five basic number-sets are increasingly more n A B & B A A = B (1) Secton 1.2 Subsets and the Boolean operatons on sets If every element of the set A s an element of the set B, we say that A s a subset of B, or that A s contaned n B, or that B contans A, and we wrte A

More information

SLAM Summer School 2006 Practical 2: SLAM using Monocular Vision

SLAM Summer School 2006 Practical 2: SLAM using Monocular Vision SLAM Summer School 2006 Practcal 2: SLAM usng Monocular Vson Javer Cvera, Unversty of Zaragoza Andrew J. Davson, Imperal College London J.M.M Montel, Unversty of Zaragoza. josemar@unzar.es, jcvera@unzar.es,

More information

Announcements. Supervised Learning

Announcements. Supervised Learning Announcements See Chapter 5 of Duda, Hart, and Stork. Tutoral by Burge lnked to on web page. Supervsed Learnng Classfcaton wth labeled eamples. Images vectors n hgh-d space. Supervsed Learnng Labeled eamples

More information

SENSITIVITY ANALYSIS IN LINEAR PROGRAMMING USING A CALCULATOR

SENSITIVITY ANALYSIS IN LINEAR PROGRAMMING USING A CALCULATOR SENSITIVITY ANALYSIS IN LINEAR PROGRAMMING USING A CALCULATOR Judth Aronow Rchard Jarvnen Independent Consultant Dept of Math/Stat 559 Frost Wnona State Unversty Beaumont, TX 7776 Wnona, MN 55987 aronowju@hal.lamar.edu

More information

High-Boost Mesh Filtering for 3-D Shape Enhancement

High-Boost Mesh Filtering for 3-D Shape Enhancement Hgh-Boost Mesh Flterng for 3-D Shape Enhancement Hrokazu Yagou Λ Alexander Belyaev y Damng We z Λ y z ; ; Shape Modelng Laboratory, Unversty of Azu, Azu-Wakamatsu 965-8580 Japan y Computer Graphcs Group,

More information

Course Introduction. Algorithm 8/31/2017. COSC 320 Advanced Data Structures and Algorithms. COSC 320 Advanced Data Structures and Algorithms

Course Introduction. Algorithm 8/31/2017. COSC 320 Advanced Data Structures and Algorithms. COSC 320 Advanced Data Structures and Algorithms Course Introducton Course Topcs Exams, abs, Proects A quc loo at a few algorthms 1 Advanced Data Structures and Algorthms Descrpton: We are gong to dscuss algorthm complexty analyss, algorthm desgn technques

More information

Assignment # 2. Farrukh Jabeen Algorithms 510 Assignment #2 Due Date: June 15, 2009.

Assignment # 2. Farrukh Jabeen Algorithms 510 Assignment #2 Due Date: June 15, 2009. Farrukh Jabeen Algorthms 51 Assgnment #2 Due Date: June 15, 29. Assgnment # 2 Chapter 3 Dscrete Fourer Transforms Implement the FFT for the DFT. Descrbed n sectons 3.1 and 3.2. Delverables: 1. Concse descrpton

More information

TPL-Aware Displacement-driven Detailed Placement Refinement with Coloring Constraints

TPL-Aware Displacement-driven Detailed Placement Refinement with Coloring Constraints TPL-ware Dsplacement-drven Detaled Placement Refnement wth Colorng Constrants Tao Ln Iowa State Unversty tln@astate.edu Chrs Chu Iowa State Unversty cnchu@astate.edu BSTRCT To mnmze the effect of process

More information

Machine Learning. Support Vector Machines. (contains material adapted from talks by Constantin F. Aliferis & Ioannis Tsamardinos, and Martin Law)

Machine Learning. Support Vector Machines. (contains material adapted from talks by Constantin F. Aliferis & Ioannis Tsamardinos, and Martin Law) Machne Learnng Support Vector Machnes (contans materal adapted from talks by Constantn F. Alfers & Ioanns Tsamardnos, and Martn Law) Bryan Pardo, Machne Learnng: EECS 349 Fall 2014 Support Vector Machnes

More information

Multicriteria Decision Making

Multicriteria Decision Making Multcrtera Decson Makng Andrés Ramos (Andres.Ramos@comllas.edu) Pedro Sánchez (Pedro.Sanchez@comllas.edu) Sonja Wogrn (Sonja.Wogrn@comllas.edu) Contents 1. Basc concepts 2. Contnuous methods 3. Dscrete

More information

Intra-Parametric Analysis of a Fuzzy MOLP

Intra-Parametric Analysis of a Fuzzy MOLP Intra-Parametrc Analyss of a Fuzzy MOLP a MIAO-LING WANG a Department of Industral Engneerng and Management a Mnghsn Insttute of Technology and Hsnchu Tawan, ROC b HSIAO-FAN WANG b Insttute of Industral

More information

Fast Computation of Shortest Path for Visiting Segments in the Plane

Fast Computation of Shortest Path for Visiting Segments in the Plane Send Orders for Reprnts to reprnts@benthamscence.ae 4 The Open Cybernetcs & Systemcs Journal, 04, 8, 4-9 Open Access Fast Computaton of Shortest Path for Vstng Segments n the Plane Ljuan Wang,, Bo Jang

More information

Kent State University CS 4/ Design and Analysis of Algorithms. Dept. of Math & Computer Science LECT-16. Dynamic Programming

Kent State University CS 4/ Design and Analysis of Algorithms. Dept. of Math & Computer Science LECT-16. Dynamic Programming CS 4/560 Desgn and Analyss of Algorthms Kent State Unversty Dept. of Math & Computer Scence LECT-6 Dynamc Programmng 2 Dynamc Programmng Dynamc Programmng, lke the dvde-and-conquer method, solves problems

More information

Classifying Acoustic Transient Signals Using Artificial Intelligence

Classifying Acoustic Transient Signals Using Artificial Intelligence Classfyng Acoustc Transent Sgnals Usng Artfcal Intellgence Steve Sutton, Unversty of North Carolna At Wlmngton (suttons@charter.net) Greg Huff, Unversty of North Carolna At Wlmngton (jgh7476@uncwl.edu)

More information

A Unified Framework for Semantics and Feature Based Relevance Feedback in Image Retrieval Systems

A Unified Framework for Semantics and Feature Based Relevance Feedback in Image Retrieval Systems A Unfed Framework for Semantcs and Feature Based Relevance Feedback n Image Retreval Systems Ye Lu *, Chunhu Hu 2, Xngquan Zhu 3*, HongJang Zhang 2, Qang Yang * School of Computng Scence Smon Fraser Unversty

More information

Assembler. Building a Modern Computer From First Principles.

Assembler. Building a Modern Computer From First Principles. Assembler Buldng a Modern Computer From Frst Prncples www.nand2tetrs.org Elements of Computng Systems, Nsan & Schocken, MIT Press, www.nand2tetrs.org, Chapter 6: Assembler slde Where we are at: Human Thought

More information

CS 534: Computer Vision Model Fitting

CS 534: Computer Vision Model Fitting CS 534: Computer Vson Model Fttng Sprng 004 Ahmed Elgammal Dept of Computer Scence CS 534 Model Fttng - 1 Outlnes Model fttng s mportant Least-squares fttng Maxmum lkelhood estmaton MAP estmaton Robust

More information

User Authentication Based On Behavioral Mouse Dynamics Biometrics

User Authentication Based On Behavioral Mouse Dynamics Biometrics User Authentcaton Based On Behavoral Mouse Dynamcs Bometrcs Chee-Hyung Yoon Danel Donghyun Km Department of Computer Scence Department of Computer Scence Stanford Unversty Stanford Unversty Stanford, CA

More information

y and the total sum of

y and the total sum of Lnear regresson Testng for non-lnearty In analytcal chemstry, lnear regresson s commonly used n the constructon of calbraton functons requred for analytcal technques such as gas chromatography, atomc absorpton

More information

Module Management Tool in Software Development Organizations

Module Management Tool in Software Development Organizations Journal of Computer Scence (5): 8-, 7 ISSN 59-66 7 Scence Publcatons Management Tool n Software Development Organzatons Ahmad A. Al-Rababah and Mohammad A. Al-Rababah Faculty of IT, Al-Ahlyyah Amman Unversty,

More information

CS246: Mining Massive Datasets Jure Leskovec, Stanford University

CS246: Mining Massive Datasets Jure Leskovec, Stanford University CS46: Mnng Massve Datasets Jure Leskovec, Stanford Unversty http://cs46.stanford.edu /19/013 Jure Leskovec, Stanford CS46: Mnng Massve Datasets, http://cs46.stanford.edu Perceptron: y = sgn( x Ho to fnd

More information

Programming in Fortran 90 : 2017/2018

Programming in Fortran 90 : 2017/2018 Programmng n Fortran 90 : 2017/2018 Programmng n Fortran 90 : 2017/2018 Exercse 1 : Evaluaton of functon dependng on nput Wrte a program who evaluate the functon f (x,y) for any two user specfed values

More information

An Indian Journal FULL PAPER ABSTRACT KEYWORDS. Trade Science Inc.

An Indian Journal FULL PAPER ABSTRACT KEYWORDS. Trade Science Inc. [Type text] [Type text] [Type text] ISSN : 97-735 Volume Issue 9 BoTechnology An Indan Journal FULL PAPER BTAIJ, (9), [333-3] Matlab mult-dmensonal model-based - 3 Chnese football assocaton super league

More information

Three supervised learning methods on pen digits character recognition dataset

Three supervised learning methods on pen digits character recognition dataset Three supervsed learnng methods on pen dgts character recognton dataset Chrs Flezach Department of Computer Scence and Engneerng Unversty of Calforna, San Dego San Dego, CA 92093 cflezac@cs.ucsd.edu Satoru

More information

Content Based Image Retrieval Using 2-D Discrete Wavelet with Texture Feature with Different Classifiers

Content Based Image Retrieval Using 2-D Discrete Wavelet with Texture Feature with Different Classifiers IOSR Journal of Electroncs and Communcaton Engneerng (IOSR-JECE) e-issn: 78-834,p- ISSN: 78-8735.Volume 9, Issue, Ver. IV (Mar - Apr. 04), PP 0-07 Content Based Image Retreval Usng -D Dscrete Wavelet wth

More information

The Research of Support Vector Machine in Agricultural Data Classification

The Research of Support Vector Machine in Agricultural Data Classification The Research of Support Vector Machne n Agrcultural Data Classfcaton Le Sh, Qguo Duan, Xnmng Ma, Me Weng College of Informaton and Management Scence, HeNan Agrcultural Unversty, Zhengzhou 45000 Chna Zhengzhou

More information

Outline. Type of Machine Learning. Examples of Application. Unsupervised Learning

Outline. Type of Machine Learning. Examples of Application. Unsupervised Learning Outlne Artfcal Intellgence and ts applcatons Lecture 8 Unsupervsed Learnng Professor Danel Yeung danyeung@eee.org Dr. Patrck Chan patrckchan@eee.org South Chna Unversty of Technology, Chna Introducton

More information

SVM-based Learning for Multiple Model Estimation

SVM-based Learning for Multiple Model Estimation SVM-based Learnng for Multple Model Estmaton Vladmr Cherkassky and Yunqan Ma Department of Electrcal and Computer Engneerng Unversty of Mnnesota Mnneapols, MN 55455 {cherkass,myq}@ece.umn.edu Abstract:

More information

Chapter 6 Programmng the fnte element method Inow turn to the man subject of ths book: The mplementaton of the fnte element algorthm n computer programs. In order to make my dscusson as straghtforward

More information

Hybrid Heuristics for the Maximum Diversity Problem

Hybrid Heuristics for the Maximum Diversity Problem Hybrd Heurstcs for the Maxmum Dversty Problem MICAEL GALLEGO Departamento de Informátca, Estadístca y Telemátca, Unversdad Rey Juan Carlos, Span. Mcael.Gallego@urjc.es ABRAHAM DUARTE Departamento de Informátca,

More information

CS434a/541a: Pattern Recognition Prof. Olga Veksler. Lecture 15

CS434a/541a: Pattern Recognition Prof. Olga Veksler. Lecture 15 CS434a/541a: Pattern Recognton Prof. Olga Veksler Lecture 15 Today New Topc: Unsupervsed Learnng Supervsed vs. unsupervsed learnng Unsupervsed learnng Net Tme: parametrc unsupervsed learnng Today: nonparametrc

More information

A New Approach For the Ranking of Fuzzy Sets With Different Heights

A New Approach For the Ranking of Fuzzy Sets With Different Heights New pproach For the ankng of Fuzzy Sets Wth Dfferent Heghts Pushpnder Sngh School of Mathematcs Computer pplcatons Thapar Unversty, Patala-7 00 Inda pushpndersnl@gmalcom STCT ankng of fuzzy sets plays

More information

Constructing Minimum Connected Dominating Set: Algorithmic approach

Constructing Minimum Connected Dominating Set: Algorithmic approach Constructng Mnmum Connected Domnatng Set: Algorthmc approach G.N. Puroht and Usha Sharma Centre for Mathematcal Scences, Banasthal Unversty, Rajasthan 304022 usha.sharma94@yahoo.com Abstract: Connected

More information

FEATURE EXTRACTION. Dr. K.Vijayarekha. Associate Dean School of Electrical and Electronics Engineering SASTRA University, Thanjavur

FEATURE EXTRACTION. Dr. K.Vijayarekha. Associate Dean School of Electrical and Electronics Engineering SASTRA University, Thanjavur FEATURE EXTRACTION Dr. K.Vjayarekha Assocate Dean School of Electrcal and Electroncs Engneerng SASTRA Unversty, Thanjavur613 41 Jont Intatve of IITs and IISc Funded by MHRD Page 1 of 8 Table of Contents

More information

Wishing you all a Total Quality New Year!

Wishing you all a Total Quality New Year! Total Qualty Management and Sx Sgma Post Graduate Program 214-15 Sesson 4 Vnay Kumar Kalakband Assstant Professor Operatons & Systems Area 1 Wshng you all a Total Qualty New Year! Hope you acheve Sx sgma

More information

Online Detection and Classification of Moving Objects Using Progressively Improving Detectors

Online Detection and Classification of Moving Objects Using Progressively Improving Detectors Onlne Detecton and Classfcaton of Movng Objects Usng Progressvely Improvng Detectors Omar Javed Saad Al Mubarak Shah Computer Vson Lab School of Computer Scence Unversty of Central Florda Orlando, FL 32816

More information

Unsupervised Learning

Unsupervised Learning Pattern Recognton Lecture 8 Outlne Introducton Unsupervsed Learnng Parametrc VS Non-Parametrc Approach Mxture of Denstes Maxmum-Lkelhood Estmates Clusterng Prof. Danel Yeung School of Computer Scence and

More information

Hermite Splines in Lie Groups as Products of Geodesics

Hermite Splines in Lie Groups as Products of Geodesics Hermte Splnes n Le Groups as Products of Geodescs Ethan Eade Updated May 28, 2017 1 Introducton 1.1 Goal Ths document defnes a curve n the Le group G parametrzed by tme and by structural parameters n the

More information

A Facet Generation Procedure. for solving 0/1 integer programs

A Facet Generation Procedure. for solving 0/1 integer programs A Facet Generaton Procedure for solvng 0/ nteger programs by Gyana R. Parja IBM Corporaton, Poughkeepse, NY 260 Radu Gaddov Emery Worldwde Arlnes, Vandala, Oho 45377 and Wlbert E. Wlhelm Teas A&M Unversty,

More information

TN348: Openlab Module - Colocalization

TN348: Openlab Module - Colocalization TN348: Openlab Module - Colocalzaton Topc The Colocalzaton module provdes the faclty to vsualze and quantfy colocalzaton between pars of mages. The Colocalzaton wndow contans a prevew of the two mages

More information

Random Kernel Perceptron on ATTiny2313 Microcontroller

Random Kernel Perceptron on ATTiny2313 Microcontroller Random Kernel Perceptron on ATTny233 Mcrocontroller Nemanja Djurc Department of Computer and Informaton Scences, Temple Unversty Phladelpha, PA 922, USA nemanja.djurc@temple.edu Slobodan Vucetc Department

More information

Optimal Workload-based Weighted Wavelet Synopses

Optimal Workload-based Weighted Wavelet Synopses Optmal Workload-based Weghted Wavelet Synopses Yoss Matas School of Computer Scence Tel Avv Unversty Tel Avv 69978, Israel matas@tau.ac.l Danel Urel School of Computer Scence Tel Avv Unversty Tel Avv 69978,

More information

Support Vector Machines. CS534 - Machine Learning

Support Vector Machines. CS534 - Machine Learning Support Vector Machnes CS534 - Machne Learnng Perceptron Revsted: Lnear Separators Bnar classfcaton can be veed as the task of separatng classes n feature space: b > 0 b 0 b < 0 f() sgn( b) Lnear Separators

More information

A Simple and Efficient Goal Programming Model for Computing of Fuzzy Linear Regression Parameters with Considering Outliers

A Simple and Efficient Goal Programming Model for Computing of Fuzzy Linear Regression Parameters with Considering Outliers 62626262621 Journal of Uncertan Systems Vol.5, No.1, pp.62-71, 211 Onlne at: www.us.org.u A Smple and Effcent Goal Programmng Model for Computng of Fuzzy Lnear Regresson Parameters wth Consderng Outlers

More information

Review of approximation techniques

Review of approximation techniques CHAPTER 2 Revew of appromaton technques 2. Introducton Optmzaton problems n engneerng desgn are characterzed by the followng assocated features: the objectve functon and constrants are mplct functons evaluated

More information

Biostatistics 615/815

Biostatistics 615/815 The E-M Algorthm Bostatstcs 615/815 Lecture 17 Last Lecture: The Smplex Method General method for optmzaton Makes few assumptons about functon Crawls towards mnmum Some recommendatons Multple startng ponts

More information

An efficient iterative source routing algorithm

An efficient iterative source routing algorithm An effcent teratve source routng algorthm Gang Cheng Ye Tan Nrwan Ansar Advanced Networng Lab Department of Electrcal Computer Engneerng New Jersey Insttute of Technology Newar NJ 7 {gc yt Ansar}@ntedu

More information

Data Representation in Digital Design, a Single Conversion Equation and a Formal Languages Approach

Data Representation in Digital Design, a Single Conversion Equation and a Formal Languages Approach Data Representaton n Dgtal Desgn, a Sngle Converson Equaton and a Formal Languages Approach Hassan Farhat Unversty of Nebraska at Omaha Abstract- In the study of data representaton n dgtal desgn and computer

More information

Simulation: Solving Dynamic Models ABE 5646 Week 11 Chapter 2, Spring 2010

Simulation: Solving Dynamic Models ABE 5646 Week 11 Chapter 2, Spring 2010 Smulaton: Solvng Dynamc Models ABE 5646 Week Chapter 2, Sprng 200 Week Descrpton Readng Materal Mar 5- Mar 9 Evaluatng [Crop] Models Comparng a model wth data - Graphcal, errors - Measures of agreement

More information

NAG Fortran Library Chapter Introduction. G10 Smoothing in Statistics

NAG Fortran Library Chapter Introduction. G10 Smoothing in Statistics Introducton G10 NAG Fortran Lbrary Chapter Introducton G10 Smoothng n Statstcs Contents 1 Scope of the Chapter... 2 2 Background to the Problems... 2 2.1 Smoothng Methods... 2 2.2 Smoothng Splnes and Regresson

More information

Network Intrusion Detection Based on PSO-SVM

Network Intrusion Detection Based on PSO-SVM TELKOMNIKA Indonesan Journal of Electrcal Engneerng Vol.1, No., February 014, pp. 150 ~ 1508 DOI: http://dx.do.org/10.11591/telkomnka.v1.386 150 Network Intruson Detecton Based on PSO-SVM Changsheng Xang*

More information

Unsupervised Learning and Clustering

Unsupervised Learning and Clustering Unsupervsed Learnng and Clusterng Supervsed vs. Unsupervsed Learnng Up to now we consdered supervsed learnng scenaro, where we are gven 1. samples 1,, n 2. class labels for all samples 1,, n Ths s also

More information

CHAPTER 2 PROPOSED IMPROVED PARTICLE SWARM OPTIMIZATION

CHAPTER 2 PROPOSED IMPROVED PARTICLE SWARM OPTIMIZATION 24 CHAPTER 2 PROPOSED IMPROVED PARTICLE SWARM OPTIMIZATION The present chapter proposes an IPSO approach for multprocessor task schedulng problem wth two classfcatons, namely, statc ndependent tasks and

More information

MULTISPECTRAL IMAGES CLASSIFICATION BASED ON KLT AND ATR AUTOMATIC TARGET RECOGNITION

MULTISPECTRAL IMAGES CLASSIFICATION BASED ON KLT AND ATR AUTOMATIC TARGET RECOGNITION MULTISPECTRAL IMAGES CLASSIFICATION BASED ON KLT AND ATR AUTOMATIC TARGET RECOGNITION Paulo Quntlano 1 & Antono Santa-Rosa 1 Federal Polce Department, Brasla, Brazl. E-mals: quntlano.pqs@dpf.gov.br and

More information

Learning-Based Top-N Selection Query Evaluation over Relational Databases

Learning-Based Top-N Selection Query Evaluation over Relational Databases Learnng-Based Top-N Selecton Query Evaluaton over Relatonal Databases Lang Zhu *, Wey Meng ** * School of Mathematcs and Computer Scence, Hebe Unversty, Baodng, Hebe 071002, Chna, zhu@mal.hbu.edu.cn **

More information