A Fusion of Stacking with Dynamic Integration
Niall Rooney, David Patterson
Northern Ireland Knowledge Engineering Laboratory
Faculty of Engineering, University of Ulster
Jordanstown, Newtownabbey, BT37 0QB, U.K.
{nf.rooney, ...}

Abstract

In this paper we present a novel method that fuses the ensemble meta-techniques of Stacking and Dynamic Integration (DI) for regression problems, without adding any major computational overhead. The intention of the technique is to benefit from the varying performance of Stacking and DI on different data sets, in order to provide a more robust technique. We detail an empirical analysis of the technique, referred to as weighted Meta Combiner (wMetaComb), and compare its performance to Stacking and the DI technique of Dynamic Weighting with Selection. The empirical analysis consisted of four sets of experiments; each experiment recorded the cross-fold evaluation of each technique for a large number of diverse data sets, where each base model was created using random feature selection and the same base learning algorithm. The experiments differed only in the base learning algorithm used. We demonstrate that wMetaComb was able to outperform DWS and Stacking in each experiment and as such fuses the two underlying mechanisms successfully.

1 Introduction

The objective of ensemble learning is to integrate a number of base learning models in an ensemble so that the generalization performance of the ensemble is better than that of any of the individual base learning models. If the base learning models are created using the same learning algorithm, the ensemble learning scheme is homogeneous; otherwise it is heterogeneous. Clearly, in the former case it is important that the base learning models are not identical, and theoretically it has been shown that for such an ensemble to be effective, the base learning models must be sufficiently accurate and diverse in their predictions [Krogh & Vedelsby, 1995], where diversity in terms of regression is usually measured by the ambiguity or variance in their predictions.
To achieve this in terms of homogeneous learning, methods exist that either manipulate, for each model, the training data through instance or feature sampling methods, or manipulate the learning parameters of the learning algorithm. Brown et al. [2004] provide a recent survey of such ensemble generation methods. Having generated an appropriate set of diverse and sufficiently accurate models, an ensemble integration method is required to create a successful ensemble. One well known approach to ensemble integration is to combine the models using a learning model itself. This process, referred to as Stacking [Wolpert, 1992], forms a meta-model created from a meta-data set. The meta-data set is formed from the predictions of the base models and the output variable of each training instance. As a consequence, for each training instance I = <x, y> belonging to training set D, a meta-instance MI = <f̂_1(x), .., f̂_N(x), y> is formed, where f̂_i(x) is the prediction of base model i, i = 1..N, and N is the size of the ensemble. To ensure that the predictions are unbiased, the training data is divided by a J-fold cross-validation process into J subsets, so that for each iteration of the cross-validation process, one of the subsets is removed from the training data and the models are re-trained on the remaining data. Predictions are then made by each base model on the held-out subset. The meta-model is trained on the accumulated meta-data after the completion of the cross-validation process, and each of the base models is trained on the whole training data. Breiman [1996b] showed the successful application of this method for regression problems, using a linear regression method for the meta-model where the coefficients of the linear regression model were constrained to be non-negative. However, there is no specific requirement to use linear regression as the meta-model [Hastie et al., 2001], and other learning algorithms have been used, particularly in the area of classification [Seewald, 2002; Džeroski & Ženko, 2004].
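The J-fold stacking procedure just described can be sketched in plain Python. This is a minimal illustration, not the authors' implementation: the two base learners (1-NN and 3-NN regressors on a toy one-dimensional data set) and the single-weight convex meta-model are stand-ins for the paper's subspaced ensemble and its meta-model.

```python
import random

def knn_predict(train, x, k):
    """Predict y for x as the mean y of the k nearest training points."""
    neighbours = sorted(train, key=lambda p: abs(p[0] - x))[:k]
    return sum(p[1] for p in neighbours) / k

def stack(data, J=5):
    """Build stacking meta-data via J-fold CV, then fit a one-parameter
    convex-combination meta-model over the two base learners."""
    folds = [data[j::J] for j in range(J)]
    meta = []  # meta-instances <f1(x), f2(x), y>
    for j in range(J):
        train = [p for i, f in enumerate(folds) if i != j for p in f]
        for x, y in folds[j]:
            meta.append((knn_predict(train, x, 1), knn_predict(train, x, 3), y))
    # least-squares weight w for w*f1 + (1-w)*f2, clipped to [0, 1]
    num = sum((f1 - f2) * (y - f2) for f1, f2, y in meta)
    den = sum((f1 - f2) ** 2 for f1, f2, y in meta) or 1.0
    w = min(1.0, max(0.0, num / den))
    # finally, the base models are "re-trained" on all data (for k-NN: stored)
    return lambda x: w * knn_predict(data, x, 1) + (1 - w) * knn_predict(data, x, 3)

random.seed(0)
data = [(i / 10, 2 * (i / 10) + random.gauss(0, 0.5)) for i in range(50)]
model = stack(data)
print(round(model(2.0), 2))  # close to the noiseless target 4.0
```

The convex-combination meta-model here plays the role of Breiman's non-negatively constrained linear regression in the smallest possible form.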
A technique which appears at first very similar in scope to Stacking (and can be seen as a variant thereof) is that proposed by Tsymbal et al. [2003], referred to as Dynamic Integration. This technique also runs a J-fold cross-validation process for each base model; however, the process by which it creates and uses meta-data is different. Each meta-instance has the format <x, Err_1(x), .., Err_N(x)>, where Err_i(x) is the prediction error made by each base model for each training instance <x, y>. Rather than forming a meta-model based on the derived meta-data, Dynamic Integration uses a k-nearest neighbour (k-NN) lazy model to find, for a given test instance, the nearest neighbours based on the original instance feature space (which is a subspace of the meta-data). It then determines the total training error of the base models recorded for this meta-nearest-neighbour set. The level of error is used to allow the dynamic selection of one of the base models, or an appropriate weighted combination of all or a select number of the base models, to make a prediction for the given test instance. Tsymbal et al. [2003] describe three Dynamic Integration techniques for classification, which have been adapted for regression problems [Rooney et al., 2004]. In this paper we consider one variant of these techniques, referred to as Dynamic Weighting with Selection (DWS), which applies a weighted combination of the base models, based on their localized error, to make a prediction, but first excludes those base models that are deemed too inaccurate to be added to the weighted combination. A method of injecting diversity into a homogeneous ensemble is to use random subspacing [Ho, 1998a, 1998b]. Random subspacing is based on the process whereby each base model is built with data whose features are randomly subspaced. Tsymbal et al. [2003] revised this mechanism so that a variable-length set of features is randomly subspaced. They showed this method to be effective in creating ensembles that were integrated using DI methods for classification. It has been shown empirically that Stacking and Dynamic Integration, based on integrating models generated using random subspacing, are sufficiently different from each other in that they can give varying generalization performance on the same data sets [Rooney et al., 2004].
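The DWS prediction step can be sketched as follows. The k-nearest-neighbour lookup and error bookkeeping follow the description above, while the specific selection threshold (twice the best local error) and the inverse-error weighting are illustrative assumptions, since the excerpt defers those details to Tsymbal et al. [2003].

```python
def dws_predict(x, models, meta, k=3):
    """Dynamic Weighting with Selection (illustrative rule choices).

    models: list of callables f_m(x)
    meta:   list of (x_i, [Err_1(x_i), .., Err_N(x_i)]) built during CV
    """
    # k nearest meta-instances in the original (here one-dimensional) feature space
    near = sorted(meta, key=lambda mi: abs(mi[0] - x))[:k]
    local = [sum(errs[m] for _, errs in near) / k for m in range(len(models))]
    # Selection: drop models deemed too inaccurate locally (threshold assumed)
    cutoff = 2.0 * min(local)
    kept = [m for m in range(len(models)) if local[m] <= cutoff]
    # Weighting: inverse local error, normalized over the kept models
    w = [1.0 / (local[m] + 1e-9) for m in kept]
    s = sum(w)
    return sum(wi / s * models[m](x) for wi, m in zip(w, kept))

models = [lambda x: 2 * x, lambda x: 2 * x + 1]   # second model biased upwards
meta = [(xi, [0.05, 1.0]) for xi in [0.0, 1.0, 2.0, 3.0]]
print(dws_predict(1.5, models, meta))  # 3.0: the biased model is excluded
```

The lazy k-NN lookup is the only work done at prediction time, which is why the DI meta-model adds so little training cost later in the paper.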
The man computatonal requrement n both Stackng and Dynamc Integraton, s the use of cross-valdaton of the base models to form meta-data. It requres very lttle addtonal computatonal overhead to propose an ensemble meta-learner that does both n one cross valdaton of the ensemble base models and as a consequence, s able to create both a Stackng meta-model and a Dynamc Integraton meta-model, wth the requred knowledge for the ensemble meta-learner to use both models to form predctons for test nstances. The focus of ths paper s on the formulaton and evaluaton of such an ensemble meta-learner, referred to as weghted Meta- Combner (wmetacomb). We nvestgated whether wmeta- Comb wll on average outperform both Stackng for regresson () and DWS for a range of data sets and a range of base learnng algorthms used to generate homogeneous randomly subspaced models. 2 Methodology In ths secton, we descrbe n detal the process of wmeta- Comb. The tranng phase of wmetacomb s shown n Fgure 1.. Algorthm : wmetacomb: Tranng Phase Input: D Output: f ˆ, f ˆ, f ˆ, = 1.. (* step 1 *) partton D nto J fold parttons D = 1.. ntalse meta-data set = ntalse meta-data set = For = 1..J-2 Dtran = D\ D D = D test buld learnng models f ˆ, = 1.. usng Dtran For each nstance < x, n Dtest Form meta-nstance MI : < f ˆ ˆ 1( x),.., f ( x), EndFor Endfor J Form meta-nstance MI : < x, Err1 ( x),.., Err ( x), Add Add MI MI to to Buld meta-model fˆ usng the meta-data set Buld meta-model ˆ f (* step 2 *) =J D = D test usng the meta-data set Determne TotalErr and TotalErr usng Dtest (* step 3 *) = J-1 Dtran = D\ D Dtest = D buld learnng models f ˆ, = 1.. usng Dtran For each nstance < x, n Dtest Form meta-nstance MI : < f ˆ ˆ 1( x),.., f ( x), Form meta-nstance MI : < x, Err1 ( x),.., Err ( x), Add MI to Add MI to EndFor Retran the f, = 1.. on D ˆ Buld meta-model fˆ usng the meta-data set Buld meta-model fˆ usng the meta-data set Fgure 1. 
The first step in the training phase of wMetaComb consists of building the SR and DI meta-models on J-1 folds of the training data (step 1). The second step involves testing the meta-models using a remaining training fold, as an indicator of their potential generalization performance. It is important to do this as, even though the meta-models are not created
using the complete meta-data, they are tested using data on which they were not trained, in order to give a reliable estimate of their test performance. The performance of the meta-models is recorded by their total absolute error, TotalErr_SR and TotalErr_DI, within step 2. The meta-models are then rebuilt using the entire meta-data, completed by step 3. The computational overhead of wMetaComb in comparison to SR by itself centres only on the cost of training the additional DI meta-model. However, as the additional meta-model is based on a lazy nearest neighbour learner, this cost is trivial.

The prediction made by wMetaComb is a simple weighted combination of its meta-models, based on their total error performance determined in step 2. Denote TotalErr = <TotalErr_SR, TotalErr_DI>. The individual weight for each meta-model is determined by finding the normalized error for each meta-model:

    normerror_i = TotalErr_i / (TotalErr_SR + TotalErr_DI),  i = 1..2

A normalized weighting for each meta-model is given by:

    mw_i = e^(-normerror_i / T)            (1)

    norm_mw_i = mw_i / (mw_1 + mw_2)       (2)

Equation 1 is influenced by a weighting mechanism that Wolpert & Macready [1996] proposed for combining base models generated by a stochastic process, such as is used in Bagging [Breiman, 1996a]. The weighting parameter T determines how much relative influence to give to each meta-model based on its normalized error: e.g., suppose normerror_1 = 0.4 and normerror_2 = 0.6; then Table 1 gives the normalized weight values for different values of T. A low value of T gives a large imbalance in the weighting, whereas a high value of T weights the meta-learners almost equally, regardless of the difference in their normalized errors.

    T    norm_mw_1    norm_mw_2
    (numeric entries not recoverable from this transcription)

Table 1. The effect of the weighting parameter T

Based on these considerations, we applied the following heuristic function for the setting of T, which increases the imbalance in the weighting the more the normalized errors differ:

    T = 1 - |normerror_1 - normerror_2|    (3)

wMetaComb forms a prediction for a test instance x by the following combination of the predictions made by the meta-models:

    norm_mw_SR * f̂_SR(x) + norm_mw_DI * f̂_DI(x)
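Under this reading of equations (1)-(3) (in particular T = 1 - |normerror_1 - normerror_2|, which is inferred from the surrounding discussion of how T controls the imbalance, since equation (3) is garbled in this transcription), the weight computation is a few lines:

```python
import math

def wmetacomb_weights(total_err_sr, total_err_di):
    """Equations (1)-(3): normalized errors -> exponential weights.
    T = 1 - |normerror_sr - normerror_di| is an inferred reading of eq. (3):
    the more the errors differ, the smaller T, the stronger the imbalance."""
    s = total_err_sr + total_err_di
    ne_sr, ne_di = total_err_sr / s, total_err_di / s
    T = 1.0 - abs(ne_sr - ne_di)
    if T == 0:  # one meta-model has all the error: give the other full weight
        return (1.0, 0.0) if ne_sr < ne_di else (0.0, 1.0)
    mw_sr, mw_di = math.exp(-ne_sr / T), math.exp(-ne_di / T)
    s2 = mw_sr + mw_di
    return mw_sr / s2, mw_di / s2

w = wmetacomb_weights(0.4, 0.6)   # the paper's example: normerrors 0.4 and 0.6
print(round(w[0], 3), round(w[1], 3))  # 0.562 0.438: the better meta-model dominates
```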
3 Experimental Evaluation and Analysis

In these experiments, we investigated the performance of wMetaComb in comparison to SR and DWS. The work was carried out based on appropriate extensions to the WEKA environment, such as the implementation of DWS. The ensemble sets consisted of 25 base homogeneous models, where the data for each base model was randomly subspaced [Tsymbal et al., 2003]. Note that random subspacing was a pseudo-random process, so that the feature sub-set generated for each model in an ensemble set is the same for all ensemble sets created using different base learning algorithms and the same data set. The meta-learner for SR and for the Stacking component within wMetaComb was based on the model tree M5 [Quinlan, 1992], as this has a larger hypothesis space than Linear Regression. The number of nearest neighbours for the DWS k-NN meta-learner was set to 15. The performance of DWS can be affected by the size of k for particular data sets, but for this particular choice of data sets, previous empirical analysis had shown that the value of 15 gave overall a relatively strong performance. The value of J during the training phase of each meta-technique was set to 10. In order to evaluate the performance of the different ensemble techniques, we chose 30 data sets available from the WEKA environment [Witten & Frank, 1999]. These data sets represent a number of synthetic and real-world problems from a variety of different domains, have a varying range of attribute sizes, and many contain a mixture of both discrete and continuous attributes. Data sets which had more than 500 instances were sampled without replacement to reduce them to a maximum of 500 instances, in order to make the evaluation computationally tractable. Any missing values in the data sets were replaced using a mean or modal technique. The characteristics of the data sets are shown in Table 2.
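The pseudo-random subspacing arrangement (identical feature subsets for a given data set across all four experiments) amounts to seeding the random generator per data set. A sketch, with the variable-length subset rule taken from the description of Tsymbal et al. [2003] and everything else assumed:

```python
import random

def random_subspaces(n_features, n_models, dataset_name):
    """Draw a variable-length random feature subset for each base model.
    Seeding on the data set name makes the subsets identical across
    experiments that differ only in the base learning algorithm."""
    rng = random.Random(dataset_name)          # reproducible per data set
    subsets = []
    for _ in range(n_models):
        size = rng.randint(1, n_features)      # variable-length subspace
        subsets.append(sorted(rng.sample(range(n_features), size)))
    return subsets

a = random_subspaces(8, 25, "housing")
b = random_subspaces(8, 25, "housing")
print(a == b, len(a))  # True 25: the same 25 subsets are reproduced
```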
We carried out a set of four experiments, where each experiment differed purely in the base learning algorithm deployed to generate the base models in the ensemble. Each experiment calculated the relative root mean squared error (RRMSE) [Setiono et al., 2002] of each ensemble technique for each data set, based on a 10-fold cross-validation measure.
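The excerpt does not define RRMSE; the reading assumed here is the model's root mean squared error relative to that of a constant mean predictor, expressed as a percentage:

```python
def rrmse(actual, predicted):
    """Relative RMSE, in percent: 100 * RMSE(model) / RMSE(mean predictor).
    Values below 100 mean the model beats the constant-mean baseline.
    (Definition assumed; the paper cites Setiono et al. [2002] for it.)"""
    mean = sum(actual) / len(actual)
    sse = sum((a - p) ** 2 for a, p in zip(actual, predicted))
    sst = sum((a - mean) ** 2 for a in actual)
    return 100.0 * (sse / sst) ** 0.5

print(rrmse([1, 2, 3, 4], [1.1, 1.9, 3.2, 3.8]))  # well under 100
```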
Data sets: abalone, autohorse, autompg, autoprice, auto, bank8fm, bank32nh, bodyfat, breasttumor, cal_housing, cloud, cpu, cpu_act, cpu_small, echomonths, elevators, fishcatch, friedman, housing, house_8l, house_16h, lowbwt, machine_cpu, pollution, pyrim, sensory, servo, sleep, strike, veteran.

Table 2. The data set characteristics (original data set size, data set size in the experiments, and numbers of continuous and discrete input attributes; the numeric entries are not recoverable from this transcription)

The RRMSE is a percentage measure. By way of comparison with the ensemble approaches in general, we also determined the RRMSE for a single model built on the whole unsampled data, using the same learning algorithm as used for the ensemble base models. We repeated this evaluation four times, each time using a different learning algorithm to generate the base models. The choice of learning algorithms was based on simple and efficient methods, in order to make the study computationally tractable, but also to provide a representative range of methods with different learning hypothesis spaces, in order to determine whether wMetaComb was a robust method independent of the base learning algorithm deployed. The base learning algorithms used for the four experiments were 5 nearest neighbours (5-NN), Linear Regression (LR), Locally Weighted Regression (LWR) [Atkeson et al., 1997], and the model tree learner M5. The results of the evaluation were assessed by performing a 2-tailed paired t-test (p = 0.05) on the cross-fold RRMSE of each technique in comparison to each other technique for each data set. The comparison between one technique A and another B was then summarised in the form wins:losses:ties (ASD), where wins is a count of the data sets where technique A significantly outperformed technique B, losses is a count of the data sets where technique B significantly outperformed technique A, and ties is a count of those where there was no significant difference. The significance gain is the difference between the number of wins and the number of losses. If this is zero or less, no gain was shown; if the gain is less than 3, this is indicative of only a slight improvement.
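The wins:losses:ties summary can be reproduced with a paired t-test per data set. A plain-Python sketch, using the standard two-tailed critical value t = 2.262 for 9 degrees of freedom (10 folds) at p = 0.05; the toy RRMSE values are invented for illustration:

```python
import math

def paired_t(a, b):
    """t statistic for paired samples a, b (here: 10 cross-fold RRMSEs)."""
    d = [x - y for x, y in zip(a, b)]
    n = len(d)
    mean = sum(d) / n
    var = sum((x - mean) ** 2 for x in d) / (n - 1)
    if var == 0:
        return 0.0 if mean == 0 else math.copysign(float("inf"), mean)
    return mean / ((var / n) ** 0.5)

def summarise(results_a, results_b, crit=2.262):   # t(0.975, df=9)
    """wins:losses:ties of technique A vs B over many data sets.
    Lower RRMSE is better, so a significantly negative mean difference
    of A - B counts as a win for A."""
    wins = losses = ties = 0
    for a, b in zip(results_a, results_b):
        t = paired_t(a, b)
        if t < -crit:
            wins += 1
        elif t > crit:
            losses += 1
        else:
            ties += 1
    return wins, losses, ties

a = [[10.0, 10.1, 9.9, 10.0, 10.2, 9.8, 10.0, 10.1, 9.9, 10.0]]   # technique A
b = [[12.0, 12.2, 11.8, 12.1, 12.0, 11.9, 12.3, 12.0, 11.8, 12.1]]  # technique B
print(summarise(a, b))  # (1, 0, 0): A significantly better on this one data set
```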
We determined the average significance difference (ASD) between the RRMSE of the two techniques, averaged over the wins + losses data sets only, i.e., those where a significant difference was shown. If technique A outperformed B as shown by the significance ratio, the ASD gave us a percentage measure of the degree by which A outperformed B (if technique A was outperformed by B, then the ASD value is negative). The ASD by itself gives no direct indication of the performance of the techniques and is only a supplementary measure: a large ASD does not indicate that technique A performed considerably better than B if the difference in the number of wins to losses was small, and conversely, even if the number of wins was much larger than the number of losses, a small ASD reduces the impact of the result considerably. This section is divided into a discussion of the summary of results for each individual base learning algorithm, followed by a discussion of the overall results.

3.1 Base Learner: k-NN

Table 3 shows a summary of the results for the evaluation when the base learning algorithm was 5-NN. Clearly, all the ensemble techniques strongly outperformed single 5-NN, whereby in terms of significance alone, wMetaComb showed the largest improvement, with a gain of 22. Although SR showed fewer data sets with significant improvement, it had a greater ASD than wMetaComb. wMetaComb outperformed DWS with a gain of 7, and showed a small improvement over SR, with a gain of 3. SR showed a significance gain of only 1 over DWS.

Technique   | 5-NN             | SR               | DWS              | wMetaComb
5-NN        |                  | 0:11:19 (-16.07) | 0:20:10 (-12.28) | 0:22:8 (-14.47)
SR          | 11:0:19 (16.07)  |                  | 3:2:25 (3.82)    | 0:3:27 (-6.8)
DWS         | 20:0:10 (12.28)  | 2:3:25 (-3.82)   |                  | 1:8:21 (-5.28)
wMetaComb   | 22:0:8 (14.47)   | 3:0:27 (6.8)     | 8:1:21 (5.28)    |

Table 3. Comparison of methods when the base learning algorithm was 5-NN
3.2 Base Learner: LR

Table 4 shows that all ensemble techniques outperformed single LR, with wMetaComb showing the largest improvement, with a gain of 13 and an ASD of 11.99. wMetaComb outperformed SR and DWS, with a larger improvement shown in comparison to DWS than to SR, both in terms of significance and ASD. SR outperformed DWS with a significance gain of 7 and an ASD of 5.1.

Technique   | LR               | SR               | DWS              | wMetaComb
LR          |                  | 2:10:18 (-11.38) | 1:11:18 (-5.56)  | 0:13:17 (-11.99)
SR          | 10:2:18 (11.38)  |                  | 11:4:15 (5.1)    | 0:5:25 (-5.58)
DWS         | 11:1:18 (5.56)   | 4:11:15 (-5.1)   |                  | 3:13:14 (-6.02)
wMetaComb   | 13:0:17 (11.99)  | 5:0:25 (5.58)    | 13:3:14 (6.02)   |

Table 4. Comparison of methods when the base learning algorithm was LR

3.3 Base Learner: LWR

Table 5 shows that all ensemble techniques outperformed single LWR, with wMetaComb showing the largest improvement, with a significance gain of 16. wMetaComb outperformed SR and DWS, with a larger improvement shown over DWS than over SR in terms of significance and ASD. SR outperformed DWS with a significance gain of 4 and an ASD of 6.24.

Technique   | LWR              | SR               | DWS              | wMetaComb
LWR         |                  | 1:9:20 (-13.81)  | 0:16:14 (-12.28) | 0:16:14 (-15.91)
SR          | 9:1:20 (13.81)   |                  | 6:2:22 (6.24)    | 0:5:25 (-10.48)
DWS         | 16:0:14 (12.28)  | 2:6:22 (-6.24)   |                  | 0:9:21 (-8.28)
wMetaComb   | 16:0:14 (15.91)  | 5:0:25 (10.48)   | 9:0:21 (8.28)    |

Table 5. Comparison of methods when the base learning algorithm was LWR

3.4 Base Learner: M5

Technique   | M5               | SR               | DWS              | wMetaComb
M5          |                  | 3:1:26 (4.4)     | 1:6:23 (-3.45)   | 0:8:22 (-4.51)
SR          | 1:3:26 (-4.4)    |                  | 2:6:22 (-2.62)   | 0:11:19 (-4.48)
DWS         | 6:1:23 (3.45)    | 6:2:22 (2.62)    |                  | 0:6:24 (-3.53)
wMetaComb   | 8:0:22 (4.51)    | 11:0:19 (4.48)   | 6:0:24 (3.53)    |

Table 6. Comparison of methods when the base learning algorithm was M5

Table 6 shows a much reduced performance for the ensembles in comparison to the single model when the base learning algorithm was M5.
In fact, SR showed no improvement over M5, and although wMetaComb and DWS improved on M5 in terms of significance, they did not show an ASD as large as for the previous learning algorithms. The reduced performance with M5 indicates that, for certain learning algorithms, random subspacing by itself is not able to generate sufficiently accurate base models, and needs to be enhanced using search approaches that improve the quality of ensemble members, such as those considered in [Tsymbal et al., 2005]. wMetaComb showed improvements over SR and, to a lesser degree, over DWS, both in terms of significance and ASD values.

Overall, it can be seen that wMetaComb gave the strongest performance of the ensembles, in terms of significance gain in comparison to the single model, for 3 out of 4 base learning algorithms (5-NN, LR, M5), and tied with DWS for LWR. For 3 out of the 4 learning algorithms (LR, LWR, M5), wMetaComb also had the highest ASD compared to the single model. wMetaComb outperformed SR for all learning algorithms. The largest significant improvement over SR was recorded with M5, with a significance gain of 11, and the largest ASD in comparison to SR was recorded with LWR, at 10.48. The lowest significant improvements over SR were recorded with 5-NN, LR, and LWR, with significance gains of 3, 5, and 5 respectively, and the lowest ASD, of 4.48, with M5. wMetaComb outperformed DWS for all four base learning algorithms, and the degree of improvement was larger in comparison to DWS than to SR. The largest significant improvement over DWS was shown with LR, with a significance gain of 10, and the largest ASD, of 8.28, with LWR. The lowest improvement over DWS was shown with M5, both in terms of gain and ASD, which were 6 and 3.53 respectively. In effect, wMetaComb was able to fuse effectively the overlapping expertise of the two meta-techniques, independent of the learning algorithm employed. In addition, wMetaComb provided benefit whether or not, for a given experiment, SR was overall stronger than DWS (as
was the case with LR and LWR, but not M5), indicating the general robustness of the technique.

4 Conclusions

We have presented a technique called wMetaComb that merges the techniques of Stacking for regression and Dynamic Integration for regression, and showed that it was successfully able to benefit from the individual meta-techniques' often complementary expertise for different data sets. This was demonstrated based on 4 sets of experiments, each using a different base learning algorithm to generate homogeneous ensemble sets. It was shown that for each experiment, wMetaComb outperformed Stacking and the Dynamic Integration technique of Dynamic Weighting with Selection. In future, we intend to generalize the technique so as to allow the combination of more than two meta-ensemble techniques. In this, we may consider combinations of Stacking using different learning algorithms, or other Dynamic Integration combination methods.

References

[Atkeson et al., 1997] Atkeson, C. G., Moore, A. W., and Schaal, S. Locally Weighted Learning. Artificial Intelligence Review, 11:11-73, Kluwer, 1997.
[Breiman, 1996a] Breiman, L. Bagging Predictors. Machine Learning, 24(2):123-140, Kluwer, 1996.
[Breiman, 1996b] Breiman, L. Stacked Regressions. Machine Learning, 24:49-64, Kluwer, 1996.
[Brown et al., 2004] Brown, G., Wyatt, J., Harris, R., and Yao, X. Diversity Creation Methods: A Survey and Categorisation. Information Fusion, 6(1):5-20, Elsevier, 2004.
[Džeroski & Ženko, 2004] Džeroski, S. and Ženko, B. Is Combining Classifiers with Stacking Better than Selecting the Best One? Machine Learning, 54(3):255-273, Kluwer, 2004.
[Hastie et al., 2001] Hastie, T., Tibshirani, R., and Friedman, J. The Elements of Statistical Learning: Data Mining, Inference and Prediction. Springer Series in Statistics, 2001.
[Ho, 1998a] Ho, T.K. The Random Subspace Method for Constructing Decision Forests. IEEE Transactions on Pattern Analysis and Machine Intelligence, 20(8):832-844, 1998.
[Ho, 1998b] Ho, T.K. Nearest Neighbors in Random Subspaces. In Proceedings of the Joint IAPR Workshops on Advances in Pattern Recognition, LNCS Vol. 1451, Springer-Verlag, 1998.
[Krogh & Vedelsby, 1995] Krogh, A. and Vedelsby, J. Neural Network Ensembles, Cross Validation, and Active Learning. In Advances in Neural Information Processing Systems, pp. 231-238, MIT Press, 1995.
[Quinlan, 1992] Quinlan, R. Learning with Continuous Classes. In Proceedings of the 5th Australian Joint Conference on Artificial Intelligence, pp. 343-348, World Scientific, 1992.
[Rooney et al., 2004] Rooney, N., Patterson, D., Anand, S., and Tsymbal, A. Dynamic Integration of Regression Models. In Proceedings of the 5th International Multiple Classifier Systems Workshop, LNCS Vol. 3077, Springer, 2004.
[Seewald, 2002] Seewald, A. Exploring the Parameter State Space of Stacking. In Proceedings of the 2002 IEEE International Conference on Data Mining, 2002.
[Setiono et al., 2002] Setiono, R., Leow, W.K., and Zurada, J. Extraction of Rules from Artificial Neural Networks for Nonlinear Regression. IEEE Transactions on Neural Networks, 13(3):564-577, 2002.
[Tsymbal et al., 2003] Tsymbal, A., Puuronen, S., and Patterson, D. Ensemble Feature Selection with the Simple Bayesian Classification. Information Fusion, 4:87-100, Elsevier, 2003.
[Tsymbal et al., 2005] Tsymbal, A., Pechenizkiy, M., and Cunningham, P. Sequential Genetic Search for Ensemble Feature Selection. In Proceedings of the 19th International Joint Conference on Artificial Intelligence, 2005.
[Witten & Frank, 1999] Witten, I. and Frank, E. Data Mining: Practical Machine Learning Tools and Techniques with Java Implementations. Morgan Kaufmann, 1999.
[Wolpert, 1992] Wolpert, D. Stacked Generalization. Neural Networks, 5:241-259, 1992.
[Wolpert & Macready, 1996] Wolpert, D. and Macready, W. Combining Stacking with Bagging to Improve a Learning Algorithm. Santa Fe Institute Technical Report, 1996.
More information6.854 Advanced Algorithms Petar Maymounkov Problem Set 11 (November 23, 2005) With: Benjamin Rossman, Oren Weimann, and Pouya Kheradpour
6.854 Advanced Algorthms Petar Maymounkov Problem Set 11 (November 23, 2005) Wth: Benjamn Rossman, Oren Wemann, and Pouya Kheradpour Problem 1. We reduce vertex cover to MAX-SAT wth weghts, such that the
More informationA Fast Content-Based Multimedia Retrieval Technique Using Compressed Data
A Fast Content-Based Multmeda Retreval Technque Usng Compressed Data Borko Furht and Pornvt Saksobhavvat NSF Multmeda Laboratory Florda Atlantc Unversty, Boca Raton, Florda 3343 ABSTRACT In ths paper,
More informationHU Sheng-neng* Resources and Electric Power,Zhengzhou ,China
do:10.21311/002.31.6.09 Applcaton of new neural network technology n traffc volume predcton Abstract HU Sheng-neng* 1 School of Cvl Engneerng &Communcaton, North Chna Unversty of Water Resources and Electrc
More informationAPPLICATION OF MULTIVARIATE LOSS FUNCTION FOR ASSESSMENT OF THE QUALITY OF TECHNOLOGICAL PROCESS MANAGEMENT
3. - 5. 5., Brno, Czech Republc, EU APPLICATION OF MULTIVARIATE LOSS FUNCTION FOR ASSESSMENT OF THE QUALITY OF TECHNOLOGICAL PROCESS MANAGEMENT Abstract Josef TOŠENOVSKÝ ) Lenka MONSPORTOVÁ ) Flp TOŠENOVSKÝ
More informationX- Chart Using ANOM Approach
ISSN 1684-8403 Journal of Statstcs Volume 17, 010, pp. 3-3 Abstract X- Chart Usng ANOM Approach Gullapall Chakravarth 1 and Chaluvad Venkateswara Rao Control lmts for ndvdual measurements (X) chart are
More informationAn Evaluation of Divide-and-Combine Strategies for Image Categorization by Multi-Class Support Vector Machines
An Evaluaton of Dvde-and-Combne Strateges for Image Categorzaton by Mult-Class Support Vector Machnes C. Demrkesen¹ and H. Cherf¹, ² 1: Insttue of Scence and Engneerng 2: Faculté des Scences Mrande Galatasaray
More informationMachine Learning Algorithm Improves Accuracy for analysing Kidney Function Test Using Decision Tree Algorithm
Internatonal Journal of Management, IT & Engneerng Vol. 8 Issue 8, August 2018, ISSN: 2249-0558 Impact Factor: 7.119 Journal Homepage: Double-Blnd Peer Revewed Refereed Open Access Internatonal Journal
More informationAn Iterative Solution Approach to Process Plant Layout using Mixed Integer Optimisation
17 th European Symposum on Computer Aded Process Engneerng ESCAPE17 V. Plesu and P.S. Agach (Edtors) 2007 Elsever B.V. All rghts reserved. 1 An Iteratve Soluton Approach to Process Plant Layout usng Mxed
More informationThree supervised learning methods on pen digits character recognition dataset
Three supervsed learnng methods on pen dgts character recognton dataset Chrs Flezach Department of Computer Scence and Engneerng Unversty of Calforna, San Dego San Dego, CA 92093 cflezac@cs.ucsd.edu Satoru
More informationClassifying Acoustic Transient Signals Using Artificial Intelligence
Classfyng Acoustc Transent Sgnals Usng Artfcal Intellgence Steve Sutton, Unversty of North Carolna At Wlmngton (suttons@charter.net) Greg Huff, Unversty of North Carolna At Wlmngton (jgh7476@uncwl.edu)
More informationEdge Detection in Noisy Images Using the Support Vector Machines
Edge Detecton n Nosy Images Usng the Support Vector Machnes Hlaro Gómez-Moreno, Saturnno Maldonado-Bascón, Francsco López-Ferreras Sgnal Theory and Communcatons Department. Unversty of Alcalá Crta. Madrd-Barcelona
More informationSpam Filtering Based on Support Vector Machines with Taguchi Method for Parameter Selection
E-mal Spam Flterng Based on Support Vector Machnes wth Taguch Method for Parameter Selecton We-Chh Hsu, Tsan-Yng Yu E-mal Spam Flterng Based on Support Vector Machnes wth Taguch Method for Parameter Selecton
More informationHelsinki University Of Technology, Systems Analysis Laboratory Mat Independent research projects in applied mathematics (3 cr)
Helsnk Unversty Of Technology, Systems Analyss Laboratory Mat-2.08 Independent research projects n appled mathematcs (3 cr) "! #$&% Antt Laukkanen 506 R ajlaukka@cc.hut.f 2 Introducton...3 2 Multattrbute
More informationModeling Inter-cluster and Intra-cluster Discrimination Among Triphones
Modelng Inter-cluster and Intra-cluster Dscrmnaton Among Trphones Tom Ko, Bran Mak and Dongpeng Chen Department of Computer Scence and Engneerng The Hong Kong Unversty of Scence and Technology Clear Water
More informationComparing High-Order Boolean Features
Brgham Young Unversty BYU cholarsarchve All Faculty Publcatons 2005-07-0 Comparng Hgh-Order Boolean Features Adam Drake adam_drake@yahoo.com Dan A. Ventura ventura@cs.byu.edu Follow ths and addtonal works
More informationA Hill-climbing Landmarker Generation Algorithm Based on Efficiency and Correlativity Criteria
A Hll-clmbng Landmarker Generaton Algorthm Based on Effcency and Correlatvty Crtera Daren Ler, Irena Koprnska, and Sanjay Chawla School of Informaton Technologes, Unversty of Sydney Madsen Buldng F09,
More informationFace Recognition Based on SVM and 2DPCA
Vol. 4, o. 3, September, 2011 Face Recognton Based on SVM and 2DPCA Tha Hoang Le, Len Bu Faculty of Informaton Technology, HCMC Unversty of Scence Faculty of Informaton Scences and Engneerng, Unversty
More informationThe Man-hour Estimation Models & Its Comparison of Interim Products Assembly for Shipbuilding
Internatonal Journal of Operatons Research Internatonal Journal of Operatons Research Vol., No., 9 4 (005) The Man-hour Estmaton Models & Its Comparson of Interm Products Assembly for Shpbuldng Bn Lu and
More informationImplementation Naïve Bayes Algorithm for Student Classification Based on Graduation Status
Internatonal Journal of Appled Busness and Informaton Systems ISSN: 2597-8993 Vol 1, No 2, September 2017, pp. 6-12 6 Implementaton Naïve Bayes Algorthm for Student Classfcaton Based on Graduaton Status
More informationAvailable online at ScienceDirect. Procedia Environmental Sciences 26 (2015 )
Avalable onlne at www.scencedrect.com ScenceDrect Proceda Envronmental Scences 26 (2015 ) 109 114 Spatal Statstcs 2015: Emergng Patterns Calbratng a Geographcally Weghted Regresson Model wth Parameter-Specfc
More informationSome Advanced SPC Tools 1. Cumulative Sum Control (Cusum) Chart For the data shown in Table 9-1, the x chart can be generated.
Some Advanced SP Tools 1. umulatve Sum ontrol (usum) hart For the data shown n Table 9-1, the x chart can be generated. However, the shft taken place at sample #21 s not apparent. 92 For ths set samples,
More informationy and the total sum of
Lnear regresson Testng for non-lnearty In analytcal chemstry, lnear regresson s commonly used n the constructon of calbraton functons requred for analytcal technques such as gas chromatography, atomc absorpton
More informationFuzzy Modeling of the Complexity vs. Accuracy Trade-off in a Sequential Two-Stage Multi-Classifier System
Fuzzy Modelng of the Complexty vs. Accuracy Trade-off n a Sequental Two-Stage Mult-Classfer System MARK LAST 1 Department of Informaton Systems Engneerng Ben-Guron Unversty of the Negev Beer-Sheva 84105
More informationA Unified Framework for Semantics and Feature Based Relevance Feedback in Image Retrieval Systems
A Unfed Framework for Semantcs and Feature Based Relevance Feedback n Image Retreval Systems Ye Lu *, Chunhu Hu 2, Xngquan Zhu 3*, HongJang Zhang 2, Qang Yang * School of Computng Scence Smon Fraser Unversty
More informationAnalysis of Continuous Beams in General
Analyss of Contnuous Beams n General Contnuous beams consdered here are prsmatc, rgdly connected to each beam segment and supported at varous ponts along the beam. onts are selected at ponts of support,
More informationLoad-Balanced Anycast Routing
Load-Balanced Anycast Routng Chng-Yu Ln, Jung-Hua Lo, and Sy-Yen Kuo Department of Electrcal Engneerng atonal Tawan Unversty, Tape, Tawan sykuo@cc.ee.ntu.edu.tw Abstract For fault-tolerance and load-balance
More informationSVM-based Learning for Multiple Model Estimation
SVM-based Learnng for Multple Model Estmaton Vladmr Cherkassky and Yunqan Ma Department of Electrcal and Computer Engneerng Unversty of Mnnesota Mnneapols, MN 55455 {cherkass,myq}@ece.umn.edu Abstract:
More informationSum of Linear and Fractional Multiobjective Programming Problem under Fuzzy Rules Constraints
Australan Journal of Basc and Appled Scences, 2(4): 1204-1208, 2008 ISSN 1991-8178 Sum of Lnear and Fractonal Multobjectve Programmng Problem under Fuzzy Rules Constrants 1 2 Sanjay Jan and Kalash Lachhwan
More informationUB at GeoCLEF Department of Geography Abstract
UB at GeoCLEF 2006 Mguel E. Ruz (1), Stuart Shapro (2), June Abbas (1), Slva B. Southwck (1) and Davd Mark (3) State Unversty of New York at Buffalo (1) Department of Lbrary and Informaton Studes (2) Department
More informationMachine Learning 9. week
Machne Learnng 9. week Mappng Concept Radal Bass Functons (RBF) RBF Networks 1 Mappng It s probably the best scenaro for the classfcaton of two dataset s to separate them lnearly. As you see n the below
More informationLearning-Based Top-N Selection Query Evaluation over Relational Databases
Learnng-Based Top-N Selecton Query Evaluaton over Relatonal Databases Lang Zhu *, Wey Meng ** * School of Mathematcs and Computer Scence, Hebe Unversty, Baodng, Hebe 071002, Chna, zhu@mal.hbu.edu.cn **
More informationA MOVING MESH APPROACH FOR SIMULATION BUDGET ALLOCATION ON CONTINUOUS DOMAINS
Proceedngs of the Wnter Smulaton Conference M E Kuhl, N M Steger, F B Armstrong, and J A Jones, eds A MOVING MESH APPROACH FOR SIMULATION BUDGET ALLOCATION ON CONTINUOUS DOMAINS Mark W Brantley Chun-Hung
More informationProblem Set 3 Solutions
Introducton to Algorthms October 4, 2002 Massachusetts Insttute of Technology 6046J/18410J Professors Erk Demane and Shaf Goldwasser Handout 14 Problem Set 3 Solutons (Exercses were not to be turned n,
More informationEYE CENTER LOCALIZATION ON A FACIAL IMAGE BASED ON MULTI-BLOCK LOCAL BINARY PATTERNS
P.G. Demdov Yaroslavl State Unversty Anatoly Ntn, Vladmr Khryashchev, Olga Stepanova, Igor Kostern EYE CENTER LOCALIZATION ON A FACIAL IMAGE BASED ON MULTI-BLOCK LOCAL BINARY PATTERNS Yaroslavl, 2015 Eye
More informationMachine Learning: Algorithms and Applications
14/05/1 Machne Learnng: Algorthms and Applcatons Florano Zn Free Unversty of Bozen-Bolzano Faculty of Computer Scence Academc Year 011-01 Lecture 10: 14 May 01 Unsupervsed Learnng cont Sldes courtesy of
More informationHigh-Boost Mesh Filtering for 3-D Shape Enhancement
Hgh-Boost Mesh Flterng for 3-D Shape Enhancement Hrokazu Yagou Λ Alexander Belyaev y Damng We z Λ y z ; ; Shape Modelng Laboratory, Unversty of Azu, Azu-Wakamatsu 965-8580 Japan y Computer Graphcs Group,
More informationMULTISPECTRAL IMAGES CLASSIFICATION BASED ON KLT AND ATR AUTOMATIC TARGET RECOGNITION
MULTISPECTRAL IMAGES CLASSIFICATION BASED ON KLT AND ATR AUTOMATIC TARGET RECOGNITION Paulo Quntlano 1 & Antono Santa-Rosa 1 Federal Polce Department, Brasla, Brazl. E-mals: quntlano.pqs@dpf.gov.br and
More informationA classification scheme for applications with ambiguous data
A classfcaton scheme for applcatons wth ambguous data Thomas P. Trappenberg Centre for Cogntve Neuroscence Department of Psychology Unversty of Oxford Oxford OX1 3UD, England Thomas.Trappenberg@psy.ox.ac.uk
More informationTN348: Openlab Module - Colocalization
TN348: Openlab Module - Colocalzaton Topc The Colocalzaton module provdes the faclty to vsualze and quantfy colocalzaton between pars of mages. The Colocalzaton wndow contans a prevew of the two mages
More informationProblem Definitions and Evaluation Criteria for Computational Expensive Optimization
Problem efntons and Evaluaton Crtera for Computatonal Expensve Optmzaton B. Lu 1, Q. Chen and Q. Zhang 3, J. J. Lang 4, P. N. Suganthan, B. Y. Qu 6 1 epartment of Computng, Glyndwr Unversty, UK Faclty
More informationRemote Sensing Textual Image Classification based on Ensemble Learning
I.J. Image, Graphcs and Sgnal Processng, 2016, 12, 21-29 Publshed Onlne December 2016 n MECS (http://www.mecs-press.org/) DOI: 10.5815/jgsp.2016.12.03 Remote Sensng Textual Image Classfcaton based on Ensemble
More informationTECHNIQUE OF FORMATION HOMOGENEOUS SAMPLE SAME OBJECTS. Muradaliyev A.Z.
TECHNIQUE OF FORMATION HOMOGENEOUS SAMPLE SAME OBJECTS Muradalyev AZ Azerbajan Scentfc-Research and Desgn-Prospectng Insttute of Energetc AZ1012, Ave HZardab-94 E-mal:aydn_murad@yahoocom Importance of
More informationFeature Selection as an Improving Step for Decision Tree Construction
2009 Internatonal Conference on Machne Learnng and Computng IPCSIT vol.3 (2011) (2011) IACSIT Press, Sngapore Feature Selecton as an Improvng Step for Decson Tree Constructon Mahd Esmael 1, Fazekas Gabor
More informationAutomated Selection of Training Data and Base Models for Data Stream Mining Using Naïve Bayes Ensemble Classification
Proceedngs of the World Congress on Engneerng 2017 Vol II, July 5-7, 2017, London, U.K. Automated Selecton of Tranng Data and Base Models for Data Stream Mnng Usng Naïve Bayes Ensemble Classfcaton Patrca
More informationCLASSIFICATION OF ULTRASONIC SIGNALS
The 8 th Internatonal Conference of the Slovenan Socety for Non-Destructve Testng»Applcaton of Contemporary Non-Destructve Testng n Engneerng«September -3, 5, Portorož, Slovena, pp. 7-33 CLASSIFICATION
More informationA Similarity-Based Prognostics Approach for Remaining Useful Life Estimation of Engineered Systems
2008 INTERNATIONAL CONFERENCE ON PROGNOSTICS AND HEALTH MANAGEMENT A Smlarty-Based Prognostcs Approach for Remanng Useful Lfe Estmaton of Engneered Systems Tany Wang, Janbo Yu, Davd Segel, and Jay Lee
More informationQuery Clustering Using a Hybrid Query Similarity Measure
Query clusterng usng a hybrd query smlarty measure Fu. L., Goh, D.H., & Foo, S. (2004). WSEAS Transacton on Computers, 3(3), 700-705. Query Clusterng Usng a Hybrd Query Smlarty Measure Ln Fu, Don Hoe-Lan
More informationAssociative Based Classification Algorithm For Diabetes Disease Prediction
Internatonal Journal of Engneerng Trends and Technology (IJETT) Volume-41 Number-3 - November 016 Assocatve Based Classfcaton Algorthm For Dabetes Dsease Predcton 1 N. Gnana Deepka, Y.surekha, 3 G.Laltha
More informationLearning-based License Plate Detection on Edge Features
Learnng-based Lcense Plate Detecton on Edge Features Wng Teng Ho, Woo Hen Yap, Yong Haur Tay Computer Vson and Intellgent Systems (CVIS) Group Unverst Tunku Abdul Rahman, Malaysa wngteng_h@yahoo.com, woohen@yahoo.com,
More informationThe Greedy Method. Outline and Reading. Change Money Problem. Greedy Algorithms. Applications of the Greedy Strategy. The Greedy Method Technique
//00 :0 AM Outlne and Readng The Greedy Method The Greedy Method Technque (secton.) Fractonal Knapsack Problem (secton..) Task Schedulng (secton..) Mnmum Spannng Trees (secton.) Change Money Problem Greedy
More informationBAYESIAN MULTI-SOURCE DOMAIN ADAPTATION
BAYESIAN MULTI-SOURCE DOMAIN ADAPTATION SHI-LIANG SUN, HONG-LEI SHI Department of Computer Scence and Technology, East Chna Normal Unversty 500 Dongchuan Road, Shangha 200241, P. R. Chna E-MAIL: slsun@cs.ecnu.edu.cn,
More informationCluster Analysis of Electrical Behavior
Journal of Computer and Communcatons, 205, 3, 88-93 Publshed Onlne May 205 n ScRes. http://www.scrp.org/ournal/cc http://dx.do.org/0.4236/cc.205.350 Cluster Analyss of Electrcal Behavor Ln Lu Ln Lu, School
More informationRandom Kernel Perceptron on ATTiny2313 Microcontroller
Random Kernel Perceptron on ATTny233 Mcrocontroller Nemanja Djurc Department of Computer and Informaton Scences, Temple Unversty Phladelpha, PA 922, USA nemanja.djurc@temple.edu Slobodan Vucetc Department
More informationNetwork Intrusion Detection Based on PSO-SVM
TELKOMNIKA Indonesan Journal of Electrcal Engneerng Vol.1, No., February 014, pp. 150 ~ 1508 DOI: http://dx.do.org/10.11591/telkomnka.v1.386 150 Network Intruson Detecton Based on PSO-SVM Changsheng Xang*
More informationCombining Multiresolution Shape Descriptors for 3D Model Retrieval
Ryutarou Ohbuch, Yushn Hata, Combnng Multresoluton Shape Descrptors for 3D Model Retreval, Accepted, Proc. WSCG 2006, Plzen, Czech Republc, Jan. 30~Feb. 3, 2006. Combnng Multresoluton Shape Descrptors
More informationFuzzy Filtering Algorithms for Image Processing: Performance Evaluation of Various Approaches
Proceedngs of the Internatonal Conference on Cognton and Recognton Fuzzy Flterng Algorthms for Image Processng: Performance Evaluaton of Varous Approaches Rajoo Pandey and Umesh Ghanekar Department of
More informationPetri Net Based Software Dependability Engineering
Proc. RELECTRONIC 95, Budapest, pp. 181-186; October 1995 Petr Net Based Software Dependablty Engneerng Monka Hener Brandenburg Unversty of Technology Cottbus Computer Scence Insttute Postbox 101344 D-03013
More informationLecture 5: Multilayer Perceptrons
Lecture 5: Multlayer Perceptrons Roger Grosse 1 Introducton So far, we ve only talked about lnear models: lnear regresson and lnear bnary classfers. We noted that there are functons that can t be represented
More informationExtraction of Fuzzy Rules from Trained Neural Network Using Evolutionary Algorithm *
Extracton of Fuzzy Rules from Traned Neural Network Usng Evolutonary Algorthm * Urszula Markowska-Kaczmar, Wojcech Trelak Wrocław Unversty of Technology, Poland kaczmar@c.pwr.wroc.pl, trelak@c.pwr.wroc.pl
More informationCAN COMPUTERS LEARN FASTER? Seyda Ertekin Computer Science & Engineering The Pennsylvania State University
CAN COMPUTERS LEARN FASTER? Seyda Ertekn Computer Scence & Engneerng The Pennsylvana State Unversty sertekn@cse.psu.edu ABSTRACT Ever snce computers were nvented, manknd wondered whether they mght be made
More informationKeywords - Wep page classification; bag of words model; topic model; hierarchical classification; Support Vector Machines
(IJCSIS) Internatonal Journal of Computer Scence and Informaton Securty, Herarchcal Web Page Classfcaton Based on a Topc Model and Neghborng Pages Integraton Wongkot Srura Phayung Meesad Choochart Haruechayasak
More informationFeature Kernel Functions: Improving SVMs Using High-level Knowledge
Feature Kernel Functons: Improvng SVMs Usng Hgh-level Knowledge Qang Sun, Gerald DeJong Department of Computer Scence, Unversty of Illnos at Urbana-Champagn qangsun@uuc.edu, dejong@cs.uuc.edu Abstract
More informationBackpropagation: In Search of Performance Parameters
Bacpropagaton: In Search of Performance Parameters ANIL KUMAR ENUMULAPALLY, LINGGUO BU, and KHOSROW KAIKHAH, Ph.D. Computer Scence Department Texas State Unversty-San Marcos San Marcos, TX-78666 USA ae049@txstate.edu,
More informationClassification Methods
1 Classfcaton Methods Ajun An York Unversty, Canada C INTRODUCTION Generally speakng, classfcaton s the acton of assgnng an object to a category accordng to the characterstcs of the object. In data mnng,
More information