Specialized Weighted Majority Statistical Techniques in Robotics (Fall 2009)


Statistical Techniques in Robotics (Fall 09)

Keywords: classifier ensembling, online learning, expert combination, machine learning

Javier Hernandez (javierhe@andrew.cmu.edu), Alberto Rodriguez (albertor@andrew.cmu.edu), Tomas Simon (tsimon@andrew.cmu.edu)

(Version of the paper for Statistical Techniques in Robotics.)

Abstract

The problem that we address is that of forecasting results by combining expert predictions. Standard ensemble methods do not explicitly consider the performance of experts to be variable across the problem domain. In contrast, we propose Specialized Weighted Majority. Specialists for each area of the feature space are created by augmenting experts with a classifier that chooses on which samples to vote. Our method can be seen as trading off the complexity of the obtained solutions and the amount of training data required. We compare the proposed method to Weighted Majority and SVMs on synthetic and real data.

1. Introduction

The problem that we address is that of expert or classifier combination, especially for applications in which the experts are humans providing forecasts. For this reason, in the following discussion we will use the word "experts" indistinctly when referring to both experts and classifiers. We are interested in deriving a method that provides constructive interference between the experts in order to do a better job at predicting than any single expert, but that also takes advantage of the characteristics of our problem domain to solve a simpler problem.

One can think of learning methods as a solution to the trade-off between complexity of solution and speed of learning. On one side of the spectrum we have simple, fast-learning, and fast-adapting algorithms such as Winnow and Weighted Majority (WM) voting variants, which compete against the best of the component experts. On the other end we have more complex but more powerful algorithms such as SVM or AdaBoost, which compete against combinations of classifiers. This added complexity often comes at the cost of increasing the number of training examples required to learn this complex solution.
We propose an in-between solution. Specialized Weighted Majority is a method that leverages Weighted Majority-like algorithms by modeling the performance of experts across the dimensions/features of the problem. It is known that when Weighted Majority¹ combines specialists (experts that are aware of their capabilities and can decide whether to vote or not), it competes against the best set of specialists rather than the best expert. In this paper we propose turning experts into specialists by directly learning their individual areas of expertise. Intuitively, the existence of these areas is a reasonable assumption in many cases, for example, when combining human experts. Our goal is to retain some of the advantages of simple learning algorithms such as Weighted Majority, while at the same time being capable of finding more complex solutions. By directly learning the specialties, we hope to develop a method that trades off between both extremes of the learning spectrum.

2. Previous Work

Previous work on both types of learning methods (simple vs. complex) is extensive. Simple methods have been shown to behave well in the presence of irrelevant features or noise, and to be able to adapt in a timely manner to dynamical problems.

¹ The Winnow weight-update rule is actually used in this Weighted Majority case.

Weighted Majority Voting (Littlestone & Warmuth, 1994) and Winnow (Littlestone, 1988; Littlestone, 1991) are the most representative algorithms in that category. Both are online multiplicative weight-updating methods that try to minimize the total number of mistakes within the learning process. The weight-adjustment process in both methods is designed so that they compete in performance with the best expert.

On the other side of the spectrum, we have ensemble methods such as AdaBoost (Freund, 1997; Schapire et al., 1998) and general-purpose learners such as SVM (Burges, 1998). In a more offline fashion, both methods combine experts by optimizing the weights of each expert not incrementally with every new sample, but in a more optimal and global sense with respect to all previously seen samples. By doing so, they are able to compete against linear combinations of experts.

In this study, we aim to do better than any of the experts. At the same time, we do not want to give up the good behavior and simplicity of Weighted Majority. With this in mind, we have devised a method that uses the concept of specialists to separate the learning process into two stages. In the first stage, a specialist is constructed from an expert by adding a classifier that predicts whether the expert should vote or not. The second, ensembling stage uses a small modification of Weighted Majority (the Winnow weight-update rule) to combine specialists. This rule has been shown to compete against the best set of specialists (Blum, 1995; Blum, 1996), choosing the best expert for each area of the input space.

3. Creating specialists from experts

The problem setting that concerns us is that of emitting a prediction ŷ_t (from the set of classes C) for each sample x_t ∈ R^D (D features associated to each sample). The global prediction is based on the predictions made by our N experts², e_i(x_t) ∈ C. Our ground truth consists of correctly labeled pairs (x_t, y_t), and we want to combine the predictions of the experts so as to minimize the number of mistakes.
In standard Weighted Majority voting, ŷ_t is assigned such that:

    ŷ_t = arg max_{c ∈ C} Σ_{i : e_i(x_t) = c} w_i^t        (1)

The weights w_i^t are updated multiplicatively according to the Winnow algorithm (Littlestone, 1988; Littlestone, 1991): if an expert makes a mistake on a sample, its weight is multiplied by a factor β ∈ (0, 1), and if (and only if) the global algorithm makes a mistake, i.e., ŷ ≠ y, the weights of those experts that voted correctly are divided by β. The same weight-update algorithm can be applied to specialists instead of experts. In the following sections we discuss two methods to create specialists from experts. Section 3.1 describes a naive approach, while Sec. 3.2 abstracts this concept to learn a more general category of areas of expertise.

² We can assume that the experts have much more information at their disposal than we can encode in our features x_t. Making e_i a function of x_t is for notational convenience only.

3.1. Naive approach: Feature Specialists (FS)

Let us assume that the features x_t describing each sample are binary. Suppose that it is also reasonable to assume that for each state of a given feature, one of the experts will perform better than all others. In this scenario, one way to construct specialists is as the Cartesian product of features and experts: for each expert and each feature, two new specialists are created; one casts a vote (the same vote as the original expert) only when the feature is active, the other only when the feature is inactive. The set of N experts is then transformed into 2DN specialists. This can be thought of as the naive approach, attempting to find the best expert for each feature independently.

3.2. Explicit discovery of specialties (SWM)

In the previous method, the search for the best expert for each feature can be seen as an attempt to find under what conditions each one of the original experts performs well. This abstraction allows for the separation of the learning problem into two smaller sub-problems. In SWM, we first learn under what conditions the expert performs well.
In a posterior stage we combine the predictions of only those experts whose learned areas of expertise include the specific sample under consideration. The previous decision function is modified as follows:

    ŷ_t = arg max_{c ∈ C} Σ_{i : e_i(x_t) = c} f_i w_i^t        (2)

Here, the filter f_i ∈ {0, 1} selects those experts that should vote, and will be a function of the features x_t. In our approach, this function will be a classifier trained on previously seen examples, and will aim to predict whether the expert will be correct or not. In particular, we choose the filter to be a linear SVM decision function. See Algorithm 1 for an outline of the process.
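One round of this process, the filtered vote of Eq. (2) followed by the Winnow-style weight update, can be sketched in runnable form as follows. This is our illustrative reconstruction, not the authors' code: the filters here are plain callables standing in for the per-expert linear-SVM decision functions, and all names are ours.

```python
def swm_round(experts, filters, weights, x, y, beta=0.5):
    """One online round of Specialized Weighted Majority.

    experts: callables x -> class label; filters: callables x -> 0/1;
    weights: list of per-expert weights, updated in place.
    Returns the global prediction for sample x with true label y.
    """
    active = [i for i, f in enumerate(filters) if f(x)]
    votes = {}
    for i in active:  # Eq. (2): weighted vote over filtered experts only
        c = experts[i](x)
        votes[c] = votes.get(c, 0.0) + weights[i]
    y_hat = max(votes, key=votes.get) if votes else None
    for i in active:  # penalize specialists that voted and were wrong
        if experts[i](x) != y:
            weights[i] *= beta
    if y_hat != y:  # on a global mistake, reward the correct voters
        for i in active:
            if experts[i](x) == y:
                weights[i] /= beta
    return y_hat

# Toy usage: expert 0 always predicts the true label, expert 1 never does.
experts = [lambda x: x[0], lambda x: 1 - x[0]]
filters = [lambda x: 1, lambda x: 1]      # stand-in: both always vote
weights = [1.0, 1.0]
for x in ([0], [1], [0], [1]):
    swm_round(experts, filters, weights, x, y=x[0])
assert weights[0] > weights[1]            # the reliable expert dominates
```

After four rounds the always-wrong expert has been halved four times (weight 0.5⁴), while the reliable expert keeps its initial weight; this is the behavior Algorithm 1 formalizes.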

Algorithm 1
 1: Initialize w_i^1 = 1 for i = 1...N
 2: for t = 1...M do
 3:   for i = 1...N do
 4:     model_i = train(x_{1:t-1}, e_i(x_{1:t-1}) == y_{1:t-1})
 5:     f_i = test(x_t, model_i)
 6:   end for
 7:   ŷ_t = arg max_{c ∈ C} Σ_{i : e_i(x_t) = c} f_i w_i^t
 8:   Penalize classifiers with false positives: for all f_i = 1 s.t. e_i(x_t) ≠ y_t, w_i^{t+1} ← w_i^t β.
 9:   if misclassification then
10:     Reward classifiers that predicted correctly: for all f_i = 1 s.t. e_i(x_t) = y_t, w_i^{t+1} ← w_i^t (1/β).
11:   end if
12: end for

In the pseudo-code, the function train(...) represents training a classifier model on previous data with the desired output, while f_i = test(x_t, model_i) evaluates this classifier on the current sample. The naive procedure (Feature Specialists) assumes that the features cleanly partition the areas of expertise. By directly learning the expertise we remove this assumption, and we can hope for a more accurate description of these areas.

3.3. Dimensionality Analysis

In posterior results, as in Figure 1, we see that Specialized Weighted Majority achieves better results than learning methods on both ends of the learning spectrum. Although the dimensionality of simple methods such as Weighted Majority is small, their performance is constrained by the best expert. On the other end, complex methods such as SVM can potentially obtain better solutions; however, they are hindered by the dimensionality of the problem they try to solve. From a dimensionality point of view, it is fair to compare SWM both with the FS approach and with SVM, representatives of the two categories of learning methods:

Feature Specialists: Although the algorithm remains a purely multiplicative weight-update method and the learning speed should be fast, the dimensionality of the problem jumps from N to 2DN. This greatly increases the amount of data required to find a satisfying solution.

SVM: Suppose we try to find an optimal combination of the experts by feeding a linear SVM both the expert predictions and the original features as inputs. The dimension of the problem is then N + D.
SWM: Assuming the local classifiers for each one of the experts are implemented by a linear SVM, the number of parameters to learn is (D + 1)N. However, the separation of the problem into two stages reduces the complexity of the learning problem by decomposing it into N smaller sub-problems of dimension D. The decomposition succeeds in creating smaller, easier sub-problems of lower dimensionality that can hopefully be solved in a quicker fashion.

4. Experimental Results

4.1. Synthetic Data

In order to evaluate and compare the methods in an ideal scenario, we have created synthetic data where the areas of expertise of each one of the experts can be specified.

4.1.1. Data Generation

The dataset has the following parameters:

N - the number of experts.
K - the number of output classes to predict.
M - the number of trials/samples.
D - the number of features associated with each sample (dimension of the input space). For this data, we have chosen binary features.

The values of the features and the output classes are generated in a purely random fashion, enforcing the absence of correlation between features x_t and output y_t. The predictions of the experts for each trial are randomly generated, allowing for a certain probability of being correct for each expert. This probability depends on two parameters: ε_g (capturing the general knowledge of the expert) and ε_s (capturing the specific expertise, depending on the area of the input space x_t). For any given expert, the value of its general knowledge ε_g is drawn uniformly from the interval [0, ε̂_g]. Its specific knowledge ε_{s,i} along each one of the features of the input space is drawn uniformly from the interval [0, ε̂_s]. Then, for any given sample (x, y), the probability of giving a correct prediction conditioned on feature i is defined as:

    P[correct | x_i = 1] = (1/K + ε_g) + ε_{s,i} = P_g + ε_{s,i}
    P[correct | x_i = 0] = P_g − ε_{s,i}
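This generation scheme can be sketched as follows (our reading of the description; the function and parameter names are ours). Note that the two conditional probabilities average back to P_g, in line with the unconditional P[correct] being independent of ε_s, as derived next.

```python
import random

def make_expert(K, eps_g_max, eps_s_max, D, rng):
    """Sample one synthetic expert: a general skill eps_g plus a
    per-feature specialization eps_s[i], as described in the text."""
    eps_g = rng.uniform(0.0, eps_g_max)
    eps_s = [rng.uniform(0.0, eps_s_max) for _ in range(D)]
    p_g = 1.0 / K + eps_g  # unconditional probability of being correct

    def p_correct(i, x_i):
        # P[correct | x_i = 1] = P_g + eps_s[i];  P[correct | x_i = 0] = P_g - eps_s[i]
        return p_g + eps_s[i] if x_i == 1 else p_g - eps_s[i]

    return p_g, p_correct

rng = random.Random(0)
p_g, p_correct = make_expert(K=2, eps_g_max=0.1, eps_s_max=0.2, D=10, rng=rng)
for i in range(10):
    # Averaging over the two (equally likely) feature states recovers P_g.
    assert abs((p_correct(i, 1) + p_correct(i, 0)) / 2 - p_g) < 1e-12
```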

We further use conditional independence between features in order to calculate the probability of being correct on each sample:

    P[correct | x] = P[x | corr] P[corr] / P[x]
                   = P[corr] Π_i P[x_i | corr] / P[x]
                   = (1 / (P[x] P[corr]^{D−1})) Π_i P[x_i | corr] P[corr]
                   = (1 / (P[x] P[corr]^{D−1})) Π_i P[corr | x_i] P[x_i]
    P[correct | x] = (1 / (2^D P[x] P[corr]^{D−1})) Π_i P[corr | x_i]

With a similar derivation, the overall probability of being correct (not conditioned) can be shown to be independent of ε_s and equal to:

    P[correct] = P_g = 1/K + ε_g

4.1.2. Results

In this section we compare performance with four methods: Best Expert, Weighted Majority, Feature Specialists, and SVMs. For the SVM classifier, we use the LIBSVM (Chang & Lin, 2001) implementation with an RBF kernel, and all of the hyper-parameters (e.g., kernel width, C) were tuned using cross-validation. In this case, both sample features x_{1:t−1} and expert predictions e_{1:N}(x_{1:t−1}) are used as training data.

Figure 1 shows the performance, measured as percentage of accumulated mistakes, for a synthetic dataset with parameters N (experts), M (samples), D = 10 (features), and K = 2 (classes). The y-axis varies the global knowledge of each expert (ε_g), while the x-axis shows variations in the amount of specialization of each expert (ε_s). As expected, we can see that Weighted Majority is unable to learn expert specialties and its number of mistakes is roughly aligned with the x-axis. More complicated methods (second row) are capable of taking advantage of the specialties, as can be seen in the decreasing number of mistakes across the x-axis. The complexity of SVMs requires larger values of ε_s to perform as well as the proposed methods (or, alternatively, a larger number of training samples). Likewise, the Feature Specialists are not well suited for this data due to the large number of active features in each sample. As hoped, our method is able to capture expert specialties and outperform the best expert.

4.2. Application: Sports Betting

An example problem on which to apply the proposed method is that of sports betting.
The result to be predicted is the winner of a match, and the experts are a group of sports fans offering their predictions. The main assumption is that certain people might be better at predicting the outcome of certain games, by virtue of tracking the performance of the particular teams more closely. Additionally, fans of specific teams might have unrealistic expectations or biases when their preferred team is playing.

The parameters of this dataset are: N (sports fans), M = 273 (matches), D = 20 (teams), and K = 3 (team A wins, team B wins, or tie). Each feature is x_{t,i} ∈ {0, 1}, where 1 indicates the team is playing and 0 otherwise. Therefore, any given sample contains 2 features set to one and 18 features set to zero.

Experiments on this dataset have shown that all algorithms perform similarly to the best expert after the 273 matches (142 ± 3 mistakes). Although far from the worst expert (170 mistakes), the differences between algorithms are not significant enough to draw strong conclusions. In order to understand these results, we produced a similar synthetic dataset with the same sparsity pattern (see results in Figure 2). There is a considerable decrease in the performance of SWM and SVMs, while there is an improvement in the learning of FS. This can be interpreted as SVMs having more difficulty in learning from highly sparse data (due to there being less information in each sample). This directly affects the performance of our method, since SVMs are used to represent the specialties. On the other hand, the simplicity of the Feature Specialists (combined with the fact that very few features are active on each sample) allows better learning. Because the performance on the real data does not reflect the differences observed on sparse synthetic data, we believe this dataset lacks clear specialties (i.e., users predict at random more often than not).

5. Conclusions

We have shown how turning a set of experts into specialists can lead to a significant gain.
This was accomplished by augmenting experts with knowledge about their performance in different areas of the feature space. Compared to simpler combination methods such as Weighted Majority, our method offers the

Figure 1. Percentage of accumulated mistakes on the synthetic data. Panels: (a) Best Expert, (b) Average Expert, (c) Weighted Majority, (d) Feature Specialists, (e) SWM, (f) Support Vector Machines. In each panel the x-axis is ε_s and the y-axis is P(correct) = 1/K + ε_g.

advantage of being able to express a larger set of target concepts. Compared to more general learning methods commonly used in stacking or cascading, our method's simplicity and use of domain knowledge result in faster learning (in the sense of requiring fewer training examples).

These conclusions have been extracted from plausibly generated (albeit still contrived) artificial data. Empirical support for our conclusions was not found in the real-data application on which we tested. Due to the small amount of available data and the similar performance of all tested methods on this set (and, indeed, the similar performance of all available experts), we cannot draw strong conclusions from this experiment. It might be the case that our initial hypothesis for this data was wrong, that there are no areas of expertise among sports-betting users, but it is just as plausible that not enough training data was available for any of the considered methods to learn these differences.

There are two lines of future work:

- Exploring techniques to minimize the total number of mistakes in a more global fashion. Although this would increase the dimensionality of the problem, methodologies for combining global results with local ones have proven useful in the past. Our formulation shows a hierarchy that would easily allow such a combination.
- Testing the proposed method in different applications, with more complete and suitable datasets. Other applications we have considered are weather and stock-market prediction.

References

Blum, A. (1995). Empirical support for Winnow and Weighted-Majority algorithms: Results on a calendar scheduling domain. Machine Learning. Morgan Kaufmann.
Blum, A. (1996). On-line algorithms in machine learning. In Proceedings of the Workshop on On-Line Algorithms, Dagstuhl. Springer.

Burges, C. J. (1998). A tutorial on support vector machines for pattern recognition. Data Mining and Knowledge Discovery, 2.

Chang, C.-C., & Lin, C.-J. (2001). LIBSVM: A library for support vector machines.

Figure 2. Performance of the algorithms on sparse synthetic feature data, where only two features are set to 1 on any given sample. Panels: (a) Feature Specialists, (b) SWM, (c) Support Vector Machines. In each panel the x-axis is ε_s and the y-axis is P(correct) = 1/K + ε_g.

Freund, Y. (1997). A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences, 55.

Gangardiwala, A., & Polikar, R. (2005). Dynamically weighted majority voting for incremental learning and comparison of three boosting based approaches. Proceedings of the 2005 IEEE International Joint Conference on Neural Networks.

Littlestone, N. (1988). Learning quickly when irrelevant attributes abound: A new linear-threshold algorithm. Machine Learning.

Littlestone, N. (1991). Redundant noisy attributes, attribute errors, and linear-threshold learning using Winnow. Annual Workshop on Computational Learning Theory.

Littlestone, N., & Warmuth, M. (1994). The weighted majority algorithm. Information and Computation, 108.

Schapire, R. E., Freund, Y., Bartlett, P., & Lee, W. S. (1998). Boosting the margin: A new explanation for the effectiveness of voting methods. The Annals of Statistics, 26.


More information

Mathematics 256 a course in differential equations for engineering students

Mathematics 256 a course in differential equations for engineering students Mathematcs 56 a course n dfferental equatons for engneerng students Chapter 5. More effcent methods of numercal soluton Euler s method s qute neffcent. Because the error s essentally proportonal to the

More information

Term Weighting Classification System Using the Chi-square Statistic for the Classification Subtask at NTCIR-6 Patent Retrieval Task

Term Weighting Classification System Using the Chi-square Statistic for the Classification Subtask at NTCIR-6 Patent Retrieval Task Proceedngs of NTCIR-6 Workshop Meetng, May 15-18, 2007, Tokyo, Japan Term Weghtng Classfcaton System Usng the Ch-square Statstc for the Classfcaton Subtask at NTCIR-6 Patent Retreval Task Kotaro Hashmoto

More information

Hermite Splines in Lie Groups as Products of Geodesics

Hermite Splines in Lie Groups as Products of Geodesics Hermte Splnes n Le Groups as Products of Geodescs Ethan Eade Updated May 28, 2017 1 Introducton 1.1 Goal Ths document defnes a curve n the Le group G parametrzed by tme and by structural parameters n the

More information

Assignment # 2. Farrukh Jabeen Algorithms 510 Assignment #2 Due Date: June 15, 2009.

Assignment # 2. Farrukh Jabeen Algorithms 510 Assignment #2 Due Date: June 15, 2009. Farrukh Jabeen Algorthms 51 Assgnment #2 Due Date: June 15, 29. Assgnment # 2 Chapter 3 Dscrete Fourer Transforms Implement the FFT for the DFT. Descrbed n sectons 3.1 and 3.2. Delverables: 1. Concse descrpton

More information

The Greedy Method. Outline and Reading. Change Money Problem. Greedy Algorithms. Applications of the Greedy Strategy. The Greedy Method Technique

The Greedy Method. Outline and Reading. Change Money Problem. Greedy Algorithms. Applications of the Greedy Strategy. The Greedy Method Technique //00 :0 AM Outlne and Readng The Greedy Method The Greedy Method Technque (secton.) Fractonal Knapsack Problem (secton..) Task Schedulng (secton..) Mnmum Spannng Trees (secton.) Change Money Problem Greedy

More information

Outline. Type of Machine Learning. Examples of Application. Unsupervised Learning

Outline. Type of Machine Learning. Examples of Application. Unsupervised Learning Outlne Artfcal Intellgence and ts applcatons Lecture 8 Unsupervsed Learnng Professor Danel Yeung danyeung@eee.org Dr. Patrck Chan patrckchan@eee.org South Chna Unversty of Technology, Chna Introducton

More information

UB at GeoCLEF Department of Geography Abstract

UB at GeoCLEF Department of Geography   Abstract UB at GeoCLEF 2006 Mguel E. Ruz (1), Stuart Shapro (2), June Abbas (1), Slva B. Southwck (1) and Davd Mark (3) State Unversty of New York at Buffalo (1) Department of Lbrary and Informaton Studes (2) Department

More information

Data Mining For Multi-Criteria Energy Predictions

Data Mining For Multi-Criteria Energy Predictions Data Mnng For Mult-Crtera Energy Predctons Kashf Gll and Denns Moon Abstract We present a data mnng technque for mult-crtera predctons of wnd energy. A mult-crtera (MC) evolutonary computng method has

More information

CS 534: Computer Vision Model Fitting

CS 534: Computer Vision Model Fitting CS 534: Computer Vson Model Fttng Sprng 004 Ahmed Elgammal Dept of Computer Scence CS 534 Model Fttng - 1 Outlnes Model fttng s mportant Least-squares fttng Maxmum lkelhood estmaton MAP estmaton Robust

More information

A mathematical programming approach to the analysis, design and scheduling of offshore oilfields

A mathematical programming approach to the analysis, design and scheduling of offshore oilfields 17 th European Symposum on Computer Aded Process Engneerng ESCAPE17 V. Plesu and P.S. Agach (Edtors) 2007 Elsever B.V. All rghts reserved. 1 A mathematcal programmng approach to the analyss, desgn and

More information

Incremental Learning with Support Vector Machines and Fuzzy Set Theory

Incremental Learning with Support Vector Machines and Fuzzy Set Theory The 25th Workshop on Combnatoral Mathematcs and Computaton Theory Incremental Learnng wth Support Vector Machnes and Fuzzy Set Theory Yu-Mng Chuang 1 and Cha-Hwa Ln 2* 1 Department of Computer Scence and

More information

Machine Learning. Topic 6: Clustering

Machine Learning. Topic 6: Clustering Machne Learnng Topc 6: lusterng lusterng Groupng data nto (hopefully useful) sets. Thngs on the left Thngs on the rght Applcatons of lusterng Hypothess Generaton lusters mght suggest natural groups. Hypothess

More information

Bayesian Classifier Combination

Bayesian Classifier Combination Bayesan Classfer Combnaton Zoubn Ghahraman and Hyun-Chul Km Gatsby Computatonal Neuroscence Unt Unversty College London London WC1N 3AR, UK http://www.gatsby.ucl.ac.uk {zoubn,hckm}@gatsby.ucl.ac.uk September

More information

Benchmarking of Update Learning Strategies on Digit Classifier Systems

Benchmarking of Update Learning Strategies on Digit Classifier Systems 2012 Internatonal Conference on Fronters n Handwrtng Recognton Benchmarkng of Update Learnng Strateges on Dgt Classfer Systems D. Barbuzz, D. Impedovo, G. Prlo Dpartmento d Informatca Unverstà degl Stud

More information

A MOVING MESH APPROACH FOR SIMULATION BUDGET ALLOCATION ON CONTINUOUS DOMAINS

A MOVING MESH APPROACH FOR SIMULATION BUDGET ALLOCATION ON CONTINUOUS DOMAINS Proceedngs of the Wnter Smulaton Conference M E Kuhl, N M Steger, F B Armstrong, and J A Jones, eds A MOVING MESH APPROACH FOR SIMULATION BUDGET ALLOCATION ON CONTINUOUS DOMAINS Mark W Brantley Chun-Hung

More information

Compiler Design. Spring Register Allocation. Sample Exercises and Solutions. Prof. Pedro C. Diniz

Compiler Design. Spring Register Allocation. Sample Exercises and Solutions. Prof. Pedro C. Diniz Compler Desgn Sprng 2014 Regster Allocaton Sample Exercses and Solutons Prof. Pedro C. Dnz USC / Informaton Scences Insttute 4676 Admralty Way, Sute 1001 Marna del Rey, Calforna 90292 pedro@s.edu Regster

More information

Journal of Process Control

Journal of Process Control Journal of Process Control (0) 738 750 Contents lsts avalable at ScVerse ScenceDrect Journal of Process Control j ourna l ho me pag e: wwwelsevercom/locate/jprocont Decentralzed fault detecton and dagnoss

More information

Analysis of Continuous Beams in General

Analysis of Continuous Beams in General Analyss of Contnuous Beams n General Contnuous beams consdered here are prsmatc, rgdly connected to each beam segment and supported at varous ponts along the beam. onts are selected at ponts of support,

More information

High-Boost Mesh Filtering for 3-D Shape Enhancement

High-Boost Mesh Filtering for 3-D Shape Enhancement Hgh-Boost Mesh Flterng for 3-D Shape Enhancement Hrokazu Yagou Λ Alexander Belyaev y Damng We z Λ y z ; ; Shape Modelng Laboratory, Unversty of Azu, Azu-Wakamatsu 965-8580 Japan y Computer Graphcs Group,

More information

Learning-based License Plate Detection on Edge Features

Learning-based License Plate Detection on Edge Features Learnng-based Lcense Plate Detecton on Edge Features Wng Teng Ho, Woo Hen Yap, Yong Haur Tay Computer Vson and Intellgent Systems (CVIS) Group Unverst Tunku Abdul Rahman, Malaysa wngteng_h@yahoo.com, woohen@yahoo.com,

More information

Sum of Linear and Fractional Multiobjective Programming Problem under Fuzzy Rules Constraints

Sum of Linear and Fractional Multiobjective Programming Problem under Fuzzy Rules Constraints Australan Journal of Basc and Appled Scences, 2(4): 1204-1208, 2008 ISSN 1991-8178 Sum of Lnear and Fractonal Multobjectve Programmng Problem under Fuzzy Rules Constrants 1 2 Sanjay Jan and Kalash Lachhwan

More information

NAG Fortran Library Chapter Introduction. G10 Smoothing in Statistics

NAG Fortran Library Chapter Introduction. G10 Smoothing in Statistics Introducton G10 NAG Fortran Lbrary Chapter Introducton G10 Smoothng n Statstcs Contents 1 Scope of the Chapter... 2 2 Background to the Problems... 2 2.1 Smoothng Methods... 2 2.2 Smoothng Splnes and Regresson

More information

Course Introduction. Algorithm 8/31/2017. COSC 320 Advanced Data Structures and Algorithms. COSC 320 Advanced Data Structures and Algorithms

Course Introduction. Algorithm 8/31/2017. COSC 320 Advanced Data Structures and Algorithms. COSC 320 Advanced Data Structures and Algorithms Course Introducton Course Topcs Exams, abs, Proects A quc loo at a few algorthms 1 Advanced Data Structures and Algorthms Descrpton: We are gong to dscuss algorthm complexty analyss, algorthm desgn technques

More information

Private Information Retrieval (PIR)

Private Information Retrieval (PIR) 2 Levente Buttyán Problem formulaton Alce wants to obtan nformaton from a database, but she does not want the database to learn whch nformaton she wanted e.g., Alce s an nvestor queryng a stock-market

More information

Related-Mode Attacks on CTR Encryption Mode

Related-Mode Attacks on CTR Encryption Mode Internatonal Journal of Network Securty, Vol.4, No.3, PP.282 287, May 2007 282 Related-Mode Attacks on CTR Encrypton Mode Dayn Wang, Dongda Ln, and Wenlng Wu (Correspondng author: Dayn Wang) Key Laboratory

More information

Complex Numbers. Now we also saw that if a and b were both positive then ab = a b. For a second let s forget that restriction and do the following.

Complex Numbers. Now we also saw that if a and b were both positive then ab = a b. For a second let s forget that restriction and do the following. Complex Numbers The last topc n ths secton s not really related to most of what we ve done n ths chapter, although t s somewhat related to the radcals secton as we wll see. We also won t need the materal

More information

Face Detection with Deep Learning

Face Detection with Deep Learning Face Detecton wth Deep Learnng Yu Shen Yus122@ucsd.edu A13227146 Kuan-We Chen kuc010@ucsd.edu A99045121 Yzhou Hao y3hao@ucsd.edu A98017773 Mn Hsuan Wu mhwu@ucsd.edu A92424998 Abstract The project here

More information

Efficient Text Classification by Weighted Proximal SVM *

Efficient Text Classification by Weighted Proximal SVM * Effcent ext Classfcaton by Weghted Proxmal SVM * Dong Zhuang 1, Benyu Zhang, Qang Yang 3, Jun Yan 4, Zheng Chen, Yng Chen 1 1 Computer Scence and Engneerng, Bejng Insttute of echnology, Bejng 100081, Chna

More information

Learning Non-Linearly Separable Boolean Functions With Linear Threshold Unit Trees and Madaline-Style Networks

Learning Non-Linearly Separable Boolean Functions With Linear Threshold Unit Trees and Madaline-Style Networks In AAAI-93: Proceedngs of the 11th Natonal Conference on Artfcal Intellgence, 33-1. Menlo Park, CA: AAAI Press. Learnng Non-Lnearly Separable Boolean Functons Wth Lnear Threshold Unt Trees and Madalne-Style

More information

PRÉSENTATIONS DE PROJETS

PRÉSENTATIONS DE PROJETS PRÉSENTATIONS DE PROJETS Rex Onlne (V. Atanasu) What s Rex? Rex s an onlne browser for collectons of wrtten documents [1]. Asde ths core functon t has however many other applcatons that make t nterestng

More information

y and the total sum of

y and the total sum of Lnear regresson Testng for non-lnearty In analytcal chemstry, lnear regresson s commonly used n the constructon of calbraton functons requred for analytcal technques such as gas chromatography, atomc absorpton

More information

X- Chart Using ANOM Approach

X- Chart Using ANOM Approach ISSN 1684-8403 Journal of Statstcs Volume 17, 010, pp. 3-3 Abstract X- Chart Usng ANOM Approach Gullapall Chakravarth 1 and Chaluvad Venkateswara Rao Control lmts for ndvdual measurements (X) chart are

More information

Performance Evaluation of Information Retrieval Systems

Performance Evaluation of Information Retrieval Systems Why System Evaluaton? Performance Evaluaton of Informaton Retreval Systems Many sldes n ths secton are adapted from Prof. Joydeep Ghosh (UT ECE) who n turn adapted them from Prof. Dk Lee (Unv. of Scence

More information

Face Recognition University at Buffalo CSE666 Lecture Slides Resources:

Face Recognition University at Buffalo CSE666 Lecture Slides Resources: Face Recognton Unversty at Buffalo CSE666 Lecture Sldes Resources: http://www.face-rec.org/algorthms/ Overvew of face recognton algorthms Correlaton - Pxel based correspondence between two face mages Structural

More information

The Codesign Challenge

The Codesign Challenge ECE 4530 Codesgn Challenge Fall 2007 Hardware/Software Codesgn The Codesgn Challenge Objectves In the codesgn challenge, your task s to accelerate a gven software reference mplementaton as fast as possble.

More information

Circuit Analysis I (ENGR 2405) Chapter 3 Method of Analysis Nodal(KCL) and Mesh(KVL)

Circuit Analysis I (ENGR 2405) Chapter 3 Method of Analysis Nodal(KCL) and Mesh(KVL) Crcut Analyss I (ENG 405) Chapter Method of Analyss Nodal(KCL) and Mesh(KVL) Nodal Analyss If nstead of focusng on the oltages of the crcut elements, one looks at the oltages at the nodes of the crcut,

More information

A Fusion of Stacking with Dynamic Integration

A Fusion of Stacking with Dynamic Integration A Fuson of Stackng wth Dynamc Integraton all Rooney, Davd Patterson orthern Ireland Knowledge Engneerng Laboratory Faculty of Engneerng, Unversty of Ulster Jordanstown, ewtownabbey, BT37 OQB, U.K. {nf.rooney,

More information

Extraction of Fuzzy Rules from Trained Neural Network Using Evolutionary Algorithm *

Extraction of Fuzzy Rules from Trained Neural Network Using Evolutionary Algorithm * Extracton of Fuzzy Rules from Traned Neural Network Usng Evolutonary Algorthm * Urszula Markowska-Kaczmar, Wojcech Trelak Wrocław Unversty of Technology, Poland kaczmar@c.pwr.wroc.pl, trelak@c.pwr.wroc.pl

More information

TECHNIQUE OF FORMATION HOMOGENEOUS SAMPLE SAME OBJECTS. Muradaliyev A.Z.

TECHNIQUE OF FORMATION HOMOGENEOUS SAMPLE SAME OBJECTS. Muradaliyev A.Z. TECHNIQUE OF FORMATION HOMOGENEOUS SAMPLE SAME OBJECTS Muradalyev AZ Azerbajan Scentfc-Research and Desgn-Prospectng Insttute of Energetc AZ1012, Ave HZardab-94 E-mal:aydn_murad@yahoocom Importance of

More information

Random Kernel Perceptron on ATTiny2313 Microcontroller

Random Kernel Perceptron on ATTiny2313 Microcontroller Random Kernel Perceptron on ATTny233 Mcrocontroller Nemanja Djurc Department of Computer and Informaton Scences, Temple Unversty Phladelpha, PA 922, USA nemanja.djurc@temple.edu Slobodan Vucetc Department

More information

Backpropagation: In Search of Performance Parameters

Backpropagation: In Search of Performance Parameters Bacpropagaton: In Search of Performance Parameters ANIL KUMAR ENUMULAPALLY, LINGGUO BU, and KHOSROW KAIKHAH, Ph.D. Computer Scence Department Texas State Unversty-San Marcos San Marcos, TX-78666 USA ae049@txstate.edu,

More information

Learning-Based Top-N Selection Query Evaluation over Relational Databases

Learning-Based Top-N Selection Query Evaluation over Relational Databases Learnng-Based Top-N Selecton Query Evaluaton over Relatonal Databases Lang Zhu *, Wey Meng ** * School of Mathematcs and Computer Scence, Hebe Unversty, Baodng, Hebe 071002, Chna, zhu@mal.hbu.edu.cn **

More information

A User Selection Method in Advertising System

A User Selection Method in Advertising System Int. J. Communcatons, etwork and System Scences, 2010, 3, 54-58 do:10.4236/jcns.2010.31007 Publshed Onlne January 2010 (http://www.scrp.org/journal/jcns/). A User Selecton Method n Advertsng System Shy

More information

Improving Web Image Search using Meta Re-rankers

Improving Web Image Search using Meta Re-rankers VOLUME-1, ISSUE-V (Aug-Sep 2013) IS NOW AVAILABLE AT: www.dcst.com Improvng Web Image Search usng Meta Re-rankers B.Kavtha 1, N. Suata 2 1 Department of Computer Scence and Engneerng, Chtanya Bharath Insttute

More information

Subspace clustering. Clustering. Fundamental to all clustering techniques is the choice of distance measure between data points;

Subspace clustering. Clustering. Fundamental to all clustering techniques is the choice of distance measure between data points; Subspace clusterng Clusterng Fundamental to all clusterng technques s the choce of dstance measure between data ponts; D q ( ) ( ) 2 x x = x x, j k = 1 k jk Squared Eucldean dstance Assumpton: All features

More information

Outline. Self-Organizing Maps (SOM) US Hebbian Learning, Cntd. The learning rule is Hebbian like:

Outline. Self-Organizing Maps (SOM) US Hebbian Learning, Cntd. The learning rule is Hebbian like: Self-Organzng Maps (SOM) Turgay İBRİKÇİ, PhD. Outlne Introducton Structures of SOM SOM Archtecture Neghborhoods SOM Algorthm Examples Summary 1 2 Unsupervsed Hebban Learnng US Hebban Learnng, Cntd 3 A

More information

Kent State University CS 4/ Design and Analysis of Algorithms. Dept. of Math & Computer Science LECT-16. Dynamic Programming

Kent State University CS 4/ Design and Analysis of Algorithms. Dept. of Math & Computer Science LECT-16. Dynamic Programming CS 4/560 Desgn and Analyss of Algorthms Kent State Unversty Dept. of Math & Computer Scence LECT-6 Dynamc Programmng 2 Dynamc Programmng Dynamc Programmng, lke the dvde-and-conquer method, solves problems

More information

Classifier Ensemble Design using Artificial Bee Colony based Feature Selection

Classifier Ensemble Design using Artificial Bee Colony based Feature Selection IJCSI Internatonal Journal of Computer Scence Issues, Vol. 9, Issue 3, No 2, May 2012 ISSN (Onlne): 1694-0814 www.ijcsi.org 522 Classfer Ensemble Desgn usng Artfcal Bee Colony based Feature Selecton Shunmugaprya

More information

Unsupervised Learning

Unsupervised Learning Pattern Recognton Lecture 8 Outlne Introducton Unsupervsed Learnng Parametrc VS Non-Parametrc Approach Mxture of Denstes Maxmum-Lkelhood Estmates Clusterng Prof. Danel Yeung School of Computer Scence and

More information

Modular PCA Face Recognition Based on Weighted Average

Modular PCA Face Recognition Based on Weighted Average odern Appled Scence odular PCA Face Recognton Based on Weghted Average Chengmao Han (Correspondng author) Department of athematcs, Lny Normal Unversty Lny 76005, Chna E-mal: hanchengmao@163.com Abstract

More information

EXTENDED BIC CRITERION FOR MODEL SELECTION

EXTENDED BIC CRITERION FOR MODEL SELECTION IDIAP RESEARCH REPORT EXTEDED BIC CRITERIO FOR ODEL SELECTIO Itshak Lapdot Andrew orrs IDIAP-RR-0-4 Dalle olle Insttute for Perceptual Artfcal Intellgence P.O.Box 59 artgny Valas Swtzerland phone +4 7

More information

NUMERICAL SOLVING OPTIMAL CONTROL PROBLEMS BY THE METHOD OF VARIATIONS

NUMERICAL SOLVING OPTIMAL CONTROL PROBLEMS BY THE METHOD OF VARIATIONS ARPN Journal of Engneerng and Appled Scences 006-017 Asan Research Publshng Network (ARPN). All rghts reserved. NUMERICAL SOLVING OPTIMAL CONTROL PROBLEMS BY THE METHOD OF VARIATIONS Igor Grgoryev, Svetlana

More information

A Statistical Model Selection Strategy Applied to Neural Networks

A Statistical Model Selection Strategy Applied to Neural Networks A Statstcal Model Selecton Strategy Appled to Neural Networks Joaquín Pzarro Elsa Guerrero Pedro L. Galndo joaqun.pzarro@uca.es elsa.guerrero@uca.es pedro.galndo@uca.es Dpto Lenguajes y Sstemas Informátcos

More information

Support Vector Machines. CS534 - Machine Learning

Support Vector Machines. CS534 - Machine Learning Support Vector Machnes CS534 - Machne Learnng Perceptron Revsted: Lnear Separators Bnar classfcaton can be veed as the task of separatng classes n feature space: b > 0 b 0 b < 0 f() sgn( b) Lnear Separators

More information

12/2/2009. Announcements. Parametric / Non-parametric. Case-Based Reasoning. Nearest-Neighbor on Images. Nearest-Neighbor Classification

12/2/2009. Announcements. Parametric / Non-parametric. Case-Based Reasoning. Nearest-Neighbor on Images. Nearest-Neighbor Classification Introducton to Artfcal Intellgence V22.0472-001 Fall 2009 Lecture 24: Nearest-Neghbors & Support Vector Machnes Rob Fergus Dept of Computer Scence, Courant Insttute, NYU Sldes from Danel Yeung, John DeNero

More information

Classification Based Mode Decisions for Video over Networks

Classification Based Mode Decisions for Video over Networks Classfcaton Based Mode Decsons for Vdeo over Networks Deepak S. Turaga and Tsuhan Chen Advanced Multmeda Processng Lab Tranng data for Inter-Intra Decson Inter-Intra Decson Regons pdf 6 5 6 5 Energy 4

More information

Spam Filtering Based on Support Vector Machines with Taguchi Method for Parameter Selection

Spam Filtering Based on Support Vector Machines with Taguchi Method for Parameter Selection E-mal Spam Flterng Based on Support Vector Machnes wth Taguch Method for Parameter Selecton We-Chh Hsu, Tsan-Yng Yu E-mal Spam Flterng Based on Support Vector Machnes wth Taguch Method for Parameter Selecton

More information

Face Recognition Based on SVM and 2DPCA

Face Recognition Based on SVM and 2DPCA Vol. 4, o. 3, September, 2011 Face Recognton Based on SVM and 2DPCA Tha Hoang Le, Len Bu Faculty of Informaton Technology, HCMC Unversty of Scence Faculty of Informaton Scences and Engneerng, Unversty

More information