Hierarchical Semantic Perceptron Grid based Neural Network
CAO Huai-hu, YU Zhen-wei, WANG Yin-yan


(Dept. of Computer, China University of Mining and Technology Beijing, Beijing 100083, China)

Abstract: This paper proposes a hierarchical semantic perceptron grid architecture based on neural networks. Semantic spotting is a key issue of this architecture; to solve it, we first formulate the problem and then propose a semantic neural network classifier frame. Experimental results for this scheme are presented; to evaluate its effectiveness, we compare it with SVM, NNet, KNN and NB classifiers, and its average experimental results are clearly superior to these conventional approaches.

Key words: semantic perceptron, grid, neural network, semantic classifier, machine learning

1. Introduction
Knowledge and Semantic Web technologies are evolving the Grid towards the Semantic Grid, to facilitate knowledge reuse and collaboration within a community of practice. The Semantic Grid concerns the way knowledge is acquired, used, retrieved, published and maintained to help e-scientists achieve their particular goals and objectives; knowledge is understood here as information applied to achieve a goal, solve a problem or enact a decision. It involves all three conceptual layers of the Grid: knowledge, information and computation/data [1,3]. To build a semantic perceptron grid, semantic spotting must first be achieved exactly, to form the basic semantic grid nodes. In this paper we propose a hierarchical semantic perceptron grid architecture based on neural networks (SPGN). It uses context at the lower layer to select the exact meaning of key words, and employs a combination of context, co-occurrence statistics and a thesaurus to group distributed but semantically related words into basic semantic nodes. The semantic nodes are then used to infer the semantics of an input document. To solve the semantic spotting problem, we formulate it and propose a semantic neural network classifier (SNNC) frame.
Experiments on the dataset demonstrate that the SNNC is able to capture the semantics, and it performs well on the semantic spotting task.

2. Problem description and theoretical framework
Semantic spotting is the problem of identifying the presence of a predefined semantic in a text document. More formally, given a set of n semantics together with a collection of documents, the task is to determine for each document the probability that one or more semantics are present in it. Semantic spotting may be used to automatically assign subject codes to newswire stories, filter electronic mail and online news, and pre-screen documents in information retrieval and information extraction applications. Semantic spotting, and the related problem of text categorization, has been a hot area of research for over a decade. A large number of techniques have been proposed to tackle the problem, including regression models, nearest-neighbor classification, Bayesian probabilistic models, decision trees, inductive rule learning, on-line learning and support vector machines (Yang & Liu, 1999; Tzeras & Hartmann, 1993) [6]. Most of these methods are word-based and consider only the relationships between the features and the semantics, but not the relationships among features. The semantic spotting theoretical framework model is shown in Fig. 1. The model has two basic components: gating networks and expert networks. The gating networks, located at the intermediate-level nodes of the tree, receive the input x (a vector of m input features representing a document) and produce a scalar output that weights the contribution of the child networks. The expert networks, located at the leaf nodes, receive the input x to produce an estimate of the output. This model may be seen as a cascade of networks that works in a bottom-up fashion: the input is first presented to the experts, which generate an output; then the outputs of the experts are combined by the second-level gates, generating a new output.
Finally, the outputs of the second-level gates are combined by the root gate to produce the appropriate result y (a vector of n components, where n is the number of outputs). The nodes in the tree represent the convex sum of the outputs of the child nodes, computed as

y^(n) = Σ_j g_j · y_j^(n)

where g_j is the output of the gate and y_j^(n) is the output of the corresponding child node. In the original model proposed by Jordan and Jacobs, all the networks in the tree are linear (perceptrons). The expert network produces its output y as a generalized linear function of the input x:

y = f(Ux)    (1)

where U is a weight matrix and f is a fixed continuous nonlinear function. The vector x is assumed to include a fixed component with value one, to allow for an intercept term. For binary classification problems, f(·) is the logistic function, in which case the expert outputs are interpreted as the log odds of success

under a Bernoulli probability model. Other models (e.g., multi-way classification, counting, rate estimation and survival estimation) are handled by making other choices for f(·). The gating networks are also generalized linear functions. The i-th output of the gating network is the softmax function of the intermediate variable ψ_i (Bridle 1989; McCullagh and Nelder 1989) [7]:

g_i = exp(ψ_i) / Σ_{j=1..l} exp(ψ_j)    (2)

[Fig. 1. Semantic spotting theoretical framework model: the input document x feeds local-perception experts and a global expert; their outputs are combined by second-level gating networks and a root gating network to produce y^(n).]

where l is the number of child nodes of the gating network, and the intermediate variable ψ_i is defined as

ψ_i = v_i^T x    (3)

where v_i is a weight vector and T is the transpose operation. The g_i are positive and sum to one for each x. They can be interpreted as providing a soft partitioning of the input space. The output vector at each nonterminal node of the tree is the weighted sum of the outputs of the children below that nonterminal. For example, the output at the i-th nonterminal in the second layer of the two-level tree in Fig. 1 is:

y_i^(n) = Σ_j g_{i.j} · y_{i.j}^(n)    (4)

where j = 1, ..., l, l is the number of child nodes connected to gate i, y_{i.j}^(n) is the output of expert j which is a child of gate i, and g_{i.j} is the j-th output of gate i. Note that since both the g's and the y's depend on the input x, the output is a nonlinear function of the input. The theoretical framework model is very flexible. One may choose a function f (Eq. 1) for the gate and expert decision modules that is appropriate for the application. Moreover, since gates and experts depend only upon the input, one may choose either bottom-up or top-down processing, whichever is appropriate for the problem. In our variation of the theoretical framework model we use a binary classification function at the gates. Also, we train and use our model top-down. The choice of this direction was made based upon the number of categories.
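The expert, gate and convex-combination computations described above can be sketched as follows. This is a minimal illustration only: the weights are random stand-ins rather than trained values, and the two-expert tree is hypothetical.

```python
import numpy as np

def expert_output(U, x):
    """Expert: generalized linear model y = f(Ux), with logistic f (Eq. 1)."""
    return 1.0 / (1.0 + np.exp(-U @ x))

def gate_output(V, x):
    """Gate: softmax over intermediate variables psi_i = v_i^T x (Eqs. 2-3)."""
    psi = V @ x
    e = np.exp(psi - psi.max())          # subtract max for numerical stability
    return e / e.sum()

def node_output(V, child_outputs, x):
    """Convex sum of child outputs, weighted by the gate (Eq. 4)."""
    g = gate_output(V, x)                # one weight per child, sums to 1
    return sum(gi * yi for gi, yi in zip(g, child_outputs))

# Tiny two-expert, one-gate tree with hypothetical random weights.
rng = np.random.default_rng(0)
x = np.append(rng.normal(size=4), 1.0)   # input with the fixed intercept component
U1, U2 = rng.normal(size=(2, 5)), rng.normal(size=(2, 5))
V = rng.normal(size=(2, 5))              # gate weights, one row per child
y = node_output(V, [expert_output(U1, x), expert_output(U2, x)], x)
```

Because the gate outputs are positive and sum to one, y is a convex combination of the expert outputs and stays inside their range, as the text's soft-partitioning interpretation requires.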
As will be described in detail later, we have a large collection of categories in the dataset, and moreover we aim towards a scalable categorization procedure that can handle large classification schemes. A bottom-up approach would be very inefficient, given that there is an expert module for each category. The computational requirements are especially severe when the decision module at each expert node is a neural network. In contrast, our top-down approach, along with a binary classification at each gate, restricts the number of expert networks to be activated for a given document. It may be observed that in studies using the bottom-up approach the number of categories was small, for example 6 for handwriting recognition and 5 for speech recognition (Waterhouse 1997) [8]. In this study we choose the more efficient direction

and postpone the exploration of bottom-up processing for future research. In our model, given a binary function, a gate is trained to yield a value of 1 if the example document is categorized with any of its descendant concepts. During testing, the categorization task starts at the root node, and its gate decides whether the most general concept is present in the document. If this is true, then all the second-level nodes make their decisions, and the process repeats until it reaches the leaf nodes. Observe that only the experts connected to gates that output the value 1 are activated, thus reducing the response time during classification. A key difference depicted is at the gating networks, which use binary functions. To give a statistical interpretation of our model, we define z_1, ..., z_J as the path of gates from the root node to the gate z_J that is the parent of an expert ε_k that assigns category k. Let x (a vector of m input features) represent a document, and y (a vector of n components) the output vector of the categories assigned to the document. The probabilistic interpretation of our hierarchical model is as follows:

P(y_k = 1 | x) = P(z_1 = 1 | x) · Π_{j=2..J} P(z_j = 1 | z_{j-1} = 1, x) · P(ε_k = 1 | z_J = 1, x),  k = 1, ..., n    (5)

Given an appropriate set of features and a training set of manually categorized documents, the backpropagation network learns to make the appropriate decisions. Observe that for experts and gates the set of positive examples is different: the set of positive examples for an expert is a subset of the positive examples of any ancestor gate. As a consequence, two identical neural networks trained with these different subsets learn different probabilistic functions. The backpropagation neural network as an expert node learns to use the input to estimate the desired output value (category), while as a gate it computes the confidence value of the combined outputs of its children.

3. Implementation of the hierarchical semantic perceptron grid based neural network
It may be observed that network architecture and feature selection methods must be studied in combination.
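The top-down testing procedure described above can be sketched as follows; the tree, its constant decision functions and the 0.5 threshold are hypothetical stand-ins for trained gates and experts.

```python
def classify_top_down(node, x, threshold=0.5):
    """Return the categories assigned to x, expanding only subtrees whose gate fires.

    `node` is a dict: internal nodes have 'gate' (a callable x -> probability)
    and 'children'; leaves have 'expert' (a callable) and 'category'.
    """
    if 'expert' in node:                 # leaf: the expert assigns its category
        return [node['category']] if node['expert'](x) >= threshold else []
    if node['gate'](x) < threshold:      # gate rejects: prune the whole subtree
        return []
    out = []
    for child in node['children']:
        out.extend(classify_top_down(child, x, threshold))
    return out

# Toy two-level tree with constant decision functions, for illustration only.
tree = {
    'gate': lambda x: 0.9,
    'children': [
        {'expert': lambda x: 0.8, 'category': 'A'},
        {'gate': lambda x: 0.2,   # this subtree is pruned; its expert never runs
         'children': [{'expert': lambda x: 0.99, 'category': 'B'}]},
    ],
}
print(classify_top_down(tree, x=None))   # -> ['A']
```

The pruned subtree's expert is never evaluated, which is exactly the response-time saving the text attributes to binary gates.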
In fact, at a basic level, feature selection is one of the factors that define the configuration of the network. Given the many complex combinations in terms of feature selection methods and numbers of nodes in the different layers for both the expert and gating networks, we approach the problem in stages [9]. We follow a top-down approach that optimizes the gates and then the experts. First we focused our attention on the gating networks. We used experts with 15 input features and 50 nodes in the hidden layers. We then explored 5, 10, 15, 50, 100 and 150 input features for the gating networks, with a hidden layer that had twice the number of input nodes. We also tried all three different feature selection methods (Mutual Information, Odds Ratio and Correlation Coefficient). This experiment was done only on the high-frequency categories, because they allow appropriate training of the neural networks and also because the variance between different training runs is smaller than the variance for lower-frequency categories. Interestingly, the differences between the three feature selection methods on the gating networks are not significant. It is possible for this slight decrease to be caused by the limit on the number of iterations (1,000) that a network was allowed to run during training; usually a larger network needs more iterations on the training set to converge to an optimal value.

3.1 Semantic neural network classifier frame
In order to assess the advantage gained by exploiting the hierarchical structure of the classification scheme, we built a flat neural network classifier. We decided to build a flat modular classifier implemented as a set of 103 individual expert networks. In this model the experts are trained independently, using the optimal feature set and the category zone for each individual category. The thresholding step is performed by optimizing the F1 value of each expert using the entire training set. These are the values that we report in the next section for the flat neural network classifier.
Observe also that the use of this model allows us to assess the contribution of adding the hierarchical structure, i.e., our first research question. Neural networks can learn nonlinear mappings from a set of training patterns; the parameters are adjusted to the training data over a number of iterations. We have used a back-propagation algorithm that minimizes the quadratic error. Fig. 2 gives a pictorial representation of a neural network for document classification. The input layer has as many units (neurons) as features retained. The number of hidden units is determined by optimizing the performance on a cross-validation set. There is one output unit for each possible class. Hertz [11] and Ripley [12] give an introduction to neural networks that goes beyond the scope

of this paper. This learning process needs to be monitored using a cross-validation set to avoid overfitting the data. We have used several cross-validation sets, as described later.

[Fig. 2. Neural network for semantic classification: term inputs T_i, a hidden layer, and one output unit C_j per class.]

3.2 Feature selection
In text categorization, the set of possible input features consists of all the different words that appear in a collection of documents. This is usually a large set, since even small text collections can have hundreds of thousands of features. Reducing the set of features used to train the neural networks is necessary because the performance of the network and the cost of classification are sensitive to the size and quality of the input features (Yang and Honavar 1998) [4,5]. A first step towards reducing the size of the feature set is the elimination of stop words and the use of stemming algorithms. Even after that is done, the set of features is typically too large to be useful for training a neural network. Two broad approaches to feature selection have been presented in the literature: the wrapper approach and the filter approach (John et al. 1994) [10]. The wrapper approach attempts to identify the best feature subset to use with a particular algorithm. For example, for a neural network the wrapper approach selects an initial subset and measures the performance of the network; it then generates an improved set of features and measures the performance again, repeating until a termination condition is reached (either a minimal value of performance or a number of iterations). The filter approach, which is more commonly used in text categorization, attempts to assess the merits of the feature set from the data alone: it selects a set of features in a preprocessing step, based on the training data. In this paper we use the filter approach, but we plan to explore the wrapper approach in future research. We select three methods that have been used in previous work: correlation coefficient, mutual information, and odds ratio.
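A minimal sketch of the filter approach, using mutual information as the ranking measure. The document-frequency estimator and the toy documents are illustrative assumptions, not the paper's exact estimator.

```python
import math
from collections import Counter

def mi_scores(docs, labels, category):
    """Filter score MI(t, c) ~ log(P(t|c) / P(t)), estimated from document
    frequencies; higher means the term is more indicative of the category."""
    n = len(docs)
    n_c = sum(1 for y in labels if y == category)
    df, df_c = Counter(), Counter()
    for doc, y in zip(docs, labels):
        for t in set(doc):
            df[t] += 1
            if y == category:
                df_c[t] += 1
    return {t: math.log((df_c[t] * n) / (df[t] * n_c)) for t in df if df_c[t] > 0}

def select_features(docs, labels, category, k, min_df=2):
    """Drop rare stems, rank the rest by the measure, and keep the top-k."""
    scores = mi_scores(docs, labels, category)
    df = Counter(t for doc in docs for t in set(doc))
    ranked = sorted((t for t in scores if df[t] >= min_df),
                    key=scores.get, reverse=True)
    return ranked[:k]

# Hypothetical stemmed documents; label 1 marks the positive category.
docs = [["coal", "mine"], ["coal", "grid"], ["grid", "semantic"], ["semantic", "web"]]
labels = [1, 1, 0, 0]
print(select_features(docs, labels, category=1, k=2))   # -> ['coal', 'grid']
```

As a pure preprocessing step on the training data, this needs no network training, which is what distinguishes the filter approach from the wrapper approach described above.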
We eliminate those stems that occur in fewer than 5 documents in the training collection. Since feature selection is done for each category, based on its zone (explained later), we also remove stems that occur in less than 5% of the positive example documents. We then rank the remaining stems by the feature selection measure and select a pre-defined number of top-ranked stems as the feature set.

3.3 Training set selection
A supervised learning algorithm requires a training set in which each element has already been correctly categorized. One would expect the availability of a large training set (such as OHSUMED) to be beneficial for training the algorithm. In practice this does not seem to be the case. The problem occurs when there is also a large collection of categories, each assigned to a relatively small number of documents. This creates a situation in which each category has a small number of positive examples and an overwhelming number of negative examples. When a machine learning algorithm is trained to learn the assignment function with such an unbalanced training set, it will learn that the best decision is to never assign the category: the overwhelming amount of negative examples hides the assignment function. To overcome this problem, an appropriate set of training examples must be selected. We call this training subset the category zone. The method for creating the category zone is a KNN approach, in which the zone consists of the set of K nearest neighbors of each positive example of the category. This method will produce variable-sized category zones. We explored several values of K (10, 50 and 100). Our main concern with this method was to obtain a training set large enough to train a neural network without overfitting.

4. Performance measures

Table 1. Contingency table for class j
                 Correct YES   Correct NO
Assigned YES         a_j           b_j
Assigned NO          c_j           d_j
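The category-zone construction can be sketched as follows. Cosine similarity is an assumption here, chosen to match the cosine-normalized document vectors used in the experiments; the tiny matrix is hypothetical.

```python
import numpy as np

def category_zone(X, y, k):
    """Category zone: the union of each positive example's k nearest neighbours
    (cosine similarity) with the positives themselves; its size varies by category."""
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)   # cosine-normalize the rows
    sims = Xn @ Xn.T                                    # pairwise cosine similarities
    zone = set(np.flatnonzero(np.asarray(y) == 1).tolist())
    for i in list(zone):
        nearest = np.argsort(-sims[i])[: k + 1]         # k neighbours plus i itself
        zone.update(int(j) for j in nearest)
    return sorted(zone)

# Hypothetical 2-D document vectors; document 0 is the only positive example.
X = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9], [0.5, 0.5]])
y = np.array([1, 0, 0, 0, 0])
print(category_zone(X, y, k=1))   # -> [0, 1]
```

Only the negatives nearest to the positives enter the zone, so the training subset stays balanced instead of being swamped by distant negative examples.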

Table 1 describes the possible outcomes of a binary classifier. The system YES/NO refers to the classifier output, and the real YES/NO to the correct output. The perfect classifier would have a value of 1 for a_j and d_j, and 0 for b_j and c_j. Using Table 1 we define three performance measures common in the semantic classification literature: recall r = a_j / (a_j + c_j), precision p = a_j / (a_j + b_j), and the F-measure. The tradeoff between recall and precision is controlled by setting the classifier's parameters; to describe the performance, both values should be provided. The F-measure defined by van Rijsbergen combines them:

F_β(r, p) = (β² + 1) p r / (β² p + r)

The most commonly used F-measure in text classification is F1:

F1(r, p) = 2 p r / (p + r)

When dealing with multiple classes there are two possible ways of averaging these measures, namely macro-averaging and micro-averaging. In macro-averaging, one contingency table as in Table 1 is used per class; the performance measures are computed on each of them and then averaged. In micro-averaging, only one contingency table is used: an average over all the classes is computed for each cell, and the performance measures are obtained from it. The macro average weights all classes equally, regardless of how many documents belong to each. The micro average weights all documents equally, thus favoring the performance on common classes. Different classifiers will perform differently on common and rare categories; learning algorithms are trained more often on more populated classes, thus risking local overfitting.

5. Experiments and Results
We summarize here the steps followed in the preparation of the experiments, and the results obtained.
a. Preparing the data: A list of 571 stop-words was removed from the document collection. A standard stemming algorithm was also used.
b. Vectorization and weighting: The resulting documents were represented as vectors using TF-IDF weighting. Other weighting schemes were tried and this one was found optimal. The Smart text processing package [9] was used for this stage. All the experiments described use cosine normalization.
c.
Architecture: The selected terms were used as input features to the network. We tried networks with 500, 1000 and 2000 input features. The network has 90 output units, one for each class.
d. Training: We generated 5 cross-validation sets with a number of random documents each. These documents were set aside, and the network was trained on the remaining ones. As a rule of thumb it is common to use 5-10% of the training set; we tried using 500 and 1000 documents, and the smaller cross-validation set resulted in better performance. The learning rate (η) was reduced when the error reached a minimum on the cross-validation set. Reducing η during learning is a commonly used technique, similar to cooling down in simulated annealing.
In Table 2 we show the micro- and macro-averaged F1 performance for networks with 500, 1000 and 2000 input features. In all cases the best results were obtained with 100 hidden units. In Table 3 the results obtained by other authors and by us are displayed; we show in this table the best results for each input dimensionality. The results in Tables 2 and 3 were obtained using χ². We compare the performance of the classifier with the results of other authors who used Support Vector Machines (SVM), k-Nearest Neighbors (KNN) and Naïve Bayes (NB) classifiers. The micro-averaged precision of the neural networks is the best of all the methods compared; this is particularly important for the many applications where precision is the most important variable. The F1 performance of the neural networks, in particular macro F1, is not as good as that of SVM or KNN, due to a lower recall level. The tradeoff between high recall and high precision is a well-known problem and must be considered for each application. Neural network performance has a degree of randomness, inherent to the learning process and to distinct initial conditions, so it is not possible to determine the method's performance from a single experiment. Several trainings are needed, preferably with different cross-validation sets and different initial weights. We have used 5 to 10 independent experiments to determine each value.
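The F-measure and the two averaging schemes of Section 4 can be sketched as follows; the contingency counts in the example are hypothetical.

```python
def f_beta(r, p, beta=1.0):
    """F_beta(r, p) = (beta^2 + 1) * p * r / (beta^2 * p + r); beta = 1 gives F1."""
    return (beta**2 + 1) * p * r / (beta**2 * p + r) if (p + r) > 0 else 0.0

def macro_micro_f1(tables):
    """tables: class -> (a, b, c), with a = true positives, b = false positives,
    c = false negatives, following the contingency table of Section 4."""
    per_class, A, B, C = [], 0, 0, 0
    for a, b, c in tables.values():
        p = a / (a + b) if a + b else 0.0
        r = a / (a + c) if a + c else 0.0
        per_class.append(f_beta(r, p))        # macro: F1 per class, then average
        A, B, C = A + a, B + b, C + c         # micro: pool the cells first
    macro = sum(per_class) / len(per_class)
    micro = f_beta(A / (A + C), A / (A + B))
    return macro, micro

# Hypothetical counts: one common class, one rare class.
macro, micro = macro_micro_f1({"common": (80, 20, 0), "rare": (1, 0, 9)})
```

Here micro F1 exceeds macro F1 because the pooled cells are dominated by the common class, illustrating the text's point that micro-averaging favors performance on common classes.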

[Table 2. Micro- and macro-averaged F1 of the network with 500, 1000 and 2000 input features, for 50, 100 and 150 hidden units (mean ± standard deviation over independent runs).]

[Table 3. Performance (micro-recall mR, micro-precision mP, micro-F1 mF1, macro-F1 MaF1) of SVM, NNet, KNN, NB, and the neural network with 500, 1000 and 2000 input features.]

6. Conclusion
In this paper we have proposed a hierarchical semantic perceptron grid architecture based on neural networks, formulated the semantic spotting problem, and proposed a semantic neural network classifier frame. Experimental results for this scheme were presented; to evaluate its effectiveness we compared it with SVM, NNet, KNN and NB, and its average experimental results are clearly superior to these conventional approaches. Future work includes trying these methods on new data sets; we are particularly interested in the classification of documents in other languages, and other neural network schemes should also be tested.

REFERENCES
1. Foster, I., Kesselman, C., and Tuecke, S. The Anatomy of the Grid: Enabling Scalable Virtual Organizations. International Journal of High Performance Computing Applications, 2001, 15(3).
2. Slonim, N. and Tishby, N. The power of word clusters for text classification. In Proceedings of ECIR-01, 23rd European Colloquium on Information Retrieval Research, Darmstadt, Germany, 2001.
3. Lawrence, S. and Giles, G. Accessibility and distribution of information on the web. Nature, 400:107-109, 1999.
4. Yang, Y. An evaluation of statistical approaches to text categorization. Information Retrieval Journal, May 1999.
5. Yang, Y. and Liu, X. A re-examination of text categorization methods. In ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 99), 1999.
6. Alani, H., Dasmahapatra, S., Gibbins, N., Glaser, H., Harris, S., Kalfoglou, Y., O'Hara, K., and Shadbolt, N. Managing Reference: Ensuring Referential Integrity of Ontologies for the Semantic Web. International Conference on Knowledge Engineering and Knowledge Management, Spain, October 2002.
7. Bontcheva, K. Tailoring the Content of Dynamically Generated Explanations. In M. Bauer, P. J. Gmytrasiewicz, J. Vassileva (eds.), User Modeling 2001: 8th International Conference, UM2001, Lecture Notes in Artificial Intelligence 2109, Springer Verlag, 2001.
8. Cerf, V. G., et al. National Collaboratories: Applying Information Technologies for Scientific Research. National Academy Press, Washington, D.C., 1993.
9. Ciravegna, F. Adaptive Information Extraction from Text by Rule Induction and Generalisation. In Proceedings of the 17th International Joint Conference on Artificial Intelligence (IJCAI 2001), Seattle, August 2001.
10. Andreyev, Y. V., Belsky, Y. L., Dmitriev, A. S., et al. Information processing using dynamical chaos: neural networks implementation. IEEE Transactions on Neural Networks, 7 (1996).
11. Hertz, J., Krogh, A., and Palmer, R. Introduction to the Theory of Neural Computation. Addison-Wesley, Redwood City, CA, 1991.
12. Ripley, B. D. Pattern Recognition and Neural Networks. Cambridge University Press, Cambridge, 1996.


More information

Backpropagation: In Search of Performance Parameters

Backpropagation: In Search of Performance Parameters Bacpropagaton: In Search of Performance Parameters ANIL KUMAR ENUMULAPALLY, LINGGUO BU, and KHOSROW KAIKHAH, Ph.D. Computer Scence Department Texas State Unversty-San Marcos San Marcos, TX-78666 USA ae049@txstate.edu,

More information

Assignment # 2. Farrukh Jabeen Algorithms 510 Assignment #2 Due Date: June 15, 2009.

Assignment # 2. Farrukh Jabeen Algorithms 510 Assignment #2 Due Date: June 15, 2009. Farrukh Jabeen Algorthms 51 Assgnment #2 Due Date: June 15, 29. Assgnment # 2 Chapter 3 Dscrete Fourer Transforms Implement the FFT for the DFT. Descrbed n sectons 3.1 and 3.2. Delverables: 1. Concse descrpton

More information

An Image Fusion Approach Based on Segmentation Region

An Image Fusion Approach Based on Segmentation Region Rong Wang, L-Qun Gao, Shu Yang, Yu-Hua Cha, and Yan-Chun Lu An Image Fuson Approach Based On Segmentaton Regon An Image Fuson Approach Based on Segmentaton Regon Rong Wang, L-Qun Gao, Shu Yang 3, Yu-Hua

More information

Wishing you all a Total Quality New Year!

Wishing you all a Total Quality New Year! Total Qualty Management and Sx Sgma Post Graduate Program 214-15 Sesson 4 Vnay Kumar Kalakband Assstant Professor Operatons & Systems Area 1 Wshng you all a Total Qualty New Year! Hope you acheve Sx sgma

More information

X- Chart Using ANOM Approach

X- Chart Using ANOM Approach ISSN 1684-8403 Journal of Statstcs Volume 17, 010, pp. 3-3 Abstract X- Chart Usng ANOM Approach Gullapall Chakravarth 1 and Chaluvad Venkateswara Rao Control lmts for ndvdual measurements (X) chart are

More information

Performance Evaluation of Information Retrieval Systems

Performance Evaluation of Information Retrieval Systems Why System Evaluaton? Performance Evaluaton of Informaton Retreval Systems Many sldes n ths secton are adapted from Prof. Joydeep Ghosh (UT ECE) who n turn adapted them from Prof. Dk Lee (Unv. of Scence

More information

GSLM Operations Research II Fall 13/14

GSLM Operations Research II Fall 13/14 GSLM 58 Operatons Research II Fall /4 6. Separable Programmng Consder a general NLP mn f(x) s.t. g j (x) b j j =. m. Defnton 6.. The NLP s a separable program f ts objectve functon and all constrants are

More information

Support Vector Machines. CS534 - Machine Learning

Support Vector Machines. CS534 - Machine Learning Support Vector Machnes CS534 - Machne Learnng Perceptron Revsted: Lnear Separators Bnar classfcaton can be veed as the task of separatng classes n feature space: b > 0 b 0 b < 0 f() sgn( b) Lnear Separators

More information

Reliable Negative Extracting Based on knn for Learning from Positive and Unlabeled Examples

Reliable Negative Extracting Based on knn for Learning from Positive and Unlabeled Examples 94 JOURNAL OF COMPUTERS, VOL. 4, NO. 1, JANUARY 2009 Relable Negatve Extractng Based on knn for Learnng from Postve and Unlabeled Examples Bangzuo Zhang College of Computer Scence and Technology, Jln Unversty,

More information

Smoothing Spline ANOVA for variable screening

Smoothing Spline ANOVA for variable screening Smoothng Splne ANOVA for varable screenng a useful tool for metamodels tranng and mult-objectve optmzaton L. Rcco, E. Rgon, A. Turco Outlne RSM Introducton Possble couplng Test case MOO MOO wth Game Theory

More information

BAYESIAN MULTI-SOURCE DOMAIN ADAPTATION

BAYESIAN MULTI-SOURCE DOMAIN ADAPTATION BAYESIAN MULTI-SOURCE DOMAIN ADAPTATION SHI-LIANG SUN, HONG-LEI SHI Department of Computer Scence and Technology, East Chna Normal Unversty 500 Dongchuan Road, Shangha 200241, P. R. Chna E-MAIL: slsun@cs.ecnu.edu.cn,

More information

BOOSTING CLASSIFICATION ACCURACY WITH SAMPLES CHOSEN FROM A VALIDATION SET

BOOSTING CLASSIFICATION ACCURACY WITH SAMPLES CHOSEN FROM A VALIDATION SET 1 BOOSTING CLASSIFICATION ACCURACY WITH SAMPLES CHOSEN FROM A VALIDATION SET TZU-CHENG CHUANG School of Electrcal and Computer Engneerng, Purdue Unversty, West Lafayette, Indana 47907 SAUL B. GELFAND School

More information

Keywords - Wep page classification; bag of words model; topic model; hierarchical classification; Support Vector Machines

Keywords - Wep page classification; bag of words model; topic model; hierarchical classification; Support Vector Machines (IJCSIS) Internatonal Journal of Computer Scence and Informaton Securty, Herarchcal Web Page Classfcaton Based on a Topc Model and Neghborng Pages Integraton Wongkot Srura Phayung Meesad Choochart Haruechayasak

More information

Meta-heuristics for Multidimensional Knapsack Problems

Meta-heuristics for Multidimensional Knapsack Problems 2012 4th Internatonal Conference on Computer Research and Development IPCSIT vol.39 (2012) (2012) IACSIT Press, Sngapore Meta-heurstcs for Multdmensonal Knapsack Problems Zhbao Man + Computer Scence Department,

More information

A New Token Allocation Algorithm for TCP Traffic in Diffserv Network

A New Token Allocation Algorithm for TCP Traffic in Diffserv Network A New Token Allocaton Algorthm for TCP Traffc n Dffserv Network A New Token Allocaton Algorthm for TCP Traffc n Dffserv Network S. Sudha and N. Ammasagounden Natonal Insttute of Technology, Truchrappall,

More information

UB at GeoCLEF Department of Geography Abstract

UB at GeoCLEF Department of Geography   Abstract UB at GeoCLEF 2006 Mguel E. Ruz (1), Stuart Shapro (2), June Abbas (1), Slva B. Southwck (1) and Davd Mark (3) State Unversty of New York at Buffalo (1) Department of Lbrary and Informaton Studes (2) Department

More information

PRÉSENTATIONS DE PROJETS

PRÉSENTATIONS DE PROJETS PRÉSENTATIONS DE PROJETS Rex Onlne (V. Atanasu) What s Rex? Rex s an onlne browser for collectons of wrtten documents [1]. Asde ths core functon t has however many other applcatons that make t nterestng

More information

Face Recognition Based on SVM and 2DPCA

Face Recognition Based on SVM and 2DPCA Vol. 4, o. 3, September, 2011 Face Recognton Based on SVM and 2DPCA Tha Hoang Le, Len Bu Faculty of Informaton Technology, HCMC Unversty of Scence Faculty of Informaton Scences and Engneerng, Unversty

More information

EYE CENTER LOCALIZATION ON A FACIAL IMAGE BASED ON MULTI-BLOCK LOCAL BINARY PATTERNS

EYE CENTER LOCALIZATION ON A FACIAL IMAGE BASED ON MULTI-BLOCK LOCAL BINARY PATTERNS P.G. Demdov Yaroslavl State Unversty Anatoly Ntn, Vladmr Khryashchev, Olga Stepanova, Igor Kostern EYE CENTER LOCALIZATION ON A FACIAL IMAGE BASED ON MULTI-BLOCK LOCAL BINARY PATTERNS Yaroslavl, 2015 Eye

More information

Improvement of Spatial Resolution Using BlockMatching Based Motion Estimation and Frame. Integration

Improvement of Spatial Resolution Using BlockMatching Based Motion Estimation and Frame. Integration Improvement of Spatal Resoluton Usng BlockMatchng Based Moton Estmaton and Frame Integraton Danya Suga and Takayuk Hamamoto Graduate School of Engneerng, Tokyo Unversty of Scence, 6-3-1, Nuku, Katsuska-ku,

More information

Related-Mode Attacks on CTR Encryption Mode

Related-Mode Attacks on CTR Encryption Mode Internatonal Journal of Network Securty, Vol.4, No.3, PP.282 287, May 2007 282 Related-Mode Attacks on CTR Encrypton Mode Dayn Wang, Dongda Ln, and Wenlng Wu (Correspondng author: Dayn Wang) Key Laboratory

More information

Proper Choice of Data Used for the Estimation of Datum Transformation Parameters

Proper Choice of Data Used for the Estimation of Datum Transformation Parameters Proper Choce of Data Used for the Estmaton of Datum Transformaton Parameters Hakan S. KUTOGLU, Turkey Key words: Coordnate systems; transformaton; estmaton, relablty. SUMMARY Advances n technologes and

More information

Fuzzy Modeling of the Complexity vs. Accuracy Trade-off in a Sequential Two-Stage Multi-Classifier System

Fuzzy Modeling of the Complexity vs. Accuracy Trade-off in a Sequential Two-Stage Multi-Classifier System Fuzzy Modelng of the Complexty vs. Accuracy Trade-off n a Sequental Two-Stage Mult-Classfer System MARK LAST 1 Department of Informaton Systems Engneerng Ben-Guron Unversty of the Negev Beer-Sheva 84105

More information

The Greedy Method. Outline and Reading. Change Money Problem. Greedy Algorithms. Applications of the Greedy Strategy. The Greedy Method Technique

The Greedy Method. Outline and Reading. Change Money Problem. Greedy Algorithms. Applications of the Greedy Strategy. The Greedy Method Technique //00 :0 AM Outlne and Readng The Greedy Method The Greedy Method Technque (secton.) Fractonal Knapsack Problem (secton..) Task Schedulng (secton..) Mnmum Spannng Trees (secton.) Change Money Problem Greedy

More information

Solving two-person zero-sum game by Matlab

Solving two-person zero-sum game by Matlab Appled Mechancs and Materals Onlne: 2011-02-02 ISSN: 1662-7482, Vols. 50-51, pp 262-265 do:10.4028/www.scentfc.net/amm.50-51.262 2011 Trans Tech Publcatons, Swtzerland Solvng two-person zero-sum game by

More information

Unsupervised Learning and Clustering

Unsupervised Learning and Clustering Unsupervsed Learnng and Clusterng Supervsed vs. Unsupervsed Learnng Up to now we consdered supervsed learnng scenaro, where we are gven 1. samples 1,, n 2. class labels for all samples 1,, n Ths s also

More information

Biostatistics 615/815

Biostatistics 615/815 The E-M Algorthm Bostatstcs 615/815 Lecture 17 Last Lecture: The Smplex Method General method for optmzaton Makes few assumptons about functon Crawls towards mnmum Some recommendatons Multple startng ponts

More information

6.854 Advanced Algorithms Petar Maymounkov Problem Set 11 (November 23, 2005) With: Benjamin Rossman, Oren Weimann, and Pouya Kheradpour

6.854 Advanced Algorithms Petar Maymounkov Problem Set 11 (November 23, 2005) With: Benjamin Rossman, Oren Weimann, and Pouya Kheradpour 6.854 Advanced Algorthms Petar Maymounkov Problem Set 11 (November 23, 2005) Wth: Benjamn Rossman, Oren Wemann, and Pouya Kheradpour Problem 1. We reduce vertex cover to MAX-SAT wth weghts, such that the

More information

SVM-based Learning for Multiple Model Estimation

SVM-based Learning for Multiple Model Estimation SVM-based Learnng for Multple Model Estmaton Vladmr Cherkassky and Yunqan Ma Department of Electrcal and Computer Engneerng Unversty of Mnnesota Mnneapols, MN 55455 {cherkass,myq}@ece.umn.edu Abstract:

More information

Classification / Regression Support Vector Machines

Classification / Regression Support Vector Machines Classfcaton / Regresson Support Vector Machnes Jeff Howbert Introducton to Machne Learnng Wnter 04 Topcs SVM classfers for lnearly separable classes SVM classfers for non-lnearly separable classes SVM

More information

Determining the Optimal Bandwidth Based on Multi-criterion Fusion

Determining the Optimal Bandwidth Based on Multi-criterion Fusion Proceedngs of 01 4th Internatonal Conference on Machne Learnng and Computng IPCSIT vol. 5 (01) (01) IACSIT Press, Sngapore Determnng the Optmal Bandwdth Based on Mult-crteron Fuson Ha-L Lang 1+, Xan-Mn

More information

Fuzzy Filtering Algorithms for Image Processing: Performance Evaluation of Various Approaches

Fuzzy Filtering Algorithms for Image Processing: Performance Evaluation of Various Approaches Proceedngs of the Internatonal Conference on Cognton and Recognton Fuzzy Flterng Algorthms for Image Processng: Performance Evaluaton of Varous Approaches Rajoo Pandey and Umesh Ghanekar Department of

More information

A User Selection Method in Advertising System

A User Selection Method in Advertising System Int. J. Communcatons, etwork and System Scences, 2010, 3, 54-58 do:10.4236/jcns.2010.31007 Publshed Onlne January 2010 (http://www.scrp.org/journal/jcns/). A User Selecton Method n Advertsng System Shy

More information

NAG Fortran Library Chapter Introduction. G10 Smoothing in Statistics

NAG Fortran Library Chapter Introduction. G10 Smoothing in Statistics Introducton G10 NAG Fortran Lbrary Chapter Introducton G10 Smoothng n Statstcs Contents 1 Scope of the Chapter... 2 2 Background to the Problems... 2 2.1 Smoothng Methods... 2 2.2 Smoothng Splnes and Regresson

More information

A Statistical Model Selection Strategy Applied to Neural Networks

A Statistical Model Selection Strategy Applied to Neural Networks A Statstcal Model Selecton Strategy Appled to Neural Networks Joaquín Pzarro Elsa Guerrero Pedro L. Galndo joaqun.pzarro@uca.es elsa.guerrero@uca.es pedro.galndo@uca.es Dpto Lenguajes y Sstemas Informátcos

More information

CSCI 5417 Information Retrieval Systems Jim Martin!

CSCI 5417 Information Retrieval Systems Jim Martin! CSCI 5417 Informaton Retreval Systems Jm Martn! Lecture 11 9/29/2011 Today 9/29 Classfcaton Naïve Bayes classfcaton Ungram LM 1 Where we are... Bascs of ad hoc retreval Indexng Term weghtng/scorng Cosne

More information

Tsinghua University at TAC 2009: Summarizing Multi-documents by Information Distance

Tsinghua University at TAC 2009: Summarizing Multi-documents by Information Distance Tsnghua Unversty at TAC 2009: Summarzng Mult-documents by Informaton Dstance Chong Long, Mnle Huang, Xaoyan Zhu State Key Laboratory of Intellgent Technology and Systems, Tsnghua Natonal Laboratory for

More information

User Authentication Based On Behavioral Mouse Dynamics Biometrics

User Authentication Based On Behavioral Mouse Dynamics Biometrics User Authentcaton Based On Behavoral Mouse Dynamcs Bometrcs Chee-Hyung Yoon Danel Donghyun Km Department of Computer Scence Department of Computer Scence Stanford Unversty Stanford Unversty Stanford, CA

More information

Experiments in Text Categorization Using Term Selection by Distance to Transition Point

Experiments in Text Categorization Using Term Selection by Distance to Transition Point Experments n Text Categorzaton Usng Term Selecton by Dstance to Transton Pont Edgar Moyotl-Hernández, Héctor Jménez-Salazar Facultad de Cencas de la Computacón, B. Unversdad Autónoma de Puebla, 14 Sur

More information

CS 534: Computer Vision Model Fitting

CS 534: Computer Vision Model Fitting CS 534: Computer Vson Model Fttng Sprng 004 Ahmed Elgammal Dept of Computer Scence CS 534 Model Fttng - 1 Outlnes Model fttng s mportant Least-squares fttng Maxmum lkelhood estmaton MAP estmaton Robust

More information

Selecting Query Term Alterations for Web Search by Exploiting Query Contexts

Selecting Query Term Alterations for Web Search by Exploiting Query Contexts Selectng Query Term Alteratons for Web Search by Explotng Query Contexts Guhong Cao Stephen Robertson Jan-Yun Ne Dept. of Computer Scence and Operatons Research Mcrosoft Research at Cambrdge Dept. of Computer

More information

Efficient Text Classification by Weighted Proximal SVM *

Efficient Text Classification by Weighted Proximal SVM * Effcent ext Classfcaton by Weghted Proxmal SVM * Dong Zhuang 1, Benyu Zhang, Qang Yang 3, Jun Yan 4, Zheng Chen, Yng Chen 1 1 Computer Scence and Engneerng, Bejng Insttute of echnology, Bejng 100081, Chna

More information

A Unified Framework for Semantics and Feature Based Relevance Feedback in Image Retrieval Systems

A Unified Framework for Semantics and Feature Based Relevance Feedback in Image Retrieval Systems A Unfed Framework for Semantcs and Feature Based Relevance Feedback n Image Retreval Systems Ye Lu *, Chunhu Hu 2, Xngquan Zhu 3*, HongJang Zhang 2, Qang Yang * School of Computng Scence Smon Fraser Unversty

More information

Programming in Fortran 90 : 2017/2018

Programming in Fortran 90 : 2017/2018 Programmng n Fortran 90 : 2017/2018 Programmng n Fortran 90 : 2017/2018 Exercse 1 : Evaluaton of functon dependng on nput Wrte a program who evaluate the functon f (x,y) for any two user specfed values

More information

A Binary Neural Decision Table Classifier

A Binary Neural Decision Table Classifier A Bnary Neural Decson Table Classfer Vctora J. Hodge Smon O Keefe Jm Austn vcky@cs.york.ac.uk sok@cs.york.ac.uk austn@cs.york.ac.uk Advanced Computer Archtecture Group Department of Computer Scence Unversty

More information

Simulation: Solving Dynamic Models ABE 5646 Week 11 Chapter 2, Spring 2010

Simulation: Solving Dynamic Models ABE 5646 Week 11 Chapter 2, Spring 2010 Smulaton: Solvng Dynamc Models ABE 5646 Week Chapter 2, Sprng 200 Week Descrpton Readng Materal Mar 5- Mar 9 Evaluatng [Crop] Models Comparng a model wth data - Graphcal, errors - Measures of agreement

More information

Empirical Distributions of Parameter Estimates. in Binary Logistic Regression Using Bootstrap

Empirical Distributions of Parameter Estimates. in Binary Logistic Regression Using Bootstrap Int. Journal of Math. Analyss, Vol. 8, 4, no. 5, 7-7 HIKARI Ltd, www.m-hkar.com http://dx.do.org/.988/jma.4.494 Emprcal Dstrbutons of Parameter Estmates n Bnary Logstc Regresson Usng Bootstrap Anwar Ftranto*

More information

Outline. Self-Organizing Maps (SOM) US Hebbian Learning, Cntd. The learning rule is Hebbian like:

Outline. Self-Organizing Maps (SOM) US Hebbian Learning, Cntd. The learning rule is Hebbian like: Self-Organzng Maps (SOM) Turgay İBRİKÇİ, PhD. Outlne Introducton Structures of SOM SOM Archtecture Neghborhoods SOM Algorthm Examples Summary 1 2 Unsupervsed Hebban Learnng US Hebban Learnng, Cntd 3 A

More information

A mathematical programming approach to the analysis, design and scheduling of offshore oilfields

A mathematical programming approach to the analysis, design and scheduling of offshore oilfields 17 th European Symposum on Computer Aded Process Engneerng ESCAPE17 V. Plesu and P.S. Agach (Edtors) 2007 Elsever B.V. All rghts reserved. 1 A mathematcal programmng approach to the analyss, desgn and

More information

Relevance Assignment and Fusion of Multiple Learning Methods Applied to Remote Sensing Image Analysis

Relevance Assignment and Fusion of Multiple Learning Methods Applied to Remote Sensing Image Analysis Assgnment and Fuson of Multple Learnng Methods Appled to Remote Sensng Image Analyss Peter Bajcsy, We-Wen Feng and Praveen Kumar Natonal Center for Supercomputng Applcaton (NCSA), Unversty of Illnos at

More information

TN348: Openlab Module - Colocalization

TN348: Openlab Module - Colocalization TN348: Openlab Module - Colocalzaton Topc The Colocalzaton module provdes the faclty to vsualze and quantfy colocalzaton between pars of mages. The Colocalzaton wndow contans a prevew of the two mages

More information

Pruning Training Corpus to Speedup Text Classification 1

Pruning Training Corpus to Speedup Text Classification 1 Prunng Tranng Corpus to Speedup Text Classfcaton Jhong Guan and Shugeng Zhou School of Computer Scence, Wuhan Unversty, Wuhan, 430079, Chna hguan@wtusm.edu.cn State Key Lab of Software Engneerng, Wuhan

More information

Private Information Retrieval (PIR)

Private Information Retrieval (PIR) 2 Levente Buttyán Problem formulaton Alce wants to obtan nformaton from a database, but she does not want the database to learn whch nformaton she wanted e.g., Alce s an nvestor queryng a stock-market

More information

A Robust LS-SVM Regression

A Robust LS-SVM Regression PROCEEDIGS OF WORLD ACADEMY OF SCIECE, EGIEERIG AD ECHOLOGY VOLUME 7 AUGUS 5 ISS 37- A Robust LS-SVM Regresson József Valyon, and Gábor Horváth Abstract In comparson to the orgnal SVM, whch nvolves a quadratc

More information

FEATURE EXTRACTION. Dr. K.Vijayarekha. Associate Dean School of Electrical and Electronics Engineering SASTRA University, Thanjavur

FEATURE EXTRACTION. Dr. K.Vijayarekha. Associate Dean School of Electrical and Electronics Engineering SASTRA University, Thanjavur FEATURE EXTRACTION Dr. K.Vjayarekha Assocate Dean School of Electrcal and Electroncs Engneerng SASTRA Unversty, Thanjavur613 41 Jont Intatve of IITs and IISc Funded by MHRD Page 1 of 8 Table of Contents

More information

For instance, ; the five basic number-sets are increasingly more n A B & B A A = B (1)

For instance, ; the five basic number-sets are increasingly more n A B & B A A = B (1) Secton 1.2 Subsets and the Boolean operatons on sets If every element of the set A s an element of the set B, we say that A s a subset of B, or that A s contaned n B, or that B contans A, and we wrte A

More information

Improving anti-spam filtering, based on Naive Bayesian and neural networks in multi-agent filters

Improving anti-spam filtering, based on Naive Bayesian and neural networks in multi-agent filters J. Appl. Envron. Bol. Sc., 5(7S)381-386, 2015 2015, TextRoad Publcaton ISSN: 2090-4274 Journal of Appled Envronmental and Bologcal Scences www.textroad.com Improvng ant-spam flterng, based on Nave Bayesan

More information

Compiler Design. Spring Register Allocation. Sample Exercises and Solutions. Prof. Pedro C. Diniz

Compiler Design. Spring Register Allocation. Sample Exercises and Solutions. Prof. Pedro C. Diniz Compler Desgn Sprng 2014 Regster Allocaton Sample Exercses and Solutons Prof. Pedro C. Dnz USC / Informaton Scences Insttute 4676 Admralty Way, Sute 1001 Marna del Rey, Calforna 90292 pedro@s.edu Regster

More information

CS246: Mining Massive Datasets Jure Leskovec, Stanford University

CS246: Mining Massive Datasets Jure Leskovec, Stanford University CS46: Mnng Massve Datasets Jure Leskovec, Stanford Unversty http://cs46.stanford.edu /19/013 Jure Leskovec, Stanford CS46: Mnng Massve Datasets, http://cs46.stanford.edu Perceptron: y = sgn( x Ho to fnd

More information

USING LINEAR REGRESSION FOR THE AUTOMATION OF SUPERVISED CLASSIFICATION IN MULTITEMPORAL IMAGES

USING LINEAR REGRESSION FOR THE AUTOMATION OF SUPERVISED CLASSIFICATION IN MULTITEMPORAL IMAGES USING LINEAR REGRESSION FOR THE AUTOMATION OF SUPERVISED CLASSIFICATION IN MULTITEMPORAL IMAGES 1 Fetosa, R.Q., 2 Merelles, M.S.P., 3 Blos, P. A. 1,3 Dept. of Electrcal Engneerng ; Catholc Unversty of

More information

Problem Definitions and Evaluation Criteria for Computational Expensive Optimization

Problem Definitions and Evaluation Criteria for Computational Expensive Optimization Problem efntons and Evaluaton Crtera for Computatonal Expensve Optmzaton B. Lu 1, Q. Chen and Q. Zhang 3, J. J. Lang 4, P. N. Suganthan, B. Y. Qu 6 1 epartment of Computng, Glyndwr Unversty, UK Faclty

More information

A Novel Term_Class Relevance Measure for Text Categorization

A Novel Term_Class Relevance Measure for Text Categorization A Novel Term_Class Relevance Measure for Text Categorzaton D S Guru, Mahamad Suhl Department of Studes n Computer Scence, Unversty of Mysore, Mysore, Inda Abstract: In ths paper, we ntroduce a new measure

More information

Multiclass Object Recognition based on Texture Linear Genetic Programming

Multiclass Object Recognition based on Texture Linear Genetic Programming Multclass Object Recognton based on Texture Lnear Genetc Programmng Gustavo Olague 1, Eva Romero 1 Leonardo Trujllo 1, and Br Bhanu 2 1 CICESE, Km. 107 carretera Tjuana-Ensenada, Mexco, olague@ccese.mx,

More information

(1) The control processes are too complex to analyze by conventional quantitative techniques.

(1) The control processes are too complex to analyze by conventional quantitative techniques. Chapter 0 Fuzzy Control and Fuzzy Expert Systems The fuzzy logc controller (FLC) s ntroduced n ths chapter. After ntroducng the archtecture of the FLC, we study ts components step by step and suggest a

More information

A Method of Hot Topic Detection in Blogs Using N-gram Model

A Method of Hot Topic Detection in Blogs Using N-gram Model 84 JOURNAL OF SOFTWARE, VOL. 8, NO., JANUARY 203 A Method of Hot Topc Detecton n Blogs Usng N-gram Model Xaodong Wang College of Computer and Informaton Technology, Henan Normal Unversty, Xnxang, Chna

More information

Data Mining For Multi-Criteria Energy Predictions

Data Mining For Multi-Criteria Energy Predictions Data Mnng For Mult-Crtera Energy Predctons Kashf Gll and Denns Moon Abstract We present a data mnng technque for mult-crtera predctons of wnd energy. A mult-crtera (MC) evolutonary computng method has

More information

A Modified Median Filter for the Removal of Impulse Noise Based on the Support Vector Machines

A Modified Median Filter for the Removal of Impulse Noise Based on the Support Vector Machines A Modfed Medan Flter for the Removal of Impulse Nose Based on the Support Vector Machnes H. GOMEZ-MORENO, S. MALDONADO-BASCON, F. LOPEZ-FERRERAS, M. UTRILLA- MANSO AND P. GIL-JIMENEZ Departamento de Teoría

More information

S1 Note. Basis functions.

S1 Note. Basis functions. S1 Note. Bass functons. Contents Types of bass functons...1 The Fourer bass...2 B-splne bass...3 Power and type I error rates wth dfferent numbers of bass functons...4 Table S1. Smulaton results of type

More information