Learning Non-Linearly Separable Boolean Functions With Linear Threshold Unit Trees and Madaline-Style Networks


In AAAI-93: Proceedings of the 11th National Conference on Artificial Intelligence, Menlo Park, CA: AAAI Press.

Learning Non-Linearly Separable Boolean Functions With Linear Threshold Unit Trees and Madaline-Style Networks

Mehran Sahami
Department of Computer Science
Stanford University
Stanford, CA 94305
sahami@cs.stanford.edu

Abstract

This paper investigates an algorithm for the construction of decision trees comprised of linear threshold units and also presents a novel algorithm for the learning of non-linearly separable boolean functions using Madaline-style networks which are isomorphic to decision trees. The construction of such networks is discussed, and their performance in learning is compared with standard Back-Propagation on a sample problem in which many irrelevant attributes are introduced. Littlestone's Winnow algorithm is also explored within this architecture as a means of learning in the presence of many irrelevant attributes. The learning ability of this Madaline-style architecture on non-optimal (larger than necessary) networks is also explored.

Introduction

We initially examine a non-incremental algorithm that learns binary classification tasks by producing decision trees of linear threshold units (LTU trees). This decision tree bears some similarity to the decision trees produced by ID3 (Quinlan 1986) and Perceptron Trees (Utgoff 1988), yet it seems to promise more generality: each node in our tree implements a separate linear discriminant function, while only the leaves of a Perceptron Tree have this generality, and the remaining nodes in both the Perceptron Tree and the trees produced by ID3 perform a test on only one feature. Recently, Brodley and Utgoff (1992) have also shown that the use of multivariate tests at each node of a decision tree often provides greater generalization when learning concepts in which there are irrelevant attributes. Furthermore, as presented in (Brent 1990), we show how such an LTU tree can be transformed into a three-layer neural network with two hidden layers and one output layer (the input layer is not counted), which can often be trained much more quickly than by applying the standard Back-Propagation algorithm to an entire network (Rumelhart, Hinton, & Williams 1986). After examining this transformation, a new incremental learning algorithm, based on a Madaline-style architecture (Ridgway 1962; Widrow & Winter 1988), is presented in which learning is performed using such three-layer networks. The effectiveness of this algorithm is assessed on a sample non-linearly separable boolean function in order to perform comparisons with the LTU tree algorithm and a similar network trained using standard Back-Propagation. Being primarily interested in functions in which many irrelevant attributes exist, we also explore the performance of the Winnow algorithm (Littlestone 1988, 1991), which has proven effective in learning linearly separable functions in the presence of many irrelevant attributes, within the Madaline-style learning architecture. We contrast how it performs in learning our sample non-linearly separable function with the classical fixed increment (Perceptron) updating method (Duda & Hart 1973). We also examine the effectiveness of such learning procedures in "non-optimal" Madaline-style networks, and comment on possible future extensions of this learning architecture.

The LTU Tree Algorithm

The tree building algorithm is non-incremental, requiring that the set of all training instances, S, be available from the outset.¹
We begin with the root node of the tree and produce a hyperplane to separate our training set, using any means we wish (in our trials, Back-Propagation was applied to one node to produce a single separating hyperplane), into the sets S₀ and S₁, where Sᵢ (i = 0, 1) indicates the set of instances classified as i by the separating hyperplane. If there are instances in S₀ which should be classified as 1 (called "incorrect 0's"), we then create a left child node and recursively apply the algorithm on the left child using S₀ as the training set. Similarly, if any instances in S₁ should be classified as 0 ("incorrect 1's"), we create a right child node and again recursively apply our algorithm on the right child using S₁ as the training set. Thus the algorithm normally terminates when all of the instances in the original training set, S, are correctly classified by our tree.

The classification procedure using the completed tree requires us to simply begin at the root node and determine whether the given instance is classified as a 0 or 1 by the hyperplane stored there. A classification of 0 means we follow the left branch; otherwise we follow the right branch, and we recursively apply this procedure with the hyperplane stored at the appropriate child node. The classification given at a leaf node in the tree is the final output of the classification procedure. Note that the leaves in this decision tree do not classify all instances into one labeling; rather, the classification for the instance is the result of applying the linear discriminator stored in the leaf node.

¹ Notation and naming conventions in the description of the LTU tree algorithm are from Brent (1990).
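To make the recursive construction concrete, the following is a minimal Python sketch of the build and classify procedures just described. It is an illustration rather than the paper's implementation: train_hyperplane is an assumed helper standing in for whatever method fits a single separating hyperplane (single-unit Back-Propagation in our trials), and the depth cap and per-node error tolerance discussed in the next section are omitted.

```python
# Minimal sketch of the LTU tree algorithm described above.
# train_hyperplane(S) is an assumed helper returning (w, theta) for one
# separating hyperplane; the paper fits it with single-unit Back-Propagation.

class LTUNode:
    def __init__(self, w, theta):
        self.w, self.theta = w, theta   # hyperplane: w . x + theta >= 0 means "1"
        self.left = None                # grown over S0 (instances classified 0)
        self.right = None               # grown over S1 (instances classified 1)

    def fires(self, x):
        return sum(wi * xi for wi, xi in zip(self.w, x)) + self.theta >= 0

def build_ltu_tree(S, train_hyperplane):
    """S is a list of (x, label) pairs with label in {0, 1}."""
    node = LTUNode(*train_hyperplane(S))
    S0 = [(x, y) for (x, y) in S if not node.fires(x)]
    S1 = [(x, y) for (x, y) in S if node.fires(x)]
    if any(y == 1 for _, y in S0):      # "incorrect 0's": grow a left child
        node.left = build_ltu_tree(S0, train_hyperplane)
    if any(y == 0 for _, y in S1):      # "incorrect 1's": grow a right child
        node.right = build_ltu_tree(S1, train_hyperplane)
    return node

def classify(node, x):
    """Descend left on a 0-classification, right on a 1-classification."""
    if node.fires(x):
        return classify(node.right, x) if node.right else 1
    return classify(node.left, x) if node.left else 0
```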

For our experiments, certain (reasonable) limiting assumptions were placed on the building of such LTU trees in order to prevent needlessly complex trees, thereby helping to improve generalization and reduce the algorithm's execution time. These included setting a fixed maximum tree depth and tolerating a certain percentage of error in each individual node. This toleration condition was set after some empirical observations which indicated that, given some number of similarly classified instances in a node, n, a certain percentage of erroneous classifications, E, would be acceptable (thus precluding further branching for that particular classification from the node): E was 0% when n fell below a first cutoff, 1% when n fell between the first and a second, larger cutoff, and 6% otherwise.

Initial testing was performed within this LTU tree architecture using a variety of methods for learning the linear discriminant at each node of the tree (Sahami 1993). Wishing to minimize the number of erroneous classifications made at each node in the tree, Back-Propagation appeared to be the most promising of these weight updating procedures. While this heuristic of minimizing errors at each node can occasionally produce larger than optimal trees², it generally produces trees of optimal or near-optimal size, and was shown to produce the smallest trees on a number of sample functions when compared with other weight updating procedures. Since we are only allowed to store one hyperplane at each node (and not an entire network, although this might be an interesting angle for further research), we apply the Back-Propagation algorithm to only one unit at a time. To make this unit a linear threshold unit, a threshold is set at 0.5 after training is completed (this threshold is not used during training). Thus the output of the unit trained with Back-Propagation is given by:

O_LTU_n = 1 if O_n >= 0.5, and 0 otherwise, where O_n = 1 / (1 + e^(-N_n)) and N_n = w_n^t · x + θ_n

where O_n is the actual real-valued output of the nth trained unit on any instance, O_LTU_n is the output of our "linear threshold unit," and θ represents the "bias" weight of the unit.

² An optimal tree would contain the minimum number of linear separators (nodes) necessary to successfully classify all instances in the training set, S.

The updating procedure used in training each node is:

w_{k+1} = w_k + Δw_k + momentum · Δw_{k-1}
Δw_k = lrate · x · O_k · (1 - O_k) · (d - O_k)

θ_{k+1} = θ_k + Δθ_k
Δθ_k = lrate · O_k · (1 - O_k) · (d - O_k)

where O_k = 1 / (1 + e^(-N_k)) and N_k = w_k^t · x + θ_k. Here w is the weight vector being updated, x is a given instance vector, and d is the desired output. We set lrate = 1.0 and used a fixed momentum term in our experiments.
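The node-level training just described is the standard sigmoid delta rule with momentum and can be sketched directly. The epoch count and the momentum value of 0.9 below are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def sigmoid(n):
    return 1.0 / (1.0 + np.exp(-n))

def train_unit(S, dim, lrate=1.0, momentum=0.9, epochs=100):
    """Single-unit Back-Propagation; S is a list of (x, d) pairs.
    Returns (w, theta) for use as an LTU thresholded at 0.5."""
    w, theta = np.zeros(dim), 0.0
    dw_prev = np.zeros(dim)
    for _ in range(epochs):
        for x, d in S:
            x = np.asarray(x, dtype=float)
            o = sigmoid(w @ x + theta)         # O_k, the real-valued output
            grad = o * (1.0 - o) * (d - o)     # sigmoid-derivative error term
            dw = lrate * grad * x              # delta-w_k from the rule above
            w = w + dw + momentum * dw_prev    # momentum uses the previous step
            theta = theta + lrate * grad       # bias update, delta-theta_k
            dw_prev = dw
    return w, theta
```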
There are many possible extensions to this LTU tree-building algorithm, including irrelevant attribute elimination (Brodley & Utgoff 1992), producing several hyperplanes at each node using different weight updating procedures and selecting the hyperplane which causes the fewest incorrect classifications, using Bayesian analysis to determine instance separations (Langley 1992), post-processing of the tree to reduce its size, etc. These modifications are beyond the scope of this paper, however, and generally are only fine tunings of the underlying learning architecture, which is not changed by them.

Creating Networks From LTU Trees

The trees which are produced by the LTU tree algorithm can be mechanically transformed into three-layer connectionist networks that implement the same functions. Given an LTU tree, T, with m nodes, we can construct an isomorphic network containing the m nodes of the tree in the first hidden layer (each fully connected to the set of inputs). The second hidden layer consists of n nodes (AND gates), where n is the number of possible distinct paths between the root of T and a leaf node (a node without two children). The output layer is merely an OR gate connected to all n nodes in the previous layer. The connections between the first and second hidden layers are constructed by traversing each possible path from the root to a leaf in the tree T and, at each node, recording which branch was followed to get to it. Thus each node in the second hidden layer represents a single distinct path through T by being connected to those nodes in the first layer which correspond to the nodes that were traversed along the given path. Since the nodes in the second hidden layer are merely AND gates, an input coming from the first hidden layer must first be inverted if a left branch was traversed in T at the node corresponding to that input. Two examples are given below.

As pointed out in (Brent 1990), it is more efficient to do classifications using the tree structure than the corresponding network, since the only computations which must be performed are those which lie on a single path from the root of the tree to a leaf. Conveniently, when we later examine how to incrementally train a network which corresponds to an LTU tree, we may then transform the trained network into a decision tree to attain this computational benefit during classification.
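A minimal sketch of this transformation, reusing the LTUNode class from the earlier sketch. Each second-layer AND gate is encoded as a list of (node, inverted) pairs gathered along a root-to-leaf path; only paths that terminate in a 1-classification need a gate, since paths ending in a 0-classification contribute nothing to the final OR.

```python
def enumerate_and_gates(node, prefix=()):
    """Collect one AND gate per path whose terminal classification is 1."""
    gates = []
    if node.right is None:
        # the 1-side of this node is a (possibly implicit) leaf: an instance
        # that reaches the node and fires it is classified 1 along this path
        gates.append(list(prefix) + [(node, False)])
    else:
        gates.extend(enumerate_and_gates(node.right, prefix + ((node, False),)))
    if node.left is not None:
        # a left branch contributes an inverted input to gates below it
        gates.extend(enumerate_and_gates(node.left, prefix + ((node, True),)))
    return gates

def network_classify(gates, x):
    """Output layer: OR over the AND gates in the second hidden layer."""
    for gate in gates:
        if all(n.fires(x) != inverted for n, inverted in gate):
            return 1
    return 0
```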

[Figure 1: a two-node tree produced by the LTU tree algorithm. Figure 2: the corresponding network.]

Figure 1 shows a two-node tree produced by the LTU tree algorithm, while Figure 2 shows the corresponding network after performing the transformation described above. Nodes 1 and 2 in Figure 1 correspond directly to nodes 1 and 2 in Figure 2. Node 3 simply has the output of node 1 as its input (since there is a path of length 1 in the tree from the root to node 1, which is considered a leaf). Node 4 is a conjunct of the inverted output of node 1 (since we must follow the left branch from node 1 to reach node 2 in the tree) and the output of node 2. Node 5 is simply an OR gate.

[Figure 3: a more complex tree produced by the LTU tree algorithm. Figure 4: the corresponding network.]

Figure 3 shows a more complex tree produced by the LTU tree algorithm, and Figure 4 represents the corresponding network. Nodes 1, 2, 3, and 4 in Figure 3 correspond directly to the same nodes in Figure 4. In Figure 4, node 5 represents the path 1-2-4 in the tree, with the inverted output of node 1, the inverted output of node 2, and the output of node 4 as inputs. Node 6 represents the path 1-2 (as node 2 in the tree is also considered a leaf), with the inverted output of node 1 and the output of node 2 as inputs. Node 7 corresponds to the path 1-3 and has the outputs of nodes 1 and 3 as inputs. Again, node 8 is simply a disjunction of the outputs of nodes 5, 6, and 7.

Madaline-Style Learning Algorithm

The updating strategy in this Madaline-style architecture is based upon modifying the weight vectors in the first hidden layer of nodes by appropriately strengthening and weakening them based on incorrect predictions by the network. We also make use of knowing the structure of the LTU tree, T, which corresponds to the network we are training. When an instance is incorrectly classified as a 0, we know that no node in the second hidden layer corresponding to a leaf in T fired. Thus we look for the node corresponding to a leaf node in T which is closest to threshold and strengthen it. We also examine any nodes corresponding to non-leaf nodes in T that we know exist along the path from the root of T to the given leaf node closest to threshold. If such a node was over threshold but the given leaf is down its left child in T, then the node in the network corresponding to that particular non-leaf node in T is weakened. Similarly, if the node corresponding to a non-leaf node in T was under threshold, but the leaf node is on a path down its right child in T, then the node in the network corresponding to the non-leaf node in T is strengthened. When an instance is misclassified as a 1, we simply find the node in the second hidden layer of the network which misfired (there can only be one) and weaken all nodes which are inputs to it and also correspond to leaf nodes in T. In the case of the network in Figure 2, this translates into the following updating procedure (a code sketch follows the list):

1. On a misclassified 0, determine if node 1 or node 2 is closer to threshold: if node 1 is closer to threshold, then strengthen node 1; else strengthen node 2.
2. On a misclassified 1, only node 3 or node 4 (but not both) misfired. In this case: if the output of node 3 is 1, then weaken node 1; else weaken node 2.
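A sketch of this two-rule procedure for the Figure 2 network. It assumes the first-hidden-layer units expose w and theta as in the earlier sketches; the strengthen and weaken operations are deferred to the update rules given in the next section.

```python
def activation(node, x):
    """Signed distance of an LTU from its firing threshold."""
    return sum(wi * xi for wi, xi in zip(node.w, x)) + node.theta

def update_figure2(node1, node2, x, label, strengthen, weaken):
    fired1 = activation(node1, x) >= 0
    fired2 = activation(node2, x) >= 0
    predicted = 1 if (fired1 or fired2) else 0   # node 3 OR node 4 fired
    if predicted == label:
        return
    if label == 1:
        # misclassified 0: strengthen whichever unit is closer to threshold
        if abs(activation(node1, x)) <= abs(activation(node2, x)):
            strengthen(node1, x)
        else:
            strengthen(node2, x)
    else:
        # misclassified 1: exactly one AND gate misfired
        if fired1:              # node 3 (path "1") fired
            weaken(node1, x)
        else:                   # node 4 (path "1-2") fired
            weaken(node2, x)
```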

How nodes are strengthened and weakened depends upon which learning method was being used on the Madaline-style networks. Both the classical fixed increment method (referred to simply as Madaline below) and Littlestone's Winnow algorithm (referred to as Mada-winnow) were employed in our tests, as follows:

Algorithm                      Updating Method
Fixed Increment (Madaline)     Strengthen: w_{k+1} = w_k + x
                               Weaken:     w_{k+1} = w_k - x
Winnow (Mada-winnow)           Strengthen: w_{k+1,i} = α^{x_i} · w_{k,i}
                               Weaken:     w_{k+1,i} = β^{x_i} · w_{k,i}

where w is the weight vector (w_i is the ith component of w) at the node being modified and x is the instance vector which was misclassified. α and β are fixed constants with α > 1 and 0 < β < 1 (Winnow also uses a fixed threshold in our initial experiments).
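Both update pairs are one-liners in code. The α = 2 and β = 1/2 below are assumed, Winnow-typical constants rather than the values used in the experiments, and node.w is assumed to be a NumPy array; these functions plug into the earlier update_figure2 sketch as its strengthen and weaken arguments.

```python
import numpy as np

ALPHA, BETA = 2.0, 0.5   # assumed Winnow-style constants: ALPHA > 1 > BETA > 0

def madaline_strengthen(node, x):
    node.w = node.w + np.asarray(x, dtype=float)    # w <- w + x

def madaline_weaken(node, x):
    node.w = node.w - np.asarray(x, dtype=float)    # w <- w - x

def winnow_strengthen(node, x):
    # multiply w_i by ALPHA wherever x_i = 1, leaving other components alone
    node.w = node.w * ALPHA ** np.asarray(x, dtype=float)

def winnow_weaken(node, x):
    # multiply w_i by BETA wherever x_i = 1
    node.w = node.w * BETA ** np.asarray(x, dtype=float)
```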
Experimental Results

In testing the LTU tree algorithm and the corresponding network for their ability to learn, a non-linearly separable 20-bit boolean function was used. This function was defined as:

(x_1 + x_2 + ... + x_10 >= r)  OR  (x_11 + x_12 + ... + x_20 >= r)

for a fixed threshold r. This function, effectively being the disjunction of two r-of-k threshold functions, is not linearly separable, but can be optimally learned using two hyperplanes to separate the instance space. Thus, in testing our various learning methods on this function, we compare the LTU tree algorithm against training networks configured similarly to Figure 2 (as this is the optimal size network to learn the given function). In training the networks, we compare standard Back-Propagation applied to the entire network (using preset fixed weights in the second hidden and output layers to simulate the appropriate AND and OR gates) against our novel Madaline-style learning method (discussed above). Note that our learning procedure is effectively only learning the separating hyperplanes in the first hidden layer of the network (corresponding to learning the nodes of an LTU tree).

On a technical note, the instance vectors presented to both the LTU tree and to Back-Propagation applied to an entire network include the original boolean vector (comprised of 1's and 0's) together with the complements of the original vector, creating a "double length" instance vector (as preliminary testing showed that the use of complements helped improve learning performance with these algorithms). In the Madaline-style tests, the instance vectors presented when using fixed increment updating were composed of 1's and -1's without the addition of complements, whereas when using Winnow the instance vectors were similar to those used with the LTU tree (complementary attributes were added).

The number of instances presented for training, as well as the number of dimensions in the input vector, were varied. Note that only the first 20 bits of the instance vector are relevant to its proper classification; the added bits are simply random, irrelevant attributes. The dimensions given in the graphs below measure the size of the original instance vector (not including complementary attributes). The graphs below represent a number of test runs of each algorithm in each case. Testing is done on an independent, randomly generated set of instances, numbering the same as the training set. The "average" graphs show the percentage of errors made during testing by each algorithm, averaged over the test runs; the "best" graphs show the smallest percentage of errors made during testing over the test runs.

[Figures 5-8: percent test error vs. number of dimensions for Mada-winnow, Madaline, BP network, and BP tree; Figures 5 and 6 are trained using the smaller set of randomly generated instances, Figures 7 and 8 the larger set.]

We see that, in the average case (Figure 5), when trained using the smaller training set (in which each instance is seen only once), the Madaline network (using fixed-increment updating) outperforms all other algorithms as the number of irrelevant attributes is increased. The LTU tree (called BP tree here) performs without errors up to 100 dimensions (during which time it was consistently producing optimal trees of two nodes) and then quickly begins to degenerate in performance, as the trees it produces get larger due to poor separating hyperplanes being produced at each node. Not surprisingly, it is at this same point that Back-Propagation over an entire network also begins to degenerate quickly, leading us to realize that the network is getting too small to properly deal with irrelevant attributes. Mada-winnow also performs very erratically, due primarily to seeing too few instance vectors to settle into a good "solution state." The best case analysis (Figure 6) indicates a simple linear increase in the number of errors made by Madaline (caused by a linear increase in the sum of weights from irrelevant attributes), as opposed to an erratic increase indicating that the boolean function was not learned. Similarly, Mada-winnow seems to be capable of learning the function up to 30 dimensions and then quickly degenerates, indicating that learning is not effectively taking place, as opposed to occasional misclassifications caused by added irrelevant attribute weights. We find the BP network still unable to learn beyond 100 dimensions, while the BP tree remains effective at somewhat higher dimensions.

When we examine the results of using the larger training set (each instance again seen once), the effectiveness of the Madaline-style architecture becomes much clearer. In the average case (Figure 7) we still find the standard BP network degenerating after 100 dimensions. However, we see extremely low error rates for Madaline all the way through, indicating that not only has the target function been learned, but the effect of irrelevant attribute weights has also been minimized. Moreover, we find that Mada-winnow is successful in learning the target function with instances up to 30 dimensions in length before its predictive accuracy begins to fall. Similarly, the BP tree is effective for instances up to a still higher number of dimensions before, once again, tree sizes grow too large as the linear separators at each node provide poorer splits. In the best case (Figure 8) we see the most striking results, as Madaline still maintains a very low error rate, and Mada-winnow has 0% errors over the entire range of dimensions tested! This would indicate that by training a number of such Mada-winnow networks and using cross-validation techniques to determine which has the highest predictive accuracy, we can learn non-linearly separable boolean functions with an extremely high degree of accuracy even in the presence of many irrelevant attributes. This of course does require some knowledge as to what network size would provide the best results, but initially running the LTU tree algorithm on our data set could provide us with good ballpark approximations for this.

Non-Optimal Networks

Having seen the predictive accuracy of the Madaline-style networks in learning when the optimal network size³ was known, it is important to get an idea of the accuracy of such networks when they are non-optimal. In examining the effects of using a network that is larger than necessary, the network in Figure 4 was used to learn the same 20-bit non-linearly separable problem. The updating procedure for this network is described below (a code sketch follows the list):

On a misclassified 0, determine if node 2, 3, or 4 is closest to threshold:
  If node 2 is closest to threshold, then strengthen node 2, and if node 1 is over threshold then weaken node 1.
  If node 3 is closest to threshold, then strengthen node 3, and if node 1 is not over threshold then strengthen node 1.
  If node 4 is closest to threshold, then strengthen node 4, and if node 1 is over threshold then weaken node 1.
On a misclassified 1, determine if node 5, 6, or 7 misfired:
  If the output of node 5 is 1, then weaken node 4.
  If the output of node 6 is 1, then weaken node 2.
  If the output of node 7 is 1, then weaken node 3.

³ The notion of optimal network size stems from the transformation of an optimal LTU tree.
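A sketch of these rules in the same style as the Figure 2 example, reusing activation and the strengthen/weaken operations above; the gate tests mirror the paths 1-2-4, 1-2, and 1-3 of Figure 3.

```python
def update_figure4(n1, n2, n3, n4, x, label, strengthen, weaken):
    """Madaline-style update for the larger (Figure 4) network; n1 is the
    root of the underlying tree T, and n2, n3, n4 are its other units."""
    act = {n: activation(n, x) for n in (n1, n2, n3, n4)}
    gate5 = act[n1] < 0 and act[n2] < 0 and act[n4] >= 0   # path 1-2-4
    gate6 = act[n1] < 0 and act[n2] >= 0                   # path 1-2
    gate7 = act[n1] >= 0 and act[n3] >= 0                  # path 1-3
    predicted = 1 if (gate5 or gate6 or gate7) else 0
    if predicted == label:
        return
    if label == 1:
        # misclassified 0: strengthen the leaf unit closest to threshold,
        # then nudge node 1 toward the branch that leads to that leaf
        leaf = min((n2, n3, n4), key=lambda n: abs(act[n]))
        strengthen(leaf, x)
        if leaf is n3:
            if act[n1] < 0:         # path 1-3 needs node 1 over threshold
                strengthen(n1, x)
        elif act[n1] >= 0:          # nodes 2 and 4 lie down node 1's left branch
            weaken(n1, x)
    else:
        # misclassified 1: weaken the terminal leaf of the gate that misfired
        if gate5:
            weaken(n4, x)
        elif gate6:
            weaken(n2, x)
        else:
            weaken(n3, x)
```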

Now we compare the previous results of Madaline and Mada-winnow using the smaller network, denoted (S), with the larger network, denoted (L). Again looking at the average of the test runs using the smaller training set (Figure 9), we see that the performance of both Madaline and Mada-winnow is worse when learning using a larger network (as we would expect, since there is greater possibility for confusion over which nodes to update). This is also seen in the best case graph (Figure 10), where we still see the erratic behavior of learning using the Mada-winnow (L) algorithm, which cannot properly learn the target function even with only a few irrelevant dimensions. The Madaline (L) algorithm still holds some promise, as it maintains a relatively low error rate up to a moderate number of dimensions before it too begins to quickly degenerate in its predictive ability.

[Figures 9-12: percent test error vs. number of dimensions for Mada-winnow (L), Madaline (L), Mada-winnow (S), and Madaline (S); Figures 9 and 10 are trained using the smaller set of randomly generated instances, Figures 11 and 12 the larger set.]

Again, the most striking differences are seen when examining the graphs of learning runs using the larger training set. Noting that the "% error" scale on Figures 11 and 12 is much smaller than on the previous figures (to make the graphs more readable), we see that in the average case, while Mada-winnow (L)'s behavior is still erratic (caused by the way the Winnow algorithm greatly modifies weights between updates, leading to instability in the resultant weight vector when training ceases), its error rate nevertheless stays low. Moreover, Madaline (L) shows only a small linear decrease in its predictive ability over the entire graph, reflecting again that the target function was effectively learned and that misclassifications arise from the cumulative sum of small irrelevant attribute weights. Finally, Figure 12 shows the most impressive results. First, Madaline (L) has only a slightly higher error rate than Madaline (S). And more impressively, the Mada-winnow (L) algorithm is able to maintain 0% error over the entire range of irrelevant attributes, reflecting that network size is not entirely crucial for effective learning within this paradigm. An examination of the weights in the larger network indicated that, in fact, two nodes in the first hidden layer contained the appropriate hyperplanes required to learn the target function, and the other two nodes had somewhat random but essentially "unused" weights in terms of instance classification.

It is important to note that the fixed threshold used with the Winnow algorithm was dependent on the number of irrelevant attributes in the instance vectors presented. This reflects a problem inherent in the Winnow algorithm (in which threshold choice can have a large impact upon learning) and is not a shortcoming of the Madaline-style architecture.

Future Work

There is still a great deal of work to be done in examining and extending both the LTU tree and the Madaline-style learning algorithms. In terms of the LTU tree, new methods for finding better separating hyperplanes, as well as the incorporation of post-learning pruning techniques, would be very helpful in determining proper network size both for Madaline-style and standard neural networks.

As for the Madaline-style networks, clearly more work needs to be done in examining larger networks and learning more complex functions. Another interesting problem arises in looking at methods to prune the network during training to produce better classifications. Also, theoretical measures are needed for the number of training instances to present for adequate learning.

Acknowledgments

The author is grateful to Prof. Nils Nilsson, without whose ideas, guidance, help and support this work would never have been done. Additional thanks go to Prof. Nilsson for reading and commenting on an earlier draft of this paper. Dr. Pat Langley also provided a sounding board for ideas for extending research dealing with LTU trees.

References

Brent, R. P. 1990. Fast training algorithms for multi-layer neural nets. Numerical Analysis Project Manuscript NA-90-03, Dept. of Computer Science, Stanford Univ.

Brodley, C. E., and Utgoff, P. E. 1992. Multivariate Versus Univariate Decision Trees. COINS Technical Report 92-8, Dept. of Computer Science, Univ. of Massachusetts.

Duda, R. O., and Hart, P. E. 1973. Pattern Classification and Scene Analysis. New York: John Wiley & Sons.

Langley, P. 1992. Induction of Recursive Bayesian Classifiers. Forthcoming.

Littlestone, N. 1988. Learning quickly when irrelevant attributes abound: A new linear-threshold algorithm. Machine Learning 2:285-318.

Littlestone, N. 1991. Redundant noisy attributes, attribute errors, and linear-threshold learning using Winnow. In Proceedings of the Fourth Annual Workshop on Computational Learning Theory, San Mateo, CA: Morgan Kaufmann.

Nilsson, N. J. 1965. Learning Machines. New York: McGraw-Hill.

Quinlan, J. R. 1986. Induction of decision trees. Machine Learning 1:81-106.

Ridgway, W. C. 1962. An Adaptive Logic System with Generalizing Properties. Stanford Electronics Laboratories Technical Report 1556-1, prepared under Air Force Contract AF 33(616)-7726, Stanford Univ.

Rumelhart, D. E.; Hinton, G. E.; and Williams, R. J. 1986. Learning internal representations by error propagation. In Parallel Distributed Processing, Vol. 1, eds. D. E. Rumelhart and J. L. McClelland. Cambridge, MA: MIT Press.

Rumelhart, D. E., and McClelland, J. L., eds. 1986. Parallel Distributed Processing, Vol. 1. Cambridge, MA: MIT Press.

Sahami, M. 1993. An Experimental Study of Learning Non-Linearly Separable Boolean Functions With Trees of Linear Threshold Units. Forthcoming.

Utgoff, P. E. 1988. Perceptron Trees: A Case Study in Hybrid Concept Representation. In AAAI-88: Proceedings of the Seventh National Conference on Artificial Intelligence, San Mateo, CA: Morgan Kaufmann.

Widrow, B., and Winter, R. G. 1988. Neural Nets for Adaptive Filtering and Adaptive Pattern Recognition. IEEE Computer, March: 25-39.

Winston, P. 1992. Artificial Intelligence, third edition. Reading, MA: Addison-Wesley.
