Random Kernel Perceptron on ATTiny2313 Microcontroller


Nemanja Djuric
Department of Computer and Information Sciences, Temple University, Philadelphia, PA 19122, USA

Slobodan Vucetic
Department of Computer and Information Sciences, Temple University, Philadelphia, PA 19122, USA

ABSTRACT
Kernel Perceptron is a very simple and efficient online classification algorithm. However, it requires increasingly large computational resources with data stream size and is not applicable on large-scale problems or on resource-limited computational devices. In this paper we describe an implementation of Kernel Perceptron on the ATTiny2313 microcontroller, one of the most primitive computational devices with only 128B of data memory and 2kB of program memory. ATTiny2313 is a representative of devices that are popular in embedded systems and sensor networks due to their low cost and low power consumption. Implementation on microcontrollers is possible thanks to two properties of Kernel Perceptrons: (1) availability of budgeted Kernel Perceptron algorithms that bound the model size, and (2) relatively simple calculations required to perform online learning and provide predictions. Since ATTiny2313 is a fixed-point controller that supports floating-point operations only through software, which introduces significant computational overhead, we considered implementation of basic Kernel Perceptron operations through fixed-point arithmetic. In this paper, we present a new approach to approximating one of the most used kernel functions, the RBF kernel, on fixed-point microcontrollers. We conducted simulations of the resulting budgeted Kernel Perceptron on several datasets, and the results show that accurate Kernel Perceptrons can be trained using ATTiny2313. The success of our implementation opens the door to implementing powerful online learning algorithms on the most resource-constrained computational devices.

Categories and Subject Descriptors
C.3 [Computer Systems Organization]: Special purpose and application-based systems - Real-time and embedded systems

General Terms
Algorithms, Performance, Experimentation.
Keywords
Kernel Perceptron, Budget, RBF kernel, Microcontroller.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. SensorKDD '10, July 25th, 2010, Washington, DC, USA. Copyright 2010 ACM.

1. INTRODUCTION
Kernel Perceptron is a powerful class of machine learning algorithms which can solve many real-world classification problems. Initially proposed in [6], it was proven to be both accurate and very easy to implement. Kernel Perceptron learns a mapping f: X -> R from a stream of training examples S = {(x_i, y_i), i = 1, ..., N}, where x_i in X is an M-dimensional input vector, called the data point, and y_i in {-1, +1} is a binary variable, called the label. The resulting Kernel Perceptron can be represented as

  f(x) = sign( SUM_i alpha_i K(x, x_i) ),  (1)

where alpha_i are weights associated with training examples, and K is the kernel function. The RBF kernel is probably the most popular choice for Kernel Perceptron because it is intuitive and often results in very accurate classifiers. It is defined as

  K(x, x_i) = exp( -||x - x_i||^2 / A ),  (2)

where ||.|| is the Euclidean distance and A is a positive number defining the width of the kernel. Training of the Kernel Perceptron is simple: starting from the zero function f(x) = 0 at time t = 0, data points are observed sequentially, and f(x) is updated as f(x) <- f(x) + alpha_i K(x, x_i), where alpha_i = y_i if point x_i is misclassified (y_i f(x_i) <= 0) and alpha_i = 0 otherwise. Examples for which alpha_i = y_i have to be stored, and they are named the support vectors. Despite the simplicity of this training algorithm, Kernel Perceptrons often achieve impressive classification accuracies on highly non-linear problems. On the other hand, the number of support vectors grows linearly with the number of training examples on noisy classification problems.
Therefore, these algorithms require O(N) memory and O(N^2) time to learn in a single pass over N examples. Because the number of examples can be extremely high, Kernel Perceptrons can be infeasible on noisy real-world classification problems. This is the reason why budgeted versions of Kernel Perceptron have been proposed, with the objective of retaining high accuracy while removing the unlimited memory requirement. Budget Kernel Perceptron [4] has been proposed to address the problem of the unbounded growth in resource consumption with training data size. The idea is to maintain a constant number of support vectors during training by removing a single support vector every time the budget is exceeded upon addition of a new support vector. Given a budget of T support vectors, Budget Kernel Perceptrons achieve constant space scaling O(T), linear time to train O(TN), and constant time to predict O(T). There are several ways one may choose to maintain the budget. The most popular approach in the literature is removal of an existing support vector to accommodate the new one. For example, Forgetron [5] removes the oldest support vector, Random [3] a randomly selected one, Budget [4] the one farthest from the margin, and Tightest [13] the one that impacts the accuracy the least. There are more advanced and computationally expensive approaches, such as merging of two support vectors [12] and projecting a support vector onto the remaining ones [9]. Although Budget Kernel Perceptrons are very frugal, requiring constant memory and linear time to train, they are still not applicable to one of the simplest computational devices, microcontrollers. These devices have extremely small memory and computational power. In this paper we consider the microcontroller ATTiny2313, which has only 128B of memory available to store the model, a maximum processor speed of 18MHz, and whose hardware does not support floating-point operations. We propose a modification of Budget Kernel Perceptron that uses only integers, which makes it very suitable for use on resource-limited microcontrollers. Our method approximates floating-point Budget Kernel Perceptron operations using fixed-point arithmetic that provides nearly the same accuracy, while allowing much faster execution time and requiring less memory. We use Random Removal as the budget maintenance method [3, 11]. When the budget is full and a new support vector is to be included in the support vector set, one existing support vector is randomly chosen for deletion. Although this update rule is extremely simple, the resulting Perceptron can achieve high accuracy when the allowed budget is sufficiently large. The pseudocode of Budget Kernel Perceptron is given as Algorithm 1.
It should be noted that a significant body of research exists on the topic of efficient hardware implementation of various machine learning and signal processing algorithms. In Compressed Kernel Perceptron [11] the authors consider efficient memory utilization through data quantization, but assume a floating-point processor. Several solutions have been proposed for implementation of kernel machines on fixed-point processors as well. In [1], the authors propose special hardware suitable for budgeted Kernel Perceptron. Similarly, in [2] the authors consider implementation of Support Vector Machines [10] on fixed-point hardware. The two proposed algorithms, however, assume that the classifiers are already trained. Additionally, use of problem-specific hardware is limited to a single problem. We, on the other hand, implement our method on general-purpose hardware, which makes the implementation much easier and more cost-efficient. Moreover, we combine the two proposed approaches, quantization of data and the use of fixed-point hardware, to obtain an accurate classifier that can be used on the simplest existing computational devices.

The paper is organized as follows. In Section 2 the proposed method is explained in detail. In Section 3 the results are presented and discussed. Finally, Section 4 concludes the paper.

Algorithm 1 - Budget Kernel Perceptron
  Inputs: data sequence ((x_1, y_1), ..., (x_N, y_N)), budget T
  Output: support vector set SV = {SV_i, i = 1 ... I}

  I <- 0; SV <- 0
  for i = 1 : N
  {
    if (y_i * SUM_{j=1}^{I} y_j K(x_j, x_i) <= 0)
    {
      if (I == T)
        new <- random(I)
      else
      {
        I <- I + 1
        new <- I
      }
      SV_new <- (x_i, y_i)
    }
  }

2. METHODOLOGY
In this paper we focus on implementation of Kernel Perceptron on a specific microcontroller. We first describe its properties and then explain the proposed algorithm.

2.1 Microcontroller
The microcontroller ATTiny2313 [14] has been chosen as the target platform because its specifications make it extremely challenging to implement a data mining algorithm.
It has very limited speed (0-18 MHz, depending on voltage, which ranges from 1.8 to 5.5V), low power requirements (in 1.8V and 1MHz mode it requires only 300uA and 540uW of power), and very limited memory (2kB of program memory, used to store executable code and constants, and 128B of data memory, used to store the model and program variables). Because of its limitations it is also very cheap, with a price of around $1 [15]. It supports 16-bit integers and fixed-point operations. Multiplication and division of integers is very fast, but at the expense of potential overflow problems with multiplication and round-off error with division. ATTiny2313 does not directly support floating-point numbers, but can cope with them using software solutions. Floating-point calculations can be supported through a C library, but only after incurring significant overhead in both memory and execution time.

2.2 Attribute quantization
In order to use the limited memory efficiently, the data are first normalized and then quantized using B bits, as proposed in [11]. Using quantized data, instead of (1), the predictor is

  f(x) = sign( SUM_i alpha_i K(q(x), q(x_i)) ),  (3)

where q(x) is the quantized data point x. Quantization of data introduces quantization error, which can have a significant impact on classification, especially for data points located near the separating boundary. The loss suffered by the prediction model due to quantization is discussed in much more detail in [11]. The objective of this paper is to approximate (3) using fixed-point arithmetic.
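The B-bit quantization step can be sketched as below, assuming attributes already normalized to [0, 1]; the function names are illustrative, and the dequantize helper exists only for host-side analysis, since the microcontroller works with the integer representation directly.

```c
/* Sketch of B-bit attribute quantization: a normalized attribute x in [0,1]
 * is stored as an integer I(x) obtained by scaling with 2^B and rounding,
 * so that q(x) = I(x) * 2^-B approximates x. Illustrative names. */
#include <stdint.h>

static uint16_t quantize(double x, int B)
{
    double scaled = x * (double)(1 << B);
    return (uint16_t)(scaled + 0.5);   /* round to nearest integer */
}

/* Inverse mapping q(x), used only for analysis on the host */
static double dequantize(uint16_t ix, int B)
{
    return (double)ix / (double)(1 << B);
}
```

With B = 5 bits, for example, each attribute collapses to one of 33 levels, which is what makes the integer distance range of Section 2.3 small enough to tabulate.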

2.3 Fixed-point method for prediction
Our algorithm should be implemented on a microcontroller with extremely low memory. Although quantization can save valuable memory space, there are additional memory-related issues that must be considered. First, the number of program variables must be minimal. Prudent use of variables leaves more space to store the model and leads to increased accuracy. On the computational side, all unnecessary operations, such as excess multiplications and exponentials, should be avoided. If this is not taken into consideration, the program can take a lot of memory and a lot of time to execute. Furthermore, due to the inability of ATTiny2313 to deal with floating-point numbers in hardware, use of a floating-point library results in an even larger program size. Instead of the RBF kernel (2), our algorithm uses its modified version, as proposed in [2],

  K(x, x_i) = e^( -|x - x_i| / 2^A ),  (4)

where several differences from the RBF kernel in (2) can be noticed. First, instead of the Euclidean distance, the Manhattan distance |.| is used. In [2], a kernel function with Manhattan distance is used because multiplications can then be completely omitted from the algorithm. For our application, we use the Manhattan distance to limit the range of distances between the quantized data, which will lead to improved performance of our algorithm. Furthermore, without loss of generality, we represent the kernel width as 2^A, where A is any number. Although the simplifications in equation (4) have been introduced, the algorithm still cannot be implemented without a penalty in the speed of computation. This is because very simple devices are not designed to compute exponents or other operations involving floating-point arithmetic efficiently, which results in slow run-time and excessive memory usage. Therefore, an alternative way needs to be devised to perform the kernel calculation. The idea of our method is to use only integers, which are handled easily by even the simplest devices, and still manage to use the RBF kernel for predicting the label of a new point.
The normalized data point x is quantized using B bits, and its integer representation I(x) is

  I(x) = round( x * 2^B ).  (5)

The value of I(x) is in the range [0, 2^B]. Since x is now approximated by q(x) = I(x) * 2^-B, we can define a kernel function which takes quantized data,

  K_q(x, x_i) = K(q(x), q(x_i)) = e^( -|I(x) - I(x_i)| / 2^(A+B) ).  (6)

If we represent |I(x) - I(x_i)| as d_i and, for simplicity of notation, introduce

  g = e^( -1 / 2^(A+B) ),  (7)

we get

  K_q(x, x_i) = g^(d_i).  (8)

We need to calculate sign( SUM_{i=1}^{T} y_i K_q(x, x_i) ), where T is the budget. Since sign( SUM_{i=1}^{T} y_i K_q(x, x_i) ) = sign( SUM_{i=1}^{T} y_i c K_q(x, x_i) ) for any c > 0, we can replace g^(d_i) from (8) with C g^(d_i - d_nn), where d_nn is the distance between the newly observed data point x and the nearest support vector, and C is a positive integer. Then, to allow integer representation, we replace it with the integer

  w(d_i - d_nn) = round( C g^(d_i - d_nn) ),  (9)

and approximate K_q(x, x_i) by w(d_i - d_nn). It is clear that if d_i = d_nn then w_i = C, if d_i - d_nn = 1 then w_i = round(C g), and if d_i - d_nn is sufficiently large then w_i = 0. We can notice that if the i-th support vector is close to the data point x, its weight w_i and its influence on classification will be large. Our final classifier is approximated as

  f(x) = sign( SUM_{i=1}^{T} y_i w(d_i - d_nn) ),  (10)

where y_i is the label of the i-th support vector and w_i is its weight, which depends on the distance d_i from the data point x. The drawback of the proposed method is that the support vector set needs to be scanned every time a new data point x arrives, in order to find the nearest support vector distance. In addition, there is a computational problem related to the calculation of weights. These issues will be addressed in Section 2.4.

2.3.1 Error of weight approximation
By using the rounding operation (9) we are inevitably introducing additional error, on top of the error made due to quantization of the data point using B bits [11].
In order to understand the effect of the approximation error, let us define the relative error R as

  R = |true_value - approximation| / true_value,  (11)

where true_value is the actual value of the number and approximation is its value after rounding. It can be shown [7] that the relative error R we are making by the rounding (9) depends on the parameter C as R ~ 1/C. This shows that a large C leads to a small relative error. However, because the microcontroller supports only 16-bit integers, too large a C will render our method useless on the chosen platform.

2.4 Weight calculation
To be able to use the predictor (10), we need to calculate w_i for every support vector. For given parameters (A, B, C) we can calculate the weight for every possible distance d = d_i - d_nn, as illustrated in Table 1. In the naive approach, we could precalculate all the weights and save them as an array in the microcontroller memory. If the dataset is M-dimensional, then d is in the range [0, M * 2^B], and storing every weight could become impossible.

Table 1 - Distances and corresponding weights
  Distance (d):   0    1             2              ...  d
  Weight (w(d)):  C    round(C*g)    round(C*g^2)   ...  round(C*g^d)

Therefore, we must find a way to represent the entire array of weights with only a subset of the weights, in order to save memory. Two approaches are discussed next, both requiring very limited memory.

2.4.1 Sequential method
The most obvious way is to store the weights w(0) and w(1), for distances 0 and 1, respectively. From these two weights every other weight in Table 1 can be iteratively calculated, in integer arithmetic, as

  w(d) = w(d-1)^2 / w(d-2).  (12)

The memory requirement of this approach is minimal, but there are certain problems. In order to calculate a particular weight we need to calculate all the weights up to that one by repeatedly applying (12), which can take substantial time. In addition, the division of two integers results in a rounding error that propagates and increases for large d.

2.4.2 Log method
A different idea is to save only the weight of distance 0, which equals C, together with the weights at distances that are powers of 2, namely the weights of distances 1, 2, 4, 8, 16 and so on. In this way the number of weights we need to save is the logarithm of the total number of distances, which is approximately log(M * 2^B). Using the saved weights, we calculate the weight for any distance using Algorithm 2.

Algorithm 2 - Find Weight
  Inputs: array of weights W and corresponding distances D,
          distance d for which the weight is being calculated
  Output: weight w

  i <- index of the nearest smaller distance to d in array D
  w <- W[i]
  d <- d - D[i]
  while (d > 0)
  {
    i <- index of the nearest smaller distance to d in array D
    w <- w * W[i] / W[0]
    d <- d - D[i]
  }

As can be seen in Algorithm 2, if we are looking for the weight of a distance d that is a power of 2, the weight is immediately found in the array W. If not, we look for the distance in array D that is the closest smaller value to d, and start the calculation from there. We repeat the process until convergence. In this way, we always try to make the smallest error while calculating the weight, and the algorithm runs using O(log(M * 2^B)) space and O(log(M * 2^B)) time, which is efficient and fast even for highly-dimensional data and large bit-lengths. The log method is illustrated in Figure 1.
Assume that we are looking for the weight of distance 25, and that we saved the weights at distances 0, 1, 2, 4, 8, 16 and 32. It is clear that 25 can be represented as 16 + 8 + 1. In the first step we use w(16), then w(8), because 8 is the closest to 25 - 16, and finally w(1). The sought-after weight is thus found both very quickly and with minimal error.

[Figure 1 - Illustration of the log method]

3. EXPERIMENTAL RESULTS
We evaluated our algorithm on several benchmark datasets from the UCI ML Repository, whose properties are summarized in Table 2. The digit dataset Pendigits, which was originally multi-class, was converted to a binary dataset in the following way: classes representing digits 1, 2, 4, 5, 7 (non-round digits) were converted to the negative class, and those representing digits 3, 6, 8, 9, 0 (round digits) to the positive class. Banana and Checkerboard were originally binary-class datasets. The Checkerboard dataset is shown in Figure 8.

[Table 2 - Dataset summaries: training set size, testing set size, and number of attributes for the Banana, Checkerboard and Pendigits datasets]

In all reported results, A is the kernel width parameter, B is the bit-length, and C is the positive integer, all defined in Section 2.3. Figure 2 shows the absolute difference between the true weights and those calculated using the sequential and log methods for weight calculation, described in Sections 2.4.1 and 2.4.2. It can be seen that the log method makes a very small error with minimal memory overhead over the whole range of distances. We therefore use the log method as our default for weight calculation.

[Figure 2 - Error of the two weight calculation methods (error of approximation vs. distance, sequential and log methods; Banana dataset, B = 5 bits, C = 100)]

In the next set of experiments the quality of approximation was evaluated. First, the Random Kernel Perceptron was trained using equation (6), which requires floating-point calculations. Then our fixed-point method was executed, and the predictions of the two algorithms were compared. The approximation accuracy is defined as

  accuracy = (1/N) * SUM_{i=1}^{N} I(y_i^fixed, y_i^floating),  (13)

where I is the indicator function, equal to 1 when the two predictions are the same and 0 otherwise, y is the classifier prediction, and N is the size of the test set.

[Figure 3 - Approximation accuracy as a function of C (Banana dataset, B = 5 bits, budget = 70 bytes)]

Figure 3 shows that the approximation accuracy is relatively high even for small values of C and that it quickly increases with C, as expected from the discussion in Section 2.3.1. Since the ATTiny2313 microcontroller supports 16-bit integers, the maximum value of a number is 2^16 - 1 = 65535. However, as can be seen from Algorithm 2, multiplications are required in order to calculate the weights, and at no point should the product exceed this maximum value. Therefore, the best choice for C is the square root of 65535, which is 255. In the following experiments C was set to 255. In Figure 4 the approximation accuracy is given for the 3 datasets. The total budget was fixed to only 70 bytes. Each dataset was shuffled before each experiment, and the reported results are averages over 10 experiments. The value of parameter A was varied in order to estimate its impact on performance. High negative values of A correspond to an algorithm that acts like nearest neighbour; high positive values of A correspond to the majority vote algorithm. It can be seen that over a large range of A values the approximation accuracy was around 99%, and that it was lower at the extreme values of A. This behavior is due to the numerical limitations of the double-precision format used when calculating equation (6). The exponentials of large negative numbers are too close to 0 and are rounded to 0, which results in much smaller accuracy. On the other hand, it can be seen that for large values of A, corresponding to the algorithm that acts like majority vote, the approximation accuracy starts to decrease.
This was expected, because in this case all the weights, as calculated by our method, are practically the same: the weights decrease extremely slowly, and the very small differences between actual weights are lost when they are calculated using the log method. However, it should be noted that in this problem setting the kernel width depends on A as 2^A, and the values where prediction accuracy starts to fall are practically never used. Experiments were conducted for different values of parameter B as well; the results for other bit-lengths were very similar and are thus not shown.

[Figure 4 - Approximation accuracy as a function of the kernel width parameter A (Banana, Pendigits and Checkerboard datasets, B = 5 bits)]

The next set of experiments was conducted to evaluate the prediction accuracy of the proposed Random Kernel Perceptron implementation. The values of parameters A and B were varied in order to estimate their influence on the accuracy of the methods. The results are averages over 10 experiments and are reported in Figures 5, 6 and 7.

[Figure 5 - Accuracy of the floating-point and fixed-point methods as a function of the kernel width parameter A (Banana dataset, B = 5 bits, budget = 70 bytes)]

It can be seen that both the floating- and fixed-point implementations achieve essentially the same accuracy, as was expected since the approximation has been shown to be very good. In some cases, for very small values of A, the accuracy of the floating-point method drops sharply, as in Figure 6. This happens because of numerical problems: the predictions are so close to 0 that, when calculated in double precision, they are rounded to 0. Our method does not have this problem. It is also worth noticing that for large A the accuracy drops sharply for both algorithms, which supports the claim that large kernel widths are usually not useful. It is exactly for these large values of A that the approximation accuracy drops as well. Therefore, as can be seen in Figures 5, 6 and 7, these values are not of practical relevance.
Only results for the bit-length of 5 bits are shown, since for the other bit-lengths the results were nearly the same.

[Figure 6 - Accuracy of the floating-point and fixed-point methods as a function of the kernel width parameter A (Pendigits dataset, B = 5 bits, budget = 507 bytes)]

[Figure 7 - Accuracy of the floating-point and fixed-point methods as a function of the kernel width parameter A (Checkerboard dataset, B = 5 bits, budget = 70 bytes)]

[Figure 8 - Checkerboard dataset]

3.1 Hardware implementation
The algorithm was tested on the microcontroller ATTiny2313. The program was written in the C programming language using the free IDE AVR Studio 4, available for download from Atmel's website. Different methods were used in order to assess the complexity of the implementation on the microcontroller. The details are given in Tables 3, 4 and 5. Program and Data represent the required program and data memory in bytes. Time, in milliseconds, is given after 100 observed examples, and the kernel width parameter A is set to 6. Accuracy is calculated after all data points were observed. Numbers in brackets represent the total size of the corresponding memory block of the microcontroller. The microcontroller speed is set to 4 MHz. The number of support vectors was chosen so that the entire data memory of the microcontroller is utilized by the fixed-point method; this number of support vectors was then imposed on both methods. Therefore, as the bit-length is increased, the number of support vectors decreases. For Tables 3 and 4 the target microcontroller was ATTiny2313. For Table 5 a microcontroller with twice the memory was used because of the high dimensionality of the Pendigits dataset: ATTiny48 was chosen, which has 4kB of program memory and 256B of data memory. It is clear that the proposed fixed-point algorithm takes much less memory and is faster than the floating-point algorithm. Both time and space costs of the fixed-point algorithm are nearly 3 times smaller. In fact, in order to run simulations using floating-point numbers, the microcontroller ATTiny84 with four times larger memory had to be used, since the memory requirement was too big for both ATTiny2313 and the more powerful ATTiny48 (colored red in the tables).
The accuracy of the two methods is practically the same, and the difference in computational cost is therefore even more striking. In addition, it can be concluded that accuracy depends greatly on the number of support vectors: a higher number of support vectors usually means higher accuracy. However, in our experiments a higher number of support vectors is achieved at the expense of smaller bit-lengths, so the results in the tables can be misleading. In some experiments a larger support vector set does not necessarily lead to better performance, because the quantization error is too big and affects predictions considerably. It should be noted that the support vector set size/bit-length trade-off is not the topic of this work and will be addressed in our future study.

Table 3 - Microcontroller implementation costs, Banana dataset (ATTiny2313; fixed- and floating-point methods at bit-lengths of 2, 4, 6 and 8 bits; rows: Program [B] (2048B total), Data [B] (128B total), Time [ms], Accuracy [%], # of SVs)

Table 4 - Microcontroller implementation costs, Checkerboard dataset (ATTiny2313; fixed- and floating-point methods at bit-lengths of 2, 4, 6 and 8 bits; rows: Program [B] (2048B total), Data [B] (128B total), Time [ms], Accuracy [%], # of SVs)

Table 5 - Microcontroller implementation costs, Pendigits dataset (ATTiny48; fixed- and floating-point methods at bit-lengths of 2, 4, 6 and 8 bits; rows: Program [B] (4096B total), Data [B] (256B total), Time [ms], Accuracy [%], # of SVs)

4. CONCLUSION
In this paper we proposed an approximation of Kernel Perceptron that is well suited for implementation on resource-limited devices. The new algorithm uses only integers, without any need for floating-point numbers, which makes it a good fit for simple devices such as microcontrollers. The presented results show that the considered approach can be successfully implemented. The described algorithm approximates the kernel function defined in (6) with high accuracy and yields good results on several different datasets. The results are, as expected, not perfect, which is a consequence of the highly limited computational capabilities of ATTiny2313 that required several approximations in the calculation of the prediction. While the quantization and approximation errors limit the performance, the degradation is relatively moderate considering the very limited computational resources. The simulations on ATTiny microcontrollers showed that the algorithm is very frugal and fast, while being able to match the performance of its unconstrained counterpart. Although the kernel function defined in (6), a slightly modified RBF kernel, is used, the fixed-point method can also be used to approximate the original RBF kernel (2). The proposed idea could probably be extended to other kernel functions that require operations with floating-point numbers, such as the polynomial kernel. This would be of great practical importance for problems that do not work well with the RBF kernel, and will be pursued in our future work. The proposed algorithm could also be improved by a more advanced budget maintenance method, such as the one described in [8]. By using support vector merging, the performance of the predictor would certainly improve, but it remains to be seen whether the limited device can support such an approach. Also, in this paper we assumed that the parameters A and B, kernel width and bit-length, respectively, are given. Although this is reasonable if the method is used for applications where the classification problem is well understood, it would be interesting to implement on a resource-limited device an algorithm that can automatically find the optimal parameter values.

5. ACKNOWLEDGMENTS
This work is funded in part by NSF grant IIS.

REFERENCES
[1] Anguita, D., Boni, A., Ridella, S., Digital Kernel Perceptron, Electronics Letters, 38:10.
[2] Anguita, D., Pischiutta, S., Ridella, S., Sterpi, D., Feed-Forward Support Vector Machine without multipliers, IEEE Transactions on Neural Networks, 17.
[3] Cesa-Bianchi, N., Gentile, C., Tracking the best hyperplane with a simple budget Perceptron, Proc. of the Nineteenth Annual Conference on Computational Learning Theory.
[4] Crammer, K., Kandola, J., Singer, Y., Online Classification on a Budget, Advances in Neural Information Processing Systems.
[5] Dekel, O., Shalev-Shwartz, S., Singer, Y., The Forgetron: A kernel-based Perceptron on a fixed budget, Advances in Neural Information Processing Systems.
[6] Freund, Y., Schapire, R., Large Margin Classification Using the Perceptron Algorithm, Machine Learning, 1998.

[7] Hildebrand, F. B., Introduction to Numerical Analysis, 2nd edition, Dover, 1987.
[8] Nguyen, D., Ho, T., An efficient method for simplifying support vector machines, Proceedings of ICML.
[9] Orabona, F., Keshet, J., Caputo, B., The Projectron: a bounded kernel-based Perceptron, Proceedings of ICML.
[10] Vapnik, V., The Nature of Statistical Learning Theory, Springer, 1995.
[11] Vucetic, S., Coric, V., Wang, Z., Compressed Kernel Perceptrons, Data Compression Conference.
[12] Wang, Z., Crammer, K., Vucetic, S., Multi-Class Pegasos on a Budget, Proceedings of ICML, 2010.
[13] Wang, Z., Vucetic, S., Tighter Perceptron with Improved Dual Use of Cached Data for Model Representation and Validation, Proceedings of IJCNN.
[14] ATTiny2313 Datasheet.
[15]


Parallelism for Nested Loops with Non-uniform and Flow Dependences Parallelsm for Nested Loops wth Non-unform and Flow Dependences Sam-Jn Jeong Dept. of Informaton & Communcaton Engneerng, Cheonan Unversty, 5, Anseo-dong, Cheonan, Chungnam, 330-80, Korea. seong@cheonan.ac.kr

More information

Classification / Regression Support Vector Machines

Classification / Regression Support Vector Machines Classfcaton / Regresson Support Vector Machnes Jeff Howbert Introducton to Machne Learnng Wnter 04 Topcs SVM classfers for lnearly separable classes SVM classfers for non-lnearly separable classes SVM

More information

Problem Definitions and Evaluation Criteria for Computational Expensive Optimization

Problem Definitions and Evaluation Criteria for Computational Expensive Optimization Problem efntons and Evaluaton Crtera for Computatonal Expensve Optmzaton B. Lu 1, Q. Chen and Q. Zhang 3, J. J. Lang 4, P. N. Suganthan, B. Y. Qu 6 1 epartment of Computng, Glyndwr Unversty, UK Faclty

More information

Assignment # 2. Farrukh Jabeen Algorithms 510 Assignment #2 Due Date: June 15, 2009.

Assignment # 2. Farrukh Jabeen Algorithms 510 Assignment #2 Due Date: June 15, 2009. Farrukh Jabeen Algorthms 51 Assgnment #2 Due Date: June 15, 29. Assgnment # 2 Chapter 3 Dscrete Fourer Transforms Implement the FFT for the DFT. Descrbed n sectons 3.1 and 3.2. Delverables: 1. Concse descrpton

More information

A Binarization Algorithm specialized on Document Images and Photos

A Binarization Algorithm specialized on Document Images and Photos A Bnarzaton Algorthm specalzed on Document mages and Photos Ergna Kavalleratou Dept. of nformaton and Communcaton Systems Engneerng Unversty of the Aegean kavalleratou@aegean.gr Abstract n ths paper, a

More information

Conditional Speculative Decimal Addition*

Conditional Speculative Decimal Addition* Condtonal Speculatve Decmal Addton Alvaro Vazquez and Elsardo Antelo Dep. of Electronc and Computer Engneerng Unv. of Santago de Compostela, Span Ths work was supported n part by Xunta de Galca under grant

More information

Helsinki University Of Technology, Systems Analysis Laboratory Mat Independent research projects in applied mathematics (3 cr)

Helsinki University Of Technology, Systems Analysis Laboratory Mat Independent research projects in applied mathematics (3 cr) Helsnk Unversty Of Technology, Systems Analyss Laboratory Mat-2.08 Independent research projects n appled mathematcs (3 cr) "! #$&% Antt Laukkanen 506 R ajlaukka@cc.hut.f 2 Introducton...3 2 Multattrbute

More information

Data Representation in Digital Design, a Single Conversion Equation and a Formal Languages Approach

Data Representation in Digital Design, a Single Conversion Equation and a Formal Languages Approach Data Representaton n Dgtal Desgn, a Sngle Converson Equaton and a Formal Languages Approach Hassan Farhat Unversty of Nebraska at Omaha Abstract- In the study of data representaton n dgtal desgn and computer

More information

S1 Note. Basis functions.

S1 Note. Basis functions. S1 Note. Bass functons. Contents Types of bass functons...1 The Fourer bass...2 B-splne bass...3 Power and type I error rates wth dfferent numbers of bass functons...4 Table S1. Smulaton results of type

More information

Machine Learning 9. week

Machine Learning 9. week Machne Learnng 9. week Mappng Concept Radal Bass Functons (RBF) RBF Networks 1 Mappng It s probably the best scenaro for the classfcaton of two dataset s to separate them lnearly. As you see n the below

More information

Course Introduction. Algorithm 8/31/2017. COSC 320 Advanced Data Structures and Algorithms. COSC 320 Advanced Data Structures and Algorithms

Course Introduction. Algorithm 8/31/2017. COSC 320 Advanced Data Structures and Algorithms. COSC 320 Advanced Data Structures and Algorithms Course Introducton Course Topcs Exams, abs, Proects A quc loo at a few algorthms 1 Advanced Data Structures and Algorthms Descrpton: We are gong to dscuss algorthm complexty analyss, algorthm desgn technques

More information

Compiler Design. Spring Register Allocation. Sample Exercises and Solutions. Prof. Pedro C. Diniz

Compiler Design. Spring Register Allocation. Sample Exercises and Solutions. Prof. Pedro C. Diniz Compler Desgn Sprng 2014 Regster Allocaton Sample Exercses and Solutons Prof. Pedro C. Dnz USC / Informaton Scences Insttute 4676 Admralty Way, Sute 1001 Marna del Rey, Calforna 90292 pedro@s.edu Regster

More information

Classifier Selection Based on Data Complexity Measures *

Classifier Selection Based on Data Complexity Measures * Classfer Selecton Based on Data Complexty Measures * Edth Hernández-Reyes, J.A. Carrasco-Ochoa, and J.Fco. Martínez-Trndad Natonal Insttute for Astrophyscs, Optcs and Electroncs, Lus Enrque Erro No.1 Sta.

More information

CHAPTER 3 SEQUENTIAL MINIMAL OPTIMIZATION TRAINED SUPPORT VECTOR CLASSIFIER FOR CANCER PREDICTION

CHAPTER 3 SEQUENTIAL MINIMAL OPTIMIZATION TRAINED SUPPORT VECTOR CLASSIFIER FOR CANCER PREDICTION 48 CHAPTER 3 SEQUENTIAL MINIMAL OPTIMIZATION TRAINED SUPPORT VECTOR CLASSIFIER FOR CANCER PREDICTION 3.1 INTRODUCTION The raw mcroarray data s bascally an mage wth dfferent colors ndcatng hybrdzaton (Xue

More information

Parallel matrix-vector multiplication

Parallel matrix-vector multiplication Appendx A Parallel matrx-vector multplcaton The reduced transton matrx of the three-dmensonal cage model for gel electrophoress, descrbed n secton 3.2, becomes excessvely large for polymer lengths more

More information

Some material adapted from Mohamed Younis, UMBC CMSC 611 Spr 2003 course slides Some material adapted from Hennessy & Patterson / 2003 Elsevier

Some material adapted from Mohamed Younis, UMBC CMSC 611 Spr 2003 course slides Some material adapted from Hennessy & Patterson / 2003 Elsevier Some materal adapted from Mohamed Youns, UMBC CMSC 611 Spr 2003 course sldes Some materal adapted from Hennessy & Patterson / 2003 Elsever Scence Performance = 1 Executon tme Speedup = Performance (B)

More information

Machine Learning: Algorithms and Applications

Machine Learning: Algorithms and Applications 14/05/1 Machne Learnng: Algorthms and Applcatons Florano Zn Free Unversty of Bozen-Bolzano Faculty of Computer Scence Academc Year 011-01 Lecture 10: 14 May 01 Unsupervsed Learnng cont Sldes courtesy of

More information

Hermite Splines in Lie Groups as Products of Geodesics

Hermite Splines in Lie Groups as Products of Geodesics Hermte Splnes n Le Groups as Products of Geodescs Ethan Eade Updated May 28, 2017 1 Introducton 1.1 Goal Ths document defnes a curve n the Le group G parametrzed by tme and by structural parameters n the

More information

The Codesign Challenge

The Codesign Challenge ECE 4530 Codesgn Challenge Fall 2007 Hardware/Software Codesgn The Codesgn Challenge Objectves In the codesgn challenge, your task s to accelerate a gven software reference mplementaton as fast as possble.

More information

Edge Detection in Noisy Images Using the Support Vector Machines

Edge Detection in Noisy Images Using the Support Vector Machines Edge Detecton n Nosy Images Usng the Support Vector Machnes Hlaro Gómez-Moreno, Saturnno Maldonado-Bascón, Francsco López-Ferreras Sgnal Theory and Communcatons Department. Unversty of Alcalá Crta. Madrd-Barcelona

More information

Machine Learning. Support Vector Machines. (contains material adapted from talks by Constantin F. Aliferis & Ioannis Tsamardinos, and Martin Law)

Machine Learning. Support Vector Machines. (contains material adapted from talks by Constantin F. Aliferis & Ioannis Tsamardinos, and Martin Law) Machne Learnng Support Vector Machnes (contans materal adapted from talks by Constantn F. Alfers & Ioanns Tsamardnos, and Martn Law) Bryan Pardo, Machne Learnng: EECS 349 Fall 2014 Support Vector Machnes

More information

6.854 Advanced Algorithms Petar Maymounkov Problem Set 11 (November 23, 2005) With: Benjamin Rossman, Oren Weimann, and Pouya Kheradpour

6.854 Advanced Algorithms Petar Maymounkov Problem Set 11 (November 23, 2005) With: Benjamin Rossman, Oren Weimann, and Pouya Kheradpour 6.854 Advanced Algorthms Petar Maymounkov Problem Set 11 (November 23, 2005) Wth: Benjamn Rossman, Oren Wemann, and Pouya Kheradpour Problem 1. We reduce vertex cover to MAX-SAT wth weghts, such that the

More information

The Greedy Method. Outline and Reading. Change Money Problem. Greedy Algorithms. Applications of the Greedy Strategy. The Greedy Method Technique

The Greedy Method. Outline and Reading. Change Money Problem. Greedy Algorithms. Applications of the Greedy Strategy. The Greedy Method Technique //00 :0 AM Outlne and Readng The Greedy Method The Greedy Method Technque (secton.) Fractonal Knapsack Problem (secton..) Task Schedulng (secton..) Mnmum Spannng Trees (secton.) Change Money Problem Greedy

More information

Meta-heuristics for Multidimensional Knapsack Problems

Meta-heuristics for Multidimensional Knapsack Problems 2012 4th Internatonal Conference on Computer Research and Development IPCSIT vol.39 (2012) (2012) IACSIT Press, Sngapore Meta-heurstcs for Multdmensonal Knapsack Problems Zhbao Man + Computer Scence Department,

More information

SLAM Summer School 2006 Practical 2: SLAM using Monocular Vision

SLAM Summer School 2006 Practical 2: SLAM using Monocular Vision SLAM Summer School 2006 Practcal 2: SLAM usng Monocular Vson Javer Cvera, Unversty of Zaragoza Andrew J. Davson, Imperal College London J.M.M Montel, Unversty of Zaragoza. josemar@unzar.es, jcvera@unzar.es,

More information

For instance, ; the five basic number-sets are increasingly more n A B & B A A = B (1)

For instance, ; the five basic number-sets are increasingly more n A B & B A A = B (1) Secton 1.2 Subsets and the Boolean operatons on sets If every element of the set A s an element of the set B, we say that A s a subset of B, or that A s contaned n B, or that B contans A, and we wrte A

More information

A Modified Median Filter for the Removal of Impulse Noise Based on the Support Vector Machines

A Modified Median Filter for the Removal of Impulse Noise Based on the Support Vector Machines A Modfed Medan Flter for the Removal of Impulse Nose Based on the Support Vector Machnes H. GOMEZ-MORENO, S. MALDONADO-BASCON, F. LOPEZ-FERRERAS, M. UTRILLA- MANSO AND P. GIL-JIMENEZ Departamento de Teoría

More information

CMPS 10 Introduction to Computer Science Lecture Notes

CMPS 10 Introduction to Computer Science Lecture Notes CPS 0 Introducton to Computer Scence Lecture Notes Chapter : Algorthm Desgn How should we present algorthms? Natural languages lke Englsh, Spansh, or French whch are rch n nterpretaton and meanng are not

More information

Module Management Tool in Software Development Organizations

Module Management Tool in Software Development Organizations Journal of Computer Scence (5): 8-, 7 ISSN 59-66 7 Scence Publcatons Management Tool n Software Development Organzatons Ahmad A. Al-Rababah and Mohammad A. Al-Rababah Faculty of IT, Al-Ahlyyah Amman Unversty,

More information

Smoothing Spline ANOVA for variable screening

Smoothing Spline ANOVA for variable screening Smoothng Splne ANOVA for varable screenng a useful tool for metamodels tranng and mult-objectve optmzaton L. Rcco, E. Rgon, A. Turco Outlne RSM Introducton Possble couplng Test case MOO MOO wth Game Theory

More information

Efficient Text Classification by Weighted Proximal SVM *

Efficient Text Classification by Weighted Proximal SVM * Effcent ext Classfcaton by Weghted Proxmal SVM * Dong Zhuang 1, Benyu Zhang, Qang Yang 3, Jun Yan 4, Zheng Chen, Yng Chen 1 1 Computer Scence and Engneerng, Bejng Insttute of echnology, Bejng 100081, Chna

More information

A Fast Visual Tracking Algorithm Based on Circle Pixels Matching

A Fast Visual Tracking Algorithm Based on Circle Pixels Matching A Fast Vsual Trackng Algorthm Based on Crcle Pxels Matchng Zhqang Hou hou_zhq@sohu.com Chongzhao Han czhan@mal.xjtu.edu.cn Ln Zheng Abstract: A fast vsual trackng algorthm based on crcle pxels matchng

More information

Subspace clustering. Clustering. Fundamental to all clustering techniques is the choice of distance measure between data points;

Subspace clustering. Clustering. Fundamental to all clustering techniques is the choice of distance measure between data points; Subspace clusterng Clusterng Fundamental to all clusterng technques s the choce of dstance measure between data ponts; D q ( ) ( ) 2 x x = x x, j k = 1 k jk Squared Eucldean dstance Assumpton: All features

More information

RADIX-10 PARALLEL DECIMAL MULTIPLIER

RADIX-10 PARALLEL DECIMAL MULTIPLIER RADIX-10 PARALLEL DECIMAL MULTIPLIER 1 MRUNALINI E. INGLE & 2 TEJASWINI PANSE 1&2 Electroncs Engneerng, Yeshwantrao Chavan College of Engneerng, Nagpur, Inda E-mal : mrunalngle@gmal.com, tejaswn.deshmukh@gmal.com

More information

Wishing you all a Total Quality New Year!

Wishing you all a Total Quality New Year! Total Qualty Management and Sx Sgma Post Graduate Program 214-15 Sesson 4 Vnay Kumar Kalakband Assstant Professor Operatons & Systems Area 1 Wshng you all a Total Qualty New Year! Hope you acheve Sx sgma

More information

Content Based Image Retrieval Using 2-D Discrete Wavelet with Texture Feature with Different Classifiers

Content Based Image Retrieval Using 2-D Discrete Wavelet with Texture Feature with Different Classifiers IOSR Journal of Electroncs and Communcaton Engneerng (IOSR-JECE) e-issn: 78-834,p- ISSN: 78-8735.Volume 9, Issue, Ver. IV (Mar - Apr. 04), PP 0-07 Content Based Image Retreval Usng -D Dscrete Wavelet wth

More information

Support Vector Machines. CS534 - Machine Learning

Support Vector Machines. CS534 - Machine Learning Support Vector Machnes CS534 - Machne Learnng Perceptron Revsted: Lnear Separators Bnar classfcaton can be veed as the task of separatng classes n feature space: b > 0 b 0 b < 0 f() sgn( b) Lnear Separators

More information

Outline. Type of Machine Learning. Examples of Application. Unsupervised Learning

Outline. Type of Machine Learning. Examples of Application. Unsupervised Learning Outlne Artfcal Intellgence and ts applcatons Lecture 8 Unsupervsed Learnng Professor Danel Yeung danyeung@eee.org Dr. Patrck Chan patrckchan@eee.org South Chna Unversty of Technology, Chna Introducton

More information

Term Weighting Classification System Using the Chi-square Statistic for the Classification Subtask at NTCIR-6 Patent Retrieval Task

Term Weighting Classification System Using the Chi-square Statistic for the Classification Subtask at NTCIR-6 Patent Retrieval Task Proceedngs of NTCIR-6 Workshop Meetng, May 15-18, 2007, Tokyo, Japan Term Weghtng Classfcaton System Usng the Ch-square Statstc for the Classfcaton Subtask at NTCIR-6 Patent Retreval Task Kotaro Hashmoto

More information

Optimizing Document Scoring for Query Retrieval

Optimizing Document Scoring for Query Retrieval Optmzng Document Scorng for Query Retreval Brent Ellwen baellwe@cs.stanford.edu Abstract The goal of ths project was to automate the process of tunng a document query engne. Specfcally, I used machne learnng

More information

Feature Reduction and Selection

Feature Reduction and Selection Feature Reducton and Selecton Dr. Shuang LIANG School of Software Engneerng TongJ Unversty Fall, 2012 Today s Topcs Introducton Problems of Dmensonalty Feature Reducton Statstc methods Prncpal Components

More information

An Application of the Dulmage-Mendelsohn Decomposition to Sparse Null Space Bases of Full Row Rank Matrices

An Application of the Dulmage-Mendelsohn Decomposition to Sparse Null Space Bases of Full Row Rank Matrices Internatonal Mathematcal Forum, Vol 7, 2012, no 52, 2549-2554 An Applcaton of the Dulmage-Mendelsohn Decomposton to Sparse Null Space Bases of Full Row Rank Matrces Mostafa Khorramzadeh Department of Mathematcal

More information

Sequential search. Building Java Programs Chapter 13. Sequential search. Sequential search

Sequential search. Building Java Programs Chapter 13. Sequential search. Sequential search Sequental search Buldng Java Programs Chapter 13 Searchng and Sortng sequental search: Locates a target value n an array/lst by examnng each element from start to fnsh. How many elements wll t need to

More information

Solving two-person zero-sum game by Matlab

Solving two-person zero-sum game by Matlab Appled Mechancs and Materals Onlne: 2011-02-02 ISSN: 1662-7482, Vols. 50-51, pp 262-265 do:10.4028/www.scentfc.net/amm.50-51.262 2011 Trans Tech Publcatons, Swtzerland Solvng two-person zero-sum game by

More information

Efficient Distributed File System (EDFS)

Efficient Distributed File System (EDFS) Effcent Dstrbuted Fle System (EDFS) (Sem-Centralzed) Debessay(Debsh) Fesehaye, Rahul Malk & Klara Naherstedt Unversty of Illnos-Urbana Champagn Contents Problem Statement, Related Work, EDFS Desgn Rate

More information

A Unified Framework for Semantics and Feature Based Relevance Feedback in Image Retrieval Systems

A Unified Framework for Semantics and Feature Based Relevance Feedback in Image Retrieval Systems A Unfed Framework for Semantcs and Feature Based Relevance Feedback n Image Retreval Systems Ye Lu *, Chunhu Hu 2, Xngquan Zhu 3*, HongJang Zhang 2, Qang Yang * School of Computng Scence Smon Fraser Unversty

More information

K-means and Hierarchical Clustering

K-means and Hierarchical Clustering Note to other teachers and users of these sldes. Andrew would be delghted f you found ths source materal useful n gvng your own lectures. Feel free to use these sldes verbatm, or to modfy them to ft your

More information

Cluster Analysis of Electrical Behavior

Cluster Analysis of Electrical Behavior Journal of Computer and Communcatons, 205, 3, 88-93 Publshed Onlne May 205 n ScRes. http://www.scrp.org/ournal/cc http://dx.do.org/0.4236/cc.205.350 Cluster Analyss of Electrcal Behavor Ln Lu Ln Lu, School

More information

GSLM Operations Research II Fall 13/14

GSLM Operations Research II Fall 13/14 GSLM 58 Operatons Research II Fall /4 6. Separable Programmng Consder a general NLP mn f(x) s.t. g j (x) b j j =. m. Defnton 6.. The NLP s a separable program f ts objectve functon and all constrants are

More information

An Iterative Solution Approach to Process Plant Layout using Mixed Integer Optimisation

An Iterative Solution Approach to Process Plant Layout using Mixed Integer Optimisation 17 th European Symposum on Computer Aded Process Engneerng ESCAPE17 V. Plesu and P.S. Agach (Edtors) 2007 Elsever B.V. All rghts reserved. 1 An Iteratve Soluton Approach to Process Plant Layout usng Mxed

More information

Range images. Range image registration. Examples of sampling patterns. Range images and range surfaces

Range images. Range image registration. Examples of sampling patterns. Range images and range surfaces Range mages For many structured lght scanners, the range data forms a hghly regular pattern known as a range mage. he samplng pattern s determned by the specfc scanner. Range mage regstraton 1 Examples

More information

CS246: Mining Massive Datasets Jure Leskovec, Stanford University

CS246: Mining Massive Datasets Jure Leskovec, Stanford University CS46: Mnng Massve Datasets Jure Leskovec, Stanford Unversty http://cs46.stanford.edu /19/013 Jure Leskovec, Stanford CS46: Mnng Massve Datasets, http://cs46.stanford.edu Perceptron: y = sgn( x Ho to fnd

More information

Floating-Point Division Algorithms for an x86 Microprocessor with a Rectangular Multiplier

Floating-Point Division Algorithms for an x86 Microprocessor with a Rectangular Multiplier Floatng-Pont Dvson Algorthms for an x86 Mcroprocessor wth a Rectangular Multpler Mchael J. Schulte Dmtr Tan Carl E. Lemonds Unversty of Wsconsn Advanced Mcro Devces Advanced Mcro Devces Schulte@engr.wsc.edu

More information

Lecture 4: Principal components

Lecture 4: Principal components /3/6 Lecture 4: Prncpal components 3..6 Multvarate lnear regresson MLR s optmal for the estmaton data...but poor for handlng collnear data Covarance matrx s not nvertble (large condton number) Robustness

More information

5 The Primal-Dual Method

5 The Primal-Dual Method 5 The Prmal-Dual Method Orgnally desgned as a method for solvng lnear programs, where t reduces weghted optmzaton problems to smpler combnatoral ones, the prmal-dual method (PDM) has receved much attenton

More information

Related-Mode Attacks on CTR Encryption Mode

Related-Mode Attacks on CTR Encryption Mode Internatonal Journal of Network Securty, Vol.4, No.3, PP.282 287, May 2007 282 Related-Mode Attacks on CTR Encrypton Mode Dayn Wang, Dongda Ln, and Wenlng Wu (Correspondng author: Dayn Wang) Key Laboratory

More information

Unsupervised Learning

Unsupervised Learning Pattern Recognton Lecture 8 Outlne Introducton Unsupervsed Learnng Parametrc VS Non-Parametrc Approach Mxture of Denstes Maxmum-Lkelhood Estmates Clusterng Prof. Danel Yeung School of Computer Scence and

More information

Quality Improvement Algorithm for Tetrahedral Mesh Based on Optimal Delaunay Triangulation

Quality Improvement Algorithm for Tetrahedral Mesh Based on Optimal Delaunay Triangulation Intellgent Informaton Management, 013, 5, 191-195 Publshed Onlne November 013 (http://www.scrp.org/journal/m) http://dx.do.org/10.36/m.013.5601 Qualty Improvement Algorthm for Tetrahedral Mesh Based on

More information

Wavefront Reconstructor

Wavefront Reconstructor A Dstrbuted Smplex B-Splne Based Wavefront Reconstructor Coen de Vsser and Mchel Verhaegen 14-12-201212 2012 Delft Unversty of Technology Contents Introducton Wavefront reconstructon usng Smplex B-Splnes

More information

Tsinghua University at TAC 2009: Summarizing Multi-documents by Information Distance

Tsinghua University at TAC 2009: Summarizing Multi-documents by Information Distance Tsnghua Unversty at TAC 2009: Summarzng Mult-documents by Informaton Dstance Chong Long, Mnle Huang, Xaoyan Zhu State Key Laboratory of Intellgent Technology and Systems, Tsnghua Natonal Laboratory for

More information

User Authentication Based On Behavioral Mouse Dynamics Biometrics

User Authentication Based On Behavioral Mouse Dynamics Biometrics User Authentcaton Based On Behavoral Mouse Dynamcs Bometrcs Chee-Hyung Yoon Danel Donghyun Km Department of Computer Scence Department of Computer Scence Stanford Unversty Stanford Unversty Stanford, CA

More information

THE PULL-PUSH ALGORITHM REVISITED

THE PULL-PUSH ALGORITHM REVISITED THE PULL-PUSH ALGORITHM REVISITED Improvements, Computaton of Pont Denstes, and GPU Implementaton Martn Kraus Computer Graphcs & Vsualzaton Group, Technsche Unverstät München, Boltzmannstraße 3, 85748

More information

CHAPTER 2 DECOMPOSITION OF GRAPHS

CHAPTER 2 DECOMPOSITION OF GRAPHS CHAPTER DECOMPOSITION OF GRAPHS. INTRODUCTION A graph H s called a Supersubdvson of a graph G f H s obtaned from G by replacng every edge uv of G by a bpartte graph,m (m may vary for each edge by dentfyng

More information

Newton-Raphson division module via truncated multipliers

Newton-Raphson division module via truncated multipliers Newton-Raphson dvson module va truncated multplers Alexandar Tzakov Department of Electrcal and Computer Engneerng Illnos Insttute of Technology Chcago,IL 60616, USA Abstract Reducton n area and power

More information

Parallel Inverse Halftoning by Look-Up Table (LUT) Partitioning

Parallel Inverse Halftoning by Look-Up Table (LUT) Partitioning Parallel Inverse Halftonng by Look-Up Table (LUT) Parttonng Umar F. Sddq and Sadq M. Sat umar@ccse.kfupm.edu.sa, sadq@kfupm.edu.sa KFUPM Box: Department of Computer Engneerng, Kng Fahd Unversty of Petroleum

More information

Chapter 6 Programmng the fnte element method Inow turn to the man subject of ths book: The mplementaton of the fnte element algorthm n computer programs. In order to make my dscusson as straghtforward

More information

TPL-Aware Displacement-driven Detailed Placement Refinement with Coloring Constraints

TPL-Aware Displacement-driven Detailed Placement Refinement with Coloring Constraints TPL-ware Dsplacement-drven Detaled Placement Refnement wth Colorng Constrants Tao Ln Iowa State Unversty tln@astate.edu Chrs Chu Iowa State Unversty cnchu@astate.edu BSTRCT To mnmze the effect of process

More information

CSCI 104 Sorting Algorithms. Mark Redekopp David Kempe

CSCI 104 Sorting Algorithms. Mark Redekopp David Kempe CSCI 104 Sortng Algorthms Mark Redekopp Davd Kempe Algorthm Effcency SORTING 2 Sortng If we have an unordered lst, sequental search becomes our only choce If we wll perform a lot of searches t may be benefcal

More information

The Research of Support Vector Machine in Agricultural Data Classification

The Research of Support Vector Machine in Agricultural Data Classification The Research of Support Vector Machne n Agrcultural Data Classfcaton Le Sh, Qguo Duan, Xnmng Ma, Me Weng College of Informaton and Management Scence, HeNan Agrcultural Unversty, Zhengzhou 45000 Chna Zhengzhou

More information

A MOVING MESH APPROACH FOR SIMULATION BUDGET ALLOCATION ON CONTINUOUS DOMAINS

A MOVING MESH APPROACH FOR SIMULATION BUDGET ALLOCATION ON CONTINUOUS DOMAINS Proceedngs of the Wnter Smulaton Conference M E Kuhl, N M Steger, F B Armstrong, and J A Jones, eds A MOVING MESH APPROACH FOR SIMULATION BUDGET ALLOCATION ON CONTINUOUS DOMAINS Mark W Brantley Chun-Hung

More information

Announcements. Supervised Learning

Announcements. Supervised Learning Announcements See Chapter 5 of Duda, Hart, and Stork. Tutoral by Burge lnked to on web page. Supervsed Learnng Classfcaton wth labeled eamples. Images vectors n hgh-d space. Supervsed Learnng Labeled eamples

More information

Complex System Reliability Evaluation using Support Vector Machine for Incomplete Data-set

Complex System Reliability Evaluation using Support Vector Machine for Incomplete Data-set Internatonal Journal of Performablty Engneerng, Vol. 7, No. 1, January 2010, pp.32-42. RAMS Consultants Prnted n Inda Complex System Relablty Evaluaton usng Support Vector Machne for Incomplete Data-set

More information

Outline. Self-Organizing Maps (SOM) US Hebbian Learning, Cntd. The learning rule is Hebbian like:

Outline. Self-Organizing Maps (SOM) US Hebbian Learning, Cntd. The learning rule is Hebbian like: Self-Organzng Maps (SOM) Turgay İBRİKÇİ, PhD. Outlne Introducton Structures of SOM SOM Archtecture Neghborhoods SOM Algorthm Examples Summary 1 2 Unsupervsed Hebban Learnng US Hebban Learnng, Cntd 3 A

More information

Learning Non-Linearly Separable Boolean Functions With Linear Threshold Unit Trees and Madaline-Style Networks

Learning Non-Linearly Separable Boolean Functions With Linear Threshold Unit Trees and Madaline-Style Networks In AAAI-93: Proceedngs of the 11th Natonal Conference on Artfcal Intellgence, 33-1. Menlo Park, CA: AAAI Press. Learnng Non-Lnearly Separable Boolean Functons Wth Lnear Threshold Unt Trees and Madalne-Style

More information

Explicit Formulas and Efficient Algorithm for Moment Computation of Coupled RC Trees with Lumped and Distributed Elements

Explicit Formulas and Efficient Algorithm for Moment Computation of Coupled RC Trees with Lumped and Distributed Elements Explct Formulas and Effcent Algorthm for Moment Computaton of Coupled RC Trees wth Lumped and Dstrbuted Elements Qngan Yu and Ernest S.Kuh Electroncs Research Lab. Unv. of Calforna at Berkeley Berkeley

More information

Fast Computation of Shortest Path for Visiting Segments in the Plane

Fast Computation of Shortest Path for Visiting Segments in the Plane Send Orders for Reprnts to reprnts@benthamscence.ae 4 The Open Cybernetcs & Systemcs Journal, 04, 8, 4-9 Open Access Fast Computaton of Shortest Path for Vstng Segments n the Plane Ljuan Wang,, Bo Jang

More information

High level vs Low Level. What is a Computer Program? What does gcc do for you? Program = Instructions + Data. Basic Computer Organization

High level vs Low Level. What is a Computer Program? What does gcc do for you? Program = Instructions + Data. Basic Computer Organization What s a Computer Program? Descrpton of algorthms and data structures to acheve a specfc ojectve Could e done n any language, even a natural language lke Englsh Programmng language: A Standard notaton

More information

General Vector Machine. Hong Zhao Department of Physics, Xiamen University

General Vector Machine. Hong Zhao Department of Physics, Xiamen University General Vector Machne Hong Zhao (zhaoh@xmu.edu.cn) Department of Physcs, Xamen Unversty The support vector machne (SVM) s an mportant class of learnng machnes for functon approach, pattern recognton, and

More information

NUMERICAL SOLVING OPTIMAL CONTROL PROBLEMS BY THE METHOD OF VARIATIONS

NUMERICAL SOLVING OPTIMAL CONTROL PROBLEMS BY THE METHOD OF VARIATIONS ARPN Journal of Engneerng and Appled Scences 006-017 Asan Research Publshng Network (ARPN). All rghts reserved. NUMERICAL SOLVING OPTIMAL CONTROL PROBLEMS BY THE METHOD OF VARIATIONS Igor Grgoryev, Svetlana

More information

Programming in Fortran 90 : 2017/2018

Programming in Fortran 90 : 2017/2018 Programmng n Fortran 90 : 2017/2018 Programmng n Fortran 90 : 2017/2018 Exercse 1 : Evaluaton of functon dependng on nput Wrte a program who evaluate the functon f (x,y) for any two user specfed values

More information

SENSITIVITY ANALYSIS IN LINEAR PROGRAMMING USING A CALCULATOR

SENSITIVITY ANALYSIS IN LINEAR PROGRAMMING USING A CALCULATOR SENSITIVITY ANALYSIS IN LINEAR PROGRAMMING USING A CALCULATOR Judth Aronow Rchard Jarvnen Independent Consultant Dept of Math/Stat 559 Frost Wnona State Unversty Beaumont, TX 7776 Wnona, MN 55987 aronowju@hal.lamar.edu

More information

Reducing Frame Rate for Object Tracking

Reducing Frame Rate for Object Tracking Reducng Frame Rate for Object Trackng Pavel Korshunov 1 and We Tsang Oo 2 1 Natonal Unversty of Sngapore, Sngapore 11977, pavelkor@comp.nus.edu.sg 2 Natonal Unversty of Sngapore, Sngapore 11977, oowt@comp.nus.edu.sg

More information

Correlative features for the classification of textural images

M A Turkova [1] and A V Gaidel [1]. [1] Samara National Research University, Moskovskoe Shosse 34, Samara, Russia, 443086; Image Processing Systems Institute

Virtual Memory. Background. No. 10. Virtual Memory: concept. Logical Memory Space (review) Demand Paging(1) Virtual Memory

EECS Operating System Fundamentals, No. 10: Virtual Memory. Prof. Hui Jiang, Department of Electrical Engineering and Computer Science, York University. Background: memory-management methods normally require the entire process

Analysis of Collaborative Distributed Admission Control in 802.11x Networks

Analysis of Collaborative Distributed Admission Control in x Networks 1 Analyss of Collaboratve Dstrbuted Admsson Control n 82.11x Networks Thnh Nguyen, Member, IEEE, Ken Nguyen, Member, IEEE, Lnha He, Member, IEEE, Abstract Wth the recent surge of wreless home networks,

Data Mining: Model Evaluation

Data Mining: Model Evaluation. April 16, 2013. Issues in evaluating classification methods. Accuracy (classifier accuracy: predicting the class label; predictor accuracy: guessing the value of predicted attributes). Speed: time to construct
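The classifier-accuracy notion in this excerpt (the fraction of examples whose predicted class label matches the true one) can be sketched in a few lines of Python; the labels below are made up for illustration:

```python
def accuracy(y_true, y_pred):
    """Classifier accuracy: fraction of examples whose predicted
    class label matches the true label."""
    if len(y_true) != len(y_pred):
        raise ValueError("label lists must have equal length")
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return correct / len(y_true)

# Made-up example labels: 3 of 5 predictions match the true labels.
y_true = [1, 0, 1, 1, 0]
y_pred = [1, 0, 0, 1, 1]
print(accuracy(y_true, y_pred))  # prints 0.6
```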

Efficient Distributed Linear Classification Algorithms via the Alternating Direction Method of Multipliers

Caoxie Zhang, Honglak Lee, Kang G. Shin. Department of EECS, University of Michigan, Ann Arbor, MI 48109, USA. caoxiezh@umich.edu

A New Approach For the Ranking of Fuzzy Sets With Different Heights

A New Approach For the Ranking of Fuzzy Sets With Different Heights New pproach For the ankng of Fuzzy Sets Wth Dfferent Heghts Pushpnder Sngh School of Mathematcs Computer pplcatons Thapar Unversty, Patala-7 00 Inda pushpndersnl@gmalcom STCT ankng of fuzzy sets plays
