SVM-based Learning for Multiple Model Estimation


Vladimir Cherkassky and Yunqian Ma
Department of Electrical and Computer Engineering, University of Minnesota, Minneapolis, MN

Abstract: This paper presents a new constructive learning methodology for multiple model estimation. Under the multiple model formulation, training data are generated by several (unknown) statistical models, so existing learning methods (for classification or regression) based on a single-model formulation are no longer applicable. We describe a general framework for multiple model estimation using SVM methodology. The proposed constructive methodology is analyzed in detail for the regression formulation. We also present several empirical examples for the multiple-model regression formulation. These empirical results illustrate the advantages of the proposed multiple model estimation approach.

1. Introduction

This paper describes constructive learning methods for the multiple model regression formulation proposed in [Cherkassky and Ma, 2002]. Under this formulation, the available (training) data (x_i, y_i), i = 1, 2, ..., n are generated by several (unknown) regression models, so the goal of learning is two-fold, i.e., partitioning of the available data into several subsets and estimating a regression model for each subset of the data.

Hence, the problem of multiple model estimation is inherently more complex than traditional supervised learning, where all training data are used to estimate a single model. Conceptually, there are two principal approaches to designing constructive learning methods for multiple model estimation:
- (1) First partition the available data into several subsets, then estimate model parameters for each subset of the data;
- (2) First estimate a dominant model using all available data, and then partition the data into several subsets.

Under the first approach, learning starts with a clustering step (unsupervised learning), followed by supervised learning on a subset of the available data. A practical implementation of this approach is described in [Tanaka, 2001] using the framework of mixture density estimation, where each (hidden) model is modeled as a component in a mixture. The main practical limitations of this approach are as follows:
- Inherent complexity of density estimation (with finite samples). There is theoretical and empirical evidence that density estimation is much harder than supervised learning (regression) with finite samples [Vapnik, 1999; Cherkassky and Mulier, 1998];
- Moreover, the setting of multiple model estimation leads to clustering/density estimation in local regions of the input space. That is, for a given input value there may be several distinctly different output (response) values, corresponding to different models. Hence, data partitioning (clustering) should be based on the different response values, and this leads to clustering/density estimation in local regions of the input space. Practical implementation of such clustering using a (small) portion of the available data becomes very problematic with finite samples, due to the curse of dimensionality.

Under the second approach, we apply a robust regression estimator to all available data in order to estimate a dominant model (where a dominant model is a model that describes the majority of the data samples). Clearly, this approach is better than the clustering/density estimation strategy because:
- It is based on a regression rather than a density estimation formulation;
- It uses all available data (rather than a portion of the data) for model estimation.

Hence, in this paper we focus on implementation of the second approach. The main practical requirement for this approach is the availability of a robust regression algorithm, where robustness refers to the capability of estimating a single (dominant) model when the available data are generated by several (hidden) models, possibly corrupted by additive noise. This notion of robustness is somewhat different from traditional robust estimation techniques, because standard robust methods are still based on a single-model formulation, where the goal of robust estimation is resistance (of estimates) with respect to unknown noise models. Recently, robust statistical methods have been applied to computer vision problems that can be described using the multiple model formulation. In these studies, the existence of multiple models (in the data) is referred to as the presence of structured outliers [Chen et al., 2001]. Empirical evidence suggests that traditional robust statistical methods usually fail in the presence of structured outliers, especially when the model instances are corrupted by significant noise [Chen et al., 2001]. This can be explained as follows. When the data are generated by several (hidden) models, each of the data subsets (structures) has the same importance, and relative to any one of them the rest of the data are outliers. As a result, the notion of the breakdown point (in robust statistics), which describes processing the majority of data points, loses its meaning under the multiple-model formulation. Moreover, traditional robust estimation methods cannot handle more than 30% of outliers in the data [Rousseeuw and Leroy, 1987].

Hence, we need to develop new constructive learning methodologies for multiple model estimation. This paper describes new learning algorithms for multiple model estimation based on SVM methodology. The following simple example illustrates the desirable properties of robust algorithms for multiple model estimation. Consider a data set comprising two (linear) regression models: dominant model M1 (70% of the data) and secondary model M2 (30% of the data), shown in Fig. 1a. The data are corrupted by additive gaussian noise (with standard deviation 0.1). Results in Fig. 1b show the model estimated by (linear) SVM regression with insensitive zone ε = 0.084, and the model estimated by ordinary least squares (OLS). Both estimation algorithms use all available data (generated by both models). The OLS method produces a rather inaccurate model, whereas SVM produces a very accurate estimate of the dominant model M1. Further, Fig. 1c shows another data set generated using the same dominant model M1 but a completely different secondary model M2. Application of SVM (with the same ε-value) to this data set yields an (almost) identical estimate of the dominant model M1, as shown in Fig. 1d. However, application of OLS to this data set yields an estimate of M1 (shown in Fig. 1d) that is completely different from the OLS estimate in Fig. 1b. Note that the number of (hidden) models is unknown (to an estimation method); we use two models only to simplify presentation. This example shows the existence of a robust regression algorithm that can be used to accurately estimate a single (dominant) model from a data set generated by several models. Here robustness (in the context of multiple model estimation) refers to accurate estimation of the dominant model and to the stability of such estimates, in spite of the significant potential variability of the data generated by secondary model(s). Going back to the example in Fig. 1: after the dominant model M1 has been identified by a robust regression method, it may be possible to identify and remove the data samples generated by M1, and then apply robust regression to the remaining data in order to estimate the next model.
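The contrast between OLS and SVM seen in Fig. 1 is easy to reproduce. The sketch below is a minimal illustration, not the exact Fig. 1 setup: the model equations, sample sizes, and the use of scikit-learn's SVR as the (linear) SVM regression implementation are all assumptions of ours.

```python
# A minimal sketch: a dominant linear model M1 (70% of samples) plus a
# secondary model M2 (30%), estimated by OLS and by linear SVM regression.
# The two model equations below are illustrative, not the Fig. 1 settings.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVR

rng = np.random.default_rng(0)
n1, n2, sigma = 70, 30, 0.1
x1 = rng.uniform(0, 1, n1); y1 = 0.8 * x1 + 1.0 + rng.normal(0, sigma, n1)   # dominant M1 (assumed)
x2 = rng.uniform(0, 1, n2); y2 = -0.5 * x2 + 2.0 + rng.normal(0, sigma, n2)  # secondary M2 (assumed)
X = np.concatenate([x1, x2]).reshape(-1, 1)
y = np.concatenate([y1, y2])

ols = LinearRegression().fit(X, y)
svm = SVR(kernel='linear', C=100.0, epsilon=0.084).fit(X, y)

print('true M1 slope/intercept: 0.80 1.00')
print('OLS estimate:           %.2f %.2f' % (ols.coef_[0], ols.intercept_))
print('SVM estimate:           %.2f %.2f' % (svm.coef_[0][0], svm.intercept_[0]))
```

The SVM estimate stays close to the dominant model, while the OLS estimate is pulled toward the secondary model's samples.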

Hence, we propose a general methodology for multiple model estimation based on successive application of a robust regression algorithm to the available data, so that during each (successive) iteration we estimate a single (dominant) model and then partition the data into two subsets. This iterative procedure is outlined next:

Table 1: PROCEDURE FOR MULTIPLE MODEL ESTIMATION
Initialization: Available data = all training samples.
Step 1: Estimate the dominant model, i.e., apply robust regression to the available data, resulting in a dominant model M1 (describing the majority of the available data).
Step 2: Partition the available data into two subsets, i.e., samples generated by M1 and samples generated by other models (the remaining data). This partitioning is performed by analyzing the available data samples ordered according to their distance (residuals) to the dominant model M1.
Step 3: Remove the subset of data generated by the dominant model from the available data.
Iterate: Apply Steps 1-3 to the available data until some stopping criterion is met.

It is important to note here that the above procedure relies heavily on the existence of a robust (regression) estimation algorithm that can reliably identify and estimate a dominant model (describing the majority of the available data) in the presence of (structured) outliers and noise. The existence of such a robust regression method, based on Support Vector Machine (SVM) regression, has been demonstrated in the example shown in Fig. 1. However, the results in Fig. 1 are purely empirical and require further explanation, since the original SVM methodology has been developed for the single-model formulation. Even though SVM is known for its robustness, its application to multiple model estimation is far from obvious.
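A minimal sketch of the Table 1 procedure is given below, under simplifying assumptions of ours: linear hidden models, a known noise level σ (used both for the ε-setting of Section 3 and for the 2σ partitioning threshold introduced there), scikit-learn's SVR as the robust estimator of Step 1, and a minimum-sample stopping criterion.

```python
# A sketch of the Table 1 loop (assumptions: linear models, known noise level
# sigma, scikit-learn's SVR as the robust regression of Step 1).
import numpy as np
from sklearn.svm import SVR

def multiple_model_estimation(X, y, sigma, min_samples=10):
    models = []
    while len(y) >= min_samples:
        # Step 1: robust regression on all remaining data; epsilon is set
        # analytically as in Section 3, eq. (11).
        n = len(y)
        eps = 3 * sigma * np.sqrt(np.log(n) / n)
        model = SVR(kernel='linear', C=100.0, epsilon=eps).fit(X, y)
        models.append(model)
        # Step 2: partition by residuals to the dominant model (2*sigma rule).
        residuals = np.abs(y - model.predict(X))
        dominant = residuals < 2 * sigma
        if not dominant.any() or dominant.all():
            break
        # Step 3: remove the dominant model's samples and iterate.
        X, y = X[~dominant], y[~dominant]
    return models
```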

In the next section we provide conceptual and theoretical justification for using the SVM method in the context of multiple model estimation, i.e., we explain why SVM regression can be used in Step 1 of the iterative procedure outlined above. Section 3 describes the details and implementation of the partitioning Step 2. In addition, Section 3 provides guidelines on the selection of meta-parameters for SVM regression, important for practical application of SVM. Section 4 presents empirical results for multiple model estimation. These results show successful application of the proposed SVM-based multiple model estimation for both linear and nonlinear regression models. Section 5 presents a clustering algorithm based on multiple model estimation, where the goal (of learning) is to partition the available (training) data into several subsets, such that each subset is generated by a different model. Finally, conclusions are given in Section 6.

2. SVM Regression for robust model estimation

It is well known that Support Vector Machine (SVM) methodology is robust under the standard single-model estimation setting [Vapnik, 1999]. That is, the SVM approach works well for estimating an indicator function (pattern recognition problem) and for estimating a real-valued function (regression problem) from noisy, sparse training data. In this section, we demonstrate SVM robustness under the multiple model estimation setting, i.e., we explain why SVM regression provides stable and accurate estimates of the dominant model when the available (training) data are generated by several (hidden) models. First, we review the standard (linear) SVM regression formulation [Vapnik, 1995]. The goal of regression is to select the best model from a set of admissible models (aka approximating functions) f(x, ω), where ω denotes a (generalized) set of parameters. The best model provides good prediction accuracy (generalization) for future (test) samples, and its selection is performed via minimization of some loss function (aka empirical risk) for the available training data (x_i, y_i), i = 1, 2, ..., n.

The main feature of SVM regression responsible for its attractive properties is the notion of the ε-insensitive loss function:

$$L(y, f(\mathbf{x}, \omega)) = \begin{cases} 0, & |y - f(\mathbf{x}, \omega)| \le \varepsilon \\ |y - f(\mathbf{x}, \omega)| - \varepsilon, & \text{otherwise} \end{cases} \qquad (1)$$

Here the linear nature of the loss function accounts for SVM robustness, whereas the ε-insensitive zone leads to sparseness of SVM regression models [Vapnik, 1995]. Let us consider (for simplicity) linear SVM regression:

$$f(\mathbf{x}, \omega) = \langle \omega, \mathbf{x} \rangle + b \qquad (2)$$

The SVM approach to linear regression amounts to (simultaneous) minimization of the ε-insensitive loss function (1) and minimization of the norm of the linear parameters ω [Vapnik, 1995]. This can be formally described by introducing (non-negative) slack variables ξ_i, ξ_i*, i = 1, ..., n, to measure the deviation of training samples outside the ε-insensitive zone. Thus SVM regression can be formulated as minimization of the following functional:

$$\min \ \tfrac{1}{2}\|\omega\|^2 + C \sum_{i=1}^{n} (\xi_i + \xi_i^*) \qquad (3)$$

subject to the constraints

$$y_i - \langle \omega, \mathbf{x}_i \rangle - b \le \varepsilon + \xi_i, \qquad \langle \omega, \mathbf{x}_i \rangle + b - y_i \le \varepsilon + \xi_i^*, \qquad \xi_i, \xi_i^* \ge 0, \quad i = 1, \ldots, n$$

The constant C determines the trade-off between the model complexity (flatness) and the degree to which deviations larger than ε are tolerated in the optimization formulation.
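A direct transcription of the loss (1) makes its behavior easy to see: zero inside the tube and linear (rather than quadratic) growth outside it, which is the source of the robustness analyzed below. A minimal NumPy sketch:

```python
import numpy as np

def eps_insensitive_loss(y, f, eps):
    # L(y, f) = max(|y - f| - eps, 0), per eq. (1)
    return np.maximum(np.abs(y - f) - eps, 0.0)

deviations = np.array([0.0, 0.05, 0.2, 1.0, 5.0])
print(eps_insensitive_loss(deviations, 0.0, eps=0.1))
# -> [0.   0.   0.1  0.9  4.9]: large deviations incur only linear loss
```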

This optimization problem can be transformed into the dual problem [Vapnik, 1995], and its solution is given by

$$f(\mathbf{x}) = \sum_{i=1}^{n} (\alpha_i - \alpha_i^*) \langle \mathbf{x}_i, \mathbf{x} \rangle + b \qquad (4)$$

with coefficient values in the range 0 ≤ α_i* ≤ C, 0 ≤ α_i ≤ C. In representation (4), typically only a fraction of the training samples appear with non-zero coefficients, and such training samples are called support vectors. For most applications, the number of support vectors (SVs), n_SV, is smaller than the number of training samples. Thus, with the ε-insensitive loss, SVM solutions are typically sparse. For the nonlinear regression problem, the SVM approach first performs a mapping from the input space onto a high-dimensional feature space and then performs linear regression in that high-dimensional feature space. The SVM solution is

$$f(\mathbf{x}) = \sum_{i=1}^{n} (\alpha_i - \alpha_i^*) K(\mathbf{x}_i, \mathbf{x}) + b \qquad (5)$$

where K(x_i, x) is a kernel function. The choice of the kernel function and kernel parameters is determined by the user and is (usually) application-dependent. In this paper, we use RBF kernels

$$K(\mathbf{x}, \mathbf{x}_i) = \exp\!\left(-\frac{\|\mathbf{x} - \mathbf{x}_i\|^2}{2p^2}\right) \qquad (6)$$

where p is the RBF width parameter.
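The expansion (5) can be checked numerically against a fitted model. The sketch below assumes scikit-learn's SVR, whose dual_coef_, support_vectors_ and intercept_ attributes expose the quantities appearing in (5); its gamma parameter relates to the width p of (6) via gamma = 1/(2p²). The data set and parameter values are purely illustrative.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, (100, 1))
y = np.sin(2 * np.pi * X[:, 0]) + rng.normal(0, 0.1, 100)

p = 0.2  # RBF width of eq. (6), illustrative value
svm = SVR(kernel='rbf', gamma=1.0 / (2 * p**2), C=10.0, epsilon=0.1).fit(X, y)

# Evaluate (5) by hand: f(x) = sum_i (alpha_i - alpha_i*) K(x_i, x) + b
x_new = np.array([[0.3]])
K = np.exp(-np.sum((svm.support_vectors_ - x_new) ** 2, axis=1) / (2 * p**2))
f_manual = svm.dual_coef_[0] @ K + svm.intercept_[0]
print(f_manual, svm.predict(x_new)[0])  # the two values coincide
```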

Next, we explain why SVM regression is suitable for estimating the dominant model under the multiple model formulation. We assume, for simplicity, the linear SVM formulation (4); however, similar arguments hold for nonlinear SVM as well. The objective function in (3) can be viewed as a primal problem, and its dual form can be obtained by constructing the Lagrange function and introducing a set of (dual) variables [Vapnik, 1995]. For the dual form, the so-called Karush-Kuhn-Tucker (KKT) conditions hold at the optimal solution; they state that the product between the dual variables and the constraints has to vanish:

$$\begin{aligned} \alpha_i \,(\varepsilon + \xi_i - y_i + \langle \omega, \mathbf{x}_i \rangle + b) &= 0 \\ \alpha_i^* (\varepsilon + \xi_i^* + y_i - \langle \omega, \mathbf{x}_i \rangle - b) &= 0 \\ (C - \alpha_i)\,\xi_i &= 0 \\ (C - \alpha_i^*)\,\xi_i^* &= 0 \end{aligned} \qquad (7)$$

We may further analyze the properties of the coefficients α_i (dual variables) in the SVM solution, evident from the KKT conditions [Smola and Schölkopf, 1998]. First, only samples with corresponding α_i = C lie outside the ε-insensitive zone. Second, the condition α_i α_i* = 0 implies that the dual variables α_i and α_i* cannot both be nonzero, since nonzero slack cannot occur in both directions. Let us analyze the contribution of training samples to the SVM solution (4). As shown in Fig. 2, all data samples can be divided into 3 subsets: data points inside the ε-tube (labeled 1 in Fig. 2), data points on the ε-tube border (labeled 2 in Fig. 2), and data points outside the ε-tube (labeled 3 in Fig. 2). Note that data samples inside the ε-tube cannot be support vectors, whereas data samples on the ε-tube border and outside the ε-tube are support vectors, but they have different values of the slack variables ξ_i and dual variables α_i, as summarized in Table 2.

Table 2: Values of slack variables and dual variables for the different subsets

           Sample location       SV?      Slack ξ_i    Dual variable α_i
Subset 1   Inside the ε-tube     Not SV   ξ_i = 0      α_i = 0
Subset 2   On the ε-tube         SV       ξ_i = 0      α_i ∈ (0, C)
Subset 3   Outside the ε-tube    SV       ξ_i > 0      α_i = C
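The three subsets of Table 2 can be recovered from a fitted model by comparing residuals with ε. A minimal sketch, again assuming scikit-learn's SVR on illustrative data; a small numerical tolerance (our choice) decides which samples sit exactly on the tube border:

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, (100, 1))
y = 0.8 * X[:, 0] + 1.0 + rng.normal(0, 0.1, 100)

eps = 0.1
svm = SVR(kernel='linear', C=100.0, epsilon=eps).fit(X, y)
r = np.abs(y - svm.predict(X))

tol = 1e-3  # solver tolerance for "exactly on the tube"
subset1 = r < eps - tol           # inside the tube: not SVs, alpha_i = 0
subset2 = np.abs(r - eps) <= tol  # on the tube: SVs with alpha_i in (0, C)
subset3 = r > eps + tol           # outside the tube: SVs with alpha_i = C
print(subset1.sum(), subset2.sum(), subset3.sum())
```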

Recall that the coefficients ω in the (linear) SVM solution (4) are calculated as

$$\omega = \sum_{i=1}^{n} (\alpha_i - \alpha_i^*)\,\mathbf{x}_i \qquad (8)$$

where a non-zero contribution is provided only by the support vectors, i.e., the data points in subset 2 (on the ε-tube) and subset 3 (outside the ε-tube). Further, the value of the coefficient vector ω is determined by the α_i- and x_i-values of the training samples; however, for samples in subset 3 the value α_i = C (constant) does not depend on the y-values of the training samples. Hence, data points from subset 3 give the same contribution to the SVM solution regardless of their y-values, i.e., independent of how far their y-values are from the ε-tube. This property enables robust SVM estimation of the dominant model during multiple model estimation. For example, in Fig. 2 consider the two points labeled Point 1 and Point 2. Although their y-values are quite different, their x-values are very close (or equal) and the corresponding α_i = C, so their contributions to the SVM solution (8) are (approximately) the same. Similarly, one can analyze the contribution of data samples to the bias term in the SVM solution. Following [Smola and Schölkopf, 1998], the bias term b is given by:

$$b = y_i - \langle \omega, \mathbf{x}_i \rangle - \varepsilon \ \ \text{for } \alpha_i \in (0, C), \qquad b = y_i - \langle \omega, \mathbf{x}_i \rangle + \varepsilon \ \ \text{for } \alpha_i^* \in (0, C) \qquad (9)$$

where the constraint α_i ∈ (0, C) corresponds to data points in subset 2 (on the border of the ε-tube). Hence, the points outside the ε-tube (in subset 3) do not contribute to the bias, i.e., outliers (samples outside the ε-tube) have no effect on the value of the bias.
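The key claim, that samples in subset 3 contribute with α_i = C regardless of their y-values, can be verified empirically: pushing an outlier's response even further from the tube should leave the fitted weights and bias essentially unchanged. A minimal sketch, assuming scikit-learn's SVR on illustrative data:

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, (50, 1))
y = 0.8 * X[:, 0] + 1.0 + rng.normal(0, 0.1, 50)
y[0] += 3.0    # turn sample 0 into a gross outlier (well outside the tube)

m1 = SVR(kernel='linear', C=10.0, epsilon=0.1).fit(X, y)
y[0] += 100.0  # move the same outlier much further away
m2 = SVR(kernel='linear', C=10.0, epsilon=0.1).fit(X, y)

print(m1.coef_, m1.intercept_)
print(m2.coef_, m2.intercept_)  # identical up to solver tolerance
```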

In summary, our analysis of (linear) SVM regression presented above indicates that:
- the SVM regression model depends mainly on the SVs on the border of the ε-insensitive zone;
- the SVM regression solution is very robust to outliers (i.e., data samples outside the ε-insensitive zone); in particular, the SVM solution does not depend on the y-values of such outliers.

These properties make SVM very attractive for use in the iterative procedure for multiple model estimation described in Section 1, where a robust estimator applied to all training data needs to provide a reliable estimate of the first dominant model. The main practical issue is specifying the conditions under which SVM regression yields an accurate estimate of the dominant model under the multiple model setting. To address this issue, recall that an SVM model depends mainly on the SVs on the border of the ε-insensitive zone. Hence, SVM regression will provide an accurate estimate of the dominant model only if these SVs are generated by the dominant model. This will happen only if all (or most) samples in subset 1 (inside the ε-tube) and in subset 2 (on the ε-tube) are generated by the dominant model. Since the SVM model is estimated using all training data (from several models), the last condition implies that the majority of the data (say, over 55%) should be generated by the dominant model. The requirement that the majority of the available data be generated by a dominant model is standard in robust statistics [Lawrence and Arthur, 1990]. Here we have simply derived this condition for the SVM algorithm in the context of multiple model estimation.

3. SVM methodology for multiple model estimation

This section describes practical algorithms (based on SVM regression) for multiple model estimation. These algorithms are based on the iterative procedure described in Section 1. However, practical implementation of this procedure requires addressing the following issues:
- How to set the meta-parameters of SVM regression;
- How to partition the data into two subsets (after the dominant model has been estimated).

In order to simplify presentation, all descriptions in this paper assume that the training data are generated by two models, i.e., model M1 (dominant model) and model M2 (minor model). The goal is to accurately estimate the dominant model in the first iteration of the iterative procedure given in Section 1; the second model M2 is then estimated in the second iteration of the algorithm. Generalization to data sets with multiple models is straightforward.

Selection of SVM meta-parameters. Next we discuss the proper setting of ε (insensitive zone) and C (regularization parameter) in SVM regression for estimating the dominant model in Step 1 of the iterative procedure given in Section 1. There are many proposals for setting SVM meta-parameters for standard single-model estimation [Vapnik, 1995; Cherkassky and Mulier, 1998; Schölkopf et al., 1999; Hastie et al., 2001]. However, most theoretical prescriptions for setting meta-parameters are based on restrictive assumptions, and in practice SVM meta-parameters are often selected via resampling [Schölkopf et al., 1999]. In this paper, however, we are interested in selecting meta-parameters for the multiple model estimation setting. Recently, [Cherkassky and Ma, 2002] proposed analytical selection of SVM meta-parameters (for the standard single-model regression formulation), as detailed next. For SVM regression, the values of the meta-parameters are:

$$C = \max\left(|\bar{y} + 3\sigma_y|,\ |\bar{y} - 3\sigma_y|\right) \qquad (10)$$

where ȳ is the mean of the training response values and σ_y is the standard deviation of the training response values; and

$$\varepsilon(\sigma, n) = 3\sigma\sqrt{\frac{\ln n}{n}} \qquad (11)$$

where σ is the standard deviation of the additive noise and n is the number of training samples.
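The prescriptions (10) and (11) are straightforward to compute from the training data; the following sketch implements them as written (σ is the known or estimated noise standard deviation):

```python
import numpy as np

def select_C(y):
    """Regularization parameter per eq. (10)."""
    return max(abs(y.mean() + 3 * y.std()), abs(y.mean() - 3 * y.std()))

def select_eps(sigma, n):
    """Width of the epsilon-insensitive zone per eq. (11)."""
    return 3 * sigma * np.sqrt(np.log(n) / n)

# e.g., sigma = 0.1 and n = 50 give approx. 0.084 (cf. the value in Fig. 1)
print(select_eps(sigma=0.1, n=50))
```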

Further, it can be shown that the value of the ε-parameter plays the most important role for SVM regression, whereas SVM solutions are rather insensitive to the regularization parameter, as long as this parameter is larger than the value given by (10) [Cherkassky and Ma, 2002]. This insensitivity to regularization parameter values is particularly true for the linear SVM regression formulation (3). In other words, one can use a very large value of the regularization parameter in (3), so that the SVM solution depends only on the proper setting of ε. So in the remainder of the paper we shall be concerned only with the selection of ε. In order to apply (11) for multiple model estimation, consider (for simplicity) only linear SVM. Then, in order to estimate the dominant model M1, we should know the standard deviation σ of the additive noise in the dominant model and the number of samples generated by the dominant model M1. Hence, we may consider two possibilities:
- First, we assume that the noise level for each (hidden) model is available or can somehow be estimated (using a priori knowledge). In this case, we simply use (11) for selecting the value of ε.
- Second, the noise level (standard deviation) and the number of samples for each model are not known.

Let us consider the second (more difficult) possibility. In this case, the selection of ε relies on the requirement that the majority of the available data is generated by the dominant model (being estimated by SVM). Hence, we need to select an ε-value such that most of the data (say 55%) lies inside the ε-tube. This can be done by a trial-and-error approach (i.e., trying different ε-values and examining the support vectors of the SVM estimates) or by using a more systematic approach called ν-SVM [Schölkopf et al., 1998]. This approach effectively implements SVM regression with a prespecified number of support vectors given by the parameter ν (i.e., a given fraction of the total number of samples).

In the context of multiple model estimation, the requirement that 55% of the data is inside the insensitive zone is equivalent to specifying ν = 0.45, i.e., that 45% of the data is outside the ε-tube. Remarkably, the ability of SVM to accurately estimate the dominant model is not very sensitive to the chosen width of the ε-insensitive zone. For example, let us apply (linear) SVM to the data set shown in Fig. 1, in order to estimate the dominant model M1. Assuming the noise standard deviation σ = 0.1 is known, the value of the ε-insensitive zone according to (11) should be ε = 0.084. This value has been used to generate the regression estimates shown in Fig. 1. In practice, we can only know/use crude estimates of the noise level (and hence crude ε-values). So we try to estimate the dominant model for the data set in Fig. 1a using SVM regression with three different ε-values (ε = 0.084, 0.042 and 0.126), i.e., the value ε = 0.084 specified by (11) along with half and one-and-a-half times that value. Fig. 3 shows the SVM estimates of the dominant model for the different ε-values; clearly, these estimates are almost identical, in spite of the significant variations in ε-values. Hence, using inaccurate values of σ and n (the number of samples) for estimating the value of ε via (11) should not affect accurate estimation of the dominant model. For example, if the total number of samples is 100 (a known number), then the (unknown) number of samples in the dominant model should be at least 50. According to (11), the difference between the ε-values computed with n = 50 and with n = 100 is about 25%, so using inaccurate values of the number of samples should result in about a 25% variation in ε-values (in the worst case). This variation does not affect the accuracy of SVM regression estimates, as indicated by the empirical results in Fig. 3.
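Returning to the ν-SVM option above: in scikit-learn terms (our choice of implementation), it corresponds to NuSVR, which adjusts ε internally so that a prespecified fraction ν of samples lies outside or on the tube. A minimal sketch with an assumed two-model data set:

```python
import numpy as np
from sklearn.svm import NuSVR

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 100)
is_dominant = rng.uniform(size=100) < 0.7        # ~70% from the dominant model
y = np.where(is_dominant, 0.8 * x + 1, 0.2 * x + 1) + rng.normal(0, 0.1, 100)

# nu = 0.45: at most 45% of samples outside the tube, at least 45% are SVs
svm = NuSVR(kernel='linear', C=100.0, nu=0.45).fit(x.reshape(-1, 1), y)
print(svm.support_.size / len(y))  # fraction of SVs, roughly >= 0.45
```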

Data partitioning step. Following estimation of the dominant model, we need to partition the available data into two subsets, i.e., the data generated by the dominant model and the remaining data (generated by other models). This is done by analyzing the (absolute values of the) residuals between the training response values y_i and the SVM estimates ŷ(x_i) provided by the dominant model:

$$res_i = |y_i - \hat{y}(\mathbf{x}_i)| \quad \text{for } i = 1, \ldots, n \qquad (12)$$

Namely, training samples with residual values smaller than a certain threshold are attributed to the dominant model, and samples with large absolute values of the residuals are attributed to the other model(s). Empirically, we found that a good threshold equals twice the standard deviation of the additive noise in the dominant model M1:

$$\text{If } res_i < 2\sigma \ \text{ then } (\mathbf{x}_i, y_i) \in M_1 \qquad (13)$$

Here we assume that the noise level σ is known a priori or can be (accurately) estimated from the data. In fact, the noise level (its standard deviation) can readily be estimated from the data, as outlined next. Let us plot the histogram of the (signed) residuals y_i − ŷ(x_i) for the training data. Since SVM provides very accurate estimates of the dominant model (in Step 1) and the majority of the data is produced by the dominant model, these samples will form a large cluster (of residuals) symmetric around zero, whereas samples generated by other models will produce a few smaller clusters. The standard deviation of the noise (in the dominant model) can then be estimated via the standard deviation of the residuals in the large cluster. Further, the empirical results (in Fig. 3) indicate that the overall quality of the multiple model estimation procedure is not sensitive to accurate knowledge/estimation of the noise level.
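A sketch of this noise estimate: compute the signed residuals to the dominant-model estimate, isolate the large central cluster, and take its standard deviation. How the cluster is isolated is left open by the text; the median-absolute-deviation cutoff below is our choice, not the paper's.

```python
import numpy as np

def estimate_noise_sigma(y, y_hat):
    res = y - y_hat                      # signed residuals to dominant model
    med = np.median(res)
    mad = np.median(np.abs(res - med))   # robust scale of the central cluster
    cutoff = 3 * 1.4826 * mad            # ~3 sigma for gaussian noise
    central = res[np.abs(res - med) < cutoff]
    return central.std()                 # estimated noise standard deviation
```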

4. Empirical results

This section describes empirical results for the multiple model estimation procedure, using synthetic data sets. We only show examples where the training samples Z = (x_i, y_i), i = 1, 2, ..., n are generated by two models, i.e.:
- Model M1 generates n1 samples according to y = r1(x) + δ1;
- Model M2 generates n2 samples according to y = r2(x) + δ2, so that n1 + n2 = n.

Note that the same algorithm can be successfully applied when the data are generated by a larger number of models (these results are not shown here due to space constraints). In all examples the input values of the training data are generated as random samples from a uniform distribution. Both hidden models are defined on the same domain in the input space. We use additive gaussian noise to generate the training data for the examples presented in this section. However, the proposed method works well with other types of noise; also, the standard deviation (of the noise) may be different for different models. To simplify the presentation, the standard deviation of the noise is assumed to be known (to the algorithm); however, in practical settings the noise level can be estimated from the data, as described in Section 3.

The first example assumes that both models are linear; hence we apply the linear SVM regression method (without kernels). In this example we have 100 training samples generated as follows:
- Model M1: y = r1(x) + δ1, where r1(x) = 0.8x + 1, x ∈ [0, 1], n1 = 60 (major model);
- Model M2: y = r2(x) + δ2, where r2(x) = 0.2x + 1, x ∈ [0, 1], n2 = 40 (minor model).

We consider two noise levels: σ1 = σ2 = 0.1 (small noise) and σ1 = σ2 = 0.3 (large noise). The training data sets (with small noise and with large noise) are shown in Fig. 4, with samples generated by the major model labeled '+' and samples generated by the minor model shown with a different marker. These data are hard to separate visually (by a human eye) in the case of large noise. However, the proposed method can accurately estimate both models, and separate the training data, as shown in Fig. 4. As expected, the model estimation accuracy is better for low noise; however, even with large noise the model estimates are quite good.

In the second example, both models are nonlinear; hence we use nonlinear SVM regression. An RBF kernel (6) with width parameter p = 0.2 is used in this example. The available training data (a total of 100 samples) are generated as follows:
- Model M1: y = r1(x) + δ1, where r1(x) = sin(2πx), x ∈ [0, 1], n1 = 70;
- Model M2: y = r2(x) + δ2, where r2(x) = cos(2πx), x ∈ [0, 1], n2 = 30.

Again, we consider two noise levels: σ1 = σ2 = 0.1 (small noise) and σ1 = σ2 = 0.3 (large noise). The training data sets (with small noise and with large noise) are shown in Fig. 5. The results in Fig. 5 indicate that the proposed method provides very accurate model estimates, even in the case of large noise.

Finally, we show an example of multiple model estimation for higher-dimensional data. We consider linear models in a 4-dimensional input space, with the available training data generated as follows:
- Model M1: y = r1(x) + δ1, where r1(x) = x1 + x2 + x3 + x4, x ∈ [0, 1]^4, n1 = 60 (major model);
- Model M2: y = r2(x) + δ2, where r2(x) = 6 − x1 − x2 − x3 − x4, x ∈ [0, 1]^4, n2 = 40 (minor model).

The training data are corrupted by additive noise with standard deviation σ1 = σ2 = 0.1. For this data set, we illustrate the data partitioning step in Fig. 6. The results in Fig. 6 show the distribution of the residual values, i.e., the difference between the response values and the M1 model estimates (normalized by the standard deviation of the noise), calculated according to (12). Residual values for the first 60 samples (generated by model M1) are on the left-hand side of Fig. 6, and those of the next 40 samples (generated by model M2) are on the right-hand side of Fig. 6. Partitioning of the data samples is performed on the residual values according to (13), using threshold value 2, and this threshold is indicated as a horizontal line in the middle of Fig. 6.

That is, samples below this line are assigned to M1, and samples above this line are assigned to M2. As expected, the data partitioning is not very accurate, since some samples from M1 are classified as samples from M2, and vice versa. This is because the two models actually produce the same (or very close) response values in a small region of the input space, so perfectly accurate classification is not possible. However, the proposed multiple model estimation procedure provides very accurate model estimates for this data set. Namely, the coefficients of the estimates obtained for models M1 and M2 are very close to the true coefficient values of the target functions used to generate the noisy data (e.g., estimated coefficients of 1.08 and −0.93 where the true values are 1 and −1, respectively), with MSE (for M1) = 0.077. The MSE measure indicates the mean squared error between a regression estimate and the true target function (for each model), computed using 500 independently generated test samples.

5. Clustering using multiple model estimation

In many applications, the goal of data modeling (assuming that the data are generated by several models) is to cluster/partition the available data into several subsets corresponding to the different generating models. This goal is concerned mainly with accurate partitioning of the data, rather than with accurate estimation of the (hidden) regression models, even though these two objectives are highly correlated. The example shown in Fig. 6 illustrates the problem: even though the data partitioning (implemented by the proposed algorithm) is not very accurate, the algorithm produces very accurate and robust estimates of the regression models. In this section we show how to improve the accuracy of data partitioning under the multiple model estimation formulation.

For example, consider the nonlinear data set described in Section 4 and shown in Fig. 5c. For this data set, some samples are very difficult to assign to the appropriate model, especially in regions of the input space where the models have similar response values. For this data set, the proposed multiple model estimation algorithm correctly classifies 90.4% of the samples generated by the major model M1, and 53.5% of the samples generated by the minor model M2. However, the accuracy of the data partitioning can be further improved using the simple post-processing procedure described next. This procedure uses the regression estimates provided by the proposed multiple model estimation algorithm. Let us denote the regression estimate for the major model M1 as ŷ^(1)(x), and the regression estimate for the minor model M2 as ŷ^(2)(x). Then each training sample (x_i, y_i), i = 1, 2, ..., n can be assigned to one of the two models based on the (absolute) values of the residuals:

$$res_i^{(1)} = |y_i - \hat{y}^{(1)}(\mathbf{x}_i)| \quad \text{and} \quad res_i^{(2)} = |y_i - \hat{y}^{(2)}(\mathbf{x}_i)|$$

That is:

$$\text{If } res_i^{(1)} < res_i^{(2)} \ \text{ then } (\mathbf{x}_i, y_i) \in M_1, \ \text{ else } (\mathbf{x}_i, y_i) \in M_2 \qquad (14)$$

Effectively, such a post-processing method implements nearest-neighbor classification using the (absolute values of the) residuals. Applying prescription (14) to partition the data set shown in Fig. 5c yields a classification accuracy of 91.6% for samples generated by M1, and a classification accuracy of 80% for samples generated by M2. Hence, the data re-partitioning technique (14) gives better accuracy than the data partitioning produced by the original multiple model estimation procedure.
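Rule (14) reduces to a few lines of code. The sketch below assumes the two regression estimates are already available as prediction values (e.g., from fitted SVR models); as discussed at the end of this section, it implicitly assumes equal noise levels and equal misclassification costs for the two models.

```python
import numpy as np

def repartition(y, y_hat_1, y_hat_2):
    """Assign each sample to the model with the smaller absolute residual,
    per rule (14); returns 1 for model M1 and 2 for model M2."""
    res1 = np.abs(y - y_hat_1)   # residuals w.r.t. major-model estimate
    res2 = np.abs(y - y_hat_2)   # residuals w.r.t. minor-model estimate
    return np.where(res1 < res2, 1, 2)
```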

In conclusion, we comment on the applicability and implications of the clustering/data partitioning approach described in this section. This approach to clustering assumes that the training data are generated by several models, and the clustering relies heavily on accurate estimates of the (regression) models obtained by the robust SVM-based algorithm. Hence, the problem setting itself combines supervised learning (i.e., estimation of the regression models) and unsupervised learning (i.e., data partitioning or clustering). We expect this approach to clustering to outperform traditional clustering techniques for applications that can be described using the multiple model formulation. Finally, the proposed nearest-neighbor rule (14) for data partitioning assumes that both (hidden) models have the same noise level (standard deviation), and the same misclassification cost for both models. These assumptions hold true for the data set in Fig. 5c, and this explains the improved classification accuracy for this example. In many applications, however, the noise levels and misclassification costs for the different (hidden) models are not the same, and one should adjust rule (14) to account for these differences.

6. Summary and discussion

This paper presents a new algorithm for multiple model estimation. The proposed method is based on SVM learning adapted to the multiple model formulation. Empirical results presented in this paper demonstrate that SVM-based learning can be successfully applied to multiple model regression problems. In addition, we introduced a new clustering/data partitioning method suitable for the multiple model formulation. Future related work may focus on applications of the proposed methodology to real-world problems, ranging from computer vision (motion analysis) to financial engineering. As discussed in [Cherkassky and Ma, 2002], such applications should be based on a thorough understanding of each application domain, necessary for a meaningful specification/parameterization of the (hidden) models. Additional research may be concerned with a better understanding of the robustness of SVM methodology, and with comparing it to traditional robust methods in the context of multiple model estimation.

Finally, we point out that the proposed learning method assumes that all (hidden) models are defined on the same domain (i.e., the same region in the input space). In situations where different models are defined in different (disjoint) regions of the input space, the proposed algorithm cannot be successfully applied. Instead, one should use well-known (tree) partitioning algorithms such as CART, mixtures of experts, and their variants [Hastie et al., 2001]. These algorithms effectively partition the input space into several (disjoint) regions and estimate output (response) values in each region of the input space. Here it may be interesting to note that tree partitioning algorithms are based on a single-model formulation, so they tend to enforce smoothness at the region boundaries. It may be possible to develop learning algorithms for regression in disjoint regions using the multiple model formulation, and then compare their accuracy with traditional tree partitioning methods.

REFERENCES
[1] V. Cherkassky and Y. Ma, Multiple Model Estimation: A New Formulation for Predictive Learning, IEEE Trans. Neural Networks (under review), 2002
[2] M. Tanaka, Mixture of Probabilistic Factor Analysis Model and Its Application, in Proc. ICANN 2001, LNCS 2130, 2001
[3] V. Vapnik, The Nature of Statistical Learning Theory (2nd ed.), Springer, 1999
[4] V. Cherkassky and F. Mulier, Learning from Data: Concepts, Theory and Methods, Wiley, 1998
[5] H. Chen, P. Meer and D. Tyler, Robust Regression for Data with Multiple Structures, in CVPR 2001, Proc. IEEE Computer Society Conf., 2001
[6] P. Rousseeuw and A. Leroy, Robust Regression and Outlier Detection, Wiley, NY, 1987
[7] V. Vapnik, The Nature of Statistical Learning Theory, Springer, 1995
[8] K. Lawrence and J. Arthur, Robust Regression: Analysis and Applications, M. Dekker, New York, 1990
[9] A. Smola and B. Schölkopf, A Tutorial on Support Vector Regression, NeuroCOLT Technical Report NC-TR, Royal Holloway College, University of London, UK, 1998
[10] T. Hastie, R. Tibshirani and J. Friedman, The Elements of Statistical Learning: Data Mining, Inference and Prediction, Springer, 2001
[11] B. Schölkopf, C. Burges and A. Smola (eds.), Advances in Kernel Methods: Support Vector Learning, MIT Press, 1999

[12] V. Cherkassky and Y. Ma, Selection of Meta-Parameters for Support Vector Regression, Proc. ICANN-2002 (to appear), 2002
[13] B. Schölkopf, P. Bartlett, A. Smola and R. Williamson, Support Vector Regression with Automatic Accuracy Control, in L. Niklasson, M. Bodén and T. Ziemke (eds.), Proc. ICANN'98, Springer, pp. 111-116, 1998

FIGURE CAPTIONS

Fig. 1: Comparing robust method vs. least squares estimation of the dominant model. (a) First data set (dominant model: 70% of data samples, secondary model: 30% of samples); (b) Estimates of dominant model M1 by robust method vs. least squares, for data set (a); (c) Second data set (dominant model: 70% of data samples, secondary model: 30% of samples); (d) Estimates of dominant model M1 by robust method vs. least squares, for data set (c).
Fig. 2: Location of training data with respect to the ε-insensitive tube, showing the three possible subsets of data.
Fig. 3: SVM estimates of the dominant model do not depend on accurate selection of ε-values. Results show three SVM estimates for the data set in Fig. 1(a), using the optimal ε (for this data set), 0.5ε, and 1.5ε: solid line, optimal ε = 0.084; dashed line, ε = 0.042; dotted line, ε = 0.126.
Fig. 4: Example of the multiple model estimation procedure for linear models: (a) Training data (with small noise); (b) Model estimates for data set (a) obtained using the proposed algorithm; (c) Training data (with large noise); (d) Model estimates for data set (c) obtained using the proposed algorithm.
Fig. 5: Example of the multiple model estimation procedure for nonlinear models: (a) Training data (with small noise); (b) Model estimates for data set (a) obtained using the proposed algorithm; (c) Training data (with large noise); (d) Model estimates for data set (c) obtained using the proposed algorithm.
Fig. 6: Illustration of the data partitioning Step 2 in the proposed algorithm, for the high-dimensional data set. A horizontal threshold line is used to partition the data into two subsets (horizontal axis: index of training samples; vertical axis: residual/σ).

(Figures 1-6 omitted; see the figure captions above.)


More information

Fuzzy Filtering Algorithms for Image Processing: Performance Evaluation of Various Approaches

Fuzzy Filtering Algorithms for Image Processing: Performance Evaluation of Various Approaches Proceedngs of the Internatonal Conference on Cognton and Recognton Fuzzy Flterng Algorthms for Image Processng: Performance Evaluaton of Varous Approaches Rajoo Pandey and Umesh Ghanekar Department of

More information

Cluster Analysis of Electrical Behavior

Cluster Analysis of Electrical Behavior Journal of Computer and Communcatons, 205, 3, 88-93 Publshed Onlne May 205 n ScRes. http://www.scrp.org/ournal/cc http://dx.do.org/0.4236/cc.205.350 Cluster Analyss of Electrcal Behavor Ln Lu Ln Lu, School

More information

Face Recognition University at Buffalo CSE666 Lecture Slides Resources:

Face Recognition University at Buffalo CSE666 Lecture Slides Resources: Face Recognton Unversty at Buffalo CSE666 Lecture Sldes Resources: http://www.face-rec.org/algorthms/ Overvew of face recognton algorthms Correlaton - Pxel based correspondence between two face mages Structural

More information

CLASSIFICATION OF ULTRASONIC SIGNALS

CLASSIFICATION OF ULTRASONIC SIGNALS The 8 th Internatonal Conference of the Slovenan Socety for Non-Destructve Testng»Applcaton of Contemporary Non-Destructve Testng n Engneerng«September -3, 5, Portorož, Slovena, pp. 7-33 CLASSIFICATION

More information

Term Weighting Classification System Using the Chi-square Statistic for the Classification Subtask at NTCIR-6 Patent Retrieval Task

Term Weighting Classification System Using the Chi-square Statistic for the Classification Subtask at NTCIR-6 Patent Retrieval Task Proceedngs of NTCIR-6 Workshop Meetng, May 15-18, 2007, Tokyo, Japan Term Weghtng Classfcaton System Usng the Ch-square Statstc for the Classfcaton Subtask at NTCIR-6 Patent Retreval Task Kotaro Hashmoto

More information

y and the total sum of

y and the total sum of Lnear regresson Testng for non-lnearty In analytcal chemstry, lnear regresson s commonly used n the constructon of calbraton functons requred for analytcal technques such as gas chromatography, atomc absorpton

More information

Incremental Learning with Support Vector Machines and Fuzzy Set Theory

Incremental Learning with Support Vector Machines and Fuzzy Set Theory The 25th Workshop on Combnatoral Mathematcs and Computaton Theory Incremental Learnng wth Support Vector Machnes and Fuzzy Set Theory Yu-Mng Chuang 1 and Cha-Hwa Ln 2* 1 Department of Computer Scence and

More information

Relevance Assignment and Fusion of Multiple Learning Methods Applied to Remote Sensing Image Analysis

Relevance Assignment and Fusion of Multiple Learning Methods Applied to Remote Sensing Image Analysis Assgnment and Fuson of Multple Learnng Methods Appled to Remote Sensng Image Analyss Peter Bajcsy, We-Wen Feng and Praveen Kumar Natonal Center for Supercomputng Applcaton (NCSA), Unversty of Illnos at

More information

Machine Learning: Algorithms and Applications

Machine Learning: Algorithms and Applications 14/05/1 Machne Learnng: Algorthms and Applcatons Florano Zn Free Unversty of Bozen-Bolzano Faculty of Computer Scence Academc Year 011-01 Lecture 10: 14 May 01 Unsupervsed Learnng cont Sldes courtesy of

More information

Parameter estimation for incomplete bivariate longitudinal data in clinical trials

Parameter estimation for incomplete bivariate longitudinal data in clinical trials Parameter estmaton for ncomplete bvarate longtudnal data n clncal trals Naum M. Khutoryansky Novo Nordsk Pharmaceutcals, Inc., Prnceton, NJ ABSTRACT Bvarate models are useful when analyzng longtudnal data

More information

A Fast Content-Based Multimedia Retrieval Technique Using Compressed Data

A Fast Content-Based Multimedia Retrieval Technique Using Compressed Data A Fast Content-Based Multmeda Retreval Technque Usng Compressed Data Borko Furht and Pornvt Saksobhavvat NSF Multmeda Laboratory Florda Atlantc Unversty, Boca Raton, Florda 3343 ABSTRACT In ths paper,

More information

High resolution 3D Tau-p transform by matching pursuit Weiping Cao* and Warren S. Ross, Shearwater GeoServices

High resolution 3D Tau-p transform by matching pursuit Weiping Cao* and Warren S. Ross, Shearwater GeoServices Hgh resoluton 3D Tau-p transform by matchng pursut Wepng Cao* and Warren S. Ross, Shearwater GeoServces Summary The 3D Tau-p transform s of vtal sgnfcance for processng sesmc data acqured wth modern wde

More information

BOOSTING CLASSIFICATION ACCURACY WITH SAMPLES CHOSEN FROM A VALIDATION SET

BOOSTING CLASSIFICATION ACCURACY WITH SAMPLES CHOSEN FROM A VALIDATION SET 1 BOOSTING CLASSIFICATION ACCURACY WITH SAMPLES CHOSEN FROM A VALIDATION SET TZU-CHENG CHUANG School of Electrcal and Computer Engneerng, Purdue Unversty, West Lafayette, Indana 47907 SAUL B. GELFAND School

More information

SHAPE RECOGNITION METHOD BASED ON THE k-nearest NEIGHBOR RULE

SHAPE RECOGNITION METHOD BASED ON THE k-nearest NEIGHBOR RULE SHAPE RECOGNITION METHOD BASED ON THE k-nearest NEIGHBOR RULE Dorna Purcaru Faculty of Automaton, Computers and Electroncs Unersty of Craoa 13 Al. I. Cuza Street, Craoa RO-1100 ROMANIA E-mal: dpurcaru@electroncs.uc.ro

More information

Parallel Numerics. 1 Preconditioning & Iterative Solvers (From 2016)

Parallel Numerics. 1 Preconditioning & Iterative Solvers (From 2016) Technsche Unverstät München WSe 6/7 Insttut für Informatk Prof. Dr. Thomas Huckle Dpl.-Math. Benjamn Uekermann Parallel Numercs Exercse : Prevous Exam Questons Precondtonng & Iteratve Solvers (From 6)

More information

Backpropagation: In Search of Performance Parameters

Backpropagation: In Search of Performance Parameters Bacpropagaton: In Search of Performance Parameters ANIL KUMAR ENUMULAPALLY, LINGGUO BU, and KHOSROW KAIKHAH, Ph.D. Computer Scence Department Texas State Unversty-San Marcos San Marcos, TX-78666 USA ae049@txstate.edu,

More information

A Workflow for Spatial Uncertainty Quantification using Distances and Kernels

A Workflow for Spatial Uncertainty Quantification using Distances and Kernels A Workflow for Spatal Uncertanty Quantfcaton usng Dstances and Kernels Célne Schedt and Jef Caers Stanford Center for Reservor Forecastng Stanford Unversty Abstract Assessng uncertanty n reservor performance

More information

Sum of Linear and Fractional Multiobjective Programming Problem under Fuzzy Rules Constraints

Sum of Linear and Fractional Multiobjective Programming Problem under Fuzzy Rules Constraints Australan Journal of Basc and Appled Scences, 2(4): 1204-1208, 2008 ISSN 1991-8178 Sum of Lnear and Fractonal Multobjectve Programmng Problem under Fuzzy Rules Constrants 1 2 Sanjay Jan and Kalash Lachhwan

More information

BAYESIAN MULTI-SOURCE DOMAIN ADAPTATION

BAYESIAN MULTI-SOURCE DOMAIN ADAPTATION BAYESIAN MULTI-SOURCE DOMAIN ADAPTATION SHI-LIANG SUN, HONG-LEI SHI Department of Computer Scence and Technology, East Chna Normal Unversty 500 Dongchuan Road, Shangha 200241, P. R. Chna E-MAIL: slsun@cs.ecnu.edu.cn,

More information

Fuzzy Logic Based RS Image Classification Using Maximum Likelihood and Mahalanobis Distance Classifiers

Fuzzy Logic Based RS Image Classification Using Maximum Likelihood and Mahalanobis Distance Classifiers Research Artcle Internatonal Journal of Current Engneerng and Technology ISSN 77-46 3 INPRESSCO. All Rghts Reserved. Avalable at http://npressco.com/category/jcet Fuzzy Logc Based RS Image Usng Maxmum

More information

Fuzzy Model Identification Using Support Vector Clustering Method

Fuzzy Model Identification Using Support Vector Clustering Method Fuzzy Model Identfcaton Usng Support Vector Clusterng Method $\úhj OUçar, Yakup Demr, and Cüneyt * ]HOLú Electrcal and Electroncs Engneerng Department, Engneerng Faculty, ÕUDW Unversty, Elazg, Turkey agulucar@eee.org,ydemr@frat.edu.tr

More information

User Authentication Based On Behavioral Mouse Dynamics Biometrics

User Authentication Based On Behavioral Mouse Dynamics Biometrics User Authentcaton Based On Behavoral Mouse Dynamcs Bometrcs Chee-Hyung Yoon Danel Donghyun Km Department of Computer Scence Department of Computer Scence Stanford Unversty Stanford Unversty Stanford, CA

More information

A Fast Visual Tracking Algorithm Based on Circle Pixels Matching

A Fast Visual Tracking Algorithm Based on Circle Pixels Matching A Fast Vsual Trackng Algorthm Based on Crcle Pxels Matchng Zhqang Hou hou_zhq@sohu.com Chongzhao Han czhan@mal.xjtu.edu.cn Ln Zheng Abstract: A fast vsual trackng algorthm based on crcle pxels matchng

More information

The Man-hour Estimation Models & Its Comparison of Interim Products Assembly for Shipbuilding

The Man-hour Estimation Models & Its Comparison of Interim Products Assembly for Shipbuilding Internatonal Journal of Operatons Research Internatonal Journal of Operatons Research Vol., No., 9 4 (005) The Man-hour Estmaton Models & Its Comparson of Interm Products Assembly for Shpbuldng Bn Lu and

More information

A Modified Median Filter for the Removal of Impulse Noise Based on the Support Vector Machines

A Modified Median Filter for the Removal of Impulse Noise Based on the Support Vector Machines A Modfed Medan Flter for the Removal of Impulse Nose Based on the Support Vector Machnes H. GOMEZ-MORENO, S. MALDONADO-BASCON, F. LOPEZ-FERRERAS, M. UTRILLA- MANSO AND P. GIL-JIMENEZ Departamento de Teoría

More information

Problem Set 3 Solutions

Problem Set 3 Solutions Introducton to Algorthms October 4, 2002 Massachusetts Insttute of Technology 6046J/18410J Professors Erk Demane and Shaf Goldwasser Handout 14 Problem Set 3 Solutons (Exercses were not to be turned n,

More information

Outline. Self-Organizing Maps (SOM) US Hebbian Learning, Cntd. The learning rule is Hebbian like:

Outline. Self-Organizing Maps (SOM) US Hebbian Learning, Cntd. The learning rule is Hebbian like: Self-Organzng Maps (SOM) Turgay İBRİKÇİ, PhD. Outlne Introducton Structures of SOM SOM Archtecture Neghborhoods SOM Algorthm Examples Summary 1 2 Unsupervsed Hebban Learnng US Hebban Learnng, Cntd 3 A

More information

Machine Learning. Topic 6: Clustering

Machine Learning. Topic 6: Clustering Machne Learnng Topc 6: lusterng lusterng Groupng data nto (hopefully useful) sets. Thngs on the left Thngs on the rght Applcatons of lusterng Hypothess Generaton lusters mght suggest natural groups. Hypothess

More information

Empirical Distributions of Parameter Estimates. in Binary Logistic Regression Using Bootstrap

Empirical Distributions of Parameter Estimates. in Binary Logistic Regression Using Bootstrap Int. Journal of Math. Analyss, Vol. 8, 4, no. 5, 7-7 HIKARI Ltd, www.m-hkar.com http://dx.do.org/.988/jma.4.494 Emprcal Dstrbutons of Parameter Estmates n Bnary Logstc Regresson Usng Bootstrap Anwar Ftranto*

More information

Reducing Frame Rate for Object Tracking

Reducing Frame Rate for Object Tracking Reducng Frame Rate for Object Trackng Pavel Korshunov 1 and We Tsang Oo 2 1 Natonal Unversty of Sngapore, Sngapore 11977, pavelkor@comp.nus.edu.sg 2 Natonal Unversty of Sngapore, Sngapore 11977, oowt@comp.nus.edu.sg

More information

2x x l. Module 3: Element Properties Lecture 4: Lagrange and Serendipity Elements

2x x l. Module 3: Element Properties Lecture 4: Lagrange and Serendipity Elements Module 3: Element Propertes Lecture : Lagrange and Serendpty Elements 5 In last lecture note, the nterpolaton functons are derved on the bass of assumed polynomal from Pascal s trangle for the fled varable.

More information

A New Approach For the Ranking of Fuzzy Sets With Different Heights

A New Approach For the Ranking of Fuzzy Sets With Different Heights New pproach For the ankng of Fuzzy Sets Wth Dfferent Heghts Pushpnder Sngh School of Mathematcs Computer pplcatons Thapar Unversty, Patala-7 00 Inda pushpndersnl@gmalcom STCT ankng of fuzzy sets plays

More information

NUMERICAL SOLVING OPTIMAL CONTROL PROBLEMS BY THE METHOD OF VARIATIONS

NUMERICAL SOLVING OPTIMAL CONTROL PROBLEMS BY THE METHOD OF VARIATIONS ARPN Journal of Engneerng and Appled Scences 006-017 Asan Research Publshng Network (ARPN). All rghts reserved. NUMERICAL SOLVING OPTIMAL CONTROL PROBLEMS BY THE METHOD OF VARIATIONS Igor Grgoryev, Svetlana

More information

Angle-Independent 3D Reconstruction. Ji Zhang Mireille Boutin Daniel Aliaga

Angle-Independent 3D Reconstruction. Ji Zhang Mireille Boutin Daniel Aliaga Angle-Independent 3D Reconstructon J Zhang Mrelle Boutn Danel Alaga Goal: Structure from Moton To reconstruct the 3D geometry of a scene from a set of pctures (e.g. a move of the scene pont reconstructon

More information

Intra-Parametric Analysis of a Fuzzy MOLP

Intra-Parametric Analysis of a Fuzzy MOLP Intra-Parametrc Analyss of a Fuzzy MOLP a MIAO-LING WANG a Department of Industral Engneerng and Management a Mnghsn Insttute of Technology and Hsnchu Tawan, ROC b HSIAO-FAN WANG b Insttute of Industral

More information

Solving two-person zero-sum game by Matlab

Solving two-person zero-sum game by Matlab Appled Mechancs and Materals Onlne: 2011-02-02 ISSN: 1662-7482, Vols. 50-51, pp 262-265 do:10.4028/www.scentfc.net/amm.50-51.262 2011 Trans Tech Publcatons, Swtzerland Solvng two-person zero-sum game by

More information

Face Recognition Based on SVM and 2DPCA

Face Recognition Based on SVM and 2DPCA Vol. 4, o. 3, September, 2011 Face Recognton Based on SVM and 2DPCA Tha Hoang Le, Len Bu Faculty of Informaton Technology, HCMC Unversty of Scence Faculty of Informaton Scences and Engneerng, Unversty

More information

A Simple and Efficient Goal Programming Model for Computing of Fuzzy Linear Regression Parameters with Considering Outliers

A Simple and Efficient Goal Programming Model for Computing of Fuzzy Linear Regression Parameters with Considering Outliers 62626262621 Journal of Uncertan Systems Vol.5, No.1, pp.62-71, 211 Onlne at: www.us.org.u A Smple and Effcent Goal Programmng Model for Computng of Fuzzy Lnear Regresson Parameters wth Consderng Outlers

More information

A Semi-parametric Regression Model to Estimate Variability of NO 2

A Semi-parametric Regression Model to Estimate Variability of NO 2 Envronment and Polluton; Vol. 2, No. 1; 2013 ISSN 1927-0909 E-ISSN 1927-0917 Publshed by Canadan Center of Scence and Educaton A Sem-parametrc Regresson Model to Estmate Varablty of NO 2 Meczysław Szyszkowcz

More information