Mixed Linear System Estimation and Identification
48th IEEE Conference on Decision and Control, Shanghai, China, December 2009

A. Zymnis, S. Boyd, D. Gorinevsky

Abstract: We consider a mixed linear system model, with both continuous and discrete inputs and outputs, described by a coefficient matrix and a set of noise variances. When the discrete inputs and outputs are absent, the model reduces to the usual noise-corrupted linear system. With discrete inputs only, the model has been used in fault estimation, and with discrete outputs only, the system reduces to a probit model. We consider two fundamental problems: estimating the model input, given the model parameters and the model output; and identifying the model parameters, given a training set of input-output pairs. The estimation problem leads to a mixed Boolean-convex optimization problem, which can be solved exactly when the number of discrete variables is small enough. In other cases the estimation problem can be solved approximately, by solving a convex relaxation, rounding, and possibly carrying out a local optimization step. The identification problem is convex and so can be solved exactly. Adding l1 regularization to the identification problem allows us to trade off model fit and model parsimony. We illustrate the identification and estimation methods with a numerical example.

I. INTRODUCTION

A. System model

In this paper we introduce a model with multiple continuous (i.e., real valued) and discrete (i.e., Boolean) inputs and outputs. The continuous outputs are linearly related to the continuous and discrete inputs, and are corrupted by zero mean Gaussian noise of known variance. The discrete outputs indicate whether a linear combination of the continuous and discrete inputs, corrupted by zero mean Gaussian noise, is above or below a known threshold. As such, the model that we consider is a hybrid generalized linear model (GLM) [1], [2], [3].
The model has the form

    y = A_{cc} x + A_{dc} d + b_c + v_c,        (1)
    z = pos(A_{cd} x + A_{dd} d + b_d + v_d),   (2)

where x ∈ R^{n_c} is the continuous input, d ∈ {0, 1}^{n_d} is the discrete input, y ∈ R^{m_c} is the continuous measurement or output, z ∈ {0, 1}^{m_d} is the discrete measurement or output, v_c ∈ R^{m_c} is the continuous noise term, and v_d ∈ R^{m_d} is the discrete noise term. The function pos : R^{m_d} → {0, 1}^{m_d} is defined elementwise by

    pos(u)_i = { 1, u_i > 0
               { 0, u_i ≤ 0.

Note that for any positive diagonal matrix D, we have pos(Du) = pos(u). The noises are Gaussian, with all components independent, with

    (v_c)_i ~ N(0, σ_i^2), i = 1, ..., m_c,
    (v_d)_i ~ N(0, 1),     i = 1, ..., m_d.

(We can assume the discrete noise components have unit variance without loss of generality, using a positive diagonal scaling.) The model is defined by the matrices A_{cc}, A_{dc}, A_{cd}, and A_{dd}, the intercept terms b_c and b_d, and the continuous noise variances σ_i^2, i = 1, ..., m_c.

[Footnote: This material is based upon work supported by the Focus Center for Circuit & System Solutions (C2S2), by JPL award I291856, by the Precourt Institute on Energy Efficiency, by Army award W911NF-…, by NSF award …, by DARPA award N…C-2021, by NASA award NNX07AEIIA, by AFOSR award FA…, and by AFOSR award FA…. The authors are with the Department of Electrical Engineering, Stanford University (e-mail: {azymnis, boyd, gorin}@stanford.edu).]

The model (1)-(2) includes several well known and widely used special cases. For m_d = 0 and n_d = 0 we have a simple linear model with additive Gaussian noise. For m_c = 0 and m_d = 1, we obtain a probit model [1]. We mention several applications in Section I-D. In this paper we address two basic problems associated with this model: estimation and identification.

B. Estimation

We first look at the problem of estimating the model inputs x and d, given one or more output samples. The prior distribution on the inputs is specified by a density p(x) for x, which we assume is log-concave, and the probability p_j that d_j = 1. (We assume that x and all d_j are independent.)
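As a concrete illustration, drawing samples from the generative model (1)-(2) takes only a few lines of NumPy. All dimensions and parameter values below are arbitrary illustrative choices, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

n_c, n_d, m_c, m_d = 3, 2, 4, 2          # illustrative sizes (assumption)
A_cc = rng.standard_normal((m_c, n_c))
A_dc = rng.standard_normal((m_c, n_d))
A_cd = rng.standard_normal((m_d, n_c))
A_dd = rng.standard_normal((m_d, n_d))
b_c, b_d = rng.standard_normal(m_c), rng.standard_normal(m_d)
sigma = 0.1 * np.ones(m_c)               # continuous noise standard deviations

def pos(u):
    """Elementwise threshold: 1 if u_i > 0, else 0."""
    return (u > 0).astype(int)

def sample(x, d):
    """Draw one output pair (y, z) from model (1)-(2)."""
    v_c = sigma * rng.standard_normal(m_c)
    v_d = rng.standard_normal(m_d)       # unit variance, WLOG
    y = A_cc @ x + A_dc @ d + b_c + v_c
    z = pos(A_cd @ x + A_dd @ d + b_d + v_d)
    return y, z

x = rng.standard_normal(n_c)
d = rng.integers(0, 2, n_d)
y, z = sample(x, d)
```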
We will see that the maximum a posteriori (MAP) estimate of (x, d) is the solution of a mixed Boolean convex problem. If n_d is small enough, this problem can be solved exactly, by exhaustive enumeration of the possible values of d, or by a branch-and-bound or other global optimization method. For other cases, we propose to solve the optimization
problem approximately, by solving a convex relaxation and rounding the result, possibly followed by a local optimization step. We refer to the resulting estimate as the RMAP ("relaxed MAP") estimate. Numerical simulation suggests that the estimation performance of the RMAP estimator is quite similar to that of the MAP estimate; unlike the MAP estimate, however, it is computationally tractable even for very large problems [4].

C. Identification

We then look at the dual problem of fitting a model of the form (1)-(2), i.e., determining values for the model parameters, given a set of (training) data samples (x^{(1)}, d^{(1)}, y^{(1)}, z^{(1)}), ..., (x^{(K)}, d^{(K)}, y^{(K)}, z^{(K)}). We show that the associated log-likelihood function is a concave function of the model parameters, so maximum likelihood (ML) model fitting reduces to solving a convex optimization problem. To obtain a parsimonious model, i.e., one in which many of the entries of the parameter matrices are zero, we propose l1-regularized ML fitting (which is also a convex problem). By varying a regularization parameter, we can trade off model fit and model parsimony [5], [6], [7], [8].

D. Prior and related work

The model that we consider is in essence a hybrid generalized linear model (GLM). These models have been extensively used for explanatory modelling. Some good references on GLMs are [1], [2], [3]. There is a considerable amount of research that deals with special cases of our problem. For example, when m_c = 0 and n_c = 0, we get a model which is very similar to a digraph model commonly used in diagnostics, e.g., see [9], [10]. The formulation in this paper is an extension of the earlier work of the authors [4], [11], [12]. In [4] we considered a special case of the current problem, in the context of fault identification. The paper [11] considers a dynamic system with continuous and discrete states and continuous outputs. The upcoming paper [12] considers a special case of sparse parametric inputs and discrete outputs, where the goal is to compute the sparsity pattern of the inputs.
II. ESTIMATION

In this section we consider the problem of estimating the most probable values of (x, d), given measurements (y, z). We first derive the log posterior probability of (x, d) given (y, z) and show that it is jointly concave in x and d (with d relaxed to take values in [0, 1]). Thus the problem of computing the maximum a posteriori (MAP) estimate of x and d is a mixed integer convex problem, i.e., a convex optimization problem with the additional constraint that some of the variables take values in {0, 1}. We then present an efficient heuristic for solving this problem approximately, based on a convex relaxation of the combinatorial MAP problem. Our analysis follows closely the previous work of the authors in fault detection [4]. The method that we present is readily extended to the case when we have multiple measurements (y^{(1)}, z^{(1)}), ..., (y^{(M)}, z^{(M)}) for the same input (x, d): we just have to stack all these measurements and augment the system model equations appropriately.

A. Maximum a posteriori estimation

a) Log posterior: From Bayes' rule, the posterior probability (density) of x and d given y and z is

    p(x, d | y, z) ∝ ∏_{i=1}^{m_c} p(y_i | x, d) ∏_{i=1}^{m_d} p(z_i | x, d) p(d) p(x).

Taking logarithms, we have

    log p(x, d | y, z) = l(x, d) + C,

where C is a constant and

    l(x, d) = l_mc(x, d) + l_md(x, d) + l_pd(d) + l_pc(x),   (3)

with the terms described below. The posterior contribution due to the continuous measurements is

    l_mc(x, d) = -∑_{i=1}^{m_c} (1 / (2 σ_i^2)) (y_i - (A_{cc} x + A_{dc} d + b_c)_i)^2.

The posterior contribution due to the discrete measurements is

    l_md(x, d) = ∑_{i=1}^{m_d} z_i log Φ((A_{cd} x + A_{dd} d + b_d)_i)
               + ∑_{i=1}^{m_d} (1 - z_i) log Φ(-(A_{cd} x + A_{dd} d + b_d)_i),   (4)

where Φ is the cumulative distribution function of a standard Gaussian. The discrete variable prior term is l_pd(d) = λ^T d, where λ_j = log(p_j / (1 - p_j)). The continuous variable prior term is l_pc(x) = log p(x). The log posterior (3) is jointly concave in x and d, when d is relaxed to take values in [0, 1]^{n_d}.
Indeed, each of the terms is concave in x and d: l_mc is a concave quadratic in x and d, l_pd is a linear function of d, and log p(x) is concave by assumption. Concavity of l_md follows from log-concavity of Φ; see, e.g., [13, §3.5.2]. Figure 1 shows a plot of log Φ(x) versus x for x ranging between -5 and 5.
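To make the terms of (3) concrete, the log posterior l(x, d) can be evaluated directly. The sketch below assumes a standard Gaussian prior density for x (one convenient log-concave choice, not mandated by the paper) and uses SciPy's numerically stable log Φ:

```python
import numpy as np
from scipy.stats import norm

def log_posterior(x, d, y, z, A_cc, A_dc, A_cd, A_dd, b_c, b_d, sigma, p):
    """Evaluate l(x, d) of (3) up to the constant C, assuming a
    standard Gaussian prior density p(x) for illustration."""
    u_c = A_cc @ x + A_dc @ d + b_c
    u_d = A_cd @ x + A_dd @ d + b_d
    # continuous measurement term l_mc
    l_mc = -np.sum((y - u_c) ** 2 / (2 * sigma ** 2))
    # discrete measurement term l_md, via the stable log CDF
    l_md = np.sum(z * norm.logcdf(u_d) + (1 - z) * norm.logcdf(-u_d))
    # discrete prior term l_pd with logits lambda_j = log(p_j / (1 - p_j))
    lam = np.log(p / (1 - p))
    l_pd = lam @ d
    # continuous prior term l_pc: log N(0, I) up to a constant
    l_pc = -0.5 * x @ x
    return l_mc + l_md + l_pd + l_pc
```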
Fig. 1: Plot of log Φ(x) versus x.

b) MAP estimation: The problem of estimating the most probable input variables x and d, given the outputs y and z, can be cast as the following optimization problem:

    maximize   l(x, d)
    subject to d ∈ {0, 1}^{n_d},   (5)

with variables x ∈ R^{n_c} and d. This is a mixed integer convex problem. One straightforward method for solving it is to enumerate all 2^{n_d} possible values for d, and to find the optimum x in each case, which is tractable, since for fixed d the problem is convex in x [13]. This approach is not practical for n_d larger than around 15 or so, or smaller if the other dimensions are large. A branch and bound method, or other global optimization technique, can be used to (possibly) speed up computation of the globally optimal solution [14], [15]. But the worst case complexity of any method that computes the global solution is exponential in n_d. For this reason we need to consider heuristics for solving problem (5) approximately.

B. Relaxed MAP estimation

In this section we describe a heuristic for approximately solving the MAP problem (5). Our heuristic is based on replacing the hard constraints d_j ∈ {0, 1} with soft constraints d_j ∈ [0, 1]. This results in a convex optimization problem that we can solve efficiently. We follow this by rounding and (possibly) a simple local optimization method to improve our estimate.

c) Linear relaxation: We relax problem (5) to the following optimization problem

    maximize   l(x, d)
    subject to 0 ≤ d ≤ 1,   (6)

with variables x ∈ R^{n_c} and d ∈ R^{n_d}. This is a convex optimization problem, since it involves maximizing a concave function over a convex set. We can efficiently solve this problem in many ways, e.g., via interior-point methods [16], [13]. The complexity of such methods can be shown to be cubic in n_c + n_d (assuming a fixed number of iterations of an interior-point method).
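A minimal sketch of solving the relaxed problem (6) numerically, assuming (for illustration) a standard Gaussian prior on x. SciPy's box-constrained L-BFGS-B stands in here for the interior-point methods cited in the text, since it handles the constraint 0 ≤ d ≤ 1 directly; the helper name and signature are ours, not the paper's:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def relaxed_map(y, z, A_cc, A_dc, A_cd, A_dd, b_c, b_d, sigma, p):
    """Solve the relaxed MAP problem (6): maximize l(x, d)
    subject to 0 <= d <= 1, with a standard Gaussian prior on x."""
    n_c, n_d = A_cc.shape[1], A_dc.shape[1]
    lam = np.log(p / (1 - p))

    def neg_l(v):
        x, d = v[:n_c], v[n_c:]
        u_c = A_cc @ x + A_dc @ d + b_c
        u_d = A_cd @ x + A_dd @ d + b_d
        l = (-np.sum((y - u_c) ** 2 / (2 * sigma ** 2))
             + np.sum(z * norm.logcdf(u_d) + (1 - z) * norm.logcdf(-u_d))
             + lam @ d - 0.5 * x @ x)
        return -l

    # x is free; each d_j is boxed into [0, 1]
    bounds = [(None, None)] * n_c + [(0.0, 1.0)] * n_d
    v0 = np.concatenate([np.zeros(n_c), 0.5 * np.ones(n_d)])
    res = minimize(neg_l, v0, method="L-BFGS-B", bounds=bounds)
    return res.x[:n_c], res.x[n_c:]
```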
Since the feasible set for the relaxed MAP problem (6) contains the feasible set of the MAP problem (5), the optimal value of the relaxed MAP problem, which we denote l_ub, gives an upper bound on the optimal value of the MAP problem. Let (x*, d*) be an optimal point for the relaxed MAP problem (6), so we have l_ub = l(x*, d*) ≥ l(x, d) for any Boolean d. If d* is also Boolean, i.e., d*_j ∈ {0, 1} for all j, we conclude that d* is in fact optimal for the MAP problem. In other words: when a solution to the relaxed MAP problem turns out to have Boolean entries, it is optimal for the MAP problem. In general, of course, this does not happen; at least some values of d*_j will lie between 0 and 1.

d) Rounding: Let (x*, d*) denote the optimal point of problem (6). We refer to d* as a soft decision, since its components can be strictly between 0 and 1. The next step is to round the soft decision d* to obtain a valid Boolean solution (or hard decision) for d. Let θ ∈ (0, 1) and set

    d̂ = pos(d* - θ·1),

where 1 denotes the vector of ones. To create d̂, we simply round all entries d*_j smaller than the threshold θ to zero, and all others to one. Thus θ is a threshold for guessing that a discrete input variable is 1, based on the relaxed MAP solution d*. As θ varies from 0 to 1, this method generates up to n_d different estimates d̂, as each entry of d* falls below the threshold. We can efficiently find them all by sorting the entries of d*, and setting the values of d̂_j to one in the order of increasing d*_j. We evaluate the log posterior for each of these (or a subset) by solving the optimization problem

    maximize l(x, d̂),   (7)

with variable x ∈ R^{n_c}. This is an unconstrained convex problem that can also be solved efficiently, again with cubic complexity in n_c. The RMAP continuous variable estimate x̂ is obtained as the maximizer of (7) corresponding to the best obtained estimate d̂.

e) Local optimization: Further improvement in our estimate can sometimes be obtained by a local optimization method. We describe here the simplest possible such method. We initialize d̂ as the one which results in the largest value of l(x, d) after rounding. We then
cycle through j = 1, ..., n_d, at step j replacing d̂_j with 1 - d̂_j. If this leads to an increase in the optimal value of problem (7), we accept the change and continue. If (as is usually the case) flipping the jth bit results in a decrease in l, we go on to the next index. We continue until we have rejected changes in all entries of d̂. (At this point we can be sure that d̂ is 1-OPT, which means that no change in one entry will improve the objective.) Numerical experiments show that this local optimization method often has no effect, which means that the rounded solution is 1-OPT. In some cases, however, it can lead to a modest increase in l.

f) Performance of RMAP: The performance of our estimate of (x, d) should be judged by (for example) the mean-square error in estimating x, and the probability of making errors in estimating d (which could be further broken down into false positive and false negative error rates). Numerical examples show that RMAP has very similar performance to MAP, but has the advantage of tractability. This can be partially explained as follows. When the estimation problem is hard, for example when the noise levels are high, no estimation method (and in particular, neither MAP nor RMAP) can do a good job at estimating x and d. When the estimation problem is easy, for example when the noise levels are low, even simple estimation methods (including RMAP) can do a good job at estimating x and d. So it is only problems in between hard and easy where we could possibly see a significant difference in estimation performance between MAP and RMAP. In this region, however, we observe from numerical experiments that MAP and RMAP achieve very similar performance.

III. IDENTIFICATION

In Section II we addressed the problem of estimating (x, d) given measurements (y, z) when the system model is known. In this section we look at the dual problem of fitting a model of the form (1)-(2) to given data, assuming that the continuous noise variances are known.
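The rounding sweep and the 1-OPT bit-flip search of Section II-B can be sketched independently of the particular objective. Here `score` is a caller-supplied stand-in for the optimal value of problem (7); the function names are ours, for illustration only:

```python
import numpy as np

def round_candidates(d_star):
    """Sweep the threshold theta over (0, 1): sorting the soft decision
    d* yields at most n_d + 1 distinct hard decisions, obtained by
    setting entries to one in order of decreasing d*_j."""
    cands = [np.zeros(len(d_star), dtype=int)]
    hard = np.zeros(len(d_star), dtype=int)
    for j in np.argsort(-d_star):
        hard = hard.copy()
        hard[j] = 1
        cands.append(hard)
    return cands

def one_opt(d_hat, score):
    """Local optimization: flip any single bit that increases the score;
    stop after a full pass with no accepted flip, at which point d_hat
    is 1-OPT (no single-entry change improves the objective)."""
    d = np.array(d_hat, dtype=int)
    best = score(d)
    improved = True
    while improved:
        improved = False
        for j in range(len(d)):
            d[j] ^= 1
            s = score(d)
            if s > best:
                best, improved = s, True   # accept the flip
            else:
                d[j] ^= 1                  # reject: flip back
    return d

def rmap_discrete(d_star, score):
    """RMAP hard decision: best rounded candidate, then 1-OPT."""
    best = max(round_candidates(d_star), key=score)
    return one_opt(best, score)
```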
We show that the log likelihood of the model parameters given the measurements is a concave function, so we can solve the maximum likelihood (ML) problem efficiently. We first observe that since all measurements are independent of each other, we can separately identify the model parameters that correspond to each measurement. The complexity of the resulting ML identification technique is thus linear in the total number of measurements. Finally, we present a simple technique, variously known as compressed sensing [5], [6], [7], the Lasso [17], [18], sparse signal recovery [19], and basis pursuit [20], that can be used to identify parsimonious models that fit the data well. This involves using the l1-norm of the parameter vector of a given linear model as a surrogate for the model sparsity.

A. Continuous parameter identification

Suppose that we are given samples of the form (x^{(j)}, d^{(j)}, y^{(j)}) for j = 1, ..., K. Let a_{cc,i}, a_{dc,i} denote the ith rows of A_{cc}, A_{dc} respectively. Since the continuous noise terms v_c are Gaussian, the ML estimate of (a_{cc,i}, a_{dc,i}, b_{c,i}) given the data is the one that maximizes

    l_{c,i}(a_{cc,i}, a_{dc,i}, b_{c,i}) = -∑_{j=1}^{K} (y_i^{(j)} - a_{cc,i}^T x^{(j)} - a_{dc,i}^T d^{(j)} - b_{c,i})^2.

We can evaluate the maximum likelihood estimates of these model parameters efficiently (via least squares), if σ_i is known. In order to estimate a parsimonious model, as well as σ_i, we propose solving the following problem

    maximize l_{c,i}(a_{cc,i}, a_{dc,i}, b_{c,i}) - μ_{c,i} (||a_{cc,i}||_1 + ||a_{dc,i}||_1),   (8)

which is a convex optimization problem, for a fixed parameter μ_{c,i} > 0. Having solved problem (8), we can then estimate the noise variance σ_i^2 as the variance of the measurement residuals y_i^{(j)} - a_{cc,i}^T x^{(j)} - a_{dc,i}^T d^{(j)} - b_{c,i}.

B. Discrete parameter identification

Now suppose that we are given samples of the form (x^{(j)}, d^{(j)}, z^{(j)}) for j = 1, ..., K. Let a_{cd,i}, a_{dd,i} denote the ith rows of A_{cd}, A_{dd} respectively.
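The l1-regularized least-squares subproblem (8) can be attacked with a simple first-order (proximal gradient) method of the kind cited in Section III-C. The sketch below drops the intercept and the 1/(2σ_i^2) scaling for brevity; it is an illustrative solver, not the one used in the paper:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def lasso_ista(A, y, mu, n_iter=2000):
    """Proximal gradient (ISTA) for
        minimize 0.5 * ||A w - y||^2 + mu * ||w||_1,
    a simplified form of the per-sensor problem (8)."""
    L = np.linalg.norm(A, 2) ** 2       # Lipschitz constant of the gradient
    w = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ w - y)
        w = soft_threshold(w - grad / L, mu / L)
    return w
```

Each iteration takes a gradient step on the smooth least-squares term and then applies the soft-thresholding proximal step for the l1 term, which is what drives many coefficients exactly to zero.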
The log likelihood of (a_{cd,i}, a_{dd,i}, b_{d,i}) given the data is

    l_{d,i}(a_{cd,i}, a_{dd,i}, b_{d,i}) = ∑_{j=1}^{K} [ z_i^{(j)} log Φ(a_{cd,i}^T x^{(j)} + a_{dd,i}^T d^{(j)} + b_{d,i})
                                         + (1 - z_i^{(j)}) log Φ(-a_{cd,i}^T x^{(j)} - a_{dd,i}^T d^{(j)} - b_{d,i}) ],

which is a concave function of (a_{cd,i}, a_{dd,i}, b_{d,i}). We can thus evaluate the ML estimates of the discrete model parameters efficiently, for example using Newton's method. This is equivalent to solving a probit regression problem. In order to estimate a parsimonious model we propose solving the following problem

    maximize l_{d,i}(a_{cd,i}, a_{dd,i}, b_{d,i}) - μ_{d,i} (||a_{cd,i}||_1 + ||a_{dd,i}||_1),   (9)

which is a convex optimization problem, for a fixed parameter μ_{d,i} > 0.

C. Computational complexity

Problems (8) and (9) can be solved efficiently by a variety of methods, such as interior-point methods (in a way similar to [21], [8]) and first order methods, e.g., [22]. In either case the complexity of solving each problem is cubic in n_c + n_d and linear in K. Thus the overall complexity of l1-regularized ML system identification is O(K (m_c + m_d) (n_c + n_d)^3).
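The unregularized discrete-parameter fit is ordinary probit regression, which the text notes can be solved with Newton's method. A minimal SciPy sketch (BFGS in place of Newton; the helper name and signature are ours, not the paper's):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def probit_ml(X, z):
    """Maximum likelihood probit regression: maximize the concave
    log likelihood sum_j [ z_j log Phi(x_j^T w + b)
                         + (1 - z_j) log Phi(-(x_j^T w + b)) ]."""
    K, n = X.shape

    def neg_ll(theta):
        w, b = theta[:n], theta[n]
        u = X @ w + b
        # norm.logcdf is a numerically stable log Phi
        return -np.sum(z * norm.logcdf(u) + (1 - z) * norm.logcdf(-u))

    res = minimize(neg_ll, np.zeros(n + 1), method="BFGS")
    return res.x[:n], res.x[n]
```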
D. Choice of regularization parameter

The regularization parameters μ_{c,i} and μ_{d,i} control the tradeoff between data fit (as measured by the log likelihood) and model sparsity (as measured by the l1 norm). In order to keep our modelling procedure as flexible as possible, we use a different regularization parameter for each continuous and discrete sensor. We use cross validation to choose each of the regularization parameters μ_{c,i} and μ_{d,i}. We divide our available data into a training set and a test set. For each continuous sensor i = 1, ..., m_c and for each value of μ, we solve problem (8) for the training set and measure the average square residual of measurement i on the test set. We then choose as μ_{c,i} the value of μ that gives the smallest residual. We repeat this process for the discrete sensors, where instead of square residual we use the average error rate in the discrete sensor measurement. This is the simplest possible way of fitting the regularization parameters. There are various other methods, such as minimizing the Akaike information criterion (AIC), or minimizing the Bayesian information criterion (BIC). For a detailed description of these methods see [3, §7].

Fig. 2: Plot of e_z as a function of μ on the validation set for discrete sensor 5.

IV. NUMERICAL EXAMPLE

In this section we present the results of applying our estimation and identification methods to a small artificially generated example. Specifically, we consider a system with n_c = n_d = 10, m_c = 18, and m_d = 16. We draw the elements of all system matrices randomly, such that each of the matrices A_{cc}, A_{dc}, A_{cd}, and A_{dd} is 10% sparse. We draw the nonzero entries of the A_{cc} and A_{dc} matrices from a N(0, 1) distribution, and the nonzero entries of the A_{cd} and A_{dd} matrices from a N(0, 1) distribution. The entries of b_c and b_d are drawn from a N(0, 1) and a N(0, 100) distribution respectively. We set σ_i = 0.01 for all i. Each element of the input x is generated according to a N(0, 1) prior. The prior probability of d_j being equal to 1 is 0.2 for all j.

We generate three sets of 500 samples: a training set, which we use to fit our estimated model, a validation set, which we use to select the best value of the regularization parameter μ for each sensor, and a test set, on which we judge the accuracy of our resulting model. We look at 30 candidate values of μ, logarithmically spaced in the range (10^{-2}, 10^{3}). For each value of μ, we use the method described in Section III to fit a mixed linear model to the given training set. For each continuous sensor, we choose the value of μ which achieves the minimum relative root mean square error r_y on the validation set, defined as

    r_y = ||y - ŷ||_2 / ||y||_2,

where ŷ is the output predicted by the estimated model, given the true input. For each discrete sensor we choose the value of μ that achieves the minimum average error rate e_z on the predicted value of z for the validation set. As an example, figure 2 shows the plot of e_z versus μ for discrete sensor 5. We then use this model estimate to predict the inputs (x, d) for the outputs (y, z) for the test set, using the relaxed MAP method described in Section II. We compute the error rate in our estimate of d (which we call e_d), as well as the average relative root-mean-square (RMS) error between our estimate and the true value of x, defined as

    r_x = ||x - x̂||_2 / ||x||_2.

Our estimated model yields an error rate e_d of about … on this data, and a relative RMS error in x of about …. In contrast, using the true model for estimation yields an error rate e_d of about … and a relative RMS error in x of about …. We thus see that our system identification method does a reasonable job at fitting such a model to the given data. The modelling and estimation performance of our estimated model are shown in tables I and II respectively. We judge the modelling performance of the model based on how well it can predict the outputs from the inputs. As we can see from table I, the model does a good job at this task, given that the values of r_y and e_z for the test set are of the same order of magnitude as the same values for the training and validation sets. Furthermore, from table II, we see that the model also generalizes well for estimation. The values of r_x and e_d for the test set are of the same order as the ones for the training and validation sets.

TABLE I: Modelling performance of estimated model.

          Train   Validation   Test
    r_y     …         …          …
    e_z     …         …          …

TABLE II: Estimation performance of estimated model.

          Train   Validation   Test
    r_x     …         …          …
    e_d     …         …          …

V. CONCLUSIONS

We have introduced a class of mixed linear systems that includes the standard linear model and the probit model as special cases. We have presented a simple heuristic for estimating the input given the output of such a system, based on a convex relaxation of a combinatorial MAP problem. We have also presented a simple heuristic that uses l1-regularized ML to identify such a model given a set of data points. We have shown that this method performs well in practice through a numerical example.

REFERENCES

[1] P. McCullagh and J. Nelder, Generalized Linear Models. Chapman & Hall.
[2] T. Hastie and R. Tibshirani, Generalized additive models.
[3] T. Hastie, R. Tibshirani, and J. Friedman, The Elements of Statistical Learning: Data Mining, Inference, and Prediction. Springer.
[4] A. Zymnis, S. Boyd, and D. Gorinevsky, "Relaxed maximum a posteriori fault identification," Signal Processing, vol. 89, no. 6.
[5] D. Donoho, "Compressed sensing," IEEE Transactions on Information Theory, vol. 52, no. 4.
[6] E. Candès, J. Romberg, and T. Tao, "Stable signal recovery from incomplete and inaccurate measurements," Communications on Pure and Applied Mathematics, vol. 59, no. 8.
[7] E. Candès, J. Romberg, and T. Tao, "Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information," IEEE Transactions on Information Theory, vol. 52, no. 2.
[8] K. Koh, S.-J. Kim, and S. Boyd, "An interior point method for large-scale l1-regularized logistic regression," Journal of Machine Learning Research, vol. 8.
[9] I. Sacks, "Digraph matrix analysis," IEEE Transactions on Reliability, vol. 34, no. 5.
[10] S. Deb, K. Pattipati, V. Raghavan, M. Shakeri, and R. Shrestha, "Multi-signal flow graphs: a novel approach for system testability analysis and fault diagnosis," IEEE Aerospace and Electronic Systems Magazine, vol. 10, no. 5.
[11] A. Zymnis, S. Boyd, and D. Gorinevsky, "Mixed state estimation for a linear Gaussian Markov model," in 47th IEEE Conference on Decision and Control, 2008.
[12] A. Zymnis, S. Boyd, and E. Candès, "Compressed sensing with quantized measurements," 2009, in preparation.
[13] S. Boyd and L. Vandenberghe, Convex Optimization. Cambridge University Press.
[14] E. Lawler and D. Wood, "Branch-and-bound methods: A survey," Operations Research, vol. 14.
[15] R. Moore, "Global optimization to prescribed accuracy," Computers and Mathematics with Applications, vol. 21, no. 6-7.
[16] J. Nocedal and S. Wright, Numerical Optimization. Springer.
[17] R. Tibshirani, "Regression shrinkage and selection via the Lasso," Journal of the Royal Statistical Society, vol. 58, no. 1.
[18] B. Efron, T. Hastie, I. Johnstone, and R. Tibshirani, "Least angle regression," Annals of Statistics.
[19] J. Tropp, "Just relax: Convex programming methods for identifying sparse signals in noise," IEEE Transactions on Information Theory, vol. 52, no. 3.
[20] S. Chen, D. Donoho, and M. Saunders, "Atomic decomposition by basis pursuit," SIAM Review.
[21] S.-J. Kim, K. Koh, M. Lustig, S. Boyd, and D. Gorinevsky, "An interior-point method for large-scale l1-regularized least squares," IEEE Journal of Selected Topics in Signal Processing, vol. 1, no. 4.
[22] E. Hale, W. Yin, and Y. Zhang, "Fixed-point continuation for l1-minimization: Methodology and convergence," SIAM Journal on Optimization, vol. 19.
Proc. VIIth Dgtal Image Computng: Technques and Applcatons, Sun C., Talbot H., Ourseln S. and Adraansen T. (Eds.), 0- Dec. 003, Sydney A Robust Method for Estmatng the Fundamental Matrx C.L. Feng and Y.S.
More informationParallel Numerics. 1 Preconditioning & Iterative Solvers (From 2016)
Technsche Unverstät München WSe 6/7 Insttut für Informatk Prof. Dr. Thomas Huckle Dpl.-Math. Benjamn Uekermann Parallel Numercs Exercse : Prevous Exam Questons Precondtonng & Iteratve Solvers (From 6)
More informationUser Authentication Based On Behavioral Mouse Dynamics Biometrics
User Authentcaton Based On Behavoral Mouse Dynamcs Bometrcs Chee-Hyung Yoon Danel Donghyun Km Department of Computer Scence Department of Computer Scence Stanford Unversty Stanford Unversty Stanford, CA
More informationCompiler Design. Spring Register Allocation. Sample Exercises and Solutions. Prof. Pedro C. Diniz
Compler Desgn Sprng 2014 Regster Allocaton Sample Exercses and Solutons Prof. Pedro C. Dnz USC / Informaton Scences Insttute 4676 Admralty Way, Sute 1001 Marna del Rey, Calforna 90292 pedro@s.edu Regster
More informationContent Based Image Retrieval Using 2-D Discrete Wavelet with Texture Feature with Different Classifiers
IOSR Journal of Electroncs and Communcaton Engneerng (IOSR-JECE) e-issn: 78-834,p- ISSN: 78-8735.Volume 9, Issue, Ver. IV (Mar - Apr. 04), PP 0-07 Content Based Image Retreval Usng -D Dscrete Wavelet wth
More informationIntra-Parametric Analysis of a Fuzzy MOLP
Intra-Parametrc Analyss of a Fuzzy MOLP a MIAO-LING WANG a Department of Industral Engneerng and Management a Mnghsn Insttute of Technology and Hsnchu Tawan, ROC b HSIAO-FAN WANG b Insttute of Industral
More informationA Statistical Model Selection Strategy Applied to Neural Networks
A Statstcal Model Selecton Strategy Appled to Neural Networks Joaquín Pzarro Elsa Guerrero Pedro L. Galndo joaqun.pzarro@uca.es elsa.guerrero@uca.es pedro.galndo@uca.es Dpto Lenguajes y Sstemas Informátcos
More informationLECTURE : MANIFOLD LEARNING
LECTURE : MANIFOLD LEARNING Rta Osadchy Some sldes are due to L.Saul, V. C. Raykar, N. Verma Topcs PCA MDS IsoMap LLE EgenMaps Done! Dmensonalty Reducton Data representaton Inputs are real-valued vectors
More informationLearning to Project in Multi-Objective Binary Linear Programming
Learnng to Project n Mult-Objectve Bnary Lnear Programmng Alvaro Serra-Altamranda Department of Industral and Management System Engneerng, Unversty of South Florda, Tampa, FL, 33620 USA, amserra@mal.usf.edu,
More informationA MOVING MESH APPROACH FOR SIMULATION BUDGET ALLOCATION ON CONTINUOUS DOMAINS
Proceedngs of the Wnter Smulaton Conference M E Kuhl, N M Steger, F B Armstrong, and J A Jones, eds A MOVING MESH APPROACH FOR SIMULATION BUDGET ALLOCATION ON CONTINUOUS DOMAINS Mark W Brantley Chun-Hung
More informationEdge Detection in Noisy Images Using the Support Vector Machines
Edge Detecton n Nosy Images Usng the Support Vector Machnes Hlaro Gómez-Moreno, Saturnno Maldonado-Bascón, Francsco López-Ferreras Sgnal Theory and Communcatons Department. Unversty of Alcalá Crta. Madrd-Barcelona
More informationClassifier Selection Based on Data Complexity Measures *
Classfer Selecton Based on Data Complexty Measures * Edth Hernández-Reyes, J.A. Carrasco-Ochoa, and J.Fco. Martínez-Trndad Natonal Insttute for Astrophyscs, Optcs and Electroncs, Lus Enrque Erro No.1 Sta.
More informationProgramming in Fortran 90 : 2017/2018
Programmng n Fortran 90 : 2017/2018 Programmng n Fortran 90 : 2017/2018 Exercse 1 : Evaluaton of functon dependng on nput Wrte a program who evaluate the functon f (x,y) for any two user specfed values
More informationSimulation: Solving Dynamic Models ABE 5646 Week 11 Chapter 2, Spring 2010
Smulaton: Solvng Dynamc Models ABE 5646 Week Chapter 2, Sprng 200 Week Descrpton Readng Materal Mar 5- Mar 9 Evaluatng [Crop] Models Comparng a model wth data - Graphcal, errors - Measures of agreement
More informationBiostatistics 615/815
The E-M Algorthm Bostatstcs 615/815 Lecture 17 Last Lecture: The Smplex Method General method for optmzaton Makes few assumptons about functon Crawls towards mnmum Some recommendatons Multple startng ponts
More informationEmpirical Distributions of Parameter Estimates. in Binary Logistic Regression Using Bootstrap
Int. Journal of Math. Analyss, Vol. 8, 4, no. 5, 7-7 HIKARI Ltd, www.m-hkar.com http://dx.do.org/.988/jma.4.494 Emprcal Dstrbutons of Parameter Estmates n Bnary Logstc Regresson Usng Bootstrap Anwar Ftranto*
More informationTsinghua University at TAC 2009: Summarizing Multi-documents by Information Distance
Tsnghua Unversty at TAC 2009: Summarzng Mult-documents by Informaton Dstance Chong Long, Mnle Huang, Xaoyan Zhu State Key Laboratory of Intellgent Technology and Systems, Tsnghua Natonal Laboratory for
More informationCourse Introduction. Algorithm 8/31/2017. COSC 320 Advanced Data Structures and Algorithms. COSC 320 Advanced Data Structures and Algorithms
Course Introducton Course Topcs Exams, abs, Proects A quc loo at a few algorthms 1 Advanced Data Structures and Algorthms Descrpton: We are gong to dscuss algorthm complexty analyss, algorthm desgn technques
More informationComplex System Reliability Evaluation using Support Vector Machine for Incomplete Data-set
Internatonal Journal of Performablty Engneerng, Vol. 7, No. 1, January 2010, pp.32-42. RAMS Consultants Prnted n Inda Complex System Relablty Evaluaton usng Support Vector Machne for Incomplete Data-set
More informationMachine Learning 9. week
Machne Learnng 9. week Mappng Concept Radal Bass Functons (RBF) RBF Networks 1 Mappng It s probably the best scenaro for the classfcaton of two dataset s to separate them lnearly. As you see n the below
More informationHelsinki University Of Technology, Systems Analysis Laboratory Mat Independent research projects in applied mathematics (3 cr)
Helsnk Unversty Of Technology, Systems Analyss Laboratory Mat-2.08 Independent research projects n appled mathematcs (3 cr) "! #$&% Antt Laukkanen 506 R ajlaukka@cc.hut.f 2 Introducton...3 2 Multattrbute
More informationThe Research of Support Vector Machine in Agricultural Data Classification
The Research of Support Vector Machne n Agrcultural Data Classfcaton Le Sh, Qguo Duan, Xnmng Ma, Me Weng College of Informaton and Management Scence, HeNan Agrcultural Unversty, Zhengzhou 45000 Chna Zhengzhou
More informationMeta-heuristics for Multidimensional Knapsack Problems
2012 4th Internatonal Conference on Computer Research and Development IPCSIT vol.39 (2012) (2012) IACSIT Press, Sngapore Meta-heurstcs for Multdmensonal Knapsack Problems Zhbao Man + Computer Scence Department,
More information2x x l. Module 3: Element Properties Lecture 4: Lagrange and Serendipity Elements
Module 3: Element Propertes Lecture : Lagrange and Serendpty Elements 5 In last lecture note, the nterpolaton functons are derved on the bass of assumed polynomal from Pascal s trangle for the fled varable.
More informationMachine Learning. K-means Algorithm
Macne Learnng CS 6375 --- Sprng 2015 Gaussan Mture Model GMM pectaton Mamzaton M Acknowledgement: some sldes adopted from Crstoper Bsop Vncent Ng. 1 K-means Algortm Specal case of M Goal: represent a data
More informationA Simple and Efficient Goal Programming Model for Computing of Fuzzy Linear Regression Parameters with Considering Outliers
62626262621 Journal of Uncertan Systems Vol.5, No.1, pp.62-71, 211 Onlne at: www.us.org.u A Smple and Effcent Goal Programmng Model for Computng of Fuzzy Lnear Regresson Parameters wth Consderng Outlers
More informationAPPLICATION OF MULTIVARIATE LOSS FUNCTION FOR ASSESSMENT OF THE QUALITY OF TECHNOLOGICAL PROCESS MANAGEMENT
3. - 5. 5., Brno, Czech Republc, EU APPLICATION OF MULTIVARIATE LOSS FUNCTION FOR ASSESSMENT OF THE QUALITY OF TECHNOLOGICAL PROCESS MANAGEMENT Abstract Josef TOŠENOVSKÝ ) Lenka MONSPORTOVÁ ) Flp TOŠENOVSKÝ
More informationImprovement of Spatial Resolution Using BlockMatching Based Motion Estimation and Frame. Integration
Improvement of Spatal Resoluton Usng BlockMatchng Based Moton Estmaton and Frame Integraton Danya Suga and Takayuk Hamamoto Graduate School of Engneerng, Tokyo Unversty of Scence, 6-3-1, Nuku, Katsuska-ku,
More informationQuality Improvement Algorithm for Tetrahedral Mesh Based on Optimal Delaunay Triangulation
Intellgent Informaton Management, 013, 5, 191-195 Publshed Onlne November 013 (http://www.scrp.org/journal/m) http://dx.do.org/10.36/m.013.5601 Qualty Improvement Algorthm for Tetrahedral Mesh Based on
More informationParallel matrix-vector multiplication
Appendx A Parallel matrx-vector multplcaton The reduced transton matrx of the three-dmensonal cage model for gel electrophoress, descrbed n secton 3.2, becomes excessvely large for polymer lengths more
More informationA Robust LS-SVM Regression
PROCEEDIGS OF WORLD ACADEMY OF SCIECE, EGIEERIG AD ECHOLOGY VOLUME 7 AUGUS 5 ISS 37- A Robust LS-SVM Regresson József Valyon, and Gábor Horváth Abstract In comparson to the orgnal SVM, whch nvolves a quadratc
More informationFEATURE EXTRACTION. Dr. K.Vijayarekha. Associate Dean School of Electrical and Electronics Engineering SASTRA University, Thanjavur
FEATURE EXTRACTION Dr. K.Vjayarekha Assocate Dean School of Electrcal and Electroncs Engneerng SASTRA Unversty, Thanjavur613 41 Jont Intatve of IITs and IISc Funded by MHRD Page 1 of 8 Table of Contents
More informationAdaptive Transfer Learning
Adaptve Transfer Learnng Bn Cao, Snno Jaln Pan, Yu Zhang, Dt-Yan Yeung, Qang Yang Hong Kong Unversty of Scence and Technology Clear Water Bay, Kowloon, Hong Kong {caobn,snnopan,zhangyu,dyyeung,qyang}@cse.ust.hk
More informationOptimization Methods: Integer Programming Integer Linear Programming 1. Module 7 Lecture Notes 1. Integer Linear Programming
Optzaton Methods: Integer Prograng Integer Lnear Prograng Module Lecture Notes Integer Lnear Prograng Introducton In all the prevous lectures n lnear prograng dscussed so far, the desgn varables consdered
More informationWavefront Reconstructor
A Dstrbuted Smplex B-Splne Based Wavefront Reconstructor Coen de Vsser and Mchel Verhaegen 14-12-201212 2012 Delft Unversty of Technology Contents Introducton Wavefront reconstructon usng Smplex B-Splnes
More informationThe Greedy Method. Outline and Reading. Change Money Problem. Greedy Algorithms. Applications of the Greedy Strategy. The Greedy Method Technique
//00 :0 AM Outlne and Readng The Greedy Method The Greedy Method Technque (secton.) Fractonal Knapsack Problem (secton..) Task Schedulng (secton..) Mnmum Spannng Trees (secton.) Change Money Problem Greedy
More informationClassifying Acoustic Transient Signals Using Artificial Intelligence
Classfyng Acoustc Transent Sgnals Usng Artfcal Intellgence Steve Sutton, Unversty of North Carolna At Wlmngton (suttons@charter.net) Greg Huff, Unversty of North Carolna At Wlmngton (jgh7476@uncwl.edu)
More information5 The Primal-Dual Method
5 The Prmal-Dual Method Orgnally desgned as a method for solvng lnear programs, where t reduces weghted optmzaton problems to smpler combnatoral ones, the prmal-dual method (PDM) has receved much attenton
More informationRelevance Assignment and Fusion of Multiple Learning Methods Applied to Remote Sensing Image Analysis
Assgnment and Fuson of Multple Learnng Methods Appled to Remote Sensng Image Analyss Peter Bajcsy, We-Wen Feng and Praveen Kumar Natonal Center for Supercomputng Applcaton (NCSA), Unversty of Illnos at
More informationThe BGLR (Bayesian Generalized Linear Regression) R- Package. Gustavo de los Campos, Amit Pataki & Paulino Pérez. (August- 2013)
Bostatstcs Department Bayesan Generalzed Lnear Regresson (BGLR) The BGLR (Bayesan Generalzed Lnear Regresson) R- Package By Gustavo de los Campos, Amt Patak & Paulno Pérez (August- 03) (contact: gdeloscampos@gmal.com
More informationSENSITIVITY ANALYSIS IN LINEAR PROGRAMMING USING A CALCULATOR
SENSITIVITY ANALYSIS IN LINEAR PROGRAMMING USING A CALCULATOR Judth Aronow Rchard Jarvnen Independent Consultant Dept of Math/Stat 559 Frost Wnona State Unversty Beaumont, TX 7776 Wnona, MN 55987 aronowju@hal.lamar.edu
More informationEECS 730 Introduction to Bioinformatics Sequence Alignment. Luke Huan Electrical Engineering and Computer Science
EECS 730 Introducton to Bonformatcs Sequence Algnment Luke Huan Electrcal Engneerng and Computer Scence http://people.eecs.ku.edu/~huan/ HMM Π s a set of states Transton Probabltes a kl Pr( l 1 k Probablty
More informationOptimal Scheduling of Capture Times in a Multiple Capture Imaging System
Optmal Schedulng of Capture Tmes n a Multple Capture Imagng System Tng Chen and Abbas El Gamal Informaton Systems Laboratory Department of Electrcal Engneerng Stanford Unversty Stanford, Calforna 9435,
More informationMulticriteria Decision Making
Multcrtera Decson Makng Andrés Ramos (Andres.Ramos@comllas.edu) Pedro Sánchez (Pedro.Sanchez@comllas.edu) Sonja Wogrn (Sonja.Wogrn@comllas.edu) Contents 1. Basc concepts 2. Contnuous methods 3. Dscrete
More informationAn Accurate Evaluation of Integrals in Convex and Non convex Polygonal Domain by Twelve Node Quadrilateral Finite Element Method
Internatonal Journal of Computatonal and Appled Mathematcs. ISSN 89-4966 Volume, Number (07), pp. 33-4 Research Inda Publcatons http://www.rpublcaton.com An Accurate Evaluaton of Integrals n Convex and
More informationSLAM Summer School 2006 Practical 2: SLAM using Monocular Vision
SLAM Summer School 2006 Practcal 2: SLAM usng Monocular Vson Javer Cvera, Unversty of Zaragoza Andrew J. Davson, Imperal College London J.M.M Montel, Unversty of Zaragoza. josemar@unzar.es, jcvera@unzar.es,
More informationAn Optimal Algorithm for Prufer Codes *
J. Software Engneerng & Applcatons, 2009, 2: 111-115 do:10.4236/jsea.2009.22016 Publshed Onlne July 2009 (www.scrp.org/journal/jsea) An Optmal Algorthm for Prufer Codes * Xaodong Wang 1, 2, Le Wang 3,
More informationParameter estimation for incomplete bivariate longitudinal data in clinical trials
Parameter estmaton for ncomplete bvarate longtudnal data n clncal trals Naum M. Khutoryansky Novo Nordsk Pharmaceutcals, Inc., Prnceton, NJ ABSTRACT Bvarate models are useful when analyzng longtudnal data
More informationSHAPE RECOGNITION METHOD BASED ON THE k-nearest NEIGHBOR RULE
SHAPE RECOGNITION METHOD BASED ON THE k-nearest NEIGHBOR RULE Dorna Purcaru Faculty of Automaton, Computers and Electroncs Unersty of Craoa 13 Al. I. Cuza Street, Craoa RO-1100 ROMANIA E-mal: dpurcaru@electroncs.uc.ro
More informationUnsupervised Learning and Clustering
Unsupervsed Learnng and Clusterng Why consder unlabeled samples?. Collectng and labelng large set of samples s costly Gettng recorded speech s free, labelng s tme consumng 2. Classfer could be desgned
More informationGeneralized Additive Bayesian Network Classifiers
Generalzed Addtve Bayesan Network Classfers Janguo L and Changshu Zhang and Tao Wang and Ymn Zhang Intel Chna Research Center, Bejng, Chna Department of Automaton, Tsnghua Unversty, Chna {janguo.l, tao.wang,
More informationBOOSTING CLASSIFICATION ACCURACY WITH SAMPLES CHOSEN FROM A VALIDATION SET
1 BOOSTING CLASSIFICATION ACCURACY WITH SAMPLES CHOSEN FROM A VALIDATION SET TZU-CHENG CHUANG School of Electrcal and Computer Engneerng, Purdue Unversty, West Lafayette, Indana 47907 SAUL B. GELFAND School
More informationContext-Specific Bayesian Clustering for Gene Expression Data
Context-Specfc Bayesan Clusterng for Gene Expresson Data Yoseph Barash School of Computer Scence & Engneerng Hebrew Unversty, Jerusalem, 91904, Israel hoan@cs.huj.ac.l Nr Fredman School of Computer Scence
More informationHigh-Boost Mesh Filtering for 3-D Shape Enhancement
Hgh-Boost Mesh Flterng for 3-D Shape Enhancement Hrokazu Yagou Λ Alexander Belyaev y Damng We z Λ y z ; ; Shape Modelng Laboratory, Unversty of Azu, Azu-Wakamatsu 965-8580 Japan y Computer Graphcs Group,
More informationA Saturation Binary Neural Network for Crossbar Switching Problem
A Saturaton Bnary Neural Network for Crossbar Swtchng Problem Cu Zhang 1, L-Qng Zhao 2, and Rong-Long Wang 2 1 Department of Autocontrol, Laonng Insttute of Scence and Technology, Benx, Chna bxlkyzhangcu@163.com
More informationThree supervised learning methods on pen digits character recognition dataset
Three supervsed learnng methods on pen dgts character recognton dataset Chrs Flezach Department of Computer Scence and Engneerng Unversty of Calforna, San Dego San Dego, CA 92093 cflezac@cs.ucsd.edu Satoru
More informationA CLASS OF TRANSFORMED EFFICIENT RATIO ESTIMATORS OF FINITE POPULATION MEAN. Department of Statistics, Islamia College, Peshawar, Pakistan 2
Pa. J. Statst. 5 Vol. 3(4), 353-36 A CLASS OF TRANSFORMED EFFICIENT RATIO ESTIMATORS OF FINITE POPULATION MEAN Sajjad Ahmad Khan, Hameed Al, Sadaf Manzoor and Alamgr Department of Statstcs, Islama College,
More informationCHAPTER 3 SEQUENTIAL MINIMAL OPTIMIZATION TRAINED SUPPORT VECTOR CLASSIFIER FOR CANCER PREDICTION
48 CHAPTER 3 SEQUENTIAL MINIMAL OPTIMIZATION TRAINED SUPPORT VECTOR CLASSIFIER FOR CANCER PREDICTION 3.1 INTRODUCTION The raw mcroarray data s bascally an mage wth dfferent colors ndcatng hybrdzaton (Xue
More informationOn Some Entertaining Applications of the Concept of Set in Computer Science Course
On Some Entertanng Applcatons of the Concept of Set n Computer Scence Course Krasmr Yordzhev *, Hrstna Kostadnova ** * Assocate Professor Krasmr Yordzhev, Ph.D., Faculty of Mathematcs and Natural Scences,
More informationAn Ensemble Learning algorithm for Blind Signal Separation Problem
An Ensemble Learnng algorthm for Blnd Sgnal Separaton Problem Yan L 1 and Peng Wen 1 Department of Mathematcs and Computng, Faculty of Engneerng and Surveyng The Unversty of Southern Queensland, Queensland,
More informationA Binarization Algorithm specialized on Document Images and Photos
A Bnarzaton Algorthm specalzed on Document mages and Photos Ergna Kavalleratou Dept. of nformaton and Communcaton Systems Engneerng Unversty of the Aegean kavalleratou@aegean.gr Abstract n ths paper, a
More informationKent State University CS 4/ Design and Analysis of Algorithms. Dept. of Math & Computer Science LECT-16. Dynamic Programming
CS 4/560 Desgn and Analyss of Algorthms Kent State Unversty Dept. of Math & Computer Scence LECT-6 Dynamc Programmng 2 Dynamc Programmng Dynamc Programmng, lke the dvde-and-conquer method, solves problems
More informationLECTURE NOTES Duality Theory, Sensitivity Analysis, and Parametric Programming
CEE 60 Davd Rosenberg p. LECTURE NOTES Dualty Theory, Senstvty Analyss, and Parametrc Programmng Learnng Objectves. Revew the prmal LP model formulaton 2. Formulate the Dual Problem of an LP problem (TUES)
More informationAn Iterative Solution Approach to Process Plant Layout using Mixed Integer Optimisation
17 th European Symposum on Computer Aded Process Engneerng ESCAPE17 V. Plesu and P.S. Agach (Edtors) 2007 Elsever B.V. All rghts reserved. 1 An Iteratve Soluton Approach to Process Plant Layout usng Mxed
More informationFusion Performance Model for Distributed Tracking and Classification
Fuson Performance Model for Dstrbuted rackng and Classfcaton K.C. Chang and Yng Song Dept. of SEOR, School of I&E George Mason Unversty FAIRFAX, VA kchang@gmu.edu Martn Lggns Verdan Systems Dvson, Inc.
More information