Adaptive Transfer Learning


Bin Cao, Sinno Jialin Pan, Yu Zhang, Dit-Yan Yeung, Qiang Yang
Hong Kong University of Science and Technology
Clear Water Bay, Kowloon, Hong Kong

Abstract

Transfer learning aims at reusing the knowledge in some source tasks to improve the learning of a target task. Many transfer learning methods assume that the source tasks and the target task are related, even though many tasks are not related in reality. When two tasks are unrelated, the knowledge extracted from a source task may not help, and may even hurt, the performance on a target task. Thus, how to avoid negative transfer and ensure a safe transfer of knowledge is crucial in transfer learning. In this paper, we propose an Adaptive Transfer learning algorithm based on Gaussian Processes (AT-GP), which adapts the transfer scheme by automatically estimating the similarity between a source and a target task. The main contribution of our work is a new semi-parametric transfer kernel for transfer learning from a Bayesian perspective, together with the proposal to learn the model with respect to the target task only, rather than all tasks as in multi-task learning. We formulate the transfer learning problem as a unified Gaussian Process (GP) model. The adaptive transfer ability of our approach is verified on both synthetic and real-world datasets.

Introduction

Transfer learning (or inductive transfer) aims at transferring shared knowledge from one task to other related tasks. In many real-world applications, we expect to reduce the labeling effort for a new task (referred to as the target task) by transferring knowledge from one or more related tasks (source tasks) which have plenty of labeled data. Usually, transfer learning rests on certain assumptions and the corresponding transfer schemes. For example, (Lawrence and Platt 2004; Schwaighofer, Tresp, and Yu 2005; Raina, Ng, and Koller 2006; Lee et al. 2007) assume that related tasks share some (hyper-)parameters.
By discovering the shared (hyper-)parameters, knowledge can be transferred across tasks. Other algorithms, such as (Dai et al. 2007; Raina et al. 2007), assume that some instances or features can be used as a bridge for knowledge transfer. If these assumptions fail to hold, however, the transfer may be insufficient or unsuccessful. In the worst case, it may even hurt performance, an effect referred to as negative transfer (Rosenstein and Dietterich 2005). Since it is not trivial to verify which assumptions hold for real-world tasks, we are interested in an adaptive transfer learning algorithm which can automatically adapt its transfer scheme to different scenarios and thus avoid negative transfer. We expect an adaptive transfer learning algorithm to demonstrate at least the following properties:

- Shared knowledge between tasks should be transferred as much as possible when the tasks are related. In the extreme case where they are exactly the same task, the performance of the adaptive algorithm should be as good as when the problem is treated as a single-task problem.
- Negative transfer should be avoided as much as possible when the tasks are unrelated. In the extreme case where the tasks are totally unrelated, the performance should be no worse than that of non-transfer-learning baselines.

Two basic transfer-learning schemes can be constructed from the above requirements. One is the "no transfer" scheme, which discards the data in the source task when training a model for the target task. This is the best scheme when the source and target tasks are not related at all. The other is the "transfer all" scheme, which treats the data in the source task the same as the data in the target task. This is the best scheme when the source and target tasks are exactly the same.

Copyright © 2010, Association for the Advancement of Artificial Intelligence. All rights reserved.
What we wish for is an adaptive scheme that is always no worse than these two schemes. However, among the many transfer learning algorithms that have been proposed, a mechanism has been lacking to automatically adjust the transfer scheme to achieve this. In this paper, we address the problem of constructing an adaptive transfer learning algorithm that satisfies both properties mentioned above. We propose an Adaptive Transfer learning algorithm based on Gaussian Processes (AT-GP) to achieve the goal of adaptive transfer. Advantages of Gaussian process methods include that the priors and hyper-parameters of the trained models are easy to interpret, and that predictive variances are provided. Different from previous work on transfer learning and multi-task learning using GPs, which is based either on transfer through shared parameters (Lawrence and Platt 2004; Yu, Tresp, and Schwaighofer 2005; Schwaighofer, Tresp, and Yu 2005) or on shared representations of instances (Raina et al. 2007), the model proposed in this paper can automatically learn the transfer scheme from the data. Our key idea is to learn a transfer kernel that models the correlation of the outputs when the inputs come from different tasks, which can be regarded as a measure of similarity between tasks. What to transfer is based on how similar the source task is to the target task. On one hand, if the tasks are very similar, knowledge is transferred from the source data and the learning performance tends to the "transfer all" scheme in the extreme case. On the other hand, if the tasks are not similar, the model transfers only the prior information on the parameters, approximating the "no transfer" scheme. Since we have very few labeled data for the target task, we consider a Bayesian estimation of the task similarity rather than a point estimate (Gelman et al. 2003). A significant difference between our problem and multi-task learning is that we only care about the target task rather than all tasks, which is a very natural scenario in real-world applications. For example, we may want to use previously learned tasks to help learn a new task; our goal is then to improve the new task rather than the old ones. For this purpose, the learning process should focus on the target task. We therefore propose to learn the model based on the conditional distribution of the target task given the source task, which is a novel variation of the classical Gaussian process model.
The Adaptive Transfer Learning Model via Gaussian Processes

We consider regression problems in this paper. Suppose that we have a regression problem as a source task S with a large amount of training data, and another regression problem as a target task T with a small amount of training data. Let $y_i^{(S)}$ denote the observed output of the $i$-th instance $x_i^{(S)}$ in the source task, and $y_j^{(T)}$ the observed output of the $j$-th instance $x_j^{(T)}$ in the target task. We assume that the underlying latent function between input and output for the source task is $f^{(S)}$. Let $\mathbf{f}^{(S)}$ be the vector with $i$-th element $f^{(S)}(x_i^{(S)})$, and define $\mathbf{f}^{(T)}$ analogously for the target task. With $N$ data instances for the source task and $M$ for the target task, $\mathbf{f}^{(S)}$ has length $N$ and $\mathbf{f}^{(T)}$ has length $M$. We model the noise on observations by an additive noise term,
$$y_i^{(S)} = f_i^{(S)} + \epsilon_i^{(S)}, \qquad y_j^{(T)} = f_j^{(T)} + \epsilon_j^{(T)},$$
where $f_i^{(\cdot)} = f^{(\cdot)}(x_i^{(\cdot)})$. The prior distribution (GP prior) over the latent variables $\mathbf{f}^{(\cdot)}$ is given by a GP, $p(\mathbf{f}^{(\cdot)}) = \mathcal{N}(\mathbf{f}^{(\cdot)} \mid \mathbf{0}, K^{(\cdot)})$, with kernel matrix $K^{(\cdot)}$; the notation $\mathbf{0}$ denotes a vector with all entries zero. We assume that the noise $\epsilon_i^{(\cdot)}$ is a random variable whose value is independent for each observation $y_i^{(\cdot)}$ and follows a zero-mean Gaussian,
$$p(y_i^{(\cdot)} \mid f_i^{(\cdot)}) = \mathcal{N}(y_i^{(\cdot)} \mid f_i^{(\cdot)}, \beta_{(\cdot)}^{-1}), \quad (1)$$
where $\beta_s$ and $\beta_t$ are hyper-parameters representing the precision (inverse variance) of the noise in the source and target tasks, respectively. Since the noise variables are i.i.d., the distribution of the observed outputs $\mathbf{y}^{(S)} = (y_1^{(S)}, \ldots, y_N^{(S)})^T$ and $\mathbf{y}^{(T)} = (y_1^{(T)}, \ldots, y_M^{(T)})^T$ conditioned on $\mathbf{f}^{(S)}$ and $\mathbf{f}^{(T)}$ can be written in Gaussian form,
$$p(\mathbf{y}^{(\cdot)} \mid \mathbf{f}^{(\cdot)}) = \mathcal{N}(\mathbf{y}^{(\cdot)} \mid \mathbf{f}^{(\cdot)}, \beta_{(\cdot)}^{-1} I), \quad (2)$$
where $I$ is the identity matrix of proper dimension. In order to transfer knowledge from the source task S to the target task T, we need to construct connections between them.
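As a concrete illustration, the observation model of Equations (1)-(2) can be sketched as follows. The RBF base kernel and all numeric values (input range, noise precision) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def rbf_kernel(X1, X2, theta=1.0, ell=1.0):
    """Base kernel k(x, x') = theta * exp(-||x - x'||^2 / (2 ell^2))."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return theta * np.exp(-0.5 * d2 / ell**2)

rng = np.random.default_rng(0)
X_s = rng.uniform(-3, 3, size=(40, 1))   # source-task inputs (illustrative)
beta_s = 25.0                            # noise precision (hypothetical value)

# Draw a latent function f^(S) ~ N(0, K) and noisy outputs y = f + eps
K = rbf_kernel(X_s, X_s) + 1e-8 * np.eye(len(X_s))   # jitter for stability
f_s = rng.multivariate_normal(np.zeros(len(X_s)), K)
y_s = f_s + rng.normal(0.0, beta_s ** -0.5, size=len(X_s))
```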
In general, there are two kinds of connections between the source and target tasks. One is that the two GP regression models for the source and target tasks share the same parameters $\theta$ in their kernel functions, indicating that the smoothness of the regression functions of the source and target tasks is similar. This type of transfer scheme was introduced in (Lawrence and Platt 2004) for GP models; many other multi-task learning models use similar schemes by sharing priors or regularization terms across tasks (Lee et al. 2007; Raina, Ng, and Koller 2006; Ando and Zhang 2005). The other kind of connection is the correlation between the outputs of data instances across tasks (Bonilla, Agakov, and Williams 2007; Bonilla, Chai, and Williams 2008). Unlike the first kind (Lawrence and Platt 2004), we do not assume the data in different tasks to be independent given the shared GP prior, but consider the joint distribution of the outputs of both tasks. The connection through shared parameters gives the model a parametric flavor, while the connection through correlation of data instances gives it a nonparametric flavor; our model may therefore be regarded as semi-parametric. Suppose the distribution of the observed outputs conditioned on the inputs $X$ is $p(\mathbf{y} \mid X)$, where $\mathbf{y} = (\mathbf{y}^{(S)}, \mathbf{y}^{(T)})$ and $X = (X^{(S)}, X^{(T)})$. For multi-task learning problems where the tasks are equally important, the objective would be the likelihood $p(\mathbf{y} \mid X)$. (We use the superscript $(\cdot)$ to denote both $(S)$ and $(T)$ to avoid redundancy.) However, for transfer learning, where we have a clear

target task, it is not necessary to optimize the parameters with respect to the source task. Therefore, we directly consider the conditional distribution $p(\mathbf{y}^{(T)} \mid \mathbf{y}^{(S)}, X^{(T)}, X^{(S)})$. Let $\mathbf{f} = (\mathbf{f}^{(S)}, \mathbf{f}^{(T)})$. We first define a Gaussian process over $\mathbf{f}$, $p(\mathbf{f} \mid X, \theta) = \mathcal{N}(\mathbf{f} \mid \mathbf{0}, \widetilde{K})$, with the kernel matrix $\widetilde{K}$ for transfer learning given by
$$\widetilde{K}_{nm} = k(x_n, x_m)\, e^{-\zeta(x_n, x_m)\rho}, \quad (3)$$
where $\zeta(x_n, x_m) = 0$ if $x_n$ and $x_m$ come from the same task, and $\zeta(x_n, x_m) = 1$ otherwise. The intuition behind Equation (3) is that the additional factor makes the correlation between instances from different tasks less than or equal to the correlation between instances from the same task. The parameter $\rho$ represents the dissimilarity between S and T. One difficulty in transfer learning is to estimate the (dis)similarity with a limited amount of data. We propose a Bayesian approach to tackle this difficulty: instead of using a point estimate, we let $\rho$ follow a Gamma distribution, $\rho \sim \Gamma(b, \mu)$. The transfer kernel then becomes
$$\widetilde{K}_{nm} = \mathbb{E}[\widetilde{K}_{nm}] = k(x_n, x_m) \int e^{-\zeta(x_n, x_m)\rho}\, \frac{\rho^{b-1} e^{-\rho/\mu}}{\mu^b \Gamma(b)}\, d\rho.$$
By integrating out $\rho$, we obtain
$$\widetilde{K}_{nm} = \begin{cases} \left(\frac{1}{1+\mu}\right)^{b} k(x_n, x_m), & \zeta(x_n, x_m) = 1, \\ k(x_n, x_m), & \text{otherwise.} \end{cases} \quad (4)$$
The factor before the kernel function lies in $(0, 1]$, so this form of the kernel cannot express negative correlation between tasks. We therefore extend it to
$$\widetilde{K}_{nm} = k(x_n, x_m)\left(2 e^{-\zeta(x_n, x_m)\rho} - 1\right), \quad (5)$$
whose Bayesian form is
$$\widetilde{K}_{nm} = \begin{cases} k(x_n, x_m)\left(2\left(\frac{1}{1+\mu}\right)^{b} - 1\right), & \zeta(x_n, x_m) = 1, \\ k(x_n, x_m), & \text{otherwise.} \end{cases} \quad (6)$$
Theorem 1 shows that the kernel matrices defined in Equations (4) and (6) are positive semi-definite (PSD) as long as $k$ is a valid kernel function. Both transfer kernels model the correlation of outputs based not only on the similarity between inputs but also on the similarity between tasks. Since the kernel in Equation (6) can model negative correlation between tasks and therefore has stronger expressive ability, we use it as the transfer kernel. We will further discuss its properties in a later section.
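A minimal sketch of the transfer kernel of Equation (6), assuming an RBF base kernel (the paper leaves $k$ generic) and illustrative hyper-parameter values for $b$ and $\mu$:

```python
import numpy as np

def transfer_kernel(X1, X2, tasks1, tasks2, b=1.0, mu=0.5, ell=1.0):
    """Transfer kernel of Eq. (6): cross-task entries of the base kernel k
    are scaled by lambda = 2 * (1/(1+mu))**b - 1, which lies in (-1, 1]."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    K = np.exp(-0.5 * d2 / ell**2)                # RBF base kernel (assumed)
    lam = 2.0 * (1.0 + mu) ** (-b) - 1.0
    cross = tasks1[:, None] != tasks2[None, :]    # zeta(x_n, x_m) = 1
    return np.where(cross, lam * K, K)

# The three regimes of lambda discussed later in the paper:
lam = lambda b, mu: 2.0 * (1.0 + mu) ** (-b) - 1.0
assert abs(lam(1.0, 1.0)) < 1e-12        # lambda = 0: transfer over priors only
assert 0.0 < lam(1.0, 0.5) < 1.0         # 0 < lambda < 1: transfer over data
assert abs(lam(3.0, 0.0) - 1.0) < 1e-12  # lambda = 1: tasks treated as one

# Theorem 1 below: the lambda-scaled block matrix stays PSD for |lambda| <= 1.
rng = np.random.default_rng(0)
X = rng.normal(size=(30, 2))
tasks = np.repeat([0, 1], 15)             # first 15 source, last 15 target
K = transfer_kernel(X, X, tasks, tasks)
assert np.linalg.eigvalsh(K).min() > -1e-8
```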
Thus, the conditional distribution of $\mathbf{f}^{(T)}$ given $\mathbf{f}^{(S)}$ can be written as
$$p(\mathbf{f}^{(T)} \mid \mathbf{f}^{(S)}, X^{(T)}, \theta) = \mathcal{N}\left(\widetilde{K}_{21}\widetilde{K}_{11}^{-1}\mathbf{f}^{(S)},\; \widetilde{K}_{22} - \widetilde{K}_{21}\widetilde{K}_{11}^{-1}\widetilde{K}_{12}\right),$$
where $\widetilde{K} = \begin{pmatrix} \widetilde{K}_{11} & \widetilde{K}_{12} \\ \widetilde{K}_{21} & \widetilde{K}_{22} \end{pmatrix}$ is a block matrix: $\widetilde{K}_{11}$ and $\widetilde{K}_{22}$ are the kernel matrices of the source-task and target-task data, respectively, and $\widetilde{K}_{12}$ ($= \widetilde{K}_{21}^{T}$) is the kernel matrix across tasks.

Theorem 1. Let $K = \begin{pmatrix} K_{11} & K_{12} \\ K_{21} & K_{22} \end{pmatrix}$ be a PSD matrix with $K_{12} = K_{21}^{T}$. Then for $|\lambda| \le 1$, $\widetilde{K} = \begin{pmatrix} K_{11} & \lambda K_{12} \\ \lambda K_{21} & K_{22} \end{pmatrix}$ is also a PSD matrix.

We omit the proof here to save space. So far, we have described how to construct a unified GP regression model for adaptive transfer learning. In the following subsections, we discuss how to do inference and parameter learning in the proposed GP regression model.

Inductive Inference

For a test point $x$ in the target task, we want to predict its output value $y$ by determining the predictive distribution $p(y \mid \mathbf{y}^{(S)}, \mathbf{y}^{(T)})$, where, for simplicity, the input variables are omitted. The inference process is the same as in standard GP models. The mean and variance of the predictive distribution for target-task data are given by
$$m(x) = \mathbf{k}_x^{T} C^{-1} \mathbf{y}, \qquad \sigma_T^2(x) = c - \mathbf{k}_x^{T} C^{-1} \mathbf{k}_x, \quad (7)$$
where $C = \widetilde{K} + \Lambda$ with $\Lambda = \begin{pmatrix} \beta_s^{-1} I_N & 0 \\ 0 & \beta_t^{-1} I_M \end{pmatrix}$, $c = k(x, x) + \beta_t^{-1}$, and $\mathbf{k}_x$ is computed by the transfer kernel defined in Equation (6). The mean $m(x)$ can be further decomposed as
$$m(x) = \sum_{x_j \in X^{(T)}} \alpha_j k(x, x_j) + \lambda \sum_{x_i \in X^{(S)}} \alpha_i k(x, x_i), \quad (8)$$
where $\lambda = 2\left(\frac{1}{1+\mu}\right)^{b} - 1$ and $\alpha_i$ is the $i$-th element of $C^{-1}\mathbf{y}$. The first term represents the correlation between the test point and the target-task data. The second term represents the correlation between the test point and the source-task data, where a shrinkage is introduced based on the similarity between tasks.
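The predictive equations (7)-(8) can be sketched as follows; the RBF base kernel, the noise precisions, and the toy data are illustrative assumptions.

```python
import numpy as np

def transfer_kernel(X1, X2, t1, t2, b=1.0, mu=0.5, ell=1.0):
    """Eq. (6) with an assumed RBF base kernel; cross-task entries are scaled."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    K = np.exp(-0.5 * d2 / ell**2)
    lam = 2.0 * (1.0 + mu) ** (-b) - 1.0
    return np.where(t1[:, None] != t2[None, :], lam * K, K)

def predict(X_test, X, y, tasks, beta_s=25.0, beta_t=25.0):
    """Predictive mean and variance of Eq. (7). X, y, tasks stack source
    (task id 0) and target (task id 1) training data."""
    Lam = np.diag(np.where(tasks == 0, 1.0 / beta_s, 1.0 / beta_t))
    C = transfer_kernel(X, X, tasks, tasks) + Lam           # C = K~ + Lambda
    t_test = np.ones(len(X_test), dtype=int)                # test points are target-task
    Kx = transfer_kernel(X, X_test, tasks, t_test)          # columns are k_x
    alpha = np.linalg.solve(C, y)                           # alpha = C^{-1} y
    mean = Kx.T @ alpha                                     # realizes Eq. (8)
    c = 1.0 + 1.0 / beta_t                                  # k(x, x) = 1 for RBF
    var = c - np.einsum('ij,ij->j', Kx, np.linalg.solve(C, Kx))
    return mean, var

# Usage on illustrative data: 50 source points, 10 target points
rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(60, 1))
tasks = np.repeat([0, 1], [50, 10])
y = np.sin(X[:, 0]) + rng.normal(0, 0.1, 60)
mean, var = predict(rng.uniform(-2, 2, size=(5, 1)), X, y, tasks)
```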
Parameter Learning

Given the observations $\mathbf{y}^{(S)}$ in the source task and $\mathbf{y}^{(T)}$ in the target task, we wish to learn the parameters $\{\theta_i\}_{i=1}^{P}$ of the kernel function ($P$ is the number of kernel parameters), as well as the parameters $b$ and $\mu$ (denoted $\theta_{P+1}$ and $\theta_{P+2}$ for simplicity), by maximizing the marginal likelihood of the target-task data. Multi-task GP models (Bonilla, Chai, and Williams 2008) consider the joint distribution of the source

(The proof of Theorem 1 can be found in the extended version at caobn/papers/atgp ext.pdf.)

and target tasks. However, for transfer learning problems we may have relatively few labeled data in the target task, and optimizing with respect to the joint distribution may bias the model towards the source rather than the target. We therefore propose to optimize the conditional distribution instead,
$$p(\mathbf{y}^{(T)} \mid \mathbf{y}^{(S)}, X^{(T)}, X^{(S)}). \quad (9)$$
As analyzed above, this distribution is also Gaussian and the model is still a GP. A slight difference from the classical GP is that the mean is no longer a zero vector, and is itself a function of the parameters:
$$p(\mathbf{y}^{(T)} \mid \mathbf{y}^{(S)}, X^{(T)}, X^{(S)}) = \mathcal{N}(\mathbf{y}^{(T)} \mid \mu_t, C_t), \quad (10)$$
where
$$\mu_t = \widetilde{K}_{21}(K_{11} + \sigma_s^2 I)^{-1} \mathbf{y}_s, \qquad C_t = (K_{22} + \sigma_t^2 I) - \widetilde{K}_{21}(K_{11} + \sigma_s^2 I)^{-1}\widetilde{K}_{12}, \quad (11)$$
with $K_{11}(x_n, x_m) = K_{22}(x_n, x_m) = k(x_n, x_m)$ and $\widetilde{K}_{21}(x_n, x_m) = \widetilde{K}_{12}(x_n, x_m) = k(x_n, x_m)\left(2\left(\frac{1}{1+\mu}\right)^{b} - 1\right)$. The log-likelihood is given by
$$\ln p(\mathbf{y}_t \mid \theta) = -\frac{1}{2} \ln |C_t| - \frac{1}{2}(\mathbf{y}_t - \mu_t)^{T} C_t^{-1} (\mathbf{y}_t - \mu_t) - \frac{M}{2}\ln(2\pi). \quad (12)$$
We can compute the derivative of the log-likelihood with respect to the parameters:
$$\frac{\partial \ln p(\mathbf{y}_t \mid \theta)}{\partial \theta_i} = -\frac{1}{2}\mathrm{Tr}\left(C_t^{-1}\frac{\partial C_t}{\partial \theta_i}\right) + \frac{1}{2}(\mathbf{y}_t - \mu_t)^{T} C_t^{-1}\frac{\partial C_t}{\partial \theta_i} C_t^{-1}(\mathbf{y}_t - \mu_t) + \left(\frac{\partial \mu_t}{\partial \theta_i}\right)^{T} C_t^{-1}(\mathbf{y}_t - \mu_t).$$
The differences between the proposed learning model and classical GP learning are the presence of the last term in the above equation and the non-zero-mean Gaussian process; the standard inference and learning algorithms can still be used. Thus, many approximation techniques for GP models (Bottou et al. 2007) can also be applied directly to speed up the inference and learning processes of AT-GP.

Transfer Kernel: Modeling Correlation Between Tasks

As mentioned above, our main contribution is the proposed semi-parametric transfer kernel. In this section, we further discuss its properties for modeling correlations between tasks. In general, the kernel function in a GP expresses that for points $x_n$ and $x_m$ that are similar, the corresponding values $y(x_n)$ and $y(x_m)$ will be more strongly correlated than for dissimilar points.
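The conditional objective of Equations (10)-(12) above can be sketched directly; the toy kernel matrices in the sanity check are illustrative.

```python
import numpy as np

def conditional_nll(y_s, y_t, K11, K12_t, K22, sigma_s2, sigma_t2):
    """Negative log of Eq. (12): -ln p(y_t | y_s) for the conditional GP.
    K12_t is the cross-task (lambda-scaled) transfer kernel block."""
    A = K11 + sigma_s2 * np.eye(len(y_s))
    mu_t = K12_t.T @ np.linalg.solve(A, y_s)                     # Eq. (11) mean
    C_t = K22 + sigma_t2 * np.eye(len(y_t)) - K12_t.T @ np.linalg.solve(A, K12_t)
    r = y_t - mu_t
    _, logdet = np.linalg.slogdet(C_t)
    return 0.5 * (logdet + r @ np.linalg.solve(C_t, r)
                  + len(y_t) * np.log(2.0 * np.pi))

# Sanity check: with zero cross-covariance (lambda = 0), Eq. (12) reduces to
# the ordinary GP marginal likelihood of the target data alone.
rng = np.random.default_rng(0)
X_t = rng.normal(size=(6, 1))
d2 = ((X_t[:, None, :] - X_t[None, :, :]) ** 2).sum(-1)
K22 = np.exp(-0.5 * d2)
y_t = rng.normal(size=6)
nll = conditional_nll(np.zeros(4), y_t, np.eye(4), np.zeros((4, 6)), K22, 0.1, 0.1)
C = K22 + 0.1 * np.eye(6)
_, ld = np.linalg.slogdet(C)
direct = 0.5 * (ld + y_t @ np.linalg.solve(C, y_t) + 6 * np.log(2 * np.pi))
assert abs(nll - direct) < 1e-9
```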
In the transfer learning scenario, the correlation between $y(x_n)$ and $y(x_m)$ also depends on which tasks the inputs $x_n$ and $x_m$ come from and on how similar the tasks are. The transfer kernel therefore expresses how $y(x_n)$ and $y(x_m)$ are correlated when $x_n$ and $x_m$ come from different tasks. The transfer kernel can transfer through different schemes in three cases:

- Transfer over priors: $\lambda \to 0$, meaning we know the source and target tasks are not similar, or we have no confidence in their relation. When the correlations between data in the source and target tasks are slim, what we transfer is only the shared parameters of the kernel function $k$, so we require only that the degree of smoothness of the source and target tasks be shared.
- Transfer over data: $0 < \lambda < 1$. In this case, besides the smoothness information, the model directly transfers data from the source task to the target task. How much the source data influence the target task depends on the value of $\lambda$.
- Single-task problem: $\lambda = 1$, meaning we have high confidence that the tasks are extremely correlated and can treat the two tasks as one. In this case, it is equivalent to the "transfer all" scheme.

The learning algorithm can automatically determine which setting the problem falls into. This is achieved by estimating $\lambda$ on the labeled data from both the source and target tasks. Experiments in the next section show that only a few labeled data are required to estimate $\lambda$ well.

Experiments

Synthetic Dataset

In this experiment, we show how the proposed AT-GP model performs as the similarity between the source and target tasks changes. We first generate a synthetic dataset to test the AT-GP algorithm, in order to better illustrate its properties under different parameter settings. We use a linear regression problem as a case study. We are given a linear regression function $f(x) = w_0^{T} x + \epsilon$, where $w_0 \in \mathbb{R}^{100}$ and $\epsilon$ is a zero-mean Gaussian noise term. The target task is to learn this regression model from a few data points generated by it.
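One natural way to generate such a family of source tasks at controlled distances from the target task is to perturb $w_0$ along a random direction. The sketch below follows this idea; the noise level and the particular perturbation strengths are illustrative assumptions, and fitting AT-GP itself is omitted.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 100
w0 = rng.normal(size=d)              # target-task weights
w_tilde = rng.normal(size=d)         # random direction controlling task distance

def make_task(w, n, noise=0.1):
    """Sample n points from a noisy linear model y = w^T x + eps."""
    X = rng.normal(size=(n, d))
    return X, X @ w + rng.normal(0.0, noise, size=n)

X_t, y_t = make_task(w0, 500)        # target task (train/test split done later)
distances = []
for delta in [0.0, 0.5, 1.0, 2.0]:   # illustrative perturbation strengths
    w = w0 + delta * w_tilde         # source weights at increasing distance
    X_s, y_s = make_task(w, 500)
    distances.append(np.linalg.norm(w - w0))   # D_f grows with delta
```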
In our experiment, we use this function to generate 500 data points for the target task; 50 are randomly selected for training and the rest are used for testing. For the source task, we use $g(x) = w^{T} x + \epsilon = (w_0 + \delta \tilde{w})^{T} x + \epsilon$ to generate 500 training data points, where $\tilde{w}$ is a randomly generated vector and $\delta$ is a variable controlling the difference between $g$ and $f$. In the experiment we increase $\delta$, thereby varying the distance between the two tasks, $D_f = \|w - w_0\|_F$. Figure 2 shows how the mean absolute error (MAE) on the 450 target test points changes with the distance between the source and target tasks. The results are compared with the "transfer all" scheme (directly using all of the training data) and the "no transfer" scheme (using only the target-task training data). As we can see, when the two tasks are very similar, the AT-GP model performs as well as "transfer all", while when the tasks are very different, the AT-GP

[Figure 2: The left panel shows how MAE changes with increasing distance $D_f$ from $f$, compared with "transfer all" and "no transfer". The right panel shows how $\lambda$ changes with increasing distance; $\lambda$ is strongly correlated with $D_f$.]

[Figure 4: Learning with different numbers of labeled data in the target task. The left panel shows the convergence curve of $\lambda$ with respect to the number of data points; the right panel shows MAE on the test data. ($\lambda^*$ is the value of $\lambda$ after convergence, here $\lambda^* = 0.3$.)]

model is no worse than "no transfer". Figure 4 shows the experimental results on learning $\lambda$ with a varying number of labeled data in the target task. It is interesting to observe that the number of data points required to learn $\lambda$ well (left panel) is much smaller than the number required to learn the task well (right panel). This indicates why transfer learning works.

Real-World Datasets

In this section, we conduct experiments on three real-world datasets.

WiFi Localization: The task is to predict the location of each collection of received signal strength (RSS) values in an indoor environment, received from WiFi Access Points (APs). A set of (RSS values, location) pairs is given as training data. The training data are collected at a different time period from the test data, so there is a distribution change between training and test data. In WiFi location estimation, using outdated data as training data can lead to very large error. However, because the location information is constant across time, a certain part of the data can be transferred; if this can be done successfully, we can save a lot of manual labeling effort for the new time period. We therefore use the outdated data as the source task to help predict locations for current signals.
Different from multi-task learning, which cares about the performance on all tasks, in this scenario we only care about the performance on the current data, corresponding to the target task.

Wine: The dataset concerns wine quality, including red and white wine samples. The features are objective tests (e.g., pH values) and the output is based on sensory data: labels are given by experts as grades between 0 (very bad) and 10 (very excellent). There are 1599 records for red wine and 4898 for white wine. We use quality prediction for white wine as the source task and quality prediction for red wine as the target task.

SARCOS: The dataset relates to an inverse dynamics problem for a seven degrees-of-freedom SARCOS anthropomorphic robot arm. The task is to map from a 21-dimensional input space (7 joint positions, 7 joint velocities, 7 joint accelerations) to the corresponding 7 joint torques. The original problem is a multi-output regression problem; it can also be treated as a multi-task learning problem by treating the seven mappings as seven tasks. In this paper we use one task as the target and another as the source, giving 49 task pairs in total for our experiments.

In our experiments, all data in the source task and 5% of the data in the target task are used for training; the remaining 95% of the target-task data are used for evaluation. We use NMSE (normalized mean square error) to evaluate results on the Wine and SARCOS datasets, and error distance (in meters) for WiFi; a smaller value indicates better performance for both criteria. The average performance results are shown in Table 1, where "No" and "All" are GP models with the no-transfer and transfer-all schemes, Multi-1 is (Lawrence and Platt 2004), and Multi-2 is (Bonilla, Chai, and Williams 2008).

Discussion

We further discuss the experimental results in this section.
For the task pairs in the datasets, sometimes the source and target tasks are quite related, as in the case of the WiFi dataset. In these cases, the $\lambda$ parameter learned by the model is large, allowing the shared knowledge to be transferred successfully. However, in other cases, such as some pairs in the SARCOS dataset, the source and target tasks may not be related and negative transfer may occur. A safer way is to use the parameter-transfer scheme (Multi-1 in (Lawrence and Platt 2004)) or the no-transfer scheme to avoid

negative transfer.

[Table 1: Results on the three real-world datasets, with columns No, All, Multi-1, Multi-2, and AT, and rows Wine, SARCOS, and WiFi. NMSE over all source/target-task pairs is reported for Wine and SARCOS, while error distances (in meters) are reported for WiFi. Both means and standard deviations are reported; t-tests show that the improvements are significant.]

The drawback of the parameter-transfer scheme and the no-transfer scheme is that they may lose a lot of shared knowledge when the tasks are similar. Besides, since multi-task learning treats the source and target tasks with no difference, the source task may dominate the learning of the parameters, and the performance on the target task may even be worse than in the no-transfer case, as on the SARCOS dataset. However, what we should focus on is the target task. In our method, we conduct the learning process on the target task, so the learned parameters fit the target task; the AT-GP model therefore performs the best on all three datasets. In many real-world applications, it is hard to know exactly whether tasks are related. Since our method adjusts the transfer scheme automatically according to the similarity of the two tasks, we are able to adaptively transfer as much shared knowledge as possible while avoiding negative transfer.

Related Work

Multi-task learning is closely related to transfer learning; many papers (Yu, Tresp, and Schwaighofer 2005; Schwaighofer, Tresp, and Yu 2005) consider multi-task learning and transfer learning as the same problem. Recently, various GP models have been proposed to solve multi-task learning problems. Yu et al. (Yu, Tresp, and Schwaighofer 2005; Schwaighofer, Tresp, and Yu 2005) proposed hierarchical Gaussian process models for multi-task learning. Lawrence (Lawrence and Platt 2004) also proposed a multi-task learning model based on Gaussian processes.
This model tries to discover common kernel parameters across tasks, and the informative vector machine was introduced to solve large-scale problems. In (Bonilla, Chai, and Williams 2008), Bonilla et al. proposed a multi-task regression model using Gaussian processes that considers the similarity between tasks and constructs a free-form kernel matrix to represent task relations. The major difference between their model and ours is the constructed kernel matrix: they use a point estimate of the correlations between tasks, which may not be robust when the target-task data are few, and they treat the tasks as equally important rather than adopting the transfer setting. One difference of transfer learning from multi-task learning is that in transfer learning we are particularly interested in transferring knowledge from one or more source tasks to a target task rather than learning all these tasks simultaneously; our concern is the performance on the target task only. On the problem of adaptive transfer learning, to the best of our knowledge only (Rosenstein and Dietterich 2005) addressed negative transfer, but they did not achieve adaptive transfer.

Conclusion

In this paper, we proposed an adaptive transfer Gaussian process (AT-GP) model for adaptive transfer learning. The proposed model can automatically learn the similarity between tasks: how much to transfer is based on how similar the tasks are, and negative transfer can be avoided. Experiments on both synthetic and real-world datasets verify the effectiveness of the proposed model.

Acknowledgments

Bin Cao, Sinno Jialin Pan and Qiang Yang thank the support of RGC/NSFC grant N HKUST624/09.

References

Ando, R. K., and Zhang, T. 2005. A framework for learning predictive structures from multiple tasks and unlabeled data. J. Mach. Learn. Res. 6.
Bonilla, E. V.; Agakov, F.; and Williams, C. 2007. Kernel multi-task learning using task-specific features. In Proc. of the Eleventh International Conference on Artificial Intelligence and Statistics (AISTATS '07).
Bonilla, E.; Chai, K.
M.; and Williams, C. 2008. Multi-task Gaussian process prediction. In Platt, J.; Koller, D.; Singer, Y.; and Roweis, S., eds., NIPS 20. MIT Press.
Bottou, L.; Chapelle, O.; DeCoste, D.; and Weston, J., eds. 2007. Large Scale Kernel Machines. Cambridge: MIT Press.
Dai, W.; Yang, Q.; Xue, G.-R.; and Yu, Y. 2007. Boosting for transfer learning. In Proc. of the 24th ICML. ACM.
Gelman, A.; Carlin, J. B.; Stern, H. S.; and Rubin, D. B. 2003. Bayesian Data Analysis. Chapman & Hall/CRC, second edition.
Lawrence, N. D., and Platt, J. C. 2004. Learning to learn with the informative vector machine. In Proc. of the 21st ICML. Banff, Alberta, Canada: ACM.
Lee, S.-I.; Chatalbashev, V.; Vickrey, D.; and Koller, D. 2007. Learning a meta-level prior for feature relevance from multiple related tasks. In Proc. of the 24th ICML. Corvallis, Oregon: ACM.
Raina, R.; Battle, A.; Lee, H.; Packer, B.; and Ng, A. Y. 2007. Self-taught learning: transfer learning from unlabeled data. In ICML '07. New York, NY, USA: ACM.
Raina, R.; Ng, A. Y.; and Koller, D. 2006. Constructing informative priors using transfer learning. In Proc. of the 23rd ICML. Pittsburgh, Pennsylvania: ACM.
Rosenstein, M. T.; Marx, Z.; Kaelbling, L. P.; and Dietterich, T. G. 2005. To transfer or not to transfer. In NIPS 2005 Workshop on Transfer Learning.
Schwaighofer, A.; Tresp, V.; and Yu, K. 2005. Learning Gaussian process kernels via hierarchical Bayes. In NIPS 17.
Yu, K.; Tresp, V.; and Schwaighofer, A. 2005. Learning Gaussian processes from multiple tasks. In Proc. of the 22nd ICML. Bonn, Germany: ACM.


More information

BAYESIAN MULTI-SOURCE DOMAIN ADAPTATION

BAYESIAN MULTI-SOURCE DOMAIN ADAPTATION BAYESIAN MULTI-SOURCE DOMAIN ADAPTATION SHI-LIANG SUN, HONG-LEI SHI Department of Computer Scence and Technology, East Chna Normal Unversty 500 Dongchuan Road, Shangha 200241, P. R. Chna E-MAIL: slsun@cs.ecnu.edu.cn,

More information

Term Weighting Classification System Using the Chi-square Statistic for the Classification Subtask at NTCIR-6 Patent Retrieval Task

Term Weighting Classification System Using the Chi-square Statistic for the Classification Subtask at NTCIR-6 Patent Retrieval Task Proceedngs of NTCIR-6 Workshop Meetng, May 15-18, 2007, Tokyo, Japan Term Weghtng Classfcaton System Usng the Ch-square Statstc for the Classfcaton Subtask at NTCIR-6 Patent Retreval Task Kotaro Hashmoto

More information

S1 Note. Basis functions.

S1 Note. Basis functions. S1 Note. Bass functons. Contents Types of bass functons...1 The Fourer bass...2 B-splne bass...3 Power and type I error rates wth dfferent numbers of bass functons...4 Table S1. Smulaton results of type

More information

FEATURE EXTRACTION. Dr. K.Vijayarekha. Associate Dean School of Electrical and Electronics Engineering SASTRA University, Thanjavur

FEATURE EXTRACTION. Dr. K.Vijayarekha. Associate Dean School of Electrical and Electronics Engineering SASTRA University, Thanjavur FEATURE EXTRACTION Dr. K.Vjayarekha Assocate Dean School of Electrcal and Electroncs Engneerng SASTRA Unversty, Thanjavur613 41 Jont Intatve of IITs and IISc Funded by MHRD Page 1 of 8 Table of Contents

More information

Discriminative Dictionary Learning with Pairwise Constraints

Discriminative Dictionary Learning with Pairwise Constraints Dscrmnatve Dctonary Learnng wth Parwse Constrants Humn Guo Zhuoln Jang LARRY S. DAVIS UNIVERSITY OF MARYLAND Nov. 6 th, Outlne Introducton/motvaton Dctonary Learnng Dscrmnatve Dctonary Learnng wth Parwse

More information

Machine Learning 9. week

Machine Learning 9. week Machne Learnng 9. week Mappng Concept Radal Bass Functons (RBF) RBF Networks 1 Mappng It s probably the best scenaro for the classfcaton of two dataset s to separate them lnearly. As you see n the below

More information

Fusion Performance Model for Distributed Tracking and Classification

Fusion Performance Model for Distributed Tracking and Classification Fuson Performance Model for Dstrbuted rackng and Classfcaton K.C. Chang and Yng Song Dept. of SEOR, School of I&E George Mason Unversty FAIRFAX, VA kchang@gmu.edu Martn Lggns Verdan Systems Dvson, Inc.

More information

Real-time Joint Tracking of a Hand Manipulating an Object from RGB-D Input

Real-time Joint Tracking of a Hand Manipulating an Object from RGB-D Input Real-tme Jont Tracng of a Hand Manpulatng an Object from RGB-D Input Srnath Srdhar 1 Franzsa Mueller 1 Mchael Zollhöfer 1 Dan Casas 1 Antt Oulasvrta 2 Chrstan Theobalt 1 1 Max Planc Insttute for Informatcs

More information

A Binarization Algorithm specialized on Document Images and Photos

A Binarization Algorithm specialized on Document Images and Photos A Bnarzaton Algorthm specalzed on Document mages and Photos Ergna Kavalleratou Dept. of nformaton and Communcaton Systems Engneerng Unversty of the Aegean kavalleratou@aegean.gr Abstract n ths paper, a

More information

Edge Detection in Noisy Images Using the Support Vector Machines

Edge Detection in Noisy Images Using the Support Vector Machines Edge Detecton n Nosy Images Usng the Support Vector Machnes Hlaro Gómez-Moreno, Saturnno Maldonado-Bascón, Francsco López-Ferreras Sgnal Theory and Communcatons Department. Unversty of Alcalá Crta. Madrd-Barcelona

More information

X- Chart Using ANOM Approach

X- Chart Using ANOM Approach ISSN 1684-8403 Journal of Statstcs Volume 17, 010, pp. 3-3 Abstract X- Chart Usng ANOM Approach Gullapall Chakravarth 1 and Chaluvad Venkateswara Rao Control lmts for ndvdual measurements (X) chart are

More information

A Unified Framework for Semantics and Feature Based Relevance Feedback in Image Retrieval Systems

A Unified Framework for Semantics and Feature Based Relevance Feedback in Image Retrieval Systems A Unfed Framework for Semantcs and Feature Based Relevance Feedback n Image Retreval Systems Ye Lu *, Chunhu Hu 2, Xngquan Zhu 3*, HongJang Zhang 2, Qang Yang * School of Computng Scence Smon Fraser Unversty

More information

An Entropy-Based Approach to Integrated Information Needs Assessment

An Entropy-Based Approach to Integrated Information Needs Assessment Dstrbuton Statement A: Approved for publc release; dstrbuton s unlmted. An Entropy-Based Approach to ntegrated nformaton Needs Assessment June 8, 2004 Wllam J. Farrell Lockheed Martn Advanced Technology

More information

Lecture 4: Principal components

Lecture 4: Principal components /3/6 Lecture 4: Prncpal components 3..6 Multvarate lnear regresson MLR s optmal for the estmaton data...but poor for handlng collnear data Covarance matrx s not nvertble (large condton number) Robustness

More information

Three supervised learning methods on pen digits character recognition dataset

Three supervised learning methods on pen digits character recognition dataset Three supervsed learnng methods on pen dgts character recognton dataset Chrs Flezach Department of Computer Scence and Engneerng Unversty of Calforna, San Dego San Dego, CA 92093 cflezac@cs.ucsd.edu Satoru

More information

12/2/2009. Announcements. Parametric / Non-parametric. Case-Based Reasoning. Nearest-Neighbor on Images. Nearest-Neighbor Classification

12/2/2009. Announcements. Parametric / Non-parametric. Case-Based Reasoning. Nearest-Neighbor on Images. Nearest-Neighbor Classification Introducton to Artfcal Intellgence V22.0472-001 Fall 2009 Lecture 24: Nearest-Neghbors & Support Vector Machnes Rob Fergus Dept of Computer Scence, Courant Insttute, NYU Sldes from Danel Yeung, John DeNero

More information

Outline. Type of Machine Learning. Examples of Application. Unsupervised Learning

Outline. Type of Machine Learning. Examples of Application. Unsupervised Learning Outlne Artfcal Intellgence and ts applcatons Lecture 8 Unsupervsed Learnng Professor Danel Yeung danyeung@eee.org Dr. Patrck Chan patrckchan@eee.org South Chna Unversty of Technology, Chna Introducton

More information

User Authentication Based On Behavioral Mouse Dynamics Biometrics

User Authentication Based On Behavioral Mouse Dynamics Biometrics User Authentcaton Based On Behavoral Mouse Dynamcs Bometrcs Chee-Hyung Yoon Danel Donghyun Km Department of Computer Scence Department of Computer Scence Stanford Unversty Stanford Unversty Stanford, CA

More information

Classifying Acoustic Transient Signals Using Artificial Intelligence

Classifying Acoustic Transient Signals Using Artificial Intelligence Classfyng Acoustc Transent Sgnals Usng Artfcal Intellgence Steve Sutton, Unversty of North Carolna At Wlmngton (suttons@charter.net) Greg Huff, Unversty of North Carolna At Wlmngton (jgh7476@uncwl.edu)

More information

Lecture 5: Multilayer Perceptrons

Lecture 5: Multilayer Perceptrons Lecture 5: Multlayer Perceptrons Roger Grosse 1 Introducton So far, we ve only talked about lnear models: lnear regresson and lnear bnary classfers. We noted that there are functons that can t be represented

More information

Learning physical Models of Robots

Learning physical Models of Robots Learnng physcal Models of Robots Jochen Mück Technsche Unverstät Darmstadt jochen.mueck@googlemal.com Abstract In robotcs good physcal models are needed to provde approprate moton control for dfferent

More information

The Research of Support Vector Machine in Agricultural Data Classification

The Research of Support Vector Machine in Agricultural Data Classification The Research of Support Vector Machne n Agrcultural Data Classfcaton Le Sh, Qguo Duan, Xnmng Ma, Me Weng College of Informaton and Management Scence, HeNan Agrcultural Unversty, Zhengzhou 45000 Chna Zhengzhou

More information

Reducing Frame Rate for Object Tracking

Reducing Frame Rate for Object Tracking Reducng Frame Rate for Object Trackng Pavel Korshunov 1 and We Tsang Oo 2 1 Natonal Unversty of Sngapore, Sngapore 11977, pavelkor@comp.nus.edu.sg 2 Natonal Unversty of Sngapore, Sngapore 11977, oowt@comp.nus.edu.sg

More information

Econometrics 2. Panel Data Methods. Advanced Panel Data Methods I

Econometrics 2. Panel Data Methods. Advanced Panel Data Methods I Panel Data Methods Econometrcs 2 Advanced Panel Data Methods I Last tme: Panel data concepts and the two-perod case (13.3-4) Unobserved effects model: Tme-nvarant and dosyncratc effects Omted varables

More information

NAG Fortran Library Chapter Introduction. G10 Smoothing in Statistics

NAG Fortran Library Chapter Introduction. G10 Smoothing in Statistics Introducton G10 NAG Fortran Lbrary Chapter Introducton G10 Smoothng n Statstcs Contents 1 Scope of the Chapter... 2 2 Background to the Problems... 2 2.1 Smoothng Methods... 2 2.2 Smoothng Splnes and Regresson

More information

TN348: Openlab Module - Colocalization

TN348: Openlab Module - Colocalization TN348: Openlab Module - Colocalzaton Topc The Colocalzaton module provdes the faclty to vsualze and quantfy colocalzaton between pars of mages. The Colocalzaton wndow contans a prevew of the two mages

More information

Performance Evaluation of Information Retrieval Systems

Performance Evaluation of Information Retrieval Systems Why System Evaluaton? Performance Evaluaton of Informaton Retreval Systems Many sldes n ths secton are adapted from Prof. Joydeep Ghosh (UT ECE) who n turn adapted them from Prof. Dk Lee (Unv. of Scence

More information

6.854 Advanced Algorithms Petar Maymounkov Problem Set 11 (November 23, 2005) With: Benjamin Rossman, Oren Weimann, and Pouya Kheradpour

6.854 Advanced Algorithms Petar Maymounkov Problem Set 11 (November 23, 2005) With: Benjamin Rossman, Oren Weimann, and Pouya Kheradpour 6.854 Advanced Algorthms Petar Maymounkov Problem Set 11 (November 23, 2005) Wth: Benjamn Rossman, Oren Wemann, and Pouya Kheradpour Problem 1. We reduce vertex cover to MAX-SAT wth weghts, such that the

More information

Classifier Selection Based on Data Complexity Measures *

Classifier Selection Based on Data Complexity Measures * Classfer Selecton Based on Data Complexty Measures * Edth Hernández-Reyes, J.A. Carrasco-Ochoa, and J.Fco. Martínez-Trndad Natonal Insttute for Astrophyscs, Optcs and Electroncs, Lus Enrque Erro No.1 Sta.

More information

Content Based Image Retrieval Using 2-D Discrete Wavelet with Texture Feature with Different Classifiers

Content Based Image Retrieval Using 2-D Discrete Wavelet with Texture Feature with Different Classifiers IOSR Journal of Electroncs and Communcaton Engneerng (IOSR-JECE) e-issn: 78-834,p- ISSN: 78-8735.Volume 9, Issue, Ver. IV (Mar - Apr. 04), PP 0-07 Content Based Image Retreval Usng -D Dscrete Wavelet wth

More information

Optimizing Document Scoring for Query Retrieval

Optimizing Document Scoring for Query Retrieval Optmzng Document Scorng for Query Retreval Brent Ellwen baellwe@cs.stanford.edu Abstract The goal of ths project was to automate the process of tunng a document query engne. Specfcally, I used machne learnng

More information

A Semi-parametric Regression Model to Estimate Variability of NO 2

A Semi-parametric Regression Model to Estimate Variability of NO 2 Envronment and Polluton; Vol. 2, No. 1; 2013 ISSN 1927-0909 E-ISSN 1927-0917 Publshed by Canadan Center of Scence and Educaton A Sem-parametrc Regresson Model to Estmate Varablty of NO 2 Meczysław Szyszkowcz

More information

Fast Sparse Gaussian Processes Learning for Man-Made Structure Classification

Fast Sparse Gaussian Processes Learning for Man-Made Structure Classification Fast Sparse Gaussan Processes Learnng for Man-Made Structure Classfcaton Hang Zhou Insttute for Vson Systems Engneerng, Dept Elec. & Comp. Syst. Eng. PO Box 35, Monash Unversty, Clayton, VIC 3800, Australa

More information

An Image Fusion Approach Based on Segmentation Region

An Image Fusion Approach Based on Segmentation Region Rong Wang, L-Qun Gao, Shu Yang, Yu-Hua Cha, and Yan-Chun Lu An Image Fuson Approach Based On Segmentaton Regon An Image Fuson Approach Based on Segmentaton Regon Rong Wang, L-Qun Gao, Shu Yang 3, Yu-Hua

More information

SVM-based Learning for Multiple Model Estimation

SVM-based Learning for Multiple Model Estimation SVM-based Learnng for Multple Model Estmaton Vladmr Cherkassky and Yunqan Ma Department of Electrcal and Computer Engneerng Unversty of Mnnesota Mnneapols, MN 55455 {cherkass,myq}@ece.umn.edu Abstract:

More information

Problem Definitions and Evaluation Criteria for Computational Expensive Optimization

Problem Definitions and Evaluation Criteria for Computational Expensive Optimization Problem efntons and Evaluaton Crtera for Computatonal Expensve Optmzaton B. Lu 1, Q. Chen and Q. Zhang 3, J. J. Lang 4, P. N. Suganthan, B. Y. Qu 6 1 epartment of Computng, Glyndwr Unversty, UK Faclty

More information

A Background Subtraction for a Vision-based User Interface *

A Background Subtraction for a Vision-based User Interface * A Background Subtracton for a Vson-based User Interface * Dongpyo Hong and Woontack Woo KJIST U-VR Lab. {dhon wwoo}@kjst.ac.kr Abstract In ths paper, we propose a robust and effcent background subtracton

More information

The Codesign Challenge

The Codesign Challenge ECE 4530 Codesgn Challenge Fall 2007 Hardware/Software Codesgn The Codesgn Challenge Objectves In the codesgn challenge, your task s to accelerate a gven software reference mplementaton as fast as possble.

More information

Parameter estimation for incomplete bivariate longitudinal data in clinical trials

Parameter estimation for incomplete bivariate longitudinal data in clinical trials Parameter estmaton for ncomplete bvarate longtudnal data n clncal trals Naum M. Khutoryansky Novo Nordsk Pharmaceutcals, Inc., Prnceton, NJ ABSTRACT Bvarate models are useful when analyzng longtudnal data

More information

APPLICATION OF MULTIVARIATE LOSS FUNCTION FOR ASSESSMENT OF THE QUALITY OF TECHNOLOGICAL PROCESS MANAGEMENT

APPLICATION OF MULTIVARIATE LOSS FUNCTION FOR ASSESSMENT OF THE QUALITY OF TECHNOLOGICAL PROCESS MANAGEMENT 3. - 5. 5., Brno, Czech Republc, EU APPLICATION OF MULTIVARIATE LOSS FUNCTION FOR ASSESSMENT OF THE QUALITY OF TECHNOLOGICAL PROCESS MANAGEMENT Abstract Josef TOŠENOVSKÝ ) Lenka MONSPORTOVÁ ) Flp TOŠENOVSKÝ

More information

Corner-Based Image Alignment using Pyramid Structure with Gradient Vector Similarity

Corner-Based Image Alignment using Pyramid Structure with Gradient Vector Similarity Journal of Sgnal and Informaton Processng, 013, 4, 114-119 do:10.436/jsp.013.43b00 Publshed Onlne August 013 (http://www.scrp.org/journal/jsp) Corner-Based Image Algnment usng Pyramd Structure wth Gradent

More information

Unsupervised Learning

Unsupervised Learning Pattern Recognton Lecture 8 Outlne Introducton Unsupervsed Learnng Parametrc VS Non-Parametrc Approach Mxture of Denstes Maxmum-Lkelhood Estmates Clusterng Prof. Danel Yeung School of Computer Scence and

More information

Domain-Constrained Semi-Supervised Mining of Tracking Models in Sensor Networks

Domain-Constrained Semi-Supervised Mining of Tracking Models in Sensor Networks Doman-Constraned Sem-Supervsed Mnng of Trackng Models n Sensor Networks Rong Pan 1, Junhu Zhao 2, Vncent Wenchen Zheng 1, Jeffrey Junfeng Pan 1, Dou Shen 1, Snno Jaln Pan 1 and Qang Yang 1 1 Hong Kong

More information

For instance, ; the five basic number-sets are increasingly more n A B & B A A = B (1)

For instance, ; the five basic number-sets are increasingly more n A B & B A A = B (1) Secton 1.2 Subsets and the Boolean operatons on sets If every element of the set A s an element of the set B, we say that A s a subset of B, or that A s contaned n B, or that B contans A, and we wrte A

More information

EXTENDED BIC CRITERION FOR MODEL SELECTION

EXTENDED BIC CRITERION FOR MODEL SELECTION IDIAP RESEARCH REPORT EXTEDED BIC CRITERIO FOR ODEL SELECTIO Itshak Lapdot Andrew orrs IDIAP-RR-0-4 Dalle olle Insttute for Perceptual Artfcal Intellgence P.O.Box 59 artgny Valas Swtzerland phone +4 7

More information

3D vector computer graphics

3D vector computer graphics 3D vector computer graphcs Paolo Varagnolo: freelance engneer Padova Aprl 2016 Prvate Practce ----------------------------------- 1. Introducton Vector 3D model representaton n computer graphcs requres

More information

Wishing you all a Total Quality New Year!

Wishing you all a Total Quality New Year! Total Qualty Management and Sx Sgma Post Graduate Program 214-15 Sesson 4 Vnay Kumar Kalakband Assstant Professor Operatons & Systems Area 1 Wshng you all a Total Qualty New Year! Hope you acheve Sx sgma

More information

Some Advanced SPC Tools 1. Cumulative Sum Control (Cusum) Chart For the data shown in Table 9-1, the x chart can be generated.

Some Advanced SPC Tools 1. Cumulative Sum Control (Cusum) Chart For the data shown in Table 9-1, the x chart can be generated. Some Advanced SP Tools 1. umulatve Sum ontrol (usum) hart For the data shown n Table 9-1, the x chart can be generated. However, the shft taken place at sample #21 s not apparent. 92 For ths set samples,

More information

Machine Learning: Algorithms and Applications

Machine Learning: Algorithms and Applications 14/05/1 Machne Learnng: Algorthms and Applcatons Florano Zn Free Unversty of Bozen-Bolzano Faculty of Computer Scence Academc Year 011-01 Lecture 10: 14 May 01 Unsupervsed Learnng cont Sldes courtesy of

More information

Complex Numbers. Now we also saw that if a and b were both positive then ab = a b. For a second let s forget that restriction and do the following.

Complex Numbers. Now we also saw that if a and b were both positive then ab = a b. For a second let s forget that restriction and do the following. Complex Numbers The last topc n ths secton s not really related to most of what we ve done n ths chapter, although t s somewhat related to the radcals secton as we wll see. We also won t need the materal

More information

Fast and Scalable Training of Semi-Supervised CRFs with Application to Activity Recognition

Fast and Scalable Training of Semi-Supervised CRFs with Application to Activity Recognition Fast and Scalable Tranng of Sem-Supervsed CRFs wth Applcaton to Actvty Recognton Maryam Mahdavan Computer Scence Department Unversty of Brtsh Columba Vancouver, BC, Canada Tanzeem Choudhury Intel Research

More information

A Fast Visual Tracking Algorithm Based on Circle Pixels Matching

A Fast Visual Tracking Algorithm Based on Circle Pixels Matching A Fast Vsual Trackng Algorthm Based on Crcle Pxels Matchng Zhqang Hou hou_zhq@sohu.com Chongzhao Han czhan@mal.xjtu.edu.cn Ln Zheng Abstract: A fast vsual trackng algorthm based on crcle pxels matchng

More information

High-Boost Mesh Filtering for 3-D Shape Enhancement

High-Boost Mesh Filtering for 3-D Shape Enhancement Hgh-Boost Mesh Flterng for 3-D Shape Enhancement Hrokazu Yagou Λ Alexander Belyaev y Damng We z Λ y z ; ; Shape Modelng Laboratory, Unversty of Azu, Azu-Wakamatsu 965-8580 Japan y Computer Graphcs Group,

More information

A Similarity-Based Prognostics Approach for Remaining Useful Life Estimation of Engineered Systems

A Similarity-Based Prognostics Approach for Remaining Useful Life Estimation of Engineered Systems 2008 INTERNATIONAL CONFERENCE ON PROGNOSTICS AND HEALTH MANAGEMENT A Smlarty-Based Prognostcs Approach for Remanng Useful Lfe Estmaton of Engneered Systems Tany Wang, Janbo Yu, Davd Segel, and Jay Lee

More information

Helsinki University Of Technology, Systems Analysis Laboratory Mat Independent research projects in applied mathematics (3 cr)

Helsinki University Of Technology, Systems Analysis Laboratory Mat Independent research projects in applied mathematics (3 cr) Helsnk Unversty Of Technology, Systems Analyss Laboratory Mat-2.08 Independent research projects n appled mathematcs (3 cr) "! #$&% Antt Laukkanen 506 R ajlaukka@cc.hut.f 2 Introducton...3 2 Multattrbute

More information

Simulation Based Analysis of FAST TCP using OMNET++

Simulation Based Analysis of FAST TCP using OMNET++ Smulaton Based Analyss of FAST TCP usng OMNET++ Umar ul Hassan 04030038@lums.edu.pk Md Term Report CS678 Topcs n Internet Research Sprng, 2006 Introducton Internet traffc s doublng roughly every 3 months

More information

Improved Methods for Lithography Model Calibration

Improved Methods for Lithography Model Calibration Improved Methods for Lthography Model Calbraton Chrs Mack www.lthoguru.com, Austn, Texas Abstract Lthography models, ncludng rgorous frst prncple models and fast approxmate models used for OPC, requre

More information

Fuzzy Filtering Algorithms for Image Processing: Performance Evaluation of Various Approaches

Fuzzy Filtering Algorithms for Image Processing: Performance Evaluation of Various Approaches Proceedngs of the Internatonal Conference on Cognton and Recognton Fuzzy Flterng Algorthms for Image Processng: Performance Evaluaton of Varous Approaches Rajoo Pandey and Umesh Ghanekar Department of

More information

R s s f. m y s. SPH3UW Unit 7.3 Spherical Concave Mirrors Page 1 of 12. Notes

R s s f. m y s. SPH3UW Unit 7.3 Spherical Concave Mirrors Page 1 of 12. Notes SPH3UW Unt 7.3 Sphercal Concave Mrrors Page 1 of 1 Notes Physcs Tool box Concave Mrror If the reflectng surface takes place on the nner surface of the sphercal shape so that the centre of the mrror bulges

More information

Synthesizer 1.0. User s Guide. A Varying Coefficient Meta. nalytic Tool. Z. Krizan Employing Microsoft Excel 2007

Synthesizer 1.0. User s Guide. A Varying Coefficient Meta. nalytic Tool. Z. Krizan Employing Microsoft Excel 2007 Syntheszer 1.0 A Varyng Coeffcent Meta Meta-Analytc nalytc Tool Employng Mcrosoft Excel 007.38.17.5 User s Gude Z. Krzan 009 Table of Contents 1. Introducton and Acknowledgments 3. Operatonal Functons

More information

EYE CENTER LOCALIZATION ON A FACIAL IMAGE BASED ON MULTI-BLOCK LOCAL BINARY PATTERNS

EYE CENTER LOCALIZATION ON A FACIAL IMAGE BASED ON MULTI-BLOCK LOCAL BINARY PATTERNS P.G. Demdov Yaroslavl State Unversty Anatoly Ntn, Vladmr Khryashchev, Olga Stepanova, Igor Kostern EYE CENTER LOCALIZATION ON A FACIAL IMAGE BASED ON MULTI-BLOCK LOCAL BINARY PATTERNS Yaroslavl, 2015 Eye

More information

Relevance Assignment and Fusion of Multiple Learning Methods Applied to Remote Sensing Image Analysis

Relevance Assignment and Fusion of Multiple Learning Methods Applied to Remote Sensing Image Analysis Assgnment and Fuson of Multple Learnng Methods Appled to Remote Sensng Image Analyss Peter Bajcsy, We-Wen Feng and Praveen Kumar Natonal Center for Supercomputng Applcaton (NCSA), Unversty of Illnos at

More information

Type-2 Fuzzy Non-uniform Rational B-spline Model with Type-2 Fuzzy Data

Type-2 Fuzzy Non-uniform Rational B-spline Model with Type-2 Fuzzy Data Malaysan Journal of Mathematcal Scences 11(S) Aprl : 35 46 (2017) Specal Issue: The 2nd Internatonal Conference and Workshop on Mathematcal Analyss (ICWOMA 2016) MALAYSIAN JOURNAL OF MATHEMATICAL SCIENCES

More information

Object-Based Techniques for Image Retrieval

Object-Based Techniques for Image Retrieval 54 Zhang, Gao, & Luo Chapter VII Object-Based Technques for Image Retreval Y. J. Zhang, Tsnghua Unversty, Chna Y. Y. Gao, Tsnghua Unversty, Chna Y. Luo, Tsnghua Unversty, Chna ABSTRACT To overcome the

More information

MOTION BLUR ESTIMATION AT CORNERS

MOTION BLUR ESTIMATION AT CORNERS Gacomo Boracch and Vncenzo Caglot Dpartmento d Elettronca e Informazone, Poltecnco d Mlano, Va Ponzo, 34/5-20133 MILANO boracch@elet.polm.t, caglot@elet.polm.t Keywords: Abstract: Pont Spread Functon Parameter

More information

A Statistical Model Selection Strategy Applied to Neural Networks

A Statistical Model Selection Strategy Applied to Neural Networks A Statstcal Model Selecton Strategy Appled to Neural Networks Joaquín Pzarro Elsa Guerrero Pedro L. Galndo joaqun.pzarro@uca.es elsa.guerrero@uca.es pedro.galndo@uca.es Dpto Lenguajes y Sstemas Informátcos

More information

Active Contours/Snakes

Active Contours/Snakes Actve Contours/Snakes Erkut Erdem Acknowledgement: The sldes are adapted from the sldes prepared by K. Grauman of Unversty of Texas at Austn Fttng: Edges vs. boundares Edges useful sgnal to ndcate occludng

More information

Biostatistics 615/815

Biostatistics 615/815 The E-M Algorthm Bostatstcs 615/815 Lecture 17 Last Lecture: The Smplex Method General method for optmzaton Makes few assumptons about functon Crawls towards mnmum Some recommendatons Multple startng ponts

More information

Life Tables (Times) Summary. Sample StatFolio: lifetable times.sgp

Life Tables (Times) Summary. Sample StatFolio: lifetable times.sgp Lfe Tables (Tmes) Summary... 1 Data Input... 2 Analyss Summary... 3 Survval Functon... 5 Log Survval Functon... 6 Cumulatve Hazard Functon... 7 Percentles... 7 Group Comparsons... 8 Summary The Lfe Tables

More information

Learning Topic Structure in Text Documents using Generative Topic Models

Learning Topic Structure in Text Documents using Generative Topic Models Learnng Topc Structure n Text Documents usng Generatve Topc Models Ntsh Srvastava CS 397 Report Advsor: Dr Hrsh Karnck Abstract We present a method for estmatng the topc structure for a document corpus

More information

Online Detection and Classification of Moving Objects Using Progressively Improving Detectors

Online Detection and Classification of Moving Objects Using Progressively Improving Detectors Onlne Detecton and Classfcaton of Movng Objects Usng Progressvely Improvng Detectors Omar Javed Saad Al Mubarak Shah Computer Vson Lab School of Computer Scence Unversty of Central Florda Orlando, FL 32816

More information

Solving two-person zero-sum game by Matlab

Solving two-person zero-sum game by Matlab Appled Mechancs and Materals Onlne: 2011-02-02 ISSN: 1662-7482, Vols. 50-51, pp 262-265 do:10.4028/www.scentfc.net/amm.50-51.262 2011 Trans Tech Publcatons, Swtzerland Solvng two-person zero-sum game by

More information

Determining the Optimal Bandwidth Based on Multi-criterion Fusion

Determining the Optimal Bandwidth Based on Multi-criterion Fusion Proceedngs of 01 4th Internatonal Conference on Machne Learnng and Computng IPCSIT vol. 5 (01) (01) IACSIT Press, Sngapore Determnng the Optmal Bandwdth Based on Mult-crteron Fuson Ha-L Lang 1+, Xan-Mn

More information

CS434a/541a: Pattern Recognition Prof. Olga Veksler. Lecture 15

CS434a/541a: Pattern Recognition Prof. Olga Veksler. Lecture 15 CS434a/541a: Pattern Recognton Prof. Olga Veksler Lecture 15 Today New Topc: Unsupervsed Learnng Supervsed vs. unsupervsed learnng Unsupervsed learnng Net Tme: parametrc unsupervsed learnng Today: nonparametrc

More information

APPLICATION OF PREDICTION-BASED PARTICLE FILTERS FOR TELEOPERATIONS OVER THE INTERNET

APPLICATION OF PREDICTION-BASED PARTICLE FILTERS FOR TELEOPERATIONS OVER THE INTERNET APPLICATION OF PREDICTION-BASED PARTICLE FILTERS FOR TELEOPERATIONS OVER THE INTERNET Jae-young Lee, Shahram Payandeh, and Ljljana Trajovć School of Engneerng Scence Smon Fraser Unversty 8888 Unversty

More information

Brushlet Features for Texture Image Retrieval

Brushlet Features for Texture Image Retrieval DICTA00: Dgtal Image Computng Technques and Applcatons, 1 January 00, Melbourne, Australa 1 Brushlet Features for Texture Image Retreval Chbao Chen and Kap Luk Chan Informaton System Research Lab, School

More information

Lecture 5: Probability Distributions. Random Variables

Lecture 5: Probability Distributions. Random Variables Lecture 5: Probablty Dstrbutons Random Varables Probablty Dstrbutons Dscrete Random Varables Contnuous Random Varables and ther Dstrbutons Dscrete Jont Dstrbutons Contnuous Jont Dstrbutons Independent

More information

Bayesian Approach for Fatigue Life Prediction from Field Inspection

Bayesian Approach for Fatigue Life Prediction from Field Inspection Bayesan Approach for Fatgue Lfe Predcton from Feld Inspecton Dawn An, and Jooho Cho School of Aerospace & Mechancal Engneerng, Korea Aerospace Unversty skal@nate.com, jhcho@kau.ac.kr Nam H. Km, and Srram

More information

Positive Semi-definite Programming Localization in Wireless Sensor Networks

Positive Semi-definite Programming Localization in Wireless Sensor Networks Postve Sem-defnte Programmng Localzaton n Wreless Sensor etworks Shengdong Xe 1,, Jn Wang, Aqun Hu 1, Yunl Gu, Jang Xu, 1 School of Informaton Scence and Engneerng, Southeast Unversty, 10096, anjng Computer

More information

Intelligent Information Acquisition for Improved Clustering

Intelligent Information Acquisition for Improved Clustering Intellgent Informaton Acquston for Improved Clusterng Duy Vu Unversty of Texas at Austn duyvu@cs.utexas.edu Mkhal Blenko Mcrosoft Research mblenko@mcrosoft.com Prem Melvlle IBM T.J. Watson Research Center

More information

Modeling Waveform Shapes with Random Effects Segmental Hidden Markov Models

Modeling Waveform Shapes with Random Effects Segmental Hidden Markov Models Modelng Waveform Shapes wth Random Effects Segmental Hdden Markov Models Seyoung Km, Padhrac Smyth Department of Computer Scence Unversty of Calforna, Irvne CA 9697-345 {sykm,smyth}@cs.uc.edu Abstract

More information

An Application of the Dulmage-Mendelsohn Decomposition to Sparse Null Space Bases of Full Row Rank Matrices

An Application of the Dulmage-Mendelsohn Decomposition to Sparse Null Space Bases of Full Row Rank Matrices Internatonal Mathematcal Forum, Vol 7, 2012, no 52, 2549-2554 An Applcaton of the Dulmage-Mendelsohn Decomposton to Sparse Null Space Bases of Full Row Rank Matrces Mostafa Khorramzadeh Department of Mathematcal

More information

A New Approach For the Ranking of Fuzzy Sets With Different Heights

A New Approach For the Ranking of Fuzzy Sets With Different Heights New pproach For the ankng of Fuzzy Sets Wth Dfferent Heghts Pushpnder Sngh School of Mathematcs Computer pplcatons Thapar Unversty, Patala-7 00 Inda pushpndersnl@gmalcom STCT ankng of fuzzy sets plays

More information

THE THEORY OF REGIONALIZED VARIABLES

THE THEORY OF REGIONALIZED VARIABLES CHAPTER 4 THE THEORY OF REGIONALIZED VARIABLES 4.1 Introducton It s ponted out by Armstrong (1998 : 16) that Matheron (1963b), realzng the sgnfcance of the spatal aspect of geostatstcal data, coned the

More information