A Machine Learning Approach to Developing Rigid-body Dynamics Simulators for Quadruped Trot Gaits


Jin-Wook Lee

Abstract: I present a machine learning based rigid-body dynamics simulator for trot gaits of the quadruped LittleDog. My contribution can be divided into two parts: the first concerns the reduction of the one-time-step prediction error, and the second a more accurate prediction over longer time scales. First, in order to reduce the one-step prediction error, I compared three regression methods: 1st order linear regression (LR), locally weighted projection regression (LWPR) [1] and Gaussian process regression (GPR) [2]. Although GPR shows the highest accuracy, its cost for computation and storage, O(n^3) and O(n^2) respectively, is too high to handle the large amount of training data required in my problem. Therefore, I developed a sparse GPR, called local GPR (LGPR). In LGPR, the training data are divided into k clusters by k-means, and a locally full GPR model is constructed for each cluster. The prediction is done with the local model closest to a test data point, in terms of the normalized Euclidean distance. The cost for computation is tractable, approximately O(m^3) where m = n/k, making it possible to use a large amount of training data. I show that LGPR significantly reduces the one-step prediction error. Second, to reduce the accumulation of prediction error, I propose a projection method called α-PROJ. I found that using LGPR as it is accumulates too much prediction error over longer time scales. One of the main sources of such error turned out to be predicted states that are not within the dynamically feasible region. α-PROJ projects a predicted state onto that region. I defined the region without using any robot-specific information about LittleDog (e.g. max/min joint angles); it is defined only by the training data. Thus, this projection method can be applied to other robot systems without modification. α-PROJ greatly improved the prediction over longer time scales.
The empirical results show that these key ideas led to a significant reduction of prediction error and better performance than the ODE simulator.

I. INTRODUCTION

A. SIMULATOR FOR QUADRUPED TROT GAITS

In recent years, machine learning algorithms have been extensively applied to the development of robotic controllers that learn a robot's dynamics. When developing such controllers, it is very useful to have an accurate simulator in order to test a controller's performance before applying it to a real robot. Especially for ground robots, there have been several rigid-body dynamics engines, the most widely used being the Open Dynamics Engine (ODE). Although ODE is used in wide-ranging applications, not only for robotics but also for games and animations, it is not accurate enough for engineering purposes. Furthermore, it is in general very hard to get an accurate dynamics model for complex robot systems [3]. Various characteristics of a robot, such as a complex body shape, joint motors and other specifications, which can significantly affect its dynamics, are far too complicated to be input to an ODE-type simulator. In order to resolve these challenges, I propose a learning based approach to developing an accurate simulator (hereafter, an ML simulator) that does not require any robot-specific information, but learns from data for the robot of interest. I focus on learning the dynamics of trot gaits of the quadruped LittleDog, designed and built by Boston Dynamics, on flat terrain.

Fig. 1. The LittleDog robot.

B. APPROACH IN THIS RESEARCH

Denote by s_t the state of the robot at a discrete time step t; our goal is then to predict s_{t+1} for t = 1, ..., T as follows: s_{t+1} = predict(s_t, u_t). A state s lies in R^N and an action u in R^M, where N = 33 and M = 12 for the problem I deal with. The initial state s_1 and the actions u_t for 1 ≤ t ≤ T are given. s_t consists of the body's position, linear velocity and orientation, the joint angles, and their time-derivatives, such that it can solely represent the state of the robot at time t.
Unlike ordinary regression problems, developing this kind of ML simulator is challenging for two reasons. First, the ML simulator should accurately predict a multi-dimensional output, i.e. every element of s_{t+1}. Second, a predicted state ŝ_{t+1} must be used as the feature set to predict its successor ŝ_{t+2}. This means that once ŝ_t deviates from the dynamically feasible region, the prediction error will quickly accumulate or diverge. In order to resolve these difficulties, I set two main goals. First, I focused on reducing the one-step prediction error. I compared various regression methods: 1st order linear regression (LR), locally weighted projection regression (LWPR) and Gaussian process regression (GPR). GPR was the best among them. However, GPR suffers from an extremely high cost for computation, O(n^3), and storage, O(n^2), such that only thousands of training data points can actually be used for a prediction. Thus, I adopted a sparse GPR, called subset of regressors (SoR) [4], [5], [6]. For this kind of sparse GPR, it is very important to choose a good set of m support data points which can represent the entire n (>> m) training data points. I propose an efficient method that finds such support data points by a k-fold-CV test in the training data pool. This selection method outperformed ordinary random sampling. However, the accuracy was still not good enough for use as a simulator. Thus, I developed a local GPR (LGPR) that divides the training data points into k clusters by k-means, and predicts with only the one local model closest to

a test data point. This further reduced the one-step prediction error, and the computational cost also dropped to a tractable level, O(m^3) where m = n/k, while exploiting all data points in the training data pool. The second part of my contribution concerns the reduction of the prediction error over longer time scales (hereafter, the sequential prediction error). The sequential prediction error accumulates as the time step increases, because each one-step prediction error is not zero. This accumulation can cause a predicted state ŝ_t to deviate from the feasible data region (FDR), in which the robot's dynamic constraints are satisfied. I propose a projection method called α-PROJ that prevents this, and show that the sequential prediction error significantly decreases.

Section II describes data collection and feature selection. In Section III, various regression methods are compared. Section IV proposes the method of choosing support data points. Section V explains the proposed novel LGPR. Section VI presents the projection method α-PROJ that significantly reduces the sequential prediction error. Section VII presents empirical results, and Section VIII contains concluding remarks.

II. DATA AND FEATURE SELECTION

I collected data from LittleDog's trot gaits. Each data point is encoded as {(s_t, u_t), s_{t+1}}: the feature part and the response part. Each element of the feature part (s_t, u_t) has its own scale, so each element is normalized to unit variance. I split the collected data into two categories, TrainPool and TestPool, such that all the training data come from TrainPool and all the test data from TestPool. The time step was 0.1 second, and one trot move lasted a few seconds, so each move yields a few dozen data points. There were 10 types of moves: F/FL/FR (turning or moving straight forward), B/BL/BR (turning or moving straight backward), L/R (moving sidewise), DL/DR (moving diagonally forward) and S (standing). For each type of trot gait, a number of moves were recorded; most were assigned to TrainPool and 6 per type to TestPool.
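The encoding and per-feature normalization just described can be sketched as follows; `make_dataset` and `normalize` are hypothetical helper names, and the toy arrays stand in for real trot-gait logs:

```python
import numpy as np

def make_dataset(states, actions):
    """Encode a trajectory as {(s_t, u_t), s_{t+1}} pairs.

    states:  (T+1, N) array of states  s_1 .. s_{T+1}
    actions: (T,   M) array of actions u_1 .. u_T
    """
    X = np.hstack([states[:-1], actions])   # feature part (s_t, u_t)
    Y = states[1:]                          # response part s_{t+1}
    return X, Y

def normalize(X, eps=1e-12):
    """Scale each feature column to unit variance, as in Section II."""
    std = X.std(axis=0)
    return X / (std + eps), std

# toy example: 5 time steps, 3-d state, 2-d action
rng = np.random.default_rng(0)
states = rng.normal(size=(6, 3))
actions = rng.normal(size=(5, 2))
X, Y = make_dataset(states, actions)
Xn, scale = normalize(X)
print(X.shape, Y.shape)   # (5, 5) (5, 3)
```

Keeping the per-column scales around (here `scale`) matters later: a query state must be normalized with the same factors before its distance to training points is measured.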
Now, prior to the discussion of the selection of a proper feature set, let me define a robot's state s in more detail. There are ten rigid bodies: the robot body, eight leg parts and the ground. The joint constraints among these rigid bodies restrict the degrees of freedom (DOF) of the system to 18, so I have an 18-dimensional configuration space: q = (x, y, z, φ, θ, ψ, ϕ_11, ..., ϕ_43). I denote by (x, y, z) the position of the center of gravity (COG) of the robot body, and by (φ, θ, ψ) the roll-pitch-yaw orientation of the body. ϕ_ij represents the j-th joint angle of the i-th leg, where each leg has three joint angles. Additionally, I consider the first order time-derivatives, i.e. the linear velocity and the angular rates of the orientation and joint angles. One might think that a state should be written as the following:

s = (x^w, y^w, z^w, ẋ^w, ẏ^w, ż^w, φ, θ, ψ, φ̇, θ̇, ψ̇, ϕ_11, ..., ϕ_43, ρ_11, ..., ρ_43) ∈ R^36,  (1)

where the superscript w stands for the world frame, b (used below) for the body frame, and ρ_ij denotes the angular rate of each joint. However, taking into account the fact that features should be relevant to predicting a response, three features (x^w, y^w and ψ) should be removed from the feature set. The robot's x-y position (x^w, y^w) does not actually affect the robot's behavior, and neither does the yaw angle ψ.

Fig. 2. One-step prediction for a multi-dimensional response.

Excluding these features is very important for defining the feasible data region (FDR), because FDR must be bounded; I will discuss FDR in Section VI. For a similar reason, ẋ^w, ẏ^w and ż^w should be rewritten so as to originate from the body frame. Because the time step is fixed at 0.1 s, I used the difference terms Δx^b, Δy^b, Δz^b, Δφ, Δθ and Δψ rather than the velocity terms ẋ^b, ẏ^b, ż^b, φ̇, θ̇ and ψ̇. Finally, a state s can be rewritten as:

s = (z^w, Δx^b, Δy^b, Δz^b, φ, θ, Δφ, Δθ, Δψ, ϕ_11, ..., ϕ_43, ρ_11, ..., ρ_43) ∈ R^33.  (2)

An action u is simply u = (ϕ̃_11, ..., ϕ̃_43) ∈ R^12, where ϕ̃_ij denotes the desired joint angle for the next time step.
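The switch from world-frame displacement to a body-frame difference term can be illustrated with a small helper. This is a hypothetical sketch: only the yaw rotation is undone here, a simplification, since the paper does not spell out its exact frame convention.

```python
import numpy as np

def body_frame_delta(p0_world, p1_world, yaw0):
    """Express the world-frame displacement p1 - p0 in the body frame of
    the first pose. Only yaw is undone (simplifying assumption); a full
    treatment would also account for roll and pitch.
    """
    c, s = np.cos(yaw0), np.sin(yaw0)
    R_wb = np.array([[c, s],      # world -> body rotation (2-d)
                     [-s, c]])
    return R_wb @ (p1_world - p0_world)

# a robot facing world +y (yaw = 90 deg) that moved one unit along
# world +y has moved one unit along its own forward (+x) axis:
d = body_frame_delta(np.array([0.0, 0.0]), np.array([0.0, 1.0]), np.pi / 2)
print(np.round(d, 6))   # [1. 0.]
```

This is exactly why (x^w, y^w, ψ) can be dropped: the body-frame delta is invariant to where the robot stands and which way it faces.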
Therefore, there are 45 (= 33 + 12) features and 33 responses.

III. COMPARISON OF REGRESSION METHODS

For the one-step prediction, the problem I consider has a multi-dimensional response s_{t+1}. As illustrated in Fig. 2, I used a state s_t and an action u_t to predict each element of the response s_{t+1}. I applied the same regression method to predict each s^i_{t+1} (s^i denotes the i-th element of a state s). To reduce the one-step prediction error, I first compared three regression methods: 1st order linear regression (LR) as a baseline approach, locally weighted projection regression (LWPR) [1] and Gaussian process regression (GPR) [2]. LR predicts a multi-dimensional response as Ȳ = X*(X'X)^{-1}X'Y, where X and Y are the design matrix and the response matrix, respectively, and X* is a test feature set. LWPR manages K locally linear models, called receptive fields, each with a center c_k. The prediction is the weighted mean over the models: ŷ = Σ_{k=1..K} w_k ŷ_k / Σ_{k=1..K} w_k. Each model's weight is w_k = exp(-(x - c_k)' D_k (x - c_k)/2), where D_k is the distance metric of a Gaussian kernel. LWPR learns the training data one by one, updating its centers c_k and parameters and increasing or decreasing the number of receptive fields as necessary. LWPR is parametric, so it can predict quickly and handle a large number of training data. However, its major drawback is the required manual tuning of many highly data-dependent parameters [3]. GPR assumes y = f(x) + ε with additive Gaussian noise ε of variance σ_n^2. It predicts a response f̄(x*) = k*'(K + σ_n^2 I)^{-1} y with variance V[f*] = k(x*, x*) - k*'(K + σ_n^2 I)^{-1} k*, where k*' = φ(x*)' Σ_p Φ, k(x_p, x_q) = φ(x_p)' Σ_p φ(x_q) and K = Φ' Σ_p Φ. GPR requires less data-specific tuning of the hyperparameters of its covariance function [2], and they can be easily optimized by common gradient ascent algorithms. However, it suffers from an extremely high computational cost, O(n^3), and memory usage, O(n^2), for inverting (K + σ_n^2 I). Various sparse approximations have been applied to full GPR to resolve this drawback [7]. I will

discuss this in the following section. For GPR, I chose automatic relevance determination (ARD): a squared exponential kernel with an individual distance metric for each feature. I then optimized its hyperparameters using a conjugate gradient method that minimizes the negative marginal log-likelihood. These optimized hyperparameters were also used to tune the distance metric of each feature for LWPR. I evaluated the three regressors by a hold-out cross-validation test with an identical test data set, using the normalized mean squared error (nMSE) as the error metric so that the errors of the different elements of a state can be compared easily. In terms of test error, GPR outperformed the other two regressors, as shown in Table I. Therefore, I decided to use GPR for implementing the ML simulator.

IV. SAMPLE SELECTION FOR SPARSE GPR

Due to GPR's high cost of computation and memory, I considered sparse GPR methods, which are designed for handling a larger set of training data. I tried one of them, called subset of regressors (SoR) [4], [5], [6]. The key idea of SoR's approximation is that it chooses m 'support data points' that can represent the entire training data pool TrainPool. In SoR, the remaining (n - m) training data points are assumed to be independent of one another and to communicate only through the m support data points. SoR's costs for computation and memory are O(m^2 n) and O(mn), respectively. Needless to say, it is very important for SoR to choose m good support data points out of the entire n training data points. These data points should be well distributed so that they cover most of TrainPool. There are several approaches that jointly find optimal hyperparameters and support data points for a given m [8]. However, especially for the high dimensional and data-rich problem I consider, I could not obtain a solution in a reasonable time. Instead, I devised a simple but efficient method that finds support data points by doing a k-fold-CV test in the training data pool TrainPool.
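A minimal sketch of this k-fold-CV selection idea (whose details the text explains next): train on each small fold, test on the rest, turn each point's accumulated error into a sampling distribution, and draw the m support points from it. A cheap ridge regressor stands in for full GPR purely to keep the example small; all helper names are hypothetical.

```python
import numpy as np

def separateness_distribution(X, y, k, fit, predict, rng):
    """Each fold of n/k points is used for training; the other n(k-1)/k
    points are tested, so every point collects k-1 squared errors.
    Normalizing the summed errors gives a distribution over the pool."""
    n = len(X)
    folds = np.array_split(rng.permutation(n), k)
    err = np.zeros(n)
    for f in folds:
        mask = np.ones(n, dtype=bool)
        mask[f] = False                      # f is the small training fold
        model = fit(X[f], y[f])
        err[mask] += (predict(model, X[mask]) - y[mask]) ** 2
    return err / err.sum()

def sample_support_points(X, y, k, m, fit, predict, seed=0):
    """Draw m support indices from the separateness distribution."""
    rng = np.random.default_rng(seed)
    p = separateness_distribution(X, y, k, fit, predict, rng)
    return rng.choice(len(X), size=m, replace=False, p=p)

# stand-in regressor: ridge regression instead of full GPR (assumption)
def fit(X, y):
    return np.linalg.solve(X.T @ X + 1e-3 * np.eye(X.shape[1]), X.T @ y)

def predict(w, X):
    return X @ w

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
y = X @ rng.normal(size=5) + 0.1 * rng.normal(size=200)
support = sample_support_points(X, y, 5, 20, fit, predict)
print(len(support))   # 20
```

Points with large held-out error are sampled more often, which is the intended bias toward 'separated' regions of the pool.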
Because I used a squared exponential kernel, the covariances among data points are closely related to their normalized Euclidean distances. Therefore, if a data point is not predicted well in a leave-one-out cross-validation (LOOCV) test, the point is likely to be separated from the other data points. To cover a wider range of TrainPool, as many of these separated data points as possible should be included in the set of support data points. The key idea of my sampling method is to measure the 'separateness' of each training data point, yielding a 'separateness distribution'. Sampling support data points from this distribution prevents choosing too many from clustered data points, so that the selected ones are well distributed over TrainPool. An LOOCV test would be ideal for my sampling method; the problem, however, is that full GPR can hold only thousands of training data points in each round of a CV test. Thus, I applied a k-fold-CV test such that only a small number of training data points are involved in each round. First, I randomly divided the entire n data points in TrainPool into k groups, and performed k rounds of CV test with full GPR; n/k training data points and n(k-1)/k test data points are involved in each round. Each data point in TrainPool was thus tested k-1 times, and hence has k-1 prediction errors. Summing these errors for each data point and normalizing over the n training data points gives the 'separateness distribution' over the entire training data pool. Because I applied an individual regression model to each element of a state s, the separateness distribution has to be computed for each element separately.

V. LOCAL GPR

SoR gave a smaller prediction error. However, the one-step prediction should be as accurate as possible, because the sequential prediction error can greatly accumulate through a number of one-step predictions. Thus, I invented a novel sparse GPR called local GPR (LGPR). LGPR can take advantage of the entire set of training data points. It divides TrainPool into k clusters by k-means, and constructs k locally full GPR models.
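The LGPR construction can be sketched as follows. This is a simplified, hypothetical version: a single shared length-scale squared-exponential kernel stands in for the paper's per-feature ARD metric, plain Euclidean distance replaces the normalized one, and the tiny k-means loop is only for self-containedness.

```python
import numpy as np

def sq_exp_kernel(A, B, ell=1.0):
    """Squared-exponential kernel with one shared length-scale
    (the paper uses ARD, i.e. a metric per feature -- an assumption)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell**2)

def kmeans(X, k, iters=20, seed=0):
    """Minimal Lloyd's k-means, enough for the sketch."""
    rng = np.random.default_rng(seed)
    C = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        lab = ((X[:, None] - C[None]) ** 2).sum(-1).argmin(1)
        C = np.array([X[lab == j].mean(0) if (lab == j).any() else C[j]
                      for j in range(k)])
    return C, lab

class LocalGPR:
    """k locally full GP models; a query goes to the nearest cluster."""
    def __init__(self, X, y, k, noise=1e-2, ell=1.0):
        self.C, lab = kmeans(X, k)
        self.ell = ell
        self.models = []
        for j in range(k):
            Xj, yj = X[lab == j], y[lab == j]
            K = sq_exp_kernel(Xj, Xj, ell) + noise * np.eye(len(Xj))
            self.models.append((Xj, np.linalg.solve(K, yj)))

    def predict(self, x):
        j = ((self.C - x) ** 2).sum(1).argmin()   # closest local model
        Xj, alpha = self.models[j]
        return sq_exp_kernel(x[None], Xj, self.ell) @ alpha

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(300, 1))
y = np.sin(X[:, 0])
gp = LocalGPR(X, y, k=5)
print(float(gp.predict(np.array([1.0]))[0]))   # close to sin(1.0) ≈ 0.841
```

Only the chosen cluster's O(m^3) solve is ever paid at training time, and prediction touches a single m-point model, which is the cost argument made in the text.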
A prediction for a test data point is done with the closest local model only. This is similar to a block-diagonal approximation of the covariance matrix; it differs in that only one local model is activated for a prediction, which is computationally less expensive. Of course, the hyperparameters are re-optimized for each local model. Each local model has approximately m = n/k training data points, so the computational cost of LGPR is O(m^3) if the overhead of finding the closest local model is ignored. LGPR can be a better alternative for high dimensional problems which require a very large set of training data for an accurate prediction.

VI. PROJECTION TO TRAINING DATA POOL

In the previous sections I focused on the reduction of the one-step prediction error. One might think that the sequential prediction will then be done well simply by running a number of one-step predictions in a row. However, a predicted state ŝ_t will greatly deviate from the dynamically feasible region through the accumulation of one-step prediction errors, so it is necessary to deal with such deviation. In the real world or in the ODE system, a state s_{t+1} always satisfies the robot's dynamic constraints; for example, the z-position of each foot should be above zero and each joint angle should be bounded. I denote such a constrained region the feasible data region (FDR). FDR is truly bounded since I excluded in Section II the features (x^w and y^w) that are not bounded. For most one-step predictions by an ML simulator, a predicted state ŝ_{t+1} deviates from FDR at least a little. Because the prediction of s_{t+2} uses this ŝ_{t+1} as features, the prediction error is then further amplified. Thus, we need to adjust the deviated ŝ_{t+1} to be in FDR. However, I cannot define FDR explicitly, e.g. by requiring each foot's z-position to be above zero, because the goal of this research is an ML simulator that does not use any robot-specific information. Because the problem I deal with is data-rich, I can instead collect a very large TrainPool covering most of FDR.
If I could collect an infinite number of training data points, TrainPool would be equal to FDR. The simplest way of utilizing TrainPool to adjust a deviated state ŝ_{t+1} is to give an upper bound U^i and a lower bound L^i to each element of the state ŝ_{t+1}, as follows: ŝ^i_{t+1} := min(U^i, max(ŝ^i_{t+1}, L^i)).
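The element-wise bounding adjustment above can be sketched as follows (hypothetical helper name; the bounds are simply the per-element extremes observed in the pool):

```python
import numpy as np

def clamp_to_pool(s_pred, pool):
    """Clamp each element of a predicted state to the min/max
    observed for that element in the training pool."""
    L = pool.min(axis=0)   # per-element lower bounds L^i
    U = pool.max(axis=0)   # per-element upper bounds U^i
    return np.minimum(U, np.maximum(s_pred, L))

pool = np.array([[0.0, -1.0],
                 [1.0,  1.0],
                 [0.5,  0.0]])
print(clamp_to_pool(np.array([2.0, -3.0]), pool))   # [ 1. -1.]
```

Note this treats each element independently, so a clamped state can still land in a corner of the bounding box that no training point occupies; that gap motivates the tighter adjustment introduced next.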

TABLE I. Comparison of nMSE for the regression methods LR, LWPR, GPR, SoR and LGPR, for each element of the state (z^w, Δx^b, Δy^b, Δz^b, φ, θ, Δφ, Δθ, Δψ and the joint rates ρ_ij).

Fig. 3. Comparison of the two adjusting methods, α-PROJ and MBR: a deviated prediction ŝ_{t+1} outside FDR is pulled back toward the training data points.

Of course, the bounds are those observed in TrainPool; this method is called MBR hereafter. However, there can be a tighter adjustment, called α-PROJ, as shown in Fig. 3. Let q = argmin_{p ∈ TrainPool} ||p - ŝ_{t+1}|| denote the training data point closest to the deviated prediction ŝ_{t+1}. Then α-PROJ is the following adjustment:

ŝ_{t+1} := (1 - α) ŝ_{t+1} + α q,  0 ≤ α ≤ 1.

I used nMSE for each element as the metric for the one-step prediction error. For the sequential prediction error, however, I used two other metrics: the non-cumulative error and the cumulative error. The non-cumulative sequential prediction error is measured for each element as |ŝ^i_t - s^i_t|, where s^i_t denotes the i-th element of the actual state at time t. I call it 'non-cumulative' because every such element of a state s is non-cumulative: as discussed in Section II, each element except the z-position z^w originates from the body frame, and z^w itself does not accumulate due to gravity. Therefore, a cumulative behavior of the sequential prediction error cannot be observed through these elements of s. Instead, I computed (x^w, y^w) from s and used the Euclidean distance between the predicted and actual (x^w, y^w) as the cumulative sequential prediction error. The empirical results for no adjustment, MBR, and α-PROJ with various α are given in the following section.

VII. EMPIRICAL RESULTS

A. RESULTS OF ONE-STEP PREDICTION

TrainPool holds all of the collected training data, and TestPool holds 999 test data points. Computation was performed on a laptop with an Intel Core 2 Duo CPU. Table I compares the performance of all the regression methods discussed above. A different size of training data set was used for each regression method.
This is because some of the regressors (full GPR and SoR) suffer from very high cost of computation and memory as the size of the training data increases. I mentioned in Section IV that the storage costs for full GPR and SoR are O(n^2) and O(mn), respectively. Therefore, full GPR could not work with more than a few thousand training data points due to lack of memory, and SoR could not let the product mn exceed the memory limit for the same reason.

Fig. 4. nMSE of the GPR variants (full GPR, SoR with random selection, SoR with the CV-test selection, and LGPR) for predicting Δφ, as a function of the number of training data.

I carefully optimized each regressor's hyperparameters and determined the size of the training data set such that it yields the best performance within the limits of the system resources. All of the training data in TrainPool were used for LR, LWPR and LGPR; smaller subsets were used for full GPR and SoR. As shown in Table I, the GPRs outperformed LR and LWPR. LWPR was poorer than expected. Among the GPRs, LGPR was the best. Fig. 4 shows nMSE for predicting Δφ, for each GPR method, with respect to different sizes of the training data set. For simplicity, I will not show nMSE for every element of a state; instead, only the hardest element to predict, the roll angle difference Δφ, is compared. Fig. 4 shows that the accuracy of most regressors roughly increases with the size of the training data set. However, some regressors suffer when dealing with such a large training set. As mentioned previously, full GPR could not take advantage of larger training sets, although its performance was the best among the sizes it could handle. I tried to fix the number of support data points m of SoR at the value giving its best performance. However, since mn could not exceed the memory limit, it was unavoidable to decrease m for the two largest training set sizes; this is why SoR's performance got worse there. For LGPR, a different number of clusters k was used for each case: I set the size of the training data in a single local GPR model to be approximately constant, so k grows in proportion to n. LGPR excelled over all other regression methods.
We can see that only LGPR can handle a very large training data set, and its accuracy is the best. Fig. 4 also shows that SoR with the k-fold-CV test outperformed SoR with random selection of the m support data points.
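Before turning to the sequential-prediction results, the α-PROJ adjustment of Section VI can be sketched as follows. Plain Euclidean distance is used for brevity, whereas the paper measures the nearest neighbor in normalized Euclidean distance; the helper name is hypothetical.

```python
import numpy as np

def alpha_proj(s_pred, pool, alpha):
    """Pull a deviated predicted state toward its nearest training point:
        s := (1 - alpha) * s + alpha * q,   0 <= alpha <= 1,
    where q is the closest point in the training pool."""
    d2 = ((pool - s_pred) ** 2).sum(axis=1)
    q = pool[d2.argmin()]
    return (1.0 - alpha) * s_pred + alpha * q

pool = np.array([[0.0, 0.0],
                 [1.0, 1.0]])
s = np.array([1.4, 1.2])
print(alpha_proj(s, pool, 0.5))   # halfway toward the nearest point (1, 1)
```

With alpha = 0 nothing happens and with alpha = 1 the prediction snaps onto the pool, so alpha trades off trusting the regressor against trusting the observed feasible region.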

Fig. 5. Cumulative sequential error for the robot moving straight forward (no adjustment, MBR, and α-PROJ with various α).

Fig. 6. Cumulative sequential error for the robot turning left backward.

B. RESULTS OF SEQUENTIAL PREDICTION

I will compare the two adjusting methods, α-PROJ and MBR, in terms of the two error metrics explained in Section VI: the cumulative error and the non-cumulative error. First, I look at the cumulative behavior of the sequential error and how the two methods relieve it. Second, I check the non-cumulative error to see whether they actually make the sequential prediction converge to FDR. Fig. 5 and Fig. 6 show the cumulative sequential error for two kinds of LittleDog moves (F and BL; see Section II). No adjustment gave the worst result. MBR and α-PROJ are indeed helpful in reducing the sequential prediction error, but α-PROJ with α ≥ 0.5 gave the best result. I omitted the results for α = 0.75 and α = 1.0 because they were similar to those for α = 0.5. In the end, the ML simulator with α-PROJ beats ODE: in a 3-d visualization of these results, α-PROJ with α ≥ 0.5 looks much more similar to the real robot motion than ODE does, even though ODE's parameters were very carefully tuned to make its motion look similar to the real LittleDog. Fig. 7 shows the element-wise error sum over all time steps, Σ_{t=1..T} |ŝ^i_t - s^i_t|, which represents the non-cumulative sequential error. The result implies a convergence criterion for FDR: for this specific problem, α = 0.5 is enough for the prediction to converge.

Fig. 7. Element-wise non-cumulative sequential error.

VIII. CONCLUSION

I developed a simulator that learns and predicts quadruped trot gaits without explicitly modeling the underlying dynamics. I invented local Gaussian process regression (LGPR), which significantly reduces the one-time-step prediction error by taking advantage of a large set of training data.
It builds k locally full GPR models and predicts with the local model closest to each test data point. I also proposed the projection method called α-PROJ, which prevents the prediction over longer time scales from diverging. By projecting a predicted state to the feasible data region (FDR), in which the dynamics constraints are satisfied, α-PROJ greatly reduces the prediction error over longer time scales. Empirical results validate that my approach simulates quadruped trot gaits more faithfully than the ODE simulator and the other methods I tried. This research can be extended to harder motions such as galloping and jumping. Adding a height/slope map of the terrain to the feature set would allow the ML simulator to deal with bumpy terrain. Local GPR approximates the original covariance matrix as a block-diagonal one; allowing the blocks to overlap (one data point belonging to multiple local clusters) could improve the prediction. Multi-task learning could also be applied to reduce the prediction error.

Acknowledgement: I thank Zico Kolter for his helpful advice on this project.

REFERENCES

[1] S. Vijayakumar, A. D'Souza, and S. Schaal, "Incremental online learning in high dimensions," Neural Computation, vol. 17, no. 12, pp. 2602-2634, 2005.
[2] C. E. Rasmussen and C. K. I. Williams, Gaussian Processes for Machine Learning. Cambridge, MA: The MIT Press, 2006.
[3] D. Nguyen-Tuong and J. Peters, "Local Gaussian process regression for real-time model-based robot control," in IEEE/RSJ International Conference on Intelligent Robots and Systems, Sept. 2008.
[4] B. W. Silverman, "Some aspects of the spline smoothing approach to non-parametric regression curve fitting," Journal of the Royal Statistical Society, Series B, vol. 47, no. 1, pp. 1-52, 1985.
[5] G. Wahba, X. Lin, F. Gao, D. Xiang, R. Klein, and B. Klein, "The bias-variance trade-off and the randomized GACV," in Advances in Neural Information Processing Systems. MIT Press, 1999.
[6] A. J. Smola and P. Bartlett, "Sparse greedy Gaussian process regression," in Advances in Neural Information Processing Systems. MIT Press, 2001.
[7] J.
Quiñonero-Candela and C. E. Rasmussen, "A unifying view of sparse approximate Gaussian process regression," Journal of Machine Learning Research, vol. 6, pp. 1939-1959, 2005.
[8] M. Seeger, "Fast forward selection to speed up sparse Gaussian process regression," in Workshop on Artificial Intelligence and Statistics 9, 2003.


More information

FEATURE EXTRACTION. Dr. K.Vijayarekha. Associate Dean School of Electrical and Electronics Engineering SASTRA University, Thanjavur

FEATURE EXTRACTION. Dr. K.Vijayarekha. Associate Dean School of Electrical and Electronics Engineering SASTRA University, Thanjavur FEATURE EXTRACTION Dr. K.Vjayarekha Assocate Dean School of Electrcal and Electroncs Engneerng SASTRA Unversty, Thanjavur613 41 Jont Intatve of IITs and IISc Funded by MHRD Page 1 of 8 Table of Contents

More information

y and the total sum of

y and the total sum of Lnear regresson Testng for non-lnearty In analytcal chemstry, lnear regresson s commonly used n the constructon of calbraton functons requred for analytcal technques such as gas chromatography, atomc absorpton

More information

Lecture #15 Lecture Notes

Lecture #15 Lecture Notes Lecture #15 Lecture Notes The ocean water column s very much a 3-D spatal entt and we need to represent that structure n an economcal way to deal wth t n calculatons. We wll dscuss one way to do so, emprcal

More information

Lobachevsky State University of Nizhni Novgorod. Polyhedron. Quick Start Guide

Lobachevsky State University of Nizhni Novgorod. Polyhedron. Quick Start Guide Lobachevsky State Unversty of Nzhn Novgorod Polyhedron Quck Start Gude Nzhn Novgorod 2016 Contents Specfcaton of Polyhedron software... 3 Theoretcal background... 4 1. Interface of Polyhedron... 6 1.1.

More information

The Codesign Challenge

The Codesign Challenge ECE 4530 Codesgn Challenge Fall 2007 Hardware/Software Codesgn The Codesgn Challenge Objectves In the codesgn challenge, your task s to accelerate a gven software reference mplementaton as fast as possble.

More information

Optimization Methods: Integer Programming Integer Linear Programming 1. Module 7 Lecture Notes 1. Integer Linear Programming

Optimization Methods: Integer Programming Integer Linear Programming 1. Module 7 Lecture Notes 1. Integer Linear Programming Optzaton Methods: Integer Prograng Integer Lnear Prograng Module Lecture Notes Integer Lnear Prograng Introducton In all the prevous lectures n lnear prograng dscussed so far, the desgn varables consdered

More information

MULTISPECTRAL IMAGES CLASSIFICATION BASED ON KLT AND ATR AUTOMATIC TARGET RECOGNITION

MULTISPECTRAL IMAGES CLASSIFICATION BASED ON KLT AND ATR AUTOMATIC TARGET RECOGNITION MULTISPECTRAL IMAGES CLASSIFICATION BASED ON KLT AND ATR AUTOMATIC TARGET RECOGNITION Paulo Quntlano 1 & Antono Santa-Rosa 1 Federal Polce Department, Brasla, Brazl. E-mals: quntlano.pqs@dpf.gov.br and

More information

Unsupervised Learning

Unsupervised Learning Pattern Recognton Lecture 8 Outlne Introducton Unsupervsed Learnng Parametrc VS Non-Parametrc Approach Mxture of Denstes Maxmum-Lkelhood Estmates Clusterng Prof. Danel Yeung School of Computer Scence and

More information

CS246: Mining Massive Datasets Jure Leskovec, Stanford University

CS246: Mining Massive Datasets Jure Leskovec, Stanford University CS46: Mnng Massve Datasets Jure Leskovec, Stanford Unversty http://cs46.stanford.edu /19/013 Jure Leskovec, Stanford CS46: Mnng Massve Datasets, http://cs46.stanford.edu Perceptron: y = sgn( x Ho to fnd

More information

2x x l. Module 3: Element Properties Lecture 4: Lagrange and Serendipity Elements

2x x l. Module 3: Element Properties Lecture 4: Lagrange and Serendipity Elements Module 3: Element Propertes Lecture : Lagrange and Serendpty Elements 5 In last lecture note, the nterpolaton functons are derved on the bass of assumed polynomal from Pascal s trangle for the fled varable.

More information

Analysis of Continuous Beams in General

Analysis of Continuous Beams in General Analyss of Contnuous Beams n General Contnuous beams consdered here are prsmatc, rgdly connected to each beam segment and supported at varous ponts along the beam. onts are selected at ponts of support,

More information

Some Advanced SPC Tools 1. Cumulative Sum Control (Cusum) Chart For the data shown in Table 9-1, the x chart can be generated.

Some Advanced SPC Tools 1. Cumulative Sum Control (Cusum) Chart For the data shown in Table 9-1, the x chart can be generated. Some Advanced SP Tools 1. umulatve Sum ontrol (usum) hart For the data shown n Table 9-1, the x chart can be generated. However, the shft taken place at sample #21 s not apparent. 92 For ths set samples,

More information

Content Based Image Retrieval Using 2-D Discrete Wavelet with Texture Feature with Different Classifiers

Content Based Image Retrieval Using 2-D Discrete Wavelet with Texture Feature with Different Classifiers IOSR Journal of Electroncs and Communcaton Engneerng (IOSR-JECE) e-issn: 78-834,p- ISSN: 78-8735.Volume 9, Issue, Ver. IV (Mar - Apr. 04), PP 0-07 Content Based Image Retreval Usng -D Dscrete Wavelet wth

More information

Support Vector Machines

Support Vector Machines Support Vector Machnes Decson surface s a hyperplane (lne n 2D) n feature space (smlar to the Perceptron) Arguably, the most mportant recent dscovery n machne learnng In a nutshell: map the data to a predetermned

More information

Feature Reduction and Selection

Feature Reduction and Selection Feature Reducton and Selecton Dr. Shuang LIANG School of Software Engneerng TongJ Unversty Fall, 2012 Today s Topcs Introducton Problems of Dmensonalty Feature Reducton Statstc methods Prncpal Components

More information

Parallel matrix-vector multiplication

Parallel matrix-vector multiplication Appendx A Parallel matrx-vector multplcaton The reduced transton matrx of the three-dmensonal cage model for gel electrophoress, descrbed n secton 3.2, becomes excessvely large for polymer lengths more

More information

Classification / Regression Support Vector Machines

Classification / Regression Support Vector Machines Classfcaton / Regresson Support Vector Machnes Jeff Howbert Introducton to Machne Learnng Wnter 04 Topcs SVM classfers for lnearly separable classes SVM classfers for non-lnearly separable classes SVM

More information

Adaptive Transfer Learning

Adaptive Transfer Learning Adaptve Transfer Learnng Bn Cao, Snno Jaln Pan, Yu Zhang, Dt-Yan Yeung, Qang Yang Hong Kong Unversty of Scence and Technology Clear Water Bay, Kowloon, Hong Kong {caobn,snnopan,zhangyu,dyyeung,qyang}@cse.ust.hk

More information

Range images. Range image registration. Examples of sampling patterns. Range images and range surfaces

Range images. Range image registration. Examples of sampling patterns. Range images and range surfaces Range mages For many structured lght scanners, the range data forms a hghly regular pattern known as a range mage. he samplng pattern s determned by the specfc scanner. Range mage regstraton 1 Examples

More information

A Fast Content-Based Multimedia Retrieval Technique Using Compressed Data

A Fast Content-Based Multimedia Retrieval Technique Using Compressed Data A Fast Content-Based Multmeda Retreval Technque Usng Compressed Data Borko Furht and Pornvt Saksobhavvat NSF Multmeda Laboratory Florda Atlantc Unversty, Boca Raton, Florda 3343 ABSTRACT In ths paper,

More information

Cluster Analysis of Electrical Behavior

Cluster Analysis of Electrical Behavior Journal of Computer and Communcatons, 205, 3, 88-93 Publshed Onlne May 205 n ScRes. http://www.scrp.org/ournal/cc http://dx.do.org/0.4236/cc.205.350 Cluster Analyss of Electrcal Behavor Ln Lu Ln Lu, School

More information

Steps for Computing the Dissimilarity, Entropy, Herfindahl-Hirschman and. Accessibility (Gravity with Competition) Indices

Steps for Computing the Dissimilarity, Entropy, Herfindahl-Hirschman and. Accessibility (Gravity with Competition) Indices Steps for Computng the Dssmlarty, Entropy, Herfndahl-Hrschman and Accessblty (Gravty wth Competton) Indces I. Dssmlarty Index Measurement: The followng formula can be used to measure the evenness between

More information

An Application of the Dulmage-Mendelsohn Decomposition to Sparse Null Space Bases of Full Row Rank Matrices

An Application of the Dulmage-Mendelsohn Decomposition to Sparse Null Space Bases of Full Row Rank Matrices Internatonal Mathematcal Forum, Vol 7, 2012, no 52, 2549-2554 An Applcaton of the Dulmage-Mendelsohn Decomposton to Sparse Null Space Bases of Full Row Rank Matrces Mostafa Khorramzadeh Department of Mathematcal

More information

Computer Animation and Visualisation. Lecture 4. Rigging / Skinning

Computer Animation and Visualisation. Lecture 4. Rigging / Skinning Computer Anmaton and Vsualsaton Lecture 4. Rggng / Sknnng Taku Komura Overvew Sknnng / Rggng Background knowledge Lnear Blendng How to decde weghts? Example-based Method Anatomcal models Sknnng Assume

More information

Three supervised learning methods on pen digits character recognition dataset

Three supervised learning methods on pen digits character recognition dataset Three supervsed learnng methods on pen dgts character recognton dataset Chrs Flezach Department of Computer Scence and Engneerng Unversty of Calforna, San Dego San Dego, CA 92093 cflezac@cs.ucsd.edu Satoru

More information

12/2/2009. Announcements. Parametric / Non-parametric. Case-Based Reasoning. Nearest-Neighbor on Images. Nearest-Neighbor Classification

12/2/2009. Announcements. Parametric / Non-parametric. Case-Based Reasoning. Nearest-Neighbor on Images. Nearest-Neighbor Classification Introducton to Artfcal Intellgence V22.0472-001 Fall 2009 Lecture 24: Nearest-Neghbors & Support Vector Machnes Rob Fergus Dept of Computer Scence, Courant Insttute, NYU Sldes from Danel Yeung, John DeNero

More information

Wishing you all a Total Quality New Year!

Wishing you all a Total Quality New Year! Total Qualty Management and Sx Sgma Post Graduate Program 214-15 Sesson 4 Vnay Kumar Kalakband Assstant Professor Operatons & Systems Area 1 Wshng you all a Total Qualty New Year! Hope you acheve Sx sgma

More information

Simulation: Solving Dynamic Models ABE 5646 Week 11 Chapter 2, Spring 2010

Simulation: Solving Dynamic Models ABE 5646 Week 11 Chapter 2, Spring 2010 Smulaton: Solvng Dynamc Models ABE 5646 Week Chapter 2, Sprng 200 Week Descrpton Readng Materal Mar 5- Mar 9 Evaluatng [Crop] Models Comparng a model wth data - Graphcal, errors - Measures of agreement

More information

LECTURE : MANIFOLD LEARNING

LECTURE : MANIFOLD LEARNING LECTURE : MANIFOLD LEARNING Rta Osadchy Some sldes are due to L.Saul, V. C. Raykar, N. Verma Topcs PCA MDS IsoMap LLE EgenMaps Done! Dmensonalty Reducton Data representaton Inputs are real-valued vectors

More information

Term Weighting Classification System Using the Chi-square Statistic for the Classification Subtask at NTCIR-6 Patent Retrieval Task

Term Weighting Classification System Using the Chi-square Statistic for the Classification Subtask at NTCIR-6 Patent Retrieval Task Proceedngs of NTCIR-6 Workshop Meetng, May 15-18, 2007, Tokyo, Japan Term Weghtng Classfcaton System Usng the Ch-square Statstc for the Classfcaton Subtask at NTCIR-6 Patent Retreval Task Kotaro Hashmoto

More information

Data Mining: Model Evaluation

Data Mining: Model Evaluation Data Mnng: Model Evaluaton Aprl 16, 2013 1 Issues: Evaluatng Classfcaton Methods Accurac classfer accurac: predctng class label predctor accurac: guessng value of predcted attrbutes Speed tme to construct

More information

Machine Learning: Algorithms and Applications

Machine Learning: Algorithms and Applications 14/05/1 Machne Learnng: Algorthms and Applcatons Florano Zn Free Unversty of Bozen-Bolzano Faculty of Computer Scence Academc Year 011-01 Lecture 10: 14 May 01 Unsupervsed Learnng cont Sldes courtesy of

More information

Active Contours/Snakes

Active Contours/Snakes Actve Contours/Snakes Erkut Erdem Acknowledgement: The sldes are adapted from the sldes prepared by K. Grauman of Unversty of Texas at Austn Fttng: Edges vs. boundares Edges useful sgnal to ndcate occludng

More information

Lecture 4: Principal components

Lecture 4: Principal components /3/6 Lecture 4: Prncpal components 3..6 Multvarate lnear regresson MLR s optmal for the estmaton data...but poor for handlng collnear data Covarance matrx s not nvertble (large condton number) Robustness

More information

Brave New World Pseudocode Reference

Brave New World Pseudocode Reference Brave New World Pseudocode Reference Pseudocode s a way to descrbe how to accomplsh tasks usng basc steps lke those a computer mght perform. In ths week s lab, you'll see how a form of pseudocode can be

More information

6.854 Advanced Algorithms Petar Maymounkov Problem Set 11 (November 23, 2005) With: Benjamin Rossman, Oren Weimann, and Pouya Kheradpour

6.854 Advanced Algorithms Petar Maymounkov Problem Set 11 (November 23, 2005) With: Benjamin Rossman, Oren Weimann, and Pouya Kheradpour 6.854 Advanced Algorthms Petar Maymounkov Problem Set 11 (November 23, 2005) Wth: Benjamn Rossman, Oren Wemann, and Pouya Kheradpour Problem 1. We reduce vertex cover to MAX-SAT wth weghts, such that the

More information

(1) The control processes are too complex to analyze by conventional quantitative techniques.

(1) The control processes are too complex to analyze by conventional quantitative techniques. Chapter 0 Fuzzy Control and Fuzzy Expert Systems The fuzzy logc controller (FLC) s ntroduced n ths chapter. After ntroducng the archtecture of the FLC, we study ts components step by step and suggest a

More information

Determining the Optimal Bandwidth Based on Multi-criterion Fusion

Determining the Optimal Bandwidth Based on Multi-criterion Fusion Proceedngs of 01 4th Internatonal Conference on Machne Learnng and Computng IPCSIT vol. 5 (01) (01) IACSIT Press, Sngapore Determnng the Optmal Bandwdth Based on Mult-crteron Fuson Ha-L Lang 1+, Xan-Mn

More information

Load Balancing for Hex-Cell Interconnection Network

Load Balancing for Hex-Cell Interconnection Network Int. J. Communcatons, Network and System Scences,,, - Publshed Onlne Aprl n ScRes. http://www.scrp.org/journal/jcns http://dx.do.org/./jcns.. Load Balancng for Hex-Cell Interconnecton Network Saher Manaseer,

More information

Solving two-person zero-sum game by Matlab

Solving two-person zero-sum game by Matlab Appled Mechancs and Materals Onlne: 2011-02-02 ISSN: 1662-7482, Vols. 50-51, pp 262-265 do:10.4028/www.scentfc.net/amm.50-51.262 2011 Trans Tech Publcatons, Swtzerland Solvng two-person zero-sum game by

More information

Problem Definitions and Evaluation Criteria for Computational Expensive Optimization

Problem Definitions and Evaluation Criteria for Computational Expensive Optimization Problem efntons and Evaluaton Crtera for Computatonal Expensve Optmzaton B. Lu 1, Q. Chen and Q. Zhang 3, J. J. Lang 4, P. N. Suganthan, B. Y. Qu 6 1 epartment of Computng, Glyndwr Unversty, UK Faclty

More information

Improvement of Spatial Resolution Using BlockMatching Based Motion Estimation and Frame. Integration

Improvement of Spatial Resolution Using BlockMatching Based Motion Estimation and Frame. Integration Improvement of Spatal Resoluton Usng BlockMatchng Based Moton Estmaton and Frame Integraton Danya Suga and Takayuk Hamamoto Graduate School of Engneerng, Tokyo Unversty of Scence, 6-3-1, Nuku, Katsuska-ku,

More information

X- Chart Using ANOM Approach

X- Chart Using ANOM Approach ISSN 1684-8403 Journal of Statstcs Volume 17, 010, pp. 3-3 Abstract X- Chart Usng ANOM Approach Gullapall Chakravarth 1 and Chaluvad Venkateswara Rao Control lmts for ndvdual measurements (X) chart are

More information

Tsinghua University at TAC 2009: Summarizing Multi-documents by Information Distance

Tsinghua University at TAC 2009: Summarizing Multi-documents by Information Distance Tsnghua Unversty at TAC 2009: Summarzng Mult-documents by Informaton Dstance Chong Long, Mnle Huang, Xaoyan Zhu State Key Laboratory of Intellgent Technology and Systems, Tsnghua Natonal Laboratory for

More information

Multi-view 3D Position Estimation of Sports Players

Multi-view 3D Position Estimation of Sports Players Mult-vew 3D Poston Estmaton of Sports Players Robbe Vos and Wlle Brnk Appled Mathematcs Department of Mathematcal Scences Unversty of Stellenbosch, South Afrca Emal: vosrobbe@gmal.com Abstract The problem

More information

Detection of Human Actions from a Single Example

Detection of Human Actions from a Single Example Detecton of Human Actons from a Sngle Example Hae Jong Seo and Peyman Mlanfar Electrcal Engneerng Department Unversty of Calforna at Santa Cruz 1156 Hgh Street, Santa Cruz, CA, 95064 {rokaf,mlanfar}@soe.ucsc.edu

More information

A MOVING MESH APPROACH FOR SIMULATION BUDGET ALLOCATION ON CONTINUOUS DOMAINS

A MOVING MESH APPROACH FOR SIMULATION BUDGET ALLOCATION ON CONTINUOUS DOMAINS Proceedngs of the Wnter Smulaton Conference M E Kuhl, N M Steger, F B Armstrong, and J A Jones, eds A MOVING MESH APPROACH FOR SIMULATION BUDGET ALLOCATION ON CONTINUOUS DOMAINS Mark W Brantley Chun-Hung

More information

Computer models of motion: Iterative calculations

Computer models of motion: Iterative calculations Computer models o moton: Iteratve calculatons OBJECTIVES In ths actvty you wll learn how to: Create 3D box objects Update the poston o an object teratvely (repeatedly) to anmate ts moton Update the momentum

More information

Announcements. Supervised Learning

Announcements. Supervised Learning Announcements See Chapter 5 of Duda, Hart, and Stork. Tutoral by Burge lnked to on web page. Supervsed Learnng Classfcaton wth labeled eamples. Images vectors n hgh-d space. Supervsed Learnng Labeled eamples

More information

Online Detection and Classification of Moving Objects Using Progressively Improving Detectors

Online Detection and Classification of Moving Objects Using Progressively Improving Detectors Onlne Detecton and Classfcaton of Movng Objects Usng Progressvely Improvng Detectors Omar Javed Saad Al Mubarak Shah Computer Vson Lab School of Computer Scence Unversty of Central Florda Orlando, FL 32816

More information

CHAPTER 2 PROPOSED IMPROVED PARTICLE SWARM OPTIMIZATION

CHAPTER 2 PROPOSED IMPROVED PARTICLE SWARM OPTIMIZATION 24 CHAPTER 2 PROPOSED IMPROVED PARTICLE SWARM OPTIMIZATION The present chapter proposes an IPSO approach for multprocessor task schedulng problem wth two classfcatons, namely, statc ndependent tasks and

More information

Sum of Linear and Fractional Multiobjective Programming Problem under Fuzzy Rules Constraints

Sum of Linear and Fractional Multiobjective Programming Problem under Fuzzy Rules Constraints Australan Journal of Basc and Appled Scences, 2(4): 1204-1208, 2008 ISSN 1991-8178 Sum of Lnear and Fractonal Multobjectve Programmng Problem under Fuzzy Rules Constrants 1 2 Sanjay Jan and Kalash Lachhwan

More information

Kinematics of pantograph masts

Kinematics of pantograph masts Abstract Spacecraft Mechansms Group, ISRO Satellte Centre, Arport Road, Bangalore 560 07, Emal:bpn@sac.ernet.n Flght Dynamcs Dvson, ISRO Satellte Centre, Arport Road, Bangalore 560 07 Emal:pandyan@sac.ernet.n

More information

Genetic Tuning of Fuzzy Logic Controller for a Flexible-Link Manipulator

Genetic Tuning of Fuzzy Logic Controller for a Flexible-Link Manipulator Genetc Tunng of Fuzzy Logc Controller for a Flexble-Lnk Manpulator Lnda Zhxa Sh Mohamed B. Traba Department of Mechancal Unversty of Nevada, Las Vegas Department of Mechancal Engneerng Las Vegas, NV 89154-407

More information

Helsinki University Of Technology, Systems Analysis Laboratory Mat Independent research projects in applied mathematics (3 cr)

Helsinki University Of Technology, Systems Analysis Laboratory Mat Independent research projects in applied mathematics (3 cr) Helsnk Unversty Of Technology, Systems Analyss Laboratory Mat-2.08 Independent research projects n appled mathematcs (3 cr) "! #$&% Antt Laukkanen 506 R ajlaukka@cc.hut.f 2 Introducton...3 2 Multattrbute

More information

Hierarchical clustering for gene expression data analysis

Hierarchical clustering for gene expression data analysis Herarchcal clusterng for gene expresson data analyss Gorgo Valentn e-mal: valentn@ds.unm.t Clusterng of Mcroarray Data. Clusterng of gene expresson profles (rows) => dscovery of co-regulated and functonally

More information

Outline. Self-Organizing Maps (SOM) US Hebbian Learning, Cntd. The learning rule is Hebbian like:

Outline. Self-Organizing Maps (SOM) US Hebbian Learning, Cntd. The learning rule is Hebbian like: Self-Organzng Maps (SOM) Turgay İBRİKÇİ, PhD. Outlne Introducton Structures of SOM SOM Archtecture Neghborhoods SOM Algorthm Examples Summary 1 2 Unsupervsed Hebban Learnng US Hebban Learnng, Cntd 3 A

More information

A Robust LS-SVM Regression

A Robust LS-SVM Regression PROCEEDIGS OF WORLD ACADEMY OF SCIECE, EGIEERIG AD ECHOLOGY VOLUME 7 AUGUS 5 ISS 37- A Robust LS-SVM Regresson József Valyon, and Gábor Horváth Abstract In comparson to the orgnal SVM, whch nvolves a quadratc

More information

A Fast Visual Tracking Algorithm Based on Circle Pixels Matching

A Fast Visual Tracking Algorithm Based on Circle Pixels Matching A Fast Vsual Trackng Algorthm Based on Crcle Pxels Matchng Zhqang Hou hou_zhq@sohu.com Chongzhao Han czhan@mal.xjtu.edu.cn Ln Zheng Abstract: A fast vsual trackng algorthm based on crcle pxels matchng

More information

Face Recognition University at Buffalo CSE666 Lecture Slides Resources:

Face Recognition University at Buffalo CSE666 Lecture Slides Resources: Face Recognton Unversty at Buffalo CSE666 Lecture Sldes Resources: http://www.face-rec.org/algorthms/ Overvew of face recognton algorthms Correlaton - Pxel based correspondence between two face mages Structural

More information

A Post Randomization Framework for Privacy-Preserving Bayesian. Network Parameter Learning

A Post Randomization Framework for Privacy-Preserving Bayesian. Network Parameter Learning A Post Randomzaton Framework for Prvacy-Preservng Bayesan Network Parameter Learnng JIANJIE MA K.SIVAKUMAR School Electrcal Engneerng and Computer Scence, Washngton State Unversty Pullman, WA. 9964-75

More information

Related-Mode Attacks on CTR Encryption Mode

Related-Mode Attacks on CTR Encryption Mode Internatonal Journal of Network Securty, Vol.4, No.3, PP.282 287, May 2007 282 Related-Mode Attacks on CTR Encrypton Mode Dayn Wang, Dongda Ln, and Wenlng Wu (Correspondng author: Dayn Wang) Key Laboratory

More information

Detection of an Object by using Principal Component Analysis

Detection of an Object by using Principal Component Analysis Detecton of an Object by usng Prncpal Component Analyss 1. G. Nagaven, 2. Dr. T. Sreenvasulu Reddy 1. M.Tech, Department of EEE, SVUCE, Trupath, Inda. 2. Assoc. Professor, Department of ECE, SVUCE, Trupath,

More information

NUMERICAL SOLVING OPTIMAL CONTROL PROBLEMS BY THE METHOD OF VARIATIONS

NUMERICAL SOLVING OPTIMAL CONTROL PROBLEMS BY THE METHOD OF VARIATIONS ARPN Journal of Engneerng and Appled Scences 006-017 Asan Research Publshng Network (ARPN). All rghts reserved. NUMERICAL SOLVING OPTIMAL CONTROL PROBLEMS BY THE METHOD OF VARIATIONS Igor Grgoryev, Svetlana

More information

Learning-Based Top-N Selection Query Evaluation over Relational Databases

Learning-Based Top-N Selection Query Evaluation over Relational Databases Learnng-Based Top-N Selecton Query Evaluaton over Relatonal Databases Lang Zhu *, Wey Meng ** * School of Mathematcs and Computer Scence, Hebe Unversty, Baodng, Hebe 071002, Chna, zhu@mal.hbu.edu.cn **

More information

Angle-Independent 3D Reconstruction. Ji Zhang Mireille Boutin Daniel Aliaga

Angle-Independent 3D Reconstruction. Ji Zhang Mireille Boutin Daniel Aliaga Angle-Independent 3D Reconstructon J Zhang Mrelle Boutn Danel Alaga Goal: Structure from Moton To reconstruct the 3D geometry of a scene from a set of pctures (e.g. a move of the scene pont reconstructon

More information

SVM-based Learning for Multiple Model Estimation

SVM-based Learning for Multiple Model Estimation SVM-based Learnng for Multple Model Estmaton Vladmr Cherkassky and Yunqan Ma Department of Electrcal and Computer Engneerng Unversty of Mnnesota Mnneapols, MN 55455 {cherkass,myq}@ece.umn.edu Abstract:

More information

Fast and Efficient Incremental Learning for High-dimensional Movement Systems

Fast and Efficient Incremental Learning for High-dimensional Movement Systems Vjayakumar, S, Schaal, S (2). Fast and effcent ncremental learnng for hgh-dmensonal movement systems, Internatonal Conference on Robotcs and Automaton (ICRA2). San Francsco, Aprl 2. Fast and Effcent Incremental

More information

Biostatistics 615/815

Biostatistics 615/815 The E-M Algorthm Bostatstcs 615/815 Lecture 17 Last Lecture: The Smplex Method General method for optmzaton Makes few assumptons about functon Crawls towards mnmum Some recommendatons Multple startng ponts

More information

Chapter 6 Programmng the fnte element method Inow turn to the man subject of ths book: The mplementaton of the fnte element algorthm n computer programs. In order to make my dscusson as straghtforward

More information

A Modified Median Filter for the Removal of Impulse Noise Based on the Support Vector Machines

A Modified Median Filter for the Removal of Impulse Noise Based on the Support Vector Machines A Modfed Medan Flter for the Removal of Impulse Nose Based on the Support Vector Machnes H. GOMEZ-MORENO, S. MALDONADO-BASCON, F. LOPEZ-FERRERAS, M. UTRILLA- MANSO AND P. GIL-JIMENEZ Departamento de Teoría

More information

Fusion Performance Model for Distributed Tracking and Classification

Fusion Performance Model for Distributed Tracking and Classification Fuson Performance Model for Dstrbuted rackng and Classfcaton K.C. Chang and Yng Song Dept. of SEOR, School of I&E George Mason Unversty FAIRFAX, VA kchang@gmu.edu Martn Lggns Verdan Systems Dvson, Inc.

More information

Outline. Discriminative classifiers for image recognition. Where in the World? A nearest neighbor recognition example 4/14/2011. CS 376 Lecture 22 1

Outline. Discriminative classifiers for image recognition. Where in the World? A nearest neighbor recognition example 4/14/2011. CS 376 Lecture 22 1 4/14/011 Outlne Dscrmnatve classfers for mage recognton Wednesday, Aprl 13 Krsten Grauman UT-Austn Last tme: wndow-based generc obect detecton basc ppelne face detecton wth boostng as case study Today:

More information

An Accurate Evaluation of Integrals in Convex and Non convex Polygonal Domain by Twelve Node Quadrilateral Finite Element Method

An Accurate Evaluation of Integrals in Convex and Non convex Polygonal Domain by Twelve Node Quadrilateral Finite Element Method Internatonal Journal of Computatonal and Appled Mathematcs. ISSN 89-4966 Volume, Number (07), pp. 33-4 Research Inda Publcatons http://www.rpublcaton.com An Accurate Evaluaton of Integrals n Convex and

More information

EXTENDED BIC CRITERION FOR MODEL SELECTION

EXTENDED BIC CRITERION FOR MODEL SELECTION IDIAP RESEARCH REPORT EXTEDED BIC CRITERIO FOR ODEL SELECTIO Itshak Lapdot Andrew orrs IDIAP-RR-0-4 Dalle olle Insttute for Perceptual Artfcal Intellgence P.O.Box 59 artgny Valas Swtzerland phone +4 7

More information

Fast Sparse Gaussian Processes Learning for Man-Made Structure Classification

Fast Sparse Gaussian Processes Learning for Man-Made Structure Classification Fast Sparse Gaussan Processes Learnng for Man-Made Structure Classfcaton Hang Zhou Insttute for Vson Systems Engneerng, Dept Elec. & Comp. Syst. Eng. PO Box 35, Monash Unversty, Clayton, VIC 3800, Australa

More information