Selecting Shape Features Using Multi-class Relevance Vector Machine
Selecting Shape Features Using Multi-class Relevance Vector Machine

Hao Zhang, Jitendra Malik
Electrical Engineering and Computer Sciences, University of California at Berkeley
Technical Report No. UCB/EECS, October 2005
Copyright 2005, by the author(s). All rights reserved. Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission.
Selecting Shape Features Using Multi-class Relevance Vector Machine

Hao Zhang, Jitendra Malik
Computer Science Division, EECS Dept., UC Berkeley, CA 94720

Abstract

The task of visual object recognition benefits from feature selection, as it reduces the amount of computation in recognizing a new instance of an object, and the selected features give insights into the classification process. We focus on a class of current feature selection methods known as embedded methods: due to the nature of multi-way classification in object recognition, we derive an extension of the Relevance Vector Machine technique to multi-class. In experiments, we apply the Relevance Vector Machine to the problem of digit classification and study its effects. Experimental results show that our classifier enhances accuracy, yields a good interpretation for the selected subset of features, and costs only a constant factor of the baseline classifier.

1. Introduction

Figure 1: (a) Collection of digits. (b) Prototype digits with highlighted discriminative part.

When looking at a slate of digits (fig. 1), after compensating for the variation in writing style, one can see that there are distinctive parts of the shape which best tell apart each digit against other classes (fig. 1). Being able to identify these discriminative parts is a remarkable ability for classifying the digits quickly and accurately. The aim of this work is to find those parts automatically using feature selection techniques. Even though we illustrate it with digits, this problem is representative and is important to general object recognition because:

1. Importance of shape cue. Studies in biological vision have suggested that shape cues account for the majority of the necessary information for visual object recognition (among other cues such as texture and color) [1]. Therefore, it is worthwhile to study how to differentiate among shape cues and to derive methods to pick out the most useful shape cues from the data. Handwritten digits have natural variations in shape, and thus provide a good setting for studying selection of shape cues.

2. Fine discrimination. In object recognition, a common challenge is to classify visually similar objects. This demands that a vision system emphasize the more discriminative parts of the instances rather than comparing them as a whole. Digits provide a good source of this problem because there are borderline instances (e.g. among 8s and 9s) that need detailed discrimination.

With currently available techniques, we choose to tackle the problem by phrasing it in terms of feature selection. Given the popularity of using feature selection techniques on genomic data, it is important to note that our aim is different from theirs: the ground truth in genomics is that only a few features (genes) are relevant to the property under study, but in shape classification, in fact, all features are relevant and useful for classification. In genomics, feature selection has the potential to yield a near-perfect classifier. But in shape classification, we do not expect a classifier built on feature selection to perform better than those that use all features. Our aim is to limit the number of features and see which are the best ones to use in a classifier.

Following the survey of [2], feature selection approaches fall into three main categories: filter, wrapper and embedded methods. We describe those categories with examples from the computer vision literature where possible:

1. Filter methods select the features independently of the classifier, and are a preprocessing step to prune features. Example in computer vision: in [3], scale-invariant image features are extracted and ranked by a likelihood or mutual information criterion. The most relevant features are then used in a standard classifier. However, we find the separation of the selection stage and the classification stage unsatisfactory.

2. Wrapper methods treat the classifier as a black box and provide it with varying subsets of features so as to optimize for the best feature set. In practice, wrapper methods often search exhaustively over all possible subsets, which is exponentially slow, or use greedy search, which explores only a portion of all subsets.

3. Embedded methods, as an improvement on wrapper methods, incorporate feature selection into the training of the classifier. Example in computer vision: in [4], a pair-wise affinity matrix is computed on the training set and an optimization on the Laplacian spectrum of the affinity matrix is carried out. However, the construction of the affinity matrix and its subsequent learning would be inefficient for a moderate-size training set (e.g. a few hundred examples).

In this paper, we focus on a new class of embedded methods for feature selection ([5] and [6]). Compared to other methods, they are distinctive in the sense that they integrate feature selection into the training process in an efficient manner:

1. Roughly speaking, they optimize the training error as a sum of loss on the data plus a regularization penalty on the weights that promotes sparseness. They optimize a clear error criterion that is tuned for the task of classification (in contrast, filter methods are not tied to the classifier).

2. Without the regularization term (or with a slight change of the term), they reduce to well-known classifiers such as the SVM and logistic regression. Therefore, from a practical point of view they can be thought of as improved (or alternative) versions of the baseline classifier.

3. Algorithmically, they cost only a constant factor of the time of the baseline classifier ([5] costs about the same as a usual SVM; [6] costs a small multiple of a logistic regression, since it usually converges in a correspondingly small number of iterations).

The theoretical part of this work is to extend [6] to the multi-class setting, because in shape discrimination the task almost always involves more than two classes. On the empirical side, we study our classifier in both two-class (in comparison to [5]) and multi-class problems. In light of the vision-for-learning motivation, we believe that shape cues provide an excellent data set for gaining insights into a particular feature selection technique. The shape features selected by a technique can be visualized and evaluated against intuition, thus providing insights into the classification process. This is an advantage that other data sets (e.g. genomics) do not have.

The paper is organized as follows: Section 2 discusses the shape features extracted from the image.
Section 3 introduces two embedded methods: (1) the 1-norm Support Vector Machine (1-norm SVM) and (2) the Relevance Vector Machine (RVM). There we develop a multi-class extension to the RVM. Section 4 studies the experimental results on digits, and we conclude in section 5.

2 Shape Features

The most straightforward experiment on feature selection takes the raw image as the feature vector. This leaves the learning machinery with the job of modelling shape variation, which is often hard. Moreover, each pixel location is treated as a feature, which may actually correspond to different parts of the shape in different images. In order to obtain features in correspondence, we choose to use the shape context descriptor obtained from aligning the images of the digits, following that of [7]. We choose this set of features over other image-based features (e.g. intensity) to focus on the shape aspect of the image.

Figure 2: (a) Anchor shapes for four digit classes. (b) Visualization of the process of finding correspondences.

From each class, we select an anchor digit that all other digits of that class are matched to. This digit is picked as the median in terms of shape context distance (so that the maximum of its distance to all digits of that class is minimum). Fig. 2 shows the anchor shapes. When each digit is matched to the anchor, the algorithm establishes correspondences between the points on the anchor and the points on the digit (the process is illustrated in fig. 2). This enables us to define a shape feature as the point on each digit that corresponds to a particular point on the anchor shape. At each shape feature location, we extract two numbers: the shape context distance to the corresponding shape context on the anchor digit, and the sum of squared differences to the corresponding image patch on the anchor digit. We use distances instead of original values to cut down dimensionality, so that there are enough training examples for the feature selection stage. We have a fixed set of shape feature locations on each shape and therefore obtain two numbers per location from each digit. When studying the importance of a single feature, we add the weight on its shape context distance and the weight on its image patch distance.

3 Feature Selection: Embedded Methods

We study embedded methods that optimize the following objective function:

(1/n) Σ_{i=1}^n f_loss(wᵀx_i) + λ f_penalty(w)    (1)

where x_i is the i-th data point, w is the weight vector, and λ is a relative weighting of the two terms. The first term is the training error: an average of loss values at the data points. The second term is a regularizing penalty on the weights.
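As a concrete illustration (not from the paper), the objective in Eq. (1) can be evaluated directly; the sketch below uses a logistic loss and an L1 penalty, with made-up data and a made-up λ:

```python
import numpy as np

def objective(w, X, y, lam):
    """Eq. (1): average loss on the data plus a weighted penalty on w.

    Here f_loss is the logistic loss on margins y_i * w^T x_i and
    f_penalty is the L1 norm; X is n x d, y is in {-1, +1}."""
    margins = y * (X @ w)                        # y_i * w^T x_i
    loss = np.mean(np.log1p(np.exp(-margins)))   # f_loss term, averaged
    penalty = lam * np.sum(np.abs(w))            # lambda * f_penalty(w)
    return loss + penalty

# Tiny made-up example: at w = 0 every margin is 0, so the average loss
# is log 2 and the penalty vanishes.
X = np.array([[1.0, 2.0], [-1.0, 0.5], [0.5, -1.0]])
y = np.array([1.0, -1.0, 1.0])
w0 = np.zeros(2)
print(objective(w0, X, y, lam=0.1))  # log(2) ≈ 0.6931
```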
By choices of the loss function (f_loss) and the penalty function (f_penalty), this general model reduces to well-known cases such as linear regression (f_loss being the square loss and f_penalty being 0), logistic regression (f_loss being the logistic loss and f_penalty being 0) or the SVM (f_loss being the hinge loss and f_penalty being wᵀw). It is also well known that certain types of f_penalty promote sparseness in the weights. An example is the Lasso regression [8], which is similar to linear regression but puts an L1 penalty on the weights. It pushes many components of the weight vector w to zero, effectively selecting features.

To understand why those types of f_penalty promote sparseness, we illustrate in the special case of linear regression in two dimensions (where w = (w₁, w₂) and f_loss = (w₁x₁ + w₂x₂ − y)²). First, note that the optimization problem (1) is equivalent to:

min (1/n) Σ_{i=1}^n f_loss(wᵀx_i)  s.t.  f_penalty(w) = C    (2)

where C is a constant determined by λ. Then, the optimization problem can be thought of as finding the point on the penalty contour (f_penalty(w) = C) such that it touches the smallest training error contour (Σ_{i=1}^n f_loss(wᵀx_i) = C′). Note that in the linear regression case, the training error contour is always an ellipse. We consider three different types of f_penalty:

1. f_penalty = wᵀw (L2 norm): also known as ridge regression; the penalty contour is a circle. The point on the penalty contour that minimizes the training error is where the two contours are tangential. Usually, at that point none of the weights are zero (fig. 3(a)).

2. f_penalty = ‖w‖₁ (L1 norm): the special case of lasso regression; the penalty contour is a diamond. Because the vertices of the diamond tend to stick out, they are usually the spot on the penalty contour where the smallest training error contour is attained (fig. 3(b)). At those vertices, one of w₁, w₂ is pushed to zero.

3. f_penalty = (Σ_k |w_k|^ε)^{1/ε} (L_ε norm, ε < 1): the penalty contour is concave and the vertices of the contour stick out much more than in the L1 case, so it is even more likely that one of the vertices minimizes the training error (fig. 3(c)).

This phenomenon is general for other types of f_loss as well: by choosing those types of f_penalty, we obtain a classifier which has feature selection capability. However, not every choice has a tractable solution. We study two particular variants that admit efficient computation: the 1-norm SVM and the multi-class Relevance Vector Machine.

Figure 3: The penalty contour (in solid line) and the training error ellipses (in dotted line): (a) L2 norm, (b) L1 norm, (c) L_ε norm.

3.1 1-norm SVM

Here the loss function f_loss is the SVM hinge loss and f_penalty is the L1 norm. In the optimal solution, the number of nonzero weights depends on the value of the relative weighting λ in Eq. 1: a larger λ drives more weights to zero. [5] proposes a solution to Eq. 1 in which each weight, as a function of λ, is computed in an incremental way as more weights are activated. Each weight w_i follows a piecewise linear path, and the computational complexity of all the paths is just slightly more than that of the baseline SVM classifier. In our experiments, we visualize the weights as they are being incrementally activated by the 1-norm SVM.

3.2 Multi-class Relevance Vector Machine

In the multi-class setting, the SVM loss does not extend naturally. For this reason, we turn to an alternative classification technique: multinomial logistic regression, which gives rise to the Relevance Vector Machine technique introduced in [6]. In this case, f_loss is the multinomial logistic loss, f_penalty is the negative log of the student-t distribution on w, and the weighting λ is 1. The penalty contours, which are plotted in fig. 4, have the desirable property of being highly concave. (In practice, we use a limiting prior whose penalty contour is even more concave; see fig. 4(b).) Also, in this case, the optimization problem Eq.
1 is equivalent (by taking the negative log of the probability) to the problem of a maximum a posteriori (MAP) estimate given the student-t prior and the multinomial logit likelihood. Therefore it can be cast as an estimation problem on the following hierarchical model, as shown in fig. 4(c): a hyper-parameter α_i is introduced for each weight (given α_i, w_i is distributed as a zero-mean, variance 1/α_i Gaussian), and each α_i itself has a Gamma prior. (When α_i is integrated out, the marginal distribution of w_i is a student-t density, as in the original setup.) The α_i parameter for each w_i is intuitively called the relevance of that feature, in the sense that the bigger the α_i, the more likely the feature weight w_i is driven to zero.

This additional layer of hyper-parameters yields an optimization process in two stages, in a fashion similar to Expectation-Maximization: (1) optimize over w with fixed α, (2) optimize over α using the optimal w from (1). This is the main iteration in the RVM technique. In our derivations below, we call them the inner loop and the outer loop. The details of the derivation and a discussion are in the appendix. At the end of the iteration, we obtain a set of converged α's and w's. Typically, many of the w's are zero. However, even for those w's that are not zero, their associated α's vary greatly, which suggests the method of ranking the features by the α's and thresholding at successive levels to select subsets of features of different sizes. (Note that the ranking is not independent for each feature, because the α's are obtained by considering all features.) This way of studying the effect of feature selection makes it comparable to the successively larger subsets of features in the 1-norm SVM case.

The original RVM is derived and experimented on two-class problems. While mentioning an extension to multi-class, the original formulation essentially treats multi-class problems as a series of n one-vs-rest binary classification problems. This would translate into training n binary classifiers independently.¹ Instead, to fully exploit the potential of this technique,

¹ In [6], eqn. 8, the multiclass likelihood is defined as P(t|w) = ∏_n ∏_k σ(y_k(x_n; w_k))^{t_nk}, where x and w are input variables and weights,
we derive a multi-class RVM based on the first principles of the multinomial logistic regression and the prior on w in the hierarchical model (in the appendix).

Figure 4: (a) The equal-penalty contour for the student-t prior on w when a = b > 0. (b) The equal-penalty contour for the Jeffreys prior (density 1/x) on w when a = b = 0. (c) Graphical model for the RVM. The response y is a multinomial logit function on the input data φ with the weights w_i. Each weight w_i is associated with a hyper-parameter α_i and p(w_i|α_i) = N(w_i|0, α_i⁻¹). Each α_i is distributed as Gamma(a, b).

4 Experimental results

4.1 Setup

We experiment on digits from the MNIST database [9]. To extract features, each query digit is aligned against a collection of prototype digits, one for each class, resulting in a set of shape context features for each of the C classes. By restricting our attention to only one prototype per class, we use an overly simple classifier so that we can study the feature selection problem in isolation. Here, we do not address the issue of feature selection across more prototypes per class, but believe that our findings are indicative of the general case. In our derivation of the RVM, the features are class-specific (i.e. the φ_i^(p) in section A.1), which agrees with the process of getting features by matching against each class. In the 1-norm SVM setting we simply concatenate the features from all classes.

4.2 Two-class

We first study the effects in the two-class problem: 8s vs 9s. They are an easily confused pair and indeed yield a worse error rate than a classifier trained on most other pairs of digits. For this problem, we run both the RVM and the 1-norm SVM on the shape context features. An overview plot of the error rate is in fig. 5, computed by 5-fold cross validation. The eventual error rate is not in the range of the state of the art (0.6% as in [7]), but performs well for a single unit in a prototype-based method, which will perform much better with additional units of other prototypes [10].
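For intuition about what an L1-penalized hinge-loss classifier does, here is a minimal proximal-subgradient sketch (not the path-following algorithm of [5]): the soft-thresholding step drives uninformative weights to exactly zero. The data, step size and λ below are made up.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1: shrink toward zero; exact zeros appear."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def l1_hinge(X, y, lam=0.1, step=0.1, iters=200):
    """Minimize mean hinge loss + lam * ||w||_1 by proximal subgradient descent."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(iters):
        margins = y * (X @ w)
        active = margins < 1.0                            # points inside the margin
        grad = -(X[active] * y[active, None]).sum(axis=0) / n
        w = soft_threshold(w - step * grad, step * lam)   # gradient step, then prox
    return w

# Feature 0 determines the label; feature 1 is a tiny, uncorrelated nuisance column.
X = np.array([[1.0, 0.01], [-1.0, -0.01], [1.0, -0.01], [-1.0, 0.01]])
y = np.array([1.0, -1.0, 1.0, -1.0])
w = l1_hinge(X, y)
# The informative weight ends up positive; the nuisance weight stays exactly zero.
```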
The error rate also drops noticeably with only a few activated features, suggesting that a small portion of the features accounts for the overall classification. We look more closely at what features are activated in this range, shown in fig. 6. As a baseline, we also include results from the simple filter method of ranking features by their mutual information with the class label, shown in fig. 6. A few things can be noticed:

1. Since the mutual information criterion picks features independently, it selects features from the lower half of the digit 9 repeatedly. Those features each have high predictive power, but as a collection they are highly correlated with each other. In contrast, the RVM or 1-norm SVM quickly exhausts the features there and moves on to other parts of the shape.

2. The two embedded methods, the RVM and the 1-norm SVM, agree fairly well on the features they select, and agree well on their weights. One difference is that the RVM also tends to pick out another spot of discrimination, the upper right corner of the digit 9 (which is useful for telling against 8s that are not closed in that region). In comparison, the 1-norm SVM takes longer to start assigning significant weights to that region.

respectively; t_nk is the indicator variable for observation n to be in class k, y_k is the predictor for class k, and σ(y) is the logit function 1/(1 + e^{−y}). The product of binary logit functions treats the class indicator variable y_k independently for each class. In contrast, a true multiclass likelihood is P(t|w) = ∏_n ∏_k σ(y_k; y_1, y_2, ..., y_K)^{t_nk}, where the predictors for each class y_k are coupled in the multinomial logit function (or the softmax): σ(y_k; y_1, ..., y_K) = e^{y_k}/(e^{y_1} + ... + e^{y_K}).
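The distinction drawn in the footnote can be checked numerically: independent per-class sigmoids need not form a distribution over classes, while the softmax always does. The class scores below are made up.

```python
import numpy as np

def sigmoid(y):
    """Binary logit function 1 / (1 + e^{-y})."""
    return 1.0 / (1.0 + np.exp(-y))

def softmax(y):
    """Multinomial logit: couples all class predictors into one distribution."""
    e = np.exp(y - y.max())  # subtract the max for numerical stability
    return e / e.sum()

scores = np.array([2.0, 1.0, 0.5])  # made-up class predictors y_k

p_ovr = sigmoid(scores)   # one-vs-rest: each class treated independently
p_soft = softmax(scores)  # softmax: predictors coupled

print(p_ovr.sum())   # > 1 here: not a distribution over the three classes
print(p_soft.sum())  # 1.0
```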
As an interesting side experiment, we also compare the shape context features against the raw intensity values from the original MNIST images, using our feature selection methods. We first run the 1-norm SVM on the two separate feature sets. Fig. 7 shows that shape context outperforms intensity when a few features are activated, while their eventual performance is comparable. This suggests that shape contexts are the features of choice when only a limited number of features is allowed, which is confirmed in fig. 7: a run of the 1-norm SVM on the combined pool of features selects mostly the shape contexts first. This effect is also present in the RVM, though the contrast between the two types of features is less.

Figure 5: Error rate on the two-class problem as more features are added.

4.3 Four-class

We pick the classes 3, 6, 8, 9 for a similar reason as in the two-class case: they have visually similar shapes, and some parts of their shapes we can identify as being discriminative in this four-class classification task. We are interested in seeing whether this notion is reflected in the results from our method. Since the problem is multi-class, the 1-norm SVM is not applicable and we run only the RVM, with mutual information as a baseline. Fig. 8 plots the error rate as a function of the number of features. Similar to the two-class problem, peak performance is reached with a similar percentage of features. The peak error rate is only slightly worse than that of the two-class case (6% from 4%), validating the multi-class extension. In the well-performing range of the number of features, we visualize the magnitude of the weights on each feature (fig. 9). Similar to the two-class case, we see that the features tend to spread out in the RVM as it avoids selecting correlated features.

It is interesting to reflect on the features selected in fig. 9 and notice that those are the features good for telling each digit class apart from the rest of the classes: the upper left portion of 3, the tip of the stem of 6, the lower circle of 8 (which has a different topology than the other classes), and the lower portion of 9 can each be examined alone to decide which class it is. If we then look at the magnitude of the weights on those features, for example, the bigger weights on the lower portion of the 9 suggest that, compared to other cues, the multinomial logistic classifier benefits the most from comparing the query digit to the prototype 9 and examining the lower portion of the matching score. We think it is important and interesting to interpret the learning mechanism in this way.

5 Conclusion

In this work, we have demonstrated how domain knowledge from shape analysis can be used to extract a good initial set of features suitable for selection algorithms. We extended one of the embedded feature selection methods to handle multi-class problems, studied the performance of two variants of the embedded method, and showed that the selected features have an intuitive interpretation for the classification task.

Figure 6: Feature weights as more features are activated: (a) mutual information, (b) RVM, (c) 1-norm SVM.

Figure 7: Comparing shape context and intensity features by the 1-norm SVM: (a) error rate as a function of the number of activated features, (b) number of features selected from each category.

Figure 8: Error rate of the RVM on the four-class problem as more features are added.

Figure 9: Feature weights as more features are activated: (a) mutual information, (b) RVM.

Acknowledgments

We thank Matthias Seeger and Ji Zhu for fruitful discussions, and Li Wang for providing the 1-norm SVM code.

References

[1] Stephen E. Palmer. Vision Science: Photons to Phenomenology. The MIT Press, 1999.

[2] Isabelle Guyon and André Elisseeff. An introduction to variable and feature selection. J. Mach. Learn. Res., 3:1157–1182, 2003.

[3] Gy. Dorkó and C. Schmid. Selection of scale invariant neighborhoods for object class recognition. In Proceedings of the 9th International Conference on Computer Vision, 2003.
[4] Lior Wolf and Amnon Shashua. Feature selection for unsupervised and supervised inference: the emergence of sparsity in a weighted-based approach. In Proceedings of the 9th International Conference on Computer Vision, 2003.

[5] Ji Zhu, Saharon Rosset, Trevor Hastie, and Rob Tibshirani. 1-norm support vector machines. In Sebastian Thrun, Lawrence Saul, and Bernhard Schölkopf, editors, Advances in Neural Information Processing Systems 16. MIT Press, Cambridge, MA, 2004.

[6] Michael E. Tipping. Sparse Bayesian learning and the relevance vector machine. J. Mach. Learn. Res., 1:211–244, 2001.

[7] S. Belongie, J. Malik, and J. Puzicha. Shape matching and object recognition using shape contexts. IEEE Trans. Pattern Anal. Mach. Intell., 24(4):509–522, 2002.

[8] Robert Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society B, 58:267–288, 1996.

[9] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, November 1998.

[10] Hao Zhang and Jitendra Malik. Learning a discriminative classifier using shape context distances. In Proc. IEEE Conf. Computer Vision and Pattern Recognition, 2003.

[11] David J. C. MacKay. Bayesian interpolation. Neural Comput., 4(3):415–447, 1992.

A Appendix

A.1 Inner loop: L2-regularized logistic regression

This is the stage where the α's are fixed. Suppose the number of input data points is n and the number of classes is C. The feature vector components are class-specific, i.e., an input i is mapped into a series of feature vectors, one corresponding to each class p: φ_i^(p). Following multinomial logistic modelling, u_i^(p) = ⟨w^(p), φ_i^(p)⟩, and the output is a softmax over the u's:

μ_i^(p) = e^{u_i^(p)} / (e^{u_i^(1)} + ... + e^{u_i^(C)}).

Suppose each w_i^(p) is regularized by a hyper-parameter α_i^(p), i.e., w_i^(p) ~ N(0, 1/α_i^(p)). Under the maximum a posteriori (MAP) principle, we minimize the negative log of the posterior:

−log p(w|y) ≐ −log p(y|w) − log p(w|α) := Ψ(w) ≐ −Σ_{i,p} y_i^(p) log μ_i^(p) + Σ_{i,p} (1/2)(α_i^(p) (w_i^(p))² − log α_i^(p)),

where ≐ denotes equality modulo constants (the first ≐
is modulo log p(y|α), a term which will become important in the outer loop but here is a constant since α is fixed), and y_i^(p) is the binary indicator (i.e. y_i^(p) = 1 iff data point i is of class p).

For a more intuitive derivation, we adopt a more succinct notation: let Φ^(p) be the design matrix for class p, namely Φ^(p) = (φ_1^(p) ... φ_n^(p))ᵀ, and let Φ = diag(Φ^(p))_{p=1..C}. Let w be the concatenation of all the weights w^(p), and let w_k denote its k-th component. Similarly, let α and α_k be those of the α^(p). Let K be the total number of features from all classes. Let A = diag(α_k)_{k=1..K}. Let B^{p,q} = diag(μ_i^(p)(δ_p^q − μ_i^(q)))_{i=1..n}, and let B be the block matrix consisting of the B^{p,q}. Then the derivatives of Ψ(w) can be written in matrix form as:

∂Ψ/∂w = Φᵀ(μ − y) + Aw,    H(w) := ∂²Ψ/∂w∂wᵀ = ΦᵀBΦ + A.

These first and second derivatives are used in the iteratively reweighted least squares (IRLS) procedure, as in ordinary logistic regression, until convergence.

A.2 Outer loop: MAP estimate of α

We want to minimize the negative log posterior for α: f(α) = −log p(y|α) − log p(α). To obtain p(y|α), we know by the definition of conditional probability that p(y|α) = p(y, w|α) / p(w|y, α). Taking the negative log of both sides and recalling the definition of Ψ(w) gives:

−log p(y|α) = Ψ(w̃) + log p(w̃|y, α),

where w̃ is the optimum of the inner loop. As justified in [6], assume a saddle-point approximation for p(w|y, α), i.e., p(w|y, α) ≈ N(w|w̃, H(w̃)⁻¹). Then log p(w̃|y, α) ≈ log N(w̃|w̃, H(w̃)⁻¹) = (1/2) log det H(w̃) − (K/2) log(2π). Therefore the overall negative log posterior, dropping the constant (K/2) log(2π), is

f(α) ≐ (1/2) log det H(w̃) + Ψ(w̃) − log p(α).
The α's tend to grow very large during estimation, hence we optimize with respect to log(α). (This reparametrization affects p(α) slightly via a change of variable.) The derivative of f(α) has three parts. The first term is computed from the matrix calculus result d log det X / dX = X⁻ᵀ and the chain rule d f(X)/dα_k = tr((∂f/∂X)ᵀ ∂X/∂α_k):

∂/∂α_k [(1/2) log det H(w̃)] = (1/2) tr(H(w̃)⁻¹ ∂H(w̃)/∂α_k) = (1/2) [H(w̃)⁻¹]_kk := (1/2) Σ_kk.

Here we have assumed that B is constant w.r.t. α. An exact derivative without assuming a constant B can be obtained, which is more complicated; however, in our experiments it produces a negligible difference in the converged answer.

The second term, Ψ(w̃), depends on α in two ways: directly, through the terms involving the prior on w, and indirectly, through the optimal w̃, which depends on the value of α. However, we exploit the fact that w̃ is optimal, so that the indirect part has derivative zero:

∂Ψ(w̃)/∂α_k = [∂Ψ(w̃)/∂α_k]_{fixed w̃} + (∂w̃/∂α_k)ᵀ [∂Ψ(w)/∂w]_{w=w̃} = [∂Ψ(w̃)/∂α_k]_{fixed w̃} + 0 = (1/2)(w̃_k² − 1/α_k).

The third term is simply the negative log of the Gamma prior:

∂(−log p(α))/∂α_k = b − a/α_k.

Setting these derivatives to zero, we obtain a set of fixed-point iteration equations. This leads to the re-estimation rule for the α's, similar in form to [11]: define the degree-of-well-determinedness parameter γ_k to be γ_k = 1 − α_k Σ_kk; then the re-estimate update is:

α_k = (γ_k + 2a) / (w̃_k² + 2b).

A.3 Discussion of RVM

The choice of the values for a and b: when a = b > 0, the equivalent prior on w is a student-t distribution, which approximates a Gaussian near the origin. This is undesirable, as it puts an L2-norm penalty on the weights when the weights become small. To avoid this, we set the parameters a = b = 0, which puts an improper, density-1/x prior that is independent of the scale of the weights and always has concave equal-penalty contours.

The algorithm is fast: in implementation, the inner loop returns, along with w̃ and b, the inverse of the Hessian at the optimum (as is needed in logistic regression anyway). The simple updates on α cost virtually no time.
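One outer-loop update can be sketched directly from the re-estimation rule above (the default a = b = 0 matches the discussion; all the numerical values below are made up):

```python
import numpy as np

def update_alpha(alpha, w_opt, Sigma_diag, a=0.0, b=0.0):
    """One outer-loop re-estimate:
    gamma_k = 1 - alpha_k * Sigma_kk,  alpha_k_new = (gamma_k + 2a) / (w_k^2 + 2b)."""
    gamma = 1.0 - alpha * Sigma_diag
    return (gamma + 2.0 * a) / (w_opt ** 2 + 2.0 * b)

# Made-up values: a well-determined weight (small posterior variance Sigma_kk,
# large w_k) keeps a small alpha; a poorly-determined one gets a huge alpha
# and will eventually be pruned.
alpha = np.array([1.0, 1.0])
w_opt = np.array([2.0, 0.01])
Sigma_diag = np.array([0.1, 0.9])  # diagonal of H(w)^{-1}
print(update_alpha(alpha, w_opt, Sigma_diag))
```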
In experiments, we see that the re-estimates converge quickly. Those α's that correspond to suppressed features tend to grow very large after a few iterations. As in [6], we prune those features, which also speeds up later iterations.
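To make the two loops concrete, here is a minimal sketch of the whole iteration for the binary (two-class) case, a simplification of the multi-class derivation above: a Newton (IRLS) inner loop on the α-regularized logistic regression, followed by the outer α update with a = b = 0. The data, iteration counts, and numerical safeguards are all made up for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def rvm_binary(Phi, y, n_outer=20, n_inner=10):
    """Minimal binary-RVM sketch: inner Newton loop on w, outer loop on alpha."""
    n, K = Phi.shape
    alpha = np.ones(K)
    w = np.zeros(K)
    for _ in range(n_outer):
        # Inner loop: Newton steps on Psi(w) = -log lik + 0.5 * sum_k alpha_k w_k^2
        for _ in range(n_inner):
            mu = sigmoid(Phi @ w)
            g = Phi.T @ (mu - y) + alpha * w                  # gradient of Psi
            B = mu * (1.0 - mu)                               # logistic IRLS weights
            H = Phi.T @ (Phi * B[:, None]) + np.diag(alpha)   # Hessian of Psi
            w = w - np.linalg.solve(H, g)
        # Outer loop: gamma_k = 1 - alpha_k * Sigma_kk, alpha_k = gamma_k / w_k^2
        Sigma_diag = np.diag(np.linalg.inv(H))
        gamma = np.clip(1.0 - alpha * Sigma_diag, 0.0, 1.0)   # clip numerical noise
        alpha = np.clip(gamma / (w ** 2 + 1e-12), 1e-6, 1e12) # cap for stability
    return w, alpha

# Made-up data: feature 0 carries the (noisy) label, feature 1 is pure noise.
rng = np.random.default_rng(0)
x0 = rng.standard_normal(100)
Phi = np.column_stack([x0, rng.standard_normal(100)])
y = (x0 + 0.5 * rng.standard_normal(100) > 0).astype(float)
w, alpha = rvm_binary(Phi, y)
# alpha[1] should grow much larger than alpha[0], marking feature 1 as irrelevant.
```

In a full implementation the large-α features would be pruned between outer iterations, as the paper notes; here they are merely capped.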
More informationTerm Weighting Classification System Using the Chi-square Statistic for the Classification Subtask at NTCIR-6 Patent Retrieval Task
Proceedngs of NTCIR-6 Workshop Meetng, May 15-18, 2007, Tokyo, Japan Term Weghtng Classfcaton System Usng the Ch-square Statstc for the Classfcaton Subtask at NTCIR-6 Patent Retreval Task Kotaro Hashmoto
More informationHermite Splines in Lie Groups as Products of Geodesics
Hermte Splnes n Le Groups as Products of Geodescs Ethan Eade Updated May 28, 2017 1 Introducton 1.1 Goal Ths document defnes a curve n the Le group G parametrzed by tme and by structural parameters n the
More informationUnsupervised Learning
Pattern Recognton Lecture 8 Outlne Introducton Unsupervsed Learnng Parametrc VS Non-Parametrc Approach Mxture of Denstes Maxmum-Lkelhood Estmates Clusterng Prof. Danel Yeung School of Computer Scence and
More informationClassifier Selection Based on Data Complexity Measures *
Classfer Selecton Based on Data Complexty Measures * Edth Hernández-Reyes, J.A. Carrasco-Ochoa, and J.Fco. Martínez-Trndad Natonal Insttute for Astrophyscs, Optcs and Electroncs, Lus Enrque Erro No.1 Sta.
More informationContent Based Image Retrieval Using 2-D Discrete Wavelet with Texture Feature with Different Classifiers
IOSR Journal of Electroncs and Communcaton Engneerng (IOSR-JECE) e-issn: 78-834,p- ISSN: 78-8735.Volume 9, Issue, Ver. IV (Mar - Apr. 04), PP 0-07 Content Based Image Retreval Usng -D Dscrete Wavelet wth
More informationSLAM Summer School 2006 Practical 2: SLAM using Monocular Vision
SLAM Summer School 2006 Practcal 2: SLAM usng Monocular Vson Javer Cvera, Unversty of Zaragoza Andrew J. Davson, Imperal College London J.M.M Montel, Unversty of Zaragoza. josemar@unzar.es, jcvera@unzar.es,
More informationFEATURE EXTRACTION. Dr. K.Vijayarekha. Associate Dean School of Electrical and Electronics Engineering SASTRA University, Thanjavur
FEATURE EXTRACTION Dr. K.Vjayarekha Assocate Dean School of Electrcal and Electroncs Engneerng SASTRA Unversty, Thanjavur613 41 Jont Intatve of IITs and IISc Funded by MHRD Page 1 of 8 Table of Contents
More informationy and the total sum of
Lnear regresson Testng for non-lnearty In analytcal chemstry, lnear regresson s commonly used n the constructon of calbraton functons requred for analytcal technques such as gas chromatography, atomc absorpton
More informationMULTISPECTRAL IMAGES CLASSIFICATION BASED ON KLT AND ATR AUTOMATIC TARGET RECOGNITION
MULTISPECTRAL IMAGES CLASSIFICATION BASED ON KLT AND ATR AUTOMATIC TARGET RECOGNITION Paulo Quntlano 1 & Antono Santa-Rosa 1 Federal Polce Department, Brasla, Brazl. E-mals: quntlano.pqs@dpf.gov.br and
More informationTsinghua University at TAC 2009: Summarizing Multi-documents by Information Distance
Tsnghua Unversty at TAC 2009: Summarzng Mult-documents by Informaton Dstance Chong Long, Mnle Huang, Xaoyan Zhu State Key Laboratory of Intellgent Technology and Systems, Tsnghua Natonal Laboratory for
More informationAnnouncements. Supervised Learning
Announcements See Chapter 5 of Duda, Hart, and Stork. Tutoral by Burge lnked to on web page. Supervsed Learnng Classfcaton wth labeled eamples. Images vectors n hgh-d space. Supervsed Learnng Labeled eamples
More informationDiscriminative Dictionary Learning with Pairwise Constraints
Dscrmnatve Dctonary Learnng wth Parwse Constrants Humn Guo Zhuoln Jang LARRY S. DAVIS UNIVERSITY OF MARYLAND Nov. 6 th, Outlne Introducton/motvaton Dctonary Learnng Dscrmnatve Dctonary Learnng wth Parwse
More informationDetermining the Optimal Bandwidth Based on Multi-criterion Fusion
Proceedngs of 01 4th Internatonal Conference on Machne Learnng and Computng IPCSIT vol. 5 (01) (01) IACSIT Press, Sngapore Determnng the Optmal Bandwdth Based on Mult-crteron Fuson Ha-L Lang 1+, Xan-Mn
More informationA Fast Visual Tracking Algorithm Based on Circle Pixels Matching
A Fast Vsual Trackng Algorthm Based on Crcle Pxels Matchng Zhqang Hou hou_zhq@sohu.com Chongzhao Han czhan@mal.xjtu.edu.cn Ln Zheng Abstract: A fast vsual trackng algorthm based on crcle pxels matchng
More informationMachine Learning 9. week
Machne Learnng 9. week Mappng Concept Radal Bass Functons (RBF) RBF Networks 1 Mappng It s probably the best scenaro for the classfcaton of two dataset s to separate them lnearly. As you see n the below
More informationS1 Note. Basis functions.
S1 Note. Bass functons. Contents Types of bass functons...1 The Fourer bass...2 B-splne bass...3 Power and type I error rates wth dfferent numbers of bass functons...4 Table S1. Smulaton results of type
More informationCollaboratively Regularized Nearest Points for Set Based Recognition
Academc Center for Computng and Meda Studes, Kyoto Unversty Collaboratvely Regularzed Nearest Ponts for Set Based Recognton Yang Wu, Mchhko Mnoh, Masayuk Mukunok Kyoto Unversty 9/1/013 BMVC 013 @ Brstol,
More informationBiostatistics 615/815
The E-M Algorthm Bostatstcs 615/815 Lecture 17 Last Lecture: The Smplex Method General method for optmzaton Makes few assumptons about functon Crawls towards mnmum Some recommendatons Multple startng ponts
More informationOutline. Discriminative classifiers for image recognition. Where in the World? A nearest neighbor recognition example 4/14/2011. CS 376 Lecture 22 1
4/14/011 Outlne Dscrmnatve classfers for mage recognton Wednesday, Aprl 13 Krsten Grauman UT-Austn Last tme: wndow-based generc obect detecton basc ppelne face detecton wth boostng as case study Today:
More informationTN348: Openlab Module - Colocalization
TN348: Openlab Module - Colocalzaton Topc The Colocalzaton module provdes the faclty to vsualze and quantfy colocalzaton between pars of mages. The Colocalzaton wndow contans a prevew of the two mages
More information12/2/2009. Announcements. Parametric / Non-parametric. Case-Based Reasoning. Nearest-Neighbor on Images. Nearest-Neighbor Classification
Introducton to Artfcal Intellgence V22.0472-001 Fall 2009 Lecture 24: Nearest-Neghbors & Support Vector Machnes Rob Fergus Dept of Computer Scence, Courant Insttute, NYU Sldes from Danel Yeung, John DeNero
More informationParallel matrix-vector multiplication
Appendx A Parallel matrx-vector multplcaton The reduced transton matrx of the three-dmensonal cage model for gel electrophoress, descrbed n secton 3.2, becomes excessvely large for polymer lengths more
More informationCompiler Design. Spring Register Allocation. Sample Exercises and Solutions. Prof. Pedro C. Diniz
Compler Desgn Sprng 2014 Regster Allocaton Sample Exercses and Solutons Prof. Pedro C. Dnz USC / Informaton Scences Insttute 4676 Admralty Way, Sute 1001 Marna del Rey, Calforna 90292 pedro@s.edu Regster
More informationX- Chart Using ANOM Approach
ISSN 1684-8403 Journal of Statstcs Volume 17, 010, pp. 3-3 Abstract X- Chart Usng ANOM Approach Gullapall Chakravarth 1 and Chaluvad Venkateswara Rao Control lmts for ndvdual measurements (X) chart are
More informationCHAPTER 3 SEQUENTIAL MINIMAL OPTIMIZATION TRAINED SUPPORT VECTOR CLASSIFIER FOR CANCER PREDICTION
48 CHAPTER 3 SEQUENTIAL MINIMAL OPTIMIZATION TRAINED SUPPORT VECTOR CLASSIFIER FOR CANCER PREDICTION 3.1 INTRODUCTION The raw mcroarray data s bascally an mage wth dfferent colors ndcatng hybrdzaton (Xue
More informationThe Codesign Challenge
ECE 4530 Codesgn Challenge Fall 2007 Hardware/Software Codesgn The Codesgn Challenge Objectves In the codesgn challenge, your task s to accelerate a gven software reference mplementaton as fast as possble.
More informationParallelism for Nested Loops with Non-uniform and Flow Dependences
Parallelsm for Nested Loops wth Non-unform and Flow Dependences Sam-Jn Jeong Dept. of Informaton & Communcaton Engneerng, Cheonan Unversty, 5, Anseo-dong, Cheonan, Chungnam, 330-80, Korea. seong@cheonan.ac.kr
More informationBackpropagation: In Search of Performance Parameters
Bacpropagaton: In Search of Performance Parameters ANIL KUMAR ENUMULAPALLY, LINGGUO BU, and KHOSROW KAIKHAH, Ph.D. Computer Scence Department Texas State Unversty-San Marcos San Marcos, TX-78666 USA ae049@txstate.edu,
More informationNAG Fortran Library Chapter Introduction. G10 Smoothing in Statistics
Introducton G10 NAG Fortran Lbrary Chapter Introducton G10 Smoothng n Statstcs Contents 1 Scope of the Chapter... 2 2 Background to the Problems... 2 2.1 Smoothng Methods... 2 2.2 Smoothng Splnes and Regresson
More informationCluster Analysis of Electrical Behavior
Journal of Computer and Communcatons, 205, 3, 88-93 Publshed Onlne May 205 n ScRes. http://www.scrp.org/ournal/cc http://dx.do.org/0.4236/cc.205.350 Cluster Analyss of Electrcal Behavor Ln Lu Ln Lu, School
More informationAn Entropy-Based Approach to Integrated Information Needs Assessment
Dstrbuton Statement A: Approved for publc release; dstrbuton s unlmted. An Entropy-Based Approach to ntegrated nformaton Needs Assessment June 8, 2004 Wllam J. Farrell Lockheed Martn Advanced Technology
More informationA mathematical programming approach to the analysis, design and scheduling of offshore oilfields
17 th European Symposum on Computer Aded Process Engneerng ESCAPE17 V. Plesu and P.S. Agach (Edtors) 2007 Elsever B.V. All rghts reserved. 1 A mathematcal programmng approach to the analyss, desgn and
More informationClassification / Regression Support Vector Machines
Classfcaton / Regresson Support Vector Machnes Jeff Howbert Introducton to Machne Learnng Wnter 04 Topcs SVM classfers for lnearly separable classes SVM classfers for non-lnearly separable classes SVM
More informationThe Greedy Method. Outline and Reading. Change Money Problem. Greedy Algorithms. Applications of the Greedy Strategy. The Greedy Method Technique
//00 :0 AM Outlne and Readng The Greedy Method The Greedy Method Technque (secton.) Fractonal Knapsack Problem (secton..) Task Schedulng (secton..) Mnmum Spannng Trees (secton.) Change Money Problem Greedy
More informationGSLM Operations Research II Fall 13/14
GSLM 58 Operatons Research II Fall /4 6. Separable Programmng Consder a general NLP mn f(x) s.t. g j (x) b j j =. m. Defnton 6.. The NLP s a separable program f ts objectve functon and all constrants are
More informationMachine Learning: Algorithms and Applications
14/05/1 Machne Learnng: Algorthms and Applcatons Florano Zn Free Unversty of Bozen-Bolzano Faculty of Computer Scence Academc Year 011-01 Lecture 10: 14 May 01 Unsupervsed Learnng cont Sldes courtesy of
More informationLobachevsky State University of Nizhni Novgorod. Polyhedron. Quick Start Guide
Lobachevsky State Unversty of Nzhn Novgorod Polyhedron Quck Start Gude Nzhn Novgorod 2016 Contents Specfcaton of Polyhedron software... 3 Theoretcal background... 4 1. Interface of Polyhedron... 6 1.1.
More informationAn Optimal Algorithm for Prufer Codes *
J. Software Engneerng & Applcatons, 2009, 2: 111-115 do:10.4236/jsea.2009.22016 Publshed Onlne July 2009 (www.scrp.org/journal/jsea) An Optmal Algorthm for Prufer Codes * Xaodong Wang 1, 2, Le Wang 3,
More informationReducing Frame Rate for Object Tracking
Reducng Frame Rate for Object Trackng Pavel Korshunov 1 and We Tsang Oo 2 1 Natonal Unversty of Sngapore, Sngapore 11977, pavelkor@comp.nus.edu.sg 2 Natonal Unversty of Sngapore, Sngapore 11977, oowt@comp.nus.edu.sg
More informationImprovement of Spatial Resolution Using BlockMatching Based Motion Estimation and Frame. Integration
Improvement of Spatal Resoluton Usng BlockMatchng Based Moton Estmaton and Frame Integraton Danya Suga and Takayuk Hamamoto Graduate School of Engneerng, Tokyo Unversty of Scence, 6-3-1, Nuku, Katsuska-ku,
More information6.854 Advanced Algorithms Petar Maymounkov Problem Set 11 (November 23, 2005) With: Benjamin Rossman, Oren Weimann, and Pouya Kheradpour
6.854 Advanced Algorthms Petar Maymounkov Problem Set 11 (November 23, 2005) Wth: Benjamn Rossman, Oren Wemann, and Pouya Kheradpour Problem 1. We reduce vertex cover to MAX-SAT wth weghts, such that the
More informationNUMERICAL SOLVING OPTIMAL CONTROL PROBLEMS BY THE METHOD OF VARIATIONS
ARPN Journal of Engneerng and Appled Scences 006-017 Asan Research Publshng Network (ARPN). All rghts reserved. NUMERICAL SOLVING OPTIMAL CONTROL PROBLEMS BY THE METHOD OF VARIATIONS Igor Grgoryev, Svetlana
More informationA Robust Method for Estimating the Fundamental Matrix
Proc. VIIth Dgtal Image Computng: Technques and Applcatons, Sun C., Talbot H., Ourseln S. and Adraansen T. (Eds.), 0- Dec. 003, Sydney A Robust Method for Estmatng the Fundamental Matrx C.L. Feng and Y.S.
More informationFor instance, ; the five basic number-sets are increasingly more n A B & B A A = B (1)
Secton 1.2 Subsets and the Boolean operatons on sets If every element of the set A s an element of the set B, we say that A s a subset of B, or that A s contaned n B, or that B contans A, and we wrte A
More informationActive Contours/Snakes
Actve Contours/Snakes Erkut Erdem Acknowledgement: The sldes are adapted from the sldes prepared by K. Grauman of Unversty of Texas at Austn Fttng: Edges vs. boundares Edges useful sgnal to ndcate occludng
More informationHelsinki University Of Technology, Systems Analysis Laboratory Mat Independent research projects in applied mathematics (3 cr)
Helsnk Unversty Of Technology, Systems Analyss Laboratory Mat-2.08 Independent research projects n appled mathematcs (3 cr) "! #$&% Antt Laukkanen 506 R ajlaukka@cc.hut.f 2 Introducton...3 2 Multattrbute
More informationA PATTERN RECOGNITION APPROACH TO IMAGE SEGMENTATION
1 THE PUBLISHING HOUSE PROCEEDINGS OF THE ROMANIAN ACADEMY, Seres A, OF THE ROMANIAN ACADEMY Volume 4, Number 2/2003, pp.000-000 A PATTERN RECOGNITION APPROACH TO IMAGE SEGMENTATION Tudor BARBU Insttute
More informationSVM-based Learning for Multiple Model Estimation
SVM-based Learnng for Multple Model Estmaton Vladmr Cherkassky and Yunqan Ma Department of Electrcal and Computer Engneerng Unversty of Mnnesota Mnneapols, MN 55455 {cherkass,myq}@ece.umn.edu Abstract:
More informationLECTURE : MANIFOLD LEARNING
LECTURE : MANIFOLD LEARNING Rta Osadchy Some sldes are due to L.Saul, V. C. Raykar, N. Verma Topcs PCA MDS IsoMap LLE EgenMaps Done! Dmensonalty Reducton Data representaton Inputs are real-valued vectors
More informationThe Research of Support Vector Machine in Agricultural Data Classification
The Research of Support Vector Machne n Agrcultural Data Classfcaton Le Sh, Qguo Duan, Xnmng Ma, Me Weng College of Informaton and Management Scence, HeNan Agrcultural Unversty, Zhengzhou 45000 Chna Zhengzhou
More informationKent State University CS 4/ Design and Analysis of Algorithms. Dept. of Math & Computer Science LECT-16. Dynamic Programming
CS 4/560 Desgn and Analyss of Algorthms Kent State Unversty Dept. of Math & Computer Scence LECT-6 Dynamc Programmng 2 Dynamc Programmng Dynamc Programmng, lke the dvde-and-conquer method, solves problems
More informationFeature-Based Matrix Factorization
Feature-Based Matrx Factorzaton arxv:1109.2271v3 [cs.ai] 29 Dec 2011 Tanq Chen, Zhao Zheng, Quxa Lu, Wenan Zhang, Yong Yu {tqchen,zhengzhao,luquxa,wnzhang,yyu}@apex.stu.edu.cn Apex Data & Knowledge Management
More informationProblem Definitions and Evaluation Criteria for Computational Expensive Optimization
Problem efntons and Evaluaton Crtera for Computatonal Expensve Optmzaton B. Lu 1, Q. Chen and Q. Zhang 3, J. J. Lang 4, P. N. Suganthan, B. Y. Qu 6 1 epartment of Computng, Glyndwr Unversty, UK Faclty
More informationAPPLICATION OF MULTIVARIATE LOSS FUNCTION FOR ASSESSMENT OF THE QUALITY OF TECHNOLOGICAL PROCESS MANAGEMENT
3. - 5. 5., Brno, Czech Republc, EU APPLICATION OF MULTIVARIATE LOSS FUNCTION FOR ASSESSMENT OF THE QUALITY OF TECHNOLOGICAL PROCESS MANAGEMENT Abstract Josef TOŠENOVSKÝ ) Lenka MONSPORTOVÁ ) Flp TOŠENOVSKÝ
More informationDetection of an Object by using Principal Component Analysis
Detecton of an Object by usng Prncpal Component Analyss 1. G. Nagaven, 2. Dr. T. Sreenvasulu Reddy 1. M.Tech, Department of EEE, SVUCE, Trupath, Inda. 2. Assoc. Professor, Department of ECE, SVUCE, Trupath,
More informationHigh-Boost Mesh Filtering for 3-D Shape Enhancement
Hgh-Boost Mesh Flterng for 3-D Shape Enhancement Hrokazu Yagou Λ Alexander Belyaev y Damng We z Λ y z ; ; Shape Modelng Laboratory, Unversty of Azu, Azu-Wakamatsu 965-8580 Japan y Computer Graphcs Group,
More informationAn Application of the Dulmage-Mendelsohn Decomposition to Sparse Null Space Bases of Full Row Rank Matrices
Internatonal Mathematcal Forum, Vol 7, 2012, no 52, 2549-2554 An Applcaton of the Dulmage-Mendelsohn Decomposton to Sparse Null Space Bases of Full Row Rank Matrces Mostafa Khorramzadeh Department of Mathematcal
More informationCS434a/541a: Pattern Recognition Prof. Olga Veksler. Lecture 15
CS434a/541a: Pattern Recognton Prof. Olga Veksler Lecture 15 Today New Topc: Unsupervsed Learnng Supervsed vs. unsupervsed learnng Unsupervsed learnng Net Tme: parametrc unsupervsed learnng Today: nonparametrc
More informationTPL-Aware Displacement-driven Detailed Placement Refinement with Coloring Constraints
TPL-ware Dsplacement-drven Detaled Placement Refnement wth Colorng Constrants Tao Ln Iowa State Unversty tln@astate.edu Chrs Chu Iowa State Unversty cnchu@astate.edu BSTRCT To mnmze the effect of process
More informationFuzzy Filtering Algorithms for Image Processing: Performance Evaluation of Various Approaches
Proceedngs of the Internatonal Conference on Cognton and Recognton Fuzzy Flterng Algorthms for Image Processng: Performance Evaluaton of Varous Approaches Rajoo Pandey and Umesh Ghanekar Department of
More informationFace Recognition University at Buffalo CSE666 Lecture Slides Resources:
Face Recognton Unversty at Buffalo CSE666 Lecture Sldes Resources: http://www.face-rec.org/algorthms/ Overvew of face recognton algorthms Correlaton - Pxel based correspondence between two face mages Structural
More informationComplex Numbers. Now we also saw that if a and b were both positive then ab = a b. For a second let s forget that restriction and do the following.
Complex Numbers The last topc n ths secton s not really related to most of what we ve done n ths chapter, although t s somewhat related to the radcals secton as we wll see. We also won t need the materal
More informationCorner-Based Image Alignment using Pyramid Structure with Gradient Vector Similarity
Journal of Sgnal and Informaton Processng, 013, 4, 114-119 do:10.436/jsp.013.43b00 Publshed Onlne August 013 (http://www.scrp.org/journal/jsp) Corner-Based Image Algnment usng Pyramd Structure wth Gradent
More informationUnsupervised Learning and Clustering
Unsupervsed Learnng and Clusterng Why consder unlabeled samples?. Collectng and labelng large set of samples s costly Gettng recorded speech s free, labelng s tme consumng 2. Classfer could be desgned
More informationLearning-Based Top-N Selection Query Evaluation over Relational Databases
Learnng-Based Top-N Selecton Query Evaluaton over Relatonal Databases Lang Zhu *, Wey Meng ** * School of Mathematcs and Computer Scence, Hebe Unversty, Baodng, Hebe 071002, Chna, zhu@mal.hbu.edu.cn **
More informationProper Choice of Data Used for the Estimation of Datum Transformation Parameters
Proper Choce of Data Used for the Estmaton of Datum Transformaton Parameters Hakan S. KUTOGLU, Turkey Key words: Coordnate systems; transformaton; estmaton, relablty. SUMMARY Advances n technologes and
More informationA Unified Framework for Semantics and Feature Based Relevance Feedback in Image Retrieval Systems
A Unfed Framework for Semantcs and Feature Based Relevance Feedback n Image Retreval Systems Ye Lu *, Chunhu Hu 2, Xngquan Zhu 3*, HongJang Zhang 2, Qang Yang * School of Computng Scence Smon Fraser Unversty
More informationImage Representation & Visualization Basic Imaging Algorithms Shape Representation and Analysis. outline
mage Vsualzaton mage Vsualzaton mage Representaton & Vsualzaton Basc magng Algorthms Shape Representaton and Analyss outlne mage Representaton & Vsualzaton Basc magng Algorthms Shape Representaton and
More informationA MOVING MESH APPROACH FOR SIMULATION BUDGET ALLOCATION ON CONTINUOUS DOMAINS
Proceedngs of the Wnter Smulaton Conference M E Kuhl, N M Steger, F B Armstrong, and J A Jones, eds A MOVING MESH APPROACH FOR SIMULATION BUDGET ALLOCATION ON CONTINUOUS DOMAINS Mark W Brantley Chun-Hung
More informationProblem Set 3 Solutions
Introducton to Algorthms October 4, 2002 Massachusetts Insttute of Technology 6046J/18410J Professors Erk Demane and Shaf Goldwasser Handout 14 Problem Set 3 Solutons (Exercses were not to be turned n,
More informationFINDING IMPORTANT NODES IN SOCIAL NETWORKS BASED ON MODIFIED PAGERANK
FINDING IMPORTANT NODES IN SOCIAL NETWORKS BASED ON MODIFIED PAGERANK L-qng Qu, Yong-quan Lang 2, Jng-Chen 3, 2 College of Informaton Scence and Technology, Shandong Unversty of Scence and Technology,
More informationEfficient Segmentation and Classification of Remote Sensing Image Using Local Self Similarity
ISSN(Onlne): 2320-9801 ISSN (Prnt): 2320-9798 Internatonal Journal of Innovatve Research n Computer and Communcaton Engneerng (An ISO 3297: 2007 Certfed Organzaton) Vol.2, Specal Issue 1, March 2014 Proceedngs
More informationFitting: Deformable contours April 26 th, 2018
4/6/08 Fttng: Deformable contours Aprl 6 th, 08 Yong Jae Lee UC Davs Recap so far: Groupng and Fttng Goal: move from array of pxel values (or flter outputs) to a collecton of regons, objects, and shapes.
More informationRelational Lasso An Improved Method Using the Relations among Features
Relatonal Lasso An Improved Method Usng the Relatons among Features Kotaro Ktagawa Kumko Tanaka-Ish Graduate School of Informaton Scence and Technology, The Unversty of Tokyo ktagawa@cl.c..u-tokyo.ac.jp
More informationIncremental Learning with Support Vector Machines and Fuzzy Set Theory
The 25th Workshop on Combnatoral Mathematcs and Computaton Theory Incremental Learnng wth Support Vector Machnes and Fuzzy Set Theory Yu-Mng Chuang 1 and Cha-Hwa Ln 2* 1 Department of Computer Scence and
More informationAn efficient method to build panoramic image mosaics
An effcent method to buld panoramc mage mosacs Pattern Recognton Letters vol. 4 003 Dae-Hyun Km Yong-In Yoon Jong-Soo Cho School of Electrcal Engneerng and Computer Scence Kyungpook Natonal Unv. Abstract
More informationCS246: Mining Massive Datasets Jure Leskovec, Stanford University
CS46: Mnng Massve Datasets Jure Leskovec, Stanford Unversty http://cs46.stanford.edu /19/013 Jure Leskovec, Stanford CS46: Mnng Massve Datasets, http://cs46.stanford.edu Perceptron: y = sgn( x Ho to fnd
More informationCourse Introduction. Algorithm 8/31/2017. COSC 320 Advanced Data Structures and Algorithms. COSC 320 Advanced Data Structures and Algorithms
Course Introducton Course Topcs Exams, abs, Proects A quc loo at a few algorthms 1 Advanced Data Structures and Algorthms Descrpton: We are gong to dscuss algorthm complexty analyss, algorthm desgn technques
More informationR s s f. m y s. SPH3UW Unit 7.3 Spherical Concave Mirrors Page 1 of 12. Notes
SPH3UW Unt 7.3 Sphercal Concave Mrrors Page 1 of 1 Notes Physcs Tool box Concave Mrror If the reflectng surface takes place on the nner surface of the sphercal shape so that the centre of the mrror bulges
More informationMeta-heuristics for Multidimensional Knapsack Problems
2012 4th Internatonal Conference on Computer Research and Development IPCSIT vol.39 (2012) (2012) IACSIT Press, Sngapore Meta-heurstcs for Multdmensonal Knapsack Problems Zhbao Man + Computer Scence Department,
More information