Efficient Large-Scale Image Annotation by Probabilistic Collaborative Multi-Label Propagation

Xiangyu Chen, Yadong Mu, Shuicheng Yan, and Tat-Seng Chua
NUS Graduate School for Integrative Sciences and Engineering, School of Computing, Department of Electrical and Computer Engineering
National University of Singapore, Singapore
{chenxiangyu, elemy, eleyans,

ABSTRACT

Annotating a large-scale image corpus requires a huge amount of human effort and is thus generally unaffordable, which directly motivates the recent development of semi-supervised and active annotation methods. In this paper we revisit this notoriously challenging problem and develop a novel multi-label propagation scheme, whereby both the efficiency and accuracy of large-scale image annotation are further enhanced. Our investigation starts from a survey of previous graph-propagation-based annotation approaches, wherein we analyze their main drawbacks when scaling up to large datasets and handling the multi-label setting. Our proposed scheme outperforms state-of-the-art algorithms by making the following contributions. 1) Unlike previous approaches that propagate each label independently, our proposed large-scale multi-label propagation (LSMP) scheme encodes the tag information of an image as a unit label confidence vector, which naturally imposes inter-label constraints and manipulates labels interactively. It then utilizes the probabilistic Kullback-Leibler divergence to formulate the multi-label propagation problem. 2) We perform the multi-label propagation on the so-called hashing-based l1-graph, which is efficiently derived with a Locality Sensitive Hashing approach followed by sparse l1-graph construction within the individual hashing buckets. 3) An efficient iterative procedure with provable convergence is presented for problem optimization. Extensive experiments on the NUS-WIDE dataset (both the lite version with 56k images and the full version with 270k images) validate the effectiveness and scalability of the proposed approach.

Categories and Subject Descriptors: H.3.1 [Information Storage and Retrieval]: Content Analysis and Indexing - indexing methods

General Terms: Algorithms, Performance, Experimentation

Keywords: Image Annotation, Collaborative Multi-label Propagation

1. INTRODUCTION

For many applications like image annotation, especially in the large-scale setting, annotating training data is often very time-consuming and tedious. Semi-supervised learning (SSL) lends itself as an effective technique, through which users only need to annotate a small amount of image data, and the remaining unlabeled data can work together with the labeled data for learning and inference. In this paper we are particularly interested in efficient graph-based multi-label propagation in the large-scale setting. It is known that a graph is a natural representation for label propagation, wherein each vertex corresponds to a unique image and an edge connecting two vertices indicates a certain relation between the images. Unlike generative modeling methods, graph modeling focuses on nonparametric local structure discovery rather than a priori probabilistic assumptions.
For the transduction task on partially labeled data (known as semi-supervised learning in the literature), graph-based methods usually demonstrate state-of-the-art performance compared with other SSL algorithms [24]. Generally, there are three crucial subtasks in graph-based algorithms: 1) graph construction; 2) the choice of loss function; and 3) the choice of regularization term. As argued in [23], graph construction is supposed to be more dominating than the other two factors in terms of performance. Unfortunately, it is also the area that is most inadequately studied. In Section 2.2, we propose a novel hashing-based scheme for efficient large-scale graph construction. The solutions to the last two subtasks may affect the final accuracy as well as the proper optimization strategy (and thus the convergence speed). As reported in [8], early work on semi-supervised learning can only handle a limited number of unlabeled samples. Consequently, a large number of recent endeavors have been devoted to scalability on large datasets. Several recent large-scale algorithms (e.g., [11, 6]) plug graph-Laplacian-based regularizers into transductive support vector machines (TSVM) to obtain better transduction capability. The work in [11] solves a graph transduction problem with 650,000 samples; the whole objective function is optimized via stochastic gradient descent. The method in [6] suggests a training procedure using the concave-convex procedure (CCCP), which brings a scalability improvement on large datasets.

Figure 1: Flowchart of our proposed scheme for multi-label propagation. Step 0 and step 1 constitute the proposed hashing-based l1-graph construction scheme, performing neighborhood selection and weight computation respectively; step 2 is the probabilistic multi-label propagation based on the Kullback-Leibler divergence.

The work in [19] solves the largest graph-based problem to date, with about 900,000 samples (including both labeled and unlabeled data). By using a sparsified manifold regularizer and formulating the task as a center-constrained minimum enclosing ball problem, this method produces sparse solutions with low time and space complexities and can be efficiently solved by the core vector machine (CVM). The seminal work in [17] is most similar to ours. Unlike previous approaches, it models the multi-class label confidence vector as a probabilistic distribution and utilizes the Kullback-Leibler (KL) divergence to gauge pairwise discrepancy. The underlying philosophy is that such a soft regularization term is less vulnerable to noisy annotations and outliers. Here we adopt the same representation and distance measure, yet in a different scenario (i.e., multi-label image annotation), which demands a new solution. Several algorithms have recently been proposed to exploit the inter-relations among different labels [12]. For example, Qi et al. [15] proposed a unified Correlative Multi-Label (CML) framework to simultaneously classify labels and model the correlations between them. Chen et al. [4] formulated the problem as a Sylvester equation, similar to [22]: they first constructed two graphs, at the sample level and the category level, each associated with a quadratic energy function, and then obtained the labels of the unlabeled images by minimizing the combination of the two energy functions. Liu et al. [13] utilized constrained non-negative matrix factorization (CNMF) to optimize the consistency between image similarity and label similarity. Unfortunately, most of the aforementioned algorithms are of high complexity and unsuitable for scaling up to large datasets. Most existing work in the line of graph-based label propagation suffers (or partially suffers) from the following disadvantages: 1) each tag is considered independently when handling the multi-label propagation problem; 2) the derived labels for one image are not rankable; and 3) the graph construction process is time-consuming. Moreover, most recent large-scale algorithms focus on the single-label case, and their scalability to large numbers of labels is unclear. To address these issues, we propose a new large-scale graph-based multi-label propagation approach that minimizes the Kullback-Leibler divergence between the image-wise label confidence vector and its propagated version over the so-called hashing-based l1-graph, which is efficiently derived with a Locality Sensitive Hashing approach followed by sparse l1-graph construction within the individual hashing buckets. Finally, an efficient iterative procedure with provable convergence is presented for problem optimization. The major contributions of our proposed scheme can be summarized as follows:

- We propose a probabilistic collaborative multi-label propagation formulation for large-scale image annotation, founded on Kullback-Leibler divergence based label similarity measurement and scalable l1-graph construction.
- We also propose a novel hashing-based scheme for efficient large-scale graph construction. Locality sensitive hashing [10, 1, 14] is utilized to speed up the candidate selection of similar neighbors for each image, which makes the l1-graph construction process scalable.

The remainder of this paper is organized as follows. In Section 2, we elaborate on the proposed probabilistic collaborative multi-label propagation (LSMP) algorithm. Section 3 presents an analysis of algorithmic complexity and convergence properties. Experimental results on both middle-scale and large-scale image datasets are reported in Section 4. Section 5 concludes this work along with a discussion of future work.

2. OUR PROPOSED SCHEME

2.1 Scheme Overview

Our proposed large-scale multi-label propagation framework includes three concatenated parts: 1) an efficient k-nearest-neighbor (k-NN) search based on the locality sensitive hashing (LSH) approach; 2) sparse l1-graph construction within hashing buckets; and 3) multi-label propagation based on the Kullback-Leibler divergence. Figure 1 illustrates the algorithmic pipeline.

2.2 Hashing-based l1-Graph Construction

The first step of the proposed framework is the construction of a directed weighted graph $G = \langle V, E \rangle$, where the cardinality of the node set $V$ is $m = l + u$ ($l$ and $u$ denote the numbers of labeled and unlabeled data respectively), and the edge set $E \subseteq V \times V$ describes the graph topology. Let $V_l$ and $V_u$ be the sets of labeled and unlabeled vertices respectively. $G$ can be equivalently represented by a weight matrix $W = \{w_{ij}\} \in \mathbb{R}^{m \times m}$. To efficiently handle large-scale data, we enforce the constructed graph to be sparse: the weight $w_{ij}$ between two nodes is nonzero only when $j \in N_i$, where $N_i$ denotes the local neighborhood of the $i$-th image. The graph construction can thus be decomposed into two sub-problems: 1) how to determine the neighborhood of a datum; and 2) how to compute the edge weight $w_{ij}$.

2.2.1 Neighborhood Selection

For the first problem, the conventional strategies in previous work can be roughly divided into two categories:

- k-nearest-neighbor based neighborhood: $w_{ij}$ is nonzero only if $x_j$ is among the k nearest neighbors of the $i$-th datum. Graphs constructed in this way ensure a constant vertex degree, avoiding over-dense sub-graphs and isolated vertices.
- $\epsilon$-ball neighborhood: given a pre-specified distance measure $d_G(x_i, x_j)$ between two nodes and a threshold $\epsilon$, any vertex $x_j$ satisfying $d_G(x_i, x_j) \le \epsilon$ is incorporated into the neighborhood of the vertex $x_i$, resulting in a nonzero $w_{ij}$. It is easy to observe that the weight matrix of a graph constructed this way is symmetric. However, for vertices beyond a distance $\epsilon$ from all the others, there may be no edge connecting them to the rest of the graph.

Although dominating the graph-based learning literature, the above two schemes are both computation-intensive on large datasets, since a linear scan is required to process a single sample and the overall complexity is $O(n^2)$ ($n$ is the number of all samples). For a typical image dataset to annotate, there are a massive number of images, from each of which high-dimensional features are extracted. A naive implementation based on either of these two schemes usually takes several days to accomplish graph construction, which is definitely unaffordable. Instead, in our implementation we use locality-sensitive hashing (LSH) to enhance the efficiency on large-scale datasets. The basic idea of LSH is to store proximal samples in the same bucket, which greatly reduces retrieval time at the expense of additional storage for the hash bits. LSH is a recently proposed family of hashing algorithms. Its most attractive property is the theoretical guarantee that the collision probability of two samples (i.e., the probability of being projected into the same bucket) is proportional to their similarity in feature space. The most popular LSH approach relies on random projection followed by threshold-based binarization.
Formally, given a random projection direction $v$, the whole dataset is split into two half-spaces according to the rule $h(x_i) = \mathrm{Boolean}(v^T x_i > 0)$. The hash table typically consists of $k$ independent bits, namely the final hash code is obtained via sequential concatenation: $H(x_i) = (h_1(x_i), \ldots, h_k(x_i))$. In the retrieval phase, the k-NN candidate set can be safely confined to the buckets whose Hamming distances to the query sample are below a pre-specified small threshold. Prior theoretical investigation reveals that sublinear retrieval complexity is feasible with the LSH method, which is a crucial acceleration for the scenario of large-scale image search. Note that in our implementation, LSH is run multiple times in all the experiments, and the resulting neighborhoods are combined to avoid isolated subgraphs.
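As an illustration of this retrieval step, the following minimal Python sketch (not the authors' implementation; the bit count, table count, and helper names are illustrative assumptions) builds random-projection hash tables and returns, for each sample, the candidate neighbor set obtained by unioning its buckets across several runs. For simplicity it matches buckets exactly rather than within a small Hamming radius:

```python
import numpy as np
from collections import defaultdict

def lsh_candidates(X, n_bits=16, n_tables=4, seed=0):
    """Random-projection LSH: h(x) = (v^T x > 0), concatenated into n_bits-bit keys.

    Returns, for each sample, the union over hash tables of the samples sharing
    its bucket -- the candidate set for the subsequent k-NN search."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    candidates = [set() for _ in range(n)]
    for _ in range(n_tables):                       # multiple LSH runs, unioned
        V = rng.standard_normal((d, n_bits))        # random projection directions
        keys = (X @ V > 0)                          # n x n_bits boolean hash codes
        buckets = defaultdict(list)
        for i, key in enumerate(map(bytes, keys.astype(np.uint8))):
            buckets[key].append(i)
        for idxs in buckets.values():
            for i in idxs:
                candidates[i].update(idxs)
    for i in range(n):
        candidates[i].discard(i)                    # a sample is not its own neighbor
    return candidates

# toy usage on random features
X = np.random.randn(1000, 64)
cand = lsh_candidates(X)
```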

2.2.2 Weight Computation

A proper inter-sample similarity definition is the core of graph-based label propagation. The message transmitted from neighboring vertices with higher weights is much stronger than that from the others. Generally, the more similar a sample is to another, the stronger the interaction (and thus the larger the weight) between them. Below are some popular ways to calculate the pairwise weights:

- Unweighted k-NN similarity: the similarity $w_{ij}$ between $x_i$ and $x_j$ is 1 if $x_j$ is among the k-NN of $x_i$, and 0 otherwise. For an undirected graph, the weight matrix is symmetric and therefore $w_{ij} = w_{ji}$ is enforced.
- Exponentially weighted similarity: for all chosen k-NN neighbors, the weights are determined as
  $w_{ij} = \exp\left(-\frac{d_G(x_i, x_j)}{\sigma^2}\right)$,   (1)
  where $d_G(x_i, x_j)$ is the ground-truth distance and $\sigma$ is a free parameter controlling the decay rate.
- Weighted linear neighborhood similarity [16, 20]: in this scheme, sample $x_i$ is assumed to be linearly reconstructed from its k-NN. The weights are obtained by solving the optimization problem
  $\min_{w_i} \left\| x_i - \sum_{j \in N_i} w_{ij} x_j \right\|^2$.   (2)
  Typically, additional constraints are imposed on $w_{ij}$; for example, in [20] the constraints $w_{ij} \ge 0$ and $\sum_j w_{ij} = 1$ are used.

In our implementation, we adopt a scheme similar to the idea in [16, 20], based on the linear reconstruction assumption. Moreover, prior work [18] reveals that minimizing the $l_1$ norm over the weights is able to suppress the noise contained in the data. The constructed graph is non-parametric and comparably more robust than those produced by the other graph construction strategies. Meanwhile, the graph constructed by datum-wise one-vs-all sparse reconstruction of samples can remove considerable label-unrelated links between semantically unrelated samples, reducing incorrect information during label propagation. Suppose we have an over-determined system of linear equations:

$[\,x_1\ x_2\ \cdots\ x_k\,]\, w_i = x_i$,   (3)

where $x_i$ is the feature vector of the $i$-th image to be reconstructed and $w_i$ is the vector of unknown reconstruction coefficients. Let $X \in \mathbb{R}^{d \times k}$ be a data matrix, each column of which corresponds to the feature vector of one of the k-NN of $x_i$. In practice, there is probably noise in the features, and a natural way to recover these elements and provide a robust estimation of $w_i$ is to formulate $x_i = X w_i + \xi_i$, where $\xi_i \in \mathbb{R}^d$ is a sparse noise term. We can then solve the following $l_1$-norm minimization problem with respect to both the reconstruction coefficients and the feature noise:

$\arg\min_{w_i, \xi_i} \|\xi_i\|_1 \quad \text{s.t.}\quad x_i = X w_i + \xi_i,\ w_i \ge 0,\ \|w_i\|_1 = 1$.   (4)

This optimization problem is convex and can be transformed into a general linear programming problem. There exists a globally optimal solution, and the optimization can be solved efficiently using many available $l_1$-norm optimization toolboxes like l1-MAGIC [3].
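To make this step concrete, the sketch below casts Equation (4) as the equivalent linear program by splitting the noise term as $\xi = \xi^+ - \xi^-$, and solves it with SciPy's LP solver rather than l1-MAGIC (an assumed, illustrative implementation, not the authors' code):

```python
import numpy as np
from scipy.optimize import linprog

def l1_graph_weights(x, X):
    """Solve Eq. (4):  min ||xi||_1  s.t.  x = X w + xi,  w >= 0,  sum(w) = 1.

    x : (d,) feature vector of the sample to reconstruct.
    X : (d, k) matrix whose columns are its k-NN candidates (from the LSH step).
    The noise is split as xi = xi_pos - xi_neg to obtain a standard LP."""
    d, k = X.shape
    c = np.concatenate([np.zeros(k), np.ones(2 * d)])       # minimize sum(xi+ + xi-)
    A_eq = np.vstack([
        np.hstack([X, np.eye(d), -np.eye(d)]),               # X w + xi+ - xi- = x
        np.hstack([np.ones(k), np.zeros(2 * d)])[None, :],   # sum(w) = 1
    ])
    b_eq = np.concatenate([x, [1.0]])
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
    return res.x[:k]                                         # reconstruction weights w

# toy usage: reconstruct one sample from 5 candidate neighbors
rng = np.random.default_rng(0)
X = rng.standard_normal((20, 5))
x = X @ np.array([0.5, 0.5, 0.0, 0.0, 0.0]) + 0.01 * rng.standard_normal(20)
w = l1_graph_weights(x, X)
```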
2.3 Problem Formulation

Let $M_l = \{x_i, r_i\}_{i=1}^{l}$ be the set of labeled images, where $x_i$ is the feature vector of the $i$-th image and $r_i$ is a multi-label vector (an entry is set to 1 if the image is assigned the corresponding label, and 0 otherwise). Let $M_u = \{x_i\}_{i=l+1}^{l+u}$ be the set of unlabeled images, and $M = \{M_l, M_u\}$ the entire dataset. Graph-based multi-label propagation is intrinsically a transductive learning process, which propagates the labels of $M_l$ to $M_u$. For each $x_i$, we define a probability measure $p_i$ over the measurable space $(\mathcal{Y}, Y)$. Here $Y$ is the $\sigma$-field of measurable subsets of $\mathcal{Y}$, and $\mathcal{Y} \subset \mathbb{N}$ (the set of natural numbers) is the space of classifier outputs: $|\mathcal{Y}| = 2$ yields binary classification, while $|\mathcal{Y}| > 2$ implies the multi-label case, on which this paper focuses. Hereafter, we use $p_i$ and $r_i$ for the $i$-th image, both of which are subject to multinomial distributions, and $p_i(y)$ is the probability that $x_i$ belongs to class $y$. As mentioned above, $\{r_j, j \in V_l\}$ encodes the supervision information of the labeled data. If an image is assigned a unique label by the annotator, $r_j$ becomes a so-called one-hot vector (only the corresponding entry is 1, the rest are 0). When associated with multiple labels, $r_j$ is represented as a probabilistic distribution with multiple non-zero entries. We propose the following criterion, based on the KL divergence between two distributions, to guide the propagation of the supervision information:

$D_1(p) = \sum_{i=1}^{l} D_{KL}(r_i \| p_i) + \mu \sum_{i=1}^{m} D_{KL}\left(p_i \,\Big\|\, \sum_{j \in N_i} w_{ij} p_j\right)$.   (5)

Here $D_{KL}(r_i \| p_i)$ denotes the KL divergence between $r_i$ and $p_i$, whose formal definition for the discrete case is $D_{KL}(r_i \| p_i) = \sum_y r_i(y) \log \frac{r_i(y)}{p_i(y)}$. The first term in $D_1(p)$ triggers a heavy penalty if the estimated value $p_i$ deviates from the pre-specified $r_i$. Note that, unlike in most traditional approaches, there is no constraint of rigid equality between $p_i$ and $r_i$; such a relaxation is able to mitigate the bad effect of noisy annotations. The second term of $D_1$ stems from the assumption that $p_i$ can be linearly reconstructed from the estimations of its neighbors, thus penalizing the inconsistency between $p_i$ and its neighborhood estimation. The optimal solution is $p^* = \arg\min_p D_1(p)$.

Unlike previous works [20] using a squared-error loss (optimal under a Gaussian assumption), the adopted KL-based loss penalizes relative error rather than absolute error. In other words, the two terms can be regarded as regularization terms from prior supervision and local coherence respectively, and $\mu$ is a free parameter balancing them. If $\mu, w_{ij} \ge 0$, then $D_1(p)$ is convex (the proof is given in the Appendix). Since no closed-form solution is available, standard numerical optimization approaches such as interior point methods (IPM) or the method of multipliers (MOM) can be used to solve the problem. However, while most of these approaches guarantee global optima, they are tricky to implement (e.g., an implementation of MOM for this problem would have seven extraneous parameters) [17]. Instead, we utilize a simple alternating minimization method in this work. Alternating minimization is an effective strategy for optimizing functions of the form $f(x, y)$, where $x, y$ are two sets of variables. In many cases, simultaneously optimizing over $x$ and $y$ is computationally intractable or unstable, while optimizing over one set of variables with the other fixed is relatively easier. Formally, a typical alternating minimization loops over the two sub-problems $x^{(t)} = \arg\min_x f(x, y^{(t-1)})$ and $y^{(t)} = \arg\min_y f(x^{(t)}, y)$. A well-known example of alternating optimization is the Expectation-Maximization (EM) algorithm. Note that $D_1$ in Equation (5) is not amenable to alternating optimization. We therefore propose a modified version by introducing a new group of variables $\{q_i\}$:

$D_2(p, q) = \sum_{i=1}^{l} D_{KL}(r_i \| q_i) + \mu \sum_{i=1}^{m} D_{KL}\left(p_i \,\Big\|\, \sum_{j \in N_i} w_{ij} q_j\right) + \eta \sum_{i=1}^{m} D_{KL}(p_i \| q_i)$.   (6)

In the above, a third measure $q_i$ is introduced to decouple the original term $\mu \sum_{i=1}^{m} D_{KL}(p_i \| \sum_{j \in N_i} w_{ij} p_j)$; $q_i$ can be regarded as a relaxed version of $p_i$. To enforce consistency between them, the third term $\sum_{i=1}^{m} D_{KL}(p_i \| q_i)$ is incorporated.
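For reference, the following is a direct NumPy transcription of the objective in Equation (6), using dense matrices for clarity (a sketch under the assumption that the rows of P and Q are distributions and W holds the weights w_ij, with the first l rows labeled):

```python
import numpy as np

def kl(a, b, eps=1e-12):
    """Discrete KL divergence D_KL(a || b) = sum_y a(y) log(a(y)/b(y))."""
    a, b = np.clip(a, eps, None), np.clip(b, eps, None)
    return float(np.sum(a * np.log(a / b)))

def D2(P, Q, R, W, mu, eta, l):
    """Objective of Eq. (6). P, Q: m x c row-stochastic matrices; R: labeled
    distributions (first l rows used); W: m x m weight matrix (w_ij)."""
    obj = sum(kl(R[i], Q[i]) for i in range(l))              # supervision term
    N = W @ Q                                                # rows: sum_j w_ij q_j
    obj += mu * sum(kl(P[i], N[i]) for i in range(len(P)))   # neighborhood term
    obj += eta * sum(kl(P[i], Q[i]) for i in range(len(P)))  # p-q coupling term
    return obj
```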

2.4 Part I: Optimize p with q Fixed

With $\{q_i, i = 1 \ldots m\}$ fixed, the optimization problem reduces to

$p^* = \arg\min_p D_2(p, q) \quad \text{s.t.}\quad \sum_y p_i(y) = 1,\ p_i \ge 0,\ \forall i$.   (7)

This constrained optimization problem can be easily transformed into an unconstrained one using Lagrange multipliers:

$p^* = \arg\min_p D_2(p, q) + \sum_{i=1}^{m} \lambda_i \left(1 - \sum_y p_i(y)\right)$.   (8)

For brevity, let $L_p \triangleq D_2(p, q) + \sum_{i=1}^{m} \lambda_i (1 - \sum_y p_i(y))$. Recall that any locally optimal solution must have a zero first-order derivative, i.e.,

$\frac{\partial L_p}{\partial p_i(y)} = \mu\left(\log p_i(y) + 1 - \log \sum_{j \in N_i} w_{ij} q_j(y)\right) + \eta\left(\log p_i(y) + 1 - \log q_i(y)\right) - \lambda_i = 0$.   (9)

From Equation (9), it is easily verified that (letting $\gamma = \mu + \eta$)

$p_i(y) = \exp\left(\frac{\mu \log \sum_{j \in N_i} w_{ij} q_j(y) + \eta \log q_i(y)}{\gamma} + \frac{\lambda_i}{\gamma} - 1\right)$.

Recall that $\lambda_i$ is the unknown Lagrange coefficient for the $i$-th sample. Based on the fact that $\sum_y p_i(y) = 1$, $\lambda_i$ can be eliminated, and finally we obtain the updating rule

$p_i(y) = \frac{\exp\left(\frac{\mu}{\gamma} \log \sum_{j \in N_i} w_{ij} q_j(y) + \frac{\eta}{\gamma} \log q_i(y)\right)}{\sum_{y'} \exp\left(\frac{\mu}{\gamma} \log \sum_{j \in N_i} w_{ij} q_j(y') + \frac{\eta}{\gamma} \log q_i(y')\right)}$.   (10)

2.5 Part II: Optimize q with p Fixed

The other step of the proposed alternating optimization updates $q$ with $p$ fixed. Unfortunately, the trick used in Subsection 2.4 cannot be applied to the optimization of $q$, due to the highly non-linear term $\log \sum_{j \in N_i} w_{ij} q_j(y)$. To ensure that $q_i$ remains a valid probability vector after updating, we set the updating rule as

$q_i^{new} = q_i^{old} + U h$,   (11)

where each column of the matrix $U \in \mathbb{R}^{d \times (d-1)}$ is constrained to sum to 0. Denoting by $e$ a column vector with all entries equal to 1, we have $e^T U = 0$. An alternative view of this relationship is that $U$ spans the complementary subspace of the one spanned by $\frac{1}{\sqrt{n}} e$; thus $U U^T = I - \frac{1}{n} e e^T$ also holds. The vector $h$ in each iteration should be carefully chosen so that the updated value $q_i^{new}$ yields a non-trivial decrease of the overall objective. Denote $L_q \triangleq D_2(p, q)$ and the value of $q_i$ at the $t$-th iteration as $q_i^{(t)}$. We have

$\nabla_h L(q_i^{(t)}) \triangleq \nabla_h L_q(q_i^{(t)} + U h) = U^T \left.\frac{\partial L_q}{\partial q_i}\right|_{q_i = q_i^{(t)}}$.   (12)

Since $h$ is typically initialized as $0$ in each iteration, $h = -\alpha \nabla_h L(q_i^{(t)})$ is a candidate descent direction ($\alpha$ is a parameter controlling the step size). Substituting it into Equation (11), we obtain the updating rule

$q_i^{(t+1)} = q_i^{(t)} - \alpha\, U U^T \left.\frac{\partial L_q}{\partial q_i}\right|_{q_i = q_i^{(t)}} = q_i^{(t)} - \alpha\left(I - \frac{1}{n} e e^T\right) \left.\frac{\partial L_q}{\partial q_i}\right|_{q_i = q_i^{(t)}}$.   (13)

In this way, the pursuit of the descent direction with respect to $q_i$ is transformed into an equivalent problem taking $h$ as the variable, which is further solved by calculating $\frac{\partial L_q}{\partial q_i}$. For completeness, we list the concrete value of an entry of $\frac{\partial L_q}{\partial q_i}$:

$\frac{\partial L_q}{\partial q_i(y)} = -\frac{r_i(y)}{q_i(y)} - \mu \sum_{k:\, i \in N_k} \frac{w_{ki}\, p_k(y)}{\sum_{j \in N_k} w_{kj}\, q_j(y)} - \eta\, \frac{p_i(y)}{q_i(y)}$.   (14)

One practical issue is the feasible region of the parameter $\alpha$: an arbitrary $\alpha$ cannot ensure that the updated $q_i^{(t+1)}$ in Equation (13) stays within the range $[0, 1]$. A proper value of $\alpha$ should ensure

$0 \le q_i - \alpha\, U U^T \left.\frac{\partial L_q}{\partial q_i}\right|_{q_i = q_i^{(t)}} \le 1$.   (15)

Denote $v_i = U U^T \left.\frac{\partial L_q}{\partial q_i}\right|_{q_i = q_i^{(t)}}$. It is easy to verify that

$0 \le \alpha \le \min_y \left\{ \max\left\{ \frac{q_i(y)}{v_i(y)},\ \frac{q_i(y) - 1}{v_i(y)} \right\},\ \epsilon \right\}$.   (16)

In practice, $\alpha$ can be adaptively determined from $q_i^{(t)}$. The whole optimization process is summarized in Algorithm 1.

Algorithm 1: Probabilistic Collaborative Multi-Label Propagation
1: Input: a directed weighted sparse graph $G = \langle V, E \rangle$ of the whole image dataset $M = \{M_l, M_u\}$, where $M_l = \{x_i, r_i\}_{i=1}^{l}$ is the labeled image set and $M_u = \{x_i\}_{i=l+1}^{l+u}$ is the set of unlabeled images; $x_i$ is the feature vector of the $i$-th image and $r_i$ is a multi-label confidence vector for $x_i$.
2: Output: the convergent probability measures $p$ and $q$.
3: Initialization: randomly initialize $\{p_i^{0}: \sum_y p_i(y) = 1\}$ and $\{q_i^{0}: \sum_y q_i(y) = 1\}$.
4: while $p$ and $q$ have not converged do
5:   Optimize $p$ with $q$ fixed via Equation (10).
6:   Optimize $q$ with $p$ fixed: $q_i^{(t+1)} = q_i^{(t)} - \alpha (I - \frac{1}{n} e e^T) \frac{\partial L_q}{\partial q_i}$, where $\alpha$ lies in the range defined in Equation (16).
7: end while

Figure 2: The distribution of the number of nearest neighbors (denoted as k) in our proposed LSMP.
The resulting $p$ is adopted to infer the image tags, as it connects both $r$ and $q$.
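A compact sketch of Algorithm 1 follows, implementing the closed-form p-update of Equation (10) and the projected gradient q-update of Equations (13)-(16); the 0.5 safety factor on the step size and the numerical clipping are our own assumptions for stability, not part of the paper:

```python
import numpy as np

def update_p(Q, W, mu, eta):
    """Eq. (10): closed-form p-update; p_i(y) is a normalized geometric mean of
    the neighborhood estimate (sum_j w_ij q_j(y)) and q_i(y)."""
    gamma = mu + eta
    M = np.clip(W @ Q, 1e-12, None)
    P = M ** (mu / gamma) * np.clip(Q, 1e-12, None) ** (eta / gamma)
    return P / P.sum(axis=1, keepdims=True)

def update_q(P, Q, R, W, mu, eta, labeled_mask):
    """Eqs. (13)-(16): projected gradient step on q; centering the gradient
    (I - ee^T/c) keeps each row of Q summing to one."""
    eps = 1e-12
    M = np.clip(W @ Q, eps, None)
    G = -mu * (W.T @ (P / M)) - eta * P / np.clip(Q, eps, None)   # Eq. (14)
    G[labeled_mask] -= R[labeled_mask] / np.clip(Q[labeled_mask], eps, None)
    V = G - G.mean(axis=1, keepdims=True)           # centered gradient, Eq. (13)
    # per-row feasible step size (Eq. (16)): keep 0 <= q - alpha*v <= 1
    with np.errstate(divide="ignore", invalid="ignore"):
        bound = np.where(V > 0, Q / V, (Q - 1.0) / V)
        bound = np.where(np.abs(V) < eps, np.inf, bound)
    alpha = 0.5 * np.minimum(bound.min(axis=1, keepdims=True), 1.0)
    return np.clip(Q - alpha * V, eps, None)

def lsmp(R, W, labeled_mask, mu=10.0, eta=5.0, iters=50, seed=0):
    """Alternating loop of Algorithm 1 with random row-stochastic initialization."""
    rng = np.random.default_rng(seed)
    m, c = W.shape[0], R.shape[1]
    Q = rng.random((m, c)); Q /= Q.sum(axis=1, keepdims=True)
    for _ in range(iters):
        P = update_p(Q, W, mu, eta)
        Q = update_q(P, Q, R, W, mu, eta, labeled_mask)
    return update_p(Q, W, mu, eta)                  # final p infers the tags
```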

Figure 3: The performance of three baseline algorithms (EGSSC, LNP, and k-NN) with respect to the number of nearest neighbors (denoted as k).

3. ALGORITHMIC ANALYSIS

3.1 Computational Complexity

Overall, the computational complexity of the proposed algorithm consists of two components: the cost of hashing-based l1-graph construction, and the cost of KL-based label propagation. The efficiency of traditional graph construction as in [21, 18] hinges on the complexity of k-NN retrieval, which is typically $O(n^2)$ ($n$ is the number of images) for a naive linear-scan implementation. Our proposed LSH-based scheme guarantees a sublinear complexity by aggregating visually similar images into the same buckets, greatly reducing the cardinality of the set of candidate neighbors. Formally, recent work points out that the lower bound of LSH is only slightly higher than $O(n \log(n))$, which drastically reduces the computational overhead of graph construction compared with the traditional $O(n^2)$ complexity. On the other hand, the proposed KL-guided label propagation procedure requires $O(nkl)$ computation in each iteration, where $k$ denotes the average number of nearest neighbors of a graph vertex and $l$ is the total number of labels. Most label propagation methods based on local confidence exchange have the same complexity. The time consumed in practice mainly hinges on the value of $k$. In Figure 2 we plot the distribution of $k$ obtained via the proposed l1-regularized weight computation, which reaches its peak value around $k = 35$. This small $k$ value indicates that the $l_1$ penalty term is able to select a much more compact reconstruction basis for a vertex. In contrast, to obtain nearly optimal performance, previous works usually take $k > 100$ (see Figure 3). In our implementation, we find that even a modest reduction of $k$ results in a drastic reduction of the running time (see more details in the experimental section).

3.2 Algorithmic Convergence

The above two updating procedures are iterated until convergence. For the experiments on the NUS-WIDE dataset, about 50 iterations are generally required for the solution to converge. An exemplary convergence curve is shown in Figure 4.

Figure 4: Convergence curve of our proposed algorithm on the NUS-WIDE dataset.

4. EXPERIMENTS

To validate the effectiveness of our proposed approach on large-scale multi-label datasets, we conduct extensive experiments on the real-world image dataset NUS-WIDE [5], which contains 269,648 images accompanied by 5,018 unique tags in total. Images in this dataset were crawled from the photo sharing website Flickr using its public API. The underlying image diversity and complexity make it a good testbed for large-scale image annotation experiments. Moreover, a subset of NUS-WIDE (known as NUS-WIDE-Lite), obtained after noisy tag removal, is also publicly available. We provide a quantitative study on both the lite dataset and the full NUS-WIDE dataset, with an emphasis on the comparison with five state-of-the-art related algorithms in terms of accuracy and computational cost.

4.1 Datasets

NUS-WIDE [5]: The dataset contains 269,648 images and the associated 5,018 tags. For evaluation, we construct two image pools from the whole dataset: the pool of labeled images comprises 161,789 images, while the rest are used as the pool of unlabeled images. For each image, an 81-D label vector is maintained to indicate its relationship to 81 distinct concepts (tightly related to tags yet relatively high-level).
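Section 2.3 models $r_i$ as a multinomial distribution; one simple way to derive it from such an 81-D binary concept vector is uniform normalization over the assigned concepts, as in the sketch below (an assumption for illustration; the paper does not spell this step out):

```python
import numpy as np

def tags_to_distribution(Y):
    """Convert binary label vectors (m x 81) into multinomial r_i by uniform
    normalization: a one-hot row stays one-hot; a row with t active concepts
    puts mass 1/t on each. Rows with no labels are left as zeros (unlabeled)."""
    Y = np.asarray(Y, dtype=float)
    s = Y.sum(axis=1, keepdims=True)
    return np.divide(Y, s, out=np.zeros_like(Y), where=s > 0)
```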
Moreover, to test the performance stability of the various algorithms, we vary the percentage of labeled images selected from the labeled image pool (in our implementation it varies from 10% to 100% in steps of 10%; we introduce the variable $\tau \in [0, 1]$ for it). The sampled labeled images are then amalgamated with the whole set of unlabeled images (107,859 in all). We extract multiple types of local visual features from the images (225-D block-wise color moments, 128-D wavelet texture, and 75-D edge direction histogram).

NUS-WIDE-Lite: As stated above, this dataset is a lite version of the whole NUS-WIDE database. It consists of 55,615 images randomly selected from the NUS-WIDE dataset. The labels of each image are defined as in NUS-WIDE: an 81-D label vector indicates its relationship to the 81 distinct concepts. As for NUS-WIDE, the same three types of local visual features are extracted for this dataset. We randomly select about half of the images as labeled and the rest as unlabeled. Again, we use the same sampling strategy on the labeled set to perform the stability test.

4.2 Evaluation Criteria and Baselines

In the experiments, the five baseline algorithms shown in Table 1 are evaluated for a comparative study. Among them, the support vector machine (SVM) was originally developed to solve binary-class or multi-class classification problems; here we use its multi-class version by adopting the one-vs-one method. The selected baselines include several state-of-the-art algorithms for semi-supervised learning.

Table 1: The Baseline Algorithms.
  KNN    k-Nearest Neighbors [9]
  SVM    Support Vector Machine [6]
  LNP    Linear Neighborhood Propagation [20]
  EGSSC  Entropic Graph Semi-Supervised Classification [17]
  SGSSL  Sparse Graph-based Semi-supervised Learning [18]

Linear neighborhood propagation (LNP) [20] relies on a linear-reconstruction criterion to calculate the edge weights of the graph, and disseminates the supervision information via a local propagation and updating process. EGSSC [17] is an entropic graph-regularized semi-supervised classification method based on minimizing a Kullback-Leibler divergence on a graph built from k-NN Gaussian similarity, as introduced in Subsections 2.2.1 and 2.2.2. SGSSL [18] is a sparse graph-based method for semi-supervised learning that harnesses the labeled and unlabeled data simultaneously, but considers each label independently. The criteria for comparing performance are Average Precision (AP) for each label (or concept) and Mean Average Precision (MAP) over all labels. The former is a well-known measure widely used in the field of image retrieval, while the latter handles the multi-class and multi-label cases; in our application, MAP is obtained by averaging the APs over the 81 concepts. All experiments are conducted on a common desktop PC equipped with an Intel dual-core CPU (frequency: 3.0 GHz) and 32 GB of physical memory. For the experiments on NUS-WIDE-Lite, the proposed method is compared with all five baseline algorithms, while on NUS-WIDE the results for SGSSL are not reported due to its inability to handle a dataset of such scale.
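For concreteness, here is a minimal sketch of the ranking-based AP and MAP measures as commonly defined in image retrieval (not the authors' evaluation script):

```python
import numpy as np

def average_precision(scores, relevant):
    """AP for one concept: mean of precision@rank taken at the ranks of the
    relevant images, with images sorted by descending score."""
    order = np.argsort(-scores)
    rel = np.asarray(relevant, dtype=bool)[order]
    hits = np.cumsum(rel)
    ranks = np.arange(1, len(rel) + 1)
    return float((hits[rel] / ranks[rel]).mean()) if rel.any() else 0.0

def mean_average_precision(S, Y):
    """MAP: average of the APs over the concepts. S, Y: (n_images, n_concepts)."""
    return float(np.mean([average_precision(S[:, c], Y[:, c])
                          for c in range(Y.shape[1])]))
```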
4.3 Experiment-I: NUS-WIDE-Lite (56k)

In this experiment, we compare the proposed algorithm with the five baseline algorithms. The results with varying numbers of labeled images (controlled by the parameter $\tau$) are presented in Figure 5. The parameters and adopted values for each method are as follows. For KNN, there is only one tunable parameter, $k$, which stands for the number of nearest neighbors and is trivially set to 500. For the SVM algorithm, we adopt the RBF kernel; for its two parameters $\gamma$ and $C$, we set $\gamma = 0.6$ and $C = 1$ after fine tuning. For the LNP algorithm, one parameter $\alpha$ is adjusted, namely the fraction of label information that each image receives from its neighbors; the optimal value is $\alpha = 0.95$ in our experiments. There are three parameters $\mu$, $\nu$ and $\beta$ in EGSSC, where $\mu$ and $\nu$ weight the Kullback-Leibler divergence term and the Shannon entropy term respectively, and $\beta$ ensures the convergence of the two similar probability measures; the optimal values here are $\mu = 0.1$, $\nu = 1$ and $\beta = 2$. For our proposed algorithm, we set $\mu = 10$ and $\eta = 5$. The MAP of these six methods is illustrated in Figure 6. Our observations from Figure 5 are as follows:

Figure 5: Comparison of LSMP and the five baselines with varying parameter $\tau$ on the NUS-WIDE-Lite dataset.

- Our proposed algorithm LSMP significantly outperforms the other baseline algorithms across the different proportions of the labeled set. For example, with 10 percent of labeled images selected, LSMP yields an improvement of 16.6% over SGSSL, 58.5% over EGSSC, 107.6% over LNP, 137.2% over SVM, and 154.5% over KNN. The improvement presumably stems from the fact that our proposed algorithm encodes the label information of each image as a unit confidence vector, which imposes extra inter-label constraints. In contrast, the other methods either consider the visual similarity graph only, or consider each label independently.
- With an increasing number of labeled images, the performance of all algorithms consistently increases.
- When $\tau \le 0.6$, SGSSL significantly outperforms the other two state-of-the-art algorithms, LNP and EGSSC. However, when $\tau > 0.6$, the improvement of SGSSL over the others is smaller. The proposed method keeps a higher MAP value than the other five methods over all values of $\tau$.

Recall that the proposed algorithm is a probabilistic collaborative multi-label propagation algorithm, wherein $p_i(y)$ expresses the probability of the $i$-th image being associated with the $y$-th label. A direct application of this probabilistic interpretation is the tag ranking task. Some exemplary tag ranking results are shown in Figure 7.

4.4 Experiment-II: NUS-WIDE (270k)

In this experiment, we compare the proposed LSMP algorithm with four state-of-the-art algorithms on the large-scale NUS-WIDE dataset for multi-label image annotation. As in the previous experiment, we modulate the parameter $\tau$ to vary the percentage of labeled images used, and carefully tune the optimal parameters of each method for a fair comparison. For KNN and LNP, the parameters $k$ and $\alpha$ are re-tuned on this dataset. For the SVM algorithm, we set $\lambda = 0.8$ and $C = 2$. In the EGSSC experiments, the best values are $\mu = 0.5$, $\nu = 1$ and $\beta = 1$. For our proposed LSMP algorithm, $\mu = 15$ and $\eta = 8$. The results of all algorithms are shown in Figure 8, and the results for each individual concept are presented in Figure 9. From Figure 9, we can observe the following:

- On this large-scale real-world image dataset, the proposed algorithm significantly outperforms the other algorithms at all values of $\tau$. For example, when $\tau = 0.1$, LSMP yields an improvement of 53.5% over EGSSC, 112.6% over LNP, 197.2% over SVM, and 220.5% over KNN.

Figure 6: Comparison of APs for the 81 concepts using the six methods with $\tau = 1$.

- Compared with the performance on NUS-WIDE-Lite, the best MAP of LSMP on NUS-WIDE is 0.193, which is smaller than the MAP value on the Lite version. The performance degradation is primarily attributed to the increase of data scale (the size of the labeled image pool in NUS-WIDE is 170K, while for the Lite version it is only 27K).
- With increasing $\tau$, the performance of all algorithms also increases. When $\tau \le 0.6$, EGSSC significantly outperforms LNP, but for $\tau > 0.6$ the improvement of EGSSC over LNP is negligible. The proposed method LSMP also keeps a higher MAP value than all baselines over all feasible values of $\tau$, similar to the case on NUS-WIDE-Lite, which validates the robustness of our proposed algorithm.

We also provide the recorded running times of the different algorithms on NUS-WIDE, as shown in Table 2. A salient efficiency improvement can be observed for our proposed method.

Figure 8: Comparison of LSMP and the four baselines with varying parameter $\tau$ on NUS-WIDE.

5. CONCLUSION

In this paper we propose and validate an efficient large-scale image annotation method. Our contributions lie in both the hashing-accelerated l1-graph construction and the KL-divergence oriented soft loss function and regularization term in graph-based modeling. The optimization framework utilizes the inter-label relationships and finally returns a probabilistic label vector for each image, which is more robust to noise and can be used for tag ranking.

The proposed algorithm is evaluated on publicly available image benchmarks built for multi-label annotation, including the largest known one, the NUS-WIDE dataset, and we show its superiority in terms of both accuracy and efficiency. Our future work will follow two directions: 1) extend the image annotation datasets to web scale and further test the scalability of our proposed method; 2) develop more elegant algorithms for KL-based label propagation with better convergence speed.

Figure 7: Tag ranking results of LSMP on NUS-WIDE-Lite.

Table 2: Execution time (unit: hours) comparison of different algorithms on the NUS-WIDE dataset.
  Algorithm   Graph Construction Time   Label Estimation Time   Total Time
  KNN
  SVM
  LNP
  EGSSC
  LSMP

6. ACKNOWLEDGMENTS

This research was supported by Research Grant NRF2007IDM-IDM on the NRF/IDM Program and by an AcRF Tier-1 Grant, Singapore.

7. REFERENCES

[1] A. Andoni and P. Indyk. Near-optimal hashing algorithms for approximate nearest neighbor in high dimensions. Communications of the ACM, 51(1):117-122, February 2008.
[2] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
[3] E. J. Candès, J. K. Romberg, and T. Tao. Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. IEEE Transactions on Information Theory, 52(2):489-509, February 2006.
[4] G. Chen, Y. Song, F. Wang, and C. Zhang. Semi-supervised multi-label learning by solving a Sylvester equation. In SIAM International Conference on Data Mining, 2008.
[5] T.-S. Chua, J. Tang, R. Hong, H. Li, Z. Luo, and Y.-T. Zheng. NUS-WIDE: A real-world web image database from National University of Singapore. In CIVR, July 2009.
[6] R. Collobert, F. H. Sinz, J. Weston, and L. Bottou. Large scale transductive SVMs. Journal of Machine Learning Research, 7:1687-1712, September 2006.
[7] T. M. Cover and J. A. Thomas. Elements of Information Theory. Wiley Series in Telecommunications, 1991.
[8] O. Delalleau, Y. Bengio, and N. Le Roux. Efficient non-parametric function induction in semi-supervised learning. In Proceedings of the Tenth International Workshop on Artificial Intelligence and Statistics, 2005.
[9] R. Duda, P. Hart, and D. Stork. Pattern Classification. John Wiley, 2001.
[10] P. Indyk and R. Motwani. Approximate nearest neighbors: Towards removing the curse of dimensionality. In Proceedings of the Symposium on Theory of Computing, 1998.
[11] M. Karlen, J. Weston, A. Erkan, and R. Collobert. Large-scale manifold transduction. In ICML, 2008.
[12] D. Liu, X.-S. Hua, L. Yang, M. Wang, and H.-J. Zhang. Tag ranking. In WWW, 2009.
[13] Y. Liu, R. Jin, and L. Yang. Semi-supervised multi-label learning by constrained non-negative matrix factorization. In AAAI, 2006.
[14] Y. Mu, J. Shen, and S. Yan. Weakly-supervised hashing in kernel space. In CVPR, 2010.
[15] G.-J. Qi, X.-S. Hua, Y. Rui, J. Tang, T. Mei, and H.-J. Zhang. Correlative multi-label video annotation. In MM, 2007.
[16] S. T. Roweis and L. K. Saul. Nonlinear dimensionality reduction by locally linear embedding. Science, 290:2323-2326, 2000.
[17] A. Subramanya and J. Bilmes. Entropic graph regularization in non-parametric semi-supervised classification. In NIPS, 2009.
[18] J. Tang, S. Yan, R. Hong, G.-J. Qi, and T.-S. Chua. Inferring semantic concepts from community-contributed images and noisy tags. In MM, 2009.
[19] I. W. Tsang and J. T. Kwok. Large-scale sparsified manifold regularization. In NIPS, 2006.
[20] F. Wang and C. Zhang. Label propagation through linear neighborhoods. In ICML, June 2006.

Figure 9: Comparison of APs for the 81 concepts with $\tau = 1.0$ on NUS-WIDE.

[21] J. Yuan, J. Li, and B. Zhang. Exploiting spatial context constraints for automatic image region annotation. In MM, 2007.
[22] Z.-J. Zha, T. Mei, J. Wang, Z. Wang, and X.-S. Hua. Graph-based semi-supervised learning with multiple labels. Journal of Visual Communication and Image Representation, 20(2):97-103, February 2009.
[23] X. Zhu. Semi-supervised learning with graphs. PhD thesis, Carnegie Mellon University, 2005.
[24] X. Zhu. Semi-Supervised Learning Literature Survey. Technical report, University of Wisconsin-Madison, 2005.

APPENDIX: Convexity of $D_1(p)$ and $D_2(p, q)$

PROOF: The convexity of $D_1(p)$ is obvious if $D_{KL}(r_i \| p_i)$ and $D_{KL}(p_i \| \sum_{j \in N_i} w_{ij} p_j)$ prove convex. Consequently, to justify the convexity of $D_1(p)$, we first elaborate on the convexity of the KL divergence defined on two probability mass functions, which has already been studied in the fields of both information theory [7] and convex optimization [2]. Specifically, for $D_{KL}(p \| q)$ defined on two pairs of probability mass functions $(p_1, q_1)$ and $(p_2, q_2)$, the convexity of $D_{KL}$ equivalently implies the following fact:

$D_{KL}(\lambda p_1 + (1-\lambda) p_2 \,\|\, \lambda q_1 + (1-\lambda) q_2) \le \lambda D_{KL}(p_1 \| q_1) + (1-\lambda) D_{KL}(p_2 \| q_2)$,   (17)

where $\lambda \in [0, 1]$. The correctness of the above inequality becomes clear by applying the log-sum inequality [7], i.e.,

$\sum_{i=1}^{n} a_i \log \frac{a_i}{b_i} \ge \left(\sum_{i=1}^{n} a_i\right) \log \frac{\sum_{i=1}^{n} a_i}{\sum_{i=1}^{n} b_i}$,

to each summand of the expansion

$D_{KL}(\lambda p_1 + (1-\lambda) p_2 \,\|\, \lambda q_1 + (1-\lambda) q_2) = \sum_y \big(\lambda p_1(y) + (1-\lambda) p_2(y)\big) \log \frac{\lambda p_1(y) + (1-\lambda) p_2(y)}{\lambda q_1(y) + (1-\lambda) q_2(y)}$.

It is then easily verified that

$D_{KL}(\lambda p_1 + (1-\lambda) p_2 \,\|\, \lambda q_1 + (1-\lambda) q_2) \le \sum_y \lambda p_1(y) \log \frac{\lambda p_1(y)}{\lambda q_1(y)} + \sum_y (1-\lambda) p_2(y) \log \frac{(1-\lambda) p_2(y)}{(1-\lambda) q_2(y)} = \lambda D_{KL}(p_1 \| q_1) + (1-\lambda) D_{KL}(p_2 \| q_2)$.   (18)

Thus $D_{KL}(r_i \| p_i)$ is convex. The convexity of $D_{KL}(p_i \| \sum_{j \in N_i} w_{ij} p_j)$ can be justified likewise, observing that $\sum_{j \in N_i} w_{ij} p_j$ is a convex, linear combination of several variables. Hence $D_1(p)$ is convex. Using similar arguments, $D_2(p, q)$ is also demonstrated to be convex.
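As a quick numerical sanity check of inequality (17), one can verify it on random distribution pairs (an illustrative script, not part of the original paper):

```python
import numpy as np

def kl(a, b):
    """Discrete KL divergence on strictly positive distributions."""
    return float(np.sum(a * np.log(a / b)))

rng = np.random.default_rng(0)
for _ in range(1000):
    p1, q1, p2, q2 = (rng.dirichlet(np.ones(5)) for _ in range(4))
    lam = rng.random()
    lhs = kl(lam * p1 + (1 - lam) * p2, lam * q1 + (1 - lam) * q2)
    rhs = lam * kl(p1, q1) + (1 - lam) * kl(p2, q2)
    assert lhs <= rhs + 1e-9   # Eq. (17): joint convexity of D_KL
print("Inequality (17) held on all random trials.")
```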


More information

The Codesign Challenge

The Codesign Challenge ECE 4530 Codesgn Challenge Fall 2007 Hardware/Software Codesgn The Codesgn Challenge Objectves In the codesgn challenge, your task s to accelerate a gven software reference mplementaton as fast as possble.

More information

A Robust Method for Estimating the Fundamental Matrix

A Robust Method for Estimating the Fundamental Matrix Proc. VIIth Dgtal Image Computng: Technques and Applcatons, Sun C., Talbot H., Ourseln S. and Adraansen T. (Eds.), 0- Dec. 003, Sydney A Robust Method for Estmatng the Fundamental Matrx C.L. Feng and Y.S.

More information

CS434a/541a: Pattern Recognition Prof. Olga Veksler. Lecture 15

CS434a/541a: Pattern Recognition Prof. Olga Veksler. Lecture 15 CS434a/541a: Pattern Recognton Prof. Olga Veksler Lecture 15 Today New Topc: Unsupervsed Learnng Supervsed vs. unsupervsed learnng Unsupervsed learnng Net Tme: parametrc unsupervsed learnng Today: nonparametrc

More information

A New Approach For the Ranking of Fuzzy Sets With Different Heights

A New Approach For the Ranking of Fuzzy Sets With Different Heights New pproach For the ankng of Fuzzy Sets Wth Dfferent Heghts Pushpnder Sngh School of Mathematcs Computer pplcatons Thapar Unversty, Patala-7 00 Inda pushpndersnl@gmalcom STCT ankng of fuzzy sets plays

More information

Parallel matrix-vector multiplication

Parallel matrix-vector multiplication Appendx A Parallel matrx-vector multplcaton The reduced transton matrx of the three-dmensonal cage model for gel electrophoress, descrbed n secton 3.2, becomes excessvely large for polymer lengths more

More information

Network Intrusion Detection Based on PSO-SVM

Network Intrusion Detection Based on PSO-SVM TELKOMNIKA Indonesan Journal of Electrcal Engneerng Vol.1, No., February 014, pp. 150 ~ 1508 DOI: http://dx.do.org/10.11591/telkomnka.v1.386 150 Network Intruson Detecton Based on PSO-SVM Changsheng Xang*

More information

Support Vector Machines. CS534 - Machine Learning

Support Vector Machines. CS534 - Machine Learning Support Vector Machnes CS534 - Machne Learnng Perceptron Revsted: Lnear Separators Bnar classfcaton can be veed as the task of separatng classes n feature space: b > 0 b 0 b < 0 f() sgn( b) Lnear Separators

More information

Hierarchical clustering for gene expression data analysis

Hierarchical clustering for gene expression data analysis Herarchcal clusterng for gene expresson data analyss Gorgo Valentn e-mal: valentn@ds.unm.t Clusterng of Mcroarray Data. Clusterng of gene expresson profles (rows) => dscovery of co-regulated and functonally

More information

A Unified Framework for Semantics and Feature Based Relevance Feedback in Image Retrieval Systems

A Unified Framework for Semantics and Feature Based Relevance Feedback in Image Retrieval Systems A Unfed Framework for Semantcs and Feature Based Relevance Feedback n Image Retreval Systems Ye Lu *, Chunhu Hu 2, Xngquan Zhu 3*, HongJang Zhang 2, Qang Yang * School of Computng Scence Smon Fraser Unversty

More information

Active Contours/Snakes

Active Contours/Snakes Actve Contours/Snakes Erkut Erdem Acknowledgement: The sldes are adapted from the sldes prepared by K. Grauman of Unversty of Texas at Austn Fttng: Edges vs. boundares Edges useful sgnal to ndcate occludng

More information

Machine Learning 9. week

Machine Learning 9. week Machne Learnng 9. week Mappng Concept Radal Bass Functons (RBF) RBF Networks 1 Mappng It s probably the best scenaro for the classfcaton of two dataset s to separate them lnearly. As you see n the below

More information

Performance Evaluation of Information Retrieval Systems

Performance Evaluation of Information Retrieval Systems Why System Evaluaton? Performance Evaluaton of Informaton Retreval Systems Many sldes n ths secton are adapted from Prof. Joydeep Ghosh (UT ECE) who n turn adapted them from Prof. Dk Lee (Unv. of Scence

More information

Semi-supervised Classification Using Local and Global Regularization

Semi-supervised Classification Using Local and Global Regularization Proceedngs of the Twenty-Thrd AAAI Conference on Artfcal Intellgence (2008) Sem-supervsed Classfcaton Usng Local and Global Regularzaton Fe Wang 1, Tao L 2, Gang Wang 3, Changshu Zhang 1 1 Department of

More information

Simulation Based Analysis of FAST TCP using OMNET++

Simulation Based Analysis of FAST TCP using OMNET++ Smulaton Based Analyss of FAST TCP usng OMNET++ Umar ul Hassan 04030038@lums.edu.pk Md Term Report CS678 Topcs n Internet Research Sprng, 2006 Introducton Internet traffc s doublng roughly every 3 months

More information

Towards Semantic Knowledge Propagation from Text to Web Images

Towards Semantic Knowledge Propagation from Text to Web Images Guoun Q (Unversty of Illnos at Urbana-Champagn) Charu C. Aggarwal (IBM T. J. Watson Research Center) Thomas Huang (Unversty of Illnos at Urbana-Champagn) Towards Semantc Knowledge Propagaton from Text

More information

Range images. Range image registration. Examples of sampling patterns. Range images and range surfaces

Range images. Range image registration. Examples of sampling patterns. Range images and range surfaces Range mages For many structured lght scanners, the range data forms a hghly regular pattern known as a range mage. he samplng pattern s determned by the specfc scanner. Range mage regstraton 1 Examples

More information

y and the total sum of

y and the total sum of Lnear regresson Testng for non-lnearty In analytcal chemstry, lnear regresson s commonly used n the constructon of calbraton functons requred for analytcal technques such as gas chromatography, atomc absorpton

More information

Classifying Acoustic Transient Signals Using Artificial Intelligence

Classifying Acoustic Transient Signals Using Artificial Intelligence Classfyng Acoustc Transent Sgnals Usng Artfcal Intellgence Steve Sutton, Unversty of North Carolna At Wlmngton (suttons@charter.net) Greg Huff, Unversty of North Carolna At Wlmngton (jgh7476@uncwl.edu)

More information

CS246: Mining Massive Datasets Jure Leskovec, Stanford University

CS246: Mining Massive Datasets Jure Leskovec, Stanford University CS46: Mnng Massve Datasets Jure Leskovec, Stanford Unversty http://cs46.stanford.edu /19/013 Jure Leskovec, Stanford CS46: Mnng Massve Datasets, http://cs46.stanford.edu Perceptron: y = sgn( x Ho to fnd

More information

R s s f. m y s. SPH3UW Unit 7.3 Spherical Concave Mirrors Page 1 of 12. Notes

R s s f. m y s. SPH3UW Unit 7.3 Spherical Concave Mirrors Page 1 of 12. Notes SPH3UW Unt 7.3 Sphercal Concave Mrrors Page 1 of 1 Notes Physcs Tool box Concave Mrror If the reflectng surface takes place on the nner surface of the sphercal shape so that the centre of the mrror bulges

More information

5 The Primal-Dual Method

5 The Primal-Dual Method 5 The Prmal-Dual Method Orgnally desgned as a method for solvng lnear programs, where t reduces weghted optmzaton problems to smpler combnatoral ones, the prmal-dual method (PDM) has receved much attenton

More information

Lecture 5: Multilayer Perceptrons

Lecture 5: Multilayer Perceptrons Lecture 5: Multlayer Perceptrons Roger Grosse 1 Introducton So far, we ve only talked about lnear models: lnear regresson and lnear bnary classfers. We noted that there are functons that can t be represented

More information

Solving two-person zero-sum game by Matlab

Solving two-person zero-sum game by Matlab Appled Mechancs and Materals Onlne: 2011-02-02 ISSN: 1662-7482, Vols. 50-51, pp 262-265 do:10.4028/www.scentfc.net/amm.50-51.262 2011 Trans Tech Publcatons, Swtzerland Solvng two-person zero-sum game by

More information

Lecture 4: Principal components

Lecture 4: Principal components /3/6 Lecture 4: Prncpal components 3..6 Multvarate lnear regresson MLR s optmal for the estmaton data...but poor for handlng collnear data Covarance matrx s not nvertble (large condton number) Robustness

More information

A Fast Visual Tracking Algorithm Based on Circle Pixels Matching

A Fast Visual Tracking Algorithm Based on Circle Pixels Matching A Fast Vsual Trackng Algorthm Based on Crcle Pxels Matchng Zhqang Hou hou_zhq@sohu.com Chongzhao Han czhan@mal.xjtu.edu.cn Ln Zheng Abstract: A fast vsual trackng algorthm based on crcle pxels matchng

More information

Intra-Parametric Analysis of a Fuzzy MOLP

Intra-Parametric Analysis of a Fuzzy MOLP Intra-Parametrc Analyss of a Fuzzy MOLP a MIAO-LING WANG a Department of Industral Engneerng and Management a Mnghsn Insttute of Technology and Hsnchu Tawan, ROC b HSIAO-FAN WANG b Insttute of Industral

More information

Analysis of Continuous Beams in General

Analysis of Continuous Beams in General Analyss of Contnuous Beams n General Contnuous beams consdered here are prsmatc, rgdly connected to each beam segment and supported at varous ponts along the beam. onts are selected at ponts of support,

More information

Wavefront Reconstructor

Wavefront Reconstructor A Dstrbuted Smplex B-Splne Based Wavefront Reconstructor Coen de Vsser and Mchel Verhaegen 14-12-201212 2012 Delft Unversty of Technology Contents Introducton Wavefront reconstructon usng Smplex B-Splnes

More information

FEATURE EXTRACTION. Dr. K.Vijayarekha. Associate Dean School of Electrical and Electronics Engineering SASTRA University, Thanjavur

FEATURE EXTRACTION. Dr. K.Vijayarekha. Associate Dean School of Electrical and Electronics Engineering SASTRA University, Thanjavur FEATURE EXTRACTION Dr. K.Vjayarekha Assocate Dean School of Electrcal and Electroncs Engneerng SASTRA Unversty, Thanjavur613 41 Jont Intatve of IITs and IISc Funded by MHRD Page 1 of 8 Table of Contents

More information

Categories and Subject Descriptors B.7.2 [Integrated Circuits]: Design Aids Verification. General Terms Algorithms

Categories and Subject Descriptors B.7.2 [Integrated Circuits]: Design Aids Verification. General Terms Algorithms 3. Fndng Determnstc Soluton from Underdetermned Equaton: Large-Scale Performance Modelng by Least Angle Regresson Xn L ECE Department, Carnege Mellon Unversty Forbs Avenue, Pttsburgh, PA 3 xnl@ece.cmu.edu

More information

Unsupervised Learning

Unsupervised Learning Pattern Recognton Lecture 8 Outlne Introducton Unsupervsed Learnng Parametrc VS Non-Parametrc Approach Mxture of Denstes Maxmum-Lkelhood Estmates Clusterng Prof. Danel Yeung School of Computer Scence and

More information

Optimizing Document Scoring for Query Retrieval

Optimizing Document Scoring for Query Retrieval Optmzng Document Scorng for Query Retreval Brent Ellwen baellwe@cs.stanford.edu Abstract The goal of ths project was to automate the process of tunng a document query engne. Specfcally, I used machne learnng

More information

Quality Improvement Algorithm for Tetrahedral Mesh Based on Optimal Delaunay Triangulation

Quality Improvement Algorithm for Tetrahedral Mesh Based on Optimal Delaunay Triangulation Intellgent Informaton Management, 013, 5, 191-195 Publshed Onlne November 013 (http://www.scrp.org/journal/m) http://dx.do.org/10.36/m.013.5601 Qualty Improvement Algorthm for Tetrahedral Mesh Based on

More information

A mathematical programming approach to the analysis, design and scheduling of offshore oilfields

A mathematical programming approach to the analysis, design and scheduling of offshore oilfields 17 th European Symposum on Computer Aded Process Engneerng ESCAPE17 V. Plesu and P.S. Agach (Edtors) 2007 Elsever B.V. All rghts reserved. 1 A mathematcal programmng approach to the analyss, desgn and

More information

A Robust LS-SVM Regression

A Robust LS-SVM Regression PROCEEDIGS OF WORLD ACADEMY OF SCIECE, EGIEERIG AD ECHOLOGY VOLUME 7 AUGUS 5 ISS 37- A Robust LS-SVM Regresson József Valyon, and Gábor Horváth Abstract In comparson to the orgnal SVM, whch nvolves a quadratc

More information

Mathematics 256 a course in differential equations for engineering students

Mathematics 256 a course in differential equations for engineering students Mathematcs 56 a course n dfferental equatons for engneerng students Chapter 5. More effcent methods of numercal soluton Euler s method s qute neffcent. Because the error s essentally proportonal to the

More information

Chapter 6 Programmng the fnte element method Inow turn to the man subject of ths book: The mplementaton of the fnte element algorthm n computer programs. In order to make my dscusson as straghtforward

More information

UB at GeoCLEF Department of Geography Abstract

UB at GeoCLEF Department of Geography   Abstract UB at GeoCLEF 2006 Mguel E. Ruz (1), Stuart Shapro (2), June Abbas (1), Slva B. Southwck (1) and Davd Mark (3) State Unversty of New York at Buffalo (1) Department of Lbrary and Informaton Studes (2) Department

More information

Parallel Numerics. 1 Preconditioning & Iterative Solvers (From 2016)

Parallel Numerics. 1 Preconditioning & Iterative Solvers (From 2016) Technsche Unverstät München WSe 6/7 Insttut für Informatk Prof. Dr. Thomas Huckle Dpl.-Math. Benjamn Uekermann Parallel Numercs Exercse : Prevous Exam Questons Precondtonng & Iteratve Solvers (From 6)

More information

Programming in Fortran 90 : 2017/2018

Programming in Fortran 90 : 2017/2018 Programmng n Fortran 90 : 2017/2018 Programmng n Fortran 90 : 2017/2018 Exercse 1 : Evaluaton of functon dependng on nput Wrte a program who evaluate the functon f (x,y) for any two user specfed values

More information

GSLM Operations Research II Fall 13/14

GSLM Operations Research II Fall 13/14 GSLM 58 Operatons Research II Fall /4 6. Separable Programmng Consder a general NLP mn f(x) s.t. g j (x) b j j =. m. Defnton 6.. The NLP s a separable program f ts objectve functon and all constrants are

More information

Compiler Design. Spring Register Allocation. Sample Exercises and Solutions. Prof. Pedro C. Diniz

Compiler Design. Spring Register Allocation. Sample Exercises and Solutions. Prof. Pedro C. Diniz Compler Desgn Sprng 2014 Regster Allocaton Sample Exercses and Solutons Prof. Pedro C. Dnz USC / Informaton Scences Insttute 4676 Admralty Way, Sute 1001 Marna del Rey, Calforna 90292 pedro@s.edu Regster

More information

Non-Split Restrained Dominating Set of an Interval Graph Using an Algorithm

Non-Split Restrained Dominating Set of an Interval Graph Using an Algorithm Internatonal Journal of Advancements n Research & Technology, Volume, Issue, July- ISS - on-splt Restraned Domnatng Set of an Interval Graph Usng an Algorthm ABSTRACT Dr.A.Sudhakaraah *, E. Gnana Deepka,

More information