Message-Passing Algorithms for Quadratic Programming Formulations of MAP Estimation

Akshat Kumar, Department of Computer Science, University of Massachusetts Amherst
Shlomo Zilberstein, Department of Computer Science, University of Massachusetts Amherst

Abstract

Computing maximum a posteriori (MAP) estimation in graphical models is an important inference problem with many applications. We present message-passing algorithms for quadratic programming (QP) formulations of MAP estimation for pairwise Markov random fields. In particular, we use the concave-convex procedure (CCCP) to obtain a locally optimal algorithm for the non-convex QP formulation. A similar technique is used to derive a globally convergent algorithm for the convex QP relaxation of MAP. We also show that a recently developed expectation-maximization (EM) algorithm for the QP formulation of MAP can be derived from the CCCP perspective. Experiments on synthetic and real-world problems confirm that our new approach is competitive with max-product and its variations. Compared with CPLEX, we achieve more than an order-of-magnitude speedup in solving optimally the convex QP relaxation.

1 INTRODUCTION

Probabilistic graphical models provide an effective framework for compactly representing probability distributions over high dimensional spaces and performing complex inference using simple local update procedures. In this work, we focus on the class of undirected models called Markov random fields (MRFs) [Wainwright and Jordan, 2008]. A common inference problem in this model is to compute the most probable assignment to variables, also called the maximum a posteriori (MAP) assignment. MAP estimation is crucial for many practical applications in computer vision and bioinformatics, such as protein design [Yanover et al., 2006; Sontag et al., 2008], among others.

Computing MAP exactly is NP-hard for general graphs. Thus approximate inference techniques are often used [Wainwright and Jordan, 2008; Sontag et al., 2010]. Recently, several convergent algorithms have been developed for MAP estimation, such as tree-reweighted max-product [Wainwright et al., 2005; Kolmogorov, 2006] and max-product LP [Globerson and Jaakkola, 2007; Sontag et al., 2008]. Many of these algorithms are based on the linear programming (LP) relaxation of the MAP problem [Wainwright and Jordan, 2008].

A different formulation of MAP is based on quadratic programming (QP) [Ravikumar and Lafferty, 2006; Kumar et al., 2009]. The QP formulation is an attractive alternative because it provides a more compact representation of MAP: in an MRF with n variables, k values per variable, and |E| edges, the QP has O(nk) variables whereas the LP has O(|E|k^2) variables. The large size of the LP makes off-the-shelf LP solvers impractical for several real-world problems [Yanover et al., 2006]. Another significant advantage of the QP formulation is that it is exact. However, the QP formulation is non-convex, making global optimization hard. To remedy this, Ravikumar and Lafferty [2006] developed a convex QP relaxation of the MAP problem.

Our main contribution is the analysis of the QP formulations of MAP as a difference of convex functions (D.C.) problem, which yields efficient, graph-based message-passing algorithms for both the non-convex and convex QP formulations. We use the concave-convex procedure (CCCP) to develop the message-passing algorithms [Yuille and Rangarajan, 2003]. Motivated by geometric programming [Boyd et al., 2007], we present another QP-based formulation of MAP and solve it using the CCCP technique. The resulting algorithm is shown to be equivalent to a recently developed expectation-maximization (EM) algorithm that provides good performance for large MAP problems [Kumar and Zilberstein, 2010].
The CCCP approach, however, is more flexible than EM and makes it easy to incorporate additional constraints that can tighten the convex QP [Kumar et al., 2009]. All the developed CCCP algorithms are guaranteed to converge to a local optimum for non-convex QPs, and to the global optimum for convex QPs. All the algorithms also provide monotonic improvement in the objective.

We experiment on synthetic benchmarks and real-world protein-design problems [Yanover et al., 2006]. Against max-product [Pearl, 1988], CCCP provides significantly better solution quality, sometimes more than 45% for large Ising graphs. On the real-world protein design problems, CCCP achieves near-optimal solution quality for most instances, and is significantly faster than the max-product LP method [Sontag et al., 2008]. Ravikumar and Lafferty [2006] proposed to solve the convex QP relaxation using standard QP solvers. Our message-passing algorithm for this case provides more than an order-of-magnitude speedup against the state-of-the-art QP solver CPLEX.

2 QP FORMULATION OF MAP

A pairwise Markov random field (MRF) is described using an undirected graph G = (V, E). A discrete random variable $x_i$ with a finite domain is associated with each node $i \in V$ of the graph. Associated with each edge $(i, j) \in E$ is a potential function $\theta_{ij}(x_i, x_j)$. The complete assignment $\mathbf{x}$ has the probability:

$$p(\mathbf{x}; \theta) \propto \exp\Big( \sum_{(i,j) \in E} \theta_{ij}(x_i, x_j) \Big)$$

The MAP problem consists of finding the most probable assignment to all the variables under $p(\mathbf{x}; \theta)$. This is equivalent to finding the assignment $\mathbf{x}$ that maximizes the function $f(\mathbf{x}; \theta) = \sum_{(i,j) \in E} \theta_{ij}(x_i, x_j)$. We assume w.l.o.g. that each $\theta_{ij}$ is nonnegative; otherwise a constant can be added to each $\theta_{ij}$ without changing the optimal solution. Let $p_i$ be the marginal probability associated with each MRF node $i \in V$. The MAP quadratic programming (QP) formulation [Ravikumar and Lafferty, 2006] is given by:

$$\max_{p} \sum_{(i,j) \in E} \sum_{x_i, x_j} p_i(x_i)\, p_j(x_j)\, \theta_{ij}(x_i, x_j) \quad (1)$$
$$\text{subject to } \sum_{x_i} p_i(x_i) = 1, \;\; p_i(x_i) \geq 0 \quad \forall i \in V$$

The above QP is compact even for large graphical models and has simple linear constraints: O(nk) variables and n normalization constraints, where n = |V| and k is the domain size. Ravikumar and Lafferty [2006] also show that this formulation is exact. That is, the global optimum of the above QP will maximize the function $f(\mathbf{x}; \theta)$, and an integral MAP assignment can be extracted from it. However, this formulation is non-convex, making global optimization hard. Nonetheless, for several problems, a local optimum of this QP provides a good solution, as we will show empirically. This was also observed by Kumar and Zilberstein [2010].

2.1 The Concave-Convex Procedure

The concave-convex procedure (CCCP) [Yuille and Rangarajan, 2003] is a popular approach to optimize a general non-convex function expressed as a difference of two convex functions. We use this method to obtain message-passing algorithms for QP formulations of MAP. We describe it here briefly. Consider the optimization problem:

$$\min \{ g(x) : x \in \Omega \} \quad (2)$$

where $g(x) = u(x) - v(x)$ is an arbitrary function with $u, v$ being real-valued convex functions and $\Omega$ being a convex set. The CCCP method provides an iterative procedure that generates a sequence of points $x^l$ by solving the following convex program:

$$x^{l+1} = \arg\min \{ u(x) - x^T \nabla v(x^l) : x \in \Omega \} \quad (3)$$

Each iteration of CCCP decreases the objective $g(x)$ and is guaranteed to converge to a local optimum [Sriperumbudur and Lanckriet, 2009].

2.2 Solving MAP QP Using CCCP

We first show how the CCCP framework can be used to solve the QP in Eq. (1). We adopt the convention that a MAP QP always refers to the QP in Eq. (1); the convex variant of this QP shall be explicitly differentiated when addressed later.
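Before specializing CCCP to the MAP QP, here is a minimal numeric sketch of the generic scheme in Eqs. (2)-(3), not taken from the paper: we minimize the illustrative one-dimensional function $g(x) = x^4 - 2x^2$, split as $u(x) = x^4$ and $v(x) = 2x^2$ (both convex). The linearized subproblem of Eq. (3) has the closed-form solution $x^{l+1} = \sqrt[3]{x^l}$, and the iterates descend monotonically to a local minimum at $x = \pm 1$.

```python
import numpy as np

# CCCP on g(x) = u(x) - v(x) with u(x) = x**4 and v(x) = 2*x**2.
# Eq. (3): x_{l+1} = argmin_x u(x) - x * v'(x_l); setting u'(x) = v'(x_l)
# gives 4 x^3 = 4 x_l, i.e., x_{l+1} = cbrt(x_l).
g = lambda x: x**4 - 2 * x**2

x = 0.2                      # arbitrary starting point
for l in range(30):
    x = np.cbrt(x)           # closed-form CCCP update; g(x) decreases monotonically
print(x, g(x))               # -> approximately 1.0 and -1.0, a local minimum of g
```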
Consider the following functions u, v:

$$u(p) = \frac{1}{2} \sum_{(i,j) \in E} \sum_{x_i, x_j} \theta_{ij}(x_i, x_j)\big( p_i(x_i)^2 + p_j(x_j)^2 \big) \quad (4)$$

$$v(p) = \frac{1}{2} \sum_{(i,j) \in E} \sum_{x_i, x_j} \theta_{ij}(x_i, x_j)\big( p_i(x_i) + p_j(x_j) \big)^2 \quad (5)$$

The above functions are convex because the quadratic functions $f(z) = z^2$ and $f(y, z) = (y + z)^2$ are convex, and the nonnegative weighted sum of convex functions is also convex [Boyd and Vandenberghe, 2004, Ch. 3]. It can be easily verified that the QP in Eq. (1) can be written as $\min_p \{ u(p) - v(p) \}$, with normalization and nonnegativity constraints defining the constraint set $\Omega$. Intuitively, we used the simple identity $xy = \frac{1}{2}\big[ (x + y)^2 - (x^2 + y^2) \big]$. We also negated the objective function to convert maximization to minimization. For simplicity, we denote the gradient $\partial v / \partial p_i(x_i)$ by $\nabla_{x_i} v$:

$$\nabla_{x_i} v = p_i(x_i) \sum_{j \in Ne(i)} \sum_{x_j} \theta_{ij}(x_i, x_j) + \sum_{j \in Ne(i)} \sum_{x_j} \theta_{ij}(x_i, x_j)\, p_j(x_j)$$

The first part of the above equation involves a local computation associated with an MRF node $i$, and the second part defines the messages $\delta_{j \to i}$ from the neighbors $j \in Ne(i)$ of node $i$. It can be made explicit as follows:

$$\hat{\theta}_i(x_i) = \sum_{j \in Ne(i)} \sum_{x_j} \theta_{ij}(x_i, x_j); \qquad \delta_{j \to i}(x_i) = \sum_{x_j} \theta_{ij}(x_i, x_j)\, p_j(x_j)$$

$$\nabla_{x_i} v = p_i(x_i)\, \hat{\theta}_i(x_i) + \sum_{j \in Ne(i)} \delta_{j \to i}(x_i) \quad (6)$$

CCCP iterations: Each iteration of CCCP involves solving the convex program of Eq. (3). First we write the Lagrangian function involving only the normalization constraints; later we address the nonnegativity inequality constraints. $\nabla^l v$ denotes the gradient from the previous iteration l.

$$L(p, \lambda) = \frac{1}{2} \sum_{(i,j) \in E} \sum_{x_i, x_j} \theta_{ij}(x_i, x_j)\big\{ p_i(x_i)^2 + p_j(x_j)^2 \big\} - \sum_i \sum_{x_i} p_i(x_i)\, \nabla^l_{x_i} v + \sum_i \lambda_i \Big( \sum_{x_i} p_i(x_i) - 1 \Big) \quad (7)$$

Solving the first order optimality conditions $\nabla_p L(p, \lambda) = 0$ and $\nabla_\lambda L(p, \lambda) = 0$, we get the solution:

$$p_i^{l+1}(x_i) = \frac{\nabla^l_{x_i} v - \lambda_i}{\hat{\theta}_i(x_i)} \quad (8)$$

$$\lambda_i = \frac{ \sum_{x_i} \nabla^l_{x_i} v / \hat{\theta}_i(x_i) - 1 }{ \sum_{x_i} 1 / \hat{\theta}_i(x_i) } \quad (9)$$

Nonnegativity constraints: Nonnegativity constraints in the MAP QP are inequality constraints, which are harder to handle as the Karush-Kuhn-Tucker (KKT) conditions include the nonlinear complementary slackness condition $\mu_j(x_j)\, p_j(x_j) = 0$ [Boyd and Vandenberghe, 2004]. We could use interior-point methods, but they lose the efficiency of graph-based message passing. Fortunately, we show that for the MAP QP, the KKT conditions are easily satisfied by incorporating an inner loop in the CCCP iterations.

Alg. 1 shows the complete message-passing procedure to solve MAP QPs. Each outer loop corresponds to solving the CCCP iteration of Eq. (3) and is run until the desired number of iterations is reached. The messages $\delta$ are used for computing the gradient $\nabla v$ as in Eq. (6). The inner loop corresponds to satisfying the KKT conditions, including the nonnegativity inequality constraints.

Algorithm 1: Graph-based message passing for MAP estimation
input: Graph G = (V, E) and potentials $\theta_{ij}$ per edge
repeat  // outer loop
    foreach node $i \in V$ do
        $\delta_{i \to j}(x_j) \leftarrow \sum_{x_i} p_i(x_i)\, \theta_{ij}(x_i, x_j)$
        Send message $\delta_{i \to j}$ to each neighbor $j \in Ne(i)$
    foreach node $i \in V$ do
        $zeros_i \leftarrow \emptyset$
        repeat  // inner loop
            Set $p_i(x_i) \leftarrow 0 \;\; \forall x_i \in zeros_i$
            Calculate $p_i(x_i)$ using Eq. (8) $\forall x_i \notin zeros_i$
            $zeros_i \leftarrow zeros_i \cup \{ x_i : p_i(x_i) < 0 \}$
        until all beliefs $p_i(x_i) \geq 0$
until stopping criterion is satisfied
return: The decoded complete integral assignment

Intuitively, the strategy to handle inequality constraints is as highlighted in [Bertsekas, 1999]: consider all possible combinations of inequality constraints being active ($p_i(x_i) = 0$) or inactive ($p_i(x_i) > 0$) and solve the resulting KKT conditions, which is easier as they become linear equations. If the resulting solution satisfies the KKT conditions of the original problem, then we have a valid solution for the original optimization problem. Of course, this is highly inefficient in the general case. But fortunately, for the MAP QP, we show that the inner loop of Alg. 1 recovers the correct solution and the Lagrange multipliers are computed efficiently for the convex program of Eq. (3). We describe it below.

The inner loop involves computation local to each MRF node and does not require message passing. Intuitively, the set $zeros_i$ tracks all the settings $x_i$ of the variable $x_i$ for which $p_i(x_i)$ was negative in any previous inner loop iteration. It then clamps all such beliefs to 0 for all future iterations. The beliefs for the rest of the settings of $x_i$ are then computed using Eq. (8). The new Lagrange multiplier $\lambda_i$ (which corresponds to the condition $\nabla_{\lambda_i} L(p, \lambda) = 0$) is calculated using the equation $\sum_{x_i \setminus zeros_i} p_i(x_i) = 1$.

Lemma 1. The inner loop of Alg. 1 terminates with worst case complexity $O(k^2)$, and yields a feasible point for the convex program of Eq. (3).

Proof. The size of the set $zeros_i$ increases with each iteration; therefore the inner loop must terminate, as each variable's domain is finite. With the domain size of a variable being k, the inner loop can run for at most k iterations.
Computing new beliefs within each inner loop iteration also requires O(k) time. Thus the worst case total complexity is $O(k^2)$. The inner loop can terminate in only two ways: before iteration k or at iteration k. If it terminates before iteration k, then it implies that all the beliefs $p_i(x_i)$ are nonnegative. The normalization constraints are always enforced by the Lagrange multipliers $\lambda_i$. If it terminates during iteration k, then it implies that $k - 1$ settings of the variable $x_i$ are clamped to zero, as the size of the set $zeros_i$ will be exactly $k - 1$. The size cannot be k because that would make all the beliefs equal to zero, making it impossible to satisfy the normalization constraint; $\lambda_i$ will not allow this. The size cannot be smaller than $k - 1$ because the set $zeros_i$ grows by at least one element during each previous iteration. Therefore the only solution during iteration k is to set the single remaining setting of the variable $x_i$ to 1, satisfying the normalization and nonnegativity constraints simultaneously. Therefore the inner loop always yields a feasible point upon termination.

Empirically, we observed that even for large protein design problems with k = 150, the number of required inner loop iterations is below 20, far below the worst case complexity.

For a fixed outer loop iteration l, let the inner loop iterations be indexed by r.

Lemma 2. The Lagrange multiplier corresponding to the normalization constraint for an MRF variable $x_i$ always increases with each inner loop iteration.

Proof. Each inner loop iteration r computes a new Lagrange multiplier $\lambda_i^r$ for the constraint $\sum_{x_i} p_i(x_i) = 1$ using Eq. (8). We show that $\lambda_i^{r+1} > \lambda_i^r$. For the inner loop iteration r, some of the computed beliefs must be negative; otherwise the inner loop would have terminated. Let $\bar{x}_i$ denote those settings of variable $x_i$ for which $p_i(\bar{x}_i) < 0$ in iteration r. From the normalization constraint for iteration r, we get:

$$\sum_{x_i} \frac{\nabla^l_{x_i} v - \lambda_i^r}{\hat{\theta}_i(x_i)} = 1 \quad (10)$$

We used the explicit representation of $p_i(x_i)$ from Eq. (8). Since the $p_i(\bar{x}_i)$ are negative, we get:

$$\sum_{x_i \setminus \bar{x}_i} \frac{\nabla^l_{x_i} v - \lambda_i^r}{\hat{\theta}_i(x_i)} > 1 \quad (11)$$

The belief for all such $\bar{x}_i$ will become zero for the next inner loop iteration r + 1. From the normalization constraint for iteration r + 1, we get:

$$\sum_{x_i \setminus \bar{x}_i} \frac{\nabla^l_{x_i} v - \lambda_i^{r+1}}{\hat{\theta}_i(x_i)} = 1 \quad (12)$$

We used a slight simplification in the above equations, as we ignored the effect of previous iterations before iteration r. However, it does not change the conclusion, as all the beliefs that were clamped to zero earlier (before iteration r) shall remain so for all future iterations. Note that $\bar{x}_i$ and $\hat{\theta}_i$ do not depend on the inner loop iterations. Subtracting Eq. (12) from Eq. (11):

$$(\lambda_i^{r+1} - \lambda_i^r) \sum_{x_i \setminus \bar{x}_i} \frac{1}{\hat{\theta}_i(x_i)} > 0 \quad (13)$$

Since we assumed that all potential functions $\theta_{ij}$ are nonnegative, we must have $\lambda_i^{r+1} - \lambda_i^r > 0$. Hence $\lambda_i^{r+1} > \lambda_i^r$ and the lemma is proved.

Theorem 3. The inner loop of Alg. 1 correctly recovers all the Lagrange multipliers for the equality and inequality constraints of the convex program of Eq. (3), thus solving it exactly.

Proof. Lemma 1 shows that the inner loop provides a feasible point of the convex program. We now show that this point also satisfies the KKT conditions, and thus is the optimal solution. The KKT conditions for the normalization constraints are always satisfied during the belief updates (see Eq. (8)). The main task is to show that the KKT conditions hold for the inequality constraint $p_i(x_i) \geq 0$. That is, if $p_i(x_i) = 0$, then the Lagrange multiplier $\mu_i(x_i) \geq 0$, and if $p_i(x_i) > 0$, then $\mu_i(x_i) = 0$. Using the KKT condition $\nabla_{p_i(x_i)} L(p, \lambda, \mu) = 0$, we get:

$$\hat{\theta}_i(x_i)\, p_i(x_i) - \nabla^l_{x_i} v + \lambda_i - \mu_i(x_i) = 0 \quad (14)$$

The main focus of the proof is on the beliefs for elements in the set $zeros_i$. Let us focus on the end of an inner loop iteration r, when a new element $\bar{x}_i$ is added to $zeros_i$ because its computed belief $p_i(\bar{x}_i) < 0$. Using Eq. (8), we know that $p_i(\bar{x}_i) = \big( \nabla^l_{\bar{x}_i} v - \lambda_i^r \big) / \hat{\theta}_i(\bar{x}_i)$. Because $p_i(\bar{x}_i) < 0$, we get:

$$\lambda_i^r > \nabla^l_{\bar{x}_i} v \quad (15)$$

For all future iterations of the inner loop, $p_i(\bar{x}_i)$ will be set to zero. Therefore the KKT condition for iteration r + 1 mandates that $\mu_i^{r+1}(\bar{x}_i) \geq 0$. Setting $p_i(\bar{x}_i) = 0$ in Eq. (14), we get:

$$\mu_i^{r+1}(\bar{x}_i) = \lambda_i^{r+1} - \nabla^l_{\bar{x}_i} v \quad (16)$$

We know from Lemma 2 that $\lambda_i^{r+1} > \lambda_i^r$. Combining this fact with Eq. (15), we get $\mu_i^{r+1}(\bar{x}_i) > 0$, thereby satisfying the KKT condition. Note that the only component depending on the inner loop in the above condition is $\lambda_i^r$; $\bar{x}_i$ is fixed during each inner loop.
Furthermore, for all future inner loop iterations, the KKT conditions for all elements $\bar{x}_i$ in the set $zeros_i$ will be met, due to the increasing nature of the multiplier $\lambda_i$. Therefore, when the inner loop terminates, we shall have correct Lagrange multipliers $\mu_i$ satisfying $\mu_i \geq 0$ for all the elements of the set $zeros_i$. For the rest of the elements, the multiplier $\mu_i = 0$, satisfying all the KKT conditions. As the first order KKT conditions are both necessary and sufficient for optimality in convex programs [Bertsekas, 1999], the inner loop solves exactly the convex program in Eq. (3).
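For concreteness, the following is a minimal sketch of Alg. 1 in Python/NumPy, under stated assumptions: a uniform domain size k, strictly positive potentials so that $\hat{\theta}_i(x_i) > 0$, every node having at least one incident edge, a fixed iteration budget as the stopping criterion, and per-node argmax as the decoding step. All function and variable names are illustrative, not part of the paper.

```python
import numpy as np

def cccp_map_qp(theta, k, n_outer=100):
    """Sketch of Alg. 1 for the MAP QP of Eq. (1).

    theta: dict mapping an edge (i, j) to a (k, k) array of strictly
           positive potentials theta_ij(x_i, x_j)
    k:     domain size, assumed uniform across variables for brevity
    """
    nodes = sorted({u for edge in theta for u in edge})
    nbrs = {i: [] for i in nodes}
    for i, j in theta:
        nbrs[i].append(j)
        nbrs[j].append(i)

    def pot(i, j):
        """theta_ij as a (k, k) array indexed by (x_i, x_j)."""
        return theta[(i, j)] if (i, j) in theta else theta[(j, i)].T

    # theta_hat_i(x_i): sum over neighbors j and their states x_j
    theta_hat = {i: sum(pot(i, j).sum(axis=1) for j in nbrs[i]) for i in nodes}
    p = {i: np.full(k, 1.0 / k) for i in nodes}   # uniform initial beliefs

    for _ in range(n_outer):
        # gradient of v at p^l, Eq. (6): local term + incoming delta messages
        grad = {i: p[i] * theta_hat[i]
                   + sum(pot(i, j) @ p[j] for j in nbrs[i])
                for i in nodes}
        for i in nodes:
            zeros = np.zeros(k, dtype=bool)       # states clamped to zero
            while True:                           # inner loop of Alg. 1
                free = ~zeros
                # lambda_i from the normalization constraint (Eq. (9)),
                # restricted to the currently free states
                lam = ((grad[i][free] / theta_hat[i][free]).sum() - 1.0) \
                      / (1.0 / theta_hat[i][free]).sum()
                new_p = np.zeros(k)
                new_p[free] = (grad[i][free] - lam) / theta_hat[i][free]  # Eq. (8)
                if (new_p >= 0.0).all():
                    p[i] = new_p
                    break
                zeros |= new_p < 0.0              # clamp negative beliefs, repeat

    # decode an integral assignment from the (possibly fractional) beliefs
    return {i: int(np.argmax(p[i])) for i in nodes}
```

Each outer sweep computes the gradient for all nodes from the current beliefs before any node is updated, matching the linearization at $p^l$ in Eq. (3); the inner `while` loop is the clamping procedure whose termination and correctness are established by Lemma 1 and Theorem 3.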

2.3 Solving Convex MAP QP Using CCCP

Because the previous QP formulation of MAP is non-convex, global optimization is hard. To remedy this, Ravikumar and Lafferty [2006] developed a convex QP relaxation for MAP, which performed well on their benchmarks. Recently, Kumar et al. [2009] showed that the convex QP relaxation is also equivalent to the second order cone programming (SOCP) relaxation. Ravikumar and Lafferty [2006] proposed to solve such QPs using standard QP solvers. We show using CCCP that this QP relaxation can be solved efficiently using graph-based message passing, and that the resulting algorithm converges to the global optimum of the relaxed QP. Experimentally, we found the resulting message-passing algorithm to be highly efficient even for large graphs, outperforming CPLEX by more than an order of magnitude. The relaxed QP is described as follows:

$$\max_p \; \sum_i \sum_{x_i} p_i(x_i)\, d_i(x_i) + \sum_{(i,j) \in E} \sum_{x_i, x_j} p_i(x_i)\, p_j(x_j)\, \theta_{ij}(x_i, x_j) - \sum_i \sum_{x_i} p_i(x_i)^2\, d_i(x_i) \quad (17)$$

The relaxation is based on adding a diagonal term, $d_i(x_i)$, for each variable $x_i$. Note that under the integrality assumption $p_i(x_i)^2 = p_i(x_i)$, the first and last terms cancel out, resulting in the original MAP QP. The diagonal term is given by:

$$d_i(x_i) = \frac{1}{2} \sum_{j \in Ne(i)} \sum_{x_j} \theta_{ij}(x_i, x_j)$$

Consider the convex function u(p) represented as:

$$u(p) = \frac{1}{2} \sum_{(i,j) \in E} \sum_{x_i, x_j} \theta_{ij}(x_i, x_j)\big( p_i(x_i)^2 + p_j(x_j)^2 \big) + \sum_i \sum_{x_i} p_i(x_i)^2\, d_i(x_i)$$

and the convex function v(p) represented as:

$$v(p) = \frac{1}{2} \sum_{(i,j) \in E} \sum_{x_i, x_j} \theta_{ij}(x_i, x_j)\big( p_i(x_i) + p_j(x_j) \big)^2 + \sum_i \sum_{x_i} p_i(x_i)\, d_i(x_i)$$

The above two functions are the same as in the original QP formulation, except for the added diagonal terms $d_i(x_i)$. It can be easily verified that the relaxed QP objective can be written as $\min_p \{ u(p) - v(p) \}$, subject to normalization and nonnegativity constraints. Note that the maximization of the relaxed QP is converted to minimization by negating the objective. The gradient required by CCCP is given by:

$$\nabla_{x_i} v = p_i(x_i)\, \hat{\theta}_i(x_i) + \sum_{j \in Ne(i)} \sum_{x_j} \theta_{ij}(x_i, x_j)\, p_j(x_j) + d_i(x_i)$$

Notice the close similarity with the MAP QP case in Eq. (6). The only additional term is $d_i(x_i)$, which needs to be computed only once before message passing begins. The messages for the relaxed QP case are exactly the same as the $\delta$ messages for the MAP QP. The Lagrangian corresponding to the convex program of Eq. (3) is similar to the MAP QP case (see Eq. (7)), with an additional term $\sum_i \sum_{x_i} p_i(x_i)^2\, d_i(x_i)$. The constraint set $\Omega$ includes the normalization and nonnegativity constraints, as in the MAP QP case. Solving the optimality conditions $\nabla_p L(p, \lambda) = 0$ and $\nabla_\lambda L(p, \lambda) = 0$, we get the new beliefs as follows:

$$p_i^{l+1}(x_i) = \frac{\nabla^l_{x_i} v - \lambda_i}{2\, d_i(x_i) + \hat{\theta}_i(x_i)} \quad (18)$$

The Lagrange multiplier $\lambda_i$ for the normalization constraint can be calculated using the equation $\sum_{x_i} p_i(x_i) = 1$. The only difference from the corresponding Eq. (8) for the MAP QP is the additional term $2\, d_i(x_i)$ in the denominator.

Thanks to these strong similarities, we can show that Alg. 1 also works for the convex MAP QP with minor modifications. First, we calculate the diagonal terms $d_i(x_i)$ once for each variable $x_i$ of the MRF. The message-passing procedure for each outer loop iteration remains the same. The second difference lies in the inner loop that enforces the nonnegativity constraints: the inner loop now uses Eq. (18) instead of Eq. (8) to estimate the new beliefs $p_i(x_i)$. The proof is omitted, being very similar to the MAP QP case.

Theorem 4. The CCCP message-passing algorithm converges to a global optimum of the convex MAP QP.

The result is based on the fact that CCCP converges to a stationary point of the given constrained optimization problem that satisfies the KKT conditions [Sriperumbudur and Lanckriet, 2009]. Because the KKT conditions are both necessary and sufficient for convex optimization problems with linear constraints [Bertsekas, 1999], the result follows.
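As a hedged illustration of the modified update, the following helper computes Eq. (18) for a single node when no states are clamped (the clamping inner loop of Alg. 1 carries over unchanged); the names are illustrative, and the gradient vector is assumed to already include the $+\, d_i(x_i)$ term.

```python
import numpy as np

def convex_qp_update(grad_v, theta_hat, d):
    """One belief update for the convex relaxation, Eq. (18).

    grad_v:    gradient vector for node i (already includes the + d_i term)
    theta_hat: theta_hat_i(x_i) as a vector
    d:         diagonal terms d_i(x_i) as a vector
    """
    denom = 2.0 * d + theta_hat
    # lambda_i from the normalization constraint sum_{x_i} p_i(x_i) = 1
    lam = ((grad_v / denom).sum() - 1.0) / (1.0 / denom).sum()
    return (grad_v - lam) / denom
```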
We also highlight that a global optimum of the convex QP may not solve the MAP problem exactly, as the convex QP is a variational approximation to MAP that may not be tight. Nonetheless, it has been shown to perform well in practice [Ravikumar and Lafferty, 2006].

2.4 GP-Based Reformulation of MAP QP

We now present another formulation of the MAP QP problem based on geometric programming (GP). A GP is a type of mathematical program characterized by objectives and constraints that have a special form. For details, we refer to [Boyd et al., 2007]. While the QP formulation of MAP in Eq. (1) is not exactly a GP, it bears a close resemblance. This allows us to transfer some ideas from GP, which we shall describe. We start with some basic concepts of GP.

Definition 1. Let $x_1, \ldots, x_n$ denote real positive variables. A real-valued monomial function has the form $f(x) = c\, x_1^{a_1} x_2^{a_2} \cdots x_n^{a_n}$, where $c > 0$ and $a_i \in \mathbb{R}$. A posynomial function is a sum of one or more monomials: $f(x) = \sum_{k=1}^{K} c_k\, x_1^{a_{1k}} x_2^{a_{2k}} \cdots x_n^{a_{nk}}$.

In a GP, the objective function and all inequality constraints are posynomial functions. The MAP QP (see Eq. (1)) satisfies this requirement: the potential function $\theta_{ij}$ corresponds to $c_k$ and is positive (the $\theta_{ij} = 0$ case can be excluded for convenience), and the node marginals $p_i(x_i)$ are real positive variables. Since we already assumed the marginals $p_i$ to be positive, nonnegativity inequality constraints are not required. In a GP, the equality constraints can only be monomial functions. This is not satisfied in the MAP QP, as the normalization constraints are posynomials. Nonetheless, we proceed as in GP by converting the original problem using certain optimality-preserving transformations. The first change is to let $p_i(x_i) = e^{y_i(x_i)}$, where $y_i(x_i)$ is an unconstrained variable. This is allowed as all marginals must be positive. The second change is to take the log of the objective function; because log is a monotonic function, this does not change the optimal solution. The reformulated MAP QP is shown below:

$$\min: \; -\log \sum_{(i,j) \in E} \sum_{x_i, x_j} \exp\big( y_i(x_i) + y_j(x_j) + \log \theta_{ij}(x_i, x_j) \big)$$
$$\text{subject to: } \sum_{x_i} e^{y_i(x_i)} = 1 \quad \forall i \in V$$

This nonlinear program has the same optimal solutions as the original MAP QP. As log-sum-exp is convex, the objective function of the above problem is concave. Note that we are minimizing a concave function that can have multiple local optima. We again solve it using CCCP. Consider the function $u(y) = 0$ and $v(y)$ as the objective of the above program, but without the negative sign. The gradient required by CCCP is:

$$\nabla^l_{y_i(x_i)} v = \frac{ \sum_{j \in Ne(i)} \sum_{x_j} \theta_{ij}(x_i, x_j) \exp\big( y_i(x_i) + y_j(x_j) \big) }{ \sum_{(i,j) \in E} \sum_{x_i, x_j} \theta_{ij}(x_i, x_j) \exp\big( y_i(x_i) + y_j(x_j) \big) } \quad (19)$$

The Lagrangian corresponding to Eq. (3), with the constraint set including only normalization constraints, is:

$$L(y, \lambda) = -\sum_i \sum_{x_i} y_i(x_i)\, \nabla^l_{y_i(x_i)} v + \sum_i \lambda_i \Big( \sum_{x_i} e^{y_i(x_i)} - 1 \Big)$$

Using the first order optimality condition, we get:

$$\exp\big( y_i(x_i) \big) = \frac{ \nabla^l_{y_i(x_i)} v }{ \lambda_i } \quad (20)$$

We note that the denominator of Eq. (19) is a constant for each $y_i(x_i)$. Therefore we represent it using $c^l$. Resubstituting $p_i(x_i) = e^{y_i(x_i)}$ and $\nabla^l_{y_i(x_i)} v$ in Eq. (20):

$$p_i^\star(x_i) = \frac{ \sum_{j \in Ne(i)} \sum_{x_j} \theta_{ij}(x_i, x_j)\, p_i(x_i)\, p_j(x_j) }{ c^l\, \lambda_i }$$

where $p_i^\star(x_i)$ is the new parameter for the current iteration, and parameters without asterisk (on the R.H.S.) are from the previous iteration. Since $c^l \lambda_i$ is also a constant, we can replace it by a normalization constant C to get the final update equation:

$$p_i^\star(x_i) = \frac{ p_i(x_i) \sum_{j \in Ne(i)} \sum_{x_j} \theta_{ij}(x_i, x_j)\, p_j(x_j) }{ C }$$

The message-passing process for this version of CCCP is exactly the same as that for the MAP QP and the convex QP. This version does not require an inner loop, as all the node marginals remain positive under such updates. This update process is also identical to the recently developed message-passing algorithm for MAP estimation that is based on expectation-maximization (EM) rather than CCCP [Kumar and Zilberstein, 2010]. However, CCCP provides a more flexible framework in that it handles the non-convex and convex QP in a similar way, as shown earlier. Furthermore, the CCCP framework allows additional constraints to be added to the convex QP to make it tighter [Sriperumbudur and Lanckriet, 2009].

In sum, we have shown that the concave-convex procedure provides a unifying framework for the various quadratic programming formulations of the MAP problem. Each iteration of CCCP can be easily implemented using graph-based message passing. Interestingly, the messages exchanged for all the QP formulations we discussed remain exactly the same; the differences lie in how the new node marginals are computed using such messages.
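A minimal sketch of the final multiplicative update above, as one synchronous sweep; the names are illustrative, the per-node constant C is realized by renormalizing each belief vector, and every node is assumed to have at least one incident edge.

```python
import numpy as np

def gp_update(p, edges):
    """One synchronous sweep of p_i*(x_i) = p_i(x_i) * sum_j delta_{j->i}(x_i) / C.

    p:     dict node -> positive belief vector summing to 1
    edges: dict (i, j) -> potential array of shape (|dom i|, |dom j|),
           one entry per undirected edge
    """
    msg = {i: np.zeros_like(b) for i, b in p.items()}
    for (i, j), th in edges.items():
        msg[i] += th @ p[j]     # delta_{j->i}(x_i) = sum_{x_j} theta_ij(x_i, x_j) p_j(x_j)
        msg[j] += th.T @ p[i]   # delta_{i->j}(x_j), the reverse message
    new_p = {i: p[i] * msg[i] for i in p}
    return {i: b / b.sum() for i, b in new_p.items()}  # C is the per-node normalizer
```

Because the update is purely multiplicative over positive quantities, the marginals stay strictly positive, which is why no clamping inner loop is needed for this variant.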
3 EXPERIMENTS

We now present an empirical evaluation of the CCCP algorithms. We first report results on synthetic graphs generated using the Ising model from statistical physics [Baxter, 1982]. We compare max-product (MP) and the CCCP message-passing algorithm for the QP formulation of MAP. We generated 2D nearest-neighbor grid graphs for a number of grid sizes (ranging between 10 x 10 and 50 x 50) and varying values of the coupling parameter. All the variables were binary. The node potentials were generated by sampling from the uniform distribution U[-0.05, 0.05]. The coupling strength, $d_{coup}$, for each edge was sampled from U[-β, β], following the mixed Ising model. The binary edge potential $\theta_{ij}$ was defined as follows:

$$\theta_{ij}(x_i, x_j) = \begin{cases} d_{coup} & x_i = x_j \\ -d_{coup} & x_i \neq x_j \end{cases}$$
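A sketch of this instance generator under the stated protocol (function and variable names are illustrative; recall from Section 2 that a per-edge constant can be added afterwards to make all potentials nonnegative, as the QP algorithms assume):

```python
import numpy as np

def mixed_ising_grid(n, beta, seed=0):
    """Generate one n x n mixed Ising instance: binary variables, node
    potentials from U[-0.05, 0.05], and per-edge coupling d_coup ~ U[-beta, beta]
    with theta_ij = +d_coup on agreement and -d_coup on disagreement."""
    rng = np.random.default_rng(seed)
    node_pot = {(r, c): rng.uniform(-0.05, 0.05, size=2)
                for r in range(n) for c in range(n)}
    edge_pot = {}
    for r in range(n):
        for c in range(n):
            for r2, c2 in ((r + 1, c), (r, c + 1)):   # down and right neighbors
                if r2 < n and c2 < n:
                    d = rng.uniform(-beta, beta)
                    edge_pot[((r, c), (r2, c2))] = np.array([[d, -d],
                                                             [-d, d]])
    return node_pot, edge_pot
```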

[Figure 1: (a)-(e) Quality comparison between max-product (MP) and CCCP for Ising graphs with 100, 400, 900, 1600, and 2500 nodes. The x-axis denotes the coupling parameter β; the y-axis shows solution quality. (f) Solution quality CCCP achieves as a percentage of the optimal value (y-axis) for different protein design instances (x-axis).]

For every grid size and each setting of the coupling strength parameter β, we generated 10 instances by sampling $d_{coup}$ per edge. For each instance, we considered the best solution quality of 10 runs for both max-product and CCCP. We then report the average quality over the 10 instances for each parameter β. Both max-product and CCCP were implemented in JAVA and ran on a 2.4GHz CPU. Max-product was allowed 1000 iterations and often did not converge, whereas CCCP converged within 500 iterations.

Fig. 1(a-e) shows solution quality comparisons between MP and CCCP. For 10 x 10 graphs (Fig. 1(a)), both CCCP and MP achieve similar solution quality. The gain in quality provided by CCCP increases with the size of the grid graph. For 20 x 20 grids, the average gain in solution quality, $(Q_{CCCP} - Q_{MP})/Q_{MP}$, for each coupling strength parameter β is over 20%. For 30 x 30 grids (Fig. 1(c)), the gain is above 30% for each parameter β; for 40 x 40 grids it is 36%, and for 50 x 50 grids it is 43%. Overall, CCCP provides much better performance than max-product over these Ising graphs. And unlike max-product, CCCP monotonically increases solution quality and is guaranteed to converge. A detailed performance evaluation of the convex QP is provided in [Ravikumar and Lafferty, 2006]. As such Ising graphs have a relatively small QP representation, the CCCP message-passing method and CPLEX had similar runtime for the convex QP.

We also experimented on the protein design benchmark (a total of 97 instances) [Yanover et al., 2006]. In these problems, the task is to find a sequence of amino acids that is as stable as possible for a given backbone structure of the protein. This problem can be modeled using a pairwise Markov random field. These problems are particularly hard and dense, with up to 170 variables, each with a large domain size of up to 150 values. Fig. 1(f) shows the percentage of the optimal value CCCP achieves against the best upper bound provided by the LP-based approach MPLP [Sontag et al., 2008]. MPLP has been shown to be very effective in solving the MAP problem exactly for several real-world problems. However, for these protein design problems, due to the large variable domains, its reported mean running time is 9.7 hours [Sontag et al., 2008]. As Fig. 1(f) shows, CCCP achieves near-optimal solutions, on average within 97.7% of the optimal value. A significant advantage of CCCP is its speed: it converges within 100 iterations for all these problems and requires 403 seconds for the largest instance, much faster than MPLP. The mean running time of CCCP was 170 seconds for this dataset. Thus CCCP can prove to be quite effective when fast, approximate solutions are desired.
The main reason for this speedup is that CCCP's messages are easier to compute than MPLP's, as also highlighted in [Kumar and Zilberstein, 2010]. Compared to the EM approach of [Kumar and Zilberstein, 2010], CCCP provides better solution quality: EM achieved 95% of the optimal value on average, while CCCP achieves 97.7%. The overhead of the inner loop in CCCP is small compared with EM, which takes 35 seconds for the largest instance, while CCCP takes 403 seconds.

We also tested CCCP on the protein prediction dataset [Yanover et al., 2006]. The problems in this dataset are much smaller than those in the protein design dataset, and both max-product and MPLP achieve good solution quality. CCCP's performance was worse in this case, partly due to the local optima present in the non-convex QP formulation of MAP. The convex QP formulation was not tight in this case.

[Figure 2: (a) Time comparison of CCCP for the convex QP against CPLEX for the largest 25 protein design instances (x-axis); the y-axis denotes $T_{CCCP}/T_{CPLEX}$ as a percentage. (b) The signed quality difference $Q_{CCCP} - Q_{CPLEX}$; a higher value is better.]

Fig. 2(a) shows the runtime comparison of CCCP against CPLEX for the convex QP on the 25 largest protein design problems, w.r.t. the number of graph edges. After trying different QP solver options available in CPLEX, we chose the barrier method, which provided the best performance. As CPLEX was quite slow, we let CPLEX use 8 CPU cores with 8GB RAM, while CCCP only used a single CPU. As this figure shows, CCCP is more than an order of magnitude faster than CPLEX even when it uses a single core. The longest CPLEX took was 3504 seconds, whereas CCCP only took 99 seconds for the same instance. The mean running time of CPLEX was 1914 seconds; for CCCP, it was 96 seconds. Surprisingly, CCCP converges in only 15 iterations to the optimal solution for all 25 problems. Fig. 2(b) shows the signed quality difference between CCCP and CPLEX for the convex QP objective. CPLEX provides the optimal solution within some non-zero ε (we used the default setting). This figure shows that even within 15 iterations, CCCP achieved a slightly better solution. The decoded solution quality provided by the convex QP was decent, within 80% of the optimal value, but not as high as the CCCP method for the non-convex QP.

4 CONCLUSION

We presented new message-passing algorithms for various quadratic programming formulations of the MAP problem. We showed that the concave-convex procedure provides a unifying framework for the different QP formulations of the MAP problem represented as a difference of convex functions. The resulting algorithms were shown to be convergent to a local optimum for the non-convex QP and to the global optimum of the convex QP. Empirically, the CCCP algorithm was shown to work well on Ising graphs and real-world protein design problems. The CCCP approach provided much better solution quality than max-product for Ising graphs, and converged significantly faster than max-product LP for protein design problems while providing near-optimal solutions. For the convex QP relaxation, CCCP provided more than an order-of-magnitude speedup over the state-of-the-art QP solver CPLEX. These results offer a powerful new way for solving large MAP estimation problems efficiently.

Acknowledgments

Support for this work was provided in part by the NSF Grant IIS and by the AFOSR Grant FA.

References

Baxter, R. (1982). Exactly Solved Models in Statistical Mechanics. Academic Press, London.

Bertsekas, D. P. (1999). Nonlinear Programming. Athena Scientific, 2nd edition.

Boyd, S., Kim, S.-J., Vandenberghe, L., and Hassibi, A. (2007). A tutorial on geometric programming. Optimization and Engineering, 8.

Boyd, S. and Vandenberghe, L. (2004). Convex Optimization. Cambridge University Press, New York, NY, USA.

Globerson, A. and Jaakkola, T. (2007). Fixing Max-Product: Convergent message passing algorithms for MAP LP-relaxations. In NIPS.

Kolmogorov, V. (2006). Convergent tree-reweighted message passing for energy minimization. IEEE Trans. Pattern Anal. Mach. Intell., 28.

Kumar, A. and Zilberstein, S. (2010). MAP estimation for graphical models by likelihood maximization. In NIPS.

Kumar, M. P., Kolmogorov, V., and Torr, P. H. S. (2009). An analysis of convex relaxations for MAP estimation of discrete MRFs. J. Mach. Learn. Res., 10.

Pearl, J. (1988). Probabilistic Reasoning in Intelligent Systems. Morgan Kaufmann Publishers Inc.

Ravikumar, P. and Lafferty, J. (2006). Quadratic programming relaxations for metric labeling and Markov random field MAP estimation. In ICML.
Sontag, D., Globerson, A., and Jaakkola, T. (2010). Introduction to Dual Decomposition for Inference. In Optimization for Machine Learning. MIT Press.

Sontag, D., Meltzer, T., Globerson, A., Jaakkola, T., and Weiss, Y. (2008). Tightening LP relaxations for MAP using message passing. In UAI.

Sriperumbudur, B. and Lanckriet, G. (2009). On the convergence of the concave-convex procedure. In NIPS.

Wainwright, M., Jaakkola, T., and Willsky, A. (2005). MAP estimation via agreement on (hyper)trees: Message-passing and linear programming approaches. IEEE Transactions on Information Theory, 51.

Wainwright, M. J. and Jordan, M. I. (2008). Graphical models, exponential families, and variational inference. Foundations and Trends in Machine Learning, 1.

Yanover, C., Meltzer, T., and Weiss, Y. (2006). Linear programming relaxations and belief propagation: an empirical study. J. Mach. Learn. Res., 7.

Yuille, A. L. and Rangarajan, A. (2003). The concave-convex procedure. Neural Comput., 15.


More information

Reducing Frame Rate for Object Tracking

Reducing Frame Rate for Object Tracking Reducng Frame Rate for Object Trackng Pavel Korshunov 1 and We Tsang Oo 2 1 Natonal Unversty of Sngapore, Sngapore 11977, pavelkor@comp.nus.edu.sg 2 Natonal Unversty of Sngapore, Sngapore 11977, oowt@comp.nus.edu.sg

More information

CMPS 10 Introduction to Computer Science Lecture Notes

CMPS 10 Introduction to Computer Science Lecture Notes CPS 0 Introducton to Computer Scence Lecture Notes Chapter : Algorthm Desgn How should we present algorthms? Natural languages lke Englsh, Spansh, or French whch are rch n nterpretaton and meanng are not

More information

Lecture 4: Principal components

Lecture 4: Principal components /3/6 Lecture 4: Prncpal components 3..6 Multvarate lnear regresson MLR s optmal for the estmaton data...but poor for handlng collnear data Covarance matrx s not nvertble (large condton number) Robustness

More information

Assignment # 2. Farrukh Jabeen Algorithms 510 Assignment #2 Due Date: June 15, 2009.

Assignment # 2. Farrukh Jabeen Algorithms 510 Assignment #2 Due Date: June 15, 2009. Farrukh Jabeen Algorthms 51 Assgnment #2 Due Date: June 15, 29. Assgnment # 2 Chapter 3 Dscrete Fourer Transforms Implement the FFT for the DFT. Descrbed n sectons 3.1 and 3.2. Delverables: 1. Concse descrpton

More information

Taxonomy of Large Margin Principle Algorithms for Ordinal Regression Problems

Taxonomy of Large Margin Principle Algorithms for Ordinal Regression Problems Taxonomy of Large Margn Prncple Algorthms for Ordnal Regresson Problems Amnon Shashua Computer Scence Department Stanford Unversty Stanford, CA 94305 emal: shashua@cs.stanford.edu Anat Levn School of Computer

More information

Announcements. Supervised Learning

Announcements. Supervised Learning Announcements See Chapter 5 of Duda, Hart, and Stork. Tutoral by Burge lnked to on web page. Supervsed Learnng Classfcaton wth labeled eamples. Images vectors n hgh-d space. Supervsed Learnng Labeled eamples

More information

Backpropagation: In Search of Performance Parameters

Backpropagation: In Search of Performance Parameters Bacpropagaton: In Search of Performance Parameters ANIL KUMAR ENUMULAPALLY, LINGGUO BU, and KHOSROW KAIKHAH, Ph.D. Computer Scence Department Texas State Unversty-San Marcos San Marcos, TX-78666 USA ae049@txstate.edu,

More information

APPLICATION OF MULTIVARIATE LOSS FUNCTION FOR ASSESSMENT OF THE QUALITY OF TECHNOLOGICAL PROCESS MANAGEMENT

APPLICATION OF MULTIVARIATE LOSS FUNCTION FOR ASSESSMENT OF THE QUALITY OF TECHNOLOGICAL PROCESS MANAGEMENT 3. - 5. 5., Brno, Czech Republc, EU APPLICATION OF MULTIVARIATE LOSS FUNCTION FOR ASSESSMENT OF THE QUALITY OF TECHNOLOGICAL PROCESS MANAGEMENT Abstract Josef TOŠENOVSKÝ ) Lenka MONSPORTOVÁ ) Flp TOŠENOVSKÝ

More information

A MOVING MESH APPROACH FOR SIMULATION BUDGET ALLOCATION ON CONTINUOUS DOMAINS

A MOVING MESH APPROACH FOR SIMULATION BUDGET ALLOCATION ON CONTINUOUS DOMAINS Proceedngs of the Wnter Smulaton Conference M E Kuhl, N M Steger, F B Armstrong, and J A Jones, eds A MOVING MESH APPROACH FOR SIMULATION BUDGET ALLOCATION ON CONTINUOUS DOMAINS Mark W Brantley Chun-Hung

More information

XLVII SIMPÓSIO BRASILEIRO DE PESQUISA OPERACIONAL

XLVII SIMPÓSIO BRASILEIRO DE PESQUISA OPERACIONAL LP-BASED HEURISTIC FOR PACKING CIRCULAR-LIKE OBJECTS IN A RECTANGULAR CONTAINER Igor Ltvnchev Computng Center of Russan,Academy of Scences Moscow 119991, Vavlov 40, Russa gorltvnchev@gmal.com Lus Alfonso

More information

Very simple computational domains can be discretized using boundary-fitted structured meshes (also called grids)

Very simple computational domains can be discretized using boundary-fitted structured meshes (also called grids) Structured meshes Very smple computatonal domans can be dscretzed usng boundary-ftted structured meshes (also called grds) The grd lnes of a Cartesan mesh are parallel to one another Structured meshes

More information

SENSITIVITY ANALYSIS IN LINEAR PROGRAMMING USING A CALCULATOR

SENSITIVITY ANALYSIS IN LINEAR PROGRAMMING USING A CALCULATOR SENSITIVITY ANALYSIS IN LINEAR PROGRAMMING USING A CALCULATOR Judth Aronow Rchard Jarvnen Independent Consultant Dept of Math/Stat 559 Frost Wnona State Unversty Beaumont, TX 7776 Wnona, MN 55987 aronowju@hal.lamar.edu

More information

UNIT 2 : INEQUALITIES AND CONVEX SETS

UNIT 2 : INEQUALITIES AND CONVEX SETS UNT 2 : NEQUALTES AND CONVEX SETS ' Structure 2. ntroducton Objectves, nequaltes and ther Graphs Convex Sets and ther Geometry Noton of Convex Sets Extreme Ponts of Convex Set Hyper Planes and Half Spaces

More information

Minimization of the Expected Total Net Loss in a Stationary Multistate Flow Network System

Minimization of the Expected Total Net Loss in a Stationary Multistate Flow Network System Appled Mathematcs, 6, 7, 793-87 Publshed Onlne May 6 n ScRes. http://www.scrp.org/journal/am http://dx.do.org/.436/am.6.787 Mnmzaton of the Expected Total Net Loss n a Statonary Multstate Flow Networ System

More information

Multiobjective fuzzy optimization method

Multiobjective fuzzy optimization method Buletnul Ştnţfc al nverstăţ "Poltehnca" dn Tmşoara Sera ELECTRONICĂ ş TELECOMNICAŢII TRANSACTIONS on ELECTRONICS and COMMNICATIONS Tom 49(63, Fasccola, 24 Multobjectve fuzzy optmzaton method Gabrel Oltean

More information

A New Token Allocation Algorithm for TCP Traffic in Diffserv Network

A New Token Allocation Algorithm for TCP Traffic in Diffserv Network A New Token Allocaton Algorthm for TCP Traffc n Dffserv Network A New Token Allocaton Algorthm for TCP Traffc n Dffserv Network S. Sudha and N. Ammasagounden Natonal Insttute of Technology, Truchrappall,

More information

Wavefront Reconstructor

Wavefront Reconstructor A Dstrbuted Smplex B-Splne Based Wavefront Reconstructor Coen de Vsser and Mchel Verhaegen 14-12-201212 2012 Delft Unversty of Technology Contents Introducton Wavefront reconstructon usng Smplex B-Splnes

More information

Biostatistics 615/815

Biostatistics 615/815 The E-M Algorthm Bostatstcs 615/815 Lecture 17 Last Lecture: The Smplex Method General method for optmzaton Makes few assumptons about functon Crawls towards mnmum Some recommendatons Multple startng ponts

More information

NAG Fortran Library Chapter Introduction. G10 Smoothing in Statistics

NAG Fortran Library Chapter Introduction. G10 Smoothing in Statistics Introducton G10 NAG Fortran Lbrary Chapter Introducton G10 Smoothng n Statstcs Contents 1 Scope of the Chapter... 2 2 Background to the Problems... 2 2.1 Smoothng Methods... 2 2.2 Smoothng Splnes and Regresson

More information

SVM-based Learning for Multiple Model Estimation

SVM-based Learning for Multiple Model Estimation SVM-based Learnng for Multple Model Estmaton Vladmr Cherkassky and Yunqan Ma Department of Electrcal and Computer Engneerng Unversty of Mnnesota Mnneapols, MN 55455 {cherkass,myq}@ece.umn.edu Abstract:

More information

Positive Semi-definite Programming Localization in Wireless Sensor Networks

Positive Semi-definite Programming Localization in Wireless Sensor Networks Postve Sem-defnte Programmng Localzaton n Wreless Sensor etworks Shengdong Xe 1,, Jn Wang, Aqun Hu 1, Yunl Gu, Jang Xu, 1 School of Informaton Scence and Engneerng, Southeast Unversty, 10096, anjng Computer

More information

Problem Set 3 Solutions

Problem Set 3 Solutions Introducton to Algorthms October 4, 2002 Massachusetts Insttute of Technology 6046J/18410J Professors Erk Demane and Shaf Goldwasser Handout 14 Problem Set 3 Solutons (Exercses were not to be turned n,

More information

Intelligent Information Acquisition for Improved Clustering

Intelligent Information Acquisition for Improved Clustering Intellgent Informaton Acquston for Improved Clusterng Duy Vu Unversty of Texas at Austn duyvu@cs.utexas.edu Mkhal Blenko Mcrosoft Research mblenko@mcrosoft.com Prem Melvlle IBM T.J. Watson Research Center

More information

Parallel Numerics. 1 Preconditioning & Iterative Solvers (From 2016)

Parallel Numerics. 1 Preconditioning & Iterative Solvers (From 2016) Technsche Unverstät München WSe 6/7 Insttut für Informatk Prof. Dr. Thomas Huckle Dpl.-Math. Benjamn Uekermann Parallel Numercs Exercse : Prevous Exam Questons Precondtonng & Iteratve Solvers (From 6)

More information

An efficient iterative source routing algorithm

An efficient iterative source routing algorithm An effcent teratve source routng algorthm Gang Cheng Ye Tan Nrwan Ansar Advanced Networng Lab Department of Electrcal Computer Engneerng New Jersey Insttute of Technology Newar NJ 7 {gc yt Ansar}@ntedu

More information

Proper Choice of Data Used for the Estimation of Datum Transformation Parameters

Proper Choice of Data Used for the Estimation of Datum Transformation Parameters Proper Choce of Data Used for the Estmaton of Datum Transformaton Parameters Hakan S. KUTOGLU, Turkey Key words: Coordnate systems; transformaton; estmaton, relablty. SUMMARY Advances n technologes and

More information