An Approximate Dual Subgradient Algorithm for Multi-Agent Non-Convex Optimization


IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL. 58, NO. 6, JUNE 2013

An Approximate Dual Subgradient Algorithm for Multi-Agent Non-Convex Optimization

Minghui Zhu and Sonia Martínez

Abstract—We consider a multi-agent optimization problem where agents subject to local, intermittent interactions aim to minimize a sum of local objective functions subject to a global inequality constraint and a global state constraint set. In contrast to previous work, we do not require that the objective functions, the constraint functions, and the state constraint sets are convex. In order to deal with time-varying network topologies satisfying a standard connectivity assumption, we resort to consensus algorithm techniques and the Lagrangian duality method. We slightly relax the requirement of exact consensus and propose a distributed approximate dual subgradient algorithm that enables agents to asymptotically converge to a pair of primal-dual solutions of an approximate problem. To guarantee convergence, we assume that Slater's condition is satisfied and that the optimal solution set of the dual limit is a singleton. We implement our algorithm over a source localization problem and compare its performance with existing algorithms.

Index Terms—Dual subgradient algorithm, Lagrangian duality.

I. INTRODUCTION

Recent advances in computation, communication, sensing, and actuation have stimulated intensive research on networked multi-agent systems. In the systems and controls
community, this has translated into the question of how to solve global control problems, expressed by global objective functions, by means of local agent actions. Problems considered include multi-agent consensus or agreement [11], [17], coverage control [4], formation control [7], [21], and sensor fusion [24]. The seminal work [2] provides a framework for optimizing a global objective function among different processors, where each processor knows the global objective function. In multi-agent environments, a problem of focus is to minimize a sum of local objective functions by a group of agents, where each function depends on a common global decision vector and is only known to a specific agent. This problem is motivated by problems in distributed estimation [16], [23], distributed source localization [20], and network utility maximization [12]. More recently, consensus techniques have been proposed to address the issues of switching topologies, asynchronous computation, and coupling in objective functions; see, for instance, [14], [15], [27].

Manuscript received October 14, 2010; revised January 25, 2012; accepted October 08, 2012. Date of publication November 16, 2012; date of current version May 20, 2013. This work was supported by the NSF CAREER Award CMMI. Recommended by Associate Editor E. K. P. Chong. M. Zhu is with the Laboratory for Information and Decision Systems, Massachusetts Institute of Technology, Cambridge, MA, USA (e-mail: mhzhu@mit.edu). S. Martínez is with the Department of Mechanical and Aerospace Engineering, University of California, San Diego, La Jolla, CA, USA (e-mail: soniamd@ucsd.edu).

More specifically, the paper [14] presents the first

analysis of an algorithm that combines average consensus schemes with subgradient methods. Using projection in the algorithm of [14], the authors in [15] further address a more general scenario that takes local state constraint sets into account. Further, in [27] we develop two distributed primal-dual subgradient algorithms, based on saddle-point theorems, to analyze a more general situation that incorporates global inequality and equality constraints. The aforementioned algorithms are extensions of classic (primal or primal-dual) subgradient methods, which generalize gradient-based methods to minimize non-smooth functions. This requires the optimization problems of interest to be convex in order to determine a global optimum. The focus of the current technical note is to relax the convexity assumption in [27]. In order to deal with all aspects of our multi-agent setting, our method integrates Lagrangian dualization, subgradient schemes, and average consensus algorithms. Distributed function computation by a group of anonymous agents interacting intermittently can be done via agreement algorithms [4]. However, agreement algorithms are essentially convex operations, and so we are led to the investigation of nonconvex optimization via dualization. The techniques of dualization and subgradient schemes have been popular and efficient approaches to solve both convex programs (e.g., [3]) and nonconvex programs (e.g., [5], [6]).

Statement of Contributions: Here, we investigate a multi-agent optimization problem where agents desire to agree upon a global decision vector minimizing the sum of local objective functions in the presence of a global inequality constraint and a global state constraint set. Agent interactions change with time. The objective functions, the constraint functions, and the state constraint set can all be nonconvex. To deal with both nonconvexity and time-varying interactions, we first define an
approximate problem where the requirement of exact consensus is slightly relaxed. We then propose a distributed dual subgradient algorithm to solve it, in which the update rule for local dual estimates combines a dual subgradient scheme with average consensus algorithms, and local primal estimates are generated from local dual optimal solution sets. This algorithm is shown to asymptotically converge to a pair of primal-dual solutions of the approximate problem under the following assumptions: first, Slater's condition is satisfied; second, the optimal solution set of the dual limit is a singleton; third, the dynamically changing network topologies satisfy a standard connectivity condition. A conference version of this manuscript was published in [26], and an enlarged archived version of this paper is [25]. The main differences are the following: i) by assuming that the optimal solution set of the dual limit is a singleton, and by changing the update rule of the dual estimates, we are able to determine a global solution, in contrast to the approximate solution in [26]; ii) we present a simple criterion to check the new sufficient condition for nonconvex quadratic programs; iii) new simulations of our algorithm on a source localization example, and a comparison of its performance with existing algorithms, are performed. Due to space limitations, details of the technical proofs and simulations can be found in [25].

II. PROBLEM FORMULATION AND PRELIMINARIES

Consider a networked multi-agent system where the agents are labeled by i ∈ V := {1, ..., N}. The multi-agent system operates synchronously at time instants k ∈ Z≥0, and its topology is represented by a directed weighted graph G(k) = (V, E(k), A(k)). Here, A(k) = [a_ij(k)] is the adjacency matrix, where the scalar a_ij(k) is the weight assigned to the edge pointing from agent j to agent i, and E(k) is the set of edges with non-zero weights. The set of in-neighbors of agent i at time k is denoted by N_in(i, k); similarly, we define the set of out-neighbors of agent i at time k as N_out(i, k). We here make the following assumptions on communication graphs:
Assumption 2.1 (Non-Degeneracy): There exists a constant α > 0 such that a_ii(k) ≥ α, and a_ij(k), for j ∈ N_in(i, k), satisfies a_ij(k) ≥ α, for all k ≥ 0.

Assumption 2.2 (Balanced Communication): It holds that Σ_j a_ij(k) = 1 for all i and k, and Σ_i a_ij(k) = 1 for all j and k.

Assumption 2.3 (Periodical Strong Connectivity): There is a positive integer B such that, for all k₀ ≥ 0, the directed graph (V, ∪_{k=k₀}^{k₀+B−1} E(k)) is strongly connected.

The above network model is standard in the characterization of networked multi-agent systems and has been widely used in the analysis of average consensus algorithms (e.g., [17]) and of distributed optimization (e.g., [15], [27]). Recently, an algorithm was given in [9] which allows agents to construct a balanced graph out of a non-balanced one under certain assumptions.

The objective of the agents is to cooperatively solve the following primal problem (P): minimize Σ_{i=1}^N f_i(x) over x ∈ X subject to g(x) ≤ 0, where x ∈ R^n is the global decision vector. The function f_i : R^n → R is only known to agent i, is continuous, and is referred to as the objective function of agent i. The set X ⊂ R^n, the state constraint set, is compact. The function g : R^n → R^m is continuous, and the inequality g(x) ≤ 0 is understood component-wise, i.e., g_ℓ(x) ≤ 0 for all ℓ ∈ {1, ..., m}; it represents a global inequality constraint. We denote f(x) := Σ_{i=1}^N f_i(x) and Y := {x ∈ X : g(x) ≤ 0}. We will assume that the set of feasible points is non-empty, i.e., Y ≠ ∅. Since X is compact and Y is closed, we deduce that Y is compact. The continuity of f follows from that of the f_i. In this way, the optimal value p* of problem (P) is finite, and X*, the set of primal optimal solutions, is non-empty. We will also assume that the following Slater's condition holds:

Assumption 2.4 (Slater's Condition): There exists a vector x̄ ∈ X such that g(x̄) < 0. Such an x̄ is referred to as a Slater vector of problem (P).

Remark 2.1: All the agents can agree upon a common Slater vector x̄ through a maximum-consensus scheme. This can be easily implemented as part of an initialization step, and thus the assumption that the Slater vector is known to all agents does not limit the applicability of our algorithm; see [25] for an algorithm solving this problem.

In [27], in order to solve the convex case of problem (P) (i.e., f_i and g are convex functions and X is a convex set), we propose two distributed
primal-dual subgradient algorithms where primal (resp. dual) estimates move along subgradients (resp. supergradients) and are projected onto convex sets. The absence of convexity impedes the use of the algorithms in [27] since, on the one hand, (primal) gradient-based algorithms are easily trapped in local minima and, on the other hand, projection maps may not be well-defined when (primal) state constraint sets are nonconvex. In the sequel, we will employ Lagrangian dualization, subgradient methods, and average consensus schemes to design a distributed algorithm which can find an approximate solution to problem (P). Towards this end, we construct a directed cyclic graph G_c over V. We assume that each agent has a unique in-neighbor (and out-neighbor) in G_c; the out-neighbor (resp. in-neighbor) of agent i is denoted by i + 1 (resp. i − 1), with indices understood modulo N. With the graph G_c, we will study the following approximate problem of problem (P): (1), (2)
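The displayed formulations (1)-(2) did not survive the text extraction. As a hedged sketch only, the following is a plausible reconstruction based on the surrounding description (local copies x^i, the cyclic graph, and the tolerance δ); the symbols f_i, g, and X are as defined above, but the precise statement should be checked against [25]:

```latex
% Primal problem (P):
\min_{x \in X} \ \sum_{i=1}^{N} f_i(x)
\quad \text{s.t.} \quad g(x) \le 0. \tag{1}

% Approximate problem: each agent i holds a local copy x^i of x, and
% exact consensus is relaxed to delta-consensus over the cyclic graph
% (indices modulo N, \mathbf{1} the column vector of ones):
\min_{x^1,\dots,x^N \in X} \ \sum_{i=1}^{N} f_i(x^i)
\quad \text{s.t.} \quad g(x^i) \le 0, \quad
x^i - x^{i+1} \le \delta \mathbf{1}, \quad
x^{i+1} - x^i \le \delta \mathbf{1}, \quad i \in \{1,\dots,N\}. \tag{2}
```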

where δ is a small positive scalar and 1 is the column vector of ones. Problem (2) provides an approximation to (P) and will be referred to as the approximate problem. In particular, the approximate problem (2) reduces to problem (P) when δ = 0. Its optimal value and set of optimal solutions will be denoted by p*_δ and X*_δ, respectively. Similarly to problem (P), p*_δ is finite and X*_δ is non-empty.

Remark 2.2: The cyclic graph G_c can be replaced by any strongly connected graph. Given G_c, each agent i is endowed with two inequality constraints, x^i − x^{i+1} ≤ δ1 and x^{i+1} − x^i ≤ δ1, for its out-neighbor i + 1. This set of inequalities implies that any feasible solution of the approximate problem satisfies approximate consensus. For simplicity, we use the cyclic graph, which has a minimum number of constraints, as the initial graph.

A. Dual Problems

Before introducing the dual problems, we collect the local decision vectors and the local multipliers into aggregate vectors. The dual problem associated with the approximate problem consists of maximizing the dual function over non-negative multipliers, where the dual function is obtained by minimizing the Lagrangian function (3) over the primal variables. Note that the Lagrangian of agent i, as it stands, is not separable, since it depends on the neighbors' multipliers.

B. Dual Solution Sets

Slater's condition ensures the boundedness of dual solution sets in convex optimization; e.g., [10], [13]. We will shortly see that Slater's condition plays the same role in nonconvex optimization. To achieve this, we define an auxiliary function as follows. Let x̄ be a Slater vector for problem (P); then the vector whose blocks are all copies of x̄ is a Slater vector of the approximate problem. Similarly to (3) and (4) in [27], which employ Lemma 3.2 of [27], a bound of the form (5) holds for any multiplier. Setting the corresponding terms to zero in (5) leads to an upper bound (6) on the dual solution set which can be computed locally; we denote this bound as in (7).

We denote the dual optimal value of the dual problem by d*_δ and the set of dual optimal solutions by D*_δ. We endow each agent with a local Lagrangian function and a local dual function. In the approximate problem, the introduction of the local copies x^i renders the Lagrangian separable. As a result, the global dual function can be decomposed into a simple sum of the local dual functions.
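This dual decomposition is what makes a subgradient scheme viable despite non-convexity: each inner Lagrangian minimization is a global optimization over the compact set X, which is tractable when X is low-dimensional. As a self-contained toy illustration of the idea (a classical single-agent dual subgradient loop, not the DADS algorithm itself; the objective, constraint, and grid below are invented for the example):

```python
import numpy as np

# Dual subgradient ascent where the inner Lagrangian minimization is
# solved *globally* by exhaustive grid search over the compact set X.
# Global inner solutions are what make g(x_k) a valid supergradient of
# the dual function even though f is non-convex.
X = np.linspace(-2.0, 2.0, 4001)        # compact state constraint set
f = lambda x: (x**2 - 1.0)**2           # non-convex objective (two wells)
g = lambda x: 1.2 - x                   # constraint g(x) <= 0, i.e. x >= 1.2

mu = 0.0                                # dual multiplier, kept >= 0
for k in range(5000):
    x_k = X[np.argmin(f(X) + mu * g(X))]             # global Lagrangian minimizer
    mu = max(0.0, mu + (1.0 / (k + 1)) * g(x_k))     # projected supergradient step

# x_k approaches the constrained optimum x* = 1.2, and mu approaches the
# multiplier mu* = 4 * 1.2 * (1.2**2 - 1) = 2.112 given by stationarity
# of the Lagrangian at x*.
print(x_k, mu)
```

Replacing the grid search with a local gradient step would break the scheme: the supergradient property of g(x_k) holds only at global minimizers of the Lagrangian, which is exactly why the algorithm of this note assumes the local problem (8) can be solved globally.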
More precisely, the decomposition (4) holds. Since the f_i and g are continuous and X is compact, each local dual function is continuous; see Theorem 14.16 in [1]. Similarly, the global dual function is continuous. Since the dual solution set is bounded, it is compact.

Remark 2.3: The requirement of exact agreement on x in problem (P) is slightly relaxed in the approximate problem by introducing a small δ > 0. In this way, the global dual function is a sum of the local dual functions, as in (4), and D*_δ is non-empty and uniformly bounded. These two properties play important roles in the design of our subsequent algorithm.

C. Other Notation

Define the set-valued marginal map of agent i as the solution set of the following local optimization problem (8), given the multipliers of agent i and its neighbors. Notice that, in the sum of the local Lagrangians, each consensus multiplier appears in two terms: one associated with agent i and the other with its neighbor on the cyclic graph. With this observation, we regroup the terms in the summation and obtain (4). Since X is compact and f_i and g are continuous, the minimum in (8) is attained for any choice of multipliers. In the algorithm we develop in the next section, each agent is required to obtain one (globally) optimal solution of the local optimization problem (8), together with its optimal value, at each iterate. We assume that this can be easily done; this is the case when f_i and g are smooth (the extremum candidates are the critical points of the objective function and isolated corners of the boundaries of the constraint regions) or have some specific structure which allows the use of global optimization methods such as branch-and-bound algorithms. In the space R^n, we define the distance between a point x and a set S as dist(x, S) := inf_{y ∈ S} ‖x − y‖, and the Hausdorff distance between two sets S₁ and S₂ as dist_H(S₁, S₂) := max{ sup_{x ∈ S₁} dist(x, S₂), sup_{x ∈ S₂} dist(x, S₁) }.

III. DADS ALGORITHM

In this section, we devise a distributed approximate dual subgradient algorithm which aims to find a pair of primal-dual solutions to the approximate problem. For each agent i, let x^i(k) be the estimate of the primal solution to the approximate problem at time k, let μ^i(k) be the estimate of the multiplier on the global inequality constraint, and let the remaining local dual variables be the estimates of the multipliers associated with the collection of local (consensus) inequality constraints toward the out-neighbor.¹ We let v^i(k) denote the collection of dual estimates of agent i, and we denote by w^i(k) the convex combination of the dual estimates of agent i and its neighbors at time k. At time k, we associate with each agent a supergradient vector of its local dual function evaluated at w^i(k); its components are the constraint values at the local primal minimizer. For each agent, we define a bounded set onto which the dual estimates are projected, built from the bound obtained in the initialization; it is easy to check that this set is closed and convex, and thus the projection map is well-defined. The Distributed Approximate Dual Subgradient (DADS) Algorithm is described in Table I.

Algorithm 1: The DADS Algorithm

Initialization: Initially, all the agents agree upon some δ > 0 for the approximate problem. Each agent chooses a common Slater vector x̄, computes its local quantity in (7), and obtains the global bound through a max-consensus algorithm. After that, each agent chooses its initial primal and dual states.

Iteration: At every k, each agent i executes the following steps:
1) Given the combined dual estimate w^i(k), solve the local optimization problem (8) to obtain a solution x^i(k) and the local dual optimal value.
2) Generate the updated dual estimate according to the projected rule (9), where the scalar α(k) ≥ 0 is a step size.
3) Repeat for k + 1.

Remark 3.1: The DADS algorithm is an extension of the classical dual algorithm (e.g., in [19] and [3]) to the multi-agent setting and the nonconvex case. In the initialization of the DADS algorithm, the value obtained by max-consensus serves as an upper bound on the dual solution set. In Step 1, one solution in the marginal map is needed; it is unnecessary to compute the whole set. To assure
primal convergence, we assume that the dual estimates converge to the set of dual points at which each local problem (8) has a single optimal solution.¹

¹ We will use the superscript i to indicate that x^i and μ^i are estimates of some global variables.

Definition 3.1 (Singleton Optimal Dual Solution Set): This is the set of dual points such that, for each agent i, the solution set of the local problem (8) is a singleton.

The primal and dual estimates in the DADS algorithm converge to a pair of primal-dual solutions of the approximate problem. We formally state this in the following theorem.

Theorem 3.1 (Convergence Properties of the DADS Algorithm): Consider problem (P) and the corresponding approximate problem with some δ > 0. Let the non-degeneracy Assumption 2.1, the balanced-communication Assumption 2.2, and the periodic strong connectivity Assumption 2.3 hold. In addition, suppose Slater's condition (Assumption 2.4) holds for problem (P). Consider the dual sequences and the primal sequence of the distributed approximate dual subgradient algorithm, with step sizes α(k) satisfying lim_{k→+∞} α(k) = 0, Σ_{k≥0} α(k) = +∞, and Σ_{k≥0} α(k)² < +∞.
1) (Dual estimate convergence) There exists a dual solution of the approximate problem such that the dual estimates of all agents converge to it.
2) (Primal estimate convergence) If the dual limit belongs to the singleton optimal dual solution set, i.e., the solution set of (8) at the limit is a singleton for all agents, then there exists a primal point x* such that x^i(k) converges to x* for all i.

IV. DISCUSSION

Before outlining the technical proofs of Theorem 3.1, we would like to make the following observations. First, our methodology is motivated by the need to solve a nonconvex problem in a distributed way by a group of agents whose interactions change with time. This places a number of restrictions on the solutions one can find. Time-varying interactions of anonymous agents can currently be handled via agreement algorithms; however, these are inherently convex operations, which do not work well in nonconvex settings. To overcome this, one can resort to dualization. Admittedly, zero duality gap does not hold in general for nonconvex problems. A possibility would be to resort to nonlinear augmented Lagrangians, for which strong duality holds in a broad class of
programs [5], [6], [22]. However, we find here another problem, as a distributed solution using agreement requires separability, like the one ensured by the linear Lagrangians we use here. Thus, we have looked for alternative assumptions that are easier to check and allow the dualization approach to work. More precisely, Theorem 3.1 shows that dual estimates always converge to a dual optimal solution. The convergence of primal estimates requires the additional assumption that the dual limit has a single optimal solution. We refer to this assumption as the singleton dual optimal solution set (SD, for short) assumption. The assumption may not be easy to check a priori; however, it is of a similar nature as the assumptions of existing algorithms for nonconvex optimization. In [5] and [6], subgradient methods are defined in terms of (nonlinear) augmented Lagrangians, and it is shown that every accumulation point of the primal sequence is a primal solution, provided that the dual function is differentiable at the dual limit. An open question is how to resolve the above issues imposed by the multi-agent setting with less stringent conditions on the nature of the nonconvex optimization problem. In the following, we study a class of nonconvex quadratic programs for which a sufficient condition guarantees that the SD assumption holds. Nonconvex quadratic programs hold great importance from both

theoretic and practical aspects. In general, nonconvex quadratic programs are NP-hard; refer to [18] for a detailed discussion. The sufficient condition below only requires checking the positive definiteness of a matrix. Consider the following nonconvex quadratic program (10), where the matrices defining the quadratic terms are real and symmetric. The approximate problem of (10) is given by (11). We introduce the dual multipliers as before. The local Lagrangian function can then be written with the term independent of the local variable dropped; the remaining expression is linear in the multipliers. The dual function and the dual problem are defined as before. Consider any dual optimal solution. If, for all agents, (P1) the matrix of the local Lagrangian's quadratic form is positive definite, and (P2) holds, then the SD assumption holds. The properties (P1) and (P2) are easy to verify in a distributed way once a dual solution is obtained. We would like to remark that (P1) is used in [8] to determine the unique global optimal solution via canonical duality when the inequality constraint is absent.

V. CONVERGENCE ANALYSIS

This section outlines the analysis steps to prove Theorem 3.1; please refer to [25] for more details. Recall that each f_i is continuous and X is compact; hence f_i and g are bounded on X. We start our analysis with the computation of supergradients of the local dual functions.

Lemma 5.1 (Supergradient Computation): If x^i globally minimizes the local Lagrangian at a given multiplier, then the associated constraint-value vector is a supergradient of the local dual function at that multiplier; that is, the supergradient inequality (12) holds for any multiplier. A direct consequence of Lemma 5.1 is that the stacked vector of constraint values is a supergradient of the global dual function; i.e., the supergradient inequality (13) holds for any multiplier.

It can be seen that the dual update (9) in the DADS algorithm is a combination of a dual subgradient scheme and average consensus algorithms. The following establishes that the dual function is Lipschitz continuous.

Lemma 5.2 (Lipschitz Continuity): There is a constant L > 0 such that, for any two multipliers, the difference of the corresponding dual values is bounded by L times the distance between them. In the DADS algorithm, the error induced by the projection map can be bounded accordingly.

A basic iterate relation of the dual estimates in the DADS algorithm is the following.

Lemma 5.3 (Basic Iterate Relation): Under the assumptions in Theorem 3.1, for any feasible pair of multipliers, the iterate inequality (14) holds for all k.

Asymptotic convergence of the dual estimates is shown next.

Lemma 5.4 (Dual Estimate Convergence): Under the assumptions in Theorem 3.1, there exists a dual optimal solution of the approximate problem such that the dual estimates of all agents converge to it.

The remainder of the section is dedicated to characterizing the convergence properties of the primal estimates.

Lemma 5.5 (Properties of Marginal Maps): The set-valued marginal map is closed. In addition, it is upper semicontinuous at any multiplier where it is a singleton; i.e., for any ε > 0, there is a neighborhood of the multiplier whose image under the marginal map lies within distance ε of the singleton.

Lemma 5.6 (Primal Estimate Convergence): Under the assumptions in Theorem 3.1, the primal estimate of each agent converges to the solution of (8) associated with the dual limit.

The main result of this technical note, Theorem 3.1, can be shown next. In particular, we will show complementary slackness, primal feasibility, and primal optimality, respectively.

Proof of Theorem 3.1: Claim 1 (complementary slackness): Proof: Rearranging the terms related to the multipliers in (14) leads to the following inequality, which holds for any feasible pair of multipliers: (15)
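The step-size conditions required by Theorem 3.1 (vanishing, non-summable, square-summable) admit the standard choice α(k) = 1/(k+1); a quick numerical check, for illustration only:

```python
import numpy as np

# Step sizes alpha_k = 1/k: the partial sums of alpha_k grow without
# bound (non-summable), while the partial sums of alpha_k**2 converge
# to pi**2/6 (square-summable) -- the two conditions in Theorem 3.1.
k = np.arange(1, 100_001, dtype=float)
alpha = 1.0 / k
s1 = alpha.sum()         # ~ ln(1e5) + Euler-Mascheroni constant ~ 12.09, unbounded in the limit
s2 = (alpha**2).sum()    # ~ pi**2 / 6 ~ 1.6449
print(s1, s2)
```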

Summing (15) over [0, k] and dividing by the accumulated step size yields (16). We now proceed to show complementary slackness for each agent. Substituting suitable multipliers into (16), recalling that the step-size sequence is not summable but square summable, and that the supergradients are uniformly bounded, it follows from Lemma 5.1 in [27] that (17) holds. On the other hand, the dual limit is bounded above by the bound computed in the initialization. One can then choose a sufficiently small perturbation of the multipliers in (16) and, following the same lines as in the derivation of (17), obtain the reverse inequality. Hence, complementary slackness holds. The rest of the proof is analogous and thus omitted. The proofs of the following two claims can be found in [25]. Claim 2: The primal limit is a feasible solution to the approximate problem. Claim 3: The primal limit is a primal solution to the approximate problem.

VI. CONCLUSION

We have studied a distributed dual algorithm for a class of multi-agent nonconvex optimization problems. The convergence of the algorithm has been proven under the assumptions that: i) Slater's condition holds; ii) the optimal solution set of the dual limit is a singleton; iii) the network topologies are strongly connected over any given bounded period. An open question is how to address the shortcomings imposed by nonconvexity and the multi-agent interaction setting.

REFERENCES

[1] J. P. Aubin and H. Frankowska, Set-Valued Analysis. Boston, MA: Birkhäuser, 1990.
[2] D. P. Bertsekas and J. N. Tsitsiklis, Parallel and Distributed Computation: Numerical Methods. Boston, MA: Athena Scientific, 1997.
[3] D. P. Bertsekas, Convex Optimization Theory. Boston, MA: Athena Scientific, 2009.
[4] F. Bullo, J. Cortés, and S. Martínez, Distributed Control of Robotic Networks, ser. Applied Mathematics Series. Princeton, NJ: Princeton Univ. Press, 2009.
[5] R. S. Burachik, "On primal convergence for augmented Lagrangian duality," Optimization, vol. 60, no. 8, 2011.
[6] R. S. Burachik and C. Y. Kaya, "An update rule and a convergence result for a penalty function method," J. Ind. Manag. Optim., vol. 3, no. 2, 2007.
[7]
J. A. Fax and R. M. Murray, "Information flow and cooperative control of vehicle formations," IEEE Trans. Autom. Control, vol. 49, no. 9, Sep. 2004.
[8] D. Y. Gao, N. Ruan, and H. Sherali, "Solutions and optimality criteria for nonconvex constrained global optimization problems with connections between canonical and Lagrangian duality," J. Global Optim., vol. 45, no. 3, 2009.
[9] B. Gharesifard and J. Cortés, "Distributed strategies for generating weight-balanced and doubly stochastic digraphs," Eur. J. Control, to be published.
[10] J.-B. Hiriart-Urruty and C. Lemaréchal, Convex Analysis and Minimization Algorithms: Part 1: Fundamentals. New York: Springer, 1996.
[11] A. Jadbabaie, J. Lin, and A. S. Morse, "Coordination of groups of mobile autonomous agents using nearest neighbor rules," IEEE Trans. Autom. Control, vol. 48, no. 6, Jun. 2003.
[12] F. P. Kelly, A. Maulloo, and D. Tan, "Rate control in communication networks: Shadow prices, proportional fairness and stability," J. Oper. Res. Soc., vol. 49, no. 3, 1998.
[13] A. Nedic and A. Ozdaglar, "Approximate primal solutions and rate analysis for dual subgradient methods," SIAM J. Optim., vol. 19, no. 4, 2009.
[14] A. Nedic and A. Ozdaglar, "Distributed subgradient methods for multi-agent optimization," IEEE Trans. Autom. Control, vol. 54, no. 1, pp. 48-61, 2009.
[15] A. Nedic, A. Ozdaglar, and P. A. Parrilo, "Constrained consensus and optimization in multi-agent networks," IEEE Trans. Autom. Control, vol. 55, no. 4, Apr. 2010.
[16] R. D. Nowak, "Distributed EM algorithms for density estimation and clustering in sensor networks," IEEE Trans. Signal Processing, vol. 51, 2003.
[17] R. Olfati-Saber and R. M. Murray, "Consensus problems in networks of agents with switching topology and time-delays," IEEE Trans. Autom. Control, vol. 49, no. 9, Sep. 2004.
[18] P. M. Pardalos and S. A. Vavasis, "Quadratic programming with one negative eigenvalue is NP-hard," J. Global Optim., vol. 1, no. 1, pp. 15-22, 1991.
[19] B. T. Polyak, "A general method for solving extremum problems," Soviet Math. Doklady, vol. 3, no. 8, Aug.
1967.
[20] M. G. Rabbat and R. D. Nowak, "Decentralized source localization and tracking," in Proc. IEEE Int. Conf. Acoust., Speech, Signal Processing, May 2004.
[21] W. Ren and R. W. Beard, Distributed Consensus in Multi-vehicle Cooperative Control, ser. Communications and Control Engineering. New York: Springer, 2008.
[22] R. T. Rockafellar and R. J.-B. Wets, Variational Analysis. New York: Springer, 1998.
[23] S. Sundhar Ram, A. Nedic, and V. V. Veeravalli, "Distributed and recursive parameter estimation in parametrized linear state-space models," IEEE Trans. Autom. Control, vol. 55, no. 2, Feb. 2010.
[24] L. Xiao, S. Boyd, and S. Lall, "A scheme for robust distributed sensor fusion based on average consensus," in Proc. Symp. Information Processing in Sensor Networks, Los Angeles, CA, Apr. 2005.
[25] M. Zhu and S. Martínez, "An approximate dual subgradient algorithm for multi-agent non-convex optimization," IEEE Trans. Autom. Control, 2013.
[26] M. Zhu and S. Martínez, "An approximate dual subgradient algorithm for multi-agent non-convex optimization," in Proc. IEEE Int. Conf. Decision and Control, Atlanta, GA, Dec. 2010.
[27] M. Zhu and S. Martínez, "On distributed convex optimization under inequality and equality constraints," IEEE Trans. Autom. Control, vol. 57, 2012.

On Distributed Convex Optimization Under Inequality and Equality Constraints
Minghui Zhu, Member, IEEE, and Sonia Martínez, Senior Member, IEEE
IEEE Transactions on Automatic Control, vol. 57, no. 1, January 2012, p. 151

Lecture 4: Duality and Decomposition Techniques
Jie Lu (jielu@kth.se), Richard Combes, Alexandre Proutiere. Automatic Control, KTH, September 19, 2013.

When does a digraph admit a doubly stochastic adjacency matrix?
Bahman Gharesifard and Jorge Cortés.

On distributed optimization under inequality constraints via Lagrangian primal-dual methods
2010 American Control Conference, Marriott Waterfront, Baltimore, MD, USA, June 30 to July 2, 2010, paper FrA05.4.

Distributed non-convex optimization
Behrouz Touri, Assistant Professor, Department of Electrical and Computer Engineering, University of California San Diego / University of Colorado Boulder.

Surrogate Gradient Algorithm for Lagrangian Relaxation
X. Zhao, P. B. Luh, and J. Wang. Communicated by W. B. Gong and D. D. Yao. Dedicated to Professor Yu-Chi Ho for his 65th birthday.

Distributed Alternating Direction Method of Multipliers
Ermin Wei and Asuman Ozdaglar. We consider a network of agents that are cooperatively solving a global unconstrained optimization problem.
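The distributed ADMM entry above can be illustrated with a minimal consensus sketch. This toy example is my own construction, not code from the listed paper: each agent holds a scalar quadratic cost f_i(x) = (x - a_i)^2 / 2, so both the local and the global updates are closed-form, and the consensus variable z converges to the minimizer of the sum, mean(a).

```python
# Sketch of global-consensus ADMM (a hypothetical toy instance, not the
# paper's algorithm). Agent i minimizes f_i(x) = (x - a_i)^2 / 2; a shared
# variable z enforces agreement among the local copies x_i.

def consensus_admm(a, rho=1.0, iters=200):
    n = len(a)
    x = [0.0] * n          # local primal variables
    u = [0.0] * n          # scaled dual variables
    z = 0.0                # global consensus variable
    for _ in range(iters):
        # local step: x_i = argmin_x f_i(x) + (rho/2)(x - z + u_i)^2
        x = [(a[i] + rho * (z - u[i])) / (1.0 + rho) for i in range(n)]
        # global step: average the shifted local variables
        z = sum(x[i] + u[i] for i in range(n)) / n
        # dual (residual) update
        u = [u[i] + x[i] - z for i in range(n)]
    return z

print(consensus_admm([1.0, 2.0, 6.0]))  # approaches mean(a) = 3.0
```

For these quadratic costs the iteration contracts geometrically, so a few hundred iterations are far more than enough.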

Distributed Optimization of Continuous-time Multi-agent Networks
Yiguang Hong, Academy of Mathematics & Systems Science, Chinese Academy of Sciences. Talk at the University of Maryland, December 2016.

IE521 Convex Optimization: Introduction
Instructor: Niao He. January 18, 2017. Assistant Professor, UIUC; Ph.D. in Operations Research and M.S. in Computational Sci. & Eng., Georgia Tech.

Principles of Wireless Sensor Networks: Fast-Lipschitz Optimization
Lecture 5, Royal Institute of Technology (KTH), Stockholm, Sweden, October 14, 2011. carlofi@kth.se; http://www.ee.kth.se/~carlofi/teaching/pwsn-2011/wsn_course.shtml

Contents (I Basics)
Preface; I Basics: 1 Optimization Models; 1.1 Introduction; 1.2 Optimization: An Informal Introduction; 1.3 Linear Equations; 1.4 Linear Optimization; Exercises.

Section Notes 5: Review of Linear Programming
Applied Math / Engineering Sciences 121, week of October 15, 2017. An overview of the material covered in the lectures.

Mathematical and Algorithmic Foundations: Linear Programming and Matchings
Advanced Algorithms Lectures. Paul G. Spirakis, Department of Computer Science, University of Patras and Liverpool.

Convex Analysis and Minimization Algorithms I: Fundamentals
Jean-Baptiste Hiriart-Urruty and Claude Lemarechal. With 113 figures. Springer-Verlag: Berlin, Heidelberg, New York, London, Paris, Tokyo, Hong Kong, Barcelona.

Linear and Nonlinear Programming, Fourth Edition
David G. Luenberger and Yinyu Ye. Springer. Contents: 1 Introduction; 1.1 Optimization; 1.2 Types of Problems; 1.3 Size of Problems; 1.4 Iterative Algorithms.

Speed-up of Parallel Processing of Divisible Loads on k-dimensional Meshes and Tori
Keqin Li, Department of Computer Science. The Computer Journal, 46(6), British Computer Society, 2003.

Nonlinear Programming, Second Edition
Dimitri P. Bertsekas, Massachusetts Institute of Technology. Athena Scientific, Belmont. Book information and orders: http://world.std.com/~athenasc/index.html

On Distributed Submodular Maximization with Limited Information
Bahman Gharesifard and Stephen L. Smith. This paper considers a class of distributed submodular maximization problems.

Fast-Lipschitz Optimization
Carlo Fischione, ACCESS Linnaeus Center, Electrical Engineering, KTH Royal Institute of Technology. DREAM Seminar Series, University of California at Berkeley, September 11, 2012.

CS G399: Algorithmic Power Tools I
College of Computer & Information Science, Northeastern University, Fall 2007 (14 September 2007). Scribe: Eric Robinson. Lecture outline: linear programming, vertex definitions.

Introduction to Mathematical Programming IE496: Final Review
Dr. Ted Ralphs. Course wrap-up: Chapter 2. In the introduction, we discussed the general framework of mathematical modeling.

Random Walk Distributed Dual Averaging Method for Decentralized Consensus Optimization
Cun Mu, Asim Kadav, Erik Kruus, Donald Goldfarb, Martin Renqiang Min. Machine Learning Group, NEC Laboratories America.

ME 555: Distributed Optimization
Duke University, Spring 2015. Instructor: Soomin Lee.

Projection onto the probability simplex: An efficient algorithm with a simple proof, and an application
Weiran Wang and Miguel Á. Carreira-Perpiñán. Electrical Engineering and Computer Science.
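The simplex-projection entry above admits a very compact implementation. The sketch below follows the standard sort-based algorithm (the common textbook formulation; the cited note's exact pseudocode is not reproduced here): sort the input in decreasing order, find the largest prefix whose shifted entries stay positive, and clip.

```python
def project_simplex(v):
    """Euclidean projection of v onto the probability simplex
    {x : x >= 0, sum(x) = 1}, via the sort-based algorithm."""
    u = sorted(v, reverse=True)   # decreasing order
    css = 0.0                     # running cumulative sum of u
    lam = 0.0                     # shift that makes the result sum to 1
    for j, uj in enumerate(u, start=1):
        css += uj
        t = (1.0 - css) / j
        if uj + t > 0:            # entry j survives the clipping
            lam = t
    return [max(vi + lam, 0.0) for vi in v]

print(project_simplex([0.5, 0.2, 0.9]))  # approximately [0.3, 0.0, 0.7]
```

The result sums to 1 and matches the KKT characterization x_i = max(v_i + lam, 0) for the optimal shift lam.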

Convex Optimization
Lijun Zhang (zlj@nju.edu.cn), http://cs.nju.edu.cn/zlj. Slides adapted from http://stanford.edu/~boyd/cvxbook/bv_cvxslides.pdf. Outline: introduction; convex sets & functions; convex optimization.

Distributed Network Resource Allocation with Integer Constraints
Yujiao Cheng, Houfeng Huang, Gang Wu, Qing Ling. Department of Automation, University of Science and Technology of China, Hefei, China.

Lecture 13: Solution Methods for Constrained Optimization
1. Primal approach; 2. Penalty and barrier methods; 3. Dual approach; 4. Primal-dual approach.

Dual Subgradient Methods Using Approximate Multipliers
Víctor Valls and Douglas J. Leith, Trinity College Dublin. We consider the subgradient method for the dual problem in convex optimisation.

Lagrangian Relaxation: An overview
Discrete Math for Bioinformatics WS 11/12, by A. Bockmayr and K. Reinert, January 22, 2013. Sources for this lecture: D. Bertsimas and J. Tsitsiklis.
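The dual subgradient and Lagrangian relaxation entries above can be sketched on a toy problem of my own choosing (not one from the listed lectures): minimize x^2 subject to 1 - x <= 0. The Lagrangian L(x, lam) = x^2 + lam*(1 - x) has inner minimizer x(lam) = lam/2, and the dual is maximized by projected subgradient ascent on lam.

```python
# Dual subgradient ascent for: min x^2  s.t.  g(x) = 1 - x <= 0.
# The optimal multiplier is lam* = 2, recovering the primal solution x* = 1.

def dual_subgradient(alpha=0.5, iters=100):
    lam = 0.0
    for _ in range(iters):
        x = lam / 2.0                    # argmin_x L(x, lam)
        g = 1.0 - x                      # g(x(lam)) is a subgradient of the dual
        lam = max(0.0, lam + alpha * g)  # ascent step, projected onto lam >= 0
    return lam, lam / 2.0

lam, x = dual_subgradient()
print(lam, x)  # approaches lam* = 2 and x* = 1
```

With this step size the iteration is the contraction lam <- 0.75*lam + 0.5, so it converges geometrically to the fixed point lam* = 2.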

Lec13p1, ORF363/COS323
This lecture: semidefinite programming (SDP); definition and basic properties; review of positive semidefinite matrices; SDP duality; SDP relaxations for nonconvex optimization.

Simulation, Lecture O1. Optimization: Linear Programming
Saeed Bastani, April 2016. Course outline: linear programming (1 lecture), integer programming (1 lecture), heuristics and metaheuristics (3 lectures).

Distance-to-Solution Estimates for Optimization Problems with Constraints in Standard Form
Philip E. Gill, Vyacheslav Kungurtsev, Daniel P. Robinson. UCSD Center for Computational Mathematics Technical Report.

Parameterized graph separation problems
Dániel Marx, Department of Computer Science and Information Theory, Budapest University of Technology and Economics, Budapest, H-1521, Hungary (dmarx@cs.bme.hu).

Faces of Convex Sets
Vera Roshchina. We recall the basic definitions of faces of convex sets and their basic properties. For more details see the classic references [1, 2] and [4] for polytopes.

Constrained optimization
A general constrained optimization problem has the form min_x f(x) subject to g(x) <= 0, h(x) = 0. The Lagrangian function is L(x, lambda, nu) = f(x) + lambda^T g(x) + nu^T h(x). Primal: p* = min_x sup_{lambda >= 0, nu} L(x, lambda, nu). Dual: d* = max_{lambda >= 0, nu} inf_x L(x, lambda, nu). Weak duality: d* <= p*. Strong duality: d* = p*.

Asynchronous Distributed Optimization With Event-Driven Communication
Minyi Zhong, Student Member, IEEE, and Christos G. Cassandras, Fellow, IEEE. IEEE Transactions on Automatic Control, vol. 55, no. 12, December 2010, p. 2735.

Using Structured System Theory to Identify Malicious Behavior in Distributed Systems
Shreyas Sundaram, Department of Electrical and Computer Engineering, University of Waterloo; Miroslav Pajic, Rahul Mangharam.

Sharp lower bound for the total number of matchings of graphs with given number of cut edges
South Asian Journal of Mathematics, 2014, vol. 4, no. 2, pp. 107-118. www.sajm-online.com, ISSN 2251-1512. Research article.

Introduction to Linear and Nonlinear Programming
David G. Luenberger, Stanford University. Addison-Wesley Publishing Company: Reading, Massachusetts; Menlo Park, California; London; Don Mills, Ontario.

CONLIN & MMA solvers
Pierre Duysinx, LTAS Automotive Engineering, academic year 2018-2019. Layout: CONLIN method; CONLIN subproblems; dual method approach for CONLIN subproblems; sequential quadratic programming.

Fully discrete Finite Element Approximations of Semilinear Parabolic Equations in a Nonconvex Polygon
Tamal Pramanick, Department of Mathematics, Indian Institute of Technology Guwahati.

Averaging Random Projection: A Fast Online Solution for Large-Scale Constrained Stochastic Optimization
Jialin Liu, Yuantao Gu, and Mengdi Wang.

Non-Convex Optimization Rate Control for Multi-Class Services in the Internet
Jang-Won Lee, Member, IEEE, Ravi R. Mazumdar, Fellow, IEEE. IEEE/ACM Transactions on Networking, vol. 13, no. 4, August 2005, p. 827.

Linear Programming
Larry Blume, Cornell University, The Santa Fe Institute, and IHS. The general linear program is a constrained optimization problem where objectives and constraints are all linear.

Detection and Mitigation of Cyber-Attacks using Game Theory
João P. Hespanha and Kyriakos G. Vamvoudakis.

Introduction to Modern Control Systems
Convex optimization, duality and linear matrix inequalities. Kostas Margellos, University of Oxford, AIMS CDT 2016-17.

Optimal Control Design and Implementation of Hard Disk Drives With Irregular Sampling Rates
Jianbin Nie, Edgar Sheh, et al. IEEE Transactions on Control Systems Technology, vol. 20, no. 2, March 2012, p. 402.

Simplex Adjacency Graphs in Linear Optimization
Gerard Sierksma and Gert A. Tijssen, University of Groningen, Faculty of Economics. Algorithmic Operations Research, vol. 1 (2006), pp. 46-51.

Math 5593 Linear Programming Lecture Notes
Unit II: Theory & Foundations (Convex Analysis). University of Colorado Denver, Fall 2013. Topics: 1 Convex Sets; 1.1 Basic Properties (Luenberger-Ye Appendix B.1).

6. Lecture notes on matroid intersection
Michel X. Goemans, Massachusetts Institute of Technology, 18.453: Combinatorial Optimization, May 2, 2017.

COMS 4771 Support Vector Machines
Nakul Verma. Last time: decision boundaries for classification; linear decision boundary (linear classification); the Perceptron algorithm; mistake bound for the Perceptron.

CS 473: Algorithms
Ruta Mehta, University of Illinois, Urbana-Champaign, Spring 2018. LP Duality, Lecture 20, April 3, 2018.

Heuristic Algorithms for Multiconstrained Quality-of-Service Routing
Xin Yuan, Member, IEEE. IEEE/ACM Transactions on Networking, vol. 10, no. 2, April 2002, p. 244.

Greed Considered Harmful: Nonlinear (in)stabilities in network resource allocation
Priya Ranjan, Indo-US workshop 2009. Outline: background; model & motivation; main results; fixed delays; single-user, single-link.

Distributed consensus protocols for clock synchronization in sensor networks
Luca Schenato, Workshop on cooperative multi-agent systems, Pisa, 6/12/2007. Outline: motivations; intro to consensus algorithms.

Experimental Validation of Consensus Algorithms for Multivehicle Cooperative Control
Wei Ren, Member, IEEE, Haiyang Chao, et al. IEEE Transactions on Control Systems Technology, vol. 16, no. 4, July 2008, p. 745.

Stochastic Separable Mixed-Integer Nonlinear Programming via Nonconvex Generalized Benders Decomposition
Xiang Li, Process Systems Engineering Laboratory, Department of Chemical Engineering.

Deterministic Operations Research: Models and Methods in Linear Optimization
David J. Rader, Jr., Rose-Hulman Institute of Technology, Department of Mathematics, Terre Haute, IN. Wiley (John Wiley & Sons).

Linear Programming Problems
Two common formulations of linear programming (LP) problems are: minimize sum_j c_j x_j subject to sum_j a_ij x_j >= b_i (i = 1, ..., m), x_j >= 0 (j = 1, ..., n); and maximize sum_j c_j x_j subject to sum_j a_ij x_j <= b_i (i = 1, ..., m), x_j >= 0 (j = 1, ..., n).

Lecture 5: Duality Theory
Rajat Mittal, IIT Kanpur. The objective of this lecture note is to introduce the duality theory of linear programming.

Min-Max Sliding-Mode Control for Multimodel Linear Time Varying Systems
Alex S. Poznyak, Member, IEEE, Yuri B. Shtessel, Member, IEEE. IEEE Transactions on Automatic Control, vol. 48, no. 12, December 2003, p. 2141.

Introduction to Optimization: Constrained Optimization
Marc Toussaint, U Stuttgart. General constrained optimization problem: given f : R^n -> R, g : R^n -> R^m, h : R^n -> R^l, find min_x f(x) subject to g(x) <= 0, h(x) = 0.

Simplex Algorithm in 1 Slide
Administrivia. Canonical form; pivoting on an entry A_{r,s} > 0.

Distributed Detection in Sensor Networks: Connectivity Graph and Small World Networks
Saeed A. Aldosari and José M. F. Moura, Electrical and Computer Engineering Department, Carnegie Mellon University.

Support Vector Machines
James McInerney, adapted from slides by Nakul Verma. Last time: decision boundaries for classification; linear decision boundary (linear classification); the Perceptron algorithm.

Convex Optimization, MLSS 2015
Constantine Caramanis, The University of Texas at Austin. The optimization problem: minimize f(x) subject to x in X.

Approximation Algorithms
Prof. Tapio Elomaa (tapio.elomaa@tut.fi). Course basics: a 4 credit unit course, part of the Theoretical Computer Science courses at the Laboratory of Mathematics.

Interpretation of Dual Model for Piecewise Linear Programming Problem
Robert Hlavatý.

Lecture 18, Lecture Outline
Generalized polyhedral approximation methods; combined cutting plane and simplicial decomposition methods. Lecture based on a paper by D. P. Bertsekas and H. Yu.

An Improved Measurement Placement Algorithm for Network Observability
Bei Gou and Ali Abur, Senior Member, IEEE. IEEE Transactions on Power Systems, vol. 16, no. 4, November 2001, p. 819.

Adaptive Linear Programming Decoding of Polar Codes
Veeresh Taranalli and Paul H. Siegel, University of California, San Diego, La Jolla, CA 92093, USA ({vtaranalli, psiegel}@ucsd.edu).

The Encoding Complexity of Network Coding
Michael Langberg, Alexander Sprintson, Jehoshua Bruck, California Institute of Technology ({mikel, spalex, bruck}@caltech.edu).

3 No-Wait Job Shops with Variable Processing Times
In this chapter we assume that, on top of the classical no-wait job shop setting, we are given a set of processing times for each operation.

Orientation of manifolds (definition)
Matthias Kreck, Bulletin of the Manifold Atlas (2013).

Degeneracy and the Fundamental Theorem
The standard simplex method in matrix notation: we start with the standard form of the linear program in matrix notation (SLP), and we assume (SLP) is feasible.

Research Interests: Optimization
Mitchell. Looking for the best solution from among a number of candidates. Prototypical optimization problem: min f(x) subject to g(x) <= 0, x in X, where X is a subset of R^n.

Advanced Operations Research Techniques IE316: Quiz 2 Review
Dr. Ted Ralphs. Reading for the quiz: material covered in detail in lecture; Bertsimas 4.1-4.5, 4.8, 5.1-5.5, 6.1-6.3.

Sparse Optimization Lecture: Proximal Operator/Algorithm and Lagrange Dual
Instructor: Wotao Yin, July 2013. Online discussions on piazza.com.
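The proximal-operator entry above is most often illustrated with the l1 norm, whose proximal operator is elementwise soft-thresholding. A minimal sketch (the example values are my own, not from the listed lecture):

```python
def prox_l1(v, t):
    """prox_{t*||.||_1}(v): elementwise soft-thresholding.
    Shrinks each entry toward zero by t, clipping small entries to 0."""
    return [vi - t if vi > t else vi + t if vi < -t else 0.0 for vi in v]

print(prox_l1([3.0, -0.5, 1.5], 1.0))  # -> [2.0, 0.0, 0.5]
```

This operator is the building block of proximal gradient methods such as ISTA for l1-regularized least squares.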

1. Lecture notes on bipartite matching
Michel X. Goemans, Massachusetts Institute of Technology, 18.453: Combinatorial Optimization, February 5, 2017. Matching problems are among the fundamental problems in combinatorial optimization.

Leaderless Formation Control for Multiple Autonomous Vehicles
Wei Ren. AIAA Guidance, Navigation, and Control Conference and Exhibit, Keystone, Colorado.

Parallel Optimization: Theory, Algorithms, and Applications
Yair Censor, Department of Mathematics and Computer Science, University of Haifa; Stavros A. Zenios, Department of Public and Business Administration.

Linear methods for supervised learning
LDA; logistic regression; naïve Bayes; PLA; maximum margin hyperplanes; soft-margin hyperplanes; least squares regression; ridge regression; nonlinear feature maps.

Estimation of Unknown Disturbances in Gimbal Systems
Burak Kürkçü (ASELSAN Inc., Ankara, Turkey) and Coşku Kasnakoğlu (TOBB University of Economics and Technology, Ankara, Turkey).

Optimality certificates for convex minimization and Helly numbers
Amitabh Basu, Michele Conforti, Gérard Cornuéjols, Robert Weismantel, Stefan Weltge. October 20, 2016.

Structural & Multidisciplinary Optimization
Pierre Duysinx and Patricia Tossings, Department of Aerospace and Mechanical Engineering, academic year 2018-2019.

Delay-minimal Transmission for Energy Constrained Wireless Communications
Jing Yang and Sennur Ulukus, Department of Electrical and Computer Engineering, University of Maryland, College Park (yangjing@umd.edu).

On the Complexity of the Policy Improvement Algorithm for Markov Decision Processes
Mary Melekopoglou and Anne Condon, Computer Sciences Department, University of Wisconsin - Madison.

Optimality certificates for convex minimization and Helly numbers
Amitabh Basu, Michele Conforti, Gérard Cornuéjols, Robert Weismantel, Stefan Weltge. May 10, 2017.

An Improved Subgradiend Optimization Technique for Solving IPs with Lagrangean Relaxation
M. Babul Hasan and Md. Toha, Department of Mathematics, Dhaka. Dhaka Univ. J. Sci. 61(2): 135-140, 2013 (July).

Mode Poset Probability Polytopes
Guido Montúfar and Johannes Rauh. arXiv:1503.00572v1 [math.CO], 27 Feb 2015. Max Planck Institute for Mathematics in the Sciences, Inselstraße 22, 04103 Leipzig, Germany.

IE598 Big Data Optimization Summary Nonconvex Optimization

IE598 Big Data Optimization Summary Nonconvex Optimization IE598 Big Data Optimization Summary Nonconvex Optimization Instructor: Niao He April 16, 2018 1 This Course Big Data Optimization Explore modern optimization theories, algorithms, and big data applications

More information

George B. Dantzig Mukund N. Thapa. Linear Programming. 1: Introduction. With 87 Illustrations. Springer

George B. Dantzig Mukund N. Thapa. Linear Programming. 1: Introduction. With 87 Illustrations. Springer George B. Dantzig Mukund N. Thapa Linear Programming 1: Introduction With 87 Illustrations Springer Contents FOREWORD PREFACE DEFINITION OF SYMBOLS xxi xxxiii xxxvii 1 THE LINEAR PROGRAMMING PROBLEM 1

Programming, numerics and optimization. Lecture C-4: Constrained optimization
Łukasz Jankowski, ljank@ippt.pan.pl, Institute of Fundamental Technological Research, Room 4.32, Phone +22.8261281 ext. 428, June

Analysis of a Reduced-Communication Diffusion LMS Algorithm
Reza Arablouei (corresponding author), Stefan Werner, Kutluyıl Doğançay, and Yih-Fang Huang. School of Engineering, University of South

Some Advanced Topics in Linear Programming
Matthew J. Saltzman, July 2, 1995. Connections with Algebra and Geometry: In this section, we will explore how some of the ideas in linear programming, duality theory,

Optimal network flow allocation
EE384Y project intermediate report. Almir Mutapcic and Primoz Skraba, Stanford University, Spring 2003-04, May 10, 2004. Contents: Introduction; Background; Problem statement

Loopy Belief Propagation
Research exam, Kristin Branson, September 29, 2003. Problem formalization: reasoning about any real-world problem requires assumptions about the structure

Nonsmooth Optimization and Related Topics
Edited by F. H. Clarke (University of Montreal, Montreal, Quebec, Canada), V. F. Dem'yanov (Leningrad State University, Leningrad, USSR), and F. Giannessi (University

Global Minimization via Piecewise-Linear Underestimation
O. L. Mangasarian, olvi@cs.wisc.edu. Journal of Global Optimization, 1-9 (2004). © 2004 Kluwer Academic Publishers, Boston. Manufactured in The Netherlands.

Chapter II. Linear Programming
Contents: 1. Introduction; 2. Simplex Method; 3. Duality Theory; 4. Optimality Conditions; 5. Applications (QP & SLP); 6. Sensitivity Analysis; 7. Interior Point Methods
