Dual Subgradient Methods Using Approximate Multipliers
Víctor Valls and Douglas J. Leith, Trinity College Dublin

Abstract
We consider the subgradient method for the dual problem in convex optimisation with approximate multipliers, i.e., the subgradient used in the update of the dual variables is obtained using an approximation of the true Lagrange multipliers. This setting is of interest for optimisation problems where the exact Lagrange multipliers might not be readily accessible. For example, in distributed optimisation the exact Lagrange multipliers might not be available at the nodes due to communication delays or losses. We show that we can construct approximate primal solutions that get arbitrarily close to the set of optima as step size α is reduced. Applications of the analysis include unsynchronised subgradient updates in the dual variable update and unsynchronised max-weight scheduling.

Index Terms: subgradient methods, Lagrange multipliers, max-weight scheduling, unsynchronised updates.

I. INTRODUCTION

In this paper we study the convergence of the subgradient method with constant step size in constrained convex optimisation with approximate subgradients. More specifically, the subgradient used in the update of the dual variables is computed using an approximation of the Lagrange multipliers instead of the true Lagrange multipliers. One motivation for considering approximate multipliers is that in some problems the exact Lagrange multipliers might not be available, but a noisy, delayed or perturbed version of them is. For instance, in distributed optimisation a node might not have access to the exact Lagrange multipliers of the system due to transmission delays or losses. Another example is in network problems, where discrete-valued queue occupancies can be identified with approximate scaled Lagrange multipliers [1].
We show that the subgradient method with approximate multipliers converges as long as the distance between the approximate and true Lagrange multipliers can be controlled with step size α > 0. In particular, we show that the running average of the associated primal points can get arbitrarily close to the set of primal optima as step size α is reduced. Further, this running average is asymptotically attracted to a feasible point for every α > 0. These results establish conditions under which inaccurate knowledge of the Lagrange multipliers does not affect the convergence of the subgradient method. Important applications of the analysis include unsynchronised subgradient updates of the dual variable and unsynchronised max-weight scheduling.

This work was supported by Science Foundation Ireland under Grant No. /PI/77.

A. Related Work

Subgradient methods in constrained convex optimisation have been extensively studied under various step size rules; see, for example, the early works by Shor [2] and Polyak [3], or the more modern treatments by Bertsekas et al. in [4] and [5]. Subgradient methods in constrained convex optimisation with noisy updates have been studied by Nedić and Bertsekas in [6]. In particular, the authors study the case where the noise is deterministic and bounded, and show optimality under several step size rules. The work in [7] considers unsynchronised subgradient updates where a subset of subgradients is updated at each iteration; the convergence of the algorithm is proved using a diminishing step size rule. The work in [8] considers asynchronous subgradient updates with constant step size, and asymptotic optimality is proved using averaging schemes. Approximate solutions to convex problems under an averaging scheme have been studied by Nedić and Ozdaglar in [9] and [10]. The work in [9] assumes that the dual function can be computed efficiently and the work in [10] considers a sequence of primal-dual subgradient updates.
For a good reference on primal averaging schemes see the related work section and references in [9]. Max-weight scheduling, introduced by Tassiulas in [11], has been extended in a wide range of papers to the minimisation of a convex utility function, for instance energy minimisation [12] or fairness [13], among others. In [1] we showed that Lagrange multipliers and queue occupancies in a network are related, and that max-weight resource allocation problems can be formulated as convex optimisation problems.

B. Notation

Vectors and matrices are indicated in bold type. Since we often use subscripts to indicate elements in a sequence, to avoid confusion we usually use a superscript x^(j) to denote the j-th element of a vector x. The j-th element of operator [x]^+ equals x^(j) when x^(j) > 0 and equals 0 when x^(j) ≤ 0; that is, [x]^+ is the element-wise projection of x onto the non-negative orthant. The subgradient of a convex function f at a point x is denoted ∂f(x). For two vectors x, y ∈ R^m we use the element-wise comparisons x ⪯ y and x ≺ y to denote x^(j) ≤ y^(j) and x^(j) < y^(j) respectively for all j = 1,...,m.
II. PRELIMINARIES

A. Problem Setup

Consider the following convex optimisation problem

P: minimise f(z) subject to g(z) ⪯ 0, z ∈ C,

where f, g^(j): R^n → R, j = 1,...,m, are convex functions, g(z) = [g^(1)(z),...,g^(m)(z)]^T and C is a bounded convex set from R^n. Let C₀ := {z ∈ C | g(z) ⪯ 0} and assume C₀ is non-empty, i.e., problem P is feasible. Further, we denote by C* := arg min_{z ∈ C₀} f(z) the set of optima and f* := f(z*), z* ∈ C*.

B. Lagrange Penalty & Dual Function

As in classical constrained convex optimisation we define the Lagrange penalty function L: R^n × R^m → R associated with problem P as

L(z, λ) := f(z) + λ^T g(z)    (1)

where λ ∈ R^m₊. Note that L(z, λ) is convex in z for fixed λ and linear in λ for fixed z. We also define the Lagrange dual function as

q(λ) := L(z(λ), λ) = min_{z ∈ C} L(z, λ)

where z(λ) ∈ arg min_{z ∈ C} L(z, λ). Importantly, since q(λ) is the infimum of a collection of affine functions of λ, it is concave, and q(λ) ≤ L(z, λ) for all λ. The dual function is of particular interest when we have strong duality, i.e., when the solutions of the dual problem

P_D: maximise_{λ ⪰ 0} q(λ)    (2)

and the primal problem P coincide, namely, when

f* = min_{z ∈ C} max_{λ ⪰ 0} L(z, λ) = max_{λ ⪰ 0} min_{z ∈ C} L(z, λ) = q(λ*),

where λ* ∈ arg max_{λ ⪰ 0} q(λ). We are guaranteed strong duality when the following assumption holds.

Assumption 1 (Slater condition). The set C₀ := {z ∈ C | g(z) ⪯ 0} has non-empty relative interior, i.e., there exists a point z̄ ∈ C such that g(z̄) ≺ 0.

A direct consequence of Assumption 1 is that the set of dual optima is bounded. The following lemma corresponds to Lemma 1 in [9].

Lemma 1 (Bounded Dual Set). Let Q_δ := {λ ⪰ 0 : q(λ) ≥ q(λ*) − δ} with δ ≥ 0 and let the Slater condition hold, i.e., there exists a vector z̄ ∈ C such that g(z̄) ≺ 0. Then, for every λ ∈ Q_δ we have

‖λ‖ ≤ Q := (1/υ)(f(z̄) − f* + δ)    (3)

where υ := min_{j ∈ {1,...,m}} −g^(j)(z̄).

C. Classical Subgradient Method

Problem P_D is an unconstrained optimisation problem that can be solved, for example, using the subgradient method.
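The weak and strong duality relations above can be checked numerically on a small instance. The sketch below uses a one-dimensional problem of our own choosing (not from the paper): f(z) = z² on C = [−1, 1] with g(z) = 1/2 − z ≤ 0. Slater holds (z̄ = 1 gives g(z̄) = −1/2 < 0), the primal optimum is f* = 1/4 at z* = 1/2, and the dual optimum is attained at λ* = 1.

```python
import numpy as np

# Toy instance of our own choosing: min z^2 s.t. g(z) = 1/2 - z <= 0,
# z in C = [-1, 1]. Slater holds (z = 1 gives g = -1/2 < 0),
# f* = 1/4 at z* = 1/2, and the dual optimum is lambda* = 1.
grid = np.linspace(-1.0, 1.0, 2001)

def q(lam):
    # q(lambda) = min_{z in C} L(z, lambda), evaluated on a fine grid
    return np.min(grid**2 + lam * (0.5 - grid))

f_star = 0.25
# weak duality: q(lambda) <= f* for every lambda >= 0
assert all(q(lam) <= f_star + 1e-6 for lam in np.linspace(0.0, 5.0, 51))
# strong duality: the dual optimum attains f* at lambda* = 1
assert abs(q(1.0) - f_star) < 1e-3
```

The grid minimisation stands in for the exact inner minimisation min_{z ∈ C} L(z, λ); in this quadratic instance the minimiser is z(λ) = λ/2 clipped to C.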
The subgradient method for the dual problem consists of the updates

z_k(λ_k) ∈ arg min_{z ∈ C} L(z, λ_k)    (4)
λ_{k+1} = [λ_k + α g(z_k(λ_k))]^+    (5)

where α > 0 is a step size. Subgradient methods can make use of more complex step size rules (see [5] for example); however, in this paper we use a constant step size for its simplicity and practical importance. The convergence of the subgradient method for the dual problem [4] is based on decreasing the Euclidean distance between λ_k and λ*. In particular, the distance between λ_k and λ* decreases when the difference between q(λ_k) and q(λ*) is sufficiently large; otherwise λ_k remains in a ball around λ*. The size of the ball to which λ_k converges depends on step size α > 0, and λ_k → Λ* := arg max_{λ ⪰ 0} q(λ) as α → 0. Further, since by strong duality arguments z_k(λ_k) → z* as α → 0, we can intuitively expect z_k(λ_k) to converge close to z* ∈ C* for α sufficiently small.

III. SUBGRADIENT METHOD WITH APPROXIMATE MULTIPLIERS

The classical subgradient updates (4) and (5) can be extended to make use of approximate Lagrange multipliers µ_k ∈ R^m₊. Using an approximation of the Lagrange multipliers in the subgradient method is of interest, for example, to capture the fact that in some optimisation problems the exact Lagrange multipliers might not be known or may have errors. Suppose we have the updates

z_k(µ_k) ∈ arg min_{z ∈ C} L(z, µ_k)    (6)
λ_{k+1} = [λ_k + α g(z_k(µ_k))]^+    (7)

where µ_k ∈ R^m₊. We can think of µ_k as an approximate multiplier substituted into the classical subgradient updates (4) and (5). We are first interested in knowing when the multiplier λ_k generated by (7) converges to a ball around λ*. This is answered by the following lemma.

Lemma 2. Consider the setup of problem P and updates (6) and (7), and suppose that ‖λ_k − µ_k‖ ≤ ασ for all k with α > 0, σ ≥ 0. Suppose the Slater condition is satisfied (Assumption 1) and that λ₁ ∈ R^m₊. Then λ_k converges to a ball around λ*. Further, the size of the ball depends on parameter α > 0, and λ_k → Λ* := arg max_{λ ⪰ 0} q(λ) as α → 0.
Lemma 2 tells us that all we require for λ_k to converge to a ball around λ* is that the distance between µ_k and λ_k is uniformly bounded for all k, i.e., ‖λ_k − µ_k‖ ≤ ασ with α > 0, σ ≥ 0. As we will see later, this condition can be readily evaluated and is commonly satisfied in practical problems of interest. Further, observe that when σ = 0 then µ_k = λ_k and so we recover the classical subgradient method.
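To make updates (6) and (7) concrete, the following sketch runs them on a toy problem of our own choosing (not from the paper): f(z) = ‖z − w‖² on C = [0, 1]², with one linear constraint a·z ≤ b. The approximate multipliers are µ_k = [λ_k + α y_k]^+ with |y_k| ≤ σ, so the condition ‖λ_k − µ_k‖ ≤ ασ of Lemma 2 holds by construction; setting σ = 0 recovers the classical updates (4) and (5).

```python
import numpy as np

def dual_subgradient(w, a, b, alpha=0.02, sigma=1.0, iters=2000, seed=0):
    """Updates (6)-(7) for min ||z - w||^2 s.t. a.z <= b, z in [0,1]^n.

    mu_k = [lambda_k + alpha*y_k]^+ with |y_k| <= sigma, so
    ||lambda_k - mu_k|| <= alpha*sigma (the condition of Lemma 2).
    sigma = 0 gives the classical subgradient updates (4)-(5).
    """
    rng = np.random.default_rng(seed)
    lam = 0.0
    for _ in range(iters):
        mu = max(lam + alpha * rng.uniform(-sigma, sigma), 0.0)
        # z_k(mu_k): closed-form minimiser of L(z, mu) projected onto C
        z = np.clip(w - 0.5 * mu * a, 0.0, 1.0)
        # lambda_{k+1} = [lambda_k + alpha * g(z_k(mu_k))]^+
        lam = max(lam + alpha * (a @ z - b), 0.0)
    return z, lam
```

For w = (1, 1), a = (1, 1), b = 1, the optimum is z* = (0.5, 0.5) with λ* = 1; with σ = 0 the iterates settle in a small ball around these values, and with σ = 1 the ball is only O(α) larger, as Lemma 2 predicts.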
In addition to convergence of the multipliers, we also have that the running average

z̄_k := (1/k) Σ_{i=1}^{k} z_i(µ_i)    (8)

converges to a feasible point as k → ∞ for every α > 0. This is formally presented in the following lemma.

Lemma 3 (Feasibility). Consider the setup of problem P and updates (6) and (7) where ‖λ_k − µ_k‖ ≤ ασ for all k with α > 0, σ ≥ 0. Suppose the Slater condition is satisfied (Assumption 1) and that λ₁ ∈ R^m₊. Then, for all k = 1, 2,...

g(z̄_k) := g((1/k) Σ_{i=1}^{k} z_i(µ_i)) ⪯ (λ̄/(αk)) 1

where λ̄ := Q + max{‖λ₁‖, Q + αmḡ}, ḡ := max_{z ∈ C} ‖g(z)‖ and Q is given in Lemma 1 with δ := αm(ḡ²/2 + σḡ). As k → ∞ it follows that g(z̄_k) ⪯ 0 and so z̄_k is asymptotically feasible. Also, by strong duality arguments we can expect z̄_k to converge to a feasible point that is close to z* for α sufficiently small. This intuition is captured by our main result, which establishes the convergence of f(z̄_k) to f* depending on step size α and iterate k.

Theorem 1. Consider the setup of problem P and updates (6) and (7) where ‖λ_k − µ_k‖ ≤ ασ for all k = 1, 2,... with α > 0, σ ≥ 0. Suppose the Slater condition is satisfied (Assumption 1). Then,

−mλ̄²/(2αk) − αm(ḡ²/2 + σḡ) ≤ f(z̄_k) − f* ≤ αm(ḡ²/2 + σḡ) + 3mλ̄²/(2αk).

Proof: See appendix.

Observe that the bounds in Theorem 1 are not asymptotic but can be applied at finite k. As k → ∞ the bounds simplify to −αm(ḡ²/2 + σḡ) ≤ f(z̄_k) − f* ≤ αm(ḡ²/2 + σḡ) and can be made small by selecting step size α sufficiently small. Note from Theorem 1 that if the Lagrange multipliers are accessible in the optimisation (µ_k = λ_k for all k) then σ = 0 and we have a tighter bound.

IV. UNSYNCHRONISED UPDATES

Theorem 1 has many useful applications. In addition to allowing the use of noisy multipliers, e.g., µ_k^(j) = [λ_k^(j) + α y_k]^+ where y_k is a bounded random variable, in this section we show that convergence of the subgradient method with unsynchronised updates can be obtained as a corollary, avoiding the need for complex, specialised analysis machinery.
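Before turning to unsynchronised updates, here is a numerical sketch of Lemma 3 and Theorem 1 on a toy problem of our own choosing (min ‖z − w‖² over C = [0, 1]² with the single constraint z^(1) + z^(2) ≤ 1, so f* = 0.5 at z* = (0.5, 0.5)): the running average (8) built from the points z_i(µ_i) becomes asymptotically feasible, and its cost approaches f* up to an O(α) term plus an O(1/(αk)) transient.

```python
import numpy as np

# Sketch of the running average (8) under approximate multipliers
# (toy data chosen by us; f* = 0.5 at z* = (0.5, 0.5), lambda* = 1).
rng = np.random.default_rng(0)
w, a, b, alpha = np.array([1.0, 1.0]), np.array([1.0, 1.0]), 1.0, 0.02
lam, zbar = 0.0, np.zeros(2)
for k in range(5000):
    # approximate multiplier: mu_k = [lambda_k + alpha*y_k]^+, y_k in [0, 1]
    mu = max(lam + alpha * rng.uniform(0.0, 1.0), 0.0)
    z = np.clip(w - 0.5 * mu * a, 0.0, 1.0)    # z_k(mu_k), update (6)
    zbar += (z - zbar) / (k + 1)               # running average (8)
    lam = max(lam + alpha * (a @ z - b), 0.0)  # update (7)

constraint_gap = a @ zbar - b             # small: Lemma 3 feasibility
cost_gap = np.sum((zbar - w) ** 2) - 0.5  # small: Theorem 1 cost bound
```

Both gaps shrink further if α is reduced (with k increased accordingly), mirroring the bounds in Theorem 1.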
To see this, consider the following separable convex problem

P_s: minimise_{z^(j) ∈ C_j, j=1,...,n} f(z) := Σ_{j=1}^n f^(j)(z^(j))
subject to g_r(z) := Σ_{j=1}^n g_r^(j)(z^(j)) ≤ 0, r = 1,...,m    (9)

where f^(j), g_r^(j): R → R, j = 1,...,n, r = 1,...,m, are convex functions and C := C_1 × ... × C_n. Here the objective decomposes into functions f^(j) which depend only on element z^(j) of vector z. Each element z^(j) is constrained to lie in a convex set C_j. The constraints g_r(z) can also be decomposed but are not separable, since the sum on the LHS must satisfy the non-positivity constraint (which imposes a joint constraint on the g_r^(j)). The Lagrangian of problem P_s is given by

L(z, λ) = Σ_{j=1}^n f^(j)(z^(j)) + Σ_{r=1}^m λ^(r) Σ_{j=1}^n g_r^(j)(z^(j)) = Σ_{j=1}^n L^(j)(z^(j), λ)    (10)

where L^(j)(z^(j), λ) := f^(j)(z^(j)) + Σ_{r=1}^m λ^(r) g_r^(j)(z^(j)) and λ := [λ^(1),...,λ^(m)]^T. The dual function can be decomposed as the sum of n concave functions

q(λ) := min_{z^(j) ∈ C_j, j=1,...,n} Σ_{j=1}^n L^(j)(z^(j), λ) = Σ_{j=1}^n q^(j)(λ)    (11)

where q^(j)(λ) := min_{z^(j) ∈ C_j} L^(j)(z^(j), λ). While in (11) each function q^(j) uses the same multiplier λ, this is not essential. Instead, we can allow each function q^(j) to use a different approximate multiplier µ_k^(j) and select

z_k^(j)(µ_k^(j)) ∈ arg min_{w ∈ C_j} L^(j)(w, µ_k^(j)), j = 1,...,n    (12)
λ_{k+1}^(r) = [λ_k^(r) + α Σ_{j=1}^n g_r^(j)(z_k^(j)(µ_k^(j)))]^+, r = 1,...,m.    (13)

Provided ‖λ_k − µ_k^(j)‖ ≤ ασ for all k, then by Theorem 1 we know that updates (12)-(13) converge to a ball around the optimum. In particular, consider now the situation where at each step k only a subset of the elements z^(j), j = 1,...,n, are updated, i.e., updates are unsynchronised. The elements which are updated use multiplier µ_k^(j) = λ_k. Those elements z^(j) which are not updated at step k can be formally thought of as being updated using the old multiplier value µ_k^(j) = λ_{τ_k^(j)}, where τ_k^(j) is the time step at which z^(j) was last updated. Provided k − τ_k^(j) is uniformly upper bounded by σ, then ‖λ_k − µ_k^(j)‖ is uniformly bounded by ασḡ and so by Theorem 1 updates (12)-(13) converge to a ball around the optimum.
That is, we have the following corollary.

Corollary 1 (Unsynchronised Subgradient Updates). Consider the setup of problem P_s and let µ_k^(j) = λ_{τ_k^(j)}, j = 1,...,n, with τ_k ∈ N^n. Then, the bound of Theorem 1 holds if there exists a constant σ such that k − τ_k^(j) ≤ σ for all k.

Note that the delay k − τ_k^(j) may vary from time step to time step and from element to element in an arbitrary manner (including randomly) and this analysis still applies provided that k − τ_k^(j) is
uniformly upper bounded. Extension to random delays which are not uniformly bounded can be made on a sample path basis; that is, we then have a probability of convergence, with the probability being the fraction of sample paths for which the delay satisfies a specified uniform upper bound. In this way the convergence of many consensus-like approaches for distributed optimisation may be encompassed by Theorem 1. Further, observe that alternatively each element z^(j), j = 1,...,n, might be updated using a delayed value λ_{τ_k^(j)} of the multiplier (such delays might, for example, be due to propagation delay across a network) and the same analysis applies.

To illustrate these results we present a brief example with unsynchronised updates and approximate multipliers and compare it to the classical subgradient method (σ = 0 and synchronised subgradient updates).

Example 1. Consider the following optimisation problem

minimise_{z ∈ C} ‖z − w‖² subject to Az ⪯ b

where z ∈ R³, C := {z ∈ R³ | 0 ⪯ z ⪯ 1}, and w ∈ R³, A ∈ R^{2×3}, b ∈ R² are fixed problem data. The Lagrangian is

L(z, λ) = ‖z − w‖² + λ^T(Az − b) = −λ^T b + Σ_{j=1}^3 ((z^(j) − w^(j))² + (λ^T A^(j)) z^(j)),

where λ := [λ^(1), λ^(2)]^T, A^(j) denotes the j-th column of A, and the dual variable updates are given by

λ_{k+1}^(j) = [λ_k^(j) + α(Az_k(µ_k) − b)^(j)]^+, j = 1, 2.

We study two cases. Case (i): µ_k^(j) = λ_k^(j) for all k, j = 1, 2, and the subgradients in the dual variable update are computed synchronously; namely, at each iterate we have

z_k^(j) ∈ arg min_{s ∈ [0,1]} (s − w^(j))² + (λ_k^T A^(j)) s for all j = 1, 2, 3.

Case (ii): approximate multipliers are given by µ_k^(j) = [λ_k^(j) + α y_k]^+, where y_k is a realisation of a random variable uniformly distributed between 0 and 1, and the primal variables z^(j) are updated in an unsynchronised manner, i.e.,

z_k^(j) ∈ arg min_{s ∈ [0,1]} (s − w^(j))² + (µ_k^T A^(j)) s

where j ∈ {1, 2, 3} is selected uniformly at random at each time step. Note that case (i) corresponds to the classical subgradient method for the dual problem and that case (ii) extends it to make use of approximate Lagrange multipliers and unsynchronised updates.
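The two cases can also be compared in code. The sketch below mirrors the structure of cases (i) and (ii), but the problem data w, A, b are placeholders of our own choosing rather than the paper's values.

```python
import numpy as np

# Sketch comparing case (i) (synchronous, exact multipliers) with
# case (ii) (noisy multipliers, one randomly chosen coordinate updated
# per step). Problem data w, A, b are our own placeholders.
def run(case, iters=20000, alpha=0.02, seed=1):
    rng = np.random.default_rng(seed)
    w = np.array([0.3, 0.7, 0.5])
    A = np.array([[1.0, 1.0, 0.0], [0.0, 1.0, 1.0]])
    b = np.array([0.8, 0.9])
    lam, z = np.zeros(2), np.zeros(3)
    for _ in range(iters):
        if case == "i":
            mu, coords = lam, range(3)          # exact, synchronous
        else:
            mu = np.maximum(lam + alpha * rng.uniform(0.0, 1.0, 2), 0.0)
            coords = [rng.integers(3)]          # unsynchronised
        for j in coords:
            # argmin over s in [0,1] of (s - w_j)^2 + (mu^T A_j) s
            z[j] = np.clip(w[j] - 0.5 * (mu @ A[:, j]), 0.0, 1.0)
        lam = np.maximum(lam + alpha * (A @ z - b), 0.0)  # dual update
    return z, lam
```

Both cases settle near the same primal point; case (ii) merely lives in a somewhat larger ball around λ*, as the figures for Example 1 illustrate.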
We numerically solve this example with initial condition λ₁ = 0 and step size α = 1/5 for both cases. Figures 1a, 1b and 1c show respectively the convergence of λ_k, z_k(µ_k) and f(z̄_k) to λ*, z* and f*. Observe from Figure 1a that even though λ_k converges to a bigger ball around λ* in case (ii) than in case (i), the value of f(z̄_k) obtained in both cases is very close for k large enough.

Fig. 1: Illustrating the convergence of Example 1. (a) Convergence of the Lagrange multipliers to the optimal dual variables. (b) Convergence of the primal variables z^(j) to z*^(j); straight lines correspond to the optimal values. (c) Convergence of the objective function in both cases to f*.

V. UNSYNCHRONISED MAX-WEIGHT

It was recently shown in [1] and [14] that max-weight scheduling uses the queues within a network as approximate scaled multipliers. This observation can be viewed as essentially a special case of Theorem 1, which makes use of the fact that two queue-like updates Q_{k+1} = [Q_k + α δ_k]^+ and
Q̃_{k+1} = [Q̃_k + α δ̃_k]^+ satisfy ‖Q_k − Q̃_k‖ ≤ ασ whenever max_j ‖Σ_{i=j}^{k} (δ_i − δ̃_i)‖ ≤ σ/2 (a property which also follows from the continuity of queues in the Skorokhod metric). Using Theorem 1, the extension to unsynchronised updates is immediate. We illustrate this via a simple network flow example.

Fig. 2: Network of the example in Section V.

A. Network Flow Example

Consider the network illustrated in Figure 2. Time is slotted at each node, but the nodes' time slots are not synchronised. Exogenous packets arrive into the system (enter the queue of node 1) with mean rate 1/2 packet per slot. Node 1 can transmit a single packet to either node 2 or node 3; the packets transmitted by nodes 2 and 3 leave the system. All nodes in the network transmit at most a single packet in a slot. Similarly to max-weight scheduling, we aim to design a packet scheduling policy at each node that keeps the system stable (bounded queues), does not require us to know the traffic characteristics in advance, and makes decisions based only on queue occupancies. Furthermore, nodes make scheduling decisions in order to minimise a local convex utility function of the average link throughput.

B. Fluid-like Optimisation

We can formulate the resource allocation problem as the convex optimisation

min_{z ∈ C} Σ_{j=1}^4 f^(j)(z^(j))    (14)
s.t. 1/2 − z^(1) − z^(2) ≤ 0, z^(1) − z^(3) ≤ 0, z^(2) − z^(4) ≤ 0

where C := {z ∈ R⁴₊ | z ⪯ 1, z^(1) + z^(2) ≤ 1}, i.e., it is the convex hull of the action set {0, 1}⁴ (where 1 represents transmitting a packet over a link and 0 doing nothing) with the restriction that node 1 can transmit on only one link in a time slot. Constraint 1/2 − z^(1) − z^(2) ≤ 0 enforces the mean service rate of node 1 to be greater than or equal to the mean arrival rate into the system. Similarly, constraints z^(1) − z^(3) ≤ 0 and z^(2) − z^(4) ≤ 0 enforce the service rates of nodes 2 and 3 to be greater than or equal to the mean service rate of node 1 over each link. The Lagrangian is given by

L(z, λ) = Σ_{j=1}^4 f^(j)(z^(j)) + λ^(1)(1/2 − z^(1) − z^(2)) + λ^(2)(z^(1) − z^(3)) + λ^(3)(z^(2) − z^(4)).
(15)

Now consider the following subgradient updates with approximate multipliers:

[z_k^(1)(µ_k), z_k^(2)(µ_k)] ∈ arg min_{w ∈ C̃} {f^(1)(w^(1)) + f^(2)(w^(2)) − µ_k^(1)(w^(1) + w^(2)) + µ_k^(2) w^(1) + µ_k^(3) w^(2)},    (16)
z_k^(3)(µ_k) ∈ arg min_{w ∈ [0,1]} {f^(3)(w) − µ_k^(2) w},    (17)
z_k^(4)(µ_k) ∈ arg min_{w ∈ [0,1]} {f^(4)(w) − µ_k^(3) w},    (18)
λ_{k+1}^(1) = [λ_k^(1) + α(1/2 − z_k^(1)(µ_k) − z_k^(2)(µ_k))]^+,    (19)
λ_{k+1}^(2) = [λ_k^(2) + α(z_k^(1)(µ_k) − z_k^(3)(µ_k))]^+,    (20)
λ_{k+1}^(3) = [λ_k^(3) + α(z_k^(2)(µ_k) − z_k^(4)(µ_k))]^+,    (21)

where C̃ := {z ∈ R²₊ | z^(1) + z^(2) ≤ 1}. When µ_k = λ_k then (19)-(21) are equivalent to the classical dual subgradient update. Since the objective function is separable and the constraints are linear, the optimisation can be solved in a distributed manner. Note from equations (16)-(18) and Figure 2 that we can compute the dual function without knowing the mean arrival rate at each node. However, updates (19)-(21) require us to know the mean packet arrival rates.

C. Optimisation with Unsynchronised Updates, Discrete Actions and Queue Occupancies as Approximate Multipliers

Suppose now that at the beginning of each slot a node decides whether to transmit a packet depending on a transmission policy. The transmission policy of a node at a given time t consists of a single action. The action set of node 1 is {[1,0], [0,1], [0,0]}, where action [1,0] denotes transmitting a packet to node 2, [0,1] transmitting a packet to node 3, and [0,0] doing nothing. The action set of nodes 2 and 3 is {0, 1}, where 1 denotes transmitting a packet out of the system and 0 doing nothing. The transmission policy of the system at a given time t is given by a column of matrix X, whose columns contain the different permutations of the possible transmission policies of the nodes in the system. Packets arrive into the system (the queue of node 1) at discrete times t = 1, 2, 3,..., and satisfy (1/t) Σ_{i=1}^{t} b_i → 1/2, where b_i ∈ {0, 1} denotes receiving a packet or not at time i. We use vector Q_t = [Q_t^(1), Q_t^(2), Q_t^(3)]^T to denote the queue occupancy at nodes 1, 2, 3 respectively. Suppose the transmission policy of the system is updated at times t = 1, 2, 3,...
and is obtained as shown in equation (3) in [1]:

x_k ∈ arg min_{x ∈ X} ‖Σ_{i=1}^{k} z_i(µ_i) − Σ_{i=1}^{k−1} x_i − x‖    (22)
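The queue-closeness property invoked above rests on the non-expansiveness of the projection [·]^+: |[a]^+ − [b]^+| ≤ |a − b|. A minimal self-contained check of that mechanism, with data of our own choosing:

```python
import random

# Non-expansiveness of q -> [q + alpha*d]^+: two queue trajectories
# driven by the same increments never drift further apart, which is why
# a bounded perturbation of the increments keeps two queue-like
# recursions within a bounded distance of each other.
random.seed(0)
alpha, q, q_tilde = 0.1, 0.0, 5.0   # same dynamics, different starts
gap = abs(q - q_tilde)
for _ in range(1000):
    d = random.uniform(-1.0, 1.0)   # common increment
    q = max(q + alpha * d, 0.0)
    q_tilde = max(q_tilde + alpha * d, 0.0)
    assert abs(q - q_tilde) <= gap + 1e-12   # gap is non-increasing
    gap = abs(q - q_tilde)
```

With differing increments δ_k and δ̃_k, the same argument bounds the growth of the gap by the accumulated increment difference, which is the property used in Section V.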
Fig. 3: Illustrating the convergence of f(z̄_k) to f* in the unsynchronised network example.

where a vector x ∈ X is a transmission policy of the system and z_k(µ_k) := [z_k^(1)(µ_k),...,z_k^(4)(µ_k)]^T is given by the subgradient updates

[z_k^(1)(µ_k), z_k^(2)(µ_k)] ∈ arg min_{w ∈ C̃} {f^(1)(w^(1)) + f^(2)(w^(2)) − µ_k^(1)(w^(1) + w^(2)) + µ_k^(2) w^(1) + µ_k^(3) w^(2)},    (23)
z_k^(3)(µ_k) ∈ arg min_{w ∈ [0,1]} {f^(3)(w) − µ_k^(2) w},    (24)
z_k^(4)(µ_k) ∈ arg min_{w ∈ [0,1]} {f^(4)(w) − µ_k^(3) w},    (25)

where here µ_k = αQ_k at each iterate with probability 1/5, and otherwise µ_k = µ_{k−1}, i.e., updates are unsynchronised. Importantly, as shown in [1], update (22) keeps the scaled queue occupancy close to the Lagrange multipliers, and therefore Theorem 1 applies. Note as well from equations (23)-(25) that, as explained in Section V-B, the dual function can be computed without knowledge of the mean packet arrival rates.

We simulate the network with objective functions f^(j)(z^(j)) = (z^(j))² for all j = 1,...,4, with initial condition Q₁ = 0 and step size α = 0.01. As expected from Theorem 1, Figure 3 shows that f(z̄_k) converges to f*. Figure 4 shows that the scaled queue occupancies and the Lagrange multipliers stay close, and both converge to a ball around λ*.

VI. CONCLUSIONS

We considered the subgradient method for the dual problem in convex optimisation with approximate multipliers, i.e., the subgradient used in the update of the dual variables is obtained using an approximation of the true Lagrange multipliers. This setting is of interest for optimisation problems where the exact Lagrange multipliers might not be readily accessible. We showed that the running average of the associated primal points can get arbitrarily close to the set of primal optima as step size α is reduced. Further, this running average is asymptotically attracted to a feasible point for every α > 0.

Fig. 4: Illustrating the convergence of the α-scaled queues to a ball around the optimal dual variables.
Blue, green and orange lines correspond, respectively, to the α-scaled queues Q^(1), Q^(2) and Q^(3), and black lines correspond to the associated Lagrange multipliers.

Applications of the analysis include unsynchronised subgradient updates in the dual variable update and unsynchronised max-weight scheduling.

REFERENCES

[1] V. Valls and D. Leith, "On the relationship between queues and multipliers," in Communication, Control, and Computing (Allerton), 2014 52nd Annual Allerton Conference on, Sept 2014.
[2] N. Z. Shor, Minimization Methods for Non-Differentiable Functions. Springer Berlin Heidelberg, 1985.
[3] B. Polyak, Introduction to Optimization. Optimization Software, Inc., 1987.
[4] D. P. Bertsekas, Nonlinear Programming. Athena Scientific, 1999.
[5] D. P. Bertsekas, A. Nedić, and A. E. Ozdaglar, Convex Analysis and Optimization. Athena Scientific, 2003.
[6] A. Nedić and D. P. Bertsekas, "The effect of deterministic noise in subgradient methods," Mathematical Programming, vol. 125, no. 1, 2010.
[7] K. Kiwiel and P. Lindberg, "Parallel subgradient methods for convex optimization," in Inherently Parallel Algorithms in Feasibility and Optimization and their Applications, ser. Studies in Computational Mathematics. Elsevier, 2001, vol. 8.
[8] N. Gatsis and G. Giannakis, "Asynchronous subgradient methods with unbounded delays for communication networks," in Decision and Control (CDC), IEEE 51st Annual Conference on, Dec 2012.
[9] A. Nedić and A. Ozdaglar, "Approximate primal solutions and rate analysis for dual subgradient methods," SIAM Journal on Optimization, vol. 19, no. 4, pp. 1757-1780, 2009.
[10] A. Nedić and A. Ozdaglar, "Subgradient methods for saddle-point problems," Journal of Optimization Theory and Applications, vol. 142, no. 1, pp. 205-228, 2009.
[11] L. Tassiulas and A. Ephremides, "Stability properties of constrained queueing systems and scheduling policies for maximum throughput in multihop radio networks," Automatic Control, IEEE Transactions on, vol. 37, no. 12, pp. 1936-1948, Dec 1992.
[12] M. Neely, E. Modiano, and C. Rohrs, "Power allocation and routing in multibeam satellites with time-varying channels," Networking, IEEE/ACM Transactions on, vol. 11, no. 1, pp. 138-152, Feb 2003.
[13] M. Neely, E. Modiano, and C.-P. Li, "Fairness and optimal stochastic control for heterogeneous networks," Networking, IEEE/ACM Transactions on, vol. 16, no. 2, April 2008.
[14] V. Valls and D. J. Leith, "Max-weight revisited: Sequences of non-convex optimisations solving convex optimisations," Networking, IEEE/ACM Transactions on, in press.
More informationCollege of Computer & Information Science Fall 2007 Northeastern University 14 September 2007
College of Computer & Information Science Fall 2007 Northeastern University 14 September 2007 CS G399: Algorithmic Power Tools I Scribe: Eric Robinson Lecture Outline: Linear Programming: Vertex Definitions
More informationConvex Optimization MLSS 2015
Convex Optimization MLSS 2015 Constantine Caramanis The University of Texas at Austin The Optimization Problem minimize : f (x) subject to : x X. The Optimization Problem minimize : f (x) subject to :
More informationA Comparison of Mixed-Integer Programming Models for Non-Convex Piecewise Linear Cost Minimization Problems
A Comparison of Mixed-Integer Programming Models for Non-Convex Piecewise Linear Cost Minimization Problems Keely L. Croxton Fisher College of Business The Ohio State University Bernard Gendron Département
More informationDelay Stability of Back-Pressure Policies in the presence of Heavy-Tailed Traffic
Delay Stability of Back-Pressure Policies in the presence of Heavy-Tailed Traffic Mihalis G. Markakis Department of Economics and Business Universitat Pompeu Fabra Email: mihalis.markakis@upf.edu Eytan
More informationRevisiting Frank-Wolfe: Projection-Free Sparse Convex Optimization. Author: Martin Jaggi Presenter: Zhongxing Peng
Revisiting Frank-Wolfe: Projection-Free Sparse Convex Optimization Author: Martin Jaggi Presenter: Zhongxing Peng Outline 1. Theoretical Results 2. Applications Outline 1. Theoretical Results 2. Applications
More informationApplied Lagrange Duality for Constrained Optimization
Applied Lagrange Duality for Constrained Optimization Robert M. Freund February 10, 2004 c 2004 Massachusetts Institute of Technology. 1 1 Overview The Practical Importance of Duality Review of Convexity
More informationLecture 19 Subgradient Methods. November 5, 2008
Subgradient Methods November 5, 2008 Outline Lecture 19 Subgradients and Level Sets Subgradient Method Convergence and Convergence Rate Convex Optimization 1 Subgradients and Level Sets A vector s is a
More informationMathematical Programming and Research Methods (Part II)
Mathematical Programming and Research Methods (Part II) 4. Convexity and Optimization Massimiliano Pontil (based on previous lecture by Andreas Argyriou) 1 Today s Plan Convex sets and functions Types
More informationA duality-based approach for distributed min-max optimization with application to demand side management
1 A duality-based approach for distributed min- optimization with application to demand side management Ivano Notarnicola 1, Mauro Franceschelli 2, Giuseppe Notarstefano 1 arxiv:1703.08376v1 [cs.dc] 24
More informationLinear Programming. Larry Blume. Cornell University & The Santa Fe Institute & IHS
Linear Programming Larry Blume Cornell University & The Santa Fe Institute & IHS Linear Programs The general linear program is a constrained optimization problem where objectives and constraints are all
More informationSome Optimization Trade-offs in Wireless Network Coding
Some Optimization Trade-offs in Wireless Network Coding Yalin Evren Sagduyu and Anthony Ephremides Electrical and Computer Engineering Department and Institute for Systems Research University of Maryland,
More informationRandom Walk Distributed Dual Averaging Method For Decentralized Consensus Optimization
Random Walk Distributed Dual Averaging Method For Decentralized Consensus Optimization Cun Mu, Asim Kadav, Erik Kruus, Donald Goldfarb, Martin Renqiang Min Machine Learning Group, NEC Laboratories America
More informationProjection-Based Methods in Optimization
Projection-Based Methods in Optimization Charles Byrne (Charles Byrne@uml.edu) http://faculty.uml.edu/cbyrne/cbyrne.html Department of Mathematical Sciences University of Massachusetts Lowell Lowell, MA
More informationIN mobile wireless networks, communications typically take
IEEE JOURNAL OF SELECTED TOPICS IN SIGNAL PROCESSING, VOL 2, NO 2, APRIL 2008 243 Optimal Dynamic Resource Allocation for Multi-Antenna Broadcasting With Heterogeneous Delay-Constrained Traffic Rui Zhang,
More information1 Linear programming relaxation
Cornell University, Fall 2010 CS 6820: Algorithms Lecture notes: Primal-dual min-cost bipartite matching August 27 30 1 Linear programming relaxation Recall that in the bipartite minimum-cost perfect matching
More informationOnline Stochastic Matching CMSC 858F: Algorithmic Game Theory Fall 2010
Online Stochastic Matching CMSC 858F: Algorithmic Game Theory Fall 2010 Barna Saha, Vahid Liaghat Abstract This summary is mostly based on the work of Saberi et al. [1] on online stochastic matching problem
More informationSequential Coordinate-wise Algorithm for Non-negative Least Squares Problem
CENTER FOR MACHINE PERCEPTION CZECH TECHNICAL UNIVERSITY Sequential Coordinate-wise Algorithm for Non-negative Least Squares Problem Woring document of the EU project COSPAL IST-004176 Vojtěch Franc, Miro
More information1. Introduction. performance of numerical methods. complexity bounds. structural convex optimization. course goals and topics
1. Introduction EE 546, Univ of Washington, Spring 2016 performance of numerical methods complexity bounds structural convex optimization course goals and topics 1 1 Some course info Welcome to EE 546!
More informationShiqian Ma, MAT-258A: Numerical Optimization 1. Chapter 2. Convex Optimization
Shiqian Ma, MAT-258A: Numerical Optimization 1 Chapter 2 Convex Optimization Shiqian Ma, MAT-258A: Numerical Optimization 2 2.1. Convex Optimization General optimization problem: min f 0 (x) s.t., f i
More informationarxiv: v1 [math.co] 27 Feb 2015
Mode Poset Probability Polytopes Guido Montúfar 1 and Johannes Rauh 2 arxiv:1503.00572v1 [math.co] 27 Feb 2015 1 Max Planck Institute for Mathematics in the Sciences, Inselstraße 22, 04103 Leipzig, Germany,
More informationIntroduction to Optimization
Introduction to Optimization Second Order Optimization Methods Marc Toussaint U Stuttgart Planned Outline Gradient-based optimization (1st order methods) plain grad., steepest descent, conjugate grad.,
More informationAnalyzing Stochastic Gradient Descent for Some Non- Convex Problems
Analyzing Stochastic Gradient Descent for Some Non- Convex Problems Christopher De Sa Soon at Cornell University cdesa@stanford.edu stanford.edu/~cdesa Kunle Olukotun Christopher Ré Stanford University
More informationMVE165/MMG630, Applied Optimization Lecture 8 Integer linear programming algorithms. Ann-Brith Strömberg
MVE165/MMG630, Integer linear programming algorithms Ann-Brith Strömberg 2009 04 15 Methods for ILP: Overview (Ch. 14.1) Enumeration Implicit enumeration: Branch and bound Relaxations Decomposition methods:
More informationLECTURE 18 LECTURE OUTLINE
LECTURE 18 LECTURE OUTLINE Generalized polyhedral approximation methods Combined cutting plane and simplicial decomposition methods Lecture based on the paper D. P. Bertsekas and H. Yu, A Unifying Polyhedral
More informationMultiple Vertex Coverings by Cliques
Multiple Vertex Coverings by Cliques Wayne Goddard Department of Computer Science University of Natal Durban, 4041 South Africa Michael A. Henning Department of Mathematics University of Natal Private
More informationSparse Optimization Lecture: Proximal Operator/Algorithm and Lagrange Dual
Sparse Optimization Lecture: Proximal Operator/Algorithm and Lagrange Dual Instructor: Wotao Yin July 2013 online discussions on piazza.com Those who complete this lecture will know learn the proximal
More informationA Short SVM (Support Vector Machine) Tutorial
A Short SVM (Support Vector Machine) Tutorial j.p.lewis CGIT Lab / IMSC U. Southern California version 0.zz dec 004 This tutorial assumes you are familiar with linear algebra and equality-constrained optimization/lagrange
More informationScheduling in Multihop Wireless Networks without Back-pressure
Forty-Eighth Annual Allerton Conerence Allerton House, UIUC, Illinois, USA September 29 - October 1, 2010 Scheduling in Multihop Wireless Networks without Back-pressure Shihuan Liu, Eylem Ekici, and Lei
More informationSolution Methods Numerical Algorithms
Solution Methods Numerical Algorithms Evelien van der Hurk DTU Managment Engineering Class Exercises From Last Time 2 DTU Management Engineering 42111: Static and Dynamic Optimization (6) 09/10/2017 Class
More informationDistributed Throughput Maximization in Wireless Mesh Networks via Pre-Partitioning
TO APPEAR IN IEEE/ACM TRANSACTIONS ON NETWORKING, 2008 1 Distributed Throughput Maximization in Wireless Mesh Networks via Pre-Partitioning Andrew Brzezinski, Gil Zussman Senior Member, IEEE, and Eytan
More informationIntroduction to Modern Control Systems
Introduction to Modern Control Systems Convex Optimization, Duality and Linear Matrix Inequalities Kostas Margellos University of Oxford AIMS CDT 2016-17 Introduction to Modern Control Systems November
More informationarxiv: v1 [cs.cc] 30 Jun 2017
On the Complexity of Polytopes in LI( Komei Fuuda May Szedlá July, 018 arxiv:170610114v1 [cscc] 30 Jun 017 Abstract In this paper we consider polytopes given by systems of n inequalities in d variables,
More informationME 555: Distributed Optimization
ME 555: Distributed Optimization Duke University Spring 2015 1 Administrative Course: ME 555: Distributed Optimization (Spring 2015) Instructor: Time: Location: Office hours: Website: Soomin Lee (email:
More information11 Linear Programming
11 Linear Programming 11.1 Definition and Importance The final topic in this course is Linear Programming. We say that a problem is an instance of linear programming when it can be effectively expressed
More informationScheduling Algorithms to Minimize Session Delays
Scheduling Algorithms to Minimize Session Delays Nandita Dukkipati and David Gutierrez A Motivation I INTRODUCTION TCP flows constitute the majority of the traffic volume in the Internet today Most of
More informationCMU-Q Lecture 9: Optimization II: Constrained,Unconstrained Optimization Convex optimization. Teacher: Gianni A. Di Caro
CMU-Q 15-381 Lecture 9: Optimization II: Constrained,Unconstrained Optimization Convex optimization Teacher: Gianni A. Di Caro GLOBAL FUNCTION OPTIMIZATION Find the global maximum of the function f x (and
More informationInteger Programming Theory
Integer Programming Theory Laura Galli October 24, 2016 In the following we assume all functions are linear, hence we often drop the term linear. In discrete optimization, we seek to find a solution x
More informationA DYNAMIC RESOURCE ALLOCATION STRATEGY FOR SATELLITE COMMUNICATIONS. Eytan Modiano MIT LIDS Cambridge, MA
A DYNAMIC RESOURCE ALLOCATION STRATEGY FOR SATELLITE COMMUNICATIONS Aradhana Narula-Tam MIT Lincoln Laboratory Lexington, MA Thomas Macdonald MIT Lincoln Laboratory Lexington, MA Eytan Modiano MIT LIDS
More informationProgramming, numerics and optimization
Programming, numerics and optimization Lecture C-4: Constrained optimization Łukasz Jankowski ljank@ippt.pan.pl Institute of Fundamental Technological Research Room 4.32, Phone +22.8261281 ext. 428 June
More informationFACES OF CONVEX SETS
FACES OF CONVEX SETS VERA ROSHCHINA Abstract. We remind the basic definitions of faces of convex sets and their basic properties. For more details see the classic references [1, 2] and [4] for polytopes.
More informationIntroduction to Optimization
Introduction to Optimization Constrained Optimization Marc Toussaint U Stuttgart Constrained Optimization General constrained optimization problem: Let R n, f : R n R, g : R n R m, h : R n R l find min
More informationModule 1 Lecture Notes 2. Optimization Problem and Model Formulation
Optimization Methods: Introduction and Basic concepts 1 Module 1 Lecture Notes 2 Optimization Problem and Model Formulation Introduction In the previous lecture we studied the evolution of optimization
More informationSINCE the publication of the seminal paper [1] by Kelly et al.
962 IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS, VOL. 24, NO. 5, MAY 2006 Price-Based Distributed Algorithms for Rate-Reliability Tradeoff in Network Utility Maximization Jang-Won Lee, Member, IEEE,
More information1276 IEEE/ACM TRANSACTIONS ON NETWORKING, VOL. 19, NO. 5, OCTOBER 2011
1276 IEEE/ACM TRANSACTIONS ON NETWORKING, VOL. 19, NO. 5, OCTOBER 2011 Cross-Layer Designs in Coded Wireless Fading Networks With Multicast Ketan Rajawat, Student Member, IEEE, Nikolaos Gatsis, Student
More informationConvex Optimization and Machine Learning
Convex Optimization and Machine Learning Mengliu Zhao Machine Learning Reading Group School of Computing Science Simon Fraser University March 12, 2014 Mengliu Zhao SFU-MLRG March 12, 2014 1 / 25 Introduction
More informationLinear Programming in Small Dimensions
Linear Programming in Small Dimensions Lekcija 7 sergio.cabello@fmf.uni-lj.si FMF Univerza v Ljubljani Edited from slides by Antoine Vigneron Outline linear programming, motivation and definition one dimensional
More informationDynamic Control and Optimization of Buffer Size for Short Message Transfer in GPRS/UMTS Networks *
Dynamic Control and Optimization of for Short Message Transfer in GPRS/UMTS Networks * Michael M. Markou and Christos G. Panayiotou Dept. of Electrical and Computer Engineering, University of Cyprus Email:
More informationNumerical Experiments with a Population Shrinking Strategy within a Electromagnetism-like Algorithm
Numerical Experiments with a Population Shrinking Strategy within a Electromagnetism-like Algorithm Ana Maria A. C. Rocha and Edite M. G. P. Fernandes Abstract This paper extends our previous work done
More informationARELAY network consists of a pair of source and destination
158 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL 55, NO 1, JANUARY 2009 Parity Forwarding for Multiple-Relay Networks Peyman Razaghi, Student Member, IEEE, Wei Yu, Senior Member, IEEE Abstract This paper
More informationA New Pool Control Method for Boolean Compressed Sensing Based Adaptive Group Testing
Proceedings of APSIPA Annual Summit and Conference 27 2-5 December 27, Malaysia A New Pool Control Method for Boolean Compressed Sensing Based Adaptive roup Testing Yujia Lu and Kazunori Hayashi raduate
More informationGreedy Gossip with Eavesdropping
Greedy Gossip with Eavesdropping Deniz Üstebay, Mark Coates, and Michael Rabbat Department of Electrical and Computer Engineering McGill University, Montréal, Québec, Canada Email: deniz.ustebay@mail.mcgill.ca,
More informationLecture 2 The k-means clustering problem
CSE 29: Unsupervised learning Spring 2008 Lecture 2 The -means clustering problem 2. The -means cost function Last time we saw the -center problem, in which the input is a set S of data points and the
More information6 Randomized rounding of semidefinite programs
6 Randomized rounding of semidefinite programs We now turn to a new tool which gives substantially improved performance guarantees for some problems We now show how nonlinear programming relaxations can
More informationIntroduction to Mathematical Programming IE406. Lecture 20. Dr. Ted Ralphs
Introduction to Mathematical Programming IE406 Lecture 20 Dr. Ted Ralphs IE406 Lecture 20 1 Reading for This Lecture Bertsimas Sections 10.1, 11.4 IE406 Lecture 20 2 Integer Linear Programming An integer
More informationConstrained Optimization
Constrained Optimization Dudley Cooke Trinity College Dublin Dudley Cooke (Trinity College Dublin) Constrained Optimization 1 / 46 EC2040 Topic 5 - Constrained Optimization Reading 1 Chapters 12.1-12.3
More informationParallel and Distributed Graph Cuts by Dual Decomposition
Parallel and Distributed Graph Cuts by Dual Decomposition Presented by Varun Gulshan 06 July, 2010 What is Dual Decomposition? What is Dual Decomposition? It is a technique for optimization: usually used
More informationDistance-to-Solution Estimates for Optimization Problems with Constraints in Standard Form
Distance-to-Solution Estimates for Optimization Problems with Constraints in Standard Form Philip E. Gill Vyacheslav Kungurtsev Daniel P. Robinson UCSD Center for Computational Mathematics Technical Report
More informationLecture 2. Topology of Sets in R n. August 27, 2008
Lecture 2 Topology of Sets in R n August 27, 2008 Outline Vectors, Matrices, Norms, Convergence Open and Closed Sets Special Sets: Subspace, Affine Set, Cone, Convex Set Special Convex Sets: Hyperplane,
More informationIEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 55, NO. 5, MAY
IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL 55, NO 5, MAY 2007 1911 Game Theoretic Cross-Layer Transmission Policies in Multipacket Reception Wireless Networks Minh Hanh Ngo, Student Member, IEEE, and
More informationPOLYHEDRAL GEOMETRY. Convex functions and sets. Mathematical Programming Niels Lauritzen Recall that a subset C R n is convex if
POLYHEDRAL GEOMETRY Mathematical Programming Niels Lauritzen 7.9.2007 Convex functions and sets Recall that a subset C R n is convex if {λx + (1 λ)y 0 λ 1} C for every x, y C and 0 λ 1. A function f :
More informationI How does the formulation (5) serve the purpose of the composite parameterization
Supplemental Material to Identifying Alzheimer s Disease-Related Brain Regions from Multi-Modality Neuroimaging Data using Sparse Composite Linear Discrimination Analysis I How does the formulation (5)
More informationSection 5 Convex Optimisation 1. W. Dai (IC) EE4.66 Data Proc. Convex Optimisation page 5-1
Section 5 Convex Optimisation 1 W. Dai (IC) EE4.66 Data Proc. Convex Optimisation 1 2018 page 5-1 Convex Combination Denition 5.1 A convex combination is a linear combination of points where all coecients
More information3 No-Wait Job Shops with Variable Processing Times
3 No-Wait Job Shops with Variable Processing Times In this chapter we assume that, on top of the classical no-wait job shop setting, we are given a set of processing times for each operation. We may select
More informationA Distributed Throughput-Optimal CSMA/CA
A Distributed Throughput-Optimal CSMA/CA Tae Hyun im, Jian Ni and Nitin H. Vaidya Technical Report, March 16, 2010. Coordinated Science Laboratory Dept. Electrical and Computer Engineering in the University
More informationProbabilistic Graphical Models
School of Computer Science Probabilistic Graphical Models Theory of Variational Inference: Inner and Outer Approximation Eric Xing Lecture 14, February 29, 2016 Reading: W & J Book Chapters Eric Xing @
More informationChapter 3 Numerical Methods
Chapter 3 Numerical Methods Part 1 3.1 Linearization and Optimization of Functions of Vectors 1 Problem Notation 2 Outline 3.1.1 Linearization 3.1.2 Optimization of Objective Functions 3.1.3 Constrained
More informationConvexity: an introduction
Convexity: an introduction Geir Dahl CMA, Dept. of Mathematics and Dept. of Informatics University of Oslo 1 / 74 1. Introduction 1. Introduction what is convexity where does it arise main concepts and
More informationStochastic Approximation and Transaction-Level Model for IP Network Design
Stochastic Approximation and Transaction-Level Model for IP Network Design Linhai He and Jean Walrand Department of Electrical Engineering and Computer Sciences University of California at Berkeley Berkeley,
More information